A few years from now, each one of us in tech recruitment will look back and label the current tech hiring process as obsolete. At present, the inclination towards hiring tech talent based on basic skill tests is high. But the truth we all know, and fail to admit often enough, is that the right way to go about tech recruitment is a combination of algorithmic tests and assessments of advanced, applied skills. There are candidates out there in the developer community who exude a great deal of confidence when facing interview panels but can’t code effectively.
For a moment, let us look at the general process typically followed to hire for any open tech position:
- You skim through a pile of resumes and filter candidates based on skills and relevance
- Put the filtered candidates through skill-based assessments to narrow the pool further
- Then comes the interview part: your tech team begins interviewing candidates
This is where the shit gets real. Interviewers start looking for different ways to spot qualities in candidates: some put a candidate through a whiteboard coding test, some resort to trivia questions, and a few just talk their way through the selection, and do it badly.
The underlying problem here is that there is no clarity about the selection & evaluation criteria.
The bottom line:
Talent is out there, but it can’t be found on a resume
Don’t be shocked by the quality of hires when your process resembles the scenario above. This can’t go on, at least not if the developer community is to progress.
Thankfully, there is a way out. We have stressed this enough: embrace project-based assessments that mirror the actual work. Many companies have slowly started realizing the importance of hiring candidates through project-based assessments, which give a better insight into how a candidate will perform after joining.
Evaluating manually is time-intensive? Set it to automatic!
Every assessment follows a particular pattern from start to end: each test has to be designed, a scoring rubric has to be created, and finally the assessment is iterated on. In Talscale, one can create assessments in three categories: Web services, Web pages, and Unit Test cases.
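To make the design step concrete, here is a minimal sketch of what a test definition with a weighted scoring rubric could look like. The structure and field names are illustrative assumptions, not Talscale’s actual schema.

```python
# Illustrative only: a simple weighted-test-case rubric, not Talscale's actual schema.
assessment = {
    "title": "Build a user lookup API",
    "category": "Web services",  # one of: Web services, Web pages, Unit Test cases
    "test_cases": [
        {"name": "returns an existing user", "weight": 10},
        {"name": "handles a missing user with 404", "weight": 5},
        {"name": "rejects malformed ids", "weight": 5},
    ],
}

# Total marks available for this assessment.
total_marks = sum(case["weight"] for case in assessment["test_cases"])
print(total_marks)  # 20
```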
Types of projects
A problem statement belonging to one of these categories is given to the candidate. Based on the problem statement and the test cases attached to it, the candidate's code has to pass the test cases by producing the expected response. This tests candidates on putting the pieces together and applying them to get the expected output.
A typical test case evaluation report and the corresponding scores
For example:
In a Web service question, a candidate is asked to build functionality that spans the client and the server application, say, a project that uses an API to fetch the required output. For a given URL request, the candidate’s output has to match the expected response when the URL is hit.
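A minimal sketch of how such an automated check could work, assuming the candidate’s service is reachable over HTTP and returns JSON; the base URL, path, and expected payload below are hypothetical, not Talscale’s actual harness.

```python
import requests  # third-party HTTP client (pip install requests)

# Hypothetical test case: the endpoint and expected response are illustrative only.
CANDIDATE_BASE_URL = "http://localhost:8080"
TEST_CASE = {
    "path": "/users/42",
    "expected": {"id": 42, "name": "Ada Lovelace"},
    "weight": 10,
}

def run_web_service_test(base_url: str, test_case: dict) -> int:
    """Hit the candidate's endpoint and award the weight only on an exact match."""
    response = requests.get(base_url + test_case["path"], timeout=5)
    if response.status_code == 200 and response.json() == test_case["expected"]:
        return test_case["weight"]
    return 0

if __name__ == "__main__":
    score = run_web_service_test(CANDIDATE_BASE_URL, TEST_CASE)
    print(f"Score for this test case: {score}")
```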
Similarly, in Web pages, the candidate’s code is evaluated against a few important parameters in specific regions of the code.
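As an illustration only, since the exact parameters checked will depend on the problem, such an evaluation could look for required elements in the submitted markup; the selectors and weights below are assumptions for the sake of the example.

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

# Hypothetical rubric: the required selectors and their weights are illustrative only.
REQUIRED_ELEMENTS = [
    ("form#login", 5),           # a login form must exist
    ("input[type=email]", 3),    # with an email field
    ("button[type=submit]", 2),  # and a submit button
]

def score_web_page(html: str) -> int:
    """Award points for each required element found in the candidate's page."""
    soup = BeautifulSoup(html, "html.parser")
    score = 0
    for selector, weight in REQUIRED_ELEMENTS:
        if soup.select_one(selector) is not None:
            score += weight
    return score

if __name__ == "__main__":
    with open("candidate_submission.html") as f:
        print(f"Web page score: {score_web_page(f.read())}")
```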
The responses for each problem statement in any of the three categories are predetermined. Candidates whose code produces the exact expected response upon request score marks. More importantly, you will not have to score candidates separately by hand.
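For the Unit Test cases category, the same exact-match principle can be expressed as ordinary unit tests; the problem (a FizzBuzz-style function) and the expected values below are hypothetical examples, not an actual Talscale assessment.

```python
import unittest

# Hypothetical candidate code: in practice this would be imported from the submission.
def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

class FizzBuzzTests(unittest.TestCase):
    """Each test compares the candidate's output against a predetermined response."""

    def test_multiple_of_three(self):
        self.assertEqual(fizzbuzz(9), "Fizz")

    def test_multiple_of_five(self):
        self.assertEqual(fizzbuzz(10), "Buzz")

    def test_multiple_of_fifteen(self):
        self.assertEqual(fizzbuzz(30), "FizzBuzz")

if __name__ == "__main__":
    unittest.main()
```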
Advantage?
Reduced time to hire: The automated evaluation of project-based assessments drastically reduces evaluation time compared to manual review. A tedious manual evaluation is cut down to a 15-minute process. That said, a recruiter can choose between the two modes of evaluation while setting up the problem statement.
Happy Recruiting with TalScale!
Want to explore what we’ve got? Get in touch with us! We would love to have a chat with you.