I'm evaluating applicants as well and jumped straight to AI. I wanted to mention that you can have AI evaluate applicants, student papers, etc. many times (if you are comfortable using the API you could automate this, as one of my more savvy colleagues has) and take the average or most consistent rating/ranking. I have been doing this manually with the four PhD applicants I'm considering (anonymized info, of course), and it is giving me remarkably consistent rankings that it might have taken me some time to arrive at on my own. Amazing stuff!
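If you want to automate the many-passes step yourself, here is a minimal sketch of the idea. It assumes the OpenAI Python SDK; the model name, prompt, and applicant summaries are placeholders, and the JSON parsing is deliberately naive, so treat it as a starting point rather than my colleague's actual setup:

```python
# Sketch: ask a model to rank the same anonymized applicant summaries several
# times, then average the rank position each applicant receives across passes.
# Assumes the OpenAI Python SDK; model name and prompt are placeholders.
import json
from collections import defaultdict

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

applicants = {
    "A": "Anonymized summary of applicant A ...",
    "B": "Anonymized summary of applicant B ...",
    "C": "Anonymized summary of applicant C ...",
    "D": "Anonymized summary of applicant D ...",
}

def rank_once() -> list[str]:
    """One pass: ask the model for applicant IDs, best to worst, as a JSON list."""
    prompt = (
        "Rank these PhD applicants from strongest to weakest for our program. "
        "Return only a JSON list of IDs, nothing else.\n\n"
        + "\n\n".join(f"ID {k}: {v}" for k, v in applicants.items())
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,      # keep some variation between passes
    )
    # Naive parsing: assumes the reply is a bare JSON list of IDs.
    return json.loads(resp.choices[0].message.content)

# Run several passes and average each applicant's rank position.
positions = defaultdict(list)
for _ in range(10):
    for rank, applicant_id in enumerate(rank_once(), start=1):
        positions[applicant_id].append(rank)

mean_ranks = {a: sum(r) / len(r) for a, r in positions.items()}
for applicant_id, mean_rank in sorted(mean_ranks.items(), key=lambda kv: kv[1]):
    print(applicant_id, round(mean_rank, 2))
```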
I’ve used the hiring method of binning different aptitudes and then combining them into a score. While less subject to the biased whims of the moment, the construct validity is still lacking. Sure, a few individuals who can’t keep it together get screened out, but otherwise I think the effects are close to random.
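For concreteness, here is a rough sketch of that bin-then-combine scoring; the aptitude dimensions, bin labels, and weights are purely illustrative, not a validated rubric:

```python
# Rate each aptitude on a coarse bin scale, then take a weighted sum.
# Dimensions, labels, and weights below are illustrative placeholders.
BINS = {"weak": 1, "adequate": 2, "strong": 3}
WEIGHTS = {"research_experience": 2.0, "writing": 1.0, "quantitative": 1.5, "fit": 1.0}

def composite_score(ratings: dict[str, str]) -> float:
    """ratings maps each aptitude to a bin label, e.g. {'writing': 'strong'}."""
    return sum(WEIGHTS[dim] * BINS[label] for dim, label in ratings.items())

print(composite_score({
    "research_experience": "strong",
    "writing": "adequate",
    "quantitative": "strong",
    "fit": "weak",
}))
```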
I think the solution may lie more in changing our relationship with meritocracy than in magically better outcomes. There is generally no medal for second place and certainly no reward for being shortlisted; and yet the candidates on a shortlist are often indistinguishable in terms of predicted success. Therefore, maybe we should normalise the degree of luck involved in being the one selected, while still acknowledging what it takes even to be in the running. In other words, don’t treat success in a ranking as an all-or-nothing dichotomy.
This can be applied with or without AI; I’m agnostic on that point, though leaning towards its incorporation.
Sigh. You aren't a mathematical modeler or a clinical trial designer.
Null hypothesis. There are likely some wrong choices, like people with major psychological disorders or outright fraudulent applications. Otherwise, it is doubtful that any criterion is going to predict .... wait, what criterion would you even choose to say "By GAWD THAT WAS THE VERY BEST OUTCOME IMAGINABLE"?
So, you have no criteria for success or failure and then if you DID, it wouldn't help.
Except to eliminate the worst, most dreadful choices.
Go rando, d00d.
Counterpoint: job performance, research productivity, completion rates, and program fit are in fact criteria.