How We Optimized Premium Listings on Our Nursing Job Board | by Ben Tengelsen | Dec, 2024

In Math: We have N jobs. Each day we generate a vector of N integers between 0 and 100. We feed this vector into a black box that's basically just Google. If we do a good job, the black box rewards us with many job applications.

By placing the "right" jobs at the top of the page (loaded word there), we can improve upon a chronological sort. Before we can identify the right jobs, we need to know whether Google actually rewards higher-placed jobs and, if so, by how much.

Generally, just to justify all the simplifying assumptions I'm going to make later, I start a project by writing down the math equation I'd like to solve. I imagine ours looks something like this:
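
In rough notation (scores are integers from 0 to 100, per the setup above):

$$\max_{S \,\in\, \{0,\,1,\,\dots,\,100\}^N} \ \text{applies}(S)$$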

  • S is our vector of relevancy scores. There are N jobs, so each s_i (an element of S) corresponds to a different job. A function called applies turns S into a scalar. Each day we'd like to find the S that makes that number as large as possible: the relevancy scores that generate the greatest number of job applications for intelycare.com/jobs.
  • applies is a fine objective function for Day 0. Later on our objective function may change (e.g. revenue, lifetime value). Applies are easy to count, though, and that lets me spend my complexity tokens elsewhere. It's Day 0. We'll come back to those questions on Day 1.
  • Problem: We know nothing about the applies function until we start feeding it relevancy scores. 😱

First things first: Given that we know nothing about the applies function, our first question is, "how do we choose an ongoing wave of daily S vectors so we can learn what the applies function looks like?"

  • We know (1) which jobs are boosted and when, and (2) how many applies each job receives each day. Note the absence of page-load data. It's Day 0! You might not have all the data you want on Day 0, but if we're clever we can make do with what we have. (See the sketch after this list.)
  • Note the subtle change in our objective. Earlier our goal was to accomplish some business objective (maximize applies), and eventually we'll come back to that goal. We've taken off the business hat for a minute and put on our science hat. Our only goal now is to learn something. If we can learn something, we can use it (later) to help achieve some business objective. 🤓
  • Since our goal is to learn something, above all we want to avoid learning nothing. Remember, it's Day 0 and we have no guarantee that the Google Monster will pay any attention to how we sort things. We might as well go for broke and make sure this thing even works before throwing more time at refining it.
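
For concreteness, here's the kind of daily log I'm imagining we can observe: one row per job per day, recording the score we assigned and the applies we saw. Column names and values here are made up for illustration.

```python
import pandas as pd

# Hypothetical daily log (column names and values are illustrative):
# one row per job per day, with the score we assigned and the applies we observed.
daily_log = pd.DataFrame(
    {
        "date": pd.to_datetime(["2024-12-01", "2024-12-01", "2024-12-02"]),
        "job_id": ["job_001", "job_002", "job_001"],
        "relevance_score": [100, 0, 0],  # what we fed to the black box
        "applies": [4, 1, 2],            # what the black box gave back
    }
)
```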

How do we choose an initial wave of daily S vectors? We'll give every job a score of 0 (the default score) and choose a random subset of jobs to boost to 100 (sketched in code after the list below).

  • Maybe I'm stating the obvious, but it has to be random if you want to isolate the effect of page position on job applications. We want the only difference between boosted jobs and other jobs to be their relative ordering on the page as determined by our relevance scores. [I can't tell you how many phone screens I've conducted where a candidate doubled down on running an A/B test with the good customers in one group and the bad customers in the other group. In fairness, I've also vetted marketing vendors who do the same thing 😭]
  • The randomness will also be useful later on for other reasons. It's likely that some jobs benefit from page placement more than others. We'll have an easier time identifying those jobs with a big, randomly generated dataset.
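
Here's a minimal sketch of what one day's first-wave scores could look like, assuming Python/NumPy; the job count and boost fraction are placeholders, not our real numbers.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def daily_scores(n_jobs: int, boost_fraction: float = 0.10) -> np.ndarray:
    """Give every job the default score of 0, then boost a random subset to 100."""
    scores = np.zeros(n_jobs, dtype=int)
    n_boosted = int(boost_fraction * n_jobs)
    boosted = rng.choice(n_jobs, size=n_boosted, replace=False)
    scores[boosted] = 100
    return scores

# Example: 5,000 jobs, boost a random 10% of them today
S = daily_scores(5_000)
```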

We know we can't boost every job. Anytime I put a job at the top of the page, I bump all other jobs down the page (a classic example of a "spillover").

  • The spillover gets worse as I boost more and more jobs: I impose a greater and greater penalty on all other jobs (including other boosted jobs) by pushing them down in the sort.
  • With few exceptions, nursing jobs are in-person and local, so any boosting spillovers will be limited to other nearby jobs. This is important.

How do we choose an initial wave of daily S vectors? (final answer) We'll give every job a score of 0 (the default score) and choose a random subset of jobs to boost to 100. The size of the random subset will vary across geographies.

  • We create four groups of distinct geographies with roughly the same amount of web traffic in each group. Each group is balanced along the key dimensions we think are important. We randomly boost a different proportion of jobs in each group (a rough sketch follows).
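
To make that concrete, here's a rough sketch assuming each job is already tagged with one of the four geography groups; the group names and boost proportions below are illustrative, not the real ones.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=0)

# Illustrative only: each geography group boosts a different share of its jobs.
BOOST_FRACTION_BY_GROUP = {"group_1": 0.05, "group_2": 0.10, "group_3": 0.20, "group_4": 0.40}

def daily_scores_by_group(jobs: pd.DataFrame) -> pd.Series:
    """Expects a 'geo_group' column; returns a relevance score (0 or 100) per job."""
    scores = pd.Series(0, index=jobs.index)
    for group, fraction in BOOST_FRACTION_BY_GROUP.items():
        group_idx = jobs.index[jobs["geo_group"] == group]
        n_boosted = int(fraction * len(group_idx))
        boosted = rng.choice(group_idx, size=n_boosted, replace=False)
        scores[boosted] = 100
    return scores
```

Each geography lives in exactly one group for the whole experiment, so (since nursing jobs are local) any spillovers mostly stay inside that group.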

Here's how it looked…