API Usage Notice



Foreword

HRMTS offers various RESTful APIs for integration with career pages and other third-party systems. Sample URLs for these APIs are:


These APIs operate in a web farm with load balancing. The resources for these APIs are carefully planned to handle at least triple the load of Customers’ expected needs. However, HRMTS continues to observe that, even with such large margins, these resources become saturated due to extreme amounts of API consumption.


A closer analysis reveals Customers calling the API hundreds of thousands of times per day. Such extreme traffic affects all kinds of resources on the web farm and database servers, such as CPU, RAM, database transactions, network bandwidth, thread pools and connection pools. Moreover, this kind of consumption affects Customers’ own resources as well, including their career pages and third-party systems.


Audience

Information on this page is intended for Customers’ technical staff or consultants implementing integrations that consume the System's API.


The remainder of the document describes the exact causes of the problem and their solutions.


Job Portal API

When it comes to the Job Portal API, there are two typical causes of excessive traffic:

  1. Caching – Customer has not implemented any caching in their integration.
  2. Sub-Calls – Customer is making multiple sub-calls to get data for a single page.


Caching

Unlike the RESTful API used to fetch candidates, the Job Portal API serves a different purpose: it is used to fetch published positions to be presented in a vacancy list. This type of data is static most of the time, and it is a good candidate for caching on the Customer’s side.


HRMTS strongly urges Customers to cache the data returned from this API in the consumer system for as long as possible. The appropriate cache duration depends on the type and activity of the organization, since there are typical times when new positions are published and unpublished at the Customer: most publishing activity takes place between 08:00 and 17:00, while most automatic unpublishing takes place at midnight.


Please note that this caching must be implemented on the Customer’s servers, not in end clients’ browsers. A vacancy list pulled from HRMTS is the same for all clients visiting the Customer’s career page. Therefore, the data retrieved from HRMTS and cached on the Customer’s server can be served to every client of the career page.


Such caching not only reduces unnecessary traffic to HR Manager, but also yields direct benefits to the Customer’s career page:

  • They will be able to deliver data to clients much faster.
  • They will reduce the load on their own network.
  • They will be able to implement a fallback in case communication with the API fails or times out.
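
As an illustration, the sketch below shows one way such server-side caching could be implemented. It is only a sketch under assumptions: the JOB_PORTAL_URL constant, the endpoint path and the cache duration are hypothetical placeholders rather than actual HRMTS names or values, and the HTTP client can be any library of the Customer's choice.

    import time
    import requests  # third-party HTTP client; any equivalent library works

    # Hypothetical endpoint; replace with the actual Job Portal API URL.
    JOB_PORTAL_URL = "https://example.invalid/jobportal/contents"
    CACHE_TTL_SECONDS = 15 * 60  # refresh at most every 15 minutes

    _cache = {"data": None, "fetched_at": 0.0}

    def get_vacancy_list():
        """Return the vacancy list, calling the API only when the cache has expired."""
        now = time.time()
        if _cache["data"] is None or now - _cache["fetched_at"] > CACHE_TTL_SECONDS:
            try:
                response = requests.get(JOB_PORTAL_URL, timeout=10)
                response.raise_for_status()
                _cache["data"] = response.json()
                _cache["fetched_at"] = now
            except requests.RequestException:
                # Fallback: keep serving the previously cached data if the call fails.
                if _cache["data"] is None:
                    raise
        return _cache["data"]

Because the cache lives on the Customer's server, every visitor to the career page is served from it, and the API is contacted at most once per cache expiry.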


Examples

Assume a Customer that has 100 000 daily hits on their career page:

  • Without caching, the number of API calls made to fetch the vacancy list would be 100 000+.
  • With a 10-minute caching, the number of calls would be 144.
  • With a 15-minute caching, the number of calls would be 96.
  • With a 30-minute caching, the number of calls would be 48.
  • With a 60-minute caching, the number of calls would be 24.
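
Each of the figures above follows from the cache duration rather than from the number of page hits: with server-side caching, the API is called at most once per cache expiry, i.e. roughly 24 × 60 divided by the cache duration in minutes (for example, 1 440 / 15 = 96 calls per day for a 15-minute cache).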


Sub-Calls

Although the Job Portal API offers many methods to retrieve specific data, it also offers a special method to retrieve all the information needed to render a career page in a single call. As mentioned in the documentation, there are specific methods to retrieve the position list, department list, category list, location list and so forth. There is also a method, “Contents”, that returns all of the above information in a single call. So, when designing the career page using API integration, the Customer has the option to use this single method to minimize the calls to the API while still offering a full and rich user experience and detailed information on the career page.
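
As a rough sketch of the difference, assuming hypothetical method paths (positions, departments, categories, locations, contents) that do not necessarily match the documented ones, the two approaches compare as follows:

    import requests

    BASE_URL = "https://example.invalid/jobportal"  # hypothetical base URL

    def fetch_page_data_with_sub_calls():
        # Four round trips per page view: positions, departments, categories, locations.
        return {
            "positions":   requests.get(f"{BASE_URL}/positions", timeout=10).json(),
            "departments": requests.get(f"{BASE_URL}/departments", timeout=10).json(),
            "categories":  requests.get(f"{BASE_URL}/categories", timeout=10).json(),
            "locations":   requests.get(f"{BASE_URL}/locations", timeout=10).json(),
        }

    def fetch_page_data_with_contents():
        # One round trip: the "Contents" method returns all of the above in a single response.
        return requests.get(f"{BASE_URL}/contents", timeout=10).json()

Combining the single call with the caching described earlier reduces the traffic further, as the examples below show.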


Examples

Assume a Customer that has 1 000 000 daily hits on their career page, and that makes separate API calls to retrieve data blocks for the position list, department list, category list and location list:

  • With separate calls and no caching, the number of API calls made to fetch the career page data would be 4 000 000.
  • With a single call to “Contents” without caching, the number of calls would be 1 000 000.
  • With a 10-minute caching, the number of calls would be 144.
  • With a 15-minute caching, the number of calls would be 96.
  • With a 30-minute caching, the number of calls would be 48.
  • With a 60-minute caching, the number of calls would be 24.


Suggestion

An optimal configuration would be as follows:

  • Call the Job Portal API’s “Contents” method every 15 minutes between 08:00 and 17:00, caching the results between the calls.
    • This would capture all the activity during business hours.
  • Use the cached data from the last call at 17:00 until midnight.
  • Call the API again right after midnight.
    • This would capture all automatic unpublishings.
  • Use the cache from the last call at midnight until 08:00.


The above suggestion would result in a total of 37 API calls to the server during a 24-hour cycle, without significant delay in updating the vacancy list on the career page.
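
A minimal sketch of this schedule follows, assuming a fetch_contents() callable (a placeholder for the Customer's own call to the “Contents” method); only the refresh decision matters here, the function and field names are not HRMTS names.

    from datetime import datetime, timedelta

    CACHE = {"data": None, "fetched_at": None}

    def _refresh_due(fetched_at):
        """Decide whether the cache should be refreshed according to the suggested schedule."""
        now = datetime.now()
        if fetched_at is None:
            return True
        # Business hours (08:00-17:00): refresh every 15 minutes.
        if 8 <= now.hour < 17:
            return now - fetched_at >= timedelta(minutes=15)
        # Outside business hours: refresh only once, right after midnight,
        # to pick up the automatic unpublishing.
        midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
        return fetched_at < midnight

    def get_contents(fetch_contents):
        """Serve cached data, refreshing only when the schedule says so."""
        if _refresh_due(CACHE["fetched_at"]):
            CACHE["data"] = fetch_contents()
            CACHE["fetched_at"] = datetime.now()
        return CACHE["data"]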


RESTful API

When it comes to the RESTful API, or any API for that matter, there are two typical causes of unnecessarily large traffic:

  1. Frequency – Customer is simply calling the API too many times.
  2. Take Size – Customer is defining too large a take size.


Frequency

The most common use of the RESTful API, as of today, is retrieving candidates (new applicants or new hires). Retrieving candidates from the System is one of the most expensive operations. Too many Customers consume this API every single minute of the day, 24x7. When too many Customers do the same thing all the time, the load on the web farm becomes too high and negatively affects all other Customers, including themselves.


Although it can be attractive to fetch new candidates the moment they register in the System, for most Customers there is little practical loss in some delay in this retrieval. Depending on the size of the organization and the average number of candidates/hires, HRMTS recommends consuming the API to fetch them once a day, once an hour or a few times per hour.
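
As a simple illustration only, a per-minute poller can be replaced with an hourly one; fetch_new_candidates and handle_candidates stand for the Customer's own retrieval and processing logic and are not HRMTS names.

    import time

    POLL_INTERVAL_SECONDS = 60 * 60  # once per hour: 24 calls per day instead of 1 440

    def poll_forever(fetch_new_candidates, handle_candidates):
        """Fetch new candidates on a fixed schedule and hand them to the Customer's own handler."""
        while True:
            handle_candidates(fetch_new_candidates())
            time.sleep(POLL_INTERVAL_SECONDS)

In practice, a cron job or task scheduler that runs once an hour achieves the same without a long-running process.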


Examples

Assume a Customer consumes the API to fetch new candidates every minute, around the clock.

  • By calling the API every minute, the number of calls during 24 hours accumulates to 1 440.
  • By reducing this frequency to every 10 minutes, the number of calls would be 144.
  • By reducing this frequency to every 15 minutes, the number of calls would be 96.
  • By reducing this frequency to every 30 minutes, the number of calls would be 48.
  • By reducing this frequency to every 60 minutes, the number of calls would be 24.


Take Size

Too often, HRMTS observes that the take size on various lists is set to a large number, such as 1000. When the System has little data to return, this is not a problem. However, when there is much data to return, such a call will not only take time to execute, but also use significant bandwidth to return the data. Too many of these calls from many sources often cause timeouts, which in turn cause the Customers’ systems to repeat the calls, starting a vicious circle that worsens an already stretched situation.


The API in the System always returns a special node called “Counts” that is designed to assist a third-party caller in traversing the data in smaller chunks. The Counts node returns the following values:

  • Search Count – Number of Data Objects that match the search criteria. This count may be more than what is delivered in the current call (Take Count).
  • Skip Count – Number of Data Objects that match the search criteria and are skipped (jumped over).
  • Take Count – Number of Data Objects that match the search criteria, and are returned in the current call.
  • Total Count – Total number of Data Objects of this kind available on the server.


Examples

[Screenshot: API Take Sample]

In the screenshot above, there were 6 hits on the search criteria, all 6 of them were returned, none of them were skipped, and altogether there are 251 such Data Objects present in the database.
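
For readers without access to the screenshot, the Counts node in that response would contain roughly the following values; the exact key names shown here are assumptions for illustration, not the documented contract.

    # Illustrative only; key names are assumed, values taken from the example above.
    counts = {
        "SearchCount": 6,    # Data Objects matching the search criteria
        "SkipCount":   0,    # matching Data Objects skipped in this call
        "TakeCount":   6,    # matching Data Objects returned in this call
        "TotalCount":  251,  # Data Objects of this kind available on the server
    }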


Suggestion

An optimal configuration would be to call the API once an hour (or once a day) with a small take size (e.g. 25). The consumer could then make additional calls based on the counts returned.
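
A hedged sketch of such paging follows, assuming a fetch_page(skip, take) callable and the Counts values described above; the parameter names and JSON keys are assumptions, so adapt them to the actual response structure.

    TAKE_SIZE = 25  # small page size instead of e.g. 1000

    def fetch_all(fetch_page):
        """Traverse all matching Data Objects in small chunks, guided by the returned counts."""
        items, skip = [], 0
        while True:
            page = fetch_page(skip=skip, take=TAKE_SIZE)
            items.extend(page["Items"])  # assumed key for the returned Data Objects
            counts = page["Counts"]
            skip += counts["TakeCount"]
            # Stop when everything matching the search criteria has been retrieved.
            if counts["TakeCount"] == 0 or skip >= counts["SearchCount"]:
                break
        return items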


Action Required

Due to extremely high (and unnecessary) consumption of the API, HRMTS requires that all Customers immediately review their integration implementations in light of the above information and make the necessary changes to minimize calls to the API. This will yield the best performance for both the Customer and HRMTS.


An API method must not be called more than 200 times during a 24-hour period.

The frequency of calls must not exceed 8 calls per hour.


Please create a support ticket if you have any questions or need assistance.


Warning

HRMTS has implemented an API gatekeeper that enforces the limitations described above. The gatekeeper monitors the requests made to the API per Customer per API method, and rejects any call that exceeds the quota or throttle (rate limit). Upon rejection, either HTTP status code 204 (No Content) or an error message reading “API quota for the method is exceeded.” is returned.
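
On the consumer side, such a rejection should be treated as a signal to stop calling until the next window rather than retrying immediately. The sketch below is illustrative only; the URL is a placeholder and the checks simply mirror the behaviour described above.

    import requests

    def call_with_quota_check(url):
        """Call the API once and detect a gatekeeper rejection instead of retrying immediately."""
        response = requests.get(url, timeout=10)
        if response.status_code == 204 or "quota for the method is exceeded" in response.text:
            # Quota or throttle exceeded: back off until the next hour or day window.
            return None
        response.raise_for_status()
        return response.json()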


Important Notes

  • The gatekeeper is operational as of today, but only in monitoring mode. Blocking mode is disabled to allow Customers a grace period to adjust their API usage.
  • If the throttle limit of 8 calls per hour is exceeded, then additional API calls are rejected only for the remainder of the hour.
  • If the quota limit of 200 calls per day is exceeded, then additional API calls are rejected for the remainder of the day.
  • If any Customer finds that 200 API calls per day (or 8 calls per hour) is not sufficient for them, please contact support and provide your reasoning.