Adhering to the principles of fairness, transparency, and accuracy, we aim to provide as accurate an assessment of an individual's intelligence as possible. Here are the key elements of the IQ scoring method:

Each participant has 20 minutes to answer 30 questions. Question difficulties are calibrated using Rasch and 2PL-IRT models, fitted to the results of over half a million individuals who have completed the test.

Under the Rasch model, item difficulty is placed on the logit scale via the same logit transformation used for person ability, because item difficulty is also assumed to follow a normal distribution.

Logit equation for item difficulty:

difficulty = ln( (1 − p) / p )

where p is the proportion of people taking the test who correctly answered the item.

In this case, the number of people who correctly answered a given item is a sufficient statistic for our equation. That is, all we need to know is the percentage of people who answered a given item correctly and we can transform item difficulty to arrive at a logit value that can be placed on the same logit scale as person ability.
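As a minimal sketch of this transformation, assuming the standard logit convention for item difficulty, ln((1 − p)/p), so that harder items (lower p) receive higher logit values (the function name is illustrative):

```python
import math

def item_difficulty_logit(p_correct):
    """Convert the proportion of correct responses on an item
    into a difficulty value on the logit scale.
    Harder items (lower p) get higher difficulty logits."""
    return math.log((1 - p_correct) / p_correct)

# An item answered correctly by 75% of test-takers is easier
# (negative logit) than one answered correctly by only 25%.
easy = item_difficulty_logit(0.75)   # ≈ -1.10
hard = item_difficulty_logit(0.25)   # ≈ +1.10
```

Because the proportion correct is a sufficient statistic, this single number is all the model needs to place the item on the same logit scale as person ability.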

The IRT (Item Response Theory) model is a statistical framework commonly used in the field of educational and psychological assessment to understand and model the relationship between an individual's ability or trait level and their responses to a set of items or test questions. This model is often used to analyze and score IQ tests, among other types of assessments. IQ, or intelligence quotient, is a measure of cognitive ability and is typically assessed using a variety of test items or questions.

In the 2PL-IRT model, each item (test question) is associated with two parameters that describe its characteristics:

Difficulty (Item Difficulty Parameter): This parameter indicates how hard or easy an item is. It represents the level of the ability or trait that a test-taker needs to have in order to answer the item correctly. In the context of IQ tests, some items may be more difficult and require a higher level of cognitive ability to answer correctly, while others may be easier.

Discrimination (Item Discrimination Parameter): This parameter measures how well the item distinguishes between individuals with different levels of the trait or ability being assessed. Items with higher discrimination values are better at differentiating between high- and low-ability individuals.

**The Rasch equation (1PL-IRT model):**

P(X = 1 | 𝜃, b) = e^(𝜃 − b) / (1 + e^(𝜃 − b))

where 𝜃 is the test-taker's ability and b is the item's difficulty.

**The equation of the 2PL-IRT model:**

P(X = 1 | 𝜃, a, b) = e^(a(𝜃 − b)) / (1 + e^(a(𝜃 − b)))

where a is the item's discrimination parameter and b is its difficulty.

Combining the equations for an individual's ability and an item's difficulty, we can visually represent test items and their attributes through an Item Characteristic Curve (ICC). This curve graphically illustrates the likelihood of an individual answering an item correctly given their ability level. The vertical axis denotes the probability of a correct answer, while the horizontal axis represents the test-taker's ability. Each curve corresponds to a specific item, enabling us to assess the traits of a test item and predict how it will perform for specific test participants.
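The two curve families behind the ICC can be sketched as follows; `rasch_prob` and `two_pl_prob` are illustrative names, and both use the logistic form of the 1PL and 2PL models:

```python
import math

def rasch_prob(theta, b):
    """Rasch (1PL) probability of a correct response:
    P = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def two_pl_prob(theta, b, a):
    """2PL probability: the discrimination parameter a scales
    the slope of the item characteristic curve."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# When ability equals difficulty, the probability is exactly 0.5
# under both models; a higher discrimination steepens the curve
# around that point.
print(rasch_prob(0.0, 0.0))  # 0.5
```

Evaluating either function over a range of 𝜃 values for a fixed item traces out that item's ICC.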

First and foremost, the participant starts the test at the level 1 question (the easiest one). For each correct answer, the difficulty level increases by 1; for each wrong answer, it decreases by 1. Points earned for a correct answer correspond to the difficulty level of the question, and incorrect answers earn no points. With this adaptive feature enabled, participants see a fresh combination of questions each time they take the test.
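The level adjustment and scoring rule described above can be sketched as follows; the function name and the level cap of 30 (matching the test's question count) are illustrative assumptions:

```python
def run_adaptive_session(answers, start_level=1, max_level=30):
    """Simulate the adaptive rule: a correct answer earns points
    equal to the current difficulty level and raises the level by 1;
    a wrong answer earns nothing and lowers the level by 1
    (never below level 1)."""
    level, score = start_level, 0
    for correct in answers:
        if correct:
            score += level
            level = min(level + 1, max_level)
        else:
            level = max(level - 1, 1)
    return level, score

# Three correct answers earn 1 + 2 + 3 = 6 points, then one miss
# drops the participant from level 4 back to level 3.
print(run_adaptive_session([True, True, True, False]))  # (3, 6)
```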

You're taking our online IQ test designed with adaptive question adjustment. Here's how it works:

**Starting Point**: The system begins by presenting a question of average difficulty to the test-taker.

**Response-Based Adjustment**:

- If you answer the question correctly, the system infers that you might find more challenging questions appropriate. Therefore, the next question you receive will be slightly harder.
- Conversely, if you answer the question incorrectly, the system will present a slightly easier question next, presuming the initial one might have been too challenging.

**Continuous Adaptation**:

- As you proceed with the test, each response continuously informs the system about your ability level. With every right answer, questions become progressively harder, ensuring that you're always challenged. With every wrong answer, the difficulty decreases, ensuring the questions are not overwhelmingly hard.
- This adaptive process helps zero in on your true ability level more efficiently than a standard test where every test-taker receives the same set of questions in the same order.

**End of the Test**:

By the end of the test, the system has a detailed performance profile based on how you responded to various difficulty levels. Instead of just counting the number of correct answers, it considers the difficulty of the questions you answered correctly. So, if you answered more challenging questions correctly than someone else, you might receive a higher IQ score, even if you both answered the same total number of questions correctly.

The points you earn from correct answers are summed to form the Raw Score. The raw score is then compared against the norming population to produce the final IQ score, on a scale with a mean of 100 and a standard deviation of 15.
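As a sketch of this final conversion, assuming the norm group's raw scores are summarized by a mean and standard deviation (the specific numbers below are made up for illustration, not real norms):

```python
def raw_to_iq(raw_score, population_mean, population_sd):
    """Convert a raw score to the standard IQ scale
    (mean 100, SD 15) via a z-score against the norm group."""
    z = (raw_score - population_mean) / population_sd
    return 100 + 15 * z

# A raw score one standard deviation above the population mean
# maps to an IQ of 115.
print(raw_to_iq(260, 200, 60))  # 115.0
```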

In Rasch measurement, the assumption of normality becomes evident when raw scores, which represent the proportion of correctly answered items, undergo a logit transformation. This transformation is expressed by the logit equation:

𝜃 = ln( p / (1 − p) )

where:

𝜃 is the individual's ability.

p is the proportion of items that the individual answered correctly.
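A minimal sketch of this person-side logit transformation (the function name is illustrative):

```python
import math

def person_ability_logit(p_correct):
    """Logit transform of an individual's proportion of correct
    answers: theta = ln(p / (1 - p)). A higher proportion correct
    yields a higher ability estimate, on the same logit scale
    as item difficulty."""
    return math.log(p_correct / (1 - p_correct))

print(person_ability_logit(0.5))   # 0.0 (average ability)
print(person_ability_logit(0.8))   # ≈ +1.39
```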

In the Rasch model framework, any individual has a certain probability of either passing or failing any test item, regardless of its level of difficulty. This implies that it's possible for any individual to produce any combination of responses. Even the most proficient person might not get every question right.

Typically, the estimation of a person's measure is based on the most probable observed response pattern, or the response pattern that aligns best with the Rasch model. Yet, sometimes we might have a preliminary idea of a person's ability measure (or an item's challenge level) before gathering current data. We might want to merge this preliminary insight with the new estimation. To do this, we initially calibrate the test items as standard. Then, by integrating the item calibrations, our initial insight, and the observed responses, we derive an enhanced, post-data, measure for the individual. This method is rooted in Thomas Bayes' realization:

Prior Probability x Likelihood from Data = Posterior Probability.

This means: Prob(B′ | {X}) = Prob(B′) × Prob({X} | B′) / Σ over all B [ Prob(B) × Prob({X} | B) ],

where B′ signifies a specific value of the individual's measure, and the sum runs over all potential values of our prior belief, B. {X} denotes the individual's set of responses. The EAP (expected a posteriori) estimate of the individual's measure is the expectation of this posterior distribution.
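The EAP computation can be sketched numerically: place a grid over candidate ability values, weight a prior by the Rasch likelihood of the observed responses, normalize, and take the expectation. The function names, grid range, and the choice of a standard-normal prior here are illustrative assumptions, not the production implementation:

```python
import math

def likelihood(responses, item_difficulties, theta):
    """Rasch likelihood of the response pattern {X} at ability theta."""
    prob = 1.0
    for x, b in zip(responses, item_difficulties):
        p = 1.0 / (1.0 + math.exp(-(theta - b)))
        prob *= p if x else (1 - p)
    return prob

def eap_estimate(responses, item_difficulties, grid=None):
    """Expected a posteriori (EAP) ability estimate: average the
    candidate theta values weighted by prior x likelihood, then
    normalize by the sum over the grid (Bayes' rule above)."""
    if grid is None:
        grid = [i / 10 for i in range(-40, 41)]  # theta from -4 to +4
    # Standard-normal prior as the "preliminary insight" about ability.
    prior = [math.exp(-t * t / 2) for t in grid]
    post = [pr * likelihood(responses, item_difficulties, t)
            for pr, t in zip(prior, grid)]
    total = sum(post)
    return sum(t * w for t, w in zip(grid, post)) / total

# Four items of increasing difficulty; the person misses only the hardest.
theta_hat = eap_estimate([1, 1, 1, 0], [-1.0, 0.0, 1.0, 2.0])
```

Because the prior pulls estimates toward its center, EAP gives finite, stable measures even for perfect or all-wrong response patterns, where maximum-likelihood estimation would diverge.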