{"id":600,"date":"2021-03-26T15:46:29","date_gmt":"2021-03-26T19:46:29","guid":{"rendered":"https:\/\/pressbooks.bccampus.ca\/selkirktbl\/?post_type=chapter&#038;p=600"},"modified":"2021-04-08T17:17:57","modified_gmt":"2021-04-08T21:17:57","slug":"multiple-choice-questions","status":"publish","type":"chapter","link":"https:\/\/pressbooks.bccampus.ca\/selkirktbl\/chapter\/multiple-choice-questions\/","title":{"raw":"Selected Response Items","rendered":"Selected Response Items"},"content":{"raw":"While not the sole assessment method, TBL relies heavily on [pb_glossary id=\"741\"]selected response item[\/pb_glossary] (commonly called multiple-choice questions or MCQs) quizzes in the Readiness Assurance Tests (RAT) and in the group application activities. A well-developed MCQ is an efficient and reliable way to generate valid evidence that supports conclusions about student learning. As with everything discussed in this manual, selected response items demand a deliberate and evidence-based approach. The process to develop quality MCQs can be exhaustive to ensure that each item reliably reflects the intended [pb_glossary id=\"728\"]construct [\/pb_glossary] at an appropriate cognitive level. A low-stakes quiz may involve only several hours of instructor time to develop questions, a review by colleagues, and a post-administration review of the questions for quality and validity. 
Meanwhile, a high-stakes licensing exam generally involves a panel of expert developers, a review by subject matter experts, a field-testing phase for each item, and hundreds of hours and thousands of dollars to develop a test bank with an adequate quantity of items (Downing &amp; Haladyna, 2006; <em>Guidelines for the Development of Multiple-Choice Questions<\/em>, 2010; Williams, 2020).\r\n\r\nIt is beyond the scope of this manual to adequately prepare instructors to develop high-quality selected response items, but because MCQs are so prevalent in TBL, an overview of the process will be described.\r\n<h4><strong>The Anatomy of an MCQ<\/strong><\/h4>\r\nAn MCQ is built according to a consistent framework:\r\n\r\n[h5p id=\"11\"]\r\n<p style=\"text-align: right\"><em>(adapted from Gierl, n.d.)<\/em><\/p>\r\n\r\n<h4><strong>Identifying the [pb_glossary id=\"728\"]Construct[\/pb_glossary]<\/strong><\/h4>\r\nAs with the development of a TBL course and TBL learning modules, the development of quality MCQs begins at the end by determining what is to be measured by a selected response test and by each item or question that makes up the test. An MCQ test should evaluate an overall construct, while individual items (questions) should target a specific concept that is a component of the larger construct. Selected response items do this by forcing students to make a choice (Downing &amp; Haladyna, 2006; Gierl, n.d.; Sibley &amp; Ostafichuk, 2015; Sibley &amp; Roberson, 2016).\r\n\r\nAn instructor needs to identify not only the specific concept that an item will test but also a target cognitive level, according to a model such as <a href=\"https:\/\/pressbooks.bccampus.ca\/selkirktbl\/chapter\/backwards-design#Bloom's\">Bloom's taxonomy<\/a>. The desired cognitive level varies depending on the activity the item is designed for.\r\n<ul>\r\n \t<li>RAT items will typically focus on remembering, understanding and light application. 
These questions will often ask students to perform tasks such as: identify, distinguish, classify and organize. Questions will usually begin with: \"What is...?\" and \"Why does...?\"<\/li>\r\n \t<li>Group application activities should push students into higher-level application, analysis and evaluation. Questions designed for the application activities will typically contain verbs such as: solve, compare, categorize, organize and design. Questions will often contain a superlative in their wording, such as: \"What is the\u00a0<strong>most<\/strong>...?\" or \"Which is the\u00a0<strong>best<\/strong>...?\" in order to force a specific choice. Students will be required to construct a rationale for their choice in order to adequately answer and defend their decision (Roberson &amp; Franchini, 2014; Sibley &amp; Ostafichuk, 2015; Sibley &amp; Roberson, 2016; Williams, 2020).<\/li>\r\n<\/ul>\r\n<h4><strong>Guiding Principles for Writing Selected Response Items<\/strong><\/h4>\r\nThe following considerations should be taken into account when developing new MCQs or revising existing ones:\r\n<ul>\r\n \t<li>Items should represent a specific and important concept or topic<\/li>\r\n \t<li>Each item should pose a clear question that students could answer without looking at the options<\/li>\r\n \t<li>Avoid negatively worded stems or options<\/li>\r\n \t<li>All options should be homogeneous in terms of wording, grammar, length and content<\/li>\r\n \t<li>Avoid \"all of the above\" and \"none of the above\" options<\/li>\r\n \t<li>Distractors should all be plausible and none should be obvious as a distractor (every option should appear as if it could be correct)<\/li>\r\n<\/ul>\r\n<p style=\"text-align: right\"><em>(Gierl, n.d.; Williams, 2020)<\/em><\/p>\r\n<p style=\"text-align: right\"><\/p>","rendered":"<p>While not the sole assessment method, TBL relies heavily on <a class=\"glossary-term\" aria-haspopup=\"dialog\" aria-describedby=\"definition\" href=\"#term_600_741\">selected response item<\/a> 
(commonly called multiple-choice questions or MCQs) quizzes in the Readiness Assurance Tests (RAT) and in the group application activities. A well-developed MCQ is an efficient and reliable way to generate valid evidence that supports conclusions about student learning. As with everything discussed in this manual, selected response items demand a deliberate and evidence-based approach. The process to develop quality MCQs can be exhaustive to ensure that each item reliably reflects the intended <a class=\"glossary-term\" aria-haspopup=\"dialog\" aria-describedby=\"definition\" href=\"#term_600_728\">construct <\/a> at an appropriate cognitive level. A low-stakes quiz may involve only several hours of instructor time to develop questions, a review by colleagues, and a post-administration review of the questions for quality and validity. Meanwhile, a high-stakes licensing exam generally involves a panel of expert developers, a review by subject matter experts, a field-testing phase for each item, and hundreds of hours and thousands of dollars to develop a test bank with an adequate quantity of items (Downing &amp; Haladyna, 2006; <em>Guidelines for the Development of Multiple-Choice Questions<\/em>, 2010; Williams, 2020).<\/p>\n<p>It is beyond the scope of this manual to adequately prepare instructors to develop high-quality selected response items, but because MCQs are so prevalent in TBL, an overview of the process will be described.<\/p>\n<h4><strong>The Anatomy of an MCQ<\/strong><\/h4>\n<p>An MCQ is built according to a consistent framework:<\/p>\n<div id=\"h5p-11\">\n<div class=\"h5p-iframe-wrapper\"><iframe id=\"h5p-iframe-11\" class=\"h5p-iframe\" data-content-id=\"11\" style=\"height:1px\" src=\"about:blank\" frameBorder=\"0\" scrolling=\"no\" title=\"MCQs\"><\/iframe><\/div>\n<\/div>\n<p style=\"text-align: right\"><em>(adapted from Gierl, n.d.)<\/em><\/p>\n<h4><strong>Identifying the <a class=\"glossary-term\" aria-haspopup=\"dialog\" 
aria-describedby=\"definition\" href=\"#term_600_728\">Construct<\/a><\/strong><\/h4>\n<p>As with the development of a TBL course and TBL learning modules, the development of quality MCQs begins at the end by determining what is to be measured by a selected response test and by each item or question that makes up the test. An MCQ test should evaluate an overall construct, while individual items (questions) should target a specific concept that is a component of the larger construct. Selected response items do this by forcing students to make a choice (Downing &amp; Haladyna, 2006; Gierl, n.d.; Sibley &amp; Ostafichuk, 2015; Sibley &amp; Roberson, 2016).<\/p>\n<p>An instructor needs to identify not only the specific concept that an item will test but also a target cognitive level, according to a model such as <a href=\"https:\/\/pressbooks.bccampus.ca\/selkirktbl\/chapter\/backwards-design#Bloom's\">Bloom&#8217;s taxonomy<\/a>. The desired cognitive level varies depending on the activity the item is designed for.<\/p>\n<ul>\n<li>RAT items will typically focus on remembering, understanding and light application. These questions will often ask students to perform tasks such as: identify, distinguish, classify and organize. Questions will usually begin with: &#8220;What is&#8230;?&#8221; and &#8220;Why does&#8230;?&#8221;<\/li>\n<li>Group application activities should push students into higher-level application, analysis and evaluation. Questions designed for the application activities will typically contain verbs such as: solve, compare, categorize, organize and design. Questions will often contain a superlative in their wording, such as: &#8220;What is the\u00a0<strong>most<\/strong>&#8230;?&#8221; or &#8220;Which is the\u00a0<strong>best<\/strong>&#8230;?&#8221; in order to force a specific choice. 
Students will be required to construct a rationale for their choice in order to adequately answer and defend their decision (Roberson &amp; Franchini, 2014; Sibley &amp; Ostafichuk, 2015; Sibley &amp; Roberson, 2016; Williams, 2020).<\/li>\n<\/ul>\n<h4><strong>Guiding Principles for Writing Selected Response Items<\/strong><\/h4>\n<p>The following considerations should be taken into account when developing new MCQs or revising existing ones:<\/p>\n<ul>\n<li>Items should represent a specific and important concept or topic<\/li>\n<li>Each item should pose a clear question that students could answer without looking at the options<\/li>\n<li>Avoid negatively worded stems or options<\/li>\n<li>All options should be homogeneous in terms of wording, grammar, length and content<\/li>\n<li>Avoid &#8220;all of the above&#8221; and &#8220;none of the above&#8221; options<\/li>\n<li>Distractors should all be plausible and none should be obvious as a distractor (every option should appear as if it could be correct)<\/li>\n<\/ul>\n<p style=\"text-align: right\"><em>(Gierl, n.d.; Williams, 2020)<\/em><\/p>\n<p style=\"text-align: right\">\n<div class=\"glossary\"><span class=\"screen-reader-text\" id=\"definition\">definition<\/span><template id=\"term_600_741\"><div class=\"glossary__definition\" role=\"dialog\" data-id=\"term_600_741\"><div tabindex=\"-1\"><p>A testing item that asks examinees to \"choose an answer to a question or a statement from a listing of several possible answers\" (Downing &amp; Haladyna, 2006, p. 
287).<\/p>\n<\/div><button><span aria-hidden=\"true\">&times;<\/span><span class=\"screen-reader-text\">Close definition<\/span><\/button><\/div><\/template><template id=\"term_600_728\"><div class=\"glossary__definition\" role=\"dialog\" data-id=\"term_600_728\"><div tabindex=\"-1\"><p>The content domain that is to be measured by a MCQ (Downing &amp; Haladyna, 2006)<\/p>\n<\/div><button><span aria-hidden=\"true\">&times;<\/span><span class=\"screen-reader-text\">Close definition<\/span><\/button><\/div><\/template><\/div>","protected":false},"author":2609,"menu_order":3,"comment_status":"open","ping_status":"closed","template":"","meta":{"pb_show_title":"on","pb_short_title":"","pb_subtitle":"","pb_authors":[],"pb_section_license":""},"chapter-type":[],"contributor":[],"license":[],"class_list":["post-600","chapter","type-chapter","status-publish","hentry"],"part":57,"_links":{"self":[{"href":"https:\/\/pressbooks.bccampus.ca\/selkirktbl\/wp-json\/pressbooks\/v2\/chapters\/600","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/pressbooks.bccampus.ca\/selkirktbl\/wp-json\/pressbooks\/v2\/chapters"}],"about":[{"href":"https:\/\/pressbooks.bccampus.ca\/selkirktbl\/wp-json\/wp\/v2\/types\/chapter"}],"author":[{"embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/selkirktbl\/wp-json\/wp\/v2\/users\/2609"}],"replies":[{"embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/selkirktbl\/wp-json\/wp\/v2\/comments?post=600"}],"version-history":[{"count":25,"href":"https:\/\/pressbooks.bccampus.ca\/selkirktbl\/wp-json\/pressbooks\/v2\/chapters\/600\/revisions"}],"predecessor-version":[{"id":733,"href":"https:\/\/pressbooks.bccampus.ca\/selkirktbl\/wp-json\/pressbooks\/v2\/chapters\/600\/revisions\/733"}],"part":[{"href":"https:\/\/pressbooks.bccampus.ca\/selkirktbl\/wp-json\/pressbooks\/v2\/parts\/57"}],"metadata":[{"href":"https:\/\/pressbooks.bccampus.ca\/selkirktbl\/wp-json\/pressbooks\/v2\/chapters\/600\/metadata\/"}],"wp:attachment":[{"href":
"https:\/\/pressbooks.bccampus.ca\/selkirktbl\/wp-json\/wp\/v2\/media?parent=600"}],"wp:term":[{"taxonomy":"chapter-type","embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/selkirktbl\/wp-json\/pressbooks\/v2\/chapter-type?post=600"},{"taxonomy":"contributor","embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/selkirktbl\/wp-json\/wp\/v2\/contributor?post=600"},{"taxonomy":"license","embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/selkirktbl\/wp-json\/wp\/v2\/license?post=600"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}