{"id":135,"date":"2018-10-31T18:13:37","date_gmt":"2018-10-31T22:13:37","guid":{"rendered":"https:\/\/pressbooks.bccampus.ca\/simplestats\/?post_type=chapter&#038;p=135"},"modified":"2019-11-15T18:52:37","modified_gmt":"2019-11-15T23:52:37","slug":"10-2-1-the-linear-regression-model","status":"publish","type":"chapter","link":"https:\/\/pressbooks.bccampus.ca\/simplestats\/chapter\/10-2-1-the-linear-regression-model\/","title":{"raw":"10.2.1 The Linear Regression Model and the Line of Best Fit","rendered":"10.2.1 The Linear Regression Model and the Line of Best Fit"},"content":{"raw":"[latexpage]\r\n\r\nYou might have noticed that there was no uncertainty of any kind in Example 10.2 about the assignment requirements and mark in the previous section. The line in that case represented a <em>deterministic<\/em> relationship -- <em>x<\/em> fully determined\u00a0<em>y<\/em> (i.e.,<em> x<\/em> fully explained the variability of <em>y<\/em>) -- hence all the observations were on the line itself.\r\n\r\n&nbsp;\r\n\r\nAs such, this was not a typical situation and this was not a typical <em>regression<\/em> line. In reality, in statistical inference we deal with <em>probabilistic<\/em> associations, where the regression line does <em>not<\/em> capture all observations in itself but their <em>general<\/em> (on average) <em>trend<\/em>. That is, in a usual regression model situation, some observations will be above the line and some below it; thus some observations will be <em>underestimated<\/em> and others <em>overestimated<\/em> because <strong>the line serves as a <em>prediction<\/em> <\/strong>(an expectation, a summary, a trend) of the association. 
And as we know by now, predictions\/estimations always contain a level of uncertainty.\r\n\r\n&nbsp;\r\n\r\nSpecifically, we cannot expect that a single independent variable <em>x<\/em> will explain away <em>all<\/em> variability in a dependent variable <em>y<\/em>; there will always be some unexplained (by the regression model) variability left. Figure 10.2 illustrates.\r\n\r\n&nbsp;\r\n\r\n<em>Figure 10.2 Assignment Mark as a Function of Completed Requirements (With Variance)<\/em>\r\n\r\n<img src=\"https:\/\/pressbooks.bccampus.ca\/simplestats\/wp-content\/uploads\/sites\/564\/2019\/04\/scatterplot-class-assignment-requirements-mark-with-variability.png\" alt=\"Scatterplot of assignment marks by number of completed requirements, with observations scattered around an upward-sloping regression line\" width=\"462\" height=\"370\" class=\"wp-image-1345 size-full aligncenter\" \/>\r\n\r\nIn Figure 10.2 I have added seven more observations to the case we had in Figure 10.1 in the previous section, this time allowing for additional variability in the assignment marks: no longer is it enough to know the number of requirements completed to predict the assignment grade. (Imagine that the professor has started evaluating the completed requirements substantively, not just counting them: in this case, while the number of requirements is still essential for the grade, <em>something else<\/em>[footnote]This <em>something else<\/em> is an 'unobserved variable', or a variable not included in the model (even though we could speculate about it). Unobserved variables of this type are the source of the unexplained variance in <em>y<\/em>.[\/footnote] also affects the final assignment mark.)\r\n\r\n&nbsp;\r\n\r\nAn actual <strong>regression model accommodates the uncertainty inherent in estimation through two interrelated concepts, <em>error of prediction<\/em> (a.k.a. 
statistical error) and <em>residuals<\/em>.<\/strong>\r\n\r\n&nbsp;\r\n\r\n<strong>The <em>error of prediction<\/em> reflects the difference between the observations and the predicted values we would have if we had data about the population.<\/strong> That is, if we imagined a line of best fit of the population[footnote]This line of course does not exist; it is a heuristic device.[\/footnote],\u00a0<em>\u03b1+\u03b2x<\/em>, the difference between our observations and that line would be:\r\n\r\n&nbsp;\r\n\r\n$$y-(\\alpha+\\beta x)=\\epsilon$$ = <em>error of prediction<\/em>[footnote]This is the lower-case Greek letter <em>e<\/em>, <em>\u03b5<\/em> [EHpsilon].[\/footnote]\r\n\r\n&nbsp;\r\n\r\nThat is, we need to include the error term in the regression model:\r\n\r\n&nbsp;\r\n\r\n$$y=\\alpha+\\beta x+\\epsilon$$\r\n\r\n&nbsp;\r\n\r\nConsidering that we pretty much never have information about the population, however, we can restate <strong>the <em>sample<\/em> regression model like this<\/strong>:\r\n\r\n&nbsp;\r\n\r\n$$y=a+bx+e$$\r\n\r\n&nbsp;\r\n\r\n<strong>where <em>a<\/em> is the estimated<em>\u00a0\u03b1<\/em>, <em>b<\/em> is the estimated\u00a0<em>\u03b2<\/em>, and <em>e<\/em> is the estimated\u00a0<em>\u03b5<\/em>, with all estimations based on sample data. Note that <em>e<\/em> here is called the <em>residual<\/em>, and it is not only the estimation of the unobservable error of prediction, but also simply the difference between an observation and its predicted value<\/strong>:\r\n\r\n&nbsp;\r\n\r\n$$y-(a+bx)=e$$ = <em>residual<\/em>\r\n\r\n&nbsp;\r\n\r\nSince <em>a+bx<\/em> is the regression line, or the prediction, it also stands for the predicted (estimated) values, which we can, as usual, denote $\\hat{y}$. 
Then, since\r\n\r\n&nbsp;\r\n\r\n$$\\hat{y}=a+bx$$,\r\n\r\n&nbsp;\r\n\r\nwe also have\r\n\r\n&nbsp;\r\n\r\n$$y-\\hat{y}=e$$\r\n\r\n&nbsp;\r\n\r\nor, again, that <strong>the residuals are the difference between the observations and their predicted values.<\/strong>\r\n\r\n&nbsp;\r\n\r\nWith this, we come full circle to the reason for all the notation and protracted explanations above (and here you thought I was subjecting you to all these equations without a purpose): in a graph, <strong>the residuals are simply the vertical distance between the observations and the regression line<\/strong>. (In Figure 10.2 this is the empty space -- measured vertically -- between an observation and the regression line.)\r\n\r\n&nbsp;\r\n\r\nA comprehensive treatment of the residuals (through a full-blown analysis of variance) is beyond the scope of this book, but they do help us understand the nature of the regression line and the logic of regression in general. You see, <strong>the regression line is called a line of <em>best fit<\/em> precisely because it <em>minimizes<\/em> the residuals<\/strong> -- it is created in such a way as to minimize the residuals (and therefore the error of prediction) and fit the data\/observations as best as possible. 
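(As a concrete, though entirely hypothetical, illustration: if the fitted line were $\\hat{y}=20+10x$, a student with $x=5$ completed requirements and an actual mark of $y=75$ would have a predicted mark of $\\hat{y}=20+10(5)=70$ and thus a residual of $e=75-70=5$; the line of best fit is the one for which the sum of such squared residuals, $\\Sigma{(y-\\hat{y})^2}$, comes out as small as possible.) 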
Visually, this will mean that the line is drawn to pass <em>as close as possible<\/em> to all the observations.\r\n\r\n&nbsp;\r\n\r\nIn fact, <strong>linear regression is also called <em>OLS regression<\/em>, which stands for <em>ordinary least squares<\/em>.<\/strong> The<em> least squares<\/em>\u00a0concept comes from the fact that to minimize the distances of the observations to the prediction line, we need to first square them before adding them together[footnote]I.e., $\Sigma{(y-\hat{y})^2}$.[\/footnote] -- just like we needed to do that in the calculation of the variance and the sum of squares (or the distances would cancel each other out)[footnote]The <em>ordinary<\/em> part differentiates this version from another, called <em>generalized least squares regression<\/em>, or <em>GLS<\/em> regression (not discussed here).[\/footnote].\r\n\r\n&nbsp;\r\n\r\nBut how do we ensure that the regression line minimizes the residuals? The next section explains.","rendered":"<p>You might have noticed that there was no uncertainty of any kind in Example 10.2 about the assignment requirements and mark in the previous section. The line in that case represented a <em>deterministic<\/em> relationship &#8212; <em>x<\/em> fully determined\u00a0<em>y<\/em> (i.e.,<em> x<\/em> fully explained the variability of <em>y<\/em>) &#8212; hence all the observations were on the line itself.<\/p>\n<p>&nbsp;<\/p>\n<p>As such, this was not a typical situation and this was not a typical <em>regression<\/em> line. In reality, in statistical inference we deal with <em>probabilistic<\/em> associations, where the regression line does <em>not<\/em> capture all observations in itself but their <em>general<\/em> (on average) <em>trend<\/em>. 
That is, in a usual regression model situation, some observations will be above the line and some below it; thus some observations would be <em>underestimated<\/em> and others would be <em>overestimated<\/em> because <strong>the line serves as a <em>prediction<\/em> <\/strong>(an expectation, a summary, a trend) of the association. And as we know by now, predictions\/estimations always contain a level of uncertainty.<\/p>\n<p>&nbsp;<\/p>\n<p>Specifically, we cannot expect that a single independent variable <em>x<\/em> will explain away <em>all<\/em> variability in a dependent variable <em>y<\/em>; there will always be some unexplained (by the regression model) variability left. Figure 10.2 illustrates.<\/p>\n<p>&nbsp;<\/p>\n<p><em>Figure 10.2 Assignment Mark as a Function of Completed Requirements (With Variance)<\/em><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/pressbooks.bccampus.ca\/simplestats\/wp-content\/uploads\/sites\/564\/2019\/04\/scatterplot-class-assignment-requirements-mark-with-variability.png\" alt=\"\" width=\"462\" height=\"370\" class=\"wp-image-1345 size-full aligncenter\" srcset=\"https:\/\/pressbooks.bccampus.ca\/simplestats\/wp-content\/uploads\/sites\/564\/2019\/04\/scatterplot-class-assignment-requirements-mark-with-variability.png 462w, https:\/\/pressbooks.bccampus.ca\/simplestats\/wp-content\/uploads\/sites\/564\/2019\/04\/scatterplot-class-assignment-requirements-mark-with-variability-300x240.png 300w, https:\/\/pressbooks.bccampus.ca\/simplestats\/wp-content\/uploads\/sites\/564\/2019\/04\/scatterplot-class-assignment-requirements-mark-with-variability-65x52.png 65w, https:\/\/pressbooks.bccampus.ca\/simplestats\/wp-content\/uploads\/sites\/564\/2019\/04\/scatterplot-class-assignment-requirements-mark-with-variability-225x180.png 225w, https:\/\/pressbooks.bccampus.ca\/simplestats\/wp-content\/uploads\/sites\/564\/2019\/04\/scatterplot-class-assignment-requirements-mark-with-variability-350x280.png 350w\" 
sizes=\"auto, (max-width: 462px) 100vw, 462px\" \/><\/p>\n<p>In Figure 10.2 I have added seven more observations to the case we had in Figure 10.1 in the previous section, this time allowing for additional variability in the assignment marks: no longer is it enough to know the number of requirements completed to predict the assignment grade. (Imagine that the professor has started evaluating the completed requirements substantively, not just counting them: in this case while the number of requirements is still essential for the grade, <em>something else<\/em><a class=\"footnote\" title=\"This something else is an 'unobserved variable', or a variable not included in the model (even though we could speculate about it). This type of unobserved variable\/s is the source for the unexplained variance in y.\" id=\"return-footnote-135-1\" href=\"#footnote-135-1\" aria-label=\"Footnote 1\"><sup class=\"footnote\">[1]<\/sup><\/a> also affects the final assignment mark.)<\/p>\n<p>&nbsp;<\/p>\n<p>An actual <strong>regression model accommodates the uncertainty inherent in estimation through two interrelated concepts, <em>error of prediction<\/em> (a.k.a. 
statistical error) and <em>residuals<\/em>.<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p><strong>The <em>error of prediction<\/em> reflects the difference between the observations and the predicted values we would have if we had data about the population.<\/strong> That is, if we imagined a line of best fit of the population<a class=\"footnote\" title=\"This line of course does not exist, it is a heuristic device.\" id=\"return-footnote-135-2\" href=\"#footnote-135-2\" aria-label=\"Footnote 2\"><sup class=\"footnote\">[2]<\/sup><\/a>,\u00a0<em>\u03b1+\u03b2x<\/em>, the difference between our observations and that line would be:<\/p>\n<p>&nbsp;<\/p>\n<p class=\"ql-center-displayed-equation\" style=\"line-height: 18px;\"><span class=\"ql-right-eqno\"> &nbsp; <\/span><span class=\"ql-left-eqno\"> &nbsp; <\/span><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/pressbooks.bccampus.ca\/simplestats\/wp-content\/ql-cache\/quicklatex.com-b96453adbb78fe796e748228cab4ea35_l3.png\" height=\"18\" width=\"130\" class=\"ql-img-displayed-equation quicklatex-auto-format\" alt=\"&#92;&#091;&#121;&#45;&#40;&#92;&#97;&#108;&#112;&#104;&#97;&#43;&#92;&#98;&#101;&#116;&#97;&#32;&#120;&#41;&#61;&#92;&#101;&#112;&#115;&#105;&#108;&#111;&#110;&#92;&#093;\" title=\"Rendered by QuickLaTeX.com\" \/><\/p>\n<p> = <em>error of prediction<\/em><a class=\"footnote\" title=\"This is the small-case Greek letter e, \u03b5 [EHpsilon].\" id=\"return-footnote-135-3\" href=\"#footnote-135-3\" aria-label=\"Footnote 3\"><sup class=\"footnote\">[3]<\/sup><\/a><\/p>\n<p>&nbsp;<\/p>\n<p>That is, we need to include the error term in the regression model:<\/p>\n<p>&nbsp;<\/p>\n<p class=\"ql-center-displayed-equation\" style=\"line-height: 16px;\"><span class=\"ql-right-eqno\"> &nbsp; <\/span><span class=\"ql-left-eqno\"> &nbsp; <\/span><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/pressbooks.bccampus.ca\/simplestats\/wp-content\/ql-cache\/quicklatex.com-f3b126408e1a72f9041f74000215e55d_l3.png\" 
height=\"16\" width=\"97\" class=\"ql-img-displayed-equation quicklatex-auto-format\" alt=\"&#92;&#091;&#121;&#61;&#92;&#97;&#108;&#112;&#97;&#43;&#92;&#98;&#101;&#116;&#97;&#32;&#120;&#32;&#43;&#92;&#101;&#112;&#115;&#105;&#108;&#111;&#110;&#92;&#093;\" title=\"Rendered by QuickLaTeX.com\" \/><\/p>\n<p>&nbsp;<\/p>\n<p>Considering that we pretty much never have information about the population, however, we can restate <strong>the <em>sample<\/em> regression model like this<\/strong>:<\/p>\n<p>&nbsp;<\/p>\n<p class=\"ql-center-displayed-equation\" style=\"line-height: 17px;\"><span class=\"ql-right-eqno\"> &nbsp; <\/span><span class=\"ql-left-eqno\"> &nbsp; <\/span><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/pressbooks.bccampus.ca\/simplestats\/wp-content\/ql-cache\/quicklatex.com-84f0bee76bdfc1a5b0350cd60acbc9d7_l3.png\" height=\"17\" width=\"111\" class=\"ql-img-displayed-equation quicklatex-auto-format\" alt=\"&#92;&#091;&#121;&#61;&#97;&#43;&#98;&#120;&#43;&#101;&#92;&#093;\" title=\"Rendered by QuickLaTeX.com\" \/><\/p>\n<p>&nbsp;<\/p>\n<p><strong>where <em>a<\/em> is the estimated<em>\u00a0\u03b1<\/em>, <em>b<\/em> is the estimated\u00a0<em>\u03b2<\/em>, and <em>e<\/em> is the estimated\u00a0<em>\u03b5<\/em>, with all estimations based on sample data. 
Note that <em>e<\/em> here is called the <em>residual<\/em>, and it is not only the estimation of the unobservable error of prediction, but also simply the difference between an observation and its predicted value<\/strong>:<\/p>\n<p>&nbsp;<\/p>\n<p class=\"ql-center-displayed-equation\" style=\"line-height: 18px;\"><span class=\"ql-right-eqno\"> &nbsp; <\/span><span class=\"ql-left-eqno\"> &nbsp; <\/span><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/pressbooks.bccampus.ca\/simplestats\/wp-content\/ql-cache\/quicklatex.com-90881ef8c8773e5b4c7e093bb33a0fa6_l3.png\" height=\"18\" width=\"125\" class=\"ql-img-displayed-equation quicklatex-auto-format\" alt=\"&#92;&#091;&#121;&#45;&#40;&#97;&#43;&#98;&#120;&#41;&#61;&#101;&#92;&#093;\" title=\"Rendered by QuickLaTeX.com\" \/><\/p>\n<p> = <em>residual<\/em><\/p>\n<p>&nbsp;<\/p>\n<p>Since <em>a+bx<\/em> is the regression line, or the prediction, it also stands for the predicted (estimated values), which we can, as usual, denote <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/pressbooks.bccampus.ca\/simplestats\/wp-content\/ql-cache\/quicklatex.com-ddbbb251a1eac6ff930e639227a7e32d_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#92;&#104;&#97;&#116;&#123;&#121;&#125;\" title=\"Rendered by QuickLaTeX.com\" height=\"17\" width=\"9\" style=\"vertical-align: -4px;\" \/>. 
Then, since<\/p>\n<p>&nbsp;<\/p>\n<p class=\"ql-center-displayed-equation\" style=\"line-height: 17px;\"><span class=\"ql-right-eqno\"> &nbsp; <\/span><span class=\"ql-left-eqno\"> &nbsp; <\/span><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/pressbooks.bccampus.ca\/simplestats\/wp-content\/ql-cache\/quicklatex.com-cbea33915168181d08fccc57b27c6f82_l3.png\" height=\"17\" width=\"82\" class=\"ql-img-displayed-equation quicklatex-auto-format\" alt=\"&#92;&#091;&#92;&#104;&#97;&#116;&#123;&#121;&#125;&#61;&#97;&#43;&#98;&#120;&#92;&#093;\" title=\"Rendered by QuickLaTeX.com\" \/><\/p>\n<p>,<\/p>\n<p>&nbsp;<\/p>\n<p>we also have<\/p>\n<p>&nbsp;<\/p>\n<p class=\"ql-center-displayed-equation\" style=\"line-height: 17px;\"><span class=\"ql-right-eqno\"> &nbsp; <\/span><span class=\"ql-left-eqno\"> &nbsp; <\/span><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/pressbooks.bccampus.ca\/simplestats\/wp-content\/ql-cache\/quicklatex.com-19079720f84f061fa4da21466792af2f_l3.png\" height=\"17\" width=\"72\" class=\"ql-img-displayed-equation quicklatex-auto-format\" alt=\"&#92;&#091;&#121;&#45;&#92;&#104;&#97;&#116;&#123;&#121;&#125;&#61;&#101;&#92;&#093;\" title=\"Rendered by QuickLaTeX.com\" \/><\/p>\n<p>&nbsp;<\/p>\n<p>or, again, that <strong>the residuals are the difference between the observations and their predicted values.<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p>With this, we come full circle to the reason for all the notation and protracted explanations above (and here you thought I was subjecting you to all these equations without a purpose): in a graph, <strong>the residuals are simply the vertical distance between the observations and the regression line<\/strong>. 
(In Figure 10.2 this is the empty space &#8212; measured vertically &#8212; between an observation and the regression line.)<\/p>\n<p>&nbsp;<\/p>\n<p>A comprehensive treatment of the residuals (through a full-blown analysis of variance) is beyond the scope of this book, but they do help us understand the nature of the regression line and the logic of regression in general. You see, <strong>the regression line is called a line of <em>best fit<\/em> precisely because it <em>minimizes<\/em> the residuals<\/strong> &#8212; it is created in such a way as to minimize the residuals (and therefore the error of prediction) and fit the data\/observations as best as possible. Visually, this will mean that the line is drawn to pass <em>as close as possible<\/em> to all the observations.<\/p>\n<p>&nbsp;<\/p>\n<p>In fact, <strong>linear regression is also called <em>OLS regression<\/em>, which stands for <em>ordinary least squares<\/em>.<\/strong> The<em> least squares<\/em>\u00a0concept comes from the fact that to minimize the distances of the observations to the prediction line, we need to first square them before adding them together<a class=\"footnote\" title=\"I.e., .\" id=\"return-footnote-135-4\" href=\"#footnote-135-4\" aria-label=\"Footnote 4\"><sup class=\"footnote\">[4]<\/sup><\/a> &#8212; just like we needed to do that in the calculation of the variance and the sum of squares (or the distances would cancel each other out)<a class=\"footnote\" title=\"The ordinary part differentiates this version from another, called generalized least squares regression, or GLS regression (not discussed here).\" id=\"return-footnote-135-5\" href=\"#footnote-135-5\" aria-label=\"Footnote 5\"><sup class=\"footnote\">[5]<\/sup><\/a>.<\/p>\n<p>&nbsp;<\/p>\n<p>But how do we ensure that the regression line minimizes the residuals? 
The next section explains.<\/p>\n<hr class=\"before-footnotes clear\" \/><div class=\"footnotes\"><ol><li id=\"footnote-135-1\">This <em>something else<\/em> is an 'unobserved variable', or a variable not included in the model (even though we could speculate about it). This type of unobserved variable\/s is the source for the unexplained variance in <em>y<\/em>. <a href=\"#return-footnote-135-1\" class=\"return-footnote\" aria-label=\"Return to footnote 1\">&crarr;<\/a><\/li><li id=\"footnote-135-2\">This line of course does not exist, it is a heuristic device. <a href=\"#return-footnote-135-2\" class=\"return-footnote\" aria-label=\"Return to footnote 2\">&crarr;<\/a><\/li><li id=\"footnote-135-3\">This is the small-case Greek letter <em>e<\/em>, <em>\u03b5<\/em> [EHpsilon]. <a href=\"#return-footnote-135-3\" class=\"return-footnote\" aria-label=\"Return to footnote 3\">&crarr;<\/a><\/li><li id=\"footnote-135-4\">I.e., <img src=\"https:\/\/pressbooks.bccampus.ca\/simplestats\/wp-content\/ql-cache\/quicklatex.com-904b7be2c553edcccaaa2c434d6c5aec_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#92;&#83;&#105;&#103;&#109;&#97;&#123;&#40;&#121;&#45;&#92;&#104;&#97;&#116;&#123;&#121;&#125;&#41;&#94;&#50;&#125;\" title=\"Rendered by QuickLaTeX.com\" height=\"19\" width=\"74\" style=\"vertical-align: -4px;\" \/>. <a href=\"#return-footnote-135-4\" class=\"return-footnote\" aria-label=\"Return to footnote 4\">&crarr;<\/a><\/li><li id=\"footnote-135-5\">The <em>ordinary<\/em> part is there to differentiate between another regression version called <em>generalized least squares regression<\/em>, or <em>GLS<\/em> regression (not discussed here). 
<a href=\"#return-footnote-135-5\" class=\"return-footnote\" aria-label=\"Return to footnote 5\">&crarr;<\/a><\/li><\/ol><\/div>","protected":false},"author":533,"menu_order":3,"template":"","meta":{"pb_show_title":"on","pb_short_title":"","pb_subtitle":"","pb_authors":[],"pb_section_license":""},"chapter-type":[],"contributor":[],"license":[],"class_list":["post-135","chapter","type-chapter","status-publish","hentry"],"part":128,"_links":{"self":[{"href":"https:\/\/pressbooks.bccampus.ca\/simplestats\/wp-json\/pressbooks\/v2\/chapters\/135","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/pressbooks.bccampus.ca\/simplestats\/wp-json\/pressbooks\/v2\/chapters"}],"about":[{"href":"https:\/\/pressbooks.bccampus.ca\/simplestats\/wp-json\/wp\/v2\/types\/chapter"}],"author":[{"embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/simplestats\/wp-json\/wp\/v2\/users\/533"}],"version-history":[{"count":8,"href":"https:\/\/pressbooks.bccampus.ca\/simplestats\/wp-json\/pressbooks\/v2\/chapters\/135\/revisions"}],"predecessor-version":[{"id":2147,"href":"https:\/\/pressbooks.bccampus.ca\/simplestats\/wp-json\/pressbooks\/v2\/chapters\/135\/revisions\/2147"}],"part":[{"href":"https:\/\/pressbooks.bccampus.ca\/simplestats\/wp-json\/pressbooks\/v2\/parts\/128"}],"metadata":[{"href":"https:\/\/pressbooks.bccampus.ca\/simplestats\/wp-json\/pressbooks\/v2\/chapters\/135\/metadata\/"}],"wp:attachment":[{"href":"https:\/\/pressbooks.bccampus.ca\/simplestats\/wp-json\/wp\/v2\/media?parent=135"}],"wp:term":[{"taxonomy":"chapter-type","embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/simplestats\/wp-json\/pressbooks\/v2\/chapter-type?post=135"},{"taxonomy":"contributor","embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/simplestats\/wp-json\/wp\/v2\/contributor?post=135"},{"taxonomy":"license","embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/simplestats\/wp-json\/wp\/v2\/license?post=135"}],"curies":[{"name":"wp","href":"https:\/\/api
.w.org\/{rel}","templated":true}]}}