{"id":5254,"date":"2013-12-06T16:07:56","date_gmt":"2013-12-06T15:07:56","guid":{"rendered":"http:\/\/www.walkingrandomly.com\/?p=5254"},"modified":"2013-12-07T19:41:58","modified_gmt":"2013-12-07T18:41:58","slug":"simple-nonlinear-least-squares-curve-fitting-in-r","status":"publish","type":"post","link":"https:\/\/walkingrandomly.com\/?p=5254","title":{"rendered":"Simple nonlinear least squares curve fitting in R"},"content":{"rendered":"<p>The R code used for this example comes from <a href=\"https:\/\/twitter.com\/geospacedman\">Barry Rowlingson<\/a>, so huge thanks to him.<\/p>\n<p>A question I get asked a lot is &#8216;How can I do <a href=\"http:\/\/en.wikipedia.org\/wiki\/Non-linear_least_squares\">nonlinear least squares<\/a> curve fitting in X?&#8217; where X might be MATLAB, Mathematica or a whole host of alternatives. \u00a0Since this is such a common query, I thought I&#8217;d write up how to do it for a very simple problem in several systems that I&#8217;m interested in.<\/p>\n<p>This is the R version. 
For other versions, see the list below:<\/p>\n<ul>\n<li><a href=\"https:\/\/www.walkingrandomly.com\/?p=5181\">Simple nonlinear least squares curve fitting in Julia<\/a><\/li>\n<li><a href=\"https:\/\/www.walkingrandomly.com\/?p=5218\">Simple nonlinear least squares curve fitting in Maple<\/a><\/li>\n<li><a href=\"https:\/\/www.walkingrandomly.com\/?p=5180\">Simple nonlinear least squares curve fitting in Mathematica<\/a><\/li>\n<li><a href=\"https:\/\/www.walkingrandomly.com\/?p=5196\">Simple nonlinear least squares curve fitting in MATLAB<\/a><\/li>\n<li><a href=\"https:\/\/www.walkingrandomly.com\/?p=5215\">Simple nonlinear least squares curve fitting in Python<\/a><\/li>\n<\/ul>\n<p><strong>The problem<\/strong><\/p>\n<pre>xdata = -2,-1.64,-1.33,-0.7,0,0.45,1.2,1.64,2.32,2.9\r\nydata = 0.699369,0.700462,0.695354,1.03905,1.97389,2.41143,1.91091,0.919576,-0.730975,-1.42001<\/pre>\n<p>and you&#8217;d like to fit the function<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/walkingrandomly.com\/wp-content\/ql-cache\/quicklatex.com-17f46b025dd91e024cf2dc04211e19ba_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#32;&#70;&#40;&#112;&#95;&#49;&#44;&#112;&#95;&#50;&#44;&#120;&#41;&#32;&#61;&#32;&#112;&#95;&#49;&#32;&#92;&#99;&#111;&#115;&#40;&#112;&#95;&#50;&#32;&#120;&#41;&#43;&#112;&#95;&#50;&#32;&#92;&#115;&#105;&#110;&#40;&#112;&#95;&#49;&#32;&#120;&#41;&#32;\" title=\"Rendered by QuickLaTeX.com\" height=\"18\" width=\"297\" style=\"vertical-align: -4px;\"\/><\/p>\n<p>using nonlinear least squares. \u00a0Your starting guesses for the parameters are p1=1 and p2=0.2.<\/p>\n<p>For now, we are primarily interested in the following results:<\/p>\n<ul>\n<li>The fit parameters<\/li>\n<li>Sum of squared residuals<\/li>\n<li>Parameter confidence intervals<\/li>\n<\/ul>\n<p>Future updates of these posts will show how to get other results. 
Let me know what you are most interested in.<\/p>\n<p><strong>Solution in R<\/strong><\/p>\n<pre># construct the data vectors using c()\r\nxdata = c(-2,-1.64,-1.33,-0.7,0,0.45,1.2,1.64,2.32,2.9)\r\nydata = c(0.699369,0.700462,0.695354,1.03905,1.97389,2.41143,1.91091,0.919576,-0.730975,-1.42001)\r\n\r\n# look at it\r\nplot(xdata,ydata)\r\n\r\n# some starting values\r\np1 = 1\r\np2 = 0.2\r\n\r\n# do the fit\r\nfit = nls(ydata ~ p1*cos(p2*xdata) + p2*sin(p1*xdata), start=list(p1=p1,p2=p2))\r\n\r\n# summarise\r\nsummary(fit)<\/pre>\n<p>This gives<\/p>\n<pre>Formula: ydata ~ p1 * cos(p2 * xdata) + p2 * sin(p1 * xdata)\r\n\r\nParameters:\r\n   Estimate Std. Error t value Pr(&gt;|t|)    \r\np1 1.881851   0.027430   68.61 2.27e-12 ***\r\np2 0.700230   0.009153   76.51 9.50e-13 ***\r\n---\r\nSignif. codes:  0 \u2018***\u2019 0.001 \u2018**\u2019 0.01 \u2018*\u2019 0.05 \u2018.\u2019 0.1 \u2018 \u2019 1\r\n\r\nResidual standard error: 0.08202 on 8 degrees of freedom\r\n\r\nNumber of iterations to convergence: 7 \r\nAchieved convergence tolerance: 2.189e-06<\/pre>\n<p>Draw the fit on the plot by getting the prediction from the fit at 200 x-coordinates across the range of xdata.<\/p>\n<pre>new = data.frame(xdata = seq(min(xdata),max(xdata),len=200))\r\nlines(new$xdata,predict(fit,newdata=new))<\/pre>\n<p><a href=\"https:\/\/www.walkingrandomly.com\/wp-content\/uploads\/2013\/12\/R_nonlinear.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-5293\" alt=\"R_nonlinear\" src=\"https:\/\/www.walkingrandomly.com\/wp-content\/uploads\/2013\/12\/R_nonlinear.png\" width=\"752\" height=\"534\" srcset=\"https:\/\/walkingrandomly.com\/wp-content\/uploads\/2013\/12\/R_nonlinear.png 752w, https:\/\/walkingrandomly.com\/wp-content\/uploads\/2013\/12\/R_nonlinear-300x213.png 300w\" sizes=\"auto, (max-width: 752px) 100vw, 752px\" \/><\/a><br \/>\nGetting the sum of squared residuals is easy enough:<\/p>\n<pre>sum(resid(fit)^2)<\/pre>\n<p>Which 
gives<\/p>\n<pre>[1] 0.0538127<\/pre>\n<p>Finally, let&#8217;s get the parameter confidence intervals.<\/p>\n<pre>confint(fit)<\/pre>\n<p>Which gives<\/p>\n<pre>Waiting for profiling to be done...\r\n        2.5%     97.5%\r\np1 1.8206081 1.9442365\r\np2 0.6794193 0.7209843<\/pre>\n","protected":false},"excerpt":{"rendered":"<p>The R code used for this example comes from Barry Rowlingson, so huge thanks to him. A question I get asked a lot is &#8216;How can I do nonlinear least squares curve fitting in X?&#8217; where X might be MATLAB, Mathematica or a whole host of alternatives. \u00a0Since this is such a common query, I [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[4,36],"tags":[],"class_list":["post-5254","post","type-post","status-publish","format-standard","hentry","category-math-software","category-r"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/p3swhs-1mK","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/walkingrandomly.com\/index.php?rest_route=\/wp\/v2\/posts\/5254","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/walkingrandomly.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/walkingrandomly.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/walkingrandomly.com\/index.php?rest_route=\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/walkingrandomly.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5254"}],"version-history":[{"count":5,"hr
ef":"https:\/\/walkingrandomly.com\/index.php?rest_route=\/wp\/v2\/posts\/5254\/revisions"}],"predecessor-version":[{"id":5302,"href":"https:\/\/walkingrandomly.com\/index.php?rest_route=\/wp\/v2\/posts\/5254\/revisions\/5302"}],"wp:attachment":[{"href":"https:\/\/walkingrandomly.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5254"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/walkingrandomly.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5254"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/walkingrandomly.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=5254"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}