{"id":357,"date":"2024-06-03T03:00:00","date_gmt":"2024-06-03T01:00:00","guid":{"rendered":"https:\/\/lorentzen.ch\/?p=357"},"modified":"2024-06-10T14:08:32","modified_gmt":"2024-06-10T12:08:32","slug":"a-tweedie-trilogy-part-i-frequency-and-aggregration-invariance","status":"publish","type":"post","link":"https:\/\/lorentzen.ch\/index.php\/2024\/06\/03\/a-tweedie-trilogy-part-i-frequency-and-aggregration-invariance\/","title":{"rendered":"A Tweedie Trilogy  \u2014 Part I: Frequency and Aggregration Invariance"},"content":{"rendered":"\n<p><strong>TLDR:<\/strong> In this first part of the Tweedie Trilogy, we will take a look at what happens to a GLM if we aggregate the data by a group-by operation. A frequency model for insurance pricing will serve as an example.<\/p>\n\n\n\n<p>This trilogy celebrates the 40th birthday of <a href=\"https:\/\/en.wikipedia.org\/wiki\/Tweedie_distribution\">Tweedie distributions<\/a> in 2024 and highlights some of their very special properties.<\/p>\n\n\n<div class=\"wp-block-ub-table-of-contents-block ub_table-of-contents\" id=\"ub_table-of-contents-84cc0d8c-5abd-4337-9ad9-5a2f7ceeaf3c\" data-linktodivider=\"false\" data-showtext=\"show\" data-hidetext=\"hide\" data-scrolltype=\"auto\" data-enablesmoothscroll=\"false\" data-initiallyhideonmobile=\"false\" data-initiallyshow=\"true\"><div class=\"ub_table-of-contents-header-container\" style=\"\">\n\t\t\t<div class=\"ub_table-of-contents-header\" style=\"text-align: left; \">\n\t\t\t\t<div class=\"ub_table-of-contents-title\">Table of Contents<\/div>\n\t\t\t\t\n\t\t\t<\/div>\n\t\t<\/div><div class=\"ub_table-of-contents-extra-container\" style=\"\">\n\t\t\t<div class=\"ub_table-of-contents-container ub_table-of-contents-1-column \">\n\t\t\t\t<ul style=\"\"><li style=\"\"><a href=\"https:\/\/lorentzen.ch\/index.php\/2024\/06\/03\/a-tweedie-trilogy-part-i-frequency-and-aggregration-invariance\/#0-intro\" style=\"\">Intro<\/a><\/li><li style=\"\"><a 
href=\"https:\/\/lorentzen.ch\/index.php\/2024\/06\/03\/a-tweedie-trilogy-part-i-frequency-and-aggregration-invariance\/#1-mean-variance-relation\" style=\"\">Mean-Variance Relation<\/a><\/li><li style=\"\"><a href=\"https:\/\/lorentzen.ch\/index.php\/2024\/06\/03\/a-tweedie-trilogy-part-i-frequency-and-aggregration-invariance\/#2-insurance-pricing-models\" style=\"\">Insurance Pricing Models<\/a><\/li><li style=\"\"><a href=\"https:\/\/lorentzen.ch\/index.php\/2024\/06\/03\/a-tweedie-trilogy-part-i-frequency-and-aggregration-invariance\/#3-convolution-and-aggregation-invariance\" style=\"\">Convolution and Aggregation Invariance<\/a><\/li><li style=\"\"><a href=\"https:\/\/lorentzen.ch\/index.php\/2024\/06\/03\/a-tweedie-trilogy-part-i-frequency-and-aggregration-invariance\/#4-poisson-distribution\" style=\"\">Poisson Distribution<\/a><\/li><li style=\"\"><a href=\"https:\/\/lorentzen.ch\/index.php\/2024\/06\/03\/a-tweedie-trilogy-part-i-frequency-and-aggregration-invariance\/#5-frequency-example\" style=\"\">Frequency Example<\/a><\/li><li style=\"\"><a href=\"https:\/\/lorentzen.ch\/index.php\/2024\/06\/03\/a-tweedie-trilogy-part-i-frequency-and-aggregration-invariance\/#6-outlook\" style=\"\">Outlook<\/a><\/li><\/ul>\n\t\t\t<\/div>\n\t\t<\/div><\/div>\n\n\n<h2 class=\"wp-block-heading\" id=\"0-intro\">Intro<\/h2>\n\n\n\n<p>Tweedie distributions and <a href=\"https:\/\/en.wikipedia.org\/wiki\/Generalized_linear_model\">Generalised Linear Models<\/a> (GLM) have an intertwined relationship. While GLMs are, in my view, one of the best reference models for estimating expectations, Tweedie distributions lie at the heart of expectation estimation. 
In fact, basically all GLMs applied in practice use Tweedie distributions, with three notable exceptions: the binomial, the multinomial and the negative binomial distributions.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"1-mean-variance-relation\">Mean-Variance Relation<\/h2>\n\n\n\n<p><strong>&#8220;An index which distinguishes between some important exponential families&#8221;<\/strong> is the title of the original 1984 publication of Maurice Charles Kenneth Tweedie\u2014but note that Shaul K. Bar-Lev and Peter Enis published similar results around the same time; as their 1986 paper was received in November 1983, the distribution could just as well be named the Bar-Lev &amp; Enis distribution.<sup data-fn=\"a48ec13d-54dd-4cd2-b67c-f6e0826927a4\" class=\"fn\"><a href=\"#a48ec13d-54dd-4cd2-b67c-f6e0826927a4\" id=\"a48ec13d-54dd-4cd2-b67c-f6e0826927a4-link\">1<\/a><\/sup> This index is nowadays called the <strong>Tweedie power parameter <code><span class=\"katex-eq\" data-katex-display=\"false\">p<\/span><\/code><\/strong>. Recall that distributions of the exponential dispersion family always fulfil a mean-variance relationship. It is even a way to define them. For the Tweedie distribution, denoted <code><span class=\"katex-eq\" data-katex-display=\"false\">Tw_p(\\mu, \\phi)<\/span><\/code>, the relation reads<\/p>\n\n\n\n<div class=\"wp-block-katex-display-block katex-eq\" data-katex-display=\"true\"><pre>\\begin{align*}\n\\operatorname{E}[Y] &amp;= \\mu\n\\\\\n\\operatorname{Var}[Y] &amp;= \\phi \\mu^p\n\\end{align*}<\/pre><\/div>\n\n\n\n<p>with dispersion parameter <code><span class=\"katex-eq\" data-katex-display=\"false\">\\phi<\/span><\/code>. 
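As a quick numerical sanity check of this mean-variance relation (a sketch of mine using numpy, not part of the original post), one can simulate two Tweedie members and compare the empirical variance with \(\phi \mu^p\):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, n = 3.0, 1_000_000

# Poisson: p = 1 and phi = 1, so Var[Y] = phi * mu^p = mu
y_poisson = rng.poisson(mu, size=n)

# Gamma with shape k and scale mu/k: E[Y] = mu, Var[Y] = mu^2 / k,
# i.e. p = 2 with dispersion phi = 1/k
k = 4.0
y_gamma = rng.gamma(shape=k, scale=mu / k, size=n)

print(y_poisson.mean(), y_poisson.var())  # both approx. mu = 3
print(y_gamma.mean(), y_gamma.var())      # approx. mu = 3 and mu^2 / k = 2.25
```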
Some very common members are given in the following table.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td><code><strong><span class=\"katex-eq\" data-katex-display=\"false\">p<\/span><\/strong><\/code><\/td><td><strong>distribution<\/strong><\/td><td><strong>domain <code><span class=\"katex-eq\" data-katex-display=\"false\">Y<\/span><\/code><\/strong><\/td><td><strong>domain <code><span class=\"katex-eq\" data-katex-display=\"false\">\\mu<\/span><\/code> <\/strong><\/td><\/tr><tr><td>0<\/td><td>Normal \/ Gaussian<\/td><td><code><span class=\"katex-eq\" data-katex-display=\"false\">\\mathrm{R}<\/span><\/code><\/td><td><code><span class=\"katex-eq\" data-katex-display=\"false\">\\mathrm{R}<\/span><\/code><\/td><\/tr><tr><td>1<\/td><td><a href=\"https:\/\/en.wikipedia.org\/wiki\/Poisson_distribution\">Poisson<\/a><\/td><td><code><span class=\"katex-eq\" data-katex-display=\"false\">0, 1, 2, \\ldots<\/span><\/code><\/td><td><code><span class=\"katex-eq\" data-katex-display=\"false\">\\mathrm{R}_+<\/span><\/code><\/td><\/tr><tr><td><code><span class=\"katex-eq\" data-katex-display=\"false\">(1,2)<\/span><\/code><\/td><td><a href=\"https:\/\/en.wikipedia.org\/wiki\/Compound_Poisson_distribution\">Compound Poisson-Gamma<\/a><\/td><td><code><span class=\"katex-eq\" data-katex-display=\"false\">\\mathrm{R}_+ \\cup \\{0\\}<\/span><\/code><\/td><td><code><span class=\"katex-eq\" data-katex-display=\"false\">\\mathrm{R}_+<\/span><\/code><\/td><\/tr><tr><td>2<\/td><td><a href=\"https:\/\/en.wikipedia.org\/wiki\/Gamma_distribution\">Gamma<\/a><\/td><td><code><span class=\"katex-eq\" data-katex-display=\"false\">\\mathrm{R}_+<\/span><\/code><\/td><td><code><span class=\"katex-eq\" data-katex-display=\"false\">\\mathrm{R}_+<\/span><\/code><\/td><\/tr><tr><td>3<\/td><td><a href=\"https:\/\/en.wikipedia.org\/wiki\/Inverse_Gaussian_distribution\">inverse Gaussian<\/a><\/td><td><code><span class=\"katex-eq\" 
data-katex-display=\"false\">\\mathrm{R}_+<\/span><\/code><\/td><td><code><span class=\"katex-eq\" data-katex-display=\"false\">\\mathrm{R}_+<\/span><\/code><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"2-insurance-pricing-models\">Insurance Pricing Models<\/h2>\n\n\n\n<p>In non-life insurance pricing, most claims happen somewhat randomly, typically the occurrence as well as the size. Take the theft of your bike or a water damage of your basement due to flooding as an example. Pricing actuaries usually want to predict the expected loss <code><span class=\"katex-eq\" data-katex-display=\"false\">E[Y|X]<\/span><\/code> given some features <code><span class=\"katex-eq\" data-katex-display=\"false\">X<\/span><\/code> of a policy. The set of features could contain the purchasing price of your bike or the proximity of your house to a river.<\/p>\n\n\n\n<p>Instead of directly modelling the expected loss per exposure <code><span class=\"katex-eq\" data-katex-display=\"false\">w<\/span><\/code>, e.g. the time duration of the insurance contract, the most used approach is the famous <strong>frequency-severity split<\/strong>:<\/p>\n\n\n\n<div class=\"wp-block-katex-display-block katex-eq\" data-katex-display=\"true\"><pre>\\begin{align*}\n\\operatorname{E}\\left[\\frac{Y}{w}\\right] = \\underbrace{\\operatorname{E}\\left[\\frac{N}{w}\\right]}_{frequency} \\cdot\n\\underbrace{\\operatorname{E}\\left[\\left. \\frac{Y}{n}\\right| N=n\\right]}_{severity}\n\\end{align*}<\/pre><\/div>\n\n\n\n<p>For simplicity, the conditioning on <code><span class=\"katex-eq\" data-katex-display=\"false\">X<\/span><\/code> is suppressed, it would occur in every expectation. The first part <code><span class=\"katex-eq\" data-katex-display=\"false\">\\operatorname{E}\\left[\\frac{N}{w}\\right]<\/span><\/code>is the (expected) <strong>frequency<\/strong>, i.e. the number of claims per exposure (time). 
The second term <code><span class=\"katex-eq\" data-katex-display=\"false\">\\operatorname{E}\\left[\\left.\\frac{Y}{N}\\right| N\\right]<\/span><\/code> is the (expected) <strong>severity<\/strong>, i.e. the average claim size (per claim) given a fixed number of claims. Here, we focus on the frequency part.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"3-convolution-and-aggregation-invariance\">Convolution and Aggregation Invariance<\/h2>\n\n\n\n<p>This property might seem very theoretical at first, but it may be one of the most important properties for the estimation of expectations <code><span class=\"katex-eq\" data-katex-display=\"false\">E[Y|X]<\/span><\/code> with GLMs. It is in fact a property valid for the whole exponential dispersion family: <strong>The weighted mean of i.i.d. random variables has <\/strong>(almost)<strong> the same distribution!<\/strong><\/p>\n\n\n\n<p>If<\/p>\n\n\n\n<div class=\"wp-block-katex-display-block katex-eq\" data-katex-display=\"true\"><pre>\\begin{align*}\nY_i &amp;\\overset{i.i.d.}{\\sim} \\mathrm{Tw}_p(\\mu, \\phi\/w_i) \\,,\n\\\\\nw_+ &amp;= \\sum_i w_i \\quad\\text{with } w_i &gt;0 \\,,\n\\end{align*}<\/pre><\/div>\n\n\n\n<p>then<\/p>\n\n\n\n<div class=\"wp-block-katex-display-block katex-eq\" data-katex-display=\"true\"><pre>\\begin{align*}\nY &amp;=\\sum_{i=1}^n \\frac{w_i Y_i}{w_+} \\sim \\mathrm{Tw}_p(\\mu, \\phi\/w_+) \\,.\n\\end{align*}<\/pre><\/div>\n\n\n\n<p>It is obvious that the mean of <code><span class=\"katex-eq\" data-katex-display=\"false\">Y<\/span><\/code> is again <code><span class=\"katex-eq\" data-katex-display=\"false\">\\mu<\/span><\/code>. But it is remarkable that it has the <strong>same distribution<\/strong> with the <strong>same power parameter<\/strong>; only the second argument, the dispersion parameter, differs. The dispersion parameter, however, cancels out in GLM estimation. In fact, we will show that two GLMs, one fit on the original data and one on the aggregated data, give identical results. 
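This invariance can be checked with a small simulation. The sketch below (mine, not from the post) uses scikit-learn's PoissonRegressor rather than the glum model fitted later in this post, and all data is simulated:

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(42)
n = 10_000
x = rng.integers(0, 3, size=n).astype(float)  # one feature with 3 levels
w = rng.uniform(0.1, 1.0, size=n)             # exposure weights
N = rng.poisson(np.exp(-2.0 + 0.3 * x) * w)   # simulated claim counts

# GLM on individual rows: response is the frequency N / w, weighted by w.
glm = PoissonRegressor(alpha=0, tol=1e-10, max_iter=1000).fit(
    x.reshape(-1, 1), N / w, sample_weight=w
)

# Aggregate rows with identical features: sum counts and exposures per level.
levels = np.unique(x)
N_agg = np.array([N[x == lv].sum() for lv in levels])
w_agg = np.array([w[x == lv].sum() for lv in levels])
glm_agg = PoissonRegressor(alpha=0, tol=1e-10, max_iter=1000).fit(
    levels.reshape(-1, 1), N_agg / w_agg, sample_weight=w_agg
)

# Both fits minimize the same weighted deviance (up to a constant),
# hence they agree up to solver tolerance.
print(glm.intercept_, glm_agg.intercept_)
print(glm.coef_, glm_agg.coef_)
```

Note that the aggregated fit sees only 3 rows instead of 10,000, yet recovers the same coefficients.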
Another way of saying this in statistical terms is that the (weighted) average is a sufficient statistic for the expectation within the exponential dispersion family.<\/p>\n\n\n\n<p>This is quite an essential property for data aggregation. It means that one can aggregate rows with identical features and still do an analysis (of the conditional expectation) without loss of information.<\/p>\n\n\n\n<p>The weighted average above can be written a bit more intuitively. For instance, a frequency <code><span class=\"katex-eq\" data-katex-display=\"false\">Y_i=\\frac{N_i}{w_i}<\/span><\/code> has weighted average <code><span class=\"katex-eq\" data-katex-display=\"false\">Y=\\frac{\\sum_i N_i}{\\sum_i w_i}<\/span><\/code>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"4-poisson-distribution\">Poisson Distribution<\/h2>\n\n\n\n<p>When modelling counts, the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Poisson_distribution\">Poisson distribution<\/a> is by far the easiest distribution one can think of. It only has a single parameter, is a member of the Tweedie family, and fulfils the mean-variance relation<\/p>\n\n\n\n<div class=\"wp-block-katex-display-block katex-eq\" data-katex-display=\"true\"><pre>\\begin{equation*}\n\\operatorname{E}[N] = \\mu = \\operatorname{Var}[N] \\,.\\end{equation*}<\/pre><\/div>\n\n\n\n<p>In particular, <code><span class=\"katex-eq\" data-katex-display=\"false\">p=1<\/span><\/code>. While the distribution is, strictly speaking, only for counts, i.e. 
<code><span class=\"katex-eq\" data-katex-display=\"false\">N<\/span><\/code> takes on non-negative integer values, Poisson regression also works for any non-negative response variable like <code><span class=\"katex-eq\" data-katex-display=\"false\">N\/w \\in \\mathrm{R}<\/span><\/code>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"5-frequency-example\">Frequency Example<\/h2>\n\n\n\n<p>For demonstration, we fit a Poisson GLM on the <a href=\"https:\/\/www.openml.org\/d\/41214\">french motor third-party liability claims dataset<\/a>, cf. the corresponding <a href=\"https:\/\/scikit-learn.org\/stable\/auto_examples\/linear_model\/plot_poisson_regression_non_normal_loss.html#sphx-glr-auto-examples-linear-model-plot-poisson-regression-non-normal-loss-py\">scikit-learn example<\/a> and the <a href=\"https:\/\/doi.org\/10.2139\/ssrn.3164764\">case study 1 of the Swiss Association of Actuaries<\/a> on the same dataset.<\/p>\n\n\n\n<div class=\"wp-block-codemirror-blocks-code-block code-block\"><pre class=\"CodeMirror\" data-setting=\"{&quot;showPanel&quot;:true,&quot;languageLabel&quot;:&quot;language&quot;,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;python&quot;,&quot;mime&quot;:&quot;text\/x-python&quot;,&quot;theme&quot;:&quot;material&quot;,&quot;lineNumbers&quot;:false,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:false,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Python&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;python&quot;}\">from glum import GeneralizedLinearRegressor\nimport pandas as pd\n\n# ... quite some code ... 
here we abbreviate.\ny_freq = df[&quot;ClaimNb&quot;] \/ df[&quot;Exposure&quot;]\nw_freq = df[&quot;Exposure&quot;]\nX = df[x_vars]\nglm_params = {\n    &quot;alpha&quot;: 0,\n    &quot;drop_first&quot;: True,\n    &quot;gradient_tol&quot;: 1e-8,\n}\nglm_freq = GeneralizedLinearRegressor(\n    family=&quot;poisson&quot;, **glm_params\n).fit(X, y_freq, sample_weight=w_freq)\nprint(\n  f&quot;Total predicted number of claims = &quot;\n  f&quot;{(w_freq * glm_freq.predict(X)).sum():_.2f}&quot;\n)\n# Total predicted number of claims = 26_444.00\n\n# Now aggregated\ndf_agg = df.groupby(x_vars, observed=True).sum().reset_index()\nprint(\n    f&quot;Aggregation reduced number of rows from {df.shape[0]:_} &quot;\n    f&quot;to {df_agg.shape[0]:_}.&quot;\n)\n# Aggregation reduced number of rows from 678_013 to 133_413.\ny_agg_freq = df_agg[&quot;ClaimNb&quot;] \/ df_agg[&quot;Exposure&quot;]\nw_agg_freq = df_agg[&quot;Exposure&quot;]\nX_agg = df_agg[x_vars]\nglm_agg_freq = GeneralizedLinearRegressor(\n    family=&quot;poisson&quot;, **glm_params\n).fit(X_agg, y_agg_freq, sample_weight=w_agg_freq)\nprint(\n    f&quot;Total predicted number of claims = &quot;\n    f&quot;{(w_agg_freq * glm_agg_freq.predict(X_agg)).sum():_.2f}&quot;\n)\n# Total predicted number of claims = 26_444.00<\/pre><\/div>\n\n\n\n<p>In fact, both models have the same intercept and the same coefficients; they really are identical models (up to numerical precision):<\/p>\n\n\n\n<div class=\"wp-block-codemirror-blocks-code-block code-block\"><pre class=\"CodeMirror\" 
data-setting=\"{&quot;showPanel&quot;:true,&quot;languageLabel&quot;:&quot;language&quot;,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;python&quot;,&quot;mime&quot;:&quot;text\/x-python&quot;,&quot;theme&quot;:&quot;material&quot;,&quot;lineNumbers&quot;:false,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:false,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Python&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;python&quot;}\">print(\n    f&quot;intercept freq{'':&lt;18}= {glm_freq.intercept_}\\n&quot;\n    f&quot;intercept freq aggregated model = {glm_agg_freq.intercept_}&quot;\n)\n# intercept freq                  = -3.7564376764216747\n# intercept freq aggregated model = -3.7564376764216747\n\nnp.max(np.abs(glm_freq.coef_ - glm_agg_freq.coef_)) &lt; 1e-13\n# True<\/pre><\/div>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"6-outlook\">Outlook<\/h2>\n\n\n\n<p>The full notebook can be found <a href=\"https:\/\/github.com\/lorentzenchr\/notebooks\/blob\/master\/blogposts\/2024-06-03%20frequency_freMTPL2.ipynb\">here<\/a>.<\/p>\n\n\n\n<p>In the next week, <a href=\"https:\/\/lorentzen.ch\/index.php\/2024\/06\/10\/a-tweedie-trilogy-part-ii-offsets\/\" data-type=\"link\" data-id=\"https:\/\/lorentzen.ch\/index.php\/2024\/06\/10\/a-tweedie-trilogy-part-ii-offsets\/\">part II<\/a> of this trilogy will follow. There, we will meet some more of its quite remarkable properties.<\/p>\n\n\n\n<p><strong>Further references:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Tweedie M.C.K. 1984. &#8220;An index which distinguishes between some important exponential families&#8221;. Statistics: Applications and New Directions. Proceedings of the Indian Statistical Institute Golden Jubilee International Conference, Indian Statistical Institute, Cal- cutta, pp. 579\u2013604.<\/li>\n\n\n\n<li>Bar-Lev, S.K., Enis, P. 
Reproducibility in the one-parameter exponential family. <em>Metrika<\/em> <strong>32<\/strong>, 391\u2013394 (1985). <a href=\"https:\/\/doi.org\/10.1007\/BF01897827\">https:\/\/doi.org\/10.1007\/BF01897827<\/a><\/li>\n\n\n\n<li>Bar-Lev, S.K., Enis, P. &#8220;Reproducibility and Natural Exponential Families with Power Variance Functions.&#8221; <em>Ann. Statist.<\/em> <strong>14<\/strong> (4), 1507\u20131522 (1986). <a href=\"https:\/\/doi.org\/10.1214\/aos\/1176350173\">https:\/\/doi.org\/10.1214\/aos\/1176350173<\/a><\/li>\n<\/ul>\n\n\n<ol class=\"wp-block-footnotes\"><li id=\"a48ec13d-54dd-4cd2-b67c-f6e0826927a4\">Many thanks to Prof. Mario W\u00fcthrich for pointing out the references of Bar-Lev and Enis. <a href=\"#a48ec13d-54dd-4cd2-b67c-f6e0826927a4-link\" aria-label=\"Jump to footnote reference 1\">\u21a9\ufe0e<\/a><\/li><\/ol>","protected":false},"excerpt":{"rendered":"<p>This trilogy celebrates the 40th birthday of Tweedie distributions in 2024 and highlights some of their very special properties.<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":"[{\"content\":\"Many thanks to Prof. 
Mario W\u00fcthrich for pointing out the references of Bar-Lev and Enis.\",\"id\":\"a48ec13d-54dd-4cd2-b67c-f6e0826927a4\"}]"},"categories":[16,9],"tags":[6,22],"class_list":["post-357","post","type-post","status-publish","format-standard","hentry","category-machine-learning","category-statistics","tag-python","tag-tweedie-trilogy"],"featured_image_src":null,"author_info":{"display_name":"Christian Lorentzen","author_link":"https:\/\/lorentzen.ch\/index.php\/author\/christian\/"},"_links":{"self":[{"href":"https:\/\/lorentzen.ch\/index.php\/wp-json\/wp\/v2\/posts\/357","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lorentzen.ch\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lorentzen.ch\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lorentzen.ch\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/lorentzen.ch\/index.php\/wp-json\/wp\/v2\/comments?post=357"}],"version-history":[{"count":94,"href":"https:\/\/lorentzen.ch\/index.php\/wp-json\/wp\/v2\/posts\/357\/revisions"}],"predecessor-version":[{"id":1725,"href":"https:\/\/lorentzen.ch\/index.php\/wp-json\/wp\/v2\/posts\/357\/revisions\/1725"}],"wp:attachment":[{"href":"https:\/\/lorentzen.ch\/index.php\/wp-json\/wp\/v2\/media?parent=357"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lorentzen.ch\/index.php\/wp-json\/wp\/v2\/categories?post=357"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lorentzen.ch\/index.php\/wp-json\/wp\/v2\/tags?post=357"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}