{"id":15697,"date":"2021-03-01T14:00:00","date_gmt":"2021-03-01T22:00:00","guid":{"rendered":"https:\/\/devwww.3cloudsolutions.com\/post\/how-to-improve-power-bi-performance-part-iii-3\/"},"modified":"2024-01-05T14:18:53","modified_gmt":"2024-01-05T22:18:53","slug":"how-to-improve-power-bi-performance-part-iii","status":"publish","type":"post","link":"https:\/\/3cloudsolutions.com\/resources\/how-to-improve-power-bi-performance-part-iii\/","title":{"rendered":"How to Improve Power BI Performance &#8211; Part III"},"content":{"rendered":"<p>At the end of my <a href=\"https:\/\/3cloudsolutions.com\/resources\/how-to-improve-power-bi-performance-part-ii\/\">last Power BI performance blog post<\/a>, I mentioned that I wanted to get into the details of optimizations that can be done on your DAX code to ensure that you are getting the best performance that you can from Power BI. In this blog post, we hope to help you understand how the VertiPaq engine works a bit below surface level, which can help you optimize your DAX code. While doing this, we are going to emphasize a DAX performance maxim: Filter columns, not tables. Before we go there, let&#8217;s review how Power BI processes your DAX code.<strong><br \/>\n<\/strong><\/p>\n<p><!--more--><\/p>\n<p><img decoding=\"async\" style=\"width: 1000px;\" src=\"https:\/\/3cloudsolutions.com\/wp-content\/uploads\/2022\/11\/Blog_DAX.png\" alt=\"Blog_DAX\" width=\"1000\" \/><\/p>\n<p>In <a href=\"https:\/\/3cloudsolutions.com\/resources\/how-to-improve-power-bi-performance-part-i\/\"><span style=\"color: #007cba;\">Part I of my series<\/span><\/a>, I talked about how the VertiPaq engine in SSAS Tabular and Power BI use highly-efficient compression algorithms to greatly reduce the amount of storage space needed to store your data. You want as much of it as possible stored in RAM and\/or the CPU caches. That\u2019s great that we can compress the data down so much, but what happens when we need to read all that data? 
Do we have to re-expand it all again to read it? This doesn&#8217;t seem like it would be a good idea. If Power BI worked so hard to compress all the data, are we just reversing that when we have to read it?<\/p>\n<h2><span style=\"color: #007cba;\">How Does Power BI Read This Compressed Data?<br \/>\n<\/span><\/h2>\n<p>To better understand what&#8217;s going on, it&#8217;s important to know that your DAX code is being processed by two different engines, the <span style=\"font-weight: bold;\">formula engine<\/span> and the <span style=\"font-weight: bold;\">storage engine<\/span>. The formula engine processes your DAX code, understands it, and makes a plan for how to retrieve the data needed to execute your code. It then uses this plan to issue a series of queries to the storage engine to retrieve the data, evaluate the results, and return the information needed to satisfy your DAX code.<\/p>\n<p>The formula engine has to be very sophisticated to both interpret your DAX code and put together a plan for what data to retrieve. As a result, it is single-threaded. No matter how many CPUs or cores you have, the formula engine will only run one step at a time. It also only understands data that has been decompressed or materialized, so when it needs to read data, you pay a performance price to have that data expanded.<\/p>\n<p>The storage engine, on the other hand, is built for speed and can run with multiple threads. This focus on speed means that it&#8217;s not as sophisticated as the formula engine, but it can process compressed data without having to expand it first. The storage engine takes requests for data from the formula engine and retrieves it from the compressed data.<\/p>\n<p>As a general performance rule then, the more work you can push to the fast, multi-threaded storage engine, the better. Bless its heart though (as they say in the South), it just can&#8217;t handle complex operations. 
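<\/p>\n<p>To make this concrete, here is a rough sketch of the difference (the table and column names here are hypothetical, not from the model used later in this post). A simple aggregation can be resolved almost entirely by the storage engine, while row-by-row logic inside an iterator typically pushes more of the work onto the single-threaded formula engine:<\/p>\n<pre>-- Resolved almost entirely by the fast, multi-threaded storage engine:\nTotal Sales := SUM ( Sales[Amount] )\n\n-- The IF is evaluated row by row, so more of this work tends to land\n-- on the single-threaded formula engine:\nAdjusted Sales := SUMX ( Sales, IF ( Sales[Amount] &gt; 0, Sales[Amount], 0 ) )<\/pre>\n<p>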
Therefore, optimizing DAX code can sometimes mean balancing what has to be retrieved and materialized for consumption by the formula engine versus what can be processed by the storage engine.<\/p>\n<h2><span style=\"color: #007cba;\">Filter Columns, Not Tables<br \/>\n<\/span><\/h2>\n<p>I have a simple data model that contains weather data collected from weather stations across the United States from 1938 to the present. This weather data is a daily measurement of the minimum, maximum, and average temperature over a 24-hour period. The data comes from over 17,000 weather stations, so the fact table is over 430 million rows. I have the model set up with a fact table, a date dimension table, and a station dimension table that contains information about the station\u2019s location, elevation, city, and state. The model is a very simple star schema as shown below:<\/p>\n<p><img decoding=\"async\" style=\"margin-left: auto; margin-right: auto; display: block; width: 1000px;\" src=\"https:\/\/3cloudsolutions.com\/wp-content\/uploads\/2021\/09\/1-2.png\" alt=\"1-2\" width=\"1000\" \/><\/p>\n<p>So, first I want to know how many temperature readings there are in my entire collection grouped by state. The DAX to do this is straightforward, so I will test it in DAX Studio and see what the performance is. DAX Studio is an open-source DAX editor and performance tool that I use almost daily. You can find the latest version <span style=\"color: #007cba;\"><a style=\"text-decoration: none; color: #007cba;\" href=\"https:\/\/daxstudio.org\/\" target=\"_blank\" rel=\"noopener\">here<\/a><\/span>.<\/p>\n<p>DAX Studio lets me execute queries and capture the query plans produced, as well as execution statistics. The details of how to do this are outside the scope of this blog post, but if you look at the screenshot below, you can see that I defined a measure, then called it from a DAX statement. 
DAX Studio breaks down the query into formula engine and storage engine queries and provides details on their execution.<\/p>\n<p><img decoding=\"async\" style=\"margin-left: auto; margin-right: auto; display: block; width: 1000px;\" src=\"https:\/\/3cloudsolutions.com\/wp-content\/uploads\/2021\/09\/2-1.png\" alt=\"2-1\" width=\"1000\" \/><\/p>\n<p>The DAX I used to get my temperature readings by state was the following:<\/p>\n<p><span style=\"color: #035aca;\">DEFINE<br \/>\n<\/span><span style=\"color: #035aca;\">\u00a0\u00a0 MEASURE <\/span><span style=\"color: #333333;\">Fact[PositiveValues]<\/span> = <span style=\"color: #035aca;\">CALCULATE<\/span><span style=\"color: gray;\">(<\/span><span style=\"color: #035aca;\">DISTINCTCOUNT<\/span><span style=\"color: gray;\">(<\/span><span style=\"color: #333333;\">&#8216;Fact'[value]<\/span><span style=\"color: gray;\">))<br \/>\n<\/span><span style=\"color: #035aca;\">EVALUATE<br \/>\n<\/span><span style=\"color: #035aca;\">CALCULATETABLE<\/span><span style=\"color: gray;\">(<\/span><span style=\"color: #035aca;\">SUMMARIZECOLUMNS<\/span><span style=\"color: gray;\">(<\/span><span style=\"color: #333333;\">Stations[State]<\/span>, <span style=\"color: #d93124;\">&#8220;# of Values&#8221;<\/span>, <span style=\"color: #333333;\">[PositiveValues]<\/span><span style=\"color: gray;\">))<\/span><\/p>\n<p>If you look at the server timings data captured by DAX Studio, even without knowing much about the internals of the storage queries, you can tell that this was fast.<\/p>\n<p><img decoding=\"async\" style=\"margin-left: auto; margin-right: auto; display: block; width: 1000px;\" src=\"https:\/\/3cloudsolutions.com\/wp-content\/uploads\/2021\/09\/3-3.png\" alt=\"3-3\" width=\"1000\" \/><br \/>\nThe total amount of time it took to run was 293 milliseconds, and it took just one storage engine query to return 52 rows (50 states plus DC &amp; US Minor Outlying Islands) of data. 
The data materialized was somewhere around 1 KB. That&#8217;s smokin&#8217; fast!<\/p>\n<p>Now, say I want to only count the number of daily minimum values that are greater than zero. Let\u2019s throw some DAX together and take a look at how that executes. Here is the DAX that I used:<\/p>\n<p><span style=\"color: #035aca;\">DEFINE<br \/>\n<\/span><span style=\"color: #035aca;\">\u00a0\u00a0 MEASURE <\/span><span style=\"color: #333333;\">Fact[PositiveValues]<\/span> = <span style=\"color: #035aca;\">CALCULATE<\/span><span style=\"color: gray;\">(<\/span><span style=\"color: #035aca;\">DISTINCTCOUNT<\/span><span style=\"color: gray;\">(<\/span><span style=\"color: #333333;\">&#8216;Fact'[value]<\/span><span style=\"color: gray;\">)<\/span>, <span style=\"color: #035aca;\">FILTER<\/span><span style=\"color: gray;\">(<\/span><span style=\"color: #333333;\">&#8216;Fact&#8217;<\/span>, <span style=\"color: #333333;\">&#8216;Fact'[Value]<\/span> &gt; <span style=\"color: #ee7f18;\">0 <\/span><span style=\"color: #333333;\">&amp;&amp; &#8216;Fact'[Reading]<\/span> = <span style=\"color: #d93124;\">&#8220;TMIN&#8221;<\/span><span style=\"color: gray;\">))<br \/>\n<\/span><span style=\"color: #035aca;\">EVALUATE<br \/>\n<\/span><span style=\"color: #035aca;\">CALCULATETABLE<\/span><span style=\"color: gray;\">(<\/span><span style=\"color: #035aca;\">SUMMARIZECOLUMNS<\/span><span style=\"color: gray;\">(<\/span><span style=\"color: #333333;\">Stations[State]<\/span>, <span style=\"color: #d93124;\">&#8220;Positive Values&#8221;<\/span>, <span style=\"color: #333333;\">[PositiveValues]<\/span><span style=\"color: gray;\">))<\/span><\/p>\n<p>Basically, my measure now filters the Fact table so that I&#8217;m only counting values that are greater than zero where Reading = &#8220;TMIN&#8221;.<\/p>\n<p><img decoding=\"async\" style=\"margin-left: auto; margin-right: auto; display: block; width: 1000px;\" src=\"https:\/\/3cloudsolutions.com\/wp-content\/uploads\/2021\/09\/4-4.png\" 
alt=\"4-4\" width=\"1000\" \/><br \/>\nWhoa! Suddenly our super fast calculation ground to a crawl. The total amount of time taken was a whopping 24 seconds, and it took 53 storage engine queries to complete, instead of just one. Worse yet, the first query returned over 380,000 rows and materialized over 3MB of data!<\/p>\n<h2><span style=\"color: #007cba;\">What Happened?<br \/>\n<\/span><\/h2>\n<p>Filtering using a table, that\u2019s what happened! If you review the code again, the filter that we pass into the CALCULATE function is Filter(&#8216;Fact&#8217;, &#8216;Fact'[Value] &gt; 0 &amp;&amp; &#8216;Fact'[Reading] = &#8220;TMIN&#8221;). The return value for the FILTER function is a TABLE, so we are filtering the results of the measure specified in CALCULATE by a table. To do this, the formula engine first needs to retrieve the table from storage. I used to think that table filters were bad because the entire table would be returned by the storage engine, but that&#8217;s not technically true.<\/p>\n<p>In order to see this in DAX Studio, we need to look at the query that was sent to the storage engine. To do that, we need to look at a different part of the data returned in DAX Studio highlighted below.<\/p>\n<p><img decoding=\"async\" style=\"margin-left: auto; margin-right: auto; display: block; width: 1000px;\" src=\"https:\/\/3cloudsolutions.com\/wp-content\/uploads\/2022\/11\/5-4.png\" alt=\"5-4\" width=\"1000\" \/><\/p>\n<p>Under the &#8220;Query&#8221; column, we see the request sent to the storage engine to fulfill. The pane on the far right is a detailed view of that request. 
While it looks like standard T-SQL, the storage engine actually understands a simplified, SQL-like language called xmSQL.<\/p>\n<p>If you look at the xmSQL that is sent to the storage engine, you can see that the engine has optimized things so that it only brings back the two columns that I need (Fact[Value] and Stations[State]), and it has filtered the table down to minimum temperatures above zero degrees.<\/p>\n<p><img decoding=\"async\" style=\"margin-left: auto; margin-right: auto; display: block; width: 500px;\" src=\"https:\/\/3cloudsolutions.com\/wp-content\/uploads\/2021\/09\/6-2.png\" alt=\"6-2\" width=\"500\" \/><\/p>\n<p>The rest of the storage engine queries are quicker, but there are 52 of them. With the materialized table of data in hand, the formula engine then asks the storage engine for a distinct count against each set of rows that matches the filter conditions, state by state! The following is a snapshot of the xmSQL generated to count how many values there are for State = South Dakota, Reading = TMIN and Value &gt; 0. 
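<\/p>\n<p>Paraphrased (this is only an approximation of the pattern shown in the screenshot, not the verbatim engine output, and the join column name is a guess), each of these per-state requests looks something like this:<\/p>\n<pre>SELECT 'Stations'[State], DCOUNT ( 'Fact'[Value] )\nFROM 'Fact'\n    LEFT OUTER JOIN 'Stations' ON 'Fact'[StationId] = 'Stations'[StationId]\nWHERE 'Stations'[State] = 'South Dakota'\n    AND 'Fact'[Reading] = 'TMIN'\n    AND 'Fact'[Value] &gt; 0;<\/pre>\n<p>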
One query is run for each state.<\/p>\n<p><img decoding=\"async\" style=\"margin-left: auto; margin-right: auto; display: block; width: 500px;\" src=\"https:\/\/3cloudsolutions.com\/wp-content\/uploads\/2021\/09\/7-2.png\" alt=\"7-2\" width=\"500\" \/><\/p>\n<h2><span style=\"color: #007cba;\">There Must Be a Better Way<br \/>\n<\/span><\/h2>\n<p>That seems like a rather long-winded way to get these results. Surely there is a way to do this in fewer queries, right?<\/p>\n<p>Think back to the performance recommendation mentioned earlier: <span style=\"font-weight: bold;\">filter columns, not tables.<\/span> Look what happens when I identify specific columns as the filters passed to CALCULATE as shown:<\/p>\n<p><span style=\"color: #035aca;\">DEFINE<br \/>\n<\/span><span style=\"color: #035aca;\">\u00a0\u00a0 MEASURE <\/span><span style=\"color: #333333;\">Fact[PositiveValues]<\/span> = <span style=\"color: #035aca;\">CALCULATE<\/span><span style=\"color: gray;\">(<\/span><span style=\"color: #035aca;\">DISTINCTCOUNT<\/span><span style=\"color: gray;\">(<\/span><span style=\"color: #333333;\">&#8216;Fact'[value]<\/span><span style=\"color: gray;\">)<\/span>, <span style=\"color: #035aca;\">FILTER<\/span><span style=\"color: gray;\">(<\/span><span style=\"color: #035aca;\">ALL<\/span><span style=\"color: gray;\">(<\/span><span style=\"color: #333333;\">&#8216;Fact'[Value]<\/span><span style=\"color: gray;\">)<\/span>, <span style=\"color: #333333;\">&#8216;Fact'[Value]<\/span> &gt; <span style=\"color: #ee7f18;\">0<\/span><span style=\"color: gray;\">)<\/span>, <span style=\"color: #035aca;\">FILTER<\/span><span style=\"color: gray;\">(<\/span><span style=\"color: #035aca;\">ALL<\/span><span style=\"color: gray;\">(<\/span><span style=\"color: #333333;\">&#8216;Fact'[Reading]<\/span><span style=\"color: gray;\">)<\/span>, <span style=\"color: #333333;\">&#8216;Fact'[Reading]<\/span> = <span style=\"color: #d93124;\">&#8220;TMIN&#8221;<\/span><span style=\"color: gray;\">))<br \/>\n<\/span><span 
style=\"color: #035aca;\">EVALUATE<br \/>\n<\/span><span style=\"color: #035aca;\">\u00a0\u00a0 CALCULATETABLE<\/span><span style=\"color: gray;\">(<\/span><span style=\"color: #035aca;\">SUMMARIZECOLUMNS<\/span><span style=\"color: gray;\">(<\/span><span style=\"color: #333333;\">Stations[State]<\/span>, <span style=\"color: #d93124;\">&#8220;Positive Values&#8221;<\/span>, <span style=\"color: #333333;\">[PositiveValues]<\/span><span style=\"color: gray;\">))<\/p>\n<p><\/span><\/p>\n<p><span style=\"color: gray;\"><img decoding=\"async\" style=\"margin-left: auto; margin-right: auto; display: block; width: 1000px;\" src=\"https:\/\/3cloudsolutions.com\/wp-content\/uploads\/2021\/09\/8-1.png\" alt=\"8-1\" width=\"1000\" \/><\/span><\/p>\n<p>Blazing fast speeds are back! It only took one storage engine query, and that query returned the familiar 52 rows and one 1KB of data. When we pass columns as filters, the formula engine can produce a plan that is simple enough for the storage engine to process all in one query, and it doesn\u2019t have to pass back a table, it just passes back the results. Here is the xmSQL for this query.<\/p>\n<p><img decoding=\"async\" style=\"margin-left: auto; margin-right: auto; display: block; width: 500px;\" src=\"https:\/\/3cloudsolutions.com\/wp-content\/uploads\/2021\/09\/9-2.png\" alt=\"9-2\" width=\"500\" \/><\/p>\n<p>On a side note, there is a shorter way to write the DAX instead of explicitly calling FILTER. 
Here&#8217;s what that looks like:<\/p>\n<p><span style=\"color: #035aca;\">DEFINE<br \/>\n<\/span><span style=\"color: #035aca;\">\u00a0\u00a0 MEASURE <\/span><span style=\"color: #333333;\">Fact[PositiveValues]<\/span> = <span style=\"color: #035aca;\">CALCULATE<\/span><span style=\"color: gray;\">(<\/span><span style=\"color: #035aca;\">DISTINCTCOUNT<\/span><span style=\"color: gray;\">(<\/span><span style=\"color: #333333;\">&#8216;Fact'[value]<\/span><span style=\"color: gray;\">)<\/span>, <span style=\"color: #333333;\">&#8216;Fact'[Value]<\/span> &gt; <span style=\"color: #ee7f18;\">0<\/span>, <span style=\"color: #333333;\">&#8216;Fact'[Reading]<\/span> = <span style=\"color: #d93124;\">&#8220;TMIN&#8221;<\/span><span style=\"color: gray;\">)<br \/>\n<\/span><span style=\"color: #035aca;\">EVALUATE<br \/>\n<\/span><span style=\"color: #035aca;\">CALCULATETABLE<\/span><span style=\"color: gray;\">(<\/span><span style=\"color: #035aca;\">SUMMARIZECOLUMNS<\/span><span style=\"color: gray;\">(<\/span><span style=\"color: #333333;\">Stations[State]<\/span>, <span style=\"color: #d93124;\">&#8220;Positive Values&#8221;<\/span>, <span style=\"color: #333333;\">[PositiveValues]<\/span><span style=\"color: gray;\">))<\/span><\/p>\n<p>I know there was a lot of technical content in this post, but I wanted to show some of the details of DAX code optimization.<\/p>\n<h2><span style=\"color: #007cba;\">Let&#8217;s Recap<br \/>\n<\/span><\/h2>\n<ol>\n<li style=\"vertical-align: middle; margin-top: 0in; margin-right: 0in; margin-bottom: 0in;\">Filter columns, not tables<\/li>\n<li style=\"vertical-align: middle; margin-top: 0in; margin-right: 0in; margin-bottom: 0in;\">The engine used to execute DAX queries is composed of two engines: the formula engine and the storage engine &#8211; understanding how the two operate together can help you write better DAX code<\/li>\n<li style=\"vertical-align: middle; margin-top: 0in; margin-right: 0in; margin-bottom: 0in;\">DAX Studio 
captures a lot of detailed information that can help you troubleshoot why your DAX code is running slowly<\/li>\n<li style=\"vertical-align: middle; margin-top: 0in; margin-right: 0in; margin-bottom: 0in;\">This DAX stuff can get pretty complicated as the model gets larger, and sometimes small changes can make a big difference<\/li>\n<li style=\"vertical-align: middle; margin-top: 0in; margin-right: 0in; margin-bottom: 0in;\">Filter columns, not tables \u2013 it&#8217;s so important that I&#8217;ve listed it twice<\/li>\n<\/ol>\n<p>As a consultant on 3Cloud&#8217;s <a style=\"text-decoration: none;\" href=\"\/managed-services\" target=\"_blank\" rel=\"noopener\"><span style=\"color: #007cba;\">Managed Services<\/span><\/a> team, I&#8217;m passionate about optimizing and fine-tuning Power BI. When my customers can focus their attention on their business objectives and not on Power BI, I know that I&#8217;m helping make them more successful with their implementation of Power BI. If you&#8217;re interested in hearing more about how our Managed Services team can work with you to optimize your Power BI models and DAX queries, and bring governance and order to your Power BI environment, please <span style=\"color: #007cba;\"><a style=\"color: #007cba;\" href=\"\/get-started\/\" target=\"_blank\" rel=\"noopener\">reach out<\/a><\/span> and let us know how we can help.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In this blog post, we hope to help you understand how the VertiPaq engine works a bit below surface level, which can help you optimize your DAX code. While doing this, we are going to emphasize a DAX performance maxim: Filter columns, not tables. 
Before we go there, let&#8217;s review how Power BI processes your DAX code.<\/p>\n","protected":false},"author":21,"featured_media":12698,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"content-type":"","footnotes":""},"categories":[394,260],"tags":[305,273],"class_list":["post-15697","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-business-intelligence","category-data-ai","tag-modern-bi","tag-power-bi","topics-blog"],"acf":[],"_links":{"self":[{"href":"https:\/\/3cloudsolutions.com\/wp-json\/wp\/v2\/posts\/15697","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/3cloudsolutions.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/3cloudsolutions.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/3cloudsolutions.com\/wp-json\/wp\/v2\/users\/21"}],"replies":[{"embeddable":true,"href":"https:\/\/3cloudsolutions.com\/wp-json\/wp\/v2\/comments?post=15697"}],"version-history":[{"count":0,"href":"https:\/\/3cloudsolutions.com\/wp-json\/wp\/v2\/posts\/15697\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/3cloudsolutions.com\/wp-json\/wp\/v2\/media\/12698"}],"wp:attachment":[{"href":"https:\/\/3cloudsolutions.com\/wp-json\/wp\/v2\/media?parent=15697"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/3cloudsolutions.com\/wp-json\/wp\/v2\/categories?post=15697"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/3cloudsolutions.com\/wp-json\/wp\/v2\/tags?post=15697"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}