Testing Performance Of Power BI Reports In The Browser, Part 2

In the first part of this series I talked about why you might want to test the performance of a Power BI report in the browser, as opposed to using Performance Analyzer in Power BI Desktop, and the different factors you have to take into account when doing this kind of testing. In this post I’ll describe how you can find out how long it takes for your report to run in a browser.

You can think of Power BI as a low code/no code tool for creating web applications for reporting and analytics, and as such you can use the same tools that web developers use for measuring web site performance to measure Power BI report render time. Now I am not a web developer (I guess most of you aren’t either) so I’ve had a lot to learn about this subject, and I’m all too aware of how much more there is still to learn, but I wanted to write up what I have learned so far because I think it’s so useful. Please excuse any mistakes, misconceptions or over-simplifications!

I’m going to use Google Chrome as my browser in this post. Yes, I know, I work for Microsoft but this is the new Microsoft and I have used Chrome as my browser of choice for the last few years; anyway, Edge is moving to be Chromium-based so everything I show in this post works in exactly the same way in the current Edge beta I have installed on my PC. As I said last time, it’s also important to test performance in whatever browser your end users use to understand what their experience is, but any optimisations you make in your report that work for Chrome or Edge beta are likely to work for other browsers too.

Here’s the methodology that I have come up with for measuring report performance in the browser:

Choose the page you want to test

You can only test the performance of one page in a report at a time, so choose the page that is used the most, the page that is used by the most important users, or the page that takes the longest time to render.

Add a blank page to your report

Before you publish your report, in Power BI Desktop add a blank page with no visuals on it. It doesn’t need to be the page that is opened when the report opens and you will be able to delete it later. Why do this? When you’re testing how long it takes for your report page to render, you’re probably doing so because you want to improve performance. Some things in the report page that influence performance are within your power to change, such as the design of the dataset, the DAX in the measures, and the number and type of visuals on a page; some things will always happen when a report runs and you have to accept that overhead. Testing how long a blank page takes to render will give you an idea of how long this latter category of “things that always happen” takes, and you can then subtract this time from the time your chosen report page takes to run.

Get the embed url for your report page and the blank page

After you have published your report, get the secure embedding urls for your chosen report page and the blank page. You can find out how to do this here. Getting the embed url for the report takes a few clicks, but you will need to do a bit of copy-and-pasting to get the embed urls to open the report on the blank page and your chosen report page.

Why do this? Once again, it’s all down to focusing on what you have control over. When a user views a report they normally do so in the Power BI Service portal and the time this takes consists of the time taken to render the portal plus the time taken to render the report. Since you don’t have any control over how long the portal takes to render you might as well ignore this in your testing, so you can get a clearer picture of the amount of time taken to run the report. When you paste the embed url for the report into a browser tab the portal will not be displayed and you’ll only see the report.
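For illustration, a secure embed url looks something like this (the GUIDs below are placeholders, and the exact parameters may change over time):

https://app.powerbi.com/reportEmbed?reportId=00000000-0000-0000-0000-000000000000&autoAuth=true&ctid=11111111-1111-1111-1111-111111111111

To open the report on a particular page you can append a pageName parameter whose value is the page’s internal name, which you can copy from the end of the report’s url when you view that page in the Power BI portal:

https://app.powerbi.com/reportEmbed?reportId=00000000-0000-0000-0000-000000000000&autoAuth=true&ctid=11111111-1111-1111-1111-111111111111&pageName=ReportSection2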

Create a new user profile in Chrome

This is something I do anyway for each customer I work with, because it makes handling multiple Power BI logins much easier, but for performance testing you should create and use a new user profile in Chrome. This allows you to delete any Chrome extensions just for this profile (apparently they can affect page render times) and delete the cache if you want, without affecting your main profile. You may also need to test from an incognito window if there are some extensions you can’t delete.

Run your report pages to warm the SSAS/Power BI cache

Copy the embed urls for the blank page and the report page into new tabs and let them load before you do anything else, to warm the SSAS/Power BI cache. I discussed testing performance on a cold or warm SSAS/Power BI cache in my last post, and as I said then, while I believe it is extremely important to understand the impact of SSAS/Power BI caching on report rendering performance, it’s only really feasible to do the kind of DAX performance testing that requires this in Power BI Desktop. For testing report page rendering performance in the browser it’s easier to assume that you are running on a warm SSAS/Power BI cache.

Create an audit

The easiest way to measure report load time is to open Chrome DevTools (you can do this by pressing Ctrl+Shift+J in Chrome) while on the browser tab with the report page you want to test and use the functionality on the Audits tab. To get an understanding of what the Audits tab does I recommend you read this page and watch the accompanying video, paying particular attention to the steps in the section on how to establish a baseline. To keep things simple I suggest you choose the Desktop option in the Device section (choosing Mobile does not seem to render the Phone layout of your report if you have created one); check the Performance box in the Audits section; and then apply one of the throttling options if you want:

[Image: the Audits tab settings in Chrome DevTools]

When you click the Run audits button various tests will be run, and after a short wait you’ll see the results:

[Image: the audit results]

You can read more about what these results mean here but there is one number that a Power BI developer should be particularly interested in: Time to Interactive. As far as I can see this is pretty much the amount of time it takes for the report page to render (because, of course, defining what “render” means is complicated). Subtract the amount of time it takes to render the blank page from the amount of time it takes to render your chosen report page and you will know how much of the report page render time you can actually influence with any changes you make. I strongly recommend you run tests both with and without network throttling applied, because applying throttling really highlights any inefficiencies in your report design.

Use the Network tab

If you want to peek behind the scenes and see more detail about what happens when your report page renders, as well as have more control over things like network throttling, you can use the Network tab in Chrome DevTools. You can find out how to use it here (with more details here) but all you really need to do is:

  • Open up a new tab with the embed url of the report page you want to test
  • Open Chrome DevTools and go to the Network tab
  • Make sure the Disable Cache box is checked and set up any network throttling options you want to use for your tests. The ability to add custom network configurations is particularly useful.
  • Press and hold the refresh button on the browser toolbar and a dropdown menu will appear; click on Empty Cache and Hard Reload to refresh the page and simulate a user’s first visit to the report. This will automatically start data capture in the Network tab.
  • When your report page has finished rendering, click on the red stop button in the top left-hand corner of the Network tab. You need to be quick but not that quick, for reasons I’ll explain in the next bullet.
  • Down at the bottom of the Network tab you’ll see some useful metrics, including the number of requests made, the amount of data transferred and, most importantly, the finish time. This last metric is not the time elapsed from the start of the refresh until the point you clicked the stop button; it’s the time elapsed from the start of the refresh until the last request made before you clicked stop, so this is another good way of finding the render time of your report. You’ll see that requests do get made after the report has finished rendering, which is why you need to be fairly quick clicking the stop button, but this is still a lot more accurate than using a stopwatch to time report page rendering.
  • The majority of the pane will be taken up with a list of the requests made and a waterfall chart that visualises how long each request took.

Don’t get too clever!

You’ll see a lot of really interesting information on both the Audits tab and the Network tab and I’m sure most of you will be able to interpret what you see there and use it to tune your reports. I hesitated about mentioning the waterfall chart of the Network tab in this post but decided to do so because I have found things there that have helped me while performance tuning that I would not have found any other way. However, remember that everything you see on the waterfall chart in the Network tab is internal, undocumented and subject to change at any time. As a result please, please, please do not send me messages like “I’ve seen x happening on the Network tab, can you tell me what it means?” or “I’m a web developer and I can’t believe the stupid stuff I can see Microsoft doing here, why are you doing this?” because I’m not able to provide any answers.

As always, if you have any feedback or suggestions please leave a comment. In the next part of this series I’ll show you a worked example of how you can use all this to solve a performance problem that does not manifest in Power BI Desktop.

Testing Performance Of Power BI Reports In The Browser, Part 1

If your Power BI report is slow, most of the time the reason is something to do with the way you have modelled your data or the way you have written the DAX in your measures. There are plenty of resources out there to help you troubleshoot this kind of problem (Marco’s video “My Power BI report is slow: what should I do?” is a great place to start) but it’s important to understand that there are several other possible reasons why a Power BI report is slow, and these types of performance problem may only manifest themselves after the report has been published when a user is viewing it in the browser. As a result, it’s important to be able to test how long a report takes to render in a browser. If you can’t do this, how can you know if any changes you make help or hurt performance?

It turns out that testing performance of a report in the browser is not as straightforward as it seems. In this post I’m going to describe some of the factors you have to take into account when doing this type of testing; in the next post I’ll go into more detail about how you actually measure report rendering times in the browser and how to see what happens when the report is rendered.

Power BI is always changing

The first thing to remember is that the Power BI Service is always being updated and improved, which means that even if everything else in your environment stays the same, the performance of any given report may vary (and hopefully get better) from day to day. Most changes to the Power BI Service are not publicised and there’s no way of knowing how much impact they will have on your report. You need to take this into account when you do testing: don’t create a benchmark and then wait a week to see if any changes you have made have improved performance. Instead, create a new benchmark on every day you are testing and compare any changes you make against it.

Use the browser your users are using

Report performance can vary a lot depending on the browser you’re using. Before you ask, I don’t have any recommendations for the best or fastest browser to use for Power BI but generally speaking the more ‘modern’ your browser the better; this tweet from my colleague Chris Hamill showing how much faster the new Chromium-based version of Edge is compared to regular Edge is really striking. If you’re going to test performance of a report, though, you need to use the browser(s) your users are using, especially if that browser is IE.

Client hardware

The spec of the PC or mobile device running the browser is also a major factor in performance. As a Power BI developer it’s likely your hardware will be far superior to whatever device your users have to view your report with. It may be a good idea to borrow a PC, laptop, phone or tablet that is equivalent to what your users use, just for testing purposes.

Network factors

Users in different parts of the world with different internet connections will experience different report performance, and network conditions, and therefore report performance, may also vary by time of day. Power BI has some basic functionality to show the network latency that users are experiencing but it’s better to get someone who really knows what they are doing to help you understand what’s happening at the network level (I am absolutely not an expert in this). To a certain extent you can reproduce different network conditions in the browser for testing purposes, but ideally you should try to reproduce any problems and do any testing wherever the users are based and at the times of day they are using Power BI.

Power BI query caching

Datasets in Power BI Premium have an option to cache the queries used on the initial page of a report in order to improve performance. If you have this turned on, it means that the initial report load might be faster than subsequent interactions such as changing a filter or a slicer, so you should not just test the initial render time.

The Power BI/Analysis Services cache

Even if your dataset is perfectly modelled and your DAX is tuned to perfection, the caching that takes place within the Power BI engine (or Analysis Services, if you are using a Live connection) will almost always mean that the first time you run a query it will be slower than subsequent executions.

With Analysis Services Live connections you can clear the cache, and get worst-case cold-cache execution timings, by running an XMLA ClearCache command from SQL Server Management Studio or by using DAX Studio’s Clear Cache button. You should also ensure that you are running your tests on an instance of Analysis Services that no-one else is connected to or doing anything on: this ensures that other users’ queries do not populate the cache, or use resources that affect the performance of your queries.
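For reference, here’s a sketch of a basic ClearCache command – the DatabaseID used here is for the Adventure Works demo database, so substitute your own – which you can run from an XMLA query window in SQL Server Management Studio:

<ClearCache xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>Adventure Works DW</DatabaseID>
  </Object>
</ClearCache>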

Things aren’t so simple with Power BI Import mode datasets. I’m pretty sure you can clear the cache of a dataset on a Premium capacity via the XMLA endpoints, although I’ve not tried it, but even then it’s not likely you can ensure that you’re the only user on that capacity. With Shared capacity there’s no way to clear the cache at all. As a result you should instead accept that you can only test warm cache performance in the browser, by loading your report once before you test performance. In any case, I think the kind of DAX-focussed testing that requires a cold cache can be done more easily in Power BI Desktop.

DirectQuery

If you’re using a DirectQuery dataset then you also need to worry about caching, security and other users running queries in the underlying data source. Again, as far as possible you should try to ensure that no-one else is running queries or doing anything else in the data source that could have an effect on the performance of your queries.

Simulating activity from other users

With DirectQuery connections, Live connections and Import mode, it might be that other users’ activity or processing/refreshing of cubes/models/datasets is causing the performance problem. I still think it’s important to do any testing when there are no other real users around – if you want to simulate other users running queries while you do your testing there are tools that can do that, including the Premium capacity load testing tool.

Do what your users do

Sometimes performance problems only occur in very specific circumstances, for example when a specific item has been selected in a filter or more than one item is selected, so it’s really important to create tests that reflect real-world patterns of interaction. Don’t forget that users might also be subject to row-level security that you as a developer are not affected by.

Conclusion

As you will have realised by now, creating realistic, repeatable tests for Power BI reports in the browser is not an easy task because there are so many different things to take into account. I’m not going to pretend this is an exhaustive list (I haven’t mentioned factors such as using a dataset that has the same data volumes as your production dataset, because that’s not specific to testing reports in the browser) but if you think I have forgotten something please leave a comment below – I would be interested to hear your thoughts.

Click here for part 2 in this series, where I show how to measure report render time in the browser.

Performance Overhead Of Visual Totals On Dimension Security In Analysis Services Multidimensional

Recently I was involved in troubleshooting a mysterious Analysis Services Multidimensional performance problem for a customer. The team had worked out that certain queries run by certain users were extremely slow, and that these users were members of roles where dimension security was applied, but the amount of slowdown – queries going through the role were taking over 10 minutes, compared to a few seconds when run as an administrator – was unlike anything I had seen before. It turned out that the cause was having the Enable Visual Totals box checked on every attribute of the dimension where security was applied, not just on the attributes whose members were secured.

I can’t reproduce the problem with the Adventure Works cube but I can use it to illustrate what happened. Let’s say you have a role that applies dimension security on the Country attribute of the Customer dimension:

[Image: a role applying dimension security on the Country attribute of the Customer dimension]
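For illustration, the allowed member set in a role like this might look something like the following – a purely hypothetical example that secures the role down to a single country:

{[Customer].[Country].&[United Kingdom]}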

Normally, in this scenario, you would only check the Enable Visual Totals box for the Country attribute:

[Image: the Enable Visual Totals box checked for the Country attribute only]

When running a query with this role applied, in the Query Subcube Verbose event in Profiler you will see a slice is put on the Country attribute:

[Image: the Query Subcube Verbose event showing a slice on the Country attribute]

However, if the Enable Visual Totals box is checked for every attribute on the dimension then a slice is put on every attribute that has its hierarchy enabled:

[Image: the Query Subcube Verbose event showing slices on every attribute with its hierarchy enabled]

The more of these slices there are, the slower everything gets inside Analysis Services: slower scans, slower cache registry lookups and so on. In the case of the cube I was looking at, the combination of all of these slices, extremely complex MDX calculations and unprocessed indexes led to the massive performance problem. Obviously if you have to use Enable Visual Totals on your role then you have to use it, and it’s extremely unlikely you will encounter this problem, but it’s good to know about it just in case.

Troubleshooting Web Service Refresh Problems In Power BI With The Power Query Diagnostics Feature

Back in 2018 I wrote a blog post about how to troubleshoot web service refresh issues using a third party tool called Fiddler. It’s still relevant but Fiddler is a complex tool and installing it and giving it the right permissions to see everything it needs to see isn’t easy. Luckily, the new Power Query Diagnostics feature in Power BI (released in October 2019) means you don’t need to use Fiddler for this kind of work any more. Full details on how to use this feature are given here, but I thought it would be useful to focus on one particular use case for it.

Let’s take one of the Power Query queries from my previous post as an example:

let
    Source = OData.Feed("https://api.parliament.uk/odata"),
    GovernmentOrganisation_table =
    Source{[
        Name="GovernmentOrganisation",
        Signature="table"
    ]}[Data],
    #"Filtered Rows" =
    Table.SelectRows(
        GovernmentOrganisation_table,
        each ([GroupName] = "Cabinet Office")
    )
in
    #"Filtered Rows"

[Note that this is a public web service which requires no authentication, so you will be able to run this query yourself]

This query connects to the UK Houses of Parliament OData API and returns the table of government organisations filtered down to the row where the GroupName column equals the text “Cabinet Office”. What requests are made to the web service when this query runs?

The first thing to point out is that different things will happen depending on whether you refresh the preview window in the Power Query Editor or refresh the dataset inside the main Power BI window. If you’re doing performance tuning you should ignore what happens when you refresh the preview in the Power Query Editor and always refresh from the main Power BI window.

The second is that, when you refresh a dataset (or a table within a dataset) from the main Power BI window, you might find that more work is being done than you expect as a result of “background refresh” taking place. This is something I blogged about here and you should always turn it off before doing any performance tests, because it’s not something that will happen in the service and it can also slow things down – potentially a lot if there are lots of queries and steps. You can turn it off from the Options dialog in the Data Load tab by deselecting “Allow data preview to download in the background”.

[Image: the Data Load tab of the Options dialog with “Allow data preview to download in the background” deselected]

The third is that the Query Diagnostics feature I’m going to use is, at the time of writing, in Preview, so you may need to enable it in the Preview features pane of the Options dialog:

[Image: the Preview features pane of the Options dialog]

With that done, here are the steps to find out what requests are made to the web service when the above query is run.

  1. First, let the query load data into the dataset as usual without doing anything else. You’ll see the table that the query has loaded data into in the Fields pane of the main Power BI window:
    [Image: the table in the Fields pane]
  2. Next, reopen the Power Query Editor window, go to the Tools tab and click on the Start Diagnostics button.
    [Image: the Start Diagnostics button on the Tools tab]
  3. Do not do anything else in the Power Query Editor window; instead, minimise (do not close) it and then go back to the main Power BI window. There, right-click on the table that holds the output of the query in the Fields pane and select “Refresh data”:
    [Image: the Refresh data option on the right-click menu]
  4. Once the refresh has taken place, go back to the Power Query Editor window and click the Stop Diagnostics button:
    [Image: the Stop Diagnostics button]
  5. When you have done that, you’ll see two new queries created in a group called Diagnostics:
    [Image: the two new queries in the Diagnostics group]
    These queries contain the diagnostics information for the refresh you’ve just done. The query with “Detailed” in its name has all the detailed information; the other query contains summarised data.
  6. Looking at the “Detailed” query, each row represents a single operation that took place within the Power Query engine during the refresh and, while I don’t know what all of this means (I’m working on it!), if you go to the Data Source Query column you’ll see a list of all the HTTP requests made to the web service:
    [Image: the Data Source Query column showing the HTTP requests]
  7. Clicking on an individual cell in this column allows you to see the full text of the request and response made in the preview pane underneath the results pane:
    [Image: the full text of a request and response shown in the preview pane]

While Fiddler may still be necessary for some more advanced scenarios, this is a much easier way of troubleshooting web service data sources in the Power Query Editor, for example when you need to check the exact request made to see if query folding is taking place on an OData data source.
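For example, if query folding is taking place for the query above you would hope to see the filter on GroupName pushed back to the web service in a single request, something like this (a sketch – the exact url and encoding may differ):

https://api.parliament.uk/odata/GovernmentOrganisation?$filter=GroupName eq 'Cabinet Office'

If instead you see a request for the whole GovernmentOrganisation table with no $filter clause, you know the filtering is happening inside the Power Query engine rather than in the web service.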

Excel Dynamic Arrays And The CubeSet Function

Among all the exciting announcements made at Ignite last week, one you may have missed (even in all the Excel-related announcements here) was that dynamic arrays have finally reached GA. Ever since they were announced I’ve been interested in how they can be used with Excel cube functions, which allow you to get data from Analysis Services, Power BI and the Excel Data Model/Power Pivot into cells on the worksheet, and I’ve blogged about this once already. Even though right now the CubeValue function can’t be used with dynamic arrays – alas – there are still some interesting ways dynamic arrays and cube functions can be used together and in this post I’ll show you another one I’ve found.

Say you have the following set of tables in the Excel Data Model:

[Image: the tables in the Excel Data Model]

And the following measure that counts the number of customers who bought something:

Distinct Customers:=DISTINCTCOUNT('Internet Sales'[CustomerKey])

Let’s also say you have on a worksheet the following table of data showing the number of distinct customers broken down by product, created using the CubeMember and CubeValue functions:

[Image: a table showing the number of distinct customers broken down by product]

Here are the formulas for this table:

[Image: the formulas for this table]
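In case that screenshot is hard to read, the formulas follow this pattern (a sketch – the product member unique name is just one example, and as explained below cell C2 holds the connection name “ThisWorkbookDataModel”):

B5: =CUBEMEMBER($C$2, "[Product].[Product].&[Mountain-200 Black, 38]")
C5: =CUBEVALUE($C$2, "[Measures].[Distinct Customers]", B5)

Cell B5 returns the member for an individual product and cell C5 returns the value of the Distinct Customers measure for that product; the pair of formulas is then copied down for every product in the table.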

So far, all very straightforward. Now let’s say you want to use a dynamic array to filter the rows in this table so you only see the products that have more than a certain number of distinct customers, and you want to see a grand total for this filtered list of customers, like so:

[Animation: the table filtered by the threshold value, with a grand total underneath]

Here are the formulas to achieve this:

[Image: the formulas used to achieve this]

To explain what’s going on here:

  • Cell C2 contains the text “ThisWorkbookDataModel”, the name of the connection to the Excel Data Model. Doing this makes the other Excel formulas here much easier to read!
  • Cell F2 – which I’ve turned into a named range called FilterThreshold – contains the number of customers which a product must exceed to be shown.
  • Cell E5 contains the dynamic array Filter function that filters the rows shown in the first two screenshots above and only returns the products that exceed the threshold. The formula is:
    FILTER(B5:B134,C5:C134>FilterThreshold)
  • Cell E6 does something similar to E5 but returns the number of distinct customers for each product. If all I wanted was the products and the number of distinct customers I wouldn’t have needed two separate formulas, I could have used one, but I need to add the grand total too…
  • Cell E4 contains the formula:
    CUBESET($C$2, E5#, "Total")
    This creates a set from the spill reference E5#, which contains the cells containing CubeMember formulas returned by the Filter function. For some reason CubeSet returns an error if you try to use a dynamic array formula in its second parameter but using a spill reference works ok; this is why I had to use separate formulas in E5 and E6.
  • Finally, cell F4 contains the formula that returns the total distinct count for all products returned in the filtered list using CubeValue and the set created in cell E4:
    CUBEVALUE($C$2, "[Measures].[Distinct Customers]", E4)
    Since this is a distinct count the only way to get the correct value here is using CubeSet, CubeValue and the DAX measure – there would be no way to calculate the correct value using Excel formulas. The same could be said of almost any other DAX measure that wasn’t a simple aggregation.

You can download the sample workbook for this post here.

Power BI Report Builder And RSCustomDaxFilter

If you’re building DAX queries using Power BI Report Builder you might notice something that looks like a new DAX function called RSCustomDaxFilter. For example, here’s a simple DAX query built from the Adventure Works Tabular demo database, with one measure and one parameter built on Calendar Year:

[Image: the query in the Query Designer, with one measure and one parameter]

Note that the “Enable Multi Value Parameters” option has been selected. Here’s what you’ll see if you view the text of the DAX query in the Query Designer:

EVALUATE 
SUMMARIZECOLUMNS(
RSCustomDaxFilter(
 @DateCalendarYear,
 EqualToCondition,
 [Date].[Calendar Year],
 Int64
 ), 
"Internet Total Sales", 
[Internet Total Sales]
)

But what is RSCustomDaxFilter? If you run your report and see what happens on the server using SQL Server Profiler, here’s the query that actually gets run (in this case I selected the years 2013 and 2014 for the parameter):

EVALUATE
SUMMARIZECOLUMNS (
    FILTER (
        VALUES ( 'Date'[Calendar Year] ),
        ( 'Date'[Calendar Year] = VALUE ( "2013" ) )
            || ( 'Date'[Calendar Year] = VALUE ( "2014" ) )
    ),
    "Internet Total Sales", [Internet Total Sales]
)

What has happened is that RSCustomDaxFilter has been replaced with an expression using the DAX Filter() function that implements the filter on the selected years; it’s just a placeholder for a dynamically-generated DAX expression that is substituted in at runtime.

Why is it needed? Handling multi-value parameters is difficult in DAX when you don’t know how many values are going to be passed to the parameters (it’s a subject I’ve blogged about here and here) and some kind of dynamic code generation is a reasonable solution to this problem.

There is one drawback with this approach though – it can generate a DAX query that is too long to be executed. Here’s the error message you’ll see:

[Image: the error message dialog]

The specified query is too complex to be evaluated as a single statement.

I’m not sure what the maximum length of a query is in DAX – I suspect it’s 32768 characters. You’ll hit this limit if you create a parameter on a field with thousands of values and then select all but a few of those values; from the example above you can imagine how long the resulting DAX query could be:

[Image: a parameter with thousands of values selected]

There’s no single workaround for this, but some custom DAX (such as the example here) will be necessary; you’ll just need to ensure that the DAX query is as short as possible.
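For example, one pattern I’ve used (a sketch – the parameter name @DateCalendarYearList is my own invention) is to get Report Builder to join all the selected values into a single pipe-delimited string, using an expression like =Join(Parameters!DateCalendarYear.Value,"|") for the parameter value, and then to test membership in DAX using PathContains:

EVALUATE
SUMMARIZECOLUMNS (
    FILTER (
        VALUES ( 'Date'[Calendar Year] ),
        PATHCONTAINS ( @DateCalendarYearList, 'Date'[Calendar Year] )
    ),
    "Internet Total Sales", [Internet Total Sales]
)

Because the query text now contains a single parameter reference rather than one condition per selected value, its length stays more or less constant however many values the user selects.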

Excel PivotTable Query Performance With Analysis Services, Power Pivot And Power BI (Revisited)

Back in 2016 I wrote the following blog post about changes to the way Excel 365 generated MDX queries for PivotTables connected to Analysis Services, Power Pivot/the Excel Data Model and Power BI datasets:

https://blog.crossjoin.co.uk/2016/07/08/excel-2016-pivottable-mdx-changes-lead-to-big-query-performance-gains/

I know it sounds boring and not something you need to worry about but trust me, this is important – these changes solved the vast majority of Excel PivotTable performance problems that I encountered when I was a consultant so you should read the above post before continuing.

Unfortunately, earlier this year these changes had to be partially rolled back because in some rare cases the queries generated returned incorrect results; this means that you may find that values for subtotals and grand totals are again being returned even when they aren’t being displayed. The good news is that you should still be able to get the improved performance with a few minor tweaks.

Using the example from the previous post, a PivotTable connected to SSAS MD and the Adventure Works cube (the queries generated for SSAS Tabular, Power Pivot and Power BI may be slightly different and slightly better but the basic problem is the same), if you build the following in Excel:

[Images: the PivotTable layout in Excel]

…even though the subtotals in the PivotTable are not displayed in Excel, the MDX query generated by Excel not only returns them but also returns others that are not needed at all – in fact it returns 36 rows of data (although the query returns them as columns) rather than the 13 rows that are displayed. Here’s a screenshot of the results returned when you run the MDX query in SQL Server Management Studio:

[Images: the results of the MDX query in SQL Server Management Studio]

Any time you see a row or column containing an All Member (in this case All Customers or All Products) you know you are looking at a subtotal or grand total.
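To give you an idea of what to look for, here’s a heavily simplified sketch of the kind of query Excel generates – the real thing includes DIMENSION PROPERTIES and other clauses, and the hierarchy names will depend on how your PivotTable is built:

SELECT
NON EMPTY
CROSSJOIN(
{[Customer].[Customer Geography].[All Customers],
[Customer].[Customer Geography].[Country].MEMBERS},
{[Product].[Product Categories].[All Products],
[Product].[Product Categories].[Category].MEMBERS})
ON COLUMNS
FROM [Adventure Works]
WHERE ([Measures].[Internet Sales Amount])

The sets here explicitly include the All Customers and All Products members, which is why subtotal and grand total values come back from the server whether or not they are displayed in the PivotTable.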

[NB The easiest way to get the MDX query generated by your PivotTable is to install the OLAP PivotTable Extensions add-in https://olappivottableextensions.github.io/]

In this case, to get an efficient query, you need to explicitly turn off subtotals and grand totals for the PivotTable in Excel:

[Images: turning off subtotals and grand totals for the PivotTable in Excel]

In the PivotTable the only difference you’ll see is that the grand totals are now not displayed:

[Image: the PivotTable with grand totals no longer displayed]

But the query generated now only returns the values that are actually needed, and as a result will be a lot more efficient and potentially a lot faster. In this case, the query now only returns the 12 rows of data displayed:

[Images: the query results showing only the 12 rows of data displayed]

Even with subtotals and grand totals turned off there are still some cases where unwanted values will be returned. Take the following PivotTable, where I have used the Calendar hierarchy from the Date dimension on columns and filtered it so I only see the three months in Q1 CY 2012:

[Images: the PivotTable with the Calendar hierarchy on columns, filtered to the three months in Q1 CY 2012]

The MDX query generated by Excel requests four extra columns with subtotals and a grand total that aren’t displayed:

[Image: the MDX query results with four extra columns]

There is a fairly simple workaround though. By changing how the PivotTable is constructed – in this case by not putting a hierarchy on columns but just the Month Of Year attribute, then adding slicers for Calendar Year and Calendar Quarter to control which months are displayed – you can get an efficient query. This version of the PivotTable:

[Images: the rebuilt PivotTable with the Month Of Year attribute on columns and slicers for Calendar Year and Calendar Quarter]

…results in this query being generated, which only returns the required values:

[Image: the more efficient MDX query results]

To sum up, then, if you have a PivotTable that seems a bit slow:

  • Turn off subtotals and grand totals unless you really need to see them
  • Look at the MDX query being generated by Excel and see if it is still returning unnecessary subtotals and grand totals; if it is, try rebuilding the PivotTable to get the same results in a different way (for example by not drilling down on hierarchies, as in the last example) and so get a more efficient query

[All the queries in this post were generated by Excel 365 version 1910 build 12130.20238]
