Deleting Cached Logins For Azure Analysis Services In SQL Server Management Studio

When Azure Analysis Services was announced I had to try it out right away. Of course I didn’t read the instructions properly so when I tried to log in to my Azure Analysis Services instance from SQL Server Management Studio, like an idiot I logged in with the wrong username. The problem is that once you’ve done this, with current versions of SQL Server Management Studio there’s no way of logging out and logging in as a different user. Luckily Igor Uzhviev of Microsoft had a solution for me and I thought I’d share it for anyone else who’s made the same mistake. Here’s what you need to do:

  • Go to your user’s local app data folder (the path will be something like this: C:\Users\<your_user_name>\AppData\Local\)
  • In this directory you need to delete:
    1) Any .dat files with names that start with "omlibs-tokens-cache…"
    2) The folder AADCacheOM, if it exists
  • You also need to open Internet Explorer and delete all cookies and website data, because IE caches authentication tokens too

Once you’ve done this, the next time you try to connect to Azure Analysis Services from SSMS you’ll get prompted to log in again. Future releases of SSMS will have proper support for logging in and out of Azure SSAS, but even with the latest version (version 16.5) you’ll still need to follow these steps.

First Thoughts On Azure Analysis Services

Hurray! At last we have proper cloud-based Analysis Services! The official blog post with the announcement is here:

https://azure.microsoft.com/en-in/blog/introducing-azure-analysis-services-preview/

…the pricing details are here:

https://azure.microsoft.com/en-us/pricing/details/analysis-services/

…and the documentation is here:

https://azure.microsoft.com/en-us/documentation/services/analysis-services/

It’s still in preview, so while there are some missing features I’m confident that they’ll be added soon. Similarly, it’s SSAS Tabular only right now, but the blog post says:

Support for multidimensional models will be considered for a future release, based on customer demand.

I’m pretty sure there will be plenty of demand for Multidimensional support given the installed base that’s out there.

Why should we be excited about this, though? What will Azure Analysis Services be useful for? Obviously, if you want to build a completely cloud-based Microsoft BI solution then Azure SSAS is an important component to have available. Also, consider the fact that the load on a typical SSAS server varies a lot over time: daytime is usually much busier than night-time, weekdays are usually busier than weekends, and some days of the month (such as month ends) may be much busier than others. It will be great to be able to build cloud-based SSAS solutions that can be scaled-up and scaled-down to meet demand, rather than expensive on-premises SSAS solutions that are underutilised most of the time. I also think Azure SSAS will be very interesting to ISVs, and to Power BI users looking to move up from native Power BI storage, although in both cases pricing will be key to adoption. I can’t wait to start using this with my customers!

UPDATE 27/10/2016: You can see the top-rated requests on the Azure Analysis Services forum here: https://feedback.azure.com/forums/556165. Unsurprisingly, support for SSAS Multidimensional is #1.

Nested Variables In DAX

Last week, at the SQL Server Days conference in Belgium, Kasper mentioned in his presentation that it was possible to define variables inside variables in DAX. So, for example, you could define a measure like so:

MyMeasure =
VAR Outer1 =
    VAR Inner1 = 1
    VAR Inner2 = 2
    RETURN Inner1 + Inner2
VAR Outer2 = 3
RETURN Outer1 + Outer2

This measure returns 6, as you might expect.

There aren’t any performance benefits to doing this, although of course it helps with code readability and organisation (thanks to Marius for confirming this).
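
One thing to watch out for (my observation, not something from Kasper’s presentation): the inner variables are only in scope inside the outer variable’s expression, so a measure like the following sketch fails because Inner1 can’t be referenced after the outer variable’s RETURN:

MyBrokenMeasure =
VAR Outer1 =
    VAR Inner1 = 1
    RETURN Inner1 + 1
//Error: Inner1 is not visible outside Outer1's expression
RETURN Outer1 + Inner1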

With my newly rekindled love of DAX I thought this was quite interesting, although I’m not really sure why, given that it’s not particularly useful; I think Matt might be right.

Defining Variables In DAX Queries

Variables are the best thing to happen to DAX since, well, forever – they are so cool I’m almost ready to like DAX as much as I like MDX. There are already several good articles and blog posts out there describing how to use them (see here and here), but I was looking at a Profiler trace the other day and saw something I hadn’t yet realised about them: you can declare and use variables in the DEFINE clause of a DAX query. Since my series of posts on DAX queries still gets a fair amount of traffic, I thought it would be worth writing a brief post showing how this works.

Say you have a table called Sales in your model, containing a Month column and a Sales column.

You can declare DAX variables in the DEFINE clause of a query like so:

DEFINE
    VAR MyMonth = "January"
    VAR FilteredMonths =
        FILTER ( VALUES ( Sales[Month] ), Sales[Month] = MyMonth )
EVALUATE
CALCULATETABLE ( Sales, FilteredMonths )

This query returns just the rows from the Sales table where the month is January.

The benefits of using variables in this way are the same as you get when using variables in measures and calculated columns: improved readability, less repetition and potential performance improvements.
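
For comparison, here’s a sketch of the same query written without variables. With the filter logic used only once the benefit is small, but as soon as the same table is needed in several places the variable version is clearly easier to read and maintain:

EVALUATE
CALCULATETABLE (
    Sales,
    FILTER ( VALUES ( Sales[Month] ), Sales[Month] = "January" )
)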

I also wondered whether I would be able to refer to these variables inside measures declared in the DEFINE clause, but unfortunately you can’t. The following query:

DEFINE
    VAR MyMonth = "January"
    VAR FilteredMonths =
        FILTER ( VALUES ( Sales[Month] ), Sales[Month] = MyMonth )
    MEASURE Sales[FirstMeasure] =
        COUNTROWS ( FilteredMonths )
EVALUATE
ROW ( "First Measure", [FirstMeasure] )

…returns the error

“Failed to resolve name ‘FilteredMonths’. It is not a valid table, variable or function name”.
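
One workaround, sketched here rather than taken from anything official, is to declare the variables inside the measure’s own expression instead, at the cost of repeating that logic in every measure that needs it:

DEFINE
    MEASURE Sales[FirstMeasure] =
        VAR MyMonth = "January"
        VAR FilteredMonths =
            FILTER ( VALUES ( Sales[Month] ), Sales[Month] = MyMonth )
        RETURN
            COUNTROWS ( FilteredMonths )
EVALUATE
ROW ( "First Measure", [FirstMeasure] )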

However, if you define your calculations inside the query itself, for example using the Summarize() or AddColumns() functions, or like so:

DEFINE
    VAR MyMonth = "January"
    VAR FilteredMonths =
        FILTER ( VALUES ( Sales[Month] ), Sales[Month] = MyMonth )
EVALUATE
ROW (
    "First Calc", COUNTROWS (FilteredMonths),
    "Second Calc", CALCULATE (SUM(Sales[Sales]), FilteredMonths)
)

…the query works, returning a single row with both calculated values.
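
To illustrate the AddColumns() option mentioned above, here’s a sketch of the same idea that returns one row per filtered month (the "Month Sales" column name is just something I’ve invented for this example):

DEFINE
    VAR MyMonth = "January"
    VAR FilteredMonths =
        FILTER ( VALUES ( Sales[Month] ), Sales[Month] = MyMonth )
EVALUATE
ADDCOLUMNS (
    FilteredMonths,
    "Month Sales", CALCULATE ( SUM ( Sales[Sales] ) )
)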

In a lot of cases, multiple calculations contain some of the same logic, and being able to use variables to share tables and values between calculations opens up some really interesting opportunities for performance optimisation.

Generating JSON In Power BI And Power Query

Often, when calling web services from Power BI or Power Query, you’ll need to generate some JSON inside your query to send to these web services. The M language makes this relatively easy to do with the Json.FromValue() function, but there’s only one example of how to use it in the help, so I thought it might be useful to provide some worked examples of M data types and how Json.FromValue() turns them into JSON.

First, here’s a function, which I’ve called GetJson() for these examples, that takes a parameter of any data type, passes it to Json.FromValue() and returns the JSON representation of the input as text:

(InputData) =>
let
    //Convert the input value to its JSON representation as binary data
    JsonOutput = Json.FromValue(InputData),
    //Decode the binary data to text so the JSON can be read
    OutputText = Text.FromBinary(JsonOutput)
in
    OutputText
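
As a side note, and as a sketch rather than anything from the examples below: when you do eventually need to send generated JSON to a web service, the binary output of Json.FromValue() can be passed straight to Web.Contents(), which issues a POST request when the Content option is supplied. The URL here is hypothetical:

let
    //Hypothetical endpoint - replace with a real web service URL
    Url = "https://example.com/api/sales",
    //Convert an M record to binary JSON
    JsonPayload = Json.FromValue([product = "Apples", sales = 1]),
    //Supplying the Content option makes Web.Contents issue a POST request
    Response = Web.Contents(
        Url,
        [
            Content = JsonPayload,
            Headers = [#"Content-Type" = "application/json"]
        ]
    ),
    //Parse the JSON returned by the service
    Result = Json.Document(Response)
in
    Result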

Now, let’s see what this returns for a variety of different data types.

Tables

Take the following table on an Excel worksheet, called “Sales”:

Month      Sales
January    1
February   2
March      3

The following query loads the data from this table and calls the GetJson() function defined above:

let
    Source = Excel.CurrentWorkbook(){[Name="Sales"]}[Content],
    #"Changed Type" = Table.TransformColumnTypes(
        Source,
        {{"Month", type text}, {"Sales", Int64.Type}}),
    Output = GetJson(#"Changed Type")
in
    Output

It returns the following JSON, an array of objects:

[
{"Month":"January","Sales":1},
{"Month":"February","Sales":2},
{"Month":"March","Sales":3}
]

Lists

M language lists are represented as JSON arrays, so the following query:

GetJson({"Apples", "Oranges", "Pears"})

…returns

["Apples","Oranges","Pears"]

And

GetJson({{1,"Apples"}, {2,"Oranges"}, {3,"Pears"}})

…returns

[[1,"Apples"],[2,"Oranges"],[3,"Pears"]]

Records

M language records are represented as JSON objects, so:

GetJson([month="January", product="Apples", sales=1])

…returns

{"month":"January","product":"Apples","sales":1}

Nested Data

Finally, in M it’s possible to have nested data structures: lists of tables, records containing lists and so on. These can be converted to JSON too, so in the following example of a record containing a text value, a list and a table (created using #table):

GetJson(
    [
        product = "Apples",
        colours = {"Red", "Green"},
        sales =
            #table(
                {"Month", "Sales"},
                {
                    {"January", 1},
                    {"February", 2}
                }
            )
    ]
)

…the JSON output is:

{
    "product":"Apples",
    "colours":[
        "Red",
        "Green"
    ],
    "sales":[
        {
            "Month":"January",
            "Sales":1
        },
        {
            "Month":"February",
            "Sales":2
        }
    ]
}

You can download the sample Excel 2016 workbook with these examples here.

Using The Invoke Custom Function Button In Power BI

There are a lot of cool new features in the September 2016 update for Power BI – so many, in fact, that several of the really important changes in the Query Editor are in danger of going unnoticed. In this post I want to walk through how to use the new Invoke Custom Function button in the Query Editor in Power BI and explain why it’s such a useful thing to have.

More advanced Power BI users will know that a lot of data loading patterns involve using custom M functions. In the past, implementing these patterns meant learning M, both to write the functions and to invoke them. A few months ago Power BI introduced the ability to generate functions automatically from queries that use parameters, without the need to write any code, and now with the latest update we can also invoke functions easily by clicking a button. This means that a lot more advanced data loading patterns are now available to users who don’t know any M, and there’s even less need for someone like me to open the Advanced Editor window and start writing code.

Let’s take a look at how this works with a very simple example. Say you have a table that contains sales data, with the number of units sold and the price paid.

You now want to add a new column to this table that calculates the sales value as Units * Price, and you have a function to do this. Here’s what the M code for that function (called “Calculate Value”) could look like:

(Units as number, Price as number) => Units * Price

With a query that returns your sales data and another query that returns the Calculate Value function, you can easily create a new column on the sales data query and invoke the function for each row. Go to the Sales query, go to the Add Column tab on the ribbon, and click the Invoke Custom Function button.

You’ll see the Invoke Custom Function dialog appear. Here you can choose the query that returns the function you want to use and enter the values you want to pass to that function’s parameters. At the moment you can type in a value or pass values from a column in the table you’re invoking the function on (strangely enough, you don’t seem to be able to use Power BI parameters here yet).

Click OK and the function is invoked for every row in the table, adding a new column containing the result.
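
Behind the scenes the button just writes the M code for you; the generated step will look something like this sketch (the exact code Power BI generates may differ slightly, and "Sales" here stands for whatever the previous step in your query is called):

#"Invoked Custom Function" = Table.AddColumn(
    Sales,
    "Calculate Value",
    each #"Calculate Value"([Units], [Price])
)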

To take a more realistic example, look at this post – this new functionality replaces the step where I create a new Custom Column and invoke the GetSheet1() function that I created.

This is why Power BI is so successful: Microsoft are not only implementing high-profile wow features but also adding the obscure, unsexy features that nonetheless make a real difference to the productivity of advanced users.