Creating an Enterprise Class Dashboard Solution with Power BI

Summary: This paper describes how a team defined metrics, gathered data, and used Power BI to build dashboards that monitor the performance of a large business division at Microsoft. These dashboards made it easier to navigate to the metrics that matter, and are used across the organization to track the business every day. The paper provides details of both the core requirements and the processes and features used to address them.

Writer: Will Thompson, Program Manager, BAPI Business Analytics and Operations

Technical Reviewers: David Iseminger, Michele Hart, Manpreet Singh Jammu, Siva Harinath

Published: June 2016

Copyright
This document is provided "as-is". Information and views expressed in this document, including URL and other Internet Web site references, may change without notice. You bear the risk of using it. This document does not provide you with any legal rights to any intellectual property in any Microsoft product. You may copy and use this document for your internal, reference purposes. © 2016 Microsoft. All rights reserved.

Contents
- Business Requirements (Why we built it + what we needed to build)
  - Single view of Multiple Services
  - Reliable + trustworthy data
  - Cross-service metrics
  - Inform Product reviews
- Building the solution (What we built + how we built it)
  - BAPI Top Level Dashboard
  - L2 Dashboard layout guidelines
  - Data pipeline refresh, monitoring and alerting
  - Publishing the dashboard solution
- Driving Adoption of the solution
- Summary

Business Requirements (Why we built it + what we needed to build)

At Microsoft, managing the business and performance of an entire division can be a challenge. That's no different for the division that Power BI is part of, even though we create tools and services designed to do exactly that. Power BI is part of BAPI, or Business Application Platform and Insights, which in turn is part of the Cloud + Enterprise division at Microsoft. BAPI is about 1,800 people and is made up of multiple teams that work on related products and services, but each team is at a different level of maturity, and their businesses run quite differently. In that respect it's similar to many large enterprise organizations, and many of the challenges are the same. Our Corporate Vice President who leads BAPI, James Phillips, tasked us with creating a single dashboard solution that could monitor all these services and their growth over time. After all, Power BI is one of the services in BAPI, so he should have a set of world-class dashboards built on that technology.

We first met with key stakeholders to understand their requirements, which is the recommended approach for any BI project.
The most important requirements fell into three primary categories:

Create a Single View of Multiple Services
- Each of the many teams in BAPI is accountable for different metrics, and those metrics needed to be brought into one place
- Each team uses its own solution and approach for reporting, so a common tool was needed

Use Reliable and Trustworthy Data
- Data needed to be up to date and automatically refreshed
- Common definitions of key metrics were required, to allow cross-team comparisons and all-up combined metrics
- Engineering teams needed to be alerted when data ingest failed, so they could proactively address issues

Inform Product Reviews
- Each service has a monthly Product Review with the leadership team, to report the state of its business and how it is reacting to issues. Microsoft's CEO, Satya Nadella, wants the whole company to develop a data-driven culture, so we needed to surface dashboards with data that would help make the right decisions during these reviews.

The next few sections discuss these three requirement categories in more detail, providing more context for the challenges and the required solutions. Then, in Building the Solution, we discuss how we wrangled all those requirements into a comprehensive dashboard solution that was ready to monitor the progress and pace of business in one of Microsoft's most data-demanding divisions.

Single view of Multiple Services

The first requirement was to create a single view of multiple services. To provide some context around the scope of the project, BAPI comprises 14 teams, each looking after a number of services, ranging from the Azure Marketplace to Power BI to the Azure Sites Development platform. It's comparable to any other medium-to-large organization with different product lines or departments, in the following ways:
- Each service tracked, and was accountable for, its own metrics
- Each key metric had its own definition
- The data behind the metrics lived in different systems and was updated at different frequencies

The leadership team needed a single place where they could monitor key metrics from each business. It also needed to serve cross-team metrics, such as the total number of unique users across all the services in the organization.

Navigation across multiple dashboards
We knew we would "eat our own dogfood" and use Power BI for the solution. But with so many teams and their disparate sources of data, even bringing the information into one tool would only address part of the information discovery challenge. The final solution needed to serve as a navigational aid, so that people viewing it could find and understand important data from any team in the organization. That required a way to link to and explain any dashboard or report used across the business, and many services had multiple layers of dashboards, each adding further detail, that would need to be included.

Multiple sources of content
We needed to provide a coordinated view for each business/service. But we also needed the all-up cross-team dashboard, since each team is responsible for its own metrics and ultimately its own reports and dashboards. Each of those views needed a common look and feel, to ensure that anyone in the organization could understand each team's most important metrics and how they were performing.
That meant the following framework had to be created:
- Common recommendations on dashboard layout
- Common use of data visualizations
- Common patterns for reporting and analysis

We looked outside the BAPI organization as we worked through these requirements, and discovered that many other divisions at Microsoft (including the team behind our CEO's dashboard) followed the same best practices. Again, since we build a dashboard and data visualization tool ourselves, our BAPI leader felt we should be able to showcase the best practices for creating dashboards that are easy to consume and beautiful to look at.

Updated regularly with minimal central effort
Since multiple teams contributed to this effort, we didn't want to impose a particular release schedule for updates. It was important that each team could update their dashboard content when their business dictated, without our central team being a bottleneck. That premise applied to the data itself, too – we encouraged teams to have the most up-to-date data possible, but each team would control and refresh the data themselves.

Multiple users with multiple permissions
Within each service team, multiple people needed to contribute collaboratively to their dashboard. Someone would be responsible for new users, someone else for service reliability and availability, and so on. This requirement applied at the top level as well – our central team was collating information from each service to do cross-product analysis, and also needed a way to manage multiple people creating this content.

Users consuming the dashboards had similar requirements. Employees from across BAPI needed access to their own service's dashboard, as well as the top-level dashboard. We needed it to be easy to manage security on the dashboards we produced.

Reliable + trustworthy data

The second requirement was to use reliable and trustworthy data. A dashboard is only as useful as the data it contains – the garbage in, garbage out rule applies to dashboards as much as to anything else. It was really important that the dashboard data be automatically refreshed, up to date, reliable and trusted.

Automatically refreshed
The initial requirement from our management was that data should be "as up to date as possible." We had to dig into this requirement to find the right balance between cost and the desire for a real-time solution. We also knew that data was often updated or restated for past events, so if a dashboard showed data as soon as it was available, there was a growing chance of that data changing in future.

Another consideration was how quickly the organization could react to the data. It's meaningless to provide updates every second if it takes a week for changes to roll out. We settled on refreshing data on a daily basis, with a plan to move to hourly refreshes when possible. A crucial requirement was that the data refresh be fully automated. Requiring users to manually collect or upload data introduces inherent delays, and can add significant latency due to sickness, vacations, and other complicating factors.

As we spoke to stakeholders from each service, we also realized that each team would provide data on a different schedule. Rather than force everyone to snap to the lowest common denominator, we acknowledged that there would be differences and that we would report and identify them clearly.

Stream Monitoring
With so many components in our data pipeline, we needed to ensure we could monitor the streams of data that fed the various dashboards.
In general, most service teams' pipelines consisted of the following stages:

[Pipeline diagram] Product Telemetry → Centralized data store → Data marts → Dashboard

If any of these elements failed, or the movement of data between them failed, we would end up with gaps or outdated data in the dashboards. Different teams were already using different tools for their data movement and data marts, so we needed a central process for monitoring all of it, rather than going to each system individually to get an overall view of the pipelines.

Alerting
With monitoring in place for the data movements mentioned above, we also needed alerting for the times when elements went wrong. We didn't want our Vice President to be the alarm system; we knew he'd be looking at these dashboards every day, and we wanted to be able to proactively tell him if data was missing (or if there were other issues). Building a system with automated email alerting would allow our engineers to investigate failures as quickly as possible.

Definitions/metadata
The metrics at the team/service level also needed to be comparable across teams. We needed to establish common definitions of key concepts to answer questions like: what is a user? Is it someone who has ever used a service, or must they have used it in the last x days? Does it have to be a paid user? What about trials? What about internal Microsoft usage? We learned early on that different teams were classifying usage in very different ways, so we had to standardize definitions such as these.

With the definitions established, we also needed to surface them in the dashboards and reports, so that anyone reading them would understand exactly what the numbers they were looking at meant.

Cross-service metrics

In addition to collecting the metrics from all the individual services into one place, our team is also responsible for cross-team analysis and for providing insights on the combined data. Because of this, we had to provide cross-cutting metrics somewhere in our solution, and include links to their sources. In many cases the underlying data was sensitive and outside our control – for example, we had to link sensitive HR data into dashboards, even though only the leadership team would be able to see the data itself.

Inform Product reviews

The third requirement was that the solution had to inform product reviews. Since Satya Nadella took over as CEO of Microsoft, fostering a data-driven culture has been a priority. In the BAPI organization this is manifested in monthly Product Reviews, where key metrics for each service are discussed amongst senior management: the current state of the metrics, what was done to impact them since last month, and what the results were.

Previously such reviews were often done with static PowerPoint slides (some teams also published a Power BI content pack with some of their data), but our BAPI Vice President wanted us to use live dashboards that allowed on-the-fly analysis and slicing of data.
This meant our dashboards would typically be presented on a screen or projector, and needed to provide easy ways to get to reports that would allow that further analysis. And of course, if the dashboards were being used for business-critical meetings such as these, the dashboards and data needed to be high quality and reliable.

Building the solution (What we built + how we built it)

With all those requirements in hand, we set out to create a solution that satisfied all of them. It was a tall order, but the following sections describe what we did to address them, and what our solution looked like.

BAPI Top Level Dashboard

We experimented with a few options to create the top-level dashboard (also known as the Level 1, or L1, dashboard). To make it as beautiful and understandable as possible, we looked at using a Power BI report with various visuals (including custom visuals), but decided that a dashboard was the appropriate way to go, since the navigation we required from the top level to a detail dashboard is only available with Power BI dashboards.

The screenshots in this paper show the final dashboard we produced (note that they use fake data, for illustration purposes only). We wanted the dashboard to have the following key areas:
1. Top-level KPIs from the different services within the BAPI organization
2. All-up aggregate metrics showing totals across these services
3. A 'Metric of the Month' or 'Metric of the Moment' area, designated by MOTM
4. A visual bringing in tweets and sentiment for BAPI services

In the following sections, we discuss each of these areas and how we built them.

KPI visuals
The biggest portion of the L1 dashboard is the set of KPI tiles that represent the overall performance of each service. We knew there would be around 15 of these visuals, so it needed to be easy for users to find the piece of information they were looking for, and we needed to remove any extraneous elements from the visuals. This requirement helped teams identify their key metric, and helped tie the services together into the wider team.

For most services, the crucial metric is Monthly Active Users, or MAU, a metric pioneered by web apps such as Facebook. For us, these are users who've taken an intentional action in the last 28 days. The metric is recalculated each day to give a rolling figure, which smooths out the noise introduced by weekends and holidays and gives a general growth curve. We also track Weekly and Daily Active Users. For some services it's more relevant to count the number of apps or the number of running processes, but in general we pushed toward a usage-based metric wherever possible.
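For illustration, a rolling active-user measure along these lines can be written in DAX. This is a minimal sketch: it assumes a 'Usage Data' table with one row per user action and a [UserId] column (the column name is illustrative, not our production schema), and it omits the intentional-action filtering our real definition applies:

    MAU =
    CALCULATE (
        // Count each user once, however many actions they took
        DISTINCTCOUNT ( 'Usage Data'[UserId] ),
        // Keep the 28 days up to and including the latest date in the current context
        DATESINPERIOD ( 'Usage Data'[Date], MAX ( 'Usage Data'[Date] ), -28, DAY )
    )

Evaluated for each date on a chart axis, this produces the rolling 28-day curve; the weekly and daily variants use -7 and -1 day windows.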
We looked at a few different visuals to show these metrics, starting with line charts, as these were used in many of the detail reports. A line chart is a common indicator of the growth of a service, but when we moved it into a small dashboard tile it became very hard to read. We tried a few iterations of the layout – removing the legend, the axes, and so on – but losing the context that those legends and axes provided was too much of a sacrifice. We did decide to turn off the x-axis for some tiles where we explicitly called out the time period in the subtitle. The line charts also didn't give us a good way to show the figure for the latest day (although one can hover over a line chart to see this).

We considered using two tiles side-by-side. That approach allowed us to show extra metrics, but we decided it took up too much space – multiplied by all 14 services, it would take up a full screen! It also meant having two targets for clicking through to the detail dashboards, which we didn't want.

We also tried using a small report page pinned with the Report Tile functionality. This allowed us to overlay a metric number onto the line chart, but it led to overcrowding of the visuals, and also meant the tile could not be clicked for navigation.

We settled on using the KPI visual to display the MAU metric. This allowed us to show an overall trend, the latest figure, and performance against a target. We also used the 'Set custom link' setting to link these tiles to each service's summary dashboard, rather than the default of the report the KPI came from.

Choosing the target
Broadly, our organization has avoided setting absolute figures as targets, for various reasons: it suggests that when we hit the number we're "done"; it can drive undesirable behavior around inaccurate reporting of numbers; and it can be demoralizing, since metrics can stay red for a long time. Instead, we chose to work against a growth acceleration target – the aim was for each service to grow faster and add more users and usage every week. To calculate that target, we took MAU (or the equivalent metric) and compared it to the values from 7 days and 14 days earlier, using a few DAX calculations.

The first calculation gives us the latest MAU for the report's date range:

    Latest MAU =
    CALCULATE (
        [MAU],
        FILTER ( ALLSELECTED ( 'Usage Data' ), [Date] = MAX ( [Date] ) )
    )

Here we re-apply the date filter to always show MAU for the last date for which there is data. Similar calculations are used for the data from 7 and 14 days earlier:

    MAU 7 days ago =
    CALCULATE (
        [MAU],
        FILTER ( ALLSELECTED ( 'Usage Data' ), [Date] = MAX ( [Date] ) - 7 )
    )

The deltas between these values are then calculated:

    7DayChange = [Latest MAU] - [MAU 7 days ago]
    14-7DayChange = [MAU 7 days ago] - [MAU 14 days ago]

And finally, a target value is produced:

    GrowthAccelerationTarget = DIVIDE ( [MAU 7 days ago] ^ 2, [MAU 14 days ago], 0 )

We use a ratio increase, rather than just adding the absolute change on again, to adjust for services with an already large user base.
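A quick worked example (with illustrative numbers) shows why this asks every service for the same relative acceleration. The target is just the latest weekly value scaled by the previous week's growth ratio; with 100,000 MAU 14 days ago and 110,000 MAU 7 days ago:

    GrowthAccelerationTarget = 110,000 ^ 2 / 100,000
                             = 110,000 x (110,000 / 100,000)
                             = 121,000

In other words, a service that grew 10% last week is asked to grow another 10% this week, whatever its absolute size.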
Images to split up the layout of the dashboard
We tried to apply a variety of the Gestalt laws of perceptual grouping here. To help people understand the grouping of each team's KPIs, the layout of the tiles reflected an org chart presented at our team meetings. We carried this into the colors used as bookends on either side of each layer of services in the organization (for example, blue bookends for Business Applications). The broader areas of the dashboard were split up with white space to help the reader further see and understand the separation. (Well, it's not quite white – it's #EAEAEA, which matches the background of dashboard pages in Power BI.) The rounded edges on each bookend helped show where one region ended and another began, and contrasting colors were used for the BAPI and Metric of the Month/Moment labels. We built images that matched the aspect ratio of a 1x1 tile (for the twitter bookend) and of a 1x6 tile for the taller gutter: 1x1 image tiles are 250 x 170px, and the 1x6 gutter is 217 x 928px.

Custom twitter visual
Power BI provides a rich framework for developers to build their own visuals in situations where the out-of-the-box visualizations don't meet their requirements. We wanted a way to show the most recent tweets that matched a twitter search. We initially just loaded them into a Power BI table, but that wasn't a very compelling way to visualize them. There are a variety of custom visuals created by the Power BI community and submitted to the community gallery. These are security-checked by Microsoft before publishing, but do bear in mind that they're third-party code. One entry, from Fredrik Hedenstrom, is the Scroller visual. That custom visual was designed to scroll text and a value, for example commodity or stock prices. It required you to add a measure which was shown at the end of the string, but for our tweets we only wanted to show text. It was easy to take the code from the existing visual and modify it to show only the text. We also allow users to click the text and open the tweet in Twitter, thereby allowing them to respond to our users.

Flywheel image + card visuals
The top right of the dashboard shows a diagrammatic view of our organization's strategy, as well as the overall number of users, apps, and so on. We wanted to map numbers to the different elements of the diagram, and the easiest way to do this was to pin a live page from a Power BI report to the dashboard. We added an image with the boxes and titles as the background of the page, sized the page to match the aspect ratio of a 2x2 tile (510 x 350px), and added Card visuals showing each of the numbers. To get the Card visuals to line up and look neat, we used the Align features within Power BI Desktop.

Use of Text Tiles for placeholders
In the first few days and weeks of building the dashboard, we wanted to iterate quickly on the overall layout and content while the data pipeline was being built. To do this, we added textboxes to the dashboards showing placeholder content – just a title and a short sentence describing the visual that was going to be used there. As the data was completed, we replaced those with the real tiles. That approach gave us a quick way to try out different layouts, and it is still in use as new requirements come up: we add a new textbox to the dashboards, then replace it once the data is available.

L2 Dashboard layout guidelines

With many different services each building their own summary dashboards (which we refer to as level 2, or L2), we needed a common layout and structure. Some of the more mature services (Power BI, for example) already had dashboards, so we needed to accommodate their existing content and layout as well.

We first looked at the overall structure for these dashboards. We tried arranging the content in columns and in rows, and quickly established that row-by-row was the better layout. Each row would represent a particular area of focus – for example Usage, Customers, Sign-up, Service Quality, or Delivery Velocity – and could include three or four visuals that reported the performance in that area. For the most part, 2x2 tiles were needed to provide any meaningful information; 1x1 tiles were good for single values such as overall MAU. Generally, we decided that dashboards should be up to eight tiles wide.
We knew that the dashboards were predominantly being used on desktop monitors or projectors at 1920 x 1080 resolution, and in fullscreen mode at that size the tile titles remained quite readable.

We then set down some rules and guidelines. Some were enforced top-down by management, while others were recommendations based on best practices from our existing dashboards that could be set aside if a particular situation called for it. Here's how the overall guidelines manifested in a sample dashboard for Power Apps.

Top level metrics
The first rule passed from the top down was that a set of usage metrics for each service be positioned at the top of the dashboard: the most recent Monthly, Weekly and Daily active users, for all users and for external users. These are the most important numbers to drive day in, day out. Tracking them over time gives us the two charts in the second row. These were requirements for all summary dashboards.

Owners' contact details and logo
Inevitably, visiting a dashboard will raise further questions. To handle questions efficiently, we put contact details on every dashboard so that questions would be directed to the relevant person, rather than a central team, and any errors could be resolved quickly.

Metric definitions for rows
Rows further down the dashboard were determined by each individual service. We provided guidance on the key areas that were important to most services, but allowed them to include others and to choose the visuals that worked best for them. In general, we wanted individual services to display usage, customers (i.e. organizations), user acquisition/signup funnels, service health (for example uptime, bugs, and so on) and delivery velocity (such as the speed at which new features were added). Some services stuck to this format, and others added their own. For this reason, we ensured that each row had a Definitions tile along the left. The Definitions tile helped make it clear what we meant by terms such as Active User or Acquisition Funnel.

Text tiles for commentary
Although we'd like everything in the data to be self-explanatory, we found that oftentimes a sentence or two was required to explain dips or spikes. For example, Power BI's "Publish to Web" capability was used to show data from the Philippines presidential election, which caused a huge spike in usage that needed to be explained in our dashboards. To provide this commentary, we also used textboxes on the L2 dashboards to call out events like this where necessary.

Metadata description tiles (+ caveats in subtitles)
Each dashboard tile has space for a title and subtitle, and we recommended they be used to explain the fields used in each visual, as well as any caveats, such as time periods. For example, a subtitle from the L1 dashboard reads: "Pro BI measures users of Analysis Services and Reporting Services in Visual Studio 2013 & 2015 only." This approach, combined with the title tiles mentioned above, gave us a consistent location for the data dictionary, and also acted as an index that users could quickly scan to find the content most relevant to them. We also recommended dashboard owners use this area to highlight any high business impact or sensitive data, ensuring that users know what can be shared outside of our organization.

'L3' detailed dashboards
Many of the services also built third-level dashboards that went into specific subject areas, for example service performance or user signup funnels. We let each service decide how to construct and use these dashboards, as they were individual to their business processes. We encouraged them to continue to include definitions and the row-by-row layout in these dashboards, and to ensure they were linked from the L2 dashboards, again using the custom URL link option. For example, many services built a usage dashboard that broke down MAU, WAU and DAU across their web and native apps, and for internal and external users. This was linked from the usage charts on their L2 dashboard.

Data pipeline refresh, monitoring and alerting

To operationalize these dashboards, our leadership team expected the previous day's data to be ready each morning (Pacific Time). Most of our dashboard data came from product telemetry feeding metrics on product usage. The data pipeline for this solution looked like the following:

[Pipeline diagram] Product Telemetry → "Cosmos" → Azure SQL Database data marts → Power BI Models

Cosmos is Microsoft's internal big data system, and is used by many Microsoft products. It began as a Microsoft Research project and evolved into the Azure Data Lake public offering. Movement of data in and out of Cosmos needed to be monitored and maintained by the product engineering teams. A series of queries runs against Cosmos and calculates DAU, WAU and MAU on a daily basis, saving a snapshot of the data into a SQL data mart. The big data architecture of Cosmos allows this to operate at scale, and means we can load the previous day's data in time for it to be pulled into Power BI. However, sometimes a query can fail, or not all the data is loaded. We set up two sets of email alerts to proactively monitor this, and we're alerted if:
- The schedule fails to run at all
- The number of rows loaded is significantly lower or higher than the previous load
We had to create our own system to check these conditions, and emails were sent to the team if any failures occurred.
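Our checking system was custom code, but the row-count condition itself is easy to express. As an illustrative sketch (the 'Load Log' table and its [LoadDate] and [RowsLoaded] columns are hypothetical, not our production schema), a DAX measure like the following returns the size of the latest load relative to the previous one, and an alert can fire when the ratio falls outside whatever band a team agrees on:

    Load Size Ratio =
    VAR LatestDate =
        MAXX ( ALL ( 'Load Log' ), 'Load Log'[LoadDate] )
    VAR PriorDate =
        MAXX (
            FILTER ( ALL ( 'Load Log' ), 'Load Log'[LoadDate] < LatestDate ),
            'Load Log'[LoadDate]
        )
    VAR LatestRows =
        SUMX (
            FILTER ( ALL ( 'Load Log' ), 'Load Log'[LoadDate] = LatestDate ),
            'Load Log'[RowsLoaded]
        )
    VAR PriorRows =
        SUMX (
            FILTER ( ALL ( 'Load Log' ), 'Load Log'[LoadDate] = PriorDate ),
            'Load Log'[RowsLoaded]
        )
    RETURN
        // A ratio far from 1.0 suggests a partial or duplicated load
        DIVIDE ( LatestRows, PriorRows )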
Power BI then pulls the data from the SQL data mart into the reports and dashboards every day. That's also done on a schedule, using Power BI's built-in Scheduled Refresh.

We also created email templates for alerting the leadership team; these explained any issue, identified who was working on it, and included a timeline for when a resolution could be expected. They were sent manually to senior management, as we needed to tailor the content to the specific issues and plans.

Refresh times and managing missing data
With all this in place, we could be reasonably sure that telemetry from the previous day would be available in the dashboards first thing the next morning. However, some teams fed us data on a slower cadence. For example, billing data was only returned once a month, in line with customers' billing cycles. This meant that some data would spike on the first day of the month. To counter this, we built a Last Known Good calculation into the data. This looked for the latest date for which we had data from all teams, and marked any dates after that as incomplete.
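As a minimal sketch of how that calculation can be expressed in DAX (assuming the combined usage data carries a [Team] column; the table and column names are illustrative), the Last Known Good date is the earliest of the per-team latest dates, and each day can then be flagged against it:

    Last Known Good Date =
    CALCULATE (
        // For each team, find the latest date it has loaded, then take the earliest of those
        MINX (
            VALUES ( 'Usage Data'[Team] ),
            CALCULATE ( MAX ( 'Usage Data'[Date] ) )
        ),
        // Ignore any date filters so the result is global to the model
        ALL ( 'Usage Data'[Date] )
    )

    Day Is Complete =
    IF ( MAX ( 'Usage Data'[Date] ) <= [Last Known Good Date], "Complete", "Partial" )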
That gave us an easy way to choose whether or not to include days that had only a subset of the data.

Publishing the dashboard solution

With each service creating its own set of dashboards and reports, we needed a consistent way to navigate from the L1 dashboard to the details, and a way for L2 dashboard owners to update their dashboards independently.

Content packs & sharing
There are a few key limitations of Power BI (as of H1 CY2016) that we needed to work around:
- You can't pin visuals from a report or dashboard that was shared with you onto another dashboard
- You can't pin visuals from a report in one group to a dashboard in another
- You can't pin visuals from a content pack until you have Personalized it (created your own copy of the content pack)

This meant that we followed this process for each service:
1. The owner of each L2 dashboard creates their dashboard as outlined above.
2. They share the dashboard with an Active Directory security group. This means we can simply manage the security group's members to control who has access to the dashboard, and we can also add other groups (such as whole teams or organizations). We recommended they turn off resharing of the dashboard so users can't give access to other people.
3. The data for the tile that will end up on the L1 dashboard is published as an Organizational Content Pack. This is the first step in letting us pin it to the L1 dashboard.
4. The content pack is set up by our shared service account. It still can't be pinned from, per the limitations listed above.
5. The content pack is Personalized by the service account. This makes a copy of the content pack, and from there the KPI tile can be created and pinned to the L1 dashboard.
6. The KPI tile has its custom URL set to point to the dashboard as shared in step 2. We need to do this because any content pack would otherwise have a unique URL per user.
7. The L1 dashboard is shared with the security group, so everyone has access.

It's a relatively complex process, but it gets around the limitation of not being able to pin shared visuals. It also makes managing access simpler, and means that when an L2 dashboard is updated there's no action needed from the central team. The complex steps are a one-off operation when the L1 dashboard is first created, and only need to be redone if the schema used in the KPI tile changes. In practice, we had to redo this a handful of times for some of the services, but once we reached a steady state the data continued to update automatically.

L2 dashboards shared by custom URLs for navigation
Because we use the custom URL to point from each L1 dashboard tile to the L2 summary for its service, we point it to the dashboard shared in step 2 above. This shared version of the dashboard has a URL that is the same for everyone. If we left the default URL in place, it would point to the specific instance of the content pack that the tile was pinned from – and that doesn't get updated until the L2 owner re-publishes the content pack and it's re-instantiated and re-pinned.

This approach also allowed us to link to content for which the dashboard creators didn't have permission. Our HR department created a dashboard containing sensitive information on employee levels and performance, which most people can't access. However, the shared URL could be linked to a tile, so that when our Vice President or other leadership team members clicked it, they would be taken to the correct dashboard. The HR team also used row-level security in SQL Server Analysis Services, so they could constrain what individuals saw in that dashboard to only their own team.
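In SQL Server Analysis Services, such a role is defined by a DAX row-filter expression on the table being secured. A minimal sketch, assuming an 'Employee' table with a column that stores each person's manager in the same format the server reports identities (the table and column names are illustrative, not the HR team's actual model):

    // Row filter for a "Managers" role: each manager sees only their own reports
    'Employee'[ManagerAlias] = USERNAME ()

USERNAME() returns the identity of the connecting user, so a single shared dashboard can show each leader only the rows for their own team, without publishing separate reports.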
O365 Groups + Security groups
In steps 2 and 7 we share with an Active Directory security group, but in step 3 the content pack is published to an O365 'modern' group. These are limitations of Power BI as of this writing. We used AD groups wherever possible, since this allowed us to manage membership with existing security groups, nest groups, and so on. We did have to take a dependency on the synchronization between AD and Azure AD, but that is typically resolved within an hour.

Driving Adoption of the solution

Implementing these dashboards took about six weeks, including setting up data flows for many services that were still under development and not even publicly launched, as well as all the report and dashboard creation. We completed a number of iterations on the layout and metric definitions, and reviewed them across the organization. The final task was to make sure the solution was adopted as part of our data culture, and used across the business. We also wanted to build usage tracking into the dashboards, to see who is actually using them and how often. At present we're waiting for the Power BI team to add this to the product – Power BI already gives the Tenant Administrator a view on usage, so we are working with the Microsoft tenant admins to gather this data.

Grand Reveal
We initially launched the solution to James Phillips as a summary and landing page that he could visit to identify problem areas. We knew James was keen to use the dashboard (and his regular email questions made it clear that he was using it), so the trickle-down effect on the rest of the organization would help drive usage. He started to check the dashboard on his desktop and mobile device as part of the regular cadence of the business. It became clear he was pleased with it, which we believe was because of the attention we placed not just on the physical aesthetics of the dashboard, but also on the navigation design, business logic and metric definitions, as well as the reliability of the data.

Reveal in key staff meetings & Product Reviews
We also revealed the dashboard to the leadership team, and adoption picked up in the key weekly staff meetings with James and the leadership team. It became the first thing they looked at on Monday mornings, tracking any issues from the previous week and focusing the efforts for the coming days. There are also the monthly Product Reviews, where each service presents its learnings and plans. The L2 dashboards are used as the initial talking points for these meetings, covering the key metrics within Power BI before going into any supporting information (such as PowerPoint decks or other items).

Summary

The dashboard solution we created provides:
- Live, operational dashboards
- A single place to see aggregate data from all the services in the organization
- A landing page with navigation to summaries for each service
- Up-to-date information that we can rely on for critical business decisions

We learned through this process that Power BI dashboards can be tailored to meet demanding enterprise-wide needs, even when data must be pulled from many different teams with different products and timelines. The ability to condense data, yet provide context from linked dashboards, enabled a complex and data-driven organization to produce a solution that met the needs (and a whole host of requirements) of the most demanding and engaged users, leaders, and teams.
For more information:
- Power BI web site
- Power BI Guided Learning site
- Power BI Community forums