tim laqua dot com - Thoughts and Code from Tim Laqua

8 Apr 2014

jderef.com is dead – Create your own using SSRS and JDE metadata tables

[Screenshot: ssrs-jderef-1]

Anyone who has used jderef.com is certainly mourning its demise as JDE, like most ERP systems, is a bear to understand without a really big instruction book. Luckily you can chat with your JDE people and, given the correct libraries, modify the following query to get the same information:

SELECT
       RTRIM(TDOBNM)   AS TABLE_NAME,
       RTRIM(d.SIMD)   AS Table_Description,
       RTRIM(d.SIPFX)  AS Table_Prefix,
       RTRIM(TDOBND)   AS Data_Dictionary_Object_Name,
       RTRIM(TDSQLC)   AS SQL_Column_Name,
       RTRIM(b.FRDSCR) AS Column_Description,
       RTRIM(e.DRDL01) AS Field_Type,
       RTRIM(c.FRDTAS) AS Field_Length,
       RTRIM(c.FRDTAD) AS Field_Decimals,
       --RTRIM(c.FROWDI) as DD_Item,
	CASE 
		WHEN RTRIM(c.FROWER) = 'UDC' THEN 'UDC: RTRIM(DRSY)=''' || RTRIM(c.FROER1) || ''' AND RTRIM(DRRT)=''' || RTRIM(c.FROER2) || ''''
		ELSE 'Generic Edit'
	END AS Edit_Type,
       --RTRIM(c.FROWER) as Edit_Rule,
       --RTRIM(c.FROER1) as Edit_Rule_Parm1,
       --RTRIM(c.FROER2) as Edit_Rule_Parm2,
       RTRIM(TDPSEQ)   AS Program_Sequence_Number
FROM
        COPD910.F98711 a                 -- Table/column cross reference (library varies per environment)
        LEFT OUTER JOIN DD910.F9202 b    -- Data dictionary descriptions
                ON a.TDOBND = b.FRDTAI
                   AND FRLNGP = '  '
                   AND FRSYR = '    '
        LEFT OUTER JOIN DD910.F9210 c    -- Data dictionary specifications (sizes, decimals, edit rules)
                ON a.TDOBND = c.FRDTAI
        LEFT OUTER JOIN OL910.F9860 d    -- Object Librarian (table descriptions and prefixes)
                ON a.TDOBNM = d.SIOBNM
                   AND d.SIFUNO = 'TBLE'
        LEFT OUTER JOIN PRODCTL.F0005 e  -- User Defined Codes (decodes the field type)
                ON LTRIM(RTRIM(c.FROWTP)) = LTRIM(RTRIM(e.DRKY))
                   AND e.DRSY = 'H98'
                   AND e.DRRT = 'DT'
WHERE
        RTRIM(TDOBNM) LIKE ?
ORDER BY
        TDOBNM,
        TDPSEQ

[Screenshot: ssrs-jderef-2]

Sample RDL can be found here: http://timlaqua.com/jderef.rdl. You will need to change the data source to work in your environment and update the following libraries to match yours:

  • COPD910.F98711 - F98711 varies per environment, so make sure you get the right library
  • DD910.F9202 - all the F92 files are in the same library
  • PRODCTL.F0005 - whatever library your UDC table is in
  • OL910.F9860 - the Object Librarian library
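
If you just want to poke at the metadata before building the report, the same idea works ad hoc. A minimal sketch, assuming the COPD910 library above and using F0101 (the Address Book Master) as an example value in place of the report's ? parameter:

-- Ad-hoc sanity check: swap the ? parameter for a literal table name
-- (F0101, the Address Book Master, is only an example)
SELECT
       RTRIM(TDOBNM) AS TABLE_NAME,
       RTRIM(TDOBND) AS Data_Dictionary_Object_Name,
       RTRIM(TDSQLC) AS SQL_Column_Name
FROM
        COPD910.F98711
WHERE
        RTRIM(TDOBNM) LIKE 'F0101'
ORDER BY
        TDPSEQ
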
23 Jan 2013

Hiding SSAS Attributes With Powershell

The challenge here was that we have a cube with some great data as well as some low-level detail (customer detail, think PII) that we want to expose to a larger audience. The problem is that the low-level detail is not something the new audience needs (or in some cases, is allowed) to have access to. The obvious answer here is Dimension security - why not use that? Two reasons. First, Dimension security is slow. Second, even if it weren't slow, hiding all the data in many (30+) attributes is tedious to set up, and whenever new attributes were added we would have to remember to disable access to those as well. To be clear, we're not just hiding attributes here - we're creating an entire copy of the existing SSAS database and hiding attributes in the copy.

# Import Required Libraries
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices") >$NULL
 
# Connect to server
$server = New-Object Microsoft.AnalysisServices.Server
$server.Connect('localhost')
 
# Make a metadata copy of the existing cube
$newdb = $server.Databases.GetByName('POS Data').Clone()
 
# Change ID and Name of SSAS DB
$newdb.Name = 'POS Data - Limited'
$newdb.ID = 'POS Data - Limited'
 
# Drop the existing copy of the SSAS DB if it already exists
$existing = $server.Databases.FindByName('POS Data - Limited')
if ($existing) { $existing.Drop() }
 
# Add the new copy to the server
$server.Databases.Add($newdb)
 
# Sync our copy of the new database with the server's copy
$newdb.Update([Microsoft.AnalysisServices.UpdateOptions]::ExpandFull)
 
# Grab the cube we want to work with from the new database
$cube = $newdb.Cubes.GetByName('POS Data')
 
# Hide the Location Dimension
$cube.Dimensions.GetByName('Location').Visible = $false
 
# Hide all attributes and hierarchies in the Customer dimension
$cube.Dimensions.GetByName('Customer').Attributes |  %{$_.AttributeHierarchyEnabled=$false; $_.AttributeHierarchyVisible=$false;}
$cube.Dimensions.GetByName('Customer').Hierarchies | %{$_.Visible=$false; $_.Enabled=$false;}
 
# Enable the key attribute in the customer dimension - it won't work if the key isn't enabled
$cube.Dimensions.GetByName('Customer').Attributes['Dim Customer'].AttributeHierarchyEnabled=$true
 
# Enable the Market attribute in the customer dimension
$cube.Dimensions.GetByName('Customer').Attributes['Market'] | %{$_.AttributeHierarchyEnabled=$true; $_.AttributeHierarchyVisible=$true;}
 
# Hide the Location Database attribute in the Product dimension
$cube.Dimensions.GetByName('Product').Attributes['Location Database'] | %{$_.AttributeHierarchyEnabled=$false; $_.AttributeHierarchyVisible=$false;}
 
# Add a new member to the role granting Read Only permissions in the cube
$newMember = new-object Microsoft.AnalysisServices.RoleMember("domain\tlaqua")
$newdb.Roles['Role 1'].Members.Add($newMember)
 
# Push our updates to the server
$newdb.Update([Microsoft.AnalysisServices.UpdateOptions]::ExpandFull)
 
# Process the new database
$newdb.Process([Microsoft.AnalysisServices.ProcessType]::ProcessFull)
 
# Disconnect from the server
$server.Disconnect()

This approach has two notable downfalls. First, you have to think up a different database name because the original database with the low-level detail still exists on the server. Second, you have to ProcessFull the clone that you made - it doesn't clone the data, it clones the metadata. All in all, it works great for us: this particular cube is only processed once a week, and the 20 minutes we lose processing data into the clone is more than acceptable (it looks really clean to the users as well).
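
If you want to sanity-check the copy once it has processed, one option (just a sketch, assuming the database and cube names used above) is to query the MDSCHEMA_HIERARCHIES rowset from an MDX query window connected to the 'POS Data - Limited' database. Attributes whose hierarchies were disabled won't show up at all, and hidden ones report HIERARCHY_IS_VISIBLE as false:

-- Run in an MDX query window against the 'POS Data - Limited' database
SELECT
    [DIMENSION_UNIQUE_NAME],
    [HIERARCHY_NAME],
    [HIERARCHY_IS_VISIBLE]
FROM $SYSTEM.MDSCHEMA_HIERARCHIES
WHERE [CUBE_NAME] = 'POS Data'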

5 Jul 2012

Consuming Sharepoint Lists via OData with SSIS

We recently needed to bring data from a Sharepoint list into SSIS for use in an attribute lookup for a cube dimension. Seems like this should be pretty straightforward since SSRS does it natively now, but no - that wizardry hasn't made its way over to the SSIS team yet. As stated before, we don't care for third-party dependencies or external non-standard assemblies of any sort in our packages. That means, as usual, we'll be writing a script component to take care of getting our data out of Sharepoint.

First, navigate to the list you are interested in and export it as a data feed.

18 Jun 2012

SQL 2012 Upgrade: Lessons Learned

Monday of last week (2012-06-11) we started our SQL 2012 upgrades by first upgrading our development and staging environment. We encountered very few issues with this particular phase and forged ahead with upgrading one of our monitoring servers - this is the one that hung us up for an hour or two. We decided it didn't need to run Enterprise Edition and could get by with the Business Intelligence edition. What we didn't notice was that at some point in the past we had set up MDW on that server for evaluation and, after abandoning it, never cleaned it up. The MDW database uses both partitioning and compression - Enterprise-only features.

After sorting that out, we moved on to our SSAS presentation server, which also received the Business Intelligence edition as there are no core limits on SSAS in that edition - that box has 48 cores and the engine licensing scared us a little - then we remembered we don't even run an instance of the engine on that server. Finally, we upgraded our production ETL/DW/SSRS server, and that's when we found out that someone declared war on commas in email addresses (I have no idea what standard led them to that decision - I assume they forgot that commas commonly exist in titled email strings). Then pass-through authentication in SSRS tanked, but our DBA quickly caught that one, noticing the SSRS config file had been completely overwritten and was only allowing NTLM (switching back to Negotiate fixed the issue).

For now, one week later, things appear to be stable and we believe we've found most of the issues. This upgrade certainly didn't yield the magic performance improvements we saw in the 2005 and 2008 upgrades, and we have yet to use any of the new features - but we're ready. Someone keeps talking about Tabular instances 😉

As the saga continues and we move on to upgrading our remaining environments, I'll keep the following list updated:

Phase 1: Core ETL, DW, SSRS, and SSAS Servers

  • Context menus are a little different in the new VS environment. Had to manually add the Change Connection command to the context menu that's used when editing XMLA files (SQL 2012: XMLA Change Connection Command Missing)
  • You can't add DATETIME and TIME any more - you have to explicitly CAST the TIME part to DATETIME first (see the snippet after this list)
  • Don't worry about the Project Deployment Model in SSIS yet. The Package Deployment Model works fine for now
  • When compiling SSIS packages, they don't sit in the ./bin root by default anymore. This behavior can be changed by editing the DeploymentOutputPath in the Properties of the solution file
  • Do NOT "diagonal-grade" a server that uses Enterprise-only features to SQL 2012 Business Intelligence Edition (Upgrading from SQL 2008 R2 Enterprise to SQL 2012 Business Intelligence)
  • The SSRS config file (RSReportServer.config) gets overwritten. Back yours up before doing the upgrade so you can put the settings back. The one we noticed was that authentication changed to NTLM only rather than Negotiate (we use Kerberos primarily), so auth started failing for many data sources
  • The SQLPS (Powershell) shell behaves a little differently. Working with files (copies, moves, etc) will not work correctly unless you break out of the default provider context (it's in the context of SQL's provider when the shell starts). At the top of all of our Powershell type SQL Agent Job Steps we just added a "C:" line to break out of the SQL provider and things worked as expected again
  • In SSAS, go through the data sources in your AS databases and switch any references to SQLNCLI10/10.1 to SQLNCLI11. We've seen SQLNCLI10.1 continue to work fine after the upgrade in some scenarios and fail in others - safest to just update them.
  • msdb.dbo.sp_send_dbmail now throws "Msg 14613, Level 16, State 1, Procedure sysmail_verify_addressparams_sp, Line 13. The specified '@recipients' cannot use commas (,) to separate addresses: 'Tim_Laqua@trekbikes.com;Laqua, Tim <tim_laqua@trekbikes.com>;'. To continue, use semicolons (;) to separate addresses." I looked at the validation proc - it tosses out ANY string with a comma in it. Simply remove the commas (so instead of "LastName, FirstName" use "FirstName LastName" for titled email addresses)
  • SSRS also declared war on email recipient strings - some subscriptions will fail with the error "specified string is not in the form required for an e-mail address." Fiddle with the email address line - we've seen this be an issue with white space, blanks, etc. Basically, they made the email address validation ridiculously rigid.
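
Here's the DATETIME/TIME change from the list above as a minimal repro (the variable names and values are just for illustration):

-- Adding DATETIME and TIME directly worked before the upgrade; SQL 2012 throws
-- "The data types datetime and time are incompatible in the add operator."
DECLARE @d DATETIME = '2012-06-11',
        @t TIME     = '13:30:00';

--SELECT @d + @t;                   -- fails on SQL 2012
SELECT @d + CAST(@t AS DATETIME);   -- works: 2012-06-11 13:30:00.000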

Don't try to perfectly plan your upgrade - just be ready to zig when it zags. As a team, we've been through at least two of these major SQL Server upgrades previously, and this one was more about being prepared for possible issues than about spending ridiculous amounts of time trying to perfectly reproduce production load to simulate the production upgrades. We did spend some time tidying up our house first, as over the years we have learned that the more you stay away from third-party components and assemblies, the smoother these upgrades go. That observation led us to rip all of the third-party and custom SSIS components (components from CodePlex, purchased components, and ones we wrote ourselves) out of our packages and use only built-in components. Every single package we've touched so far has upgraded without issues.

12 Jun 2012

Upgrading from SQL 2008 R2 Enterprise to SQL 2012 Business Intelligence

With all the new licensing considerations in SQL 2012, everyone's busy trying to figure out exactly what the cheapest license they can get by with is. For one of our instances, we decided rather than Enterprise, we could run the Business Intelligence edition to save some cash. That upgrade path is explicitly listed in the Supported Version and Edition Upgrades documentation.

What happened?

We clicked through the upgrade wizard as usual; it got through about 85% and then popped up an error:

Wait on the Database Engine recovery handle failed. Check the SQL Server error log for potential causes.

We clicked ok, then it finished and the results showed that the SQL engine and a few other features failed the upgrade. Great.

11 Jun 2012

SQL 2012: XMLA Change Connection Command Missing

I'm fairly sure this has to do with the options I chose when I first opened SSMS, where I asked it to import my settings. Regardless of why it happened, my Change Connection command is gone from my context menu when editing XMLA files:

I use this quite a bit as we have many environments, and most of those environments also have staging counterparts. Here's how to manually add it (and add it to any other context menus you want):

15 Mar 2012

Automating PowerPivot Data Refresh in Sharepoint 2010

Of course we want to do this, it's a fundamental requirement. Dear Sharepoint, please refresh my PowerPivot when the data is ready, not on some arbitrary schedule. Until this functionality is built-in, people will continue to hack away at it to figure out how to make this happen. Is this supported? Certainly not - so continue at your own risk.

So the method we'll be using here is just to mimic what Sharepoint does when you check the "Also refresh as soon as possible" box on the schedule configuration page. To accomplish this, open the profiler, connect it to whatever instance hosts your Sharepoint PowerPivot database, and filter TextData to "%Schedule%" (or filter to just the Sharepoint PowerPivot database - SP2010_PowerPivot_Service_Application in our case). Then open up your test PowerPivot schedule configuration, check the box, click OK, wait for the schedule history to come back up, and then stop the trace. Now you know you've got what you need, you just have to find it:

27 Feb 2012

Slowly Changing Dimensions with MD5 Hashes in SSIS

We recently moved away from the 3rd party Checksum component (and all 3rd party components) in SSIS, and I wanted to share the pattern we settled on for maintaining our Type 1 Slowly Changing Dimensions (SCDs). There were two things we wanted to address with the new pattern. First, our previous implementation wasn't performing as well as we needed it to, nor was it generating reliable checksums. Second, we wanted to get away from dependencies on custom assemblies in general. To illustrate the pattern, we're going to build an SCD package off the Adventure Works DW DimCustomer table and skip over the actual source of the business keys and attributes by selecting directly from the completed dimension for now.

First, we assume that our dimension already exists (and we were using some other checksum or MERGE to maintain it). We have to add a column to store the MD5 hash:

ALTER TABLE dbo.DimCustomer ADD
	MD5 VARCHAR(34) NOT NULL DEFAULT ''
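
The VARCHAR(34) sizing fits a '0x' prefix plus the 32 hex characters of an MD5 hash. The hash itself will be generated by a Script Component in the data flow (below), but if you ever want to spot-check a value from T-SQL, HASHBYTES produces the same kind of string. This is only a sketch - the column list and delimiter here are made up for illustration and would have to match whatever your script component actually concatenates and hashes:

-- Illustrative only: the real package hashes its own column list and order
SELECT CustomerAlternateKey,
       '0x' + CONVERT(VARCHAR(32),
              HASHBYTES('MD5',
                  ISNULL(FirstName, '')  + '|' +
                  ISNULL(LastName, '')   + '|' +
                  ISNULL(EmailAddress, '')), 2) AS MD5
FROM dbo.DimCustomer;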

Second, we need a staging table to store updated/changed rows. Script out the current dimension as a CREATE, remove all unneeded constraints and indexes, and create a staging table as a heap:

CREATE TABLE [dbo].[Staging_DimCustomer_UpdatedRows](
	[CustomerKey] [int] NOT NULL,
	[GeographyKey] [int] NULL,
	[CustomerAlternateKey] [nvarchar](15) NOT NULL,
	[Title] [nvarchar](8) NULL,
	[FirstName] [nvarchar](50) NULL,
	[MiddleName] [nvarchar](50) NULL,
	[LastName] [nvarchar](50) NULL,
	[NameStyle] [bit] NULL,
	[BirthDate] [datetime] NULL,
	[MaritalStatus] [nchar](1) NULL,
	[Suffix] [nvarchar](10) NULL,
	[Gender] [nvarchar](1) NULL,
	[EmailAddress] [nvarchar](50) NULL,
	[YearlyIncome] [money] NULL,
	[TotalChildren] [tinyint] NULL,
	[NumberChildrenAtHome] [tinyint] NULL,
	[EnglishEducation] [nvarchar](40) NULL,
	[SpanishEducation] [nvarchar](40) NULL,
	[FrenchEducation] [nvarchar](40) NULL,
	[EnglishOccupation] [nvarchar](100) NULL,
	[SpanishOccupation] [nvarchar](100) NULL,
	[FrenchOccupation] [nvarchar](100) NULL,
	[HouseOwnerFlag] [nchar](1) NULL,
	[NumberCarsOwned] [tinyint] NULL,
	[AddressLine1] [nvarchar](120) NULL,
	[AddressLine2] [nvarchar](120) NULL,
	[Phone] [nvarchar](20) NULL,
	[DateFirstPurchase] [datetime] NULL,
	[CommuteDistance] [nvarchar](15) NULL,
	[MD5] [varchar](34) NOT NULL)

Now into SSIS - we will be building:

  1. Execute SQL Task to Truncate our Staging table(s)
  2. Data Flow Task to Insert new rows and Stage updated rows
       1. OLE DB Source to retrieve our source data
       2. Script Component to Generate Row Numbers
       3. Conditional Split to Evenly Distribute Rows
       4. Script Component to Generate MD5 Hashes
       5. Union All to Squish it all back together
       6. Lookup to get the existing MD5 Hash (if it exists)
       7. Conditional Split to separate Unchanged and Changed rows
       8. RowCount Transformation
       9. OLE DB Destination for Changed rows
       10. OLE DB Destination for New rows
  3. Execute SQL Task to Update changed rows

[Screenshot: Completed Control Flow]

[Screenshot: Completed Data Flow]
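
As a sketch of the final Execute SQL Task (step 3 above): with the staging table defined earlier, it's just an update of the dimension from the staged rows. The join column and the handful of attribute columns shown are illustrative - match them to your own dimension:

-- Apply staged Type 1 changes back to the dimension
-- (only a few attribute columns shown; list them all in the real task)
UPDATE d
SET    d.GeographyKey = s.GeographyKey,
       d.Title        = s.Title,
       d.FirstName    = s.FirstName,
       d.MiddleName   = s.MiddleName,
       d.LastName     = s.LastName,
       d.EmailAddress = s.EmailAddress,
       d.MD5          = s.MD5
FROM   dbo.DimCustomer d
       INNER JOIN dbo.Staging_DimCustomer_UpdatedRows s
               ON s.CustomerAlternateKey = d.CustomerAlternateKey;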

3 Feb 2012

SSAS Cache Isn’t Making Cents

I stole the pun from my Business Analyst, Mr. John Seiler 😉 Now on to my issue - when SSAS caches the value for my [Actual] measure, it seems to do so based on the results of the first query that requests that coordinate. In this particular cube there are bunches of tiny fractions, and depending on how you slice it, it aggregates a little differently. This is a fun problem in itself, but the part that drives me (and the Finance department) crazy is that if you slice on something OTHER than whatever that first query (the one that created the cache) sliced on, the values don't always add up to the "Grand Total" in Excel - aka "All".

These are the queries used for this test:

Query A

SELECT
	{Actual} ON 0
FROM [GL]
CELL PROPERTIES VALUE

Query B

SELECT
	{Actual} ON 0,
	NON EMPTY Hierarchize({DrilldownLevel({[Account].[Accounts].[All]},,,INCLUDE_CALC_MEMBERS)}) ON 1
FROM [GL]
CELL PROPERTIES VALUE

Query C

SELECT
	{Actual} ON 0,
	NON EMPTY Hierarchize({DrilldownLevel({[Date].[Fiscal].[All]},,,INCLUDE_CALC_MEMBERS)}) ON 1
FROM [GL]
CELL PROPERTIES VALUE
GO

Results (the cache was cleared with ClearCache on the entire database before each series):

[Results grid: Series 1, Series 2, Series 3]

So basically the Grand Total of this GL cube is a random number determined by whoever sneaks the first query in after the cache is cleared (processing, sync, etc.).

And for all of you who think the MDX script breaks everything everywhere - I did comment out my entire MDX script before running these tests.

11 Jan 2012

Hiding SSRS Schedule Jobs In SSMS

Everybody hates these things. If you're in the Activity Monitor or browsing via the SSMS tree view, the GUID-named jobs that represent SSRS subscriptions are really just none of our concern. Sure, I can admit that I've seen people manually fire these to re-send a given subscription, but you can just do that with the AddEvent proc in a query window. Personally - I don't want to see these... usually...
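
For reference, firing a subscription without touching its agent job looks something like this (the GUID is hypothetical - it's the SubscriptionID from the Subscriptions table in your report server database):

-- Re-send a single subscription by raising its timed event directly
-- (replace the GUID with the SubscriptionID you care about)
EXEC ReportServer.dbo.AddEvent
     @EventType = N'TimedSubscription',
     @EventData = N'00000000-0000-0000-0000-000000000000';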

  • Connect to the database instance you want to filter the agent jobs out on
  • Browse to Databases > System Databases > msdb > Programmability > Stored Procedures > System Stored Procedures
  • Right-click on dbo.sp_help_category and select Modify...

At the top of the proc (around line 19), change the definition of @where_clause to NVARCHAR(MAX):

DECLARE @where_clause   NVARCHAR(max)

At the bottom (starting around line 94), add a few lines to append the @where_clause variable with a predicate that filters out the Report Server category when it's you connecting from your workstation (so you can still see that category from another machine if you need to):

SELECT @cmd = @cmd + N'FROM msdb.dbo.syscategories '

SET @where_clause += N'
AND
	CASE
		WHEN
			name = ''Report Server''
			AND (
				SELECT RTRIM(nt_username) + ''|'' + RTRIM(hostname)
				FROM sys.sysprocesses
				WHERE spid = @@spid) = ''tlaqua|TIMSLAPTOP'' THEN 0
		ELSE 1
	END = 1 '

-- Execute the query
EXECUTE (@cmd + @where_clause + N'ORDER BY category_type, name')

RETURN(@@error) -- 0 means success
END

So, what on earth are we doing here? First, replace my nt_username with yours and replace my hostname with yours. From my less-than-exhaustive trial and error testing, it seems that when either the SSMS Jobs node is expanded or the Job Activity Monitor fetches jobs, two DB calls are made - one to fetch categories and another to fetch jobs. I tried filtering out the jobs portion originally and that yielded some errors. So I'm assuming it's trying to marry the categories and the jobs internally, and it expects there to be jobs for each category the first query returned. By not returning the Report Server category at all, the resulting merged list of jobs doesn't contain any jobs belonging to that category (logically an INNER JOIN).

Sure, this is a dirty hack, but I don't mind.

Update (2012-01-11)
Here is the predicate for hiding those jobs from ALL SSMS clients (an alternative to the predicate above, starting around line 96):

SET @where_clause += N'
AND
	CASE
		WHEN
			name = ''Report Server''
			AND (
				SELECT program_name
				FROM sys.sysprocesses
				WHERE spid = @@spid) = ''Microsoft SQL Server Management Studio'' THEN 0
		ELSE 1
	END = 1 '