Saturday, February 25, 2012

AS 2005 Slowly Changing Dimension - adding new rows when no change

Hi, I am using the Integration Services Slowly Changing Dimension transformation to move data from a SQL Server 2000 database table to a SQL Server 2005 table.

The problem is that the package is not tracking changes correctly: it spends a lot of time doing lookups (it's slow), but ends up creating new records even when there has not been a change.

I'm quite sure the business key is set up correctly (I'm using the PK from the source table).

The database I am transferring from has non-Unicode data types (i.e. varchar and char) and the destination database has Unicode data types (i.e. nvarchar).

Also, some of the fields in the DB are NULL - does this have an effect (i.e. one NULL doesn't equal another NULL)? Or shouldn't that matter?

|||This thread sorted out his issue:

http://forums.microsoft.com/MSDN/showpost.aspx?postid=1475782&siteid=1
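On the NULL question: yes, it matters. In T-SQL (and in the comparisons the SCD lookup performs), one NULL does not equal another NULL, so rows whose compared columns are both NULL can be treated as changed. A hedged sketch - the table and column names below are invented for illustration:

```sql
-- With ANSI_NULLS ON (the default), NULL = NULL evaluates to UNKNOWN,
-- so the comparison does not count as a match.
SELECT CASE WHEN NULL = NULL THEN 'match' ELSE 'no match' END AS result;
-- returns 'no match'

-- One common workaround (illustrative names): cast the non-Unicode source
-- columns to the destination's Unicode types and coalesce NULLs in the
-- source query, so the SCD comparison sees identical, non-NULL values.
SELECT CustomerID,  -- business key
       CAST(ISNULL(CustomerName, '') AS NVARCHAR(50)) AS CustomerName
FROM   dbo.SourceCustomer;
```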

AS 2005 References/Books

Hi All -

I have come across the need to use a cube in AS 2005 for an application. Unfortunately this will be the first time writing an ASP.Net application which uses one. I was wondering if any of you had any good books or references that I could look at to get an idea of how I should approach this issue.

Thanks in advance.

Have a look at ISBN: 0470176377 ; ISBN13: 9780470176375 at http://www.compman.co.uk/scripts/browse.asp?ref=834216&source=A94

Also look at http://www.codeproject.com/cs/database/CubesAdomd.asp An article on Cubes, MDX, Analysis Services and ADOMD in .NET
and http://www.codeproject.com/cs/database/MyMdDataReader.asp This code replaces the buggy ADOMD.NET DataAdapter and DataReader.

|||Thanks for the help. I will look into them.

AS 2005 modeling question

Hello everybody!

We have three fact-tables with a classic hierarchical order:
-customer table
-order table
-order positions

One customer has multiple orders, and one order has multiple order positions.

We also have one dimension, articles, related to the positions table.

Furthermore, we defined a count measure for each measure group.

The question is: how can we connect the articles dimension (related to the order positions measure group) to the customer measure group to get a count over customers? We tried to use m:n relations, but it was not very fast (10 million fact rows).

Any ideas on how to model this 1:n scenario in a fast and clever way ;-)?

Thanks in advance
Ole J.

I am not sure what you mean about the customers table being a fact table - that is a dimension table. Perhaps you can explain further?

Next: in the order positions table, you only have to add the order number and the customer number as keys, like this:

-CustomerKey

-OrderKey

-OrderItemKey

-Measure.

You will do this in the ETL process with SSIS (Integration Services) when you load your fact table.

Perhaps your mistake is that you have separated orders and order line items into two different fact tables?

HTH

Thomas Ivarsson

|||Thanks Thomas for your first ideas. I would like to tell you a little bit more background.

The customer table is indeed a fact table, because it contains measures such as lifetime value and others... and furthermore, the focus of investigation is not only the customer name; the count of customers is more important.

The vision is to have a customer measure group with at least one measure customer count and one or more dimensions like customer sex or others...

And with order facts we want to analyze the count of orders (facts), the amount group (dimension) and so on.

Line items are interesting because of the count over line items, a dimension that filters some properties of the line items, and more.

Is it really necessary to join the tables before we build the cube? We thought that in AS2005 we could simply bring the measure groups together?

What we have done so far is denormalize the tables in the following way:

Customer table:
CustomerID, CustomerSexID

Order table:
OrderID, AmountGroupID, CustomerID, CustomerSexID

OrderItems table:
OrderItemsID, OrderitemsDimensionID, OrderID, AmountGroupID, CustomerID, CustomerSexID

In BI Studio it looks like this:

Dimension               Customer   Order   OrderItems
CustomerSex                X         X         X
OrderAmountGroup                     X         X
OrderItemsDimension                            X

So if you analyze the dimension CustomerSex right now, you get the correct results for:
count customer
count order
count orderitems

But if you want to analyze the OrderItemsDimension, you only get correct results for the OrderItems count, because the Orders and Customers measure groups are not related to the OrderItemsDimension. So the question is: how can you go back from the OrderItems count to get results for the order or customer counts?

I hope you can now better imagine what I mean. Any ideas?

Thanks in advance,
Ole J.|||Any ideas?

Thanks in advance!

Ole J.|||

Hello Ole. This looks like a complicated project. I am unable to see the complete picture of what you would like to do.

I am not sure if your best choice is to do data mining, build a cube or combine these approaches.

My recommendation is to have a look at www.Kimball.group and see if there are any design tips that can help you.

You can also have a look here (http://www.sqlserverdatamining.com).

In SSAS2005 the customer sex is an attribute within the customer dimension, so I do not understand why you would build this as a separate dimension.

Regards

Thomas Ivarsson

|||

If I understand the situation, you already have CustomerSex as a reference dimension in the OrderItems measure group, and you just want to control the behavior of the values in the Customer and Order measure groups when an order item is specified in the query. (The cube wizard should create this design for you if you specify each of the three tables as both fact and dimension tables.) Try using the IgnoreUnrelatedDimensions property on the measure groups (see http://msdn2.microsoft.com/en-us/library/microsoft.analysisservices.measuregroup.ignoreunrelateddimensions.aspx).
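To make the property's effect visible, here is a hedged MDX sketch using hypothetical names in the spirit of the modeling discussion above (the cube, measure, and dimension names are invented): with IgnoreUnrelatedDimensions set to True on the Customer and Order measure groups, their measures repeat their totals across members of the unrelated OrderItemsDimension instead of returning (null).

```mdx
// Illustrative query only; all object names are hypothetical.
SELECT { [Measures].[Customer Count],
         [Measures].[Order Count],
         [Measures].[OrderItems Count] } ON COLUMNS,
       [OrderItemsDimension].[OrderItemsDimension].MEMBERS ON ROWS
FROM [Sales]
```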

Hope this helps

AS 2005 Deployment Option

For the 3 available options:

Role deployment options

The <project name>.deploymentoptions file specifies one of the following role deployment options:

Existing roles and role members in the destination database are retained, and only new roles and role members are deployed.

All existing roles and members in the destination database are replaced by the roles and members being deployed.

Existing roles and role members in the destination database are retained, and no new roles are deployed.

The value that appears in the .deploymentoptions file for the second option is

<RoleDeployment>DeployRolesRetainMembers</RoleDeployment>

What are the labels for the other two options?

I tried searching online and also in the latest BOL, but couldn't find it.

As a workaround, I am using the numeric value directly, i.e. 0, 1, 2, where DeployRolesRetainMembers = 1. This works fine, but we would prefer to use the more informative label.

Thanks.

Try running the Deployment Wizard application and looking at these options there, to see what values the wizard generates for you.

Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

|||

I wish it were that simple. The Deployment Wizard does not offer the option when you create the deployable; it offers it when you execute the deployable, but it does not save the chosen value during deployment.

Anyway, I got the answers from Premier Support:

DeployRolesAndMembers (value 0): Any existing roles and members will be replaced.

DeployRolesRetainMembers (value 1): Deploy roles and retain members. Roles will be deployed along with their members for new roles. Members for existing roles will be retained.

RetainRoles (value 2): Retain roles and members. The roles and members will not be deployed.
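Putting the three labels together, the <RoleDeployment> element of the <project name>.deploymentoptions file would carry one of the following values (a sketch showing each option with its numeric equivalent; the file contains other elements as well):

```xml
<!-- Value 0: replace all existing roles and members -->
<RoleDeployment>DeployRolesAndMembers</RoleDeployment>

<!-- Value 1: deploy roles, retain members of existing roles -->
<RoleDeployment>DeployRolesRetainMembers</RoleDeployment>

<!-- Value 2: retain existing roles and members; deploy no new roles -->
<RoleDeployment>RetainRoles</RoleDeployment>
```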

AS 2005 cube browsing with an unprocessed measure group

Hi,

Is it possible to browse an AS 2005 cube in the cube browser without having processed one of the measure groups?

These measure groups only share a few dimensions and are independent of each other.

Is there a way to browse the cube without having to delete the unprocessed measure group in the AS project/Mgmt studio? There is no real option to disable a measure group though.

Regards

Yes. You should be able to clear one of the measure groups in your cube and still be able to browse the cube. You can send a ProcessClear command for that.
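A ProcessClear command for a single measure group can be sent from an XMLA query window in Management Studio; a minimal sketch, with placeholder IDs:

```xml
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>MyDatabase</DatabaseID>
    <CubeID>MyCube</CubeID>
    <MeasureGroupID>MyMeasureGroup</MeasureGroupID>
  </Object>
  <Type>ProcessClear</Type>
</Process>
```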

Another idea is to create a Perspective and select into it only the measures and dimensions you are interested in showing to your users.

Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

AS 2005 Business Intelligence Wizard scripts

In Analysis Services 2005, using the Business Intelligence Wizard in the BI Development Studio to add time calculations to a cube works fine when only base measures are included in the scope. When a calculated measure is added to the scope of the Business Intelligence Wizard time calculation, all that is returned for the time calculation value is “NA”. What could be the problem, and how can I make this work correctly?

The exact same time calculation was being created by the Business Intelligence Wizard - the only difference was adding one calculated measure to the scope. Could this have anything to do with pass and solve order? Do the measures that the calculated measure depends on also have to be listed in the scope for the BI Development Studio time calculation? I'm sort of reaching here.

Any help is appreciated,
Dan

Dan,

There are two separate issues, both of which have been fixed for the next CTP:

1. A redundant scope is created when both physical and calculated measures are chosen.
2. Seeing NA in the context of calculated measures.

The only *possible* workaround I know of for issue 2 is to edit the MDX script and replace the AGGREGATE function with SUM, but this should only be done if you are positive that it makes sense to sum the measures in question. Applying this change may then cause you to hit issue 1, at which point you would, if needed, want to remove the first (outer) scope and move those physical measures into the inline scopes (on the left-hand side of the '=').

-rob|||Thanks for the thorough explanation. I will attempt to apply the solution(s).

Dan
|||Is a CTP planned between now and the final release?|||Yes, there will be another CTP released in the near future.

-rob

AS 2005 Business Intelligence Wizard - rerunning to add time calculations

In Analysis Services 2005, once you’ve used the Business Intelligence Wizard in the BI Development Studio to add time calculations to a cube, how do you come back later and add additional standard time calculations based on the named calculation that was already created in the cube’s data source view by the prior run of the Wizard?

|||Kendal,

Assuming I understand you correctly, you want to add additional custom time calculations after running the Time Intelligence wizard...

The wizard creates a calculated column in your time dimension table within the DSV. It then adds a new attribute hierarchy to your time dimension based on this new attribute. This serves as a "holding point" for the calculations. It then creates the calculations you selected in the calculation script for the cube, using the following steps:

- it starts the calculations by adding new members to the attribute hierarchy with a default value of "NA"
- the script then uses a "scope" statement to scope the calculations to the measures you selected in the wizard
- each of the calculations you selected in the wizard is then implemented by using an MDX expression to define the values of the new calculated member across the time hierarchy the calculations apply to
- finally, an "end scope" statement closes things out

If you review the calculation script for the cube, you should see this at the end of the script after running the Time Intelligence wizard.

To add new custom calculations to the same attribute hierarchy created by the wizard, you can just add new calculated members to the hierarchy and add the custom MDX expressions needed. To add a new member, simply add a new "Create Member" statement before the "scope" statement, using the same syntax and construction as the ones added by the wizard. Then, add a new calculation after the "scope" statement that specifies the MDX expression that should be used to calculate values for the new calculated member. Use a syntax similar to that used for the existing calculations.
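As a hedged sketch of the steps just described (the hierarchy, member, and measure names below are placeholders, not the ones the wizard generates for your cube), the additions to the calculation script would look like:

```mdx
// New calculated member on the wizard-created attribute hierarchy,
// added before the wizard's existing Scope statement.
Create Member CurrentCube.[Time].[Time Calculations].[Prior Period]
    As "NA";

// After the Scope statement over the selected measures, assign the
// member's values with your own MDX expression.
( [Time].[Time Calculations].[Prior Period],
  [Time].[Calendar].[Month].Members ) =
    ( [Time].[Time Calculations].DefaultMember,
      [Time].[Calendar].CurrentMember.PrevMember );
```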

You should note that any calculated members that you add in this manner will be restricted to the same scope as the ones added by the wizard -- meaning the calculated member will only return values for the same list of members selected in the wizard. You can use variations of this technique with your own scope statements to change this.

Hope this helps and is understandable :-)

Dave Fackler
|||It does help. I had been thinking of the wizard as more of a development tool that would allow me to iteratively refine my cube. However, at this point in time, it serves more to codify best practices and generate examples of how to implement certain features in the context of the cube.

As is, it certainly is a big benefit.

Thanks for the clarification.|||

Hi Dave

I have tried to 'add time intelligence', but when I browse the result I only get something in 'current time'. In 'year to date' I only get the value 'N/A'.

On my calculations I have nothing in my 'parent hierarchies'. On the parent hierarchies I got an error:

the specified hierarchy [Time].[Year - Quarter - Month - Date Time Calculations] does not exist in the cube.

Scope(
  { [Measures].[Invoice Net Sales Amt] }
)
( [Time].[Year - Quarter - Month - Date Time Calculations].[Quarter to Date],
  [Time].[Quarter].[Quarter].Members ) =
  Aggregate(
    { [Time].[Year - Quarter - Month - Date Time Calculations].DefaultMember } *
    PeriodsToDate(
      [Time].[Year - Quarter - Month - Date].[QuarterName],
      [Time].[Year - Quarter - Month - Date].CurrentMember
    )
  )
End Scope

|||

Some of the calculations produced by the Time Intelligence Wizard, including the Year to Date ones, don't work - see
http://spaces.msn.com/cwebbbi/blog/cns!7B84B0F2C239489A!379.entry

Hopefully this will be fixed in SP1.

Chris

|||

I'm running SP1 on Standard Edition. I'm also using the code (below) that's recommended as a workaround but still get the same problem. NAs are displayed when I try to browse the data for this measure / calculation combo. All I've done is put the code into my calculations script on Adventure Works. Current Date (which I assume is a hidden calculation) works.

Is there any other issue that could be causing this or is there something I've missed? Any help on this would be much appreciated.

Create Member
  CurrentCube.[Date].[Fiscal Date Calculations].[Year to Date]
  As "NA" ;

Scope(
  { [Measures].[Internet Sales Amount],
    [Measures].[Reseller Sales Amount] }
);

( [Date].[Fiscal Date Calculations].[Year to Date],
  [Date].[Fiscal Year].[Fiscal Year].Members ) =
  Aggregate(
    { [Date].[Fiscal Date Calculations].DefaultMember } *
    PeriodsToDate(
      [Date].[Fiscal].[Fiscal Year],
      [Date].[Fiscal].CurrentMember
    )
  ) ;

AS 2005 browse cube problem

I have successfully deployed an Analysis Services project on Analysis Server 2005 with SP1. However, when I try to browse the cube using either BI Studio or SQL Server Management Studio, everything hangs. Dimension browsing and cube processing work fine!
Any ideas?
Thanks, Makis

You can use Profiler to trace against Analysis Services 2005. I suggest you give that a try to examine what is happening behind the scenes.
David Lundell
Principal Consultant and Trainer
www.MutuallyBeneficial.com
David@.MutuallyBeneficial.com

|||Thank you David for your answer.
After all I realized that my firewall created the problem.
Makis

AS 2005 - errors after password change

Hello all

I changed the password that I use in AS 2005 to connect to my SQL Server database. I changed it in the data source, so when I go into the connection there, I can connect properly (when I click the Edit button for the Connection String, and then hit Test Connection).

However, when I go to a dimension and try to process it, I get the error:

OLE DB error: OLE DB or ODBC error: Login failed for user '[MyUserName]'.; 42000.

Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of '[MyDatabaseName]', Name of '[MyDatabaseName]'.

I've tried all the ImpersonationInfo options - use specific name and password, use service account, credentials of the current user, and default. When I use the "specific name and password" option, it never saves the password information.

Also, in the Connection Manager, under SQL Server Authentication, there is an option to save the password. It also NEVER saves the password there. So, when I first input my login and password info and hit 'test connection', it succeeds. Then, when I click OK, shut down the Data Sources properties, and reopen it, the password has disappeared.

What am I doing wrong? Where else do I need to change the password?

Thanks,
Sylvia

You are doing nothing wrong.

What's more, Analysis Server does save your password as soon as you deploy your solution to the server. The problem is with security.

Once you've provided your password to SQL Management Studio and saved your data source definition, Analysis Server saves the password in encrypted form. But to adhere to the highest standards of security, Analysis Server will never send the password out. Even if SQL Management Studio tries to display the data source dialog for you, Analysis Server will provide SSMS with all the data source properties but not the real user password.

So for you it should be a simple matter of editing the data source in SQL Management Studio, saving it, and trying to process your dimension or partition again.

Edward Melomed.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

|||

Thank you very much for your reply.

I'm not quite following what I need to do here in order to be able to process my dimensions and cubes.

I've gone into the SQL Management Studio, into the one data source I have, gone into Properties, edited the Connection String by clicking the three dots. That brings up the Connection Manager. This is where I choose SQL Server authentication, put in a user name and password, and then click Test Connection, which always succeeds. I click OK once for Connection Manager, OK again on the Data Source properties. There's no option to save that I can see.

Then, when I do the same thing again, the password is blank, and I cannot connect without retyping it. And I cannot process a dimension. I'm almost certain I used to get a failure that was something like "password incorrect for login (mylogin)", but now I'm getting the following:

Errors and Warnings from Response
The following system error occurred: The parameter is incorrect. .
Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'XXX', Name of 'XXXX'.
Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'DashboardNew', Name of 'XXXX 'was being processed.
thanks,

FYI - this is from a database migrated from AS 2000.

In Microsoft Visual Studio, I get this error:

The following system error occurred: Logon failure: unknown user name or bad password. .
Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'XXXX' , Name of 'XXXX' .
Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'XXXX , Name of 'XXXX' was being processed.

Where else do I need to save connection information? This worked previously; it's just after I changed the SQL Server login password that I'm getting errors.

Sylvia

|||

Hi, can someone help with the error I am facing?

<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
<Parallel>
<Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<Object>
<DatabaseID>Dummy1</DatabaseID>
<CubeID>Puts</CubeID>
</Object>
<Type>ProcessFull</Type>
<WriteBackTableCreation>UseExisting</WriteBackTableCreation>
</Process>
</Parallel>
</Batch>
Processing Dimension 'User' completed successfully.
Start time: 10/17/2006 8:33:13 PM; End time: 10/17/2006 8:33:39 PM; Duration: 0:00:26
Processing Dimension Attribute '(All)' completed successfully.
Start time: 10/17/2006 8:33:13 PM; End time: 10/17/2006 8:33:14 PM; Duration: 0:00:01
Processing Dimension Attribute 'USER NAME' completed successfully.
Start time: 10/17/2006 8:33:14 PM; End time: 10/17/2006 8:33:39 PM; Duration: 0:00:25
Errors and Warnings from Response
The following system error occurred: Logon failure: unknown user name or bad password. .
Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'Puts', Name of 'Puts'.
Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'User', Name of 'User' was being processed.
Errors in the OLAP storage engine: An error occurred while the 'USER NAME' attribute of the 'User' dimension from the 'Dummy1' database was being processed.

1) My connection to the database is well established, and I can verify that from the data source.

2) Using the data source view, when I go to tables, right-click, and choose Explore, I can see the data in BI Management Studio.

So when I Process the cube, or when I run Deploy, it gives the above error.

Could anyone please help me out.

Thanks so much,

Kushal

kushal_k@.hotmail.com


|||

A couple of simple ways to deal with the problem:

1. Use Windows authentication in SQL Server.

2. If you can't: script your data source object in SQL Management Studio, enter the username and password for the connection there, and send the script back to the server.

Edward Melomed.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

|||

Thanks Edward.

For the second step, could you please be a bit more clear, as I am new to this IDE?

Does it mean that I need to create a Data Source first in SQL Server 2005 Analysis Services and then use the generated script in BI Management Studio?

Thanks,

Kushal

|||

No problem.

After you've created your project in BI Dev Studio and tried to deploy, the database has been created on the Analysis Server.

Open SQL Management Studio.
Connect to the Analysis Server.
Navigate to your data source.
Right-click it and choose Script Data Source As->Alter To->New Query Editor Window.
In the script that appears, you can modify the connection string to contain the username and password of your choice.
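For orientation, the generated Alter script looks roughly like the sketch below (the Dummy1/Puts IDs reuse names from the XMLA batch earlier in the thread; the server, catalog, and credential values are placeholders, and the real generated script will carry additional properties such as timeouts and impersonation settings). The part to edit is the ConnectionString element; then execute the script against the Analysis Server:

```xml
<Alter ObjectExpansion="ObjectProperties"
       xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>Dummy1</DatabaseID>
    <DataSourceID>Puts</DataSourceID>
  </Object>
  <ObjectDefinition>
    <DataSource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                xsi:type="RelationalDataSource">
      <ID>Puts</ID>
      <Name>Puts</Name>
      <!-- Add User ID and Password here before executing -->
      <ConnectionString>Provider=SQLNCLI.1;Data Source=myServer;Initial Catalog=myRelationalDb;User ID=myLogin;Password=myPassword</ConnectionString>
    </DataSource>
  </ObjectDefinition>
</Alter>
```

Because the script is sent back as a whole, the password travels with it this time, and the server stores it encrypted as described above.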

Edward Melomed.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

|||

Thanks a lot, Edward. Your help was indeed great.

It's working fine now.

Cheers!!!

Kushal

AS 2005 - errors after password change

Hello all

I changed the password that I use in AS 2005 to connect to my SQL Server database. I changed it in the data source, so when I go into the connection there, I can
connect properly (when I click the Edit button for the Connection
String, and then hit Test Connection).

However, when I go to a dimension, and try to process it, I get the
error:

OLE DB error: OLE DB or ODBC error: Login failed for user
'[MyUserName]'.; 42000.

Errors in the high-level relational engine. A connection could not be
made to the data source with the DataSourceID of '[MyDatabaseName]',
Name of '[MyDatabaseName]'.

I've tried all the ImpersonationInfo options - use specific name and
password, use service account, credentials of the current user, and
default. When I use the "specific name and password", it never saves the
password information.

Also, in the Connection Manager, under SQL Server Authentication, it
has an option to save password. It also NEVER saves the password in
there. So, when I first input my login, password info, and hit 'test
connection', it succeeds. Then, when I click ok, then shut down the
Data Sources properties, and then reopen it, the password has
disappeared.

What am I doing wrong? Where else do I need to change the password?

Thanks,
Sylvia

Hello. Are Analysis Services and the database on the same machine? If so, you can try changing the account that these services run under to a new local account that you create on this machine.

If it is your laptop and not a production installation, you can set this account's password to never expire and add it to your local admin group. Change each service's account to this account and enter the password.

After this switch you should be able to use Windows integrated security in your connections. It will not work if you need to connect to services on other machines/servers.

I am not sure that it will work but give it a try.

Regards

Thomas Ivarsson

|||

See my reply to your other post.

Edward Melomed.
--
This posting is provided "AS IS" with no warranties, and confers no rights.


AS 2000, Excel 2003 and calculated measures

Hi

I have created a calculated measure which displays what I need when I browse it in Analysis Manager.

In my Excel pivot table I cannot see the field when I Show Field List. Is it possible to have calculated members in Excel 2003?

however, my initial problem is this (and I am hoping someone can come up with a better solution):

I have a fact table that is simply the number of hours worked per month, linked to dimensions such as the customer. Each customer has a budgeted time (which is on the customer dimension), and I need to include this budget.

I have a separate cube that holds the detail of the budget, and I have tried creating a virtual cube. The problem with this is that the budget is only visible if I use dimensions from its cube, and the hours are only visible when I use dimensions from their cube; I would like to see them together. (The dimensions in both cubes are identical.)

Anyone?

Steve

Sounds like you have a bunch of private dimensions in your cubes. It's been a while since I worked on 2K, but I think global dimensions (dimensions that can be shared in virtual cubes) are called "shared dimensions". If you used the wizard to create the dimensions and cubes, I think it defaults to adding dimensions as private.

|||

That may solve the virtual cube issue - thanks.

Any idea why I cannot see a calculated member in Excel while I can see it in the MMC browser?

Thanks

|||

Shared dimensions and virtual cubes work a treat - thanks.

Apparently, a calculated measure that is based on a dimension table does not appear in Excel 2003!!

AS 2000 with 64 bit OS

I moved one of my AS 2000 OLAP databases to a new server running the 64-bit Windows 2003 OS, with a lot more processing power than the old 32-bit Windows server.

The good news is that cube processing now takes 50% less time, but my query performance is slow; I was expecting queries to fly.

I am running the 32-bit AS 2000 on the 64-bit Windows server, with the same settings in AS 2000 as on the old server.

Thank you - Ashok

Well, first of all, if you are running IA-64, you should install the 64-bit version of Analysis Services and run natively.

Second, running on a higher-end machine doesn't necessarily mean single-query performance is going to improve drastically. With higher-end hardware you should get higher throughput and the ability to handle larger databases.
For a single query you might be far better off creating additional aggregations that support your query patterns. With processing times improving, you should be able to afford a higher percentage of aggregations, which should improve performance.

Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

AS 2000 DTS data extraction from cube to flat file

Currently we have a DTS package that extracts data from a cube to a SQL Server table, and then from SQL Server to a flat file.

This is taking a considerable time; is there a faster way of doing this?

Thanks.
Mark.

|||Moving to Integration Services forum

AS 2000 DSO loses connection

We use DSO with ASP.NET 1.1 via the COM layer through interop. However, sometimes after a while the connection can no longer be established. The error occurs somewhere in the following code, most likely the first line:

dsoServer.Connect ServerName
Set m_dsoServer = dsoServer

If m_dsoServer.MDStores.Find(mvarDBName) Then
    Set m_dsoDatabase = m_dsoServer.MDStores(mvarDBName)
Else
    ' Don't do anything for now
End If

The error reads: Error: Cannot connect to the repository.

Restarting IIS fixes this problem until it recurs. Any idea?

Thanks,
Boris Zakharin

Calling DSO from ASP.NET is not really a supported scenario :(

Is your repository in Access or migrated to SQL Server ?

|||Access

|||Then please migrate it to SQL Server. There are plenty of benefits in doing so, and it also might help in your scenario, although, as I already mentioned, it is not a supported one.

|||Could you point me to where it says that using DSO from ASP.NET is not supported?
I have found at least one KB article talking about how to do this very thing (http://support.microsoft.com/kb/823066). Does this not mean that it is in fact supported?

|||It's more like there is no documentation which states that DSO is supported from ASP.NET. (And up to AS 2000 it wasn't supported even from ASP.) Microsoft doesn't ship a PIA for DSO, so using DSO from any .NET application is not supported, although technically you can build the interop assembly yourself. Remember, DSO was built in VB6 for VB6 programmers, before .NET was invented, and there are many potential issues around integration between the two.

|||So, what is the supported way to access AS 2000 data from ASP.NET?

|||In order to access data you should use ADOMD.NET. DSO is for administrator operations.

|||I assume things like creating a cube and changing a filter are considered administrative. We allow users to do that from our web application.

|||

I am not sure what you mean by "changing a filter".

Creating cubes from a web application is a heavyweight operation. You may want to consider migrating to AS 2005, where you could either use AMO from ASP.NET or generate XMLA scripts directly.

AS / Creating cubes

Hi,
I have started to learn Analysis Services. I did the tutorial step by step,
but the last step, "create the 'Sales' cube", doesn't work. I get the error
message "Unable to create cube 'sales'" - Unspecified error. It doesn't even
work with the cube created manually in FoodMart. What is wrong here? I have
also installed SP3a.
My other question: is it possible to create diagrams in AS? If not,
how do you create your database diagrams?
Many thanks in advance
Do you belong to the OLAP Administrators group? Do you have write permission on the OLAP
cube data directory?
Eric Li
SQL DBA
MCDBA
Stefan wrote:

> Hi,
> I have started to learn Analysis Services. I did the tutorial step by step,
> but the last step "create the 'Sales' cube" doesn't work. I get the error
> message "Unable to create cube 'sales'" - Unspecified error. It doesn't even
> work with the manuelly in footmart creatd cube. What is wrong here? I have
> also installed sp3a.
> My other question is that is it possible to create diagramms in AS? If no,
> how do you create you database diagramms?
> Many thanks in advance
>


Articles or Blogs

I am aware of how symmetric and asymmetric keys are used in SQL.
Can someone point me to good articles on what symmetric and asymmetric
keys are?
thx

|||http://support.microsoft.com/defaul...b;EN-US;q246071
or google: symmetric and asymmetric keys
"skg" <skg@.yahoo.com> wrote in message
news:ePKOm9eSGHA.4740@.TK2MSFTNGP14.phx.gbl...
>I am aware how symmetric and assymetric keys are used in sql.
> Can some one point me to good articles on what are symmetric and
> assymetric keys ?.
> thx
>|||skg wrote:
> I am aware how symmetric and assymetric keys are used in sql.
> Can some one point me to good articles on what are symmetric and assymetric
> keys ?.
> thx
Take a look at some of the stuff in Laurentiu Cristofor's blog.
http://blogs.msdn.com/lcris/
David Portas, SQL Server MVP
Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.
SQL Server Books Online:
http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx
--

|||Thanks All.
"David Portas" <REMOVE_BEFORE_REPLYING_dportas@.acm.org> wrote in message
news:1142625992.469109.200260@.j33g2000cwa.googlegroups.com...
> skg wrote:
> Take a look at some of the stuff in Laurentiu Cristofor's blog.
> http://blogs.msdn.com/lcris/
> --
> David Portas, SQL Server MVP
> Whenever possible please post enough code to reproduce your problem.
> Including CREATE TABLE and INSERT statements usually helps.
> State what version of SQL Server you are using and specify the content
> of any error messages.
> SQL Server Books Online:
> http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx
> --
>

Articles on SQL Server capacity

I have been doing some reading for a project comparing various RDBMSs, and I
have read that SQL Server is not very stable for DBs over 300GB, but I have
also read about terabyte-plus databases running just fine. Are there any
recommendations or articles from Microsoft about the capacity of SQL Server
(2000 or 2005)?

|||Hi
Probably you can check this:
http://chanduas.blogspot.com/2005/05/maximum-capacity-specifications.html
please let me know if you have any questions
--
best Regards,
Chandra
http://chanduas.blogspot.com/
http://groups.msn.com/SQLResource/
---
"Robert Kinesta" wrote:
> I have been doing some reading for a project comparing various RDBMS and I
> have read that SQL Server is not very stable for DBs over 300GB but I have
> also read about terrabyte-plus databases running just fine. Are there any
> recommendations or articles from Microsoft about the capacity of SQL Server
> (2000 or 2005)?

|||> I have read that SQL Server is not very stable for DBs over 300GB
Sounds like either advocacy or marketing misinformation. I have worked
on and know of many SQL Server databases that are much larger. Equally,
bad design or improper hardware can defeat a database of any size.
Take a look at:
http://www.microsoft.com/sql/techinfo/administration/2000/scalability.mspx
--
David Portas
SQL Server MVP
--



articles on sql 2005

Hi all,
I'm looking for good articles to learn about administration and
development on SQL 2005.
Thanks

|||Not an article, but the book Pro SQL Server 2005 is the best I've found thus
far to show all the new features.
ISBN: 1-59059-477-0
"noam" <imss@.netvision.net.il> wrote in message
news:1150633683.695303.17100@.y41g2000cwy.googlegroups.com...
> Hi all
> i'm looking for good articles to learned about administration and
> develope on sql 2005
> Thanks
>

Articles on performance comparisons of sql2k5 on x32 and x64

We are looking for performance comparisons between SQL Server 2005 x32 and
x64.
Any out there?

|||Hi,
Microsoft has a webcast showing the perf differences; watch that and
you'll see SQL 2005 x64 SCREEEEEEEM, yeah a huge difference by many
miles.
64-bit has monster performance gains.
Regards
Anthony
--
9anthony9
----
9anthony9's Profile: http://www.dbtalk.net/m12
View this thread: http://www.dbtalk.net/t290479


Articles on foreach loop container over dataflow task?

Hi everyone,

Do you know of any articles on a Foreach Loop container that loops over a data flow task? Please tell me.

thanks in advance,

Praveen Dayanithi wrote:

Hi everyone,

do you know any articles on foreach loop container that loops over a dataflow task...pls tell me....

thanks in advance,

I presume that you mean a ForEach loop that contains a data flow.

Enumerating files in a Foreach loop
(http://blogs.conchango.com/jamiethomson/archive/2005/05/30/1489.aspx)

-Jamie

|||

Hi,

Thanks Jamie...

But I was looking for articles that involve a Foreach Loop container that has a data flow task inside it.

By saying data flow task, I mean I am going to loop over all the records in the recordset produced by the data flow task.

Or you can tell me how to solve this issue:

I need to fetch data from table2 for every record in the recordset, which is the merge-join result set of table1 and table2.

Thanks in advance,

Articles in more than one Category - How to organize tables?

Greetings,
I have one table, named Article, and one table name Category.
The problem is, one Article could be in just one or in several categories.
What is the best way to relate Article and Category data for fast search performance?
I have several ideas:
1. Have a third cross-reference table, Article_Category, with fields Article_ID and Category_ID, and search the Article_Category table.
2. Have several INTEGER columns in the Article table (like Category_ID1, Category_ID2, ...) and search those columns.
3. Add one VARCHAR field in the Article table where I could write category IDs delimited by some character (e.g. by a comma), and do a text search on only that column.
What is recommended for solving problems like this?

Option 1 makes the most sense. This is how many-to-many relationships are generally implemented.
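As a quick illustration of option 1, here is a sketch of the junction table and the join used to fetch a category's articles. Table and column names follow the poster's Article/Category/Article_Category; the sample rows and the SQLite-flavored SQL are made up for demonstration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Option 1: the junction (cross-reference) table holds one row per
# (article, category) pair, so an article can sit in any number of
# categories and a category can hold any number of articles.
cur.executescript("""
CREATE TABLE Article  (Article_ID  INTEGER PRIMARY KEY, Title TEXT);
CREATE TABLE Category (Category_ID INTEGER PRIMARY KEY, Name  TEXT);
CREATE TABLE Article_Category (
    Article_ID  INTEGER REFERENCES Article(Article_ID),
    Category_ID INTEGER REFERENCES Category(Category_ID),
    PRIMARY KEY (Article_ID, Category_ID)
);
-- Index leading on Category_ID speeds up "articles in category X" searches.
CREATE INDEX IX_AC_Category ON Article_Category (Category_ID, Article_ID);

INSERT INTO Article  VALUES (1, 'Intro'), (2, 'Deep dive');
INSERT INTO Category VALUES (10, 'News'), (20, 'Tutorials');
-- Article 2 belongs to both categories.
INSERT INTO Article_Category VALUES (1, 10), (2, 10), (2, 20);
""")

# Inner joins through the junction table find all articles in a category.
cur.execute("""
    SELECT a.Title
    FROM Article a
    JOIN Article_Category ac ON ac.Article_ID = a.Article_ID
    WHERE ac.Category_ID = ?
    ORDER BY a.Title
""", (10,))
print([row[0] for row in cur.fetchall()])  # ['Deep dive', 'Intro']
```

The same shape works in SQL Server 2000/2005; the composite primary key plus the category-leading index keep both directions of the lookup fast even at the poster's 500,000-article scale.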

|||Thanks for the advice, Adam.

|||Yes, that is how I implemented them; it works well. Then you can use an inner join to join the article info to the category info.

|||

Will you have:
Option A: Many-to-Many (category 1 has article 1 and 2, category 2 has article 3, etc)

Option B: 1-to-Many (Category 1 has article 1, category 2 has article 2, category 3 has article 5, etc)
If B, you can put your article_id in your category table. This would give you the fastest search performance and allow your joins to be simpler.

Nick

|||Hi bmains,
How large are your tables?
Do you run it on the Web with ASP.NET?
Are you satisfied with the search speed?

|||Hi Nick,
One category will have many articles;
one article could be in more than one category.

There are 500,000 articles and almost 2,000 categories.
Regards

|||

If you are talking about a join between the junction table and the two main tables, then no, I wouldn't worry about speed; it shouldn't be worse performance-wise. Though if you are talking about a lot of joins, then you need to worry more. Outer joins are worse than inner joins; inner joins aren't bad. Here we have a requirement to try to rewrite a query to use an inner join, if possible, whenever an outer join is used.

articles default

Hi all, I was trying to set the articles to "delete data in the existing table that matches the row filter" in the publication's "article > articles > table articles > snapshot" properties, but every time I reopen that property tab the publication restores to the default setting of
"drop existing table and recreate it".
I'm using SQL Server 2000, snapshot replication.
-- thanks, joey
You can make use of a dynamic snapshot to filter the data.

articles about data access using System.Data.SqlClient namespace?

Aaarrgghhhhhhh, still searchin' for a nice step-by-step article to learn data access (all the insert, delete and update commands) with the SQL Server .NET Data Provider found in the [System.Data.SqlClient] namespace...

Can anyone plz help me to find it?|||These might help:
http://samples.gotdotnet.com/quickstart/howto/doc/adoplus/ADOPlusOverview.aspx
http://www.stylusinc.net/technology/microsoft/ado.shtml
http://msdn.microsoft.com/library/en-us/cpref/html/frlrfSystemDataSqlClientSqlConnectionClassTopic.asp?frame=true
http://msdn.microsoft.com/data/downloads/samples/default.aspx

Also
Check out the Data Access Application Block for .NET, it creates a wrapper around SqlClient class for somewhat easier use.
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnbda/html/daab-rm.asp
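The core pattern those articles walk through (open a connection, run parameterized commands for insert/update/delete, read results) is the same across data providers. Here is a minimal analogous sketch using Python's built-in sqlite3 module; the table and values are invented, and with SqlClient the corresponding pieces are SqlConnection, SqlCommand with @-named parameters, and ExecuteNonQuery:

```python
import sqlite3

# Hypothetical table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")

# INSERT with placeholders instead of string concatenation
# (the equivalent of SqlParameter: safe against injection).
conn.execute("INSERT INTO customer (id, name) VALUES (?, ?)", (1, "Alice"))

# UPDATE
conn.execute("UPDATE customer SET name = ? WHERE id = ?", ("Alicia", 1))

# DELETE (a no-op here: id 2 does not exist)
conn.execute("DELETE FROM customer WHERE id = ?", (2,))

remaining = conn.execute("SELECT id, name FROM customer").fetchall()
print(remaining)
```

The Data Access Application Block mentioned above wraps exactly these steps behind helper methods.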

Article: Database Mirroring Not Ready for Production When SQL Server 2005 Ships

Can someone at Microsoft comment on this article, specifically, how it relates to SQL Replication? Will SQL Replication fall into the category of what this article describes, or only the mirroring feature?

Thanks,
Peter|||Yes, Database Mirroring won't be available when SQL Server 2005 ships. Please refer to this: http://www.microsoft.com/sql/2005/productinfo/letter.mspx. However, this does not affect replication at all.

article row filter - 2 parameters

hello,

i need to filter an article based on a user-supplied datetime filter (the datetime parameter is specified by the subscriber just before replication). at the same time i need to filter again by user (different subscribers get different rows).

i already did the user-based filter using HOST_NAME(). but the difficulty here (at least i think so) lies in passing 2 parameters to the filter. i cannot rely on SUSER_SNAME() to pass the user filter, because no one will want to create 500 user accounts. so i guess the only solution here is to pass both parameters through HOST_NAME() alone and then write 2 splitting functions that take HOST_NAME() as their parameter. am i right?

publisher/distributor is sql server 2005, all subscribers use sql mobile.

TIA, kamil nowicki
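The packing scheme described above (both values carried in the single HOST_NAME() string, pulled apart by two splitting functions) can be sketched like this; the '|' delimiter and the field order are assumptions for illustration, and in practice the two splitters would be T-SQL scalar UDFs called in the article's row filter:

```python
# Hypothetical packing of two filter parameters into the one string that
# HOST_NAME() would return on the publisher; '|' as delimiter is an assumption.
def pack_host_name(user: str, from_date: str) -> str:
    return f"{user}|{from_date}"

# The two "splitting functions" the poster proposes, one per parameter.
def split_user(host_name: str) -> str:
    return host_name.split("|", 1)[0]

def split_from_date(host_name: str) -> str:
    return host_name.split("|", 1)[1]

# The subscriber would set this via SqlCeReplication.HostName before syncing.
packed = pack_host_name("warehouse-07", "2005-10-01")
print(split_user(packed), split_from_date(packed))
```

Note the caveat raised in the reply below the original post: a datetime carried this way still makes the filter non-deterministic from merge replication's point of view.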

Using datetime would not be recommended since it is not deterministic.

For example:

let's say you have a filter to get only rows touched in the last 7 days.

Initially you get all matching rows.

After 7 days of not touching any rows, you would expect the merge agent to delete those rows on the subscriber, which will not be the case.

I would advise you to use SUSER_SNAME() or HOST_NAME() to get specific rows and in addition use another column 'status' or something that you can set/reset according to your business needs.

|||

thanks for replying.

i think that i have to use a datetime filter because the parameter to this filter has to be dynamic (worst case scenario: each subscriber uses a different filter parameter for each of its replication sessions). is there another way to accomplish this?

also, is there a way to store something in SUSER_SNAME(), like using SqlCeReplication.HostName to store something in HOST_NAME()?

|||

If each session of the subscriber uses a different filter, you will get an error with mismatched partitions. You would need to reinitialize the subscriber in that case. Are you ready to reinitialize the subscribers for every sync?

But note that as I mentioned previously, you will not be able to rely on datetime filter. Use a status column or something like that and update this column when you want rows to be in partition or out of partition.

|||

>> Use a status column or something like that and update this column when you want rows to be in partition or out of partition.

but that would mean that i have to use a "fixed filter" (same "replicate from ..." date for every subscriber), wouldn't it ?

|||How about: login=SUSER_SNAME and status='Y'|||

We have a best practice article for time-based filtering, you may want to reference it, it's what Mahesh is talking about:

Best Practices for Time-Based Row Filters

http://msdn2.microsoft.com/en-us/library/ms365153.aspx

|||

and how does that solve my scenario ? i want each subscriber to be able to choose a date ( "replicate from..." ) before each of his replication sessions. SP on the server will not know those dates when executed, so how am i supposed to update the "status" column ?

thanks for the link, i have read that before starting the thread.

|||

Are you saying that every time the subscriber syncs, it sends a new date and expects only the relevant rows? You cannot achieve this using the non-deterministic filter functions. If you are ready to reinitialize your subscriptions every time you sync, you may do that: do a reinit on the publisher/subscriber, then send in the appropriate date as the hostname to get the relevant rows. However, you won't be able to upload in this session because your filters may not match the data that you want to upload.

|||

i took a different approach and now everything is working as it should. i wrote an SP on the backend server which updates every row in the filtered article for a given user (the SP is parameterized with @user nvarchar and @filter datetime), changing a rep_status tinyint column. the SP is called from the subscriber on the distributor just before each replication session, so the SP has all the info it needs to update the filtered article (@user and @filter). the article is filtered by rep_status.

the drawback is that the backend has to be moved out of the LAN onto the internet (public IP), but later i will write a WebService so that all the data passes through IIS and the backend can be NATed again.

edit: and of course this works without subscription reinitialization :)
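The rep_status approach described above (a pre-sync SP flags each user's partition, and the article filters on that column alone, keeping the filter deterministic) can be sketched as follows; everything except the rep_status column name is invented for illustration:

```python
import sqlite3

# Hypothetical replicated table; rep_status = 1 marks rows that belong
# in this subscriber's partition for the upcoming sync.
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE orders (
    order_id   INTEGER PRIMARY KEY,
    owner      TEXT,      -- which subscriber the row belongs to
    changed_on TEXT,      -- ISO date the row was last touched
    rep_status INTEGER    -- 1 = in this subscriber's partition
)""")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?, 0)",
    [(1, "kamil", "2005-09-01"),
     (2, "kamil", "2005-10-10"),
     (3, "other", "2005-10-10")],
)

def mark_partition(user: str, replicate_from: str) -> None:
    """What the pre-sync SP does: flag the rows this user should receive."""
    conn.execute("UPDATE orders SET rep_status = 0 WHERE owner = ?", (user,))
    conn.execute(
        "UPDATE orders SET rep_status = 1 WHERE owner = ? AND changed_on >= ?",
        (user, replicate_from),
    )

mark_partition("kamil", "2005-10-01")
# The article's row filter then reduces to a simple rep_status test.
rows = conn.execute(
    "SELECT order_id FROM orders WHERE owner = ? AND rep_status = 1", ("kamil",)
).fetchall()
print(rows)
```

Because the datetime comparison happens once in the SP rather than inside the row filter, the filter itself stays deterministic and no reinitialization is needed.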

article request

can anybody tell me a useful online article, site or a book, to learn about SQL
Server internals|||On Thu, 13 Oct 2005 02:07:03 -0700, "enbee"
<enbee@.discussions.microsoft.com> wrote:
>can anybody tell me a useful online article,site or a book, to know about SQL
>server internals
You could do a lot worse than Kalen Delaney's book Inside SQL Server
2000.
If you are referring to SQL Server 2005, I believe an updated edition
is in preparation.
Andrew Watt
MVP - InfoPath|||Inside Microsoft SQL Server 2000
http://vyaskn.tripod.com/inside_microsoft_sql_server_2000.htm
Also, "The Guru's Guide to SQL Server Architecture and Internals"
--
HTH,
Vyas, MVP (SQL Server)
SQL Server Articles and Code Samples @. http://vyaskn.tripod.com/
"enbee" <enbee@.discussions.microsoft.com> wrote in message
news:9E5DBA88-D015-47FF-AC47-44AFFA4936CB@.microsoft.com...
can anybody tell me a useful online article,site or a book, to know about
SQL
server internals|||SQL Server magazine puts out a CD every 6 months with the latest and
greatest. Kalen Delaney, as one person noted, is certainly an authority on
SQL Server and there are loads of articles she's written. I use it
frequently and often and every chance I get.
A year's subscription (which includes online access to SQL Server magazine)
is about $60.
Cheers.
HTH
"enbee" wrote:
> can anybody tell me a useful online article,site or a book, to know about SQL
> server internals

article or some good reasons to kick devs out of production

My devs all have access to production. They always had and they feel they
should still just in case they need to resolve issues.
Are there any articles on how I can convince them to get out and stay out?
Let me know what I should tell them so I can revoke their access.
Thanks
If you don't trust someone, hand them specifically what you want them to work on
and nothing else.
I've worked on *MANY* large scale production systems. I've never had "100%
access to 100% of the project".
If they're your devs, "here's the way it is".
I recently pointed out a small bug in an application. would take 2 seconds to
fix. (typo in a dialog).
I am *NOT* to change anything w/o a bug being written up & published.
It was, but said "do not change it unless you're working on that module for
something else".
The problem will be there for upwards of 3+ months because that module is rock
solid. Point? it's what I was told to do as a developer.
Tell your devs they'll get access to what they need.
Now; once you do -- live with the fact that things don't happen as fast, and the
backlash of "I couldn't resolve the issue, so-and-so had rights, not me." You
will have created the situation.
On Mon, 30 Jan 2006 20:30:23 -0800, "Hassan" <Hassan@.hotmail.com> wrote:

>My devs all have access to production. They always had and they feel they
>should still just in case they need to resolve issues.
>Are there any articles on how I can convince them to get out and stay out
>Let me know what should i tell them so I can revoke their access
>Thanks
>
-- AntiSpam/harvest --
Remove X's to send email to me.
|||A hammer, baseball bat, or golf club come to mind. So does, "I'm
responsible for it now". There isn't an easy answer. This is pure politics
that can really be argued both ways. My basic answer is that unless someone
is going to be held accountable for maintaining a database, they shouldn't
have any administrative authority. "Just in case" is just an excuse for
"please don't take away my toys".
I usually fix it by simply handing developers a pager. When they ask what
that is for, I tell them that since they have access to the production
database, that also makes them responsible for it. Since they are
responsible for it, they also have to carry a pager and be woken up at 3AM
when things go wrong. Access is usually revoked within a day of doing that.

Mike
http://www.solidqualitylearning.com
Disclaimer: This communication is an original work and represents my sole
views on the subject. It does not represent the views of any other person
or entity either by inference or direct reference.
"Hassan" <Hassan@.hotmail.com> wrote in message
news:uK3TO8hJGHA.740@.TK2MSFTNGP12.phx.gbl...
> My devs all have access to production. They always had and they feel they
> should still just in case they need to resolve issues.
> Are there any articles on how I can convince them to get out and stay out
> Let me know what should i tell them so I can revoke their access
> Thanks
>
|||:-)
I like your style. I may have to try that myself (it's an ongoing issue
where I work - I'm pretty solid on the SQL side of things but not great
on the political side of things).
*mike hodgson*
http://sqlnerd.blogspot.com
Michael Hotek wrote:

>A hammer, baseball bat, or golf club come to mind. So does, "I'm
>responsible for it now". There isn't an easy answer. This is pure politics
>that can really be argued both ways. My basic answer is that unless someone
>is going to be held accountable for maintaining a database, they shouldn't
>have any administrative authority. "Just in case" is just an excuse for
>"please don't take away my toys".
>I usually fix it by simply handing developers a pager. When they ask what
>that is for, I tell them that since they have access to the production
>database, that also make them responsible for it. Since they are
>responsible for it, they also have to carry a pager and be woken up at 3AM
>when things go wrong. Access is usually revoked within a day of doing that.
>
>
>
|||You can state several reasons, and that depends upon:
1. Criticality of your data
- If some dev runs an update and messes up your data, you need to restore
the last backup and the transaction log from just before the update happened. Then you need to
keep your server offline; will that affect your business? Then that's a good
reason.
2. Size of your data
- If the data is huge, restoration of corrupt data can take much time.
3. Routine maintenance
- As admin you have routine maintenance, and you may also run processes
where the DB should be in single-user mode.
Again, why can't you go for an architecture where you have
=> Development Server
=> Integrated Server
=> Production Server
Take a daily backup from production and restore it to the Integrated Server. Give
devs access to the Integrated Server, so that they get recent data from
production. Do the fix there and apply it in production.
Thanks,
Sree
"Michael Hotek" wrote:

> A hammer, baseball bat, or golf club come to mind. So does, "I'm
> responsible for it now". There isn't an easy answer. This is pure politics
> that can really be argued both ways. My basic answer is that unless someone
> is going to be held accountable for maintaining a database, they shouldn't
> have any administrative authority. "Just in case" is just an excuse for
> "please don't take away my toys".
> I usually fix it by simply handing developers a pager. When they ask what
> that is for, I tell them that since they have access to the production
> database, that also make them responsible for it. Since they are
> responsible for it, they also have to carry a pager and be woken up at 3AM
> when things go wrong. Access is usually revoked within a day of doing that.
>
> --
> Mike
> http://www.solidqualitylearning.com
> Disclaimer: This communication is an original work and represents my sole
> views on the subject. It does not represent the views of any other person
> or entity either by inference or direct reference.
>
> "Hassan" <Hassan@.hotmail.com> wrote in message
> news:uK3TO8hJGHA.740@.TK2MSFTNGP12.phx.gbl...
>
>
|||Hi Mike
Something that I don't think has been specifically mentioned is
accountability, which is not the same as trust, because you can give developers
access to the production system, but only in a controlled manner, and only to
make changes that have been passed as fit, i.e. been through a test cycle.
The other thing is sensitivity, the more open the access is, then the
greater the risk of a violation. Whoever carries this risk should be aware of
the situation.
If your developers do not have access to a production system, there should
be an equivalent system that can be used if the need arises. If your data is
sensitive, then this alternative system can be de-sensitized.
John
"Mike Hodgson" wrote:

> :-)
> I like your style. I may have to try that myself (it's an ongoing issue
> where I work - I'm pretty solid on the SQL side of things but not great
> on the political side of things).
> --
> *mike hodgson*
> http://sqlnerd.blogspot.com
>
> Michael Hotek wrote:
>
|||That's really the basic point behind handing developers a pager or whatever
you use. People always seem to want administrative access to things without
having any of the responsibility. You simply make it a point that if they
want admin access, of any kind, then they are going to be just as
responsible for the system as you are. That means every time something goes
wrong on the system, they get yanked into it right along with everyone else.
The rest is left unsaid, because it doesn't need to be. It's making a very
fine point about why certain people have admin access and no one else does
without getting into a big argument about it.
When you boil it all down, there isn't a single reason that someone can come
up with that I couldn't argue both sides of. It has nothing to do with data
volume, operations that are executed, errors that are found, change control,
etc.
The only issue that can't be argued is responsibility. If you have admin
access, that means you are responsible for the system, period. If you give
someone admin access without also requiring them to be responsible for the
system, then you might as well get used to sleeping in your office.
Mike
http://www.solidqualitylearning.com
Disclaimer: This communication is an original work and represents my sole
views on the subject. It does not represent the views of any other person
or entity either by inference or direct reference.
"John Bell" <jbellnewsposts@.hotmail.com> wrote in message
news:57A3D790-44EB-403E-AFEB-7CC922094886@.microsoft.com...[vbcol=seagreen]
> Hi Mike
> Something that I don't think has been specifically mentioned is
> accountability, it is not the same as trust, because you can give
> developers
> access to the production system but only in a contolled manner, and only
> to
> make changes that have been passed as fit i.e. been through the a test
> cycle.
> The other thing is sensitivity, the more open the access is, then the
> greater the risk of a violation. Whoever carries this risk should be aware
> of
> the situation.
> If your developers do not have access to a production system, there should
> be an equivalent system that can be used if the need arises. If your data
> is
> sensitive, then this alternative system can be de-sensitized.
> John
>
> "Mike Hodgson" wrote:
|||Hi Mike
I think what I was pointing out was from the other direction, the developers
may want access to the company payroll, but will whoever is responsible for
it want them to have it? I.e. you may need to find the person who is
responsible and say "Do you want them to do this?"
John
"Michael Hotek" wrote:

> That's really the basic point behind handing developers a pager or whatever
> you use. People always seem to want administrative access to things without
> having any of the responsibility. You simply make it a point that if they
> want admin access, of any kind, then they are going to be just as
> responsible for the system as you are. That means every time something goes
> wrong on the system, they get yanked into it right along with everyone else.
> The rest is left unsaid, because it doesn't need to be. It's making a very
> fine point about why certain people have admin access and no one else does
> without getting into a big argument about it.
> When you boil it all down, there isn't a single reason that someone can come
> up with that I couldn't argue both sides of. It has nothing to do with data
> volume, operations that are executed, errors that are found, change control,
> etc.
> The only issue that can't be argued is responsiblity. If you have admin
> access, that means you are responsible for the system, period. If you give
> someone admin access without also requiring them to be responsible for the
> system, then you might as well get used to sleeping in your office.
> --
> Mike
> http://www.solidqualitylearning.com
> Disclaimer: This communication is an original work and represents my sole
> views on the subject. It does not represent the views of any other person
> or entity either by inference or direct reference.
>
> "John Bell" <jbellnewsposts@.hotmail.com> wrote in message
> news:57A3D790-44EB-403E-AFEB-7CC922094886@.microsoft.com...
>
>
|||On Mon, 30 Jan 2006 20:30:23 -0800, Hassan wrote:

>My devs all have access to production. They always had and they feel they
>should still just in case they need to resolve issues.
>Are there any articles on how I can convince them to get out and stay out
>Let me know what should i tell them so I can revoke their access
>Thanks
>
Hi Hassan,
A lot of good ideas are mentioned already. But I've missed one so far:
in many countries, privacy laws require you to limit access to
production data to only those who absolutely need it. Developers don't
need permanent access to production data.
(They may sometimes need temp access to prod when troubleshooting a
production error - they should use a special login that is normally
disabled, fully logged, and they should ideally be constantly monitored
while fixing the problem - though the last requirement is somewhat
unrealistic).
Hugo Kornelis, SQL Server MVP
|||I understand, that's a much different issue that's really easy to fix. When
you have sensitive data, that's a simple case to restrict access that no one
can argue with. It's the cases that don't fall into that category that are
more difficult.
Mike
http://www.solidqualitylearning.com
Disclaimer: This communication is an original work and represents my sole
views on the subject. It does not represent the views of any other person
or entity either by inference or direct reference.
"John Bell" <jbellnewsposts@.hotmail.com> wrote in message
news:2A73DAF3-A001-4927-94D7-051852672CA3@.microsoft.com...[vbcol=seagreen]
> Hi Mike
> I think what I was pointing out was from the other direction, the
> developers
> may want access to the company payrole, but will whoever is responsible
> for
> it want them to have it? I.e. you may need to find the person who is
> responsible and say "Do you want them to do this?"
> John
> "Michael Hotek" wrote:

article or some good reasons to kick devs out of production

My devs all have access to production. They always had and they feel they
should still just in case they need to resolve issues.
Are there any articles on how I can convince them to get out and stay out
Let me know what should i tell them so I can revoke their access
Thanksif you don't trust someone; hand them specifically what you want them to wor
k on
and nothing else.
I've worked on *MANY* large scale production systems. I've never had "100%
access to 100% of the project".
If they're your devs, "here's the way it is".
I recently pointed out a small bug in an application. would take 2 seconds t
o
fix. (typo in a dialog).
I am *NOT* to change anything w/o a bug being written up & published.
It was, but said "do not change it unless you're working on that module for
something else".
The problem will be there for upwards of 3+ months because that module is ro
ck
solid. Point? it's what I was told to do as a developer.
Tell your devs they'll get access to what they need.
Now; once you do -- live wit hthe fact that things don't happen as fast and
the
backlash of "I couldn't resolve the issue, so-and-so had rights, not me." Y
ou
will have created the situation.
On Mon, 30 Jan 2006 20:30:23 -0800, "Hassan" <Hassan@.hotmail.com> wrote:

>My devs all have access to production. They always had and they feel they
>should still just in case they need to resolve issues.
>Are there any articles on how I can convince them to get out and stay out :
)
>Let me know what should i tell them so I can revoke their access
>Thanks
>
-- AntiSpam/harvest --
Remove X's to send email to me.|||A hammer, baseball bat, or golf club come to mind. So does, "I'm
responsible for it now". There isn't an easy answer. This is pure politics
that can really be argued both ways. My basic answer is that unless someone
is going to be held accountable for maintaining a database, they shouldn't
have any administrative authority. "Just in case" is just an excuse for
"please don't take away my toys".
I usually fix it by simply handing developers a pager. When they ask what
that is for, I tell them that since they have access to the production
database, that also make them responsible for it. Since they are
responsible for it, they also have to carry a pager and be woken up at 3AM
when things go wrong. Access is usually revoked within a day of doing that.

Mike
http://www.solidqualitylearning.com
Disclaimer: This communication is an original work and represents my sole
views on the subject. It does not represent the views of any other person
or entity either by inference or direct reference.
"Hassan" <Hassan@.hotmail.com> wrote in message
news:uK3TO8hJGHA.740@.TK2MSFTNGP12.phx.gbl...
> My devs all have access to production. They always had and they feel they
> should still just in case they need to resolve issues.
> Are there any articles on how I can convince them to get out and stay out
> Let me know what should i tell them so I can revoke their access
> Thanks
>|||:-)
I like your style. I may have to try that myself (it's an ongoing issue
where I work - I'm pretty solid on the SQL side of things but not great
on the political side of things).
*mike hodgson*
http://sqlnerd.blogspot.com
Michael Hotek wrote:

>A hammer, baseball bat, or golf club come to mind. So does, "I'm
>responsible for it now". There isn't an easy answer. This is pure politic
s
>that can really be argued both ways. My basic answer is that unless someon
e
>is going to be held accountable for maintaining a database, they shouldn't
>have any administrative authority. "Just in case" is just an excuse for
>"please don't take away my toys".
>I usually fix it by simply handing developers a pager. When they ask what
>that is for, I tell them that since they have access to the production
>database, that also make them responsible for it. Since they are
>responsible for it, they also have to carry a pager and be woken up at 3AM
>when things go wrong. Access is usually revoked within a day of doing that
.
>
>
>|||You can state several reason, and that depend upon
1. criticality of your data
- If some dev, run an update and mess up your data you need to restore
last back up and transaction log just before update happend. Then you need t
o
keep your server offline, will that affect your business? then thats a good
reason.
2. Size of your data
- If data is huge, restoration of corrupt data can take much time
3. Routine maintenece
- As admin you got routine maintenece , also you may go for process
where DB should be in single user mode.
Again why cant you go for an arhictecture where you have
=> Development Server
=> Integrated Server
=> Prodcution Server
Take daily backup from production, and restore it in Integrated Server. Give
dev access to integrated server, so that they will get the recent data from
production. Do the fix and apply that in production.
Thanks,
Sree
"Michael Hotek" wrote:

> A hammer, baseball bat, or golf club come to mind. So does, "I'm
> responsible for it now". There isn't an easy answer. This is pure politi
cs
> that can really be argued both ways. My basic answer is that unless someo
ne
> is going to be held accountable for maintaining a database, they shouldn't
> have any administrative authority. "Just in case" is just an excuse for
> "please don't take away my toys".
> I usually fix it by simply handing developers a pager. When they ask what
> that is for, I tell them that since they have access to the production
> database, that also make them responsible for it. Since they are
> responsible for it, they also have to carry a pager and be woken up at 3AM
> when things go wrong. Access is usually revoked within a day of doing tha
t.
>
> --
> Mike
> http://www.solidqualitylearning.com
> Disclaimer: This communication is an original work and represents my sole
> views on the subject. It does not represent the views of any other person
> or entity either by inference or direct reference.
>
> "Hassan" <Hassan@.hotmail.com> wrote in message
> news:uK3TO8hJGHA.740@.TK2MSFTNGP12.phx.gbl...
>
>|||Hi Mike
Something that I don't think has been specifically mentioned is
accountability, it is not the same as trust, because you can give developers
access to the production system but only in a contolled manner, and only to
make changes that have been passed as fit i.e. been through the a test cycle
.
The other thing is sensitivity, the more open the access is, then the
greater the risk of a violation. Whoever carries this risk should be aware o
f
the situation.
If your developers do not have access to a production system, there should
be an equivalent system that can be used if the need arises. If your data is
sensitive, then this alternative system can be de-sensitized.
John
"Mike Hodgson" wrote:

> :-)
> I like your style. I may have to try that myself (it's an ongoing issue
> where I work - I'm pretty solid on the SQL side of things but not great
> on the political side of things).
> --
> *mike hodgson*
> http://sqlnerd.blogspot.com
>
> Michael Hotek wrote:
>
>|||That's really the basic point behind handing developers a pager or whatever
you use. People always seem to want administrative access to things without
having any of the responsibility. You simply make it a point that if they
want admin access, of any kind, then they are going to be just as
responsible for the system as you are. That means every time something goes
wrong on the system, they get yanked into it right along with everyone else.
The rest is left unsaid, because it doesn't need to be. It's making a very
fine point about why certain people have admin access and no one else does
without getting into a big argument about it.
When you boil it all down, there isn't a single reason that someone can come
up with that I couldn't argue both sides of. It has nothing to do with data
volume, operations that are executed, errors that are found, change control,
etc.
The only issue that can't be argued is responsiblity. If you have admin
access, that means you are responsible for the system, period. If you give
someone admin access without also requiring them to be responsible for the
system, then you might as well get used to sleeping in your office.
Mike
http://www.solidqualitylearning.com
Disclaimer: This communication is an original work and represents my sole
views on the subject. It does not represent the views of any other person
or entity either by inference or direct reference.
"John Bell" <jbellnewsposts@.hotmail.com> wrote in message
news:57A3D790-44EB-403E-AFEB-7CC922094886@.microsoft.com...[vbcol=seagreen]
> Hi Mike
> Something that I don't think has been specifically mentioned is
> accountability, it is not the same as trust, because you can give
> developers
> access to the production system but only in a contolled manner, and only
> to
> make changes that have been passed as fit i.e. been through the a test
> cycle.
> The other thing is sensitivity, the more open the access is, then the
> greater the risk of a violation. Whoever carries this risk should be aware
> of
> the situation.
> If your developers do not have access to a production system, there
> should be an equivalent system that can be used if the need arises. If
> your data is sensitive, then this alternative system can be
> de-sensitized.
> John
>
> "Mike Hodgson" wrote:
>
|||Hi Mike
I think what I was pointing out was from the other direction: the developers
may want access to the company payroll, but will whoever is responsible for
it want them to have it? I.e. you may need to find the person who is
responsible and say "Do you want them to do this?"
John
"Michael Hotek" wrote:

> That's really the basic point behind handing developers a pager or
> whatever you use. People always seem to want administrative access to
> things without having any of the responsibility. You simply make it a
> point that if they want admin access, of any kind, then they are going
> to be just as responsible for the system as you are. That means every
> time something goes wrong on the system, they get yanked into it right
> along with everyone else.
> The rest is left unsaid, because it doesn't need to be. It's making a
> very fine point about why certain people have admin access and no one
> else does without getting into a big argument about it.
> When you boil it all down, there isn't a single reason that someone can
> come up with that I couldn't argue both sides of. It has nothing to do
> with data volume, operations that are executed, errors that are found,
> change control, etc.
> The only issue that can't be argued is responsibility. If you have
> admin access, that means you are responsible for the system, period.
> If you give someone admin access without also requiring them to be
> responsible for the system, then you might as well get used to sleeping
> in your office.
> --
> Mike
> http://www.solidqualitylearning.com
> Disclaimer: This communication is an original work and represents my sole
> views on the subject. It does not represent the views of any other person
> or entity either by inference or direct reference.
>
> "John Bell" <jbellnewsposts@.hotmail.com> wrote in message
> news:57A3D790-44EB-403E-AFEB-7CC922094886@.microsoft.com...
>
>|||On Mon, 30 Jan 2006 20:30:23 -0800, Hassan wrote:

>My devs all have access to production. They always had and they feel they
>should still, just in case they need to resolve issues.
>Are there any articles on how I can convince them to get out and stay out :)
>Let me know what I should tell them so I can revoke their access.
>Thanks
>
Hi Hassan,
A lot of good ideas are mentioned already. But I've missed one so far:
in many countries, privacy laws require you to limit access to
production data to only those who absolutely need it. Developers don't
need permanent access to production data.
(They may sometimes need temp access to prod when troubleshooting a
production error - they should use a special login that is normally
disabled, fully logged, and they should ideally be constantly monitored
while fixing the problem - though the last requirement is somewhat
unrealistic).
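Hugo's suggestion of a normally-disabled emergency login can be sketched in T-SQL; the login name and password below are hypothetical, and full logging of its activity would need a server-side trace (or SQL Server Audit on later versions):

```sql
-- Sketch only: create an emergency troubleshooting login that is
-- disabled by default. Login name and password are hypothetical.
CREATE LOGIN DevEmergency WITH PASSWORD = 'Str0ng!Passw0rd', CHECK_POLICY = ON;
ALTER LOGIN DevEmergency DISABLE;

-- When a production incident requires developer access:
ALTER LOGIN DevEmergency ENABLE;

-- ... troubleshooting happens here, ideally observed by the DBA ...

-- Re-disable the login as soon as the incident is resolved.
ALTER LOGIN DevEmergency DISABLE;
```

Because the login is disabled between incidents, any attempt to use it outside an approved window fails and shows up in the failed-login audit trail.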
Hugo Kornelis, SQL Server MVP
|||I understand, that's a much different issue that's really easy to fix. When
you have sensitive data, that's a simple case to restrict access that no one
can argue with. It's the cases that don't fall into that category that are
more difficult.
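Actually revoking the access Hassan described comes down to removing each developer login from the sysadmin fixed server role; a minimal sketch, with a hypothetical login name:

```sql
-- SQL Server 2005 syntax; run once per developer account.
-- (Later versions use ALTER SERVER ROLE ... DROP MEMBER instead.)
EXEC sp_dropsrvrolemember 'DOMAIN\devuser', 'sysadmin';

-- Optionally remove the login from the server entirely:
-- DROP LOGIN [DOMAIN\devuser];
```

Granting back only the specific database-level permissions a developer genuinely needs is then a separate, deliberate step rather than a blanket default.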
Mike
http://www.solidqualitylearning.com
Disclaimer: This communication is an original work and represents my sole
views on the subject. It does not represent the views of any other person
or entity either by inference or direct reference.
"John Bell" <jbellnewsposts@.hotmail.com> wrote in message
news:2A73DAF3-A001-4927-94D7-051852672CA3@.microsoft.com...
> Hi Mike
> I think what I was pointing out was from the other direction: the
> developers may want access to the company payroll, but will whoever is
> responsible for it want them to have it? I.e. you may need to find the
> person who is responsible and say "Do you want them to do this?"
> John
> "Michael Hotek" wrote:
>