Friday, March 30, 2012

How to manage security with Access FE and SQL 2005 Express BE

I have upsized my Access 2003 database to SQL Server 2005 Express so that all
the tables now reside in SQL Server; forms, queries and reports are still
in Access. But how do I manage users now?
Do I set up users on the SQL Server, or maintain those I have set up in
Access using the security wizard?
It is important that users have no direct access to the tables, but
unfortunately they can still get to them from Access. I do not want them to
amend or edit the tables directly. I am hoping I can prevent this on the SQL
Server side, but I don't know how, nor how to connect the security of the
SQL Server with that of the Access database.
Please help.|||Hi,
Yes, you can manage users much as you did in MS Access:
1). Create SQL logins/users
2). Create roles
3). Assign the proper privileges to the roles
4). Map the users to those roles
and refer to http://www.sqlsecurity.com/DesktopDefault.aspx?tabid=24
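The four steps above translate into T-SQL roughly as follows. This is a minimal sketch for SQL Server 2005; the login, database, role, table and procedure names are all hypothetical:

```sql
-- 1) Create a server login and a matching database user
CREATE LOGIN AccessUser WITH PASSWORD = 'Str0ng!Passw0rd';
USE MyUpsizedDb;
CREATE USER AccessUser FOR LOGIN AccessUser;

-- 2) Create a role for the application's users
CREATE ROLE AppUsers;

-- 3) Give the role privileges -- here, no direct table access:
--    users may only run stored procedures against the base tables
GRANT EXECUTE ON dbo.procGetOrders TO AppUsers;
DENY SELECT, INSERT, UPDATE, DELETE ON dbo.Orders TO AppUsers;

-- 4) Map the user into the role
EXEC sp_addrolemember 'AppUsers', 'AccessUser';
```

With the tables locked down this way, the Access front end connects as AccessUser and reaches data only through the granted procedures (or views), which also addresses the original poster's concern about users editing tables directly.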
:-)
Regards
--
Andy Davis
Activecrypt Team
---
SQL Server Encryption Software
http://www.activecrypt.com
"Lisa Tanenbaum" wrote:

> I have upsized my Access 2003 database to SQL Server 2005 Express so that all
> the tables now reside in SQL Server [snip]
>|||Andy
Thanks for your reply. I am completely new to SQL Server so it wasn't so
obvious to me. I think I understand it a little better now.
"Andy Davis" wrote:
> Hi,
> Yes, you can manage users much as you did in MS Access [snip]
>

How to Manage Security in SQL Express (2005 CTP)

Is there a tool or MMC snap-in to manage security for SQL Express? I've got the Express Manager (XM), but it doesn't provide a means to manage logins.
I suppose I could resort to T-SQL, but it seems there should be a tool somewhere for such an obvious function.|||Never mind... I just resorted to sp_addlogin.|||Try the following link:
http://www.microsoft.com/downloads/details.aspx?FamilyId=C7A5CC62-EC54-4299-85FC-BA05C181ED55&displaylang=en
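For reference, the T-SQL fallback mentioned above is only a couple of lines (a sketch; the login names and password are placeholders):

```sql
-- SQL Server 2005 syntax
CREATE LOGIN WebAppLogin WITH PASSWORD = 'Str0ng!Passw0rd';

-- The older system procedure the poster used also still works:
EXEC sp_addlogin @loginame = 'LegacyLogin', @passwd = 'Str0ng!Passw0rd';
```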

How to manage RTE text directly in the DB?

Hi all,
I store my rich-text (RTF) files in my SQL 2000 database.
I want to add a line at the end. It's very easy in my VB application with
a Rich Text control, but how do I do that directly in the database (via a
trigger)?
Does SQL 2000 have rich-text functions, or should I use an external (Windows)
component?
Lilian.
SQL Server 2000 has no knowledge of the contents of your character strings.
The content could be RTF, HTML, XML, etc.
It might be possible to build the desired RTF string in Transact-SQL, but I
don't know enough about RTF formatting codes to say for sure. Maintaining
RTF content is a job best done in application code rather than Transact-SQL.
Another option is to write an ActiveX control or extended stored procedure
to manipulate RTF strings that could be invoked from Transact-SQL.
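If the append really must happen inside the database on SQL 2000, one possibility is UPDATETEXT, which can splice text into a TEXT column at a given offset. This is a sketch only: the table, column and RTF fragment are hypothetical, and it assumes the RTF document ends with a closing brace:

```sql
-- Append '\par New line' just before the final '}' of the RTF stored
-- in dbo.docs.body (a TEXT column); names are hypothetical
DECLARE @ptr VARBINARY(16), @offset INT;
SELECT @ptr = TEXTPTR(body),
       @offset = DATALENGTH(body) - 1  -- position of the closing '}'
FROM dbo.docs
WHERE id = 1;

-- Insert at @offset, deleting 0 characters
UPDATETEXT dbo.docs.body @ptr @offset 0 '\par New line';
```

As Dan notes, this only works if you are comfortable generating valid RTF by hand; doing it in application code remains the safer route.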
Hope this helps.
Dan Guzman
SQL Server MVP
"Lilian Pigallio" <lpigallio@.nospam.com> wrote in message
news:eMeCdsdPGHA.428@.tk2msftngp13.phx.gbl...
> Hi all,
> I store my Rich-Text files in my SQL2000 database. [snip]
>
|||Maybe using an OLE object like this example:
http://support.microsoft.com/kb/312839/en-us
... but which object ?
"Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
news:OYknWxePGHA.3100@.TK2MSFTNGP11.phx.gbl...
> SQL Server 2000 has no knowledge of the contents of your character
> strings. [snip]
>


How to Manage Errors with one file

Hi,

I am pretty new to SSIS 2005, and I have some problems. I want to add logging and error management to my package. I found out how to set up logging, but I'm having some difficulties with error handling.

In my package I have only a flat file source and an OLE DB destination. I want to add error handling for both of them, so I created a connection manager for an error file. On both components I set "redirect row" for all available error types, and then added two flat file destinations. I connected the red (error) arrows of the flat file source and the OLE DB destination to the flat file destinations.

When I run the package I get an error indicating that the error file is already in use by another process. I don't understand why, and I don't want to create one error file per component in the package. Have you any idea why I get this error, or how I can do what I want?

Krest

Before going too far: could you check whether your redirect destinations point to the same file as the primary error file? What happens if you turn off package logging?

|||

Hi,

Thanks for you help.

So, I use the same file for the two flat file destinations, because I want all my errors in the same file. I turned off logging (SSIS menu -> Logging, with all checkboxes unchecked).

I get the same error as before. Here is the exact error message: [Flat File Destination 1 [806]] Warning: The process cannot access the file because it is being used by another process.

krest

How to manage different input (Excel files) format

Hi all,

I have created a package which imports data from an Excel file and does some technical and business validation on the data. My package has about 20 control flow items. Now I'm asked to handle a second (and probably more in the future) Excel file format (the column names are different, some fields are merged into one single column, etc.).

I definitely don't want to create a different package for each Excel file format, but I can't find a way in the control flow to execute one particular data flow in one case and another data flow in other cases. Typically, I would like to evaluate an expression and, depending on the result, execute one data flow or another. Even within a given data flow I can't find a way to test a condition and process a different Excel source depending on an expression result. Alternatively, it would be good if I could tell my Excel source to discover the column names and types at runtime and let me manage the columns manually in the data flow. Is that possible? I know SSIS keeps metadata on the columns based on the data source; is there any way to manage that metadata manually? I couldn't find anything about it in BOL.

I guess an easy workaround is to have a separate package per Excel format just to import the file into a common staging table, with each of those packages calling a single package that contains all the technical and business validation.

Any help will be appreciated.

Kind regards,

Sébastien.

Have you discovered expressions on precedence constraints? They seem like an ideal fit for your requirements.

Double-click a precedence constraint line, then select a condition and an expression.|||

Right! That's what I needed.

Thanks for your answer.

How to manage cube by program

Can we use Visual Studio .NET code to manage cubes?
Yes, you can use the DSO interfaces to manage your cubes.
Look at the DSOXml sample program (on the Microsoft website).
"ad" <ad@.wfes.tcc.edu.tw> wrote in message news:
OI6EO9uvEHA.3080@.TK2MSFTNGP12.phx.gbl...
> Can we use Visual Studio .NET code to manage cubes?
>
|||http://www.microsoft.com/downloads/d...DisplayLang=en
Dave Wickert [MSFT]
dwickert@.online.microsoft.com
Program Manager
BI SystemsTeam
SQL BI Product Unit (Analysis Services)
This posting is provided "AS IS" with no warranties, and confers no rights.
"Jj" <willgart@.BBBhotmailAAA.com> wrote in message
news:uZ$K2R1vEHA.2600@.TK2MSFTNGP09.phx.gbl...
> yes, you can use the DSO interfaces to manage your cubes.
> Look at the DSOXml program (on the MS website) to see a sample.
> "ad" <ad@.wfes.tcc.edu.tw> wrote in message news:
> OI6EO9uvEHA.3080@.TK2MSFTNGP12.phx.gbl...
>


how to manage concurrency between multiple, disconnected clients

I have a system using SQL Server 2005 and .NET 2.0. My tables don't have a rowversion column, but I heard SQL 2005 can manage a row version by itself; can I use this to do "ConflictDetection"?

All I'm trying to do is get an error when I try to update a row which has been modified by someone else after I read it.

Thanks.


I have never used the SQL Server 2005 timestamp data type, which is what you are referring to as the row version. It is an auto-incrementing value that changes whenever the row is updated, and it is useful in preventing lost updates.

I used this practice to manage stale data back in SQL Server 2000 just by adding a column of type INT and incrementing it each time the record changed. Basically it works like this: when I retrieve the record I get version 1; at the same time another person gets the same record, also at version 1. That person then updates the record, incrementing the version to 2. In the meantime I decide to update the record as well. Your application or database should compare the version number you hold (1) with the current version number in the database; if they match, the update is allowed, since no changes were detected. In this case the stored version is 2, which means your copy of the data, read at version 1, is stale, and you would alert the user that they either cannot save until they refresh, or that if they choose to proceed, data will be lost.

Using row versions like this is optimistic locking, since many people can read the same record.

You can also use pessimistic locking by locking the record once you have read it. You can do this by using transactions in .NET (System.Data.SqlClient.SqlTransaction) and specifying the isolation level.
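The optimistic pattern described above can be sketched with SQL Server's built-in timestamp column (also called rowversion); the table and variable names here are hypothetical:

```sql
-- SQL Server maintains RowVer automatically on every update
CREATE TABLE dbo.Customers (
    Id     INT PRIMARY KEY,
    Name   NVARCHAR(100) NOT NULL,
    RowVer TIMESTAMP
);

-- The client reads Id, Name and RowVer, edits Name, then updates
-- only if nobody else has touched the row in the meantime:
UPDATE dbo.Customers
SET Name = @NewName
WHERE Id = @Id
  AND RowVer = @OriginalRowVer;

IF @@ROWCOUNT = 0
    RAISERROR('The row was changed by another user; reload and retry.', 16, 1);
```

In ADO.NET you would pass the originally retrieved RowVer value as a parameter of the UPDATE and surface the conflict to the user when no rows are affected.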


How to manage click-through

How will I possibly do click-through on this type of chart, when basically I'll have to bring in just grand-total values to do what they want in this mock-up?

SSRS 2005

Expected/desired outcome (the user drew this in Excel and later wants click-through capability on the bars in SSRS 2005):

http://www.webfound.net/chart3.jpg

I don't see how this is possible; can you shed some light? Basically, to get that few bars I'll literally have to bring in just grand totals for the series: static grand totals from my dataset's stored procedure calculation, since I can't really compute a grand total using SUM in the expressions for any fields in the chart without getting an error.

Hi,

In SSRS 2005 charts you can do click-through. In the chart layout, go to "Drop data fields here", right-click the button for its properties, and on the Action tab select "Jump to report".

Before that, you need to create a separate report, so that when you click on the value that report comes up.

Any doubts let me know.

Amarnath.

|||

Thanks very much. What about infinite click-through?

Can I even create charts that do this, just like infinite click-through for reports using models?

How to manage all the SQL versions at the same time?

Hi everyone,

Nowadays we've got four SQL Server versions running around: 6.5, 7.0, 2000 and 2005.

If you try to register 2005 servers from Enterprise Manager, it doesn't allow you, because 2005 uses SMO instead of DMO.

If you try to register 6.5 or 7.0 servers from Management Studio, it doesn't allow you either.

We'd like one tool, in one place, for all of them. Is that possible?

Thanks in advance,

I have client tools installed for SQL 6.5, 2000 and 2005 in order to manage our environment, which is a mixture of SQL 6.5 through 2005; I have no problem having all three installed and using them at the same time.

Could you please explain what you mean by 'if you try to attach...'?

|||

Hi Satya,

I meant when you do the action "New SQL Server Registration" from your client.

Yeah, I know; I can open the three clients on my own workstation without problems.

I'm talking about the fact that you can't register 7.0 or 6.5 servers from Management Studio (2005).

Beyond this, from Enterprise Manager (2000) you can't register any SQL Server 2005, because SQL 2000 uses the DMO library instead of SMO (which is native to 2005).

OK, you could do it from Query Analyzer.

|||I can connect to any of the SQL 2000 instances at my end using SSMS; check what service pack those SQL 2000 instances are on. Even the Query Analyzer in SQL 2005 should be able to connect to SQL 2000 instances.|||

Yeah, don't worry.

The idea was to have a single tool available to see all of them.

Maybe my idea is a bit crooked.

|||I believe SSMS is the one to go with, but due to the limitations on backward compatibility you might not get what you want. So in this case you need two or three versions of the SQL tools to continue the work. I'm sure there is an opportunity for a third-party company to develop such a tool.

How to manage a DTS package using ASP.NET

Hi

How can I manage a DTS package using ASP.NET?

Hi,

Are you looking at DTS packages or SSIS packages? You can execute any SSIS package from .NET code using the method below.

1. Namespace to include:

using Microsoft.SqlServer.Dts.Runtime;

2. Create a Package variable; its loader takes the file name of the .dtsx (SSIS) package. You can explore this class; it has an Execute method.

Hope this will help

Satya

how to manage 16 digit number

Is there any data type that can accept a 16-digit number?

The integer types are stored as follows:

bit: 1 bit

tinyint: 1 byte

smallint: 2 bytes (16 bits; I guess this is what you want)

int: 4 bytes

bigint: 8 bytes

|||

Sorry, it is not a 16-bit number; it is a 16-digit number, like 1234567891234567.

|||See the BOL entry for BIGINT:

bigint

-2^63 (-9,223,372,036,854,775,808) to 2^63-1 (9,223,372,036,854,775,807)

8 Bytes

HTH, Jens Suessmeyer.

http://www.sqlserver2005.de
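A quick check that BIGINT comfortably holds a 16-digit value (the variable is just for illustration):

```sql
DECLARE @n BIGINT;
SET @n = 1234567891234567;  -- 16 digits; BIGINT allows up to 19
SELECT @n;

-- If leading zeros matter (e.g. card or account numbers), store the
-- value as CHAR(16) instead of a numeric type.
```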


How to make Conditional Statements in SELECT

I'm trying to code a stored procedure which looks for products in a table. It takes one parameter, whose value may be 1, 2 or 3.

CREATE PROCEDURE [dbo].[procSearchProduct]

@PromoPrm int

AS

BEGIN

SELECT *

FROM tblProduct AS P

WHERE ?

END

--

If @PromoPrm = 1, the WHERE clause should be: WHERE P.Promo = 'True'

Else if @PromoPrm = 2: WHERE P.Promo = 'False'

Else if @PromoPrm = 3: WHERE P.Promo = 'False' OR P.Promo = 'True'

I hope I am being clear about my problem; let me know if you don't understand it. Waiting for a solution.

Happy coding...

One method is to use IF/ELSE:

IF @PromoPrm = 1

select * from tblProduct p

where p.promo = 'True'

else if @PromoPrm = 2

select * from tblProduct p

where p.promo = 'False'

else if @PromoPrm = 3

select * from tblProduct p

where p.promo = 'False'

or p.promo = 'True'

If your PROMO column can only have the values TRUE or FALSE, you can leave out the WHERE condition in the @PromoPrm = 3 branch.

The next comment is that you should avoid using the SELECT * syntax in stored procedures. Give a look to this post concerning this issue:

http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1108698&SiteID=1

Use of the 'SELECT *' syntax in stored procedures can leave behind "land mines" that go off at a later date and time when changes are made to the columns of the source table.

Another approach to the problem is something like:

select * from tblProduct p

where ( @promoPrm = 1 and p.promo = 'TRUE' )

or ( @promoPrm = 2 and p.promo = 'FALSE' )

or ( @promoPrm = 3 and ( p.promo = 'TRUE' or p.promo = 'FALSE' ) )

|||

Here's a version using dynamic SQL.

declare @sql varchar(max);

declare @where varchar(max);

set @where = case @promoprm

when 1 then 'WHERE Promo = ''True'''

when 2 then 'WHERE Promo = ''False'''

when 3 then 'WHERE Promo = ''True'' OR Promo = ''False'''

end;

set @sql = 'SELECT * FROM tblProducts ' + @where;

exec(@sql);

Jamie

|||

Yep, it is a solution, but I gave only a small part of the code. I'm trying to make a search form in ASP.NET, and as you know this kind of page sends many parameters to SQL. I can't write out every combination in SQL. I mean, what if this stored procedure takes 5 parameters like @PromoPrm? The point of this problem is that I have to write code which makes the decision in the WHERE part. Maybe like this:

SELECT *

FROM tblProduct AS P

WHERE case when @PromoPrm = 1 then P.Promo = 'True' else P.Promo = 'False' end

This query is wrong, but I hope you get the point?

|||

Beware of using any dynamic SQL, especially in a web environment. You don't want to leave yourself open to SQL injection.

Jamie

|||

To simplify this:

Code Snippet

select * from tblProduct p

where @promoPrm & 1 = 1 and p.promo = 'TRUE'

or @promoPrm & 2 = 2 and p.promo = 'FALSE'

|||

Give a look to the "dynamic search" article by MVP Erland Sommarskog:

http://www.sommarskog.se/dyn-search.html

Also, it will help if you finish describing your problem rather than giving the information in portions.

|||

JHunter, I think your solution is good, but SQL injection is important in this situation. If I use your code sample then I need to check the parameters in my code, which is very, very difficult because they come from the query string. Is there any solution to this problem?

Kent, thanks for the post. Also thanks for the link you gave.

|||

ManiD wrote:

To simplify this:

Code Snippet

select * from tblProduct p

where @promoPrm & 1 = 1 and p.promo = 'TRUE'

or @promoPrm & 2 = 2 and p.promo = 'FALSE'

ManiD, I think your solution is the best. What about performance? Because I will use this on 5 different parameters.

|||

If you are only passing an integer to the stored proc, leaving the proc to decide how the WHERE clause is constructed, you'll be okay. Problems come when you start passing strings that contain SQL as parameters.

The simplest check you can do is to make sure the parameter contains only "expected" values.

Erland's page (as previously posted) provides some of the best content relating to dynamic SQL.

Jamie

|||How many possible values are there for the PROMO column? If it is only a handful, this will probably scan rather than seek because of the cardinalities.|||

OK, here is the problem again: I'm trying to make a search form. The form runs a stored procedure, call it procSearchProduct. This stored procedure takes 6 parameters:

@CityPrm varchar(50), -- takes the name of a city

@HasPromotionPrm int, -- takes only 3 values: 1 = True, 2 = False, 3 = True or False

@HasDescriptionPrm int, -- same 3 values

@HasImagePrm int, -- same 3 values

@StateofShopPrm varchar(50), -- takes the name of a state

@ShopClosedOpenedPrm int -- same 3 values

What I am trying to do is select values from the table; you know the rest. Sorry, I couldn't paste the whole code, because I'm writing it in a different language, so it would not help.
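One way to fold all six optional parameters above into a single static, fully parameterised query, in the spirit of Erland's dynamic-search article. This is a sketch only, since the table and column names are guesses at the poster's schema:

```sql
SELECT *
FROM tblShop AS S
WHERE S.City = @CityPrm
  AND S.State = @StateofShopPrm
  -- for each int parameter: 1 = must be True, 2 = must be False, 3 = don't care
  AND (@HasPromotionPrm = 3
       OR S.HasPromotion = CASE @HasPromotionPrm WHEN 1 THEN 'True' ELSE 'False' END)
  AND (@HasDescriptionPrm = 3
       OR S.HasDescription = CASE @HasDescriptionPrm WHEN 1 THEN 'True' ELSE 'False' END)
  AND (@HasImagePrm = 3
       OR S.HasImage = CASE @HasImagePrm WHEN 1 THEN 'True' ELSE 'False' END)
  AND (@ShopClosedOpenedPrm = 3
       OR S.IsOpen = CASE @ShopClosedOpenedPrm WHEN 1 THEN 'True' ELSE 'False' END);
```

Because everything stays parameterised, no SQL injection surface is added; the trade-off, as noted earlier in the thread, is that the optimiser may scan rather than seek.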

|||

I'd avoid passing CityPrm and StateofShopPrm as varchars. Instead, create lookup tables for the city and state with a numeric primary key.

Populate any drop-down menus from these lookups, but capture the primary key (ID) of the selected value, not the name. This numeric ID can be passed to the stored procedure.

If you've not normalised your main table, you can use the primary key to look up the textual value.

Jamie

|||Also, on these VARCHAR fields, are you doing = searches or LIKE searches?|||Another point: the user selects the option "WHOLE WORD" or "SOME OF THEM" in a drop-down as the search condition on City, but that will complicate the problem much more. Assume that I do an = search. And I can't put the states into a drop-down list; that may be a solution, but not for me.

How to make Zip Code 5 characters and not 4 in Excel

Is there a way to get a 5-character zip code in Excel without making it
into TEXT?
For example:
zip code: 1024
I want it as: 01024
BUT (it's a big but) I don't want it to be text; I need it to stay a
numerical value so that Excel can read it as a zip code.
Is this possible?|||When you format the cells, look under Special; there is a Zip Code
format.
Sorcerdon wrote:
> Is there a way to get a 5 character zip code in excell without making it
> into a TEXT? [snip]
|||Where is this located?
In the formatting, all I have is:
Default
Number
Date
Time
Percentage
Currency

How to make your SQL Server database grow exponentially with linear data

We have this database that we use to collect 1000's of rows of data a day.
Particularly in one table we collect about 1000 pieces of data a day in the
form of very small blobs (64k or so). Our database has been growing
steadily from Nov 2003 with an initial size of 7gb to about 25gb on 10 April
2004. Not bad, seeing the amount of data we collect. Then from 10 April to
today (23 April) it grew to 65gb in about 2 weeks. What's going on? Did
our input data increase exponentially? No, according to our logs data have
been arriving in a linear fashion. Maybe there's about 40gb of unused space
in the database? No, after we rebuild the indexes (using dbcc updateusage)
SQL Server reports no free space. Not possible you say?
Well, turns out the problem are with the blob fields. Because we want to
minimize the time it takes to search through the table, we wrote a service
that takes 2 rows, adds the blobs together, then updates the one row with
the resulting blob and deletes the other row. The service in other words
doubles up the blobs. The resulting blobs are also doubled up and so forth.
This in itself should not be a problem because theoretically the new row
should take up the same space (or even slightly less) than the original 2
rows together. Wrong! What happens is when a blob gets bigger than 64k,
SQL Server does not store the blob data with the row anymore, but moves it
to a separate location. Now when you double up a blob and try to update the
row SQL Server realizes it can't store the blob in the same space as the
previous one, so it allocates new space for it. So even though you aren't
adding any new data to the database, the database file actually grows
exponentially! Now some smaller blobs will eventually take up the old space
of a bigger blob, but they are much lower on the exponential curve than the
bigger blobs, so in total your database file grows near exponential with
linear data. This can be quite a problem when your blob size gets to about
50mb or so, and is the cause of our database growing so quickly in such a
short time.
Also, what worsens the problem is that it seems that SQL Server does not
take these deallocated blob space into account when it calculates "unused"
space in the database. Now if we deleted both rows in a double-up operation
and inserted a new row we probably would have been able to shrink the
database periodically. But there does not seem to be anything we can do to
reclaim deallocated blob space. Our aim is to stop the blobs growing when
they reach a certain size and archive them off. Has anybody experienced the
same problem, or are there any suggestions as to how we can reclaim (or
avoid) this "unused" space?
Thanks,
Pieter
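One way to reclaim the orphaned text/image space described above is to rebuild the table and then shrink the database; a sketch, with hypothetical table and database names:

```sql
-- Copying the rows rewrites the blob pages compactly
SELECT * INTO dbo.BlobData_New FROM dbo.BlobData;

-- Recreate indexes and constraints on BlobData_New, then swap names
EXEC sp_rename 'dbo.BlobData', 'BlobData_Old';
EXEC sp_rename 'dbo.BlobData_New', 'BlobData';
DROP TABLE dbo.BlobData_Old;

-- Finally release the freed pages back to the operating system
DBCC SHRINKDATABASE (MyDb);
```

The table must be offline during the copy and swap, and the database needs enough free disk for two copies of the data while the rebuild runs.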
> We have this database that we use to collect 1000's of rows of data a day.
> [snip]
Hi Pieter,
Have you tried DBCC ShrinkDB and ShrinkFile?
Hope this helps,
Eric Cárdenas
Senior support professional
This posting is provided "AS IS" with no warranties, and confers no rights.
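For anyone following along, the commands Eric mentions look roughly like this; the database name and logical file name below are placeholders, not anything from the original thread:

```sql
-- Shrink the whole database, asking SQL Server to leave 10% free space.
DBCC SHRINKDATABASE (MyDatabase, 10);

-- Or shrink a single data file (logical name from sysfiles) to roughly 1024 MB.
DBCC SHRINKFILE (MyDatabase_Data, 1024);
```

As the rest of the thread shows, shrinking did not help in this particular case because the deallocated blob space was not reported as free.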
|||We have, thanks, with no improvement. Turns out it's a bug in SQL Server,
see the article
http://support.microsoft.com/default.aspx?scid=kb;en-us;324432
When we copied all the rows to a new table, we recovered 43gb of the 60gb
database, which means there was only about 17gb of actual data in the
database. We now have a "recovery" plan where we store the blobs in a
separate data file, so when it gets out of control again we can just move
the blobs in the data file to a new data file without having to take the
database offline again.
Tnx,
Pieter
"Eric Cárdenas [MSFT]" <ecardena@.online.microsoft.com> wrote in message
news:F39XstANEHA.2756@.cpmsftngxa10.phx.gbl...
> --
> Hi Pieter,
> Have you tried DBCC ShrinkDB and ShrinkFile?
> Hope this helps,
> --
> Eric Cárdenas
> Senior support professional
> This posting is provided "AS IS" with no warranties, and confers no rights.
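One way to implement the "separate data file" recovery plan Pieter describes is to put the blob data on its own filegroup when the table is created. All names below are invented for illustration:

```sql
-- Add a filegroup and a file to hold only the blob pages.
ALTER DATABASE MyDatabase ADD FILEGROUP BlobFG;
ALTER DATABASE MyDatabase
    ADD FILE (NAME = MyDatabase_Blobs,
              FILENAME = 'E:\Data\MyDatabase_Blobs.ndf')
    TO FILEGROUP BlobFG;

-- TEXTIMAGE_ON sends the text/image data to BlobFG while rows stay on PRIMARY.
CREATE TABLE dbo.Measurements (
    MeasurementID int      NOT NULL PRIMARY KEY,
    CollectedAt   datetime NOT NULL,
    Payload       image    NULL
) ON [PRIMARY]
  TEXTIMAGE_ON BlobFG;
```

With the blobs isolated this way, the bloated file can later be swapped out without touching the rest of the database.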
|||> We have, thanks, with no improvement. Turns out it's a bug in SQL Server,
> see the article
> http://support.microsoft.com/default.aspx?scid=kb;en-us;324432
> When we copied all the rows to a new table, we recovered 43gb of the 60gb
> database, which means there was only about 17gb of actual data in the
> database. We now have a "recovery" plan where we store the blobs in a
> separate data file, so when it gets out of control again we can just move
> the blobs in the data file to a new data file without having to take the
> database offline again.
>
For the record, the bug ID is 357405 for SQL Server 2000. The bug is
scheduled to be fixed in Yukon.
Regards,
Eric Cárdenas
Senior support professional
This posting is provided "AS IS" with no warranties, and confers no rights.
|||Thank you, looking forward to it
Regards,
Pieter
"Eric Cárdenas [MSFT]" <ecardena@.online.microsoft.com> wrote in message
news:YYHs2biNEHA.3808@.cpmsftngxa10.phx.gbl...
> --
> For the record, the bug ID is 357405 for SQL Server 2000. The bug is
> scheduled to be fixed in Yukon.
> Regards,
> --
> Eric Cárdenas
> Senior support professional
> This posting is provided "AS IS" with no warranties, and confers no rights.
|||Hi again,
I am very interested to know whether the bug is caused by creating new blobs
and deleting old ones, or by updating and growing existing blobs.
E.g. we have two scenarios. Let's say in both cases Blob A and Blob B are
the sources, and Blob C is the target.
Scenario 1:
Blob A is read
Blob B is read
Blob C is created (A+B)
Blobs A and B are deleted
Scenario 2:
Blob A is read
Blob B is read
Blob B is updated (A+B)
Blob A is deleted
Scenario 2 is what we currently do. Would Scenario 1 shield us from this
problem?
Regards,
Pieter
"Eric Cárdenas [MSFT]" <ecardena@.online.microsoft.com> wrote in message
news:YYHs2biNEHA.3808@.cpmsftngxa10.phx.gbl...
> --
> For the record, the bug ID is 357405 for SQL Server 2000. The bug is
> scheduled to be fixed in Yukon.
> Regards,
> --
> Eric Cárdenas
> Senior support professional
> This posting is provided "AS IS" with no warranties, and confers no rights.
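In rough T-SQL terms the difference between Pieter's two scenarios looks like this. Table, column, and variable names are invented here, and the blob concatenation is shown schematically (SQL Server 2000 image columns would actually need READTEXT/UPDATETEXT to build @combinedPayload):

```sql
-- Scenario 1: write the combined blob as a brand-new row, then delete both sources.
INSERT INTO dbo.Blobs (BlobID, Payload)
VALUES (@newId, @combinedPayload);           -- @combinedPayload holds A + B
DELETE FROM dbo.Blobs WHERE BlobID IN (@idA, @idB);

-- Scenario 2 (what the service does now): grow blob B in place, then delete A.
UPDATE dbo.Blobs
SET Payload = @combinedPayload               -- in-place update forces relocation once > 64k
WHERE BlobID = @idB;
DELETE FROM dbo.Blobs WHERE BlobID = @idA;
```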



How to make your reports load faster?

Any ideas how to make reports faster when returning lots of rows?
I know you would need to work on your SQL query etc.
Or maybe cache it.

But I'm thinking of having a kind of middle-tier thing that sits between your SQL database and the reports themselves.

Any ideas would be appreciated

Dear rote,

Caching is certainly one option. Search SQL Server Books Online for Report Caching, Execution Snapshots and Report History.

If possible, keep your report server on a different machine. Adding memory will help increase performance.

http://www.microsoft.com/technet/prodtechnol/sql/2005/pspsqlrs.mspx#E6NAC

HTH,
Suprotim Agarwal

--
http://www.dotnetcurry.com
--

|||

RS comes with two middle-tier elements of its own: snapshots and caching <s>. Does either of these meet your needs?

>L<

How to make Update relation?

I have two tables: TableX & TableY

There are two similar fields (Size) with the same datatype. I want to make a relation such that if I change the value in (TableX.Size), the same value will be applied to (TableY.Size).

How to?

Lewe:

There are a couple of things that you need to know first:

How does TableX relate to TableY? TableX SHOULD have a key to TableY so that corresponding X and Y members can be joined|||

I'm not sure that you are really talking about needing a 'relation'. It sounds as though you only want a method that will change TableY values when a similar value in TableX is changed.

That 'could' be done with a TRIGGER. But first, it will be necessary for you to identify which row in TableY is to be changed as a result of a particular row in TableX getting changed. In other words, "How do the two tables 'relate' to each other?"
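Assuming TableY carries a foreign key back to TableX (the column names XID and ID below are made up for illustration), the trigger approach mentioned above might look like:

```sql
-- Hypothetical schema: TableY.XID references TableX.ID.
CREATE TRIGGER trg_TableX_SyncSize
ON TableX
AFTER UPDATE
AS
BEGIN
    IF UPDATE(Size)
        UPDATE y
        SET y.Size = i.Size
        FROM TableY y
        JOIN inserted i ON y.XID = i.ID;  -- push the new Size to the related rows
END
```

Until the actual relationship between the two tables is known, this is only a sketch of the mechanism, not a drop-in solution.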


How to make UDF deterministic

I have a user defined function in SQL Server 2000 that splits a Name field into First, Last, Middle names. Originally I utilized the CHARINDEX function to find the spaces in the string. Since I then found out that function is by definition non-deterministic, that didn't work for me, so I re-wrote the function a bit differently.

I am using the function as a field's formula in my table structure. I need the ability to create an index on that field so need the function to be deterministic. Apparently there is still something about how I designed the function that causes it to be non-deterministic. The function code is below; if anyone can help point me in the right direction it'd be greatly appreciated.

ALTER function fSplitName_test
(
    @name varchar(100),
    @fname int)
returns varchar(55)
as

begin
    declare @Fname_position int, @Lname_position int, @Mname_position int,
            @SplitName varchar(55), @iint int, @iLen int,
            @sChar char(1), @sName varchar(55)

    set @name = ltrim(replace(@name, ' JR', ''))
    set @sName = @name
    set @ilen = len(rtrim(@name))
    set @Fname_position = 0
    set @Lname_position = 0
    set @iint = 0

    while @iint <> @ilen
    begin
        set @schar = left(@sName, 1)
        begin
            if @schar = ' '
            begin
                if @Fname_position = 0
                    set @Fname_position = @iint
                else
                    set @Lname_position = @iint
            end
            set @iint = @iint + 1
            set @sName = right(@sName, @ilen - @iint)
        end
    end
    if @fname = 1
    begin
        if @fname_position <> 0
            set @SplitName = left(@name, @Fname_position)
        else
            set @splitname = 'unkn'
    end
    else
    begin
        if @fname = 2
        begin
            if @lname_position <> 0
                set @SplitName = right(@name, @ilen - @Lname_position - 1)
            else
                set @SplitName = right(@name, @ilen - @fname_position)
        end
        else
        begin
            if @fname_position + @lname_position <> @ilen and @lname_position <> 0
            begin
                set @SplitName = left(@name, @lname_position)
                set @splitname = right(@splitname, len(@splitname) - @fname_position - 1)
            end
            else
                set @splitname = 'unkn'
        end
    end
    return @SplitName
end

You might get more responses if you post this under the "Microsoft SQL Server" forum. This one is for generic SQL questions; I for one know nothing about SQL Server.

|||

What if a person has a last name like "DELA CRUZ"?
Can you post an example of the name parameter, e.g. "Michael Angelo Rodriguez" or "Michael Angelo Dela Cruz"? As you can see from those two examples, it is hard to determine the first name in the first and the last name in the second; the name would need a middle initial, or perhaps a comma.

|||

This is probably completely wrong, but I wouldn't put it past SQL Server to think that you were trying to modify the parameters. That would, of course, make the function non-deterministic. Try copying them to local variables.

Otherwise, the way to debug a problem like this is to break your function into small functions and verify that the small functions are all deterministic.
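One more thing worth checking, which is not visible from the posted code: in SQL Server 2000 a user-defined function is only treated as deterministic if it was created WITH SCHEMABINDING, regardless of what it calls internally (CHARINDEX itself is deterministic). A minimal sketch, using a hypothetical first-name-only helper:

```sql
-- Sketch: WITH SCHEMABINDING is one of the conditions SQL Server requires
-- before it will mark a UDF deterministic, which in turn is required to
-- index a computed column that calls the function.
CREATE FUNCTION dbo.fFirstName (@name varchar(100))
RETURNS varchar(55)
WITH SCHEMABINDING
AS
BEGIN
    -- everything up to the first space, or 'unkn' if there is none
    RETURN CASE WHEN CHARINDEX(' ', @name) > 0
                THEN LEFT(@name, CHARINDEX(' ', @name) - 1)
                ELSE 'unkn'
           END
END
```

You can then verify the result with `SELECT OBJECTPROPERTY(OBJECT_ID('dbo.fFirstName'), 'IsDeterministic')`, which should return 1.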

How to make two filegroups to one filegroup

Hello,
I have two filegroups:
database_data1 stored on d:
database_data2 stored on e:
How can I delete database_data2 and tell SQL 2005 to copy all the
data to database_data1?
Thanks for any help in advance!
A. Klemt

|||

One way, assuming you have clustered indexes on all your tables, is to drop and
recreate each index on the database_data1 filegroup; this will also move
all the data over to database_data1.
http://sqlservercode.blogspot.com/

|||

Hi
It is pretty much described in the BOL under "Moving database file"
"Andreas Klemt" <aklemt68@.hotmail.com> wrote in message
news:Oxkia5$JGHA.3200@.tk2msftngp13.phx.gbl...
> Hello,
> I have to filegroups like
> database_data1 stored on d:
> database_data2 stored on e:
> How can I delete the database_data2 and tell SQL 2005 to copy all
> data to database_data1?
> Thanks for any help in advance!
> A. Klemt
>
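The index-rebuild suggestion above can be sketched as follows. MyDatabase, dbo.MyTable, PK_MyTable, MyKeyColumn and the file name are all placeholders; the real names come from sys.filegroups and sys.database_files:

```sql
-- Sketch: move a table by rebuilding its clustered index on the target
-- filegroup; the table's data pages follow the clustered index.
CREATE CLUSTERED INDEX PK_MyTable
    ON dbo.MyTable (MyKeyColumn)
    WITH (DROP_EXISTING = ON)
    ON database_data1;          -- target filegroup

-- Once every object has been moved off database_data2, the emptied
-- file and filegroup can be dropped:
ALTER DATABASE MyDatabase REMOVE FILE database_data2_file;
ALTER DATABASE MyDatabase REMOVE FILEGROUP database_data2;
```

Heaps (tables without a clustered index) and nonclustered indexes have to be moved separately; the REMOVE FILE step will fail until the file is genuinely empty.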

How to make this stored procedure more performant

ALTER PROCEDURE dbo.sp_AddProdutionPlanning
(
    @ProductionPlanning_Product uniqueidentifier,
    @ProductionPlanning_Date datetime,
    @ProductionPlanning_Quantity int
)
AS

Begin TRAN

Declare @NewID uniqueidentifier;
Declare @NewItemID uniqueidentifier;
Declare @NewItemBatchID uniqueidentifier;
Declare @Ingr uniqueidentifier;
Declare @IngrQty int;
Declare @Batch uniqueidentifier;
Declare @BtchQty int;
Declare @BtchConsumed int;
Declare @MixQty int;
Declare @QtyLeft int;

Set @NewID = newid();

INSERT INTO ProductionPlanning
    (ProductionPlanning_ID, ProductionPlanning_Product, ProductionPlanning_Date, ProductionPlanning_Quantity, ProductionPlanning_Created,
     ProductionPlanning_PreviousUpdate, ProductionPlanning_LastUpdated, ProductionPlanning_Deleted)
VALUES (@NewID, @ProductionPlanning_Product, @ProductionPlanning_Date, @ProductionPlanning_Quantity, getdate(), getdate(), getdate(), 0)

Select NULL mykey, ProductRecipe_ID, ProductRecipe_Ingredient, ProductRecipe_Quantity into #tblRecipe From ProductRecipe Where ((ProductRecipe_Product = @ProductionPlanning_Product) And (ProductRecipe_Deleted = 0))

Set rowcount 1;
Update #tblRecipe Set mykey = 1;

While @@Rowcount <> 0
Begin
    Set rowcount 1;
    Select @Ingr = ProductRecipe_Ingredient, @IngrQty = ProductRecipe_Quantity From #tblRecipe;
    Delete from #tblRecipe where mykey = 1;
    Set @MixQty = (@IngrQty * @ProductionPlanning_Quantity);
    Set @NewItemID = newid();

    INSERT INTO ProductionPlanningItem
        (ProductionPlanningItem_ID, ProductionPlanningItem_Ingredient, ProductionPlanningItem_Planning, ProductionPlanningItem_Quantity)
    VALUES (@NewItemID, @Ingr, @NewID, @MixQty)

    Select NULL mykey, ProductIngredientBatch_ID, ProductIngredientBatch_Quantity, ProductIngredientBatch_Consumed Into #tblBatch From ProductIngredientBatch Where ProductIngredientBatch_Ingredient = @Ingr And ProductIngredientBatch_Deleted = 0;

    if (@@ERROR <> 0) Goto ERR_HANDLER

    Set rowcount 1;
    Update #tblBatch set mykey = 1;

    While @@rowcount <> 0
    Begin
        set rowcount 1;
        Select @Batch = ProductIngredientBatch_ID, @BtchQty = ProductIngredientBatch_Quantity, @BtchConsumed = ProductIngredientBatch_Consumed From #tblBatch
        delete from #tblBatch where mykey = 1
        Set @NewItemBatchID = newid();
        Set @QtyLeft = @BtchQty - @BtchConsumed;

        if (@QtyLeft > @MixQty)
        Begin
            INSERT INTO ProductionPlanningItemBatch
                (ProductionPlanningItemBatch_ID, ProductionPlanningItemBatch_Item, ProductionPlanningItemBatch_Batch, ProductionPlanningItemBatch_Quantity)
            VALUES (@NewItemBatchID, @NewItemID, @Batch, @MixQty)
            if (@@ERROR <> 0) Goto ERR_HANDLER

            Update ProductIngredientBatch Set ProductIngredientBatch_Consumed = (@BtchConsumed + @MixQty) Where ProductIngredientBatch_ID = @Batch
            if (@@ERROR <> 0) Goto ERR_HANDLER
        End
        else
        Begin
            INSERT INTO ProductionPlanningItemBatch
                (ProductionPlanningItemBatch_ID, ProductionPlanningItemBatch_Item, ProductionPlanningItemBatch_Batch, ProductionPlanningItemBatch_Quantity)
            VALUES (@NewItemBatchID, @NewItemID, @Batch, @QtyLeft)
            if (@@ERROR <> 0) Goto ERR_HANDLER

            Update ProductIngredientBatch Set ProductIngredientBatch_Consumed = 0 Where ProductIngredientBatch_ID = @Batch
            if (@@ERROR <> 0) Goto ERR_HANDLER
        End

        if (@@ERROR <> 0) Goto ERR_HANDLER
        Print @Batch;

        UPDATE ProductIngredient
        SET ProductIngredient_Consumed = ProductIngredient_Consumed + @MixQty
        WHERE (ProductIngredient_ID = @Ingr)
        if (@@ERROR <> 0) Goto ERR_HANDLER

        set rowcount 1;
        update #tblBatch set mykey = 1
    End

    Drop table #tblBatch;
    Set rowcount 1;
    update #tblRecipe set mykey = 1
End

Commit Tran

RETURN 0

ERR_HANDLER:

Print 'An error occured';
ROLLBACK TRAN
RETURN 1

Given that nobody has taken a stab at this, I will give you some pointers. Yes, it is possible to rewrite the SP using set-based logic. Start with correlated subqueries that can maintain the running total. Using CTEs in SQL Server 2005 is also an option.

|||

And perhaps you could use TRY/CATCH blocks to obtain better error handling.
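As a sketch of that suggestion: SQL Server 2005's TRY/CATCH can replace all of the repeated `if (@@ERROR <> 0) Goto ERR_HANDLER` checks with a single handler inside the procedure body, something like:

```sql
-- Sketch of the procedure body with TRY/CATCH error handling; the actual
-- INSERT/UPDATE statements from the procedure would go in the TRY block.
BEGIN TRY
    BEGIN TRAN;

    -- ... the INSERT and UPDATE statements go here, with no
    --     per-statement @@ERROR checks needed ...

    COMMIT TRAN;
    RETURN 0;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRAN;
    PRINT 'An error occurred: ' + ERROR_MESSAGE();
    RETURN 1;
END CATCH
```

Any statement-level error inside the TRY block transfers control to the CATCH block, so the rollback logic lives in exactly one place.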

How to make this simple SELECT query perform faster

Hi,

I have the following simple SELECT query:

SELECT * FROM TRX_LAPD

But the problem is that table TRX_LAPD is very big. Also, I am using this query in a network environment.

If I run this query inside Microsoft SQL Server, the first time is somewhat slow (40 sec), but because of the caching capability the next time is very fast.

The thing is that I am using this query in Visual Web Developer so it can be accessed by other users on the local network, but there seems to be no caching; each time I execute the query it remains slow (40 sec), even from the server PC on which SQL Server is running.

I also tried a simple stored procedure, but nothing changed.

Below is my code in Visual Web Developer, in VB script:

Sub getmytable()
Dim sql As String
sql = "SELECT * FROM TRX_LAPD"
'or sql = "EXEC getTRX_LAPD" if I will use the Stored Procedure

MySqlDataSource.SelectCommand = sql
'where MySQLDataSource is an SqlDataSource control
End Sub

and the Stored Procedure that I tried also:

set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
go

ALTER PROCEDURE [dbo].[getTRX_LAPD]
AS
BEGIN
SET NOCOUNT ON;
SELECT * FROM TRX_LAPD
END

The SqlDataSource is bound to a GridView control.
I really appreciate any help.
Thanks

There are two basic rules of querying a database table:

1. Query only the rows that you need.
2. Query only the columns that you need.

By doing "SELECT * FROM TRX_LAPD", you are retrieving all the rows and all the columns every time. Is that really a necessity? If you are doing this on a "very big" table (as you say), it is bound to take time. I suggest you revisit the design of querying the entire table every time.
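As a sketch of both rules: restrict the column list and add a WHERE clause (and/or a TOP clause) so only the data the grid actually displays crosses the network. The column names here are placeholders, since the real schema of TRX_LAPD wasn't posted:

```sql
-- Sketch: fetch only the columns and rows the page actually shows.
-- TrxID, TrxDate and Amount are hypothetical column names.
SELECT TOP (100) TrxID, TrxDate, Amount
FROM   TRX_LAPD
WHERE  TrxDate >= '20060101'   -- only recent rows
ORDER  BY TrxDate DESC;
```

With a filter like this, an index on the filtered column (TrxDate in this sketch) lets SQL Server avoid scanning the whole table, which is usually where the 40 seconds goes.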