If you read my post on August 28, 2017, you know that I found a lot of SQL Servers sitting on 4K AU drives. What's worse, the data and log files were all over the damn place. I even found database names with spaces in them. FFS! I had to go through the servers and reorganize the files onto 64K AU drives. It was painful at first since I was doing the work manually, but after two servers I got tired of manually setting up permissions, altering the file paths, and moving the files over to the new directories. I came up with the following, which worked really well. There are still some manual steps involved, but it's not as bad.
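The core of that process is ALTER DATABASE ... MODIFY FILE plus a physical file move. Here's a minimal sketch of the pattern; the database name and paths (SalesDb, E:\Data, F:\Log) are placeholders, not my actual servers:

```sql
-- Take the database offline so the files can be moved.
ALTER DATABASE SalesDb SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- Repoint the logical files at the new 64K AU drives.
ALTER DATABASE SalesDb MODIFY FILE
    (NAME = SalesDb_Data, FILENAME = 'E:\Data\SalesDb.mdf');
ALTER DATABASE SalesDb MODIFY FILE
    (NAME = SalesDb_Log, FILENAME = 'F:\Log\SalesDb_log.ldf');

-- Now physically move the files and grant the SQL Server service
-- account access to the new folders, then bring the database back.
ALTER DATABASE SalesDb SET ONLINE;
```

The MODIFY FILE statements only update the catalog; the files themselves still have to be moved (and permissioned) before the database will come back online.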
Recently, I started at a new company as their Enterprise Data Architect. While reading through the DBAs' documentation, I saw that they formatted all their drives with a 64 KB allocation unit size. I was happy to see this and didn't think much more about it.
Last week, I was provided some new LUNs and wanted to check the storage team's work. They told me the drives had been formatted per my request. I ran the following PowerShell script to verify:
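A minimal version of that kind of check looks like this; the BlockSize property on Win32_Volume reports the allocation unit size (65536 means 64K, 4096 is the default you don't want for SQL data drives):

```powershell
# List the allocation unit size (BlockSize) for every NTFS volume.
Get-CimInstance -ClassName Win32_Volume |
    Where-Object { $_.FileSystem -eq 'NTFS' } |
    Select-Object -Property Name, Label, FileSystem, BlockSize |
    Sort-Object -Property Name |
    Format-Table -AutoSize
```

Any volume showing a BlockSize other than 65536 didn't get the 64K format that was requested.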
Tonight I was working on compressing indexes for a SharePoint database. I ran into some issues because some of the columns use the SPARSE attribute, which is great for saving space on NULL values, but because of it you can't compress the table or its indexes.
The following script will help you identify which tables have sparse columns instead of looking through them one at a time:
SELECT DISTINCT o.name
FROM sys.columns c
    INNER JOIN sys.objects o
        ON c.object_id = o.object_id
WHERE o.type = 'U'
    AND COLUMNPROPERTY(c.object_id, c.name, 'IsSparse') = 1;
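If you want to see the restriction for yourself, here's a quick throwaway repro (dbo.SparseDemo is a made-up table, not one of the SharePoint tables):

```sql
-- Throwaway table with a sparse column.
CREATE TABLE dbo.SparseDemo (
    Id   INT NOT NULL,
    Note NVARCHAR(100) SPARSE NULL
);

-- Fails: SQL Server rejects data compression on tables
-- that contain sparse columns.
ALTER TABLE dbo.SparseDemo REBUILD WITH (DATA_COMPRESSION = PAGE);

DROP TABLE dbo.SparseDemo;
```

The rebuild errors out as soon as compression is requested, which is exactly what I was hitting on the SharePoint tables.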
This past weekend, I had to migrate a SQL instance from an old UCS blade to a new one. The new blade would boot from the ZFS SAN instead of the problem-ridden NetApp SAN. It might be in that state because of all the hands that have been in it, but suffice it to say, it sucks.
I ran into two unknowns during this migration.