Shivering on the 49th Parallel
Friday, 26 October 2012
This is quick & dirty, just to get you installed; maybe one day I'll refine it and add screenshots... but probably not :)
Friday, 26 October 2012 13:16:14 (Pacific Standard Time, UTC-08:00) | Comments [2] | Microsoft | Windows#
Tuesday, 20 March 2012
First post of the new year! Also can't be arsed to install WL Writer so doing this in the web form. Blech. :) One of my "projects" for 2012 is to suss out DirectAccess, a transparent "VPN-less" secure connection back to the mother ship from a roaming corporate laptop. On paper it sounds pretty good, but from a demonstration point of view, it ranks up there with watching grass grow or paint dry. When set up and configured, a laptop (or desktop, I suppose) out of the office and off the corporate network can access network resources behind the firewall. Going the other way, IT can centrally control corporate laptops out in the field via Group Policy, WSUS and other technologies. To give a demo, you'd take your laptop off-campus, fire it up, log in... and... use it... Not much of a demo :) The stuff going on behind the scenes is interesting, but not for the average person. My engine, however, gets running.

I ordered up an HP Microserver last month to try this out on. I suppose I could have installed 2008 R2 on any old computer kicking around, provided it had two network ports on it, but I also wanted to do a hands-on with this little server. The HP Microserver is ridiculously cheap for what it is: an HP ProLiant server. It's about half the size of a breadbox and has four non-hot-swap SATA drive bays, two memory slots, a PCIe x16 and a PCIe x1 half-height slot, a 5.25" drive bay for an optical or tape drive, and one large low-rpm fan on the back, so it's really quiet. All that for about $400. I bumped up the price somewhat by doubling the RAM and adding a server NIC card to get a few more network ports on it, but it was still under $1000. Putting a copy of Windows Server on it is where most of the expense comes from. Since this is a test, I put a TechNet/MSDN copy on it and fired it up.
There are a lot of prerequisites for setting up DirectAccess, including a good CA/PKI setup, and probably the most difficult part: 2 consecutive public IP addresses that don't end in 09-10. I've got all that covered now, so my next step will be to make some changes to Active Directory and my edge firewalls, and then I can try it out!
Tuesday, 20 March 2012 08:28:53 (Pacific Standard Time, UTC-08:00) | Comments [2] | Active Directory | Hardware | Microsoft | Networking | Servers | Windows#
Thursday, 08 September 2011

Well this is interesting. First of all, do not move any vhd or avhd files around, whether your guest VM is running or not.

I came back from a week’s vacation to find that my VMs were pretty much all broken. Awesomesauce. What happened was that the server that I run SCVMM on is also the Backup Exec server, and due to a mistake by some end-user, the size of the weekly backup jumped about 600GB; the backup2disk folder ran out of space and halted all backups. All the virtual machines paused themselves too, because the host was out of hard drive space.

To alleviate the situation, a co-worker found 100GB or so of files in a “snapshot” folder under the VM’s folder and moved them elsewhere. What he didn’t know or realize was that these VM files have very specific ACLs that are tied to a username called NT Virtual Machine\{SID}.

When you move a file in Windows on the same volume (say from My Pictures to My Pictures\vacation 2011), it takes its permissions with it. When you move a file to a different volume (a D drive, a flash drive or a network drive), it inherits the permissions of its new home. Normally that’s a good thing, but for these snapshot files, it’s a bad thing. A very bad thing.

I discovered this when I found the files and moved them back to where they were. The VM still would not start up and was giving all kinds of cryptic errors: unable to mount, unable to start virtual controller, things like that. I should have made a note of the exact errors and put them here for people to find, because figuring out what to do was a bit of a pain. Ultimately I found a KB article that described how to reset the permissions and re-assign full control to the NT Virtual Machine\GUID user on the folder, and then on each of the avhd files directly, using your favorite tool and mine: icacls.exe
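For reference, the repair looked roughly like this. This is a sketch from memory, not the exact KB text; the folder path and the GUID below are made up, and yours comes from the error messages (or the VM's XML file name):

```shell
REM Grant the VM's service account full control on the VM folder, inherited downward
icacls "D:\VMs\MyVM" /grant "NT VIRTUAL MACHINE\5A6F1A8B-1234-1234-1234-000000000000":(OI)(CI)F /T

REM ...and directly on each snapshot differencing disk (/L processes links rather than targets)
icacls "D:\VMs\MyVM\Snapshots\disk1.avhd" /grant "NT VIRTUAL MACHINE\5A6F1A8B-1234-1234-1234-000000000000":F /L
```

Run `icacls <file>` on a healthy VM's vhd first to confirm what the ACL is supposed to look like before you start granting.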

This allowed the machine to start back up, and everything seemed to be OK, so after 24 hours I thought I’d figure out how to get rid of those snapshot files and free up that space “the right way”. The first problem was that I did not have any snapshots of this VM, so how could I have snapshot files??

I found this article called “Hyper-V: What are these *.avhd files for? Snapshots? But I have no snapshots!” while Googling around, and at first was stumped, because I couldn’t see what he was describing. I followed his directions to shut down the VM and power it off (the guest) and realized that yes, it had been paused and rebooted, but it had never been shut down in nearly two years. I powered it off (it’s an MDT and WSUS server, so no “production” data on it) and looked around for the “merging 1%” to show up, and it didn’t. I couldn’t figure it out! Why couldn’t I see this happening in my SCVMM administrator’s console? On a whim, I decided to try the “local” Hyper-V MMC snap-in, so I fired up the Server Manager and drilled down to it. There it was, on the main screen under “Operations”: Merge in progress: 11%

I watched it for a few minutes and saw that one of the AVHD files disappeared! It was working! Awesome! So now it’s merging “the big file”, which is where all the deployment images and WSUS download data live, and that is taking a while longer. As soon as the first AVHD file disappeared, I looked at the drives and saw that there was now 80GB free, and the backup jobs resumed their steady march.

Once this is done, I’m going to have to do the same to the other Guest VM on this machine, which IS a production machine and probably has even more data in it, so that will have to wait for 5pm and run overnight.

Thursday, 08 September 2011 09:27:09 (Pacific Standard Time, UTC-08:00) | Comments [0] | Microsoft | Servers | Windows#
Tuesday, 28 June 2011
Like the sword of fucking Excalibur, only the anointed, chosen one can pull the Export-Mailbox cmdlet out of the stone.
Tuesday, 28 June 2011 13:58:04 (Pacific Standard Time, UTC-08:00) | Comments [0] | Rants | Mail Server | Microsoft | Servers | Windows#
Tuesday, 07 June 2011

Last year I set up a Windows Server 2008 Core server. It was a Hyper-V virtual machine, it was minimum-spec, it didn’t do much other than be a second Domain Controller on the network so I hardly ever had to interact with it. Based on that criteria, and because I wanted to see what it was like, I installed Windows Server 2008 Core.

Windows Server 2008 Core, if you’re not familiar, is a Windows server with no windows: when you log in, you get a command prompt, and that’s it.

Configuring it after installing was a bit of a bear, because instead of clicking anything, you had to learn, know and type the commands into the terminal, along with all the arguments/switches. I got it set up, configured, joined to the domain and then promoted to be a domain controller and that was pretty much it. I set it up so that I could use Remote Desktop to connect to it, but what I really wanted to do was use the Server Manager on another server to connect to it and manipulate it that way.

I found out the hard way that you can’t really do that. I did find a piece of software written in Visual Basic called CoreConfigurator which created a text-menu-based configuration helper and it was pretty good. They also had a Version 2 which was written in Powershell that had a bit of a GUI to it… but it wasn’t compatible with Windows Server 2008 (the Vista server, if you will) only Windows Server 2008 R2 (the Windows 7 server). I pretty much dropped it after that, since it was running and I didn’t need to do anything to it.

Eventually I upgraded it to Server 2008 R2 when my licensing allowed me to and then I could use CoreConfigurator V2.0. Remote management still wasn’t working, despite the server’s command-line status updates to the contrary. Again, it was working and I had more important things to do.

Today I was trying to track down something (seemingly) entirely unrelated. Some clients could access a DFS share on the domain, and others could not. I followed the trail to the Domain Controller (DC1) and checked DNS services, and they were all fine. I then looked at DC1’s DNS servers and it was pointing at DC2 (the Server Core) so I opened it up and checked it out. I thought to myself “Wouldn’t it be nice if I could control DC2 with the Server Manager on DC1?” so I decided to take another run at it.

On DC2 I entered winrm quickconfig to see what was configured. As expected, it said:
WinRM already is set up to receive requests on this machine.
WinRM already is set up for remote management on this machine.

So I tried “Connect to another computer” in Server Manager and… bonk. “Server Manager cannot connect to server_name. Click retry to try to connect again.” Opening the details tab gave more detail, but it’s pretty much all gibberish, even to me. “Connecting to remote server failed with the following error message: The WS-Management service cannot process the request. The resource URI ...:// was not found in the WS-Management catalog. The catalog contains the metadata that describes resources, or logical endpoints.” Right.

I started with the error code, and then the hex code and ultimately ended up at a Microsoft KnowledgeBase article that hit the nail right on the head.

Error message in Windows Server 2008 R2 or in Windows 7 when you try to connect to a remote server: "Server Manager cannot connect to <server_name>"

Following this article, I typed sconfig from the command line on the Server Core box, chose item 4 “Configure Remote Management” and then option 3 “Allow Server Manager Remote Management”. It then re-configured WinRM (which was already configured correctly) but, interestingly, added three new rules! It didn’t say what those rules were, but after restarting the server (because I had to enable PowerShell) I was able to connect to the server using Server Manager from any of my other servers or my Windows 7 laptop.
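If you'd rather not walk the sconfig menus, my understanding is that the same steps boil down to something like this on 2008 R2 (hedged; the script name and switches are from memory, so verify them on your build):

```shell
REM On a Server Core box, interactively:
sconfig
REM -> 4) Configure Remote Management -> 3) Allow Server Manager Remote Management

REM On a full 2008 R2 install, the equivalent is reportedly a bundled script:
powershell -Command "& 'C:\Windows\System32\Configure-SMRemoting.ps1' -force -enable"
```

Either way it ends with WinRM reconfigured and the extra firewall rules in place, which is what Server Manager actually needed.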

Tuesday, 07 June 2011 12:35:39 (Pacific Standard Time, UTC-08:00) | Comments [0] | Tech | Active Directory | Microsoft | Networking | Servers | Windows#
Wednesday, 19 January 2011

I started out the task flying pretty high. I worked on a deployment for some new HP laptops and Windows 7 Pro x64 and things were working out as planned.

Once I got it to where I could PXE boot the laptop, connect to the deployment share and lay the Windows 7 x64 image down on it, it was time to get down to the nitty gritty: Drivers. Applications. Packages. Automation.

Drivers were fairly easy, I’ve been importing them for a while now, but what I wanted to do was segregate them into distinct little piles, rather than one motherlovin’ huge pile of inf files, and I wanted a computer to only get the drivers it needed for itself, not the whole lot of them.

MDT 2010 provides for this, and there are plenty of good tutorials out there on the net waiting to be found, so I won’t “waste ink” posting it here again. I highly recommend you use the Readability bookmarklet before going to any of the articles on that site, though. They have ads and crap on all 3 sides and a narrow column in the middle with small text for the actual article.

So we got a bare-bones Windows 7 install at this point, with a bunch of Unknown Devices in the Device Manager. Windows 7 is smart enough that most of them have drivers advertised through Windows Update so right-clicking them and selecting “update driver” will find it… but that’s not why we’re using deployment tools, I want it to come out the other end of my process shiny and clean and ready to be used. Following information in those links above and elsewhere, I was able to have WindowsPE detect the make & model of the laptop, and then look that up in my deployment database and download the drivers I specified. Awesome! All but one… one sticky wicket that wouldn’t work because the manufacturer chose to make the driver file a software installation, instead of just a driver. (hate)

On to the Applications settings in MDT 2010, then! Applications don’t work as well as the drivers do. There are no Selection Profiles for applications like there are for Drivers. Sure, you can set MandatoryInstallation <guid> in the customsettings.ini file for the whole deployment share, but then it gets installed on every machine that connects, not just the one laptop model that needs this particular driver, so that’s out, too.

Searching around on this topic led me to the Make & Model settings under Advanced Settings>Database. I created a new entry using the Make and Model of the laptop, using the data I got from the BIOS. To find out what yours is, drop to a command prompt and type ‘wmic csproduct get vendor’ or ‘wmic csproduct get name’. Once you’ve created an entry, you can double-click it to open its properties and assign things like Applications, Roles and Administrators. Applications is the one we’re looking for here, so I clicked on that tab and then clicked Add. I then selected the driver software .exe that I had set up (as a silent install… another topic!) and then clicked OK. I updated my deployment share and… it didn’t work.
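You can grab both values in one shot, which makes it easier to paste them into the database exactly as WMI reports them:

```shell
REM Make and Model as MDT's gather step will see them
wmic csproduct get vendor,name
```

Compare the output against what ZTIGather.log recorded on a machine that has already run through the deployment; as I found out below, the BIOS splash and the WMI strings don't always agree.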

I tried a few different things. I checked, double-checked, and triple-checked that I got the Vendor and Name correct; I tried moving the application around within the deployment share, but nothing worked. Because I was working with a physical machine, it took about 30 minutes to test out each iteration. While it was doing that, I opened the ZTIGather.log from the virtual machine that I had deployed to yesterday (it’s in C:\Windows\Temp\DeploymentLogs) and, using the Vendor and Name in there, I created another entry in the database and assigned it a very small application (most of the apps I have in the repository are huge… AutoCAD, Office, etc.) to try that one out. I updated the deployment share and this time, just in case, I also went into Windows Deployment Services and replaced the boot image with this newly generated one.

I booted the VM up, let it PXE boot, selected the x64 boot image and stepped through the Wizard, and when I got to the Applications screen… Holy smokes, it was there! Pre-checked! I tried un-checking it and then clicked Next, but then went back and it was re-checked, so it was treating it as a mandatory application, but only on that make & model of computer! I then rebooted the laptop into the same x64 boot image to see if it was working for my original problem. If it wasn’t, at least I had proved that it wasn’t an error with my database. I flipped through the screens to Applications and the driver was there and pre-checked! Hooray! I hurried through the rest of it and let it deploy.

Once it got to the Windows 7 desktop and the last stages of the deployment were running, it installed the driver software. I rebooted (Windows Update kicked in right away) and when it restarted, I checked out the Device Manager: nothing was showing as Unknown Device! Hooray! One machine down, 2 more to go; get a few more apps in there and my MDT 2010 deployment share will be ready to kick out the Win7 Pro x64 jams to all comers! (Well, within my company and licensing agreement, anyway) :D

Wednesday, 19 January 2011 16:57:01 (Pacific Standard Time, UTC-08:00) | Comments [0] | Deployment | Microsoft | Servers | Windows#
Thursday, 25 November 2010
The weird thing is that the server continued to, well SERVE the whole time it was in that compromised state, so the users didn’t know anything was wrong. In the meantime my ass was puckered so tight I was pulling the fabric of my seat right up into my ass leaving little rosebuds everywhere I sat.
Thursday, 25 November 2010 18:27:56 (Pacific Standard Time, UTC-08:00) | Comments [0] | Microsoft | Servers | Windows#
Wednesday, 20 October 2010

Last night I logged into work from home to initiate a reboot of all the servers. Windows Updates were pending, and had been pending for about a week, but it’s hard to reboot production servers in the middle of the day when people are using them. Throw in some flex hours, and they’re in use from 6am to about 8pm.

The Domain Controllers have their own policy for updates, and they’re still required to be initiated manually, and then “restart now” clicked to reboot them.

When new “critical” patches are released and there are known 0-day flaws being exploited, I’ll use the ‘deadline’ feature in Windows Server Update Services (sort of a mini Windows Update server you can run on your own network: it downloads each update from Microsoft once, and you approve and distribute them yourself). If a deadline passes while a user has been clicking “restart later”, it disables that button and starts a 15-minute countdown before it forcibly reboots.

There was no deadline on this latest batch of updates from the last Patch Tuesday, so the (member) servers were politely asking to be rebooted. I logged into each of them one by one and clicked “restart now” and then waited for them to shut down, restart, and start back up again.

All of them worked and came back up (according to pinging them for responsiveness) except one. It SEEMED to come back up. I could ping it and it responded, so I moved on to the next and the next and the next.

It wasn’t until this morning, when I walked in the door and had four people waiting for me saying “the network is down” (which of course was a misnomer; the network wasn’t down, it was just the shares on THE MAIN FILE SERVER that were disconnected), that I found out otherwise. I poked my head into the server room, and the KVM was already set to that server. On the screen (which was blue, but not that Blue) was “Configuring Updates stage 3 of 3 0% Do not turn off your computer”. I watched it for a minute to see what would happen, as the hard drive LEDs were blinking away, so it WAS doing SOMETHING… then the screen went black.

The cursor was flashing up in the upper-left, so I waited some more… then the BIOS splash screen came up. The server had rebooted itself.

Turns out it had been in this startup, stage 3, fail, reboot loop since 9:00 last night.

Step 1, try a cold-boot. I waited for it to fail again, and then I held down the power button until it powered off. I removed the power cables and let it sit for 30 seconds to make sure everything had powered off, plugged it back in and tried again. Same result.

Step 2, try Safe Mode…. Applying Computer Settings… Configuring Updates stage 3 of 3… reboot. Crap.

Step 3, Last Known Good Configuration. This rolls the registry back to the control set from the last time you successfully logged in. You would think that this would break it out of a bad update loop. You would be wrong.

Step 4, booted from the Windows Server 2008 x64 DVD and clicked on Repair. There’s a new “Startup repair” tool that’s incl-wait, it’s not? only in Server 2008 R2 that’s based on Windows 7 and NOT in Server 2008 that’s based on Vista? There are NO repair options for Server 2008 other than re-imaging of the system from the latest full-system-image? You DO have one of those, right?

Step 5, Uncle Google suggested I click through to “Get Vista out of the Infinite Reboot Loop” and the comment there by Tribus was:

I know a different way to resolve this issue without using a restore point.
1. Insert your Vista media into your drive and boot from it.
2. Select "Repair your Computer" from the list.
3. Select "Command Prompt" from the recovery choices.
4. At the command prompt change your directory to C:\Windows\WinSxS
5. Type: del pending.xml
6. Exit and reboot
This will fix all Windows update reboot loops and does not require you to restore your PC to an earlier state.

Figuring I had nothing left to lose, I gave this suggestion a shot, even though it was for Vista. If this didn’t work, then I’d be getting on the horn to Microsoft Support for some help. Instead of deleting it, I renamed it pending.xml.old and then exited and rebooted.
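The rename variant of those recovery-console steps looks like this (assuming a default install where WinSxS lives under C:\Windows):

```shell
REM From the recovery environment's Command Prompt
cd /d C:\Windows\WinSxS
ren pending.xml pending.xml.old
REM then exit and reboot
```

Renaming instead of deleting means you can put the file back if it turns out you need to open a support case after all.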

“Applying computer settings…” OK so far so good…

“Configuring Updates stage 3 of 3 0%. Do not turn off your computer…” FUCKBURGERS!!!

“Press Ctrl+Alt+Del to Begin” WHAAAAAAAAAAAT? it worked.

Once it was up and running, the first thing I did (other than tell the users they could access their files again) was to look in the event log and see what happened. On the first reboot last night at 9pm, there was an event from source Winlogon, Event ID 6004: “The winlogon notification subscriber <TrustedInstaller> failed a critical notification event.”

So the next step is to research that error and see if I can figure out WHICH update caused it… it could be a moot point though because my co-worker turned up some early results that once you do this, you’ve pretty much broken Windows Update on this computer forever. I can live with that for now, because people are working and the data is intact. If I figure out that that is the case, and figure out a workaround, I’ll post a follow-up.

Wednesday, 20 October 2010 09:07:51 (Pacific Standard Time, UTC-08:00) | Comments [0] | Microsoft | Servers | Windows#
Wednesday, 18 August 2010

I’ve written before about what a huge, horrible, steaming pile of horse shit you have to wade through to install a 32-bit (x86) driver on a 64-bit (x64) server. It’s SO counter-intuitive it makes me want to scrape my eyeballs out with a grapefruit spoon and then chop off my fingers so I won’t be able to see a computer or type ever again.

In a nutshell, you need to have a 32-bit client running Vista or Windows 7, install “the full meal deal” printer driver on that client, THEN connect to the 64-bit server’s printer share (\\server\printer) and then tell it to use the existing driver. That will then UPLOAD the driver from the client machine to the server and make it available to other 32-bit clients who try to connect to it.
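For what it's worth, the "additional driver" dance can apparently also be scripted with printui.dll instead of the GUI. This is a hedged sketch from memory (the printer model string and inf path below are made up; check the switch list with `rundll32 printui.dll,PrintUIEntry /?` before trusting it):

```shell
REM Install a driver of a specific architecture on a print server
rundll32 printui.dll,PrintUIEntry /ia /m "Example LaserPrinter 9000" /h "x86" /v "Type 3 - User Mode" /f C:\Drivers\x86\printer.inf
```

The /h switch is the interesting one, since it lets you name an architecture other than the one you're running on.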

Today I’m in the opposite situation. I PURPOSELY set up a 32-bit Windows Server 2008 (not R2, which is 64-bit only) to run my print queues, because 99.9% of my network is 32-bit Windows XP clients and I didn’t want to have to go through this rigmarole for every single one of them. *MY* laptop, however, is running Windows 7 Professional 64-bit, and it’s unable to connect to the shared printers on the 32-bit server.

Rather than duplicate the steps above, since I was feeling saucy and experimental, I went the other (old) way around. On the 32-bit server, I opened the printer properties, went to the Sharing tab and clicked on Additional Drivers. I checked the 64-bit box and it asked me for a driver. I clicked Browse, navigated to the folder where I had the 64-bit driver .inf file for the printer, selected it and clicked OK.

Fast-forward a few seconds: the window closed, and the box was checked. Just like that. Just how it USED to be in older versions of Windows Server. I went back to my laptop and tried to connect to the printer, and this time, instead of failing with “Driver Unknown” or, even worse, the 0x80004005 error, which is one of the more generic error codes you’ll ever see (I always thought it was “Access Denied”, but that’s just ONE of the errors it COULD be), up came a NEW dialog box: Do you trust this printer driver? Yes, of course I do. Just like that, it mapped the printer, using the 64-bit driver on the 32-bit server.

If it’s so bloody easy to do that with a 64-bit driver on a 32-bit server, why the HELL is it SO difficult and bass-ackwards to do it on a 32-bit driver with a 64-bit server??

Wednesday, 18 August 2010 10:09:35 (Pacific Standard Time, UTC-08:00) | Comments [0] | Tech | Deployment | Hardware | Microsoft | Networking | Servers | Windows#
Tuesday, 13 July 2010

Last Friday, one of the workers here in the office came over to me and said that he got an error in his inbox about a message that had been delayed. Not permanently, just delayed. I told him to leave it, since it would retry for the next 48 hours, and looked into it.

I connected to the Exchange 2010 server, opened the Exchange Management Console, went straight to the Toolbox and clicked on Queue Viewer. There they were, pretty ducks all in a row, all with DNS FAILURE errors. Huh. Interesting. I saw this happen once before when we were setting the server up: the DNS server it was set to use was offline, so with no DNS resolution it didn’t know where to send the mail. Thinking this was the case again, I checked the network adapter settings and saw that the preferred DNS server was the other VM “next to” the Exchange 2010 VM, and the secondary was set to “my” DNS server here in my office.

I checked my DNS server first, just to make sure the service was running, and it was. I then checked the DNS server that was its primary, and it, too, was running. Mystery. Nslookup queries failed and timed out, even for common domain names. Not good. This was happening on both DNS servers.

I called in a support ticket (this was Friday at 4:00) and found out that the Exchange SysAdmin was on vacation and not back until Monday, and he was being covered by another Exchange SysAdmin on East Coast time. She called me back about 20 minutes later and we worked on it for a good 40 minutes with no resolution. She figured that since the DNS server was rebooted, it had been unable to contact the PDC role holder and authorize/activate itself, and that there must be a problem with the VPN between my network and hers.

This seemed like a valid diagnosis, as the other Administrator here at work told me that our router had been failing every 30-40 minutes, but recovering after a minute or two and was obviously dying. Yikes. This caused a little panic as ALL my sites use the same router/firewall and they’re discontinued and I hadn’t yet created a contingency plan to replace them.

She escalated the ticket up to tier 3 networking support, who tested the VPN and said that everything was up on their end, but they couldn’t ping my side of the VPN; therefore there was a problem with the VPN and it was on my end (naturally). I don’t know too much about the router/firewalls we use here, I’ve been slowly learning as I go, but diagnostics and troubleshooting were beyond the scope of my knowledge, beyond “well, the blinky light is green, not red, so it’s up”.

Further compounding the matter was that this VPN was temporary, because we were switching it on Monday from an Internet VPN to a private, routed DSL connection into their MPLS network. That ADSL modem was plugged in to power and phone, but not into the LAN as it was just for testing.

At some point over the weekend, one of the emails from their networking people said that they could ping as far as the .252 address but no further. This was when the light bulb went off in my head. .252 is the address of the new ADSL router, NOT the VPN endpoint! Their network techs were trying to reach my network via a device that was physically unplugged! I thought it was odd, since I was connecting from home via VPN through the same endpoint and it was up.

Monday came and I plugged the DSL modem into the LAN, disabled the Internet VPN connection from my network to theirs, and created a new route sending all traffic destined for their network to this new gateway. Outlook clients in my LAN segment were connecting via the MPLS network, verified by the IP addresses on a traceroute… I could Remote Desktop into the virtual servers in their network… everything seemed to be working. Their network guys could still not ping my LAN from the MPLS gateway, but I could ping back to my network from the virtual servers, which was the important part anyway. So that left me with the DNS problem, which was still ongoing, and some people were now starting to get NDRs because the 48 hours had timed out.

I started with my own laptop and did an nslookup query. Request timed out. Damnit! I checked the DNS server; the service was running. I restarted it; it still failed. I looked at the event log and there were a bunch of “DNS server encountered an invalid domain name” errors, but the errors were coming from all these weird IP addresses that were not in my network. I then thought that perhaps it was the forwarding that wasn’t working, based on a few results that came up when I searched that error message online. I checked the forwarders on my DNS server and found that they were set to use two servers, one of which resolved to a hostname, and neither of which responded to an nslookup query. How on earth did I end up with two (seemingly) random Shaw Cable DNS servers for my forwarders when I have a Telus ADSL connection in this office? That could explain why they didn’t respond; my IP address wasn’t in the Shaw Cable network!

I changed the two forwarders to the OpenDNS resolvers. I then restarted the DNS Server service and BAM! nslookups all worked. I then went back to the Exchange server and tried again. Still failed. OK, I had an idea of what was going on now, so I connected to the DNS server there and checked its event logs. Similar messages, different addresses. I opened the DNS snap-in and went right to the forwarders. The two forwarders on this server were two Telus servers! This was a co-located (sort of) virtual server within an ISP, so how did I end up with Telus servers there?! I changed those two forwarders to OpenDNS and restarted the DNS Server service, and as I was opening a command prompt window on the Exchange 2010 server to try nslookup again, I could see the emails in the retry queue (which was still open) begin to flow out. I tried nslookup queries on a couple of domain names that I knew were in the retry queue, and they all answered lightning fast as non-authoritative responses.
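I did all of this through the DNS snap-in, but the same change can be made from the command line with dnscmd. A sketch, where the OpenDNS anycast addresses ( and are the only specifics I'm supplying myself:

```shell
REM Show the current forwarder list
dnscmd /Info /Forwarders

REM Replace the forwarders with the OpenDNS resolvers and bounce the service
dnscmd /ResetForwarders
net stop dns
net start dns
```

Handy if you ever have to fix forwarders on a Core box, or on several servers at once.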

So in the end, I figured it out myself, but the million-dollar question that I can’t answer is HOW did my local DNS server get a Shaw DNS server as a forwarder, and how did the VM DNS server in the datacenter get a Telus one??

Tuesday, 13 July 2010 08:44:13 (Pacific Standard Time, UTC-08:00) | Comments [0] | Tech | Active Directory | Mail Server | Microsoft | Networking | Servers | Windows#
Friday, 28 May 2010

Two lies for the price of one!

This morning I took a new server out of the box for a small branch office. It’s an HP ProLiant ML150 G6 tower server: Xeon Quad-Core processor, 2GB RAM and a 250GB SATA HD. I also upped the RAM to 4GB, added a 2nd 250GB drive and a pair of 500GB drives to give me a RAID1 array for the OS & Apps and a RAID1 array for the data.

Once I configured the RAID arrays, I booted using the Easy Setup CD. The Easy Setup CD is something that HP and Dell (among others?) send out with a server to speed up and make life easier on the person installing Windows. It’s Linux-based and walks you through picking a drive to install onto (the HP one even comes with an admin tool for the SATA RAID controller, to configure the arrays if you hadn’t already done it in the BIOS), and then you provide your Name, Company, Product Code and which version of OS you’re installing, from a list including Windows Server 2003, 2003 R2 and 2008, in different flavors (32-bit or 64-bit). The Dell one goes even further into pre-configuring IP addresses and even joining a domain. Once it has all the information it needs, it creates partitions and copies/pre-stages drivers from the CD to the hard drive so Windows Setup knows where to find them and can “see” your drives on your RAID controller.

I went through the steps, and when it came time to choose an OS, Windows Server 2008 R2 was not on the list. I figured Windows Server 2008 x64 was the closest thing and chose that. It did all its gyrations and then prompted me to insert the Windows OS disc. I put in my Windows Server 2008 R2 disc and… was rejected. Odd. I tried again, same response. “Please insert the Windows Server 2008 x64 OS Disc”.

At that point I realized that it was looking at the volume name on the disc and whatever my disc was, it wasn’t what was expected. Le Suck.

I got onto HP’s support site to find an updated Easy Setup CD, and eventually found the right page, but it only lists Server 2008, not Server 2008 R2. Lame. I kept looking and searching and ultimately hit the Support Chat button and got an HP Tech Support agent on the line. I explained my predicament and he sent me a link back to the page I had just been looking at. I knew it was the same page, because the link was purple instead of blue (i.e., already visited).

I explained that I had already looked at that page and it wasn’t what I was looking for. Then he decided that I must have had a 2008 R2 Hyper-V error and pushed me a link to an MS KB article that had three steps: 1) disable hardware virtualization; 2) install this hotfix; 3) re-enable hardware virtualization.

I calmly explained that I didn’t have Windows installed yet, so how could I possibly install a hotfix? He said I should download it, burn it to disc and then boot off the disc and apply the hotfix. I re-iterated that I did not have Windows installed, so there was nothing to patch with the hotfix.

“OK, skip step 2 then”

Riiiiight. So that leaves me with “disable hardware virtualization” and “re-enable hardware virtualization”. Since I hadn’t turned it on in the first place, it was a moot point, and I told him so. He had reached the end of his flowchart and didn’t know what to do next.

At that point I booted off the Windows Server 2008 R2 disc itself and, as expected, it couldn’t see any drives. I downloaded the SATA RAID controller driver, extracted it to a USB flash drive, jammed it in the server and clicked “Load Driver”. I pointed it at the folder and it found a driver for an HP B110i Embedded SATA RAID controller. Jackpot! The drives showed up, but… “Windows could not be installed on the selected disk.”

Searching Google with the error number that was presented turned up some “Windows 7/2008 R2 can only be installed to the first boot device/C drive” results, so I went back into the BIOS and RAID setups to make sure that Disk 1 was the first device. It was.

I got back to the Load Driver screen and noticed that my USB flash drive was designated C:, the DVD-ROM drive D:, Disk 1 Partition 1 was E:, and the WinPE boot drive X:. I deleted the partition on Disk 1 and tried again. Same thing.

Finally, I booted back again without the USB drive, waited for the Load Driver screen to show, clicked Browse and THEN jacked in my flash drive. It showed up as C. I picked the driver and loaded it, and then removed the flash drive, waited 5 seconds, just to be sure, then clicked “Disk 1 Drive 1 Unallocated Space”, held my breath and clicked “Next”…


It worked.


Windows Server 2008 R2 is now installed on my new server and I’m running through Windows Updates and configuring it to be part of my network. Had I done what I knew worked to begin with, I’d be sippin’ a margarita by now, but instead I tried to do things “the HP way” and it wasted my lunch hour and most of the afternoon. The Easy CD way (if it had worked) would have been equally quick.

It galls me that a company the size of HP, with the volume of servers they sell, hasn’t released an update to their software yet. Windows Server 2008 R2 was released to manufacturing in July 2009 and went on sale in October 2009. It’s almost June 2010 and they still have not addressed it. What makes it worse is that this entry-level server is aimed at the segment of the market that doesn’t really have its own IT department to figure this out on its own.

I think I’d like that margarita now, señor, por favor!

Friday, 28 May 2010 14:35:35 (Pacific Standard Time, UTC-08:00) | Comments [4] | Hardware | Microsoft | Servers | Windows#
Wednesday, 17 March 2010

There are a lot of blogs, classes, tutorials, how-tos, workshops, links and opinions on how best to deploy Windows 7 using the new Microsoft Deployment Toolkit 2010. What there’s a distinct lack of is how to make these tools work with XP, which most of us are still using. I am planning to move to Windows 7 x64 later this year, but we have a software dependency on 32-bit Windows that we have to get past first (and no, Windows XP Mode won’t cut it for this app).

I spent most of yesterday downloading software and patches. The Windows Automated Installation Kit 2.0 (which supports Win7 and 2008 R2, and back to XP) alone was a 1.7GB ISO file, which took a couple of hours.

Eventually last night I was ready to start the capture of an existing Windows XP box that I could then deploy to the other new machines.

This morning I tried to do it and it failed. I assumed it was permissions-based, since the error was 0x00004005, which I know from past experience is “Access is denied”. After sorting that out, it still failed. Trolling through forums from a Google search, I found some people were able to get it to work by using the IP address of the deployment server, or sometimes the FQDN, rather than just “\\server\share$”.

I rebooted, opened Windows Explorer and navigated to \\192.168.x.x\share$ and when it asked me to authenticate (because this is a workgroup computer and the share is a domain resource) I entered my credentials and then I double-clicked the litetouch.vbs script to kick off the imaging process. This time it seemed to work, it downloaded the WinPE files needed, ran sysprep and then rebooted to capture the image… except that’s when it failed.
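The manual workaround above boils down to two commands: authenticate against the share by IP first, then launch the LiteTouch script from it. Here's a sketch of that as a command builder; the server IP, share name and credentials are placeholders, and the `Scripts\LiteTouch.vbs` path assumes a default MDT 2010 deployment share layout.

```python
def build_capture_commands(server_ip, share, user, password):
    """Build the two commands that mirror the manual workaround:
    pre-authenticate to the deployment share by IP, then run the
    LiteTouch wizard script from that share."""
    unc = rf"\\{server_ip}\{share}"
    # 'net use' caches credentials for the share so the script can reach it
    net_use = f"net use {unc} /user:{user} {password}"
    # LiteTouch.vbs lives under Scripts\ in a default MDT deployment share
    litetouch = rf"cscript {unc}\Scripts\LiteTouch.vbs"
    return net_use, litetouch
```

For example, `build_capture_commands("192.168.0.10", "DeploymentShare$", r"CORP\admin", "s3cret")` yields the `net use` line to run first and the `cscript` line to kick off the capture.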

Digging into the winpeinit.log I saw that there was no NIC. Awesome. Great. I figured that the driver for the NIC would be part of the Windows image, but I overlooked the fact that WinPE at boot time would also need the NIC driver in order to connect to a network share and create the disc image there, and the new machines would need it to connect to that same share and copy the image down to the local computer.

No biggie, except that the computer is now stuck in a loop booting into WinPE rather than back into Windows XP. I injected the driver for the NIC into the deployment share’s Out Of Box Drivers and rebuilt/updated the deployment (which also adds the NIC driver to the winpe.iso file). All that’s left to do now is to PXE boot the machine which will download the new winpe (now with more NIC flavor) and start over… except now my PXE server isn’t configured properly :p

Wednesday, 17 March 2010 11:27:45 (Pacific Standard Time, UTC-08:00) | Comments [0] | Tech | Deployment | Microsoft | Networking | Servers | Windows#
Tuesday, 23 February 2010

How come a “printing system” has to be a 300MB download or a CD ordered by mail? I’m all for having that as an OPTION, but for servers and for shared printers, all I need is a driver, and that could probably still fit on a floppy disk… if my computers and servers still had floppy drives, but that’s another post!

I already posted about 32-bit printing in an increasingly 64-bit world, and my medium-term solution for that was to stand up a 32-bit Windows Server 2008 VM and use that as a print server.

This post is the next step: printer drivers. Specifically, migrating printer drivers from one server to another. For the small number of printers I have to manage (three printers and two plotters in this office), or even the number of printer queues at my last job (about 40), it’s not so difficult to do it manually. I did just that when we moved into a new building at my last job and stood up a VM just for print queues. Pretty straightforward, really: download the latest printer drivers from the manufacturer’s website, unpack them to a network location, Add Printer from the Printers window/control panel, new local port, new TCP/IP port, punch in the printer’s IP address, Have Disk, browse, click, select… done. 40 times. A wee bit time-consuming. For this migration I only had the six, so it should be even easier. But what if the newer version of a printer driver doesn’t work properly with your as-configured software?

That’s where I am right now. We have a Kyocera CM3232 photocopier/printer/scanner/fax. It’s a big one with its own onboard cost accounting and “proper” network scanning & faxing. It does color and black & white and prints on up to 11x17 paper (although not borderless). On the old OLD server, printing CAD drawings from Acrobat Reader plots properly. On the new-old server, it didn’t. There were some weird issues where drawings would not be rotated based on the settings you selected in Acrobat, but if you left Acrobat’s settings on Portrait and clicked Advanced Print Properties and changed it to landscape in the driver settings, it would work. Not very intuitive, and sure to be the cause of plenty of helpdesk calls.

We tried a different driver, we tried an old driver from a CD that presumably came with the printer, and nothing seemed to work. In the end, I re-pointed everyone’s printers back to the old server and removed the queues from the new-old server… but that old server isn’t going to last much longer, it’s not easy to find parts for an old IBM xSeries Pentium III tower server, and having a single Windows 2000 Server machine in the mix is also holding the rest of the network back.

The new-old server blew up in December. No big deal for printing, but HUGE FUCKING DEAL for everything else. I managed to get it up and running again, Frankenstein-style and convert it to a virtual machine before shutting it down for good and sending the carcass to the recycling center.

That new one is here, and one of its roles is hosting a Windows Server 2008 32-bit VM for print queues, so I’m back to trying to make the new server play nice and plot drawings properly… the Windows Server 2008 driver for the copier is doing the same weird things the 2003 driver was doing… If only there were a way to migrate those queues, drivers and ports over to a new server… oh wait! There is! Hallelujah, I think I hear a choir of angels singi—wait, what? That only really works for moving from NT4 to 2000? It wasn’t really updated for 2003, 2003 R2 or 2008? The tool has been retired? Oh good grief!

Fortunately there’s a new version built in to Server 2008 and Server 2008 R2. You access it from the Print Management administrative tool, as opposed to the Printers control panel applet. From there you can add the old server as a network print server, right-click it and export printers to a file… then right-click your new server and import printers from a file. I’m in the process of doing that right now, and will be testing it with CAD drawings later today. Fingers crossed.
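The same export/import can also be driven from the command line with Printbrm.exe, the console counterpart of the Print Management wizard. A sketch, assuming the tool's default location under the spooler tools folder; the server names and export file name here are placeholders:

```python
# Printbrm.exe ships with Server 2008/2008 R2 in the spooler tools folder
TOOL = r"%WINDIR%\System32\Spool\Tools\Printbrm.exe"

def printbrm_backup(source_server, export_file):
    # -s remote server, -b backup, -f output .printerExport file
    return rf"{TOOL} -s \\{source_server} -b -f {export_file}"

def printbrm_restore(dest_server, export_file):
    # -r restores the exported queues, drivers and ports onto the target
    return rf"{TOOL} -s \\{dest_server} -r -f {export_file}"
```

Running the backup command against the old server and the restore command against the new one is the scripted equivalent of the right-click export/import described above.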

Tuesday, 23 February 2010 11:43:52 (Pacific Standard Time, UTC-08:00) | Comments [0] | Microsoft | Networking | Servers | Windows#
Friday, 12 February 2010

(or a 64-bit domain anyway)

Hooray! 32-bit is dead! Long live 64-bit! … … … not exactly.

While there are more 64-bit machines out there now than there were a year ago and tons more than a few years ago, a lot of businesses are still firmly entrenched in 32-bit Windows XP. I know we are.

We’re a pretty good example of someone who SHOULD make the leap to a 64-bit OS. If there’s one segment of the market that supports 64-bit and is extremely memory-hungry, it’s CAD work. And we’re all about CAD work. I’ve recently upgraded all the computers to 4GB of RAM and standardized them on one video card (nVidia Quadro FX 580 512MB), but they’re not taking full advantage of that 4GB of memory because 32-bit XP Professional can’t address it all. Even with the /3GB switch in the boot.ini file, that just means acad.exe can use more than the 2GB per-process limit… but I’m getting off topic.
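For reference, the /3GB switch goes on the OS entry line in boot.ini; a typical XP entry looks something like the sketch below (the disk/partition numbers vary per machine, and a /USERVA switch can be added alongside it to fine-tune the split):

```ini
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /3GB
```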

When I started here in Q4 of 2008, I took one look at the “datacenter” and my jaw dropped. The main file server was an old IBM xSeries with a Pentium III and a whopping 768MB of RAM and a couple of 160GB hard drives in RAID1. The web/intranet server was an even older one. Both were running Windows 2000 Server. The Domain Controller was newer, it at least had Windows Server 2003 on it, but it was consumer-grade, non-redundant components in a 2U rackmount case.

Before Christmas rolled around I had replaced the ancient file server with a pair of Supermicro SuperServers with quad-core Xeons, 4GB of RAM and 5x1TB SATA2 drives in RAID5 configurations, and added an LTO-4 tape backup to the mix. Between Christmas and New Year’s, the web server died, so I replaced that one with another Supermicro identical to the first two, but with just 2x250GB and 2x500GB drives in RAID1. All of these servers were running Windows Server 2008 Standard x64.

This led me to a major problem: I was able to install printer drivers for each of the printers on the servers themselves, but with the 64-bit drivers. Client computers (XP Pro SP2 x86) tried to connect and failed because they couldn’t use the 64-bit drivers. In the old days, you could go to the sharing tab of the printer properties and click “Additional Drivers” and that was pretty much that, but cross-architecture is a little more squirrelly, and the solution is counter-intuitive.

Here is how to provide a 32-bit driver in the Additional Drivers page on a 64-bit server:

Step 1: Install the 64-bit driver on the server itself and make sure that you can print.

Step 2: On a 32-bit client (I used XP Pro) download and unpack the drivers for the desired printer (in my case it was an HP Laserjet 4600).

Step 3: Open Windows Explorer and navigate to your printer share: \\64-bit_server\ and then double-click Printers and Faxes.

Step 4: Right-click the desired printer and select Connect. It will do its thing and then… uh-oh, where’s the driver? It will ask you to provide one. Browse to the local folder where you’ve stashed the .inf files for the printer and let it install. Print a test page to make sure it’s working on your computer.

Step 5: On the server, right-click the printer you just added and select Properties. Click the Sharing tab, and then click the “Additional Drivers” button. Check the “x86” box for 2000/XP and click OK. The server will then request the x86 versions of the files FROM your local workstation and upload them TO the server.

This is the back-asswards part that tripped me up. You’re actually uploading the driver TO the server so it’s able to then DOWNLOAD it to OTHER x86 clients that request it.

Step 6: Click OK, OK, OK all the way back out and you should be good to go.
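If you have more than a handful of queues, the per-printer clicking in Steps 4 and 5 can in principle be scripted with the printui.dll helper. This is a sketch only: the model name and .inf path are placeholders, and the exact /h and /v values should be double-checked against `rundll32 printui.dll,PrintUIEntry /?` before relying on them.

```python
def additional_driver_cmd(model, inf_path, arch="x86"):
    """Build a printui.dll command that installs a driver for another
    architecture into the server's driver store (roughly what the
    Additional Drivers checkbox does under the hood)."""
    # /ia = install driver from an .inf, /m = model name as it appears
    # in the .inf, /h = architecture, /v = driver version string
    return (
        'rundll32 printui.dll,PrintUIEntry /ia '
        f'/m "{model}" /h "{arch}" /v "Type 3 - User Mode" /f "{inf_path}"'
    )
```

For example, `additional_driver_cmd("HP LaserJet 4600", r"C:\drivers\hp4600.inf")` builds the one-liner that stages the x86 driver for that queue.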

Friday, 12 February 2010 17:00:00 (Pacific Standard Time, UTC-08:00) | Comments [2] | Tech | Microsoft | Networking | Servers | Windows#
Friday, 22 January 2010

WSUS is a pretty cool piece of software. Basically it acts as a “Windows Update” server for your network. Rather than have all your computers each download the same updates from Windows Update, your WSUS server downloads them once and then distributes them to all the computers that need them over your LAN connection, which is much speedier than 99.9% of the internet connections out there. It also gives you a single place to go to approve updates. Heard bad things about an update? Don’t approve it for installation and it won’t make its way onto any of your machines until you do (or they release an update to supersede it). A nice solution for small and medium-sized networks.

You can extend it out to different geographical sites, too. Using a downstream replica server, you can have your server in another office “take its lead” from your server and either download the updates from you, or (and this is cool) download only the updates that you’ve approved on your server, straight from Microsoft’s servers. If you have a metered or slow connection between the offices, this is a great solution. You still have only one place to approve/deny updates, but you don’t chew up bandwidth pushing the updates from Office A to Office B.

This is the setup that I have. I have six offices (and two satellite offices but they’re not part of the corporate network) and aside from head office, there’s only one server in each location. These servers are Domain Controllers (for logins & resource management), WSUS downstream replicas for Windows Updates, and File & Print servers for that office.

WSUS uses Group Policy Objects (GPOs) to configure your clients (XP, Vista, Windows 7, Server 2003, 2003 R2, 2008, 2008 R2) to look at your own server for Windows Updates, as well as how often to check, and whether or not to allow users to defer a restart so as not to interrupt them in the middle of something. Here’s where my setup gets tricksy.

I have a GPO called WSUS-Office A that I apply to the Active Directory Site called “Office A” so anyone who logs in at Office A will have their Windows Update Automatic Updates (WUAU) client pointed at the local server. Other offices have their own GPO assigned to their sites to keep everyone looking at the closest/fastest server/connection.

The hitch I ran into today was with my servers, because of the out-of-band security bulletin Microsoft released today for MS10-002. Because of the Big Scary Crisis surrounding it, and the fact that it was listed as Critical and affecting IE6, IE7 and IE8 on Windows 2000 SP4 all the way up to Windows Server 2008 R2, I manually synchronized my WSUS with Microsoft this morning, downloaded the updates and approved them.

I also did a dirty thing to my users: I set a deadline in WSUS of noon today for the installation. That means they’ll be notified of the download, and if they click the little yellow shield it will install it and then say “Time to restart!”, but they can click Restart Later. Once the deadline passes, however, they don’t have a choice. The window comes up and says “restart your computer or I’ll do it for you” and starts a 15-minute countdown timer. I don’t do it often, so they know I only do it for “critical” updates. Plus I emailed everyone last night to tell them it was happening and posted it on the intranet as an announcement. This morning they all got a second email that it would happen shortly.

Where the patch wasn’t installed was on some of my servers. Some of them got the update, and some of them installed it and rebooted without warning (oops, but they were warned). I started looking into why some of the servers installed it and some didn’t. My first thought was that the Server 2003 servers did but the Server 2008 & R2 servers did not. I then thought perhaps the GPO didn’t apply to/configure the 2008 clients, but that was wrong, too.

Finally I compared a 2008 virtual machine’s Windows Update screen (which wasn’t working) to a 2008 physical machine’s Windows Update screen (which was). The 2008 VM said “You receive updates: For Windows and other products from Microsoft Update” and the 2008 host said “You receive updates: Managed by your System Administrator”. Further investigation into the registry (HKLM\Software\Policies\Microsoft\Windows\WindowsUpdate\AU\) showed that the settings specified in the GPO were applied to the 2008 host, but not the 2008 VM.
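The by-hand registry comparison boils down to a simple check. Here's that logic as a pure-function sketch, with the value names (WUServer, UseWUServer) taken from the WindowsUpdate policy key; the WSUS URL in the usage example is made up.

```python
def update_source(policy_values):
    """Given the values found under
    HKLM\\Software\\Policies\\Microsoft\\Windows\\WindowsUpdate
    (including its AU subkey), report what the Windows Update
    screen would show for this machine."""
    # A client is "managed" when a WSUS URL is set AND the AU client
    # is told to use it; a box that never received the GPO has neither.
    if policy_values.get("UseWUServer") == 1 and policy_values.get("WUServer"):
        return "Managed by your System Administrator"
    return "For Windows and other products from Microsoft Update"
```

The 2008 host looked like `{"WUServer": "http://wsus01:8530", "UseWUServer": 1}` and reported as managed; the DC VM had no policy values at all, hence the Microsoft Update message.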

It then dawned on me that the difference between the two was that the host was a member server and the VM was a domain controller. That led me to GPResult and Group Policy Modeling. Using the DC and Administrator accounts, the GPO (identified by a GUID rather than its name) that was applied to the site was denied application due to SOM (Scope of Management).

I expanded the forest folders and drilled down to the Domain Controllers OU and saw a blue exclamation mark on it. Blocked Inheritance. That meant the Domain Controllers OU would not inherit any settings from GPOs linked ‘above’ it, including at sites.
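The SOM denial can be modeled in a few lines. This toy sketch (names are illustrative) applies links in site, domain, OU order and drops everything linked above an OU that blocks inheritance; an Enforced link would still punch through, but none of mine were enforced.

```python
def effective_gpos(site_gpos, domain_gpos, ou_gpos, ou_blocks_inheritance):
    """Toy model of GPO application for an object in an OU: links are
    gathered site -> domain -> OU, and Block Inheritance on the OU
    discards everything linked above it (non-enforced links only)."""
    inherited = list(site_gpos) + list(domain_gpos)
    if ou_blocks_inheritance:
        inherited = []  # the SOM denial that GPResult reported
    return inherited + list(ou_gpos)
```

With Block Inheritance set on the Domain Controllers OU, a site-linked WSUS policy and the domain policies all fall away, leaving only whatever is linked directly to the OU.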

So my choices at this point are to remove the block and let everything apply to the DCs. Not a very good idea. There were three policies which would have applied to the DCs: the Default Domain Policy, Remote Desktop Policy and Office 2007 File Format Policy.

The Office 2007 File Format Policy is tame; all it does is make the default file type for saving the Office 97-2003 compatible formats instead of the new .docx, .xlsx and .pptx formats. The Remote Desktop Policy is equally benign. It’s denied to Domain Admins and auto-disconnects clients from Remote Desktop after 10 minutes of inactivity, so it wouldn’t really apply anyway.

The Default Domain Policy had a fair amount of settings in it though: Firewall settings, password policies, that sort of thing which I don’t necessarily want to apply to my Domain Controllers.

SO, removing the Block Inheritance setting probably wouldn’t be a good idea.

The other thing I could do is apply the WSUS-Office A policy to the Domain Controllers OU. It would get around the Block Inheritance issue without applying the default domain policy to them, but it would also “point” each of my offices’ Domain Controllers back here over the slow, metered internet connection. Not ideal either.

The third thing I could do is copy each of the WSUS-OfficeX policies and then apply ALL of them to the Domain Controllers OU and use filtering to make sure that each office’s policy only applies to that office’s WSUS server. That doubles the amount of work I’d have to do if I changed one of the servers, though, and if I forgot, it would mean one of the Domain Controllers was pointing at a nonexistent update server, which could leave it unprotected/unpatched. Guh. Meh. Not ideal.

SO that’s where it stands now. I haven’t done anything yet. In the short term I’m remembering to manually check the DCs for Windows Updates until I can come up with a more elegant solution to the GPO filtering situation.

Friday, 22 January 2010 17:00:00 (Pacific Standard Time, UTC-08:00) | Comments [0] | Tech | Microsoft | Servers | Windows#
Tuesday, 20 January 2009

Dell’s local supply chain technician called me yesterday morning to set up a time to replace the parts on my laptop that seemingly blew up. They didn’t have the parts yet, but were expecting them later that day so they’re going to call me back this morning to arrange a time to do the repair.

I brought my laptop to work, and the tech’s office is actually just around the corner from mine, so he can do the repair whenever suits him, and when I take it home tonight it’ll be fixed.

I turned to my co-worker James and said “hey, do you want to see my screwed-up video card?” he came over and I turned the laptop on…. and it worked! WHAT THE HELL??

I’ll mention it to the repair tech, but I’ll still have him replace the parts. Save him a trip out again later, ESPECIALLY if he can replace the GPU with another, non-f’d up one.

Update: Well, it must have been its final hurrah. When the technician arrived, it came up with the BIOS logo screen, but then died. He began to disassemble the laptop to replace the system board (that's the motherboard in Dell-speak) and unfortunately it has the same GPU chip on it as the one being replaced. Ultimately he had to stop and make arrangements to come back tomorrow because--get this--he couldn't get one of the screws out and has to get a different screwdriver. I have one that's the perfect size for laptops, but unfortunately I left it behind on Vancouver Island last week. He's coming back tomorrow to finish. It's a darned good thing I'm a huge nerd and have three other computers at home I can use until this one is back up and running.

Tuesday, 20 January 2009 08:57:30 (Pacific Standard Time, UTC-08:00) | Comments [1] | Tech | Gadgets | Microsoft | Windows#
Saturday, 17 January 2009
Ahh the joys and risks of running beta software.
This morning I fired up an Xvid video that I downloaded, and partway through, the audio stuttered and then froze, and the screen froze. The screen went black, then it came back, then went black again. I tried to hit Escape to get out of full screen so maybe I could catch it and click Close, but before that happened, I got a Blue Screen Of Death (BSOD). No big deal, they happen from time to time and it IS beta software.
The problem was that when the computer restarted, I didn't get the Dell logo screen. I didn't get the Windows logo startup screen. I didn't get a login screen. What I got was a series of lines running top to bottom, mostly on the left side of the monitor... multicolored but slowly becoming all white. The rest of the screen slowly started showing vertical lines until eventually the whole screen turned white. Not good. What the hell? How could a crash physically damage hardware? I tried turning it off and on again; same thing.
Watching closely, I could see and hear the BIOS POST (Power On Self Test). After a minute or two, the hard drive activity light blinked out. On a hunch, I entered my password and hit enter. Hard drive activity resumed and it logged me in. Of course, I couldn't see anything so all I could do was shut down gracefully.
Using my other computer, I checked Dell's support site and did the diagnostics they suggested. Turns out my LCD monitor is fine, but the video card is hosed. How on earth did watching a video cause a crash in the driver that resulted in not only a BSOD but a physical corruption of the card itself? That's unheard of!
In hindsight, I think it was a combination of things. My laptop has the nVidia GeForce 8400M GS video card in it, which is known to have a major design flaw. This affected Dell, HP, even Apple's MacBook Pro laptops that had this chip. Ultimately Dell extended the warranty of every system with this chip in it for free. The combination of a flawed video chip and a beta driver for a beta OS was a recipe for disaster.
Ultimately I had to call Dell. The NEXT major obstacle is that I bought this laptop through my corporate account... through Dell Latin America. I'm now in Canada and have to have the system transferred. I called the Dell XPS tech support line (XPS has its own tech support department, which is one of the nice things about paying a premium for a product). I got through to a technician with a slight FRENCH accent, which leads me to believe the call center is here in Canada, rather than Panama for Dell Latin America or India for Dell US and A.
I explained what happened, and what steps I had already taken. (Having dealt with Dell Tech Support for issues with the hundreds of systems I had at my last job, I learned how to work WITH them rather than them having to rely on their flowcharts.) I also told him that since this was the known-bad GPU, I'd prefer to have a technician come on-site and replace the GPU rather than send my laptop in for depot service. You just never know if you're going to get your own computer back, with a freshly-installed OS and no data, photos, emails, contacts or anything else on it. They said no problem, got my address and... wait a second. This address isn't in Grand Cayman.
Uh-oh. He processed the dispatch for me and then said he was transferring me to customer care to update my records, since tech support has read-only access to customer records. He gave me the case number and transferred me to Customer Care reception. I gave them my case number and said I needed to transfer from Latin America to Canada, and he put me through to someone. Someone else picked up right away (I think I spent less than 2 minutes on hold this whole time so far) and I explained my situation to him. This person, who DID have an Indian accent, told me that it was purchased through a corporate account and would have to be dealt with by the corporate sales department, not customer care, and he would transfer me. I tried to stop him, but he listened to what I had to say, then repeated his script and transferred me... to an automated message saying that the department I was trying to reach was currently closed, and please try again on the next business day. ARRRRRRRGH! I hung up; the call was 19 minutes, 44 seconds.
I re-dialed the XPS number, and again got a technician, Robby, who sounded Canadian. I said I had just called a few minutes ago, spoke to a tech, got a case number and then was transferred to Customer Care who sent me down a rabbit hole into a dead end. He apologized, asked for my case number, re-confirmed my name, address, email and phone number. Then he said he would re-submit it to dispatch and could he put me on hold for 3-5 minutes. He came back on in about 3 minutes and told me everything was set, he gave me a dispatch number and told me a technician would be calling me sometime early next week (because it's 5:00 PST on a Saturday) to schedule the best time to come and replace the part. Just like that. I asked him if they were going to replace it with the same GPU, the nVidia 8400 that's known bad or were they going to replace it with something that wasn't borked by the factory. He said he didn't know, it would be up to the technician. If they had a better solution at the time of install then yes they would replace my GPU with a different one.
SO. Windows 7 beta: out. nVidia GeForce 8400M GS: out. Dell XPS tech support: big thumbs up. The worst part is going to be getting through the next week or so with only my desktop, Laurie's desktop and Laurie's netbook in the apartment :)

Saturday, 17 January 2009 17:17:34 (Pacific Standard Time, UTC-08:00) | Comments [2] | Tech | Gadgets | Microsoft | Windows#
Wednesday, 14 January 2009

I haven’t really been using my computer much this week. I’ve been smokin’ busy at work, so by the time I get home, the last thing I want to do is spend MORE time in front of the screen. Everything is on track now for a business trip tomorrow, so starting this weekend when I get back everything should slow down again… until Monday. :)

The last post I made about Windows 7 I mentioned that the fan was acting weird. I went to Dell’s support site and there was a new BIOS version for my specific laptop. I installed that and the fan began behaving as expected, so thank you Dell. I’ve still got i8kfangui running, but just in informational mode only so I can see the CPU temperature.

Every window has a “Send Feedback” link up next to the minimize, restore/maximize and close buttons. I read today that there’s a registry hack you can make to turn it off if it really bugs you. I don’t know why you’d find it annoying though; it’s a BETA TEST of an operating system. It’s provided free of charge in exchange for reporting metrics, crashes and other things… LIKE FEEDBACK. It’s actually pretty cool. There’s a dropdown where you select what category you’re reporting on, then some stars to rate how well it worked (or didn’t), and then comments.

The dropdown list itself is pretty encompassing, too. Everything from accessibility features, printing, faxing and security settings to Tablet PC functions. Finally, at the bottom, there’s an “other” category.

So far I’ve sent between 12 and 15 feedback “emails” to the team. Some of them have just been “this works exactly as advertised and as expected”, plus a couple of suggestions and a few negative ones, too. I sent one when I crashed IE for the first time the other night, too. Being a beta, you’re not supposed to use this as your “main machine”, and in fact part of the terms of use specify that you won’t use it ‘in a production environment’. I WILL be implementing it in a production environment in a couple of months at work. I’m planning a pilot project for myself and my co-administrator, as well as a couple of people who are tech-savvy, to run Windows 7 with all our line-of-business applications to iron out any kinks that come up over the next year before we start migrating to it (skipping over Vista) in early 2010 after it’s released.

I wrote in the 2009 advancement plan at work that if I tried to upgrade people to Vista we’d have a mutiny on our hands. I’ve been running Vista on my laptop since last December when I got it, and I forced myself to use it on my desktop at my last job for almost a year before that so I could get to know it before I had to start fielding calls about it. While Vista came out of the gate flaccid, with poor compatibility with existing hardware and software, it was something that needed to be done. If Vista hadn’t come out when it did and been a dog, then there wouldn’t have been new drivers and new versions ready by the time Windows 7 came out. Then *IT* would have been the dog that nobody wanted. Vista was the pain of living with no floors in your home while contractors reinforced and rebuilt your foundation and drainage. It sucks, and it’s hard, and it tries your patience, but in the end, what you build on top of it is all the better for it.

While I could have rolled out Vista Business with Aero Glass turned off and the “classic” skin/theme selected to make it look like Windows 2000 Professional, Windows 7 takes that option away. I might have been able to slip it past a few people if it LOOKED like the old Windows :)

What everyone seems to forget is that in 2001, XP was hated just as much as Vista is now, with people decrying the “Fisher-Price toy” interface and the new double-wide Start menu, but as people actually used it, adapted to it and started to reap the benefits of the new system, they liked it and ultimately loved it (evidenced by extension after extension of the availability of Windows XP for OEM system builders).

The difference between 2001’s hate-in for XP and 2007’s hate-in for Vista is a 24-hour news cycle and a lot more people out there trying to justify their employment by filling column-inches. Vista’s missteps were a convenient mule to whip.

Wednesday, 14 January 2009 21:34:50 (Pacific Standard Time, UTC-08:00) | Comments [2] | Tech | Microsoft | Windows#