Shivering on the 49th Parallel
Friday, October 26, 2012
this is quick & dirty just to get you installed, maybe one day I'll refine it and add screen shots... but probably not :)
Friday, October 26, 2012 2:16:14 PM (Pacific Daylight Time, UTC-07:00) | Comments [2] | Microsoft | Windows#
Tuesday, March 20, 2012
First post of the new year! Also, I can't be arsed to install WL Writer so I'm doing this in the web form. Blech. :) One of my "projects" for 2012 is to suss out DirectAccess, a transparent "VPN-less" secure connection back to the mother ship from a roaming corporate laptop. On paper it sounds pretty good, but from a demonstration point of view, it ranks up there with watching grass grow or paint dry. When set up and configured, a laptop (or desktop, I suppose) out of the office and off the corporate network can access network resources behind the firewall. Going the other way, IT can centrally control corporate laptops out in the field via Group Policy, WSUS and other technologies. To give a demo, you'd take your laptop off-campus, fire it up, log in... and... use it... Not much of a demo :) The stuff going on behind the scenes is interesting, but not for the average person. My engine, however, gets running.

I ordered up an HP Microserver last month to try this out on. I suppose I could have installed 2008 R2 on any old computer kicking around, provided it had two network ports on it, but I also wanted to do a hands-on with this little server. The HP Microserver is ridiculously cheap for what it is: an HP ProLiant server. It's about half the size of a breadbox and has four non-hot-swap SATA drive bays, two memory slots, a PCIe x16 and a PCIe x1 half-height slot, a 5.25" drive bay for an optical or tape drive, and one large low-rpm fan on the back, so it's really quiet. All that for about $400. I bumped up the price somewhat by doubling the RAM and adding a server NIC card to get a few more network ports on it, but it was still under $1000. Putting a copy of Windows Server on it is where most of the expense comes from. Since this is a test, I put a TechNet/MSDN copy on it and fired it up.
There are a lot of prerequisites for setting up DirectAccess, including a good CA/PKI setup and, probably the most difficult part, two consecutive public IP addresses that don't end in 09-10. I've got all that covered now, so my next step will be to make some changes to Active Directory and my edge firewalls, and then I can try it out!
Tuesday, March 20, 2012 9:28:53 AM (Pacific Daylight Time, UTC-07:00) | Comments [2] | Active Directory | Hardware | Microsoft | Networking | Servers | Windows#
Thursday, September 08, 2011

Well, this is interesting. First of all: do not move any VHD or AVHD files around, whether your guest VM is running or not.

I came back from a week's vacation to find that my VMs were pretty much all broken. Awesomesauce. What happened was that the server I run SCVMM on is also the Backup Exec server, and due to a mistake by an end user, the size of the weekly backup jumped by about 600 GB. The backup2disk folder ran out of space and halted all backups, and all the virtual machines paused themselves too, because the host was out of hard drive space.

To alleviate the situation, a co-worker found 100 GB or so of files in a "snapshot" folder under the VM's folder and moved them elsewhere. What he didn't know or realize was that these VM files have very specific ACLs, tied to a username called NT Virtual Machine\{SID}.

When you move a file in Windows within the same volume (say, from My Pictures to My Pictures\vacation 2011), it takes its permissions with it. When you move a file to a different volume (a D: drive, a flash drive or a network drive), it inherits the permissions of its new home. Normally that's a good thing, but for these snapshot files it's a bad thing. A very bad thing.

I discovered this when I found the files and moved them back to where they were. The VM still would not start up and was giving all kinds of cryptic errors: unable to mount, unable to start virtual controller, things like that. I should have made a note of the exact errors and put them here for people to find, because figuring out what to do was a bit of a pain. Ultimately I found a KB article that described how to reset the permissions and reassign full control to the NT Virtual Machine\GUID user on the folder, and then on each of the AVHD files directly, using your favorite tool and mine: icacls.exe
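For anyone who lands here with the same problem, the fix looked roughly like this. The paths and the VM GUID below are made-up placeholders, so substitute your own VM folder and the virtual machine ID from the VM's settings (or from the error text):

    :: Re-grant the VM's service SID full control on the VM folder, recursively...
    icacls "D:\Hyper-V\MyVM" /grant "NT VIRTUAL MACHINE\<VM-GUID>":(F) /T

    :: ...and then on each snapshot differencing disk directly:
    icacls "D:\Hyper-V\MyVM\Snapshots\<disk>.avhd" /grant "NT VIRTUAL MACHINE\<VM-GUID>":(F)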

This allowed the machine to start back up, and everything seemed to be OK, so after 24 hours I thought I'd figure out how to get rid of those snapshot files and free up that space "the right way". The first problem was that I did not have any snapshots of this VM, so how could I have snapshot files?

I found this article called "Hyper-V: What are these *.avhd files for? Snapshots? But I have no snapshots!" while Googling around, and at first I was stumped, because I could not see what he was describing. I followed his directions to shut down the guest and power it off, and realized that yes, it had been paused and rebooted, but it had never been shut down in nearly two years. I powered it off (it's an MDT and WSUS server, so no "production" data on it) and looked around for the "merging 1%" to show up, and it didn't. I couldn't figure it out! Why couldn't I see this happening in my SCVMM administrator's console? On a whim, I decided to try the "local" Hyper-V MMC snap-in, so I fired up Server Manager and drilled down to it. There it was, on the main screen under "Operations": Merge in progress: 11%

I watched it for a few minutes and saw that one of the AVHD files disappeared! It was working! Awesome! So now it's merging "the big file", which is where all the deployment images and WSUS download data live, and that is taking a while longer. As soon as the first AVHD file disappeared, I looked at the drives, saw that there was now 80 GB free, and the backup jobs resumed their steady march.

Once this is done, I’m going to have to do the same to the other Guest VM on this machine, which IS a production machine and probably has even more data in it, so that will have to wait for 5pm and run overnight.

Thursday, September 08, 2011 10:27:09 AM (Pacific Daylight Time, UTC-07:00) | Comments [0] | Microsoft | Servers | Windows#
Tuesday, June 28, 2011
Like the sword of fucking Excalibur, only the anointed, chosen one can pull the Export-Mailbox cmdlet out of the stone.
Tuesday, June 28, 2011 2:58:04 PM (Pacific Daylight Time, UTC-07:00) | Comments [0] | Rants | Mail Server | Microsoft | Servers | Windows#
Tuesday, June 07, 2011

Last year I set up a Windows Server 2008 Core server. It was a Hyper-V virtual machine, it was minimum-spec, and it didn't do much other than be a second Domain Controller on the network, so I hardly ever had to interact with it. Based on those criteria, and because I wanted to see what it was like, I installed Windows Server 2008 Core.

Windows Server 2008 Core, if you're not familiar with it, is a Windows server with no windows: when you log in, you get a command prompt, and that's it.

Configuring it after installation was a bit of a bear, because instead of clicking anything, you have to learn, know and type the commands into the terminal, along with all their arguments/switches. I got it set up, configured, joined to the domain and then promoted to be a domain controller, and that was pretty much it. I set it up so that I could use Remote Desktop to connect to it, but what I really wanted was to use the Server Manager on another server to connect to it and manipulate it that way.

I found out the hard way that you can't really do that. I did find a piece of software written in Visual Basic called CoreConfigurator, which created a text-menu-based configuration helper, and it was pretty good. They also had a Version 2, written in PowerShell, that had a bit of a GUI to it... but it wasn't compatible with Windows Server 2008 (the Vista server, if you will), only Windows Server 2008 R2 (the Windows 7 server). I pretty much dropped it after that, since the server was running and I didn't need to do anything to it.

Eventually I upgraded it to Server 2008 R2 when my licensing allowed me to and then I could use CoreConfigurator V2.0. Remote management still wasn’t working, despite the server’s command-line status updates to the contrary. Again, it was working and I had more important things to do.

Today I was trying to track down something (seemingly) entirely unrelated. Some clients could access a DFS share on the domain, and others could not. I followed the trail to the Domain Controller (DC1) and checked DNS services, and they were all fine. I then looked at DC1’s DNS servers and it was pointing at DC2 (the Server Core) so I opened it up and checked it out. I thought to myself “Wouldn’t it be nice if I could control DC2 with the Server Manager on DC1?” so I decided to take another run at it.

On DC2 I entered winrm quickconfig to see what was configured. As expected, it said:
WinRM already is set up to receive requests on this machine.
WinRM already is set up for remote management on this machine.

So I tried "Connect to another computer" in Server Manager and... bonk. "Server Manager cannot connect to server_name. Click retry to try to connect again." Opening the details tab had more detail, but it's pretty much all gibberish, even to me. "Connecting to remote server failed with the following error message: The WS-Management service cannot process the request. The resource URI ...://schemas.microsoft.com/powershell/Microsoft.ServerM... was not found in the WS-Management catalog. The catalog contains the metadata that describes resources, or logical endpoints." Right.

I started with the error code, and then the hex code and ultimately ended up at a Microsoft KnowledgeBase article that hit the nail right on the head.

Error message in Windows Server 2008 R2 or in Windows 7 when you try to connect to a remote server: "Server Manager cannot connect to <server_name>"

Following this article, I typed sconfig at the command line on the Server Core box, chose item 4, "Configure Remote Management", and then option 3, "Allow Server Manager Remote Management". It then re-configured WinRM (which was already configured correctly) but, interestingly, added three new rules! It didn't say what those rules were, but after restarting the server (because it had to enable PowerShell) I was able to connect to it using Server Manager from any of my other servers or my Windows 7 laptop.
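For future-me's reference, the fix boiled down to this sequence on the Core box. I believe (but haven't verified) that the sconfig menu option is just driving the built-in Configure-SMRemoting.ps1 script that ships with 2008 R2, so the second command is my best reconstruction, not gospel:

    C:\Users\administrator> sconfig
      4) Configure Remote Management
         3) Allow Server Manager Remote Management

    :: Reportedly the equivalent, run directly from an elevated prompt:
    powershell -Command "C:\Windows\System32\Configure-SMRemoting.ps1 -force -enable"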

Tuesday, June 07, 2011 1:35:39 PM (Pacific Daylight Time, UTC-07:00) | Comments [0] | Tech | Active Directory | Microsoft | Networking | Servers | Windows#
Thursday, May 26, 2011

You know what I like about taking Part Time Studies computer classes at BCIT at night? The crowd is a little older, and they’re all already nerds.

I'm taking COMP1451 this semester, which is part two/a continuation of COMP1409, "Introduction to Object Oriented Programming". For this class, we've been using BlueJ as the... well, I don't want to call it a development environment, but it kinda is. It's pretty cool, and it's great for illustrating concepts using Java, but it's not really a full-on Java development tool.

We’re just over half-way through part 2 of the course and tonight the instructor introduced Eclipse. Eclipse IS a full-blown Java Development Environment, and we spent the evening learning (and re-learning) the differences between what we thought was Java and what really is Java.

One of the exercises we did was to use Eclipse to generate constructors and source code automatically, saving a lot of grunt-work typing of really basic things. In BlueJ, one of the things you could do was hit Ctrl+M and it would insert a new method at the cursor, complete with Javadoc comments. You could fill in what you needed, take out what you didn't, and carry on.

    /**
     * An example of a method - replace this comment with your own
     *
     * @param  y   a sample parameter for a method
     * @return     the sum of x and y
     */
    public int sampleMethod(int y)
    {
        // put your code here
        return y;
    }

I used that shortcut a lot, and even developed a little muscle memory to Ctrl+M every time I needed to create a new method. I asked the teacher if there was something similar in Eclipse to save a few keystrokes, and she wasn't sure. We poked around a bit in the Source menu before she said "well, I guess you'll have to do it the old-fashioned way" and pointed at the keyboard.

I entwined my fingers, turned them inside out as if to crack my knuckles and said "A keyboard. How quaint…" I reached for the mouse, picked it up and spoke into it like a microphone: "Oh, Computer…" and without even imitating a bad Scottish accent, the instructor, the TA and the two guys on either side of me cracked up.

Someone later pointed out that that movie is 25 years old, but ya know what? Classics never get old. :)

 

Thursday, May 26, 2011 10:36:51 PM (Pacific Daylight Time, UTC-07:00) | Comments [0] | Tech#
Friday, March 25, 2011

It took almost five years. Coincidentally that’s the length of the warranty I was trying to get it repaired under, but finally, something was done. There was a management change at Divers Supply Grand Cayman, and the new manager wanted to make things right whereas the old manager told me to go fuck myself.

Shortly before Christmas 2010, I was contacted by the new manager of the store (on Facebook) who reached out to make things right.

We talked back and forth and came to an agreement, and I edited and updated the posts from 2006 calling out his store for treating me like an inmate in a shower. I posted an update at the top of the three posts with the date, and then edited some of the piss & vinegar out of my rants. As I figured, it was the then-manager who was behind the absolutely horrendous “service” and treatment. Now that he’s gone (and probably most of the staff) it’s not fair to paint them all with the same brush as him (although given a chance, I don’t think I’d use a paintbrush on him).

It's going to take a while (if ever) for the Google search results for Divers Supply Grand Cayman to drop the old titles of the posts and replace them with the newer, gentler ones, but if anyone clicks on any of the links to my previous posts about Divers Supply Grand Cayman, they'll see the new titles and the update, in bold, at the top of the page.

My replacement dive computer is up here now (although it’s still out at my friend’s house who carried it up here at Christmas from Cayman) so it was the right time to make a new post and edit the old posts.

Friday, March 25, 2011 11:39:04 PM (Pacific Daylight Time, UTC-07:00) | Comments [1] | Cayman | Rants | Underwater#
Wednesday, January 19, 2011

I started out the task flying pretty high. I worked on a deployment for some new HP laptops and Windows 7 Pro x64 and things were working out as planned.

Once I got it to where I could PXE boot the laptop, connect to the deployment share and lay the Windows 7 x64 image down on it, it was time to get down to the nitty-gritty: Drivers. Applications. Packages. Automation.

Drivers were fairly easy; I've been importing them for a while now. What I wanted to do was segregate them into distinct little piles, rather than one motherlovin' huge pile of INF files, and I wanted a computer to only get the drivers it needed for itself, not the whole lot of them.

MDT 2010 provides for this, and there are plenty of good tutorials out there on the net waiting to be found, so I won’t “waste ink” posting it here again. I highly recommend you use the Readability bookmarklet before going to any of the articles on that site, though. They have ads and crap on all 3 sides and a narrow column in the middle with small text for the actual article.

So we've got a bare-bones Windows 7 install at this point, with a bunch of Unknown Devices in Device Manager. Windows 7 is smart enough that most of them have drivers advertised through Windows Update, so right-clicking them and selecting "Update driver" will find them... but that's not why we're using deployment tools; I want it to come out the other end of my process shiny and clean and ready to be used. Following information in those links above and elsewhere, I was able to have Windows PE detect the make & model of the laptop, then look that up in my deployment database and download the drivers I specified. Awesome! All but one... one sticky wicket that wouldn't work, because the manufacturer chose to make the driver file a software installation instead of just a driver. (hate)

On to the Applications settings in MDT 2010, then! Applications don't work as well as drivers do. There are no Selection Profiles for applications like there are for drivers. Sure, you can set MandatoryInstallation <guid> in the customsettings.ini file for the whole deployment share, but then the application gets installed on every machine that connects, not just the one laptop model that needs this particular driver, so that's out too.

Searching around on this topic led me to the Make & Model settings under Advanced Settings > Database. I created a new entry using the Make and Model of the laptop, using the data I got from the BIOS. To find out what yours is, drop to a command prompt and type 'wmic csproduct get vendor' (or get name). Once you've created an entry, you can double-click it to open its properties and assign things like Applications, Roles and Administrators. Applications is the one we're looking for here, so I clicked on that tab and then clicked Add. I selected the driver-software .exe that I had set up (as a silent install... another topic!) and clicked OK. I updated my deployment share and... it didn't work.
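If it helps, here's roughly what that lookup looks like at a command prompt. The output below is just an example of the shape; your vendor and name strings will differ, and they have to match the database entry exactly:

    C:\> wmic csproduct get vendor,name
    Name                   Vendor
    HP SampleBook 1234     Hewlett-Packard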

I tried a few different things: I checked, double-checked and triple-checked that I got the Vendor and Name correct, and I tried moving the application around within the deployment share, but nothing worked. Because I was working with a physical machine, it took about 30 minutes to test each iteration. While it was doing that, I opened ZTIGather.log on the virtual machine I had deployed to yesterday (it's in C:\Windows\Temp\DeploymentLogs) and, using the Vendor and Name in there, created another entry in the database and assigned it a very small application (most of the apps I have in the repository are huge... AutoCAD, Office, etc.) to try that one out. I updated the deployment share, and this time, just in case, I also went into Windows Deployment Services and replaced the boot image with the newly generated one.

I booted the VM up, let it PXE boot, selected the x64 boot image and stepped through the wizard, and when I got to the Applications screen... holy smokes, it was there! Pre-checked! I tried un-checking it and clicked Next, but when I went back it was re-checked, so it was treating it as a mandatory application, but only on that make & model of computer! I then rebooted the laptop into the same x64 boot image to see if it was working for my original problem. If it wasn't, at least I had proved that it wasn't an error with my database. I flipped through the screens to Applications, and the driver was there and pre-checked! Hooray! I hurried through the rest of it and let it deploy. Once it got to the Windows 7 desktop and the last stages of the deployment were running, it installed the driver software. I rebooted (Windows Update kicked in right away) and when it restarted, I checked Device Manager: nothing was showing as an Unknown Device! Hooray! One machine down, two more to go; get a few more apps in there and my MDT 2010 deployment share will be ready to kick out the Win7 Pro x64 jams to all comers! (Well, within my company and licensing agreement, anyway.) :D

Wednesday, January 19, 2011 4:57:01 PM (Pacific Standard Time, UTC-08:00) | Comments [0] | Deployment | Microsoft | Servers | Windows#
Thursday, November 25, 2010
The weird thing is that the server continued to, well SERVE the whole time it was in that compromised state, so the users didn’t know anything was wrong. In the meantime my ass was puckered so tight I was pulling the fabric of my seat right up into my ass leaving little rosebuds everywhere I sat.
Thursday, November 25, 2010 6:27:56 PM (Pacific Standard Time, UTC-08:00) | Comments [0] | Microsoft | Servers | Windows#
Wednesday, October 20, 2010

Last night I logged into work from home to initiate a reboot of all the servers. Windows Updates were pending, and had been for about a week, but it's hard to reboot production servers in the middle of the day when people are using them. Throw in some flex hours, and they're in use from 6am to about 8pm.

The Domain Controllers have their own policy for updates, and they’re still required to be initiated manually, and then “restart now” clicked to reboot them.

When new "critical" patches are released and there are known 0-day flaws being exploited, I'll use the 'deadline' feature in Windows Server Update Services (sort of a mini Windows Update server you run yourself, approving and distributing updates around your own network while only downloading them once from Microsoft): if the deadline passes and a user has been clicking "Restart later", it disables that button and starts a 15-minute countdown before forcibly rebooting.

There was no deadline on this latest batch of updates from the last Patch Tuesday, so the (member) servers were politely asking to be rebooted. I logged into each of them one by one and clicked “restart now” and then waited for them to shut down, restart, and start back up again.

All of them worked and came back up (according to pinging them for responsiveness) except one. It SEEMED to come back up. I could ping it and it responded, so I moved on to the next and the next and the next.

It wasn't until this morning, when I walked in the door, that I had four people waiting for me saying "the network is down" (which of course was a misnomer; the network wasn't down, it was just the shares on THE MAIN FILE SERVER that were disconnected). I poked my head into the server room, and the KVM was already set to that server. On the screen (which was blue, but not THAT blue) was "Configuring Updates stage 3 of 3 0%. Do not turn off your computer". I watched it for a minute to see what would happen, as the hard drive LEDs were blinking away, so it WAS doing SOMETHING... then the screen went black.

The cursor was flashing up in the upper-left, so I waited some more… then the BIOS splash screen came up. The server had rebooted itself.

Turns out it had been in this startup, stage 3, fail, reboot loop since 9:00 last night.

Step 1, try a cold-boot. I waited for it to fail again, and then I held down the power button until it powered off. I removed the power cables and let it sit for 30 seconds to make sure everything had powered off, plugged it back in and tried again. Same result.

Step 2, try Safe Mode…. Applying Computer Settings… Configuring Updates stage 3 of 3… reboot. Crap.

Step 3, Last Known Good Configuration. This rolls key Windows settings back to how they were the last time you successfully logged in. You would think that this would break it out of a bad update loop. You would be wrong.

Step 4, booted from the Windows Server 2008 x64 DVD and clicked on Repair. There's a new "Startup Repair" tool that's incl- wait, it's not? It's only in Server 2008 R2, the one based on Windows 7, and NOT in Server 2008, the one based on Vista? There are NO repair options for Server 2008 other than re-imaging the system from the latest full system image? You DO have one of those, right?

Step 5, Uncle Google suggested I click through to “Get Vista out of the Infinite Reboot Loop” and the comment there by Tribus was:

I know a different way to resolve this issue without using a restore point.
1. Insert your Vista media into your drive and boot from it.
2. Select "Repair your Computer" from the list.
3. Select "Command Prompt" from the recovery choices.
4. At the command prompt, change your directory to C:\Windows\WinSxS
5. Type: del pending.xml
6. Exit and reboot
This will fix all Windows Update reboot loops and does not require you to restore your PC to an earlier state.

Figuring I had nothing left to lose, I gave this suggestion a shot, even though it was for Vista. If it didn't work, I'd be getting on the horn to Microsoft Support for some help. Instead of deleting the file, I renamed it pending.xml.old, then exited and rebooted.
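In other words, from the recovery environment's Command Prompt it was just this (the recovery prompt's drive letter may differ on your system):

    X:\Sources> c:
    C:\> cd \Windows\WinSxS
    C:\Windows\WinSxS> ren pending.xml pending.xml.old
    C:\Windows\WinSxS> exit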

“Applying computer settings…” OK so far so good…

“Configuring Updates stage 3 of 3 0%. Do not turn off your computer…” FUCKBURGERS!!!

“Press Ctrl+Alt+Del to Begin” WHAAAAAAAAAAAT? it worked.

Once it was up and running, the first thing I did (other than tell the users they could access their files again) was look in the event log to see what had happened. On the first reboot last night at 9pm, there was an event from source Winlogon, Event ID 6004: "The winlogon notification subscriber <TrustedInstaller> failed a critical notification event."

So the next step is to research that error and see if I can figure out WHICH update caused it... it could be a moot point, though, because my co-worker turned up some early results suggesting that once you do this, you've pretty much broken Windows Update on that computer forever. I can live with that for now, because people are working and the data is intact. If I find out that that is the case, and figure out a workaround, I'll post a follow-up.

Wednesday, October 20, 2010 10:07:51 AM (Pacific Daylight Time, UTC-07:00) | Comments [0] | Microsoft | Servers | Windows#
Monday, August 30, 2010

About a year ago or so, I tried to enable SNMP monitoring on my SonicWall TZ170. SNMP is useful for monitoring things like bandwidth usage (by port... so in the TZ170's case it would tell me how much traffic each hour/day/week/month/year had been funneled through the LAN connection and each of the WAN connections). I wrestled with it for a week or so, failed, and gave up. I read the documentation, I configured everything correctly (according to the docs) and... nothing.

Earlier this summer, my TZ170 started flaking out. It would stop responding (like a reboot) for 30 seconds or so: twice one morning, another time in the afternoon, again overnight... and when it did, it took down both internet connections, incoming and outgoing email, and all the inter-office VPN links. Not a great situation. By the time anyone noticed and called, it would be back up again. The TZ170 has been discontinued for a while now, and I wasn't even able to get any more from the used/recertified reseller in California that had kept me going for a while. Fortunately, the newer TZ210 is backwards compatible with the TZ170s, AND I was able to take advantage of a competitive upgrade to get one cheap cheap, if I signed up for three years of SonicWALL services (content filtering, gateway antivirus, etc.).

The TZ210 is great. Each of its 8 ports can be configured as a LAN or a WAN port, which gives you a lot of flexibility. With the help of a local SonicWALL partner/technician, we were even able to export the settings from the old TZ170 and import them onto the TZ210, then just re-configure a few things and be back up and running in an hour or so, rather than spending a day or so re-creating all the settings and VPN tunnels manually. We even upgraded the VPN tunnels to a better encryption scheme and documented everything (now where did I save that text file...).

Now that I had 8 more-configurable ports, I decided to give SNMP monitoring another shot. I installed the PRTG freeware version on a spare computer, downloaded the MIBs from SonicWall's support site and converted/imported them into PRTG as OIDs (most of these TLAs are beyond even my knowledge...). I added a new device in PRTG and attached some sensors to it. I gave it the IP address of the SonicWall TZ210, selected SNMP and... it failed.

I went into the SonicWall web interface and confirmed that the network interface’s properties had the SNMP checkbox checked, and that on the Administration tab, that SNMP was configured and had the IP address of the PRTG computer entered and that the community string was set correctly, but it still failed.

Using some of the PRTG testing tools, there was flat-out no response from the SonicWall on port 161 or 162 (the default SNMP ports). Without breaking out a packet sniffer, I deduced that the SonicWALL was dropping the packets. I went to the Firewall config and added a rule allowing LAN to LAN using protocol SNMP. Still nothing.

At that point (late last week) I gave up (again). I did some Googling and came across a couple of entries on Experts Exchange, but even though I have a login it wouldn’t show me the answer, instead telling me I needed to become an expert or pay $12.95/month to see the answer. Lame. That’s new…

I bitched about it on Twitter, stating it was too bad that I couldn’t automatically append a “-Experts-exchange.com” to all my queries to make sure I didn’t get any (now useless) search results from their site. Someone responded that if you follow a link from Google or Bing directly to Experts-Exchange, it will show the answer if you scroll down past all the ads… which is the behavior I was used to, but wasn’t happening on these particular articles.

I tried the SonicWALL forums, and people were using SNMP, so it wasn’t broken or anything… Ultimately I opened a support ticket with SonicWALL (hey I paid for 3 years of it, may as well make use of it!) and they called me first thing this morning and got it sorted out.

I'm not sure if SonicWALL does things differently from the SNMP spec… but then again I’m not an SNMP expert who would know the difference. Here’s the gist of what Darshan the tech went over with me:

  • The IP address of the system/software that's doing the SNMP monitoring DOES have to be entered on the SNMP configuration page.
  • The SNMP checkbox on the network interface page DOES have to be checked.
  • You DO NOT need to create a firewall rule allowing SNMP traffic from LAN to LAN; when it's configured correctly, the firewall auto-creates one that you can't change.
  • You DO have to use the SonicWALL MIBs that are specific to each model of firewall.

We did end up doing a packet capture and seeing that the SNMP packets were being dropped, which led us back to the Firewall config page and the removal of the custom firewall rule. Once we did that, we also (and I think this is the key) removed the SNMP checkbox from the interface config, let the firewall save/update its settings, and then re-enabled it. After that, PRTG magically worked.
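If you want a sanity check that doesn't involve PRTG at all, any box with the net-snmp tools installed can poke the firewall directly. The community string and address here are just examples:

    snmpget -v 2c -c public 192.168.1.1 sysDescr.0

If that times out, the firewall is still dropping you; if it answers with a model/firmware description string, the SNMP side is fine and the problem is in the monitoring software's config.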

Now I just have to figure out which settings and ports I want to monitor and get those set up in PRTG! :)

Monday, August 30, 2010 9:13:49 AM (Pacific Daylight Time, UTC-07:00) | Comments [2] | Tech | Hardware | Networking#
Wednesday, August 18, 2010

I’ve written before about what a huge, horrible, steaming pile of horse shit you have to wade through to install a 32-bit (x86) driver on a 64-bit (x64) server. It’s SO counter-intuitive it makes me want to scrape my eyeballs out with a grapefruit spoon and then chop off my fingers so I won’t be able to see a computer or type ever again.

In a nutshell, you need to have a 32-bit client running Vista or Windows 7, install “the full meal deal” printer driver on that client, THEN connect to the 64-bit server’s printer share (\\server\printer) and then tell it to use the existing driver. That will then UPLOAD the driver from the client machine to the server and make it available to other 32-bit clients who try to connect to it.

Today I'm in the opposite situation. I PURPOSELY set up a 32-bit Windows Server 2008 box (not R2, which is 64-bit only) to run my print queues, because 99.9% of my network is 32-bit Windows XP clients and I didn't want to go through this rigmarole for every single one of them. *MY* laptop, however, is running Windows 7 Professional 64-bit, and it's unable to connect to the shared printers on the 32-bit server.

Rather than duplicate the steps above, since I was feeling saucy and experimental, I went the other (old) way around. On the 32-bit server, I opened the printer properties, went to the Sharing tab and clicked Additional Drivers. I checked the 64-bit box and it asked me for a driver. I clicked Browse, navigated to the folder where I had the 64-bit driver .inf file for the printer, selected it and clicked OK.

Fast-forward a few seconds and the window closed, and the box was checked. Just like that. Just how it USED to be in older versions of Windows Server. I went back to my laptop and tried to connect to the printer, and this time, instead of failing with “Driver Unknown” or, even worse, the 0x0004005 error (one of the more generic error codes you’ll ever see; I always thought it was “Access Denied”, but that’s just ONE of the errors it COULD be), up came a NEW dialog box: Do you trust this printer driver? Yes, of course I do. Just like that, it mapped the printer, using the 64-bit driver from the 32-bit server.

If it’s so bloody easy to do that with a 64-bit driver on a 32-bit server, why the HELL is it SO difficult and bass-ackwards with a 32-bit driver on a 64-bit server??

Wednesday, August 18, 2010 11:09:35 AM (Pacific Daylight Time, UTC-07:00) | Comments [0] | Tech | Deployment | Hardware | Microsoft | Networking | Servers | Windows#
Tuesday, July 13, 2010

Last Friday, one of the workers here in the office came over to me and said that he got an error in his inbox about a message that had been delayed. Not permanently, just delayed. I told him to leave it, since the server would keep retrying for the next 48 hours, and looked into it.

I connected to the Exchange 2010 server, opened the Exchange Management Console, went straight to the Toolbox and clicked on Queue Viewer. There they were, pretty ducks all in a row, all with DNS FAILURE errors. Huh. Interesting. I had seen this happen once before when we were setting the server up: the DNS server it was set to use was offline, so with no DNS resolution it didn’t know where to send the mail. Thinking this was the case this time, I checked the network adapter settings and saw that the preferred DNS server was the other VM “next to” the Exchange 2010 VM and the secondary was set to “my” DNS server here in my office.

I checked my DNS server first, just to make sure the service was running, and it was. I then checked the DNS server that was its primary and it, too, was running. Mystery. Nslookup queries failed and timed out even for common domain names. Not good. This was happening on both DNS servers.

I called in a support ticket (this was Friday at 4:00) and found out that the Exchange SysAdmin was on vacation and not back until Monday, and he was being covered by another Exchange SysAdmin on East Coast time. She called me back about 20 minutes later and we worked on it for a good 40 minutes with no resolution. She figured that since the DNS server was rebooted, it had been unable to contact the PDC role holder and authorize/activate itself, and that there must be a problem with the VPN between my network and hers.

This seemed like a valid diagnosis, as the other Administrator here at work told me that our router had been failing every 30-40 minutes, but recovering after a minute or two and was obviously dying. Yikes. This caused a little panic as ALL my sites use the same router/firewall and they’re discontinued and I hadn’t yet created a contingency plan to replace them.

She escalated the ticket up to tier 3 networking support, who tested the VPN and said that everything was up on their end, but they couldn’t ping my side of the VPN, therefore there was a problem with the VPN and it was on my end (naturally). I don’t know too much about the router/firewalls we use here; I’ve been slowly learning as I go, but diagnostics and troubleshooting were beyond the scope of my knowledge beyond “well, the blinky light is green, not red, so it’s up”.

Further compounding the matter was that this VPN was temporary, because we were switching it on Monday from an Internet VPN to a private, routed DSL connection into their MPLS network. That ADSL modem was plugged in to power and phone, but not into the LAN as it was just for testing.

At some point over the weekend, one of the emails from their networking people said that they could ping as far as 192.168.0.252 but no further. This was when the light bulb went off in my head. .252 is the address of the new ADSL router, NOT the VPN endpoint! Their network techs were trying to reach my network via a device that was physically unplugged! I thought it was odd, since I was connecting from home via VPN through the same device and it was up.

Monday came and I plugged the DSL modem into the LAN, disabled the Internet VPN connection from my network to theirs, and created a new route for all traffic destined for their network to use this new gateway. Everything seemed to be working: Outlook clients in my LAN segment were connecting via the MPLS network (verified by the IP addresses on a traceroute) and I could Remote Desktop into the virtual servers in their network. Their network guys still could not ping my LAN from the MPLS gateway, even though I could ping back to my network from the virtual servers (which was the important part anyway). That left me with the DNS problem, which was still ongoing, and some people were now starting to get NDRs because the 48 hours had timed out.

I started with my own laptop and did an nslookup query. Request timed out. Damnit! I checked the DNS server, the service was running, I restarted it, it still failed. I looked at the event log and there were a bunch of “DNS server encountered an invalid domain name” errors, but the errors were coming from all these weird IP addresses that were not in my network. I then thought that perhaps it was the forwarding that wasn’t working, based upon a few results that came up when I searched that error message online. I checked the forwarders on my DNS server and found that they were set to use two Shawcable.net servers, one of which resolved to a hostname and both of which did not respond to an nslookup query. How on earth did I end up with two (seemingly) random Shaw Cable DNS servers for my forwarders when I have a Telus ADSL connection in this office? That could explain why they didn’t respond: my IP address wasn’t in the Shaw Cable network!

I changed the two forwarders to 208.67.222.222 and 208.67.220.220, which is OpenDNS. I then restarted the DNS Server service and BAM! nslookups all worked. I then went back to the Exchange server and tried again. Still failed. OK, I had an idea of what was going on now, so I connected to the DNS server there and checked its event logs. Similar messages, different addresses. I opened the DNS snap-in and went right to the forwarders. The two forwarders on this server were two Telus servers! This was a co-located (sort of) virtual server within an ISP, so how did I end up with Telus servers there?! I changed those two forwarders to OpenDNS and restarted the DNS Server service, and as I was opening a command prompt window on the Exchange 2010 server to try an nslookup again, I could see the emails in the retry queue (which was still open) begin to flow out. I tried nslookup queries on a couple domain names that I knew were in the retry queue and they all answered lightning fast as non-authoritative responses.
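For future reference, the same check-and-fix can be scripted with dnscmd, which ships with the DNS Server role on 2003/2008. A rough sketch; the server name below is a placeholder:

```shell
:: Show the server's current settings, including its forwarder list
dnscmd MYDNSSERVER /Info

:: Replace whatever forwarders are there with the two OpenDNS resolvers
dnscmd MYDNSSERVER /ResetForwarders 208.67.222.222 208.67.220.220

:: Bounce the DNS Server service so nothing stale hangs around
sc \\MYDNSSERVER stop DNS
sc \\MYDNSSERVER start DNS
```

Handy when you’ve got half a dozen remote DNS servers to audit and don’t want to RDP into each one.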

SO in the end, I figured it out myself, but the million-dollar question that I can’t answer is HOW did my local DNS server get a Shaw DNS server as a forwarder, and how did the VM DNS server in the datacenter get a Telus one??

Tuesday, July 13, 2010 9:44:13 AM (Pacific Daylight Time, UTC-07:00) | Comments [0] | Tech | Active Directory | Mail Server | Microsoft | Networking | Servers | Windows#
Friday, May 28, 2010

Two lies for the price of one!

This morning I took a new server out of the box for a small branch office. It’s an HP ProLiant ML150 G6 tower server: Xeon Quad-Core processor, 2GB RAM and a 250GB SATA HD. I also upped the RAM to 4GB, added a 2nd 250GB drive and a pair of 500GB drives to give me a RAID1 array for the OS & Apps and a RAID1 array for the data.

Once I configured the RAID arrays, I booted using the Easy Setup CD. The Easy Setup CD is something that HP and Dell (among others?) send out with a server to speed up the install and make life easier on the person installing Windows. It’s Linux-based and walks you through picking a drive to install on (the HP one even comes with an admin tool for the SATA RAID controller to configure arrays if you hadn’t already done it in the BIOS), then asks for your Name, Company, Product Code and which version of OS you’re installing, from a list including Windows Server 2003, 2003 R2 and 2008 in different flavors (32-bit or 64-bit). The Dell one goes even further, pre-configuring IP addresses and even joining the server to a domain. Once it has all the information it needs, it creates partitions and copies/pre-stages drivers from the CD to the hard drive so Windows Setup knows where to find them and can “see” your drives on your RAID controller.

I went through the steps and when it came time to choose an OS, Windows Server 2008 R2 was not on the list. I figured Windows Server 2008 x64 was the closest thing and chose that. It did all its gyrations and then prompted me to insert the Windows OS disc. I put in my Windows Server 2008 R2 disc and… was rejected. Odd. I tried again, same response. “Please insert the Windows Server 2008 x64 OS Disc”.

At that point I realized that it was looking at the volume name on the disc and whatever my disc was, it wasn’t what was expected. Le Suck.

I got on to HP’s support site to find an updated Easy Setup CD, and eventually found the right page, but it only lists Server 2008, not Server 2008 R2. Lame. I kept looking and searching and ultimately hit the Support Chat button and got an HP Tech Support agent on the line. I explained my predicament and he sent me a link back to the page I was just looking at. I knew it was the same page because the link was purple instead of blue (i.e. already visited).

I explained that I had already looked at that page and it wasn’t what I was looking for. Then he decided that I must have had a 2008 R2 Hyper-V error and pushed me a link to an MS KB article that had three steps: 1) disable hardware virtualization, 2) install this hotfix, 3) re-enable hardware virtualization.

I calmly explained that I didn’t have Windows installed yet, so how could I possibly install a hotfix? He said I should download it, burn it to disc and then boot off the disc and apply the hotfix. I re-iterated that I did not have Windows installed, so there was nothing to patch with the hotfix.

“OK, skip step 2 then”

Riiiiight. So that left me with “disable hardware virtualization” and “re-enable hardware virtualization”. Since I hadn’t turned it on in the first place, it was still a moot point, and I told him so. He had reached the end of his flowchart now and didn’t know what to do next.

At that point I booted off the Windows Server 2008 R2 disc itself and, as expected, it couldn’t see any drives. I downloaded the SATA RAID controller driver, extracted it to a USB flash drive, jammed it in the server and clicked “Load Driver”. I pointed it at the folder and it found a driver for an HP B110i Embedded SATA RAID controller. Jackpot! The drives showed up, but… Windows could not be installed on the selected disk.

After searching Google with the error number that was presented, it turned up some “Windows 7/2008 R2 can only be installed to the first boot device/C drive” posts, so I went back into the BIOS and RAID setups to make sure that Disk 1 was the first device. It was.

I got back up to the Load Driver screen and noticed that my USB flash Drive was designated C:, the DVD-ROM drive D:, Disk 1 Partition 1 was E:, and the WinPE boot drive X:. I deleted the partition on Disk 1 and tried again. Same thing.

Finally, I booted back again without the USB drive, waited for the Load Driver screen to show, clicked Browse and THEN jacked in my flash drive. It showed up as C:. I picked the driver and loaded it, removed the flash drive, waited 5 seconds just to be sure, then clicked “Disk 1 Drive 1 Unallocated Space”, held my breath and clicked “Next”…

It worked.

Windows Server 2008 R2 is now installed on my new server and I’m running through Windows Updates and configuring it to be part of my network. Had I done what I knew worked to begin with, I’d be sippin’ a margarita by now, but instead I tried to do things “the HP way” and it wasted my lunch hour and most of the afternoon. The Easy CD way (if it had worked) would have been equally quick.

It galls me that a company the size of HP, with the volume of servers they sell, hasn’t released an update to their software yet. Windows Server 2008 R2 was released to manufacturing in July 2009 and went on sale in October 2009. It’s almost June 2010 and they still haven’t addressed it. What makes it worse is that this entry-level server is aimed at the segment of the market that doesn’t really have its own IT department to figure this stuff out.

I think I’d like that margarita now, senor, por favor!

Friday, May 28, 2010 3:35:35 PM (Pacific Daylight Time, UTC-07:00) | Comments [4] | Hardware | Microsoft | Servers | Windows#
Wednesday, March 17, 2010

There are a lot of blogs, classes, tutorials, how-tos, workshops, links and opinions on how to best deploy Windows 7 using the new Microsoft Deployment Toolkit 2010. What there’s a distinct lack of is how to make these tools work with XP, which most of us are still using. I am planning to move to Windows 7 x64 later this year, but we have a software dependency on 32-bit Windows that we have to get past first (and no, Windows XP Mode won’t cut it for this app).

I spent most of yesterday downloading software and patches. The Windows Automated Installation Kit 2.0 (which supports Windows 7 and 2008 R2, and back to XP) was a 1.7GB ISO file which took a couple hours to download.

Eventually last night I was ready to start the capture of an existing Windows XP box that I could then deploy to the other new machines.

This morning I tried to do it and it failed. I assumed it was permissions-based, since the error was 0x00004005, which I know from past experience is “Access is denied”. After sorting that out, it still failed. Trolling through forums from a Google search, I found some people were able to get it to work by using the IP address of the deployment server, or sometimes the FQDN, rather than just “\\server\share$”.

I rebooted, opened Windows Explorer and navigated to \\192.168.x.x\share$ and when it asked me to authenticate (because this is a workgroup computer and the share is a domain resource) I entered my credentials and then I double-clicked the litetouch.vbs script to kick off the imaging process. This time it seemed to work, it downloaded the WinPE files needed, ran sysprep and then rebooted to capture the image… except that’s when it failed.
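The same two steps can be done from a command prompt, which saves the clicking next time around. A sketch, using the same placeholder address as above (the share path and account are examples, and LiteTouch.vbs normally sits in the deployment share’s Scripts folder):

```shell
:: Authenticate to the deployment share first so LiteTouch doesn't hit an access error
:: (the * prompts for the password instead of putting it on the command line)
net use \\192.168.x.x\share$ /user:DOMAIN\myaccount *

:: Then kick off the Lite Touch wizard
cscript \\192.168.x.x\share$\Scripts\LiteTouch.vbs
```

Nothing fancy, but it makes it obvious which credentials are in play when the capture starts.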

Digging into the winpeinit.log I saw that there was no NIC. Awesome. Great. I figured that the driver for the NIC would be part of the Windows image, but I overlooked the fact that WinPE would also need the NIC driver at boot time in order to connect to a network share and create the disk image there, and the new machines would need the NIC driver to connect to that same share and copy the image down to the local computer.

No biggie, except that the computer is now stuck in a loop booting into WinPE rather than back into Windows XP. I injected the driver for the NIC into the deployment share’s Out-of-Box Drivers and rebuilt/updated the deployment share (which also adds the NIC driver to the winpe.iso file). All that’s left to do now is to PXE boot the machine, which will download the new WinPE (now with more NIC flavor) and start over… except now my PXE server isn’t configured properly :p
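Under the hood, updating the deployment share is doing roughly what you could do by hand with DISM from the WAIK. A sketch, with placeholder paths:

```shell
:: Mount the Lite Touch boot image from the deployment share
dism /Mount-Wim /WimFile:C:\DeploymentShare\Boot\LiteTouchPE_x86.wim /Index:1 /MountDir:C:\Mount

:: Inject every driver .inf found under the NIC driver folder
dism /Image:C:\Mount /Add-Driver /Driver:C:\Drivers\NIC /Recurse

:: Commit the changes back into the .wim
dism /Unmount-Wim /MountDir:C:\Mount /Commit
```

Letting MDT do it via Out-of-Box Drivers is the supported route, but it’s nice to know what it’s actually doing to the boot image.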

Wednesday, March 17, 2010 12:27:45 PM (Pacific Daylight Time, UTC-07:00) | Comments [0] | Tech | Deployment | Microsoft | Networking | Servers | Windows#
Tuesday, February 23, 2010

How come a “printing system” has to be a 300MB download or a CD ordered by mail? I’m all for having that as an OPTION, but for servers and for shared printers, all I need is a driver, and that could probably still fit on a floppy disk… if my computers and servers still had floppy drives, but that’s another post!

I already posted about 32-bit printing in an increasingly 64-bit world, and my medium-term solution for that was to stand up a 32-bit Windows Server 2008 VM and use that as a print server.

This post is the next step: printer drivers. Specifically, migrating printer drivers from one server to another. For the small number of printers I have to manage (three printers and two plotters in this office), or even the number of printer queues at my last job (about 40), it’s not so difficult to do manually. I did just that when we moved into a new building at my last job and stood up a VM just for print queues. Pretty straightforward, really: download the latest printer drivers from the manufacturer’s web site, unpack them to a network location, Add Printer from the Printers control panel, new local port, new TCP/IP port, punch in the printer’s IP address, Have Disk, browse, click, select… done. 40 times. A wee bit time-consuming. For this migration here I only had the six, so it should be even easier. But what if the newer version of a printer driver doesn’t work properly with your as-configured software?

That’s where I am right now. We have a Kyocera CM3232 photocopier/printer/scanner/fax. It’s a big one with its own onboard cost accounting and “proper” network scanning & faxing. It does color and black & white and prints on up to 11x17 paper (although not borderless). On the old OLD server, printing CAD drawings from Acrobat Reader plots properly. On the new-old server, it didn’t. There were some weird issues where drawings would not be rotated based on the settings you selected in Acrobat, but if you left Acrobat’s settings on Portrait and clicked Advanced Print Properties and changed it to Landscape in the driver settings, it would work. Not very intuitive, and sure to be the cause of plenty of helpdesk calls.

We tried a different driver, we tried an old driver from a CD that presumably came with the printer, and nothing seemed to work. In the end, I re-pointed everyone’s printers back to the old server and removed the queues from the new-old server… but that old server isn’t going to last much longer, it’s not easy to find parts for an old IBM xSeries Pentium III tower server, and having a single Windows 2000 Server in the mix is also holding the rest of the network back.

The new-old server blew up in December. No big deal for printing, but HUGE FUCKING DEAL for everything else. I managed to get it up and running again, Frankenstein-style and convert it to a virtual machine before shutting it down for good and sending the carcass to the recycling center.

That new one is here, and one of its roles is hosting a Windows Server 2008 32-bit VM for print queues, so I’m back to trying to make the new server play nice and plot drawings properly… and the Windows Server 2008 driver for the copier is doing the same weird things the 2003 driver was doing… If only there was a way to migrate those queues, drivers and ports over to a new server… oh wait! There is! Hallelujah, I think I hear a choir of angels singing… wait, what? That only really works for moving from NT4 to 2000? It wasn’t really updated for 2003, 2003 R2 or 2008? The tool has been retired? Oh, good grief!

Fortunately there’s a new version built in to Server 2008 and Server 2008 R2. You access it from the Print Management administrative tool, as opposed to the Printers control panel applet. From there you can add the old server as a network print server, right-click it and export printers to a file… then right-click your new server and import printers from the file. I’m in the process of doing that right now, and will be testing it with CAD drawings later today. Fingers crossed.
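The same export/import can also be driven from the command line with PrintBrm.exe, which (as I understand it) is what the Print Management wizard uses behind the scenes. A sketch, with placeholder server and file names:

```shell
:: PrintBrm lives in the spooler tools folder on 2008/R2
cd /d %WINDIR%\System32\Spool\Tools

:: Back up the queues, drivers and ports from the old print server to a file
PrintBrm -s \\oldprintserver -b -f C:\Temp\printers.printerExport

:: Restore everything onto the new print server
PrintBrm -s \\newprintserver -r -f C:\Temp\printers.printerExport
```

Useful if you ever have to redo the migration after hours and don’t feel like clicking through the wizard again.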

Tuesday, February 23, 2010 11:43:52 AM (Pacific Standard Time, UTC-08:00) | Comments [0] | Microsoft | Networking | Servers | Windows#
Friday, February 12, 2010

(or a 64-bit domain anyway)

Hooray! 32-bit is dead! Long live 64-bit! … … … not exactly.

While there are more 64-bit machines out there now than there were a year ago and tons more than a few years ago, a lot of businesses are still firmly entrenched in 32-bit Windows XP. I know we are.

We’re a pretty good example of someone who SHOULD make the leap to a 64-bit OS. If there’s one segment of the market that supports 64-bit and is extremely memory-hungry, it’s CAD work. And we’re all about CAD work. I’ve recently upgraded all the computers to 4GB of RAM and standardized them on one video card (nVidia Quadro FX 580 512MB), but they’re not taking full advantage of that 4GB of memory because 32-bit XP Professional can’t address it all. Even with the /3GB switch in the boot.ini file, that just means acad.exe can use more than the 2GB per-process limit… but I’m getting off topic.
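For reference, the /3GB switch goes on the OS entry in boot.ini on 32-bit XP (the partition path is whatever your install happens to use); it shifts the user/kernel address split so large-address-aware apps like acad.exe can use up to 3GB:

```ini
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /3GB
```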

When I started here in Q4 of 2008, I took one look at the “datacenter” and my jaw dropped. The main file server was an old IBM xSeries with a Pentium III, a whopping 768MB of RAM and a couple of 160GB hard drives in RAID1. The web/intranet server was an even older one. Both were running Windows 2000 Server. The Domain Controller was newer, it at least had Windows Server 2003 on it, but it was consumer-grade, non-redundant components in a 2U rackmount case.

Before Christmas rolled around I had replaced the ancient file server with a pair of Supermicro SuperServers with quad-core Xeons, 4GB of RAM and 5x1TB SATA2 drives in RAID5 configurations, and added an LTO-4 tape backup to the mix. Between Christmas and New Year’s, the web server died, so I replaced it with another Supermicro identical to the first two, but with just 2x250GB and 2x500GB drives in RAID1. All of these servers were running Windows Server 2008 Standard x64.

This led me to a major problem: I was able to install printer drivers for each of the printers on the servers themselves, but only the 64-bit drivers. Client computers (XP Pro SP2 x86) tried to connect and failed because they couldn’t use the 64-bit drivers. In the old days, you could go to the Sharing tab of the printer properties and click “Additional Drivers” and that was pretty much that, but cross-architecture is a little more squirrelly, and the solution is counter-intuitive.

Here is how to provide a 32-bit driver in the Additional Drivers page on a 64-bit server:

Step 1: Install the 64-bit driver on the server itself and make sure that you can print.

Step 2: On a 32-bit client (I used XP Pro) download and unpack the drivers for the desired printer (in my case it was an HP Laserjet 4600).

Step 3: Open Windows Explorer and navigate to your printer share: \\64-bit_server\ and then double-click Printers and Faxes.

Step 4: Right-click the desired printer and select Connect. It will do its thing and then… uh-oh, where’s the driver? It will ask you to provide one. Browse to the local folder where you’ve stashed the .inf files for the printer and let it install. Print a test page to make sure it’s working on your computer.

Step 5: On the server, right-click the printer you just added and select Properties. Click the Sharing tab, and then click the “Additional Drivers” button. Check the “x86” box for 2000/XP and click OK. The server will then request the x86 versions of the files FROM your local workstation and upload them TO the server.

This is the back-asswards part that tripped me up. You’re actually uploading the driver TO the server so it’s able to then DOWNLOAD it to OTHER x86 clients that request it.

Step 6: Click ok, ok, ok, all the way back out and you should be good to go.
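If you’d rather not bounce between machines at all, my understanding is that PrintUIEntry can install a driver for another architecture directly on the server, pointed at an .inf you’ve downloaded. A sketch using my LaserJet 4600 as the example (the .inf path and filename are placeholders):

```shell
:: Install the 32-bit (x86) driver on the 64-bit server
:: /ia = install driver, /m = model name, /h = architecture,
:: /v = driver version, /f = path to the .inf
rundll32 printui.dll,PrintUIEntry /ia /m "HP LaserJet 4600" /h "x86" /v "Type 3 - User Mode" /f C:\Drivers\hp4600\hp4600.inf
```

I haven’t verified this on every combination of server and driver, so treat it as a starting point rather than gospel.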

Friday, February 12, 2010 5:00:00 PM (Pacific Standard Time, UTC-08:00) | Comments [2] | Tech | Microsoft | Networking | Servers | Windows#
Wednesday, February 10, 2010

Dingle Dangle Dongle… I’m Robert Goulet! doo da deee da dabba doooo

Seriously. It’s 2010. Who still uses Parallel port hardware locks? For that matter WHO STILL USES PARALLEL PORTS?

One of our (I thought older) software packages we use where I work has a parallel port dongle. Dongle not there? No design software for you!

What happens when you upgrade someone off some ancient AMD Athlon to a newer computer from the last few years, one that doesn’t even have a parallel port on the back anymore? Well… not much! But wait! There’s USB! People still make and use USB dongles! We’ll just ask the vendor to replace it! What? No? You don’t have any more? But the software is still supported, isn’t it? Yes? Well, what happens if someone loses their dongle? What if there’s a fire? They’re SOL? Maybe? Who knows.

Eventually someone got back to us and said that since version 10.1 you don’t NEED the dongle anymore. We’re on 10.7 so we should be OK without it… right? No?

OH, you mean we have to completely uninstall the whole thing, then re-install from the non-customized version on the DVD, and then apply eight service packs plus our customizations? Sure, no problem! I’ll get right on that! It’s not like I had anything to do all day, nor did the operator whose computer is out of commission all day now, either!

Wednesday, February 10, 2010 1:15:44 PM (Pacific Standard Time, UTC-08:00) | Comments [1] | Rants | Tech#
Thursday, January 28, 2010
About a week later the server died. I diagnosed over the phone that it was the power supply and rather than travel over for 5 hours & a ferry ride and then have to stay over just to replace a $100 power supply, I had them take it to a local computer store and have them replace it.
Thursday, January 28, 2010 11:23:10 AM (Pacific Standard Time, UTC-08:00) | Comments [0] | Tech | Active Directory | Hardware | Microsoft | Servers#
Saturday, January 23, 2010
(I wrote this almost a year ago and it’s been sitting in my drafts folder since then. It’s still an outstanding issue and I haven’t figured it out yet.)
Saturday, January 23, 2010 6:36:00 PM (Pacific Standard Time, UTC-08:00) | Comments [3] | Mail Server | Networking#
Friday, January 22, 2010

WSUS is a pretty cool piece of software. Basically it acts as a “Windows Update” server for your network. Rather than have all your computers each download the same updates from Windows Update, your WSUS server downloads them once and then distributes them to all the computers that need them over your LAN connection, which is much speedier than 99.9% of the internet connections out there. It also gives you a single place to go to approve updates. Heard bad things about an update? Don’t approve it for installation and it won’t make its way onto any of your machines until you do (or until they release an update to supersede it). A nice solution for small and medium-sized networks.

You can extend it out to different geographical sites, too. Using a downstream replica server, you can have your server in another office “take its lead” from your server and either download the updates from you, or (and this is cool) download only the updates that you’ve approved on your server directly from Microsoft’s servers. If you have a metered or slow connection between the offices, this is a great solution. You still have only one place to approve/deny updates, but you don’t chew up bandwidth pushing the updates from Office A to Office B.

This is the setup that I have. I have six offices (and two satellite offices but they’re not part of the corporate network) and aside from head office, there’s only one server in each location. These servers are Domain Controllers (for logins & resource management), WSUS downstream replicas for Windows Updates, and File & Print servers for that office.

WSUS uses Group Policy Objects (GPOs) to configure your clients (XP, Vista, Windows 7, Server 2003, 2003 R2, 2008, 2008 R2) to look at your own server for Windows Updates, as well as how often to check, and whether or not to allow users to defer a restart so as not to interrupt them in the middle of something. Here’s where my setup gets tricky.

I have a GPO called WSUS-Office A that I apply to the Active Directory Site called “Office A” so anyone who logs in at Office A will have their Windows Update Automatic Updates (WUAU) client pointed at the local server. Other offices have their own GPO assigned to their sites to keep everyone looking at the closest/fastest server/connection.
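For the curious, when one of these GPOs applies, the client-side result lands in the registry under HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate. A sketch of roughly what the Office A policy lays down (the server URL is a made-up example; your WSUS port may be 80 rather than 8530 depending on how it was installed):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
; Point Automatic Updates at the local WSUS replica instead of Microsoft
"WUServer"="http://wsus-officea:8530"
"WUStatusServer"="http://wsus-officea:8530"

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
; 1 = use the WUServer specified above
"UseWUServer"=dword:00000001
; 4 = auto download and schedule the install
"AUOptions"=dword:00000004
```

Checking for these keys is a quick way to tell whether the policy actually reached a given machine.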

The hitch I ran into today was with my servers, because of the out-of-band security bulletin Microsoft released today for MS10-002. Because of the Big Scary Crisis surrounding it, and the fact that it was listed as Critical and affecting IE6, IE7 and IE8 on everything from Windows 2000 SP4 all the way up to Windows Server 2008 R2, I manually synchronized my WSUS with Microsoft this morning, downloaded the updates and approved them.

I also did a dirty thing to my users: I set a deadline in WSUS of noon today for the installation. That means they’ll be notified of the download, and if they click the little yellow shield it will install and then say “Time to restart!”, but they can click Restart Later. Once the deadline passes, however, they don’t have a choice: the window comes up and says “restart your computer or I’ll do it for you” and starts a 15-minute countdown timer. I don’t do it often, so they know I only do it for “critical” updates. Plus, I emailed everyone last night to tell them it was happening and posted it on the intranet as an announcement. This morning they all got a second email that it would happen shortly.

Where the patch wasn’t installed was on some of my servers. Some of them got the update, and some of them installed it and rebooted without warning (oops, but they were warned). I started looking into why some of the servers installed it and some didn’t. My first thought was that the Server 2003 servers did but the Server 2008 & R2 servers did not. I thought perhaps that the GPO didn’t apply to/configure the Windows 2008 clients, but that was wrong, too.

Finally I compared a 2008 virtual machine’s Windows Update screen (which wasn’t working) to a 2008 physical machine’s Windows Update screen (which was). The 2008 VM said “You receive updates: For Windows and other products from Microsoft Update” and the 2008 host said “You receive updates: Managed by your System Administrator”. Further investigation into the registry (HKLM\Software\Policies\Microsoft\Windows\WindowsUpdate\AU) showed that the settings specified in the GPO were applied to the 2008 host, but not the 2008 VM.

It then dawned on me that the difference between the two was that the host was a member server and the VM was a domain controller. That led me to GPresult and Group Policy Modelling. Using the DC and Administrator accounts, the GPO (identified by a GUID rather than its name) that was applied to the site was denied application due to SOM (Scope of Management).
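The quickest way I know to see that denial without firing up the Modelling wizard is gpresult against the DC; the computer name below is a placeholder:

```shell
:: Verbose computer-scope policy results for a remote machine;
:: denied GPOs are listed along with the reason they were filtered out
gpresult /s DC01 /scope computer /v
```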

I expanded the forest folders and drilled down to the Domain Controllers OU and saw a blue exclamation mark on it. Blocked Inheritance. That meant the Domain Controllers OU wasn’t going to inherit any settings from GPOs ‘above’ it, including ones linked at the site level.

So my first choice at this point is to remove the block and let everything apply to the DCs. Not a very good idea. There were three policies which would then apply to the DCs: the Default Domain Policy, the Remote Desktop Policy and the Office 2007 File Format Policy.

The Office 2007 File Format Policy is tame; all it does is make the default save format the Office 97-2003 compatible one instead of the new .docx, .xlsx and .pptx formats. The Remote Desktop Policy is equally benign: it’s denied to Domain Admins and auto-disconnects clients from Remote Desktop after 10 minutes of inactivity, so it wouldn’t really apply anyway.

The Default Domain Policy had a fair number of settings in it, though: firewall settings, password policies, that sort of thing, which I don’t necessarily want applied to my Domain Controllers.

SO, removing the Block Inheritance setting probably wouldn’t be a good idea.

The second option was to apply the WSUS-Office A policy directly to the Domain Controllers OU. That would get around the Block Inheritance issue without applying the Default Domain Policy to them, but it would also “point” each of my offices’ Domain Controllers back here over the slow, metered internet connection. Not ideal either.

The third option was to copy each of the WSUS-OfficeX policies, apply ALL of them to the Domain Controllers OU, and use filtering to make sure that each office’s policy only applies to that office’s WSUS server. That doubles the amount of work whenever I change one of the servers, though, and if I forgot, one of the Domain Controllers would be pointing at a nonexistent update server, which could leave it unprotected and unpatched. Guh. Meh. Not ideal.

SO that’s where it stands now. I haven’t done anything yet. In the short term I’m remembering to manually check the DCs for Windows Updates until I can come up with a more elegant solution to the GPO filtering situation.
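The manual check doesn’t have to mean clicking through the Windows Update control panel on every DC, either; the stock Automatic Updates client can be kicked from a command prompt (Server 2003/2008-era commands, collected below as reference strings since they’re Windows-only):

```shell
# Sketch: forcing an update detection/report cycle by hand on each DC.
DETECT='wuauclt /detectnow'   # ask the Automatic Updates client to check for updates now
REPORT='wuauclt /reportnow'   # push the client's status back up to the update server
printf '%s\n%s\n' "$DETECT" "$REPORT"
```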

Friday, January 22, 2010 5:00:00 PM (Pacific Standard Time, UTC-08:00) | Comments [0] | Tech | Microsoft | Servers | Windows#
Thursday, January 21, 2010

(This is a crosspost from the Autodesk Discussion/forum website that I was participating in)

Since I started here 15 months ago, I've been wary of messing with NLM because I didn't understand it. I still don't know all of it, but I know a lot more thanks to Travis and the rest of the contributors. NLM isn't as big a scary monster as it was before! There were Group Policy entries in my domain specifying an environment variable that pointed at the local license server (distributed model) by IP address, then the next biggest office as a secondary and the third biggest as tertiary. So for example, if you logged in to a computer in site A, your environment variable would be ADSKFLEX_LICENSE_FILE=@192.168.1.2;@192.168.2.2;@192.168.3.2. It was working, so I had no motivation to change it.

While checking some things out at Travis' suggestion, I changed it to a server name, so on my test computer in site C the environment variable was ADSKFLEX_LICENSE_FILE=@SiteC_server;@SiteA_Server;@SiteB_Server, and it worked. I then changed all my environment variables to computer (NetBIOS) names.
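As a sketch, the change amounts to swapping IP literals for names in the FlexLM search path. This assumes the standard Autodesk variable name, ADSKFLEX_LICENSE_FILE; the server names are placeholders from the example above.

```shell
# Before: the license search path by IP address (primary, secondary, tertiary)
OLD_PATH='@192.168.1.2;@192.168.2.2;@192.168.3.2'
# After: the same search order by NetBIOS name; FlexLM tries each @host in turn
NEW_PATH='@SiteC_server;@SiteA_Server;@SiteB_Server'
# On a Windows client you would set it machine-wide with something like:
#   setx ADSKFLEX_LICENSE_FILE "@SiteC_server;@SiteA_Server;@SiteB_Server" /M
printf '%s\n%s\n' "$OLD_PATH" "$NEW_PATH"
```

In our case Group Policy pushed the variable out, so no per-machine setx was needed; the GPO value just changed from IPs to names.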

That sorted out four of my five offices; only in the third one, Site C, were users still grabbing licenses from sites other than their own. Further investigation showed that two of the users on the wrong license server hadn't logged out and back in for some time. (This prompted a quick meeting with the CAD Manager and the Sustainability Committee to change our inactivity timers: lock computers after one hour, log users off after two, and go to system standby after three hours outside of regular business hours.) When one of the problem users logged back in and started up AutoCAD, she did not get a no-license error; instead, AutoCAD seemed to hang with an hourglass for a good 60-90 seconds, after which it started up normally and she was on the correct license server. I did the same thing with the other user and got similar results.

So in the end, there was some sort of networking issue (still undiagnosed) that was causing clients to skip over their own license server, but changing the environment variables from IP addresses to NetBIOS names fixed the problem.

Later in 2010 we may implement other changes recommended here and move to a single/redundant license server instead of the distributed model.

Thursday, January 21, 2010 10:25:31 AM (Pacific Standard Time, UTC-08:00) | Comments [0] | Autocad | Networking#
Friday, January 01, 2010

First post of the new decade... maybe I won't let this place grow cobwebs in 2010 like I did in 2009 ;)

Friday, January 01, 2010 12:02:19 AM (Pacific Standard Time, UTC-08:00) | Comments [0] | Misc#
Tuesday, May 05, 2009

Back in January I posted a few articles about Windows 7 Beta and what it did to my laptop. It’s not Microsoft’s fault; it’s a combination of Dell’s and nVidia’s. It was the perfect storm: a known design flaw in the video card that affected a boatload of Dell, HP, Sony and Macintosh notebooks. On top of that, a poor design choice by Dell meant there wasn’t actual contact between the overheating GPU chip and the copper heat pipe that’s supposed to cool it. On top of that, I was running a beta OS. On top of that, I was using a pre-beta alpha release of a driver for said beta OS on a flawed laptop with a flawed GPU. A perfect storm.

While watching a video full-screen in Windows Media Player, the GPU overheated and blew up. Not only did it crash and blue screen and completely wipe out the running OS, but somehow it managed to overwrite the GPU BIOS! That shouldn’t be POSSIBLE, but it happened. The computer would boot up, just no screen. If I watched and waited for the hard drive to stop spinning away during bootup, typed my password and hit enter, it would log me in! I could HEAR the windows startup sound, but no video. No video on the external monitor or HDMI ports, either. Ultimately, because it was under warranty, Dell sent out a technician who replaced the whole motherboard, GPU included (although they replaced it with the same broke-ass GPU chip) so the story ended happily.

One of the things I noticed in the beta was the feedback system, which I used extensively (duh, that’s what betas are for) until I couldn’t. The big huge crash dump from the video card was never sent because after the motherboard was replaced, I was too scared to put the Windows 7 hard drive back in again. I figured I would wait until another beta (or RC) came out and hopefully there’d be a newer driver from nVidia available then, too.

On another note, there’s a way to sandwich a clean, shiny penny between the GPU and the heat pipe, which drastically improves the transfer of heat to the heat pipe and can prevent just such an occurrence (you can Google “nVidia GeForce 8400M GS copper mod” to see for yourself). On the downside, doing so invalidates your warranty. I’ve refrained from doing it because of that, but when the warranty runs out, it’s on my to-do list for the very next day. Instead of doing a recall and replacing the bum chips (and the heat pipe while they were at it), Dell extended everyone’s warranty by 12 months, so if your laptop blows up (like mine did) you’re covered for an extra year... but if it happens AGAIN after that period, you’ve got a dead laptop. No one else did anything better (HP, Sony, even Apple), so I don’t want to be TOO unfair and shit all over Dell, not least because they and their tech support have been very good to me over the years. No, really! :)

The Windows 7 RC is out today and will work (for free) until June 10th, 2010, or about 13 months. The fine print says that starting two months before that, your computer will shut down every two hours as a warning that the expiration is imminent and it’s time to get a properly licensed copy. Hopefully there’s an upgrade path so you can punch in a new product code and activate Windows without having to re-install with the release version. I can’t see myself NOT re-installing with 100% gold code, but I’m sure there are people out there who have tweaked and modded their user profile and software set-up JUST SO and won’t relish the thought of starting over.

Tuesday, May 05, 2009 10:04:58 AM (Pacific Daylight Time, UTC-07:00) | Comments [6] | Links | Tech | Microsoft#
Monday, March 09, 2009

Happy Valentine’s Day, ladies. I hope you had a lovely day…

 

This Saturday it’s your turn to return the favor. That’s right, it’s been a month already! March 14th is Steak and BJ Day. It’s pretty simple… It’s steak… and a BJ!

 

www.steakandbjday.com for more details (pretty NSFW content)

 

We’ll be celebrating this year at Little Billy’s Steakhouse in Burnaby, but the jury is out on who’s picking up the tab! ;)

Monday, March 09, 2009 8:45:54 PM (Pacific Daylight Time, UTC-07:00) | Comments [1] | Links | Misc#
Thursday, February 05, 2009
Dailyplate.com has an iPhone app called the Livestrong Calorie Counter that works in conjunction with your DailyPlate account. You can look up their database on-the-go and add foods/exercises and then sync it with your online username/interface.
Thursday, February 05, 2009 9:54:32 AM (Pacific Standard Time, UTC-08:00) | Comments [0] | Fitness | Food#
Last week I started the Couch to 5k program again, all the way back at week one, thinking that it had been too long since my last run. Where I was gasping for breath on the last interval a year and a half ago, this time I completed week one’s workout barely breaking a sweat.
Thursday, February 05, 2009 9:39:07 AM (Pacific Standard Time, UTC-08:00) | Comments [1] | Fitness | Food#
Monday, February 02, 2009
Did I mention that since it was the first business day after the 15th of the month, it was TPS report day??
Monday, February 02, 2009 5:01:37 PM (Pacific Standard Time, UTC-08:00) | Comments [2] | Tech#
Thursday, January 22, 2009
Ironically, I watched the first episode (the one where the plane comes apart and they crash-land on a tropical island) on a PSP WHILE I was on a flight from Cayman to Miami.
Thursday, January 22, 2009 8:46:51 AM (Pacific Standard Time, UTC-08:00) | Comments [0] | Links | Misc#

I’m not sure how I could possibly have forgotten, but I let this domain expire. :)

 

I saw the email from Network Solutions on my phone this morning and assumed it was just one of those “your services expire in six months! renew now!” semi-junkmails. Nope! This one said “Your Network Solutions Service has Expired”.

 

Oops.

 

And the DAY before payday, too. Ahh well. I suppose that’s what credit cards are for.

Since my laptop is down for the count (I’m expecting the replacement laptop to arrive today or tomorrow), I haven’t synced my iPhone in the roughly two weeks since I installed Windows 7 to try it out, so my calendar hasn’t been syncing either.

My email is downloaded via POP3 from my Exchange mailbox, so when I connect to Outlook Web Access, I don’t have contacts or a calendar there to remind me, either.

In the end, no harm, no foul. I’m back up and running and the DNS servers probably didn’t even have a chance to propagate to the pending deletion landing page.

Dell now has three open service calls for me, and I sense it’s going to get worse before it gets better. The local firm that Dell contracts to do their re/re’s told me that I would be receiving a new unit. Then Dell’s national technician appointment center called to let me know a new part had shipped out and that a technician would contact me to arrange a time to come and install it. Then the local tech’s dispatch called to tell me that the parts hadn’t arrived and that they would call me back tomorrow (today, now) when the parts arrived.

I stopped him and asked him if I was getting a new motherboard or a new system, and he didn’t know, but thought that it was odd that the delivery address was both my home address and their business address.

I got his cell phone number and name and said if nothing showed up by Friday noon I would call him back and he could sort it out with Dell. Fortunately (for both me and Dell) I’m not a one-computer household that’s relying on this one system. I’ve got Laurie’s desktop, her netbook she got for Christmas and a media server plus my work laptop all at my disposal. He thanked me for my patience and said he would be in touch shortly.

Thursday, January 22, 2009 8:24:54 AM (Pacific Standard Time, UTC-08:00) | Comments [0] | Tech | WWW#
Tuesday, January 20, 2009

Dell’s local supply chain technician called me yesterday morning to set up a time to replace the parts on my laptop that seemingly blew up. They didn’t have the parts yet, but were expecting them later that day so they’re going to call me back this morning to arrange a time to do the repair.

I brought my laptop to work, and the tech’s office is actually just around the corner from mine, so he can do the repair whenever, and when I take the laptop home tonight it’ll be fixed.

I turned to my co-worker James and said, “Hey, do you want to see my screwed-up video card?” He came over, I turned the laptop on... and it worked! WHAT THE HELL??

I’ll mention it to the repair tech, but I’ll still have him replace the parts. Save him a trip out again later, ESPECIALLY if he can replace the GPU with another, non-f’d up one.

Update: Well, it must have been its final hurrah. When the technician arrived, it came up with the BIOS logo screen but then died. He began to disassemble the laptop to replace the system board (that's the motherboard in Dell-speak), which unfortunately has the same GPU chip on it as the one being replaced. Ultimately he had to stop and make arrangements to come back tomorrow because--get this--he couldn't get one of the screws out and had to get a different screwdriver. I have one that's the perfect size for laptops, but unfortunately I left it behind on Vancouver Island last week. He's coming back tomorrow to finish the job. It's a darned good thing that I'm a huge nerd and have three other computers at home I can use until this one is back up and running.

Tuesday, January 20, 2009 8:57:30 AM (Pacific Standard Time, UTC-08:00) | Comments [1] | Tech | Gadgets | Microsoft | Windows#
Saturday, January 17, 2009
Ahh the joys and risks of running beta software.
This morning I fired up an xvid video that I had downloaded, and partway through, the audio stuttered and froze, and then the screen froze. The screen went black, then it came back, then went black again. I tried to hit Escape to get out of full screen so maybe I could catch it and click close, but before that happened, I got a Blue Screen Of Death (BSOD). No big deal; they happen from time to time, and it IS beta software.
The problem was when the computer restarted, I didn't get the Dell logo screen. I didn't get the Windows logo startup screen. I didn't get a login screen. What I got was a series of lines running top to bottom mostly on the left side of the monitor... multicolored but slowly becoming all white. The rest of the screen slowly started showing vertical lines until eventually the whole screen turned white. Not good. What the hell? How could a crash physically damage hardware? I tried turning it off and on again, same thing.
Watching closely, I could see and hear the BIOS POST (Power On Self Test). After a minute or two, the hard drive activity light blinked out. On a hunch, I entered my password and hit enter. Hard drive activity resumed and it logged me in. Of course, I couldn't see anything so all I could do was shut down gracefully.
Using my other computer, I checked Dell's support site and did the diagnostics they suggested. Turns out my LCD monitor is fine, but the video card is hosed. How on earth did watching a video cause a crash in the driver that resulted in not only a BSOD but a physical corruption of the card itself? That's unheard of!
In hindsight, I think it was a combination of things. My laptop has the nVidia GeForce 8400M GS video card in it, which is known to have a major design flaw. This affected Dell, HP, even Apple's MacBook Pro laptops that had this chip in them. Ultimately Dell extended the warranty of every system with this chip in it for free. The combination of a flawed video chip and a beta driver for a beta OS was a recipe for disaster.
Ultimately I had to call Dell. The NEXT major obstacle is that I bought this laptop through my corporate account... through Dell Latin America. I'm now in Canada and have to have the system transferred. I called the Dell XPS tech support line (XPS has its own tech support department, which is one of the nice things about paying a premium for a product) and got through to a technician with a slight FRENCH accent, which leads me to believe the call center is here in Canada, rather than Panama for Dell Latin America or India for Dell US and A.
I explained what happened and what steps I had already taken. (Having dealt with Dell Tech Support on issues for the hundreds of systems I had at my last job, I learned how to work WITH them rather than have them rely on their flowcharts.) I also told him that since this was the known-bad GPU, I'd prefer to have a technician come on-site and replace it rather than send my laptop in for depot service. You just never know if you're going to get your own computer back, with a freshly-installed OS and no data, photos, emails, contacts or anything else on it. They said no problem, got my address and--wait a second. This address isn't in Grand Cayman.
Uh-oh. He processed the dispatch for me and then said he was transferring me to Customer Care to update my records, since tech support has read-only access to customer records. He gave me the case number and transferred me to Customer Care reception. I gave them my case number, said I needed to transfer from Latin America to Canada, and was put through to someone. Someone else picked up right away (I think I had spent less than 2 minutes on hold this whole time so far) and I explained my situation to him. This person, who DID have an Indian accent, told me that since it was purchased through a corporate account it would have to be dealt with by the corporate sales department, not Customer Care, and that he would transfer me. I tried to stop him, and he listened to what I had to say, then repeated his script and transferred me... to an automated message saying that the department I was trying to reach was currently closed, please try again on the next business day. ARRRRRRRGH! I hung up; the call had lasted 19 minutes and 44 seconds.
I re-dialed the XPS number and again got a technician, Robby, who sounded Canadian. I said I had just called a few minutes ago, spoken to a tech, gotten a case number and then been transferred to Customer Care, who sent me down a rabbit hole into a dead end. He apologized, asked for my case number, and re-confirmed my name, address, email and phone number. Then he said he would re-submit it to dispatch and asked if he could put me on hold for 3-5 minutes. He came back on in about 3 minutes and told me everything was set; he gave me a dispatch number and told me a technician would be calling sometime early next week (because it's 5:00 PST on a Saturday) to schedule the best time to come and replace the part. Just like that. I asked him if they were going to replace it with the same GPU, the nVidia 8400 that's known bad, or with something that wasn't borked by the factory. He said he didn't know; it would be up to the technician. If they had a better solution at the time of install, then yes, they would replace my GPU with a different one.
SO. Windows 7 beta: out. nVidia GeForce 8400M GS: out. Dell XPS tech support: big thumbs up. The worst part is going to be getting through the next week or so with only my desktop, Laurie's desktop and Laurie's netbook in the apartment :)

Saturday, January 17, 2009 5:17:34 PM (Pacific Standard Time, UTC-08:00) | Comments [2] | Tech | Gadgets | Microsoft | Windows#