Tag Archive for: IIS

IIS: No Monitoring Hits in the Logs

12 Sep 2010

A few weeks ago I wrote about how I was changing my IIS web server configurations to (hopefully) better manage memory and App Pools. That’s been working out quite satisfactorily. Yesterday I realized the changes I made can give another nice side effect.

I was preparing to start some maintenance yesterday morning and I wanted to make sure nobody was hitting the development web server. One easy way to do that is to just check the IIS logs, and that’s what I did. Open the log, jump to the end, and then scroll back up past all the once-a-minute “is the server up?” monitoring hits … and it suddenly hit me: those monitoring hits don’t need to be there!

It would be nice to be able to open a log file, jump to the bottom and see the last real hit without having to filter through all the monitoring hits. Since I recently put the page the monitors hit in its own Application (and app pool) this is remarkably trivial to do. One mouse click, in fact.

Seems obvious, doesn’t it? Just uncheck “Log visits.” I couldn’t do this in the past since the monitoring page was in the same app / app pool as all the client sites. But now I can — and I like it.

Oh, Did You Want a Timeout?

24 Aug 2010

Back in February I mentioned that I was reconfiguring my IIS 6 web servers to shut down the App Pools after 2 hours of inactivity. That seemed a much better option than the brute force iisreset that I’d been scheduling as a nightly event.

Turns out I wasn’t quite done yet. Here’s a snip of a conversation I had earlier today with my CTO, Hans.

“Ya know, I wish I had some better tools to see how many active users we have across all the sites at a given moment,” I said. “That would be helpful when I want to sneak in a quick change during the day.”

“Well what do you currently do to check?” he asked.

“I just pop open the latest IIS log file, jump to the bottom and see if the most recent entries are from my once-a-minute WhatsUp Gold site monitoring. If the last few entries are from WhatsUp then I know we’ve been idle that many minutes.”

He nodded and we moved on to another issue, which revolved around some memory-related problems.

I commented, “It seems like this main w3wp process never shrinks. It just keeps growing its memory usage. How weird. Come to think of it, I’ve never seen a Windows Event about it shutting down or spinning back up…”

Hans just gave me the look and said, “Didn’t you mention your monitoring process hits that site every minute?”

#facepalm#

“Oh yeah… I guess it’ll never hit that 2-hour timeout, huh?” Don’t laugh… If you poll the site every minute, don’t expect it to ever go idle!

Today I spent some time fixing that. I’m not sure what the best practices are but I have an approach that seems reasonable.

First, I created a new site with just one page (ping.html). Next, I created a new App Pool called Monitoring, set up just like the Default, except that instead of an idle timeout I configured it to recycle itself at 1:00 AM nightly. Then I converted that new site to an IIS application using that new App Pool.

IIS App Pool settings

My maint site's IIS app settings

I changed the WhatsUp monitor to use a custom HTTP Content monitor pointed at the new site. Now it tests for content from the ping.html page instead of just seeing if something responds on port 80, so this is probably even a bit better than it was before.
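The idea behind an HTTP Content monitor is simple: fetch the page and confirm expected text appears in the body, rather than just checking that the port answers. Here's a minimal sketch of that logic in Python. This is not WhatsUp itself, just an illustration; the function name and parameters are my own.

```python
# A minimal content check in the spirit of an HTTP Content monitor:
# fetch a page and verify that the expected text appears in the body.
from urllib.request import urlopen

def content_check(url: str, expected: str, timeout: float = 5.0) -> bool:
    """Return True only if the page loads and contains the expected text."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            body = resp.read().decode("utf-8", errors="replace")
    except OSError:
        # Connection refused, timeout, DNS failure, HTTP error, etc.
        return False
    return expected in body
```

A check like this fails both when the server is down and when the app is up but serving the wrong content, which is exactly why it beats a plain port-80 probe.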

This brought up another small issue though.

Wait! How do I know which w3wp process ties to which App Pool?

Now I have more App Pools all running as the same user. How can I quickly tell which process goes to which pool? Easy!

This picture lays it out:

App Pools and w3p processes

On the IIS server bring up a command prompt, navigate to the system32 directory and run:

cscript iisapp.vbs

The output lists the process ID (PID) and App Pool name for each w3wp.exe process. Problem solved.

A Quick Look at Log Parser

18 Mar 2010

This morning I was alerted to the fact that one of our clients was having problems with our application last night. Unfortunately, none of their end users had captured any of the error messages. I checked the SQL and Windows Event logs but found nothing helpful. Then I had a look into the IIS web service logs and found some clues.

Working with IIS logs is a bummer. They’re big, unwieldy text files and just a pain to work with. I asked on Twitter for suggestions. Jjakubowski replied with a link to Log Parser so I went to have a look. As soon as I saw the page I recalled that I’ve actually used this free Microsoft utility in the past (man, my memory…). I recall that I wasn’t entirely comfortable with it at the time but decided to dig a little deeper.

Long story short: This thing rocks. It can parse and process just about anything:

Log parser is a powerful, versatile tool that provides universal query access to text-based data such as log files, XML files and CSV files, as well as key data sources on the Windows® operating system such as the Event Log, the Registry, the file system, and Active Directory®. You tell Log Parser what information you need and how you want it processed. The results of your query can be custom-formatted in text based output, or they can be persisted to more specialty targets like SQL, SYSLOG, or a chart.

I rolled up my sleeves, wrote some SQL and after a bit of experimentation had some great queries built that really helped show where the issues were.

I’m not going to attempt to build a tutorial on how to use this beast, but I’ll share a few links and tips that some might find useful.

Tip 1: There’s a help file (.chm) included in the download. Don’t ignore it – it is full of useful information and examples. Seriously. Check the help file before hitting your web search engine of choice. Among other things you can quickly find the column headings (and descriptions) for the type of file you’re working on.

Tip 2: You don’t have to formally install Log Parser on all your machines. Once you have it on one machine you can copy LogParser.exe around. For instance, I copied it up to one of my network shares so that I can easily run it from any of the web servers without having to install anything (I keep some useful .sql files in that same share).

Tip 3: IIS dates are in UTC, which can make ‘em a bit of a pain when looking for time-based events. My brain gets tired of mentally translating every timestamp I review…
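The translation that Log Parser's TO_LOCALTIME does for you (see the query below) can be pictured in plain Python. This is a generic illustration, not part of Log Parser; the Eastern offset is just an example and ignores DST:

```python
from datetime import datetime, timezone, timedelta

def iis_utc_to_local(date_s: str, time_s: str, local_tz: timezone) -> datetime:
    """Combine IIS W3C 'date' and 'time' fields (which are UTC)
    and convert the result to a local timezone for easier reading."""
    utc = datetime.strptime(f"{date_s} {time_s}", "%Y-%m-%d %H:%M:%S")
    return utc.replace(tzinfo=timezone.utc).astimezone(local_tz)

# Example: translate a log stamp into UTC-5 (US Eastern, ignoring DST)
eastern = timezone(timedelta(hours=-5))
local = iis_utc_to_local("2010-03-08", "14:30:00", eastern)
print(local.strftime("%Y-%m-%d %H:%M:%S"))  # 2010-03-08 09:30:00
```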

Check out the date handling in this query, designed to show errors from the ASP pages:

SELECT  LogFilename, LogRow
, TO_DATE(TO_LOCALTIME(TO_TIMESTAMP(date, time))) as date
, TO_TIME(TO_LOCALTIME(TO_TIMESTAMP(date, time))) as time
, cs-method, cs-uri-stem, cs-uri-query, cs(User-Agent)
FROM <1>
WHERE (sc-status = 500) AND (cs-uri-stem LIKE '%.asp')

Cool, huh? (Cheerfully lifted those functions from this handy little presentation)

And speaking of dates: Suppose you’re looking for recent events in your IIS logs. While you could use some date logic in your query’s WHERE clause, Log Parser will still have to chug through all your log files. That could be a lot of unnecessary processing. Speed things up by using the -minDateMod parameter to specify the minimum file last-modified date (local time) to look at.
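What -minDateMod buys you is easy to picture: skip any log file whose last-modified time predates the cutoff, before parsing a single line. A rough Python equivalent of that pre-filter (the function and directory names here are my own, not Log Parser's):

```python
import os
from datetime import datetime

def recent_logs(log_dir: str, cutoff: datetime):
    """Yield paths of files modified at or after the cutoff (local time),
    mirroring the file-skipping that -minDateMod does before parsing."""
    for entry in os.scandir(log_dir):
        if entry.is_file():
            mtime = datetime.fromtimestamp(entry.stat().st_mtime)
            if mtime >= cutoff:
                yield entry.path
```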

So let’s tie that all together. Save that example query up there to a file called ASPerrors.sql. Now run something like this to query everything, but only from files modified after midnight on March 8, 2010:

LogParser file:ASPerrors.sql -i:IISW3C -minDateMod:"2010-03-08 00:00:00" -o:datagrid

Tip 4: Check out the datagrid. See that last -o parameter in the example above? That sends the output of your query to a nifty little grid window. This is invaluable while tweaking your query or just doing research. It’s faster and easier than sorting things out at the command prompt or capturing output in text files. Once you have the query nailed down you can change the -o parm to a more appropriate format if desired.

Bonus tip: Click “All rows” when the datagrid shows up after running your query.

Tip 5: Take some time and look at examples on the web. There are tons and you’ll get some great ideas!

Log Parser initially looks a little humble and archaic due to its command-line nature, but dig into it a bit and you’ll be amazed by its power and potential.

Oh, I should mention that I looked into some GUI apps to help ease using Log Parser but didn’t really hit the jackpot. First I tried Visual Logparser but couldn’t even get it to start on my Win7x64 machine. Next I tried Log Parser Lizard. This one shows some real potential, but every now and then it would go to 100% CPU while running a query and never come back. I can’t deny that my query may have been flawed, but I’d rather just get an error :-)

My Web App Isn’t a Spammer

11 Jan 2010

At work we build and host web based applications. As part of that, our applications generate email. We don’t get too fancy with sending email; we just shoot it out using the Windows Internet Information Services (IIS) SMTP service. Traditionally we never really configured it, we just turned it loose.

As the world becomes more and more spam conscious, we spend more time suggesting to our clients that they might have to keep an eye on their spam folders just in case our emails aren’t showing up. Heck, sometimes emails sent to ourselves, as part of testing, don’t show up either. That’s not really an ideal approach and I finally sat down late last week to fix things up a bit.

Show Original in Google Mail

First I had our server send me an email to my Google Apps account and checked out the “original” version of the mail. This is a nice feature of Google Mail that a lot of folks haven’t noticed, but it comes in handy for stuff like this:

Two of the lines in the header tell a grim story:

Received-SPF: softfail (google.com: best guess record for domain of transitioning noreply@mycompany.com does not designate 5.79.185.165 as permitted sender) client-ip=5.79.185.165;

Authentication-Results: mx.google.com; spf=softfail (google.com: best guess record for domain of transitioning noreply@mycompany.com does not designate 5.79.185.165 as permitted sender) smtp.mail=noreply@mycompany.com

Clearly I have things to fix.

Sender Policy Framework

First I tackled SPF – for some reason I had no SPF DNS records defined. SPF stands for Sender Policy Framework and you can read all about it over at Wikipedia. The short version is that it is a way for administrators to define which servers are allowed to send email for their domains. Mail servers then look for these SPF records to help determine if mail received is spam or legit.

I found a great SPF Wizard over at openspf.org and used it to determine what my record should look like. We’re a Google Apps customer so I had to make sure I included their servers as well as our assorted app servers. I specified some by IP, some by domain name, included the Google entry and got a record that looks roughly like this:

v=spf1 ip4:5.79.185.165 a a:demo.mycompany.com include:aspmx.googlemail.com ~all

I then popped over to my registrar and added that as a DNS TXT record. Actually, I did it twice: once for just the domain name and once for the wildcarded subdomains. Gotta cover my bases for the next step.
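Before publishing a record like the one above, it's worth sanity-checking that it's well-formed and contains the mechanisms you intended. Here's a quick sketch that just tokenizes the record; it is not a real SPF evaluator (that involves DNS lookups, qualifiers, and a 10-lookup limit), and the record string is simply the example from above:

```python
def parse_spf(record: str) -> list[str]:
    """Split an SPF TXT record into its mechanism list, verifying
    the required v=spf1 version tag comes first. Tokenizing only --
    real SPF evaluation is far more involved."""
    parts = record.split()
    if not parts or parts[0] != "v=spf1":
        raise ValueError("not an SPF record")
    return parts[1:]

mechs = parse_spf(
    "v=spf1 ip4:5.79.185.165 a a:demo.mycompany.com "
    "include:aspmx.googlemail.com ~all"
)
print(mechs)
```

A quick eyeball of the output confirms the app server IP, the A-record mechanisms, the Google include, and the trailing ~all softfail qualifier are all present.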

Reverse DNS and IIS

OK, next I tackled reverse DNS. I wanted to make sure the application mail server identified itself by a name that would match what other mail servers see when they look up its IP address. There are two pieces to this: the SMTP server’s greeting name and the actual reverse DNS configuration.

By default, the IIS SMTP server gives out its internal FQDN (fully qualified domain name) – in other words, something like webserver.ad.companyname.com. For me, this is never the same as the external DNS name – for instance, application.companyname.com. Fixing this in IIS SMTP had stumped me for the longest time… but it turns out it is pretty easy to fix.

If you want to see what yours is doing, just telnet to port 25 of the mail server and check the greeting. The top of the banner will show the machine name. You can hit Enter a few times and then HELO and have a conversation, or just type “quit” to be done.
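If you'd rather script that telnet check than do it by hand, the same thing can be sketched with a plain socket. The host name below is illustrative, and this only grabs the 220 greeting line before politely quitting:

```python
import socket

def smtp_greeting(host: str, port: int = 25, timeout: float = 5.0) -> str:
    """Connect to an SMTP server and return its 220 greeting banner --
    the same line you'd see by hand with 'telnet host 25'."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        banner = s.recv(1024).decode("ascii", errors="replace").strip()
        s.sendall(b"QUIT\r\n")  # be polite; don't leave the session hanging
        return banner

# Example (hypothetical host):
#   smtp_greeting("application.companyname.com")
```

The name right after the 220 code is what you want to compare against your external DNS and reverse DNS entries.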

To change the name, go into IIS Manager, right-click on the SMTP node and click Properties. Go to the Delivery tab and click “Advanced…” at the bottom. As mentioned, by default the FQDN field contains the machine’s internal name. I initially tried setting a value in the “Masquerade domain” field but that didn’t seem to change anything (I guess I should research that…). I took a deep breath and just changed the FQDN field to the server’s external name. Fortunately, all heck did not break loose and mail continued to get processed. A quick telnet test showed the right name.

Next I contacted our co-location provider and asked them to set up the reverse DNS for my server’s IP. 10 minutes later the server name matched the IP matched the reverse DNS. Fun!

Results?

Sent another test email to myself and the headers look a lot better now (compare to the ones above):

Received-SPF: pass (google.com: domain of noreply@mycompany.com designates 5.79.185.165 as permitted sender) client-ip=5.79.185.165;

Authentication-Results: mx.google.com; spf=pass (google.com: domain of noreply@mycompany.com designates 5.79.185.165 as permitted sender) smtp.mail=noreply@mycompany.com

Neat, huh? This isn’t a cure-all, but I think just these few changes will make a big difference in our mail getting through.

For testing, I found the reports at allaboutspam.com were incredibly helpful. I just set up a quick ASP page to send test mails from my application server to their test service and they’d bounce it back with a nice report link. Slick.