Wednesday, December 9, 2009

Capturing Output as String and Variable in Powershell

I'm embedding a posh shell into a C# app I'm writing, and I needed a way to capture the text output of the commands I run, as well as the results as an object array. This is what I came up with:

// needs: System.Collections.ObjectModel, System.Linq,
// System.Management.Automation, and System.Management.Automation.Runspaces
Runspace runspace = RunspaceFactory.CreateRunspace();
runspace.Open();

runspace.SessionStateProxy.SetVariable( "location", "c:\\" );
Pipeline pipeline = runspace.CreatePipeline();
pipeline.Commands.AddScript( "get-childitem $location" );

// this splits the output into the text and variable
Command cmd = new Command( "tee-object" );
// this is where the result array will end up
cmd.Parameters.Add( "variable", "invoke_results" );
pipeline.Commands.Add( cmd );
// forces the rest of the output to string
pipeline.Commands.Add( "out-string" );

Collection<PSObject> results = pipeline.Invoke();

var errors = pipeline.Error;

if( 0 < errors.Count )
{
 // there might be more. For now, we read the first
 var oneError = errors.Read( 1 );
 ConsoleColor formerConsoleColor = Console.ForegroundColor;
 Console.ForegroundColor = ConsoleColor.Red;
 Console.WriteLine( "Error: {0}", oneError[0] );
 Console.ForegroundColor = formerConsoleColor;
}
else
// hooray, no errors
{
 object[] resArray = (object[])runspace.SessionStateProxy.GetVariable( "invoke_results" );

 Func<List<String>, object, List<String>> nameAcc =
  ( n, o ) =>
  {
   n.Add( o.ToString() );
   return n;
  };

 List<String> names = resArray.AsEnumerable().Aggregate( new List<String>(), nameAcc );

 foreach( var name in names )
 {
  Console.WriteLine( name );
 }
}

runspace.Close();
This was much easier than trying to implement my own custom host.
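For reference, the pipeline the C# builds up is just this PowerShell one-liner - handy for sanity-checking from a console before embedding it (same variable names as above):

get-childitem $location | tee-object -Variable invoke_results | out-string

# $invoke_results now holds the object array,
# while out-string produces the formatted text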

Tuesday, December 8, 2009

Facebook Scalability Story

Here's a synopsis of a presentation given as part of the CNS lecture series, discussing Facebook's architecture.

Even though it is just an overview, it is pretty detailed, and gives interesting little factoids such as:
  • 600k photos/sec
  • php for the front end
  • 30K servers, in two datacenters
  • "Work fast and don't be afraid to break things."
There's lots of neat stuff in there.

Thursday, December 3, 2009

MS SQL Server Pivot Tables

Another one for the "don't lose me" pile...

MS SQL 2005 and onward have a PIVOT operator, which acts like an Excel pivot table.

Notes here.
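The general shape looks something like this - a sketch against a hypothetical Sales table with Year/Quarter/Amount columns, fed through sqlcmd (swap in your own server, database, and columns):

$sql = @"
SELECT Year, [Q1], [Q2], [Q3], [Q4]
FROM ( SELECT Year, Quarter, Amount FROM Sales ) AS src
PIVOT ( SUM(Amount) FOR Quarter IN ([Q1], [Q2], [Q3], [Q4]) ) AS pvt;
"@

# each distinct Quarter value becomes its own column, aggregated by SUM
sqlcmd -S myserver -d mydb -Q $sql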

Wednesday, November 25, 2009

Statistical Data Mining Tutorials

Just a little note to myself to not lose this link.

Tuesday, November 24, 2009

Listing User Last Logon with Powershell

This script assumes that you have enabled auditing of successful logons (it isn't enabled by default).

The general process it follows is:
  • retrieves the Security event log
  • pulls login information up to the last reboot
  • gets unique usernames and the time they logged in
  • writes it all out to a text file
It's still a little raw, but it works. It runs very slowly over the network; I'll work up one that uses the PSJob facilities (a rough sketch follows the script)...


function getLastBoot( $computername )
{
    $wmi = Get-WmiObject -Class Win32_OperatingSystem -ComputerName $computername
    return $wmi.ConvertToDateTime( $wmi.LastBootUpTime )
}

function getTopDates()
{
    $logins = @()
    $input | foreach {
        $rec = $_

        # this is ugly...I'm going thru the list twice
        $hasit = ($logins | where {($_.UserName -eq $rec.UserName) -and ($_.MachineName -eq $rec.MachineName)})

        if( $hasit )
        {
            for( $x = 0; $x -lt $logins.Count; $x++ )
            {
                if(($rec.UserName -eq $logins[ $x ].UserName ) -and
                   ($rec.MachineName -eq $logins[ $x ].MachineName ))
                {
                    $logins[ $x ] = $rec
                }
            }
        }
        else
        {
            $logins = $logins + @(,$rec)
        }
    }
    return $logins
}

$start_time = (Get-Date)
Write-Host "starting all $start_time"

$target_computers = @( "dal1mspwb16",
"dal1mspwb19",
"dal1mspwb36",
"dal1mspwb37",
"dal1mspwb12",
"dal1mspwb35")

# $target_computers = @( "dal1msdwb34" )

$target_computers | foreach {
    $target = $_
    $lastboot = getLastBoot( $target )

    Remove-Item "iis_logins_$target.txt" -ErrorAction SilentlyContinue

    Write-Host "processing $target :" (get-date)
    Get-EventLog -LogName "Security" -ComputerName $target -After $lastboot |
        select -Property UserName, MachineName, TimeGenerated -Unique |
        sort -Property TimeGenerated |
        getTopDates |
        Out-File -Append -FilePath "iis_logins_$target.txt"

    Write-Host "completed $target :" (get-date)
}

$end_time = (Get-Date)

Write-Host "complete"
Write-Host "Started: $start_time"
Write-Host "Finished: $end_time"
Write-Host ($end_time - $start_time)
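And here's the rough shape I expect the PSJob version to take - an untested sketch, assuming PowerShell 2.0's job cmdlets, skipping the getTopDates pass for brevity, and with an arbitrary output file name:

$jobs = $target_computers | foreach {
    Start-Job -ArgumentList $_ -ScriptBlock {
        param( $target )
        # each computer gets its own background job
        $wmi = Get-WmiObject -Class Win32_OperatingSystem -ComputerName $target
        $lastboot = $wmi.ConvertToDateTime( $wmi.LastBootUpTime )
        Get-EventLog -LogName "Security" -ComputerName $target -After $lastboot |
            select -Property UserName, MachineName, TimeGenerated -Unique |
            sort -Property TimeGenerated
    }
}

$jobs | Wait-Job | Receive-Job | Out-File -FilePath "iis_logins_all.txt"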

Thursday, November 19, 2009

Creating System Restore Points with Powershell

Getting ready to install a new video driver? What about that "interesting" piece of software you found?

Aren't you worried that it is going to screw up your computer?

Well, if so, Windows XP/Vista/7 have a facility known as "System Restore Points". Basically, these are snapshots of your system files and registry. There is a good chance they are already enabled, and being used by Windows Update.

What about those other times, though?

If you have Powershell 2.0 installed (whaddya mean, you don't? Get on it!), then you have a couple of commands to help you out:

Checkpoint-Computer creates a system restore point.

Restore-Computer reverts to the specified restore point.
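For example (the restore-point sequence number comes from Get-ComputerRestorePoint's output; the 233 below is made up):

Checkpoint-Computer -Description "before the sketchy video driver"

# later, if things went sideways:
Get-ComputerRestorePoint
# 233 is whatever sequence number Get-ComputerRestorePoint showed
Restore-Computer -RestorePoint 233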

So, before you install that problematic driver/update/app, this is a quick and dirty way to cover your butt.

Be warned, though: this rolls back system files, the registry, and installed programs to the checkpoint. Your personal documents are left alone, but any software or settings changes made since the restore point will be lost.

Wednesday, November 18, 2009

Tracing Internet Explorer

eJohn has this article discussing dynaTrace Ajax, a utility for IE6-8 performance tracing. Sounds like neat stuff:
Not only can you see the execution count for your defined JavaScript methods but you can also see execution time for the built-in DOM methods! Wondering what native method calls are slowing down your application? Wonder no more. From the HotSpot view you can filter by DOM or regular JavaScript and see exactly where execution time is going and what methods are so slow.

Worth a look.

Tuesday, November 17, 2009

25 Tips For Intermediate Git Users

Nice little list of stuff about git.

Read it here.

You Don't Know Jack About Software Maintenance

Communications of the ACM has this article up on their site.
Software maintenance involves moving an item away from its original state. It encompasses all activities associated with the process of changing software. That includes everything associated with "bug fixes," functional and performance enhancements, providing backward compatibility, updating its algorithm, covering up hardware errors, creating user-interface access methods, and other cosmetic changes.

In software, adding a six-lane automobile expressway to a railroad bridge is considered maintenance—and it would be particularly valuable if you could do it without stopping the train traffic.

The article asserts that this can be managed, because it has been managed in the past. However, it is pretty weak on the "how" - only that it can be done.

Thursday, November 12, 2009

Powershell v1.0, IIS6, and remote machines

I found this snippet on the web:


$computer="server"
$co = new-object System.Management.ConnectionOptions
#$co.Username="domain\username"
#$co.Password="password"
$co.Authentication=[System.Management.AuthenticationLevel]::PacketPrivacy
#$co.EnablePrivileges=$true;
$wmi = New-Object System.Management.ManagementObjectSearcher
$wmi.Query="Select * From IIsApplicationPool"
$wmi.Scope.Path="\\$computer\root\MicrosoftIISv2"
$wmi.Scope.Options=$co
$wmi.Get() | foreach { $_.name }
In PowerShell v2.0 there is a new parameter, -Authentication, to specify the authentication level (one line):

gwmi -class IIsApplicationPool -namespace "root\MicrosoftIISv2" -computer $computer -authentication PacketPrivacy | foreach { $_.name }

Live failover for Xen images

Remus promises to bring live failover to the Xen hypervisor. In other words, if a host system crashes, another machine picks up the load without any interruption to services.

From the Remus website:

Remus provides transparent, comprehensive high availability to ordinary virtual machines running on the Xen virtual machine monitor. It does this by maintaining a completely up-to-date copy of a running VM on a backup server, which automatically activates if the primary server fails. Key features:


  • The backup VM is an exact copy of the primary VM. When failure happens, it continues running on the backup host as if failure had never occurred.

  • The backup is completely up-to-date. Even active TCP sessions are maintained without interruption.

  • Protection is transparent. Existing guests can be protected without modifying them in any way.


This is neat because the only thing that I know of that provides this functionality is VMware, which costs big bucks (and still holds the tools advantage). Xen is free, and available for Linux and OpenSolaris.

Thursday, November 5, 2009

Google Chrome Beta 4

I installed Google Chrome today, since I saw that the latest beta was out. I'm not much of a browser fanboy, so this was my first spin with it.

It's fast.

Very fast.

As in, it seems like I went from DSL to fiber.

I haven't run into any rendering issues, but I haven't tried banking yet, either. It doesn't have nearly the plethora of plugins that Firefox does. For day to day browsing, though, this thing is looking pretty good.

Tuesday, November 3, 2009

Open Data Kit

The University of Washington's "Change" project has just announced a collection of tools to simplify the development of data-collection applications. Called the "Open Data Kit", the primary client target is the Android operating system (no surprise, since Google was heavily involved).

Anyway, Change hopes to help out the third world with these tools, as cellular connectivity is often the only connectivity. That's a noble cause, but it is sure to have less-altruistic uses, as well. All open-source, under the Apache license.

Here's their rundown of the goodies:

ODK Collect is a powerful phone-based replacement for your paper forms. Collect is built on the Android platform and can collect a variety of form data types: text, location, photos, video, audio, and barcodes.

ODK Aggregate provides a ready-to-deploy online repository to store, view and export collected data. Aggregate is currently implemented on Google App Engine and enables free hosting of data on Google's reliable infrastructure.

ODK Manage maintains a database of all phones in a deployment to enable remote device management. By sending an SMS to a deployed phone, Manage can trigger the transfers of forms, data, and applications.

ODK Validate ensures that you have an OpenRosa-compliant form -- one that will also work with all the ODK tools.

ODK Voice facilitates mapping XForms to sound snippets that can be played over a "robo" call to any phone. Responses are collected using the phone's keypad (DTMF) and are automatically aggregated.

Wednesday, October 28, 2009

JVM on Xen, from Sun

This could be neat.

Sun has a project where they are trying to run a JVM directly on a hypervisor, without the normal OS in-between.

From their project overview:

Project Guest VM is an implementation of the Java platform in Java and hosted directly on the Xen hypervisor, that is, without the traditional operating system layer. It is based on the Maxine Virtual Machine which is itself written in Java. The result is a Java platform software stack that is all Java except for a very thin microkernel layer that interfaces to Xen.

Briefly, the goals for Guest VM are:

* Exploit access to low-level features, particularly memory management and thread scheduling.

* Enable specialization and optimization of the entire software stack.

* Simplify administration by extending the Java platform to replace the OS.

* Increase developer productivity through the use of modern IDEs.

More information, including instructions for getting the source code, can be found at http://research.sun.com/projects/guestvm.


If it is a single JVM per dom-U, then this could make resource provisioning a snap.

IronScheme Hits RC1

Given my rediscovered enjoyment of parenthesis-based languages, I noted that IronScheme, an R6RS implementation on MS' CLR, has hit RC1.

Although, really, if I were to choose a functional language for the CLR, it would be F#. Fully supported by MS, decent IDE support...unless there is something particularly compelling about IronScheme, F# would be the one to choose.

Tuesday, October 27, 2009

Eclipse and Clojure Unit Testing

So far, I'm enjoying my little Clojure projects. The biggest weakness is the IDE support - NetBeans, specifically. I haven't bothered with the emacs setup, and probably won't. I'm sure there are things about it that work better, but I'm way too spoiled.

To be fair, the NetBeans plugin is in alpha state. It should get better - this is just a snapshot at this point in time.

As I've worked on this intermediate app, I've had occasion to go back and modify some of the java code I'd already written. Given all the refactoring and testing handholding which NetBeans provides, making those changes was pretty easy.

Here's the world in which I'm living: To get unit tests even somewhat integrated, I call and organize them from main.

(ns some-ns.main
  (:gen-class)
  (:use clojure.contrib.test-is))

(defn -main []
  (run-tests
    'some-ns.someclass
    'some-ns.otherclass))

Because of the IDE integration differences, I can't say that I'm more productive in Clojure. A huge difference is highlighting syntax errors. In Java, they pop up immediately and are easily dealt with. In Clojure, I have to compile before I find out I fat-fingered something.

Debugging is similar. No breakpoints and horrific stack-traces. A good chunk of my code is devoted to logging (which is good and all, but c'mon).

I'm sure things will improve over time. Like I said, the plug-in is alpha. If the language gains much momentum, I expect to see improvements. I have the feeling, though, that it won't gain enough to match the overall ease of Java (I can't believe I just typed that).

Overall, I'm enjoying the experience. I am accomplishing a lot, pretty quickly, and it seems like there is a lot less jumping around trying to keep all the different parts working together.

Friday, October 16, 2009

Beginning Clojure Macros - Part 2

Last time, we went over what macros were, and their parts. I also gave a pretty poor example of how to use them. I'll be taking that same code and being a little more efficient about it.

(defn parse-ip [ip current]
  (if
    ;;; probably should use the network lib to be as accurate as possible...
    ;;; but I won't
    (re-matches #"[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+" ip)
    (inc-summary-counter current ip)
    current))

(defn parse-datetime [ datetime current ]
  (if
    ;;; since we're splitting on spaces, we get back "[19/Jan/2006:04:30:26"
    (re-matches #"\[.{20}" datetime)
    (inc-summary-counter current (subs datetime 1 12))
    current))
There's a lot of repetition there. It's a simple if statement, but it will do for this example.


(defmacro parse-field [ valid-expr update-expr skip-expr ]
  `(if
     ~valid-expr
     ~update-expr
     ~skip-expr))

Awesome! We just re-created if! Let's use it...


(let [[chash ctext] (get-field results :path-summary 6 nline)]
  (parse-field
    (re-matches #"/.+" ctext)
    (inc-summary-counter chash ctext)
    chash))

At least that's more self-explanatory than a commented if-statement. Let's sort-of expand this.


(let [[chash ctext] (get-field results :path-summary 6 nline)]
  ;;; replacement of "parse-field" begins here
  (if
    ;;; ~valid-expr
    (re-matches #"/.+" ctext)
    ;;; ~update-expr
    (inc-summary-counter chash ctext)
    ;;; ~skip-expr
    chash))

Yep. That's it. Effectively, the only thing that changed was parse-field into if, right? Yes, but rather than evaluate the expressions, then pass them, we're passing the expressions themselves. They don't get evaluated for their values until they are required. Clojure already does things quite lazily, and caches a lot, but it can't anticipate everything, such as a long-running database call. It sometimes is just better to build in assurances yourself, anyway.

Why not a function?


Another way to achieve the same thing would've been to pass anonymous functions to another function. This would achieve the deferral of expensive evaluations, encapsulation, expressiveness, etc., as well, thanks to the power of closures:

(defn parse-field-fn [ current-hash current-text valid-fn update-fn ]
  (if
    (valid-fn current-text)
    (update-fn current-hash current-text)
    current-hash))

(let [[chash ctext] (get-field results :path-summary 6 nline)]
  (parse-field-fn
    chash
    ctext
    #(re-matches #"/.+" %)
    #(inc-summary-counter %1 %2)))


When should I use macros instead of functions?


You should look for opportunities any time you're writing a function to replace structure. I only used a single if statement, but it could have had several. Another good opportunity is to replace a complicated function. If you have multiple nested if statements, conds, whatever, the main code can be easier to read by using a macro.

In all, though, learning macros - and any metaprogramming techniques - is worthwhile. It gives you greater control over assembling the various bits and pieces which make up the program.

Beginning Clojure Macros - Part 1

I finally got around to learning the macro system in Clojure. When I'm picking up a new language, one of the first things I do is write an app which parses log files. I keep some around, just for that. There's a lot on the web about learning Clojure, but not as much about its macro system, not to mention macros, in general.

What's a macro?


According to this tutorial on which I've been leaning heavily:
Macros are used to add new constructs to the language. They are code that generates code at read-time.

To further oversimplify, macros are some really smart text substitution. Sort of.

You use macros to re-use code structure. You probably already do this with generic algorithms as part of your abstraction, but macros let you generate those algorithms dynamically. Applied appropriately, they result in cleaner code.

What is macro expansion?


When the compiler takes your source code, the first thing it does is look for macros. It "expands" each macro call, replacing it with the body of its defmacro.

Consider the text "The quick brown fox jumps over the lazy dog". There's a structure to that sentence: "The <noun> jumps over the lazy dog" (and more that we could abstract away, but we're keeping it simple here). If our macro is "<noun>", and we've defined it as "gazelle", then the resulting sentence would be "The gazelle jumps over the lazy dog".

Simple enough, right?

An actual code example



Log files have a bunch of different fields, and they all have their little peculiarities to deal with. To that end, I've got a couple of filter-ish functions: parse-ip and parse-datetime.

The two share a lot of similarities. They both check to see if the data is valid, and if it is, then update the current data and return the results, otherwise return the original results. Inside of the update, we update the map, either creating a new entry with a value of "1", or incrementing the existing entry.

I've added a bunch of comments to help explain the various pieces of a macro.

(defn parse-ip [ip current]
  ;;; accepts the IP address field as a string
  ;;; the current hashmap is updated and returned

  ;;; is it actually an ip address?
  (if
    ;;; probably should use the network lib to be as accurate as possible...
    ;;; but I won't
    (re-matches #"[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+" ip)

    ;;; increment the count
    (assoc current ip
      (if (contains? current ip)
        (+ 1 (get current ip))
        1 ))

    ;;; not an ip address
    current))

(defn parse-datetime [ datetime current ]
  (if
    ;;; since we're splitting on spaces, we get back "[19/Jan/2006:04:30:26"
    (re-matches #"\[.{20}" datetime)

    (let [ dt (subs datetime 1 12) ]
      (assoc current dt
        (if (contains? current dt)
          (+ 1 (get current dt))
          1 )))
    current))


We'll take care of the hash update part, first, and replace that whole assoc form with something else.

;;; our macro takes two parameters, like a function
(defmacro inc-summary-counter [ hset nkey ]
;;; see the little "`" at the beginning?
;;; that means "don't evaluate anything, just return the text"
`(let
;;; there's two "weird" things in this let form
;;; the "#" after the first "hset"
;;; and the "~" before the second "hset"
;;;
;;; the suffix "#" means "generate a symbol for this",
;;; essentially a name which is guaranteed unique to this macro expansion
;;;
;;; the prefix "~" means "expand this passed variable"
;;; "~hset" will be replaced with the text of "hset"
;;;
;;; the reason for this particular little trick is to make sure that "hset"
;;; is evaluated only once, and it's result kept in a binding
[ hset# ~hset
nkey# ~nkey ]

;;; all of this will be returned as-is
(assoc hset# nkey#
(if (contains? hset# nkey#)
(+ 1 (get hset# nkey#))
1))))


Well, that was a lot of work to "save time", wasn't it? Here's our updated filter code, now using inc-summary-counter:

(defn parse-ip [ip current]
  (if
    ;;; probably should use the network lib to be as accurate as possible...
    ;;; but I won't
    (re-matches #"[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+" ip)
    (inc-summary-counter current ip)
    current))

(defn parse-datetime [ datetime current ]
  (if
    ;;; since we're splitting on spaces, we get back "[19/Jan/2006:04:30:26"
    (re-matches #"\[.{20}" datetime)
    (inc-summary-counter current (subs datetime 1 12))
    current))

Much prettier, wouldn't you say? Nothing you couldn't do with a function, but it is what is happening that is important.

Everywhere you see inc-summary-counter and its parameters, that expression is replaced with the body of its defmacro.

Here's parse-ip again, with the macro pseudo-expanded.

(defn parse-ip [ip current]
  (if
    ;;; probably should use the network lib to be as accurate as possible...
    ;;; but I won't
    (re-matches #"[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+" ip)
    ;;; inc-summary-counter was here
    (let
      ;;; the ~hset and ~nkey have been replaced
      [ hset# current
        nkey# ip ]
      (assoc hset# nkey#
        (if (contains? hset# nkey#)
          (+ 1 (get hset# nkey#))
          1)))
    current))

Remember that macro-expansion happens before compilation? So, after the macros are expanded, the above code is what is passed to the compiler. I'll have a more powerful example next time.

Wednesday, October 14, 2009

The Four Quadrants of Technical Debt

Martin Fowler has a piece breaking down "Technical Debt", i.e., shortcuts you take now will have to be "paid back" in the future.

The argument was made that some Technical Debt was not only inevitable, it was desirable. If taking on that debt meant making a ship date, then that debt was worthwhile.

The debt metaphor reminds us about the choices we can make with design flaws. The prudent debt to reach a release may not be worth paying down if the interest payments are sufficiently small - such as if it were in a rarely touched part of the code-base.

He also provides a nice graphic breakdown, worth reading. Check it out.

Monday, October 12, 2009

Using RUNAS for SQL Management Studio

Ran across this today, and just don't want to lose it.

The short version: if you need to connect to a Windows SQL server in a different domain, the runas command has a /netonly switch.

runas /netonly /user:domain\username "C:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Ssms.exe"

Neat, huh?

Friday, October 9, 2009

Exploding Software Myths - MS Research

Over the last few years, Microsoft has been putting real study into development processes and techniques. Which makes sense, since they've got enough development teams to be able to do (mostly) controlled experiments.

Some of their findings:
  • TDD: "code that was 60 to 90 percent better in terms of defect density...took longer to complete their projects—15 to 35 percent longer"
  • Assertions: "definite negative correlation: more assertions and code verifications means fewer bugs. Looking behind the straight statistical evidence, they also found a contextual variable: experience."
  • Organizational Structure: "Organizational metrics, which are not related to the code, can predict software failure-proneness with a precision and recall of 85 percent"
  • Remote workers: "the differences were statistically negligible" for distributed development

“I feel that we’ve closed the loop,” Nagappan says. “It started with Conway’s Law, which Brooks cited in The Mythical Man-Month; now, we can show that, yes, the design of the organization building the software system is as crucial as the system itself.”

Awesome.

Thursday, October 8, 2009

Considering Clojure

I've been looking at clojure for a while now. I liked lisp, back in the day, but never got particularly good at it. Since then, I've done some minor projects in Scheme (Chicken Scheme, to be specific). The syntax and programming styles really "did it" for me. Problems just seemed simpler to solve.

When I was working on an earlier revision of this project in .NET, I passed up on F#. I used it for some small test apps, liked it a lot, but decided against it. The main reason is that nobody else is using it. This might change with its inclusion in VS.NET 2010, but I'm not going to hold my breath. Besides, C# has a lot of functional-like syntax these days, so while I may miss some of the F# sugar (pattern matching, for example), I don't feel all that hemmed in with C#.

This next chunk of code I have to write for my little indexing project is discrete from the rest of the project. If clojure doesn't take off as more than an interesting niche language, I could easily find myself replacing it with bog-standard Java.

'Cause I have to admit it: Java sucks. It isn't that it is hard, it's that it is a pain in the ass. In some ways, I preferred programming in C - I spent a lot less time working, it seems. Maybe because I did so much in C; I don't know. I do know that just about every language I've tried since then (except PIC) has been less of a hassle.

I do like all the JVM application containers, though. To me, that's the real winner for Java.

Which leaves me with the whole "changing horses in midstream" problem. I really should write the whole thing in one language. You wouldn't think that would be too much to ask, would you? I certainly wouldn't hesitate to ask it of someone else.

So, I'll probably keep poking around through the tutorial. I may end up with working code which I end up using for the next bit. At the very least, I'll have a good idea as to whether or not this was a good idea.

LiquiBase - Database Version Control

If you're a fan of Ruby on Rails, then you probably know about migrations (and what a pain they can be).

Well, LiquiBase promises to bring the same benefits (and headaches) of Rails-style migrations to other development stacks. I haven't had a chance to check it out yet, but I will update when I do.

Thursday, September 24, 2009

The Importance of Other Stuff

There haven't been a lot of updates around here, lately, and for that I apologize.

Ya see, I've been busy. I won't go into the specifics. I'll only say that I haven't been poking at computers that much this month because I've been doing Other Stuff.

Given the fast and ever-changing nature of this business, it can be very easy to get caught up trying to keep up with everything. There's always a new technology, a new product, or even just a new version of existing products. There's also all of the tech that one doesn't know anything about, but would be very useful to know.

That's just it, though. Nobody can keep up with it all; there's just too much. A long time ago (get off my lawn!), I concluded that I would commit as little as possible to memory. I focus on the overall patterns, but the specifics tend to be ephemeral, obsolete before you ever get to use them again. If I do memorize something, it is because I use it all of the time. The most important thing I've learned is how to find the answer quickly, even when I've already answered that particular question. (Putting it in a blog doesn't hurt, either.)

There's just so much more to the world. It is a shame to miss it.

Well hey, maybe you're young, and hungry, and this is all you want to do. Okay, that's your call, but you're limiting yourself in the long run. Not just in a "there's more to life" kind of way, but professionally.

You see, all of this tech is about people. Not as "lusers", but as people. The most successful technology isn't the "cool" technology, it's the tools which help people do what people have always done: talk to other people.

They say that a business is about the people. That isn't entirely true. If it were, then we'd still have huge secretarial pools. It is about the interactions between those people. Outside of improving the overall process, the boss isn't interested in what tools you used to get those numbers, only that you got those numbers and they are correct.

There's an attitude among too many of the people in this field that users are stupid. Worse still are the ones who believe that their success in this field translates to success in other fields. The one commonality among them is that they don't do anything else. They don't leave their comfort zones, they forget what it is like to be the least knowledgeable person in the room.

That's why it is important to do Other Stuff.

Thursday, September 3, 2009

Pash - PowerShell for Unix

Regular readers of this blog - both of you (Hi Mom, and that guy in Australia who subscribed to the RSS) - know that I loves me some MS PowerShell. I've called it a "game changer", because it greatly simplifies Windows administration. It is good enough that I've abandoned cygwin on my Windows systems; PowerShell is better than bash.

Well, somebody has released a version of PowerShell that runs on *nix systems: Pash. It builds on Mono, so there is a huge library of objects for it to work on. I don't know how well it works, yet, but I'm certainly going to be trying it.

Monday, August 31, 2009

Where's That File?

This post starts out with a reading of another blog, but it isn't outright babble. It's about what I'm working on.

The author of this article claims "you have to think of content entirely abstractly". While there is some exposition as to what it should look like, it is very vague: "your system should be capable of managing any kind of content."

Fair enough, but how?

Well, that's what I've been working on. I think that the various types of data are best handled by programs specifically designed to handle that data. What we as users need is an easy way to find it.

The current solutions tend to involve centralization, synchronization, and search. You're supposed to keep all the important data centralized, if you need to organize it your own way then you synchronize it, and if you're looking for something you search for it.

Which is great, except that users don't do this, because it all sucks.

If I download a file from the internet, that file exists in two places which I can get to: my download folder, and the original link. If I copy it up to a CMS, now it is in three places. If that CMS is backed up, it exists in four places. Copy it to a thumb drive? Now I'm up to five.

Five copies of the same file, in locations which are all equally valid, and all have their strengths and weaknesses. Between them, the data is unlikely to be completely irretrievable.

Now, as a user, all I want to know is "where's that file?" (thus the name of the project)

The author of the original article was correct in that the only important thing is the metadata. What he doesn't seem to get is that the metadata is the only content which needs to be managed.

Currently, the problem I'm solving is strictly a question of duplicate files on the network. I have files that I know must be backed up, but I don't know where all of those copies are. I don't want too many copies, because storage costs are on a rising curve: Each additional terabyte costs more than the previous terabyte.

Turns out, solving this problem isn't easy (my first naive implementations didn't scale), and a whole bunch of the work can be extended to other storage sources.

Having that, though, the next obvious step is to include personal metadata (tags, descriptions) to the files. You have to collect and index metadata, anyway (file name, size, etc.), so why not add user metadata, too?

What I'd expect to see at that point is a UI which reflects the various metadata. If I'm looking for my resume, I should be able to not only find "resume.doc", I should see every copy of "resume.doc" the system knows about, even if I can't get to them. I'd prefer that the "nearest" one be highlighted in some way, things like that.

What I'd like to do after that (as if I didn't want to do enough), is assign rules to various tags. If I label something with "important", then it should be included in a special backup/sync/whatever. Again, this isn't something that will be particularly difficult, but will require effort.

Well, that's cool, but what about other storage sources? Those are a bit harder, and generally specific to that storage (email, for example). However, things like links to articles and downloads are pretty straightforward, and shouldn't be too hard to include.

Where am I now?

Heh. I mentioned that looking for duplicate files is harder than I thought it would be. I'm actually on my third try. The first one was when I thought "I can do this with a script"; the second was with .NET, where I aimed bigger, but still not nearly big enough.

So, I've just completed the work on the file crawler, and the next bit is submitting the crawl results to the index. I've done this part before, and I don't expect it to be particularly hard, but I have to find the time for it. After that, something resembling a UI (I am trying to solve a problem), then put the whole thing out there with a big fat "alpha" disclaimer (probably Apache license, since I'm using so much of their stuff).

And that's what I'm doing, and where I'm at.

IT, Users, and Communication

I was going to let this article slide, and not get all meta-bloggy about it, but the rebuttal really tweaked me. It's all about what users get to install on their work machines, and IT's reaction.

They both miss the point, I think.

Mr. Manjoo related a story about Firefox, and the crowd cheered. If there was really that much demand for it, then it was a failure on the IT department's part: they either didn't know it was wanted, or they knew and didn't clearly acknowledge it. There are plenty of good reasons not to upgrade.

What Mr. Manjoo missed is that there are tradeoffs to the freedom to install whatever you want, most of them related to support. A lot of IT policy is driven by how much they have to provide that support. Less money means coarser support - heavily locked down machines, aggressive re-imaging, or similar. Things that don't require a lot of people time.

The confirmation bias that both articles triggered in me, though, was that it clearly showed that in neither case is the IT department and the users communicating.

Good IT is hard, not just because of the technology involved, but because you have to make long term decisions which will permit you to react to users ever-changing needs and wants.

Remember, we're here for them, not the other way around. When I walk into a shop that doesn't live that attitude, I know I'll find a lot of problems.

Wednesday, August 12, 2009

Digital Sharecropping? Hah!

Jeff Atwood of CodingHorror seems to have a problem with user-generated content. He calls it "digital sharecropping". He includes a black and white photo of black people working a dusty field, just in case you didn't get the reference.

The gist of his analogy goes like this:
  • Users put their own work into building their particular segment of a much larger site.
  • The much larger site puts ads next to the work, and reaps profits.
  • The user receives nothing in return.
It's that last part that isn't true. The site provides a cheap and easy means of publishing on the internet - much easier than doing it all yourself. This particular generation differentiates itself from GeoCities, et. al., by providing additional tools for tracking related users and topics.

I think few of the people who publish on these sites are unaware that the hoster is trying to make money off of their work. At the beginning of his article, he repeats a story about a woman who contributes to a site. She calls it a "labor of love".

I think she knows exactly what she is doing. It's a hobby, it keeps her busy, and satisfied. What is so difficult to understand about that?

I don't begrudge venues the opportunity to make a profit for providing a comfortable environment. I know of few people who do (you dirty smelly hippie commies!). To be honest, I'm glad that Mr. Atwood at least thinks about the topic, but really: it isn't all that big a deal.

Monday, August 10, 2009

Java's Lots of Little Files

I ran into this article about "Next Generation Java Programming Style", at YCombinator. There was some interesting discussion about the overall effectiveness of these suggestions.

Part of the discussion involved commenter Xixi asking if anyone had followed point #5, "Use many, many objects with many interfaces". It turns out, I've been following that model. I started to reply there, but I recognized a blog post after a bit.

Here's my general workflow, and I've found it to be quite effective.

The linked-to article refers to interfaces as "roles", and that's probably the easiest way to think of them.

If I have a few related items which need to be handled, I first create an interface for it: IMessage (I realize it isn't the java convention to prefix interfaces with "I", but I prefer it - especially with as many as I ended up with). Add in a few obvious functions, send, recv.

Create the concrete class (the actual implementation): FooMessage. In this case, the messages would deal with "Foo". So, it has send, recv, and say count. Gotta know how many Foos we have, right?

Next up, the test class - but I'll get to it in a moment. This is where I write it in the overall workflow, but it doesn't make as much sense without talking about Mocks.

Last, I write the mock class for the concrete class. It also implements IMessage, but most of the implementation is empty - just accepts the parameter, and maybe spits back a default value.

Which brings us back to the test class. Since I refer to everything via its interface, using those mocks is easy. In FooMessageTest, I use the concrete class FooMessage, and a whole bunch of mocks. Generally, everything but the class being tested can use mocks, so testing ends up nicely isolated and repeatable.

In practice, concrete classes implement several interfaces (say, IFoo specifying count, and IBar specifying weight).

Okay, this was a lot of work up-front, and I'll be honest: I approached it with some concern that it wouldn't pay off.

Well, it has. Refactoring, with the assistance of NetBeans, has been a breeze. Adding in new, or modifying old functionality has been super-easy. Yes, there's a few hangups, but they tend to revolve around my lack of planning than the overall process. I don't feel as "free" as when I use Ruby, but I don't feel held up by the language or its environment.

The hardest part has been maintaining discipline. It is really easy to think that this particular class doesn't need an interface, or a mock, etc - but that is no different than any other methodology.

Tuesday, August 4, 2009

Svnserve, and Solaris 10

I had to go through the trouble of getting svnserve to run as an SMF-managed service on Solaris 10, so there's no reason you should have to.

Create the method script.


This script uses rc-like syntax. The xml manifest (coming up!) uses this.

vi /lib/svc/method/svc-svnserve

The contents:
#!/sbin/sh

case $1 in
start)
        svnserve -r /var/svnroot -d ;;
stop)
        /usr/bin/pkill -x -u 0 svnserve ;;
*)
        echo "Usage: $0 { start | stop }"
        exit 1 ;;
esac

exit 0

Fix the permissions:

chmod 555 /lib/svc/method/svc-svnserve
chown root:bin /lib/svc/method/svc-svnserve

Test it with:

sh /lib/svc/method/svc-svnserve start

Try to connect, list, etc., make sure it works the way you want it to.

Create the SMF manifest


vi /var/svc/manifest/site/svnserve.xml

The manifest, itself


<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type='manifest' name='SUNWsvn:svnserve'>
  <service
    name='site/svnserve'
    type='service'
    version='1'>

    <single_instance/>

    <dependency
      name='loopback'
      grouping='require_all'
      restart_on='error'
      type='service'>
      <service_fmri value='svc:/network/loopback:default'/>
    </dependency>

    <exec_method
      type='method'
      name='start'
      exec='/lib/svc/method/svc-svnserve start'
      timeout_seconds='30' />

    <exec_method
      type='method'
      name='stop'
      exec='/lib/svc/method/svc-svnserve stop'
      timeout_seconds='30' />

    <property_group name='startd' type='framework'>
      <propval name='duration' type='astring' value='contract'/>
    </property_group>

    <instance name='default' enabled='true' />

    <stability value='Unstable' />

    <template>
      <common_name>
        <loctext xml:lang='C'>
          New service
        </loctext>
      </common_name>
    </template>
  </service>
</service_bundle>

Check your work


Check the xml with:

xmllint --valid /var/svc/manifest/site/svnserve.xml

Then let's see if the smf stuff likes it:

svccfg validate /var/svc/manifest/site/svnserve.xml

If everything looks good so far...

Importing the manifest


svccfg import /var/svc/manifest/site/svnserve.xml

It should show up under svcs in maintenance. Let's fix that:

svcadm enable svnserve:default

If it doesn't start, check /var/svc/log/site-svnserve:default.log

You should be all nicely integrated now.

Monday, August 3, 2009

Fun With Projects!

Way back in the day, around April or so, I was talking about a project which was taking up my time. Yes, it is still coming along nicely. We're playing nice with ActiveMQ, Java, the whole bit.

It has taken awhile to get as far as I have. It isn't that the underlying concept is all that difficult ("index files"), it is the scale at which I want to do it. So, there's been a lot of internal abstraction going on, with all of the attendant complexity (lots of little files).

What I'm really happy about is the overall process I've been following. I've been a proponent of tests and mocks, and I've used them a lot in my projects before. The one mistake I always made, that everyone always makes, is losing discipline - giving into the urge to cut a corner. After all, I won't need a mock for that class, it's too simple, right?

I haven't done that this time around. It is really paying off. I haven't been able to devote 100% of my time to this, so I've walked away more than once. I have had no trouble picking up where I was. New pieces work excellently with older pieces, and I barely question the predictability of anything I've done so far.

There's more work to do, of course. I see the light at the end of the tunnel, though. There's some obvious performance changes I can make, but once I've got the basic "duplicate files" functionality going, I'll post it all someplace.

Wednesday, July 29, 2009

Sticking with XP

I was reading this CodingHorror piece, and for the most part thought it was ho-hum. Windows 7 is nice, and he finds a nice way to dig at it ("Vista service pack"). Clever, or would be if I hadn't heard it a few times already. Maybe it's new to you, I dunno, that's not the point.

There was a bit in it that I totally disagreed with:

It's important to me not because I am an operating system fanboy, but mostly because I want the world to get the hell off Windows XP. A world where people regularly use 9 year old operating systems is not a healthy computing ecosystem. Nobody is forcing anyone to use Windows, of course, but given the fundamental inertia in most people's computing choices, the lack of a compelling Windows upgrade path is a dangerous thing.

Different definitions of "dangerous".


How is slow change dangerous to an ecosystem? It might be dangerous for itself, i.e., the Windows franchise, but the ecosystem seems to have not only survived, but thrived. Not only did Apple take total advantage of MS' lapse, but the Linux desktop looks better and better (typing this on Ubuntu). Within the Windows' desktop universe, there is a huge software library specific to XP, not just Windows. Tweaks, utilities, and documentation galore.

XP was some good shit.

Now? Now I'm going to keep any XP machines around, until they absolutely have to be upgraded to gain functionality. Just out of spite. Er, if I can find any, that is.

Tuesday, July 28, 2009

Redmine on Glassfish

Okay, this took me a couple of days to piece together, so here it is for posterity.

Redmine is a software-development project management system, written in Ruby on Rails. The demo looks good, I tried it out on a small scale, liked it, and am ready to really use it.

I like my RoR running under Java where I can keep an eye on it, and normally this isn't a problem. Just warble it up, and deploy. Nothing to it.

However, this time around was a little more interesting.

Get everything working per these instructions, running Webrick, etc., under jruby instead of ruby.

The .jar that comes with the latest stable JRuby (as of this writing) has a bug. There's a newer one here which works. It should replace whatever jruby-complete jar is in jruby-1.3.0\lib\ruby\gems\1.8\gems\warbler-0.9.13\lib. Thanks to this little post for that fix.

Next, run warble config, and modify the newly-created config/warble.rb. Make whatever other changes you might want (like including your jdbc driver in config.gems), and add lang to config.dirs.

Then warble it up, and away you go.

Also, it really helps if you don't have some weird file corruption issue to help you misdiagnose things, spin your wheels, and get needlessly frustrated for a day and a half. Really, try to avoid that part. Just try it on one of the other ten zillion machines you've got laying around a little sooner, dumbass.

Update

If you happen to be using redmine not only in this configuration, but behind nginx as a proxy, here's a trick to get around the non-relative paths you find all over the place:
        
server {
        listen 80;
        server_name whatever.example.com;
        location / {
                proxy_pass http://server/redmine;
        }
        location /redmine {
                proxy_pass http://server/redmine;
        }
}


Hope this helps.

Free Book: Pro Git

I've been using git for my version control needs for a while, so I was happy to not only see a book on it, but that it is free: Pro Git.

Saturday, July 25, 2009

Java XML "support"

Like many software projects written these days, I'm finding XML unavoidable. I'd always wondered why Java programmers seemed so grumpy about XML, and now I know.

Java XML support sucks.

I'd never really appreciated .NET's support for XML. A few pain-in-the-ass points, but for the most part pretty easy to use. Ruby makes it way too easy, and comes closest to how it "should" be done.

It started with serialization. I need some way to store the state of an object so I can send it over the wire. Java has a built-in binary serializer which works very well. It doesn't matter if the fields are private, or whatever, it'll dump them out and read them back in. A couple of pain points, but pretty easy to use, much like .NET's serialization. I figured that serializing to XML would be just as easy.

I can be so naive.

Everyplace I looked said to use XMLEncoder. Okee dokee, except that the equivalent code doesn't give equivalent results. Do some digging, and it turns out the binary and XML serializers use completely different mechanisms. Now, there's probably some pointy-headed academic someplace who is satisfied that the XML serializer meets some standard. If it means re-writing your code in such a way as to make it more difficult to maintain, then you're doing it wrong. Especially when not only do competing languages do it in an elegantish manner, but you already have an implementation which does it well, too.

You know, the source code is open. Someone with more time oughta look into that.

Anyways, since it all blows so hard, I've got some extra work implementing my own serialization conventions. I've got it implemented about halfway through at the time of this writing. It shouldn't take much longer to finish.

Thursday, July 23, 2009

Learning Java, Still

I've been hammering away at both getting the project along (and it is coming along nicely), while learning Java (also coming along nicely). I've done enough that I've got some opinions about Java. Maybe I'm just doing things wrong, and whatever old Java sages stumble across this post will just shake their heads at the n00b.

Today's complaint - exceptions

First, I wanna complain about exceptions in Java. I like exceptions. I like code that blows up in a predictable, informative, and containable manner. At first, when I was having to declare all the potential exceptions a method could throw, I was annoyed. Then, I came to appreciate it. I had a detailed list of broad categories of things that could go kerblooie.

That is totally awesome.

So, there I was, hard at work, not understanding why I wasn't getting the results I was expecting. Then I noticed the log saying something about an exception, and continuing.

I looked, but I wasn't catching it anywhere. It was an unhandled exception, or at least I wasn't handling it, and I'm the only person I care about when it comes to code.

Dammit, things should blow up, not just continue. What the hell kind of "exception" is that, anyway? They call it "unchecked" (or is it the other way around?). I guess I'm just not smart enough for Java.

All is not suckitude

I'm using NetBeans, this go-round. It runs really well, handles my multi-mon just fine, and holds my hand without getting in the way too much. It isn't as good as Visual Studio - VS just has a bit more "polish" to it (much better autocomplete, for example), but it is easier to get to the guts of NetBeans.

Refactoring in Java is awesome. I have absolutely no fear about renaming things, moving them around, whatever. Since laying things out in both agiley and enterprisey fashion means a lot of little files, this is pretty important. I've been burned by refactoring in VS (but not a recent version), but not once in NetBeans.

Things could be easier, dammit!

Documentation is all over the map. The linux distros used to have this problem (they've gotten much better), where there were just so many options, and no clear consolidated approach. NetBeans packages as much as it can together, of course. This doesn't cover moving to production, or anything like that. I guess it is expecting too much to find documentation geared towards someone who is intent on learning an entire stack at once.

I'll get over the smell

As usual, I'm having fun learning. I've got enough of the basic patterns down that I'm getting a lot done fairly quickly. I'm sure I'm making dozens of beginner mistakes, but hopefully I'm doing a good enough job that those things are easy to find and fix.

I keep getting stuck on the server deployment scenarios. There's just too many of them. Not just app servers, but what those app servers need to provide. The acronyms are very thick.

Just more stuff to hammer on through. I'll keep you posted, of course.

Monday, July 20, 2009

Powershell, and log4net

EZ money:

"--- Loading log4net ---"

[System.Reflection.Assembly]::LoadFrom("$pslib\log4net.dll") | out-null;
"Loaded log4net.dll"

[System.IO.FileInfo]$fi = new-object System.IO.FileInfo "$pslib\log4net.xml"
[log4net.Config.XmlConfigurator]::Configure( $fi )
"log4net configured with $pslib\log4net.xml"

$log = [log4net.LogManager]::getLogger("default")
"`$log = (new logger `"default`")"

""
'Cause sometimes we wanna be fancy...
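Once the profile has run, logging from the prompt is a one-liner via the usual log4net ILog methods:

$log.Info( "starting the scary part" )
$log.Warn( "disk is getting full" )
$log.Error( "and there it goes" )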

Sunday, July 19, 2009

My Online Dumping Ground

I've written a bunch of stuff over the years, of various utility.

And I've lost most of it. No hardship, it is just a pain in the butt to keep track of it.

Anyway, some of it is marginally useful, so I thought I'd set up someplace to host it. It is all BSD-licensed.

You can find it here.

Featured projects (okay, the only projects):

  • IIS6 Cmdlets (cmdlets to administer remote IIS6 machines)

  • ServerInfo (detailed remote IIS6 configuration information GUI)

  • FileInfoExtensions (Things .NET's FileInfo should have, but doesn't)

  • Powershell Profile (My profile.ps1. Whoop. It was when the output of my profile exceeded the length of my original profile that I decided to move everything out there)

I can't attest to any great quality for these apps. If nothing else, at least I'll know where they are.

I've been putting snippets of code in this blog, but it usually isn't long after I've posted something that I'm updating the article with a much better script. Rather than do that, I'll just provide a link to the code in the project, with maybe some bits extracted. That FileInfo post was an excellent example: worthwhile code, but loooong.

Saturday, July 18, 2009

Some FileInfo Extensions

I kept on needing the same stuff regarding files when writing C#, so I came up with a set of helpers implemented as extensions to the FileInfo class.

Nothing particularly earth-shattering, but you might find them useful.

  • IsDirectory - does the obvious
  • GetMimeType - uses Windows' urlmon.dll to magically determine a file's mime type.
  • GetFileBytes - gets a chunk off the front of a file.
  • GetSha1, and friends - compute the SHA1 hash of a file. Configurable buffer size.
  • ApplyToFolder - accepts a function to be applied to every file in a hierarchy.

Latest version here.

using System;
using System.IO;
using System.Runtime.InteropServices;
using System.Security.Cryptography;
using System.Text;

namespace QSha
{
public static class FileInfoExtensions
{
public static bool IsDirectory( this FileInfo fi )
{
return ( fi.Attributes & FileAttributes.Directory ) == FileAttributes.Directory;
}


public static byte[] GetFileBytes( this FileInfo fi, long maxBufSize )
{
byte[] buffer = new byte[( fi.Length > maxBufSize ? maxBufSize : fi.Length )];
using ( FileStream fs =
new FileStream( fi.FullName, FileMode.Open, FileAccess.Read, FileShare.Read, buffer.Length ) )
{
fs.Read( buffer, 0, buffer.Length );
fs.Close();
}

return buffer;
}

public static void ApplyToFolder( this FileInfo fi, Func<FileInfo, bool> fFileFound )
{
string[] subFolders;
try
{
subFolders = Directory.GetDirectories( fi.FullName );
}
catch ( UnauthorizedAccessException )
{
return;
}

// recurse into each subfolder first
foreach ( string folder in subFolders )
{
new FileInfo( folder ).ApplyToFolder( fFileFound );
}

// then apply the callback to every file in this folder
string[] files;
try
{
files = Directory.GetFiles( fi.FullName );
}
catch ( UnauthorizedAccessException )
{
return;
}

foreach ( string file in files )
{
// the callback's return value isn't used for any post-processing yet
fFileFound( new FileInfo( file ) );
}
}

// mime-type stuff
public static string GetMimeType( this FileInfo fi )
{

if ( fi.IsDirectory() )
throw new FileNotFoundException( String.Format( "Is a directory, not a file: {0}", fi.FullName ) );

if ( !File.Exists( fi.FullName ) )
throw new FileNotFoundException( fi.FullName + " not found" );


byte[] buffer = new byte[256];
using ( FileStream fs = new FileStream( fi.FullName, FileMode.Open ) )
{
if ( fs.Length >= 256 )
fs.Read( buffer, 0, 256 );
else
fs.Read( buffer, 0, (int)fs.Length );
}
try
{
System.UInt32 mimetype;
FindMimeFromData( 0, null, buffer, 256, null, 0, out mimetype, 0 );
System.IntPtr mimeTypePtr = new IntPtr( mimetype );
string mime = Marshal.PtrToStringUni( mimeTypePtr );
Marshal.FreeCoTaskMem( mimeTypePtr );
return mime;
}
catch ( Exception )
{
return "unknown/unknown";
}
}

[DllImport( @"urlmon.dll", CharSet = CharSet.Auto )]
private extern static System.UInt32 FindMimeFromData(
System.UInt32 pBC,
[MarshalAs( UnmanagedType.LPStr )] System.String pwzUrl,
[MarshalAs( UnmanagedType.LPArray )] byte[] pBuffer,
System.UInt32 cbSize,
[MarshalAs( UnmanagedType.LPStr )] System.String pwzMimeProposed,
System.UInt32 dwMimeFlags,
out System.UInt32 ppwzMimeOut,
System.UInt32 dwReserverd
);
// /mime-type stuff

// hash stuff
public static string GetSha1Base64( this FileInfo fi, long maxBufSize )
{
return Convert.ToBase64String( fi.GetSha1( maxBufSize ) );
}

public static string GetSha1Hex( this FileInfo fi, long maxBufSize )
{
byte[] hash = fi.GetSha1(maxBufSize);
StringBuilder hex = new StringBuilder( hash.Length );
for ( int i = 0; i < hash.Length; i++ )
{
hex.Append( hash[i].ToString( "X2" ) );
}
return hex.ToString();
}

public static byte[] GetSha1( this FileInfo fi, long maxBufSize )
{
if ( 0 == fi.Length )
return null;

if( 0 == maxBufSize )
{
maxBufSize = fi.Length;
}
byte[] buffer = fi.GetFileBytes( maxBufSize );

SHA1Managed sha1 = new SHA1Managed();
return sha1.ComputeHash( buffer );
}
}
}
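
To give a feel for how these hang together, here's a minimal usage sketch (the paths are made up, and this assumes a console app with using QSha; at the top so the extensions are in scope):

FileInfo fi = new FileInfo( @"c:\windows\notepad.exe" );
Console.WriteLine( "{0} is {1}", fi.Name, fi.GetMimeType() );
Console.WriteLine( "sha1: {0}", fi.GetSha1Hex( 0 ) ); // 0 means hash the whole file

// hash the first 4KB of every file under a folder
new FileInfo( @"c:\temp" ).ApplyToFolder( f =>
{
    if ( 0 == f.Length )
        return true; // GetSha1 returns null for empty files
    Console.WriteLine( "{0}  {1}", f.GetSha1Base64( 4096 ), f.FullName );
    return true;
} );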

Microsofting

Oh boy, coding on a Saturday night! What could be more fun?

Well, there's been a little tempest-in-a-teapot about some MS marketing blog being busted as an astroturfing site.

I like the response of the blog:

As for WordPress, I address using Microsoft competitor tools on my about page. Again, it’s not really a secret. You can see the flickr photos and youtube videos right there on the homepage.

All this is to say — you can hate on Microsoft and its advertising all you want … but I’m not sure it’s fair to act like it’s a big GOTCHA! to catch a marketing site being, well, a marketing site.


There are two bits I like about it. First, the obvious in-your-faceness of it.

Next, admitting that they use tools other than MS tools. You know, like the rest of the world. I've always found Redmond's self-rah-rahing and NIH attitude irritating. Microsoft's biggest weaknesses revolve around its isolation from the rest of the community. This is a welcome change. I wonder how long it will take for it to be beaten down?

Hash My Files

Not particularly revolutionary, but if you need a nice tool for getting the hashes of various files, HashMyFiles from Nirsoft is pretty nice, and best of all, free.

SHA1, CRC32, and MD5.

Friday, July 17, 2009

ZFS and Upgrading Drives

Yea, okay, so this is common. In fact, I've done it with VMs before. I still think it is neat that I was able to upgrade an array without any downtime.

Before:
root@bluelight:~# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
data 888G 511G 377G 57% DEGRADED -
rpool 298G 6.28G 292G 2% ONLINE -

After:
root@bluelight:~# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
data 2.73T 511G 2.23T 18% ONLINE -
rpool 298G 6.27G 292G 2% ONLINE -


I ran into one snag. When I issued zpool replace data c4d0s0, it gave me a message about there being no such device. The solution was zpool replace data c4d0. I suspect the "s0" at the end of the original disk name refers to a slice, probably an artifact of how I set the system up in the first place.

The larger impact is that I've got a lot of room at very little cost. There aren't as many advantages to high-end gear as there used to be. Everything I used is commodity-class, or free. It doesn't suit all situations, but there's no reason for small businesses to be paying a lot of money for this kind of capacity.

Monday, July 13, 2009

What If Microsoft Turned A Corner, and No One Was There?

(Disclaimer: MSFT from 1994-2003)

Ran into this today, which argues that Microsoft has turned the corner. Considering that the blog tends to be critical of MS, it was worth a read.

Despite my history with MS, I've been pretty much all over the map when it comes to PC-based server tech. Even at MS, I was an early adopter of Linux, and ultimately started my own hosting service on FreeBSD (which failed, but that's another story).

That said, I recently took a new look at MS' 2008 stack (Windows 2008, SQL 2008, VS 2008). It doesn't suck. In fact, it is pretty darn good.

What I liked:
- Powershell. I don't like to throw the term "game changer" around much, but this one is. Between that, and a very rich .NET model, a lot of previously cumbersome tasks are now trivial.
- IIS7. Much more modular, much more programmable (more Powershell)
- VS2008. I've long liked MS' IDEs, and this one kicks much butt. Add in ReSharper, and it totally rocks.
- SQL 2008. It scales better, it is easier to manage, and very configurable.
- C# 3.0. I'm a chump for simple lambda syntax.

The biggest difficulties I ran into stemmed not from Microsoft's tech, but from the ecosystem around their technology.

I have found plenty of neat stuff to use, but a lot of it runs under Java. Sure, I could build various bridges from .NET to those tools, but then I'd have a whole other chunk of stuff to maintain. Many of the projects have .NET clients, but they tend to be second-class.

Why would I want to do things half-assed like that, when I know all too well the problems it causes?

There's just more happening in the Java/Linux world. If I want to maintain a competitive edge for my customers and myself, it behooves me to use and recommend the better technology. Right now, that isn't Microsoft - and it isn't because of Microsoft, itself.

When I wanted a search server, I could use MS', which works on Windows, or I could use Solr, which runs on Java. When I needed some form of network monitoring, I ran into OpenNMS, and didn't find much for Windows. Lately, I wanted a good message queue. Microsoft has one (and it works fine); Java has a selection.

Thus bringing me to the point: it may not matter if MS is getting it. MS isn't capable of doing everything, no matter how efficient and disciplined they get. They need everybody else in order to be a strong presence. Their heavy-handed tactics of the past may have deprived them of what they needed most: people.

They have a strong presence in the enterprise world, and this latest stack will help cement that position. The large organizations tend to move slow, anyway, so being a little behind isn't too big a deal. MS' best tools are best suited to that environment. They aren't going away.

They have the money to ride out their bad reputation. Maybe they can trim more of the fat, and focus a little bit better. In the meantime, they have a lot of ground to make up.

Sunday, July 12, 2009

Powershell $profile - Is This Great Filler, or What?

Some new Java stuff, plus a couple of nice little helper functions: qfind, for searching the filesystem, and title, for changing the console title.
$projects = $env:USERPROFILE + "\Documents\Visual Studio 2008\Projects"
$sysdir = $env:USERPROFILE + "\sys"

# misc stuff
new-alias -name npp -value "C:\Program Files\Notepad++\notepad++.exe"
function qfind( $start, $like ) { get-childitem $start -name $like -recurse }

function title( $msg ) { $host.ui.RawUI.WindowTitle = $msg }

# git stuff

$gitexe = $sysdir + "\Git\bin\git.exe"
new-alias -name vi -value $sysdir\Vim\vim72\gvim.exe
$env:EDITOR = "npp"
new-alias -Name git -Value $gitexe

# java stuff

$jdk = "jdk1.6.0_12"
$java_home = "C:\Program Files\Java\$jdk"
$env:JAVA_HOME = $java_home

new-alias -name jruby -value $sysdir\jruby\bin\jruby.bat

new-alias -name jar -value "$java_home\bin\jar.exe"
new-alias -name ant -value "c:\sys\ant\bin\ant.bat"
new-alias -name javac -value "$java_home\bin\javac.exe"
function jetty() { java -jar start.jar etc/jetty.xml }
$env:CATALINA_HOME = "c:\sys\tomcat6"
new-alias -name tomcat -value "c:\sys\tomcat6\bin\startup.bat"

# MS Stuff

new-alias -name nant -value "$sysdir\nant\bin\nant.exe"
new-alias -name msbuild -value C:\Windows\Microsoft.NET\Framework\v2.0.50727\msbuild.exe

title( "General" )

Hope this helps. At least it is harder to lose :)

Update: Once I got to looking at it here, I decided it could be better laid out. So, it is a little prettier, now.

Saturday, July 11, 2009

Learning Java

Okay, so here I am, learning Java. I've put it off for many years. Learning Java has always been compelling, but more along academic lines: As pervasive as it is, I should just know it.

I tend to need to write stuff fairly quickly (misc tools don't need to be particularly well-written), so I usually script my way out of trouble. When I need a GUI app, or just need to show additional polish, I reach for DevStudio - Microsoft makes those one-off apps easy.

My goals are pretty simple: learn enough to

  • Administer java application containers
  • Diagnose Java application problems
  • Understand it well enough to use a high-level language (Groovy, JRuby) effectively
I'm not unfamiliar with configuring Java containers, but I'm in no way an expert. I like the overall...explicitness?...of it. So far, I haven't seen anything I couldn't pull off on other platforms (Apache or IIS), and there is an up-front cost in complexity.

However, as the overall configuration becomes more complex, Java containers simplify things - all that explicitness pays off. Part of the appeal, given my current path, is that I can take a couple of these Java-based OSS projects (OpenNMS, ActiveMQ, Solr) and stuff them all into one JVM. Or not. (And I really look forward to learning how many problems doing that causes :) )

One thing Java got right was that it was all-Java right from the beginning. If you were coding a Java EE app, you were writing in Java. Microsoft's decision to foster backward-compatibility by encouraging interop wrappers around existing Win32 dlls has been a huge headache. While appserver admins were wrestling with classpaths (and arguing with devs about the location of physical files which are being written to), MS admins were building dozens of small machines because native.dll from vendor A's ".NET application" collided with native.dll from vendor B, who actually went out of business three years ago. I've dumped too many things into VMs because of that crap.

MS and its ecosystem have certainly gotten better, but there are still sticking points (do we really need a Global Assembly Cache?).

What about the language? Surely I can't talk about "Learning Java" without talking about the language!

Well, I can't talk much; I've only been at it for about eight hours. A nice chunk of that was spent learning about classpaths, and changing where apps write files on the physical disk. I'm being all old-skool about it, too. I've got Notepad++, the JDK, and a couple of powershell scripts as my dev environment. I know there are better and easier ways to get things done, and I'll get to using them eventually. For now, I want to know what's going on under the covers.

For my first trick, I created a bunch of unnecessary abstraction ultimately resulting in getting some rows from a database, and dumping them to the console.

Why do I have to declare my potential exceptions? If the compiler knows enough to know that I didn't declare an exception when I should have, why doesn't it just stick it in there for me?

I thought it was neat that I didn't necessarily have to include, er, import namespaces. (At least I didn't need to with the Postgres adapter; I don't know all the rules to this, yet.) It was neat until I had to start adding "throws ClassNotFoundException" to all of my functions, anyway.

Having one general abstraction for databases is nice, too ("java.sql.*", vs. "SQL and OLEDB" everything). In fact, there are nice generalizations for just about everything, to accommodate the EE specification.

Anyway, I'm having fun, and I'll keep y'all posted.

Thursday, July 9, 2009

The State of Hypervisors: Meh

I've been dinking around with the various virtualization technologies: VMWare ESX, Solaris xvm, MS Hyper-V, and Ubuntu/Linux kvm (I still have VirtualBox on the todo list).

kvm and xvm, despite different underlying technologies, are pretty similar. They get the job done, but they aren't much fun to manage. The various crowds are making them better every day (Eucalyptus, which uses Xen on Linux, is especially promising, though I haven't tested it personally).

VMWare wins the management crown, hands down. Highly complex configurations, an abundance of tools, simple administration, and well-known in the industry.

They need to be doing a lot more, though. MS' Hyper-V 2.0 promises to have live migrations (a big selling point for VMWare), and is nearly as easy to manage. It won't take long for MS to catch up, and it is almost free with Windows.

Here's something that kills me about all of them, though: it can be difficult to move VMs from one version of the software to another, which kills flexibility. When I wanted to run the VM from my laptop on a big fat server for a while, moving it from the workstation version to the server version was not simple.

It should be simple. I realize that if I wanted to do something similar on a large scale, I could do it by implementing some conventions and standards, with a bunch of scripts to hold it together. I'm lazy. That's the point here.

My Hypervisor Grail is this: a seamless transition from any hardware, to any hardware. At the moment, I'm whining about moving from my laptop to a server, but I'd like to be able to move it out to a cloud of some flavor also (right now, I'm using AWS EC2, and it is sufficient).

It would be super-cool if I could choose among images at boot, and have it backed up at shutdown, without the compromise of hard drive space and/or performance.

We're getting there, at least:

Xen 3.3 also contains a wealth of new features contributed by vendors collaborating in the new Xen Client Initiative (XCI), a Xen.org community effort to accelerate and coordinate the development of fast, free, compatible embedded Xen hypervisors for laptops, PCs and PDAs. The XCI is targeting three use cases: using Xen to run "embedded IT" VMs that allow remote support, security and service of PCs through embedded IT applications without any impact on the user's primary desktop OS;
"instant on" applications that can be immediately available as separate VMs from the user's primary desktop OS; and "application compatibility" VMs, which allow legacy PC applications to run as VMs, alongside the user's primary desktop OS. XCI member companies are already shipping Xen client hypervisors embedded in chipsets, PCs and laptops.


I think, for my next build-out, I'm going to try Eucalyptus on a server (sacrificing the Win2008 machine - good OS, but not as good as Solaris, which stays and gets big fat hard drives). I like the idea of being able to push VMs from inside the datacenter (okay, in this case, my closet) to the cloud.

Wednesday, July 8, 2009

MSBuild, and Cancelling Long Running Processes

I really like the syntax and organization associated with the various build tools for getting stuff done. I'd hoped to leverage those qualities to handle some system tasks - and I will, anyway.

I ran into a snag, though. If a task involves a long-running external process - say, robocopy'ing a large directory structure - killing the msbuild process doesn't kill the child process. Robocopy keeps on running, leading to a lot of cursing about how slow the network is, until you notice it running two dozen times.

BTW, here's a powershell snippet to kill all robocopy processes:
get-process -Name robocopy | foreach-object { $_.Kill() }

Under normal circumstances, the runaway processes shouldn't be a problem. Under other circumstances, this could be a big deal.

MSBuild exposes its object model, so it should be possible to write either a wrapper or a better exec task. One way or the other, it isn't as cut-and-dried as I'd hoped.
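
To sketch the wrapper idea (the PID-as-first-argument convention here is mine, not anything MSBuild gives you): a tiny exe that launches the real tool, watches the parent process, and kills the child if the parent goes away.

using System;
using System.Diagnostics;
using System.Threading;

class Wrapper
{
    static void Main( string[] args )
    {
        // args[0] = PID of the process to babysit (e.g. msbuild),
        // args[1] = the command to run, args[2+] = its arguments
        Process parent = Process.GetProcessById( int.Parse( args[0] ) );
        Process child = Process.Start( args[1],
            String.Join( " ", args, 2, args.Length - 2 ) );

        while ( !child.HasExited )
        {
            if ( parent.HasExited )
            {
                // the parent died; don't leave the child running
                child.Kill();
                break;
            }
            Thread.Sleep( 500 );
        }
    }
}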

Monday, July 6, 2009

SocketShifter - Redirect Sockets on Windows

A trick I've often used to get around firewalls is ssh port forwarding. Basically, after SSH connects to a server, you can get it to forward local ports through the tunnel to arbitrary ports on the other end.

So, if I connect from work to my ssh server listening on my home router, I can forward my local port 1000 (which isn't being used) to port 80 on one of my internal machines, and connect to that webserver via http://localhost:1000.

Windows servers don't have many good options for an ssh server. The only one I've used was the version that comes with Cygwin, and it wasn't really all that good (hangs, crashes, non-killable).

Introducing SocketShifter, an app that performs a similar magic, only on Windows.

But wait, there's more!

It uses something called the .NET Service Bus, an offering from Microsoft's Azure. This promises to allow two machines to connect, regardless of intermediate firewalls (and how do I prevent this?). It also means you have to sign up for an account in order to use it.

I'll be sticking with ssh, for now, but this service bus thing looks kind of interesting.

Saturday, July 4, 2009

msbuildtasks - A collection of MSBuild Tasks (duh)

I'm starting to use MSBuild for some system-level stuff. I've been torn between that, NAnt, and Powershell for commonly executed admin tasks (backups, deploys, things like that).

The most tempting choice was PowerShell. It can do just about anything I'd need it to. That flexibility creates a problem, too: I don't want to have to maintain these things forever. If I use a full-featured language, follow-up maintainers will fall into two camps: the ones who add complexity until it becomes too hard to maintain, and the ones who won't understand any of it at all.

I was a little jealous of NAnt, because it has matured very nicely. But it has to be installed - never mind how easy - and installing it needs to be justified.

Then I ran across msbuildtasks, from the same people who brought us Subversion. Lots of shiny stuff (I love it!) including tasks for IIS sites (I don't know if it does IIS7), a slew of build and deploy related tasks, SQL tasks, Registry tasks, and more.

Then I found the MSBuild Extension Pack, which is chock-full of shiny promise, too.

So, MSBuild is our winner (I found a good tutorial, and yet another). It comes with .NET 2.0, which is installed anyway (okay, you have to copy those extensions, but that could be the first task :) ), it has the features I'm looking for, thanks to Tigris, and it is proven in the field. I'm sure it will suck, too, but at least I'll know how much and in what ways. If it doesn't work out, I'll look for something else.

Update: Despite the title of this article, I'm not even using the msbuildtasks package, yet. The MSBuild Extension Pack has a lot more goodies, and everything that I've needed so far.

Wednesday, July 1, 2009

Unattended, A Windows Deployment System

Unattended. From their site:

Features include:

  • Automated install of operating system, hotfixes and applications.
  • Full documentation and source code.
  • Support for floppy, CD-ROM, and "nothing but net" installs.
  • True unattended installation, not disk imaging.
  • No Windows servers required; use your Unix servers instead.
  • No Unix servers required; use your Windows servers after all.
  • Completely free.

When you are finished setting up Unattended, you will be able to boot any PC from a floppy, from a CD-ROM, or directly from the network, answer a few questions, and come back an hour or two later to a fully-installed Windows workstation.


Update: I gave it a look-see, and it looks like it would be useful for Windows 2003, and that's about it. Considering that Windows 2003 deployments are dropping off a cliff, it doesn't look all that useful.

Monday, June 29, 2009

Eucalyptus - AWS Workalike

I keep rediscovering Eucalyptus, a project aiming to be a clone of Amazon Web Services. I need to give it a real test run...

Sunday, June 28, 2009

Custom Exception, Converting enum to String

There has to be a better way to do this...

I've got a custom exception with its own types, etc. It started out like this:
public class MyException : Exception
{
public MyException() : base()
{
}

public MyException( String message ) : base( message )
{
}
}

Nothing fancy there, right? At least I can now filter my catches appropriately.
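
For instance (DoRiskyThing here is just a stand-in for whatever might throw):

try
{
    DoRiskyThing();
}
catch ( MyException ex )
{
    // one of mine - handle it specifically
    Console.WriteLine( "MyException: {0}", ex.Message );
}
catch ( Exception ex )
{
    // everybody else's
    Console.WriteLine( "Something else: {0}", ex.Message );
}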

Well, what if I want to use an enum to define the kinds of failures? Like so:

public enum ExceptionType
{
ScrewedUpA,
ScrewedUpB
}

So far, so good.

Until I want to call it thus:

MyException ex = new MyException( MyException.ExceptionType.ScrewedUpA );
throw ex;

Okay, so we add a constructor which takes a MyException.ExceptionType:
public MyException( ExceptionType exType ) 
{}

That's all very nice, but then what? We need some kind of message, right?

Getting the string from the enum isn't that hard. The enum is represented as an int, and the type has fields we can query via reflection. The field we're interested in will be offset from the enum's int value by one (the first field, value__, designates the underlying type of the enum).

We need to call the constructor with the String signature, which means chaining to this(...). However, the only place that chaining is allowed is in the constructor declaration itself. Given all of that, this is what I ended up with:
public class MyException : Exception
{
public MyException() : base()
{
}

public MyException( String message ) : base( message )
{
}

public MyException( ExceptionType exType ) :
// index 0 is the value__ field, so the named members start at offset 1;
// this assumes the enum values are 0-based and sequential
this (exType.GetType().GetFields()[(int)exType + 1].Name)
{}

public enum ExceptionType
{
ScrewedUpA,
ScrewedUpB
}
}

If there's a better way to do this, I'd like to know. Thanks.
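
Update: unless I'm missing something, exType.ToString() returns the enum member's name directly, which would shrink that last constructor to:

public MyException( ExceptionType exType ) : this( exType.ToString() )
{}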

Friday, June 26, 2009

OpenNMS and Windows Monitoring

Now that OpenNMS has discovered my network, how can I get even more out of it?

My first thought was to extend OpenNMS to accommodate what I'm looking for. That's a lot like work, so I've installed and enabled SNMP on one of my Windows 2003 servers.

A little searching turned up this set of instructions for configuring SNMP (because MS' instructions are atrocious; a "configuring SNMP" section should do more than tell you what configuring it would get you - it should actually tell you how to do it).

It's under the properties for the service. That's a great place for it - no sarcasm intended. Of course, there are only a couple of other services that do that, so I didn't think to look there.

Anyway...

I configured the community name, and set the trap destination to the FQDN of the OpenNMS server. Restarted the service, and ta-da! OpenNMS started receiving messages. And complaining about them.

Checked the Security tab, and added the OpenNMS server to the list of "Accept SNMP packets from these hosts" list.

There's a section in the OpenNMS admin pages for configuring which IP addresses will be sending traps, and what community name they send. I left it at v1.

It started working after that.

I'm not entirely thrilled with this as a solution. SNMP v1 is known to be pretty insecure. Let's see what kind of data comes through... and I'm just not getting the point. There doesn't seem to be anything new here. Maybe SNMP is just too smart for me. Ah well.

Update: I did a little configuration with evntwin.exe, and started forwarding various W3SVC events to OpenNMS. That was all around kludgey, and not really worth the effort.

I still like OpenNMS as a lightweight inventory and monitoring system, though.

Backing up VMWare ESX 3.5

I'd been whining about the troubles I had getting the various parts of a VMWare backup script working. Well, I've got something that appears to fill the bill.

The end result is that I have a backup local to the host, so I can restore very quickly, and as many past snapshots as my Windows server can hold.

The first thing you need to do is open up the firewall:
esxcfg-firewall --enableService smbClient

Note that in order for everything to work, I have to use the FQDN of the server. Otherwise, I get an error regarding broadcasts.

Here's the main script:
#!/bin/sh

VMCONFIGLIST=`/usr/bin/vmware-cmd -l`
TODAY=`date +%Y%m%d`
BACKUPBASE=/vmfs/volumes/`hostname`\:raid0/backups

USER=root
PASSWORD=nunyabidnz

WINUSER=NA/meh
WINPW=blah
WINDEST='//server01.example.com/backups'

if [ ! -d ${BACKUPBASE} ]
then
echo "Backup directory ${BACKUPBASE} does not exist."
echo "Exiting"
# no, I don't want to create it. If there's a problem with the path,
# there's no telling where these backups will end up.
exit
fi

echo "Back up for ${TODAY}"
echo "Backup directory ${BACKUPBASE}"

for VM in ${VMCONFIGLIST}
do
# set up the variables for this loop
VMDIR=`dirname ${VM}`
# note: VMNAME isn't set until after the config-file check below
echo "Looking at ${VM} in ${VMDIR}"

# if the config file doesn't exist, we can't do anything
if [ ! -f ${VM} ]
then
echo " * Did not find ${VM}."
echo " * Skipping"
continue
fi

# get the VM name from the config file
VMNAME=`cat ${VM} | awk '/displayName =/ {print $3}' | sed 's/\"//g'`

DESTDIR=${BACKUPBASE}/${VMNAME}

TARNAME=${VMNAME}.${TODAY}.tgz

# the work begins here
# all the configurable stuff should be above this line

echo " - Backing up ${VMNAME}"
echo " - file ${VM}"

# vcbMounter bombs out if the directory already exists
if [ -d ${DESTDIR} ]
then
echo " - deleting directory ${DESTDIR}"
rm -Rf ${DESTDIR}
fi

# create the backup
echo " - Starting vcb..."
/usr/sbin/vcbMounter -h `hostname` -t fullvm -u ${USER} -p ${PASSWORD} -r ${DESTDIR} -a Name:${VMNAME}

# check vcbMounter's return
if [ "$?" -ne "0" ]
then
echo " - Error backing up VM ${VMNAME}"
continue
else
echo " - Successful backup of ${VMNAME}"
fi

# compress it

echo " - Archiving backup to tarfile ${BACKUPBASE}/${TARNAME}"
echo " - to tarfile $TARNAME"
# --force-local to ignore the ":" in the path
tar -C ${BACKUPBASE} --force-local -czf ${BACKUPBASE}/${TARNAME} ${DESTDIR}
echo " - Compression completed"

echo " - Copying to ${WINDEST}"
/usr/bin/smbclient -c "prompt off; put ${BACKUPBASE}/${TARNAME} ${TARNAME}" -U ${WINUSER} ${WINDEST} ${WINPW}

echo "Cleaning up .tgz"
rm -f ${BACKUPBASE}/${TARNAME}

echo " --- ${VMNAME} complete --- "
done

Any suggestions for improvements (how about an occasional function, Mr. Goon? Or checking the return from smbclient? Hmmm?) are welcome.

MSDeploy, First Looksies

Microsoft has a tool which promises to simplify IIS deployments, known as MSDeploy.

I had a little trouble finding documentation, but I finally found a link pointing here.

Update: Dinked around with it a little, and it is looking pretty darn cool. It's heavily slanted towards IIS7, but it looks like it works reasonably well with IIS6.

Thursday, June 25, 2009

NirCmd?

Was reading about something else, and NirCmd came up. It is a command line utility, with a bunch of functionality. Very lightweight, no install necessary.

I haven't looked at it, yet. Just bookmarking...

Update: Meh. I like that it is really small, and I have a copy of it on my thumbdrive, but it ain't all that.

Tuesday, June 23, 2009

MSProject Server Doesn't Exist?

Well, I installed MSProject Server onto a node of the web farm, and ran into a snag. When I tried to administer it via shared services, I got this error:
The Project Application Service doesn't exist or is stopped. Start the Project Application Service.

Checked the service, and it was running according to the MOSS UI. The Services MMC showed the other services as running. Nothing interesting in the event log.

Searched around, and found others with this problem. The most common suggested fix was
stsadm -o provisionservice -action start -servicetype "Microsoft.Office.Project.Server.Administration.ProjectApplicationService, Microsoft.Office.Project.Server.Administration, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71E9BCE111E9429C" -servicename ProjectApplicationService

Well, that didn't do it.

The solution? I could about kick myself. MSProject Server needs to be installed on all nodes of the farm. It even says so in the instructions. Dumbass me...

Monday, June 22, 2009

vmware-cmd Not Working

I'm still working on those VM backups. I got everything started okay on the first node. It wasn't until the second node that I ran into other problems. Of course.

Attempting to run vmware-cmd resulted in the following:

Can't locate VMware/VmPerl.pm in @INC (@INC contains: blib/arch -Iblib/lib -I/usr/lib/perl5/5.6.0/i386-linux -I/usr/lib/perl5/5.6.0 -I. /usr/lib/perl5/5.8.0/i386-linux-thread-multi /usr/lib/perl5/5.8.0 /usr/lib/perl5/site_perl/5.8.0/i386-linux-thread-multi /usr/lib/perl5/site_perl/5.8.0 /usr/lib/perl5/site_perl/5.8.0 /usr/lib/perl5/site_perl /usr/lib/perl5/vendor_perl/5.8.0/i386-linux-thread-multi /usr/lib/perl5/vendor_perl/5.8.0 /usr/lib/perl5/vendor_perl/5.8.0 /usr/lib/perl5/vendor_perl /usr/lib/perl5/5.8.0/i386-linux-thread-multi /usr/lib/perl5/5.8.0 . blib/arch blib/lib /usr/lib/perl5/5.6.0/i386-linux /usr/lib/perl5/5.6.0 .) at /usr/bin/vmware-cmd line 133.
/usr/bin/vmware-cmd requires the VMware::VmPerl Perl libraries to be installed.
Check that your installation did not encounter errors.

According to some stuff I found, this can be caused by clock skew when the server is installed. Of course, I don't care so much about why it happened (though it's good to know). How to fix it?
# vmware-config.pl

Please specify a port for remote console connections to use [902]

Stopping xinetd: [ OK ]
Starting xinetd: [ OK ]
Configuring the VMware VmPerl Scripting API.

Building the VMware VmPerl Scripting API.

Using compiler "/usr/bin/gcc". Use environment variable CC to override.

Installing the VMware VmPerl Scripting API.

The installation of the VMware VmPerl Scripting API succeeded.

The configuration of VMware ESX Server 3.5.0 build-123630 for this running
kernel completed successfully.
And that cleared it up.

smbclient Woes on ESX

I'm writing a backup script for our ESX hosts, and I ran into a problem connecting to the Windows server.

After opening the firewall with
esxcfg-firewall -e smbclient

I still got an error attempting to connect to the Windows machine:

[root@ESX backups]# smbclient -U DOM\\adminusers //server01/backups password
Packet send failed to 10.4.136.255(137) ERRNO=Operation not permitted
Connection to server01 failed

Okay, so the firewall is blocking broadcasts (that ".255" at the end). The fix was to eliminate the need for the broadcast: smbclient was unable to resolve the IP address given just the hostname "server01", and was attempting a NetBIOS broadcast (I think).

The solution was to use the FQDN of the server:

[root@ESX backups]# smbclient -U DOM\\adminuser //server01.example.com/backups password
Domain=[DOM] OS=[Windows Server (R) 2008 Enterprise 6001 Service Pack 1] Server=[Windows Server (R) 2008 Enterprise 6.0]
smb: \>

Friday, June 19, 2009

OpenNMS as a Windows Service

[I've updated this some, to correct errors and be more thorough]

I'm doing some testing with OpenNMS, to see if it is at least good enough. So far, it is.

One of the complications is that it is a Java app. This would be fine, except it doesn't ship with a means of running it as a Windows service. All in all, it wasn't too bad. I got it running on Windows 2003 Server - I haven't tried 2008 yet.

All of this assumes that you have OpenNMS itself working.

I started with the instructions here, using "Method One".

The abbreviated version of those instructions: inside the downloaded .zip file, there's a directory structure which mirrors the one in your OpenNMS directory.

Unzip that to someplace other than your OpenNMS directory - I know how you people are - so we can make some changes.

Delete the ./src directory. If you're reading this, you aren't using that.

In the ./bin directory, there's a bunch of batch files. I renamed them "WrapperInstall.bat", "WrapperStart.bat", etc., because that's just easier to keep track of.

In ./conf, there's a file named wrapper.conf. Try using the one below, but double-check the directory names...

#********************************************************************
# Wrapper License Properties (Ignored by Community Edition)
#********************************************************************
# Include file problems can be debugged by removing the first '#'
# from the following line:
##include.debug
#include ../conf/wrapper-license.conf
#include ../conf/wrapper-license-%WRAPPER_HOST_NAME%.conf

#********************************************************************
# Wrapper Java Properties
#********************************************************************
wrapper.java.command=C:\Program Files\Java\jdk1.6.0_14\bin\java
wrapper.java.additional.1=-Xmx256m
wrapper.java.additional.2=-Dopennms.home="C:/PROGRA~1/OpenNMS"

wrapper.java.mainclass=org.tanukisoftware.wrapper.WrapperSimpleApp
wrapper.app.parameter.1= org.opennms.bootstrap.Bootstrap
wrapper.app.parameter.2= start

wrapper.java.classpath.1=C:/Program Files/Java/jdk1.6.0_14/lib/tools.jar
wrapper.java.classpath.2=../lib/wrapper.jar
wrapper.java.classpath.3=../lib/opennms_bootstrap.jar

wrapper.java.library.path.1=../lib

wrapper.java.additional.auto_bits=TRUE

#********************************************************************
# Wrapper Logging Properties
#********************************************************************

wrapper.console.format=PM
wrapper.console.loglevel=WARN
wrapper.logfile=../logs/wrapper.log
wrapper.logfile.format=LPTM
wrapper.logfile.loglevel=INFO

wrapper.logfile.maxsize=0
wrapper.logfile.maxfiles=0

wrapper.syslog.loglevel=NONE

#********************************************************************
# Wrapper Windows Properties
#********************************************************************

wrapper.console.title=OpenNMS

#********************************************************************
# Wrapper Windows NT/2000/XP Service Properties
#********************************************************************

wrapper.ntservice.name=opennms

wrapper.ntservice.displayname=OpenNMS
wrapper.ntservice.description=Network Management System

wrapper.ntservice.starttype=AUTO_START
wrapper.ntservice.interactive=false

Copy all of the wrapper files into their corresponding OpenNMS directories. If the destination directory doesn't exist (./conf), create it.

Almost there...

Go into the ./bin directory of your OpenNMS installation, and double-click the WrapperInstall.bat file.

Now, you should be able to click on WrapperStart.bat (you did rename it, right?), or go into the Services MMC and start it from there.