MySpace Openly Selling User Data

Per this /. post, MySpace is now openly selling user data. If you don’t think Facebook et al. can and will (and do) do the same when it benefits them, you’re sorely mistaken.

“one-time leading social network is now selling user data to third party collection firms. From the article, the data that InfoChimps has listed includes ‘user playlists, mood updates, mobile updates, photos, vents, reviews, blog posts, names and zipcodes.’ InfoChimps is a reseller that deals with individuals and groups, from academic researchers to marketers and industry analysts.”

Posted in Life, Work | Leave a comment

On Leadership

[NOTE: This essay was originally written for a now-defunct publication, and was never published. The exclusivity period granted to them has expired, and it is being reprinted with minor edits.]

It’s rare that I quote the US Military; however, they know a thing or two about Leadership. According to FM 22-100, there are 14 traits to being a good leader (if you’re an NCO or junior officer, you’ll recite LDRSHIP and only count 7; if you’re a senior officer, you’ll just smile). Whether you’re leading troops in combat, a multinational corporation in business, a college through its mission, a department in support of a business, a team within a department in support of a mission, or even a trek on a hike or climb, these traits are the same. They’re not all necessary all the time, nor does an effective Leader need all of them, but the vast majority are critical. While the traits themselves are purloined, the verbiage is original.

Posted in Life, Work | 1 Comment

Nagios Tray 3

The long-awaited release of Nagios Tray 3 has arrived. This version has been gutted and large swaths rewritten in Visual Basic 2010 (up from VB6), greatly reducing size, memory usage, code complexity, and dependencies. The .NET 3.5 runtime is required. Nagios Tray 3 is configuration-compatible with Nagios Tray 2, providing a smooth transition. The source, as always, is included as “source.zip” in the installer.

3.0.1.0

  • Now runs on Windows XP, Vista, and 7, or on any server platform that can run .NET Framework 3.5 or newer.
  • Completely overhauled, moving code from VB6 to VB.NET 2010: The equivalent of upgrading from Windows 98 to Windows 7 in one swoop.
  • Large swaths of code replaced with API or .NET equivalents, reducing complexity greatly.
  • Only one non-framework dependency remains, and the IE widget is no longer used.
  • Massive performance improvement.
  • Includes a speechlib redistributable to ease deployment. Speech packs may still be needed for additional voices.
  • The NagiosTray icon mouseover now displays only the first 63 characters of the current status text, instead of the first 127 it used to. This is a Microsoft tray-icon tooltip length limitation in .NET, not mine. No complaining.
  • Moved code that converted bitmaps to icons from the timers into the loader with some global variables, saving a few cycles at the expense of a few bytes of RAM, but hopefully appeasing GDI when the renderer is off.
Posted in Uncategorized | 2 Comments

Go

As you may or may not have heard, some monkeys at Google have been working on a “new” language called Go. Several features of the language interested me, and as such I’ve spent around 200 hours writing and debugging in it this year. The main draw for me was its intrinsic concurrency model: a lot of the large systems I write have many components, all of which need to talk to each other, all of which need to share data, all of which need to do this very fast. Go channels and goroutines are built-in mechanisms for exactly that kind of inter-component communication, and they interest me. Also, the fact that Go is statically typed with enough inference to feel dynamic, yet still compiled (unlike Perl) and readable (unlike Python), was intriguing.
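
As a minimal sketch of that model (the names, counts, and timings here are illustrative, not pulled from any real system of mine), a few goroutines can share a job queue and report back over channels:

    package main

    import (
        "fmt"
        "time"
    )

    // worker is one component of a larger system: it pulls jobs from the
    // shared in channel, "processes" them, and reports on the out channel.
    func worker(id int, in <-chan int, out chan<- string) {
        for job := range in {
            time.Sleep(10 * time.Millisecond) // stand-in for real work
            out <- fmt.Sprintf("worker %d finished job %d", id, job)
        }
    }

    func main() {
        jobs := make(chan int, 8)       // buffered channel of work items
        results := make(chan string, 8) // buffered channel of results

        // Start three concurrent workers sharing the same job queue.
        for id := 1; id <= 3; id++ {
            go worker(id, jobs, results)
        }

        // Feed the queue, then close it so each worker's range loop ends.
        for job := 1; job <= 6; job++ {
            jobs <- job
        }
        close(jobs)

        // Collect one result per job.
        for i := 0; i < 6; i++ {
            fmt.Println(<-results)
        }
    }

The appealing part is that no explicit locking is needed here: ownership of each job passes through the channel, which is precisely the component-to-component plumbing described above.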

A lot is missing that I consider critical for any modern programming language. One of the largest omissions thus far is a good regular-expression library. There are a LOT of nice features for text processing already, and once a proper regular-expression library is built in, hopefully PCRE (pleasepleasepleasepleasepleasepleaseplease), that will really enhance the value.

But a lot of really nice features are there, including parallelization, ridiculously flexible and powerful interfaces, and a super-clean syntax that is quite pleasurable once you get your head around it and stop programming the long way.

I’m certainly not moving to Go for any real coding, but it is coming along very well, and I encourage programmers, hobbyist or professional, to give it a once-over. It’s refreshing when a development team seems to be very interested in making a clean language without preconceptions, something that hasn’t been done [well] in probably 20 years or so.

Posted in Linuxy, Opinions, Products | Leave a comment

Generating Work

Accidents happen. Every seasoned analyst, admin, or engineer I know has typed ‘reboot’ in the wrong terminal window, deleted the wrong file, or created the odd network loop. These are generally accidents: what was being done was judgmentally sound, but something wasn’t quite right.

When you omit the sound judgment, however, you haven’t had an accident anymore; you have generated work. In the world of geeks, accidents are things everyone laughs about after the fact; generating work will usually result in no one laughing with you ever again. Time is precious, respect doubly so, and making a bad decision that costs others’ cycles consumes both.

For the last few weeks, we’ve had some electrical contractors in our office building replacing our 1970s-vintage fire alarm system with something modern. Observing their group dynamics was fascinating, and reinforced the same principles. One guy who set off the alarm system twice in one morning because of bad decisions wasn’t invited back after a couple of days. Another guy who checkpointed his thoughts with the more seasoned crew before blundering was given more leeway. A third who didn’t seem to know how to do his job, and was constantly requesting help and making bad decisions that required others to fix his shit, was asked at one point “are you really an electrician?” and was generally ostracized by the senior crew.

I am a firm believer in the old adage “there are no stupid questions”. Without hesitation I offer my time to all sorts of people who are interested in learning, check-pointing, advancing, or honing their knowledge in an array of topics. I enjoy pedagogy and dialogue: most seasoned polydisciplinarians do, especially those who are also autodidacts. Form intelligent questions, ask intelligent questions, save your reputation, expand your knowledge, develop sounder judgment, don’t generate work. Ask.

Posted in Life, Opinions | Leave a comment

Chicken Tikka Masala

This Indian classic is very easy to make. This is not the easy version, but instead the amazing Matt-went-home-two-hours-early-to-cook-the-feast gourmet version, heavily adapted from numerous sources.

3 lbs boneless chicken breasts
1 lemon, cut in half
1/4 cup ghee, melted
4 large garlic cloves, chopped fine
thumb-sized piece of fresh ginger, peeled and chopped fine
2 tbsp ground paprika
1 tsp ground cinnamon
1 tbsp ground cumin
1 tsp ground coriander
1/2-1 tsp ground chili powder
1/2 tsp ground cloves
1/2 cup plain yogurt
4 drops red food coloring
2 drops yellow food coloring
2-3 small (or one large and seeded) chili peppers, chopped fine
14 oz can diced tomatoes
1 cup heavy cream
1/4 cup fresh cilantro (chopped)
salt
pepper

Preheat oven to 400F (you will be using the top rack). Place cleaned chicken in a large glass bowl and stab repeatedly with a paring knife. Rub in 1/2 lemon and 1/2 tsp salt. Add in 1/2 of the chopped garlic, 1/2 of the cumin, 1 tbsp paprika, and all of: ginger, yogurt, food coloring, cinnamon, coriander, chili powder, cloves. Mix with your hands until everything is evenly coated up to your elbows. If prepping ahead, this mix can go into the fridge for up to a week just fine.

Put chicken on a cookie cooling rack over a solid, rimmed cookie sheet (drippings on the bottom of the oven are not fun). Some people claim a roasting pan works too. *shrug* With a turkey baster, or a spoon if you’re boring, splurt 1/2 of the ghee over the top of the chicken mess. Bake for 45 minutes. Broil for 10-20 minutes until the top coating is visibly blackening in spots (not burning!!). While broiling, make the sauce (next paragraph).

In a large cast iron skillet, heat the remaining ghee over high heat until drops of water cause sizzling. Add remaining garlic and chili pepper, sautéing for 30 seconds or so. CAREFULLY add cream and diced tomatoes (with liquid) – the ghee will be hot and the addition of liquid may cause flashing!! Stir in remaining spices except cilantro (a few grinds of salt and pepper, too). Reduce heat to medium and simmer uncovered, stirring regularly, for about 10 minutes or until the sauce is visibly thicker.

After the chicken is done, remove and allow to cool for a couple of minutes. Cut into small chunks and add to the sauce. Cover and reduce heat to low, cooking about 5 minutes.

Serve with rice and bread, sprinkling cilantro and a lemon wedge around plate edges for garnish and extra seasoning.

Serves 4. Prep time about 1 hour (serial). Cook time 70-85 minutes.

Posted in Recipes | 3 Comments

The Next Five Years of Storage

[NOTE: This essay was commissioned by a client in December 2006. It’s the third in a series of old-yet-relevant position papers whose exclusivity has expired and that I’m editing and posting. Things for the next five years look “similar”. There is no formal “conclusion”, as this is one section of a larger piece.]

Over the next five years, gross storage needs will double every other year, sparked by industry trends that avoid deleting anything, ever; continued bloat in software programs; increased user demand for larger-file storage; increased user demand for indefinite storage; increased user, corporate, and industry expectation of system-side backups and frequent snapshots; and the enabling factor of meteoric disk sizes at paltry disk costs.
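
To put a rough number on that rate (my own back-of-the-envelope figuring, not a figure from the original commission), doubling every other year compounds to nearly a sixfold increase over the five-year horizon:

    S(t) = S_0 \cdot 2^{t/2}, \qquad S(5) = S_0 \cdot 2^{5/2} \approx 5.7\, S_0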

Since the late 1990s, we have seen rapid acceleration of infinite data life. While storage vendors will use terms such as “information life-cycle management”, “information archiving” or “data warehousing”, they all converge on the premise that corporate data life is no longer finite. The value of this is dubious, but irrelevant to argue: financial workers expect to be able to look at historical data for modelling purposes; draft and product workers expect to be able to look at long-dead projects that might now be of value with new knowledge; and in the throes of bankruptcy, competent managers (and lawyers) will want to mine the archives for something… anything that may provide some value. Everything your organization has ever known is expected to be retained, indefinitely.

The average 10-page MS Word document in 1995 was 13K in size. The average 10-page MS Word document in 2006 is 1.4MB, roughly a hundredfold increase for the same ten pages. While that size may still seem small, it’s indicative of a growing trend of software generating vastly wasteful content because it can. Software vendors don’t need to worry about their data fitting onto floppies anymore, so they don’t. Multiply this across dozens of applications, add in media, and you have truly huge data files with only a few pages of actual content.

Similarly, users want ever-larger files. Gone are the days of compressing graphics, video and audio to the Nth degree: users want full-quality content. They don’t want a 120×120 “thumbnail” video; they want something that takes up some real estate on their oversized monitor. As bandwidth increases, so will the user desire for better content, faster. They then want to save that same content to their network volume. They want it backed up in case of catastrophe (or their own error). What was a 3MB MP3 file is now a 45MB FLAC or WAV file sitting in your database.

The increase in user-end space (desktop hard disks) has led users to demand not only more and more space from their storage providers, but also indefinite storage. Users no longer have to selectively delete their e-mails to stay in a predefined space, so they keep them all, forever. They expect the same from the rest of their digital attics: they expect every bad poem, doodle, patent-idea-on-a-napkin, picture of their grandkids, etc. to be immediately available, forever.

Forever. Even if your disks die. Even if they accidentally delete them. Even if a meteor pummels your datacenter. The old standard of weekly backups has long passed the borders of Being Prudent, travelled through the Fields of Marginally Acceptable, and entered the Mountains of Irreparable Harm to Your Reputation. Users, customers, regulators, etc. are barely tolerant of losing a day of data, and this will get worse. In the next half-decade a truly monumental shift into mixed-media backups, near-real-time data snapshots, and 100% protection of data assets will be fully realized, requiring several multiples more mixed-media backup storage than live data storage.

On the up-side, disk sizes are sky-rocketing, costs are plummeting, and the reliability of the new serial ATA (SATA) architected drives has come up to a level that allows anyone to build in or expand networked disk with a trivial investment. A new generation of storage vendors is coming up and challenging the old way of thinking about networked storage, adopting technologies with more agility than their behemoth competitors. We’re quickly on our way to 1TB disk drives, flash-based storage continues to be refined and is nearing enterprise grade, holographic storage is being commercially realized for some applications, and all of these technologies are driving the cost per megabyte down.

Posted in Architecture, Opinions, Work | Leave a comment

The Next Five Years of Bandwidth

[NOTE: This essay was commissioned by a client in December 2006. It’s the second in a series of old-yet-relevant position papers whose exclusivity has expired and that I’m editing and posting. Things for the next five years look “similar”, though scaled up in some areas. There is no formal “conclusion”, as this is one section of a larger piece.]

Over the next five years, datacenter bandwidth will level off for a bit. With the 10GigE standard behind us we can finally pull our backbones up to a level where they’ll be able to breathe easier for a while. Storage speeds are still being gated by the storage devices themselves, and until either solid-state media becomes cost effective or disks rotate twice as fast as they do now, that isn’t going to change much. Aggregating virtual systems is actually causing an interesting bandwidth phenomenon that I’ll address later. Regardless, a 10Gig or Nx1Gig backbone should be able to breathe well for the next half-decade. Plan for year-over-year demand increases of 5-7%.

Desktop network speeds have been about the same for the last five years, and will largely remain unchanged. A 32-bit computer system running a commercial desktop operating system still has too many architectural limitations to make use of more than 60-85Mb/s of bandwidth. While some vendors are shipping 64-bit processors, they generally are using bus architectures that aren’t that wide, thus gating peripheral speeds back to 32-bit levels. In the next five years that will clean up a bit, and 64-bit “extensions” to 32-bit processors will become more commonplace, but they still won’t impact the network noticeably, due largely to OS and bus architecture issues.

Environments consolidating onto virtualized systems are seeing an interesting gross decrease in datacenter network bandwidth use. Not surprisingly, they’re also seeing peak utilization well above what they had prior to consolidation. The latter is easily explained: virtualized systems generally “netboot” their OS from the storage network or a bootserver, and now more than ever embrace networked storage completely. The gross decrease was unexpected given the higher demands on the network, but is explained by architectural constraints. We’re now seeing 10-15 virtual servers sharing one or two network connections, where previously each had one or two of its own. This has something of a levelling effect on network use, but isn’t dramatically impacting service performance as one would expect. The network is more important in these environments, but as a whole not as taxed.

It was largely believed that mobile “broadband” availability and use would be much higher by now, but we have yet to see a real platform for it. The Palm Treo series is getting an overhaul “soon”, and rumored platforms from Google and Apple may change that landscape. In general, even if fully realized, the network demands of these users will have little impact on the greater network, or on datacenter network needs. The next generation, “4G”, will change that, but I don’t expect to see that kind of horsepower in a phone until late 2010 to 2012: the processors are still just too slow.

What will change dramatically is bandwidth access for remote users. While not directly impacting the datacenter, we’re going to see dramatic growth in the cable/DSL/satellite “broadband” space. Internet-facing applications may see a 20-30% rise in client demand as users, with on the order of 200-250% more bandwidth at home, become less tolerant of waiting for applications to load and come to expect “faster” service. It is expected that OSP asymmetrical provisioning will continue.

Posted in Architecture, Opinions, Work | Leave a comment

Disposable Appliance Computing

[NOTE: This essay was commissioned by a client in February 2007. It’s the first in a series of old-yet-relevant position papers whose exclusivity has expired and that I’m editing and posting.]

The hosted systems industry has reached another critical point. Several years ago we eschewed large mainframe systems in favor of commodity servers that could divide load and work together to provide services without single-vendor lock-in and without a single piece of “iron” waiting to fail. The computing power of a $2,000,000 mainframe was dwarfed by the implementation of $80,000 in commodity hardware. With virtualization coming of age (Intel and AMD putting hooks into their processors and chipsets to allow virtualization to be fully realized, not just a software-only hack), we’ve seen those same commodity systems hosting dozens of virtual systems reliably and at near-metal efficiency. The cost per virtual system is rapidly approaching zero.

New offerings from Sun, IBM and HP/Compaq are emphasizing something that “the server guys” haven’t needed to care much about: infrastructure. Historically, your network engineers and analysts worried about interconnection, route redundancy, and ensuring the bits could flow where they needed, reliably and sufficiently; your system engineers worried about everything up to the point the bits hit “the network”. Moving forward, that is almost a debilitating dichotomy. Traditionally, in the post-mainframe era, a physical system did one or two things, and its exclusion from the network or its under-performance on the network was a minor issue. With a physical system possibly hosting dozens of virtual systems (all with unique networking requirements, cross-talk requirements, and of course networked storage requirements), your system engineers must be well-versed in network engineering. “The Network Is The Computer” is not just a Sun tag-line, or a lame cliché. We’re now fully realizing the potency of that statement. Every system offering from the Big Three contains significant “infrastructure” features: network features.

By pushing more and more network features into server systems (IBM servers with Cisco “swrouters” built in, for example), the server itself has become more important and less relevant at the same time. Keeping it up and running well will require a new kind of system engineer because “the box” is now more complex; but at the same time, collections of “boxes” should be able to self-heal and adapt to the failures of others. Each system has now become disposable.

A large swath of the architectural literati are already deploying quantities of self-healing farms that take over the work – the very virtual machines – of failed or failing physical systems. Virtualization on its own wasn’t a game-changer. Virtualization with processor support and recognition sparked real potential. Virtualization on top of “infrastructure”-aware (e.g. heavily networked) physical systems has dramatically shifted the value of hybrid “networked systems engineers”, raised the bar for the “server guys” to get up to speed on the real internals of networking, and provided the unprecedented opportunity to deploy redundantly resilient systems that can in practice achieve five to seven “nines” of reliability (99.999% to 99.99999% availability, or roughly five minutes down to a few seconds of downtime per year).

Posted in Architecture, Linuxy, Opinions, Work | Leave a comment

Anniversary Dinner

My parents’ anniversary was September 10th. While their son is a [culinary] genius, they have only allowed me to cook for them about three times in my life. My mother doesn’t approve of all sorts of things I do, including quite a bit in the kitchen. Well, anyhow, this year they accepted my offer to take care of their celebratory evening. Below are the recipes.

Appetizer: Wine Poached Pears

The recipe is from an old friend of mine (from memory), melded with one from a cookbook for precision.

1 1/2 cups red wine or Burgundy
1 cup raw sugar
2 tbsp lemon juice
2 cinnamon sticks
6 cloves (whole)
4-6 pears (whole, peeled)

In a pot, combine all ingredients except the pears over med-high heat until boiling. Reduce to low and simmer, covered, for 3-5 minutes (the liquid should be noticeably thickened). Lop off the bottom of each pear so they will sit upright on the plate for serving later.

Add the pears on their side if possible, and cook uncovered for about 15 minutes or until tender, turning frequently.

Remove the pears to small serving plates, upright. Bring the reserved sauce to a boil, uncovered, over med-high/high heat, until reduced to a thick glaze, ~5-8 minutes. Remove cinnamon sticks and cloves with tongs. Drizzle over pears and serving plate. You may serve chilled if climate dictates.

Serves: 6. Prep time: ~2 minutes. Cook time: 25 minutes.

Salad: Goat’s Medley w/ Walnuts

I can’t overstate the value of baby spinach, dandelion greens, and clover sprouts. Simply the best salad you can make. The ratios can be varied for taste.

1/2 lb baby spinach
1/2 lb dandelion greens (shredded or chopped)
1 bunch radishes (sliced, chopped, or shredded)
2-3 carrots (sliced or shredded)
1/2 lb sun-dried tomatoes
1 1/2 cups freshly cracked walnut pieces

Combine. Toss. Serve with your favorite version of raspberry vinaigrette.

Serves: 6-8 humans, or 1 goat or llama. Prep time: 15-20 minutes.

Entree: Peppercorn-rubbed Filet Mignon

Purists and book-wise chefs would crucify you for rubbing down a tenderloin filet with anything more than simple salt. I’m neither.

Peppercorn Rub

Regular peppercorns can be subbed in here, but it changes the dynamic. This rub is great on any cut of beef. Double, triple, etc. as needed. Keeps well in a sealed container for several weeks; omit the sugar until just before use to keep it for several months.

2 tbsp Szechuan peppercorns (whole)
2 tbsp coriander (whole)
1/2 tbsp raw sugar
1 tsp allspice (whole)
1 tsp sea salt

In an iron skillet, over a grill or med-high burner, cook the Szechuan peppercorns and coriander until you can smell them well, about 2-3 minutes. Remove from heat and combine with the rest of the ingredients in a food processor or spice grinder. Pulse-grind until evenly blended but coarse.

Coats 4-6 filet mignons lightly. Prep time: ~2 minutes. Cook time: 3 minutes.

Grilled Filet Mignon

If you don’t know the difference between tenderloin cuts, that’s ok. Filet mignon is the tip of the tenderloin: very small, and very pricey. I recommend 2″ cuts because that yields about 7 oz of meat. If you can’t afford all mignons (will run you about $50 for 6 little cuts) then you might want to purchase a whole tenderloin, which any reputable butcher will gladly slice up for you: 1″ cuts of the upper tenderloin with a single 2″ mignon at the bottom will run you about $40, but net 6 good-sized steaks plus the mignon. Do not buy packaged filets wrapped in bacon. If you’re going to pay for tenderloin, make sure it’s being chopped fresh just for you.

4-6 2″ thick filet mignon cuts of tenderloin beef
or
4-6 1″ thick upper tenderloin cuts

Preheat grill on high. Trim visible fat from filets. Rub down filets by hand with the peppercorn rub, coating evenly but lightly. Turn grill down to medium. Flip the meat every 5 minutes. It should be served medium-rare (135 degrees F), or medium (145 degrees F) at most… but well-done brutalizations are also possible (165 degrees F). All temperatures are taken at the center of the meat.

Serves: 4-6. Prep time: ~5 minutes. Cook time: varies by thickness and doneness, 7-25 minutes

Dessert: Raspberry Tart

This lovely was made from scratch by my lovely, following a copyrighted recipe I can’t legally reprint here. Excellent ending to a great meal.

Posted in Recipes | Leave a comment