
Wednesday, March 20, 2013

UPS breaks my guitar, can't figure out how to pick up the return package

The following is an email I sent to UPS tonight after complaining about their customer service via Twitter.

Hi, 
I was invited to send this via the UPSHelp twitter account.
Tracking number: XXXXXXXXXX to XXXXXXXXXX
Last week I received a damaged package from UPS (XXXXXXXXXX). The item is a guitar, and it arrived with a quarter-sized hole through the face of the guitar. The outer box had a gash in it that tore all the way through the inner box and into the guitar.  
Now, mistakes happen, so no big deal. After working it out with the shipper, I received the return document and then called UPS. Here is where my multiple problems lie.
  • I was told the package had to be inspected. As a result, I was specifically told I could not bring the package to the service center; that I had to wait for an inspector. I was told the inspector would document the damage, wrap the package and take it back. 
  • I was told that a UPS inspector would come after receiving the pickup order. They did not come yesterday. Okay, not a big deal, I called today and someone came promptly. The customer service person didn’t know the procedure off hand, but called me back in less than ten minutes to let me know someone would come by my house.
  • A regular delivery driver came and he acted stunned when I told him my expectation that the package needed to be inspected. He called his supervisor (Chuck), explained that the package was not ready for pickup, and then offered to let me talk to him. 
  • The first thing the supervisor asked was, “Is the package wrapped?” Now I’m kind of annoyed because the driver doesn’t know the procedure, the supervisor doesn’t seem to know the procedure, and now I’m feeling interrogated. Look, I’m not UPS, I am justifiably ignorant of the procedure, but I need help. Chuck told me the driver couldn’t wait around (which of course I understand), so I asked Chuck to send a delivery supervisor to pick up the package and he agreed to send someone. 
It’s now 90 minutes past the time I called today, and I don’t know what else is supposed to happen. 
Am I supposed to wrap the package? I have it half wrapped now: kind of hedging my bets I guess. Is someone going to come by? It’s getting late. Should I have had the package ready to go? Could I have taken the package to the customer service center after all? 
My biggest question is: why doesn’t UPS have better competency at handling this situation? It must happen dozens of times a day given the volume of packages UPS delivers. Why aren’t there procedures in place to ensure high customer service when UPS makes a mistake by damaging a package?  
What’s going on here? 

Saturday, March 9, 2013

Command line interface for Google Analytics


One of my job responsibilities is configuring and running reports from Google Analytics. Google Analytics is great, and it's free, but there are some limitations that we sometimes need to work around in order to generate helpful data. The biggest limitation is Google Analytics' tendency to report on sampled data. Our site receives millions of visitors each month, so the samples can be a small fraction of the total traffic. The longer the date range for a report, the more sampling error can creep in. Shorter reports have less sampling bias, but covering the same period then requires running multiple reports. Multiply this by five or six profiles and suddenly you're talking about a lot of labor.

A couple of years ago I wrote a little bash script that would run this routine for me; however, Google has deprecated its old Google Analytics API, and my script required a bit of manual tweaking whenever I switched Google Analytics profiles. I needed to rewrite the script using Google Analytics API 3.0 and, at the same time, make it more robust so that I could write ad hoc reports without tweaking the source each time. I started from scratch using node.js, assuming this work might be integrated into a future analytics dashboard.

OAuth 2.0

The Google Analytics API 3.0 recommends authenticating with OAuth 2.0. I used bsphere's gapitoken package to do some of the heavy lifting. The application also had to be registered with the Google API console as an installed application.

GOTCHA: although OAuth was configured correctly, a 403 error ("User does not have permission to perform this operation") was thrown when the application ran. It turns out the application was registered correctly under my Google email address, but I assumed that was enough to have permission to query the data. NOPE! The application's service account address actually needs to be added as an authorized user to each Google Analytics profile in order to perform the query.
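For reference, here is roughly how the token setup looks with gapitoken. This is a sketch from memory: the scope is the real Analytics read-only scope, but the service account address and key file name are placeholders, and the option names may differ slightly from the package's README.

var GAPI = require('gapitoken');

var gapi = new GAPI({
  iss: 'your-service-account@developer.gserviceaccount.com', // placeholder
  scope: 'https://www.googleapis.com/auth/analytics.readonly',
  keyFile: 'privatekey.pem' // placeholder path to the downloaded key
}, function (err) {
  if (err) { return console.error(err); }
  gapi.getToken(function (err, token) {
    if (err) { return console.error(err); }
    // the token goes in an "Authorization: Bearer <token>" header
    // on each Core Reporting API request
    console.log(token);
  });
});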

CLI arguments

Command line arguments are handled using trentm's dashdash package. I picked some reasonable defaults for report dimensions, metrics, and profile ID.

The big advantage to running from the command line is that multiple queries with only slightly different parameters can be run back to back without having to futz with a user interface. With dashdash I'm also able to easily provide multiple dimensions or metrics in a single request.
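A minimal sketch of the option parsing, with hypothetical option names (the real script's flags may differ):

var dashdash = require('dashdash');

var options = [
  { names: ['profile', 'p'], type: 'string', help: 'Google Analytics profile ID' },
  { names: ['dimensions', 'd'], type: 'arrayOfString', help: 'report dimensions' },
  { names: ['metrics', 'm'], type: 'arrayOfString', help: 'report metrics' }
];

var parser = dashdash.createParser({ options: options });
var opts = parser.parse(process.argv);
console.log(opts.profile, opts.dimensions, opts.metrics);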

Profile configuration

Our organization transitioned to a unified Google Analytics profile almost two years ago, and sometimes we want to run queries that span that transition. A JSON file stores our profile information. If a single property (say, the main website) has multiple associated profiles, the profiles are stored as key-value pairs where the key is a JSON-encoded date signifying the first day the profile was available and the value is the profile ID. This allows queries to be run on that site as though the Google Analytics data were contiguous in one profile (you need to be careful about which metrics and dimensions are used, though!).
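The structure looks something like this, with made-up dates and profile IDs:

{
  "main-website": {
    "2011-06-01": "11111111",
    "2013-01-01": "22222222"
  }
}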

Dates

Speaking of dates, this application is designed to run a separate report for each month in the time range. Months have different numbers of days, and February is its own special case. JavaScript has a clever way of working around this issue (see StackOverflow). Basically, constructing a new Date with a day parameter of zero, e.g. var d = new Date(2011, 1, 0), results in a Date object set to the last day of the previous month (here, January 31, 2011, since month 1 is February). Nice! Using this, I made a function that takes a date, finds the month, and returns the first and last day of that month.
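Something like this minimal sketch (monthBounds is my name for it here, not necessarily what the script calls it):

// Given any date, return the first and last day of its month.
function monthBounds(date) {
  var year = date.getFullYear();
  var month = date.getMonth();
  return {
    first: new Date(year, month, 1),
    // day zero of the next month rolls back to the last day of this month
    last: new Date(year, month + 1, 0)
  };
}

// monthBounds(new Date(2013, 1, 14)) -> Feb 1, 2013 and Feb 28, 2013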

Throttling asynchronous connections

Things were working well until I started issuing reports spanning more than 10 months. Google Analytics objected to that many simultaneous connections, so I used queue from the async package to slow things down a little.
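In sketch form, where runReport and monthlyQueries are placeholders of my own rather than the script's actual names:

var async = require('async');

// hypothetical stand-in for the function that issues one Analytics request
function runReport(query, callback) {
  // ... call the Core Reporting API here ...
  callback(null);
}

// process at most two report requests at a time
var q = async.queue(runReport, 2);

var monthlyQueries = [ /* one query object per month */ ];
monthlyQueries.forEach(function (query) {
  q.push(query);
});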

Tests

I started out writing tests in nodeunit, but as I ran into walls I stopped writing tests. I'm frankly confused by the variety of testing platforms available for node, and I'm not sure how to write solid tests for asynchronous functions. If you have feedback on a good way to do this, please let me know.




Wednesday, February 27, 2013

Working Grunt into a primitive build workflow, part 1

I'm sold on grunt, but I'm responsible for maintaining legacy JavaScript at work, so now I'm trying to organize that code so I can use grunt as part of a build process.

The Old Way: download a fresh copy of our global.js script from production every time before commencing with a change. Make the modification by appending a new $(document).ready() block to the end of the file. Cross my fingers, and upload to production.

This is unsustainable. We are using JavaScript more and more every day, and depending on a household deity (me) to keep everything together by using hopes and prayers is going to fail dramatically one day.

Toward a permanent solution, I looked into replacing our hand edits with a build process. I'm familiar with ant from a previous development position many years ago, but I wasn't satisfied with ant's support for web workflows. Enter grunt. Grunt is written in node, and is therefore installed through npm. There are dozens of plugins for grunt, but the ones I need are jshint, uglify, and concat. I'll want to add more as I settle in with grunt (particularly qunit), but first things first.

I've been using jslint on my code, but apparently my settings have been fairly permissive. After getting my grunt configuration set up (Gruntfile.js), I set up a task list that contained jshint, concat, and uglify. Well, jshint just about had a heart attack. Part of this is because of our organization's extensive use of jQuery—most of our code just assumes it exists. You can tell jshint that you're making such an assumption, but it needs to be specifically stated in a comment block before the code that attempts to use jQuery.
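For what it's worth, here is a minimal sketch of the kind of Gruntfile.js I'm working toward. The file paths are placeholders rather than our actual layout, and the jshint globals option is the config-file equivalent of a per-file /* global jQuery */ comment:

module.exports = function (grunt) {
  grunt.initConfig({
    jshint: {
      // tell jshint that jQuery exists as a global
      options: { globals: { jQuery: true } },
      all: ['src/**/*.js']
    },
    concat: {
      dist: { src: ['src/**/*.js'], dest: 'build/global.js' }
    },
    uglify: {
      dist: { src: 'build/global.js', dest: 'build/global.min.js' }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');

  grunt.registerTask('default', ['jshint', 'concat', 'uglify']);
};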

I could force grunt to ignore the jshint errors, but what's the point of using a build tool if you're going to start developing bad habits? Now that I have grunt installed, getting my code into better shape is my next step.

Once this is done we should have a nice, reliable way of maintaining our global.js code. After that comes working in unit testing with qunit. It's all for a good cause. :)


Monday, February 18, 2013

Twitter bot in node.js

Dairy Godmother makes really awesome frozen custard. They have a special flavor of the day which they publish on their "Flavor Forecast" calendar.

Since this was a long weekend, it seemed like the perfect project for learning some node.js, grunt, unit testing, and OAuth authentication: a Twitter bot built around their flavor calendar.

So things I learned:


  1. Writing unit tests for a module is tricky. Writing unit tests for asynchronous code is tricky. My tests are not very good at this point. I need to make 'em better. 
  2. Vis-à-vis the Twitter 1.1 API and node-oauth, this bug hung me up for a long time before I found this thread (a rough sketch of the posting call is below, after this list). Actually I need to go back and tweak this more. Hm. 
  3. Grunt is pretty awesome once it gets running, but it's still kind of confusing. I had a grip on it by Saturday; however, today a major new version was released. Doh!
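For reference, posting a status with node-oauth looks roughly like this. It's a sketch rather than the bot's actual code, and the credential strings are placeholders:

var OAuth = require('oauth').OAuth;

var oa = new OAuth(
  'https://api.twitter.com/oauth/request_token',
  'https://api.twitter.com/oauth/access_token',
  'CONSUMER_KEY', 'CONSUMER_SECRET',     // placeholders
  '1.0A', null, 'HMAC-SHA1'
);

oa.post(
  'https://api.twitter.com/1.1/statuses/update.json',
  'ACCESS_TOKEN', 'ACCESS_TOKEN_SECRET', // placeholders
  { status: 'Hello from the bot' },
  function (err, data) {
    if (err) { return console.error(err); }
    console.log('tweeted:', data);
  }
);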
Yeah. It works, but more work needs to be done. Since it's kind of a "fan" bot, I'd like it to be able to respond to the main Dairy Godmother Twitter account if it speaks to the bot. 

You can see the bot here: DGFlavor

I'll share my github repo once I do some QA and code review. 

 

Sunday, February 10, 2013

I am a responsive design skeptic

Responsive design came up in a few meetings at work this week. The most passionate advocate of responsive design presented some compelling arguments, but I remain a responsive design skeptic.

Responsive design is an approach to designing websites so that the content "responds" to the resolution of the visitor's screen. Mobile browsers, tablets, and desktop browsers all use the same HTML source, but the layout is adjusted through CSS and maybe some JavaScript magic.

Sunday, February 3, 2013

1996: building an art gallery with basic tools

This morning I was thinking about an artist who contracted me to design a website back in 1996. There was no off-the-shelf CMS to use, and JavaScript was not implemented in a way that made for a useful client-side technology. Although server side includes and CGI were available, the company I worked for preferred to avoid technologies that might put extra load on the web server.

The number of pages for the project was projected to be well over 100. Up until this point I'd been working on websites that were mostly under 10 pages—they were little more than digital brochures and an email link. Maintaining them by hand was a straightforward affair. But this project was going to require discovering a new approach.

As a self-taught designer and web developer, I didn't have a great approach to gathering requirements for the site. Looking back, I realize that the site should have taken a minimalist approach, but I thought it would be neat to have a textured brick background with some subtle lighting effects to make it appear as though the artist's work was hung on a wall. Yep, I was into skeuomorphism before skeuomorphism was cool.

The artist's work was arranged in five or six galleries, each composed of 20-100 works. She had slides of each painting. Using a slide scanner, the images were converted into high resolution jpegs and stored on my computer, an Amiga 3000.

The design specification was to create a homepage that listed the galleries, paginated galleries with thumbnails of the art, and a navigable slideshow for each gallery. This had to be managed with minimal SSI (header and footer only) and no CGI. Also, no CSS, which was nascent in 1996 and which I personally avoided until designing the Otakurama website in 2001/2002.

The first challenge was to convert 200+ images into thumbnail versions and slideshow versions. To tackle this I used ImageFX, which was the Amiga equivalent of Photoshop. The batch conversion capabilities of ImageFX were extremely powerful, so much so that even Photoshop CS3 isn't in the same league. I was able to use ARexx to perform all the image conversions and save them with a directory and filename schema that was compatible with the website architecture.

Next was building 200+ slideshow pages. For that I used an HTML preprocessor called hsc (HTML Sucks Completely), which I'm amazed to discover is still available in 2013 (and there's even a little hsc tutorial!). Using hsc, I was able to define templates for the paginated gallery pages and the slideshows and then assemble everything by running a script. The output of hsc is a complete website ready for deployment.

It took a few tries to get the process right, but eventually I got it down to the point where, if a site-wide change was necessary, I could just find the spot in the template where the HTML was defined, fix it once, recompile the site, and redeploy. I felt so clever at the time. :)

In retrospect, I understood the design of the site needed to be separated from the content. It wasn't the way I'd learned to build websites, but it was a much better approach. I wasn't able to communicate how powerful this development process was to my employer—heck, I didn't grasp the idea in full at the time myself—but I grew dissatisfied developing websites the "old way" and a year later left web development for a technical support position that paid better. Hindsight, 20/20 right?

Tuesday, January 29, 2013

How to batch convert using sips on Mac

Have you ever had a bunch of images that you needed to quickly convert into a different format? I have! The Mac has some great tools like GraphicConverter and Photoshop, but it also has a flexible command line image converter that comes with every Mac: sips, the scriptable image processing system.

I had a problem where I had a lot of web images in different formats (png, jpg, gif) and wanted to run a scenario of how much space I'd save by converting all the images to low-quality jpeg. The following bash script expects a wildcard argument (e.g., *.jpg, *.gif, *.*), creates a directory 'low', and writes low-quality versions of all the matching images into that directory.
#!/bin/bash
FILES="$@"
if ! [ -d 'low' ]     # if low doesn't exist, make it
  then mkdir 'low';
fi
for i in $FILES;
do
  # find the extension on files that may have multiple periods or
  # extensions like 'jpeg' ($NF is awk's last dot-separated field)
  EXT=$(echo "$i" | awk -F "." '{print $NF}');
  # convert image using 50% quality and save in 'low' folder as jpg
  sips -s format jpeg -s formatOptions 50% "$i" --out "low/$(echo "$i" | sed "s/\.$EXT\$/.jpg/")"
done

Monday, January 28, 2013

Lost Cat, Found

A lost kitty cat found its way to my mom's house about December 18 while I happened to be visiting for the holidays. We snapped a few photos of her and uploaded them to Craigslist. Mom also drafted up a few hand-written bills and pasted them up around her neighborhood.

We kept an eye on PetFinder for a few days but we didn't see a match. My mom swore she'd get rid of the cat, but she kept putting off bringing it to the pound.

Fast forward to today. I was checking my junk folder and saw a message about the cat sent two weeks ago! Oh no, it had slipped past. I called Mom to let her know, but by amazing coincidence the kitty's owners had managed to find out Mom had the cat, so they picked her up over the weekend. The cat and her family were reunited!

Heartwarming. :)

Friday, January 25, 2013

The challenge of legacy website maintenance

Following the departure of one of our web developers, I'm starting to get a closer look at how our weekly content is updated at work. Some of the data is tightly bound to the presentation layer, which requires manual updates. Although some content is built using a house templating system, there are still parts that are too complicated for the house template solution to handle.

The problem has the following aspects:

  • Information for the website updates is not centralized.
  • Metadata for updates is not centralized or even captured in a system. Some of it is in the heads of developers like me. 
  • The house template solution is deeply embedded in the workflow, but is not very flexible. 
  • The content updating is spread amongst multiple web editors and coordination is handled via scheduled workflow rather than any scripted process. 
  • A variety of clever workarounds can obfuscate the purpose of some weekly update items. 
Along with this list of challenges is the knowledge that our vendor is working on a next-generation CMS. The question becomes: should we try to fix our legacy workflows? The answer is, maybe.

Changing processes is costly. First, there's the cost of researching the existing process and its solution. Second, there's the development cost. Finally, there's the cost of implementing the new process. 

For me, with my hacker mentality, it's easy to see opportunities for improvement without taking into consideration the implementation cost. But what good does it do to make something "work better" if the people who are meant to benefit from the improvement have to climb a learning curve to take advantage of it? In the long run the tradeoff is probably worth it, assuming the long run is sufficiently placed in the future. 

The real challenge is not the existing workflow hiccups, it's knowing whether they're worth the expense to fix. The "can we do it" part is already obvious. Yes, we can streamline processes. Can we do it in a way that serves all the stakeholders? That's the real heart of the question. 

Sometimes it is best to leave a legacy process alone until the foundation can be replaced. As a developer my duty is to notice these opportunities and bring them to the attention of my team, and then to be sensitive and responsive to the other stakeholders. Just because something is possible doesn't make it right to do. 

Wednesday, January 23, 2013

Excel 2010 loading Excel 2011 file

Damn you, Microsoft. It's 2013, why does your software still operate as though it was built to cause frustration?

I've been keeping extensive project notes in Excel 2011. Normally I use Google Docs, but I wanted to take advantage of the more sophisticated row and column formatting options in Excel 2011.

This week I shared the location of the file with a co-worker. He opened the file, found the whole project blank (like, without even a single worksheet) and just figured I hadn't added any notes yet.

Today we learned that Excel 2010 didn't like opening the file that was made in Excel 2011. I tried saving it in old-fashioned .xls format. He still couldn't open the file. I was able to open it in Excel 2007, but even after saving it as both .xlsx and .xls, it still didn't open.

A little while after I'd given up trying he noticed that Excel 2010 had tried to pop up a window with the spreadsheet off screen. Like, as though he were running a virtual desktop, even though he isn't. And apparently Excel 2010 doesn't have a 'cascade window' menu item any more, so he had to do all sorts of contortions to get the spreadsheet to appear.

Microsoft, man, I sure hate you sometimes.

Sunday, January 20, 2013

Learning underscore.js

Playing with underscore.js this weekend to take a crack at solving one of the workflow challenges with the Science home page.

There are two news feeds on the home page, Daily News and Careers. The Careers feed is pre-generated on the back end using a house templating application. The Daily News feed is fetched via the Google Feed API and its markup is generated in JavaScript.

This is suboptimal for a couple of reasons. First, any changes to the markup must be made in both the JavaScript on the client side and in the house templates on the back end. To fix this we can use underscore.js's template function to define the template once. Deploying changes to the template could be done much more quickly, although it's questionable how helpful this would be since the templates aren't changed often.
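The idea in sketch form, assuming underscore is already loaded on the page (the field names are my own placeholders, not our actual feed format):

// define the headline markup once...
var headlineTemplate = _.template(
  '<li><a href="<%= url %>"><%= title %></a></li>'
);

// ...then render it against data from any feed
var html = headlineTemplate({
  title: 'Example headline',
  url: 'http://example.com/story'
});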

Second, the Careers data is not separated from its presentation, which makes reuse challenging. Considering the number of places on the site we use headline-type data, it would be nice if we could define a "headline" data format, and then any Science publication (Careers, News, the three journals) would only need to be responsible for publishing a data feed (in JSON) while underscore.js handles the templates.

In fact, with a little bit of metadata in the feed, we could use underscore.js's collection functions to highlight headlines depending on what type of article a visitor was reading.
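For example, something like this sketch (again assuming underscore is loaded, with hypothetical section values):

// hypothetical headline objects with a bit of metadata attached
var headlines = [
  { title: 'Story one', url: '/one', section: 'careers' },
  { title: 'Story two', url: '/two', section: 'news' }
];

// pull out only the headlines relevant to the article being read
var careersHeadlines = _.filter(headlines, function (h) {
  return h.section === 'careers';
});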

Adopting underscore.js to solve this particular problem may not gain us benefits to offset the development costs, but it's interesting to play with new ideas. It's hard to tell when one of them might take off.

Tuesday, January 15, 2013

Pirate Cinema

I bought the Humble Bundle e-book bundle a couple of months ago, and was finally getting around to Cory Doctorow's Pirate Cinema. Doctorow's book was one of the reasons I was excited about the bundle. Down and Out in the Magic Kingdom was amazingly compelling, and it even inspired a real-life effort to develop a reputation-based currency called whuffie. Cory Doctorow is a visionary.

It pains me to say this, but I could not make it through Pirate Cinema. I'm a reader. Heck, I just finished Middlemarch, weighing in at over 300,000 words. Doctorow is a political activist who focusses on Internet-related policies like loosening copyright law, protecting privacy, and so on. And he walks the walk: many of his works can be downloaded for free from his website.

Despite being sympathetic to many of his political positions, having them shoved in my face on every page for the first four chapters was exhausting. His characters hardly spoke about anything but Doctorow's politics. They weren't really even having dialogue; they just lectured at each other. Once I finished the third chapter and dug partly into the fourth, I started to skip around to see if there was just a rough rhetorical patch I needed to get through, but it looked like screed straight to the end. Such a bummer!

His older work is must-read speculative fiction. Skip this though.

Saturday, January 12, 2013

Backbone.js on Code School

Thanks to Code School, I was able to work through some Backbone.js tutorials.

Code School is set up to present a topic in multiple sections. Each section begins with a 15-minute video which covers a few key points about the topic. For example, in the Backbone.js tutorial one section explained how the event model worked, using a todo list as an example application. After the video, there is a small interactive quiz that reinforces the key points from the section. The quiz part includes a code editor, and when you submit your answer it actually runs the code and judges you on the results, not on the code itself.

Although the two courses I ran through today had high production value, the site itself is narrowly focussed on Ruby and JavaScript. I won't be buying a Code School membership after this trial, but I'll keep my eye on it and if the number of tutorials that interest me continues to expand, I may revisit that decision.

Ora et Labora

I picked up a new board game tonight at Labyrinth in DC called Ora et Labora. It's from the same designer who made Agricola and Le Havre. The mechanics look more streamlined than either of those games. Like Agricola, its major gameplay mechanic appears to be resource management, making this an economics-oriented game.

48 hours of Code School for free


Code School is running a promotion this weekend offering any of their classes free for 48 hours. I'm reviewing my jQuery and learning some Backbone.

jQuery has been in my toolset for years, but it's helpful to spend some time skimming through materials looking for gaps in my knowledge. Most of my experience with jQuery is in the 1.5 and earlier era, so maybe there will be some bits that are new to me.