These are the CAR (challenge, action, result) stories that underlie the bullet points on my resume. (They’re talking points for an interview. If you found this file, consider this a virtual interview!)
How Stories Are Used
For a short resume and good talking points in an interview, follow this highly recommended process:
- Write the story as a narrative: Challenge, Action, Result. That’s the story you tell in an interview.
- For a resume, invert and condense. Start with the result, summarize the action, and leave out the challenge.
Sun Microsystems: Document Systems Architect
Acted as a technical resource for a team of 12 writers. Documented procedures, established infrastructure, and managed production processes for 13,500 documents, combined from multiple workspaces (source code, generated APIs, guides & tool docs, marketing docs, web docs), with the results published to the web, included in the product release as man pages, delivered to Japan for localization, sent to java.net, and encapsulated in a giant, downloadable “doc bundle” that was larger than the product.
- Pioneered the group’s move to the DITA document format. Investigated the technology, evaluated and selected tools, ran the pilot study, gave talks on rationale and lessons learned, and participated in team-building.
- Although the web design team gave us a great layout for our new pages, most of the semantics were undefined. (What happens when you click something?) Clarified the design decisions, came up to speed on JavaScript and JSP, investigated Eclipse Help, and did the implementation. Debugged using GlassFish. Participated in the design of the information architecture. (JavaScript, JSP, GlassFish)
- Evaluated several content management systems, taking a deep look at XDocs and Alfresco. Eventually concluded that, given resource restrictions, a version control system like Subversion would provide the required functionality, in conjunction with some tools (to be written) to handle link management. Designed the algorithms for those tools.
- Worked with management to devise a DITA implementation strategy. Initial focus was on new material needed for JDK7: documents that explained Java development for rich internet applications (applets and Java Web Start applications). Those documents had a significant degree of overlap, making DITA a natural choice. The plan was to (a) develop those documents in DITA, setting up production systems and quality checks in the process, (b) tackle the installation pages, (c) single-source the man pages, and (d) establish a collaborative environment for online editing. (We were on the verge of achieving the first step in that plan when the company’s financial situation forced management to pull the plug.)
- We needed to determine if DITA would really work for us and, if so, how it worked. Ran a pilot to answer those questions, eventually concluding that it would. Took copious notes on lessons learned. Wrote papers and gave presentations to explain DITA’s major concepts. Listed, clarified, and identified heuristics for the 20-some design decisions that face any DITA project. Investigated collaboration vehicles for online editing, so developers could continue to participate in document development.
- Installation pages existed in Solaris, Windows, and Linux variants. There was a significant degree of overlap and cross-linking between the pages for the runtime environment (JRE) and the development kit (JDK), and between the 32-bit and 64-bit versions. Investigated DITA as a single-source solution for that document set, as well as for the man pages.
- With the utility we had in place, troff versions of the man pages were dependent on the corresponding HTML files, but the two most common build systems (Ant and Make) knew nothing about such dependencies. Used Rake to construct a build process that understood the dependencies and did minimal builds, so builds ran faster when only a file or two had changed. (Rake)
- Documentation for the command-line tools (man pages) had four duplicate representations that were constantly drifting apart: Solbook SGML man pages, troff man pages, HTML files for Windows, and HTML files for the Linux/Solaris operating systems. SGML could have been the source, but the developers who were highly active on those pages would have been cut out of the loop. An XML variant could have been employed, but it did not support the conditional text needed to generate the different HTML versions. Wrote the html2man utility so the HTML files could be used as the source, allowing the developers to remain active and reducing the problem to one of dual-sourcing. Used Rake to dynamically determine dependencies and do minimal builds; a Rakefile sketch of that kind of rule appears at the end of this list. (Ruby and Rake)
- Investigated Wikis, but eventually set up a Subversion workspace for departmental collaboration, since it was easier for experienced writers to edit and post files in that medium. Configured Apache for WebDAV access to them, to support PC users. (Subversion, Apache, WebDAV)
- Evaluated DITA-based document-collaboration strategies, with an eye toward deeper design-and-decision-making collaboration online. (The original article, “Enabling Collaborative Design-and-Decision Discussions, Online”, was posted to a blog hosted at Sun Microsystems. That blog is now defunct, and the pages have disappeared. The Collaboration pages contain the precursor material.)
- With 13,500 web documents averaging more than 10 links each, there was something on the order of 150,000 links in the doc set. Broken links were a problem, especially since documents were often dual-sourced and scattered across multiple workspaces. Over time, 8,000 broken links had crept into the documentation as files in different workspaces were moved and removed. Even when a file existed, an “anchor” referenced in it might not. Used a program I had written earlier (LinkCheck) to solve the problem, and set up a monthly cron job to run it. Management used the reports in a concerted effort to drive down the number of broken links. Participated in that effort, eventually getting the report down to under 500 exceptions, most of which were “false positives”.
- When one of our writers left with a deadline looming, we had a gap that needed to be filled right away. Took a 4-month hiatus from the production work and wrote the introductory materials for rich internet apps, until a replacement could be found to take them over.
- People were editing HTML by hand, creating mistakes, inconsistencies, and broken links in the process. To improve productivity and quality, set up DreamWeaver sites wherever possible and encouraged people to take advantage of its templating facilities and link-management capabilities.
- Workspaces contained a confusing mixture of edited and generated files, which made further automation difficult. Identified the files that needed to be relocated, created pictorial before-and-after representations, communicated our intentions to all affected groups, and carried out the moves. Did the moves once in DreamWeaver, to automatically adjust links. Then restored the original source files and their SCCS histories (corrupted by the move) and did the moves again using TeamWare workspace commands, so the repository was aware of the moves (longing all the while for the day we would move to Mercurial or Subversion).
- Writers were floundering as they attempted to work with multiple documentation workspaces, source code workspaces with strict putback regulations, bug tracking procedures, and a mixture of personal workstations and servers. Acted as a technical resource for the team of 12 writers, documenting procedures, establishing infrastructure, and managing production processes.
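The Rake-based minimal builds mentioned above come down to declaring each troff man page as a file task that depends on its HTML source, so only the pages whose sources changed get rebuilt. Here is a minimal Rakefile sketch of that shape; the directory layout and the html2man command line are placeholders, not the original build.

```ruby
# Rakefile sketch (illustrative): one file task per man page.
# Rake compares timestamps, so only pages whose HTML changed are rebuilt.
# The html/ and man/man1 paths and the html2man invocation are assumptions.

HTML_DIR = 'html'
MAN_DIR  = 'man/man1'

HTML_SOURCES = FileList["#{HTML_DIR}/*.html"]

# Map each HTML source to the troff man page it produces.
MAN_PAGES = HTML_SOURCES.map do |src|
  File.join(MAN_DIR, File.basename(src, '.html') + '.1')
end

directory MAN_DIR

HTML_SOURCES.zip(MAN_PAGES).each do |src, man|
  file man => [src, MAN_DIR] do
    sh "html2man #{src} > #{man}"
  end
end

desc 'Rebuild only the man pages whose HTML sources have changed'
task :man => MAN_PAGES

task :default => :man
```

Running `rake man` twice in a row does nothing the second time; touch one HTML file and only its man page is regenerated, which is what made the minimal builds fast.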
Development
- CommentMerge: The Java Micro Edition group produced multiple versions of the platform, each of which needed to have the exact same documentation. But comments were embedded in the source files. CommentMerge provided a way to read comments from files in a spec hierarchy and merge them into source files in the target hierarchy.
- StubMaker: Extracts comment-complete, compilable stubs from source files, minus code. Used by the Security team to deliver source-less files that can be used in javadoc processing, and by the Micro Edition team to create a specification-file hierarchy.
- LinkCheck: Looks for broken links, treating discrete sets of files as though they were co-located, and verifying the presence of referenced anchors on all pages (including external pages). Input from the web and from JavaScript ensured coverage of every possible link on an HTML page. NIO libraries and internal hash tables kept performance high. (A minimal sketch of the core idea appears after this list.)
- LinkFix: Pattern-based utility for changing HTML links en masse. Used in the LinkCheck procedure to convert local references to https:// references, so that links can be followed when accessing the reports remotely.
- DocCheck: Lint utility for API comments. Identifies missing comments and comment tags. Generates templates for those that are missing. Ran weekly on the J2SE source files. Available at https://java.sun.com/javadoc/doccheck.
- Glossary Servlet: A servlet and a rich client (Java Web Start-enabled) for displaying and editing a localization glossary in any two of 9 different languages.
- Utility library: For shared and generally-useful functionality that emerged from the programs. Included application templates, a regular-expression processor for files and directories, NIO routines to copy external Web pages into a buffer. (The utility APIs for the pattern-matching file-retrieval classes were included as part of the DocCheck documentation.)
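LinkCheck itself was a Java program built on NIO, and it is long gone; the Ruby sketch below only illustrates the core idea of treating several docroots as one logical site and checking that each link’s target file, and any #anchor it references, actually exists. The regexes, layout handling, and reporting are simplified placeholders.

```ruby
# linkcheck_sketch.rb (illustrative, not the original Java tool):
#   ruby linkcheck_sketch.rb workspace1/docs workspace2/docs
# Scans HTML files under each docroot as though they were co-located and
# reports links whose target file or #anchor cannot be found.

HREF_RE   = /href\s*=\s*["']([^"'#]+)?(?:#([^"']+))?["']/i
ANCHOR_RE = /(?:id|name)\s*=\s*["']([^"']+)["']/i

docroots = ARGV
pages    = docroots.flat_map { |root| Dir.glob(File.join(root, '**', '*.html')) }

# Relative path of a page within its docroot (the "co-located" view).
relative = lambda do |path|
  root = docroots.find { |r| path.start_with?(r) }
  path.sub(%r{\A#{Regexp.escape(root)}/?}, '')
end

# Index every page by relative path, recording the anchors it defines.
anchors_by_page = {}
pages.each { |path| anchors_by_page[relative.call(path)] = File.read(path).scan(ANCHOR_RE).flatten }

broken = []
pages.each do |path|
  rel = relative.call(path)
  File.read(path).scan(HREF_RE) do |target, anchor|
    next if target.to_s =~ %r{\Ahttps?://}   # external links handled separately
    resolved = if target.to_s.empty?
                 rel                          # pure fragment link: same page
               else
                 File.expand_path(target, '/' + File.dirname(rel)).sub(%r{\A/}, '')
               end
    if !anchors_by_page.key?(resolved)
      broken << "#{rel}: missing file #{resolved}"
    elsif anchor && !anchors_by_page[resolved].include?(anchor)
      broken << "#{rel}: missing anchor ##{anchor} in #{resolved}"
    end
  end
end

puts broken
puts "#{broken.size} broken link(s) in #{pages.size} page(s)"
```

The hash-table index is the point: every page and its anchors are read once, so checking each of the roughly 150,000 links is then just a couple of lookups.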
Other Professional Work
- Founded a startup focused on building productivity software. Initial product was an outliner. Focused on product definition, marketing, and business development, while my partners focused on technology development.
- Played a significant role in landing contracts of $2m, $3m, and $6m for a large hardware vendor as a member of the Major Opportunities Team. Wrote demos, gave presentations, ran benchmarks, set up hardware, and provided general technical support for multi-million dollar opportunities.
- Created “vaporware” demos for a Colorized office automation system and a simple voice messaging system.
- Wrote a multi-tasking emulation library and a code-profiling tool (SP/Pascal)
- Two reported bugs in two years. (A lot of time testing!)
- SP/Pascal was a “systems” version of Pascal created by Dave Reese.
- He was on the standards committee, and saw a lot of good ideas.
- Since he wasn’t subject to committee decisions, he took the best of those that were able to “play together” and created one of the best languages the world has ever seen.
AI Program: Othello
- I had played the existing games, had seen my own program at work using the algorithms described by David Levy in a (then current) issue of Byte magazine, and had seen other programs running on a variety of platforms. They all had two things in common: (1) they were superb at forcing you to give them a corner (the ultimate “safe” square); and (2) once they had a corner, they seemingly played at random, failing to capitalize in any significant way on the advantage they had achieved. I reasoned that if a corner was safe (because there was no way to flip it), then any piece “connected” to the corner was also safe and, similarly, any piece connected to those was safe as well. I devised a solidity heuristic that gave a very high rating to such pieces, high enough to outweigh all other factors, so that the program preferred them above all else. I then observed that since “fan out” produces the exponential explosion that makes look-ahead so expensive, any situation in which a player’s move was “forced” (i.e., they had only one move, or no moves at all) made further look-ahead essentially free. I therefore increased the search depth in such circumstances. (That particular algorithm eventually became a thesis at Carnegie Mellon, although I was not the one to write it.)
As a result of those heuristics, my program excelled at finding forcing lines that led to a complete wipeout of enemy pieces. It recorded two such wipeouts in international competition (the only program ever to have done so) and rapidly became an object of fear and envy.
In fact, after the first day of the competition, the person running the tournament (David Levy) put 9 computers to work finding an answer to the opening line my program had generated over the board. He found one, and the program had its first loss early on the second day as a result. (After winning its subsequent games, it wound up in a tie for fifth; otherwise, it could easily have placed first or second.)
Interesting wrinkle: Allowing for an unlimited search depth turned out to have an unintended effect: the computer would prefer a long line that gave it a lot of squares (and left the opponent a few) to a line that wiped the opponent out completely but did so earlier, leaving the computer fewer “solid” squares to count. The fast fix was to reduce the look-ahead to a “mere” 10 or 12 ply on forcing lines. (It typically went 4-6 ply otherwise.) But the real fix would have been to add an extra (even larger) bonus for every empty square left on the board when no enemy pieces remained, so that the system would greatly prefer the earliest possible wipeout.
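For concreteness, here is a small Ruby sketch of those two ideas: the solidity bonus that dominates the evaluation, and the “free” look-ahead on forced lines. The Board interface it leans on (corners, neighbors, owner, legal_moves, play, count, opponent) is hypothetical; the original program is long gone, so this is only the shape of the algorithm, not a reconstruction.

```ruby
# Othello heuristics sketch (illustrative). Assumes a hypothetical Board class:
#   corners, neighbors(sq), owner(sq), legal_moves(player),
#   play(move, player) -> new board, count(player), opponent(player)

SOLID_BONUS = 1000   # large enough to outweigh every other factor

# Simplified "solidity": pieces reachable from a corner we own by stepping
# only across our own pieces. (True stability analysis is more involved;
# this is the spirit of the heuristic.)
def solid_count(board, player)
  solid    = []
  frontier = board.corners.select { |sq| board.owner(sq) == player }
  until frontier.empty?
    sq = frontier.pop
    next if solid.include?(sq)
    solid << sq
    board.neighbors(sq).each { |n| frontier << n if board.owner(n) == player }
  end
  solid.size
end

def evaluate(board, player)
  opp = board.opponent(player)
  SOLID_BONUS * (solid_count(board, player) - solid_count(board, opp)) +
    (board.count(player) - board.count(opp))
end

# Negamax with the "forced moves are free" extension: when the side to move
# has at most one legal reply, the ply is not charged against the depth budget.
# (In practice the forced-line depth was capped at 10-12 ply, per the note
# above, to keep it from favoring long lines over early wipeouts.)
def search(board, player, depth)
  moves = board.legal_moves(player)
  return evaluate(board, player) if depth <= 0 || moves.empty?

  next_depth = moves.size <= 1 ? depth : depth - 1
  moves.map { |m| -search(board.play(m, player), board.opponent(player), next_depth) }.max
end
```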
Reasons for Dynamic Typing
- Efficiency: You’re more productive, so you get stuff done faster.
- Duck typing: You don’t need a rigid type hierarchy. If the object speaks your language, you can use it. (See the short sketch after this list.)
- Safety: With strong (though dynamic) typing, the code you get is just as “safe” as code written in a statically typed language.
- Agility: It lets you round-trip faster, and because you think you’re less safe, you do the functional/behavioral/unit testing that you should have done to start with.
- Extensibility: Lets your program define your language as you go along!
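A tiny Ruby illustration of the duck-typing point above: the method only cares that its second argument responds to <<, so unrelated classes work interchangeably, with no shared type hierarchy. (Purely illustrative.)

```ruby
require 'stringio'

# `out` can be anything that speaks the language of `<<`.
def report(lines, out)
  lines.each { |line| out << line + "\n" }
end

report(%w[alpha beta], $stdout)        # an IO: prints to the terminal
report(%w[alpha beta], StringIO.new)   # an in-memory buffer
report(%w[alpha beta], [])             # an Array just collects the lines
```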