Internationalization is everyone’s responsibility


In my first jobs out of college, I was part of software internationalization teams that were completely independent from the “core” teams. The core teams created the products for the original target locale (usually the United States), and the internationalization teams created branches of those products and performed all the engineering work to put them in other markets. The core teams rarely worried about localization or data formatting. They didn’t concern themselves with layouts or character sets. They rarely even allowed the internationalization teams to push the updated products back into their core code base. Weekly updates to the internationalization branch were always a merge mess.

Those environments were frustrating and tedious. The worst thing is that those environments were common throughout the software industry. That was 25+ years ago.

After a few years out of college, I dreamed that internationalization would one day become everyone’s responsibility. I had hope that core product teams would take on internationalization work on their own and that my job as an internationalization engineer would eventually become obsolete. After more than two decades, the situation is better but not perfect.

After all this time, internationalization engineers are still required. I still find dedicated internationalization teams. The biggest improvement, however, is that core teams welcome code updates back into the core product repositories, and some teams even attempt to follow best practices. In the best cases, the internationalization teams provide globalization tools, best practice guidelines, and educational support. However, in most cases the internationalization teams still come in after all the core feature work is finished and retrofit existing code to be internationally-aware. This is almost always error-prone and expensive.

The ideal development environment is one in which internationalization is everyone’s responsibility. Retrofitting a product is simply not the best approach to add cultural awareness and localizability to products. It was time-consuming, expensive, and error-prone decades ago, and it still is.

Once every product and engineering team takes on the responsibility, internationalization actually becomes easier. Once the tasks become a regular, planned part of sprint deliverables, internationalization simply works better. That is, it’s easier to manage, integrate, and implement.

The truth is simple: when internationalization is everyone’s responsibility, you can create a better product.

Internationalization as a form of technical debt


The term technical debt is often used to label implementation choices that trade long-term goals for limited, short-term solutions. Technical debt has a negative connotation because it means that you have accrued a technical obligation that must be resolved before you can make future progress. Teams take on technical debt for many reasons: short schedules, insufficient knowledge, poor team collaboration, and a host of others. Generally, you want to avoid technical debt because it represents a technical hurdle that you have avoided or have resolved only partly. Technical debt limits the rate at which you can innovate and progress in the future.

I’ve often thought about how many product and software teams approach software internationalization. The typical team will begin development with a single geographical market. The team knows that they want to succeed internationally, but they don’t worry about that concern at first. They have schedules, product features, and short-term needs that demand attention. They sacrifice long-term goals for short-term wins. They ignore best practices in software internationalization because of insufficient knowledge or perhaps even laziness. Over time, internationalization work becomes technical debt that must be paid to make further progress into desired markets. 

The disheartening fact is that unattended technical debt in internationalization will eventually require code refactoring, new implementations, and even updated designs. Interest increases rapidly. At some point, you may not be able to do everything yourself, and you might need external help.

Fortunately, internationalization does not have to become technical debt. Basic internationalization usually does not have huge upfront costs for either schedules or resources. Internationalization can be integrated into every sprint or delivery schedule. The very basic concepts are simple, and a little up-front and regular consideration will pay huge dividends.

So, what can you do now to avoid technical debt in internationalization? I suggest you tackle this in a few steps:

  1. Make everyone responsible for knowing the basic issues and concerns in internationalization.
  2. Resolve to implement best practices for each of the concerns that will affect your product.
  3. Make internationalization a part of your ongoing development and review process.
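Even the first step pays off quickly in code. As a small, hypothetical sketch (the class name and values are mine, not from any particular product), compare a hard-coded format with a locale-aware one in Java — the kind of up-front consideration mentioned above:

```java
import java.text.NumberFormat;
import java.util.Locale;

public class FormattingSketch {
    public static void main(String[] args) {
        double price = 1234.56;

        // A hard-coded format bakes one market's conventions into the product.
        String hardCoded = "$" + price; // "$1234.56" everywhere

        // A locale-aware format adapts to each market automatically.
        String us = NumberFormat.getCurrencyInstance(Locale.US).format(price);
        String de = NumberFormat.getCurrencyInstance(Locale.GERMANY).format(price);

        System.out.println(hardCoded);
        System.out.println(us); // $1,234.56
        System.out.println(de); // 1.234,56 € (exact symbol placement varies by JDK)
    }
}
```

Writing the locale-aware version costs almost nothing during feature work; retrofitting it later means hunting down every hard-coded string.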

In an upcoming blog, I’ll provide you with some resources for each of these steps. 

All the best,
John O’Conner

Enumerating Android Calendars

Android APIs allow you to query information about calendars in your system. Your application can perform typical create, read, update, and delete (CRUD) operations on calendars using a combination of several classes.

To retrieve calendar data, you’ll use the following classes:

  • Context
  • ContentResolver
  • Cursor

Android security requires that you announce your application’s intentions for calendar access. You indicate this in the application’s manifest file. The following manifest entry tells the Android platform that your application will read calendar information:

<uses-permission android:name="android.permission.READ_CALENDAR"/>

Make sure that the <uses-permission> element is immediately outside the <application> tag. If you do not put this permission indicator in your manifest file, your application will throw security exceptions. More importantly, it won’t be able to access calendar information.

Why do you need three classes (Context, ContentResolver, and Cursor) to retrieve calendar information? First, a cursor is used to iterate through calendar information. Second, the Cursor is provided by a ContentResolver. Finally, you need a Context to retrieve a content resolver.

Within an Activity class, which represents a user-interface view, you can get a content resolver easily with the getContentResolver method. An Activity is a subclass of Context. That’s simple enough. However, if you want to separate concerns, you may want to create a calendar service to isolate calendar details from the rest of your application. As a separate, non-Activity class, your CalendarService (implementation left to the reader) may not have access to a context. So you may need to provide a resolver or context from your Activity when instantiating a CalendarService instance.

Here’s how you retrieve the content resolver:

import android.content.ContentResolver;

// if you are calling from within an Activity
ContentResolver resolver = getContentResolver();

// if you are calling from elsewhere with access to a Context
ContentResolver resolver = context.getContentResolver();

Once you have the resolver, you can then query it for the exact data items needed. Calendars have a lot of information including name, time zone, and colors. Tell the resolver exactly what you want by declaring a projection. A projection is simply a String array that indicates the fields that you want to extract from a calendar row in Android’s databases.

The following code shows how to perform the query:

import android.provider.CalendarContract;

String[] projection = {
    CalendarContract.Calendars._ID,
    CalendarContract.Calendars.CALENDAR_DISPLAY_NAME
};
String selection = String.format("%s = 1", CalendarContract.Calendars.VISIBLE);
Cursor c = contentResolver.query(CalendarContract.Calendars.CONTENT_URI,
    projection, selection, null, null);
while (c.moveToNext()) {
    // the cursor, c, contains the projection data items;
    // access the cursor's contents by index as declared in your projection
    long id = c.getLong(0);
    String name = c.getString(1);
    // ...
}
c.close();

This particular example simply iterates over the calendar meta-data, not actual events. You’ll need an additional query for that.
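A follow-up query for events looks much like the calendar query. Here is a sketch (field choices are illustrative, and the id variable comes from the calendar query above) that selects the events of a single calendar:

```java
import android.database.Cursor;
import android.provider.CalendarContract;

// Query the events belonging to one calendar, selected by its _ID value.
String[] eventProjection = {
    CalendarContract.Events._ID,
    CalendarContract.Events.TITLE,
    CalendarContract.Events.DTSTART
};
String eventSelection = CalendarContract.Events.CALENDAR_ID + " = ?";
String[] selectionArgs = { String.valueOf(id) };

Cursor events = contentResolver.query(CalendarContract.Events.CONTENT_URI,
    eventProjection, eventSelection, selectionArgs, null);
while (events.moveToNext()) {
    String title = events.getString(1);
    long startMillis = events.getLong(2);
    // ...
}
events.close();
```

The same READ_CALENDAR permission covers this query as well.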

Comparison of the Instant and Date Classes


Java 8 has a new java.time package, and one of its new classes is Instant. The best counterpart to this in past platforms is the java.util.Date class.

There are a couple of notable differences between Date and Instant:

  • Date has very few useful methods, and Instant provides many.
  • Instant provides finer time granularity and a longer timeline.

Most of Date’s methods have been deprecated; date manipulation and formatting have been delegated to the Calendar and DateFormat classes. In comparison, the Instant class allows you to perform some very basic functionality directly. You can add seconds and milliseconds, for example. You can parse and generate ISO 8601 date strings with Instant as well. ISO 8601 dates have a consistent form across all locales and look like this: 2014-08-12T14:51:53Z. Most of the Instant methods are purely for convenience; you can do similar things with Date using the Calendar and DateFormat classes.

Both Date and Instant have the same epoch (1970-01-01T00:00:00Z), but Instant can represent a much longer timeline. Date’s internal structure uses a long to represent milliseconds from the epoch. Instant, however, uses a long to represent seconds from epoch AND an int to represent nanoseconds of that second. That certainly means you don’t have to worry about date rollover problems in the near future.
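A short sketch of the conveniences mentioned above — ISO 8601 parsing, simple arithmetic, and the bridge methods Java 8 adds between the old and new types:

```java
import java.time.Instant;
import java.util.Date;

public class InstantSketch {
    public static void main(String[] args) {
        // Parse an ISO 8601 string directly; no DateFormat needed.
        Instant instant = Instant.parse("2014-08-12T14:51:53Z");

        // Simple arithmetic is built in.
        Instant later = instant.plusSeconds(60).plusMillis(500);

        // Java 8 adds conversion methods to bridge Date and Instant.
        Date legacy = Date.from(instant);
        Instant roundTrip = legacy.toInstant();

        System.out.println(later);                     // 2014-08-12T14:52:53.500Z
        System.out.println(roundTrip.equals(instant)); // true
    }
}
```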

The differences between Date and Instant are relatively minor, but these classes really are the starting point of a more thorough discussion of the java.time package. Expect more details in the near future.

The New Date and Time API in Java 8

It’s no secret that developers have been unsatisfied with the existing Date and Calendar classes of previous Java versions. I’ve heard complaints that the Calendar API is difficult to understand, lacks needed features, and even causes unexpected concurrency bugs. As a result, developers sometimes migrated to the popular Joda Time library, which apparently satisfied their needs.

I’ve always suspected that the standard Date and Calendar API would be updated (or replaced), but I can’t help being a little surprised to see the new java.time package in Java 8. I’m not so surprised that it exists but that it is so comprehensive…and that it seems so familiar. If you’re one of those who moved to Joda Time, you’ll feel a sense of déjà vu. The new Java 8 library looks a lot like Joda Time. After a little snooping, now I understand why. The new Date and Time API was created by Stephen Colebourne, the author of Joda Time. Of course, he worked with Oracle and others within the umbrella of the JSR 310 proposal, but this is Joda Time in many ways.

Browsing the new API for the first time, I noted two things: the API is feature-rich and complete, and it’s still complex.

Time, dates, and date manipulations are not simple, and no API is going to change that. However, I think that this new API does a great job of making things less complicated than before. If you haven’t looked at it yet, please check it out. Let me know what you think. I’ll do the same and share how to use the APIs in upcoming blogs.


Using the HAXM Accelerator for Android

Developing Android applications on Mac OS X is easy, especially if you are using Google’s new Android Studio or JetBrains’ IntelliJ. Also, when you install Google’s SDK, you’ll get plenty of tools for creating virtual devices to test on. I created a Nexus 7 virtual device, and although it ran slowly, it did run smoothly. I soon got used to the workflow of running and testing my application on a separate device (real or virtual).

After a few cycles of code, deploy to device, and test, I realized I was spending too much time waiting for the virtual device to load. I had randomly opted to run my AVD using an ARM system image, and other developers claimed that running on the Intel Atom CPU image would provide better performance for the time-consuming workflow. I decided that I’d give the Intel CPU image a try.

Using the Intel x86 image and reaping the benefits of a faster emulator requires that you install two additional items available in the Android SDK:

  1. Intel x86 Atom System Image
  2. Intel x86 Emulator Accelerator

You install the x86 CPU image from within the Android SDK Manager. The SDK Manager lists the different API versions, and you can see options to install system images. Install the x86 image by clicking the x86 option and then clicking on the “Install package” button.


From within the SDK Manager, you can also install the x86 accelerator. Look for the “Extras” section that includes the option to install the Intel x86 Emulator Accelerator (HAXM). The option looks like this:


After installing these options, you then select the Intel Atom x86 cpu image from within the AVD manager:


Running the AVD using the new image may have provided some speed improvement, but I really didn’t notice much. I was a bit disappointed. Then I discovered that I was supposed to also install HAXM. Wait, but didn’t I do that earlier in step #2? I thought I had, but actually I had only downloaded the HAXM installer. I caught on to this when I noticed the AVD startup dialog:

emulator: Failed to open the HAX device!
HAX is not working and emulator runs in emulation mode

Why did this warning message show up? I had used the SDK Manager to install the needed pieces. By digging around in my Android SDK and “Extras” folders, I found a HAXM installer. I finally realized that although I had downloaded the HAXM installer, I had not actually installed HAXM itself. This isn’t perfectly clear when you select and install the HAXM option.


So, after you download the HAXM installer, look for it in your SDK folder. Install it. Run your emulator with an x86 system image, and….wow! An amazing improvement, and very noticeable. I was pleasantly surprised.

If you’re developing for Android, you’ll be glad you installed both the x86 image and the HAXM accelerator; the difference in emulator speed is dramatic.

But remember…if you really want to get the benefits of the accelerator, you have to actually install it after you download it.


Tool Options for Android Development


When developing applications for the Android platform, you have several choices of integrated development environment (IDE). The environments are free and easy to download on the web.

The best known IDE is the combination of Eclipse and the Android Development Tools (ADT) plugin. This option has been around the longest. As the most mature option, this development environment has the best community support as well. Additionally, you have two options for installing this IDE. The first option is to download and use the Eclipse + ADT pre-configured development bundle. The second option is to download Eclipse and the ADT plugin separately. Either option works well, and you can find instructions for both on the Android developer site.

NetBeans also has Android support. The support is available as a set of NetBeans plugins from a site called NBAndroid. NBAndroid provides a free basic plugin and a paid set of “extensions.” The extensions include Gradle support and visual layout editing for 15 euros.

Finally, Google and JetBrains have teamed up to create a new development option called Android Studio. The well-known IntelliJ editor forms the basic platform, and the companies have tightly integrated the Android SDK tools. Android Studio is not final, but you can download a beta version from Google. Android Studio appears to be a tool branded under Google; however, you can get the same functionality from IntelliJ in either the IntelliJ IDEA Ultimate or IntelliJ IDEA Community Edition.

So, there you have it: three options for Android development that are available for free.

Learning Android

I’ve neglected this space for a long time. The truth is that life gets in the way. However, I’ve picked up a new hobby — Android software development.

I’m in the initial stages now — setting up an environment, installing tools, and learning the platform. I hope to use my blog to communicate the information I learn about Android.

What about internationalization? Hmmm…I’m still interested in that. I suppose I will always be interested in internationalization. However, it isn’t my primary activity at this time; I’m not currently employed using this skill set. In fact, I’m doing something very different in my full-time work. Unfortunately, as a result, I don’t write much about internationalization.

As I learn Android, I will also share my knowledge of its i18n APIs and features. That seems reasonable since i18n is still one of my core interests. But you may find that the main topics won’t be pure i18n in the near future.

I have a great new Android device, the Nexus 7 (2013 version). I’m eager to get started with it.

To help in that effort, I’m going through an online Coursera course called Programming Mobile Applications for Android Handheld Systems. You might want to take a look at it too if you’re starting from scratch.

See you around and I hope you enjoy upcoming blogs.


Unicode Characters and Alternative Glyphs


Unicode defines thousands of characters. Some “characters” are surprising, and others are obvious. When I look at the Unicode standard and consider the lengthy debates over whether a character should be included, I can imagine the discussion and rationalization that occurs. Deciding whether to include a character can be difficult.

One of the more difficult concepts for me to appreciate is the difference between light and dark (or black and white) characters. A real example will help me explain this. Consider the “smiley face” characters U+263A and U+263B:  ☺ and ☻. These characters are named WHITE SMILING FACE and BLACK SMILING FACE respectively.
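You can inspect the two code points in Java (Character.getName has been available since Java 7); a quick sketch:

```java
public class SmileySketch {
    public static void main(String[] args) {
        String white = "\u263A"; // ☺
        String black = "\u263B"; // ☻

        // The official Unicode character names distinguish the two code points.
        System.out.println(Character.getName(white.codePointAt(0))); // WHITE SMILING FACE
        System.out.println(Character.getName(black.codePointAt(0))); // BLACK SMILING FACE
    }
}
```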

These are not the only characters that have white and black options. Dozens of others exist. The telephone symbol, for example, exists as both BLACK TELEPHONE and WHITE TELEPHONE.

Of course, once these characters go into the standard, they should stay. One shouldn’t remove existing characters. However, a serious question does arise when considering WHITE and BLACK options for a character.

The question I have is this: Why? Why isn’t the white and black color variation simply a font variation of the same character? The Unicode standard clearly states that it avoids encoding glyph variations of the same character. That makes a lot of sense. However, in practice, the standard at least appears to do exactly the opposite for many characters. I can only guess that someone on the standards committee made a very good, logical, and well-supported argument for the character differentiation.

My hope for future versions of the standard is that these kinds of color variations will be avoided. Not being on the committee when these characters were added, I cannot really complain, and I hope that my comments here don’t come across that way. However, in the future, I’d like the standard to include annotations for these characters that describe why they deserve separate code points. It certainly isn’t clear from the existing characters’ notes, and I’m sure that others would be curious about the reasons as well.

Standard Charsets in Java 7

Once in a while I poke my nose through the release notes of new Java releases. It’s not a particularly rewarding activity, but this time I did find something interesting. Oddly enough, it was interesting for what it did NOT say. I was surprised, so I thought you might want to know about a new class that is now available but quietly overlooked in the release notes.

Character sets have their own class representation in Java: Charset. You can use the Charset class to identify a character set for encoding or decoding. To create a Charset object, you use a factory method: Charset.forName(String charset). The uncomfortable trick to using this method is that you must be prepared to catch an exception if the JRE doesn’t actually supply the requested character set. Bummer.

I’ve always wondered why the JDK allows a random string as the parameter. I suppose it was for convenience…to allow the JDK to be updated over time with new charset support without having to change any API or enumeration. That’s understandable. But not really knowing what minimal set of character sets is supported in a particular JDK is somewhat…unnerving…especially to an engineer just trying to get his/her work finished.

The JDK documentation was always clear on what character sets you could absolutely depend on to be present. That was helpful and much needed. At least an observant developer could depend on that. However, the JDK now provides a more robust and useful way to identify which charsets are minimally supported. Java 7 provides a new class: java.nio.charset.StandardCharsets.

StandardCharsets does one thing: it lets you know which character sets are minimally supported in your JDK. The set is probably unchanged from Java 6 or Java 5 or even earlier. However, now you don’t have to read the documentation as carefully; the StandardCharsets class explicitly enumerates the standard set for you.
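The difference in practice is small but pleasant; a sketch comparing the two styles:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class CharsetSketch {
    public static void main(String[] args) {
        // Old style: a string lookup that can throw
        // UnsupportedCharsetException at runtime.
        Charset byName = Charset.forName("UTF-8");

        // New style: a constant that is guaranteed to exist on every JDK.
        Charset constant = StandardCharsets.UTF_8;

        System.out.println(byName.equals(constant)); // true

        // The constants also pair well with APIs that accept a Charset,
        // avoiding the checked exception thrown by getBytes(String).
        byte[] bytes = "hello".getBytes(StandardCharsets.UTF_8);
        System.out.println(bytes.length); // 5
    }
}
```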

Rocket science? No. But this welcome addition to the JDK was a long time in coming, and I’m glad to have found it.