Author Archives: joconner

A Little Java Character History


The Java language has supported Unicode from its beginning. In those early days, the Unicode character set defined characters with integer values in the range 0x0000 through 0xFFFF. That’s 65,536 possible character values in the full Unicode set. Java’s char type was defined to represent a single character in that range.

However, Unicode changed. It grew. It can now define character values all the way up to 0x10FFFF, seventeen times the original range. As a result of that growth, Java’s char type simply cannot represent every possible Unicode character anymore. A char keeps its original definition, so it can only hold an unsigned integer value up to 0xFFFF.

Fortunately, the Unicode Consortium considered how this growth might affect existing systems. It created a clever encoding form that allows systems to use two 16-bit values as an alias for a character value above 0xFFFF. That encoding form is called UTF-16. The consortium set aside two special ranges within the original 65,536 values for use in this encoding form. Values in those ranges are called surrogates. A pair of surrogates, in the UTF-16 encoding form, can represent any defined character above 0xFFFF.

To keep up with the expanded Unicode range, Java’s char type has changed its definition a little bit. It is now a Unicode code unit in the UTF-16 encoding form. It’s still a 16-bit value, but you can’t really think of it as just a character anymore. It’s a code unit. Some 16-bit code unit values are complete characters, but some are only part of a surrogate pair. Remember surrogate values are not complete characters. A valid surrogate pair represents a single Unicode character somewhere above 0xFFFF.

So, let’s get right to the point. Sometimes a char is a complete character, and sometimes it’s only part of a surrogate pair. This makes text processing tricky.
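A quick sketch makes the code unit vs. character distinction concrete. It uses U+1F600, an emoji above 0xFFFF; the class name is just for illustration:

```java
public class CodeUnitDemo {
    public static void main(String[] args) {
        // Build a string containing the single character U+1F600,
        // which UTF-16 encodes as a surrogate pair of two char values.
        String s = new StringBuilder().appendCodePoint(0x1F600).toString();

        System.out.println(s.length());                         // 2 code units
        System.out.println(s.codePointCount(0, s.length()));    // 1 character
        System.out.println(Character.isSurrogate(s.charAt(0))); // true
    }
}
```

Notice that length() counts char code units, not characters, so it reports 2 for this one-character string.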

In a future post, I’ll describe how to correctly iterate through a Java string. Because a char isn’t what it used to be, parsing a string isn’t as simple as it once was.

Using CloudFormation to Create a Virtual Private Cloud in AWS


Creating your network infrastructure in AWS is simplified by a service called CloudFormation. CloudFormation allows you to specify your network subnets, groups, and other resources in a JSON file. When you submit that JSON file to AWS, the service creates the resources in your AWS account. This article demonstrates CloudFormation by creating a basic virtual private cloud (VPC) with an accompanying subnet and gateway. You will need an AWS account to test this out. Ideally, you will also have read about using the AWS command-line client.

A CloudFormation template is a JSON file that describes each resource in your network. Every resource has a specific set of attributes that you can define within this template. Amazon documents all resources and their attributes on its own site as well, so take a look there for more complete details.

The general structure of a template is a basic map of resources within a “Resources” id. Each resource has a Type and various Properties. A VPC resource, which defines a block of IP addresses, looks like this:

  "Resources": {
    "VPC01": {
      "Type": "AWS::EC2::VPC",
      "Properties": {
        "CidrBlock": "",
        "Tags": [
          {
            "Key": "Name",
            "Value": "vpc-charprop"
          }
        ]
      }
    }
  }

Within a Cloudformation template, you can refer to other resources using their logical id. For example, you can associate a subnet with “VPC01” with the following declaration:

  "SUBNET01": {
    "Type": "AWS::EC2::Subnet",
    "Properties": {
      "VpcId": {
        "Ref": "VPC01"
      },
      "CidrBlock": "",
      "Tags": [
        {
          "Key": "Name",
          "Value": "subnet-charprop-public"
        }
      ]
    }
  }
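The gateway mentioned earlier follows the same pattern. Here is a minimal sketch, assuming the logical ids GATEWAY01 and GATEWAYATTACH01 (the names and tag values are illustrative): it declares an internet gateway and attaches it to the VPC with a Ref.

```json
  "GATEWAY01": {
    "Type": "AWS::EC2::InternetGateway",
    "Properties": {
      "Tags": [
        {
          "Key": "Name",
          "Value": "igw-charprop"
        }
      ]
    }
  },
  "GATEWAYATTACH01": {
    "Type": "AWS::EC2::VPCGatewayAttachment",
    "Properties": {
      "VpcId": { "Ref": "VPC01" },
      "InternetGatewayId": { "Ref": "GATEWAY01" }
    }
  }
```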

I’ve placed these and other resources into a larger template called network.json. Using this file, you submit your resource creation request to AWS with this simple CLI command:

aws cloudformation create-stack --stack-name charprop-network --template-body file://./network.json

Assuming network.json is in your current working directory, the command should return immediately, showing a JSON description something like this:

  {
    "StackId": "arn:aws:cloudformation:us-west-2:446581796491:stack/charprop-network/c3aa1530-0848-11e6-a533-50a68a2012ba"
  }

You’ve successfully created a VPC and subnet using CloudFormation!

In the next article, I’ll add security groups and a Linux machine instance.

Preparing to Use the Amazon Web Services Command Line Client


I’m so impressed with the AWS services that I’m going to use them to create my own set of services for my personal domains. The plan is to introduce a few proof-of-concept services to show others both how to use AWS and how to have some fun with a couple i18n/g11n services as well.

Let’s get started. First thing, you’ll need an AWS account. It’s not difficult at all, and you can get a 12-month free trial. Yes, 12 months F-R-E-E! If you’re a hands-on engineer or dev ops person, you owe it to yourself to investigate this if you haven’t already.

If you’re new to AWS, you’ll spend your first hours browsing the console, creating server instances, etc. However, for anything beyond casual browsing, the AWS command-line client (AWS CLI) is critical. The AWS CLI helps you create resource templates that you can reuse over and over again in scripts. And we all know that creating a well-defined script is critical for creating anything in a reproducible, reliable, standard way. This article describes how to set up and use the AWS CLI. Your steps are these:

  1. Create a ‘deploy’ user.
  2. Create an access key for the ‘deploy’ user.
  3. Install and configure the AWS CLI.
  4. Test it out.

Creating a User


Whether you use the CLI or an AWS SDK for Java, Ruby, or another language, you’ll run those scripts/programs using a set of secret access keys. You get those keys when you create users for your account. In the console, click on the Identity and Access Management (IAM) links. Create yourself a new user called “deploy”. You should use this user when creating or managing resources. Place that user in an administrative group or attach a policy that allows creation of the resources you’ll need.

Creating an Access Key


Once you have a “deploy” user, you’ll need to create an “Access Key” for it. Download the key immediately at creation time; this is the only time you’ll be able to download it. This “key” has two parts: a key id and a key secret. The command line client will need both, so store them away somewhere safe.

Installing the AWS CLI


You can install the CLI using a variety of options. Two easy options are the pip and Homebrew installers.

Use this to install with the Python pip installer:

pip install awscli

Or use Homebrew on a Mac:

brew install awscli

Once installed, you can use the following to create a default AWS profile for your account. Remember that access key you created earlier? You’ll need it now. Run the following:

aws configure

This tool will ask you a few questions like this:

$ aws configure
AWS Access Key ID [None]: YOUR_KEY_ID_HERE
AWS Secret Access Key [None]: YOUR_SECRET_KEY_HERE
Default region name [None]: us-west-2
Default output format [None]: json

Copy and paste your access key id and secret key into the tool. This will create a couple of files in the hidden ~/.aws directory. The AWS CLI will use these configuration files when accessing your account. You can find even more detailed information on the AWS CLI configuration website.

Confirming Your AWS Client Installation

Once you’ve installed and configured the CLI correctly, you should be able to work with your account resources immediately. Give it a try:

aws iam list-users

This should return at least two users from your account, including the recently created deploy user. The output looks like this:

    {
        "Users": [
            {
                "UserName": "deploy",
                "PasswordLastUsed": "2016-03-25T04:34:00Z",
                "CreateDate": "2016-03-07T00:51:35Z",
                "UserId": "ABCD1234ABCD1234",
                "Path": "/",
                "Arn": "arn:aws:iam::123412341234:user/deploy"
            },
            {
                "UserName": "jsoconner",
                "PasswordLastUsed": "2015-04-14T04:41:18Z",
                "CreateDate": "2015-04-11T07:21:45Z",
                "UserId": "ABCD1234ABCD1234",
                "Path": "/",
                "Arn": "arn:aws:iam::123412341234:user/jsoconner"
            }
        ]
    }

If you got this far and are still reading, you’re ready to do something even better with the AWS CLI. In the next article, we’ll create a complete stack of resources using CloudFormation.

Thanks for reading. If you enjoy this type of content, please provide feedback.

Starting with Amazon Web Services

I recently began working with Amazon Web Services (AWS). AWS truly is an amazing set of services that allow you to create scalable, durable applications in the cloud. AWS provides a powerful set of capabilities to create, deploy, and manage cloud resources with a convenient command line interface (CLI) and a browser console. Over the next few weeks, I will explore some of these AWS capabilities and bring you along for the ride. I’ll tackle a few tasks in quick succession:
1. Create a virtual private cloud (VPC) for a demo app.
2. Create a security group to limit access to compute instances.
3. Spin up an elastic compute cloud (EC2) instance.
4. Deploy a sample application.

It should be both fun and educational…mostly for me, but I hope you’ll get something out of it too.


Headless Raspberry Pi

Although my very first experience with the Raspberry Pi was less than impressive, my second encounter proved fruitful. After following a YouTube tutorial, I learned that my initial 8GB micro SD card had NOOBS installed. While it is intended for newbies, my use case was apparently off-track enough to derail my noob attempt. I don’t have an extra monitor, keyboard, and mouse lying around the house, so I needed to interact with a “headless” Raspberry Pi using only my laptop, a network cable, and eventually a wifi adapter.

My first step was to burn my own Raspbian OS image onto a 64GB micro SD card. A 64GB card isn’t necessary; Raspbian doesn’t require anywhere near that much space, and an 8GB card will do nicely. It being Christmas, I just happened to have access to a new camera’s 64GB card…sorry Robyn (my spouse). I’ll replace it, promise. Pick up the OS image and instructions from the Raspberry Pi site.

Rather than recreate the entire experience here, I’ll point you again to the YouTube video above. It worked as advertised. Use Part 2 of the tutorial to add remote GUI and VNC abilities. Then check out Part 4 of that tutorial when you’re ready to add a wifi USB adapter to the mix. I’m not certain how long this process took, but it happened while re-watching Star Wars IV (A New Hope). So, there you have it. I’m up and running on my new Raspberry Pi!

First Experience With Raspberry Pi

Today my wife presented me with a Raspberry Pi CanaKit. Out of the box, I have:

  • the Pi itself
  • a power adapter
  • an HDMI cable
  • a micro-sd card
  • a Wifi adapter

It doesn’t have a keyboard or display of course. I figured the SD card had the OS on it, so I inserted the SD card into the Pi. Then I stuck a network cable into it, connected it to my router, and tried to find it on my network from my laptop. My idea was that I’d connect via ssh. No bueno.

Hmmm…what’s going on? Isn’t this thing booting automatically and connecting to the network?

Swallowing my pride, I picked up the small set of docs. It appears that no OS is running yet. Instead, I think it is running a boot loader called NOOBS, so I can’t SSH into it…yet. I think it’s just sitting there waiting for me to answer a couple questions before it loads an OS.

Sigh…maybe I’ll have to actually connect a monitor and a keyboard directly to it for the first boot and config step? Or maybe I’ll have to read page 2 of the docs? Heaven forbid.

Managing Translatable UI Text in JavaScript with RequireJS


Internationalizing a web application includes the task of extracting user interface text for translation. When text is separated from the rest of your application’s business logic, it is easier to localize and translate. Although the JavaScript language and browser environment don’t prescribe any particular method for creating externalized text resources, many libraries exist to help with this effort. One such library is RequireJS. The library includes an i18n plugin that helps you organize your text resources and load them at runtime depending on the needed language. The goal of this article is to describe how to use the RequireJS i18n plugin in a simple, single-page application.

This sample application has a single HTML file, index-before.html, that contains text headers and other UI elements. We will extract the text, put it into a separate resource file, translate that file, and load the translated files at runtime as needed by the customer’s language.

NOTE: This is not a RequireJS tutorial. The article assumes that you have some RequireJS knowledge.

Setting Up Your Environment

Download the js-localization project on Github. The project contains the source code for this article, allowing you to see code both before and after using the i18n plugin.

The project’s base directory is js-localization. Two HTML files are in this directory, index-before.html and index-after.html, which are the before- and after-internationalization files. The scripts subdirectory holds all JavaScript libraries for the project. All 3rd-party libraries, including RequireJS, are in scripts/libs. The primary application file is scripts/main.js. Externalized text bundles are in scripts/nls.

Understanding the Original Index File

The original file looks like this:

    <!DOCTYPE html>
    <html>
    <head>
        <meta charset="UTF-8">
        <title>Localization with RequireJS</title>
        <link href="styles/quotes.css" rel="stylesheet"/>
    </head>
    <body>
        <h1>Famous Quotations</h1>
        <p>I love deadlines. I like the whooshing sound they make as they fly by.</p>
        <p>Douglas Adams</p>
    </body>
    </html>

This file contains the following translatable items that we will extract for translation:

  • The title element contents
  • The h1 element contents
  • Two p element contents

Enabling Your HTML File

Include the RequireJS core library in your HTML file with a script element within the head section:

<script src="scripts/libs/require.js" data-main="scripts/main" charset="UTF-8"></script>

This script loads the require JavaScript file and also tells RequireJS to load your main module. The main module is the application’s starting point.

Creating I18n Text Modules

We need to pull out the text and put it into a separate text resource bundle. We will create text resource bundles in the scripts/nls subdirectory. RequireJS will look for resource bundles within the nls subdirectory unless you configure it otherwise. For our needs, we’ll create scripts/nls/text.js, put all exported text there, and provide a key for each string.

The updated HTML file maintains the same core document structure that is in the original file. However, we’ve removed the UI text. The HTML file, found in index-after.html, now looks like this without text:

    <!DOCTYPE html>
    <html>
    <head>
        <meta charset="UTF-8">
        <title id="titleTxt"></title>
        <link href="styles/quotes.css" rel="stylesheet"/>
        <script src="scripts/libs/require.js" data-main="scripts/main" charset="utf-8"></script>
    </head>
    <body>
        <h1 id="headerTxt"></h1>
        <p id="quoteTxt"></p>
        <p id="authorTxt"></p>
    </body>
    </html>

Where is the text? It’s in scripts/nls/text.js:

    define({
        "root": {
            "titleTxt": "Localization with RequireJS",
            "headerTxt": "Famous Quotations",
            "quoteTxt": "I love deadlines. I like the whooshing sound they make as they fly by.",
            "authorTxt": "Douglas Adams"
        }
    });

In general, each string in the original HTML file should be extracted into one or more nls resource bundles, which are really just JavaScript files. The sole purpose of a resource bundle is to define localizable resources. The files that define your primary set of language key-values are called root bundles. As part of a root bundle, a JavaScript file should define a root object that contains content for your application’s base language. This application’s root language is used when no target language can be found that matches the requested language.

Every piece of localizable text should have a key. The key is used to extract the original and translated text from the bundles. For example, headerTxt contains the label for “Famous Quotations”.

Adding Translated Text

Now that you’ve separated text into one or more resource bundles, you can send those files away for translation. For each target language, you will create a subdirectory in the nls directory. In this example, I used Google Translate to translate the text.js content into Japanese and Spanish. The nls subdirectories that contain translations must be named using standard BCP-47 language tags. The subdirectories for Japanese and Spanish are nls/ja and nls/es respectively. Because there is only one source file, there will be only one file in each translation subdirectory.
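As a sketch, the Spanish file nls/es/text.js defines the same keys with translated values, this time as a flat object without the root wrapper (the Spanish strings here are illustrative machine translations):

```javascript
define({
    "titleTxt": "Localización con RequireJS",
    "headerTxt": "Citas famosas",
    "quoteTxt": "Me encantan los plazos. Me gusta el sonido que hacen al pasar volando.",
    "authorTxt": "Douglas Adams"
});
```

The i18n plugin merges a file like this over the root bundle, so any key missing from a translation falls back to the root language.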

Informing the Library

You must inform the RequireJS library about the available translations. In the root bundle, you add a language tag key that matches each translation subdirectory name. We therefore update the nls/text.js file to register both the Japanese and Spanish translations like this:

    define({
        "root": {
            "titleTxt": "Localization with RequireJS",
            "headerTxt": "Famous Quotations",
            "quoteTxt": "I love deadlines. I like the whooshing sound they make as they fly by.",
            "authorTxt": "Douglas Adams"
        },
        "es": true,
        "ja": true
    });

For each translation, you should include an additional language tag key in the root bundle. Since our root bundle nls/text.js has been translated into both Japanese and Spanish, we include those language tags and set their value to true, indicating that the translation exists.

Configuring RequireJS

RequireJS determines your customer’s language in one of two ways:

1. It uses your browser's language setting via the `navigator.language` or `navigator.languages` object.
2. It uses your customized, explicit configuration.

The navigator.language property is generally available across all browsers and represents the preferred language that is typically configurable in your browser’s language settings. This may be a reasonable default, but I don’t recommend that a professional, consumer-facing application rely on this setting alone.

Instead, you should explicitly configure RequireJS to use a language that you select for the customer, with navigator.language as a backup perhaps. Your RequireJS configuration should be in the primary application JavaScript file. In our case, the scripts/main.js file contains this configuration as well as our application code.

If you set the i18n.locale configuration option for the i18n plugin, RequireJS will use that setting as your application’s language. By setting the value of this field, you control what language RequireJS will attempt to use. Set the language/locale option in the main.js file like this:

    require.config({
        config: {
            i18n: {
                locale: "en"
            }
        }
    });

In an actual application, you will not hard-code this locale setting. Instead, you will determine your customer’s language another way, perhaps using navigator.language as a default.

Accessing the Resource Bundle

Once things are configured, using the resource bundle is easy. You just have to include it as a module in your main.js file. Then, you access each key using the name you give the module.

define(["jquery", "i18n!nls/text"], function($, text) {

    // pull the text from the bundle and insert it into the document
    $("#titleTxt").text(text.titleTxt);
    $("#headerTxt").text(text.headerTxt);
    $("#quoteTxt").text(text.quoteTxt);
    $("#authorTxt").text(text.authorTxt);
});

In the above case, I’ve required two modules: jquery and “i18n!nls/text”. When you require a module using a plugin, you must append a “!” to the plugin name. After the plugin name, append the root resource bundle path. In this case, even though we have three languages, we point RequireJS to the root bundle. The i18n plugin will read the root bundle and discover the additional supported translations.

Since the code uses text as the module name, we can retrieve the text values by simply referencing the keys in the bundle. For example, if we want the quoteTxt, we reference it with text.quoteTxt in our code. The above code uses this technique to populate all the UI text in our simple HTML file.

Demonstrating the Plugin

We’ve set up the environment, configured the plugin, translated a file, and modified our application so that it pulls translated text from the bundles. Now let’s see it work. You shouldn’t need any additional files or tools. Just point your browser to the index-after.html file on your local drive. If you’ve not changed anything, you should see the following English content:

[Screenshot: the quote page rendered in English]

Now if you update the main.js file and change the i18n.locale setting to ja, you will see the next image. Remember, this is not a professional translation and is only used for an example.

[Screenshot: the quote page rendered in Japanese]


Although JavaScript has no predefined framework for providing translatable resources, it is reasonably easy to use a library like the RequireJS i18n plugin to help manage UI text strings. Interestingly, the Dojo libraries work similarly to RequireJS. So, if you’re using Dojo, you will manage translations in your application in much the same way.

One of the most interesting parts of JavaScript internationalization is the question of how to determine the user’s preferred language. I’ve written about this before, so instead of handling that question here, I’ll refer you to Language Signals on the Web.

Good luck in your internationalization efforts. Like anything else, the hardest part is just getting started. Hopefully this article makes that first step easier.

Do You Know What Countries are in Western Europe?

Different people and organizations define geographical regions in almost the same way, but there are differences. Consider the different regions of Europe for example.

What countries do YOU think are in Western Europe? Are you sure one or two of those aren’t in Southern Europe? How do you decide?

According to the United Nations, nine countries make up “Western Europe”. Can you name them? I’ve included an image below to help.

[Map: Western Europe, with country names omitted]

You can learn more about how the UN defines world macro regions in its M.49 document.

Java and BCP 47 Language Tags

Since Java 7, Java’s Locale class has been updated to take on the role of a language tag as defined in the RFC 5646 and BCP 47 specs. The newer language tag support gives developers the ability to be more precise in identifying language resources. The new Locale, used as a language tag, can identify languages in this general form:

language[-script][-region][-variant][-extension][-privateuse]

Of course, you can continue to think of a locale as a lang_region_variant identifier, but Java now uses the RFC 5646 spec to enhance the Locale class to support language, script, broader regions, and even newer extensions if needed. And if the rules for generating this string of identifiers seem intimidating, you can use the Locale.Builder class to build up the tag without worrying about forming it incorrectly.

The primary language identifier is almost the same item you’ve always known; it’s an ISO 639 2-letter or 3-letter code. The spec recommends using the shortest id possible.

The script is new. You can now add a proper script identifier that specifies the writing system used for the language. People can use multiple writing systems to write a language. For example, Japanese is written in several scripts: kanji, hiragana, katakana, and even “romaji” or Latin script. Serbian is another language often written in either Latin or Cyrillic characters.

The region identifier was once limited to 2-letter ISO 3166 codes, but now you can also use the United Nations 3-digit macro geographical region codes in the region portion of a language tag. A macro geographical region identifies a larger region that comprises more than one country. For example, the UN currently defines Eastern Europe to be macro region 151 and includes 10 countries within it.

[Map: UN macro region 151, Eastern Europe]

Finally, you can use variant, extension, and privateuse sub-tags to provide even more context for a language tag. See RFC 5646 for more details on these. I suggest that you also use the Locale.Builder class to assist if you need to use this level of detail.
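As a small sketch of the Builder approach, here is the Serbian case mentioned above (the variable and class names are just for illustration):

```java
import java.util.Locale;

public class LanguageTagDemo {
    public static void main(String[] args) {
        // Serbian, written in Latin script, as used in Serbia (region RS)
        Locale serbianLatin = new Locale.Builder()
                .setLanguage("sr")
                .setScript("Latn")
                .setRegion("RS")
                .build();

        // Prints the well-formed BCP 47 tag: sr-Latn-RS
        System.out.println(serbianLatin.toLanguageTag());
    }
}
```

The Builder validates each sub-tag as you set it, throwing IllformedLocaleException on bad input, so you can’t accidentally produce a malformed tag.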

Take a look at the Locale documentation for all the details on using these new features. They definitely give you much more control of how you identify and use language resources in your internationalized applications.

Getting started as an Android developer

Android is the most accessible and least expensive mobile platform for developers to get started with. Although you may eventually want to publish your apps on the Google Play Store, which requires a minimal fee, getting and using the developer tools is free.

Here’s how to get started:
1. Download Android Studio.
2. Turn on the developer options on your device.
3. Head over to the Getting Started site to begin.

Downloading the Android Studio IDE

Google has partnered with JetBrains to create an integrated development environment (IDE) that’s customized for Android development. Of course you can continue to use whatever environment you like as long as you have the SDK, but this IDE makes getting started simple.

Download Android Studio.

Enable Developer Options

Although Android Studio has great emulators, you should also test your apps on actual devices. Enable those devices for debugging and developer support by tapping 7 times on your device’s “Build number” label. Yes, you heard right. Use these instructions to find this on your device.

The Getting Started Guide

The best information for getting started is the Android site itself. It has great tips for new users and veterans. I recommend starting on the developer’s training site.