The other day I suggested that you use UTF-8 to encode your Java source code files. I still think that’s a best practice. If you can do that, you owe it to yourself to follow that advice.
But what if you can’t store text as UTF-8? Perhaps your repository won’t allow it. Or maybe you simply can’t standardize on UTF-8 across teams. What then? In that case, you should use ASCII to encode your text files. It’s not an optimal solution, but I can help you get more interesting Unicode characters into your Java source files despite the file-encoding limitation.
The trick is to use the native2ascii tool to convert your non-ASCII Unicode characters to \uXXXX escape sequences. Suppose you create a file containing UTF-8 text like this:
String interestingText = "家族";
You would then run the native2ascii tool on the file to produce an ASCII file that encodes the non-ASCII characters in \u-escaped notation like this:
String interestingText = "\u5BB6\u65CF";
In your compiled code, the result is the same. Given the correct font, the characters will display properly. U+5BB6 and U+65CF are the code points for “家族”. Using this type of \u-encoding, we’ve solved the problem of getting the non-ASCII characters into your text file and repository. Simply save the converted, \u-encoded file instead of the original, non-ASCII file.
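You can verify the equivalence yourself. In this small sketch (the class name is my own), one string uses the literal characters and the other uses only ASCII \u escapes; the compiler produces identical string constants for both:

```java
public class UnicodeEscapeDemo {
    public static void main(String[] args) {
        // Requires the source file itself to be saved as UTF-8.
        String literal = "家族";
        // Pure ASCII source; the compiler decodes the escapes
        // before it even tokenizes the program.
        String escaped = "\u5BB6\u65CF";
        System.out.println(literal.equals(escaped)); // prints "true"
    }
}
```

Because \u escapes are processed in an early phase of compilation, they behave exactly like the characters they name, anywhere in the source file.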
The native2ascii tool is part of your Java Development Kit (JDK). You will use it like this:
native2ascii -encoding UTF-8 <inputfile> <outputfile>
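If you are curious what the tool does under the hood, the core transformation is simple to sketch in Java. This is a minimal, hypothetical approximation of native2ascii (not the JDK tool's actual source): copy ASCII characters through unchanged and rewrite everything else as a \uXXXX escape.

```java
// A rough sketch of native2ascii's core transformation, assuming
// the input has already been decoded from UTF-8 into a String.
public class Native2AsciiSketch {
    static String toAsciiEscapes(String text) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < text.length(); i++) {
            char c = text.charAt(i);
            if (c < 128) {
                sb.append(c);                              // plain ASCII: pass through
            } else {
                sb.append(String.format("\\u%04x", (int) c)); // escape as \uXXXX
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toAsciiEscapes("String interestingText = \"家族\";"));
    }
}
```

Note that this works per UTF-16 char, so a character outside the Basic Multilingual Plane comes out as a surrogate pair of two escapes, which is exactly what the Java compiler expects.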
There you have it…an option for getting Unicode characters into your Java files without actually using UTF-8 encoding.