I stumbled onto an interesting article claiming Unicode will be a failure.
Speaking of Chinese characters: “Among these are Japan, Korea, Taiwan, and Vietnam. In the first three, Chinese still forms the backbone of all normal writing and speaking.”
I realize that the Korean language contains loads of borrowings from Chinese. It also contains loads from English and Japanese. But that doesn’t make their writing (한글, Hangul) dependent on Chinese, any more than it depends on hiragana or Latin letters.
As for the rest of the article: it makes many, many good points.
However, it seems to me that a more accurate title would have been “Why Unicode Won’t Work for Classical Oriental Literature in Some Digital Media.”
On the web, and in e-mail on macOS, iOS, or Windows, the only time I have a problem with Japanese, Chinese, Korean, Polish, French, Spanish, German, Greek, etc. is when someone who should not be permitted to produce web pages (or code e-mail clients) declares the wrong encoding or doesn’t declare one at all. And in those cases, I can easily correct it myself.
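To make that concrete, here is a minimal Python sketch of the mislabeling problem (the sample string and the choice of Latin-1 are mine, purely for illustration): UTF-8 bytes read under the wrong charset come out as mojibake, and re-reading the same bytes as UTF-8 recovers the text, which is all that “correcting it myself” amounts to.

```python
# Minimal sketch (hypothetical sample text) of what a wrong charset
# declaration does, and why it is the page's fault, not Unicode's.

text = "日本語 / Ελληνικά / Français"   # what the author actually typed
raw = text.encode("utf-8")             # the bytes actually sent

# A page or mail client labeled as Latin-1 decodes those bytes wrongly:
garbled = raw.decode("latin-1")
print(garbled)                         # prints gibberish, not the original text

# "Correcting it myself" is just re-reading the same bytes as UTF-8:
fixed = garbled.encode("latin-1").decode("utf-8")
assert fixed == text
print(fixed)                           # 日本語 / Ελληνικά / Français
```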
On the Mac, I have NO problems with file names, plain text, or word processors using UTF-8 in any of the languages I’ve mentioned. PDF does give trouble, but that’s Adobe’s failure, not Unicode’s.
There are problems with other Windows programs, but that’s because Microsoft can’t make up its mind which encoding scheme to use, and it lets each department act as if everybody else uses whichever one that department favors.
I don’t use Linux or Android much, but I’ve never had a problem with them, either.
SMS sometimes failed me, but that also isn’t Unicode’s fault—AT&T either discarded texts in UTF-8 or turned them into gibberish.
The article dates from long ago (2001), but I don’t recall having problems back then, except those still present in #$censored^# Microsoft Office (as of the last time I used it).