We would like to inform you of some of the issues in the translation process, so that you can understand the difficulties involved in translating European languages into Japanese.
Do you know?
Japanese use three different kinds of characters
A word expressed with three different kinds of characters.
We Japanese use three different kinds of characters.
One is HIRAGANA, the most basic character set in Japanese. Another is KATAKANA, which is used for words that come from abroad; katakana also shows you how to pronounce English terms in Japanese. The third is KANJI.
Normally we read and write with a mixture of these three character types.
When we localize a CHM help file, we have to re-sort all index entries, because alphabetical order is no longer effective once the TOC is translated.
In the case of documents, we have to insert a “yomi” (the reading that tells you how to pronounce the kanji) for each index entry so that the entries sort correctly according to Japanese rules.
This is extra work, and our client may incur additional costs.
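To give a concrete picture, here is a minimal sketch of this kind of yomi-based sorting in Python. The index terms and readings below are invented examples; in a real project, the yomi would come from a dictionary or from the translators.

```python
# Invented sample index entries: (term in kanji, yomi in hiragana).
index_entries = [
    ("印刷", "いんさつ"),   # "printing"
    ("保存", "ほぞん"),     # "saving"
    ("削除", "さくじょ"),   # "deleting"
]

# Kanji have no usable sort order of their own, so we sort by the kana
# reading instead; for hiragana, code-point order roughly follows the
# Japanese gojuon order.
for term, yomi in sorted(index_entries, key=lambda entry: entry[1]):
    print(f"{term} ({yomi})")
```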
Do you know?
One of the most complicated languages… Japanese
This is a comment from one of my friends who tried learning Japanese (and gave up).
The biggest struggle at the beginning is that you are not only dealing with new words, but also with learning Kanji (well, some at least) and Kana! That's a lot to start with!
Then the structure of the language is sooo difficult! Because you attach everything to the end of the verbs. There are no auxiliaries, like “I would like to eat” or “I must eat”. Instead, you change the verb into something like: tabenakerebanarimasen! That is VERY long!
Also, you negate the adjectives and adverbs themselves: shiroi/shirokunai!
Yes, she is right.
If you want to understand the speaker's intention correctly, you have to pay attention to the end of the sentence. We negate a whole sentence at the very end, while most European languages place the negation much earlier in the sentence.
At least our nouns do not have genders to remember.
Do you know?
“usage” of a manual is quite different between countries
Through handling manual translation projects, especially for consumer products, we have become aware that the “usage” of a manual is quite different between countries.
In our experience, Japanese people depend on manuals and refer to them whenever they do something for the first time. People from an English-speaking background, however, refer to manuals only when they encounter a situation they are unfamiliar with. In addition, they are not concerned if the actual user interface terms differ from those written in the manual. But Japanese people do care.
As a result, we Japanese are very keen to have precise glossaries. When we translate only the documents and OLH (online help) while the software or firmware was handled by other vendors, you may not be receiving the best quality work if the glossaries are not correct or up to date. If we have the opportunity to translate complete projects, including the software, OLH and other documents, this not only results in savings to you, our client, but also in a higher-quality, more consistent complete translation.
Do you know?
Post-Editing
Almost a year ago, one of our clients provided us with training for post-editing work. Before that, we had handled jobs in which machine translation was applied before handoff from the client, but those were not “post-editing”; they still belonged to the “translation” category.
It seems some source clients (apparently one of them is Microsoft) keep refining their machine translation engines, which now appear ready to be adopted in actual projects. It has long been said that machine translation cannot be used for Asian languages, especially Japanese, but with the data accumulated so far, recent translation engines can analyze source texts better than before, and good results can be expected, especially for strings of moderate length.
The quality of machine translation is quantified and measured as an “Edit Distance Ratio”: the amount of editing needed to turn the machine output into an acceptable human translation is counted, and if the result is below a specific value, machine translation is regarded as effective for that project.
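Our client did not disclose the exact formula, but a common way to compute such a ratio is the Levenshtein edit distance between the raw machine output and the final, post-edited text, normalized by length. The sketch below uses that assumed definition; the strings and the idea of a threshold are purely illustrative.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[-1]

def edit_distance_ratio(mt_output: str, post_edited: str) -> float:
    """How much of the final text had to be changed (assumed definition)."""
    return levenshtein(mt_output, post_edited) / max(len(post_edited), 1)

# Hypothetical example: if the ratio stays under some agreed threshold,
# machine translation is judged effective for the project.
ratio = edit_distance_ratio("ファイルを保存します", "ファイルを保存してください")
print(f"edit distance ratio: {ratio:.2f}")
```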
Machine translation is said not to be good, but it is good enough for some markets: for example, projects where cost reduction is critical, where speed comes first, or where the goal is just to grasp the content and high quality is not demanded.
This trend can be compared to the catchphrase of a fast food restaurant: “quick, cheap, and tastes good”. Machine translation, you might say, is “quick, cheap, and tastes bad”. But we have to admit that this service definitely has a market. A rather huge market, we would say.
We had thought machine translation would take jobs away from translators, but it seems it has created another category of work: post-editing. We localization vendors should be humble and accept our clients' quality needs, whatever they are.
This is really challenging, but we are not afraid of tough demands!
Do you know?
HP Cloud Compute
We handle the localization of enterprise-class software applications. One of the most important processes during localization is “Linguistic QA (LQA)”. At the translation stage, we translate strings, in most cases, without any context information. As a result, the translated strings may contain defects. In order to detect these defects, our client incorporates the translated strings into an actual build and provides us with the localized build. We install it in an environment that meets the minimum specs and check the build from a linguistic point of view. There are many items to be checked:
– localized strings match the context
– the end style of each string is proper
– no truncation or garbage characters
We would say this is one of the most important processes in determining the quality of the product.
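Some of these checks, such as truncation risk and garbage characters, can be pre-screened automatically before the manual pass. Here is a rough sketch of such a pre-check; the expansion limit and the suspect-character list are heuristics we made up for illustration, not part of any client checklist.

```python
MAX_EXPANSION = 2.5         # flag translations far longer than the source text
SUSPECT_CHARS = {"\ufffd"}  # U+FFFD usually indicates a broken character encoding

def lqa_precheck(source: str, translation: str) -> list:
    """Return a list of potential issues for one (source, translation) pair."""
    issues = []
    if len(translation) > len(source) * MAX_EXPANSION:
        issues.append("possible truncation risk: translation much longer than source")
    if any(ch in SUSPECT_CHARS for ch in translation):
        issues.append("garbage character detected")
    return issues

# Hypothetical string pair with a mojibake character in the translation.
print(lqa_precheck("Save file", "ファイルを保存\ufffd"))
```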
On the other hand, it is getting harder to obtain proper hardware resources. Enterprise-class software applications consist of huge and complicated architectures. In order to provide a test environment on which multiple testers can work without stress, it is necessary to prepare a server (or servers) with high specifications. Recently, the scale of the products has grown bigger and bigger, multiple products are being integrated, and a variety of output targets (such as mobile and tablet) is expected. Eventually, we gave up building these environments on our own virtual servers within our company network.
Instead, we decided to adopt IaaS (Infrastructure as a Service): a provider supplies virtual infrastructure (such as servers) that meets the necessary specs, and users access the environment via the internet. So-called “cloud computing”.
Currently, we are using Amazon EC2 (Elastic Compute Cloud). With this system, we can easily get servers, with no limit on their number or specs, only for the periods we need them. Once the project is finished, all we have to do is shut the system down. We pay exactly for what we used. Really convenient.
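To illustrate how simple this is, here is a minimal sketch using boto3, the AWS SDK for Python. The AMI ID and instance type below are placeholders, not the ones we actually use, and a real setup would of course add security groups, key pairs and so on.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a test server only for the duration of the LQA cycle.
response = ec2.run_instances(
    ImageId="ami-12345678",   # placeholder AMI, not a real image ID
    InstanceType="m3.large",  # pick whatever meets the project's minimum specs
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]

# ...and terminate it as soon as the project is finished, so we pay only
# for the time we actually used.
ec2.terminate_instances(InstanceIds=[instance_id])
```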
Now Hewlett-Packard has officially announced its move into this cloud service field with “HP Cloud Compute”. It adopts the open-source “OpenStack”, so users can use the system without depending on a specific vendor's products or technologies. The service is, at the moment, in beta, but it seems HP will enhance its functionality and make a full-scale entry into this field. I heard they are providing a free trial, which is very interesting indeed! I immediately registered for the free trial, but was told, “This service is already full. Please wait until the service is resumed.” A pity… I hope I will be able to introduce this system in the near future.