Niklas Laxström

Doing stuff with language and translation.


My presentations at Akademy and Wikimania

In July I gave two presentations: one at Akademy 2012 in Tallinn, and one at Wikimania 2012.

Short summary of my Akademy presentation (slides): If you are translating content in MediaWiki and you are not using the Translate extension, you are doing it wrong. Statistics, a translation interface and a proofreading interface – you get them all with Translate. Because Translate keeps track of changes to pages, you can spend your time translating instead of trying to figure out what needs translating or updating.

Also, have a look at UserBase: it has now been updated to include the latest features and fixes of the Translate extension, like the ability to group translatable pages into larger groups.

Akademy presentation by Niklas and Claus: click for video. Yes, there’s a typo.

Short summary of my Wikimania presentation (slides; video not yet available): Stop wasting translators’ time.
Forget signing up for e-mail lists, forget sending files back and forth. Use translation platforms that move files to and from the version control system transparently for the translator.
If you have sentences split into multiple messages, you are doing it wrong. If your i18n framework doesn’t support plural-, gender- and grammar-dependent translations, you are doing it wrong. If you are not documenting your interface messages for translators, you are doing it wrong.
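To make these points concrete, here is a minimal sketch of what doing it right looks like in MediaWiki’s own i18n system, where plural and gender live inside a full-sentence message instead of being glued together in code. The message key and wording below are invented for illustration; wfMessage(), PLURAL and GENDER are real MediaWiki constructs.

```php
<?php
// Hypothetical message definition (English) – one full sentence:
// 'example-upload-summary' => '{{GENDER:$1|$1}} uploaded $2 {{PLURAL:$2|file|files}}.'
// Documentation for translators would live in the 'qqq' pseudo-language.

$userName = 'Nike'; // drives GENDER via the user's gender preference
$count = 3;         // drives PLURAL per the target language's rules

// Inside MediaWiki, code only supplies the parameters; each language
// picks its own plural and gender forms when the message is expanded.
$text = wfMessage( 'example-upload-summary' )
	->params( $userName )
	->numParams( $count )
	->text();
```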

Niklas maybe having fun at the Library of Congress. Photo tychay, CC-BY-NC-ND

Translation sprint for KDE in Finnish

On our sprint website we’re translating the upcoming KDE SC 4.9 release into Finnish. If you know Finnish, you only have to register to start translating: please join us!
We have a simple goal: translate 10,000 new messages and have all the changes proofread and accepted. In two weeks we have translated more than 3,000 messages, and the majority of them have been proofread and accepted. We still have about three weeks to go, so your help is needed to reach the goal of 10,000 new translations. As a secondary activity we are also proofreading the existing translations and discussing and harmonizing the terminology – for example, should filter be suodin or suodatin?

Keep reading if you are interested in how we organized the sprint from a technical perspective.

This is the second translation sprint I’m organizing with the Translate extension. The first one was in March, when we translated Gnome 3.4 into Finnish; this time we are translating KDE 4.9. I can say that the Translate extension fits this purpose pretty well:

  • You can set up everything in a few hours.
  • There are minimal barriers to start using it (we do require registration).
  • It is suitable for novice translators, because they get feedback when other people proofread and correct their translations.

It is not without its issues either, but I see this as a great opportunity to make the MediaWiki Translate extension even better and have it support a variety of use cases. Let me describe some.

Bugs. There are always some bugs. This time I found a regression in the workflow states feature: the recent changes weren’t backwards compatible with the old configuration format. That was quickly fixed, and I also submitted fixes for a few minor issues that had not been encountered before. All in all I have 7 local patches, mostly small behaviour changes like the formatting of message keys or showing the message context field to translators. Most of those can be cleaned up and submitted for merging.
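For the curious, here is my guess at the shape of that incompatibility; the exact configuration keys and values below are assumptions for illustration, not the extension’s documented format.

```php
<?php
// Assumed old format: a state name maps directly to a colour string.
$wgTranslateWorkflowStates = array(
	'proofreading' => 'FFBF00',
	'ready'        => 'CCFFCC',
);

// Assumed new format: a state name maps to an array of options, leaving
// room for more than just the colour. Code expecting an array here
// would break on the old string values – a classic compatibility trap.
$wgTranslateWorkflowStates = array(
	'proofreading' => array( 'color' => 'FFBF00' ),
	'ready'        => array( 'color' => 'CCFFCC' ),
);
```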

Scalability. I had the impression for a long time that the Translate extension scales up pretty well. After all, we have thousands of message groups and 50k messages translated into hundreds of languages at translatewiki.net. How naive I was. All of KDE as we use it (stable and trunk branches merged, including playground, extragear, Calligra and other related stuff) contains 200k messages. It turns out that our import tools choke when you try to feed them 350k new messages at once (this includes the Finnish translations). As a workaround I had to limit the number of messages that are processed at once and iterate over the whole process multiple times. This is where the bulk of my time was spent. Of course I also ran out of disk space in the middle of the import: it takes about 1 GB of space, but currently I have only a tiny 10 GB disk on the server.
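The workaround itself is ordinary batching. A minimal sketch, assuming a hypothetical importTranslations() helper in place of the real import scripts:

```php
<?php
// Process the messages in bounded batches instead of all 350k at once.
$all = file( 'messages-to-import.txt' );   // one message per line (simplified)
$batches = array_chunk( $all, 10000 );     // cap what one iteration handles

foreach ( $batches as $batch ) {
	importTranslations( $batch );          // hypothetical import step
	// Give the database a chance to catch up (e.g. replication lag)
	// between iterations instead of running one enormous transaction.
	wfWaitForSlaves();
}
```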

Search. The most requested feature is better search. Currently it is not possible to limit the search to a message group, nor to see the translation when searching source texts or the source text when searching translations. It also takes a few clicks before you can edit a message from the search results. Building a good search backend is on the backlog of the Wikimedia Localisation team, but it is not yet scheduled for any sprint.

Stay tuned for the results of the KDE Finnish translation sprint.

Report from the Multilingual Web Workshop

I attended the W3C Workshop on the Multilingual Web with Gerard Meijssen for the Wikimedia Localisation team. Aside from the long list of new things you learn at every conference, this time I was surprised by the number of links that appeared between things I already knew. For example, META-SHARE was mentioned multiple times in different contexts.

Presentations. The workshop was split into two days. The first day was packed with short presentations from participants. Some observations:

  • A keynote about the semantic web and how it can help us reach a multilingual web.
  • Microsoft presented their translation toolkit. It didn’t seem to include translation management at all: “You can then send the empty translation file by email”. Also, in the example application mph was not localised to km/h.
  • There was a poster presentation about open source language guessers. We do tag the language used on Wikipedia pages, but still most of the guessers didn’t get it right. To me this says that there is training data out there, but nobody bothers to use it.
  • New language-related features in HTML5 (bidirectional text, ruby, the translate flag) ignited lots of discussion: they were welcomed, but people wanted to do more.
  • XSL-FO is still years ahead of CSS by having the direction-neutral start, end, before and after keywords. That is one of the few features I like in that language.
  • Some WTF moments: “unicode languages”, using flags for languages and locales, and one of the best practices for bidirectional text being “avoid using it”.

Open linked data. There is a big demand for all kinds of linguistic data. One of the discussion groups on the second day was about linked open data. It was emphasized that open data means that the data is in a standard format, not tied to one application. But for me an explicit open license is more important, since it allows converting the proprietary format into other formats and *distributing* them.

Open linked data. Links are the other side of linked open data. Links were said to be as important as the data itself, which is easy to agree with: what would Wikipedia be without links? The number of links is increasing, but currently they are clustered into centers. Links are crucial for discovering what data is actually available, and projects like META-SHARE do their part there too. For me this compares closely to the UNIX philosophy of having each tool do one thing and do it well.
An example of this idea is the Bank of Finnish Terminology in Arts and Sciences: contributors are encouraged to write short definitions for terms, while long explanations are better suited for inclusion in Wikipedia. We are also using Semantic MediaWiki to increase the links inside the data itself.

Open linked data. A type of linked open data I would like to see is translation memory data. This is also something that open source and open content projects, including Wikimedia and translatewiki.net, can contribute, since we have lots of translations that can be used to build translation memories and parallel corpora. Have you ever wanted to compare the same text in 50+ languages? We have it. I also see nice post-processing possibilities to increase the usefulness of the data by doing sentence or even word level alignment; we only have paragraph alignment for now.

Updates on translation review feature of Translate extension

About three months ago I blogged about the translation review feature that we developed for the Translate extension. It is time to have a look at how it has been received. Thanks to Siebrand Mazeland we can now draw graphs of review and reviewer activity. This feature came just in time for the Gnome 3.4 Finnish Translation Sprint that I’m organizing. If you look at its main page, you can see graphs of translation and review activity. The activity isn’t exactly over the top, so if you speak or can translate into Finnish, please join and help us.

I’m aware of three places using this feature: translatewiki.net, the Wikimedia Foundation and the translation sprint mentioned above. At translatewiki.net the review ability is not as open as I originally envisioned it to be: only experienced translators can get it, by request. Only about 2% of the over 3,500 registered translators currently have the review right there. In the other two places, everyone who can translate can also review.

When looking at the graphs for translatewiki.net we can see without doubt that reviewing activity is not yet anywhere near the translation activity, and we should consider that there is a huge backlog of previous translations that should also be reviewed. We don’t even see a steady growth in the review activity (around the turn of the year we had a translation sprint which temporarily increased translation and review activity above normal levels). We don’t have graphs for Wikimedia projects yet, but judging from the logs the review feature seems to be in relatively more active use there. I would personally like to see every new translation from now on reviewed by at least one other user.

The next step would be to add a review level column to the Special:LanguageStats and Special:MessageGroupStats pages. That needs some thought on how to convey both quantity and coverage: for example, a hundred translators reviewing the same message doesn’t mean that the review coverage is good. Perhaps we should just start with coverage and bring in quantity later. This could be a nice small project for someone who wants to help develop the Translate extension with support from us.
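To illustrate the quantity-versus-coverage distinction, a small sketch with invented inputs: quantity counts review actions, coverage counts messages that have at least one review.

```php
<?php
// $reviewsPerMessage: message key => number of reviews it has received.
function reviewStats( array $reviewsPerMessage, $totalMessages ) {
	$quantity = array_sum( $reviewsPerMessage );              // total review actions
	$reviewed = count( array_filter( $reviewsPerMessage ) );  // messages with >= 1 review
	$coverage = $totalMessages ? $reviewed / $totalMessages : 0;
	return array( 'quantity' => $quantity, 'coverage' => $coverage );
}

// A hundred reviews of a single message in a 500-message group:
// impressive quantity, terrible coverage (0.2%).
print_r( reviewStats( array( 'msg1' => 100 ), 500 ) );
```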

My take-away from Open Advice

I told my friend Nemo that I have been reading the recently published Open Advice book, and he basically forced me to write a review of it. This isn’t really a review, but rather some things the book made me think about. When I started reading the book I expected to get some simple tips on how I could do things better, or new things I could do. Well, I didn’t get those, but I got something else.

The book consists of many short stories about open source, told from different starting points – each written by a different author. It was nice to notice that among the writers there were many whom I’ve met, or at least whose name and work I knew. Most of the stories didn’t tell me anything new, and the section about translation was annoyingly short on content. The book is still worth reading, especially since each story is short and easy to digest.

When I read the following passage in Markus Krötzsch’s Out of the Lab, into the Wild, I started thinking.

When a certain density of users is reached, support starts to happen from user to user. This is always a magical moment for a project, and a sure sign that it is on a good path.

I have been developing the Translate extension (and by extension translatewiki.net too) for many years now, but apart from seeing it being used more and more, I haven’t really stopped to think about what it means for a software project to grow up and be successful. So I made up some milestones:

  1. You write something for yourself
  2. Other people find it useful and start using it
  3. The users of your software provide peer-to-peer support
  4. Other developers are able to take over maintenance and development of the software

Now we have something we can measure. I started writing Translate over five years ago. Some years later there were already tens of translators using it. This year the Translate extension is used on many Wikimedia projects as well as on KDE UserBase, in addition to translatewiki.net. Lots of new people need to learn how to use the Translate extension from a management point of view, and more and more often they get an answer not from me but from someone else, or by reading the documentation.

So what about step 4? Until very recently Translate has been my world and mine only, apart from some patch contributions. But I have now taken it as my personal goal to change this. And what a lucky person I am! The Wikimedia Localisation Team – which I am a member of – has the development of the Translate extension as one of its major goals. Even better, we are an agile team, which means that every developer on the team should be able to do any development task. To achieve this we divide tasks among team members so that nobody works only on their own favourite project. In addition we explicitly reserve time for knowledge transfer, which happens through code review, proofreading the documentation one of us has written, explicit sessions where a team member covers a topic they know well, and pair programming. This has already been going on for some months and it is not going to stop.

In addition to schooling the other developers in our team, I also plan to keep expanding the documentation, adding more tutorials and organizing tasks suitable for new developers, so that it is easy for interested volunteer developers to start contributing to Translate. Because in the end knowledge is useless if the developer has no reason to develop, and the best reason to develop is to scratch your own itch. I believe those developers are to be found among the users of the Translate extension who have a slightly different and new use case which needs development work.

I haven’t yet finished my plans on the fifth step (world domination), so stay tuned for coming blog posts.

New UIs in MediaWiki Translate extension

I’m not a designer. Yet, I am a designer. During the many years of developing the Translate extension, I have done almost everything related to running a software project: coding, translating, documenting, testing, system administration, marketing and user interface (UI) design, among other things. My UI design skills are limited to personal interest and one university course, but I try to pay attention to the UIs I create, and I listen for feedback. For once we got some good feedback about the issues in the current UIs and some suggestions about how to improve them.

Based on this feedback I have made two significant changes to Special:Translate – the main translation interface of the Translate extension. The first significant change is to split the page into a few different tasks: translating, proofreading, statistics and export. I implemented these as tabs. Typically the user starts from the language statistics and selects the project they want to translate or proofread. This has the following benefits:

  • The tasks are clearly separated: users can see at a glance what can be done with the interface.
  • Switching between tasks is seamless: previously there was no easy way to go back to language statistics from translating or proofreading.
  • There are fewer visible options at a time: the UI just looks nicer and takes less space.

The second change is an embedded translation editor. This feature is still in a beta phase, and if we get enough positive feedback about it, we will switch over from the old popup-based editor. You can test the editor by going to Special:Translate and double-clicking the text you want to translate. This should remove the hassle of moving and resizing dialogs. On the other hand, the editor can shift on the screen when you advance to the next message, and it stands out less from the surrounding context. I’m investigating if and how we can mitigate these issues; I’ve already changed some styling to make the editor stand out more and the whole table appear less heavy. As a bonus the embedded editor feels faster, because I’ve added some preloading: when you save your translation and go to the next message, it shows up instantly because it has already been loaded.

Exploring the state(s) of open source search stack supporting Finnish

In July 2011, before starting my Wikimedia job, I completed my master’s thesis. I have finally spent some time polishing and submitting it, which means that I will graduate!

In my thesis I investigated the feasibility of using a Finnish morphology implementation with the Lucene search system. With the same Lucene-search package that is used by the Wikimedia Foundation I built two search indexes: one with the existing Porter stemming algorithm and the other one with morphological analysis. The corpus I used was the current text dump of Finnish Wikipedia.

Finnish is among the group of languages with a rich and extensive morphology. For English speakers, this means that instead of using prepositions, our words actually change depending on the context they are in. This makes exact pattern matching in search mostly useless, because it only matches a fraction of the inflected forms. In Finnish, nouns, verbs and adjectives can each have over a thousand different forms when combining all the cases, plural markers, possessive suffixes and other clitics.

Simple stemmers have no or very limited vocabulary, and they strip letters off words according to rules. A morphological analyser instead comes with an extensive word list and can find all the possible interpretations of a given inflected word, and only those. The morphology is based on the Omorfi interpretative finite state transducer, which returns the basic dictionary forms of the inflected words given as input. The transducer I used was brand new: Omorfi is the first open implementation of Finnish morphology.
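To show the difference in behaviour, here is a deliberately toy-sized contrast in PHP. Real stemmers and Omorfi are of course far more sophisticated; the rules and the two-entry lexicon below are invented for illustration.

```php
<?php
// A stemmer strips endings by rule, with no vocabulary to check against.
function toyStem( $word ) {
	return preg_replace( '/(iden|ssa|lla|t)$/u', '', $word );
}

// An analyser looks up the surface form and returns every valid
// dictionary lemma – and only those.
$toyLexicon = array(
	// 'teiden' is the genitive plural of both 'tee' (tea) and 'tie' (road)
	'teiden'  => array( 'tee', 'tie' ),
	'talossa' => array( 'talo' ),   // 'in the house' -> 'talo' (house)
);

function toyAnalyse( $word, array $lexicon ) {
	return isset( $lexicon[$word] ) ? $lexicon[$word] : array();
}

echo toyStem( 'teiden' ) . "\n";                 // "te" – not a word at all
print_r( toyAnalyse( 'teiden', $toyLexicon ) );  // tee, tie
```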

From a technical perspective I came up with seven requirements for the new algorithm and its implementation (thanks to help from Roan and Ariel at Wikimedia) before it can be deployed in Wikimedia:

  1. it has to be open source,
  2. the code must be reviewed,
  3. the performance should be on par with the current system,
  4. it must be stable, with no crashes or bugs that require reindexing whole wikis,
  5. it must be easily installable with dependencies,
  6. searching must not be harder and the search interface must not change,
  7. it must return improved search results.

Now I will tell you how well it met these requirements.

  1. Omorfi and the lookup utility I use to drive the transducer are both open source (GPL and Apache).
  2. Code review might be tricky due to lack of resources at Wikimedia. However, we’re not at this stage yet.
  3. Indexing time is five to ten times slower, but searches are about as fast, and the search index size grew only by 10 to 20 percent. Since indexing is done only once, that is not a big deal. The speed can be improved, though: the lookup utility is not optimized.
  4. I got some out-of-memory errors and crashes while developing the system – the components I used were very new, and I was usually their first user.
  5. The lookup utility is a simple Java library and the transducer is just a file – easy to install or bundle.
  6. The search syntax and interface have not changed at all.
  7. And the most important point: the quality of search results. The Wikimedia Foundation provided me with a corpus of actual search queries; I ran them on both indexes and analysed the variations in the results they gave. I got very mixed results, with many searches performing significantly better and many significantly worse. This is probably explained by a major mistake I found in my own implementation: the alternatives proposed by the morphology sometimes got full weight when they matched the searched keyword. For example, searching for tee (tea) returned many pages containing the inflected form teiden, which can be the genitive plural of tee or of tie (road), or the word teesi (thesis), which was interpreted as tee with a possessive suffix (your tea). The problem could be solved by marking the interpreted words with a % prefix, so that they wouldn’t get as much weight as real exact matches in the document (see the sketch after this list). I was not able to make this fix during my thesis, but it would be the first thing to try among the ample possibilities for further research.
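A sketch of that proposed fix follows. The % convention is from the thesis; the indexing function and its name are invented for illustration.

```php
<?php
// Index the exact surface form at full weight, and every
// morphology-proposed lemma behind a marker prefix so it can be
// scored lower than a genuine exact match.
function indexTokens( $surfaceWord, array $lemmas ) {
	$tokens = array( $surfaceWord );      // exact form: full weight
	foreach ( $lemmas as $lemma ) {
		if ( $lemma !== $surfaceWord ) {
			$tokens[] = '%' . $lemma;     // expansion: reduced weight
		}
	}
	return $tokens;
}

// 'teiden' would be indexed as: teiden, %tee, %tie – a search for
// 'tee' then hits only the down-weighted expansion, not a
// full-weight exact match.
print_r( indexTokens( 'teiden', array( 'tee', 'tie' ) ) );
```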

Even with the problems I encountered in my research, I believe this approach is viable and could – with further improvements – replace the current stemmer algorithm.
This was the first time that open content, an open search engine and an open Finnish morphology were put together.

The thesis (PDF) is written in Finnish, but I’m happy to tell you more about it. Just ask!

New translation memories near you soon

In the last sprint I developed a translation memory server in PHP almost from scratch. Well, it’s not really a server: it runs inside MediaWiki during client requests. It closely follows the logic of tmserver from the Translate Toolkit, which uses Python and SQLite.

The logic is pretty simple: you store all definitions and translations in a database, and then you can query suggestions for a given text. We use string length and fulltext search to narrow down the initial list of candidate messages. After that we use a text similarity algorithm to rank the suggestions and do the final filtering. The logic is explained in more detail in the Translate extension help.
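A sketch of that candidate filtering, with invented table and column names; the real schema differs. The key observation is that an edit distance within a similarity threshold bounds how much the lengths can differ, so most rows can be excluded before any expensive comparison.

```php
<?php
$text = 'Save the page';
$len = mb_strlen( $text );
$threshold = 0.6;  // minimum similarity worth suggesting (assumed value)

// If similarity must be at least $threshold, candidate lengths are
// bounded on both sides – everything else can be skipped outright.
$minLen = (int)floor( $len * $threshold );
$maxLen = (int)ceil( $len / $threshold );

// Hypothetical schema: tm_entries( source, source_length, source_fulltext, target )
$sql = "SELECT source, target FROM tm_entries
	WHERE source_length BETWEEN $minLen AND $maxLen
	AND MATCH( source_fulltext ) AGAINST ( 'save page' )";

// The survivors are then ranked with an edit distance measure and cut
// at the threshold (the distance function is sketched a bit further below).
```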

PHP provides a text matching function, but we (Santhosh) had to implement a pure-PHP fallback for strings longer than 255 bytes or containing anything other than ASCII. The pure-PHP version is much slower, although that is offset a little because it operates on characters rather than bytes, so it does less work on multibyte text. More importantly, it works correctly even when not handling English text. The faster implementation is used when possible. Before we optimized the matching process it was the slowest part; after those optimizations the time is now bound by database access. Both functions implement the Levenshtein edit distance algorithm.
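The 255-byte limit of PHP’s built-in levenshtein() is real; the dispatch and the character-based fallback below are a minimal sketch of the approach described above, not the extension’s actual code.

```php
<?php
// Use the fast C implementation when it is safe: short, ASCII-only input.
function editDistance( $a, $b ) {
	if ( strlen( $a ) <= 255 && strlen( $b ) <= 255
		&& !preg_match( '/[\x80-\xff]/', $a . $b )
	) {
		return levenshtein( $a, $b );
	}
	return mbLevenshtein( $a, $b );
}

// Classic dynamic-programming Levenshtein over characters, not bytes,
// so multibyte text is compared correctly (and with fewer cells than
// a byte-based comparison would need).
function mbLevenshtein( $a, $b ) {
	$a = preg_split( '//u', $a, -1, PREG_SPLIT_NO_EMPTY );
	$b = preg_split( '//u', $b, -1, PREG_SPLIT_NO_EMPTY );
	$m = count( $a );
	$n = count( $b );
	$prev = range( 0, $n );
	for ( $i = 1; $i <= $m; $i++ ) {
		$cur = array( $i );
		for ( $j = 1; $j <= $n; $j++ ) {
			$cost = ( $a[$i - 1] === $b[$j - 1] ) ? 0 : 1;
			$cur[$j] = min( $prev[$j] + 1, $cur[$j - 1] + 1, $prev[$j - 1] + $cost );
		}
		$prev = $cur;
	}
	return $prev[$n];
}

echo editDistance( 'suodin', 'suodatin' ); // 2
```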

End users won’t see much difference. Wanting a translation memory on Wikimedia wikis was the original reason for reimplementing the translation memory in PHP, and in the coming sprints we are going to enable it on the wikis where Translate is enabled (currently Meta-Wiki, mediawiki.org, Incubator and wikimania2012). It is just over 300 lines of code [1], including comments, plus the database table definitions [2].

Now, having explained what was done and why, I can reveal the cool stuff if you are still reading: there will also be a MediaWiki API module that allows querying the translation memory. There is a simple switch in the configuration to choose whether the memory is public or private. In the future this will also allow querying translation memories from other sites.
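As a taste of what querying such a module could look like from the outside. The module name ttmserver, its parameters and the response fields are my assumptions here, not a settled interface.

```php
<?php
// Hypothetical query against a public translation memory API.
$query = http_build_query( array(
	'action'         => 'ttmserver',   // assumed module name
	'sourcelanguage' => 'en',
	'targetlanguage' => 'fi',
	'text'           => 'Save the page',
	'format'         => 'json',
) );
$json = file_get_contents( "https://translatewiki.net/w/api.php?$query" );
$data = json_decode( $json, true );

foreach ( $data['ttmserver'] as $suggestion ) {
	// Assumed response fields: a similarity score and the stored translation.
	echo "{$suggestion['quality']}\t{$suggestion['target']}\n";
}
```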

Putting another pair of eyes to good use

This blog post is about the MediaWiki Translate extension and explains how we came to develop a new set of translation review tools.

One of the core principles at translatewiki.net is that the time of translators is a precious resource. We show our appreciation to translators by providing tools that let them concentrate 100% on the task at hand, and let the (volunteer) staff handle the boring tasks.

It is well known that good translators take pride in their own and others’ work. This may result in an urge to review all translations made by other translators. I consider myself that kind of translator. The good news is that in recent months the Translate extension has become massively better at supporting the reviewing of translations. Some weeks ago we added a new listing where you can click a button to accept a translation. When the list is empty, you know that all translations have either been made or fixed by you, or that you have accepted someone else’s translations.

This is all nice and dandy, but it is not practical if you want to review new translations as they come in. You’d either have to watch the list of recent translations or subscribe to its feed. From there you can get to the individual messages, but it takes many clicks to reach the page where you see the button to accept the translation. And iterating over each of the hundreds of message groups to see if there is anything to accept is not practical either.

The solution: a special message group which lists the recent translations in a given language. Only some of the translators are allowed to review; on the right you can see a screenshot of what the page looks like for them. One could bookmark this page and have a look at it a few times per week. For me this is a real time saver, and I’m sure others will find it useful too.

To get this implemented, I originally anticipated that some heavy refactoring was needed and estimated about one and a half days for it. In the end it took only about half a day – I was positively surprised by how painless the refactoring was. The problem was that the class which fetches all the messages from the database assumed they all belonged to the same MediaWiki namespace. At translatewiki.net we have over ten namespaces for translations of different projects, so it had to be fixed. I’d say this is a prime example of Donald Knuth’s saying that premature optimization is the root of all evil.

In the future we need to link this page from suitable places to make the feature discoverable, and also to make sure that more than the current 66 users out of 3,000+ translators get the right to use it.

MediaWiki grows up – no more playing with Lego

User interface messages that are built from pieces of text, or that leave some parts out of the message, are what we call Lego messages. The end result of this practice is not a glittering Lego castle; it is more like a shady shack with a leaking roof.

Major Lego message usage in MediaWiki will soon be in the past, as I have refactored the MediaWiki logging system and brought the code up to what we expect from internationalisation today. Instead of snippets like “moved X to Y”, translators can now work with full sentences like “U moved X to Y”. That makes it possible to change the message to “Page X was moved to Y by U”. Consider the languages where sentences don’t begin with the subject: for them, the old format must have been as awkward as “moved U X to Y” would be in English.

There is more: translations can now take into account the gender of the user who performed the action. English almost always gets away without taking sides in interface messages, but that is not the case in many other languages.
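As a sketch of what the refactored messages look like, modelled on MediaWiki’s move log entry; treat the exact key and parameter order as an approximation rather than a reference.

```php
<?php
// English message (a single GENDER form suffices in English):
// 'logentry-move-move' => '$1 {{GENDER:$2|moved}} page $3 to $4'
//
// Russian can inflect the verb by the performer's gender:
// '$1 {{GENDER:$2|переименовал|переименовала}} страницу $3 в $4'

// Assumed parameters: $1 link to the performing user, $2 plain user
// name for GENDER, $3 source page title, $4 target page title.
$text = wfMessage( 'logentry-move-move' )
	->params( $userLink, $userName, $oldTitle, $newTitle )
	->text();
```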

We already have many translations using these new possibilities:

  • English: Nike moved page Hapsen to Saalen
  • Welsh: Symudwyd y dudalen Hapsen i Saalen gan Nike
  • Russian (male): Nike переименовал страницу Hapsen в Saalen
  • Russian (female): Никa переименовала страницу Hapsen в Saalen