Monthly Archives: June 2008

The special Special Pages of extensions

The first phase of my Summercode Finland project is almost ready. Support for native Gettext projects is in the testing phase, and Xliff support is waiting for comments about which parts of the standard should be supported. In other words, there haven't been many changes to file format support lately. This week I fixed some bugs found in Gettext testing, which actually affected all groups regardless of file format. For some reason, every time I look at my code I find places to improve and clean up. I cleaned up the command line maintenance scripts and sprinkled in a few headers for copyright and so on. In the process I managed to introduce a handful of new bugs, but that always happens when I code :).

But let's talk about the post title. It means that the names of special pages shown in your browser's address bar are no longer sacred, but can be translated like almost everything else. Now that Firefox 3 has been released, many current browsers even display them nicely, and not in some unfriendly percent encoding like %D0%97%D0%B0%D0%B3%D0%BB%D0%B0%D0%B2%D0%BD%D0%B0%D1%8F_%D1%81%D1%82%D1%80%D0%B0%D0%BD%D0%B8%D1%86%D0%B0 instead of Заглавная_страница.
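For the curious, that percent form is just the UTF-8 bytes of the Cyrillic title escaped for URLs. A quick sketch in Python (not part of MediaWiki) shows the round trip:

```python
# Decode the percent-encoded page title from the address bar back into
# readable Cyrillic. This is plain URL decoding, shown here in Python.
from urllib.parse import unquote

encoded = ("%D0%97%D0%B0%D0%B3%D0%BB%D0%B0%D0%B2%D0%BD%D0%B0%D1%8F"
           "_%D1%81%D1%82%D1%80%D0%B0%D0%BD%D0%B8%D1%86%D0%B0")
print(unquote(encoded))  # Заглавная_страница
```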

Actually, we have supported this for a long time already, but only for MediaWiki itself and not for special pages provided by MediaWiki extensions. Special pages can have multiple aliases, and all of those can be used to access them, which means that they need some special handling. All of the complexity (yeah right… one do-while loop) is fortunately hidden behind a variable.
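The idea behind that loop can be sketched roughly like this, in Python with made-up alias data; the real code is PHP inside MediaWiki and this is only an illustration of the lookup, not its actual internals:

```python
# Hypothetical alias table: canonical special page name -> list of aliases
# (English name plus localized names). Data here is illustrative only.
ALIASES = {
    'Recentchanges': ['RecentChanges', 'Letzte_Änderungen'],
    'Allpages': ['AllPages', 'Alle_Seiten'],
}

def resolve_alias(title):
    """Return the canonical special page name for any of its aliases."""
    for canonical, aliases in ALIASES.items():
        for alias in aliases:
            if alias.lower() == title.lower():
                return canonical
    return None

print(resolve_alias('Letzte_Änderungen'))  # Recentchanges
```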

To make your extension support translation of special page aliases, you only need to add one line of code and create one file.

$wgExtensionAliasesFiles['YourExtension'] = $dir . 'YourExtension.i18n.alias.php';

And that file should look something like this:

<?php
/**
 * Aliases for special pages of YourExtension extension.
 */

$aliases = array();

/** English
 * @author YourName
 */
$aliases['en'] = array(
	'YourSpecialPage'          => array( 'YourSpecialPage' ),
);
At least the first instance of YourSpecialPage should be the same as the key you used for declaring your special page with $wgSpecialPages. Note that WordPress likes to mangle quotes, so it is not safe to copy-paste verbatim from the above.

All this was committed today, so there may still be some changes, as always with brand new code. And the good news does not stop there. I already rewrote Special:Magic of the Translate extension to support translating these! It already has two extensions defined: Translate and Configure. The number of supported extensions will probably grow soon.

Project progress

Had to spend some time maintaining Betawiki, so the progress has been a little slow for the past week. Aside from that I’ve been working on many things.

I have set up a test project for Gettext: a Wesnoth campaign. It is not shown to all, just to us few testers who are going to translate it using Betawiki. It has already helped to find some bugs and simplify the code, and the edit view got support for displaying information extracted from the pot file.

To make a project available for translation, it is not enough to just add it to the list. That part is easy: just checking out the files and about twenty lines of code. But to really support a project, we need to work closely with the development team and with the existing translation communities around it. It would be easy if we could just get everyone to use Betawiki immediately, but often some people don't want to use the web interface for one reason or another. We need to set up rules about which languages are translated where, to avoid conflicts; map our language codes to what the project uses; and set up some kind of integration process so that translations actually get to the upstream, and upstream changes propagate in a proper way back to us.

But back to the project. I have been reading the Xliff format specification. It's fortunately quite short and clear, and has nice examples. Xliff supports all kinds of nice features, and I have been trying to decide which of them we need to support. I wrote a simple implementation that can export translations in a minimalistic Xliff file. It was actually pretty easy to do, under 100 lines of code. It would be really good to get someone who uses programs that accept Xliff files to comment on which features would be useful to implement. In any case, I will implement a parser too this week, so that we can get those translations back too :).
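A minimalistic export along those lines can be sketched in a few lines. This is a Python illustration of the XLIFF 1.1 shape (file, body, trans-unit with source and target), not the extension's actual PHP code; the message data is made up:

```python
# Build a minimal XLIFF 1.1 document from a dict of key -> (source, target).
# Illustrative sketch only; the real exporter is PHP inside the extension.
from xml.etree import ElementTree as ET

def export_xliff(messages, source_lang='en', target_lang='fi'):
    xliff = ET.Element('xliff', version='1.1')
    file_el = ET.SubElement(xliff, 'file', {
        'source-language': source_lang,
        'target-language': target_lang,
        'datatype': 'plaintext',
        'original': 'messages',
    })
    body = ET.SubElement(file_el, 'body')
    for key, (source, target) in messages.items():
        unit = ET.SubElement(body, 'trans-unit', id=key)
        ET.SubElement(unit, 'source').text = source
        ET.SubElement(unit, 'target').text = target
    return ET.tostring(xliff, encoding='unicode')

print(export_xliff({'greeting': ('Hello', 'Hei')}))
```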

If the test project doesn't bring any big surprises, I will start preparing to tackle the next task in the project schedule.

Unproductive start for a week

Well, maybe unproductive is a bit of an overstatement, but considering I didn't advance much in my summer project, it wasn't very productive either.

Anyway, on Monday the internet connection was down for a good number of hours. I got fed up with it and started cooking! I don't usually cook for myself, so I'm not very good at it. It was tasty however, giving me courage to do it more often. I spent the rest of the day playing Sid Meier's Alpha Centauri with the expansion disc. I love that game!

Oh, and FreeCol 0.7.4 was released yesterday (Monday). It didn't go as well as I hoped. I was unable to commit the latest changes done after Sunday before the release, because the connection was broken. I hope there wasn't too much effort put into it after Sunday. Now that 0.7.4 is released and the branch officially dead, we have to finally migrate to the 0.8 branch. Most of the preparations have already been done. I wrote a script that tries to guess key mappings and other changes, so I have the list. In a few days we will rename all FreeCol messages: moving to an own namespace for FreeCol, removing the prefix and renaming old keys to new names. The keys have to be fixed in the files also. Expect a short downtime when it is not possible to translate FreeCol while we do these changes.
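The renaming step itself is mechanical once the mapping exists. A hypothetical sketch in Python (FreeCol stores its messages in Java .properties files; the keys and the prefix below are illustrative only, not the actual mapping):

```python
# Apply an old-key -> new-key map to lines of a .properties file.
# RENAMES is an illustrative stand-in for the generated mapping list.
RENAMES = {
    'freecol.model.unit.name': 'model.unit.name',  # made-up example keys
}

def rename_keys(lines, renames):
    out = []
    for line in lines:
        # Skip comments; properties entries are "key=value".
        if '=' in line and not line.lstrip().startswith('#'):
            key, value = line.split('=', 1)
            if key.strip() in renames:
                line = renames[key.strip()] + '=' + value
        out.append(line)
    return out

print(rename_keys(['freecol.model.unit.name=Soldier'], RENAMES))
```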

And then Tuesday. On Monday Tim Starling committed a change to MediaWiki code that moved files around. (Tim is btw my summer project mentor, but this is unrelated.) There was a short breakage when Siebrand tried to update the code normally, and it was quickly reverted. Today I committed most of our local changes again: a fix to comments; special page normalisation, try 2; a rewrite of Special:RecentChanges to add a few hooks; etc. I don't know how many bugs I introduced yet again, but let's hope not too many. Thanks to Ialex, who already fixed a few.

After that I put Betawiki into maintenance mode and started updating and merging changes. It really didn't help that my local shitty Internet Service Provider had 35% packet loss while doing it… ssh was irritatingly slow. I managed to do it in less than 10 minutes, and now we are back up and running, with fewer local changes.

The rest of the time was spent on eating (the same food as yesterday) and fighting with the papers to put in an application for a new place… I have to get away from the dorm before my head explodes. In the evening I will probably have to read up on the XLIFF format to implement support for it, or test the Gettext implementation, or write some documentation for it, or something else… who knows.

As a bonus, here are some nice features (or something like that) we got last week:

  1. Possibility to count fuzzy messages in group statistics
  2. Possibility to hide languages that have no translations at all
  3. First pieces of Gettext plural support
  4. Ability to blacklist some authors from the credits (mainly for bots, or those of us who do maintenance-like work)
  5. FreeCol got lots of optional messages
  6. Improvements to the RecentChanges filters that were implemented in the previous week and finally committed to the svn repository today.
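The fuzzy counting in item 1 rests on a plain fact of the po format: an entry is fuzzy when its flag comment ("#, …") contains the fuzzy flag. A small Python sketch of the idea (the real statistics code is PHP):

```python
# Count fuzzy entries in po text: a "#, ..." flag line listing "fuzzy"
# marks the entry that follows it as needing review.
def count_fuzzy(po_text):
    count = 0
    for line in po_text.splitlines():
        if line.startswith('#,'):
            flags = [f.strip() for f in line[2:].split(',')]
            if 'fuzzy' in flags:
                count += 1
    return count

sample = '\n'.join([
    '#, fuzzy',
    'msgid "Hello"',
    'msgstr "Hei"',
    '',
    'msgid "Bye"',
    'msgstr "Moi"',
])
print(count_fuzzy(sample))  # 1
```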

Gettext peculiarities

All kinds of weird things show up when looking at different po files. While implementing support for the standard way of doing plurals, I found that for example Wesnoth has comments inside the definitions themselves, which is quite unexpected, when the po format itself has designated syntax for many kinds of comments! Just look at this snippet taken from the Gettext manual:

     #  translator-comments
     #. extracted-comments
     #: reference...
     #, flag...
     #| msgid previous-untranslated-string-singular
     #| msgid_plural previous-untranslated-string-plural
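Classifying those comment kinds is simple: each kind is distinguished by the character right after the #. A Python sketch of the classification above (the actual parser in the extension is PHP):

```python
# Classify po comment lines by the markers from the Gettext manual:
# "#." extracted, "#:" reference, "#," flag, "#|" previous, bare "#" translator.
KINDS = [
    ('#.', 'extracted'),
    ('#:', 'reference'),
    ('#,', 'flag'),
    ('#|', 'previous'),
]

def comment_kind(line):
    for prefix, kind in KINDS:
        if line.startswith(prefix):
            return kind
    if line.startswith('#'):
        return 'translator'
    return None  # not a comment line

print(comment_kind('#, fuzzy'))               # flag
print(comment_kind('# left by a translator')) # translator
```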

I'm not sure what to do with those… It doesn't hurt to leave them as is… but that is counter-intuitive to translators… but handling them would make the code more complex… decisions.

Anyway, the plural support seems to kind of work. At least it parses nicely, but I still need to make sure the special handling doesn't break comparing messages (for import) and gets exported properly. Also, KDE3 uses its own kind of format for plurals, but I doubt it is worth making that a special case too.
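The standard form being parsed here is an entry with msgid, msgid_plural, and one msgstr[n] per plural form. A Python sketch of collecting the forms (illustrative only; the sample strings are made up):

```python
import re

# Parse the standard Gettext plural form: msgstr[0], msgstr[1], ...
# Returns the plural forms as a list ordered by index.
PLURAL_RE = re.compile(r'msgstr\[(\d+)\]\s+"(.*)"')

def parse_plurals(entry_lines):
    forms = {}
    for line in entry_lines:
        m = PLURAL_RE.match(line)
        if m:
            forms[int(m.group(1))] = m.group(2)
    return [forms[i] for i in sorted(forms)]

entry = [
    'msgid "one file"',
    'msgid_plural "%d files"',
    'msgstr[0] "yksi tiedosto"',
    'msgstr[1] "%d tiedostoa"',
]
print(parse_plurals(entry))  # ['yksi tiedosto', '%d tiedostoa']
```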

Aside from improving Gettext, I had to fix the somewhat broken author export. It didn't always list all authors, and while I was fixing it, I implemented a blacklist to filter out our bots so that they do not appear in every author list.

I also changed the code to support having messages in different namespaces. Now that we support multiple formats, and the number of external projects is probably growing, it was needed. The message cache of MediaWiki was not designed to handle hundreds of thousands of messages. Having one namespace per project also helps filtering and reduces key conflicts. Unfortunately it broke our recentchanges hacks, which means I had to rewrite those, now in a more proper way.

Gettext support is coming

I have managed to translate one string with the extension and export a file including existing translations, and have it work. This means that I’ve implemented a parser and exporter for po files.

Most of the time went into refactoring the old code to support other file formats easily. Exporting is as easy as possible:

$langs = array( 'fi', 'de' );
$target = '/tmp';
$group = MessageGroups::getGroup( 'out-freecol' );
$writer = $group->getWriter();
$writer->fileExport( $langs, $target );

The above code exports FreeCol translations to the /tmp directory. Those files can then be committed to the vcs. Each project (group) has a preferred output format. It is also possible to export to other supported file formats by creating a writer manually.

As gettext messages don’t have ids, I had to create those. Currently it is just hash + snippet, which produces page titles like MediaWiki:33e5da6ddaa6edf2f1cdf8c235813747e40fc326-Disable Ethernet Wake-On-Lan w/fi. It is not pretty, but good enough for now.
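The shape of such a title can be sketched quickly; the 40-hex-digit prefix in the example looks like a SHA-1. This Python sketch is a guess at the scheme (exact truncation and layout are assumptions, not the extension's actual code):

```python
import hashlib

# Invent a page title for an id-less Gettext message: SHA-1 of the msgid
# plus a short readable snippet, with the language code as a subpage.
# The snippet length and layout here are guesses for illustration.
def make_title(msgid, lang, snippet_len=30):
    digest = hashlib.sha1(msgid.encode('utf-8')).hexdigest()
    snippet = msgid[:snippet_len]
    return 'MediaWiki:%s-%s/%s' % (digest, snippet, lang)

print(make_title('Disable Ethernet Wake-On-Lan when asleep', 'fi'))
```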

Anyway, the support is “coming”, not ready. Still to do are for example gettext plural support, better formatting of authors, importing external changes and so on.
