
Slate Desktop on Mac

Hope to start testing Slate Desktop on Mac soon.


Yesterday I received this information from Tom:


In a couple of months, I'll be rolling out some new ready-made engines for language pairs where public training corpora are available. Then, translators without TMs can use Slate Connect ($49) with the ready-made engines for less than $30 per engine. This configuration also makes it easier to support a native Slate Connect running on macOS, but building engines will still be a Linux or Windows task with Slate Desktop or Pro.

We're making progress slowly but surely. As I reported before, Jeroen created a consistent C++ build environment for Windows, Linux and OS X. After creating and testing the binaries, Jeroen discovered problems with Moses's Perl scripts. They build GNU-compatible command lines for utilities like `sort`, but the OS X versions of those utilities take completely different options. Now all the Perl scripts are working and he's continuing his binary tests. Our C++ optimizations for Windows & Linux are causing problems on OS X, and he's slowly walking through all 14 binaries to find compiler options that work on OS X.
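For anyone curious what that `sort` mismatch looks like in practice, here's a rough sketch of detecting whether the `sort` on PATH is GNU coreutils or the BSD variant that ships with OS X, preferring Homebrew's `gsort` when it's installed. This is my own illustration, not how the Slate/Moses scripts actually handle it.

```python
# Illustrative only: Moses's Perl scripts assume GNU sort's options, so a
# wrapper could first locate a GNU sort before building command lines.
# The gsort fallback assumes Homebrew's coreutils package is installed.
import shutil
import subprocess

def find_gnu_sort():
    for candidate in ("gsort", "sort"):  # Homebrew installs GNU sort as gsort
        path = shutil.which(candidate)
        if not path:
            continue
        try:
            out = subprocess.run([path, "--version"], capture_output=True,
                                 text=True, timeout=5).stdout
        except (OSError, subprocess.SubprocessError):
            continue
        if "GNU coreutils" in out:  # BSD sort identifies itself differently
            return path
    return None

print(find_gnu_sort() or "no GNU sort found; GNU-style command lines will fail")
```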


One month ago, Tom wrote:


>Hi everyone. I'm trying to be a bit more proactive keeping you up to date. Jeroen continues working on the OS X build. The C++ details are more than I personally understand, but I know they're real because he's frustrated, stressed and frazzled. Short of a miracle, I'm not expecting a September availability. The best I can offer for now is more frequent updates and my reassurance that this is Jeroen's only work task. We're not giving up. -- Tom


Thanks for the update! Any news?

Yes, here's the full article on Slator.com: Hyperbolic? Experts Weigh In on Google Neural Translate. Kirti (former sales VP @ Asia Online and Language Weaver), John Tinsley (Iconic Translation Machines) and Juan Alonso (Lucy Software) caution that the Google study used only 500 sentences. I agree.

Kirti and I worked together at Asia Online (now Omniscien). He sold automation solutions predating Moses and guided AO to promote replacing translators with post-editors. Somehow, he "saw the light" after losing his job at Asia Online and now he's a self-proclaimed translator advocate (The ABRATES Conference in Rio: Translators focusing on MT). Nonetheless, he states translators don't have, and (worse yet) are incapable of learning, the skills to create engines that make a difference in their work.

I agree with Kirti's quotes except for his pejorative use of the term "DIY" and his dismissal of the efficacy of personalized engines on desktops. Kirti is simply wrong that GT performs better than a desktop solution. He clings to an outdated MT value pyramid (MT Options for the Individual Translator) despite Memsource's report showing only 5% of "generic GT" is correct. Note that none of our customers who use their own TMs have ever reported less than 30% correct. Customers who use huge corpora from the EU, UN and other public sources report significantly less than 30%. In those cases Kirti is right, but the difference relates to the kind of TMs, not to the platform (cloud or desktop). It takes us longer to collect and publish real data because we honor our customers who value their privacy.

With the exceptions of Kirti and Juan Alonso, Slator.com's experts all hold PhDs in computational linguistics or computer science. This "who's who" of MT has been largely responsible for guiding SMT technology into the cloud and away from a translator's desktop. While they are eminently capable of analyzing Google's raw research (especially the significance of a 500-sentence sample), these experts' track record guiding SMT into the real world leads me to disagree with their prognosis for the efficacy of NMT in the real world.

I think Google's real breakthrough is the speed to generate engines and run production. NMT's limits regarding vocabulary are real when working with TMs of "~4.5 – 36M" TUs (i.e. "modest size"). Desktop engines using 75K to 300K TUs generate faster and suffer less from those vocabulary limits.

Before this article, I had publicly proposed on LinkedIn that Memsource and Google team up to re-run the 28 million words through GNMT and measure NMT's delta from SMT's 10%. I pledged US$ 2,000 to support the research and have solicited others to pledge. Somehow, I don't think it will happen. Too bad.
I've just read this: Vashee cautions against “overstating the definite and clear progress that has been made.” He says that, “for some reason, this overstatement of progress is something that happens over and over again in MT. Keep in mind that drawing conclusions on a sample of 500 is risky even when the sample is really well chosen and the experiment has an impeccable protocol.” However, he says that the study “has just raised the bar for the Moses DIY practitioners, which makes even less sense now since you could do better with generic Google or Microsoft, who also have several NMT initiatives underway.”

Thought I'd share some practical realities that I just read/learned in the official NMT reports of experiments presented at MT Marathon of the Americas (http://statmt.org/mtma16/uploads/mtma16-neural.pdf):

NMT:
  • Corpus size: 30,000–100,000 words in the vocabulary (no indication of # phrase pairs)
  • Training time: 1 to 3 weeks
  • Decoding (i.e. translation production): "fast" defined as 100,000–500,000 sentences/day
  • Decoding hardware: NVIDIA Titan X GPUs with the amuNN decoder (i.e. the same class of hardware used to build supercomputers)


Compare to SMT (Slate/Moses):
  • Corpus size: 1 million words (500,000)
  • Training time: Overnight (8-10 hours)
  • Decoding: 3-5 million sentences per day
  • Hardware: Intel i7


I believe there's a 100% chance these stats will get better. Hardware always gets faster. Algorithms get optimized. Ten years ago, SMT required 1-3 weeks to train a model. Hardware got faster. Academic researchers refined the algorithms. The same will happen over time with NMT: it will get faster and more robust, and maybe the quality gain will exceed 26% over SMT. We'll monitor it and bring it to market when it's ready.



SD supports forced terminology translation (anyone know a better term for this?). You create tab-delimited UTF-8 files with the source term in the left column and the target term in the right column. You need to tokenize the terms manually. It's not the best solution, but it will get better. The source and target terms should be naturally cased. So, if you want to translate to "LaserJet", that's exactly what goes in the target (right) column.
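To make the file format concrete, here's a minimal sketch of generating such a terminology file from Python. It's my own illustration, not SD's importer; the file name, the Dutch target term and the crude tokenizer are assumptions for the example, so substitute whatever tokenization matches your engine.

```python
# Write a tab-delimited, UTF-8 forced-terminology file:
# source term <TAB> target term, one pair per line, naturally cased.
terms = {
    "LaserJet": "LaserJet",        # keep brand casing exactly as it should appear
    "front side": "voorzijde",     # hypothetical EN>NL example
}

def tokenize(text):
    """Crude stand-in for the tokenizer your engine was trained with."""
    for ch in ",.;:()":
        text = text.replace(ch, f" {ch} ")
    return " ".join(text.split())

with open("glossary.txt", "w", encoding="utf-8", newline="\n") as f:
    for source, target in terms.items():
        f.write(f"{tokenize(source)}\t{tokenize(target)}\n")
```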


There are some discussions on our support forum here on Freshdesk.com. We're not as organized as CafeTran. You have to log in to read the forums, but anyone can create an account. I suggest you read there:


http://pttools.freshdesk.com/support/home


In addition to the terminology files, the traditional SMT community recommends you add your terms to the parallel training data like any other TM. We recently learned how we can improve on this recommendation when you have huge terminology glossaries. I'm not sure when I can implement it, but the traditional approach still works.


We haven't done any performance tests, but I can't imagine any change in processing speed when importing glossaries. Re improved recognition result... I'm not sure. When a term is in the terminology file, it can only have one entry. I.e. we don't support translating to from source to multiple translation choices. If the glossary terms are added to the training corpus but not in the terminology, then the statistical frequencies in the language model (i.e. monolingual target data) have the most determine affect on the final translation suggestion. Our Linkedin group (Slate Desktop) is running a workshop now that covers this exact topic. You can join any time and participate.
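Because each source term can carry only one target, it's worth checking a big glossary for duplicate source entries before importing it. A minimal sketch (my own, assuming the tab-delimited glossary.txt format from above):

```python
# Flag source terms that appear more than once; only one target per source
# term is supported, so duplicates need to be resolved before importing.
from collections import Counter

with open("glossary.txt", encoding="utf-8") as f:
    sources = [line.split("\t", 1)[0] for line in f if "\t" in line]

for term, count in Counter(sources).items():
    if count > 1:
        print(f"duplicate source term ({count}x): {term}")
```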


Also, on our support forum you'll find Igor Goldfarb's comments about his experiments with training data size. His comments are relevant to the "big mama" TM approach. Essentially, quality performance breaks down with these large TMs. Pieter Beens reports that his big mama TM works well, but he feels it could be better. Fair enough.


Quality performance might improve if you take extra steps to normalize these large TMs, but this is not part of "out of the box" SD. There are too many variables. SD can be customized, but I always recommend you set a baseline with minimal (baseline) processing. Then make incremental improvements.


SD has the ability to split TMX TUs into smaller TMX files based on the text/values in any <prop> tag and the values of TU attributes. This is configurable without writing any code. Depending on what values are available in your TMX, you can split into various TMX files by creationid, client, project, and even the x-document value (i.e. the original document's file name). You can even re-map these values. For example, if "george" used 5 different creationid values, you can merge them all into one "george" set. Once they're split, you can then re-combine them in any mix to make smaller, focused engines. Of course, this is all undocumented, like all good leading-edge new software products :)
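For readers who want to experiment before that feature is documented, here's a rough illustration of the same idea in plain Python (not SD's actual splitter): group TUs by their creationid attribute, optionally remapping several ids to one name, and write one TMX per group. The file names and the remap table are assumptions for the example.

```python
# Split a TMX into one file per (remapped) creationid using the standard library.
import xml.etree.ElementTree as ET
from collections import defaultdict

remap = {"george.w": "george", "gwork": "george"}   # merge aliases into one set

tree = ET.parse("bigmama.tmx")
root = tree.getroot()
header = root.find("header")

groups = defaultdict(list)
for tu in root.find("body").findall("tu"):
    key = tu.get("creationid", "unknown")
    groups[remap.get(key, key)].append(tu)

for key, tus in groups.items():
    out_root = ET.Element("tmx", root.attrib)
    out_root.append(header)                       # reuse the original header
    ET.SubElement(out_root, "body").extend(tus)
    ET.ElementTree(out_root).write(f"{key}.tmx", encoding="utf-8",
                                   xml_declaration=True)
```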


Tom



Thanks for the update, Tom.


I was wondering about the use of importing my background glossary (big papa) in Slate Desktop. It contains words and word groups like:


open

front

side

the

front side

the front side

open the front side


I guess that SD will identify these items itself. However, can importing this glossary:

  • Reduce the processing time?
  • Improve the recognition (and probably the translation) result: in a sentence that contains many recognised words/groups, the slots can be identified more easily as 'candidates'. Is that assumption correct? Or isn't that how it works at all? :)

Hi everyone. I'm trying to be a bit more proactive keeping you up to date. Jeroen continues working on the OS X build. The C++ details are more than I personally understand, but I know they're real because he's frustrated, stressed and frazzled. Short of a miracle, I'm not expecting a September availability. The best I can offer for now is more frequent updates and my reassurance that this is Jeroen's only work task. We're not giving up. -- Tom

RE "Neural Machine Translation Improving Fast" (NMT)

Yes, I read the hype rags, too. A 26% overall improvement sounds great! Right? Then there's the "70% less verb placement errors, 50% less word order errors, 19% less morphological errors, and 17% less lexical errors." WOW! and hogwash!

I propose the only thing that counts is how many times the engine gets it right. Any additional improvement is an unexpected and unpredictable bonus. When the engine is right, everything else disappears. You have "0% verb placement errors, 0% word order errors, 0% morphological errors, and 0% lexical errors." (notice the "less" is missing)

So, how does a 26% overall improvement relate to NMT's real-world starting point? Have you seen Memsource's recent blog post where they published the results of their spying* (see below) on translators' activities? Here's the link: http://blog.memsource.com/machine-vs-human-translation/. They report that post-editors using GT/Microsoft through Memsource: "Already 5 – 20% of the suggestions from MT are good enough to simply use them as final translation without any changes." And then, there's that frightening term good enough! Really? Are you looking for good enough or right?

Look at their chart. Only EN:FR scores +20%. Only DE:RU scores over 10%. EVERYTHING else scores 10% or less. That's just what you get when you mash millions/billions of TUs into one engine that's designed to deliver a weighted average millions/billions of times each day. So, a 26% improvement over 5% to 10% gives you results that hover at 30%~35%?

If you review our claims about Slate Desktop (standard phrase-based SMT), you'll see that our customers, using their own TMs (70K to 150K TUs), are experiencing 30% to 50% exact matches on their first engine. We don't promote good enough. We define exact matches as the same as a translator's independent work without SD. With a good set of TUs that the translator created himself/herself, Slate Desktop's results start where NMT is still trying to arrive.

Your statement, "or else this whole MT approach will get obsolete," assumes that Moore's Law applies to NMT but not to Slate Desktop. Our customers will have real assets because SD saves their TMs cataloged by their labels in its inventory. You can rebuild as many engines as you like at no cost, no fees. Some customers already have a dozen engines of different categories and language pairs, all for their one-time perpetual license fee.

NMT starts with the same TMs. There's nothing stopping us from adding NMT to SD. When you buy that SD upgrade, you'll be able to rebuild your engines with the new technology for the cost of the upgrade. Try that with a subscription-based cloud service.

Back to your comment, "you have to hurry": I get it, and thank you for your gentle push. As mentioned above, Jeroen is working 100% on the OS X version. I'm burning time on the OS X installer. September might be possible, but I'm really not sure. Keep pushing us, we need it!

* Spying: I had a conversation with the technology manager at a major European LSP last week. When I mentioned the Memsource blog, she was very matter-of-fact. Not only did she know of the survey, but she said they also use those production statistics to set the rates they offer translators (err--- post-editors, err--- hmm). Protect your business secrets, including your work habits, carefully!
Hi everyone and thanks Jeroen for the update. Sorry for my silence. I'd like to rejoin the conversation.

First, Igor... Thank you very much for your work to include Slate Desktop for CafeTran. I plan to test it myself in the coming week.

Re the Mac OS X version, Jeroen is working hard, despite the outward appearance of hardly working :) If you follow the moses-support email list, you'll see that many people are experiencing problems compiling Moses on OS X. Apple changed its compiler/build environment requirements and the Moses source has problems with them. The problems are serious enough that Hieu Hoang (lead Moses researcher) recommends that people asking on moses-support switch to a Linux VM on OS X, despite the fact that Hieu himself does all his work exclusively on OS X. So, Jeroen's challenges are significant, but if anyone can solve the C++ cross-platform issues, return the improvements to the Moses open source community, and support our customers with a stable product, Jeroen is the man!

To our customers who backordered and those waiting to buy the OS X version: this is now our top priority and Jeroen is working full-time, focused only on this requirement. It's still early September. While he's working on the highly geeky/nerdish issues of C++ builds, I'm working on updating our Bitrock InstallBuilder to make an installer that properly registers everything with OS X. If anyone has insights into what we need to do to satisfy OS X code signing, etc., please contact me. Any help I can get will hasten the delivery.

Thanks!

Well, you have to hurry, or else this whole MT approach will get obsolete ...


Neural Machine Translation Improving Fast, Study Finds

A study published on August 16, 2016 claims that Neural Machine Translation (NMT) outperforms phrase-based MT (PBMT) and provides better translations in the “particularly hard” to translate English-German language pair.

In the past, the researchers say, NMT was considered “too computationally costly and resource demanding” to compete with PBMT. Well, NMT literally need(ed) a lot of electricity. However, this has apparently changed beginning in 2015, and NMT is now becoming more competitive.

The researchers (Luisa Bentivogli, Mauro Cettolo, and Marcello Federico of Fondazione Bruno Kessler, Trento Italy; Arianna Bisazza of the University of Amsterdam) found that, architecturally speaking, NMT is simpler than traditional statistical MT systems. Interestingly enough, however, they also add that the process is “less transparent” with NMT, saying that “the translation process is totally opaque to the analysis.” How NMT does what it does still seems a bit of a black box.

For the study, the researchers built on evaluation data from the IWSLT 2015 (International Workshop on Spoken Language Translation) MT English-German task and compared results using what they call the “first four top-ranking systems”; that is, NMT and three other phrase-based MT approaches.

Translate TED

The researchers sourced translation material from TED talks (transcripts translated from English into German), reasoning that the language used is structurally less complex, more conversational than formal, and required “a lower amount of rephrasing and reordering.”

As to why English and German, the researchers said using the two languages would be interesting because, despite belonging to the same language family, “they have marked differences in levels of inflection, morphological variation, and word order, especially long-range reordering of verbs.”

“The outcomes of the analysis confirm that NMT has significantly pushed ahead the state of the art”—Bentivogli, Cettolo, Federico, Bisazza

And it is in this aspect of better word reordering, particularly in the case of proper verb placement, that NMT shines. To quote, “one of the major strengths of the NMT approach is its ability to place German words in the right position even when this requires considerable reordering.”

Those Misplaced German Verbs

In contrast, the study indicated that “verbs are by far the most often misplaced word category in all PBMT systems,” which the researchers pointed out as a common problem affecting standard phrase-based statistical MT.

In summary, the outcome of the study’s analysis confirmed that NMT reduced the overall effort by a post-editor by 26% compared to PBMT output. In addition, NMT produced 70% less verb placement errors, 50% less word order errors, 19% less morphological errors, and 17% less lexical errors.

“Machine translation is definitely not a solved problem”—Bentivogli, Cettolo, Federico, Bisazza

However, despite outperforming PBMT systems on all sentence lengths, the performance of NMT degraded faster than its competitors the longer the input sentence became, which was one aspect the researchers singled out as an area for future work on improving NMT.

The researchers’ sense of excitement is palpable when they write “machine translation is definitely not a solved problem, but the time is finally ripe to tackle its most intricate aspects.”


We're working on the Mac build. We have our own build environment for the software in Slate Toolkit, and the trick is to get that working for OSX as well. There are a few technical unknowns to be conquered there, so it's not an easy thing to plan. Beyond that it's a matter of building a Mac installer, and lots and lots of testing!

>I think we're looking at late August or early September for first testing. 


That would be great, since CafeTran's ready for Slate Desktop now. On Mac, too.



I go to SIN quite often, Tom, but not next week. Later this month, or next month; I'll let you know. H.