
LogiTerm as a single search and/or pre-translation interface to manage ALL of your terminology and translation memories

Since I just wrote quite a big post over in the DVX mailing list about LogiTerm, which might interest people here, I will copy/paste it here. LogiTerm is interesting because it can be used as a single search and/or pre-translation interface to manage all of your terminology and translation memories, and can be used nicely alongside a normal CAT tool. Anyway, without further ado, here is the post (which is taken from https://groups.yahoo.com/neo/groups/dejavu-l/conversations/topics/137941):


############################################################################

WARNING: THE FOLLOWING WAS DICTATED (IN A HURRY); ENTER AT YOUR OWN RISK!

############################################################################


 

This is how I use LogiTerm these days:


## Terminology ##

I use the Terminology feature to keep track of all my glossaries/termbases: my own, those I find online, and those of clients. Note that I don't use LogiTerm to do any actual translating, so I do not use the Terminology feature of LogiTerm when translating inside Word.


Every time I come across a new glossary or termbase, I quickly add a ‘Module’ in LogiTerm's Terminology area. I then quickly convert the file via: Terminology > Convert > Tab-delimited to LogiTerm, add it to the module, and index the module.
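Since the converter above takes tab-delimited input, glossaries that arrive in other layouts need to be normalised first. Here is a minimal sketch of that preparatory step for a two-column CSV glossary; the function name and file paths are my own, not anything from LogiTerm:

```python
# Sketch: normalise a two-column CSV glossary (source term, target term)
# to the tab-delimited layout that LogiTerm's "Tab-delimited to LogiTerm"
# converter accepts. Function name and paths are illustrative.
import csv

def csv_to_tab_delimited(csv_path: str, tsv_path: str) -> int:
    """Convert a two-column CSV glossary to a tab-delimited file.
    Returns the number of term pairs written."""
    count = 0
    with open(csv_path, newline="", encoding="utf-8") as src, \
         open(tsv_path, "w", encoding="utf-8") as dst:
        for row in csv.reader(src):
            if len(row) < 2 or not row[0].strip():
                continue  # skip empty or malformed rows
            dst.write(f"{row[0].strip()}\t{row[1].strip()}\n")
            count += 1
    return count
```

The idea is just to get every incoming glossary into one predictable shape before it goes anywhere near the Module.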


## Bitexts ##

I use LogiTerm’s Bitexts function to keep track of my ever-growing, massive collection of translation memories, in .tmx format. I have been collecting Dutch>English translation memories for years now, from all over the place: online, from clients, from friends and colleagues, my own, etc. Over the years, I came to realize that the only tool that could handle searching such a large database of translation memories was TMLookup (http://farkastranslations.com/tmlookup.php). I still love this tool, and it is absolutely amazing, but at some point I discovered that LogiTerm can do everything TMLookup can do, and a whole lot more. Of course, LogiTerm is quite expensive, whereas TMLookup is free, but it is well worth the money.


Anyway, back to the point. Every time I come across a new translation memory, I convert it into a .tmx file and save it in a specially named folder in my Memories folder. I then create a specific Module for it in LogiTerm's Bitexts database, using the exact same name as in my folder structure. This allows me to do various things. First of all, while translating I can easily search all of my past translation memories, and quickly drill down through specific clients, subjects, etc. For example, I can run a quick search across ALL of them, or only those of the client I am translating something for today.
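To give a feel for what such a concordance search over a folder of TMX files involves, here is a bare-bones sketch of the idea (without the indexing that makes TMLookup and LogiTerm fast on huge collections). The folder layout and function name are assumptions of mine:

```python
# Sketch: naive concordance search over every .tmx file under a folder,
# yielding (filename, source segment, target segment) for each match.
# Real tools index the data first; this just walks and parses.
import os
import xml.etree.ElementTree as ET

def search_tmx_folder(folder: str, query: str):
    """Yield (file, source, target) for every TU whose source contains query."""
    q = query.lower()
    for root_dir, _dirs, files in os.walk(folder):
        for name in files:
            if not name.lower().endswith(".tmx"):
                continue
            tree = ET.parse(os.path.join(root_dir, name))
            for tu in tree.iter("tu"):
                segs = [tuv.findtext("seg", default="") for tuv in tu.iter("tuv")]
                if len(segs) >= 2 and q in segs[0].lower():
                    yield name, segs[0], segs[1]
```

Because each client or subject lives in its own subfolder, narrowing a search to one client is just a matter of pointing this at a deeper folder, which mirrors how selecting individual Modules works in LogiTerm.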


But there's more. When setting up a new translation project, I run: 


Pretranslation > LogiTrans


This allows me to pre-translate my current docs against my massive TMX database, or only parts of it. Since my EU TMX collection is vast (which would make the pre-translation function take forever) and not all of it is relevant, I usually run LogiTrans on a relevant subset of my complete collection. For example, this morning I started on a set of technical specifications about a solar farm, so I could safely tell LogiTrans to skip all the TMXs that are irrelevant. Once LogiTrans is finished, I have a quick look at the analysis files (showing a wealth of data on any useful matches), and then convert the resulting .xml file into a .tmx, via: 


Bitexts > Convert > LogiTrans data (LT.xml) to Translation Memory (TMX)
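I can't show LogiTerm's internal LT.xml format here, but the target of that conversion, a plain TMX file of aligned segment pairs, is simple enough to sketch. Everything below (function name, language codes, header values) is illustrative, not LogiTerm's actual output:

```python
# Sketch: write aligned (source, target) segment pairs out as a minimal
# TMX 1.4 file, the kind of file the conversion step above produces.
# Language codes and header attributes are illustrative defaults.
import xml.etree.ElementTree as ET

def pairs_to_tmx(pairs, path, srclang="nl", tgtlang="en"):
    """Write an iterable of (source, target) pairs to a TMX file."""
    tmx = ET.Element("tmx", version="1.4")
    ET.SubElement(tmx, "header", {"o-tmf": "none"}, srclang=srclang,
                  adminlang="en", datatype="plaintext", segtype="sentence",
                  creationtool="sketch", creationtoolversion="0.1")
    body = ET.SubElement(tmx, "body")
    for src, tgt in pairs:
        tu = ET.SubElement(body, "tu")
        for lang, text in ((srclang, src), (tgtlang, tgt)):
            tuv = ET.SubElement(tu, "tuv")
            # xml:lang lives in the reserved XML namespace
            tuv.set("{http://www.w3.org/XML/1998/namespace}lang", lang)
            ET.SubElement(tuv, "seg").text = text
    ET.ElementTree(tmx).write(path, encoding="utf-8", xml_declaration=True)
```

Any CAT tool that speaks TMX can then import the result, which is exactly the point of converting to .tmx rather than keeping the matches in a proprietary format.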


I then import this .tmx file into my CAT tool project, whether this is DVX3, CafeTran, Studio 2017, or memoQ 8.2, depending on which tickles my fancy today ;-)


This system is one of the best ways I can think of to ensure that I will never miss a potentially useful match in a translation memory hidden somewhere on my computer, without slowing down my CAT tool to a crawl, which is what happens in most CAT tools if you try to work with translation memory databases that are too big.


As I'm sure you are already aware, the ability to search across ALL of my translation memories on the one hand, and all of my glossaries/termbases on the other, from a single, well-designed search interface, is invaluable.


While using LogiTerm, I also soon realized that the more you split your data into separate Modules, named logically and carefully, the better the tool works when searching. In the past I have also experimented with throwing everything into one big module, which has the benefit of taking less time to set up, but if you split your data across all manner of different categories, clients, etc., it is obviously much more useful later on down the line.


Having said that, I also have a few special Modules into which I dump massive, unstructured collections. This allows me to quickly create a huge module (which I can use to pre-translate with), without having to waste any time on setting it up. Since individual Modules are so easy to select and deselect when searching and/or pre-translating, the possibilities are endless! 


Michael


PS: I don't work for Terminotix; just a happy customer. ;-)


***************************** 

Phew! I think that's what they call a ‘long read’ over on the Guardian website ;-)


Michael


Can you give some examples of the time LT needs to prepare a text of x words with TMs of y segments?

...and some pictures:


[screenshots omitted]


:-)
