
TM handling for a huge file?

I can't find any instructions on how to handle TMs for huge files (here, a 55 K word file).

  • the TM contains only the file's segments
  • IMHO this means there is no advantage to using Total Recall
  • the TM engine gets slower as the project advances (currently 35 K words translated)
  • in the meantime, I created two TMs: the first with most of the segments, set to 'Preliminary memory matching' and read-only, the second as a normal TM for the current segments (am I right that this is the best procedure? see also the sketch after this post)
  • but even now, after confirming a segment, CT is unresponsive for about 10, sometimes even 30, seconds, with the progress bar at the bottom moving

Are there any settings I should change?
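
For reference, here is a minimal sketch of the two-TM split described above: purely hypothetical Python, assuming a standard TMX with changedate/creationdate attributes on each <tu> (the cutoff date and file names are made up). The read-only and 'Preliminary memory matching' settings still have to be applied in CafeTran itself.

    #!/usr/bin/env python3
    # Hypothetical sketch: split one big TMX into a bulk TM (older units,
    # to be set read-only with 'Preliminary memory matching' in CafeTran)
    # and a small working TM (recent units). Assumes each <tu> carries a
    # changedate or creationdate attribute (YYYYMMDDThhmmssZ).
    import copy
    import xml.etree.ElementTree as ET

    CUTOFF = "20150601T000000Z"  # made-up date boundary

    def split_tmx(in_path, bulk_path, working_path):
        tree = ET.parse(in_path)
        body = tree.getroot().find("body")

        bulk_tree = copy.deepcopy(tree)
        work_tree = copy.deepcopy(tree)
        for t in (bulk_tree, work_tree):
            b = t.getroot().find("body")
            for tu in list(b):
                b.remove(tu)  # start both copies with an empty body

        for tu in body.findall("tu"):
            date = tu.get("changedate") or tu.get("creationdate") or ""
            dest = work_tree if date >= CUTOFF else bulk_tree
            dest.getroot().find("body").append(copy.deepcopy(tu))

        bulk_tree.write(bulk_path, encoding="utf-8", xml_declaration=True)
        work_tree.write(working_path, encoding="utf-8", xml_declaration=True)

    split_tmx("project.tmx", "bulk_readonly.tmx", "working.tmx")  # made-up names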

I'm not a fan of TR, since my first experiments last year showed me that not all matching segments were extracted from my big mama. Maybe this has been fixed. Anyway, if you think that the slow matching is caused by the size of the memory and not by the size of the project, I'd advise compacting the memory. I've posted a script to use TextWrangler for that.
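
In the same spirit (this is not the TextWrangler script mentioned above, just a plain-Python equivalent of the idea), here is a minimal sketch that compacts a TMX by dropping exact duplicates, assuming a standard TMX 1.4 structure; the file names are made up:

    #!/usr/bin/env python3
    # Hypothetical sketch: compact a TMX by removing translation units
    # whose segment texts are exact duplicates of an earlier unit.
    import xml.etree.ElementTree as ET

    def compact_tmx(in_path, out_path):
        tree = ET.parse(in_path)
        body = tree.getroot().find("body")
        seen = set()
        for tu in list(body.findall("tu")):
            # Key on the full text of every <seg> in this unit
            # (itertext also picks up text inside inline tags).
            key = tuple(
                "".join(tuv.find("seg").itertext())
                for tuv in tu.findall("tuv")
                if tuv.find("seg") is not None
            )
            if key in seen:
                body.remove(tu)  # drop the exact duplicate
            else:
                seen.add(key)
        tree.write(out_path, encoding="utf-8", xml_declaration=True)

    compact_tmx("big_mama.tmx", "big_mama_compact.tmx")  # made-up names
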
Strange. I am now working on a 9 K word document with a size of 5 MB, with a 30 MB TM (4 TMs in a folder) – compared with 2.7 MB for the TM in the project described above. Everything works flawlessly and fast.

The file specified above is 55 MB (it is an mqxliff file based on a Word file, not a Word file itself, as I wrote above), with a lot of tags (cross-references etc.), many concordance searches, and necessary, repeated rework of translated segments, so it is not easy to split either the Word file or the mqxliff file. It sounds trivial, but this seems to cause far more problems than huge TMs do.
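
To check whether the tag volume, rather than the TM, is what slows things down, here is a rough, hypothetical diagnostic (assuming the mqxliff uses the standard XLIFF 1.2 inline tag names; the file name is made up) that counts trans-units and inline tags:

    #!/usr/bin/env python3
    # Hypothetical sketch: count trans-units and inline tags in an
    # mqxliff to gauge tag density. Counts tags in source and target
    # alike, which is enough for a rough per-segment figure.
    import xml.etree.ElementTree as ET

    INLINE = {"bpt", "ept", "ph", "it", "g", "x", "bx", "ex", "mrk"}

    def tag_stats(path):
        units = inline = 0
        for _, elem in ET.iterparse(path):  # 'end' events, document order
            name = elem.tag.split("}")[-1]  # strip the XML namespace
            if name == "trans-unit":
                units += 1
                elem.clear()  # keep memory flat on a 55 MB file
            elif name in INLINE:
                inline += 1
        print(f"{units} trans-units, {inline} inline tags "
              f"({inline / max(units, 1):.1f} per unit)")

    tag_stats("bigfile.mqxliff")  # made-up name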


I will retry TR as soon as my TR runs again.
