Todd and I have been working on some changes to GoZync, and the latest build offers a substantially faster sync down from hosted files.
In one test, pulling 5,000 new contacts went from 12:52 to 2:00 minutes flat. (Times will be longer on mobile and over shaky connections.) This is a big improvement.
Add This to Your Deployment
Needless to say, we recommend this update for all users: only four scripts were changed, so updating existing deployments is easy, though you will need to distribute new copies of GoZyncMobile, where most of the work was done. Instructions for making this change in deployed copies can be found here.
I’m very proud that GoZync is composed solely of FileMaker scripts: most of our users customize GoZync to match their workflow, and it’s important that they can take advantage of new builds like this just by replacing a few scripts.
Todd started looking at this with an insight: the performance of Position() and Middle() degrades when working with long strings. That is, in a 1M-character string (like a sync package)…
Position ( string ; foo ; 500000 ; 1 )
…is much slower than:
Position ( string ; foo ; 100 ; 1 )
MUCH slower. Like 35x slower. So we saw an opportunity here: we were using Position() in places to parse the package we pull down to mobile. So Todd suggested we add an outer loop and break the package into chunks so our text-parsing functions wouldn’t have to “reach” as far into big strings: they’d reach more shallowly into these new, smaller chunks.
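To make the idea concrete, the chunked outer loop looked something like this. This is a simplified sketch with variable names of our own choosing, not the actual GoZync script; a real version would also have to handle records that straddle chunk boundaries:

```
# Outer loop: walk the package in fixed-size chunks so Position() and
# Middle() only ever "reach" a short distance into the text.
Set Variable [ $chunkSize ; Value: 100000 ]
Set Variable [ $offset ; Value: 1 ]
Loop
	Exit Loop If [ $offset > Length ( $package ) ]
	Set Variable [ $chunk ; Value: Middle ( $package ; $offset ; $chunkSize ) ]
	# …inner loop parses $chunk, so Position()'s start values stay small…
	Set Variable [ $offset ; Value: $offset + $chunkSize ]
End Loop
```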
So we wrote this outer loop, but it didn’t make much difference: it shaved a minute or two off our 13-minute pull, depending on how big our chunks were. =(
“Move fast, break things”
Having worked on this for a while, we just couldn’t speed it up, and we’d accidentally broken one of our inner loops somewhere along the way. The loop wasn’t exiting, processing would never end, and GoZync’s progress bar wasn’t updating.
So we’d use the Script Debugger to halt the script after a minute or so, when we could see it still wasn’t working… but we noticed that even after running for just a minute, almost ALL of the 5,000 records had been processed: we’d stumbled upon a way to skip the slow outer loop and still process the records. Success.
To clean it up, we rewrote the loop and switched from Middle() to GetValue() to parse our big strings, which we left in one big chunk.
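Since the packages are return-delimited, GetValue() can fetch each record by its value index instead of by character position. A parsing loop along those lines might look roughly like this (again a sketch for illustration, not the shipping GoZync script):

```
# Parse the package one return-delimited value at a time: no character
# offsets to manage, and no deep Position() calls into the big string.
Set Variable [ $count ; Value: ValueCount ( $package ) ]
Set Variable [ $i ; Value: 1 ]
Loop
	Exit Loop If [ $i > $count ]
	Set Variable [ $record ; Value: GetValue ( $package ; $i ) ]
	# …process $record here…
	Set Variable [ $i ; Value: $i + 1 ]
End Loop
```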
And what we ended up with is a huge speed improvement, though we certainly didn’t arrive there the way we started out.
Great post, and kudos for tracking this down!
It’s interesting how the performance of core text functions can vary so greatly as scale increases. I recently encountered a similar bottleneck in the code for SQL Sugar. In that case, breaking the long strings (of SQL reserved words) evaluated by PatternCount() into much shorter snippets within a Choose() block netted a huge speed increase.
Thanks again for sharing this.
Great job guys!
When can I start working for you?
Could I be your rep in Europe? 🙂
Kind regards
On 29 Apr 2013, at 17:14, “SeedCode: Next” wrote: