ios - How can I avoid having NSFileWrapper use lots of memory when writing the file?


I have an app that uses NSFileWrapper to create a backup of the user's data. The backup file contains text and media files (compression is not relevant here). These backup files get quite large, on the order of 200 MB in size. When I call NSFileWrapper's -writeToURL:..., it appears to load the entire contents into memory as part of the writing process. On older devices, this causes the app to be terminated by the system due to memory constraints.

Is there a simple way to avoid having NSFileWrapper load everything into memory? I've read through every NSFileWrapper question on here that I could find. Any suggestions on how to tackle this?

Here is the current file structure of the backup file:

BackupContents.backupxyz
    user.txt
    Folder1
        - audio files
            asdf.caf
            asdf2.caf
    Folder2
        - audio files
            asdf3.caf
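
Roughly, the writing code looks like this (a simplified sketch; the function name, directory variables, and URLs are placeholders, not my exact code):

import Foundation

// Simplified sketch of the current approach. `docsDir` and `backupURL`
// are placeholders for wherever the source files and the backup live.
func writeBackup(to backupURL: URL, from docsDir: URL) throws {
    let folder1 = FileWrapper(directoryWithFileWrappers: [
        "asdf.caf":  try FileWrapper(url: docsDir.appendingPathComponent("asdf.caf"), options: []),
        "asdf2.caf": try FileWrapper(url: docsDir.appendingPathComponent("asdf2.caf"), options: [])
    ])
    let folder2 = FileWrapper(directoryWithFileWrappers: [
        "asdf3.caf": try FileWrapper(url: docsDir.appendingPathComponent("asdf3.caf"), options: [])
    ])
    let root = FileWrapper(directoryWithFileWrappers: [
        "user.txt": FileWrapper(regularFileWithContents:
            try Data(contentsOf: docsDir.appendingPathComponent("user.txt"))),
        "Folder1": folder1,
        "Folder2": folder2
    ])

    // With originalContentsURL nil, every file's contents get read into
    // memory and rewritten -- this is where the ~200 MB spike happens.
    try root.write(to: backupURL, options: .atomic, originalContentsURL: nil)
}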

Again, please don't tell me to compress the audio files. That's a band-aid over a flawed design.

It seems like I could move/copy the files into a directory using NSFileManager and then make that directory a package. Should I go down that path?
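
To be concrete, this is the kind of thing I mean (a sketch; the function name and paths are placeholders):

import Foundation

// Sketch of the NSFileManager alternative: build the package directory
// directly, copying files rather than routing them through wrappers.
// copyItem does not load whole files into memory at once.
func buildBackupPackage(at packageURL: URL, from docsDir: URL) throws {
    let fm = FileManager.default
    let folder1 = packageURL.appendingPathComponent("Folder1")
    try fm.createDirectory(at: folder1, withIntermediateDirectories: true)
    try fm.copyItem(at: docsDir.appendingPathComponent("asdf.caf"),
                    to: folder1.appendingPathComponent("asdf.caf"))
    // ...and likewise for user.txt, Folder2, and the rest.
}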

When an NSFileWrapper tree gets written out to disk, it will attempt to perform a hard link of the original file to the new location, provided you supply a value for the originalContentsURL parameter.

It sounds like you're constructing the file wrapper programmatically (for the backup scenario), from files scattered around the filesystem. That means that when you call writeToURL, you don't have an originalContentsURL, which in turn means the hard-link logic is going to be skipped and each file will be loaded into memory so it can be rewritten.

So, if you want the hard-linking behavior, you need to find a way to provide an originalContentsURL. That is done by supplying an appropriate URL for that parameter in the initial writeToURL call.
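
For example, if a previous copy of the backup already exists on disk, a minimal sketch (the function name and URLs are illustrative) would be:

import Foundation

// Sketch: pass the location of the existing/previous copy as
// originalContentsURL so that unmodified files can be hard-linked
// instead of being loaded and rewritten.
func writeBackupLinkingOriginals(_ root: FileWrapper,
                                 to newBackupURL: URL,
                                 previousBackupURL: URL) throws {
    try root.write(to: newBackupURL,
                   options: [],
                   originalContentsURL: previousBackupURL)
}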

Alternatively, you could try subclassing NSFileWrapper for regular files and giving each instance an NSURL to hang on to internally. You'd then need to override writeToURL so that it passes that stored URL to super as the originalContentsURL; that URL should be enough to trigger the hard-link code. You'd want to use this subclass of NSFileWrapper only for the large files you want hard-linked into place.
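
Here's a rough sketch of that subclass idea in Swift, assuming the source file doesn't move between wrapper creation and writing (the class and property names are mine):

import Foundation

// Sketch of the subclass approach: a regular-file wrapper that remembers
// where its file lives and feeds that URL to the hard-link machinery.
final class LinkingFileWrapper: FileWrapper {
    private let sourceURL: URL

    init(sourceURL: URL) throws {
        self.sourceURL = sourceURL
        // Initializing from the URL (without .immediate) avoids reading
        // the file's contents up front.
        try super.init(url: sourceURL, options: [])
    }

    required init?(coder: NSCoder) {
        fatalError("Unsupported in this sketch")
    }

    override func write(to url: URL,
                        options: FileWrapper.WritingOptions,
                        originalContentsURL: URL?) throws {
        // Hand our stored URL to super as the originalContentsURL so the
        // hard-link path can kick in for this file.
        try super.write(to: url, options: options, originalContentsURL: sourceURL)
    }
}

You'd then build the directory wrappers exactly as before, but wrap each large .caf file in a LinkingFileWrapper instead of a plain FileWrapper.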

