sql - Splitting a massive table with BLOBs over DB Link in Oracle
I have an interesting problem: migrating data from one DB to another, including millions and millions of BLOBs with metadata, amounting to literally TBs of data. For numerous reasons, this must be done via a DB link, and the view on the other side that contains the BLOBs is not indexed. There is no single unique key, and the logic to uniquely identify a document is complicated (only one of the reasons we are migrating to a new system). I'd like to do the following:
1) Split the rows into separate chunks so the transfer can run in parallel and I don't lose everything if there is a crash
2) Insert the chunks into staging tables
3) Once the transfer is over, merge the staging tables into the production tables and drop the temporary ones
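A minimal sketch of what step 1 could look like as a restartable driver (all names here, `migr_chunk_log` and `process_chunk`, are hypothetical, and this assumes each per-chunk procedure commits its own slice):

```sql
-- Hypothetical driver: record each finished slice so a crash only costs
-- the chunk in flight, not the whole transfer.
create table migr_chunk_log (
    slice_no   number primary key,
    completed  date
);

declare
    l_done number;
begin
    for i in 1 .. 100 loop                  -- 100 slices, chosen arbitrarily
        select count(*) into l_done
          from migr_chunk_log
         where slice_no = i;
        if l_done = 0 then                  -- skip slices already transferred
            process_chunk(i);               -- hypothetical per-slice insert + commit
            insert into migr_chunk_log values (i, sysdate);
            commit;
        end if;
    end loop;
end;
/
```

On restart after a session kill, the loop skips every slice already logged and resumes with the first unfinished one.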
I cannot open a cursor on the view because of the Oracle restriction on LOB locators over a DB link (ORA-22992). A plain INSERT ... SELECT works around that restriction, but I still need to slice the rows. I have tried running the script in one go several times; the session was killed after 2 days and I lost everything.
I tried chunking via ROWNUM using the pagination technique (with help from http://www.oracle.com/technetwork/issue-archive/2006/06-sep/o56asktom-086197.html), but it did not work. Here is the code:
    -- second chunk
    procedure process_chnk_2 (
        l_slice pls_integer
    )
    is
    begin
        execute immediate
            'insert /*+ append */ into destination_table
               select *
                 from (select q.*, rownum rnum
                         from (select * from migr_view.migr_blob@db_link) q
                        where rownum <= :hi)
                where rnum >= :lo'
            using l_slice * 2, l_slice;
        commit;
    end process_chnk_2;
This doesn't work, since the inner query still references the BLOB column over the DB link.
I had thought of selecting one of the text columns, paginating on that, and joining back to the original view, like this:
    -- second chunk
    procedure process_chnk_2 (
        l_slice pls_integer
    )
    is
    begin
        execute immediate
            'insert /*+ append */ into destination_table
               select x.*
                 from (select q.text_that_is_not_quite_a_key,
                              q.second_that_is_not_quite_a_key,
                              q.third_that_is_not_quite_a_key,
                              q.fourth_that_is_not_quite_a_key,
                              . . .
                              rownum rnum
                         from (select * from migr_view.migr_blob@db_link) q
                        where rownum <= :hi) z,
                      migr_view.migr_blob@db_link x
                where z.rnum >= :lo
                  and z.text_that_is_not_quite_a_key   = x.text_that_is_not_quite_a_key
                  and z.second_that_is_not_quite_a_key = x.second_that_is_not_quite_a_key
                  and z.third_that_is_not_quite_a_key  = x.third_that_is_not_quite_a_key
                  and z.fourth_that_is_not_quite_a_key = x.fourth_that_is_not_quite_a_key
                  . . .'
            using l_slice * 2, l_slice;
        commit;
    end process_chnk_2;
However, we are talking about joining tables with millions and millions of rows, with no indexes on the slicing columns... do you know how long that would take?
I cannot use any tools such as Data Pump... only a script... help!
I have been trying out an idea I put together over the past 2 days. I'm proposing it as an answer, but I'd love to hear other suggestions, however ugly! While I cannot identify unique keys in sorted order, by concatenating the not-quite-key columns and extracting the digits, I can find a range of values to slice on that is reasonably evenly distributed.
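For example (with illustrative values), the expression picks out only the first contiguous run of digits in the concatenated keys, which is what gets compared against the slice bounds:

```sql
-- regexp_substr with '[[:digit:]]+' returns the first digit run of the
-- concatenation, so 'DOC-4711-A' || 'REV2' yields the number 4711.
select to_number(regexp_substr('DOC-4711-A' || 'REV2', '[[:digit:]]+')) as slice_key
  from dual;
```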
    l_stmt := q'{insert /*+ nologging nowait */ into destination_table
                 select *
                   from migr_view.migr_blob@db_link
                  where to_number(regexp_substr(not_key_1 || not_key_2 || not_key_3..., '[[:digit:]]+')) >= }'
              || i_start_slice
              || q'{ and to_number(regexp_substr(not_key_1 || not_key_2 || not_key_3..., '[[:digit:]]+')) < }'
              || l_end_slice;
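Assuming the digit range is split into fixed-width slices, the statement above could be driven by a loop along these lines (the slice width, the upper bound, and the two `not_key` column names are all illustrative; the real bounds have to come from inspecting the actual digit distribution):

```sql
-- Illustrative driver for the range-sliced insert above.
declare
    l_stmt        varchar2(4000);
    l_slice_width constant number := 100000;     -- assumed slice width
    i_start_slice number := 0;
    l_end_slice   number;
begin
    while i_start_slice < 10000000 loop          -- assumed upper bound of the key range
        l_end_slice := i_start_slice + l_slice_width;
        l_stmt := q'{insert /*+ nologging nowait */ into destination_table
                     select *
                       from migr_view.migr_blob@db_link
                      where to_number(regexp_substr(not_key_1 || not_key_2, '[[:digit:]]+')) >= }'
                  || i_start_slice
                  || q'{ and to_number(regexp_substr(not_key_1 || not_key_2, '[[:digit:]]+')) < }'
                  || l_end_slice;
        execute immediate l_stmt;
        commit;                                  -- each slice commits independently
        i_start_slice := l_end_slice;
    end loop;
end;
/
```

Because each slice commits on its own, a killed session only loses the slice that was in flight, and the loop can be restarted from the last committed boundary.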