Streams Apply / Error Slowness

In a Streams multi-master situation I encountered a serious delay in apply messages and errors after a re-instantiation of one of the sites. When I ran AWR and ASH reports, I saw the bottleneck was the queue table of one of the apply sites, i.e. APPLY_FROM_SITE1 (huge scattered-read counts). It was doing full table scans, and each transaction was taking anywhere from 0.02 seconds to more than 45 seconds. That is a huge problem, especially if you have more than 10K transactions to replay.
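Those full scans surface in ASH as "db file scattered read" waits. Before pulling a full AWR report, a query along these lines can confirm which SQL_ID is responsible (a sketch; the one-hour window and the event filter are illustrative, and DBA_HIST views require the Diagnostics Pack license):

```sql
-- Sketch: top SQL statements driving multiblock (full-scan) reads
-- in the last hour of ASH history.
select sql_id, count(*) as samples
from   dba_hist_active_sess_history
where  sample_time > systimestamp - interval '1' hour
and    event = 'db file scattered read'
group  by sql_id
order  by samples desc;
```

The SQL_ID with the most samples should match the apply-error query shown below.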

This was the query that was running (names have been changed to protect the innocent):

select qt.msgid, e.local_transaction_id, e.source_commit_scn,
       e.external_source_pos, e.aq_transaction_id, e.source_transaction_id,
       NVL(e.xidusn, 0), NVL(e.xidslt, 0), NVL(e.xidsqn, 0), NVL(e.flags, 0)
from   "STREAMSUSERNAME"."APPLY_FROM_SITE1_QT" qt, sys.apply$_error e
where  e.apply# = :1
and    qt.enq_tid (+) = e.aq_transaction_id
and    qt.q_name (+) = :2
and    e.local_transaction_id = :3
order  by qt.step_no

So after some digging and analysis I decided to add an index on "APPLY_FROM_SITE1_QT" over the columns q_name, enq_tid, and msgid:

create index index_site1 on apply_from_site1_qt (q_name, enq_tid, msgid) online parallel 4;

(ONLINE so the build does not block concurrent DML on the busy queue table.)
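To confirm the optimizer actually switches from the full table scan to the new index, the plan can be checked with EXPLAIN PLAN and DBMS_XPLAN (a sketch using a trimmed version of the apply query; :1, :2, :3 are the bind placeholders from above, and bind peeking means the real runtime plan may still differ):

```sql
-- Sketch: inspect the plan for the apply-error lookup after the
-- index is in place; look for an index access on INDEX_SITE1
-- instead of TABLE ACCESS FULL on APPLY_FROM_SITE1_QT.
explain plan for
select qt.msgid, e.local_transaction_id
from   "STREAMSUSERNAME"."APPLY_FROM_SITE1_QT" qt, sys.apply$_error e
where  e.apply# = :1
and    qt.enq_tid (+) = e.aq_transaction_id
and    qt.q_name (+) = :2
and    e.local_transaction_id = :3
order  by qt.step_no;

select * from table(dbms_xplan.display);
```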

Once the index was added, transactions immediately went from more than 45 seconds down to 0.02 seconds.

I'm not sure whether this is approved by Oracle, but it sure made life much easier after the change.

