ASM and the file system cache
Or is this just an improvement on a 'bad' situation I will just have to live with? Thanks, Robert.

November 15, - pm UTC

Important follow-up clarification?
Robert, November 13, - pm UTC
Please advise?

Very interesting revelation
Dhairyasheel, November 16, - am UTC

November 23, - am UTC
If you run a memory-intensive program, the OS file system cache will shrink and maybe virtually disappear.
Stop running things that use lots of physical RAM and it'll grow again. Since memory is a "use it or lose it" resource, like CPU, operating systems use it when it is available and do without when it isn't.
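The "use it or lose it" behavior Tom describes is easy to observe on Linux, where /proc/meminfo reports the current page cache size. A minimal sketch, assuming Linux's /proc/meminfo format; the helper name `parse_cached_kb` is mine, not something from the thread:

```python
# Sketch: watch the Linux page cache grow and shrink via /proc/meminfo.
# Assumes the Linux /proc/meminfo format; parse_cached_kb is an
# illustrative helper name, not from the original thread.

def parse_cached_kb(meminfo_text: str) -> int:
    """Return the 'Cached:' value (page cache size, in kB)."""
    for line in meminfo_text.splitlines():
        if line.startswith("Cached:"):
            return int(line.split()[1])
    raise ValueError("no 'Cached:' line found")

# Usage (Linux): parse_cached_kb(open("/proc/meminfo").read())
# Sample it before and after a memory-hungry program runs: the value
# shrinks while that program holds RAM and grows back as files are
# read again afterward.
```

Sampling this value while a large batch job runs makes the shrink-then-regrow cycle visible without any Oracle involvement at all.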
Johnny, November 16, - am UTC
More likely it was uppercase for emphasis.

November 23, - pm UTC

Hi Tom,

Not sure if this fits here. Who is responsible for this "doubling"? Is it DBWR itself? Does it simply write each block twice? Who ensures atomicity? Would hardware RAID be somewhat faster?

Cheers, Markus

DBWR would typically issue asynchronous I/O to both devices and wait for the OS to notify it of the completed writes.
Win, November 17, - pm UTC
My question is: does it take ASM a couple of process runs to identify hot blocks and rearrange them across disks?
The DB parameters remain the same. Is it expected that performance is poor at first and then improves after migrating to ASM, due to hot-block detection and rearrangement?

Hi Tom, thanks for answering my questions. You wrote that the writes don't have to be atomic. I don't understand that: the data could become stale. The block SCN could become 2 in failgroup 1 while still sitting at 1 in failgroup 2. Someone or something has to ensure the data does not go stale.
Is there a special protocol?

November 28, - pm UTC
No, it would not be stale; it would be a failure. This is not any different from what we have been doing with multiplexed redo logs since the very early days. Yes, we can deal with a parallel write succeeding on one device and not the other, and we know what to do when that happens. That's it!

Robert, December 07, - pm UTC
When I doubled my buffer cache from 3 GB to 6 GB, my batch jobs ran up to 20 times faster. One follow-up question: although performance is greatly improved, I still haven't matched the performance I had on the 'cooked' file system.
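The mirrored-write pattern Tom describes (issue the same block write to both failgroup copies in parallel, wait for the OS to confirm each, and treat a partial success as a disk failure rather than tolerating a stale mirror) can be sketched conceptually. This is only an illustration in Python, not Oracle's implementation; `mirrored_write` and the file layout are invented for the example:

```python
# Conceptual sketch of a mirrored write: the same block goes to every
# failgroup copy in parallel, and we wait for all completions. A
# failed mirror surfaces as an exception, at which point ASM would
# take that disk offline rather than leave a stale copy in service.
import os
from concurrent.futures import ThreadPoolExecutor

def write_block(path: str, offset: int, block: bytes) -> None:
    """Write one block at the given offset and force it to disk."""
    with open(path, "r+b") as f:
        f.seek(offset)
        f.write(block)
        f.flush()
        os.fsync(f.fileno())  # wait for the OS to confirm the write

def mirrored_write(paths, offset: int, block: bytes) -> None:
    """Issue the same block write to every mirror in parallel; block
    until all complete. Any mirror's failure raises here."""
    with ThreadPoolExecutor(max_workers=len(paths)) as pool:
        futures = [pool.submit(write_block, p, offset, block) for p in paths]
        for fut in futures:
            fut.result()  # raises if that mirror's write failed
```

Note that nothing here makes the pair of writes atomic: the guarantee comes from detecting the half-failure and evicting the bad mirror, exactly as with multiplexed redo logs.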
Can I more or less expect the same performance on ASM as I had on cooked disks, all things being equal, if I just add the right amount of additional buffer cache?

December 07, - pm UTC
"Can I more or less expect the same performance on ASM as I had on cooked disks, all things being equal?" How big was the secondary SGA?
Tom, is there any 'reasonable' 'secondary SGA' size? I'm sure "it depends", but I'm just looking for a sanity check. Your thoughts?

December 10, - am UTC
You start the machine; the OS takes Y units of memory.
You start Oracle; Oracle takes X units of memory. Now, if you have a dedicated database machine (the machine is all about you), you would assign to the PGA what you want the PGA to have, and probably the rest to the SGA.

Let me ask again: operations against raw devices automatically bypass the file system cache.
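That cache-bypassing behavior of raw devices can be approximated on Linux for ordinary files with direct I/O: opening with O_DIRECT asks the kernel to skip the page cache entirely, which is why there is no "secondary SGA" to lose. A sketch under those assumptions (Linux-specific; direct I/O requires aligned, block-sized buffers, and not every file system supports it):

```python
# Sketch: O_DIRECT on Linux bypasses the OS page cache, approximating
# raw-device behavior on a regular file. Direct I/O needs aligned
# buffers; an anonymous mmap gives us page-aligned memory, and we pad
# the write out to the alignment size.
import mmap
import os

def direct_write(path: str, data: bytes, align: int = 4096) -> None:
    """Write data bypassing the page cache via O_DIRECT (Linux only)."""
    buf = mmap.mmap(-1, align)            # page-aligned scratch buffer
    buf[:len(data)] = data                # rest of the block stays zeroed
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o644)
    try:
        os.write(fd, buf)                 # kernel skips the page cache
    finally:
        os.close(fd)
        buf.close()
```

With such opens, every read and write goes to the device; nothing is silently cached by the OS, which is exactly the situation ASM on raw storage puts you in until you grow the buffer cache to compensate.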
Swap size was also set to 16 GB. The SLOB tablespace was created with the tablespace name slob, and pre-allocated to 32 GB in size in order to eliminate waits on file extension. We created 64 SLOB schemas, and tested with 32 schemas using this command:
Each test ran for one hour. However, the results of the test were surprising, and consistent on both VirtualBox and VMware Workstation.
What are the implications of this? More importantly, it leaves tuning, for the most part, totally in the hands of the DBA. As someone who once spent weeks working as a SAN storage guy on a critical database performance issue caused, in the end, by bad file system striping, I can attest to the attraction of that!
In a perfect world, there is so much memory available that you can be left giggling at the amount of RAM allocated to each system. In another perfect world, all storage operates at Flash speed and there are no spinning disks at all. But in reality, we are often still constrained by both the quantity of memory available and by disk speed.
When testing new releases you ideally want to be able to still access all these same Oracle features. Also, if production uses Fibre Channel interconnects, seriously consider using the same network topology in your test and dev environments, especially UAT.