What are the differences between Oracle and other NoSQL database - NoSQL Database

Hi all,
I would like to know what the differences are between Oracle and other NoSQL databases.
When and why should we use Oracle?
Is the Oracle NoSQL Database linked with the Big Data Appliance?
Can we use map-reduce on a single personal computer? How should we install the Oracle NoSQL Database to use map-reduce on a single personal computer?
Do we also have eventual consistency with the Oracle NoSQL Database? Can we lose data if the master node fails?
Are transactions ACID with Oracle NoSQL database? How can we prove it?
Thanks. 

893771 wrote:
Hi all,
I would like to know what the differences are between Oracle and other NoSQL databases.
When and why should we use Oracle?

I suggest that you start here:
http://www.oracle.com/technetwork/database/nosqldb/overview/index.html
Is the Oracle NoSQL Database linked with the Big Data Appliance?

Yes, Oracle NoSQL Database will be a component of the Big Data Appliance.
Can we use map-reduce on a single personal computer? How should we install the Oracle NoSQL Database to use map-reduce on a single personal computer?

Yes, I believe you can run M/R on a single computer. Consult the various pieces of documentation available on the web. You may run Oracle NoSQL Database on the same computer that you are running M/R on, but they will likely compete for CPU and I/O resources, so performance may suffer.
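To make the idea concrete: map/reduce is just a programming model, so the whole pipeline can be sketched on one machine in plain Python. This is an illustrative toy word count, not Oracle NoSQL or Hadoop code:

```python
from itertools import groupby
from operator import itemgetter

def map_phase(records):
    """Map: emit a (word, 1) pair for every word in every input record."""
    for record in records:
        for word in record.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    """Reduce: sum the counts for each distinct key."""
    # The shuffle/sort step: group intermediate pairs by key.
    for key, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
        yield (key, sum(count for _, count in group))

if __name__ == "__main__":
    docs = ["the quick brown fox", "the lazy dog", "the fox"]
    counts = dict(reduce_phase(map_phase(docs)))
    print(counts["the"])  # 3
    print(counts["fox"])  # 2
```

A real framework like Hadoop distributes the map and reduce calls across nodes, but the model itself runs fine on a laptop.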
Do we also have eventual consistency with the Oracle NoSQL Database?

Yes.
Can we lose data if the master node fails?

If you run Oracle NoSQL Database with the default (recommended) durability settings, then if the master fails, a new one is elected and no data is lost.
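Why an acknowledged write survives a master failure can be illustrated with a toy majority-acknowledgement model. This is plain Python and purely illustrative; the real replication and election protocol is more involved:

```python
class ReplicationGroup:
    """Toy model of majority-ack durability (illustrative only; not the
    Oracle NoSQL Database implementation)."""

    def __init__(self, nodes=3):
        self.logs = [[] for _ in range(nodes)]  # one log per node; node 0 is master
        self.majority = nodes // 2 + 1

    def write(self, value):
        # The master appends locally, then replicates until a majority acks.
        acks = 0
        for log in self.logs:
            log.append(value)
            acks += 1
            if acks >= self.majority:
                break  # acknowledged to the client; stragglers catch up later
        return acks >= self.majority

    def fail_master_and_elect(self):
        # Drop the master; elect the surviving replica with the longest log,
        # which necessarily holds every majority-acknowledged write.
        survivors = self.logs[1:]
        return max(survivors, key=len)

group = ReplicationGroup(nodes=3)
group.write("order-42")
new_master_log = group.fail_master_and_elect()
print("order-42" in new_master_log)  # True: the acknowledged write survives
```

Because a write is only acknowledged after a majority of nodes hold it, any new majority used to elect a master must include at least one node that has the write.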
Are transactions ACID with the Oracle NoSQL Database? How can we prove it?

Yes, each operation is executed in an ACID transaction. The API has the concept of "multi" operations, which allow the caller to perform multiple operations on sets of records that share the same major key but have different minor keys. Those operations are also performed within a single transaction.
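The all-or-nothing behavior of a "multi" operation over one major key can be sketched with a toy store. The class, key scheme, and method name below are hypothetical, invented for illustration; they are not the Oracle NoSQL Database API:

```python
class TinyKVStore:
    """Toy key/value store illustrating all-or-nothing "multi" operations
    over records sharing one major key (illustrative, not the real API)."""

    def __init__(self):
        self.data = {}  # (major_key, minor_key) -> value

    def execute_multi(self, major_key, operations):
        """Apply a list of (minor_key, value) ops atomically.
        A value of None means delete. Either every op commits or none does."""
        staged = dict(self.data)  # work on a copy: nothing visible until commit
        try:
            for minor_key, value in operations:
                full_key = (major_key, minor_key)
                if value is None:
                    del staged[full_key]  # a missing key aborts the whole batch
                else:
                    staged[full_key] = value
        except KeyError:
            return False  # abort: the store is left unchanged
        self.data = staged  # commit: all ops become visible at once
        return True

store = TinyKVStore()
store.execute_multi("user/42", [("name", "Ada"), ("email", "ada@example.com")])
# A batch containing a bad delete leaves the store untouched:
ok = store.execute_multi("user/42", [("name", "Grace"), ("phone", None)])
print(ok, store.data[("user/42", "name")])  # False Ada
```

The second batch fails on the delete of a nonexistent "phone" record, so the update to "name" is discarded too: atomicity in miniature.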
Charles Lamb

Related

Audit DataBase with conservation of performances

Hello everybody.
Context: I'm a new user of Oracle DB (v11g). I'm trying to find out which tables in my DB are affected by a query (SELECT/UPDATE/INSERT/DELETE). I have to run a test over one month. A technician told me it's not a good solution to use AUDIT_VAULT on my DB because this method will slow down DB performance.
Questions: What do you think about that? (I found this guide that tells me otherwise: https://www.trivadis.com/sites/default/files/downloads/ttc_oracle_auditing_report_ami_june2011-final_02.pdf ) If I understand correctly, it's possible to audit with AUDIT_VAULT or the REDO LOGS, and I understand the difference between them; what do you think about using the REDO LOGS?
Thank you for your help. Geo-x
More details about the performance impact can be found here: http://www.oracle.com/technetwork/products/audit-vault/learnmore/twp-security-auditperformance-166655.pdf
Auditing is a database feature; more details here: https://docs.oracle.com/cd/E11882_01/network.112/e36292/auditing.htm#DBSEG006
Audit Vault is a different product: it's a component of Oracle Audit Vault and Database Firewall, which helps enforce the trust-but-verify principle by consolidating and monitoring audit data from Oracle databases, non-Oracle databases, Microsoft Active Directory, Microsoft Windows, Oracle Solaris, Oracle Linux, and Oracle ASM Cluster File System. A plug-in architecture consolidates custom audit data from application tables and other sources. Native audit data provides a complete view of database activity along with full execution context, irrespective of whether the statement was executed directly, through dynamic SQL, or through stored procedures. More details here: http://www.oracle.com/technetwork/database/database-technologies/audit-vault-and-database-firewall/overview/index.html
By redo logs auditing I guess you want to use LogMiner: https://docs.oracle.com/cd/E11882_01/server.112/e22490/logminer.htm#SUTIL019
It very much depends on what you are trying to achieve.

Can I use Goldengate with E-Business Suite for an active-active scenario

Hello guys...
My question is: Can I use Goldengate with E-Business Suite for an active-active high availability solution?
Scenario: Oracle E-Business Suite 11.5.10.2 on RDBMS 9.2.0.7.0, hosted at the head office. There is a WAN connection between the head office and other offices located approximately 16 to 20 km away. Users are located at both sites. Periodically the connection between the two locations fails, so the client wants to establish two master instances so that when the connection is down, both locations still have access to the Oracle E-Business Suite.
Thanks for your help! 
Currently this is not feasible due to some application issues and the datatypes used by EBS.
Any update on this thread? Is this applicable now? 
OGG 11g is certified for operational reporting in EBS.
http://www.oracle.com/us/products/middleware/data-integration/goldengate11g-ds-168062.pdf 
Yes, I knew about this, as OGG can be used for reporting purposes.
But I would like to know: is OGG applicable for what JuanAvila asked?
"Periodically the connection between the two locations fails, so the client wants to establish two master instances so that when the connection is down, both locations still have access to the Oracle E-Business Suite."
Awasi,
No, it is not applicable. There are some datatypes (for example AQ, AnyType) which can't be replicated by GoldenGate. 
Then what would be an alternative way from Oracle? Disconnectivity and latency are the main problems; because of this, users across the country cannot access the centralized EBS, so the requirement is to decentralize, with all users working on their own instances that are periodically synchronized with each other.
How closely synchronized do the databases have to be? They will never be an exact match, given that it does take some time to replicate data. If you need something close to real time - and don't need to use Oracle EBS report interfaces (e.g., Discoverer) - and can avoid the restricted data types - then you can use GoldenGate or Streams. If you need an exact match but can afford latency (up to a day, for example), you could use Data Guard and have the standby turned into a snapshot standby. Before the start of the business day, set a restore point, open the database, and let users have at it. At the end of the day, restore to the restore point, apply the accumulated SRLs or ARLs, and repeat.
Given that you have network issues, you need a solution that can recover itself. Downstream capture in Streams and Data Guard, for example, have automatic gap resolution.
http://blogs.oracle.com/stevenChan/entry/three_options_for_scaling_up_ebs_for_reporting 
If we go with something close to real time, how can we avoid the restricted data types in Oracle EBS for GoldenGate? There is a possibility that some EBS functionality will not work properly.
When we restore the standby to the restore point, the primary's logs will be applied to the standby site. But how will logs be transferred and applied from the snapshot standby back to the primary site, for the work users did while the standby was open?
Thanks. 
For EBS, straight replication does not work because of unsupported datatypes. For other alternatives, using EBS-related reporting tools, you need a login to the source, so that doesn't work either.
A snapshot standby will accumulate redo and then apply it when the standby is reverted back to the restore point and the database is taken out of read/write.
"A snapshot standby database is a fully updatable standby database. A snapshot standby database receives and archives, but does not apply, redo data from a primary database. Redo data received from the primary database is applied when a snapshot standby database is converted back into a physical standby database, after discarding all local updates to the snapshot standby database."
http://download.oracle.com/docs/cd/E11882_01/server.112/e17022/manage_ps.htm#BACIEJJI 
What about the user work (DML operations) done on the standby DB while it was in snapshot mode?
It will all be lost: when the snapshot standby is restored to the restore point, applying the primary's logs will resume, but the snapshot standby's own work will not be applied to the primary.
"Redo data received from the primary database is applied when a snapshot standby database is converted back into a physical standby database, after *discarding all local updates to the snapshot standby database*." 
The question was raised based on: "disconnectivity and latency are the main problems; because of this, users across the country cannot access the centralized EBS, so the requirement is to decentralize, with all users working on their own instances that are periodically synchronized with each other."
If we go with GoldenGate, there is the data type restriction, so it is not eligible.
If we go with a snapshot standby, the transactions of users who worked on the snapshot standby will not be applied to the primary DB, because when we restore it to the restore point all of the snapshot standby users' work will be discarded, so it is also not eligible.
Then what does Oracle suggest in this regard?
Probably the same thing stated in several MAA whitepapers: tune your network. 
Yes, the MAA papers present disaster scenarios: if any node goes down, how the next node becomes available for operation.
There should be some solution from Oracle for EBS users who are spread across the country and cannot access one centralized EBS over the WAN.
Anyhow, I will wait for a suggestion or solution in this regard.

MAP/REDUCE and Oracle NoSQL

Hi all,
I would like to know if there are some examples of how to run map/reduce with Oracle NoSQL.
Is there any source code anywhere? Can you send me one example?
Where can we download all the necessary tools?
In the Oracle Big Data Appliance, is map/reduce used with Oracle NoSQL or with Hadoop?
Thanks 
user962305 wrote:
I would like to know if there are some examples of how to run map/reduce with Oracle NoSQL.
Is there any source code anywhere? Can you send me one example?

Take a look at the oracle.kv.hadoop.KVInputFormat javadoc. It discusses how to use Oracle NoSQL Database with Hadoop, and it refers to an example included in the distribution.
Where can we download all the necessary tools?
In the Oracle Big Data Appliance, is map/reduce used with Oracle NoSQL or with Hadoop?

It would be used with both, whether or not you were on the BDA. You use the KVInputFormat to read data from Oracle NoSQL Database into Hadoop during map/reduce processing.
I hope this is useful.
Charles Lamb 
Hi Charles,
Can you please explain where and what to download and install for this case?
Should we also install Hadoop on the same replication nodes as Oracle NoSQL?
Is it possible to have an example with pre-loaded keys on Oracle NoSQL to perform the test?
Is there a version of Oracle NoSQL which comes with some key/value pairs?
I understand the following: data in Oracle NoSQL will be loaded into Hadoop first, and then map/reduce is performed in Hadoop. Is that right?
I would like to know: does that mean Oracle NoSQL cannot run parallel operations? What is the aim of loading data into Hadoop first if Oracle is able to perform parallel operations? Loading data from Oracle NoSQL into Hadoop may take an enormous amount of time, I suppose.
Thanks 
user962305 wrote:
Can you please explain where and what to download and install for this case?

Download Oracle NoSQL Database from OTN: http://www.oracle.com/technetwork/database/nosqldb/downloads/index.html
Should we also install Hadoop on the same replication nodes as Oracle NoSQL?

It depends on your access patterns. In general, probably not, but there may be cases where you achieve better performance with Hadoop and the Rep Nodes co-located.
Is it possible to have an example with pre-loaded keys on Oracle NoSQL to perform the test?
Is there a version of Oracle NoSQL which comes with some key/value pairs?

Look at the quickstart guide that comes with the above Oracle NoSQL Database package. There is a small HelloWorld example which you can use as the basis for creating a data set.
I understand the following: data in Oracle NoSQL will be loaded into Hadoop first, and then map/reduce is performed in Hadoop. Is that right?

Hadoop is a framework which, among other things, runs Map/Reduce jobs. Your Map/Reduce job would use the KVInputFormat to read data from Oracle NoSQL Database and process it however it sees fit. It might write the output of the M/R to (say) HDFS. Or it might write it to (say) the Oracle RDBMS. Or it might write it back to (say) Oracle NoSQL Database.
I would like to know: does that mean Oracle NoSQL cannot run parallel operations? What is the aim of loading data into Hadoop first if Oracle is able to perform parallel operations? Loading data from Oracle NoSQL into Hadoop may take an enormous amount of time, I suppose.

I am not sure I understand your question. Hadoop, by its nature, will break a job into many subtasks. Those subtasks run in parallel, generally across many Hadoop nodes, and they may access Oracle NoSQL Database data. Hence, Oracle NoSQL Database is able to perform operations in parallel, either on the same or on different Rep Nodes.
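The parallelism described here can be sketched in plain Python: each "subtask" scans one partition of a key/value store concurrently, the way M/R splits read from different Rep Nodes in parallel. The partition layout and function names below are invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# A toy store split into partitions, as a distributed KV store would be.
PARTITIONS = [
    {"user/1": 10, "user/2": 20},
    {"user/3": 30, "user/4": 40},
    {"user/5": 50},
]

def scan_partition(partition):
    """One 'map subtask': read every record in a single partition."""
    return sum(partition.values())

def parallel_total(partitions):
    # Each subtask reads a different partition concurrently; the results
    # are then combined, as a reduce step would do.
    with ThreadPoolExecutor(max_workers=len(partitions)) as pool:
        return sum(pool.map(scan_partition, partitions))

print(parallel_total(PARTITIONS))  # 150
```

Because the subtasks touch disjoint partitions, adding nodes (or workers) scales the scan rather than serializing a bulk "load into Hadoop" step.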
Charles Lamb 
Thanks Charles.
For Hadoop, where should we get it, and which version should we use with Oracle NoSQL?
I think 0.2.20 is the current version, no?
Charles Lamb

NOSQL On Exadata

Hi All, I have a customer (IHAC) who is looking to use his Exadata in his big data strategy. So he has a question: can he install NoSQL on Exadata? Is it possible to use it that way? Any thoughts? Kr, Gaurav
AFAIK, you cannot install NoSQL DB on an Exadata machine. Exadata is specifically designed and optimized to run the Oracle Database. Of course, you can install NoSQL DB on non-Exadata hardware and access the NoSQL data from Exadata. Hope this helps.
ashok
Oracle NoSQL DB is part of Oracle's solution for Big Data, used to acquire Big Data. Generally, Exadata comes in at a later stage of the Big Data solution equation (acquire, organize, analyze, store). Oracle Big Data SQL can be used to query the Oracle RDBMS, Hadoop, and NoSQL.

MySQL native replication v/s MySQL clustering v/s Oracle GG

Dear Friends,
I just want to understand which of these is the best and most feasible replication between MySQL databases (bi-directional/circular). We want to do a POC study and see which of these is the best and most feasible replication between two MySQL database instances separated geographically. I understand the first two are part of MySQL and people would obviously suggest going with one of them; however, I just wanted to get your expert opinion. Is GoldenGate bi-directional replication cumbersome and not recommended between MySQL DB instances? The MySQL version is 5.6 and we want to use GoldenGate 12c.
1. MySQL native replication
2. MySQL clustering
3. GoldenGate
Best Regards
MySQL Cluster will do active-active replication between data centers.  I do not know if GG will support what you want.
Oracle GoldenGate and MySQL Replication are very similar; they both use log-mining technology to achieve data synchronization. OGG can synchronize transactional data uni-directionally, bi-directionally, or in a circular topology across heterogeneous/homogeneous systems for continuous availability, but OGG requires purchasing a license. If you choose MySQL Replication, it is recommended to use the GTID feature. It's free to use.
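For reference, GTID-based replication in MySQL 5.6 is enabled through server configuration, roughly as in the sketch below. The values are placeholders, and the exact option spellings should be verified against the MySQL 5.6 manual:

```ini
# my.cnf on each server taking part in replication (sketch for MySQL 5.6)
[mysqld]
server_id                = 1          # must be unique on each server
log_bin                  = mysql-bin  # binary logging is required
gtid_mode                = ON
enforce_gtid_consistency = ON         # reject statements unsafe for GTIDs
log_slave_updates        = ON         # required for GTID mode in 5.6
```

With GTIDs enabled, a replica can be pointed at its source with MASTER_AUTO_POSITION instead of manual binlog file/position coordinates, which is what makes failover and circular topologies much less error-prone.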
