File system: Best practices - NoSQL Database

Hello, all!
Which file systems (OS) should be used for storing data files (best practices)?
Thank you in advance!

user13299586 wrote:
Hello, all!
which file systems (OS) should be used for storing data files (best practices)?

Hello,
Linux: ext3
Solaris 10: zfs
For best performance on ext3, you will want to set the following parameters (if possible):
sysctl -w vm.dirty_background_bytes=$((10*1024*1024))
sysctl -w vm.dirty_ratio=40
sysctl -w vm.dirty_expire_centisecs=1000
Also, put those into /etc/sysctl.conf to make them permanent.
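As a sketch of that permanent form (the byte value is just 10*1024*1024 written out; the block writes a scratch copy for illustration rather than touching the real /etc/sysctl.conf):

```shell
# Sketch: the same settings in sysctl.conf form. Written to a scratch
# copy here; on the real system this content goes in /etc/sysctl.conf
# and is loaded with "sysctl -p".
conf=./sysctl.conf.example
cat > "$conf" <<'EOF'
vm.dirty_background_bytes = 10485760
vm.dirty_ratio = 40
vm.dirty_expire_centisecs = 1000
EOF
cat "$conf"
```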
On ZFS, add one of the following to /etc/system:
set zfs:zfs_no_write_throttle = 0x1
or
set zfs:zfs_write_limit_override = 0x2000000
Charles Lamb 

Charles, thanks!
And what about raw device? 

user13299586 wrote:
Charles, thanks!
And what about raw device?

It won't work -- you have to specify a directory, not a file, in the configuration. BDB JE assumes an underlying file system.
Charles Lamb 

When benchmarking Oracle NoSQL, where does the loaded data get stored? In the KVROOT? Or can you please share the details around this.

user11286653 wrote:
when benchmarking Oracle NoSQL, where does the loaded data get stored? In the KVROOT? Or can you please share the details around this.

The actual data is stored using Berkeley DB Java Edition. JE's files are down in the guts of KVROOT/**/*.jdb.
Charles Lamb
Edited by: Charles Lamb on May 23, 2012 2:44 PM
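A quick way to see those files is a find under KVROOT. The sketch below builds a scratch directory tree standing in for a real store (the rg1-rn1/env path is only an illustrative guess at the layout, as the post's KVROOT/**/*.jdb glob suggests):

```shell
# Sketch: JE's .jdb log files live a few levels below KVROOT. A scratch
# tree stands in for a real store here.
KVROOT=./kvroot-demo
mkdir -p "$KVROOT/mystore/sn1/rg1-rn1/env"
touch "$KVROOT/mystore/sn1/rg1-rn1/env/00000000.jdb"
find "$KVROOT" -name '*.jdb'
```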

Related

log in takes so much time

Hi All,
When I try to log in as any user (root, etc.) on Solaris (SunOS ls04812 5.9 Generic_118558-27 sun4u sparc SUNW,Sun-Fire-V240), it takes a very long time; after I hit Ctrl+C, it kills some process and logs in directly.
For example:
Last login: Thu Mar 27 14:59:15 2008 from xx.xxx.xxx.xxx
WARNING: Do not use computer if you are not authorized.
WARNING: You have been warned.
WARNING: Do not use computer if you are not authorized.
WARNING: You have been warned.
Sourcing /etc/.profile-EIS.....
checking /global/app/sls34/.kshrc
at this point waiting, waiting, waiting.... :(
I tailed /var/adm/messages at the time but couldn't find anything; I will be glad if someone gives me an idea.
I have a cluster by the way and the other node does not have any problem like that.
thanks,
halit 
FYI
If the server has many NFS mounts, logging in will be slow. So if your server has many NFS mounts, it may be due to this.
Hi Halit,
What type of authentication are you using? Are you using local, NIS or NIS+?
Tan_Tangerine wrote:
FYI
If the server has many NFS mounts, logging in will be slow. So if your server has many NFS mounts, it may be due to this.

How can I check whether the server has many NFS mounts, and if so, how am I going to reduce them?
p.s:sorry for additional questions.
cheers,
halit 
markscheck wrote:
Hi Halit,
What type of authentication are you using? Are you using local, NIS or NIS+?

I don't know :( Could you please let me know how to check that?
many thanks for help,
halit 
Hi Halit,
df -k will show you the mounts; they'll look like machinename:/directory.
If there's a problem with a mount, that would cause slowness when you log in. Let me know. Regards, Mark
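As a sketch, the NFS entries can be picked out of df -k output by their host:/path device field; sample lines are used here so the filter is self-contained (on the live system you would pipe real df -k through the awk):

```shell
# Sketch: NFS mounts show a device of the form host:/path in df -k.
# Sample df -k output is used here to demonstrate the filter.
printf '%s\n' \
  'Filesystem            kbytes    used   avail capacity  Mounted on' \
  '/dev/dsk/c0t0d0s0   10325600  512345 9710000     6%    /' \
  'nfssrv:/export/home   204800  102400  102400    50%    /home' |
awk '$1 ~ /^[^\/]+:\// {print $1, $NF}'
# prints: nfssrv:/export/home /home
```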
Is this a stand alone machine? A stand alone machine uses its /etc/passwd for authentication. I think something may be up with the .profiles.
Is the machine bogged down in any other way, i.e., slowness after you log in?
Run "more /etc/passwd" and tell us what shells you are using. Then try:
vmstat 1 1
prstat
Let me know. Regards Mark 
markscheck wrote:
Hi Halit,
df -k will show you the mounts; they'll look like machinename:/directory.
If there's a problem with a mount, that would cause slowness when you log in. Let me know. Regards, Mark

Oh! You meant that with NFS mounts :)
No, I don't have a problem with that; this problem occurred suddenly. All of the mounts are OK, none of them at 100%.
I think this is not related to that.
thanks,
halit 
markscheck wrote:
Is this a stand alone machine? A stand alone machine uses its /etc/passwd for authentication. I think something may be up with the .profiles.
Is the machine bogged down in any other way, i.e., slowness after you log in?
Run "more /etc/passwd" and tell us what shells you are using. Then try:
vmstat 1 1
prstat
Let me know. Regards Mark

Hi Mark,
a/ Cluster machine,
b/ I checked the .profiles many times; also, the other node has the same .profile file and does not have such a problem.
c/ I checked that, actually, but all the usage figures (output of "top", "prstat") were the same before and after logging in.
I'll let you know when I find something,
thanks,
halit 
Hi Halit,
I was thinking about this yesterday. Did this problem just pop up? I was also thinking that it may be a DNS lookup. I've seen Linux hosts take forever when they don't have an entry for the computer in the hosts table. Check how it's doing lookups in nsswitch.conf.
Hi Halit,
* You can check the DNS lookup order in /etc/nsswitch.conf.
* Check for NFS filesystems using df -k. This should not hang.
* You are getting the prompt when pressing Ctrl+C, so check your profiles manually:
-> source /etc/profile
-> source /etc/.profile-EIS
-> source ~/.profile
This will show whether the problem is actually with the profiles.
Also check your NFS shares. If there are a large number of NFS shares and mounts, logins will be slow:
-> dfshares or share
-> dfmounts
Hope this helps.
Regards,
Jawahar 
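To check the lookup order mentioned above, grep the hosts line; a sample nsswitch.conf is written here so the sketch is self-contained (on the real box, grep /etc/nsswitch.conf itself):

```shell
# Sketch: with "files" before "dns" on the hosts line, local entries
# are tried first and a dead DNS server cannot stall the lookup.
printf 'hosts:\tfiles dns\n' > ./nsswitch.conf.sample
grep '^hosts:' ./nsswitch.conf.sample
```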
Does running 'quota -v' in the shell after you log in also take a long time to run?
--
Darren 
Darren_Dunham wrote:
Does running 'quota -v' in the shell after you log in also take a long time to run?
--
Darren

Bingo.
I'd bet that's it - the default profile "/etc/profile" does a quota check.
If you don't have root access to edit /etc/profile, create a ".hushlogin" file in your home directory. 
If that is it, you can also mount the filesystems with 'noquota'.
--
Darren 
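The .hushlogin workaround is just an empty file; a sketch against a scratch directory standing in for the real home directory:

```shell
# Sketch: an empty .hushlogin in the home directory makes the default
# /etc/profile skip its quota check (and message of the day).
HOME_DEMO=./home-demo    # stand-in for the real $HOME
mkdir -p "$HOME_DEMO"
touch "$HOME_DEMO/.hushlogin"
ls -a "$HOME_DEMO"
```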
Hi All,
I have coincidently solved the problem.
The messages file told me something was wrong with /var: no space left, etc. I started deleting things in /var and found a couple of log files related to logins. After I truncated them with ">file", login was faster than before :P
I will also post which files they were (I forgot, so I am looking for them).
many thanks for all your helps guys,
halit
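For reference, the ">file" trick truncates the file in place instead of deleting it, so any daemon holding the file open keeps a valid handle; a sketch on a scratch file:

```shell
# Sketch: truncate a log file in place. ": > file" in ksh/sh has the
# same effect as the bare ">file" used above.
log=./demo.log
echo "old entries" > "$log"
: > "$log"
wc -c < "$log"    # byte count is now 0
```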

Equivalent of PIN tool for SPARC?

Hi,
I am using Sun Studio 12 update 1 for SPARC. I would like to take a trace of all the memory accesses of my application. This can easily be done in Linux/x86 using the PIN tool (http://www.pintool.org/).
Is there any tool with similar functionality, perhaps in SStudio12u1? The only binary instrumentation tool is BIT but it seems to be useful only for reporting.
Regards,
-Ippokratis.
Take a look at shade:
http://cooltools.sunsource.net/shade/
Example:
http://developers.sun.com/solaris/articles/shade.html
Regards,
Darryl. 
Hi Darryl,
Thanks for the quick reply. SHADE with support for multi-threaded applications could do it. But I am using SunOS 10 (u8), and according to the webpage:
"This version only supports single-threaded applications on the Solaris 10 platform..."
I apologize for not writing down all my requirements (SunOS 10, multithreaded app).
Regards,
-Ippokratis.
Edited by: Ippokratis on Feb 22, 2010 12:27 PM 
Hi Ippokratis,
If you are trying to catch data race cases, you can build your application
with "-xinstrument=datarace" option, then run under "collect -r on", and
view the results in "tha" (Thread Analyzer).
Thanks.
Nik 
Hi Nik,
No I am not trying to catch data races. I want to get a dump of all the memory accesses of my multithreaded application on SunOS5.10 and post-process it.
Regards,
-Ippokratis.

Configure Search Depth for AD Nested Groups (VDI 3.1)

Hi there,
I'm trying to increase the search depth when resolving nested group memberships for pool access.
If I run
bash-3.00# /opt/SUNWvda/lib/vda-client -u user1
Password:
No desktop found for user1
user1 is a member of a group called ServiceTech, which is a member of VDIUsers, which has a pool assigned.
I have read over the various docs on the wiki and the ldap filters all appear to be set correctly for Active Directory.
Any ideas would be very welcomed.
Thanks for your time.
Kim 
You can't configure the nested group depth - it is fixed at 3.
Your example should work though. Is there an available desktop in the assigned pool? 
Hi Stephen,
Good morning.
Unfortunately no, it tells me there aren't any desktops assigned to the user when I log in.
I had to increase the search depth within SGD for the same user to see a published application.
Thanks for your reply :)
Kim 
Hi Kim,
SGD doesn't enable nested group searches by default, so to enable it you have to increase the depth. If VDI has a fixed depth of 3, this should cover the case above. Can you provide the user directory logging from VDI? This should help us see what the issue is.
Thanks,
-- DD 
Aha, that would make sense then thanks Dean!
Sorry for being dense; I take it I want to up the cacao logging level?
Cheers guys :)
Kim 
Yep, please. To ALL if possible:
Enabling VDI User Directory Logging: http://wikis.sun.com/pages/viewpage.action?pageId=171840712
Thanks,
-- DD
Edited by: DeanyDean on Jul 15, 2010 2:02 AM 
Hi DD,
Thanks for this.
I'll do this out of hours this evening as it's a production environment (12 boxes).
Many thanks for your help.
Kim

Cannot use java.io.File.rename() to move files between zfs

Hi,
I have a Java application running in a zone, and part of the application does a simple File.rename() to move /usr/local/application/x.zip to /zfs/space/application/x.zip [locations obfuscated]. This fails with an exception (currently unknown, but I will be getting the stack trace tomorrow). However, in my development environment I don't get this issue, but we are not using zfs there.
Does anyone know why the move would not work?
Cheers
Simon 
Hmm, can File.rename() really move files between different filesystems?
.7/M. 
Hi.
According to:
http://download.oracle.com/javase/1.4.2/docs/api/java/io/File.html
"Whether or not this method can move a file from one filesystem to another is platform-dependent."
So it may work, or it may not :)
Regards. 
Thanks Nik :)
If Java in turn uses the C function rename(), it will not work across filesystems.
.7/M.
Edited by: abrante on Aug 11, 2011 12:12 PM
This link seems useful:
http://www.javakb.com/Uwe/Forum.aspx/java-programmer/25200/Problem-with-java-Rename-function-in-solaris-platform
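For completeness: mv(1) works across filesystems because it falls back to copy-then-remove when rename(2) fails, and an application can do the same rather than relying on File.rename(). A shell sketch of that fallback pattern (the move helper is hypothetical; demonstrated here within a single filesystem):

```shell
# Sketch: try an atomic rename first (mv uses rename(2) when it can);
# on failure, fall back to copy-then-remove. File.rename() has no such
# fallback, which is why it can fail between filesystems.
move() {
  mv -- "$1" "$2" 2>/dev/null || { cp -p -- "$1" "$2" && rm -- "$1"; }
}
mkdir -p ./demo-src ./demo-dst
echo data > ./demo-src/x.zip
move ./demo-src/x.zip ./demo-dst/x.zip
ls ./demo-src ./demo-dst
```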

zpool disaster recovery

Hi,
I have two solaris 10 machines, each one with a zpool DATA.
Now I need to do a DR of the zpool DATA from the first machine to the second machine.
If I lose the first zpool, I must have all data saved in the second zpool.
Can I use the snapshot for this purpose? 
The short answer is yes. You can create a snapshot on the source data pool, use "zfs send" to create a stream of the snapshot, and pipe it over using (for example) ssh to the destination machine, which can store it using "zfs receive". It is also possible to do incremental transfers. I'd recommend that you check out the documentation (zfs manual page, reference manuals, blogs, etc.) for additional information. I'm not in a position to give you exact references now, but perhaps someone else will.
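A sketch of the commands involved (the pool name DATA is from the question; host2 and the snapshot names are placeholders, and flags such as -R and -d vary by Solaris 10 update, so check the zfs man page on your release). Not run here:

```shell
# Sketch only: full replication of pool DATA to host2, then a later
# incremental based on the first snapshot.
zfs snapshot -r DATA@snap1
zfs send -R DATA@snap1 | ssh host2 zfs receive -Fdu DATA
# ...later, after changes on the source:
zfs snapshot -r DATA@snap2
zfs send -R -i DATA@snap1 DATA@snap2 | ssh host2 zfs receive -du DATA
```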
ok thanks,
but I know that a zfs snapshot is copy-on-write, where you must have the source data to restore the snapshot... and in the second pool I have only the snapshot. Is that wrong?
-------------------------------
of course, I'm writing this thread under 'Solaris Zone' because there is a zone in DATA....
Edited by: 853883 on Sep 19, 2012 12:43 PM 
By zpool sharing, do you mean exporting and importing the pools?
And where are you taking the snapshot?
While a zfs filesystem has a snapshot, it will not allow you to destroy it in the normal way; you have to use the force option to destroy the pool.
I'm a little confused by your statement. When you create a snapshot, you're effectively saving the state of the filesystem at a particular point in time. When you do a zfs send to send the data to the other machine, it sends the filesystem data as it existed at the time of the snapshot (even if the source filesystem has changed since then). So you do transfer the actual data to the other machine so that the filesystem can be recreated there. 
Yes: if you use the zfs send command to save the snapshot, move it to the other system, and then use zfs receive to restore it, the zfs dataset will be created.
In this case you have to create snapshots for all the datasets in the respective pool; only then do you get the complete pool.
Edited by: muvvas on Sep 26, 2012 9:23 PM
