Columns: source_id (int64, 1 to 4.64M), question (string, 0 to 28.4k chars), response (string, 0 to 28.8k chars), metadata (dict)
140,728
How am I supposed to pass a password to ldapsearch using the -y <password file> option? If I write the password in the password file in plain text, I get this error: ldap_bind: Invalid credentials (49) additional info: 80090308: LdapErr: DSID-0C0903AA, comment: AcceptSecurityContext error, data 52e, v1772 The same happens if I use the -w <password> option. EDIT : The command I'm running is ldapsearch -x -D <my dn> -y .pass.txt -h server.x.x -b "dc=x,dc=y" "cn=*" Where the file .pass.txt contains my password, in plain text. Both the DN and the password are correct. If I run the command with the -W option and type the password on the prompt the command runs successfully, but I would like to store the password somehow to make a script.
Keep in mind that ldapsearch will use the entire contents of the file for the password--which means it WILL include a terminating newline character if one exists. To verify if this is in fact your problem, try creating a file without one: echo -n ThisIsaBadPassword > .pass.txt ( UPDATE : Included '-n')
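To double-check, you can dump the file byte by byte and then rerun the search; the DN, host and base below are just the placeholders from the question:

printf '%s' 'ThisIsaBadPassword' > .pass.txt   # printf writes no trailing newline
od -c .pass.txt                                # the output should not end with \n
ldapsearch -x -D "<my dn>" -y .pass.txt -h server.x.x -b "dc=x,dc=y" "cn=*"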
{ "source": [ "https://serverfault.com/questions/140728", "https://serverfault.com", "https://serverfault.com/users/9172/" ] }
140,832
Or should perfmon be limited to a Dev/QA server with load tests that simulate production activity? I'd like to run perfmon for two days ( like Sql Server master Brent Ozar suggests ) to get an overall feel of my web app's database performance.
SQL Server, and most other products, generate the counters all the time, whether there are listeners or not (ignoring the -x startup option). Counter tracing is completely transparent to the application being monitored. There is a shared memory region to which the monitored application writes and from which monitoring sessions read the raw values at the specified interval. So the only cost associated with monitoring is the cost of the monitoring process and the cost of writing the sampled values to disk. Choosing a decent collection interval (I usually choose 15 sec) and a moderate number of counters (50-100), and writing into a binary file format, usually leaves no impact on the monitored system. But I'd recommend against using Perfmon (as in perfmon.exe). Instead, get yourself familiar with logman.exe, see Description of Logman.exe, Relog.exe, and Typeperf.exe Tools . This way you don't tie the collection session to your login session. Logman, being a command line tool, can be used in scripts and scheduled jobs to start and stop collection sessions.
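For instance, a collection could be created and controlled with logman along these lines; the collector name, output path and counter list are only illustrative assumptions, not part of the original answer:

logman create counter SqlBaseline -si 00:00:15 -f bin -o C:\PerfLogs\SqlBaseline -c "\Processor(_Total)\% Processor Time" "\PhysicalDisk(_Total)\Avg. Disk sec/Read" "\Memory\Available MBytes"
logman start SqlBaseline
logman stop SqlBaseline

Started from a scheduled task, a collection like this keeps running even after you log off.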
{ "source": [ "https://serverfault.com/questions/140832", "https://serverfault.com", "https://serverfault.com/users/20656/" ] }
140,990
I'm using nginx and NginxHttpUpstreamModule for load balancing. My config is very simple: upstream lb { server 127.0.0.1:8081; server 127.0.0.1:8082; } server { listen 89; server_name localhost; location / { proxy_pass http://lb; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } But with this config, when one of the 2 backend servers is down, nginx still routes requests to it, which results in a timeout half of the time :( Is there any solution to make nginx automatically route the request to another server when it detects a downed server? Thank you.
I think that it's because nginx is not detecting that the upstream is down because it's on the same machine. The options that you're looking for are: proxy_next_upstream and proxy_connect_timeout . Try this: location / { proxy_pass http://lb; proxy_redirect off; proxy_next_upstream error timeout invalid_header http_500; proxy_connect_timeout 2; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; }
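You can also tune how quickly nginx marks an upstream server as failed with the max_fails and fail_timeout parameters on each server line; the values below are only illustrative:

upstream lb {
    server 127.0.0.1:8081 max_fails=3 fail_timeout=10s;   # after 3 failures, skip this server for 10s
    server 127.0.0.1:8082 max_fails=3 fail_timeout=10s;
}

With proxy_next_upstream set as above, a request that hits a dead backend is retried on the other server instead of timing out.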
{ "source": [ "https://serverfault.com/questions/140990", "https://serverfault.com", "https://serverfault.com/users/27537/" ] }
141,189
I am attempting to install "uploadprogress" for a PHP application, and have failed on dependencies. Firstly, on phpize, then php-devel, then on autoconf and automake. I have tried yum, and various repositories, with no luck. I think it's to do with the ultra-tight but annoying set up they have on Rackspace Cloud servers. Does anyone know where I can find a repository that I can tell yum to look at that will contain php-devel, autoconf, automake, etc? Thanks ever so much. Release details: Red Hat Enterprise Linux Server release 5.3 (Tikanga) Linux version 2.6.18-128.7.1.el5xen ([email protected]) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-44)) #1 SMP Wed Aug 19 04:17:26 EDT 2009 Linux Serv001 2.6.18-128.7.1.el5xen #1 SMP Wed Aug 19 04:17:26 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
These should be installable via yum and the base RHEL repositories. Have you tried querying if they are already installed but not in your path? Also, have you successfully installed anything via yum? To check what package you need to install: [user@server]# yum whatprovides autoconf automake Loaded plugins: rhnplugin, security autoconf-2.59-12.noarch : A GNU tool for automatically configuring source code. Repo : rhel-x86_64-server-5 Matched from: automake-1.9.6-2.1.noarch : A GNU tool for automatically creating Makefiles. Repo : rhel-x86_64-server-5 Matched from:
{ "source": [ "https://serverfault.com/questions/141189", "https://serverfault.com", "https://serverfault.com/users/39544/" ] }
141,205
I've made some changes to the sshd_config file and therefore need to restart sshd. I'm looking for tips on safely restarting ssh when getting physical access to the server would be a huge PITA.
Restarting sshd while logged in via ssh will not disconnect your ssh connection. If you're worried about your configuration, log in a few times via ssh, and restart. If you can no longer ssh in, with new connections, you now have access to fix the problems. Mentioned below in a comment by @Milan Babuškov: sshd -t will test your configuration for syntax correctness, if you really want to be certain. Another suggestion, by @Ronald Pottol was to set up a cron task to restart the server with a known working configuration. Perhaps overkill, but if you're updating a mission critical server, etc... sometimes you can never be too careful.
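A minimal sketch of that workflow, assuming a Debian/Ubuntu-style init script (the service name and sshd path may differ on your distribution):

/usr/sbin/sshd -t && sudo /etc/init.d/ssh restart   # only restart if the config parses cleanly
# keep this session open, then try a brand-new ssh login before logging out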
{ "source": [ "https://serverfault.com/questions/141205", "https://serverfault.com", "https://serverfault.com/users/41519/" ] }
141,504
I'm having trouble finding an answer for this question: Does mount --bind persist over reboot? On my CentOS it looks like it doesn't, so I've placed appropriate mount --bind calls in rc.local. How can I do mount --bind so as to avoid the rc.local scenario?
Create an entry for the bound mount in your /etc/fstab. An example is below. /path/to/source/dir /path/to/mount/point none bind 0 0
{ "source": [ "https://serverfault.com/questions/141504", "https://serverfault.com", "https://serverfault.com/users/9788/" ] }
141,773
I know you can use -a or --archive to activate archive mode when using rsync. Unfortunately, I have no idea what archive mode is supposed to do, and the man page is not at all explicit about what this is: equals -rlptgoD (no -H,-A,-X) Can you explain what those options ( rlptgoD ) mean and what's the behaviour of rsync when I use them?
It's all of these: -r , --recursive recurse into directories -l , --links copy symlinks as symlinks -p , --perms preserve permissions -t , --times preserve modification times -g , --group preserve group -o , --owner preserve owner (super-user only) -D same as --devices --specials --devices preserve device files (super-user only) --specials preserve special files It excludes: -H , --hard-links preserve hard links -A , --acls preserve ACLs (implies -p ) -X , --xattrs preserve extended attributes It's perfect for backups. My "default" set of switches is -avzP - archive mode, be verbose, use compression, preserve partial files, display progress. Note: Invariably when the descriptions say "preserve", it means make the destination be like the source.
{ "source": [ "https://serverfault.com/questions/141773", "https://serverfault.com", "https://serverfault.com/users/38334/" ] }
141,975
I've installed and configured nginx server on my Mac from MacPorts sudo port install nginx Followed the recommendation from the port installation console and created the launchd startup item for nginx, then started the server. Renamed nginx.conf.example to nginx.conf and renamed mime.types.example to mime.types . It works fine, but I couldn't stop it. I tried sudo nginx -s stop , but this doesn't stop the server, I can still see "Welcome to nginx!" page in my browser on http://localhost/ ; also I still see master and worker processes of nginx with ps -e | grep nginx . What is the best way to start/stop nginx on Mac? BTW, I've added "daemon off;" into nginx.conf - as recommended by various resources.
# nginx -h ... -s signal : send signal to a master process: stop, quit, reopen, reload ...
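A hedged expansion of that hint; the launchd plist path below is the typical MacPorts location and is an assumption, so adjust it to your install:

sudo nginx -s quit    # graceful shutdown of the master and worker processes
sudo nginx -s stop    # fast shutdown
# if nginx was started as a launchd job it may simply be relaunched; unload the job instead:
sudo launchctl unload /Library/LaunchDaemons/org.macports.nginx.plist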
{ "source": [ "https://serverfault.com/questions/141975", "https://serverfault.com", "https://serverfault.com/users/7689/" ] }
141,988
I'm finding that on occasion my Linux box runs out of memory and it starts tearing down random processes to deal with it. I'm curious what administrators do to avoid this. Is the only real solution to up the amount of memory (will upping the swap alone help?), or are there better ways to set up the box with software to avoid this? (i.e., quotas, or some such?).
By default Linux has a somewhat brain-damaged concept of memory management: it lets you allocate more memory than your system has, then randomly terminates a process when it gets in trouble. (The actual semantics of what gets killed are more complex than that - Google "Linux OOM Killer" for lots of details and arguments about whether it's a good or bad thing). To restore some semblance of sanity to your memory management: Disable the OOM Killer (Put vm.oom-kill = 0 in /etc/sysctl.conf) Disable memory overcommit (Put vm.overcommit_memory = 2 in /etc/sysctl.conf) Note that this is a trinary value: 0 = "estimate if we have enough RAM", 1 = "Always say yes", 2 = "say no if we don't have the memory") These settings will make Linux behave in the traditional way (if a process requests more memory than is available malloc() will fail and the process requesting the memory is expected to cope with that failure). Reboot your machine to make it reload /etc/sysctl.conf , or use the proc file system to enable right away, without reboot: echo 2 > /proc/sys/vm/overcommit_memory
{ "source": [ "https://serverfault.com/questions/141988", "https://serverfault.com", "https://serverfault.com/users/30986/" ] }
142,344
Hello I have just set up a DNS server for my domain example.org with 2 name servers ns1.example.org and ns2.example.org. I have attempted to set up a glue record for ns1 and ns2 at my registrar. It seems to work for now when I do a dig example.org but when I do a whois example.org it lists ns1.example.org and ns2.example.org but not their IP address which should be set up as a glue record. So I am wondering how do I check for the existence of a glue record? Do I do it with whois? I have seen .com and .net whois records that have both the domain name as well as the IP address for the name servers, is .org different? What's the proper way to test this? Thanks.
Glue records only ever exist in the parent zone of a domain name. Hence in the case of your example.org domain name, first find the .org name servers: % dig +short org. NS a0.org.afilias-nst.info. a2.org.afilias-nst.info. b0.org.afilias-nst.org. b2.org.afilias-nst.org. c0.org.afilias-nst.info. d0.org.afilias-nst.org. Then, for as many of these as you feel like testing, explicitly ask those name servers for the NS records for your domain: % dig +norec @a0.org.afilias-nst.info. example.org. NS You should get back the correct list of NS records in the "AUTHORITY SECTION". For any name servers that have correctly configured glue you should see those glue A (and/or AAAA ) records appear in the "ADDITIONAL SECTION".
{ "source": [ "https://serverfault.com/questions/142344", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
142,729
Nowadays, agile systems administration and devops are some of the most trending topics regarding systems administration and operations. Both these concepts are mainly focused on bridging the gap between operations/sysadmins and the projects (developers, business, etc). Even if you have never heard of the devops concept, I'm sure that this topic is your concern too. So, what tools and techniques do you use to accomplish devops in your companies? I'm particularly interested in topics like change management, continuous integration and automation, but not only in these topics. Please share your thoughts. I'm looking forward to reading your answers/opinions :)
svn/git - revision control, obviously. trac/redmine/jira - ticketing. cobbler - for base operating system server provisioning. Cobbler's a redhat-family focused product but I'm sure there's something similar for debian/ubuntu. Similarly, most of the "cloud control panel" companies like RightScale will provide this for you. The watchword here is "JEOS" or "just enough operating system". My route is to use the "%packages --nobase" line in my kickstarts and to then build up my specific stack via... puppet/chef - for configuration management and consistency enforcement. There are other options here too; it matters more that you use one than which. One trick I've found particularly important is to store the configs in the same version control system as the developers use. This helps pull together the two teams' workflows and makes them visible to each other. func (or capistrano or cluster-ssh) - for running the deploy script across the cluster. The trick here is to make it something that the senior developers can run themselves to both push new things live and push the inevitable fixes. This is really the core of devops: empowering the developers to both break and fix the environment. A lot of sysadmins are too power hungry to let go like this, or their management still works on the mistaken notion that sysadmins should be policing developers (as if we can even read half of what they're doing). cacti/ganglia/collectd/munin - graphs are soooooo key. It's the business value of metrics with the human value of simple visuals. Correlating the timestamp of code pushes with the timestamp of changes in the graphs is immensely valuable in troubleshooting performance regressions and seeing real facts about performance decisions. There is a key point here in that the graphs need to be easy to see and use by the developers, and their management needs to expect it of them. nagios/zabbix/smokeping/etc - monitoring of server stuff and "base page" type performance metrics. Again, the graphs are key. These are more for the ops side of the team. gomez/keynote/browsermob - external monitoring of full browser performance, taking into account third party services, CDNs, and render time issues. These are more for the dev side of the team. That's a mix of tools and techniques; focus on the techniques. Specifically, the change in mindset of the "sysadmin" side of devops from "admin" to "operations". It's about enabling the developers. Enabling them to do things, enabling them to fix things, enabling them to see real facts/metrics/graphs about what they did. Conversely, the devs need to embrace that they've been enabled and actually do the work of watching performance trends, debugging problems, and thinking about not just features but how to roll them out and how they will affect the health of the entire system/environment.
{ "source": [ "https://serverfault.com/questions/142729", "https://serverfault.com", "https://serverfault.com/users/9684/" ] }
142,730
I have two domains (domain1.com and domain2.com). Both of them use the same Windows hosting server with IIS7. One of the domains is being called the "primary domain" by my hosting provider (GoDaddy) and it always points to the root folder that I was given. For the other domain, I have created a virtual directory in IIS and pointed it there. The folder structure is like this - root/ --Default.aspx --SomeFile.aspx --domain2folder/ ----Default.aspx ----Domain2SomeFile.aspx So, if I type domain1.com, I see the regulakr Default.aspx. But if I type domain2.com, I am shown the contents of domain2folder as if it were a separate web application - I think that is what IIS virtual directory is meant for. Well and good. But the problem is, when I type http://domain1.com/domain2folder , I see the domain2's website! But I don't want that to be shown when I use the path like that from domain1. Only if they use domain2.com, user should be able to see those contents. How can I do that? Hope I am making sense. Thanks.
{ "source": [ "https://serverfault.com/questions/142730", "https://serverfault.com", "https://serverfault.com/users/43375/" ] }
142,959
When I first learned how to make ssh keys, the tutorials I read all stated that a good passphrase should be chosen. But recently, when setting up a daemon process that needs to ssh to another machine, I discovered that the only way (it seems) to have a key that I don't need to auth at every boot is to create a key with an empty passphrase. So my question is, what are the concerns with using a key with no passphrase?
A key with no passphrase is reliant upon nobody else being able to get at that key (who wouldn’t be able to get at the resources it gives access to anyway). So, if the key grants access to a machine next to it, and both machines have the same level of electronic and physical security, then it’s not really any big deal. On the other hand, if your key is on a machine with poor security (perhaps it has many untrusted users, is easily physically accessible, or isn’t kept well up-to-date with its patching regime), then you probably don’t want to keep passphrase-less keys on there. Ultimately, it’s down to confidence in your setup and weighing up the risks/costs of doing it — if you can be pretty confident that it’s not realistically easier for an attacker to gain access to the key than to the resource the key gives you access to, then you’re fine. If you don’t have that confidence, you should probably fix the reasons why :)
{ "source": [ "https://serverfault.com/questions/142959", "https://serverfault.com", "https://serverfault.com/users/29252/" ] }
142,968
I have a website, call it http: //sub.example.com, hosted on, say, 72.xx.xx.x. There is a certificate for https: //sub.example.com. Now I go into the DNS management tool in my hosting provider, and I set up the standard subdomain forwarding wherein https: //sub.example.com forwards to 72.xx.xx.x. Now when I try to browse to https: //sub.example.com, I get a certificate error saying it is for the wrong website. I have also tried forwarding http: //sub.example.com to 72.xx.xx.x, and tried it with domain masking in both cases. I am still getting the certificate error no matter what. Additional wrinkle: if someone types in https: //sub.example.com then the domain forwarding does not seem to work and IE just spins endlessly and finally fails. How can I domain forward the https: //sub.example.com to 72.xx.xx.x?
{ "source": [ "https://serverfault.com/questions/142968", "https://serverfault.com", "https://serverfault.com/users/22149/" ] }
142,997
I found this article on options that can be put before a key in the authorized_keys file. I was wondering though, are there more? Options listed in the article are from="domain" command="commandtorun" no-port-forwarding no-X11-forwarding no-agent-forwarding no-pty Update It appears that the original article is now inaccessible. Because of that I've now changed the link to point to the archive.org version.
All options are detailed in the sshd(8) man page; search for AUTHORIZED_KEYS FILE FORMAT . At the moment, those options are: cert-authority command="command" environment="NAME=value" expiry-time="timespec" from="pattern-list" no-agent-forwarding no-port-forwarding no-pty no-user-rc no-X11-forwarding permitlisten="[host]:port" permitopen="host:port" principals="principals" restrict tunnel="n"
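As an illustration (the key, forced command and network pattern below are made-up placeholders), a line in authorized_keys combining several of these options might look like:

command="/usr/bin/uptime",from="192.168.1.*",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAAB3NzaC1yc2E...rest-of-key... monitor@client

Whatever the client asks for, the server only runs the forced command and refuses port/X11/agent forwarding and a terminal.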
{ "source": [ "https://serverfault.com/questions/142997", "https://serverfault.com", "https://serverfault.com/users/29252/" ] }
143,084
I've got a backup script written in Python which creates the destination directory before copying the source directory to it. I've configured it to use /external-backup as the destination, which is where I mount an external hard drive. I just ran the script without the hard drive being turned on (or being mounted) and found that it was working as normal, albeit making a backup on the internal hard drive, which has nowhere near enough space to back itself up. My question is: how can I check whether the volume is mounted in the right place before writing to it? If I can detect that /external-backup isn't mounted, I can prevent writing to it. The bonus question is why was this allowed, when the OS knows that directory is supposed to live on another device, and what would happen to the data (on the internal hard drive) should I later mount that device (the external hard drive)? Clearly there can't be two copies on different devices at the same path! Thanks in advance!
I would take a look at os.path.ismount() .
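A minimal sketch of how that could guard the backup script; the path comes from the question, and the way it bails out is just an assumption about how you would want to handle the failure:

import os
import sys

dest = "/external-backup"
if not os.path.ismount(dest):
    # refuse to run rather than silently filling the internal disk
    sys.exit(dest + " is not mounted; aborting backup")
# ... proceed with creating the destination directory and copying ...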
{ "source": [ "https://serverfault.com/questions/143084", "https://serverfault.com", "https://serverfault.com/users/13642/" ] }
143,184
I have a list of IP addresses on a network, and most of them support multicast DNS. I'd like to be able to resolve the server name instead of just having the IP address. ping computer.local 64 bytes from 192.168.0.52: icmp_seq=1 ttl=64 time=5.510 ms 64 bytes from 192.168.0.52: icmp_seq=2 ttl=64 time=5.396 ms 64 bytes from 192.168.0.52: icmp_seq=3 ttl=64 time=5.273 ms Works, but I'd like to be able to determine that name from the IP. Also the devices don't necessarily broadcast any services, but definitely do support mDNS broadcast. So looking through services won't work.
Since you already know the IP addresses you can look up the reverse entry for each IP address to get the associated forward address: $ dig -x 10.0.0.200 @224.0.0.251 -p 5353 ; <<>> DiG 9.6.0-APPLE-P2 <<>> -x 10.0.0.200 @224.0.0.251 -p 5353 ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 54300 ;; flags: qr aa; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1 ;; QUESTION SECTION: ;200.0.0.10.in-addr.arpa. IN PTR ;; ANSWER SECTION: 200.0.0.10.in-addr.arpa. 10 IN PTR atj-mbp.local. ;; ADDITIONAL SECTION: atj-mbp._device-info._tcp.local. 10 IN TXT "model=MacBookPro3,1" ;; Query time: 2 msec ;; SERVER: 10.0.0.200#5353(224.0.0.251) ;; WHEN: Sat Jun 26 07:53:44 2010 ;; MSG SIZE rcvd: 126 For a more shell script friendly output, use '+short': $ dig +short -x 10.0.0.200 @224.0.0.251 -p 5353 atj-mbp.local. Depending on your intended use case there may be a more appropriate method of performing the query. Feel free to contact me if you should need any further information.
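Since you have a whole list of addresses, a small loop keeps it shell-script friendly; the IPs below are placeholders:

for ip in 192.168.0.52 192.168.0.53 192.168.0.54; do
    printf '%s -> %s\n' "$ip" "$(dig +short -x "$ip" @224.0.0.251 -p 5353)"
done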
{ "source": [ "https://serverfault.com/questions/143184", "https://serverfault.com", "https://serverfault.com/users/23333/" ] }
143,208
The counter, Process(sqlservr)\% Processor Time , is hovering around 300% on one of my database servers. This counter reflects the percent of total time SQL Server spent running on CPU (user mode + privilege mode). The book, Sql Server 2008 Internals and Troubleshooting , says that anything greater than 80% is a problem. How is it possible for that counter to be over 100%?
There are two counters with the same name: Process\% Processor Time : The sum of processor time on each processor Processor(_Total)\% Processor Time : The total for all processors Your question indicates you're using the first counter, which means that its maximum value is 100% * (no of CPUs). So if you have 4 CPUs, then the total maximum is 400%, and 80% is actually (400 * 0.8 =) 320% (and for 8 CPUs it's 640%, etc etc)
{ "source": [ "https://serverfault.com/questions/143208", "https://serverfault.com", "https://serverfault.com/users/20656/" ] }
143,238
This kind of question has maybe been asked here before, but I couldn't find any that really match my question. I've heard that nginx performance is quite impressive, but Apache has more docs and community (read: experts) to get help from. Now what I want to know is how both web servers compare in terms of performance, easiness of config, level of customization, etc. AS REVERSE PROXY server in a VPS environment?? I'm still weighing between the two for a ruby web app (not ROR) served with thin (one of the ruby web servers). Specific answers will be much appreciated. General answers not touching the ruby part are okay. I'm still a noob in web server administration.
I wanted to put this in a comment since I agree with the most important point of webdestroya's answer, but it got a bit too long. You're in a VPS environment, which means you're most likely going to be low on RAM. For this reason alone you'll want Nginx, as its memory footprint is smaller than Apache's. Also, I do not agree with some of the arguments mentioned. Easiness of Config: Nginx is not more difficult than Apache. It's different. If you're used to Apache then change will always be more difficult; this does not mean that the configuration style itself is more difficult. I migrated completely from Apache to Nginx over a year ago and today I would struggle to configure an Apache server, whereas I find Nginx extremely easy to configure. For Ruby: Nginx has Passenger; however, I usually see it described as the inferior method to connect to Ruby. I am not a Ruby programmer so I cannot verify this, but I often see Unicorn and Thin mentioned as better alternatives. In Conclusion: Nginx was made to be a reverse proxy. Initially all it did was serve static files and reverse proxy to a backend server via HTTP/1.0. Since then fastcgi, load balancing and various other features have been added, but its initial design purpose was to serve static files and reverse proxy. And it does this really well. Apache, on the contrary, is a general purpose web server. I have no doubt that it can reverse proxy perfectly fine, but it was not designed to have a minimal memory footprint and as a result it requires more resources than Nginx does, which means my initial VPS environment argument comes into play.
{ "source": [ "https://serverfault.com/questions/143238", "https://serverfault.com", "https://serverfault.com/users/22695/" ] }
143,296
Installation media: ubuntu-10.04-desktop-i386.iso I tried a lot of different boot parameters, but either the installer ignored the preseed configuration, or it boot itself directly as LiveCD. An example of the boot parameters I've tried: auto url= http://mydomain.com/path/preseed.cfg boot=casper only-ubiquity initrd=/casper/initrd.lz quiet splash -- If I remove only-ubiquity , it boots as a LiveCD. If I remove boot=casper , it won't boot. If I add vga=normal locale=en_US console-setup/layoutcode=us console-setup/ask_detect=false interface=auto , it still can't do automatic install. If I remove auto , it's the same. What is the correct boot parameters for launching such an installation? From the apache log of the server hosting preseed.cfg , I see that the installer has no problems fetching the preseed file. My preseed file is almost identical to the one at https://help.ubuntu.com/10.04/installation-guide/example-preseed.txt . Moreover, I have run debconf-set-selections -c preseed.cfg to ensure that the preseed file is correct.
Ok... I've found the answer with experiments. Use the server or alternate ISO instead of the desktop ISO! Preseed does not work with the desktop ISO. Use the linux-generic kernel and tasksel ubuntu-desktop to get a desktop installation. The auto boot parameter does not work (at least for i386). Use auto=true priority=critical instead. In contrast to the official documentation , which states that "if the URL is missing a protocol, http is assumed" , http:// is required or the installer will not be able to fetch the preseed file. If you have multiple network cards, add interface=auto or the installer will ask you which interface to use. Therefore, the minimum boot parameters needed are auto=true priority=critical url=http://mydomain.com/path/preseed initrd=/install/initrd.gz If I have time, I'll definitely post a documentation-improvement request to launchpad.
{ "source": [ "https://serverfault.com/questions/143296", "https://serverfault.com", "https://serverfault.com/users/32430/" ] }
143,367
How do I start a service with certain parameters? In another question I have found net start servicename blah but if I try this, net throws a syntax error at me. What am I doing wrong? Edit: To clarify, net start servicename works just fine, but I need to pass parameters to the service. I can do this manually in services.msc by filling in a start parameter before starting the service. But how can I do this from a script? Another edit: Sorry, but my question was misleading. In my tests, I had many more parameters and it's not the /blah that net start complains about. In fact, anything starting with a slash is fine. So net start servicename /blah works, net start servicename blah doesn't work. Since I need net start servicename /foo bar , it's the bar that is the problem.
sc start fooservice arg1 arg2 ...
{ "source": [ "https://serverfault.com/questions/143367", "https://serverfault.com", "https://serverfault.com/users/43563/" ] }
143,445
This question has appeared on a pre-interview quiz and it's making me crazy. Can anyone answer this and put me at ease? The quiz has no reference to a particular shell but the job description is for a unix sa. again the question is simply... What does 'set -e' do, and why might it be considered dangerous?
set -e causes the shell to exit if any subcommand or pipeline returns a non-zero status. The answer the interviewer was probably looking for is: It would be dangerous to use "set -e" when creating init.d scripts: From http://www.debian.org/doc/debian-policy/ch-opersys.html 9.3.2 -- Be careful of using set -e in init.d scripts. Writing correct init.d scripts requires accepting various error exit statuses when daemons are already running or already stopped without aborting the init.d script, and common init.d function libraries are not safe to call with set -e in effect. For init.d scripts, it's often easier to not use set -e and instead check the result of each command separately. This is a valid question from an interviewer standpoint because it gauges a candidates working knowledge of server-level scripting and automation
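A tiny illustration of the init.d pitfall; the service name is hypothetical:

#!/bin/bash
set -e
# if myservice is already stopped, killall exits non-zero and set -e aborts
# the whole script right here, so the start section below is never reached
killall myservice
/usr/local/bin/myservice &
echo "restarted"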
{ "source": [ "https://serverfault.com/questions/143445", "https://serverfault.com", "https://serverfault.com/users/9440/" ] }
143,573
How can I delete my password for MySQL? I don't want to have a password to connect to the database. My server is running Ubuntu.
Personally, I think instead it's better to set a password and save it in /root/.my.cnf: First: mysqladmin -u root password 'asdfghjkl' Then edit root's .my.cnf file: [client] password = asdfghjkl Make sure to chmod 0600 .my.cnf . Now you have a password but you're no longer prompted for it. My default MySQL server install is a totally random unique password for each MySQL server, saved in the .my.cnf file like this.
{ "source": [ "https://serverfault.com/questions/143573", "https://serverfault.com", "https://serverfault.com/users/43353/" ] }
143,786
On Ubuntu it is possible to have multiple JVMs at the same time. The default one is selected with update-alternatives . But this does not set the JAVA_HOME environment variable, due to a debian policy . I am writing a launcher script (bash), which starts a java application. This java application needs the JAVA_HOME environment variable. So how to get the path of the JVM which is currently selected by update-alternatives ?
For the JRE, something like this should do the trick: JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")
{ "source": [ "https://serverfault.com/questions/143786", "https://serverfault.com", "https://serverfault.com/users/20773/" ] }
143,954
Hello all. I'm looking for a quick and dirty way to generate some diagrams of some directories that have almost, but not exactly, the same hierarchy, so I can show them around at a meeting and we can decide which flavor we like best. I'm not interested in the "leaf" nodes, just the directories. The catch: I don't want to mess with X. This is a server system I deal with entirely through SSH. So I'm looking for something that will do ASCII layout, maybe with simple pipes-and-hyphens for lines or something. Does anyone know of such a utility? I'm sure I could write something myself, but it's such a fiddly little sort of project, with handling spacing and layout and such; I'd really like to discover that someone's done it for me. Alas, Google doesn't seem to know of such a thing...or if it does, it's hidden beneath heaps of excellent visual explications of the standard general Unix file hierarchy. Thanks!
I would use tree . $ tree -d /usr|head -n 12 /usr |-- X11R6 | `-- lib | `-- X11 | `-- wily |-- bin | `-- X11 -> . |-- games |-- i586-mingw32msvc | |-- bin | |-- include | | |-- GL
{ "source": [ "https://serverfault.com/questions/143954", "https://serverfault.com", "https://serverfault.com/users/2026/" ] }
143,968
My system configuration script does an apt-get install -y postfix . Unfortunately the script is halted when the postfix installer displays a configuration screen. Is there a method to force postfix to use the defaults during installation so that an automated script can continue to the end? Does the postfix installer maybe check for existing configuration in /etc/postfix , and if it exists, not bother the user with the configuration screen?
You can use pre-seeding for this, using the debconf-set-selections command to pre-answer the questions asked by debconf before installing the package. For example: debconf-set-selections <<< "postfix postfix/mailname string your.hostname.com" debconf-set-selections <<< "postfix postfix/main_mailer_type string 'Internet Site'" apt-get install --assume-yes postfix
{ "source": [ "https://serverfault.com/questions/143968", "https://serverfault.com", "https://serverfault.com/users/26450/" ] }
144,095
My MySQL password = ''. I try to log in to phpMyAdmin (on Ubuntu 10.04 LAMP) and get this error: Login without a password is forbidden by configuration (see AllowNoPassword) What should I do to enter phpMyAdmin without setting a password? Thanks
You can turn on the option AllowNoPassword in the file /etc/phpmyadmin/config.inc.php. Edit config.inc.php, then search for and uncomment this line: // $cfg['Servers'][$i]['AllowNoPassword'] = TRUE; Then you can access phpMyAdmin without a password.
{ "source": [ "https://serverfault.com/questions/144095", "https://serverfault.com", "https://serverfault.com/users/43353/" ] }
144,325
To create a test email server, I have a similar requirement as: How to redirect all outgoing email from postfix to a single address for testing But I need to send all the emails to an external account, not a local one. I would like to do something like: xyz:[email protected] but xyz is neither local nor smtp.
Create /etc/postfix/virtual-regexp with the following content: /.+@.+/ [email protected] Edit /etc/postfix/main.cf and add the file to virtual_alias_maps . The end result might look like this: virtual_alias_maps = regexp:/etc/postfix/virtual-regexp If you had existing virtual_alias_maps , separate the values with commas (eg. virtual_alias_maps = hash:/etc/postfix/virtual, regexp:/etc/postfix/virtual-regexp ) . Build the mapfile by typing: postmap /etc/postfix/virtual-regexp Then restart postfix : sudo service postfix restart Voila!
{ "source": [ "https://serverfault.com/questions/144325", "https://serverfault.com", "https://serverfault.com/users/43856/" ] }
144,411
This works: du -cshm . But this fails: du -cshg . How can I see it in units of GB?
GNU du has the --block-size option: du -csh --block-size=1G . As sajb noted, omitting the block size argument will automatically scale the output (and display the unit). Using any block size argument displays the number but omits the unit.
{ "source": [ "https://serverfault.com/questions/144411", "https://serverfault.com", "https://serverfault.com/users/38199/" ] }
144,460
I have created a virtual machine with virt-manager that runs on kvm/qemu. The machine works well when started through virt-manager. However, I would like to be able to start and stop the VM through a script in init.d, so that it comes up and down along with the host. I need to have virt-manager show that the machine is running, and to be able to connect to its console through there. When I use the command line that is produced by running ps -eaf | grep kvm after starting the vm through virt-manager, I get some console messages about redirected character devices, but the machine does start and runs properly. However, I do not get any indication from virt-manager that it has started. How can I modify the command line to get virt-manager to pick up the running VM? Is there anything else about the command line that should change when starting outside of virt-manager? Command line is (slightly reformatted for readability): /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 512 -smp 1 -name BORON \ -uuid fa7e5fbd-7d8e-43c4-ebd9-1504a4383eb1 \ -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/BORON.monitor,server,nowait \ -monitor chardev:monitor -localtime -boot c \ -drive file=/dev/FS1/BORON,if=ide,index=0,boot=on,format=raw \ -net nic,macaddr=52:54:00:20:0b:fd,vlan=0,name=nic.0 \ -net tap,fd=41,vlan=0,name=tap.0 -chardev pty,id=serial0 -serial chardev:serial0 \ -parallel none -usb -usbdevice tablet -vnc 127.0.0.1:1 -k en-us -vga cirrus
As far as automatic starting/stopping goes, if you're using virsh you can do that like this (as a privileged user): virsh autostart Domain I'm not sure why virt-manager isn't giving you any output. It does have a connection to the host machine, right? It should show a list of domains if it's connected.
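For a script, the same lifecycle can be driven entirely with virsh; the domain name BORON is taken from the question, and these are the stock virsh subcommands, shown only as a sketch:

virsh autostart BORON     # start the domain whenever libvirtd starts
virsh start BORON         # start it right now
virsh shutdown BORON      # ask the guest to shut down cleanly
virsh list --all          # check its state

Because virsh and virt-manager both talk to libvirtd, a domain started this way shows up in virt-manager, unlike one launched by invoking kvm by hand.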
{ "source": [ "https://serverfault.com/questions/144460", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
144,939
I'm a bash newbie but I would like to create a script in which I'd like to allow the user to select multiple options from a list of options. Essentially what I would like is something similar to the example below: #!/bin/bash OPTIONS="Hello Quit" select opt in $OPTIONS; do if [ "$opt" = "Quit" ]; then echo done exit elif [ "$opt" = "Hello" ]; then echo Hello World else clear echo bad option fi done (Sourced from http://www.faqs.org/docs/Linux-HOWTO/Bash-Prog-Intro-HOWTO.html#ss9.1 ) However my script would have more options, and I'd like to allow multiples to be selected. So something like this: 1) Option 1 2) Option 2 3) Option 3 4) Option 4 5) Done Having feedback on the ones they have selected would also be great, eg plus signs next to ones they have already selected. Eg if you select "1" I'd like the page to clear and reprint: 1) Option 1 + 2) Option 2 3) Option 3 4) Option 4 5) Done Then if you select "3": 1) Option 1 + 2) Option 2 3) Option 3 + 4) Option 4 5) Done Also, if they again select (1) I'd like it to "deselect" the option: 1) Option 1 2) Option 2 3) Option 3 + 4) Option 4 5) Done And finally when Done is pressed I'd like a list of the ones that were selected to be displayed before the program exits, eg if the current state is: 1) Option 1 2) Option 2 + 3) Option 3 + 4) Option 4 + 5) Done Pressing 5 should print: Option 2, Option 3, Option 4 ...and the script to terminate. So my question - is this possible in bash, and if so is anyone able to provide a code sample? Any advice would be much appreciated.
I think you should take a look at dialog or whiptail . Edit: Here's an example script using the options from your question: #!/bin/bash cmd=(dialog --separate-output --checklist "Select options:" 22 76 16) options=(1 "Option 1" off # any option can be set to default to "on" 2 "Option 2" off 3 "Option 3" off 4 "Option 4" off) choices=$("${cmd[@]}" "${options[@]}" 2>&1 >/dev/tty) clear for choice in $choices do case $choice in 1) echo "First Option" ;; 2) echo "Second Option" ;; 3) echo "Third Option" ;; 4) echo "Fourth Option" ;; esac done
{ "source": [ "https://serverfault.com/questions/144939", "https://serverfault.com", "https://serverfault.com/users/38939/" ] }
145,383
I have nginx configured to be my externally visible webserver which talks to a backend over HTTP. The scenario I want to achieve is: Client makes HTTP request to nginx which is redirect to the same URL but over HTTPS nginx proxies request over HTTP to the backend nginx receives response from backend over HTTP. nginx passes this back to the client over HTTPS My current config (where backend is configured correctly) is: server { listen 80; server_name localhost; location ~ .* { proxy_pass http://backend; proxy_redirect http://backend https://$host; proxy_set_header Host $host; } } My problem is the response to the client (step 4) is sent over HTTP not HTTPS. Any ideas?
I'm using the following config in production server { listen xxx.xxx.xxx.xxx:80; server_name www.example.net; rewrite ^(.*) https://$server_name$1 permanent; } server { listen xxx.xxx.xxx.xxx:443; server_name www.example.net; root /vhosts/www.example.net; ssl on; ssl_certificate /etc/pki/nginx/www.example.net.crt; ssl_certificate_key /etc/pki/nginx/www.example.net.key; ssl_prefer_server_ciphers on; ssl_session_timeout 1d; ssl_session_cache shared:SSL:50m; ssl_session_tickets off; # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits ssl_dhparam /etc/pki/nginx/dh2048.pem; # intermediate configuration. tweak to your needs. ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA'; location / { proxy_pass http://127.0.0.1:8080; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto https; } }
{ "source": [ "https://serverfault.com/questions/145383", "https://serverfault.com", "https://serverfault.com/users/44153/" ] }
145,777
Other than for historical reasons, is there a reason to have "www" in a URL? Should I create a permanent redirect from www.xyz.com to xyz.com , or from xyz.com to www.xyz.com ? Which one would you suggest and why?
One of the reasons why you need www or some other subdomain has to do with a quirk of DNS and the CNAME record. Suppose for the purposes of this example that you are running a big site and contract out hosting to a CDN (Content Distribution Network) such as Akamai. What you typically do is set up the DNS record for your site as a CNAME to some akamai.com address. This gives the CDN the opportunity to supply an IP address that is close to the browser (in geographic or network terms). If you used an A record on your site, then you would not be able to offer this flexibility. The quirk of the DNS is that if you have a CNAME record for a host name, you cannot have any other records for that same host. However, your top level domain example.com usually must have an NS and SOA record. Therefore, you cannot also add a CNAME record for example.com . The use of www.example.com gives you the opportunity to use a CNAME for www that points to your CDN, while leaving the required NS and SOA records on example.com . The example.com record will usually also have an A record to point to a host that will redirect to www.example.com using an HTTP redirect.
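A sketch of what that looks like in a zone file; the CDN hostname, IP and SOA timers are invented for illustration:

example.com.       IN  SOA    ns1.example.com. hostmaster.example.com. ( 2010062601 7200 900 1209600 300 )
example.com.       IN  NS     ns1.example.com.
example.com.       IN  A      192.0.2.10                ; small host that redirects to www
www.example.com.   IN  CNAME  cdn-edge.example-cdn.net. ; the CDN picks the best IP

Swapping the apex A record for a CNAME to the CDN would not be allowed, because the SOA and NS records must also live at example.com.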
{ "source": [ "https://serverfault.com/questions/145777", "https://serverfault.com", "https://serverfault.com/users/26763/" ] }
146,093
I am used to using vi, not vim. What I find annoying in vim is that when you are scrolling with CTRL-F and reach EOF, vim scrolls down to the very last line and puts this line at the top of your screen, so you can't see the lines above. You must scroll up a little bit so you can see the context. All this happens with CTRL-F only, not with j or the down cursor key. In vi, you scroll down (with CTRL-F), but when you reach EOF it still shows you, say, 15 lines and then the typical ~. How can I configure vim to behave like vi in this case? I am using Putty for remote access.
You want to set option scrolloff : 'scrolloff' 'so' number (default 0) number of screen lines to keep above and below the cursor. This will make some context visible around where you are working. Use e.g. :set scrolloff=10 to always keep at least 10 lines visible.
{ "source": [ "https://serverfault.com/questions/146093", "https://serverfault.com", "https://serverfault.com/users/44358/" ] }
146,525
I am restoring a 30GB database from a mysqldump file to an empty database on a new server. When running the SQL from the dump file, the restore starts very quickly and then starts to get slower and slower. Individual inserts are now taking 15+ seconds. The tables are mostly MyISAM with one small InnoDB. The server has no other active connections. SHOW PROCESSLIST; only shows the insert from the restore (and the show processlist itself). Does anyone have any ideas what could be causing the dramatic slowdown? Are there any MySQL variables that I can change to speed the restore while it is progressing?
One thing that may be slowing the process is the key_buffer_size , which is the size of the buffer used for index blocks. Tune this to at least 30% of your RAM or the re-indexing process will probably be too slow. For reference, if you were using InnoDB and foreign keys, you could also disable foreign key checks and re-enable it at the end (using SET FOREIGN_KEY_CHECKS=0 and SET FOREIGN_KEY_CHECKS=1 ).
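For example, from a mysql client session you could bump the buffer for the duration of the restore and wrap the import like this; the size and dump path are placeholders, and SET GLOBAL needs the SUPER privilege:

SET GLOBAL key_buffer_size = 256 * 1024 * 1024;  -- example size only
SET FOREIGN_KEY_CHECKS = 0;                      -- only matters for InnoDB tables
SOURCE /path/to/dump.sql;
SET FOREIGN_KEY_CHECKS = 1;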
{ "source": [ "https://serverfault.com/questions/146525", "https://serverfault.com", "https://serverfault.com/users/8639/" ] }
146,569
How can I, on my Ubuntu server, in iptables only allow one IP address on a specific port? Thanks
One liner: iptables -I INPUT \! --src 1.2.3.4 -m tcp -p tcp --dport 777 -j DROP # if it's not 1.2.3.4, drop it A more elegant solution: iptables -N xxx # create a new chain iptables -A xxx --src 1.2.3.4 -j ACCEPT # allow 1.2.3.4 iptables -A xxx --src 1.2.3.5 -j ACCEPT # allow 1.2.3.5 iptables -A xxx --src 1.2.3.6 -j ACCEPT # allow 1.2.3.6 iptables -A xxx -j DROP # drop everyone else iptables -I INPUT -m tcp -p tcp --dport 777 -j xxx # use chain xxx for packets coming to TCP port 777
{ "source": [ "https://serverfault.com/questions/146569", "https://serverfault.com", "https://serverfault.com/users/30251/" ] }
146,621
This is kind of a weird request. For some reason whoever owns safeandbuy.com has pointed their domain at my IP address. The reason it's a problem is that I'm having all kinds of crawlers that are trying to crawl my site with that domain name. Is there anything I can do about this?
You could set up a virtualhost on your webserver for safeandbuy.com to grab all that traffic, and just have an index page that says "I am not safeandbuy.com". That would at least pull the hits out of your actual domain. The whois information for safeandbuy.com has a contact phone number, address and email. You could try to contact them and let them know they are pointing to the wrong IP.
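If the box happens to be running nginx, a catch-all for that name could be as small as the block below (for Apache the equivalent is a <VirtualHost> with ServerName safeandbuy.com); return 444 is an nginx-specific way to drop the connection without a response:

server {
    listen 80;
    server_name safeandbuy.com www.safeandbuy.com;
    return 444;    # or serve a one-page "I am not safeandbuy.com" site instead
}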
{ "source": [ "https://serverfault.com/questions/146621", "https://serverfault.com", "https://serverfault.com/users/8208/" ] }
146,745
How can I tell (in ~/.bashrc ) if I'm running in interactive mode, or, say, executing a command over ssh. I want to avoid printing of ANSI escape sequences in .bashrc if it's the latter.
According to man bash : PS1 is set and $- includes i if bash is interactive, allowing a shell script or a startup file to test this state. So you can use: if [[ $- == *i* ]] then do_interactive_stuff fi Also: When an interactive shell that is not a login shell is started, bash reads and executes commands from /etc/bash.bashrc and ~/.bashrc, if these files exist. So ~/.bashrc is only sourced for interactive shells. Sometimes, people source it from ~/.bash_profile or ~/.profile which is incorrect since it interferes with the expected behavior. If you want to simplify maintenance of code that is common, you should use a separate file to contain the common code and source it independently from both rc files. It's best if there's no output to stdout from login rc files such as ~/.bash_profile or ~/.profile since it can interfere with the proper operation of rsync for example. In any case, it's still a good idea to test for interactivity since incorrect configuration may exist.
{ "source": [ "https://serverfault.com/questions/146745", "https://serverfault.com", "https://serverfault.com/users/5185/" ] }
146,913
I deleted the /var/log/nginx/error.log file, and then created a new one using: sudo nano error.log Doing ls -la shows that the error.log and access.log have the same permissions. When I try to start nginx I get the error: alert: could not open error log file: open() "/var/log/nginx/error.log" failed, permission denied. Update When trying to start nginx, I am also seeing: emerg: /var/run/nginx.pid failed 13: permission denied.
This doesn't solve your problem, but in the future, if you do cat /dev/null > /file/you/want/to/wipe-out you will replace the contents of the file with nothing, and keep all permissions intact. Not nginx-specific, but additionally, make sure you are running the application as the user it is supposed to run as. If you ever ran it as root, all the permissions are going to be owned by root, so other users won't be able to run it.
{ "source": [ "https://serverfault.com/questions/146913", "https://serverfault.com", "https://serverfault.com/users/9900/" ] }
147,169
I have an Ubuntu 9.10 Server running as guest from VMware Fusion. How can I check if it's running VMware tools from the command line?
This works in SLES: ps ax|grep vmware 8885 ? Ss 8:05 /usr/lib/vmware-tools/sbin64/vmware-guestd --background /var/run/vmware-guestd.pid /etc/init.d/vmware-tools status vmware-guestd is running You can also check if the vm kernel modules are running lsmod ... vmw_pvscsi 22359 0 vmxnet3 44475 0 vmwgfx 114733 3 vm...
{ "source": [ "https://serverfault.com/questions/147169", "https://serverfault.com", "https://serverfault.com/users/16033/" ] }
147,181
I have a question about domain joining workstations. We just upgraded Exchange and AD to a new server. I want to know how to join these old workstations to the new domain controller and preserve all the users' documents etc. In other words, I don't want to create another user account on the workstations. Thanks for all your help.
{ "source": [ "https://serverfault.com/questions/147181", "https://serverfault.com", "https://serverfault.com/users/42527/" ] }
147,515
I came across a bug in my DOS script that uses date and time data for file naming. The problem was I ended up with a gap because the time variable didn't automatically provide leading zero for hour < 10. So running> echo %time% gives back: ' 9:29:17.88'. Does anyone know of a way to conditionally pad leading zeros to fix this? More info: My filename set command is: set logfile=C:\Temp\robolog_%date:~-4%%date:~4,2%%date:~7,2%_%time:~0,2%%time:~3,2%%time:~6,2%.log which ends up being: C:\Temp\robolog_20100602_ 93208.log (for 9:23 in the morning). This question is related to this one . Thanks
A very simple way is to just replace the leading space with zero: echo %TIME: =0% outputs: 09:18:53,45
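Applied to the filename command from the question, that might look like this; only the time part changes, and the date slicing is copied verbatim from the question, so it assumes the same regional date format:

set fixedtime=%TIME: =0%
set logfile=C:\Temp\robolog_%date:~-4%%date:~4,2%%date:~7,2%_%fixedtime:~0,2%%fixedtime:~3,2%%fixedtime:~6,2%.log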
{ "source": [ "https://serverfault.com/questions/147515", "https://serverfault.com", "https://serverfault.com/users/15554/" ] }
147,638
Is there any way to export a Microsoft SQL Server database to a SQL script? I'm looking for something which behaves similarly to mysqldump, taking a database name, and producing a single script which will recreate all the tables, stored procedures and reinsert all the data etc. I've seen http://vyaskn.tripod.com/code.htm#inserts , but I ideally want something to recreate everything (not just the data) which works in a single step to produce the final script.
In SQL Server Management Studio right-click your database and select Tasks / Generate Scripts. Follow the wizard and you'll get a script that recreates the data structure in the correct order according to foreign keys. On the wizard step entitled "Set Scripting Options", click on the button (on the right of the window) labelled "Advanced" and modify the option "Types of data to script" and choose "Schema and data". TIP: In the final step select "Script to a New Query Window", it'll work much faster that way.
{ "source": [ "https://serverfault.com/questions/147638", "https://serverfault.com", "https://serverfault.com/users/23736/" ] }
147,647
When I connect to my machine and I am on the local network Remote desktop does not require my password. However, when I try to remote into my machine remotely over the Internet, it makes me enter my password each time. Is there a way to have it remember my password, and not prompt me each time?
In SQL Server Management Studio right-click your database and select Tasks / Generate Scripts. Follow the wizard and you'll get a script that recreates the data structure in the correct order according to foreign keys. On the wizard step entitled "Set Scripting Options", click on the button (on the right of the window) labelled "Advanced" and modify the option "Types of data to script" and choose "Schema and data". TIP: In the final step select "Script to a New Query Window", it'll work much faster that way.
{ "source": [ "https://serverfault.com/questions/147647", "https://serverfault.com", "https://serverfault.com/users/26295/" ] }
147,676
I have a pretty annoying problem here. I have been testing an application and have created some test e-mails to bogus e-mail addresses (not to mention that my server isn't really set up to send e-mail anyway). Of course, sendmail is not able to send these messages and they have been getting stuck in the sendmail queue. I want to manually delete the messages that have been building up in the queue instead of waiting the 5 days that sendmail usually takes to stop retrying. I am using Ubuntu 10.04 and /var/spool/mqueue/ is the directory in which every how-to I have read says the e-mails that are queued up are kept. When I delete the files in this directory, sendmail stops trying to process the e-mails until what appears to be a cron script runs and re-populates this directory with the messages I don't want sent. Here are some lines from my syslog : Jun 2 17:35:19 sajo-laptop sm-mta[9367]: o530SlbK009365: to=, ctladdr= (33/33), delay=00:06:27, xdelay=00:06:22, mailer=esmtp, pri=120418, relay=e.mx.mail.yahoo.com. [67.195.168.230], dsn=4.0.0, stat=Deferred: Connection timed out with e.mx.mail.yahoo.com. Jun 2 17:35:48 sajo-laptop sm-mta[9149]: o4VHn3cw003597: to=, ctladdr= (33/33), delay=2+06:46:45, xdelay=00:34:12, mailer=esmtp, pri=3540649, relay=mx2.hotmail.com. [65.54.188.94], dsn=4.0.0, stat=Deferred: Connection timed out with mx2.hotmail.com. Jun 2 17:39:02 sajo-laptop CRON[9510]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type f -cmin +$(/usr/lib/php5/maxlifetime) -print0 | xargs -n 200 -r -0 rm) Jun 2 17:39:43 sajo-laptop sm-mta[9372]: o52LHK4s007585: to=, ctladdr= (33/33), delay=03:22:18, xdelay=00:06:28, mailer=esmtp, pri=1470404, relay=c.mx.mail.yahoo.com. [206.190.54.127], dsn=4.0.0, stat=Deferred: Connection timed out with c.mx.mail.yahoo.com. Jun 2 17:39:50 sajo-laptop sm-mta[9149]: o51I8ieV004377: to=, ctladdr= (33/33), delay=1+06:31:06, xdelay=00:03:57, mailer=esmtp, pri=6601668, relay=alt4.gmail-smtp-in.l.google.com. [74.125.79.114], dsn=4.0.0, stat=Deferred: Connection timed out with alt4.gmail-smtp-in.l.google.com. Jun 2 17:40:01 sajo-laptop CRON[9523]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Does anyone know how I can get rid of these messages permanently? As a side note, I'd also like to know if there is a way to set up sendmail to "fake" sending e-mail. Is there?
The messages that have been sent or are trying to be sent are stored in /var/spool/mqueue . Messages that Sendmail has not tried to queue yet can be found in /var/spool/mqueue-client . So try this (I assume you want to get rid of all messages in the queue): Stop sendmail rm /var/spool/mqueue/* If you want to remove messages in waiting, rm /var/spool/mqueue-client/* . Start sendmail This will clear our your queue folder(s) until the system receives another message. You can double check by running mailq (both queue folders), or sendmail -bp (only the queue folder). NOTE: With most Linux distributions you can start/stop services with with service sendmail <start|stop|restart> or /etc/init.d/sendmail <start|stop|restart> . Both options have many other status flags which can be observed by typing in the command and service without the status flags.
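Put together, the whole sequence looks roughly like this (a sketch; the init script name can differ by distro, and rm -f on the spool is irreversible, so only run it if every queued message really should go):
sudo service sendmail stop
sudo rm -f /var/spool/mqueue/*
sudo rm -f /var/spool/mqueue-client/*
sudo service sendmail start
mailq    # should now report an empty queue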
{ "source": [ "https://serverfault.com/questions/147676", "https://serverfault.com", "https://serverfault.com/users/528/" ] }
147,787
I'm using ln -f -s /var/www/html/releases/build1390 app-current to update symbolic link "app-current" with a new destination. However, this doesn't work, the link "app-current" keeps it original destination, however, I don't get any errors... I'd rather not remove the link and recreate it, just update the target of an existing link. Is that possible?
That works for me, what is the output of strace ln -f -s /var/www/html/releases/build1390 app-current ? Oh, since it is a directory you need to add -n for no dereference and this should solve the issue. -f is really more of a convenience since adding the -f just causes it to unlink anyways. Although I guess it would probably happen a few hundred ms faster on a normally loaded system. For example, if arf already points to /home: strace With -n : strace ln -n -f -s / arf ... symlink("/", "arf") = -1 EEXIST (File exists) unlink("arf") = 0 symlink("/", "arf") = 0 strace Without -n : strace ln -f -s / arf ... write(2, "ln: "..., 4ln: ) = 4 write(2, "`arf/': cannot overwrite director"..., 34`arf/': cannot overwrite directory) = 34 write(2, "\n"..., 1) = 1 So without the -n arf gets dereferenced so ln treats it as arf as if it were actually / . In your particular example, if there is no error, I think you have probably created a new symbolic link inside of /var/www/html/releases/build1390 app-current and will want to clean that up.
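For the release-switching case in the question, the usual pattern is simply the following (readlink is only there to verify the result):
ln -sfn /var/www/html/releases/build1390 app-current
readlink app-current    # should print the new target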
{ "source": [ "https://serverfault.com/questions/147787", "https://serverfault.com", "https://serverfault.com/users/29990/" ] }
147,921
I'm trying to get email reports from our AWS EC2 instances. We're using Exchange Online (part of Microsoft Online Services). I've setup a user account specifically for SMTP relaying , and I've setup Postfix to meet all the requirements to relay messages through this server. However, Exchange Online's SMTP server will reject messages unless the From address exactly matches the authentication address (the error message is 550 5.7.1 Client does not have permissions to send as this sender ). With careful configuration, I can setup my services to send as this user. But I'm not a huge fan of being careful - I'd rather have postfix force the issue. Is there a way to do this?
This is how to really do it in postfix. This config changes sender addresses from both local originated, and relayed SMTP mail traffic: /etc/postfix/main.cf: sender_canonical_classes = envelope_sender, header_sender sender_canonical_maps = regexp:/etc/postfix/sender_canonical_maps smtp_header_checks = regexp:/etc/postfix/header_check Rewrite envelope address from email originating from the server itself /etc/postfix/sender_canonical_maps: /.+/ [email protected] Rewrite from address in SMTP relayed e-mail /etc/postfix/header_check: /From:.*/ REPLACE From: [email protected] Thats very useful if you're for instance using a local relay smtp server which is used by all your multifunctionals and several applications. If you use Office 365 SMTP server, any mail with a different sender address than the email from the authenticated user itself will simply be denied. The above config prevents this.
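After editing those files, a reload plus a test message is usually enough to confirm the rewrite (regexp maps are plain text, so no postmap step is needed); a sketch, assuming a Debian-style log path and that a local mail client such as mailx is installed:
sudo postfix reload
echo "rewrite test" | mail -s "rewrite test" [email protected]   # hypothetical recipient
sudo tail -f /var/log/mail.log   # the outgoing entry should now show [email protected] as the sender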
{ "source": [ "https://serverfault.com/questions/147921", "https://serverfault.com", "https://serverfault.com/users/3455/" ] }
147,935
I know that doing a dd if=/dev/hda of=/dev/hdb does a deep hard drive copy . I've heard that people have been able to speed up the process by increasing the number of bytes that are read and written at a time (default: 512 ) with the bs option. My question is: What determines the ideal byte size for copying from a hard drive? and Why does that determine the ideal byte size?
As Chris S wrote in this answer the optimum block size is hardware dependent. In my experience it is always greater than the default 512 bytes. If your working with raw devices then the overlying file system geometry will have no effect. I've used the script below to help 'optimize' the block size of dd. #!/bin/bash # #create a file to work with # echo "creating a file to work with" dd if=/dev/zero of=/var/tmp/infile count=1175000 for bs in 1k 2k 4k 8k 16k 32k 64k 128k 256k 512k 1M 2M 4M 8M do echo "Testing block size = $bs" dd if=/var/tmp/infile of=/var/tmp/outfile bs=$bs echo "" done rm /var/tmp/infile /var/tmp/outfile
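For a quick one-off test without the script, something like this gives a comparable number per block size; conv=fdatasync makes dd flush to disk before reporting, so the figure is not just measuring the page cache (sketch, adjust the size to taste):
dd if=/dev/zero of=/var/tmp/outfile bs=1M count=1024 conv=fdatasync
rm /var/tmp/outfile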
{ "source": [ "https://serverfault.com/questions/147935", "https://serverfault.com", "https://serverfault.com/users/44139/" ] }
148,341
I'd like to schedule a command to run after reboot on a Linux box. I know how to do this so the command consistently runs after every reboot with a @reboot crontab entry, however I only want the command to run once. After it runs, it should be removed from the queue of commands to run. I'm essentially looking for a Linux equivalent to RunOnce in the Windows world. In case it matters: $ uname -a Linux devbox 2.6.27.19-5-default #1 SMP 2009-02-28 04:40:21 +0100 x86_64 x86_64 x86_64 GNU/Linux $ bash --version GNU bash, version 3.2.48(1)-release (x86_64-suse-linux-gnu) Copyright (C) 2007 Free Software Foundation, Inc. $ cat /etc/SuSE-release SUSE Linux Enterprise Server 11 (x86_64) VERSION = 11 PATCHLEVEL = 0 Is there an easy, scriptable way to do this?
I really appreciate the effort put into Dennis Williamson's answer . I wanted to accept it as the answer to this question, as it is elegant and simple, however: I ultimately felt that it required too many steps to set up. It requires root access. I think his solution would be great as an out-of-the-box feature of a Linux distribution. That being said, I wrote my own script to accomplish more or less the same thing as Dennis's solution. It doesn't require any extra setup steps and it doesn't require root access. #!/bin/bash if [[ $# -eq 0 ]]; then echo "Schedules a command to be run after the next reboot." echo "Usage: $(basename $0) <command>" echo " $(basename $0) -p <path> <command>" echo " $(basename $0) -r <command>" else REMOVE=0 COMMAND=${!#} SCRIPTPATH=$PATH while getopts ":r:p:" optionName; do case "$optionName" in r) REMOVE=1; COMMAND=$OPTARG;; p) SCRIPTPATH=$OPTARG;; esac done SCRIPT="${HOME}/.$(basename $0)_$(echo $COMMAND | sed 's/[^a-zA-Z0-9_]/_/g')" if [[ ! -f $SCRIPT ]]; then echo "PATH=$SCRIPTPATH" >> $SCRIPT echo "cd $(pwd)" >> $SCRIPT echo "logger -t $(basename $0) -p local3.info \"COMMAND=$COMMAND ; USER=\$(whoami) ($(logname)) ; PWD=$(pwd) ; PATH=\$PATH\"" >> $SCRIPT echo "$COMMAND | logger -t $(basename $0) -p local3.info" >> $SCRIPT echo "$0 -r \"$(echo $COMMAND | sed 's/\"/\\\"/g')\"" >> $SCRIPT chmod +x $SCRIPT fi CRONTAB="${HOME}/.$(basename $0)_temp_crontab_$RANDOM" ENTRY="@reboot $SCRIPT" echo "$(crontab -l 2>/dev/null)" | grep -v "$ENTRY" | grep -v "^# DO NOT EDIT THIS FILE - edit the master and reinstall.$" | grep -v "^# ([^ ]* installed on [^)]*)$" | grep -v "^# (Cron version [^$]*\$[^$]*\$)$" > $CRONTAB if [[ $REMOVE -eq 0 ]]; then echo "$ENTRY" >> $CRONTAB fi crontab $CRONTAB rm $CRONTAB if [[ $REMOVE -ne 0 ]]; then rm $SCRIPT fi fi Save this script (e.g.: runonce ), chmod +x , and run: $ runonce foo $ runonce "echo \"I'm up. I swear I'll never email you again.\" | mail -s \"Server's Up\" $(whoami)" In the event of a typo, you can remove a command from the runonce queue with the -r flag: $ runonce fop $ runonce -r fop $ runonce foo Using sudo works the way you'd expect it to work. Useful for starting a server just once after the next reboot. myuser@myhost:/home/myuser$ sudo runonce foo myuser@myhost:/home/myuser$ sudo crontab -l # DO NOT EDIT THIS FILE - edit the master and reinstall. # (/root/.runonce_temp_crontab_10478 installed on Wed Jun 9 16:56:00 2010) # (Cron version V5.0 -- $Id: crontab.c,v 1.12 2004/01/23 18:56:42 vixie Exp $) @reboot /root/.runonce_foo myuser@myhost:/home/myuser$ sudo cat /root/.runonce_foo PATH=/usr/sbin:/bin:/usr/bin:/sbin cd /home/myuser foo /home/myuser/bin/runonce -r "foo" Some notes: This script replicates the environment (PATH, working directory, user) it was invoked in. It's designed to basically defer execution of a command as it would be executed "right here, right now" until after the next boot sequence.
{ "source": [ "https://serverfault.com/questions/148341", "https://serverfault.com", "https://serverfault.com/users/16374/" ] }
148,401
I need to check a PTR record to make sure that a script I have is sending emails which will actually be received by my users and not be incorrectly marked as spam. I understand that the ISP which owns the IP range has to set up the PTR record, but how do I check if it is already set up?
If you have Unix or Linux , you can do this by typing this on a command prompt: dig -x xx.yy.zz.aa You'll get an answer with your authority of aa.zz.yy.xx.in-addr.arpa and server resolving to this address. In Windows you can do nslookup xx.yy.zz.aa . You can also check online at www.intodns.com and input your domain... It will error on the results checking for a reverse zone lookup. xx.yy.zz.aa = The IP address you're trying to resolve Update: When using dig, nslookup, or host it is frequently useful to use a DNS server outside of your control like Google (8.8.8.8) so you get confirmation things are right from a 3rd party. – Zoredache Zoredache makes a good point. Here are the commands for testing/resolving to external/outside DNS servers: Dig (testing reverse DNS on Google's DNS server of 8.8.8.8): dig -x zz.yy.xx.aa @8.8.8.8 Host and Nslookup (testing reverse dns on Google's DNS server of 8.8.8.8) nslookup zz.yy.xx.aa 8.8.8.8 host zz.yy.xx.aa 8.8.8.8
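To go one step further and confirm that forward and reverse records agree (which is what many mail servers actually check), something along these lines works; example.com is just a placeholder:
ip=$(dig +short example.com @8.8.8.8)
dig +short -x "$ip" @8.8.8.8    # should print a hostname that resolves back to the same IP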
{ "source": [ "https://serverfault.com/questions/148401", "https://serverfault.com", "https://serverfault.com/users/44977/" ] }
148,721
Is there a linux shell command that I can use to inspect the TXT records of a domain?
Dig will also do it quite nicely: dig -t txt example.com and if you add the +short option you get just the txt record in quote marks with no other cruft.
{ "source": [ "https://serverfault.com/questions/148721", "https://serverfault.com", "https://serverfault.com/users/14896/" ] }
149,039
Ubuntu 10.04, MySQL 5.1, Apache 2.2, and PHP 5.2/5.3: I just discovered that I am using the wrong version of PHP for a CRM application. Once I figured out how to make a simple phpinfo() script to tell me what Apache2 is using, I tried changing the php.ini such that my webserver would use the PHP I want. Well, this is my problem. Not sure how to do that. I compiled the version of PHP I want to /etc here: /etc/php-5.2.8/ Inside this, there was a php.ini-recommended file that I made some changes to and renamed to php.ini so PHP would use it. But when I opened my browser and cleared my history and went to the http://localhost<CRM dir>/install.php address, the wizard still says I'm not usign the correct version of PHP. Based on this post what do I have to do to change the version of PHP that shows up after I run my test.php script? In other words, phpinfo() says I'm running PHP 5.3.2, but I want to change it to my compiled 5.2.8 version located in /etc .
If you've already installed another version of PHP, you only need to change the php* module used by Apache. For example, I have php5 and php7.0. When I want Apache to use php7.0, I only need to enable its module and disable the php5 module: sudo a2dismod php5 sudo a2enmod php7.0
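After swapping modules, Apache needs a restart, and it is worth confirming which module actually loaded; a sketch (older Ubuntu releases use service apache2 restart instead of systemctl):
sudo systemctl restart apache2
apachectl -M | grep -i php   # lists the PHP module Apache has loaded; on Debian/Ubuntu the command may be apache2ctl
php -v                       # the CLI version, which can differ from the Apache module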
{ "source": [ "https://serverfault.com/questions/149039", "https://serverfault.com", "https://serverfault.com/users/39955/" ] }
149,673
How do you allow a user to log in using " su - user " but prevent the user from login in using SSH? I tried to set the shell to /bin/false but the when I try to su it doesn't work. Are there several ways to only allow logins by su ? Is SSH's AllowUser the way to go? (how would I do this if it's the way to go)
You can use AllowUsers / AllowGroups if you have only a few users/groups that are allowed to login via ssh or DenyUsers / DenyGroups if you have only a few users/groups that are not allowed to login. Note that this only restricts login via ssh, other ways of login (console, ftp, ...) are still possible. You need to add these options to your /etc/ssh/sshd_config file for most ssh installations. If you have set the login shell to /bin/false you can use su -s /bin/bash user (replace /bin/bash with the shell of your choice)
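A minimal version of that setup might look like this sketch, where appuser is a hypothetical account name and the ssh service is called ssh on Debian/Ubuntu and sshd on RHEL-style systems:
# in /etc/ssh/sshd_config
DenyUsers appuser
# then restart ssh and test
sudo service sshd restart          # or: sudo service ssh restart
sudo su -s /bin/bash - appuser     # still works locally even if the login shell is /bin/false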
{ "source": [ "https://serverfault.com/questions/149673", "https://serverfault.com", "https://serverfault.com/users/45342/" ] }
149,833
I have a folder a 2003 member server which can't be deleted. Nothing has any permissions (domain admin and running up a cmd prompt as "nt authority\system" using psexec) - always "access denied". When I do a dir /q, the owner shows as "...". I've tried takeown.exe on the folder and also it's parent. The bad folder always reports "access denied". Also tried to reset using icacls, same thing. Explorer permissions has no sharing & security options or tabs. It works fine for other folders, even in the same directory.
I've seen something similar to this. What ended up being the case is that the file was deleted while there were still outstanding locks on it. I couldn't do a darned thing to it. Clearing the outstanding locks caused the file to fully delete.
{ "source": [ "https://serverfault.com/questions/149833", "https://serverfault.com", "https://serverfault.com/users/45395/" ] }
150,348
On various systems that I administer, there are cron scripts that get run via the commonly-used /etc/cron.{hourly,daily,weekly} layout. What I want to know is whether there's any common 'disable this script' functionality. Obviously, simply deleting something out of a given directory will disable it, but I'm looking for a more permanent solution. Deleting /etc/cron.daily/slocate will work to disable the nightly updatedb on my home machine (where I never use slocate ), but next time I upgrade the slocate package, I'm pretty sure it'll reappear. The two distributions I'm most interested in are Gentoo and OpenSUSE, but I'm hoping there's a widely-implemented mechanism. Both distros as I have them use vixie-cron (not sure it matters).
You should be able to chmod -x scriptname to disable a script but leave the file in place.
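For example, to disable the nightly slocate job from the question and preview what would still run (run-parts --test is the Debian/Ubuntu way to preview; Gentoo and openSUSE use their own cron.daily runners, so check how the directory is invoked there):
sudo chmod -x /etc/cron.daily/slocate
run-parts --test /etc/cron.daily    # lists what would still execute; slocate should no longer appear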
{ "source": [ "https://serverfault.com/questions/150348", "https://serverfault.com", "https://serverfault.com/users/9910/" ] }
150,740
Which characters are allowed and which of them must be escaped on the command line in different operating systems?
There's a discussion of filename characters in the Wikipedia article on File Names . You may find this essay informative: Fixing Unix/Linux/POSIX Filenames . This article compares OS X and Windows XP: X vs. XP: Forbidden Characters in Filenames (PDF, see pp approx. 64-66). Things That Shouldn’t Be in File Names for $1,000 Alex I don't know which characters must be un -escaped, but in Linux, it's probably not a good idea to escape the characters that may have special meaning such as "n" (newline), "t" (tab) and others, but that's generally not a problem in file operations. Perhaps you mean "escaped" rather than "unescaped". The most common ones are ones that the shell will interpret such as space, ">", "<", etc. See some of the articles I linked for a discussion of those.
{ "source": [ "https://serverfault.com/questions/150740", "https://serverfault.com", "https://serverfault.com/users/22257/" ] }
151,090
Is there a maximum size for an HTTP POST? And if there is a max size, is it determined by the protocol or is it at the discretion of the server?
The HTTP specification doesn't impose a specific size limit for posts. They will usually be limited by either the web server or the programming technology used to process the form submission.
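In practice the cap comes from configuration, so these are the usual knobs to check or raise; the values below are arbitrary examples, not recommendations:
client_max_body_size 20m;     # nginx
LimitRequestBody 20971520     # Apache, in bytes
post_max_size = 20M           # php.ini (upload_max_filesize usually needs raising as well)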
{ "source": [ "https://serverfault.com/questions/151090", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
151,109
How do I get the current Unix time in milliseconds (i.e number of milliseconds since Unix epoch January 1 1970)?
This: date +%s will return the number of seconds since the epoch. This: date +%s%N returns the seconds and current nanoseconds. So: date +%s%N | cut -b1-13 will give you the number of milliseconds since the epoch - current seconds plus the left three of the nanoseconds. and from MikeyB - echo $(($(date +%s%N)/1000000)) (dividing by 1000 only brings to microseconds)
{ "source": [ "https://serverfault.com/questions/151109", "https://serverfault.com", "https://serverfault.com/users/43880/" ] }
151,635
I've found one way so far: less +G filename , but it scrolls up line-by-line only with ↑ . What's a more powerful less usage which provides scrolling by page, backward pattern search, and so on?
I'm sure someone else has a better answer, but with "less" after you've opened the file: G goes to the bottom of the file, ^b goes up one page, and ? searches backwards. As you said, you can open the file with +G and then use ? and ^b to scroll up. There are likely clever awk things you can do to achieve the same thing in a script.
{ "source": [ "https://serverfault.com/questions/151635", "https://serverfault.com", "https://serverfault.com/users/45942/" ] }
151,638
Having difficulties setting up Bare Metal recovery in DPM 2010. Does anyone know of a good guide/walkthrough to talk me through a basic setup? I have tried most of the DPM knowledge base without much luck. I can perform System State backups, but as soon as I enable bare metal the jobs start failing. Error codes are not coming up anything on google at all. Thanks in advance
I'm sure someone else has a better answer, but With "less" after you've opened the file: G goes to the bottom of the file ^b goes up one page ? searches backwards. As you said, you can open the file with +G and then use ? and ^b to scroll up. There are likely clever awk things you can do to achieve the same thing in a script.
{ "source": [ "https://serverfault.com/questions/151638", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
151,645
I am primarily a web-application developer, and I do not know much about scaling/scalability techniques. My application is written in Python, using Django; a fairly standard setup. I currently use Apache 2.2 for my webserver, and MySQL for my database server; both running on the same VPS. Up until now, it was basically a prototype and merely 15-30 concurrent users at any given time; so I had no issues, but now since we'll be adding more users we'll have performance issues. So my question is how do I go about scaling my web-application? My current plan is as follows: Now I have just one vps server running, apache + MySQL. Next, I plan to add another vps server, to run only MySQL, so I'll have one webserver and one DB server. Next, I'll add memcache to the webserver for caching data, to take some load off MySQL. Next, another web-server for serving all the static content. Next, a VPS server for load-balancing (nginx/varnish) behind which would be my two web-servers and then db-server. Does that sound like a workable strategy? Please guide me around here.
I'm sure someone else has a better answer, but With "less" after you've opened the file: G goes to the bottom of the file ^b goes up one page ? searches backwards. As you said, you can open the file with +G and then use ? and ^b to scroll up. There are likely clever awk things you can do to achieve the same thing in a script.
{ "source": [ "https://serverfault.com/questions/151645", "https://serverfault.com", "https://serverfault.com/users/30032/" ] }
151,824
I have a scheduled task which is very CPU- and IO-intensive, and takes about four hours to run (building source code, if you're curious). The task is a Powershell script which spawns various sub-processes to do its work. When I run the same process interactively from a Powershell prompt, as the same user account, it runs in about two and a half hours. The task is running on Windows Server 2008 R2. What I want to know is why it takes so much longer to run as a scheduled task - more than an hour longer. One thing I noticed is that the task scheduler runs at Below-Normal priority, so when my task starts, it inherits the same lowered priority. However, I've updated the script to set the Powershell process priority back to Normal, and it still takes just as long. Anybody have an idea what could be different between the two scenarios? I've ruled out differences in processor and IO load - this task is the only thing the system is used for, so there's nothing else running that could be competing for resources.
It appears that there is more than just "regular" process priority at work here. As I noted in the question, the task scheduler by default runs your task at lower than normal priority. This question on StackOverflow describes how to fix any task to run at Normal priority, but the fix still leaves one thing slightly different: memory priority. Memory priority was a new feature for Windows Vista, and is described in this Technet article . You can see memory priority using Process Explorer , which is a must-have tool for any administrator or programmer. Anyway, even with the scheduled task priority fix, the memory priority of your task is set to 4, which is one notch below the normal setting of 5. When I manually boosted the memory priority of my task up to 5, the performance was on par with running the process interactively. For info on boosting the priority, see my answer to a related StackOverflow question about IO priority; setting memory priority is done similarly, via NtSetInformationProcess, with PROCESS_INFORMATION_CLASS set to ProcessMemoryPriority (the value of this is 39 or 0x27). I might make a free utility that can be used to set this, if others need it and don't have access to programmer tools. EDIT: I've gone ahead and written a free utility for querying and setting the memory priority of a task, available here . The download contains both source code and a compiled binary.
{ "source": [ "https://serverfault.com/questions/151824", "https://serverfault.com", "https://serverfault.com/users/1678/" ] }
151,955
We are using Smarter Mail system. Recently, we found that hacker had hacked some user accounts and sent out lots of spams. We have firewall to ratelimit the sender, but for the following email, the firewall couldn't do this because of the empty FROM address. Why an empty FROM address is consider OK? Actually, in our MTA(surgemail), we can see the sender in the email header. Any idea? 11:17:06 [xx.xx.xx.xx][15459629] rsp: 220 mail30.server.com 11:17:06 [xx.xx.xx.xx][15459629] connected at 6/16/2010 11:17:06 AM 11:17:06 [xx.xx.xx.xx][15459629] cmd: EHLO ulix.geo.auth.gr 11:17:06 [xx.xx.xx.xx][15459629] rsp: 250-mail30.server.com Hello [xx.xx.xx.xx] 250-SIZE 31457280 250-AUTH LOGIN CRAM-MD5 250 OK 11:17:06 [xx.xx.xx.xx][15459629] cmd: AUTH LOGIN 11:17:06 [xx.xx.xx.xx][15459629] rsp: 334 VXNlcm5hbWU6 11:17:07 [xx.xx.xx.xx][15459629] rsp: 334 UGFzc3dvcmQ6 11:17:07 [xx.xx.xx.xx][15459629] rsp: 235 Authentication successful 11:17:07 [xx.xx.xx.xx][15459629] Authenticated as [email protected] 11:17:07 [xx.xx.xx.xx][15459629] cmd: MAIL FROM: 11:17:07 [xx.xx.xx.xx][15459629] rsp: 250 OK <> Sender ok 11:17:07 [xx.xx.xx.xx][15459629] cmd: RCPT TO:[email protected] 11:17:07 [xx.xx.xx.xx][15459629] rsp: 250 OK <[email protected]> Recipient ok 11:17:08 [xx.xx.xx.xx][15459629] cmd: DATA
The empty MAIL FROM is used for delivery status notifications. Mail servers are required to support it ( RFC 1123 section 5.2.9 ). It’s used primarily for bounce messages, to prevent an endless loop. When MAIL FROM is used with an empty address (represented as <> ), the receiving server knows not to generate a bounce message if the message is being sent to a non-existent user. Without this, it might be possible for someone to DoS you simply by faking a message to a non-existent user at another domain, with a return address of a non-existent user at your own domain, resulting in a never-ending loop of bounce messages. What would happen if you block messages with an empty MAIL FROM: ? Your users would not get bounce messages from other domains: they would never know if they made a typo when sending mail to a user at another domain. The empty MAIL FROM: messages that you are seeing are probably not coming from a spammer. Instead, a spammer has faked an address at your domain and used it as the return address for a message to another domain. Let’s say you are yourdomain.com and my domain is mydomain.net . The spammer sends a message to [email protected] , faking the return address as [email protected] . Since there is no user johnq in my domain, my mail server sends a bounce message ( MAIL FROM:<> ) to the apparent sender, [email protected] . That is what you are probably seeing. Blocking empty MAIL FROM messages will do more harm than good, in my opinion. Spammers, in my experience, rarely use an empty MAIL FROM: since they can easily fake a real-looking address. When the message is actual spam, there are far better ways to detect and block it, including RBLs, Bayesian filters, and SpamAssassin. And finally, you can prevent at least some of the forgeries using yourdomain.com by setting up proper SPF records for your domain. Update: After looking closer at your log, someone was able to AUTH using a valid username and password for your server. This puts it in a whole other category of trouble. However, everything I said about MAIL FROM: still stands. 99% of the time it’s going to be the result of bounce messages.
{ "source": [ "https://serverfault.com/questions/151955", "https://serverfault.com", "https://serverfault.com/users/26731/" ] }
152,139
My server is under DDOS attacks and I want to block the IP that is doing it, what logs should I be looking for to determine the attacker's IP?
tail -n 10000 yourweblog.log|cut -f 1 -d ' '|sort|uniq -c|sort -nr|more Take a look at the top IP addresses. If any stand out from the others, those would be the ones to firewall. netstat -n|grep :80|cut -c 45-|cut -f 1 -d ':'|sort|uniq -c|sort -nr|more This will look at the currently active connections to see if there are any IPs connecting to port 80. You might need to alter the cut -c 45- as the IP address may not start at column 45. If someone was doing a UDP flood to your webserver, this would pick it up as well. On the off chance that neither of these show any IPs that are excessively out of the norm, you would need to assume that you have a botnet attacking you and would need to look for particular patterns in the logs to see what they are doing. A common attack against wordpress sites is: GET /index.php? HTTP/1.0 If you look through the access logs for your website, you might be able to do something like: cut -f 2 -d '"' yourweblog.log|cut -f 2 -d ' '|sort|uniq -c|sort -nr|more which would show you the most commonly hit URLs. You might find that they are hitting a particular script rather than loading the entire site. cut -f 4 -d '"' yourweblog.log|sort|uniq -c|sort -nr|more would allow you to see common UserAgents. It is possible that they are using a single UserAgent in their attack. The trick is to find something in common with the attack traffic that doesn't exist in your normal traffic and then filter that through iptables, mod_rewrite or upstream with your webhost. If you are getting hit with Slowloris, Apache 2.2.15 now has the reqtimeout module which allows you to configure some settings to better protect against Slowloris.
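Once an offending address stands out, it can be dropped at the firewall; 203.0.113.45 below is just a documentation-range placeholder:
iptables -A INPUT -s 203.0.113.45 -j DROP
iptables -L INPUT -n --line-numbers   # verify the rule is in place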
{ "source": [ "https://serverfault.com/questions/152139", "https://serverfault.com", "https://serverfault.com/users/37222/" ] }
152,175
Anyone know the default user/group for Apache under OS X (10.6)? I'd like to set permissions correctly. Already enabled Web Sharing, etc.
Apache user and group are _www and _www .
{ "source": [ "https://serverfault.com/questions/152175", "https://serverfault.com", "https://serverfault.com/users/622/" ] }
152,194
I the have following setup in my conf file upload_set_form_field $upload_field_name.name "$upload_file_name"; But I want change chosen param name to: upload_set_form_field ($upload_field_name+"[name]") "$upload_file_name"; So I can get "attachment[name]" but this doesn't work. I would be very happy if someone could help me with merging variables with string in nginx config file :).
This works (nginx's set directive takes exactly two arguments, so note there is no = sign): set $foo 'foo'; set $foobar "${foo}bar";
{ "source": [ "https://serverfault.com/questions/152194", "https://serverfault.com", "https://serverfault.com/users/46090/" ] }
152,206
I was wondering how I would go about getting the sort of full screen image of a desktop running linux from windows using xming. Basically, I dont just want the console. Thanks in advance!
This works: set $foo = 'foo'; set $foobar "${foo}bar";
{ "source": [ "https://serverfault.com/questions/152206", "https://serverfault.com", "https://serverfault.com/users/45478/" ] }
152,245
I am using the "ftp" command of linux to send data to a 3rd party provider. This company states that we need to "Disable passive mode in your FTP client", and I confirm it doesn't work in passive mode. However, when I googled the linux command, I see that the "-p" flag is "the default now for all clients (ftp and pftp) due to security concerns using the PORT transfer mode. The flag is kept for compatibility only and has no effect anymore." How do I disable passive mode then? And, is it that bad?
Once you have logged into the site with FTP, type passive and then do your transfer.
{ "source": [ "https://serverfault.com/questions/152245", "https://serverfault.com", "https://serverfault.com/users/24213/" ] }
152,363
On Arch Linux, I would like to have eth0 (connected to bridged router) share the connection received from wlan0, I've read tutorials but I'm not command savvy as other users are and don't completely understand.
UPDATE It is not possible to bridge between wireless (client a.k.a. station mode) and wired interfaces according to this thread on linux-ath5k-devel . Setup NAT One should set up NAT instead: echo 1 > /proc/sys/net/ipv4/ip_forward iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE Assigning an IP Then you have to assign IP addresses to yourself: ifconfig eth0 10.0.0.1 netmask 255.255.255.0 up Install dhcp daemon Install a dhcp server and add the following text to its config file (in /etc/dhcpd.conf or something similar) subnet 10.0.0.0 netmask 255.255.255.0 { range 10.0.0.100 10.0.0.120; option routers 10.0.0.1; option domain-name-servers the-ip-address-you-have-in-etc-resolv.conf; } Start dhcpd Then start it /etc/init.d/dhcpd start And that's it! Only read below if you are interested in the non-working bridging setup brctl addbr mybridge brctl addif mybridge eth0 brctl addif mybridge wlan0 First you create a bridge interface I choose an arbitrary name mybridge then add intefaces to it. You should request a new ip address (This is needed only if you want to get a valid IP for the bridging device itself): dhclient -d mybridge
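Two extras worth adding so the NAT setup survives a reboot; this is a sketch and the file locations are common defaults rather than Arch-specific guarantees:
sysctl -w net.ipv4.ip_forward=1                      # same effect as the echo above
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf   # keep forwarding enabled after reboot
iptables-save > /etc/iptables/iptables.rules         # assumed path for Arch's iptables service; adjust if yours differs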
{ "source": [ "https://serverfault.com/questions/152363", "https://serverfault.com", "https://serverfault.com/users/46127/" ] }
152,373
I'm running PHP through mod_fastcgi & mod_suexec, and I added Header set X-UA-Compatible "IE=edge env=best-standards-support" to my .htaccess file. It works fine for static content, but the PHP files lack the header. How do I fix this problem?
UPDATE It is not possible to bridge between wireless (client a.k.a. station mode) and wired interfaces according to this thread on linux-ath5k-devel . Setup NAT One should set up NAT instead: echo 1 > /proc/sys/net/ipv4/ip_forward iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE Assigning an IP Then you have to assign IP addresses to yourself: ifconfig eth0 10.0.0.1 netmask 255.255.255.0 up Install dhcp daemon Install a dhcp server and add the following text to its config file (in /etc/dhcpd.conf or something similar) subnet 10.0.0.0 netmask 255.255.255.0 { range 10.0.0.100 10.0.0.120; option routers 10.0.0.1; option domain-name-servers the-ip-address-you-have-in-etc-resolv.conf; } Start dhcpd Then start it /etc/init.d/dhcpd start And that's it! Only read below if you are interested in the non-working bridging setup brctl addbr mybridge brctl addif mybridge eth0 brctl addif mybridge wlan0 First you create a bridge interface I choose an arbitrary name mybridge then add intefaces to it. You should request a new ip address (This is needed only if you want to get a valid IP for the bridging device itself): dhclient -d mybridge
{ "source": [ "https://serverfault.com/questions/152373", "https://serverfault.com", "https://serverfault.com/users/11996/" ] }
152,393
I have this command to create a service: sc create svnserve binpath="\"C:\Program Files (x86)\Subversion\bin\svnserve.exe\" --service --root C:\SVNRoot" displayname="Subversion" depend=tcpip start=auto obj="NT AUTHORITY\LocalService" Unfortunately, it seems not to work, even though the syntax is correct. When I run it, I get the usage instructions (which I guess is a way of telling me that I've supplied incorrect arguments, although I have no idea what incorrect argument I might have supplied). Can anyone help me out of my difficulty? Thanks!
Your syntax is actually incorrect, but you'll be forgiven for missing it. From the help text for sc create : NOTE: The option name includes the equal sign. What isn't immediately obvious from this is that the options need to be specified with a space between the option name and the value. Incorrect: displayname="Subversion" Correct (note the space after = ): displayname= "Subversion" Your command should work just fine formatted accordingly, i.e.: sc create svnserve binpath= "\"C:\Program Files (x86)\Subversion\bin\svnserve.exe\" --service --root C:\SVNRoot" displayname= "Subversion" depend= tcpip start= auto obj= "NT AUTHORITY\LocalService"
{ "source": [ "https://serverfault.com/questions/152393", "https://serverfault.com", "https://serverfault.com/users/65299/" ] }
152,745
Is it possible/how can I configure an Nginx location block to proxy to different backends depending on the request method (ie. GET/POST)? The reason is, I am currently handling the 2 methods at 2 different URLs (one via http proxy and the other via fcgi) and am trying to make it more "REST"ful so, would ideally like the GETting the resource to return the list, while POSTing to the same resource should add to the list.
I don't use this configuration, but based on the examples here : location /service { if ($request_method = POST ) { fastcgi_pass 127.0.0.1:1234; } if ($request_method = GET ) { alias /path/to/files; } } If your writing your own application, you can also consider checking GET/POST in it, and sending X-Accel-Redirect headers to hand off transport of the files to nginx.
{ "source": [ "https://serverfault.com/questions/152745", "https://serverfault.com", "https://serverfault.com/users/17989/" ] }
153,025
i'd really like to know what the lifespan of a server is, or could be. Is there something like lifespan in the world of PCs at all? Lets assume it runs 24/7, how many hours/days/years/etc could it be used?
Do you mean before any part fails, at all? Or do you mean how long can a server be expected to last if you perform maintenance on the serve, including replacing faulty parts like hard drives and power supplies? Assuming you mean the latter - after all components like a hard drive can fail any time from '2 days after you got them' to 10 years plus - then I'd say the lifespan of a server can be measured in two ways You could consider its lifespan to be however long it remains able to do the tasks given to it, which might be some time if the task is something that never really changes, e.g. DNS server. This is common enough in businesses that don't give a lot of funding over to IT; I've always worked in "large business, big iron" environments, but this is a perfectly valid viewpoint in a small business, to some degree at least. Or you could (and in my opinion, should ) consider the lifespan to be for however long the hardware is supportable . In other words, once you can no longer obtain replacement parts for a server, it is essentially living on borrowed time. That doesn't mean you need to run out and buy a new server to replace an old one the very second you can no longer obtain parts to maintain it, but that at this point you have to balance the cost of replacing it against the cost/risk of not doing so and having the service it provides unavailable until you can purchase a new server and migrate the old server's apps and data over to the new one. In addition to both/either of the above points, you might also consider the point at which an old server becomes inefficient to maintain - the cost of keeping it running becomes greater (maintenance, power, floorspace in some cases) than the cost of virtualising it and a bunch of other similar older servers on new hardware. EDIT : I think it depends too on the task that server is doing - there's a big difference between a DNS server, say, and a database server running a major CRM backend that the business is paralysed without. Both in terms of risk to the business if it dies unexpectedly and in the efficiency gained by moving to newer hardware. Of course you could migrate the CRM backend to new hardware and re-purpose the old hardware. We've done this a few times for non-vital test or dev environments, etc.
{ "source": [ "https://serverfault.com/questions/153025", "https://serverfault.com", "https://serverfault.com/users/44663/" ] }
153,093
NetGear's ReadyNAS 2100 has 4 disk slots and costs $2000 with no disks. That seems a bit too expensive for just 4 disk slots. Dell has good network storage solutions too. PowerVault NX3000 has 6 disk slots, so that's an improvement. However, it costs $3500; the NX3100 doubles the number of disks at double the price. Just in case I'm looking at the wrong hardware for lots of storage, the trusty PowerVault MD3000i SAN has a good 15 drives, but it starts at $7000. While you can argue about support from Dell, Netgear or HP or any other company being serious, it's still pretty damn expensive to get those drives RAID'ed together in a box and served via iSCSI. There's a much cheaper option: build it yourself. Backblaze has built it's own box , housing 45 (that's forty five) SATA drives for a little under $8000, including the drives themselves. That's at least 10 times cheaper than current offers from Dell, Sun, HP, etc. Why is NAS (or SAN - still storage attached to a network) so expensive? After all, it's main function is to house a number of HDDs, create a RAID array and serve them over a protocol like iSCSI; nearly everything else is just colored bubbles (AKA marketing terms).
This really depends on your point of view. If I'm an ISV who needs to launch on the tiniest possible budget but I need a crapload of storage, then yes, a brand-name box will be too expensive and the risk/reward of a home-made FreeNAS box would most likely be an acceptable solution. However, if I'm a mega-multi-national corporation with 10,000 users and I run a datacentre that supports a billion-dollar-a-year company and if the datacentre goes offline it's going to cost in the order of $100,000 a minute then you can bet your arse I'm going to buy a top-shelf brand-name NAS with a 2-hour no-questions-asked replacement SLA. Yes, it's going to cost me 100x more than a DIY box, but the day your entire array fails and you've got 10TB of critical storage offline, that $100,000 investment is going to pay for itself in about 2 hours flat. For someone like Backblaze, where storage volume is king, then it makes sense for them to roll their own - but that's the core competancy - providing storage. Dell, EMC, etc - their products are aimed at those who storage is not their primary focus. Of course, it's all totally pointless if you don't have backups, but that's another story for another day.
{ "source": [ "https://serverfault.com/questions/153093", "https://serverfault.com", "https://serverfault.com/users/34304/" ] }
153,100
I am trying to migrate from my old server (Server 1) from provider 1 to a new server (Server B) at provider 2, keeping the process as seamless as possible. One of the first things I noticed in the test folder I migrated is that several PHP functions are not supported with Server 2 -- apache_request_headers(), for example. This is supposedly because PHP was not compiled as an Apache module on Server 2. There might be other differences that may cause fatal script errors, that I haven't yet found. Both servers run CentOS with WHM. Is there a way to configure the new server to be exactly the same as the old, without this ad hoc checking?
This really depends on your point of view. If I'm an ISV who needs to launch on the tiniest possible budget but I need a crapload of storage, then yes, a brand-name box will be too expensive and the risk/reward of a home-made FreeNAS box would most likely be an acceptable solution. However, if I'm a mega-multi-national corporation with 10,000 users and I run a datacentre that supports a billion-dollar-a-year company and if the datacentre goes offline it's going to cost in the order of $100,000 a minute then you can bet your arse I'm going to buy a top-shelf brand-name NAS with a 2-hour no-questions-asked replacement SLA. Yes, it's going to cost me 100x more than a DIY box, but the day your entire array fails and you've got 10TB of critical storage offline, that $100,000 investment is going to pay for itself in about 2 hours flat. For someone like Backblaze, where storage volume is king, then it makes sense for them to roll their own - but that's the core competancy - providing storage. Dell, EMC, etc - their products are aimed at those who storage is not their primary focus. Of course, it's all totally pointless if you don't have backups, but that's another story for another day.
{ "source": [ "https://serverfault.com/questions/153100", "https://serverfault.com", "https://serverfault.com/users/45581/" ] }
153,409
I simply cannot believe this is quite so hard to determine. Even having read the RFCs, it's not clear to me if a server at subdomain.example.com can set a cookie that can be read by example.com. subdomain.example.com can set a cookie whose Domain attribute is .example.com. RFC 2965 seems to explicitly state that such a cookie will not be sent to example.com, but then equally says that if you set Domain=example.com, a dot is prepended, as if you said .example.com. Taken together, this seems to say that if example.com returns sets a cookie with Domain=example.com, it doesn't get that cookie back! That can't be right. Can anyone clarify what the rules really are?
Quoting from the same RFC2109 you read: * A Set-Cookie from request-host x.foo.com for Domain=.foo.com would be accepted. So subdomain.example.com can set a cookie for .example.com . So far so good. The following rules apply to choosing applicable cookie-values from among all the cookies the user agent has. Domain Selection The origin server's fully-qualified host name must domain-match the Domain attribute of the cookie So do we have a domain-match? * A is a FQDN string and has the form NB, where N is a non-empty name string, B has the form .B', and B' is a FQDN string. (So, x.y.com domain-matches .y.com but not y.com.) But now example.com wouldn't domain-match .example.com according to the definition. But www.example.com (or any other "non-empty name" in the domain) would. This RFC is in theory obsoleted by RFC2965 , which dictated things about forcing a leading dot for domains on Set-Cookie2 operations. More important, as noted by @Tony, is the real world. For a glimpse into what actual user agents are doing, see Firefox 3's nsCookieService.cpp and Chrome's cookie_monster.cc For perspective into what actual sites are doing, try playing with wget using --save-cookies , --load-cookies , and --debug to see what's going on. You'll likely find that in fact most sites are using some combination of Set-Cookie from the older RFC spec with "Host" values, implicitly without a leading dot (as twitter.com does) or setting Domain values (with a leading dot) and redirecting to a server like www.example.com (as google.com does).
{ "source": [ "https://serverfault.com/questions/153409", "https://serverfault.com", "https://serverfault.com/users/46424/" ] }
153,526
For Linux, this command should return the DNS record for the LDAP server host -t srv _ldap._tcp.DOMAINNAME (found at Authenticating from Java (Linux) to Active Directory using LDAP WITHOUT servername ) How could I get the same on the Windows command line using nslookup? I tried nslookup -type srv _ldap._tcp.DOMAINNAME (following http://support.microsoft.com/kb/200525 ), would this be correct?
You need to use an = after -type : nslookup -type=srv _ldap._tcp.DOMAINNAME
{ "source": [ "https://serverfault.com/questions/153526", "https://serverfault.com", "https://serverfault.com/users/16768/" ] }
153,528
Is there an easy way to throttle outgoing SMTP traffic? Some of our users continue to send large attachments to a large group of people - as a result the bandwidth is almost completely consumed, and other users are starting to complain about hickups in their internet access. Any suggestions? Thanks in advance. J.
You need to use an = after -type : nslookup -type=srv _ldap._tcp.DOMAINNAME
{ "source": [ "https://serverfault.com/questions/153528", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
153,634
I recently compiled a PHP 5.2.9 binary, and I tried to execute some PHP scripts with it. I can execute some scripts without problems, but one of them halts its execution midway, exiting with no errors or warnings. The returned status code of the process is 255. I've read in the manual that such status is 'reserved'. The question is: for what? I believe it's got something to do with missing dependencies in the PHP executable, but I can't be sure. Anyone knows what does an exit code of 255 mean? P.S. There are no errors in the PHP scripts, they run OK on other machines.
255 is an error; I could reproduce that same exit code by having a fatal error. This means that somehow your error reporting is hidden, and there are some possible causes for this: error_reporting is not defined and PHP reports no error at all; an @ (error suppression operator) hides the output of the error; or STDERR is redirected somewhere else (php -f somefile.php 2>/dev/null, remove the redirection). This could still be an internal error due to missing dependencies; note that a fatal error has the same exit code as a program crash.
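A quick way to force any hidden error into view is to override those settings from the command line and check the exit status (a sketch, using a placeholder script name):
php -d display_errors=1 -d error_reporting=-1 -f somefile.php
echo $?    # a printed fatal error plus exit code 255 narrows things down quickly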
{ "source": [ "https://serverfault.com/questions/153634", "https://serverfault.com", "https://serverfault.com/users/27483/" ] }
153,690
We run the name servers for our domain on our network. We use bind/named. Lets call the domain example.com . One thing I've noticed recently, when I goto a website like http://network-tools.com and run queries on URLs defined on our name servers, I see changes instantly. For example, if I add an entry to our DNS server for the url funny.example.com and then look up that url on http://network-tools.com , I see the proper external static IP listed for it immediately. That is telling me that any DNS requests related to example.com are coming straight to our DNS servers every time. My suspicions were confirmed earlier in the week when our DNS servers went down for a very short period. And during that time period, if I used http://network-tools.com to query example.com or any of its subdomains, I would get zero results. Obviously its because the DNS servers were down and couldn't be reached. So this brings me to my question. I thought changes to our DNS servers should be propogating out onto the internet to other DNS servers. That way, if our DNS goes down temporarily, other servers on the internet still know what IP address example.com points to. Am I misunderstanding this DNS stuff? Are 3rd party-controlled DNS servers like ours not allowed to propagate DNS information to other servers on the net? Where should I start investigating as to why the changes aren't making it out there? I can see on our firewall that port 53 traffic is making it to our DNS servers properly. UPDATE I know you guys are saying that its impossible to publish your DNS settings instantaneously, but all I know is this: If I make a DNS change on our DNS server(s) and then immediately check it on http://network-tools.com , I see the changes immediately. If I turn off our DNS servers and then I try to check any of the URLs using http://network-tools.com , the site cannot find any of the URLs. But if I bring the DNS servers back online, all of the sudden http://network-tools.com can find the URLs again... This tells me that servers are NOT caching our DNS settings. Am I wrong? Also, our TTL settings are set to 900 (15 minutes) at the moment and our DNS servers have been running for over a year. So its not like DNS servers out on the internet haven't had a chance to cache it yet. Is the reason servers are not caching the settings because the TTL is so low at the moment? That kinda makes sense if that is the reason.
Yes, you are misunderstanding how DNS works. I'm going to use some emphasis here, but please don't be offended as none is intended. DNS RECORDS ARE NOT PROPAGATED. THEY ARE CACHED. That being said, here's a simplified explanation of what happens: You create a new DNS record (A, CNAME, etc) A remote user (more specifically a process\application launched by the user) tries to access a service accessed via that DNS record (a web browser trying to access the web site running on funny.example.com for instance) The users DNS client sends a DNS query to it's DNS server, the DNS server then finds your name servers (usually through a series of recursive DNS queries) and asks them for the information regarding funny.example.com Your name servers respond with the answers The users DNS server then sends this information to the user (more specifically to the users DNS client resolver), which in turn returns the information to the process\application. This information comes with what is called a TTL (Time To Live) that tells the DNS client resolver how long this information may be kept in it's DNS cache (in memory) and how long the information can be considered current and accurate The user's DNS client resolver then flushes this information when the TTL expires. Any new requests for the DNS record(s) in question requires a new DNS lookup and the above process repeats. So the long and short of it is this: Your DNS records do not propagate. No other DNS server has a copy of your DNS records or zones. A DNS client or server may cache information about your DNS records or zones (based on their DNS queries of your DNS records and zones) into their DNS cache. This information is temporarily cached and will be removed from their DNS cache when the TTL expires. If your name servers are down, only those DNS clients that have any of your DNS records in their cache will be able to resolve those DNS records and only until the TTL expires. Also, when the TTL expires (neccessitating a new DNS lokkup) those DNS clients will no longer be able to resolve your DNS records.
{ "source": [ "https://serverfault.com/questions/153690", "https://serverfault.com", "https://serverfault.com/users/21307/" ] }
153,776
How can I issue a nmap command that shows me all the alive machines' IP addresses and corresponding hostname s in the LAN that I am connected? (if this can be done in another way/tool you surely are welcome to answer)
nmap versions lower than 5.30BETA1: nmap -sP 192.168.1.* newer nmap versions: nmap -sn 192.168.1.* This gives me hostnames along with IP addresses, and only pings the hosts to discover them. This will only give you the hostnames if you run it as root. EDIT: As of Nmap 5.30BETA1 [2010-03-29] -sP has been replaced with -sn as the preferred way to do ping scans, while skipping port scanning, just like the comments indicate: Previously the -PN and -sP options were recommended. This establishes a more regular syntax for some options that disable phases of a scan: -n no reverse DNS -Pn no host discovery -sn no port scan
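Since the question also asks about saving the results, the grepable output format makes that easy; a sketch, adjust the range to match your LAN:
nmap -sn 192.168.1.0/24 -oG - | awk '/Up$/{print $2, $3}' > alive-hosts.txt   # IP plus (hostname) for each live host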
{ "source": [ "https://serverfault.com/questions/153776", "https://serverfault.com", "https://serverfault.com/users/46503/" ] }
153,843
I've installed a Python package using pip, which is a replacement for easy_install. How do I get a list of which installed files are associated with this package? Basically, I'm looking for the Python package equivalent of dpkg -L or rpm -ql
You could do that by using the command: pip show -f <package>
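A quick sketch of how that looks in practice; the package name here is just an illustration:
pip show -f requests                           # prints the package metadata plus a "Files:" section
pip show -f requests | sed -n '/^Files:/,$p'   # only the file list; paths are relative to the "Location:" shown above it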
{ "source": [ "https://serverfault.com/questions/153843", "https://serverfault.com", "https://serverfault.com/users/847/" ] }
153,875
I'm using Mac OS X. I'm trying to copy some files with the cp command in a build script, like this: cp ./src/*/*.h ./aaa But this command fires an error if there is no .h file in the ./src directory. How can I make the command not fire the error? (silent failure) The error makes the build fail, but I just want it to copy only when there are header files.
If you're talking about the error message, you can suppress that by sending it to the bit bucket: cp ./src/*/*.h ./aaa 2>/dev/null If you want to suppress the exit code and the error message: cp ./src/*/*.h ./aaa 2>/dev/null || :
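Another option is to sidestep the glob entirely and let find decide whether there is anything to copy. A sketch (not tested on OS X's BSD find, but these are standard options):
# Copies nothing (and prints no error) when no headers exist
find ./src -mindepth 2 -maxdepth 2 -name '*.h' -exec cp {} ./aaa \;
The -mindepth 2 -maxdepth 2 pair mimics the original ./src/*/*.h pattern, i.e. headers exactly one directory below ./src; drop them if you want any depth.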
{ "source": [ "https://serverfault.com/questions/153875", "https://serverfault.com", "https://serverfault.com/users/46527/" ] }
153,970
Is it "safe" to delete any of the subfolders in C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\ from my drive to free up space? Or is it needed for upgrade/uninstall and other patches? Right now the Update Cache folder contains KB968369 (sp1) which takes up 416mb, which seems like a candidate for freeing up space.
According to this site , you shouldn't. You can compress it and remove log files, but you shouldn't delete it. If I hadn't googled for it I would probably have tried searching to see if any of the files were open, and if not, then copied them to a new location for storage until I was certain the server worked well without it, and if there was an issue recopy them over. Then again I also have been known to delete the hidden/compressed update files in the Windows directory which is also considered bad practice from what some have said and have had no horrible side effects while freeing up hundreds of meg in space. An alternative would be to look into installing larger drives and expanding your disk partitions. Depending on the role of the server this could be a major project, though, but in the end if you're in need of freeing space on a database server it's probably time to look at upgrading that subsystem, at least.
{ "source": [ "https://serverfault.com/questions/153970", "https://serverfault.com", "https://serverfault.com/users/41673/" ] }
154,650
What's the best way to find printers on my network with nmap? Is it possible to save the printers' IPs to a file?
If you're leery of doing OS Fingerprinting for some reason, you can do a more targeted port-scan: nmap -p 9100,515,631 192.168.1.1/24 -oX printers.xml That'll scan for ports common to printers and printing systems. 9100 = the RAW port for most printers, also known as the direct-IP port 515 = the LPR/LPD port, for most printers, as well as older print-servers 631 = the IPP port, for most modern printers, and CUPS-based print-server Output is in XML.
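To answer the second part of the question — saving just the responding IPs to a file — grepable output is simpler to post-process than XML. A sketch (the subnet is an example, and --open requires a reasonably recent nmap):
# Keep only hosts with at least one of the printer ports open, then pull the IP column
nmap -p 9100,515,631 --open -oG - 192.168.1.0/24 | awk '/open/{print $2}' > printers.txt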
{ "source": [ "https://serverfault.com/questions/154650", "https://serverfault.com", "https://serverfault.com/users/46770/" ] }
154,651
I am running Visual SVN Server (with Apache) on a Windows 7 computer and network. After about 15-20 minutes of my first commit/update, I am unable to access the repository via Tortoise SVN. The error message I get is: OPTIONS of " https://jason/svn/repository1 ": could not connect to server ( https://jason ) Restarting the Visual SVN Server service helps sometimes but fails quite often. The only sure-shot way to get it working is to restart the computer. The server - https://jason - is also not accessible via the browser when I get this error. 1) I tried reinstalling Windows 7, Visual SVN Server and Tortoise SVN but I still keep getting this error. 2) I searched several forums but I don't seem to be able to find an answer. Please help.
{ "source": [ "https://serverfault.com/questions/154651", "https://serverfault.com", "https://serverfault.com/users/46772/" ] }
154,957
Is it possible to set up a user on Ubuntu with OpenSSH so that ssh does not use password authentication but sftp does? I assume that if I change /etc/ssh/ssh_config to have PasswordAuthentication yes this makes it possible for users to use passwords to log in with both ssh and sftp. Edit: My purpose here is to let some users sftp with a password instead of a keyfile. But I do not want ssh users to be able to log in with a password, I want them to have to use a keyfile. If it helps, I do not need the sftp users to be able to log in; they only need to do sftp.
As I understand you have (at least for this particular problem) two distinct groups of users, one being able to login via SSH and get an interactive shell (let's call the group ssh ) and one being able to login via SFTP and only get an SFTP shell (let's call the group sftp ). Now create the groups ssh and sftp on your system with groupadd , put the respective users in the groups ( gpasswd -a $USERNAME $GROUPNAME ) and append the following lines at the end ( this is important! ) of your sshd_config located at /etc/ssh/sshd_config : Match Group sftp PasswordAuthentication yes # Further directives for users in the "sftp" group Match Group ssh PasswordAuthentication no # Further directives for users in the "ssh" group Read about the Match directive in sshd_config(5) and about the allowed patterns in ssh_config(5) . You'll also have to restart the ssh process for this to take effect: sudo /etc/init.d/ssh restart
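Once that is in place, a quick way to check the behaviour from a client (the user and host names here are hypothetical placeholders):
# Force password auth only: should be refused for a user in the "ssh" group, accepted for one in the "sftp" group
ssh  -o PreferredAuthentications=password -o PubkeyAuthentication=no shelluser@server
sftp -o PreferredAuthentications=password -o PubkeyAuthentication=no sftpuser@server
Note that the Match blocks key off the group, not the protocol, so make sure each account is only in the group whose policy you want applied to it.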
{ "source": [ "https://serverfault.com/questions/154957", "https://serverfault.com", "https://serverfault.com/users/11576/" ] }
154,958
I would like to know about your strategies on what to do when one of the Hadoop servers' disks fails. Let's say I have multiple (>15) Hadoop servers and 1 namenode, and one of the 6 disks on a slave stops working; the disks are connected via SAS. I don't care about retrieving data from this disk, but I'm asking about general strategies for keeping the cluster running. What do you do?
{ "source": [ "https://serverfault.com/questions/154958", "https://serverfault.com", "https://serverfault.com/users/46731/" ] }
155,113
I'm trying to set up virtual hosts on Mac OS X. I've been modifying httpd.conf and restarting the server, but haven't had any luck in getting it to work. Furthermore, I notice that it's not serving files in the DocumentRoot mentioned in httpd.conf (Libraries/WebServer/Documents), but in a different directory (/usr/local/apache2/htdocs). I don't see this folder mentioned anywhere in httpd.conf. Also, PHP works, but the "LoadModule php5_module" line is commented out. This makes me think it's using another .conf file. How can I figure out which config is actually being loaded? Update: I just deleted that httpd.conf and Apache behaves the same after restart, so it definitely wasn't using it!
With any *nix application, the easiest method is to query the binary itself. In the case of httpd, I'd imagine the process would be something like this: $ whereis httpd /usr/sbin/httpd $ /usr/sbin/httpd -V Server version: Apache/2.2.11 (Unix) Server built: Jun 17 2009 14:55:13 Server's Module Magic Number: 20051115:21 Server loaded: APR 1.2.7, APR-Util 1.2.7 Compiled using: APR 1.2.7, APR-Util 1.2.7 Architecture: 64-bit Server MPM: Prefork threaded: no forked: yes (variable process count) Server compiled with.... -D APACHE_MPM_DIR="server/mpm/prefork" -D APR_HAS_SENDFILE -D APR_HAS_MMAP -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled) -D APR_USE_FLOCK_SERIALIZE -D APR_USE_PTHREAD_SERIALIZE -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT -D APR_HAS_OTHER_CHILD -D AP_HAVE_RELIABLE_PIPED_LOGS -D DYNAMIC_MODULE_LIMIT=128 -D HTTPD_ROOT="/usr" -D SUEXEC_BIN="/usr/bin/suexec" -D DEFAULT_PIDLOG="/private/var/run/httpd.pid" -D DEFAULT_SCOREBOARD="logs/apache_runtime_status" -D DEFAULT_LOCKFILE="/private/var/run/accept.lock" -D DEFAULT_ERRORLOG="logs/error_log" -D AP_TYPES_CONFIG_FILE="/private/etc/apache2/mime.types" -D SERVER_CONFIG_FILE="/private/etc/apache2/httpd.conf" As you can see - my OS X says the binary, if not directed otherwise, will use the config file: /private/etc/apache2/httpd.conf If that doesn't help, perhaps Christopher's suggestion of find is the next step.
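As a shortcut, you can pull just the relevant lines out of that output; a sketch, bearing in mind the binary path may differ on your system:
/usr/sbin/httpd -V | grep -E 'HTTPD_ROOT|SERVER_CONFIG_FILE'
# The active config is HTTPD_ROOT + SERVER_CONFIG_FILE (unless httpd was started with an explicit -f flag)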
{ "source": [ "https://serverfault.com/questions/155113", "https://serverfault.com", "https://serverfault.com/users/36175/" ] }
155,239
I'm new to Linux and I want to schedule a reboot at midnight. How should I do it? Edit: I'm sorry I didn't put the complete details. I want a reboot every 3rd Saturday of the month at 23:30. I don't know what's wrong but I cannot find crontab. What I have is cron.d ; cron.daily ; cron.weekly ; cron.monthly ; I'm sorry for the noob question. Please help me. Thanks.
Type shutdown -r 0:00 and it will reboot at midnight. If you want to reboot each night, add a cron entry using crontab -e as root to run shutdown -r each midnight @midnight shutdown -r now
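For the edited requirement (every 3rd Saturday at 23:30), a common trick is to restrict the day-of-month range and test the weekday in the command itself, because cron ORs the day-of-month and day-of-week fields when both are restricted. A sketch you could put in root's crontab via crontab -e (if you use a file under /etc/cron.d instead, add a user field such as "root" before the command):
# The third Saturday always falls on the 15th-21st; the date test keeps it to Saturdays only
# (%, which is special to cron, has to be escaped as \%)
30 23 15-21 * * [ "$(date +\%u)" -eq 6 ] && /sbin/shutdown -r now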
{ "source": [ "https://serverfault.com/questions/155239", "https://serverfault.com", "https://serverfault.com/users/46929/" ] }
155,299
I work most of the time remotely from home. To gain access to the different servers (via SSH) I have to use OpenVPN. I would like to connect to all of them (three, sometimes four) at once, so I don't have to switch all the time. My setup is Windows 7 and a PC with only one NIC. Is it possible (if yes, how?) to connect to multiple VPNs at once (maybe with some kind of a virtual network device)? Thanks, Andreas
You will need to create some additional TAP-WIN32 adapters if you haven't already. If you are using OpenVPN 2.3.x or later, run addtap.bat: C:\Program Files\TAP-Windows\bin\addtap.bat If you are using an older version of OpenVPN, run the tapinstall command: C:\Program Files\OpenVPN\bin\tapinstall.exe C:\Program Files (x86)\OpenVPN\bin\tapinstall.exe (Note: you may need to open the command prompt with administrator privileges.) Obviously you will also need to make sure that nothing about your various VPNs conflicts with each other. For example, if one is modifying the default gateway you are probably going to have problems. If nothing is changing the default gateway and there are no overlapping IP addresses then you may be OK. I am not certain if it is needed, but I also renamed all my TAP-WIN32 adapters with names like VPNDEV1 , VPNDEV2 , VPNDEV3 . In my openvpn configurations I specified the device I wanted to use by using the configuration directive dev-node VPNDEV2 .
{ "source": [ "https://serverfault.com/questions/155299", "https://serverfault.com", "https://serverfault.com/users/330/" ] }
155,629
Is there a way to use rdesktop or another Linux client to connect to a server that requires Network Level Authentication? From Windows Server 2008 R2 -- Control Panel -- System And Security -- System -- Allow Remote Access there is an option that says "Allow connections only from computers running Remote Desktop with Network Level Authentication". So with this enabled I cannot connect from Linux. I can connect from XP, but you need SP3 and I had to edit a couple of things in the registry for it to work.
FreeRDP (a spin-off from rdesktop) supports this in recent versions.
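With a reasonably recent FreeRDP build, something along these lines should negotiate NLA — a sketch only, since the option syntax has changed between FreeRDP releases and the host/user names are placeholders:
# FreeRDP 1.1+/2.x style options: /v: host, /u: user, /sec:nla to require Network Level Authentication
xfreerdp /v:server2008r2.example.com /u:DOMAIN\\user /sec:nla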
{ "source": [ "https://serverfault.com/questions/155629", "https://serverfault.com", "https://serverfault.com/users/2561/" ] }
155,882
I have a forum with a lot of visitors. Some days the load increases to 40 without an increase in the number of visitors. As you can see from the output below, the waiting time is high (57%). How do I find the reason for that? The server software is Apache, MySQL and PHP. root@server:~# top top - 13:22:08 up 283 days, 22:06, 1 user, load average: 13.84, 24.75, 22.79 Tasks: 333 total, 1 running, 331 sleeping, 0 stopped, 1 zombie Cpu(s): 20.6%us, 7.9%sy, 0.0%ni, 13.4%id, 57.1%wa, 0.1%hi, 0.9%si, 0.0%st Mem: 4053180k total, 3868680k used, 184500k free, 136380k buffers Swap: 9936160k total, 12144k used, 9924016k free, 2166552k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 23930 mysql 20 0 549m 122m 6580 S 90 3.1 4449:04 mysqld 17422 www-data 20 0 223m 20m 10m S 2 0.5 0:00.21 apache2 17555 www-data 20 0 222m 19m 9968 S 2 0.5 0:00.13 apache2 17264 www-data 20 0 225m 19m 8972 S 1 0.5 0:00.17 apache2 17251 www-data 20 0 220m 12m 4912 S 1 0.3 0:00.12 apache2 . root@server:~# top top - 13:39:59 up 283 days, 22:24, 1 user, load average: 6.66, 10.39, 13.95 Tasks: 318 total, 1 running, 317 sleeping, 0 stopped, 0 zombie Cpu(s): 13.6%us, 4.2%sy, 0.0%ni, 40.5%id, 40.6%wa, 0.2%hi, 0.8%si, 0.0%st Mem: 4053180k total, 4010992k used, 42188k free, 119544k buffers Swap: 9936160k total, 12160k used, 9924000k free, 2290716k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 23930 mysql 20 0 549m 122m 6580 S 44 3.1 4457:30 mysqld 19946 www-data 20 0 223m 21m 10m S 5 0.6 0:00.77 apache2 17316 www-data 20 0 226m 23m 11m S 1 0.6 0:01.76 apache2 17333 www-data 20 0 222m 21m 11m S 1 0.5 0:01.55 apache2 18212 www-data 20 0 225m 22m 11m S 1 0.6 0:01.58 apache2 19528 www-data 20 0 220m 13m 5480 S 1 0.3 0:00.63 apache2 19600 www-data 20 0 224m 20m 11m S 1 0.5 0:00.73 apache2 19942 www-data 20 0 225m 21m 10m S 1 0.5 0:00.82 apache2 20232 www-data 20 0 222m 16m 8760 S 1 0.4 0:00.65 apache2 20243 www-data 20 0 223m 21m 11m S 1 0.5 0:00.57 apache2 20299 www-data 20 0 225m 20m 9m S 1 0.5 0:00.67 apache2 20441 www-data 20 0 225m 21m 10m S 1 0.5 0:00.57 apache2 21201 www-data 20 0 220m 12m 5148 S 1 0.3 0:00.19 apache2 21362 www-data 20 0 220m 12m 5032 S 1 0.3 0:00.17 apache2 21364 www-data 20 0 220m 12m 4916 S 1 0.3 0:00.14 apache2 21366 www-data 20 0 220m 12m 5124 S 1 0.3 0:00.22 apache2 21373 www-data 20 0 222m 14m 7060 S 1 0.4 0:00.26 apache2
Here are a few tools to find disk activity: iotop vmstat 1 iostat 1 lsof strace -e trace=open <application> strace -e trace=open -p <pid> In ps auxf you'll also see which processes are in uninterruptible disk sleep ( D ) because they are waiting for I/O. Some days the load increases to 40 without an increase in the number of visitors. You may also want to create a backup and see if the hard drive is slowly failing. A hard drive generally starts to slow down before it dies. This could also explain the high load.
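A crude but effective way to catch the processes stuck in that D state while the load spikes (plain shell, nothing exotic):
# Print a timestamp plus any process in uninterruptible sleep, once a second, with its wait channel
while true; do date; ps -eo state,pid,wchan:32,cmd | awk '$1=="D"'; sleep 1; done
If mysqld keeps showing up there, the next step is usually looking at slow queries and InnoDB/disk tuning rather than Apache.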
{ "source": [ "https://serverfault.com/questions/155882", "https://serverfault.com", "https://serverfault.com/users/34981/" ] }
155,893
I am looking for a command like fsck or something similar for Oracle to make sure that the indexing is proper. I am doing an HP-UX server installation and there was an error while installing some depots; I force-removed those, but I want to do a consistency check and then proceed with the installation. Basically I want to check the database integrity of my DB and am looking for a simple command like fsck!
{ "source": [ "https://serverfault.com/questions/155893", "https://serverfault.com", "https://serverfault.com/users/46152/" ] }
155,973
http://technet.microsoft.com/en-us/library/cc732742(WS.10).aspx The above URL describes how to start/stop an IIS 7 app pool. However, I have spaces in my app pool name. Double quotes don't work. Ideas? C:\Windows>C:\Windows\System32\inetsrv\appcmd stop apppool /apppool.name: My App Services Failed to process input: The parameter 'App' must begin with a / or - (HRESULT=80070057). C:\Windows>C:\Windows\System32\inetsrv\appcmd stop apppool /apppool.name: "My App Services" ERROR ( message:The attribute "apppool.name" is not supported in the current command usage. )
Type appcmd list apppool , and use exactly what it lists there in your appcmd start apppool /apppool.name: Names with spaces should be escaped with double quotes. For example: %SYSTEMROOT%\System32\inetsrv\appcmd stop apppool /apppool.name:"My App Services" Post the exact command you're trying to run ; perhaps you missed the colon or there's another problem with the syntax? Edit - you're adding a space between the colon and the first double-quote. Remove that space, use the double-quote, and see what happens.
{ "source": [ "https://serverfault.com/questions/155973", "https://serverfault.com", "https://serverfault.com/users/17341/" ] }
155,989
I have two directory trees with similar layouts, i.e. . |-- dir1 | |-- a | | |-- file1.txt | | `-- file2.txt | |-- b | | `-- file3.txt | `-- c | `-- file4.txt `-- dir2 |-- a | |-- file5.txt | `-- file6.txt |-- b | |-- file7.txt | `-- file8.txt `-- c |-- file10.txt `-- file9.txt I would like to merge the dir1 and dir2 directory trees to create: merged/ |-- a | |-- file1.txt | |-- file2.txt | |-- file5.txt | `-- file6.txt |-- b | |-- file3.txt | |-- file7.txt | `-- file8.txt `-- c |-- file10.txt |-- file4.txt `-- file9.txt I know that I can do this using the "cp" command, but I want to move the files instead of copying, because the actual directories I want to merge are really large and contain lots of files (millions). If I use "mv" I get the "File exists" error because of conflicting directory names. UPDATE: You can assume that there are no duplicate files between the two directory trees.
rsync -ax --link-dest=dir1/ dir1/ merged/ rsync -ax --link-dest=dir2/ dir2/ merged/ This would create hardlinks rather than moving the files; you can verify that everything was copied correctly, and then remove dir1/ and dir2/.
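Before deleting the originals, it's worth sanity-checking that merged/ really contains everything. A couple of quick (if slow on millions of files) checks, assuming no duplicate names between the trees as stated in the question:
# File counts should add up
find dir1 dir2 -type f | wc -l
find merged    -type f | wc -l
# Hardlinked copies share content, so this should print nothing
diff -r dir1/ merged/ | grep -v '^Only in merged'
diff -r dir2/ merged/ | grep -v '^Only in merged'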
{ "source": [ "https://serverfault.com/questions/155989", "https://serverfault.com", "https://serverfault.com/users/22495/" ] }
156,437
Seems like chown with the recursive flag will not work on hidden directories or files. Is there any simple workaround for that?
I'm pretty sure the -R flag does work - it always has for me anyway. What won't work, and what tripped me up early in my command line usage, is using * in a directory with hidden files/directories. So doing $ chown -R user:group /home/user/* will not do the hidden files and directories. However if you follow it with $ chown -R user:group /home/user/.[^.]* then you will do all the hidden files (but not . or .. , as /home/user/.* would). Having said all that, I would expect $ chown -R user:group /home/user to get all the hidden files and directories inside /home/user - though that will of course also change the ownership of the directory itself, which might not be what you intended.
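If you're using bash, another way around the hidden-file glob problem is dotglob; a sketch, with user:group as a placeholder for the owner you actually want:
shopt -s dotglob          # make * match dotfiles too (still excludes . and ..)
chown -R user:group /home/user/*
shopt -u dotglob          # turn it back off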
{ "source": [ "https://serverfault.com/questions/156437", "https://serverfault.com", "https://serverfault.com/users/47234/" ] }
157,375
Is there any reason why I would want to have iptables -A INPUT -j REJECT instead of iptables -A INPUT -j DROP
As a general rule, use REJECT when you want the other end to know the port is unreachable; use DROP for connections to hosts you don't want people to see. Usually, all rules for connections inside your LAN should use REJECT. For the Internet, with the exception of ident on certain servers, connections from the Internet are usually DROPPED. Using DROP makes the connection appear to be to an unoccupied IP address. Scanners may choose not to continue scanning addresses which appear unoccupied. Given that NAT can be used to redirect a connection on the firewall, the existence of a well known service does not necessarily indicate the existence of a server on an address. Ident should be passed or rejected on any address providing SMTP service. However, use of Ident look-ups by SMTP servers has fallen out of use. There are chat protocols which also rely on a working ident service. EDIT: When using DROP rules: - UDP packets will be dropped and the behavior will be the same as connecting to an unfirewalled port with no service. - TCP packets will return an ACK/RST which is the same response that an open port with no service on it will respond with. Some routers will respond with an ACK/RST on behalf of servers which are down. When using REJECT rules an ICMP packet is sent indicating the port is unavailable.
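Putting that into concrete rules, something like the following reflects the advice above — a sketch only, where the interface names, subnets and ports are placeholders and you would normally combine these with your established/related rules:
# LAN side: tell clients straight away that the port is closed
iptables -A INPUT -i eth1 -s 192.168.0.0/24 -p tcp --dport 25 -j REJECT --reject-with tcp-reset
# Internet side: silently drop traffic to services you don't expose
iptables -A INPUT -i eth0 -p tcp --dport 23 -j DROP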
{ "source": [ "https://serverfault.com/questions/157375", "https://serverfault.com", "https://serverfault.com/users/21875/" ] }
157,461
I've just recently installed Windows 7, and I'm trying to set up a network share to be accessible by everyone on my (home) network. I'm used to XP, so it's taking me a little while to get used to the new way of sharing folders and setting permissions in 7. So far, I have been able to: share a directory on the network change permissions on the directory so that users can actually see the contents Now my problem is that every file in the directory is viewable, but not readable to network users. From my other machine I can see that the file exists, but when I try to copy it, I get a permissions error. Is there a way to open the permissions on all the files in a directory to be readable by everyone? So far I have only found a way to do it one file at a time, and that's just awful. In unix terms, I want all the directories to be 755, and all the files to be 644. How can I do this recursively?
Here's how I was able to do it: right-click on the directory, go to Properties Security tab, Advanced.. Permissions tab, Change Permissions... Add... Advanced... click Find Now, then find and click on "Everyone", click OK click OK "Everyone" should now show up in the list, with "Read & execute" permissions check the box for "Replace all child object permissions with inheritable permissions from this object" click OK. That should recursively give read access to "Everyone".
{ "source": [ "https://serverfault.com/questions/157461", "https://serverfault.com", "https://serverfault.com/users/47528/" ] }
157,484
We have a SQL 2005 cluster on a W2K8 cluster. It is a named instance, say SRV1\A. Then I built a new W2K8 cluster (with a different cluster service name) but the same service account. Then I installed a new SQL 2005 cluster, say SRV2\A. Now when I bring down the SQL Server resources on SRV1 and try to rename SRV2\A to SRV1\A through the cluster admin, I get the error that the network name already exists. I have tried bringing down an old cluster and installing a new cluster with the same name before, and it worked. Why am I not able to reuse the name? Any advice would be very helpful.
{ "source": [ "https://serverfault.com/questions/157484", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }