source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
722,563 | Our network admin recently enabled HTTPS inspection on our firewall/router. For IE users this is fine because the certs have all been distributed via Active Directory for domain-joined machines. However, we have a number of Firefox users that are now throwing certificate errors on practically every HTTPS site. Firefox uses their own CA store, and they're real proud of it too . Is there any way to get Firefox to trust the system certificate store by default? I see a lot of posts on how to do this in Linux, but nothing for Windows. I suspect from this post that it's not possible, but that post is almost 4 years old. | Since Firefox 49 there is some support for Windows CA certificates and support for Active Directory provided enterprise root certificates since Firefox 52. It is also supported in macOS to read from the Keychain since version 63. Since Firefox 68 this feature is enabled by default in the ESR (enterprise) version, but not in the (standard) rapid release. You can enable this feature for Windows and macOS in about:config by creating this boolean value: security.enterprise_roots.enabled and set it to true . For GNU/Linux, this is usually managed by p11-kit-trust and no flag is needed. Deploying the configuration system wide Since Firefox 64, there is a new and recommended way by using policies, documented at https://support.mozilla.org/en-US/kb/setting-certificate-authorities-firefox For legacy versions, the Firefox installation folder can be retrieved from Windows registry, then go to defaults\pref\ subdirectory and create a new file with the following: /* Allows Firefox reading Windows certificates */
pref("security.enterprise_roots.enabled", true); Save it with a .js extension, e.g. trustwincerts.js, and restart Firefox. The entry will appear in about:config for all users. (A policies.json sketch follows this entry.) Deploying Windows Certificates system wide From Firefox 49 until 51, only the "Root" store is supported; since Firefox 52, other stores are supported as well, including certificates added from the domain via AD. The following is slightly out of scope, but it covers the only certificate store Firefox supported in versions 49 to 51 and is also useful for local testing. Because this deploys the certificate for all users of the machine, it requires Administrator privileges in your CMD/PowerShell window or in your automated deployment script: certutil -addstore Root path\to\cafile.pem This may also be done from the Management Console by clicking through a lot of windows if you prefer the mouse way ( How to: View Certificates with the MMC Snap-In ). | {
"source": [
"https://serverfault.com/questions/722563",
"https://serverfault.com",
"https://serverfault.com/users/210684/"
]
} |
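Following up on the Firefox entry above: since Firefox 64 the same behaviour can be shipped through an enterprise policy file rather than a defaults pref. A minimal sketch of such a policies.json is shown below; the ImportEnterpriseRoots policy is the one Mozilla documents for trusting OS-installed roots, but treat the exact deployment location (a distribution folder next to the Firefox binary, or GPO/Intune on Windows) as something to verify for your environment.

```json
{
  "policies": {
    "Certificates": {
      "ImportEnterpriseRoots": true
    }
  }
}
```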
722,803 | I get the error: (error) NOAUTH Authentication required. When in redis-cli and trying to display the KEYS * . I've only set a requirepass not an auth afaiac. I'm in the redis.conf but do not know what to do. | Setting the requirepass configuration directive causes the server to require password authentication with the AUTH command before sending other commands. The redis.conf file states that clearly: Require clients to issue AUTH before processing any other commands. This might be useful in environments in which you do not trust others with access to the host running redis-server. | {
"source": [
"https://serverfault.com/questions/722803",
"https://serverfault.com",
"https://serverfault.com/users/208527/"
]
} |
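A quick illustration for the Redis entry above: once requirepass is set, clients must authenticate before issuing any other command. A hedged sketch, with a placeholder password; note that passing it with -a exposes it to the process list and shell history.

```sh
# Inside an interactive redis-cli session
redis-cli
127.0.0.1:6379> AUTH your-requirepass-value
OK
127.0.0.1:6379> KEYS *

# Or non-interactively (quote the * so the shell does not expand it)
redis-cli -a your-requirepass-value KEYS '*'
```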
722,981 | What's are the differences/similarities between a "bastion host" and a "jump host"? Are they usually used interchangeably? | A Bastion host is a machine that is outside of your security zone. And is expected to be a weak point, and in need of additional security considerations. Because your security devices are technically outside of your security zone, firewalls and security appliances are also considered in most cases Bastion hosts. Usually we're talking about: DNS Servers FTP Servers VPN Servers A Jump Server is intended to breach the gap between two security zones. The intended purpose here is to have a gateway to access something inside of the security zone, from the DMZ. The main reason I've seen this utilized is to make sure that the one known entrance to a specific server that has to be accessible from the outside is kept up to date and is known in its purpose as only having to connect to (a) specific host(s). Usually this is a hardened Linux box only used for SSH. | {
"source": [
"https://serverfault.com/questions/722981",
"https://serverfault.com",
"https://serverfault.com/users/46363/"
]
} |
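To make the jump-host pattern from the preceding answer concrete, here is a hedged OpenSSH client sketch; host and user names are placeholders, and ProxyJump requires OpenSSH 7.3 or newer.

```sh
# One-off: hop through the hardened jump box to reach a host inside the security zone
ssh -J admin@jump.example.com admin@internal-host.example.com

# Or persist it in ~/.ssh/config:
#   Host internal-host.example.com
#       User admin
#       ProxyJump admin@jump.example.com
```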
724,469 | I have a single 50 GB file on server_A, and I'm copying it to server_B. I run server_A$ rsync --partial --progress --inplace --append-verify 50GB_file root@server_B:50GB_file Server_B has 32 GB of RAM with 2 GB swap. It is mostly idle and should have had lots of free RAM. It has plenty of disk space. At about 32 GB, the transfer aborts because the remote side closed the connection. Server_B has now dropped off the network. We ask the data center to reboot it. When I look at the kernel log from before it crashed, I see that it was using 0 bytes of swap, and the process list was using very little memory (the rsync process was listed as using 600 KB of RAM), but the oom_killer was going wild, and the last thing in the log is where it kills metalog's kernel reader process. This is kernel 3.2.59, 32-bit (so no process can map more than 4 GB anyway). It's almost as if Linux gave more priority to caching than to long-lived running daemons. What gives?? And how can I stop it from happening again? Here is the output of the oom_killer: Sep 23 02:04:16 [kernel] [1772321.850644] clamd invoked oom-killer: gfp_mask=0x84d0, order=0, oom_adj=0, oom_score_adj=0
Sep 23 02:04:16 [kernel] [1772321.850649] Pid: 21832, comm: clamd Tainted: G C 3.2.59 #21
Sep 23 02:04:16 [kernel] [1772321.850651] Call Trace:
Sep 23 02:04:16 [kernel] [1772321.850659] [<c01739ac>] ? dump_header+0x4d/0x160
Sep 23 02:04:16 [kernel] [1772321.850662] [<c0173bf3>] ? oom_kill_process+0x2e/0x20e
Sep 23 02:04:16 [kernel] [1772321.850665] [<c0173ff8>] ? out_of_memory+0x225/0x283
Sep 23 02:04:16 [kernel] [1772321.850668] [<c0176438>] ? __alloc_pages_nodemask+0x446/0x4f4
Sep 23 02:04:16 [kernel] [1772321.850672] [<c0126525>] ? pte_alloc_one+0x14/0x2f
Sep 23 02:04:16 [kernel] [1772321.850675] [<c0185578>] ? __pte_alloc+0x16/0xc0
Sep 23 02:04:16 [kernel] [1772321.850678] [<c0189e74>] ? vma_merge+0x18d/0x1cc
Sep 23 02:04:16 [kernel] [1772321.850681] [<c01856fa>] ? handle_mm_fault+0xd8/0x15d
Sep 23 02:04:16 [kernel] [1772321.850685] [<c012305a>] ? do_page_fault+0x20e/0x361
Sep 23 02:04:16 [kernel] [1772321.850688] [<c018a9c4>] ? sys_mmap_pgoff+0xa2/0xc9
Sep 23 02:04:16 [kernel] [1772321.850690] [<c0122e4c>] ? vmalloc_fault+0x237/0x237
Sep 23 02:04:16 [kernel] [1772321.850694] [<c08ba7e6>] ? error_code+0x5a/0x60
Sep 23 02:04:16 [kernel] [1772321.850697] [<c08b0000>] ? cpuid4_cache_lookup_regs+0x372/0x3b2
Sep 23 02:04:16 [kernel] [1772321.850700] [<c0122e4c>] ? vmalloc_fault+0x237/0x237
Sep 23 02:04:16 [kernel] [1772321.850701] Mem-Info:
Sep 23 02:04:16 [kernel] [1772321.850703] DMA per-cpu:
Sep 23 02:04:16 [kernel] [1772321.850704] CPU 0: hi: 0, btch: 1 usd: 0
Sep 23 02:04:16 [kernel] [1772321.850706] CPU 1: hi: 0, btch: 1 usd: 0
Sep 23 02:04:16 [kernel] [1772321.850707] CPU 2: hi: 0, btch: 1 usd: 0
Sep 23 02:04:16 [kernel] [1772321.850709] CPU 3: hi: 0, btch: 1 usd: 0
Sep 23 02:04:16 [kernel] [1772321.850711] CPU 4: hi: 0, btch: 1 usd: 0
Sep 23 02:04:16 [kernel] [1772321.850713] CPU 5: hi: 0, btch: 1 usd: 0
Sep 23 02:04:16 [kernel] [1772321.850714] CPU 6: hi: 0, btch: 1 usd: 0
Sep 23 02:04:16 [kernel] [1772321.850716] CPU 7: hi: 0, btch: 1 usd: 0
Sep 23 02:04:16 [kernel] [1772321.850718] Normal per-cpu:
Sep 23 02:04:16 [kernel] [1772321.850719] CPU 0: hi: 186, btch: 31 usd: 70
Sep 23 02:04:16 [kernel] [1772321.850721] CPU 1: hi: 186, btch: 31 usd: 116
Sep 23 02:04:16 [kernel] [1772321.850723] CPU 2: hi: 186, btch: 31 usd: 131
Sep 23 02:04:16 [kernel] [1772321.850724] CPU 3: hi: 186, btch: 31 usd: 76
Sep 23 02:04:16 [kernel] [1772321.850726] CPU 4: hi: 186, btch: 31 usd: 29
Sep 23 02:04:16 [kernel] [1772321.850728] CPU 5: hi: 186, btch: 31 usd: 61
Sep 23 02:04:16 [kernel] [1772321.850731] CPU 7: hi: 186, btch: 31 usd: 17
Sep 23 02:04:16 [kernel] [1772321.850733] HighMem per-cpu:
Sep 23 02:04:16 [kernel] [1772321.850734] CPU 0: hi: 186, btch: 31 usd: 2
Sep 23 02:04:16 [kernel] [1772321.850736] CPU 1: hi: 186, btch: 31 usd: 69
Sep 23 02:04:16 [kernel] [1772321.850738] CPU 2: hi: 186, btch: 31 usd: 25
Sep 23 02:04:16 [kernel] [1772321.850739] CPU 3: hi: 186, btch: 31 usd: 27
Sep 23 02:04:16 [kernel] [1772321.850741] CPU 4: hi: 186, btch: 31 usd: 7
Sep 23 02:04:16 [kernel] [1772321.850743] CPU 5: hi: 186, btch: 31 usd: 188
Sep 23 02:04:16 [kernel] [1772321.850744] CPU 6: hi: 186, btch: 31 usd: 25
Sep 23 02:04:16 [kernel] [1772321.850746] CPU 7: hi: 186, btch: 31 usd: 158
Sep 23 02:04:16 [kernel] [1772321.850750] active_anon:117913 inactive_anon:9942 isolated_anon:0
Sep 23 02:04:16 [kernel] [1772321.850751] active_file:106466 inactive_file:7784521 isolated_file:0
Sep 23 02:04:16 [kernel] [1772321.850752] unevictable:40 dirty:0 writeback:61 unstable:0
Sep 23 02:04:16 [kernel] [1772321.850753] free:143494 slab_reclaimable:128312 slab_unreclaimable:4089
Sep 23 02:04:16 [kernel] [1772321.850754] mapped:6706 shmem:308 pagetables:915 bounce:0
Sep 23 02:04:16 [kernel] [1772321.850759] DMA free:3624kB min:140kB low:172kB high:208kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolate
d(file):0kB present:15808kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:240kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tm
p:0kB pages_scanned:0 all_unreclaimable? yes
Sep 23 02:04:16 [kernel] [1772321.850763] lowmem_reserve[]: 0 869 32487 32487
Sep 23 02:04:16 [kernel] [1772321.850770] Normal free:8056kB min:8048kB low:10060kB high:12072kB active_anon:0kB inactive_anon:0kB active_file:248kB inactive_file:388kB unevictable:0kB isolated(anon)
:0kB isolated(file):0kB present:890008kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:513008kB slab_unreclaimable:16356kB kernel_stack:1888kB pagetables:3660kB unstable:0
kB bounce:0kB writeback_tmp:0kB pages_scanned:1015 all_unreclaimable? yes
Sep 23 02:04:16 [kernel] [1772321.850774] lowmem_reserve[]: 0 0 252949 252949
Sep 23 02:04:16 [kernel] [1772321.850785] lowmem_reserve[]: 0 0 0 0
Sep 23 02:04:16 [kernel] [1772321.850788] DMA: 0*4kB 7*8kB 3*16kB 6*32kB 4*64kB 6*128kB 5*256kB 2*512kB 0*1024kB 0*2048kB 0*4096kB = 3624kB
Sep 23 02:04:16 [kernel] [1772321.850795] Normal: 830*4kB 80*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 1*4096kB = 8056kB
Sep 23 02:04:16 [kernel] [1772321.850802] HighMem: 13*4kB 14*8kB 2*16kB 2*32kB 0*64kB 0*128kB 2*256kB 2*512kB 3*1024kB 0*2048kB 136*4096kB = 561924kB
Sep 23 02:04:16 [kernel] [1772321.850809] 7891360 total pagecache pages
Sep 23 02:04:16 [kernel] [1772321.850811] 0 pages in swap cache
Sep 23 02:04:16 [kernel] [1772321.850812] Swap cache stats: add 0, delete 0, find 0/0
Sep 23 02:04:16 [kernel] [1772321.850814] Free swap = 1959892kB
Sep 23 02:04:16 [kernel] [1772321.850815] Total swap = 1959892kB
Sep 23 02:04:16 [kernel] [1772321.949081] 8650736 pages RAM
Sep 23 02:04:16 [kernel] [1772321.949084] 8422402 pages HighMem
Sep 23 02:04:16 [kernel] [1772321.949085] 349626 pages reserved
Sep 23 02:04:16 [kernel] [1772321.949086] 7885006 pages shared
Sep 23 02:04:16 [kernel] [1772321.949087] 316864 pages non-shared
Sep 23 02:04:16 [kernel] [1772321.949089] [ pid ] uid tgid total_vm rss cpu oom_adj oom_score_adj name
(rest of process list omitted)
Sep 23 02:04:16 [kernel] [1772321.949656] [14579] 0 14579 579 171 5 0 0 rsync
Sep 23 02:04:16 [kernel] [1772321.949662] [14580] 0 14580 677 215 5 0 0 rsync
Sep 23 02:04:16 [kernel] [1772321.949669] [21832] 113 21832 42469 37403 0 0 0 clamd
Sep 23 02:04:16 [kernel] [1772321.949674] Out of memory: Kill process 21832 (clamd) score 4 or sacrifice child
Sep 23 02:04:16 [kernel] [1772321.949679] Killed process 21832 (clamd) total-vm:169876kB, anon-rss:146900kB, file-rss:2712kB Here is the 'top' output after repeating my rsync command as a non-root user: top - 03:05:55 up 8:43, 2 users, load average: 0.04, 0.08, 0.09
Tasks: 224 total, 1 running, 223 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0% us, 0.0% sy, 0.0% ni, 99.9% id, 0.0% wa, 0.0% hi, 0.0% si
Mem: 33204440k total, 32688600k used, 515840k free, 108124k buffers
Swap: 1959892k total, 0k used, 1959892k free, 31648080k cached Here are the sysctl vm parameters: # sysctl -a | grep '^vm'
vm.overcommit_memory = 0
vm.panic_on_oom = 0
vm.oom_kill_allocating_task = 0
vm.oom_dump_tasks = 1
vm.overcommit_ratio = 50
vm.page-cluster = 3
vm.dirty_background_ratio = 1
vm.dirty_background_bytes = 0
vm.dirty_ratio = 0
vm.dirty_bytes = 15728640
vm.dirty_writeback_centisecs = 500
vm.dirty_expire_centisecs = 3000
vm.nr_pdflush_threads = 0
vm.swappiness = 60
vm.lowmem_reserve_ratio = 256 32 32
vm.drop_caches = 0
vm.min_free_kbytes = 8192
vm.percpu_pagelist_fraction = 0
vm.max_map_count = 65530
vm.laptop_mode = 0
vm.block_dump = 0
vm.vfs_cache_pressure = 100
vm.legacy_va_layout = 0
vm.stat_interval = 1
vm.mmap_min_addr = 4096
vm.vdso_enabled = 2
vm.highmem_is_dirtyable = 0
vm.scan_unevictable_pages = 0 | So let us read the oom-killer output and see what can be learned from there. When analyzing OOM killer logs, it is important to look at what triggered it. The first line of your log gives us some clues: [kernel] [1772321.850644] clamd invoked oom-killer: gfp_mask=0x84d0, order=0 order=0 tells us how much memory is being requested. The kernel's memory management can only hand out page counts in powers of 2, so clamd has requested 2^0 pages of memory, i.e. 4 KB. The lowest two bits of the GFP_MASK (get free page mask) constitute the so-called zone mask, telling the allocator which zone to get the memory from: Flag value Description
0x00u 0 implicitly means allocate from ZONE_NORMAL
__GFP_DMA 0x01u Allocate from ZONE_DMA if possible
__GFP_HIGHMEM 0x02u Allocate from ZONE_HIGHMEM if possible Memory zones are a concept created mainly for compatibility reasons. In a simplified view, there are three zones for an x86 Kernel: Memory range Zone Purpose
0-16 MB DMA Hardware compatibility (devices)
16 - 896 MB NORMAL space directly addressable by the Kernel, userland
> 896 MB HIGHMEM userland, space addressable by the Kernel via kmap() calls In your case, the zonemask is 0, meaning clamd is requesting memory from ZONE_NORMAL . The other flags are resolving to /*
* Action modifiers - doesn't change the zoning
*
* __GFP_REPEAT: Try hard to allocate the memory, but the allocation attempt
* _might_ fail. This depends upon the particular VM implementation.
*
* __GFP_NOFAIL: The VM implementation _must_ retry infinitely: the caller
* cannot handle allocation failures.
*
* __GFP_NORETRY: The VM implementation must not retry indefinitely.
*/
#define __GFP_WAIT 0x10u /* Can wait and reschedule? */
#define __GFP_HIGH 0x20u /* Should access emergency pools? */
#define __GFP_IO 0x40u /* Can start physical IO? */
#define __GFP_FS 0x80u /* Can call down to low-level FS? */
#define __GFP_COLD 0x100u /* Cache-cold page required */
#define __GFP_NOWARN 0x200u /* Suppress page allocation failure warning */
#define __GFP_REPEAT 0x400u /* Retry the allocation. Might fail */
#define __GFP_NOFAIL 0x800u /* Retry for ever. Cannot fail */
#define __GFP_NORETRY 0x1000u /* Do not retry. Might fail */
#define __GFP_NO_GROW 0x2000u /* Slab internal usage */
#define __GFP_COMP 0x4000u /* Add compound page metadata */
#define __GFP_ZERO 0x8000u /* Return zeroed page on success */
#define __GFP_NOMEMALLOC 0x10000u /* Don't use emergency reserves */
#define __GFP_NORECLAIM 0x20000u /* No realy zone reclaim during allocation */ according to the Linux MM documentation, so your request has the flags GFP_ZERO , GFP_REPEAT , GFP_FS , GFP_IO and GFP_WAIT set, and is thus not particularly picky. So what's up with ZONE_NORMAL ? Some generic stats can be found further on in the OOM output: [kernel] [1772321.850770] Normal free:8056kB min:8048kB low:10060kB high:12072kB active_anon:0kB inactive_anon:0kB active_file:248kB inactive_file:388kB unevictable:0kB isolated(anon)
:0kB isolated(file):0kB present:890008kB Noticeable here is that free is just 8K above min and way under low . This means your host's memory manager is somewhat in distress and kswapd should already be swapping out pages, since free memory has dropped below the low watermark where background reclaim kicks in. Some more information on the memory fragmentation of the zone is given here: [kernel] [1772321.850795] Normal: 830*4kB 80*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 1*4096kB = 8056kB basically stating that you have a single contiguous chunk of 4 MB with the rest heavily fragmented into mainly 4KB pages. So let's recapitulate: you have a userland process ( clamd ) getting memory from ZONE_NORMAL , whereas non-privileged memory allocation would usually be served from ZONE_HIGHMEM ; the memory manager should at this stage have been able to serve the requested 4K page, yet you seem to have significant memory pressure in ZONE_NORMAL ; by kswapd 's rules the system should have seen some paging activity beforehand, but nothing is being swapped out, even under memory pressure in ZONE_NORMAL , without apparent cause; and none of the above gives a definite reason why the oom-killer has been invoked. All of this seems rather odd, but it at least appears related to what is described in section 2.5 of Mel Gorman's excellent "Understanding the Linux Virtual Memory Manager" book : As the addresses space usable by the kernel (ZONE_NORMAL) is limited in size, the kernel has support for the concept of High Memory. [...] To access memory between the range of 1GiB and 4GiB, the kernel temporarily maps pages from high memory into ZONE_NORMAL with kmap(). [...] That means that to describe 1GiB of memory, approximately 11MiB of kernel memory is required. Thus, with 16GiB, 176MiB of memory is consumed, putting significant pressure on ZONE_NORMAL. This does not sound too bad until other structures are taken into account which use ZONE_NORMAL. Even very small structures such as Page Table Entries (PTEs) require about 16MiB in the worst case. This makes 16GiB about the practical limit for available physical memory with Linux on an x86 . (emphasis is mine) Since 3.2 has numerous advancements in memory management over 2.6, this is not a definite answer, but a really strong hint I would pursue first. Reduce the host's usable memory to at most 16G by either using the mem= kernel parameter or by ripping half of the DIMMs out of the server. Ultimately, use a 64-bit kernel . Dude, it's 2015. (A quick way to check lowmem pressure is sketched after this entry.) | {
"source": [
"https://serverfault.com/questions/724469",
"https://serverfault.com",
"https://serverfault.com/users/186718/"
]
} |
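As a follow-up to the OOM entry above, here is a hedged way to watch ZONE_NORMAL (lowmem) pressure on a running 32-bit box before it reaches the oom-killer; the exact field layout varies with kernel and procps versions.

```sh
# Split of low vs. high memory (procps free supports -l / --lohi)
free -lm

# Watermarks and free pages for the Normal zone straight from the kernel
grep -A8 'zone *Normal' /proc/zoneinfo | egrep 'zone|free|min|low|high'

# The workaround from the answer: cap usable RAM via the bootloader, e.g. append
#   mem=16G
# to the kernel command line (or, better, move to a 64-bit kernel).
```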
724,501 | I am trying to create an instance in ec2 using CLI.
Is there any way to specify tags for the instance when using the CLI to create instances? aws ec2 run-instances --image-id $ami_id --key-name $deployment_key_name \
--region $region --security-groups default --instance-type m4.large \
--user-data file://./yaml/master.yaml | As of 28 March 2017, you can specify tags for instances (and attached volumes) as part of the run-instances command. Example: aws ec2 run-instances --image-id ami-abc12345 --count 1 \
--instance-type t2.micro --key-name MyKeyPair \
--subnet-id subnet-6e7f829e \
--tag-specifications 'ResourceType=instance,Tags=[{Key=webserver,Value=production}]' 'ResourceType=volume,Tags=[{Key=cost-center,Value=cc123}]' Announcement blog post: https://aws.amazon.com/blogs/aws/new-tag-ec2-instances-ebs-volumes-on-creation/ Additional documentation (see example 4): http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#Using_Tags_CLI | {
"source": [
"https://serverfault.com/questions/724501",
"https://serverfault.com",
"https://serverfault.com/users/292934/"
]
} |
725,262 | This is a Canonical Question about Connection Refused We see a lot of questions to the effect When I try to connect to a system I get a message Connection refused Why is this ? | Note : This message is a symptom of the problem you are trying to solve. Understanding the cause of the message will ultimately lead you to solving your problem. The message 'Connection Refused' has two main causes: Nothing is listening on the IP:Port you are trying to connect to. The port is blocked by a firewall. No process is listening. This is by far the most common reason for the message. First ensure that you are trying to connect to the correct system. If you are then to determine if this is the problem, on the remote system run netstat or ss 1 e.g. if you are expecting a process to be listening on port 22222 sudo netstat -tnlp | grep :22222 or ss -tnlp | grep :22222 For OSX a suitable command is sudo netstat -tnlp tcp | grep '\.80 ' If nothing is listening then the above will produce no output. If you see some output then confirm that it's what you expect then see the firewall section below. If you don't have access to the remote system and want to confirm the problem before reporting it to the relevant administrators you can use tcpdump (wireshark or similar). When a connection is attempted to an IP:port where nothing is listening, the response from the remote system to the initial SYN packet is a packet with the flags RST,ACK set. This closes the connection and causes the Connection Refused message e.g. $ sudo tcpdump -n host 192.0.2.1 and port 22222 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on enp14s0, link-type EN10MB (Ethernet), capture size 262144 bytes 12:31:27.013976 IP 192.0.2.2.34390 > 192.0.2.1.22222: Flags [S] , seq 1207858804, win 29200, options [mss 1460,sackOK,TS val 15306344 ecr 0,nop,wscale 7], length 0 12:31:27.020162 IP 192.0.2.1.22222 > 192.0.2.2.34390: Flags [R.] , seq 0, ack 1207858805, win 0, length 0 Note that tcpdump uses a . to represent the ACK flag. Port is blocked by a firewall If the port is blocked by a firewall and the firewall has been configured to respond with icmp-port-unreachable this will also cause a connection refused message. Again you can see this with tcpdump (or similar) $ sudo tcpdump -n icmp tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on enp14s0, link-type EN10MB (Ethernet), capture size 262144 bytes
13:03:24.149897 IP 192.0.2.1 > 192.0.2.2: ICMP 192.0.2.1 tcp port 22222 unreachable, length 68 Note that this also tells us where the blocking firewall is. So now you know what's causing the Connection refused message you should take appropriate action e.g. contact the firewall administrator or investigate the reason for the process not listening. 1 Other tools are likely available. | {
"source": [
"https://serverfault.com/questions/725262",
"https://serverfault.com",
"https://serverfault.com/users/9517/"
]
} |
725,545 | I was adjusting the permissions when setting up some WordPress themes, and ran chmod 664 -R theme-dir/* It worked fine on the files in the root of the directory, but all the files in subdirectories now read like this when I ls -l : ?--------- ? ? ? ? ? core_functions.php
?--------- ? ? ? ? ? css
?--------- ? ? ? ? ? custom_functions.php
?--------- ? ? ? ? ? images
?--------- ? ? ? ? ? import_settings.php
?--------- ? ? ? ? ? js
?--------- ? ? ? ? ? options_trim.php
?--------- ? ? ? ? ? page_templates
?--------- ? ? ? ? ? post_thumbnails_trim.php
?---------+ ? ? ? ? ? shortcodes I can't cd to any of the subdirectories, and I also can't delete them. I've never seen anything like this, anybody ever run into something similar? | Accessing the contents of a directory (or more specifically the file metadata, everything except the filename) requires that the directory have the execute bit set. Your recursive chmod removed that permission, so you lost that access. If you are using the -R option of chmod, it is better to avoid the numeric version of the permissions and instead run (using your desired state as an example) chmod -R ug=rwX,o=rX . The capital X there means set the execute bit only on directories, and on files that already have at least one execute bit set. Also, you might want to use 644 ( u=rwX,go=rX ) unless you really need group users to be able to write. (A find-based alternative is sketched after this entry.) | {
"source": [
"https://serverfault.com/questions/725545",
"https://serverfault.com",
"https://serverfault.com/users/313909/"
]
} |
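For the chmod entry above, an equivalent and arguably more explicit repair is to set directory and file modes separately with find. A hedged sketch, using the theme directory path from the question:

```sh
# Directories need the execute (search) bit; regular files usually do not
find theme-dir -type d -exec chmod 755 {} +
find theme-dir -type f -exec chmod 644 {} +
```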
725,562 | I have a large S3 bucket with a nested "folder" structure containing (among other things) static .json and .md files. These files are being served by S3 as text/plain rather than the correct application/json and text/markdown . I have updated the bucket defaults so that new uploads will have the correct content type. What is the best way to walk the "tree" and update the content type for files matching a certain extension? | Here is an example of how to do this with the aws cli tool. The cp tool allows the use of a recursive option, which I don't think the s3api tool can do. In this case, I'm fixing a bunch of SVGs. Remove the --dryrun option when you are ready to unleash it. aws s3 cp \
--exclude "*" \
--include "*.svg" \
--content-type="image/svg+xml" \
--metadata-directive="REPLACE" \
--recursive \
--dryrun \
s3://mybucket/static/ \
s3://mybucket/static/ | {
"source": [
"https://serverfault.com/questions/725562",
"https://serverfault.com",
"https://serverfault.com/users/61685/"
]
} |
726,907 | So I was doing some maintenance on my server earlier today and noticed I was able to delete a file owned by root in my home directory. I was able to reproduce a sample: [cbennett@nova ~/temp]$ ls -al
total 8
drwxrwxr-x. 2 cbennett cbennett 4096 Oct 5 20:59 .
drwxr-xr-x. 22 cbennett cbennett 4096 Oct 5 20:58 ..
-rw-rw-r--. 1 cbennett cbennett 0 Oct 5 20:58 my-own-file
[cbennett@nova ~/temp]$ sudo touch file-owned-by-root
[cbennett@nova ~/temp]$ ls -al
total 8
drwxrwxr-x. 2 cbennett cbennett 4096 Oct 5 21:00 .
drwxr-xr-x. 22 cbennett cbennett 4096 Oct 5 20:58 ..
-rw-r--r--. 1 root root 0 Oct 5 21:00 file-owned-by-root
-rw-rw-r--. 1 cbennett cbennett 0 Oct 5 20:58 my-own-file
[cbennett@nova ~/temp]$ rm file-owned-by-root
rm: remove write-protected regular empty file ‘file-owned-by-root’? y
[cbennett@nova ~/temp]$ ls -al
total 8
drwxrwxr-x. 2 cbennett cbennett 4096 Oct 5 21:00 .
drwxr-xr-x. 22 cbennett cbennett 4096 Oct 5 20:58 ..
-rw-rw-r--. 1 cbennett cbennett 0 Oct 5 20:58 my-own-file
[cbennett@nova ~/temp]$ My question is how was I able to delete a file that's owned by root and has permissions -rw-r--r-- , while I'm not root? | The permissions, content and all other attributes are part of the inode; the name lives in the directory entry. Permissions are not inherited recursively (except when you use default entries in POSIX ACLs). When you delete a file, you are really just removing a hard link from the directory entry to the inode. When all hard links are removed and the inode is no longer in use, the filesystem reclaims the space. You only need write permission on the containing folder, no matter which permissions are set on the file itself (the exception being the ext immutable attribute; see the sketch after this entry). The same applies to deleting an empty folder. To delete a folder that is not empty you need write permission on both the folder you are deleting and its parent. | {
"source": [
"https://serverfault.com/questions/726907",
"https://serverfault.com",
"https://serverfault.com/users/226086/"
]
} |
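A small demonstration of the immutable-attribute exception mentioned in the answer above. Hedged: chattr works on ext2/3/4 and a few other filesystems, and setting the flag requires root.

```sh
sudo touch file-owned-by-root
sudo chattr +i file-owned-by-root   # set the immutable attribute
rm -f file-owned-by-root            # now fails with 'Operation not permitted', even for root
sudo chattr -i file-owned-by-root   # clear the attribute so the file can be removed again
```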
726,983 | My organization recently bought a storage system. It has 1.5Petabyte, with RAID6, and there is an online synced mirror in a physical different location. The system allows rollback / file recovery, by default allowing up to 30 days but this can be increased. There is a discussion going on if we need some kind of extra backup for data living only on the storage. The system has a very good level of redundancy, it has geographical redundancy and allows up to some extent rollback which means we can recover up to the defined time (30 days by default) old data or accidentally deleted data. Given this scenario does it still make sense to have a "traditional" backup?
By traditional, I mean a dedicated backup system, with snapshots that we can retrieve in case something goes wrong. Do we really need it? Am I missing something? Am I just thinking in the traditional way and being overzealous? | What you describe is essentially a geographically distributed RAID, and a RAID was never a backup . Online sync usually means everything you do on the primary storage gets immediately replicated to the backup system, including operations like the deletion of (all) snapshots and/or volumes by an attacker or simply an admin error. | {
"source": [
"https://serverfault.com/questions/726983",
"https://serverfault.com",
"https://serverfault.com/users/190045/"
]
} |
727,104 | I have a single node kubernetes cluster in google container engine to play around with. Twice now, a small personal website I host in it has gone offline for a couple of minutes. When I view the logs of the container, I see the normal startup sequence recently completed, so I assume a container died (or was killed?) and restarted. How can I figure out the how & why of this happening? Is there a way to get an alert whenever a container starts/stops unexpectedly? | You can view the last restart logs of a container using: kubectl logs podname -c containername --previous As described by Sreekanth, kubectl get pods should show you the number of restarts, but you can also run kubectl describe pod podname And it will show you events sent by the kubelet to the apiserver about the lifecycle events of the pod. You can also write a final message to /dev/termination-log, and this will show up as described in the docs . | {
"source": [
"https://serverfault.com/questions/727104",
"https://serverfault.com",
"https://serverfault.com/users/142783/"
]
} |
728,727 | A pod in my Kubernetes cluster is stuck on "ContainerCreating" after running a create. How do I see logs for this operation in order to diagnose why it is stuck? kubectl logs doesn't seem to work since the container needs to be in a non-pending state. | kubectl describe pods will list some (probably most but not all) of the events associated with the pod, including pulling of images, starting of containers. | {
"source": [
"https://serverfault.com/questions/728727",
"https://serverfault.com",
"https://serverfault.com/users/140501/"
]
} |
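For the ContainerCreating entry above, the event stream is usually where the reason shows up (image pull errors, unbound volumes, scheduling failures). A hedged sketch with placeholder pod and namespace names:

```sh
# Events for a single pod
kubectl describe pod my-pod -n my-namespace

# All recent events in the namespace, oldest first
kubectl get events -n my-namespace --sort-by=.metadata.creationTimestamp
```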
729,025 | dig responses return flags in the comments section: $ dig example.com +noall +comments
; <<>> DiG 9.8.3-P1 <<>> example.com +noall +comments
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 29045
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 On the last line here, there are flags: flags: qr rd ra; What are all the possible flags that dig has? Here's a list of the ones I've found so far: rd - Recursion Desired ra - Recursion Available aa - Authoritative Answer qr - Query? cd - Checking Disabled (not sure what this means) others? | I am using RFC 1035 as source, keeping to the sequence from there, regardless if you already mentioned it in your question. QR specifies whether this message is a query (0), or a response (1) OPCODE A four bit field, only valid values: 0,1,2 AA Authoritative Answer TC TrunCation (truncated due to length greater than that permitted on the
transmission channel) RD Recursion Desired RA Recursion Available Z Reserved for future use. Must be zero There were two more DNSSEC-related flags introduced in RFC 4035 : CD (Checking Disabled): indicates a security-aware resolver should
disable signature validation (that is, not check DNSSEC records) AD (Authentic Data): indicates the resolver believes the responses to be authentic - that is, validated by DNSSEC | {
"source": [
"https://serverfault.com/questions/729025",
"https://serverfault.com",
"https://serverfault.com/users/218198/"
]
} |
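To see several of the flags from the preceding answer in practice, compare a validating recursive resolver with an authoritative server. A hedged example; the public resolver and the zone's nameserver used here are just convenient choices:

```sh
# ra and (for a signed zone) ad should appear when a validating resolver answers
dig +dnssec example.com @8.8.8.8

# aa appears when you ask one of the zone's own authoritative nameservers
dig example.com @a.iana-servers.net

# cd can be set on the query itself to ask the resolver to skip DNSSEC validation
dig +cdflag example.com @8.8.8.8
```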
729,761 | I am running an email server which is currently set up to use TLS if possible, when sending and receiving emails. When you read the documentation about this, there is also the option to enforce TLS and not accept plain-text transmission of emails. It also warns you that some mail servers might not support encryption yet and enforcing encryption could block these servers. But is this still an issue one should think about, or is it safe to say that enforcing encryption won't be a problem anymore? Is there possibly some big provider who is already doing this, or what do you consider best practice these days? | The practical problem is that not every SMTP-compliant server (the RFC is quite old) can speak TLS to your server, so you may miss receiving some email messages. The philosophical problem is that it's impossible to tell how the email gets relayed after (or before) it arrives at your server, which means the email may already have been transmitted in plain text via some relay. Anyone serious about protecting the contents of their email should actually encrypt the body. Even with encryption en route, it is always plausible that the message has been transmitted in plain text at some point. So, to answer your question: enforcing encryption at the SMTP layer is probably pointless; it increases your chance of missing email and there is no guaranteed beneficial payoff. Edit: This refers to SMTP enforcement for the purposes of relaying, not submission of email. For mail submission, encryption should be enforced, since the SMTP conversation (not the actual email) may contain authentication credentials. (A Postfix-flavoured sketch of these settings follows this entry.) | {
"source": [
"https://serverfault.com/questions/729761",
"https://serverfault.com",
"https://serverfault.com/users/293542/"
]
} |
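To make the relay-versus-submission distinction from the preceding answer concrete, here is a hedged Postfix-flavoured sketch; the question does not name the MTA, so treat this as one possible mapping and adapt it for Exim, Sendmail, etc.

```
# main.cf: opportunistic TLS for server-to-server traffic (do not reject plaintext-only peers)
smtpd_tls_security_level = may
smtp_tls_security_level  = may

# master.cf: require TLS (and auth) on the submission port, where credentials are exchanged
submission inet n       -       y       -       -       smtpd
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
```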
729,818 | I'm trying to RDP into one of my servers, which has Network Level Authentication enabled as well as NTLMv2 forced. This worked fine until the server had to reboot for updates. Now, I cannot RDP into my server anymore. I get this error trying to connect via RDP: An authentication error has occurred - The function requested is not supported This translates to:
An authentication error has occurred.
The function requested is not supported I tried several things I found by googling, for example adding extra SecurityPackages values to the registry as described here: http://funeasytech.com/rdp-connection-error-of-the-requested-security-package-does-not-exist/ but that didn't work. Neither did changing the Group Policy on the client solve my issue, as described here: https://stackoverflow.com/questions/17371311/the-function-requested-is-not-supported-exception-when-using-smtpclient-in-azu The problem is that I don't have physical access to this box, only via RDP.
The server is running Windows Server 2012 R2 Standard, the client is running Windows 10 Pro. How can I regain access to my server? | I had the same issue. I found the issue has to do with a Windows Update patch that was pushed out to my workstation in last night's Windows Updates. There was a critical CVE ( CVE-2018-0886 ) for RDP which required a patch to fix. If your workstation is patched, but your server isn't, your workstation will fail to connect. Quoting from the following blog website with information about the issue, you've got three options: Patch your target computer for CVE-2018-0886 (Recommended) Enforce the Vulnerable parameter on the source computer (Not recommended) Disable NLA on your target computer (Not recommended) I didn't have alternative access to the server (remote access only), so I had to choose Option 2 so I could go do the updates on the server. I opened the start menu on my workstation, searched for "group policy", and clicked Edit Group Policy. Then, following the Microsoft spec , go to "Computer Configuration -> Administrative Templates -> System -> Credentials Delegation", then: set Encryption Oracle Remediation to "Enabled", and in the Options below, set "Protection Level" to "Vulnerable". (An equivalent registry-based sketch follows this entry.) | {
"source": [
"https://serverfault.com/questions/729818",
"https://serverfault.com",
"https://serverfault.com/users/317356/"
]
} |
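For the CredSSP entry above, the same "Vulnerable" client-side workaround can be applied without the Group Policy editor by setting the underlying registry value. A hedged sketch: run it from an elevated prompt on the client, revert or remove the value once the server is patched, and note that the path and value name below are from Microsoft's CVE-2018-0886 guidance as best I recall, so verify them before relying on this.

```
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\CredSSP\Parameters" /v AllowEncryptionOracle /t REG_DWORD /d 2 /f
```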
729,823 | I'm not sure if this question belongs to ServerFault or StackOverflow, but since I'm guessing that I need to debug this problem serverside, I'm going with ServerFault. The problem We're running a shared webhosting server for some client of ours. Everything is running smoothly, except for one clients their website. Around 2 to 3 days a week, our monitor detects a brief downtime because apache is not serving the page within 30 seconds, but instead between 60 to 120 seconds. I checked one time with my own desktop to confirm: the website kept loading for 80 seconds and then suddenly loads. There is no increased load, no more requests than normal and the other websites on the server loads perfectly. We had issues with a specific plugin earlier: this plugin made contact with the server from the author to confirm the license-key. When this server was not reachable, Wordpress couldn't continue loading and had the same symptoms as now. We noticed this because one day their server was down for a couple of hours and we had time to disable and enable all the plugins, one by one. According to the plugin author, the problems are solved now. I have the strong feeling that we're looking at the same problem again, maybe with the same plugin and maybe not. But since the downtime is so brief (usually no more than 2 minutes), I have no idea how to debug this timeout error. Things I've thought of Normally I would disable the plugins one by one, but before I'm connected to the database to disable the plugins, the website is up again. Since there is no pattern in the downtime, I can't stay standby for when it happens. Apache logs don't show any errors: I can only see the request from users and see that there are no files served for some time. My second thought was to run a stacktrace on the apache process. I'm pretty sure this would reveal where Apache is waiting on for so long. But since the server is getting more than 30 requests a minute, the logging file would become very large in a couple of hours, which would make it impossible for us to find the right requests. Relevant server specifications CentOS Linux release 7.0.1406 (Core)
Kernel 3.10.0-123.el7.x86_64
Apache/2.4.12 with mod_ruid2
PHP 5.4.38 (cli)
mysql Ver 15.1 Distrib 5.5.41-MariaDB, for Linux (x86_64) using readline 5.1
All compiled by DirectAdmin 1.48.3 Ideas? Who could think of a good way to debug this very specific problem? Any help is greatly appreciated! EDIT: Slow query log doesn't report any slow queries at during the slow requests. | I had the same issue. I found the issue has to do with a Windows Update patch that was pushed out to my work station in last nights Windows Updates. There was a critical CVE ( CVE-2018-0886 ) for RDP which required a patch to fix. If your workstation is patched, but your server isn't, your workstation will fail to connect. Quoting from the following blog website with information about the issue, you've got three options: Patch your target computer for CVE-2018-0886 (Recommended) Enforce the Vulnerable parameter on the source computer (Not recommended) Disable NLA on your target computer (Not recommended) I didn't have alternative access to the server(remote access only), so I had to chose Option 2 so I could go do the updates on the server. I opened the start menu on my work station, searched for "group policy", clicked Edit Group Policy. Then following Microsoft spec , Go to "Computer Configuration -> Administrative Templates -> System -> Credentials Delegation", then : Setting Encryption Oracle Remedation set to "Enabled" In Options below, set "Protection Level: to Vulnerable | {
"source": [
"https://serverfault.com/questions/729823",
"https://serverfault.com",
"https://serverfault.com/users/150910/"
]
} |
729,903 | When it comes to hardware I often read something like "Apple Mac computers, and other lower profile ...."
For me it sounds like a better word for low end segment hardware but I am not sure about it. Google didn't help me well to answer this question.
Is there something more connected to it? I need it to fully understand some articles.
Term is used here for example: HBA H240 Storage controller - plug-in card - low profile | You have a few PCIe form factors: You have different card and/or bracket heights: FH - Full-height
HH - Half-height And different card lengths: FL - Full-length
HL - Half-length "Low-profile" is synonymous with half-height, half-length (HHHL). An HBA like the HP H240 is a half-length card that comes with full-height and half-height brackets, and can fit into either type of server slot. If used with a server like an HP DL360p Gen8, this gives you options on card placement. | {
"source": [
"https://serverfault.com/questions/729903",
"https://serverfault.com",
"https://serverfault.com/users/185013/"
]
} |
729,914 | I am attempting to get a web application, running in Tomcat 6, to authorize a user that was authenticated by Apache. I have configured Apache 2.4 to use Active Directory for user authentication (using a module from Centrify) and ProxyPass / ProxyPassReverse requests to Tomcat. Now I am trying to figure out how to use those credentials in an application. Taking the Tomcat 6 manager app as an example, how do I go about changing it to recognize the authenticated user and check for a suitable role? I'm assuming I have to change the Realm in server.xml, probably to JNDIRealm or JAASRealm, however, the documentation talks about a realm being 'a "database" or usernames and passwords.' Is that the right way to go? I'm also assuming that I need to change login-config in web.xml, although I've no idea what values to use yet. If someone could steer me in the right direction or suggest other avenues to explore, I would appreciate that. BTW, I am also looking at trying to authenticate the user directly in Tomcat but was asked to look at the Apache proxy route for preference. | You have a few PCIe form factors: You have different card and/or bracket heights: FH - Full-height
HH - Half-height And different card lengths: FL - Full-length
HL - Half-length "Low-profile" is synonymous with half-height, half-length (HHHL). An HBA like the HP H240 is a half-length card that comes with full-height and half-height brackets, and can fit into either type of server slot. If used with a server like an HP DL360p Gen8, this gives you options on card placement. | {
"source": [
"https://serverfault.com/questions/729914",
"https://serverfault.com",
"https://serverfault.com/users/317429/"
]
} |
730,088 | I'm using openldap 2.4.40, and i need to migrate my existing ldap database, configuration, and schema (basically everything ldap server related) to a new machine. the problem is, I use cn=config configuration not the old slapd.conf file anymore. The documentation provided by openldap and other 3rd party websites only helps for migrating slapd.conf LDAP server, not LDAP server with the newer cn=config configuration file. and also I have new schema (attributetype and objectclass), is there a way to migrate these to a new machine as easily as possible? I need other way than reconfiguring and adding my schema manually one by one to the new machine. This will be done with the intention of turning off the old machine most likely. TL;DR Is there any way to conveniently migrate LDAP database, schema, configuration from 1 LDAP Server to a new LDAP Server with the intention of turning off the old machine Thank you. *Posted the answer below -
Julio | The solution : So here's what I did to make this work. Stop slapd on the main server. Slapcat the databases from the main server (there are 2 databases that need to be exported; I use the "-n" flag): slapcat -n 0 -l (config file location) This one will export all schema and cn=config,
and slapcat -n 1 -l <database backup ldif path> This one will export all user data that you keep in LDAP. SCP the 2 LDIF files to the new server (make sure you have installed LDAP on the server and that the configurations are almost identical, to make this easier). Stop slapd on the new server. Delete the contents of the folder /etc/ldap/slapd.d and use slapadd to import the configuration to the new server: slapadd -n 0 -l (config ldif location) -n 0 is for adding the configuration back to LDAP, and slapadd -n 1 -l (database ldif location) -n 1 is for adding the database back to LDAP. *EDIT: Somehow those commands won't work on my 2nd, 3rd .... and so on try. The proper commands that I've verified to work are slapadd -n 0 -F /etc/ldap/slapd.d -l <config backup ldif path> and slapadd -n 1 -l <data backup ldif path> Change the permissions on the /etc/ldap/slapd.d folder (chown and chmod); I chown it to openldap and chmod it to 755. Also change the ownership of the /var/lib/ldap folder (chown and chmod) to openldap. If you have certificates for TLS connections,
copy the certificates and keys from the old server to the exact same location on the new server and fix the permissions there as well. Start slapd and it should be good to go. (A compact shell recap of the permission steps follows this entry.) Hope this helps other people | {
"source": [
"https://serverfault.com/questions/730088",
"https://serverfault.com",
"https://serverfault.com/users/277622/"
]
} |
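A compact shell recap of the permission and start-up steps from the LDAP answer above. Hedged: the openldap user/group and the Debian-style paths are assumptions and differ between distributions.

```sh
chown -R openldap:openldap /etc/ldap/slapd.d /var/lib/ldap
chmod -R 755 /etc/ldap/slapd.d
systemctl start slapd
```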
730,239 | I found this systemd service file to start autossh to keep up a ssh tunnel: https://gist.github.com/thomasfr/9707568 [Unit]
Description=Keeps a tunnel to 'remote.example.com' open
After=network.target
[Service]
User=autossh
# -p [PORT]
# -l [user]
# -M 0 --> no monitoring
# -N Just open the connection and do nothing (not interactive)
# LOCALPORT:IP_ON_EXAMPLE_COM:PORT_ON_EXAMPLE_COM
ExecStart=/usr/bin/autossh -M 0 -N -q -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -p 22 -l autossh remote.example.com -L 7474:127.0.0.1:7474 -i /home/autossh/.ssh/id_rsa
[Install]
WantedBy=multi-user.target Is there a way to configure systemd to start several tunnels in one service. I don't want to create N system service files, since I want to avoid copy+paste. All service files would be identical except "remote.example.com" would be replace with other host names. 1.5 year later ... I asked this question roughly 1.5 year ago. My mind has changed. Yes, it's nice, that you can do this with systemd, but I will use configuration-management in the future. Why should systemd implement a template language and substitute %h? .. I think it makes no sense. Several months later I think this looping and templating should be solved on a different level. I would use Ansible or TerraForm for this now. | Well, assuming that the only thing changing per unit file is the remote.example.com part, you can use an Instantiated Service . From the systemd.unit man page: Optionally, units may be instantiated from a template file at runtime.
This allows creation of multiple units from a single configuration
file. If systemd looks for a unit configuration file, it will first
search for the literal unit name in the file system. If that yields no
success and the unit name contains an "@" character, systemd will look
for a unit template that shares the same name but with the instance
string (i.e. the part between the "@" character and the suffix)
removed. Example: if a service [email protected] is requested and no
file by that name is found, systemd will look for [email protected] and
instantiate a service from that configuration file if it is found. Basically, you create a single unit file, which contains a variable (usually %i ) where the differences occur and then they get linked when you "enable" that service. For example, I have a unit file called /etc/systemd/system/[email protected] that looks like this: [Unit]
Description=AutoSSH service for ServiceABC on %i
After=network.target
[Service]
Environment=AUTOSSH_GATETIME=30 AUTOSSH_LOGFILE=/var/log/autossh/%i.log AUTOSSH_PIDFILE=/var/run/autossh.%i.pid
PIDFile=/var/run/autossh.%i.pid
#Type=forking
ExecStart=/usr/bin/autossh -M 40000 -NR 5000:127.0.0.1:5000 -i /opt/ServiceABC/.ssh/id_rsa_ServiceABC -l ServiceABC %i
[Install]
WantedBy=multi-user.target Which I've then enabled [user@anotherhost ~]$ sudo systemctl enable [email protected]
ln -s '/etc/systemd/system/[email protected]' '/etc/systemd/system/multi-user.target.wants/[email protected]' And can intereact with [user@anotherhost ~]$ sudo systemctl start [email protected]
[user@anotherhost ~]$ sudo systemctl status [email protected]
[email protected] - AutoSSH service for ServiceABC on somehost.example
Loaded: loaded (/etc/systemd/system/[email protected]; enabled)
Active: active (running) since Tue 2015-10-20 13:19:01 EDT; 17s ago
Main PID: 32524 (autossh)
CGroup: /system.slice/system-autossh.slice/[email protected]
├─32524 /usr/bin/autossh -M 40000 -NR 5000:127.0.0.1:5000 -i /opt/ServiceABC/.ssh/id_rsa_ServiceABC -l ServiceABC somehost.example.com
└─32525 /usr/bin/ssh -L 40000:127.0.0.1:40000 -R 40000:127.0.0.1:40001 -NR 5000:127.0.0.1:5000 -i /opt/ServiceABC/.ssh/id_rsa_ServiceABC -l ServiceABC somehost.example.com
Oct 20 13:19:01 anotherhost.example.com systemd[1]: Started AutoSSH service for ServiceABC on somehost.example.com.
[user@anotherhost ~]$ sudo systemctl status [email protected]
[user@anotherhost ~]$ sudo systemctl status [email protected]
[email protected] - AutoSSH service for ServiceABC on somehost.example.com
Loaded: loaded (/etc/systemd/system/[email protected]; enabled)
Active: inactive (dead) since Tue 2015-10-20 13:24:10 EDT; 2s ago
Process: 32524 ExecStart=/usr/bin/autossh -M 40000 -NR 5000:127.0.0.1:5000 -i /opt/ServiceABC/.ssh/id_rsa_ServiceABC -l ServiceABC %i (code=exited, status=0/SUCCESS)
Main PID: 32524 (code=exited, status=0/SUCCESS)
Oct 20 13:19:01 anotherhost.example.com systemd[1]: Started AutoSSH service for ServiceABC on somehost.example.com.
Oct 20 13:24:10 anotherhost.example.com systemd[1]: Stopping AutoSSH service for ServiceABC on somehost.example.com...
Oct 20 13:24:10 anotherhost.example.com systemd[1]: Stopped AutoSSH service for ServiceABC on somehost.example.com. As you can see, all instances of %i in the unit file get replaced with somehost.example.com . There's a bunch more specifiers that you can use in a unit file though, but I find %i to work best in cases like this. | {
"source": [
"https://serverfault.com/questions/730239",
"https://serverfault.com",
"https://serverfault.com/users/90324/"
]
} |
731,238 | So, we are on the DNS chapter in our class, and I was wondering if there's any way possible though which I can connect to a DNS server on port 53 via command line interface (i.e Telnet or netcat) like we do for SMTP or HTTP or POP on their specific ports; I tried: > telnet 8.8.8.8 53 But the connection was closed as soon as it was established; which I later realized was because telnet uses TCP while DNS uses UDP. Then I tried doing the same with netcat: > nc -u 8.8.8.8 53 Nada!
I just want to see the working of DNS with some transparency.(like with http, SMTP etc.) | As you note, DNS primarily uses UDP but service is actually also provided over TCP (typically used for large responses and zone transfers). This is why you managed to establish a connection in the first place when you tried telnet .
Your connection was closed because you weren't interacting with the service in the expected way, not because telnet uses TCP. The important difference is that, unlike HTTP and SMTP which are plain text protocols and easy enough to work with directly, DNS is a binary protocol . This means that you will need some DNS client program to interact with nameservers in any reasonable fashion. dig has been the de facto standard for DNS troubleshooting for a very long time as it is very good in terms of both constructing queries and in terms of pretty-printing all the information in the response in a concise way.
(Part of BIND code-base and included in the Windows build from ISC.) drill is another alternative with similar capabilities and essentially the same output formatting as dig . nslookup is well known as it has been around since the dawn of time. It has been largely abandoned except on Windows and has some undesirable quirks and limited capabilities in comparison to the previously mentioned alternatives. The debug option ( set debug ) makes it usable for troubleshooting in a pinch as it greatly improves the completeness of the output, although the formatting of the debug output leaves a lot to be desired. | {
"source": [
"https://serverfault.com/questions/731238",
"https://serverfault.com",
"https://serverfault.com/users/318353/"
]
} |
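Tying back to the preceding entry: you cannot usefully type DNS into telnet because the protocol is binary, but you can force a query over TCP port 53 with a real client and watch the raw exchange. A hedged example:

```sh
# Force the lookup over TCP instead of UDP
dig +tcp example.com @8.8.8.8

# In another terminal, watch the (binary) traffic on port 53
sudo tcpdump -n -X host 8.8.8.8 and port 53
```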
731,519 | I am using Ubuntu 12.04 and can't write to any file, even as root, or do any other operation that requires writing. Neither can any process that needs to write, so they're all failing. df says I've got plenty of room: Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 30G 14G 15G 48% /
udev 984M 4.0K 984M 1% /dev
tmpfs 399M 668K 399M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 997M 0 997M 0% /run/shm All of the results I find for "can't write to disk" are about legitimately full disks. I don't even know where to start here. The problem appeared out of nowhere this morning. PHP's last log entry is: failed: No space left on device (28) Vim says: Unable to open (file) for writing Other applications give similar errors. After deleting ~1gb just to be sure, the problem remains. I've also rebooted. df -i says Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/xvda1 1966080 1966080 0 100% /
udev 251890 378 251512 1% /dev
tmpfs 255153 296 254857 1% /run
none 255153 4 255149 1% /run/lock
none 255153 1 255152 1% /run/shm | You are out of inodes. It's likely that you have a directory somewhere with many very small files. | {
"source": [
"https://serverfault.com/questions/731519",
"https://serverfault.com",
"https://serverfault.com/users/223197/"
]
} |
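For the inode-exhaustion entry above, a hedged way to find the directories holding the bulk of the files (GNU find assumed; -xdev keeps the scan on the root filesystem):

```sh
sudo find / -xdev -type f -printf '%h\n' | sort | uniq -c | sort -rn | head -20
```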
731,886 | If I do: sudo cat /etc/resolv.conf | less It will prompt me for the password, even though less (presumably) takes stdin. Over what fd's is the password prompt shown and how does it get the input back? | Actually, a typical invocation of sudo does not read the password from stdin at all. Instead, sudo will directly access the controlling terminal (a tty or pty , via the /dev/tty special file) and output the prompt and read characters directly. This can be seen in the tgetpass.c file in the sudo source. There are a few other scenarios: If an askpass program is specified, e.g. in the -A param, that program will be invoked. Otherwise, if you specifically request sudo to read from stdin , e.g. with the -S flag -- and it will also write the prompt to stderr . This is the case where MadHatter's answer applies. Otherwise, if there is no tty available If password echo is disabled (it is by default, controlled by the visiblepw flag in sudoers ), sudo will report an error: no tty present and no askpass program specified Otherwise, sudo will fall back to using stdin and stderr even if it was not specifically requested. MadHatter's answer will also apply here. | {
"source": [
"https://serverfault.com/questions/731886",
"https://serverfault.com",
"https://serverfault.com/users/77755/"
]
} |
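A small demonstration of the behaviour described in the preceding answer; the exact prompt wording varies between sudo versions, so treat it as a sketch.

```sh
# Default: the prompt is written to /dev/tty, so it still appears even with
# stdin, stdout and stderr all redirected
sudo -k; sudo true </dev/null >/dev/null 2>&1

# Explicitly ask sudo to read the password from stdin; the prompt goes to stderr
sudo -k; sudo -S -p 'password via stdin: ' true
```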
732,196 | I am confused about the EBS and SSD choice while creating an instance. While choosing instance parameters (Step 2) you will see 2 options in the column Instance Storage (GB): EBS only or SSD. I don't know why this option is there, because SSD and EBS are different things, and why would I choose one and not the other? The definition of instance storage (GB) below is in contradiction with the above, as if all storage were persistent. (You see this definition if you hover over the column name.) The local instance store volumes that are available to the instance. The data in an instance store is not permanent - it persists only during the lifetime of the instance. Why in Step 4 will I again need to choose between SSD or magnetic? Any clarification would help. | Instance-store SSDs are faster because there's no network latency, but they are ephemeral and you can't detach one from an instance and attach it to another. As you can see, instance store is only available on the more powerful instance types. EBS volumes are more flexible, since you can attach and detach them from instances, but they are a little bit slower and more suitable for general-purpose use. Now, in Step 4, you choose whether you want the volume backed by SSD or by magnetic-like storage. You can roughly compare it to choosing between a SATA drive and an SSD; again, SSDs are obviously quicker. There are pricing differences, so you should read a little bit about it in the AWS documentation and use the pricing calculator to learn the technical differences. But, as far as I know, AWS is slowly stopping the use of magnetic storage. Hope this shines some light on the question. Cya! | {
"source": [
"https://serverfault.com/questions/732196",
"https://serverfault.com",
"https://serverfault.com/users/257274/"
]
} |
732,423 | man smartctl states (SNIPPED for brevity): The first category , called "online" testing. The second category of testing is called "offline" testing. Normally, the disk will suspend offline testing while disk accesses are taking place, and then automatically resume it when the disk would otherwise be idle. The third category of testing (and the only category for which the word ´testing´ is really an appropriate choice) is "self" testing. Enables or disables SMART automatic offline test, which scans the drive every four hours for disk defects. This command can be given during normal system operation. Who runs the test - drive firmware? What sort of tests are these - does the firmware read/write to disk - what exactly goes on? Is it safe to invoke testing whilst in the OS (linux) or can one schedule a test for later - how does this take place - when you reboot the OS at the BIOS prompt ('offline test')? Where are the results displayed - SMART logs? | The drive firmware runs the tests. The details of the tests can be read in eg www.t13.org/Documents/UploadedDocuments/technical/e01137r0.pdf, which summarises the elements of the short and long tests thus: an electrical segment wherein the drive tests its own electronics. The particular tests in this segment
are vendor specific, but as examples: this segment might include such tests as a buffer RAM test, a
read/write circuitry test, and/or a test of the read/write head elements. a seek/servo segment wherein the drive tests its capability to find and servo on data tracks. The
particular methodology used in this test is also vendor specific. a read/verify scan segment wherein the drive performs read scanning of some portion of the disk
surface. The amount and location of the surface scanned are dependent on the completion time
constraint and are vendor specific. The criteria for the extended self-test are the same as the short self-test with two exceptions: segment
(3) of the extended self-test shall be a read/verify scan of all of the user data area, and there is no
maximum time limit for the drive to perform the test. It is safe to perform non-destructive testing while the OS is running, though some performance impact is likely. As the smartctl man page says for both -t short and -t long , This command can be given in normal system operation (unless run in captive mode) If you invoke captive mode with -C , smartctl assumes the drive can be busied-out to unavailability. This should not be done on a drive the OS is using. As the man page also suggests, the offline testing (which simply means periodic background testing) is not reliable, and never officially became part of the ATA specifications. I run mine from cron, instead; that way I know when they should happen, and I can stop it if I need to. The results can be seen in the smartctl output. Here's one with a test running: [root@risby images]# smartctl -a /dev/sdb
smartctl 6.4 2015-06-04 r4109 [x86_64-linux-4.1.6-201.fc22.x86_64] (local build)
Copyright (C) 2002-15, Bruce Allen, Christian Franke, www.smartmontools.org
[...]
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Extended offline Completed without error 00% 20567 -
# 2 Extended offline Completed without error 00% 486 -
SMART Selective self-test log data structure revision number 0
Note: revision number not 1 implies that no selective self-test has ever been run
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Self_test_in_progress [90% left] (0-65535)
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing Note two previous completed tests (at 486 and 20567 hours power-on, respectively) and the current running one (10% complete). | {
"source": [
"https://serverfault.com/questions/732423",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
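A minimal sketch of the cron-driven scheduling the answer above mentions (the schedule and device name are assumptions); results still show up in the self-test log printed by smartctl -a:
# /etc/cron.d/smart-tests -- short test on weeknights, long test once a week
30 2 * * 1-6  root  /usr/sbin/smartctl -t short /dev/sda
30 2 * * 7    root  /usr/sbin/smartctl -t long  /dev/sda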
732,474 | I have a problem with my Nginx configuration. I upgraded to nginx 1.9.6 to test http/2 but it does not work on my server. I used ubuntu 14.04.2 LTS This is the nginx -V output : nginx version: nginx/1.9.6
built with OpenSSL 1.0.2d 9 Jul 2015
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-pcre-jit --with-debug --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_realip_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --with-http_xslt_module --with-http_v2_module --with-stream --with-ipv6 --with-mail --with-mail_ssl_module --with-openssl=/build/nginx-GFP362/nginx-1.9.6/debian/openssl-1.0.2d --add-module=/build/nginx-GFP362/nginx-1.9.6/debian/modules/nginx-auth-pam --add-module=/build/nginx-GFP362/nginx-1.9.6/debian/modules/nginx-echo --add-module=/build/nginx-GFP362/nginx-1.9.6/debian/modules/nginx-upstream-fair --add-module=/build/nginx-GFP362/nginx-1.9.6/debian/modules/nginx-dav-ext-module --add-module=/build/nginx-GFP362/nginx-1.9.6/debian/modules/nginx-cache-purge And this is my vhost config : server {
listen 80;
server_name localhost;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl http2; ## listen for ipv4; this line is default and implied
root /var/www/rendez-vous;
index index.phtml index.html index.htm;
# Make site accessible from http://localhost/
server_name localhost;
ssl_certificate /etc/nginx/certificates/myeventsportal/server.crt;
ssl_certificate_key /etc/nginx/certificates/myeventsportal/server.key;
/... If I navigate to my site with the latest version of chrome, it is only served over http/1.1. | I've just run into the same problem, but I think I know why it happens. nginx 1.9.6 is not a stock package on Ubuntu 14.04, so you're probably getting it from an nginx PPA . That's fine, but those packages are built with the stock libraries from 14.04, which is to say OpenSSL 1.0.1f. Unfortunately that version of OpenSSL does not contain RFC7301 ALPN support which is needed for proper HTTP/2 negotiation; it only supports the now-deprecated NPN. It looks like Chrome has already removed support for NPN, so it's incapable of negotiating an HTTP/2 connection without ALPN. Firefox 41 on the other hand, still has NPN support and you should be able to use HTTP/2 with that. You can test your server like this - you will need OpenSSL 1.0.2d installed on your client (run openssl version to check): Test with ALPN: echo | openssl s_client -alpn h2 -connect yourserver.example.com:443 | grep ALPN If ALPN is working, you should see: ALPN protocol: h2 otherwise you'll get: No ALPN negotiated Test with NPN: echo | openssl s_client -nextprotoneg h2 -connect yourserver.example.com:443 If that works, you will get: Next protocol: (1) h2
No ALPN negotiated
That means that it's successfully negotiating an HTTP/2 connection via NPN, which is what Firefox does. So how to solve this? The only way I can see is to install a later build of openssl from a PPA (I use this one for PHP, which also contains openssl) and build your own nginx linked to it. You can find the config params for your existing nginx build by running nginx -V , and you should be able to use that to build your own version. Update : I've discovered that the reason that Chrome doesn't support HTTP/2 with NPN is not that it doesn't support NPN (though it will be dropped at some point), but that it specifically doesn't support h2 with NPN, as shown on the chrome://net-internals/#http2 page: | {
"source": [
"https://serverfault.com/questions/732474",
"https://serverfault.com",
"https://serverfault.com/users/128601/"
]
} |
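A hedged sketch of the rebuild route suggested in the answer above -- statically linking nginx against a newer OpenSSL source tree so ALPN works. The versions, paths and trimmed-down configure flags are assumptions; in practice you would reuse the full argument list reported by nginx -V:
wget http://nginx.org/download/nginx-1.9.6.tar.gz && tar xzf nginx-1.9.6.tar.gz
wget https://www.openssl.org/source/openssl-1.0.2d.tar.gz && tar xzf openssl-1.0.2d.tar.gz
cd nginx-1.9.6
./configure --with-http_ssl_module --with-http_v2_module --with-openssl=../openssl-1.0.2d
make && sudo make install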
734,123 | I use the nice feature of systemd: Instantiated Services. Is there a simple way to reload all instantiated services with one call? Example: I don't want to run all like this: systemctl restart autossh@foo
systemctl restart autossh@bar
systemctl restart autossh@blu I tried this, but it does not work: systemctl restart autossh@* Related: Start N processes with one systemd service file Update First I was fascinated by Instantiated Services, but later I realized that running a configuration management tool like Ansible makes more sense. I learned: Keep the tools simple. Many tools start to implement condition-checking (if .. else ...) and loops. For example webserver or mailserver configuration. But this should be solved at a different (upper) level: configuration management. See: https://github.com/guettli/programming-guidelines#dont-use-systemd-instantiated-units | Systemd (starting from systemd-209) supports wildcards, however your shell is likely trying to expand them. Use quotes to pass wildcards to the systemctl/service command verbatim: systemctl restart 'autossh@*' | {
"source": [
"https://serverfault.com/questions/734123",
"https://serverfault.com",
"https://serverfault.com/users/90324/"
]
} |
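For systemd releases older than 209, which lack wildcard matching, a hedged fallback sketch that loops over the running instances of the autossh@ template instead:
systemctl list-units --type=service --no-legend \
    | awk '$1 ~ /^autossh@/ {print $1}' \
    | xargs -r systemctl restart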
734,932 | My domain bytecode77.com ( analytics ) is using a RapidSSL certificate. Firefox doesn't trust that one, so I installed a CA certificate. I used the one below. I placed it in /usr/local/share/ca-certificates/ca.crt and I ran update-ca-certificates . Then I restarted apache. But Firefox still doesn't trust the certificate. What is going wrong here? My vHost <VirtualHost *:443>
ServerName bytecode77.com
DocumentRoot /var/www/bytecode77/html
SSLEngine on
SSLCertificateFile /var/www/bytecode77/root/bytecode77.com.crt
SSLCertificateKeyFile /var/www/bytecode77/root/bytecode77.com.key
</VirtualHost> CA certificate -----BEGIN CERTIFICATE-----
MIIEJTCCAw2gAwIBAgIDAjp3MA0GCSqGSIb3DQEBCwUAMEIxCzAJBgNVBAYTAlVT
MRYwFAYDVQQKEw1HZW9UcnVzdCBJbmMuMRswGQYDVQQDExJHZW9UcnVzdCBHbG9i
YWwgQ0EwHhcNMTQwODI5MjEzOTMyWhcNMjIwNTIwMjEzOTMyWjBHMQswCQYDVQQG
EwJVUzEWMBQGA1UEChMNR2VvVHJ1c3QgSW5jLjEgMB4GA1UEAxMXUmFwaWRTU0wg
U0hBMjU2IENBIC0gRzMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCv
VJvZWF0eLFbG1eh/9H0WA//Qi1rkjqfdVC7UBMBdmJyNkA+8EGVf2prWRHzAn7Xp
SowLBkMEu/SW4ib2YQGRZjEiwzQ0Xz8/kS9EX9zHFLYDn4ZLDqP/oIACg8PTH2lS
1p1kD8mD5xvEcKyU58Okaiy9uJ5p2L4KjxZjWmhxgHsw3hUEv8zTvz5IBVV6s9cQ
DAP8m/0Ip4yM26eO8R5j3LMBL3+vV8M8SKeDaCGnL+enP/C1DPz1hNFTvA5yT2AM
QriYrRmIV9cE7Ie/fodOoyH5U/02mEiN1vi7SPIpyGTRzFRIU4uvt2UevykzKdkp
YEj4/5G8V1jlNS67abZZAgMBAAGjggEdMIIBGTAfBgNVHSMEGDAWgBTAephojYn7
qwVkDBF9qn1luMrMTjAdBgNVHQ4EFgQUw5zz/NNGCDS7zkZ/oHxb8+IIy1kwEgYD
VR0TAQH/BAgwBgEB/wIBADAOBgNVHQ8BAf8EBAMCAQYwNQYDVR0fBC4wLDAqoCig
JoYkaHR0cDovL2cuc3ltY2IuY29tL2NybHMvZ3RnbG9iYWwuY3JsMC4GCCsGAQUF
BwEBBCIwIDAeBggrBgEFBQcwAYYSaHR0cDovL2cuc3ltY2QuY29tMEwGA1UdIARF
MEMwQQYKYIZIAYb4RQEHNjAzMDEGCCsGAQUFBwIBFiVodHRwOi8vd3d3Lmdlb3Ry
dXN0LmNvbS9yZXNvdXJjZXMvY3BzMA0GCSqGSIb3DQEBCwUAA4IBAQCjWB7GQzKs
rC+TeLfqrlRARy1+eI1Q9vhmrNZPc9ZE768LzFvB9E+aj0l+YK/CJ8cW8fuTgZCp
fO9vfm5FlBaEvexJ8cQO9K8EWYOHDyw7l8NaEpt7BDV7o5UzCHuTcSJCs6nZb0+B
kvwHtnm8hEqddwnxxYny8LScVKoSew26T++TGezvfU5ho452nFnPjJSxhJf3GrkH
uLLGTxN5279PURt/aQ1RKsHWFf83UTRlUfQevjhq7A6rvz17OQV79PP7GqHQyH5O
ZI3NjGFVkP46yl0lD/gdo0p0Vk8aVUBwdSWmMy66S6VdU5oNMOGNX2Esr8zvsJmh
gP8L8mJMcCaY
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIDfTCCAuagAwIBAgIDErvmMA0GCSqGSIb3DQEBBQUAME4xCzAJBgNVBAYTAlVT
MRAwDgYDVQQKEwdFcXVpZmF4MS0wKwYDVQQLEyRFcXVpZmF4IFNlY3VyZSBDZXJ0
aWZpY2F0ZSBBdXRob3JpdHkwHhcNMDIwNTIxMDQwMDAwWhcNMTgwODIxMDQwMDAw
WjBCMQswCQYDVQQGEwJVUzEWMBQGA1UEChMNR2VvVHJ1c3QgSW5jLjEbMBkGA1UE
AxMSR2VvVHJ1c3QgR2xvYmFsIENBMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB
CgKCAQEA2swYYzD99BcjGlZ+W988bDjkcbd4kdS8odhM+KhDtgPpTSEHCIjaWC9m
OSm9BXiLnTjoBbdqfnGk5sRgprDvgOSJKA+eJdbtg/OtppHHmMlCGDUUna2YRpIu
T8rxh0PBFpVXLVDviS2Aelet8u5fa9IAjbkU+BQVNdnARqN7csiRv8lVK83Qlz6c
JmTM386DGXHKTubU1XupGc1V3sjs0l44U+VcT4wt/lAjNvxm5suOpDkZALeVAjmR
Cw7+OC7RHQWa9k0+bw8HHa8sHo9gOeL6NlMTOdReJivbPagUvTLrGAMoUgRx5asz
PeE4uwc2hGKceeoWMPRfwCvocWvk+QIDAQABo4HwMIHtMB8GA1UdIwQYMBaAFEjm
aPkr0rKV10fYIyAQTzOYkJ/UMB0GA1UdDgQWBBTAephojYn7qwVkDBF9qn1luMrM
TjAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBBjA6BgNVHR8EMzAxMC+g
LaArhilodHRwOi8vY3JsLmdlb3RydXN0LmNvbS9jcmxzL3NlY3VyZWNhLmNybDBO
BgNVHSAERzBFMEMGBFUdIAAwOzA5BggrBgEFBQcCARYtaHR0cHM6Ly93d3cuZ2Vv
dHJ1c3QuY29tL3Jlc291cmNlcy9yZXBvc2l0b3J5MA0GCSqGSIb3DQEBBQUAA4GB
AHbhEm5OSxYShjAGsoEIz/AIx8dxfmbuwu3UOx//8PDITtZDOLC5MH0Y0FWDomrL
NhGc6Ehmo21/uBPUR/6LWlxz/K7ZGzIZOKuXNBSqltLroxwUCEm2u+WR74M26x1W
b8ravHNjkOR/ez4iyz0H7V84dJzjA1BOoa+Y7mHyhD8S
-----END CERTIFICATE----- | Your certificate chain is incomplete. You need to add a SSLCertificateChainFile line, and the file needs to include the "RapidSSL SHA256 CA - G3" intermediate certificate. Firefox trusts the Geotrust global CA, but without sending it the intermediate cert as well, it doesn't know who signed your certificate so doesn't trust it. RapidSSL has a good tutorial, including where to get the needed files, here: https://knowledge.rapidssl.com/support/ssl-certificate-support/index?page=content&id=SO6252 | {
"source": [
"https://serverfault.com/questions/734932",
"https://serverfault.com",
"https://serverfault.com/users/179708/"
]
} |
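A hedged sketch of the vHost from the question above with the missing directive added. The intermediate bundle path is an assumption; on Apache 2.4.8+ the chain can instead be appended to the file given to SSLCertificateFile:
<VirtualHost *:443>
    ServerName bytecode77.com
    DocumentRoot /var/www/bytecode77/html
    SSLEngine on
    SSLCertificateFile      /var/www/bytecode77/root/bytecode77.com.crt
    SSLCertificateKeyFile   /var/www/bytecode77/root/bytecode77.com.key
    SSLCertificateChainFile /var/www/bytecode77/root/rapidssl-intermediate.crt
</VirtualHost>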
735,176 | How can I determine the supported MACs, Ciphers, Key length and KexAlgorithms supported by my ssh servers? I need to create a list for an external security audit. I'm looking for something similar to openssl s_client -connect example.com:443 -showcerts . From my research the ssh uses the default ciphers as listed in man sshd_config . However I need a solution I can use in a script and man sshd_config does not list information about key length . I need to correct myself here: You can specify ServerKeyBits in sshd_config . I guess that ssh -vv localhost &> ssh_connection_specs.out returns the information I need but I'm not sure if the listed ciphers are the ciphers supported by the client or by the server. Also I'm not sure how to run this non-interactively in a script. Is there a convenient way to get SSH connection information? | You are missing a few points in your question: What is your openssh version? It can differ a bit over the versions. ServerKeyBits is an option for protocol version 1, which you have hopefully disabled! Supported Ciphers, MACs and KexAlgorithms are always available in the manual and this doesn't have anything in common with key lengths. Enabled Ciphers, MACs and KexAlgorithms are the ones that are offered during the connection, as you point out. But they can also be obtained in other ways, for example using sshd -T | grep "\(ciphers\|macs\|kexalgorithms\)"
"source": [
"https://serverfault.com/questions/735176",
"https://serverfault.com",
"https://serverfault.com/users/255678/"
]
} |
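One more hedged option for the audit scenario above, assuming nmap with its bundled NSE scripts is available on the auditing machine -- it lists the key exchange algorithms, ciphers and MACs the server offers without authenticating:
nmap --script ssh2-enum-algos -p 22 example.com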
736,274 | When using a TUN (layer 3) OpenVPN server with client-to-client disabled, my clients can still talk to each other. The client-to-client config should prevent this according to the documentation: Uncomment out the client-to-client directive if you would like
connecting clients to be able to reach each other over the VPN. By
default, clients will only be able to reach the server. Why can the clients continue to communicate to each other when this option is disabled? Here is my server conf: port 443
proto tcp
dev tun
ca /etc/openvpn/keys/ca.crt
cert /etc/openvpn/keys/server.crt
key /etc/openvpn/keys/server.key
dh /etc/openvpn/keys/dh4096.pem
topology subnet
server 10.10.201.0 255.255.255.128
ifconfig-pool-persist ipp.txt
crl-verify /etc/openvpn/keys/crl.pem
push "route [omitted]"
push "dhcp-option DNS [omitted]"
keepalive 10 120
comp-lzo
user nobody
group nogroup
persist-key
persist-tun
plugin /usr/lib64/openvpn/plugins/openvpn-plugin-auth-pam.so login
cipher AES-256-CBC
tls-auth /etc/openvpn/keys/pfs.key 0
verb 4 | If client-to-client is enabled , the VPN server forwards client-to-client packets internally without sending them to the IP layer of the host (i.e. to the kernel). The host networking stack does not see those packets at all. .-------------------.
| IP Layer |
'-------------------'
.-------------------.
| TUN device (tun0) |
'-------------------'
.-------------------.
| OpenVPN server |
'-------------------'
^ |
1 | | 2
| v
.----------------. .----------------.
| Client a | | Client b |
'----------------' '----------------' If client-to-client is disabled , the packets from a client to another client go through the host IP layer (iptables, routing table, etc.) of the machine hosting the VPN server: if IP forwarding is enabled , the host might forward the packet (using its routing table) again to the TUN interface and the VPN daemon will forward the packet to the correct client inside the tunnel. .-------------------.
| IP Layer | (4) routing, firewall, NAT, etc.
'-------------------' (iptables, nftables, conntrack, tc, etc.)
^ |
3 | | 5
| v
.-------------------.
| TUN device (tun0) |
'-------------------'
^ |
2 | | 6
| v
.-------------------.
| OpenVPN server |
'-------------------'
^ |
1 | | 7
| v
.----------------. .----------------.
| Client a | | Client b |
'----------------' '----------------' In this case ( client-to-client disabled), you can block the client-to-client packets using iptables: iptables -A FORWARD -i tun0 -o tun0 -j DROP where tun0 is your VPN interface. | {
"source": [
"https://serverfault.com/questions/736274",
"https://serverfault.com",
"https://serverfault.com/users/272326/"
]
} |
736,471 | I use a samba4 domain account to log in on my laptop. I wanted to try zsh out, but since my user doesn't reside in /etc/passwd I found that chsh can't find my user. Can anyone advise how I can change my login_shell ? I couldn't see anything in my ldap.conf , nssswitch.conf or anything in /etc/pam.d that helped... Looking on the domain controller I thought maybe I could use samba-tool, but I saw nothing in help that pointed me in the right direction... | I asked about this in the #suse channel on Freenode, and Miuku suggested the same as Arul, however, he mentioned two things, if I were using a Windows domain I could set the loginShell attribute. Sadly, I'm on a samba domain, so that didn't help. But his final suggestion was perfect, get the output of: getent passwd USERNAME This will have the valid entry equivalent for your user in /etc/passwd, take this, paste it in to /etc/passwd and update the shell at the end for the valid path of the shell you want to use. This way it doesn't change it for all users, and you can make sure that shell is on the machine you're configuring this on before making the change. | {
"source": [
"https://serverfault.com/questions/736471",
"https://serverfault.com",
"https://serverfault.com/users/204304/"
]
} |
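A compact, hedged version of the getent approach in the answer above (run as root; the username jdoe and the zsh path are assumptions, and appending is only safe because the domain user has no existing /etc/passwd entry):
getent passwd jdoe | sed 's#/bin/bash$#/usr/bin/zsh#' >> /etc/passwd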
736,538 | How do I install .rpm package on remote machine using Ansible? The obvious solution is to use command module, but that is a bit silly. Also I would like to avoid setting up a yum repository just for one package. Is there some more pragmatic approach to this problem? | Ansible yum module already provides a solution for this problem. The path to the local rpm file on the server can be passed to the name parameter. From the Ansible yum module documentation : You can also pass a url or a local path to a rpm file. To operate on several packages this can accept a comma separated list of packages or (as of 2.0) a list of packages. The proper steps to do this would be something like this: - name: Copy rpm file to server
copy:
src: package.rpm
dest: /tmp/package.rpm
- name: Install package.
yum:
name: /tmp/package.rpm
state: present | {
"source": [
"https://serverfault.com/questions/736538",
"https://serverfault.com",
"https://serverfault.com/users/293954/"
]
} |
736,624 | I want my systemd service to be automatically restarted on failure. Additionally I want to rate limit the restarts. I want to allow maximum of 3 restarts within 90 seconds duration. Hence I have done the following configuration. [Service]
Restart=always
StartLimitInterval=90
StartLimitBurst=3 Now the service is restarted on failure. After 3 Quick failures/restarts it is not restarting anymore as expected. Now I expected the systemd to start the service after the timeout ( StartLimitInterval ). But the systemd is not automatically starting the service after the timeout(90sec), if I manually restart the service after the timeout it is working. But I want the systemd to automatically start the service after the StartLimitInterval . Please let me know on how to achieve this feature. | To have a service restart 3 times at 90 second intervals include the following lines in your systemd service file: [Unit]
StartLimitIntervalSec=400
StartLimitBurst=3
[Service]
Restart=always
RestartSec=90 Before systemd-230 it was called just StartLimitInterval : [Unit]
StartLimitInterval=400
StartLimitBurst=3
[Service]
Restart=always
RestartSec=90 This worked for me for a service that runs a script using Type=idle . Note that StartLimitIntervalSec must be greater than RestartSec * StartLimitBurst, otherwise the service will be restarted indefinitely. It took me some time with a lot of trial and error to work out how systemd uses these options, which suggests that systemd isn't as well documented as one would hope. These options effectively provide the retry cycle time and maximum retries that I was looking for. References: https://manpages.debian.org/testing/systemd/systemd.unit.5.en.html for Unit section
"source": [
"https://serverfault.com/questions/736624",
"https://serverfault.com",
"https://serverfault.com/users/108192/"
]
} |
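A small hedged addendum to the unit settings above: after editing the unit file, reload systemd and clear a tripped start limit so the service can start again (the unit name is a placeholder):
systemctl daemon-reload
systemctl reset-failed myservice.service
systemctl restart myservice.service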
737,499 | How can I list the Active directory user attributes from a Linux computer?
The Linux computer is already joined to the domain. I can use 'getent' to get the user and group information, but it does not display the complete Active Directory user attributes. | You can use ldapsearch to query an AD server. For example, the following query will display all attributes of all the users in the domain: ldapsearch -x -h adserver.domain.int -D "[email protected]" -W -b "cn=users,dc=domain,dc=int"
Command options explained:
-x use simple authentication (as opposed to SASL)
-h your AD server
-D the DN to bind to the directory. In other words, the user you are authenticating with.
-W Prompt for the password. The password should match what is in your directory for the binddn (-D). Mutually exclusive with -w.
-b The starting point for the search
More info: http://www.openldap.org/software/man.cgi?query=ldapsearch&apropos=0&sektion=0&manpath=OpenLDAP+2.0-Release&format=html | {
"source": [
"https://serverfault.com/questions/737499",
"https://serverfault.com",
"https://serverfault.com/users/281249/"
]
} |
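A hedged variation on the ldapsearch query above, narrowing it to a single account with a standard AD filter and requesting only a few attributes (the bind account and base DN are placeholders modelled on the answer's example):
ldapsearch -x -h adserver.domain.int -D "binduser@domain.int" -W \
    -b "cn=users,dc=domain,dc=int" "(sAMAccountName=jdoe)" sAMAccountName displayName mail memberOf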
737,636 | I am trying to set up SSL on my load balancer with a certificate I purchased from GoDaddy. When trying to upload the certificate in the console I got an error Failed to create load balancer: Server Certificate not found for the key: arn:aws:iam::************:server-certificate/mycert I've never encountered this error before when adding SSL certificates. I'm not sure why iam is even used here. After some Googling, I was able to upload my certificate to iam using aws cli (again, not sure why I had to do this). Now when modifying the listeners I can see my uploaded certificate as an existing SSL certificate. When I try to save my changes to the load balancer however, I get the same error. I have verified that the certificate exists: $ aws iam list-server-certificates
{
"ServerCertificateMetadataList": [
{
"ServerCertificateId": "*********************",
"ServerCertificateName": "mycert",
"Expiration": "2018-11-19T18:47:38Z",
"Path": "/",
"Arn": "arn:aws:iam::************:server-certificate/mycert",
"UploadDate": "2015-11-19T19:23:32Z"
}
]
} (I have verified the obfuscated account number here is the same as in the error) From here I am stuck. Why am I not able to apply my certificate to this load balancer? Edit Thu Nov 19 11:47:18 PST 2015 After waiting for a while and logging out and in, I was able to update the listeners with my SSL certificate. However, it doesn't seem to be working correctly. When trying to load my domain over HTTPS the request times out. It seems it is unable to load the certificate $ echo | openssl s_client -connect www.example.com:443 2>/dev/null | openssl x509 -noout -subject
unable to load certificate
69457:error:0906D06C:PEM routines:PEM_read_bio:no start line:/SourceCache/OpenSSL098/OpenSSL098-52.30.1/src/crypto/pem/pem_lib.c:648:Expecting: TRUSTED CERTIFICATE | I faced the same problem when trying to create the ELB from the web console. I was trying to upload a new certificate there via the GUI and it kept failing with the same error. I solved it by uploading the certificate files separately via the aws cli. It is explained in this doc - http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/ssl-server-cert.html#upload-cert Upload the certificate, private key and certificate chain like this aws iam upload-server-certificate --server-certificate-name my-server-cert \
--certificate-body file://my-certificate.pem --private-key file://my-private-key.pem \
--certificate-chain file://my-certificate-chain.pem And then go to the web console and choose the option "Choose an existing certificate from AWS Identity and Access Management (IAM)" and choose the certificate pair that was just uploaded. It will work fine after that. | {
"source": [
"https://serverfault.com/questions/737636",
"https://serverfault.com",
"https://serverfault.com/users/85879/"
]
} |
738,452 | I've encountered a few questions and answers on here that use this syntax: location @default {
# ...
}
location /somewhere {
try_files $uri @default;
} I've searched high and low on the Googles and I can't seem to find any documentation of it. What does it mean and what are some of its practical uses? Is it some sort of variable declaration and assignment? Sorry for the newbie question. | The answer is in the official documentation . The “@” prefix defines a named location. Such a location is not used
for a regular request processing, but instead used for request
redirection. They cannot be nested, and cannot contain nested
locations. | {
"source": [
"https://serverfault.com/questions/738452",
"https://serverfault.com",
"https://serverfault.com/users/276612/"
]
} |
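A hedged illustration of the request-redirection use the quoted documentation describes: serve a file if it exists, otherwise hand the request to a backend through a named location (the upstream address is an assumption):
location / {
    try_files $uri $uri/ @backend;
}
location @backend {
    proxy_pass http://127.0.0.1:8080;
}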
738,773 | After installing Docker, I am getting an error when I try to run the Hello World example: Error response from daemon: Cannot start container 4145d0fccd96b904e4ab4413735f1129b8765429bad5be71dc8d5f4c0760666d:
failed to create endpoint high_saha on network bridge:
failed to add the host (veth7f6f907) <=> sandbox (veth788d9dc) pair interfaces: operation not supported (I have just upgraded my Debian server from Wheezy to Jessie) Does anyone have an idea why I get this error? Did I miss something during the upgrade? Thanks for your help. | In my case, the error appears every time I update my Linux kernel. It disappears when I restart the computer. I am using Arch Linux. | {
"source": [
"https://serverfault.com/questions/738773",
"https://serverfault.com",
"https://serverfault.com/users/324019/"
]
} |
739,476 | I have a written a piece of multi-threaded software that does a bunch of simulations a day. This is a very CPU-intensive task, and I have been running this program on cloud services, usually on configurations like 1GB per core. I am running CentOS 6.7, and /proc/cpuinfo gives me that my four VPS cores are 2.5GHz. processor : 3
vendor_id : GenuineIntel
cpu family : 6
model : 63
model name : Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
stepping : 2
microcode : 1
cpu MHz : 2499.992
cache size : 30720 KB
physical id : 3
siblings : 1
core id : 0
cpu cores : 1
apicid : 3
initial apicid : 3
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good unfair_spinlock pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm arat xsaveopt fsgsbase bmi1 avx2 smep bmi2 erms invpcid
bogomips : 4999.98
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management: With a rise in exchange rates, my VPS started to become more expensive, and I came across a "great deal" on used bare-metal servers. I purchased four HP DL580 G5 , with four Intel Xeon X7350s each.
Basically, each machine has 16x 2.93GHz cores and 16GB, to keep things similar to my VPS cloud.
vendor_id : GenuineIntel
cpu family : 6
model : 15
model name : Intel(R) Xeon(R) CPU X7350 @ 2.93GHz
stepping : 11
microcode : 187
cpu MHz : 1600.002
cache size : 4096 KB
physical id : 6
siblings : 4
core id : 3
cpu cores : 4
apicid : 27
initial apicid : 27
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall lm constant_tsc arch_perfmon pebs bts rep_good aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca lahf_lm dts tpr_shadow vnmi flexpriority
bogomips : 5866.96
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management: Essentially it seemed a great deal, as I could stop using VPS's to perform these batch jobs. Now for the weird part... On the VPS's I have been running 1.25 threads per core, just like I have been doing on the bare metal. (The extra 0.25 thread is to compensate for idle time caused by network use.) On my VPS, using in total 44x 2.5GHz cores, I get nearly 900 simulations per minute. On my DL580, using in total 64x 2.93GHz cores, I am only getting 300 simulations per minute. I understand the DL580 has an older processor. But if I am running one thread per core, and the bare metal server has a faster core, why is it performing worse than my VPS? I have no memory swapping happening on any of the servers. TOP says my processors are running at 100%. I get an average load of 18 (5 on the VPS). Is it going to stay this way, or am I missing something? Running lscpu gives me 1.6GHz on my bare metal server. This is seen in /proc/cpuinfo as well. Is this information correct, or is it linked to some incorrect power management? [BARE METAL] $ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 4
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 15
Stepping: 11
**CPU MHz: 1600.002**
BogoMIPS: 5984.30
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 4096K
NUMA node0 CPU(s): 0-15
[VPS] $ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 4
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Stepping: 2
**CPU MHz: 2499.992**
BogoMIPS: 4999.98
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 30720K
NUMA node0 CPU(s): 0-3 | Processor advancements, clock speed and IPC calculations can make it almost impossible to try to reasonably compare decade old CPUs to modern ones. Not only are the instructions per cycle going to vary, but newer processors have instruction sets dedicated to complex calculations (Intel has added AES-NI as an example), clock speed is no longer a reasonable comparator, due to these factors (did I mention multi-core vs hyperthreading...). With enough time and patience you could certainly figure out how many older procs equal 1 newer proc but the calculations will end up saying its cheaper and faster to buy a new CPU. | {
"source": [
"https://serverfault.com/questions/739476",
"https://serverfault.com",
"https://serverfault.com/users/324573/"
]
} |
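Not covered by the answer above, but relevant to the question's own observation that lscpu and /proc/cpuinfo report 1.6GHz on a nominally 2.93GHz part: that reading is the signature of CPU frequency scaling. A hedged sketch for inspecting and pinning the governor, assuming the cpupower utility (kernel-tools/linux-tools) is installed:
cpupower frequency-info
cpupower frequency-set -g performance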
739,483 | I am facing a problem remoting into a machine using a Domain account. Problem Facts : The Host VM's are hosted by Company A (read Domain A). The VM's have a local administrator as well Domain 'A' based User accounts who are on "Administrators" on the VM's. I belong to a Company B (Domain B). I use a VPN provided by Company A to have access to their network. I was previously able to use mstsc from Computer on Domain B to remote into any of VM's on Domain A. Recently Company A migrated their Domain A into Domain Z. Now I am not able to remote from a computer on Domain B into a VM on Domain Z using my Domain 'Z' user account, however, I am able to login using the local user account. The error for Domain Account is generic credentials not valid. My domain 'Z' accounts are working when I remote access another VM (say VM1) using my domain account after logging into a VM2 as local admin. (VM 1 & 2 are on the Domain Z) The problem in step 6 & 7 only SEEM to occur in environment at Domain Based environment. (Domain B where my local machine is located on and Domain C where another company user is facing the same issue as me). When trying from a local machine with windows freshly installed (no domain, no AV, default OS options) over Company A provided VPN, everything works fine i.e can remote into VM using Domain Accounts. Windows 7 Enterprise as Guest. Windows 7 , 2008 R2 , 8.1 as guest VMs. 11. On guest machine, tried deactivating firewall, stopping Forefront security app and removing machine from Domain and connecting to internet directly, but still it was not connecting. (maybe some group policy is causing the issue and removing from domain does not deactivate the policy. The surprising factor was people from Company C were also facing the same issue). How Can I troubleshoot this issue ? | Processor advancements, clock speed and IPC calculations can make it almost impossible to try to reasonably compare decade old CPUs to modern ones. Not only are the instructions per cycle going to vary, but newer processors have instruction sets dedicated to complex calculations (Intel has added AES-NI as an example), clock speed is no longer a reasonable comparator, due to these factors (did I mention multi-core vs hyperthreading...). With enough time and patience you could certainly figure out how many older procs equal 1 newer proc but the calculations will end up saying its cheaper and faster to buy a new CPU. | {
"source": [
"https://serverfault.com/questions/739483",
"https://serverfault.com",
"https://serverfault.com/users/324577/"
]
} |
740,630 | I'm aware that tomorrow the public IP address of one of our production servers is going to be changed. The TTL on that A record is currently set to 3 hours. Will adjusting the TTL on that A record to something lower like 1 minute actually work (the domain registrar does allow specifying minutes!), so that users DNS will only be pointing to the old server for a maximum of 1 minute after we switch that A record to the new public IP address? | They're not supposed to, but some DNS services may treat this as more of a suggestion than a hard rule. They may honor the setting down to some minimum, or they may ignore your TTL completely and always use their own setting (I've heard that 2 days is, or at least was, common). You need to be aware there is nothing you can do that will make those providers update any faster, and therefore some requests will end up going to the old address for some time after you make the change. Ideally in this case, you want to cut over to a new IP address while you still have some control of the old address, such that your server can be set to handle requests via both addresses for a small interim period. Additionally, some DNS services charge you per request (or per million requests). Moving from 3 hours to 1 minute will increase your DNS requests by a factor 180... you'll get 180 times as many requests as before. It's not likely to break the bank, but just make sure you're prepared for that. As an example, I have DNS service for a rather small web site where I spend about $20 per year for them to service 5 million requests per month. I admit that I'm not actually sure whether they'll just bill me or stop handling requests if I ever exceed that, though I expect it's the former. Right now I tend to only get about 1/2 million requests per month, but I wonder what would happen if I changed my TTL setting to get 180 times as many more and left it that way for too long. Still, most DNS services will honor your 1 minute setting. This will help smooth the changeover to the new address, and it's not likely to hurt you at all as long you're careful. Just remember to do this at least 3 hours (the old TTL) ahead of the change. There's no point doing it much earlier; any provider that would need to see the change sooner is not honoring the setting anyway. And, of course, don't forget to put it back when you're done. You may also want to reference this question: Migrating DNS Providers It's a bit different than yours, but some of the issues involved are similar. | {
"source": [
"https://serverfault.com/questions/740630",
"https://serverfault.com",
"https://serverfault.com/users/325427/"
]
} |
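A hedged sketch for watching what resolvers actually report during the changeover described above (the hostname is a placeholder; the second column of the answer section is the remaining TTL and counts down on a caching resolver):
dig +noall +answer siteone.example.com A
dig +noall +answer @8.8.8.8 siteone.example.com A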
741,487 | It took me several hours to fix the issue because the local component store was corrupted and the computers are accessing a local WSUS server instead of the public update server by Microsoft (and because I use Dism very rarely). For reference and to help other people with the same issue, I will write down a problem description and provide a solution. Since upgrading to Windows 10 Pro Version 1511 (Build 10586) I have a problem with a corrupted filed opencl.dll in several locations. I tried sfc.exe /scannow , but it failed to fix the issue. The error messages are, among others: 2015-12-08 08:50:43, Info CSI 00003c3a Hashes for file member \SystemRoot\WinSxS\wow64_microsoft-windows-r..xwddmdriver-wow64-c_31bf3856ad364e35_10.0.10586.0_none_3dae054b56911c22\opencl.dll do not match actual file [l:10]"opencl.dll" :
Found: {l:32 g2VAunZ6/2J1G3oL7kf9fjInPUA9VYeiJcl9VKgizaY=} Expected: {l:32 9rnAnuwzPjMQA7sW63oNAVhckspIngsqJXKYSUeQ5Do=}
2015-12-08 08:50:43, Info CSI 00003c3b [SR] Cannot repair member file [l:10]"opencl.dll" of microsoft-windows-RemoteFX-clientVM-RemoteFXWDDMDriver-WOW64-C, version 10.0.10586.0, arch Host= amd64 Guest= x86, nonSxS, pkt {l:8 b:31bf3856ad364e35} in the store, hash mismatch
2015-12-08 08:50:43, Info CSI 00003c3c [SR] This component was referenced by [l:125]"Microsoft-Windows-RemoteFX-VM-Setup-Package~31bf3856ad364e35~amd64~~10.0.10586.0.RemoteFX clientVM and UMTS files and regkeys"
2015-12-08 08:50:43, Info CSI 00003c3d Hashes for file member \??\C:\WINDOWS\SysWOW64\opencl.dll do not match actual file [l:10]"opencl.dll" :
Found: {l:32 g2VAunZ6/2J1G3oL7kf9fjInPUA9VYeiJcl9VKgizaY=} Expected: {l:32 9rnAnuwzPjMQA7sW63oNAVhckspIngsqJXKYSUeQ5Do=}
2015-12-08 08:50:43, Info CSI 00003c3e Hashes for file member \SystemRoot\WinSxS\wow64_microsoft-windows-r..xwddmdriver-wow64-c_31bf3856ad364e35_10.0.10586.0_none_3dae054b56911c22\opencl.dll do not match actual file [l:10]"opencl.dll" :
Found: {l:32 g2VAunZ6/2J1G3oL7kf9fjInPUA9VYeiJcl9VKgizaY=} Expected: {l:32 9rnAnuwzPjMQA7sW63oNAVhckspIngsqJXKYSUeQ5Do=}
2015-12-08 08:50:43, Info CSI 00003c3f [SR] Could not reproject corrupted file [l:23 ml:24]"\??\C:\WINDOWS\SysWOW64"\[l:10]"opencl.dll"; source file in store is also corrupted Okay, so the issue is clear now. Unfortunately, SFC is unable to resolve the corruption because the local component store also got corrupted. Unfortunately, I lost the error messages indicating the component store corruptions. So I tried Dism /Online /Cleanup-Image /RestoreHealth to no avail. It fails with error 0x800f081f , indicating another problem with the source files. 2015-12-08 08:57:35, Info CBS Exec: Download qualification evaluation, business scenario: Manual Corruption Repair
2015-12-08 08:57:35, Info CBS Exec: Clients specified using Windows Update.
2015-12-08 08:57:35, Info CBS WU: Update service is not default AU service, skip. URL: https://fe2.update.microsoft.com/v6/, Name: Microsoft Update
2015-12-08 08:57:35, Info CBS WU: Update service is not default AU service, skip. URL: https://fe2.ws.microsoft.com/v6/, Name: Windows Store
2015-12-08 08:57:35, Info CBS WU: Update service is not default AU service, skip. URL: https://fe3.delivery.mp.microsoft.com/, Name: Windows Store (DCat Prod)
2015-12-08 08:57:35, Info CBS WU: WSUS service is the default, URL: (null), Name: Windows Server Update Service
2015-12-08 08:57:35, Info CBS DWLD:Search is done, set download progress to 20 percent.
2015-12-08 08:57:35, Info CBS Nothing to download, unexpected
2015-12-08 08:57:35, Info CBS Failed to collect payload and there is nothing to repair. [HRESULT = 0x800f081f - CBS_E_SOURCE_MISSING]
2015-12-08 08:57:35, Info CBS Failed to repair store. [HRESULT = 0x800f081f - CBS_E_SOURCE_MISSING] Looking at the error messages, it becomes clear that Windows is set to use our local WSUS server and therefore Dism is unable to retrieve the valid file from the repositories. While I am sure that I could somehow configure WSUS to provide the necessary files, I don't know how and I need a quick fix. (If someone knows how to configure WSUS accordingly, please provide information). Limiting access to the local storage by adding the paramter /LimitAccess would be useless as the local component store is also corrupted, as mentioned earlier. I experienced this issue on two machines. A refresh of Windows 10 did not fix the issue. | To fix this problem, you need to have the ISO of the exact build you have installed. Mount the ISO image. Create a temporary directory to mount the Windows Image File (WIM). mkdir C:\WIM Mount the WIM file. Dism /Mount-Wim /WimFile:D:\sources\install.wim /index:1 /MountDir:C:\WIM /ReadOnly Run Dism with the following parameters. Dism /Online /Cleanup-Image /RestoreHealth /Source:C:\WIM\Windows /LimitAccess When done, unmount the image and delete the folder Dism /Unmount-Wim /MountDir:C:\WIM /Discard
rmdir C:\WIM It's mandatory to restart your computer, or SFC and DISM will still show errors. That should fix the issue. Edit As pointed out in the comments, there might be a more direct approach. The TL;DR is, that it did not work for me, hence my more detailed approach. But I am interested if you had any problems with the direct approach. Please comment. | {
"source": [
"https://serverfault.com/questions/741487",
"https://serverfault.com",
"https://serverfault.com/users/188837/"
]
} |
741,489 | I have repository server with a lot of files and data, that being used by a few nodes. Lately, I came across the issue of high load on the repository due to high volume of reads from nodes. What I wanted to do, is to have some recent data(sliding window), that being used most frequently, locally on the nodes, luckly I have some SSD space on nodes.
Only few jobs, that are running on nodes require old data from the repository itself.
Now the question is, is there any option to combine data from NFS share and locally stored under the same folder. Application that uses data doesn't quite have an option to go to different folders.
Structure is the following: > Share: /data/YYYY/YYYYMM/.....
> Local: /local/YYYY/YYYYMM/.... (But only last 3 months) I want to mount it under same folder, /mnt for instance, that /mnt/2015/201512 will be on local SSD and /mnt/2015/201511 will be available from repository server | To fix this problem, you need to have the ISO of the exact build you have installed. Mount the ISO image. Create a temporary directory to mount the Windows Image File (WIM). mkdir C:\WIM Mount the WIM file. Dism /Mount-Wim /WimFile:D:\sources\install.wim /index:1 /MountDir:C:\WIM /ReadOnly Run Dism with the following parameters. Dism /Online /Cleanup-Image /RestoreHealth /Source:C:\WIM\Windows /LimitAccess When done, unmount the image and delete the folder Dism /Unmount-Wim /MountDir:C:\WIM /Discard
rmdir C:\WIM It's mandatory to restart your computer, or SFC and DISM will still show errors. That should fix the issue. Edit As pointed out in the comments, there might be a more direct approach. The TL;DR is, that it did not work for me, hence my more detailed approach. But I am interested if you had any problems with the direct approach. Please comment. | {
"source": [
"https://serverfault.com/questions/741489",
"https://serverfault.com",
"https://serverfault.com/users/201152/"
]
} |
741,670 | I need to rsync a file tree to a specific pod in a kubernetes cluster. It seems it should be possible if only one can convince rsync that kubectl acts sort of like rsh. Something like: rsync --rsh='kubectl exec -i podname -- ' -r foo x:/tmp ... except that this runs into problems with x since rsync assumes a hostname is needed: exec: "x": executable file not found in $PATH I can not seem to find a method to help rsync construct the rsh command. Is there a way to do this? Or some other method by which relatively efficient file transfer can be achieved over a pipe? (I am aware of gcloud compute copy-files , but it can only be used onto the node?) | To rsync to a Pod I use the following helper. pod=$1;shift;kubectl exec -i $pod -- "$@" I put this in a file called "rsync-helper.sh" and then run the rsync like so. rsync -av --progress --stats -e './rsync-helper.sh' source-dir/ thePodName:/tmp/dest-dir If you'd like a simple script that wraps this all up, save this as krsync . #!/bin/bash
if [ -z "$KRSYNC_STARTED" ]; then
export KRSYNC_STARTED=true
exec rsync --blocking-io --rsh "$0" $@
fi
# Running as --rsh
namespace=''
pod=$1
shift
# If user uses pod@namespace, rsync passes args as: {us} -l pod namespace ...
if [ "X$pod" = "X-l" ]; then
pod=$1
shift
namespace="-n $1"
shift
fi
exec kubectl $namespace exec -i $pod -- "$@" Then you can use krsync where you would normally rsync. krsync -av --progress --stats src-dir/ pod:/dest-dir You can also set the namespace. krsync -av --progress --stats src-dir/ pod@namespace:/dest-dir NOTE: The Pod must have the rsync executable installed for this to work. | {
"source": [
"https://serverfault.com/questions/741670",
"https://serverfault.com",
"https://serverfault.com/users/67890/"
]
} |
743,087 | I have a vanilla install of CoreOS (835.9.0) and it doesn't start the docker daemon on startup. It only starts when I SSH in and do e.g. docker ps . How can I make the docker daemon automatically start on system boot? When I say the docker daemon, I mean ps -ef | grep docker shows no processes until after I do docker ps | sudo systemctl enable docker did the trick. | {
"source": [
"https://serverfault.com/questions/743087",
"https://serverfault.com",
"https://serverfault.com/users/16937/"
]
} |
743,548 | I can’t figure out why an SSH public key file generated by ssh-keygen has a user and host at the end of it. Example: id_rsa.pub ssh-rsa ... rest of file ... /CA9gyE8HRhNMG6ZDwyhPBbDfX root@mydomain Notice the root@mydomain at the end of the file. If I can use the public key anywhere with any user to authenticate using my private key, what significance does the root@mydomain have on the authentication process? Or is it just a placeholder to figure out who it was issued by? | This field is a comment, and can be changed or ignored at will. It is set to user@host by default by ssh-keygen . The OpenSSH sshd(8) man page describes the format of a public key thus: Public keys consist of the following space-separated fields: options, keytype, base64-encoded key, comment. . . . The comment field is not used for anything (but may be convenient for the user to identify the key). The ssh-keygen(1) man page says: The key comment may be useful to help identify the key. The comment is initialized to “user@host” when the key is created, but can be changed using the -c option. | {
"source": [
"https://serverfault.com/questions/743548",
"https://serverfault.com",
"https://serverfault.com/users/16045/"
]
} |
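A hedged example of changing the comment after the key exists, per the man page excerpt above (the key path and new comment text are assumptions):
ssh-keygen -c -C "backup-host-2015" -f ~/.ssh/id_rsa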
743,666 | I've worked in IT quite a number of years, so I know what a RAID array is, what RAID 0 is, RAID 1, 5, 6, 10, 50, 60, etc., but something sprung to mind in a recent conversation at work; if RAID stands for redundant array of independent (or inexpensive) disks, then why is RAID 0 classed as RAID at all and not just a striped array? Having data striped across multiple disks on the one array offers no redundancy whatsoever so why is it classed as a RAID array? Surely the lowest number should be RAID 1 (mirrored) as that's when redundancy actually starts? | You actually part answered this in your question. The lowest form of RAID is RAID 1. RAID 0 was added well after RAID was defined (can't find reference to a date for this though) The 0 in RAID 0 is used to signify that actually it isn't considered redundant. Think of it as more True/False where 0 is False. | {
"source": [
"https://serverfault.com/questions/743666",
"https://serverfault.com",
"https://serverfault.com/users/322428/"
]
} |
744,147 | When I register a new domain, I send it to my hosting provider by assigning it its domain name servers in the registar's settings. For example, with Digital Ocean, I input the following: ns1.digitalocean.com
ns2.digitalocean.com
ns3.digitalocean.com I then add the domain settings in the A record of my server. It just occurred to me that anyone else on the same hosting provider can add an A record with a domain I own. Is there anything preventing this from occurring? if 2 different servers that use the same domain name server try to assign a domain to themselves through the A records, where would the domain actually resolve when you enter it in the browser? what prevents domain name collisions on the same DNS server? | Never you mind the comments section below, and never you mind the previous answers in the edit history. After about an hour of some conversation with friends (thank you @joeQwerty, @Iain, and @JourneymanGeek), and some jovial hacking around we got to the bottom of both your question and the situation on the whole. Sorry for brusqueness and misunderstanding the situation completely at first. Let's step through the process: You buy wesleyisaderp.com at, let's say, NameCheap.com. Namecheap as your registrar will be where you populate your NS records. Let's say you actually want to host the DNS zone on Digital Ocean. You point your shiny new domain's NS records to ns1.digitalocean.com and ns2.digitalocean.com . However, let's say I was able to determine that you had registered that domain, and furthermore that you had changed your NS records to Digital Ocean's . Then I beat you to a Digital Ocean account and added the zone wesleyisaderp.com to my own. You try to add the zone in *your* account but Digital Ocean says that the zone already exists in their system! Oh noes! I CNAME wesleyisaderp.com to wesleyisbetterthanyou.com . Hilarity ensues. Some friends and I just played this exact scenario out, and yes it works. If @JoeQwerty buys a domain and points it to the Digital Ocean nameservers, but I already had that zone added to my account, then I am the zone master and can do with it what I want. However consider that someone would have to first add the zone to their DNS account, and then you'd have to point your NS records to the name servers of that same host for anything nefarious to happen. Furthermore, as the domain owner, you can switch NS records any time you want and move the resolution away from the bad zone host. The likelihood of this happening is a bit low to say the least. It is said that, statistically, you can shuffle a deck of 52 playing cards and get an ordering that no other human has ever gotten, and no other human ever will. I think the same reasoning exists here. The likelihood of someone exploiting this is so very low, and there are better shortcuts in existence, that it probably won't happen in the wild by accident. Furthermore, if you own a domain at a registrar and it someone happens to have made a zone on a provider like Digital Ocean that you collide with, I'm sure if you provide proof of ownership, they'd ask the person who made the zone in their account to remove it since there's no reason for it to exist as they're not the domain name owner. But what about A records The first person to have a zone on, for instance Digital Ocean, will be the one that controls it. You cannot have multiple identical zones on the same DNS infrastructure. So for example, using the silly names above, if I have wesleyisaderp.com as a zone on Digital Ocean, no one else on Digital Ocean's DNS infrastructure can add it to their account. Here's the fun part: I actually really have added wesleyisaderp.com to my Digital Ocean account! Go ahead and try to add it into yours. It won't hurt anything. 
So as a result, you can't add an A record to wesleyisaderp.com. It's all mine. But what about... As @Iain pointed out below, my point #4 above is actually too verbose. I don't have to wait or plot or scheme at all. I can just make thousands of zones in an account and then sit back and wait. Technically. If I make thousands of domains, and then wait for them to get registered, and then hope they use the DNS hosts that I've set my zones on... maybe I can do something kinda bad? Maybe? But probably not? Apologies to Digital Ocean & NameCheap Note that Digital Ocean and NameCheap are not unique, and have nothing to do with this scenario. This is normal behavior. They are blameless on all fronts. I just used them since that was the example given, and they're very well known brands. | {
"source": [
"https://serverfault.com/questions/744147",
"https://serverfault.com",
"https://serverfault.com/users/219982/"
]
} |
744,960 | I've got two websites being served from a CentOS instance. One of those has SSL enabled, the other is just served on port 80. So, http://siteone.com and https://siteone.com both work fine, as does http://sitetwo.com . The issue is that https://sitetwo.com displays https://siteone.com . I have one public IP address available. I think it's the case that I can't serve two https sites from one IP, but is there at least a way to redirect https to port 80 for https://sitetwo.com instead of serving the wrong site? sudo apachectl -S
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using xxx.xxx.xxx.xxx. Set the 'ServerName' directive globally to suppress this message
VirtualHost configuration:
xxx.xxx.xxx.xxx:443 siteone.com (/etc/httpd/sites-enabled/ssl-siteone.conf:1)
*:80 is a NameVirtualHost
default server beta-siteone (/etc/httpd/sites-enabled/beta-siteone.conf:1)
port 80 namevhost beta-ilegis (/etc/httpd/sites-enabled/beta-siteone.conf:1)
alias beta.siteone.com
port 80 namevhost siteone.com (/etc/httpd/sites-enabled/siteone.conf:1)
alias www.siteone.com
port 80 namevhost sitetwo.com (/etc/httpd/sites-enabled/sitetwo.com.conf:1)
alias www.sitetwo.com
*:443 is a NameVirtualHost
default server xxx.xxx.xxx.xxx (/etc/httpd/conf.d/ssl.conf:56)
port 443 namevhost xxx.xxx.xxx.xxx (/etc/httpd/conf.d/ssl.conf:56)
port 443 namevhost xxx.xxx.xxx.xxx (/etc/httpd/sites-enabled/ssl-sitetwo.com.conf:1)
ServerRoot: "/etc/httpd"
Main DocumentRoot: "/var/www/html"
Main ErrorLog: "/etc/httpd/logs/error_log"
Mutex proxy: using_defaults
Mutex authn-socache: using_defaults
Mutex ssl-cache: using_defaults
Mutex default: dir="/run/httpd/" mechanism=default
Mutex mpm-accept: using_defaults
Mutex authdigest-opaque: using_defaults
Mutex proxy-balancer-shm: using_defaults
Mutex rewrite-map: using_defaults
Mutex authdigest-client: using_defaults
Mutex ssl-stapling: using_defaults
PidFile: "/run/httpd/httpd.pid"
Define: DUMP_VHOSTS
Define: DUMP_RUN_CFG
User: name="apache" id=48
Group: name="apache" id=48 | Two HTTPS sites can be served on one IP. You just need to verify that the virtual host configuration works. Are you sure that your virtual host works? You can use this config in sites-available. <VirtualHost *:80>
ServerName www.example.com
ServerAlias example.com
DocumentRoot /var/www/example.com/public_html
ErrorLog /var/www/example.com/error.log
CustomLog /var/www/example.com/requests.log combined
</VirtualHost>
<VirtualHost *:80>
ServerName www.example2.com
DocumentRoot /var/www/example2.com/public_html
ServerAlias example2.com
ErrorLog /var/www/example2.com/error.log
CustomLog /var/www/example2.com/requests.log combined
</VirtualHost> Follow the tutorial here If you are sure about your virtual host configuration, then you can change the configuration like this: <VirtualHost *:443>
ServerName www.example.com
ServerAlias example.com
DocumentRoot /var/www/example.com/public_html
ErrorLog /var/www/example.com/error.log
CustomLog /var/www/example.com/requests.log combined
SSLEngine on
SSLCertificateFile /etc/apache2/ssl/example/apache.crt
SSLCertificateKeyFile /etc/apache2/ssl/example/apache.key
</VirtualHost>
<VirtualHost *:443>
ServerName www.example2.com
DocumentRoot /var/www/example2.com/public_html
ServerAlias example2.com
ErrorLog /var/www/example2.com/error.log
CustomLog /var/www/example2.com/requests.log combined
SSLEngine on
SSLCertificateFile /etc/apache2/ssl/example2/apache.crt
SSLCertificateKeyFile /etc/apache2/ssl/example2/apache.key
</VirtualHost> You can also refer to this SSL tutorial. Finally, you can access your sites at https://example.com and https://example2.com | {
"source": [
"https://serverfault.com/questions/744960",
"https://serverfault.com",
"https://serverfault.com/users/328577/"
]
} |
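A quick way to confirm that name-based HTTPS virtual hosts like the ones in the answer above are being picked correctly is to request each hostname by SNI and check which certificate comes back. This is only a sketch — replace your.server.ip with the real address, and example.com / example2.com with your own hostnames:
# Ask for each site explicitly via SNI and print the certificate subject returned
openssl s_client -connect your.server.ip:443 -servername example.com </dev/null 2>/dev/null | openssl x509 -noout -subject
openssl s_client -connect your.server.ip:443 -servername example2.com </dev/null 2>/dev/null | openssl x509 -noout -subject
# Apache should also list both virtual hosts under *:443
sudo apachectl -S
If the second command still returns the first site's certificate, the *:443 virtual host for the second site is not being matched (check ServerName/ServerAlias and that both vhost files are enabled).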
745,764 | Look at the graph of IPv6 adoption rates maintained by Google here: https://www.google.com/intl/en/ipv6/statistics.html Zoom in to the September to December 2015 period. The graph of IPv6 adoption rates is clearly periodic, with much higher rates at the weekend. Why is that? | tl;dr: Because Comcast. Comcast has, by any measure, the largest IPv6 deployment in the world, with the greatest number of users. Commercial/business networks are lagging behind with regards to IPv6. People are not at work on the weekends and such we see higher IPv6 adoption then. I'm sure other residential ISPs contribute to this trend as well. Say what you want about Comcast's business practices. No one can fault them for being a staunch IPv6 advocate from very early on. (of course they were forced into it due to IPv4 not having enough addresses for them to manage their own device pool) | {
"source": [
"https://serverfault.com/questions/745764",
"https://serverfault.com",
"https://serverfault.com/users/190967/"
]
} |
746,551 | We use rsync to back up servers. Unfortunately the network to some servers is slow. It takes up to five minutes for rsync to detect that nothing has changed in huge directories. These huge directory trees contain a lot of small files (about 80k files). I guess that the rsync client sends data for each of the 80k files. Since the network is slow, I would like to avoid sending per-file information 80k times. Is there a way to tell rsync to make a hash-sum of a subdirectory tree? That way the rsync client would send only a few bytes for a huge directory tree. Update Up to now my strategy is to use rsync. But if a different tool fits better here, I am able to switch. Both (server and client) are under my control. Update2 There are 80k files in one directory tree. No single directory has more than 2k files or sub-directories. Update3 Details on the slowness of the network: time ssh einswp 'cd attachments/200 && ls -lLR' >/tmp/list
real 0m2.645s Size of tmp/list file: 2MByte time scp einswp:/tmp/list tmp/
real 0m2.821s Conclusion: scp has the same speed (no surprise) time scp einswp:tmp/100MB tmp/
real 1m24.049s Speed: 1.2MB/s | Some unrelated points: 80K is a lot of files. 80,000 files in one directory? No operating system or app handles that situation very well by default. You just happen to notice this problem with rsync. Check your rsync version Modern rsync handles large directories a lot better than in the past. Be sure you are using the latest version. Even old rsync handles large directories fairly well over high latency links... but 80k files isn't large...it is huge! That said, rsync's memory usage is directly proportional to the number of files in a tree. Large directories take a large amount of RAM. The slowness may be due to a lack of RAM on either side. Do a test run while watching memory usage. Linux uses any left-over RAM as a disk cache, so if you are running low on RAM, there is less disk caching. If you run out of RAM and the system starts using swap, performance will be really bad. Make sure --checksum is not being used --checksum (or -c ) requires reading each and every block of every file. You probably can get by with the default behavior of just reading the modification times (stored in the inode). Split the job into small batches. There are some projects like Gigasync which will "Chop up the workload by using perl to recurse the directory tree, building smallish lists of files to transfer with rsync." The extra directory scan is going to be a large amount of overhead, but maybe it will be a net win. OS defaults aren't made for this situation. If you are using Linux/FreeBSD/etc with all the defaults, performance will be terrible for all your applications. The defaults assume smaller directories so-as not to waste RAM on oversized caches. Tune your filesystem to better handle large directories: Do large folder sizes slow down IO performance? Look at the "namei cache" BSD-like operating systems have a cache that accelerates looking up a name to the inode (the "namei" cache"). There is a namei cache for each directory. If it is too small, it is a hindrance more than an optimization. Since rsync is doing a lstat() on each file, the inode is being accessed for every one of the 80k files. That might be blowing your cache. Research how to tune file directory performance on your system. Consider a different file system XFS was designed to handle larger directories. See Filesystem large number of files in a single directory Maybe 5 minutes is the best you can do. Consider calculating how many disk blocks are being read, and calculate how fast you should expect the hardware to be able to read that many blocks. Maybe your expectations are too high. Consider how many disk blocks must be read to do an rsync with no changed files: each server will need to read the directory and read one inode per file. Let's assume nothing is cached because, well, 80k files has probably blown your cache. Let's say that it is 80k blocks to keep the math simple. That's about 40M of data, which should be readable in a few seconds. However if there needs to be a disk seek between each block, that could take much longer. So you are going to need to read about 80,000 disk blocks. How fast can your hard drive do that? Considering that this is random I/O, not a long linear read, 5 minutes might be pretty excellent. That's 1 / (80000 / 600), or a disk read every 7.5ms. Is that fast or slow for your hard drive? It depends on the model. Benchmark against something similar Another way to think about it is this. 
If no files have changed, ls -Llr does the same amount of disk activity but never reads any file data (just metadata). The time ls -Llr takes to run is your upper bound. Is rsync (with no files changed) significantly slower than ls -Llr ? Then the options you are using for rsync can be improved. Maybe -c is enabled or some other flag that reads more than just directories and metadata (inode data). Is rsync (with no files changed) nearly as fast as ls -Llr ? Then you've tuned rsync as best as you can. You have to tune the OS, add RAM, get faster drives, change filesystems, etc. Talk to your devs 80k files is just bad design. Very few file systems and system tools handle such large directories very well. If the filenames are abcdefg.txt, consider storing them in abdc/abcdefg.txt (note the repetition). This breaks the directories up into smaller ones, but doesn't require a huge change to the code. Also.... consider using a database. If you have 80k files in a directory, maybe your developers are working around the fact that what they really want is a database. MariaDB or MySQL or PostgreSQL would be a much better option for storing large amounts of data. Hey, what's wrong with 5 minutes? Lastly, is 5 minutes really so bad? If you run this backup once a day, 5 minutes is not a lot of time. Yes, I love speed. However if 5 minutes is "good enough" for your customers, then it is good enough for you. If you don't have a written SLA, how about an informal discussion with your users to find out how fast they expect the backups to take. I assume you didn't ask this question if there wasn't a need to improve the performance. However, if your customers are happy with 5 minutes, declare victory and move on to other projects that need your efforts. Update: After some discussion we determined that the bottleneck is the network. I'm going to recommend 2 things before I give up :-). Try to squeeze more bandwidth out of the pipe with compression. However compression requires more CPU, so if your CPU is overloaded, it might make performance worse. Try rsync with and without -z , and configure your ssh with and without compression. Time all 4 combinations to see if any of them perform significantly better than others. Watch network traffic to see if there are any pauses. If there are pauses, you can find what is causing them and optimize there. If rsync is always sending, then you really are at your limit. Your choices are: a faster network something other than rsync move the source and destination closer together. If you can't do that, can you rsync to a local machine then rsync to the real destination? There may be benefits to doing this if the system has to be down during the initial rsync. | {
"source": [
"https://serverfault.com/questions/746551",
"https://serverfault.com",
"https://serverfault.com/users/90324/"
]
} |
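If you want to try the "split the job into small batches" idea from the answer above without an extra tool, a plain shell loop over the top-level subdirectories is often enough. This is only a sketch — /data/src, backuphost and /data/dst are placeholder paths, and whether it helps depends on where your bottleneck really is:
#!/bin/sh
# One rsync run per top-level subdirectory, so no single run has to build
# and transfer a file list for all 80k files at once.
SRC=/data/src
DEST=backuphost:/data/dst
for dir in "$SRC"/*/ ; do
    rsync -a "${dir%/}" "$DEST"/
done
# Pick up any files that sit directly in $SRC itself (directories are excluded)
rsync -a --exclude='*/' "$SRC"/ "$DEST"/
Per the update in the answer, it is also worth timing these runs with and without -z to see whether compression helps or hurts on your link.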
746,562 | I have the following code to make sure that api.example.com will call example.com/api.php . It works on my localhost, but not on my website. RewriteEngine On
# Handle Front Controller...
RewriteCond %{HTTP_HOST} ^api\.(.*)
RewriteRule ^ api.php [L]
# Handle Front Controller...
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^ website.php [L] The browser returns the eror ERR_NAME_NOT_RESOLVED . Whats the problem and how do I fix this? | Some unrelated points: 80K is a lot of files. 80,000 files in one directory? No operating system or app handles that situation very well by default. You just happen to notice this problem with rsync. Check your rsync version Modern rsync handles large directories a lot better than in the past. Be sure you are using the latest version. Even old rsync handles large directories fairly well over high latency links... but 80k files isn't large...it is huge! That said, rsync's memory usage is directly proportional to the number of files in a tree. Large directories take a large amount of RAM. The slowness may be due to a lack of RAM on either side. Do a test run while watching memory usage. Linux uses any left-over RAM as a disk cache, so if you are running low on RAM, there is less disk caching. If you run out of RAM and the system starts using swap, performance will be really bad. Make sure --checksum is not being used --checksum (or -c ) requires reading each and every block of every file. You probably can get by with the default behavior of just reading the modification times (stored in the inode). Split the job into small batches. There are some projects like Gigasync which will "Chop up the workload by using perl to recurse the directory tree, building smallish lists of files to transfer with rsync." The extra directory scan is going to be a large amount of overhead, but maybe it will be a net win. OS defaults aren't made for this situation. If you are using Linux/FreeBSD/etc with all the defaults, performance will be terrible for all your applications. The defaults assume smaller directories so-as not to waste RAM on oversized caches. Tune your filesystem to better handle large directories: Do large folder sizes slow down IO performance? Look at the "namei cache" BSD-like operating systems have a cache that accelerates looking up a name to the inode (the "namei" cache"). There is a namei cache for each directory. If it is too small, it is a hindrance more than an optimization. Since rsync is doing a lstat() on each file, the inode is being accessed for every one of the 80k files. That might be blowing your cache. Research how to tune file directory performance on your system. Consider a different file system XFS was designed to handle larger directories. See Filesystem large number of files in a single directory Maybe 5 minutes is the best you can do. Consider calculating how many disk blocks are being read, and calculate how fast you should expect the hardware to be able to read that many blocks. Maybe your expectations are too high. Consider how many disk blocks must be read to do an rsync with no changed files: each server will need to read the directory and read one inode per file. Let's assume nothing is cached because, well, 80k files has probably blown your cache. Let's say that it is 80k blocks to keep the math simple. That's about 40M of data, which should be readable in a few seconds. However if there needs to be a disk seek between each block, that could take much longer. So you are going to need to read about 80,000 disk blocks. How fast can your hard drive do that? Considering that this is random I/O, not a long linear read, 5 minutes might be pretty excellent. That's 1 / (80000 / 600), or a disk read every 7.5ms. Is that fast or slow for your hard drive? It depends on the model. 
Benchmark against something similar Another way to think about it is this. If no files have changed, ls -Llr does the same amount of disk activity but never reads any file data (just metadata). The time ls -Llr takes to run is your upper bound. Is rsync (with no files changed) significantly slower than ls -Llr ? Then the options you are using for rsync can be improved. Maybe -c is enabled or some other flag that reads more than just directories and metadata (inode data). Is rsync (with no files changed) nearly as fast as ls -Llr ? Then you've tuned rsync as best as you can. You have to tune the OS, add RAM, get faster drives, change filesystems, etc. Talk to your devs 80k files is just bad design. Very few file systems and system tools handle such large directories very well. If the filenames are abcdefg.txt, consider storing them in abdc/abcdefg.txt (note the repetition). This breaks the directories up into smaller ones, but doesn't require a huge change to the code. Also.... consider using a database. If you have 80k files in a directory, maybe your developers are working around the fact that what they really want is a database. MariaDB or MySQL or PostgreSQL would be a much better option for storing large amounts of data. Hey, what's wrong with 5 minutes? Lastly, is 5 minutes really so bad? If you run this backup once a day, 5 minutes is not a lot of time. Yes, I love speed. However if 5 minutes is "good enough" for your customers, then it is good enough for you. If you don't have a written SLA, how about an informal discussion with your users to find out how fast they expect the backups to take. I assume you didn't ask this question if there wasn't a need to improve the performance. However, if your customers are happy with 5 minutes, declare victory and move on to other projects that need your efforts. Update: After some discussion we determined that the bottleneck is the network. I'm going to recommend 2 things before I give up :-). Try to squeeze more bandwidth out of the pipe with compression. However compression requires more CPU, so if your CPU is overloaded, it might make performance worse. Try rsync with and without -z , and configure your ssh with and without compression. Time all 4 combinations to see if any of them perform significantly better than others. Watch network traffic to see if there are any pauses. If there are pauses, you can find what is causing them and optimize there. If rsync is always sending, then you really are at your limit. Your choices are: a faster network something other than rsync move the source and destination closer together. If you can't do that, can you rsync to a local machine then rsync to the real destination? There may be benefits to doing this if the system has to be down during the initial rsync. | {
"source": [
"https://serverfault.com/questions/746562",
"https://serverfault.com",
"https://serverfault.com/users/329928/"
]
} |
746,849 | Couple of servers that have been rebuilt recently are hitting warnings on C:\ drive usage. Looking at the disk there are GBs of data in Windows\Temp being used up by cab_XXXX_X (e.g. cab_5328_2). The suggestion I have found online is to just delete them but I can't help but feel this is only going to prove to be a workaround as they are being generated multiple times a day. Has anyone seen this behaviour before with a Windows Server 2008 R2 SP1 box? I can't see it happening on any other server that we have, only the two that have been rebuilt recently. Am hoping to find a permanent way to stop it as I am sure it cannot be helping performance. | I had a similar issue a while ago, this helped to identify the cause.
This is the bit with the fix: in the C:\windows\Logs\CBS folder, delete the oldest .log file (you can also delete them all); in the C:\windows\temp folder, delete every cab_xxxx. In the following regeneration process, the remaining (CBS) logs were zipped correctly, and C:\windows\temp was left clean | {
"source": [
"https://serverfault.com/questions/746849",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
747,895 | I want to configure haproxy to bind to a tcp as well as tcp6 socket on all interfaces (i.e., 0.0.0.0:80 and :::80 ). I was able to reach this goal with the following settings: listen web
bind :80 v4v6
bind :::80 v6only Is there any shorter way than this? While I expect it to behave different, the v4v6 keyword makes haproxy bind to a v4 socket only. | To listen on the same port for IPv6 and IPv4, use this: bind :::80 v4v6 Admittedly, this was an intuitive guess that appears to have been correct... but rather than just post a "lucky" guess as the answer, even though it works, it seems like I should justify it. the v4v6 keyword makes haproxy bind to a v4 socket only. My first intuition was that it's not v4v6 but rather the use of :80 (or, more precisely, the use of no IP address at all, just a port number) that causes this socket to listen on IPv4 only. This seems to be confirmed in the docs for bind : address is optional and can be a host name, an IPv4 address, an IPv6 address, or '*' . It designates the address the frontend will listen on. If unset, all IPv4 addresses of the system will be-listened on. The same will apply for '*' or the system's special address " 0.0.0.0 ". The IPv6 equivalent is '::'. http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#4.2-bind (emphasis added) So the following three forms are all equivalent, and are all interpreted as being IPv4 by HAProxy: bind :80
bind *:80
bind 0.0.0.0:80 Next, there is one sentence in the docs for v4v6 that could be read in isolation to indicate that v4v6 might be usable to extend one of the above bind statements to listen on IPv6... v4v6 It is used to bind a socket to both IPv4 and IPv6 when it uses the default address. ...hmmm, but I suspect that this actually means "the v6 default address" ( :: )...
on systems which bind to IPv6 only by default. ...and now, I suspect it even more... It has no effect on non-IPv6
sockets, and is overridden by the v6only option. http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#5.1 So, it appears that v4v6 only modifies bind directives that specify the IPv6 default listen address, which is :: (the 3rd : is the separator between the address and the port), and is ignored for others. | {
"source": [
"https://serverfault.com/questions/747895",
"https://serverfault.com",
"https://serverfault.com/users/153134/"
]
} |
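To double-check what HAProxy actually bound after a change like this, list the listening sockets. This is just a verification sketch (ss ships with iproute2 on most modern Linux systems; use netstat -ltnp on older ones):
# Show listening TCP sockets on port 80 and the owning process
sudo ss -ltnp | grep ':80 '
# With "bind :::80 v4v6" you should see a single IPv6 socket (shown as [::]:80 or *:80)
# that also accepts IPv4 clients via v4-mapped addresses; with v6only you would need
# a separate IPv4 bind to serve IPv4 clients.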
747,907 | I have a Digitalocean server with ubuntu linux, nginx 1.4.6 (running on port 80), varnish 3.0.5 (running on port 8080, together)
I have two domains, say siteA.com and siteB.com. In nginx's default.conf I configured it so that the front door (port 80) uses the siteA folder as root; the code is: server {
listen *:8080 default_server;
root /home/sitea;
index index.html index.htm index.php;
server_name IP_domain_siteA;
location / {
autoindex on;
autoindex_exact_size off;
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
# Uncomment to enable naxsi on this location
# include /etc/nginx/naxsi.rules
}
location ~ \.php$ {
try_files $uri =404;
expires off;
fastcgi_read_timeout 900s;
fastcgi_index index.php;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
} but I want siteB to use the same port, so that both domains can be accessed on the same server. So when accessing: siteA.com => it should serve this folder on my server:
/home/siteA/index.php
siteB.com => it should serve this folder on the same server (same IP as well):
/home/siteB/index.html How do I do it? I have already tried everything, even including these backend lines in default.VCL (the Varnish configuration). backend siteA{
.host = "sitea.com";
.port = "8080";
}
backend siteB{
.host = "siteb.com";
.port = "8080";
}
sub vcl_recv {
if (req.http.host == "sitea.com") {
#You will need the following line only if your backend has multiple virtual host names
set req.http.host = "sitea.com";
set req.backend = siteA;
return (lookup);
}
if (req.http.host == "siteb.com") {
#You will need the following line only if your backend has multiple virtual host names
set req.http.host = "siteb.com";
set req.backend = siteB;
return (lookup);
}
} This did not solve it; it returns the error: BACKEND HOST "siteB.com": resolves to multiple IPv4 addresses. Only
one address is allowed. I already use virtual hosts for other folders with nginx, but I was only able to change PORTS; the server_name line pointing to domainA or domainB doesn't work... because it is the same IP. What can I do? Does anyone have suggestions? Thank you. EDIT 1: nginx config for both sites is here (siteA): server {
listen *:8080 default_server;
root /var/www/public/sitea;
try_files $uri $uri/ @handler;
index index.php index.html index.htm;
# Make site accessible from http://localhost/
##domain address 1 of server...
server_name www.sitea.com.br sitea.com.br;
#location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
#try_files $uri $uri/ =404;
# Uncomment to enable naxsi on this location
# include /etc/nginx/naxsi.rules
#}
## These locations would be hidden by .htaccess normally
location ^~ /app/ { deny all; }
location ^~ /includes/ { deny all; }
location ^~ /lib/ { deny all; }
location ^~ /media/downloadable/ { deny all; }
location ^~ /pkginfo/ { deny all; }
location ^~ /report/config.xml { deny all; }
location ^~ /var/ { deny all; }
location /var/export/ { ## Allow admins only to view export folder
auth_basic "Restricted"; ## Message shown in login window
auth_basic_user_file htpasswd; ## See /etc/nginx/htpassword
autoindex on;
proxy_read_timeout 150;
}
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
location @handler { ## Magento uses a common front handler
rewrite / /index.php;
}
location ~ .php/ { ## Forward paths like /js/index.php/x.js to relevant handler
rewrite ^(.*.php)/ $1 last;
}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_read_timeout 120;
include fastcgi_params;
}
} the other site (siteB): server {
listen 8090;
client_max_body_size 20M;
root /var/www/public/siteb;
index index.html index.htm index.php;
##domain address 2 of server...
server_name www.siteb.com.br siteb.com.br;
location / {
autoindex on;
try_files $uri $uri/ /index.php?q=$request_uri;
autoindex_exact_size off;
proxy_pass http://localhost:8080;
}
location ~ \.php$ {
#try_files $uri =404;
expires off;
fastcgi_read_timeout 900s;
fastcgi_index index.php;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
} | To listen on the same port for IPv6 and IPv4, use this: bind :::80 v4v6 Admittedly, this was an intuitive guess that appears to have been correct... but rather than just post a "lucky" guess as the answer, even though it works, it seems like I should justify it. the v4v6 keyword makes haproxy bind to a v4 socket only. My first intuition was that it's not v4v6 but rather the use of :80 (or, more precisely, the use of no IP address at all, just a port number) that causes this socket to listen on IPv4 only. This seems to be confirmed in the docs for bind : address is optional and can be a host name, an IPv4 address, an IPv6 address, or '*' . It designates the address the frontend will listen on. If unset, all IPv4 addresses of the system will be-listened on. The same will apply for '*' or the system's special address " 0.0.0.0 ". The IPv6 equivalent is '::'. http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#4.2-bind (emphasis added) So the following three forms are all equivalent, and are all interpreted as being IPv4 by HAProxy: bind :80
bind *:80
bind 0.0.0.0:80 Next, there is one sentence in the docs for v4v6 could be read in isolation to indicate that v4v6 might be usable to extend one of the above bind statements to listen on IPv6... v4v6 It is used to bind a socket to both IPv4 and IPv6 when it uses the default address. ...hmmm, but I suspect that this actually means "the v6 default address" ( :: )... Doing so is sometimes necessary
on systems which bind to IPv6 only by default. ...and now, I suspect it even more... It has no effect on non-IPv6
sockets, and is overridden by the v6only option. http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#5.1 So, it appears that v4v6 only modifies bind directives that specify the IPv6 default listen address, which is :: (the 3rd : is the separator between the address and the port), and is ignored for others. | {
"source": [
"https://serverfault.com/questions/747907",
"https://serverfault.com",
"https://serverfault.com/users/331007/"
]
} |
747,953 | So I've been tasked to document and review the status of the UPSes for my current employer and identify any changes that need to be made. Generally easy stuff, however I came across a few units that are plugged into both regular power and a UPS outlet. I've done similar before but it reminded me of an electrician I used to work with who said doing this could damage the equipment. I've been looking online to figure out what the specifics of this warning was about, but I haven't found much. From memory, I think it had something to do with using circuits from different sources provided by Three-Phase power. I was working on industrial sites at the time, and some power was provided to my servers by three-phase transformers. Not all was, however. My question is: Are there any instances where running a system with redundant power supplies could be damaged by running them on different power circuits, assuming both circuits are clean (eg, running both PSU on either circuit alone would cause no damage). Thanks, | With multi-phase AC systems (motors, etc.), you're right, bad things can and will happen if one of the phases drops out. However, with computer PSUs, each of them operates completely separately, converting its AC input voltage to a variety of DC voltages for the computer system. You can safely run redundant PSUs on different circuits, different phases, etc. Doing so is actually a really great idea to reduce the number of components that are fate-shared. | {
"source": [
"https://serverfault.com/questions/747953",
"https://serverfault.com",
"https://serverfault.com/users/109162/"
]
} |
747,983 | I'm attempting to use robocopy to move files older than 5 years to another server, to reduce the size of a 3TB volume under 2TB so that the machine can be P2V'ed using Microsoft VM Converter. There are actually 3 identical servers (3 offices for the same company), and this command has worked fine on 2 out of the 3. But when run on one server in particular, the output is The filename, directory name, or volume label syntax is incorrect There are plenty of search results for this error on Google, but they all seem to deal with copying from/to network shares (either mapped or UNC). The output I'm getting indicates that robocopy is finding an issue with the local folder, which is concerning (and not in any search results). Full input/output included below, but are there any suggestions for things I might be doing wrong, before I turn to CHKDSK? A full scan could take days and would slow access for all users, so I'd prefer to avoid it. (Note: HP ACU says no disk/volume problems, and the disk does not otherwise indicate any error) Input robocopy D:\Local\Folder X: /e /z copy:DATSO /move /minlad:1800 /log:D:\robocopy.log /tee Output 2016/01/10 20:32:23 ERROR 123 (0x0000007B) Scanning Source Directory D:\Local\Folder
The filename, directory name, or volume label syntax is incorrect.
Waiting 30 seconds... | With multi-phase AC systems (motors, etc.), you're right, bad things can and will happen if one of the phases drops out. However, with computer PSUs, each of them operates completely separately, converting its AC input voltage to a variety of DC voltages for the computer system. You can safely run redundant PSUs on different circuits, different phases, etc. Doing so is actually a really great idea to reduce the number of components that are fate-shared. | {
"source": [
"https://serverfault.com/questions/747983",
"https://serverfault.com",
"https://serverfault.com/users/331054/"
]
} |
747,985 | I have a dedicated server, and I want to split it into two virtual servers with two different IP addresses. I bought two external IP addresses for this server and now I have three: one main address, one for the first VPS and one for the second VPS. I have installed Debian 7 x64 on the server along with OpenVZ, set it to boot the OpenVZ kernel, and created a container with Debian 8.0, adding only an external IP (the main address is 8x.xxx.132.7x; I have added 8x.xxx.249.20x to the container). I ran the container and connected over SSH to 8x.xxx.249.20x. This works perfectly. But when I try to execute ping google.rs in the container, I get the error: ping: unknown host google.rs How do I allow an internet connection from the container? P.S. I've been trying to fix this since tonight (3 hours). Nothing I found on the internet has helped. Output from the container when executing ifconfig : lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:127.0.0.2 P-t-P:127.0.0.2 Bcast:0.0.0.0 Mask:255.255.255.255
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
RX packets:179 errors:0 dropped:0 overruns:0 frame:0
TX packets:158 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:19000 (18.5 KiB) TX bytes:17609 (17.1 KiB)
venet0:0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:89.163.249.207 P-t-P:89.163.249.207 Bcast:89.163.249.207 Mask:255.255.255.255
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1 Route Table: route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
89.163.132.1 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
89.163.249.221 0.0.0.0 255.255.255.255 UH 0 0 0 venet0
89.163.249.207 0.0.0.0 255.255.255.255 UH 0 0 0 venet0
89.163.132.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
0.0.0.0 89.163.132.1 0.0.0.0 UG 0 0 0 eth0 | With multi-phase AC systems (motors, etc.), you're right, bad things can and will happen if one of the phases drops out. However, with computer PSUs, each of them operates completely separately, converting its AC input voltage to a variety of DC voltages for the computer system. You can safely run redundant PSUs on different circuits, different phases, etc. Doing so is actually a really great idea to reduce the number of components that are fate-shared. | {
"source": [
"https://serverfault.com/questions/747985",
"https://serverfault.com",
"https://serverfault.com/users/331053/"
]
} |
748,516 | I've been tailing my server's access log while working today, and have noticed one of my client's wordpress sites getting hammered with login attempts from an IP from out of the country. I wanted to deny access from this IP address and tried the following ufw command: sudo ufw deny from xx.xx.xx.xx to any I see the rule has been added and the firewall is active, but I'm still seeing a ton of post's to the login page from that ip address. I've also tried to use iptables, though I'm not very familiar with the tool: sudo iptables -A INPUT -s xx.xx.xx.xx -j DROP Have I gone about this wrong? I would think that after denying access to the ip address that it wouldn't show up in my apache access log with a 200 ok status for the post to the login page. Edit:
As I mentioned, ufw is active and the rule is in place, here's the output of ufw status (with the ip blocked out): root@mel:~# ufw status
Status: active
To Action From
-- ------ ----
22 ALLOW Anywhere
80 ALLOW Anywhere
1723 ALLOW Anywhere
8080 ALLOW Anywhere
6273 ALLOW Anywhere
36728 DENY Anywhere
Anywhere DENY xx.xx.xx.xx
22 ALLOW Anywhere (v6)
80 ALLOW Anywhere (v6)
1723 ALLOW Anywhere (v6)
8080 ALLOW Anywhere (v6)
6273 ALLOW Anywhere (v6)
36728 DENY Anywhere (v6) | The order of the firewall rules is important. Since you have allowed port 80 for everyone at the beginning, that rule matches every request and the deny rule that comes later will never be reached. So, if you need to block something specific, put it at the beginning and then allow all. To see your rules with a reference number, use this: sudo ufw status numbered Then first remove the deny rule that you have added: sudo ufw delete rule_number_here Then add it again at the top: sudo ufw insert 1 deny from xx.xx.xx.xx to any For further reference: https://help.ubuntu.com/community/UFW#Deny_Access Please also note that ufw is not the best tool to mitigate such attacks. Try to use fail2ban , which can do this dynamically. | {
"source": [
"https://serverfault.com/questions/748516",
"https://serverfault.com",
"https://serverfault.com/users/216876/"
]
} |
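Putting the steps from the answer above together, the whole fix is only a few commands. This is a sketch — xx.xx.xx.xx and the rule number 7 are placeholders for your own values:
# List rules with their index numbers
sudo ufw status numbered
# Delete the old deny rule by its number (ufw asks for confirmation)
sudo ufw delete 7
# Re-insert the deny at position 1 so it is evaluated before the "allow 80" rule
sudo ufw insert 1 deny from xx.xx.xx.xx to any
# Verify the new ordering
sudo ufw status numbered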
748,520 | In MDT 2013, is there a way to change the file name of the WinPE WIM that's generated from updating the share? I'm going to be using multiple deployment shares with WDS, and I was trying to automate importing the WinPE WIMs into WDS. I know you can change the image name, but it would be easier for the WIM file name to be different from LiteTouchPE_x64 (or x86). I was thinking about just adding an rni line to the PS script for auto importing to WDS, but I wasn't sure if it would cause any issues if I manually renamed the WIM file. | The order of the firewall rules are important. Since you have allowed port 80 for all at the beginning, this rule will match for all request and the deny rule that comes later will never be matched. So, if you need to block something particluarly , put it at the beginning and then allow all . To see your rules with a reference number, use this: sudo ufw status numbered Then remove the deny rule first that you have added: sudo ufw delete rule_number_here Then add it again at the top: sudo ufw insert 1 deny from xx.xx.xx.xx to any For further Ref: https://help.ubuntu.com/community/UFW#Deny_Access Please also note that, ufw is not the best tool to mitigate such attacks. Try to use fail2ban , that can do this dynamically. | {
"source": [
"https://serverfault.com/questions/748520",
"https://serverfault.com",
"https://serverfault.com/users/329416/"
]
} |
749,130 | I am currently learning about installing Kippo SSH. From the tutorial, it said that I should reconfigure SSH port from 22 to a different port (which in this case 3389). So now whenever I try to SSH from a client, it will connect to port 3389. From the tutorial, the reason behind this is that "we do not want Kippo to have root access". My question being, what difference does it make running SSH from port 22 vs port 3389? | Most servers require root access if you want to open ports lower than 1024. The TCP/IP port numbers below 1024 are special in that normal users are not allowed to run servers on them. This is a security feaure, in that if you connect to a service on one of these ports you can be fairly sure that you have the real thing, and not a fake which some hacker has put up for you. See: https://www.w3.org/Daemon/User/Installation/PrivilegedPorts.html | {
"source": [
"https://serverfault.com/questions/749130",
"https://serverfault.com",
"https://serverfault.com/users/277325/"
]
} |
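You can see the privileged-port restriction described in the answer from any unprivileged shell on a default Linux configuration. A small sketch, assuming python3 is installed and the ports are otherwise free:
# As a normal user, binding below 1024 is refused by the kernel
python3 -c "import socket; socket.socket().bind(('0.0.0.0', 80))"
# -> PermissionError: [Errno 13] Permission denied
# The same bind on an unprivileged port (>= 1024) succeeds silently
python3 -c "import socket; socket.socket().bind(('0.0.0.0', 3389))"
This is the practical consequence of not running Kippo as root: a non-root process cannot bind a port below 1024 directly, so it has to use a high port or sit behind a redirect.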
749,474 | The authorized_keys has a command="..." option that restricts a key to a single command. Is there a way to restrict a key to multiple commands? E.g. by having a regex there, or by editing some other configuration file? | You can have only one command per key, because the command is “forced”. But you can use a wrapper script. The called command gets the original command line as environment variable $SSH_ORIGINAL_COMMAND , which it can evaluate. E.g. put this in ~/.ssh/allowed-commands.sh : #!/bin/sh
#
# You can have only one forced command in ~/.ssh/authorized_keys. Use this
# wrapper to allow several commands.
case "$SSH_ORIGINAL_COMMAND" in
"systemctl restart cups")
systemctl restart cups
;;
"shutdown -r now")
shutdown -r now
;;
*)
echo "Access denied"
exit 1
;;
esac Then reference it in ~/.ssh/authorized_keys with command="/home/user/.ssh/allowed-commands.sh",… | {
"source": [
"https://serverfault.com/questions/749474",
"https://serverfault.com",
"https://serverfault.com/users/331525/"
]
} |
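For completeness, a full authorized_keys entry that uses the wrapper above normally also disables the other SSH features for that key. This is a sketch — the key type, the key material and the path are placeholders:
command="/home/user/.ssh/allowed-commands.sh",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAAAC3...restofkey... automation@client
On newer OpenSSH versions the single restrict option can be used instead of listing the individual no-* options.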
749,801 | When specifying servers, like (I would assume) many engineers who aren't experts in storage, I'll generally play it safe (and perhaps be a slave to marketing) by standardising on a minimum of 10k SAS drives (and therefore are "enterprise"-grade with a 24x7 duty cycle, etc) for "system" data (usually OS and sometimes apps), and reserve the use of 7.2k mid/nearline drives for storage of non-system data where performance isn't a significant factor. This is all assuming 2.5" (SFF) disks, as 3.5" (LFF) disks are only really relevant for high-capacity, low IOPs requirements. In situations where there isn't a massive amount of non-system data, I'll generally place it on the same disks/array as the system data, meaning the server only has 10k SAS drives (generally a "One Big RAID10" type of setup these days). Only if the size of the non-system data is significant do I usually consider putting it on a separate array of 7.2k mid/nearline disks to keep the cost/GB down. This has lead me to wonder: in some situations, could those 10k disks in the RAID10 array have been replaced with 7.2k disks without any significant negative consequences? In other words, am I sometimes over-spec'ing (and keeping the hardware vendors happy) by sticking to a minimum of 10k "enterprise" grade disks, or is there a good reason to always stick to that as a minimum? For example, take a server that acts as a hypervisor with a couple of VMs for a typical small company (say 50 users). The company has average I/O patterns with no special requirements. Typical 9-5, Mon-Fri office, with backups running for a couple of hours a night. The VMs could perhaps be a DC and a file/print/app server. The server has a RAID10 array with 6 disks to store all the data (system and non-system data). To my non-expert eye, it looks as though mid/nearline disks may do just fine. Taking HP disks as an example: Workload: Midline disks are rated for <40% workload. With the office only open for 9 hours a day and average I/O during that period unlikely to be anywhere near maximum, it seems unlikely workload would go over 40%. Even with a couple of hours of intense I/O at night for backups, my guess is it would still be below 40% Speed: Although the disks are only 7.2k, performance is improved by spreading it across six disks So, my question: is it sensible to stick a minimum of 10k SAS drives, or are 7.2k midline/nearline disks actually more than adequate in many situations? If so, how do I gauge where the line is and avoid being a slave to ignorance by playing it safe? My experience is mostly with HP servers, so the above may have a bit an HP slant to it, but I would assume the principles are fairly vendor independent. | There's an interesting intersection of server design, disk technology and economics here: Also see: Why are Large Form Factor (LFF) disks still fairly prevalent? The move toward dense rackmount and small form-factor servers. E.g. you don't see many tower offerings anymore from the major manufacturers, whereas the denser product lines enjoy more frequent revisions and have more options/availability. Stagnation in 3.5" enterprise (15k) disk development - 600GB 15k 3.5" is about as large as you can go. Slow advancement in 2.5" near line (7.2k) disk capacities - 2TB is the largest you'll find there. Increased availability and lower pricing of high capacity SSDs. Storage consolidation onto shared storage. Single-server workloads that require high capacity can sometimes be serviced via SAN. 
The maturation of all-flash and hybrid storage arrays, plus the influx of storage startups. The above are why you generally find manufacturers focusing on 1U/2U servers with 8-24 2.5" disk drive bays. 3.5" disks are for low-IOPs high-capacity use cases (2TB+). They're best for external storage enclosures or SAN storage fronted by some form of caching. In enterprise 15k RPM speeds, they are only available up to 600GB. 2.5" 10k RPM spinning disks are for higher IOPS needs and are generally available up to 1.8TB capacity. 2.5" 7.2k RPM spinning disks are a bad call because they offer neither capacity, performance, longevity nor price advantages. E.g. The cost of a 900GB SAS 10k drive is very close to that of a 1TB 7.2k RPM SAS. Given the small price difference, the 900GB drive is the better buy. In the example of 1.8TB 10k SAS versus 2.0TB 7.2k SAS , the prices are also very close. The warranties are 3-year and 1-year, respectively. So for servers and 2.5" internal storage, use SSD or 10k. If you need capacity needs and have 3.5" drive bays available internally or externally, use 7.2k RPM. For the use cases you've described, you're not over-configuring the servers. If they have 2.5" drive bays, you should really just be using 10k SAS or SSD. The midline disks are a lose on performance, capacity, have a significantly shorter warranty and won't save much on cost. | {
"source": [
"https://serverfault.com/questions/749801",
"https://serverfault.com",
"https://serverfault.com/users/90144/"
]
} |
750,175 | The docker-compose run reference states that it has the --rm option to Remove container after run. I want to make this a default run behavior for some of services I specify in docker-compose.yml . So, the questions are : Can it somehow be specified in docker-compose.yml ? If it can, how can I do that? ( INB4 "Use bash aliases, Luke!" : Of course I can enforce this outside of docker-compose.yml by setting some bash alias like alias docker-compose-run='docker-compose run --rm' but I'm interested in how can I enforce that exactly through docker-compose.yml , not in some extrnal way.) | TLDR: It's still not possible 2018-11 ; use docker-compose down or docker-compose run --rm I want to give an updated answer to this question because it's almost 3 years later. This will save others some searching. I had the same question and here are the workarounds I found (including the one from the question itself): docker-compose down which does the following: Stops containers and removes containers, networks, volumes, and images
created by up. By default, the only things removed are: - Containers for services defined in the Compose file
- Networks defined in the networks section of the Compose file
- The default network, if one is used Networks and volumes defined as external are never removed. Although you cannot declare it in docker-compose.yml, it will save you some hassle; especially with volumes and networks. docker-compose run --rm --rm - Remove container after run. Ignored in detached mode. Runs a one-time command against a service. For example, the following
command starts the web service and runs bash as its command. docker-compose run web bash [...]
the command passed by run overrides the command defined in the service configuration. [...] the command does not create any of the ports specified in the service configuration. This prevents port collisions with already-open ports. If you do want the service’s ports to be created and mapped to the host, specify the --service-ports flag docker-compose rm -f -f, --force Don't ask to confirm removal | {
"source": [
"https://serverfault.com/questions/750175",
"https://serverfault.com",
"https://serverfault.com/users/214542/"
]
} |
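As a usage sketch of the workarounds above (the service name app and the command are made up — adjust them to your compose file):
# One-off task: the container is removed as soon as it exits
docker-compose run --rm app ./scripts/migrate.sh
# Normal lifecycle: bring the stack up, then tear it down afterwards,
# which also removes the containers (add -v to remove named volumes too)
docker-compose up -d
docker-compose down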
750,199 | I'm trying to integrate GLPI with Active Directory. I'm using windows XAMPP V3.2.2 and GLPI 0.90.1. While setting up the LDAP Directory, I'm getting an error "Test failed: Main server ADSrv". What am I doing wrong? | TLDR: It's still not possible 2018-11 ; use docker-compose down or docker-compose run --rm I want to give an updated answer to this question because it's almost 3 years later. This will save others some searching. I had the same question and here are the workarounds I found (including the one from the question itself): docker-compose down which does the following: Stops containers and removes containers, networks, volumes, and images
created by up. By default, the only things removed are: - Containers for services defined in the Compose file
- Networks defined in the networks section of the Compose file
- The default network, if one is used Networks and volumes defined as external are never removed. Although you cannot declare it in docker-compose.yml it will safe you some hassle; especially with volumes and networks. docker-compose run --rm --rm - Remove container after run. Ignored in detached mode. Runs a one-time command against a service. For example, the following
command starts the web service and runs bash as its command. docker-compose run web bash [...]
the command passed by run overrides the command defined in the service configuration. [...] the command does not create any of the ports specified in the service configuration. This prevents port collisions with already-open ports. If you do want the service’s ports to be created and mapped to the host, specify the --service-ports flag docker-compose rm -f -f, --force Don't ask to confirm removal | {
"source": [
"https://serverfault.com/questions/750199",
"https://serverfault.com",
"https://serverfault.com/users/332777/"
]
} |
750,206 | With NetScaler, I can redirect all SSL traffic to a specific host depending on its subdomain. Example: +-------------+
+-------> |webserver 443|
| +-------------+
+----------+ +--------------+ www.example.com:443
| internet | +----> | reverseproxy |
+----------+ +--------------+
| +-----------+
+-------> |openvpn 443|
+-----------+
vpn.example.com:443 The traffic is just redirected and is not decrypted, because we have not configured any certificate on the NetScaler. We have just one "wildcard" certificate for the reverse proxy. I should say that I did not configure the NetScaler myself, so it is possible I'm wrong about the configuration. Question: I would like to know if it is possible to do the same with open-source software like Nginx or Squid? How does this configuration work? | TLDR: It's still not possible 2018-11 ; use docker-compose down or docker-compose run --rm I want to give an updated answer to this question because it's almost 3 years later. This will save others some searching. I had the same question and here are the workarounds I found (including the one from the question itself): docker-compose down which does the following: Stops containers and removes containers, networks, volumes, and images
created by up. By default, the only things removed are: - Containers for services defined in the Compose file
- Networks defined in the networks section of the Compose file
- The default network, if one is used Networks and volumes defined as external are never removed. Although you cannot declare it in docker-compose.yml it will safe you some hassle; especially with volumes and networks. docker-compose run --rm --rm - Remove container after run. Ignored in detached mode. Runs a one-time command against a service. For example, the following
command starts the web service and runs bash as its command. docker-compose run web bash [...]
the command passed by run overrides the command defined in the service configuration. [...] the command does not create any of the ports specified in the service configuration. This prevents port collisions with already-open ports. If you do want the service’s ports to be created and mapped to the host, specify the --service-ports flag docker-compose rm -f -f, --force Don't ask to confirm removal | {
"source": [
"https://serverfault.com/questions/750206",
"https://serverfault.com",
"https://serverfault.com/users/332783/"
]
} |
750,430 | When testing the SOA setting for example-domain.org on http://mxtoolbox.com/ , it says that SOA Serial Number Format is Invalid The entry is ns-885.awsdns-46.net. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400 That, however, is exactly what Amazon suggest in their Route 53 documentation on http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/SOA-NSrecords.html mxtoolbox issues a warning - why? They also consider the missing DMARC settings as an error. Please bear with me - I am not a sysadmin. Any hint that uses a language that a developer can understand is greatly appreciated. | There is a recommendation that the SOA serial number use a format that is four digits of year, two digits of month, two digits of day and two digits of count of changes in the same day. This format is common, but far from universal (look at .COM for a high-profile example of a zone that doesn't). The tool you got the error message from is oversensitive and should be adjusted. | {
"source": [
"https://serverfault.com/questions/750430",
"https://serverfault.com",
"https://serverfault.com/users/60573/"
]
} |
750,684 | One of the common server failure scenarios is bad DRAM, sometimes even when ECC memory is used. memtest86+ is one of the most useful tools to diagnose DRAM problems. As it loads itself at the start of the memory, I've been wondering if memtest86+ checks the part of the memory which memtest86+ is loaded into. Is the memory allocated to memtest86+ so small that it doesn't matter, or is it possible that memtest86+ could miss a defect in the DRAM because it can't test the memory locations it's residing in? | Obviously, memtest86+ cannot test the memory region which currently contains the memtest86+ executable code (but if there are memory errors in that region, it is very likely that the test itself will crash). However, memtest86+ is able to relocate its own code to a different address at runtime, and by using this trick it is able to test all memory which it is allowed to use by the firmware (BIOS) — just not all at once. This code relocation is described in README.background inside the memtest86+ source code archive (the file is slightly out of date — e.g., it states that the addresses used for memtest86+ code are 0x2000 and 0x200000, but the low address as defined in the source is actually 0x10000, and the high address is either 0x2000000 or 0x300000 depending on the amount of memory in the machine). But even with this relocation trick memtest86+ is not able to test all memory for the following reasons: Usually the firmware (BIOS) reserves some RAM regions for its own use (e.g., ACPI tables). While these RAM regions can be accessed by CPU, writing anything into them can result in unpredictable behavior. Some part of RAM is used for the System Management Mode and is not even accessible from the CPU outside of the privileged SMM code. The RAM address range between 640K and 1M is inaccessible due to quirks of the legacy PC memory layout (some of this RAM may be used as a shadow for BIOS ROM and for SMM, other parts may be completely inaccessible). | {
"source": [
"https://serverfault.com/questions/750684",
"https://serverfault.com",
"https://serverfault.com/users/110212/"
]
} |
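On a running Linux system you can get a feel for how much RAM the firmware keeps out of reach — the same kind of reserved regions the answer says memtest86+ has to work around. A small sketch (region names and availability of the boot messages vary by machine):
# Firmware (e820/EFI) memory map logged at boot, including "reserved" ranges
sudo dmesg | grep -i 'e820\|efi: mem' | head -n 40
# Current physical address map, with "reserved" and ACPI table entries
sudo grep -iE 'reserved|acpi' /proc/iomem | head -n 20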
750,686 | I have some Cobian Backup tasks to copy a file to an external hard drive. To prevent it from being compromised, the drive will not be mounted until the task is launched. We use the following AutoHotkey script: RunAs, Administrator, adminpassword
Run, cmd.exe /C "C:\Folder\MontaSiempreX.bat"
RunAs
Exit MontaSiempreX.bat just mounts the unit called COPIASALFA on X letter using this tools: @echo off
set NUEVA=X:
for /f "tokens=3 delims= " %%a in ('echo list volume ^| diskpart ^| findstr "COPIASALFA"') do @set ANTIGUA=%%a
set "ANTIGUA=%ANTIGUA%:"
if "%ANTIGUA%" == ":" (LoadMedia.exe %NUEVA%) else (ReMount.exe %ANTIGUA% %NUEVA%)
ping localhost -n 6 > nul After the backup is done, the following script is launched as an Administrator (using a new AutoHotkey script and RunAs command as before): @echo off
EjectMedia.exe X -o The problem is that the Cobian task sometimes fails. When it does, the log will show the following error: ERR 2016-01-21 04:00 No se pudo copiar el fichero
"C:\COPIAS\Jueves\Jueves.zip": El parámetro no es correcto Can someone help me? Jueves.zip is the source file... Thanks in advance. | Obviously, memtest86+ cannot test the memory region which currently contains the memtest86+ executable code (but if there are memory errors in that region, it is very likely that the test itself will crash). However, memtest86+ is able to relocate its own code to a different address at runtime, and by using this trick it is able to test all memory which it is allowed to use by the firmware (BIOS) — just not all at once. This code relocation is described in README.background inside the memtest86+ source code archive (the file is slightly out of date — e.g., it states that the addresses used for memtest86+ code are 0x2000 and 0x200000, but the low address as defined in the source is actually 0x10000, and the high address is either 0x2000000 or 0x300000 depending on the amount of memory in the machine). But even with this relocation trick memtest86+ is not able to test all memory for the following reasons: Usually the firmware (BIOS) reserves some RAM regions for its own use (e.g., ACPI tables). While these RAM regions can be accessed by CPU, writing anything into them can result in unpredictable behavior. Some part of RAM is used for the System Management Mode and is not even accessible from the CPU outside of the privileged SMM code. The RAM address range between 640K and 1M is inaccessible due to quirks of the legacy PC memory layout (some of this RAM may be used as a shadow for BIOS ROM and for SMM, other parts may be completely inaccessible). | {
"source": [
"https://serverfault.com/questions/750686",
"https://serverfault.com",
"https://serverfault.com/users/308287/"
]
} |
750,693 | Apparently it's as simple as just sending an e-mail to a local account, and configuring Postfix to receive mail for the local account. Not too shabby! Original Question This hypothetic server would run Postfix on LAMP. I have a contact form on a webpage, of which contents are sent with PHP mail() to the contact e-mail address. Is there a way to just save the form contents as a mail directly to the server (using postfix) without actually sending it out of the server and receiving it again? The reason I want to do it this way is because I want to use the user's e-mail address as the sender address, and this could be problematic wthout the correct configuration of headers and things, as far as I've come to understand in regards to blacklists etc. In addition to this, of course, it seems very redundant for the server to spend bandwidth sending mail to itself. :) All I need is to be pointed in the right direction here, but if someone would give detailed explanations, that would be much appreciated as well. Note: I have tried finding information on how to do this by google searching but to no avail. I have not experimented with postfix myself because I don't have any idea where to start. | Obviously, memtest86+ cannot test the memory region which currently contains the memtest86+ executable code (but if there are memory errors in that region, it is very likely that the test itself will crash). However, memtest86+ is able to relocate its own code to a different address at runtime, and by using this trick it is able to test all memory which it is allowed to use by the firmware (BIOS) — just not all at once. This code relocation is described in README.background inside the memtest86+ source code archive (the file is slightly out of date — e.g., it states that the addresses used for memtest86+ code are 0x2000 and 0x200000, but the low address as defined in the source is actually 0x10000, and the high address is either 0x2000000 or 0x300000 depending on the amount of memory in the machine). But even with this relocation trick memtest86+ is not able to test all memory for the following reasons: Usually the firmware (BIOS) reserves some RAM regions for its own use (e.g., ACPI tables). While these RAM regions can be accessed by CPU, writing anything into them can result in unpredictable behavior. Some part of RAM is used for the System Management Mode and is not even accessible from the CPU outside of the privileged SMM code. The RAM address range between 640K and 1M is inaccessible due to quirks of the legacy PC memory layout (some of this RAM may be used as a shadow for BIOS ROM and for SMM, other parts may be completely inaccessible). | {
"source": [
"https://serverfault.com/questions/750693",
"https://serverfault.com",
"https://serverfault.com/users/320040/"
]
} |
750,856 | I'm working on several Ansible playbooks to spin up a new server instance. There are approximately 15 different playbooks I need to run in a specific order to successfully spin up a server. My initial thought was to write a shell script that executes ansible-playbook playbook_name.yml and duplicate it one entry for each playbook I need to run. Is there a smarter/better way to do this using a master playbook and if so what would it look like (examples are appreciated). I could write one monolithic playbook that does it all but there are some plays that run as root first then as a sudo user later. | Build many sub-playbooks and aggregate them via include statements. - include: playbook-one.yml
- include: playbook-two.yml If your playbooks must run in order and if all of them are mandatory, build a main playbook and include files with tasks. A playbook should always be a closed process. | {
"source": [
"https://serverfault.com/questions/750856",
"https://serverfault.com",
"https://serverfault.com/users/174425/"
]
} |
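A minimal sketch of what the master playbook described in the answer above could look like - the file names here (site.yml, bootstrap.yml, app-deploy.yml, tasks/users.yml) are hypothetical placeholders, not names from the question:
# site.yml - aggregates the sub-playbooks in the order they must run
- include: bootstrap.yml      # e.g. the plays that run as root
- include: app-deploy.yml     # e.g. the plays that run later as a sudo user
Inside a single playbook, shared task files can be pulled in the same way under a play's tasks, e.g. - include: tasks/users.yml. On Ansible 2.4+ the same idea is written as import_playbook / import_tasks, but the structure is identical.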
750,902 | Let's Encrypt has announced they have: Turned on support for the ACME DNS challenge How do I make ./letsencrypt-auto generate a new certificate using DNS challenge domain validation? EDIT I mean: How do I avoid http/https port binding, by using the newly announced feature (2015-01-20) that lets you prove the domain ownership by adding a specific TXT record in the DNS zone of the target domain? | Currently it is possible to perform DNS validation, also with the certbot LetsEncrypt client in manual mode. Automation is possible as well (see below). Manual plugin You can either perform a manual verification - with the manual plugin. certbot -d bristol3.pki.enigmabridge.com --manual --preferred-challenges dns certonly Certbot will then provide you instructions to manually update a TXT record for the domain in order to proceed with the validation. Please deploy a DNS TXT record under the name
_acme-challenge.bristol3.pki.enigmabridge.com with the following value:
667drNmQL3vX6bu8YZlgy0wKNBlCny8yrjF1lSaUndc
Once this is deployed,
Press ENTER to continue Once you have updated the DNS record, press Enter, certbot will continue and if the LetsEncrypt CA verifies the challenge, the certificate is issued as normally. You may also use a command with more options to minimize interactivity and answering certbot questions. Note that the manual plugin does not yet support non-interactive mode. certbot --text --agree-tos --email [email protected] -d bristol3.pki.enigmabridge.com --manual --preferred-challenges dns --expand --renew-by-default --manual-public-ip-logging-ok certonly Renewal does not work with the manual plugin as it runs in non-interactive mode. More info in the official certbot documentation . Update: manual hooks In the new certbot version you can use hooks , e.g., --manual-auth-hook , --manual-cleanup-hook . The hooks are external scripts executed by certbot to perform the task. Information is passed in environment variables - e.g., domain to validate, challenge token. Vars: CERTBOT_DOMAIN , CERTBOT_VALIDATION , CERTBOT_TOKEN . certbot certonly --manual --preferred-challenges=dns --manual-auth-hook /path/to/dns/authenticator.sh --manual-cleanup-hook /path/to/dns/cleanup.sh -d secure.example.com You can write your own handler or use already existing ones. There are many available, e.g., for Cloudflare DNS. More info on official certbot hooks documentation . Automation, Renewal, Scripting If you would like to automate DNS challenge validation it is not currently possible with vanilla certbot. Update: some automation is possible with the certbot hooks. We thus created a simple plugin that supports scripting with DNS automation. It's available as certbot-external-auth . pip install certbot-external-auth It supports the DNS, HTTP, TLS-SNI validation methods. You can either use it in handler mode or in JSON output mode. Handler mode In handler mode, the certbot + plugin calls external hooks (a program, shell script, Python, ...) to perform the validation and installation. In practice you write a simple handler/shell script which gets the input arguments - domain, token and makes the change in DNS. When the handler finishes, certbot proceeds with validation as usual. This gives you extra flexibility, renewal is also possible. Handler mode is also compatible with Dehydrated DNS hooks (former letsencrypt.sh). There are already many DNS hooks for common providers (e.g., CloudFlare, GoDaddy, AWS). In the repository there is a README with extensive examples and example handlers. Example with Dehydrated DNS hook: certbot \
--text --agree-tos --email [email protected] \
--expand --renew-by-default \
--configurator certbot-external-auth:out \
--certbot-external-auth:out-public-ip-logging-ok \
-d "bristol3.pki.enigmabridge.com" \
--preferred-challenges dns \
--certbot-external-auth:out-handler ./dehydrated-example.sh \
--certbot-external-auth:out-dehydrated-dns \
run JSON mode Another plugin mode is JSON mode. It produces one JSON object per line. This enables a more complicated integration - e.g., when Ansible or some deployment manager is calling certbot. Communication is performed via STDOUT and STDIN. Certbot produces JSON objects with data to perform the validation, for example: certbot \
--text --agree-tos --email [email protected] \
--expand --renew-by-default \
--configurator certbot-external-auth:out \
--certbot-external-auth:out-public-ip-logging-ok \
-d "bristol3.pki.enigmabridge.com" \
--preferred-challenges dns \
certonly 2>/dev/null
{"cmd": "perform_challenge", "type": "dns-01", "domain": "bs3.pki.enigmabridge.com", "token": "3gJ87yANDpmuuKVL2ktfQ0_qURQ3mN0IfqgbTU_AGS4", "validation": "ejEDZXYEeYHUxqBAiX4csh8GKkeVX7utK6BBOBshZ1Y", "txt_domain": "_acme-challenge.bs3.pki.enigmabridge.com", "key_auth": "3gJ87yANDpmuuKVL2ktfQ0_qURQ3mN0IfqgbTU_AGS4.tRQM98JsABZRm5-NiotcgD212RAUPPbyeDP30Ob_7-0"} Once DNS is updated, the caller sends the new-line character to STDIN of certbot to signal it can continue with validation. This enables automation and certificate management from the central management server. For installation you can deploy certificates over SSH. For more info please refer to the readme and examples on certbot-external-auth GitHub. EDIT: There is also a new blog post describing the DNS validation problem and the plugin usage. EDIT: We currently work on Ansible 2-step validation, will be soon off. | {
"source": [
"https://serverfault.com/questions/750902",
"https://serverfault.com",
"https://serverfault.com/users/216686/"
]
} |
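To complement the manual-hooks part of the answer above, a minimal sketch of a --manual-auth-hook script. CERTBOT_DOMAIN and CERTBOT_VALIDATION are the environment variables certbot sets; the dns-add-txt command and the 30-second sleep are placeholders for whatever API/CLI your DNS provider offers and its propagation delay:
#!/bin/sh
# authenticator.sh - called by certbot before it asks the CA to validate
dns-add-txt "_acme-challenge.${CERTBOT_DOMAIN}" "${CERTBOT_VALIDATION}"
sleep 30   # give the TXT record time to reach the provider's authoritative servers
A matching --manual-cleanup-hook script would remove the same TXT record afterwards.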
751,079 | The email address used as the admin email when we started using let's encrypt needs to be modified (a former employee used his personal email address as the admin email and he is no longer with the firm). What steps need to be taken to get that modified (we can get the former employee to confirm this). We need to remove his personal email address and replace it with a new email address. This will be used for key recovery actions. In either case, I would like the former employee's personal email address to be removed. What steps do I need to take to accomplish this (if my understanding of the process is incorrect, kindly point me to the right direction). Thanks in advance. | Use: certbot-auto register --update-registration --email [email protected] or certbot register --update-registration --email [email protected] or certbot update_account --email [email protected] certbot-auto or certbot will work if you have the executable under /usr/sbin . If you're unable to call certbot-auto globally, use the path to the certbot-auto file. Source: https://letsencrypt.org/docs/expiration-emails/ | {
"source": [
"https://serverfault.com/questions/751079",
"https://serverfault.com",
"https://serverfault.com/users/149198/"
]
} |
751,155 | Is there a way that I can permanently enable a SCL? I've installed rh-php56 , and I would like to make sure that it is loaded every time I ssh into my machine. I am currently running CentOS 7. | using scl enable actually opens a new shell inside your current one, which is quite unclean, especially if done from a login script. You should place, instead, in your ~/.bash_profile : source /opt/rh/rh-nginx18/enable or: source scl_source enable rh-nginx18 The latter is more "elegant" as it is independent from the actual installation path. This has the effect of loading the environment in your current shell. | {
"source": [
"https://serverfault.com/questions/751155",
"https://serverfault.com",
"https://serverfault.com/users/229634/"
]
} |
752,146 | My company distributes a Windows Installer for a Server based product. As per best practices it is signed using a certificate. In line with Microsoft's advice we use a GlobalSign code signing certificate , which Microsoft claims is recognised by default by all Windows Server versions. Now, this all works well unless a server has been configured with Group Policy: Computer Configuration / Administrative Templates / System / Internet Communication Management / Internet Communication settings / Turn off Automatic Root Certificate Update as Enabled . We found that one of our early beta testers was running with this configuration resulting in the following error during installation A file that is required cannot be installed because the cabinet file [long path to cab file] has an invalid digital signature. This may indicate that the cabinet file is corrupt. We wrote this off as an oddity, after all no-one was able to explain why the system was configured like this. However, now that the software is available for general use, it appears that a double digit (percentage) of our customers are configured with this setting and no-one knows why. Many are reluctant to change the setting. We have written a KB article for our customers, but we really don't want the problem to happen at all as we actually care about the customer experience. Some things we have noticed while investigating this: A fresh Windows Server installation does not show the Globalsign cert in the list of trusted root authorities. With Windows Server not connected to the internet, installing our software works fine. At the end of the installation the Globalsign cert is present (not imported by us). In the background Windows appears to install it transparently on first use. So, here is my question again. Why is it so common to disable updating of root certificates? What are the potential side effects of enabling updates again? I want to make sure we can provide our customers with the appropriate guidance. | In late 2012 / early 2013 there was an issue with automatic root certificate updates. The interim fix was to disable the automatic updates, so partly this issue is historical. The other cause is the Trusted Root Certificate program and Root Certificate Distribution, which (to paraphrase Microsoft )... Root certificates are updated on Windows automatically. When a [system] encounters a new root certificate, the Windows certificate chain verification software checks the appropriate Microsoft Update location for the root certificate. So far, so good but then... If it finds it, it downloads it to the system. To the user, the
experience is seamless. The user does not see any security dialog
boxes or warnings. The download happens automatically, behind the
scenes. When this happens it can appear that certs are being automagically added to the Root store. All this makes some sysadmins nervous as you can't remove a 'bad' CA from the certificate management tools because they're not there to remove... Actually there are ways to make windows download the full list so they can edit it as they wish but it's common to just block the updates. A great number of sysadmins don't understand encryption or security (generally) so they follow received wisdom (correct or otherwise) without question and they don't like making changes to things involving security that they don't fully understand believing it to be some black art. | {
"source": [
"https://serverfault.com/questions/752146",
"https://serverfault.com",
"https://serverfault.com/users/42783/"
]
} |
753,105 | I have multiple subdomains, all pointing to one machine, and one IP address. On this machine, I want to have nginx acting as a reverse proxy, and depending on which subdomain was used to access the machine, I want it to reverse proxy to a different server. All the examples I've seen of using nginx as a reverse proxy use location , but as I understand that only works for the path, not for different subdomains. How can I achieve what I want? | Unless I completely misread your question: You simply set up server blocks for each sub-domain and then define the correct reverse proxy for the root of that subdomain, i.e. something along the lines of: server {
server_name subdomain1.example.com;
location / {
proxy_pass http://hostname1:port1;
}
}
server {
server_name subdomain2.example.com;
location / {
proxy_pass http://hostname2:port2;
}
} | {
"source": [
"https://serverfault.com/questions/753105",
"https://serverfault.com",
"https://serverfault.com/users/254628/"
]
} |
753,268 | Is there any way to make a seasoned Linux syadmin productive without giving him full root access? This question comes from a perspective of protecting intellectual property (IP), which in my case, is entirely code and/or configuration files (i.e. small digital files that are easily copied). Our secret sauce has made us more successful than our smallish size would suggest. Likewise, we are once-bitten, twice shy from a few former unscrupulous employees (not sysadmins) who tried to steal IP. Top management's position is basically, "We trust people, but out of self-interest, cannot afford the risk of giving any one person more access than they absolutely need to do their job." On the developer side, it's relatively easy to partition workflows and access levels such that people can be productive but only see only what they need to see. Only the top people (actual company owners) have the ability to combine all the ingredients and create the special sauce. But I haven't been able to come up with a good way to maintain this IP secrecy on the Linux admin side. We make extensive use of GPG for code and sensitive text files... but what's to stop an admin from (for example) su'ing to a user and hopping on their tmux or GNU Screen session and seeing what they're doing? (We also have Internet access disabled everywhere that could possibly come into contact with sensitive information. But, nothing is perfect, and there could be holes open to clever sysadmins or mistakes on the network admin side. Or even good old USB. There are of course numerous other measures in place, but those are beyond the scope of this question.) The best I can come up with is basically using personalized accounts with sudo , similar to what is described in Multiple Linux sysadmins working as root . Specifically: no one except the company owners would actually have direct root access. Other admins would have a personalized account and the ability to sudo into root. Furthermore, remote logging would be instituted, and the logs would go to a server only the company owners could access. Seeing logging turned off would set off some kind of alerts. A clever sysadmin could probably still find some holes in this scheme. And that aside, it's still reactive rather than proactive . The problem with our IP is such that competitors could make use of it very quickly, and cause a lot of damage in very short order. So still better would be a mechanism that limits what the admin can do. But I recognize that this is a delicate balance (particularly in the light of troubleshooting and fixing production issues that need to be resolved right now ). I can't help but wonder how other organizations with very sensitive data manage this issue? For example, military sysadmins: how do they manage servers and data without being able to see confidential information? Edit: In the initial posting, I meant to preemptively address the "hiring practices" comments that are starting to surface. One, this is supposed to be a technical question, and hiring practices IMO tend more towards social questions. But, two, I'll say this: I believe we do everything that's reasonable for hiring people: interview with multiple people at the firm; background and reference checks; all employees sign numerous legal documents, including one that says they've read and understood our handbook which details IP concerns in detail. 
Now, it's out of the scope of this question/site, but if someone can propose "perfect" hiring practices that filter out 100% of the bad actors, I'm all ears. Facts are: (1) I don't believe there is such a perfect hiring process; (2) people change - today's angel could be tomorrow's devil; (3) attempted code theft appears to be somewhat routine in this industry. | Everything said so far here is good stuff but there is one 'easy' non-technical way that helps negate a rogue sysadmin - the four eyes principle, which basically requires that two sysadmins be present for any elevated access. EDIT:
The two biggest items that I've seen in comments are discussing cost and the possibility of collusion. One of the biggest ways that I've considered to avoid both of those issues is with the use of a managed service company used only for verification of actions taken. Done properly, the techs wouldn't know each other. Assuming the technical prowess that an MSP should have, it would be easy enough to have a sign-off on actions taken... maybe even as simple as a yes/no to anything nefarious. | {
"source": [
"https://serverfault.com/questions/753268",
"https://serverfault.com",
"https://serverfault.com/users/76685/"
]
} |
754,690 | For a server I am hosting a website on, I want to back up the data and settings to an S3 bucket. I found out that you can't directly use rsync to back up to an S3 bucket. Is there another way to achieve the following rsync command to back up the data to an S3 bucket? rsync -av /Data /s3bucket I also want to back up the mysql database on that server to the S3 bucket. What is the best way to achieve this? Last question: if I manage to back up everything to S3, what is the best way to restore the server if it crashes or, in the worst case, is completely wiped?
Do I have to note the server settings myself and reconfigure the server, or is there a way to also back this up? | To communicate with S3 you need two things: IAM user credentials with read-write access to the S3 bucket, and a client like aws-cli for bash, the boto library for Python, etc. Once you have both, you can transfer any file from your machine to S3 and from S3 to your machine. Below is an example for aws-cli. To sync all files in a folder: aws s3 sync source_folder s3://your_bucket_name/destination_folder/ To copy one file to S3: aws s3 cp source_file s3://your_bucket_name/destination_folder/ Just swap source and destination in the command to download any file from S3. For more info, follow the AWS docs. | {
"source": [
"https://serverfault.com/questions/754690",
"https://serverfault.com",
"https://serverfault.com/users/336361/"
]
} |
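For the MySQL part of the question above, the same aws-cli approach from the answer can be combined with mysqldump. A rough sketch - mydb and my-backup-bucket are placeholder names, and aws-cli is assumed to be configured with credentials that can write to the bucket:
# dump the database, compress it, and push it to S3
mysqldump --single-transaction mydb | gzip > /tmp/mydb-$(date +%F).sql.gz
aws s3 cp /tmp/mydb-$(date +%F).sql.gz s3://my-backup-bucket/mysql/
# data and configuration directories can go the same way
aws s3 sync /Data s3://my-backup-bucket/Data/
aws s3 sync /etc s3://my-backup-bucket/etc/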
755,194 | I'm running Ubuntu 15.10 server on a Asrock E3C226D2I board. When I get a kernel update or run update-initramfs -u I get a warning about missing firmware: root@fileserver:~# update-initramfs -u
update-initramfs: Generating /boot/initrd.img-4.2.0-27-generic
W: Possible missing firmware /lib/firmware/ast_dp501_fw.bin for module ast I can't find much information on this particular firmware, other than it is probably for my video card. Since I'm running a server I don't really care about graphics (no monitor attached). All works fine so I'm ignoring it for now but is there a way to fix this? | Its annoying, but harmless. That is coming from the Aspeed VGA module from the IPMI on your server/workstation. It can be safely ignored for now. I took a quick look at the source code of the aspeed DRM driver. It is hardcoded at runtime to look for /lib/firmware/ast_dp501_fw.bin. This provides a way to update for firmware issues at runtime versus needing to be flashed onto the hardware. Here is the lspci output showing the video card in question from my Asus Workstation which has the same "issue" as it were: lspci |grep -i aspeed
01:01.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 10) Aspeed's drivers and source for drivers are here (but you shouldn't need them from there unless you have a Windows server): http://www.aspeedtech.com/support.php?fPath=24 Here is the Bug report: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1751613 Final(?) Update: I assume this particular error will be there forever since it's on End of Life hardware. The bug above remains in expired status since 2018-04-28. | {
"source": [
"https://serverfault.com/questions/755194",
"https://serverfault.com",
"https://serverfault.com/users/334806/"
]
} |
755,351 | From my understanding DNS uses UDP and port 53. What undesirable things could happen if incoming UDP packets to port number 53 weren't blocked? UPDATE: Packets originating from or destined to the university-operated local DNS server or university-operated authoritative DNS server would be allowed. | The logic works like this: (1) Only authoritative DNS servers that provide records to the internet are required to be exposed. (2) Open recursive servers that are exposed to the internet will inevitably be found by network scans and abused. (See user1700494's answer) (3) The likelihood of someone accidentally standing up an exposed recursive server is greater than that of an exposed authoritative DNS server. This is because many appliances and "out of the box" configs default to allowing unrestricted recursion. Authoritative configurations are much more customized and infrequently encountered. Given 1-3, dropping all unsolicited inbound traffic with a destination port of 53 protects the network. In the rare event that another authoritative DNS server needs to be added to the network (a planned event), exceptions can be defined on an as-needed basis. | {
"source": [
"https://serverfault.com/questions/755351",
"https://serverfault.com",
"https://serverfault.com/users/336847/"
]
} |
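To make the answer above concrete, a sketch of matching filter rules, assuming a Linux/iptables border firewall (interface options omitted for brevity); 192.0.2.53 is a placeholder from the documentation range standing in for the university-operated authoritative server:
# replies to queries that internal resolvers sent out are still allowed
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# unsolicited inbound DNS is only allowed to the authoritative server
iptables -A FORWARD -p udp --dport 53 -d 192.0.2.53 -j ACCEPT
iptables -A FORWARD -p tcp --dport 53 -d 192.0.2.53 -j ACCEPT
iptables -A FORWARD -p udp --dport 53 -j DROP
iptables -A FORWARD -p tcp --dport 53 -j DROP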
755,373 | I've worked in organizations where instead of creating a new Ubuntu user per person that wants to log into a machine, the sysadmins simply add the ssh key of each user to .ssh/authorized_keys , and everyone ssh s to the machine as ( e.g. ) ubuntu@host or ec2-user@host . (Incidentally, I've also seen this practiced on shared Mac minis in a lab setting.) Is this accepted practice, or an anti-pattern? The hosts in question are mainly used for testing, but there are also actions taken that typically require per-user configuration and are tracked as being done by a specific user, such as creating and pushing git commits, which are currently done using a generic git user. | Yes it is a bad habit. It relies on the basic assumption that nobody malicious is (or will be) around and that nobody makes mistakes. Having a shared account makes it trivial for things to happen without accountability and without any limit - a user breaking something breaks it for everyone. If the reason for this uid-sharing scheme is simply to reduce the administrative cost of creating new accounts and sharing configuration, then perhaps the administrators should invest some time in an automation system like Ansible , Chef , Puppet or Salt that makes stuff like creating user accounts on multiple machines extremely simple. | {
"source": [
"https://serverfault.com/questions/755373",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
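To illustrate the automation point in the answer above, a sketch of an Ansible play that gives each admin a personal account and key instead of a shared one - the user list, group and keys are hypothetical:
- hosts: all
  become: true
  vars:
    admins:
      - { name: alice, pubkey: "ssh-ed25519 AAAA... alice@laptop" }
      - { name: bob, pubkey: "ssh-ed25519 AAAA... bob@laptop" }
  tasks:
    - user:
        name: "{{ item.name }}"
        groups: sudo
        append: yes
        shell: /bin/bash
      with_items: "{{ admins }}"
    - authorized_key:
        user: "{{ item.name }}"
        key: "{{ item.pubkey }}"
      with_items: "{{ admins }}"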
755,375 | I have three servers. Server 1 - Print Server, Windows Server 2008 Standard Server 2 - Domain Controller, Windows Server 2008 R2 Standard Server 3 - Terminal Services Server, Windows Server 2008 R2 Standard On Server 1 I have 5 printers installed. All printers are TCP/IP printers. One printer should be restricted so that only members of a specified AD group are able to print to it. Therefore, in Print Management, in the Security tab for the restricted printer the AD Security Group RESTRICTED Printers - Authorized Domain Users is given the Print Allow permission. The default Everyone group which has the Print Allow permission has been removed. The only member of the RESTRICTED Printers - Authorized Domain Users is Domain\TestAllowed. All 5 printers are installed on Server 3 through a GPO on Server 2 which automatically adds the printers. This works correctly. I then login to Server 3 as Domain\TestProhibited and try to print to the restricted printer and the page prints. Why does the page print and what do I need to do to ensure that only members of RESTRICTED Printers - Authorized Domain Users are able to print to the restricted printer? I have already read (and confirmed that I configured the ACL correctly) Microsoft's TechNet page on setting permissions for print servers . I went so far as to explicitly deny the Print permission for Domain\TestProhibited on the restricted printer on Server 1. I logged out of Server 3, logged back in, and Domain\TestProhibited was still able to print to the restricted printer. | Yes it is a bad habit. It relies on the basic assumption that nobody malicious is (or will be) around and that nobody makes mistakes. Having a shared account makes it trivial for things to happen without accountability and without any limit - a user breaking something breaks it for everyone. If the reason for this uid-sharing scheme is simply to reduce the administrative cost of creating new accounts and sharing configuration, then perhaps the administrators should invest some time in an automation system like Ansible , Chef , Puppet or Salt that makes stuff like creating user accounts on multiple machines extremely simple. | {
"source": [
"https://serverfault.com/questions/755375",
"https://serverfault.com",
"https://serverfault.com/users/335183/"
]
} |
755,607 | I've just started to study Docker and there's something that's being quite confusing for me. As I've read on Docker's website a container is different from a virtual machine. As I understood a container is just a sandbox inside of which an entire isolated file system is run. I've also read that a container doesn't have a Guest OS installed. Instead it relies on the underlying OS Kernel. All of that is fine. What I'm confused is that there are Docker images named after operating systems. We see images like Ubuntu, Debian, Fedora, CentOS and so on. My point is: what are those images, really? How is it different creating a container based on the Debian image than creating a Virtual Machine and installing Debian? I thought containers had no Guest OS installed, but when we create images we base them on some image named after one OS. Also, in examples I saw when we do docker run ubuntu echo "hello world" ,
it seems we are spinning up a VM with Ubuntu and making it run the command echo "hello world" . In the same way when we do docker run -it ubuntu /bin/bash , it seems we are spinning up a VM with Ubuntu and accessing it using the command line. Anyway, what are those images named after operating systems all about? How different is it to run a container with one of those images from spinning up a VM with the corresponding Guest OS? Is the idea that we just share the kernel with the host OS (and consequently we have access to the underlying machine hardware resources, without the need to virtualize hardware), but still use the files and binaries of each different system in the containers in order to support whatever application we want to run? | Since all Linux distributions run the same (yup, it's a bit simplified) Linux kernel and differ only in userland software, it's pretty easy to simulate a different distribution environment - by just installing that userland software and pretending it's another distribution. To be specific, installing a CentOS container inside an Ubuntu OS means that you will get the userland from CentOS, while still running the same kernel - not even another kernel instance. So lightweight virtualization is like having isolated compartments within the same OS. Au contraire, real virtualization means having another full-fledged OS inside the host OS. That's why docker cannot run FreeBSD or Windows inside Linux. If it makes it easier, you can think of docker as a kind of very sophisticated and advanced chroot environment. | {
"source": [
"https://serverfault.com/questions/755607",
"https://serverfault.com",
"https://serverfault.com/users/283620/"
]
} |
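A quick way to see the point made in the answer above for yourself, assuming Docker is installed on an Ubuntu (or any other Linux) host:
docker run --rm centos:7 cat /etc/redhat-release   # CentOS userland inside the container
docker run --rm centos:7 uname -r                  # ...but it reports the host's kernel version
uname -r                                           # the very same kernel version on the host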
755,654 | It's extremely common for RFCs to be cited in support of opinions (including Serverfault Q&A's), but the average IT employee has a very poor understanding in regards to which RFCs define standards and which ones are purely informative. This should be no surprise: system administrators of all experience levels typically avoid glazing their eyes at RFCs unless they have no choice but to. On a site like ours, it is extremely important that we don't perpetuate common misunderstandings in our upvoted answers. Random users cruising in from search engines are going to assume that upvotes with no disputing comments are sufficient indicators of vetting. Recently I stumbled across an answer from 2011 making it apparent that this is definitely not getting caught in some cases as we upvote and probably warrants some efforts to inform our community and the internet at large. So without further ado, how does one differentiate between a RFC that is quotable as an internet standard and one that is purely informative? | Only RFCs on the standards track can be cited as defining a standard. For the reader in passing, these are the main points to understand: Some of the older RFCs are not clearly labeled. When in doubt, plug it into the search box at http://www.rfc-editor.org/ and pay attention to the Status column. Be very cautious with anything labeled as Unknown , as they are effectively abandoned and not considered relevant. Any RFC with a designation of Historic has been obsoleted, regardless of how it was originally classified. Any RFC with a status of Proposed Standard , or Internet Standard can be used as a technical reference for the applicable internet standard. This is somewhat counter-intuitive and will be touched on below. In all other cases, the RFC cannot be considered a binding, authoritative source of information relative to Internet Standards. That said, RFCs with a designation of Best Current Practice (BCP) should be considered as carrying significant advisory weight. They are not binding in the way that a standard is, but they are heavily vetted and undergo some of the same scrutiny that RFCs in the standards track receive. Ignoring them doesn't violate a standard, but usually it's a bad idea . Informational RFCs lacking the BCP identifier are best likened to an article you come across in an IT magazine. You wouldn't pull out an editorial piece out of your desk and tell a director that it defines a standard, right? Experimental RFCs can only be used as a reference for the experimental features that they describe, and not as a reference for the standard that they are associated with. They exist in a vacuum until promoted to the standards track. Occasionally a technical reference may be published as an Informational RFC prior to being incorporated as an Internet Standard. DMARC ( RFC 7489 ) is one of the most widely known modern examples of this. For all intents and purposes, treat these as you would an Experimental RFC. They exist in a vacuum and describe an optional feature. Even once you've navigated this maze, be aware that newer RFCs may have obsoleted significant parts of the RFC that you are quoting from! It is strongly recommended to use tools providing hyperlinks to RFCs that update the one you're viewing, such as those provided by http://tools.ietf.org/ and http://www.rfc-editor.org/ . Those are the bullet points. Now we're going to get into specifics. RFC 1796 is a good primer for most people who don't want to spend a day staring at RFCs. 
It clearly and concisely explains the common misconception of people assuming that an RFC is always defining an internet standard of some sort. Pay special note to the part where vendors are occasionally guilty of abusing this ignorance when pushing their products. BCP 9 defines the internet standards track, most notably the progression from Proposed Standard to Internet Standard . It should be noted that this is a concatenation of several RFCs , beginning with RFC 2026 . Reading RFC 2026 by itself in a vacuum is a common occurrence but also a terrible idea: RFC 6410 eliminates the concept of Draft Standards entirely. RFC 7127 is a more recent (2014) update to BCP 9 making it clear that many Proposed Standards are never promoted to Internet Standard despite widespread implementation and high stability. This is in large part due to the higher vetting standards that modern Proposed Standards are subjected to prior to being classified as such. This RFC effectively retracts the prior statement by RFC 2026 that "Implementors should treat Proposed Standards as immature specifications" . Never quote that line to anyone. In short, if an RFC document is on the internet standards track at all, it has sufficient maturity to be used as a technical reference until such a point that a future RFC updates it. Disclaimer As the above demonstrates, the internet standards track defined by BCP 9 is a moving target. This answer is a snapshot in time and may require updating in the future. Given its community wiki status, feel free to do so or improve upon it in any way. | {
"source": [
"https://serverfault.com/questions/755654",
"https://serverfault.com",
"https://serverfault.com/users/152073/"
]
} |
757,210 | I am trying to take a docker container from one machine and run it on another and encountering this error: " Error response from daemon: No command specified ". Below is a simplified example showing the problem: docker --version
Docker version 1.10.1, build 9e83765
docker pull ubuntu
docker run --name u1 -dit ubuntu:latest
docker export -o exported u1
docker stop u1
docker rm u1
docker import exported ubuntu:imported
docker run --name u1 -dit ubuntu:imported
docker: Error response from daemon: No command specified. In that example, we first pull an image (ubuntu) and successfully create/run container u1 from it. Then we export that container to a file ( exported ), stop/remove the container, import the file into a new image ( ubuntu:imported ) and try to run a new container from it. It fails. | docker export does not export everything about the container — just the filesystem. So, when importing the dump back into a new docker image, additional flags need to be specified to recreate the context. For example, if the original container was running fine because the Dockerfile that was used for creating its image had CMD ["/usr/bin/supervisord"] in it, then import your dump this way: docker import \
--change 'CMD ["/usr/bin/supervisord"]' \
path/to/dump.tar imagename:tagname | {
"source": [
"https://serverfault.com/questions/757210",
"https://serverfault.com",
"https://serverfault.com/users/177476/"
]
} |
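A side note to the answer above: since docker export deliberately drops the image metadata (CMD, ENV, layer history), another sketch of a route for moving a container between machines is to commit it and use docker save / docker load, which keep that metadata - the container and image names below are just the ones from the example:
docker commit u1 ubuntu:snapshot
docker save -o snapshot.tar ubuntu:snapshot
# copy snapshot.tar to the other machine, then:
docker load -i snapshot.tar
docker run --name u1 -dit ubuntu:snapshot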
757,461 | I'm automatically securing SSL keys like this: - name: Find ssl keys
find: paths="/etc/ssl/" patterns="*.key" recurse=yes
register: secure_ssl_keys_result
- name: Secure ssl keys
file: path={{ item.path }} user=root group=root mode=600
with_items: secure_ssl_keys_result.files Now, for every item, there is a huge log message with the whole content of the item: ok: [127.0.0.1] => (item={u'uid': 0, u'woth': False, u'mtime':
1454939377.264, u'inode': 400377, u'isgid': False, u'size': 3243, u'roth': False, u'isuid': False, u'isreg': True, u'gid': 0, u'ischr':
False, u'wusr': True, u'xoth': False, u'rusr': True, u'nlink': 1,
u'issock': False, u'rgrp': False, u'path': u'/etc/ssl/foo.key',
u'xusr': False, u'atime': 1454939377.264, u'isdir': False, u'ctime':
1454939657.116, u'isblk': False, u'xgrp': False, u'dev': 65025, u'wgrp': False, u'isfifo': False, u'mode': u'0600', u'islnk': False}) This is incredibly unreadable, as I only want to know the path of the item that is being processed (and maybe changed). With a big number of keys, this get's out of hand really quick. How can I change this play in a way that only the item.path is being printed out for each item? I have already tried no_log: True , but this completely omits the output of course. | Ansible 2.2 has loop_control.label for this. - name: Secure ssl keys
file: path={{ item.path }} user=root group=root mode=600
with_items: secure_ssl_keys_result.files
loop_control:
label: "{{ item.path }}" Found via: https://stackoverflow.com/a/42832731/799204 Documentation: http://docs.ansible.com/ansible/playbooks_loops.html#loop-control | {
"source": [
"https://serverfault.com/questions/757461",
"https://serverfault.com",
"https://serverfault.com/users/125240/"
]
} |
758,919 | Recently I've been trying to login to various machines via RDP and am getting the following error my Windows 10 workstation: Faulting application name: mstsc.exe, version: 10.0.10586.0, time stamp: 0x5632d1d8
Faulting module name: ntdll.dll, version: 10.0.10586.103, time stamp: 0x56a8483f
Exception code: 0xc0000374
Fault offset: 0x00000000000ee71c
Faulting process id: 0x3eac
Faulting application start time: 0x01d16d6d340f9399
Faulting application path: C:\WINDOWS\system32\mstsc.exe
Faulting module path: C:\WINDOWS\SYSTEM32\ntdll.dll After debugging with VS 2015 it seems like a heap corruption issue. | The problem was from the recent CSR harmony bluetooth driver I installed. The drivers try to add some bluetooth tag authentication which was causing the issue and RDP crashes regardless of a good or bad password. The simple fix is to head to C:\Program Files\CSR\CSR Harmony Wireless Software Stack and change BLEtokenCredentialProvider.dll to BLEtokenCredentialProvider.dll.BAK And the issue is now fixed for me. | {
"source": [
"https://serverfault.com/questions/758919",
"https://serverfault.com",
"https://serverfault.com/users/240388/"
]
} |
758,930 | How can I view, who is currently connected to a server (Windows 2012) with a remote desktop client? I am myself connected to this server via RDP. This question offers a solution to get IP addresses with established connections. I would be interested in a list of users or their sessions and when these sessions were active the last time. | You can type "Query User" into a command prompt on the remote machine to get a very quick look | {
"source": [
"https://serverfault.com/questions/758930",
"https://serverfault.com",
"https://serverfault.com/users/274223/"
]
} |
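A few variants of the command from the answer above; SERVER01 is a placeholder for the remote host name, and the query can also be run without RDPing into the box first:
query user                        # sessions on the machine you are logged on to
query user /server:SERVER01       # sessions on a remote server
query session /server:SERVER01    # the same information per session (qwinsta is the short alias)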
758,956 | I have seen an experimental linux box where lvreduce -rL -10G /dev/main/repository worked without unmounting (i.e. even on root, home dir etc.), but it does not on my server running Debian Squeeze: LVM version: 2.02.111(2) (2014-09-01) Library version: 1.02.90 (2014-09-01) Driver version: 4.27.0 Linux kernel: 3.16.0-4-amd64 filesystem: ext3 What is the version combination which allows it? PS: I tried to browse the release notes of lvm but to no avail. | You can type "Query User" into a command prompt on the remote machine to get a very quick look | {
"source": [
"https://serverfault.com/questions/758956",
"https://serverfault.com",
"https://serverfault.com/users/128488/"
]
} |
758,979 | We are having an issue with file transfers over our MPLS. Our setup: Home Office <====== MPLS ======> Datacenter
VM Cluster VM Cluster
Windows 2008 Windows 2008 Via network Shares: When transferring files from local folder on both PCs and Servers in the home office to the Datacenter we are averaging 177kbps. When transferring files from local folder Servers in the Datacenter to the Home Office we are averaging 5mbps. Via FTP: When Transferring files via FTP from local PCs and Servers in the home office to the Datacenter we are averaging 5mbps. I didn't test FTP in the other direction. Any help on where to start looking would be appreciated. Update from questions below:
DC's are all Windows Server 2008 The Datacenter is aware of the issue, but all of their equipment tests out fine. This has been happening since it was set up, the previous admin was working on correcting it when they left. No notes on their research were left behind. Identical Firewalls and Routers in each location. Updated to detail server hardware.
The Servers are all Windows Server 2008 running on VMWare. I did test on a non VMware server in the datacenter and received similar results. | You can type "Query User" into a command prompt on the remote machine to get a very quick look | {
"source": [
"https://serverfault.com/questions/758979",
"https://serverfault.com",
"https://serverfault.com/users/339876/"
]
} |
759,572 | While re-partitioning a Server 2003 R2 domain controller, we accidentally deleted the partition that held the Active Directory database folder ( D:\AD\Data ). The D:\ was a partition on a disk shared with C:\ . We eliminated the D:\ drive not realizing that it housed the Active Directory data folder. We have no other domain controllers and no backups of this Active Directory data. Is there any chance of restoring the AD? | If you just deleted the partition and did not create a new partition, it is likely possible to recover. First things first - pull the drive, put it in a Linux box and do a raw clone. The first rule of data recovery is that you do your work on a clone, not the original. Now on the clone run a linux tool called testdisk . If the filesystem hasn't been obliterated this should re-create the partition table entry and allow it to be accessed again. If you did create a new partition, or if testdisk can't find the filesystem then your chances of successful recovery are much lower. You might want to consider talking to data recovery specialists at this point. | {
"source": [
"https://serverfault.com/questions/759572",
"https://serverfault.com",
"https://serverfault.com/users/212457/"
]
} |
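To make the "work on a clone" step from the answer above concrete - a sketch where /dev/sdX is the damaged disk and /mnt/big is a location with enough free space, both placeholders:
ddrescue /dev/sdX /mnt/big/damaged-disk.img /mnt/big/damaged-disk.map
# then point the recovery tool at the image, never at the original disk
testdisk /mnt/big/damaged-disk.img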
759,583 | How does one get the version of Logstash? root@elk:/usr/share/elasticsearch# bin/logstash --help
bash: bin/logstash: No such file or directory I have Logstash running on my system. Also. root@elk:/# logstash -V
bash: logstash: command not found Also. root@elk:/# ps aux | grep logstash
logstash 1725 45.3 8.5 1942860 175936 ? SNl 22:03 0:35 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/opt/logstash -Xmx500m -Xss2048k -Djffi.boot.library.path=/opt/logstash/vendor/jruby/lib/jni -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/opt/logstash -XX:HeapDumpPath=/opt/logstash/heapdump.hprof -Xbootclasspath/a:/opt/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/opt/logstash/vendor/jruby -Djruby.lib=/opt/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main --1.9 /opt/logstash/lib/bootstrap/environment.rb logstash/runner.rb agent -f /etc/logstash/conf.d -l /var/log/logstash/logstash.log
root 1777 0.0 0.0 8860 636 ? S+ 22:05 0:00 grep --color=auto logstash More. root@elk:/opt/logstash/bin# ls
logstash logstash.bat logstash.lib.sh plugin plugin.bat rspec rspec.bat setup.bat
root@elk:/opt/logstash/bin# logstash -V
bash: logstash: command not found | Logstash is one of those things that just doesn't quite live where you expect it to live, and the documentation is reallllly light (read: non-existent) on where they expect you to find things, so if you've installed it from a package then it can be nigh impossible to find the expected location documented. 1 Logstash typically lives in /opt/logstash and you can find the logstash binary in the bin folder ( /opt/logstash/bin ). From there you can run -V or --version ./logstash -V or ./logstash --version From your comments on another answer, it would appear that this is in a docker container. This is the sort of thing you should really be including in your original question. You will want to make use of docker exec . You will need to use docker ps to list your containers, and pass that through to your docker exec command. For example: docker exec elk_container /opt/logstash/bin/logstash --version 1 I don't want this to be misconstrued. Logstash documentation is excellent - it's just the parts about where all the different bits are expected to live that's impossible to find | {
"source": [
"https://serverfault.com/questions/759583",
"https://serverfault.com",
"https://serverfault.com/users/208527/"
]
} |
759,602 | I'm trying to add a pre-existing wildcard SSL certificate to a single Ubuntu instance on Amazon EC2, where the webserver is Nginx, and I run a single subdomain. I have - from the original vendor who provides the certificate - files named private.key, selfsigned.crt, and ssl-shared-cert.inc. I've uploaded these files to EC2, at /etc/nginx/ssl (which is a new folder I've created). I've previously used the same files on Heroku, though the process seems to be quite specific there. They're also used on our main domain ( https://wwww.minnpost.com ), but I was not involved in setting them up there, as I believe our hosting vendor did it for us. ssl-shared-cert looks like this: SSLEngine On
SSLProtocol all -SSLv2 -SSLv3
SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW
SSLCertificateFile "/path/to/selfsigned.crt"
SSLCertificateKeyFile "/path/to/private.key" On my EC2 instance, I've changed my site's configuration to: server {
listen 8080;
listen 443 ssl;
server_name subdomainurl;
ssl_certificate /etc/nginx/ssl/selfsigned.crt;
ssl_certificate_key /etc/nginx/ssl/private.key;
root path;
...
} When I run sudo nginx -t I get the following: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful So I ran sudo service nginx restart , and this returned OK. First I checked to make sure the HTTP version still worked, and it does, but the HTTPS still does not. When I run curl, it returns: * SSL certificate problem: Invalid certificate chain
* Closing connection 0
curl: (60) SSL certificate problem: Invalid certificate chain
More details here: http://curl.haxx.se/docs/sslcerts.html Where can I go from here? | Logstash is one of those things that just doesn't quite live where you expect it to live, and the documentation is reallllly light (read: non-existent) on where they expect you to find things, so if you've installed it from a package then it can be nigh impossible to find the expected location documented. 1 Logstash typically lives in /opt/logstash and you can find the logstash binary in the bin folder ( /opt/logstash/bin ). From there you can run -V or --version ./logstash -v or ./logstash --version From your comments on another answer, it would appear that this is in a docker container. This is the sort of thing you should really be including in your original question. You will want to make use of docker exec . You will need to use docker ps to list your containers, and pass that through to your docker exec command. For example: docker exec -d elk_container /opt/logstash/bin/logstash --version 1 I don't want this to be misconstrued. Logstash documentation is excellent - it's just the parts about where all the different bits are expected to live that's impossible to find | {
"source": [
"https://serverfault.com/questions/759602",
"https://serverfault.com",
"https://serverfault.com/users/119355/"
]
} |
759,620 | I am consolidating a bunch of super old servers (~200). All the code has been tweaked to be able to run on a single box. Except there is a 3rd party web service app that listens on a socket on each one of these servers. The vendor is not going to change the app to be able to work on a single server. So I've read about the upcoming Containers in Windows Server 2016, though I still do not fully comprehend them. I was wondering whether I can deploy 200 instances of a container, each running this web service inside the container. The app itself is very easy on resources. Would this be a good case for containerization? | Logstash is one of those things that just doesn't quite live where you expect it to live, and the documentation is reallllly light (read: non-existent) on where they expect you to find things, so if you've installed it from a package then it can be nigh impossible to find the expected location documented. 1 Logstash typically lives in /opt/logstash and you can find the logstash binary in the bin folder ( /opt/logstash/bin ). From there you can run -V or --version ./logstash -v or ./logstash --version From your comments on another answer, it would appear that this is in a docker container. This is the sort of thing you should really be including in your original question. You will want to make use of docker exec . You will need to use docker ps to list your containers, and pass that through to your docker exec command. For example: docker exec -d elk_container /opt/logstash/bin/logstash --version 1 I don't want this to be misconstrued. Logstash documentation is excellent - it's just the parts about where all the different bits are expected to live that's impossible to find | {
"source": [
"https://serverfault.com/questions/759620",
"https://serverfault.com",
"https://serverfault.com/users/3025/"
]
} |
760,337 | I have a script that zips files from a folder. I want to make sure that the zipped file is not more than 10 MB. If the size is more than 10MB, it should create another ZIP file. Is there any command (or other method) that can be used for this? | You can use the " split archive " functionality of " zip " itself using the " --split-size " option. From "zip" manpage (" man zip "): (...) One use of split archives is storing a large archive on multiple removable media. For a split archive with 20 split files the files are typically named (replace ARCHIVE with the name of your archive) ARCHIVE.z01, ARCHIVE.z02, ..., ARCHIVE.z19, ARCHIVE.zip. Note that the last file is the .zip file. (...) -s splitsize --split-size splitsize Split size is a number optionally followed by a multiplier. Currently the number must be an integer. The multiplier can currently be one of k (kilobytes), m (megabytes), g (gigabytes), or t (terabytes). As 64k is the minimum split size, numbers without multipliers default to megabytes. For example, to create a split archive called foo with the contents of the bar directory with splits of 670 MB that might be useful for burning on CDs, the command: zip -s 670m -r foo bar could be used. So, to create a split zip archive , you could do the following (the " -r " is the "recursive" switch to include subdirectories of the directory): $ zip -r -s 10m archive.zip directory/ To unzip the file , the " zip " manpage explains that you should use the "-s 0" switch: (...) zip -s 0 split.zip --out unsplit.zip will convert a split archive to a single-file archive. (...) So, you first "unsplit" the ZIP file using the "-s 0" switch: $ zip -s 0 archive.zip --out unsplit.zip ... and then you unzip the unsplit file: $ unzip unsplit.zip | {
"source": [
"https://serverfault.com/questions/760337",
"https://serverfault.com",
"https://serverfault.com/users/281249/"
]
} |
760,461 | The facts: there is a website; this website is accessible via www.example.org; there is an EC2 instance which very likely hosts the website; the server is Apache; the server OS is Ubuntu; I have full access to the server (and sudo privileges); the server is a huge mess. The problem is I have no idea where to - simply put - find the index.html/index.php which gets loaded. How do I figure out where to find the website's PHP and HTML code? Is there a systematic approach to this problem? | First of all you should check what websites are hosted on the server: # apachectl -t -D DUMP_VHOSTS Then, when you find a site, check the corresponding configuration file for the DocumentRoot option. For example: # apachectl -t -D DUMP_VHOSTS
VirtualHost configuration:
wildcard NameVirtualHosts and _default_ servers:
*:80 is a NameVirtualHost
default server 192.168.88.87 (/etc/httpd/conf.d/192.168.88.87.conf:1)
port 80 namevhost 192.168.88.87 (/etc/httpd/conf.d/192.168.88.87.conf:1)
port 80 namevhost gl-hooks.example.net (/etc/httpd/conf.d/hooks.conf:1)
alias example.net
alias www.example.net Say you want to know where the website example.net resides: # grep DocumentRoot /etc/httpd/conf.d/hooks.conf
DocumentRoot /vhosts/gl-hooks.example.net/
# cd /vhosts/gl-hooks.example.net/
# ls -la
total 4484
drwxr-xr-x 6 apache apache 4096 Feb 10 11:59 .
drwxr-xr-x 14 root root 4096 Feb 23 08:54 ..
-rw-r--r-- 1 root root 1078 Dec 19 09:31 favicon.ico
-rw-r--r-- 1 apache apache 195 Dec 25 14:51 .htaccess
-rw-r--r-- 1 apache apache 98 Dec 7 10:52 index.html You should also be on the lookout for aliases and redirects/rewrites, and pay attention to any Alias directives. For example, with the following settings: <VirtualHost *:80>
ServerName example.net
ServerAlias www.example.net
...
DocumentRoot /vhosts/default/public_html/
Alias /api/ /vhosts/default/public_api/
...
</VirtualHost> When you access http://example.net/some.file.html, Apache will look for the file in /vhosts/default/public_html/, while for http://example.net/api/some.file.html it will look in /vhosts/default/public_api/. As for rewrites/redirects, especially programmatic ones (where redirects are triggered by some PHP code), I think there is no easy way to find such cases. | {
"source": [
"https://serverfault.com/questions/760461",
"https://serverfault.com",
"https://serverfault.com/users/75233/"
]
} |
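Following on from the answer above, a rough way to hunt for configured redirects and rewrites, assuming the same CentOS-style layout used in its examples:
grep -RniE 'Redirect|RewriteRule|RewriteCond' /etc/httpd/
grep -RniE 'Redirect|RewriteRule' /vhosts/ --include=.htaccess
# redirects issued from PHP would still need a grep through the application code,
# e.g. for header('Location: ...') calls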
760,832 | I'm working on an ansible playbook to get the current hostname of a server and then set it in a configuration file. I cannot figure out how to push the shell output using the lineinfile module. - name: Get hostname
shell: echo $HOSTNAME
register: result
- name: Set hostname on conf file
lineinfile: dest=/etc/teste/linux/zabbix_agentd.conf regexp="^Hostname=.*" insertafter="^# Hostname=" line=Hostname=???? | In general, to look what's inside a variable you can use the debug module. - debug:
var: result This should show you an object and its properties which include stdout . That is the complete result of the previous command. So to use the output of the first task you would use result.stdout . To use any variable you would use Jinja2 expressions: {{ whatever }} . So your task could look like this: - name: Set hostname on conf file
lineinfile:
dest: /etc/teste/linux/zabbix_agentd.conf
regexp: ^Hostname=.*
insertafter: ^# Hostname=
line: Hostname={{ result.stdout }} So much for theory, but here comes the real answer . Don't do it like that. Of course Ansible already knows the hostname. The hostname as defined in your inventory would be {{ inventory_hostname }} . The hostname as reported by the server is {{ ansible_hostname }} . Additionally there is {{ ansible_fqdn }} . So just use any of these instead of running an additional task: - name: Set hostname on conf file
lineinfile:
dest: /etc/teste/linux/zabbix_agentd.conf
regexp: ^Hostname=.*
insertafter: ^# Hostname=
line: Hostname={{ ansible_hostname }} | {
"source": [
"https://serverfault.com/questions/760832",
"https://serverfault.com",
"https://serverfault.com/users/341373/"
]
} |
761,024 | With Docker Compose v1.6.0+, there now is a new/version 2 file syntax for the docker-compose.yml file. The changes include a separate top level key named volumes . This allows to "centralize" volume definitions in one place. What I am trying to do is to name volumes in there and have a single volume reference multiple path on my local host disk. The following is an example, throwing an exception with a Traceback that ends with AttributeError: 'list' object has no attribute 'items' Example docker-compose.yml : version: '2'
services:
db:
image: postgres
volumes:
- database:/var/lib/postgres/data
php:
image: php-fpm:5.6
volumes:
- phpconf:/etc/php/conf.d
namedvolume:
container_name: namedvolume
build: ./Docker/Testvolume
volumes:
- ./Docker/Testvolume/shareme
volumes:
database:
- ./Docker/Postgres/db:ro
- ./Docker/Postgres/ini
phpconf:
- ./Docker/PHP-FPM/conf
singledir: ./Docker/foo
completemap: ./Docker/bar:/etc/service/conf.d
- namedvolume:/etc/service/conf.d # < this was a separate attempt w/o the other keys
… ? So far I have read through all the Docker Compose docs master-branch Volume configuration reference, the Docker Compose docs Volume/Volume-Driver reference and looked through GitHub examples to find the correct syntax that is expected. It seems no one is using that yet (GitHub) and the documentation is far from complete (docker.com). I also tried to build a separate volume as a service and reference it in volumes , but that does not work either. Any idea of how this syntax is supposed to look? | Purpose of the volumes key It is there to create named volumes . If you do not use it, then you will find yourself with a bunch of hashed values for your volumes. Example: $ docker volume ls
DRIVER VOLUME NAME
local f004b95d8a3ae11e9b871074e9415e24d536742abfe86b32ffc867f7b7063e55
local 9a148e167e1c722cbdb67c8edc36f02f39caeb2d276e9316e64de36e7bc2c35d With named volumes, you get something like the following: $ docker volume ls
local projectname_someconf
local projectname_otherconf How to create named volumes The docker-compose.yml syntax is: version: '2'
services:
app:
container_name: app
volumes_from:
- appconf
appconf:
container_name: appconf
volumes:
- ./Docker/AppConf:/var/www/conf
volumes:
appconf:
networks:
front:
driver: bridge This produces named volumes like the ones shown above. How to remove volumes in bulk When you have a bunch of hashes, it can be quite hard to clean up. Here's a one-liner: docker volume rm $(docker volume ls | awk '{print $2}') Edit: As @ArthurTacca pointed out in the comments, there's an easier-to-remember way: docker volume rm $(docker volume ls -q) How to get details about a named volume Now that you no longer have to look up hashes, you can simply call volumes by their … name: docker volume inspect <volume_name>
# Example:
$ docker volume inspect projectname_appconf
[
{
"Name": "projectname_appconf",
"Driver": "local",
"Mountpoint": "/mnt/sda1/var/lib/docker/volumes/projectname_appconf/_data"
}
] Sidenote: You might want to docker-compose down your services to get a fresh start before creating volumes. In case you are using Boot2Docker / Docker Machine , you will have to docker-machine ssh and sudo -i before doing a ls -la /mnt/… of that volume – your host machine is the VM provisioned by Docker Machine . EDIT: Another related answer about named volumes on SO . A short compose sketch contrasting a named volume with a bind mount follows below. | {
"source": [
"https://serverfault.com/questions/761024",
"https://serverfault.com",
"https://serverfault.com/users/120233/"
]
} |
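As a follow-up to the answer above, a minimal version 2 sketch that puts a named volume next to a plain host bind mount; the image and the host path are placeholders, not values taken from the question:
version: '2'
services:
  db:
    image: postgres
    volumes:
      - database:/var/lib/postgresql/data          # named volume, managed by Docker
      - ./Docker/Postgres/conf:/etc/postgresql:ro  # host bind mount, needs no top-level entry
volumes:
  database: {}
Note that a top-level volume entry takes a mapping of options such as driver or driver_opts, not a list of host paths, which is presumably why the list syntax in the question fails with 'list' object has no attribute 'items'.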
761,887 | This is more of a conceptual question than a question about an actual setup in practice. Let's say I have a network printer, a print server (server A) and workstations B and C that will use the print services. (All of them are in the same subnet.) When workstations B and C want to use the printer through server A, do they: Still need the printer driver from the network printer? If they do, then after they download the printer driver, do they still connect to server A for printing? Or : Do they connect directly to the network printer instead (since they are all on the same network)? In general, is a print server used to: Just distribute the drivers to the workstations which will then connect to the network printer directly? Or : Manage the printing to the printers, such that workstations will connect to the print server instead of connecting directly to the printer? If so, why does the workstation still have the printer driver installed? | In general, print servers are used to both distribute drivers to client computers and centrally process and manage the print jobs. In large environments it's useful to have homogeneous drivers (which will usually contain certain printing configurations that are desirable to control centrally) in addition to having a central location for managing and logging print jobs. For example, the most common setting I see companies want to "push out" to client PCs from the print server is to default to black and white printing, rather than color printing (to save money on the more expensive color ink). So yes, in the general case, the client computer will connect to the print server, acquire the printer driver from it, and then connect to the server to actually print to that printer. It is possible, though much less common, to connect to a print server just to get the right driver, install the printer directly, with that driver, and then bypass the print server by printing directly to that printer. But note that this is dependent on how the printer is installed on the client. It's either installed "directly" as a stand-alone printer on the client, or installed as a shared printer from the print server, and this is what determines whether the client connects to the printer directly, or through the print server instead. This is where the distinction between a physical printer ("print device") and a logical printer matters - it is actually possible to have the same physical print device installed multiple times as different logical printers. For example, by installing the same print device once directly, and once via the shared printer on the print server. Since you tagged your question with Server 2012 R2, this Technet doc on Server 2012 Printer Sharing Technologies will probably be of interest. Note the section titled: Enhanced Point and Print , which is a technology that allows clients to print to compatible printers through a Windows Server 2012+ print server without installing a specific driver for the printer on the client. Meaning, of course, that it's also possible to use a print server so that clients don't need to install drivers for specific printers, but it's still most common that a print server will both distribute drivers to clients and process/manage client print jobs. A short PowerShell sketch of both installation modes follows below. | {
"source": [
"https://serverfault.com/questions/761887",
"https://serverfault.com",
"https://serverfault.com/users/332545/"
]
} |
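A short PowerShell sketch of the two installation modes discussed above, for Windows 8/Server 2012 or later; the server, queue, driver and IP address values are placeholders:
# Mode 1: install the shared queue from the print server - jobs are spooled through the server
Add-Printer -ConnectionName '\\PRINTSRV01\Accounting-MFP'
# Mode 2: install the same print device directly - jobs bypass the print server
# (assumes the driver named below is already present on the client)
Add-PrinterPort -Name 'IP_10.0.0.50' -PrinterHostAddress '10.0.0.50'
Add-Printer -Name 'Accounting-MFP (direct)' -DriverName 'HP Universal Printing PCL 6' -PortName 'IP_10.0.0.50'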
763,815 | Here is how I enter the value for DKIM key: "v=DKIM1; k=rsa; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwztXzIUqic95qSESmnqX U5v4W4ENbciFWyBkymsmmSNOhLlEtzp/mnyhf50ApwCTGLK9U7goo/ijX/wr5roy XhReVrvcqtIo3+63a1Et58C1J2o4xCvp0K2/lM6hla4B9jSph7QzjYdtWlOJqLRs o0nzcut7DSq/xYcVqvrFDNbutCfG//0wcRVUtGEyLX/a/7mAAkW6H8UEYMPglQ9c eEDfTT6pzIlqaK9cHGOsSCg4r0N8YxnHFMRzKaZwmudaXTorSbCs7e681g125/vJ e82VV7DE0uvKW/jquZYtgMn7+0rm+2FDYcDx/7lzoByl91rx37MAJaUx/2JHi1EA nwIDAQAB" There are no new lines in this value (I specifically copy pasted and tested it in a text editor). But for some reason I keep getting TXT is too long error: TXTRDATATooLong encountered at "v=DKIM1; k=rsa; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwztXzIUqic95qSESmnqX U5v4W4ENbciFWyBkymsmmSNOhLlEtzp/mnyhf50ApwCTGLK9U7goo/ijX/wr5roy XhReVrvcqtIo3+63a1Et58C1J2o4xCvp0K2/lM6hla4B9jSph7QzjYdtWlOJqLRs o0nzcut7DSq/xYcVqvrFDNbutCfG//0wcRVUtGEyLX/a/7mAAkW6H8UEYMPglQ9c eEDfTT6pzIlqaK9cHGOsSCg4r0N8YxnHFMRzKaZwmudaXTorSbCs7e681g125/vJ e82VV7DE0uvKW/jquZYtgMn7+0rm+2FDYcDx/7lzoByl91rx37MAJaUx/2JHi1EA nwIDAQAB" I really don't know what I should do to fix this issue. | See a similar issue in Route 53 forum : Unfortunately the 255 character limit per string on TXT records is not a Route53 limit but rather one imposed by the DNS protocol itself. However, each TXT record can have multiple strings, each 255 characters long. You will need to split your DKIM into multiple strings for your TXT record. You can do this via the console by entering each string encapsulated in quotes, one string per line. Important note : Do not use "one string per line" as the instructions say -- separate strings with a single space, eg. "foo" "bar" not "foo"\n"bar" . Use DKIMValidator to validate the signature is being read correctly. | {
"source": [
"https://serverfault.com/questions/763815",
"https://serverfault.com",
"https://serverfault.com/users/183288/"
]
} |
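For illustration of the answer above, the same value split into multiple quoted strings looks roughly like this (the key is abbreviated here, and the selector and domain in the check below are placeholders):
"v=DKIM1; k=rsa; p=MIIBIjANBgkq...first 255 characters of the key..." "...remainder of the key...nwIDAQAB"
DKIM verifiers concatenate the strings back into one value; you can check what is actually being served with:
dig +short TXT selector._domainkey.example.com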
765,258 | I use nginx as a reverse SSL proxy in front of a backend webserver that is capable of doing HTTP/2.0. I noticed that nginx proxies the requests to the backend server via HTTP/1.1 rather than HTTP/2.0. Is it possible to tell nginx to use an unencrypted HTTP/2.0 connection instead? Would this increase performance? | Found this: https://trac.nginx.org/nginx/ticket/923 There are no plans to implement HTTP/2 support in the proxy module in the foreseeable future. Excerpt from a mail referenced in the ticket: There is almost no sense to implement it, as the main HTTP/2
benefit is that it allows multiplexing many requests within a
single connection, thus [almost] removing the limit on number of
simalteneous requests - and there is no such limit when talking to
your own backends. Moreover, things may even become worse when
using HTTP/2 to backends, due to single TCP connection being used
instead of multiple ones. | {
"source": [
"https://serverfault.com/questions/765258",
"https://serverfault.com",
"https://serverfault.com/users/256824/"
]
} |
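Given that HTTP/2 to the backend is not on the table, the usual approach is to terminate HTTP/2 at nginx and keep reusable HTTP/1.1 keepalive connections to the upstream. A minimal sketch; the upstream address and certificate paths are assumptions:
upstream backend {
    server 127.0.0.1:8080;
    keepalive 16;                       # idle connections kept open for reuse
}
server {
    listen 443 ssl http2;               # HTTP/2 towards the clients
    ssl_certificate     /etc/nginx/tls/example.crt;
    ssl_certificate_key /etc/nginx/tls/example.key;
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;         # needed for upstream keepalive
        proxy_set_header Connection ""; # clear "close" so connections stay open
    }
}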