source_id | question | response | metadata
---|---|---|---|
973,666 | If I have the hosts example.com and leaf.intermediate.example.com in DNS records for example.com , but do not have any records for intermediate.example.com itself, does that cause a problem in some situations or is it bad style or etiquette for some reason? I have web servers set up like this and everything seems to work fine, but just wanted to check if there's something I'm missing. | TL;DR: yes intermediate subdomains need to exist, at least when queried for, per definition of the DNS; they may not exist in the zonefile though. A possible confusion to eliminate first; Definition of "Empty Non-Terminal" You may be confusing two things, as other answers seem also to do. Namely, what happens when querying for names versus how you configure your nameserver and the content of the zonefile. The DNS is hierarchical. For any leaf node to exist, all components leading to it MUST exist, in the sense that if they are queried for, the responsible authoritative nameserver should reply for them without an error. As explained in RFC 8020 (which is just a repeat of what was always the rule,
but just some DNS providers needed a reminder), if, for any query, an authoritative nameserver replies NXDOMAIN (that is: this resource record does not exist), then it means that any label "below" this resource does not exist either. In your example, if a query for intermediate.example.com returns NXDOMAIN , then any proper recursive nameserver will immediately reply NXDOMAIN for leaf.intermediate.example.com , because that record cannot exist unless all the labels leading to it exist. This was already stated in the past in RFC 4592 about wildcards (which are unrelated here): The domain name space is a tree structure. Nodes in the tree either own at least one RRSet and/or have descendants that collectively own at least one RRSet. A node may exist with no RRSets only if it has descendants that do; this node is an empty non-terminal. A node with no descendants is a leaf node. Empty leaf nodes do not
exist. A practical example with .US domain names Let us take a working example from a TLD with a lot of labels historically, that is .US . Picking any example online, let us use www.teh.k12.ca.us . Of course if you query for this name, or even teh.k12.ca.us you can get back A records. Nothing conclusive here for our purpose (there is even a CNAME in the middle of it, but we do not care about that) : $ dig www.teh.k12.ca.us A +short
CA02205882.schoolwires.net.
107.21.20.201
35.172.15.22
$ dig teh.k12.ca.us A +short
162.242.146.30
184.72.49.125
54.204.24.19
54.214.44.86 Let us query now for k12.ca.us (I am not querying the authoritative nameserver of it, but that does not change the result in fact): $ dig k12.ca.us A
; <<>> DiG 9.11.5-P1-1ubuntu2.5-Ubuntu <<>> k12.ca.us A
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 59101
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1480
;; QUESTION SECTION:
;k12.ca.us. IN A
;; AUTHORITY SECTION:
us. 3587 IN SOA a.cctld.us. hostmaster.neustar.biz. 2024847624 900 900 604800 86400
;; Query time: 115 msec
;; SERVER: 127.0.0.10#53(127.0.0.10)
;; WHEN: mer. juil. 03 01:13:20 EST 2019
;; MSG SIZE rcvd: 104 What do we learn from this answer? First, it is a success because the status is NOERROR . If it had been anything else, and specifically NXDOMAIN , then neither teh.k12.ca.us nor www.teh.k12.ca.us could exist. Second, the ANSWER section is empty. There are no A records for k12.ca.us . This is not an error: this type ( A ) does not exist for this record, but maybe other record types exist for this record, or this record is an ENT, aka "Empty Non Terminal": it is empty, but it is not a leaf, there are things "below" it (see definition in RFC 7719 ), as we already know (but normally the resolution is top down, so we will reach this step before going one level below, and not the opposite like we are doing here for demonstration purposes). This is why in fact, as a shortcut, we say the status code is NODATA : this is not a real status code, it just means NOERROR + empty ANSWER section, which means there is no data for this specific record type but there may be for others. You can repeat the same experiment for the same result if you query with the next "up" label, that is the name ca.us . Queries' results vs zonefile content Now where can the confusion come from? I believe it may come from the false idea that any dot in a DNS name means there is a delegation. This is false.
Said differently, your example.com zonefile can be like that, and it is totally valid and working: example.com. IN SOA ....
example.com. IN NS ....
example.com. IN NS ....
leaf.intermediate.example.com IN A 192.0.2.37 With such a zonefile, querying this nameserver you will get exactly the behavior observed above: a query for intermediate.example.com will return NOERROR with an empty answer. You do not need to create it specifically in the zonefile (if you do not need it for other reasons); the authoritative nameserver will take care of synthesizing the "intermediate" replies, because it sees it needs this empty non-terminal (and any others "in-between" if there had been other labels) as it sees the leaf name leaf.intermediate.example.com . Note that this is in fact a widespread situation in some areas, but you might not see it because it concerns more "infrastructure" records that people are not exposed to: in reverse zones like in-addr.arpa or ip6.arpa , and specifically the latter. You will have records like 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.a.1.d.e.1.6.8.0.0.0.0.0.0.2.6.2.ip6.arpa. 1h IN PTR text-lb.eqiad.wikimedia.org. and there is obviously not a delegation at each dot, nor resource records attached at each label. The same goes for SRV records, like _nicname._tcp.fr. 12h IN SRV 0 0 43 whois.nic.fr. : a domain can have many _proto._tcp.example.com and _proto._udp.example.com SRV records because by design they must have this form, but at the same time _tcp.example.com and _udp.example.com will remain Empty Non-Terminals because they are never used as records themselves. You have in fact many other cases of specific construction of names based on "underscore labels" for various protocols, such as DKIM. DKIM mandates that you have DNS records like whatever._domainkey.example.com , but obviously _domainkey.example.com by itself will never be used, so it will remain an empty non-terminal. This is the same for TLSA records in DANE (ex: _25._tcp.somehost.example.com. TLSA 3 1 1 BASE64== ), or URI records (ex: _ftp._tcp IN URI 10 1 "ftp://ftp1.example.com/public" ). Nameserver behavior and generation of intermediate replies Why does the nameserver automatically synthesize such intermediate answers? The core resolution algorithm for the DNS, as detailed in RFC 1034 section 4.3.2 , is the reason for that; let us take it and summarize it in our case when querying the above authoritative nameserver for the name intermediate.example.com (this is the QNAME in the protocol below): Search the available zones for the zone which is the nearest
ancestor to QNAME. If such a zone is found, go to step 3,
otherwise step 4. The nameserver finds zone example.com as nearest ancestor of QNAME, so we can go to step 3. We have now this: Start matching down, label by label, in the zone. [..] a. If the whole of QNAME is matched, we have found the
node. [..] b. If a match would take us out of the authoritative data,
we have a referral. This happens when we encounter a
node with NS RRs marking cuts along the bottom of a
zone. [..] c. If at some label, a match is impossible (i.e., the
corresponding label does not exist), look to see if a
the "*" label exists. [..] We can eliminate cases b and c, because our zonefile has no delegation (hence there will be never a referral to other nameservers, no case b), nor wildcards (so no case c). We only have to deal here with case a. We start matching down, label by label, in the zone.
So even if we had a long sub.sub.sub.sub.sub.sub.sub.sub.example.com name, at some point, we arrive at case a: we did not find a referral, nor a wildcard, but we ended up at the final name we wanted a result for. Then we apply the rest of the content of case a: If the data at the node is a CNAME Not our case, we skip that. Otherwise, copy all RRs which match QTYPE into the
answer section and go to step 6. Whatever QTYPE we choose ( A , AAAA , NS , etc.) we have no RRs for intermediate.example.com as it does not appear in the zonefile. So the copy here is empty. Now we finish at step 6: Using local data only, attempt to add other RRs which may be
useful to the additional section of the query. Exit. Not relevant for us here, hence we finish with success. This exactly explains the behavior observed: such queries will return NOERROR but no data either. Now, you may ask yourself: "but then if I use any name, like another.example.com then by the above algorithm I should get the same reply (no error)", but observations would instead report NXDOMAIN in that case. Why? Because the whole algorithm as explained, starts with this: The following algorithm assumes that the RRs are organized in several
tree structures, one for each zone, and another for the cache This means that the above zonefile is transformed into this tree: +-----+
| com | (just to show the delegation, does not exist in this nameserver)
+-----+
|
|
|
+---------+
| example | SOA, NS records
+---------+
|
|
|
+--------------+
| intermediate | no records
+--------------+
|
|
|
+------+
| leaf | A record
+------+ So when following the algorithm, from the top, you can indeed find a path: com > example > intermediate (because the path com > example > intermediate > leaf exists)
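For illustration, assuming the above zonefile is loaded on an authoritative nameserver for example.com (the server name ns1.example.com used here is hypothetical), both outcomes discussed in this answer can be checked directly with dig:
$ dig @ns1.example.com intermediate.example.com A    # expect status NOERROR with an empty ANSWER section, i.e. the NODATA case
$ dig @ns1.example.com another.example.com A         # expect status NXDOMAIN, as explained next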
But for another.example.com , after com > example you do not find the another label in the tree as a child node of example . Hence we fall into part of choice c from above: If the "*" label does not exist, check whether the name
we are looking for is the original QNAME in the query
or a name we have followed due to a CNAME. If the name
is original, set an authoritative name error in the
response and exit. Otherwise just exit. Label * does not exist, and we did not follow a CNAME , hence we are in case: set an authoritative name error in the response and exit , aka NXDOMAIN . Note that all the above did create confusion in the past. This is collected in some RFCs. See for example this unexpected place (the joy of DNS specifications being so impenetrable) defining wildcards: RFC 4592 "The Role of Wildcards in the Domain Name System" and notably its section 2.2 "Existence Rules", also cited in part at the beginning of my answer but here it is more complete: Empty non-terminals [RFC2136, section 7.16] are domain names that own
no resource records but have subdomains that do. In section 2.2.1, "_tcp.host1.example." is an example of an empty non-terminal name. Empty non-terminals are introduced by this text in section 3.1 of RFC
1034: # The domain name space is a tree structure. Each node and leaf on
# the tree corresponds to a resource set (which may be empty). The
# domain system makes no distinctions between the uses of the
# interior nodes and leaves, and this memo uses the term "node" to
# refer to both. The parenthesized "which may be empty" specifies that empty non- terminals are explicitly recognized and that empty non-terminals "exist". Pedantically reading the above paragraph can lead to an interpretation that all possible domains exist--up to the suggested limit of 255 octets for a domain name [RFC1035]. For example, www.example. may have an A RR, and as far as is practically concerned, is a leaf of the domain tree. But the definition can be taken to mean that sub.www.example. also exists, albeit with no data.
By extension, all possible domains exist, from the root on down. As RFC 1034 also defines "an authoritative name error indicating
that the name does not exist" in section 4.3.1, so this apparently
is not the intent of the original definition, justifying the need
for an updated definition in the next section. And that definition in the next section is the paragraph I quoted at the beginning. Note that RFC 8020 (on NXDOMAIN really meaning NXDOMAIN , that is, if you reply NXDOMAIN for intermediate.example.com , then leaf.intermediate.example.com can not exist) was mandated in part because various DNS providers did not follow this interpretation and that created havoc, or they were just bugs; see for example this one fixed in 2013 in one opensource authoritative nameserver code: https://github.com/PowerDNS/pdns/issues/127 People then needed to put specific countermeasures in place just for them: that is, not aggressively caching NXDOMAIN , because for those providers, if you get NXDOMAIN at some node, you may still get something other than NXDOMAIN at another node below it. And this was making QNAME minimization (RFC 7816) impossible to obtain (see https://indico.dns-oarc.net/event/21/contributions/298/attachments/267/487/qname-min.pdf for longer details), while it was wanted to increase privacy. The existence of empty non-terminals in the case of DNSSEC also created problems in the past, around the handling of non-existence (see https://indico.dns-oarc.net/event/25/contributions/403/attachments/378/647/AFNIC_OARC_Dallas.pdf if interested, but you really need a good understanding of DNSSEC first). The following two messages give an example of the problems one provider had in properly enforcing this rule on Empty Non-Terminals; they give some perspective on the issues and why we were there: https://mailarchive.ietf.org/arch/msg/dnsop/XIX16DCe2ln3ZnZai723v32ZIjE https://lists.dns-oarc.net/pipermail/dns-operations/2019-April/018640.html | {
"source": [
"https://serverfault.com/questions/973666",
"https://serverfault.com",
"https://serverfault.com/users/354801/"
]
} |
975,594 | I'm trying to track network activities on my machine running CentOS 7. According to iptables logs, it seems that Google (74.125.133.108) is approaching my VPS many times. I can see that source-port is always 993. What is the reason for that? 16:22:11 kernel: ipt IN=eth0 OUT= MAC=... SRC=74.125.133.108 DST=... LEN=60 TOS=0x00 PREC=0xA0 TTL=107 ID=4587 PROTO=TCP SPT=993 DPT=47920 WINDOW=62392 RES=0x00 ACK SYN URGP=0
16:22:11 kernel: ipt IN=eth0 OUT= MAC=... SRC=74.125.133.108 DST=... LEN=52 TOS=0x00 PREC=0xA0 TTL=107 ID=4666 PROTO=TCP SPT=993 DPT=47920 WINDOW=248 RES=0x00 ACK URGP=0
16:22:11 kernel: ipt IN=eth0 OUT= MAC=... SRC=74.125.133.108 DST=... LEN=2767 TOS=0x00 PREC=0xA0 TTL=107 ID=4668 PROTO=TCP SPT=993 DPT=47920 WINDOW=248 RES=0x00 ACK PSH URGP=0
16:22:11 kernel: ipt IN=eth0 OUT= MAC=... SRC=74.125.133.108 DST=... LEN=331 TOS=0x00 PREC=0xA0 TTL=107 ID=4704 PROTO=TCP SPT=993 DPT=47920 WINDOW=248 RES=0x00 ACK PSH URGP=0
16:22:11 kernel: ipt IN=eth0 OUT= MAC=... SRC=74.125.133.108 DST=... LEN=150 TOS=0x00 PREC=0xA0 TTL=107 ID=4705 PROTO=TCP SPT=993 DPT=47920 WINDOW=248 RES=0x00 ACK PSH URGP=0
16:22:11 kernel: ipt IN=eth0 OUT= MAC=... SRC=74.125.133.108 DST=... LEN=299 TOS=0x00 PREC=0xA0 TTL=107 ID=4733 PROTO=TCP SPT=993 DPT=47920 WINDOW=248 RES=0x00 ACK PSH URGP=0
16:22:11 kernel: ipt IN=eth0 OUT= MAC=... SRC=74.125.133.108 DST=... LEN=52 TOS=0x00 PREC=0xA0 TTL=107 ID=4771 PROTO=TCP SPT=993 DPT=47920 WINDOW=248 RES=0x00 ACK URGP=0
16:22:11 kernel: ipt IN=eth0 OUT= MAC=... SRC=74.125.133.108 DST=... LEN=354 TOS=0x00 PREC=0xA0 TTL=107 ID=5026 PROTO=TCP SPT=993 DPT=47920 WINDOW=248 RES=0x00 ACK PSH URGP=0
16:22:11 kernel: ipt IN=eth0 OUT= MAC=... SRC=74.125.133.108 DST=... LEN=52 TOS=0x00 PREC=0xA0 TTL=107 ID=5094 PROTO=TCP SPT=993 DPT=47920 WINDOW=248 RES=0x00 ACK URGP=0
16:22:11 kernel: ipt IN=eth0 OUT= MAC=... SRC=74.125.133.108 DST=... LEN=128 TOS=0x00 PREC=0xA0 TTL=107 ID=5116 PROTO=TCP SPT=993 DPT=47920 WINDOW=248 RES=0x00 ACK PSH URGP=0
16:22:12 kernel: ipt IN=eth0 OUT= MAC=... SRC=74.125.133.108 DST=... LEN=52 TOS=0x00 PREC=0xA0 TTL=107 ID=5187 PROTO=TCP SPT=993 DPT=47920 WINDOW=248 RES=0x00 ACK URGP=0
16:22:12 kernel: ipt IN=eth0 OUT= MAC=... SRC=74.125.133.108 DST=... LEN=124 TOS=0x00 PREC=0xA0 TTL=107 ID=5189 PROTO=TCP SPT=993 DPT=47920 WINDOW=248 RES=0x00 ACK PSH URGP=0
16:22:12 kernel: ipt IN=eth0 OUT= MAC=... SRC=74.125.133.108 DST=... LEN=52 TOS=0x00 PREC=0xA0 TTL=107 ID=5195 PROTO=TCP SPT=993 DPT=47920 WINDOW=248 RES=0x00 ACK URGP=0
16:22:12 kernel: ipt IN=eth0 OUT= MAC=... SRC=74.125.133.108 DST=... LEN=339 TOS=0x00 PREC=0xA0 TTL=107 ID=5213 PROTO=TCP SPT=993 DPT=47920 WINDOW=248 RES=0x00 ACK PSH URGP=0
16:22:12 kernel: ipt IN=eth0 OUT= MAC=... SRC=74.125.133.108 DST=... LEN=119 TOS=0x00 PREC=0xA0 TTL=107 ID=5214 PROTO=TCP SPT=993 DPT=47920 WINDOW=248 RES=0x00 ACK PSH URGP=0
16:22:12 kernel: ipt IN=eth0 OUT= MAC=... SRC=74.125.133.108 DST=... LEN=52 TOS=0x00 PREC=0xA0 TTL=107 ID=5229 PROTO=TCP SPT=993 DPT=47920 WINDOW=248 RES=0x00 ACK URGP=0
16:22:12 kernel: ipt IN=eth0 OUT= MAC=... SRC=74.125.133.108 DST=... LEN=52 TOS=0x00 PREC=0xA0 TTL=107 ID=5257 PROTO=TCP SPT=993 DPT=47920 WINDOW=248 RES=0x00 ACK FIN URGP=0 | Notice the ACK SYN on the first packet in your dump? Those flags indicate the second stage of the three-way TCP handshake . Since this packet is coming from Google, it indicates that Google is not "approaching your VPS"; your VPS is connecting to Google on port 993, and Google is sending back an acknowledgement. To investigate this further, you can use the ss command to view details (including process IDs) of connections that are currently active. You can also use the kernel audit subsystem to log outgoing connections as they happen. | {
"source": [
"https://serverfault.com/questions/975594",
"https://serverfault.com",
"https://serverfault.com/users/317691/"
]
} |
977,835 | I've been using AWS for years, but have never ventured outside the Quick Start and AWS Marketplace sections when launching an EC2 instance. The AMIs from the AWS Marketplace look trustable, they have a link to the seller profile, etc.: Compare this to community AMIs, that seem to appear out of thin air, with no information whatsoever on who the heck created and uploaded it: How to know where a Community AMI comes from? Can these be trusted? | Any AWS user can create a community AMI by making it public and shared with everyone. So the answer is just about anyone could have created that community AMI. While many are probably fine, you cannot trust them by default, in my opinion. Regarding the specific creator of the AMI in question, it appears that the only user-specific information available is the OwnerId field, which is the AWS account ID of the image owner. Here's an example AWS Cli command to get that information: aws ec2 describe-images --image-ids ami-gs5mba4yp26bsyx57 (Replace "gs5mba4yp26bsyx57" with the ami id you want to examine.) This will return a lot of information about the image, including the OwnerId field. | {
"source": [
"https://serverfault.com/questions/977835",
"https://serverfault.com",
"https://serverfault.com/users/83039/"
]
} |
980,387 | I took control of a new VM running Windows Server 2019 (Datacenter) recently. Since that time, on each login to the server (via RDP) the Shutdown Event Tracker shows asking for information about an unexpected shutdown. Checking the Event Viewer there is no evidence of a restart, unexpected or otherwise, since the previous time I'd seen this message previously and filled it out. How do I stop this from showing each logon or clear whatever is stuck in the system that is triggering it? I don't want to NOT see this dialog when it is warranted, I just know it's not warranted in this case and want to know how to stop it from showing up on EVERY logon. The server is up to date on patches, serving files without much of any additional software except for the Splunk UniversalForwarder mandated by HQ. | In research of this issue I've found it has been reported occurring on Server 2016 and 2019. Removing two registry keys appears to resolve the issue. To resolve I opened the Registry using an account with admin privileges on the server, and navigated to and deleted the two following registry keys (after backing up, of course): \HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Reliability
.\DirtyShutdown
.\DirtyShutdownTime After removing these two registry keys the Shutdown Event Tracker did not show up on subsequent logins. | {
"source": [
"https://serverfault.com/questions/980387",
"https://serverfault.com",
"https://serverfault.com/users/76309/"
]
} |
985,908 | When I read the supermicro website : there are chassis and motherboard products. I want to know what's the function of them, and what's the difference? | Chassis The chassis, also often called the "case", is the container or enclosure that holds all of the other pieces of a computer. It might include some switches, fans, and indicator lights. Desktops and servers will each use a chassis of a different size and shape, but the functions are similar. Empty server chassis: Empty desktop chassis: Motherboard The motherboard is a circuit board containing slots or sockets for the CPU and memory, and also contains supplementary circuitry. The motherboard will often have networking, video, and audio components built in, and it will usually have expansion slots for additional components to be added or upgraded. Motherboards are mounted in the chassis, typically in the largest open space you see in these pictures. The other open spaces are for other components, like fans, hard drives, and optical drives. Server motherboards are similar to desktop motherboards, but usually are larger, and are capable of containing more CPUs and more RAM. Server motherboard: Desktop motherboard: | {
"source": [
"https://serverfault.com/questions/985908",
"https://serverfault.com",
"https://serverfault.com/users/413980/"
]
} |
986,772 | Issue I have read many discussions about storage, and whether SSDs or classic HDDs are better. I am quite confused. HDDs are still quite preferred, but why? Which is better for active storage? For example for databases, where the disk is active all the time? About SSD. Pros. They are quiet. Not mechanical. Fastest. Cons. More expensive. Question. When the life cycle for one cell of a SSD is used, what happens then? Is the disk reduced by only this cell and works normally? What is the best filesystem to write? Is ext4 good because it saves to cells consecutively? About HDD. Pros. Cheaper. Cons. In case of mechanical fault, I believe there is usually no way to repair it. (Please confirm.) Slowest, although I think HDD speed is usually sufficient for servers. Is it just about price? Why are HDDs preferred? And are SSDs really useful for servers? | One aspect of my job is designing and building large-scale storage systems (often known as "SANs", or "Storage Area Networks"). Typically, we use a tiered approach with SSD's and HDD's combined. That said, each one has specific benefits. SSD's almost always have a higher Cost-per-Byte. I can get 10k SAS 4kn HDD's with a cost-per-gigabyte of $0.068/GB USD. That means for roughly $280 I can get a 4TB drive. SSD's on the other hand typically have a cost-per-gigabyte in the 10's and 20's of cents, even as high as dollars-per-gigabyte. When dealing with RAID, speed becomes less important, and instead size and reliability matter much more. I can build a 12TB N+2 RAID system with HDD's far cheaper than SSD's. This is mostly due to point 1. When dealt with properly, HDD's are extremely cheap to replace and maintain. Because the cost-per-byte is lower, replacing an HDD with another due to failure is cheaper. And, because HDD failures are typically related to time vs. data-written, replacing it doesn't automatically start using up TBW when it rebuilds the RAID array. (Granted, TBW percentage used for a rebuild is tiny overall, but the point stands.) The SSD market is relatively complex. There are four (current, at the time of this writing) major types of SSD's, rated from highest number of total writes supported to lowest: SLC, MLC, TLC, QLC. The SLC typically supports the largest numbers of total writes (the major limiting factor of SSD lifetimes), whereas the QLC typically supports the lowest numbers of total writes. That said, the most successful storage systems I've seen are tiered with both drives in use. Personally, all the storage systems I recommend to clients generally follow the following tiers: Tier 1 is typically a (or several) RAID 10 SSD-only tier. Data is always written to Tier 1. Tier 2 is typically a (or several) RAID 50 or 5 SSD-only tier. Data is aged out of Tier 1 to Tier 2. Tier 3 is typically a (or several) RAID 10 HDD-only tier. Data is aged out of Tier 2 to Tier 3. Tier 4 is typically several groups of RAID 6 HDD-only tiers. Data is aged out of Tier 3 to Tier 4. We make the RAID 6 groups as small as possible, so that there is a maximal support of drive-failure. Read/Write performance drops as you increase tiers, data will propagate down to a tier where most of the data shares the same access-/modification-frequency. (That is, the more frequently data is read/written, the higher the tier it resides on.) Sprinkle some well-designed fibre-channel in there, and you can actually build a SAN that has a higher throughput than on-board drives would. 
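(A quick way to read the S.M.A.R.T. status referred to below is smartctl from the smartmontools package; the device path is only an example and the exact attribute names vary by vendor:)
$ sudo smartctl -H /dev/sda    # overall health self-assessment
$ sudo smartctl -a /dev/sda    # full report: look at reallocated/spare sector counts (HDD) or wear/percentage-used attributes (SSD)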
Now, to some specific items you mention: Your SSD Questions How SSD exactly works, when life cycle for one cell is out, what then? Disk is reduced by only this cell and works normally? Or what happened then? Both drive-types are typically designed with a number of "spare" cells. That is, they have "extra" space on them you cannot access that supports failing-to if a cell dies. (IIRC it's like 7-10%.) This means if a single "cell" (sector on HDD) dies, a "spare" is used. You can check the status of this via the S.M.A.R.T. diagnostics utility on both drives. What is best solution (filesystem) to write? I think ext4 is good, because it saves to cells consecutively? For SSD's this is entirely irrelevant. Cell-positioning does not matter, as access time is typically linear. Your HDD Questions In case of mechanical fault, no way to repair it (is it right)? Partially incorrect. HDD's are actually easier to recover data from in most failure situations. (Note: I said easier , not easy .) There is specialized equipment required, but success-rates here seem pretty high. The platters can often be read out of the HDD itself by special equipment, which allows data-recovery if the drive is dead. Slowest, but I think speed is not so important, because speed of HDD is absolutely sufficient for server using? Typically, when using RAID, single-drive speed becomes less a factor as you can use speed-pairing RAID setups that allow you to increase the overall speed. (RAID 0, 5, 6 are frequently used, often in tandem.) For a database with high IO's, HDD's are typically not sufficient unless designed very deliberately. You would want SLC write-intensive grade SSD's for database-grade IO. | {
"source": [
"https://serverfault.com/questions/986772",
"https://serverfault.com",
"https://serverfault.com/users/537773/"
]
} |
987,473 | I have sensitive data stored in both Azure DB and Azure SQL VM . An authorised DBA can log on and query the database, but in theory could a random Microsoft employee do the same without asking permission? I found this online which suggests the answer is 'no', but is it really? Customer data ownership: Microsoft does not inspect, approve, or monitor applications that customers deploy to Azure. Moreover, Microsoft does not know what kind of data customers choose to store in Azure. Microsoft does not claim data ownership over the customer information that's entered into Azure. Also found this on a site discussing the negatives of using a SQL Developer Licence: Microsoft gets access to your data: it is mandatory with any non-commercial installation of SQL Server that all your usage data covering performance, errors, feature use, IP addresses, device identifiers and more, is sent to Microsoft. There are no exceptions. This will likely rule it out for any company that deals with particularly sensitive data. I'm not proposing using a developer licence on Azure, but which is it - can Microsoft inspect my data or not, either legitimately or a rogue employee? | Legally speaking, they can't read your data or send your data to law enforcement without a correct court order. Requests for customer data Government requests for customer data must
comply with applicable laws. A subpoena or its local equivalent is
required to request non-content data, and a warrant, court order, or
its local equivalent, is required for content data. Per Microsoft's transparency report, you can see the current state of how many subpoenas they answered there . You have to choose your Azure region wisely for that reason. For example, a HIPAA-like enterprise in Canada would have to have its data hosted in Canada. A rogue Microsoft employee could maybe see your data. The process there is unknown, but that risk is the same with any hoster or a rogue employee inside your own corporation. | {
"source": [
"https://serverfault.com/questions/987473",
"https://serverfault.com",
"https://serverfault.com/users/327102/"
]
} |
987,686 | I just installed the latest release of docker-ce on CentOS, but I can't reach published ports from a neighboring server and can't reach the outside from the container itself. Running a plain vanilla CentOS 8 with NetworkManager and FirewallD enabled. Default firewall zone is public . Versions: docker-ce 19.03.3 (official Docker RPM) containerd.io 1.2.6 (official Docker RPM for CentOS 7 - not available for CentOS 8 yet) CentOS 8.0.1905 (minimal install) | After spending a couple of days looking at logs and configurations for the involved components, I was about to throw in the towel and revert back to Fedora 30, where this seems to work straight out of the box. Focusing on firewalling, I realized that disabling firewalld seemed to do the trick, but I would prefer not to do that. While inspecting network rules with iptables , I realized that the switch to nftables means that iptables is now an abstraction layer that only shows a small part of the nftables rules. That means most - if not all - of the firewalld configuration will be applied outside the scope of iptables . I was used to be able to find the whole truth in iptables , so this will take some getting used to. Long story short - for this to work, I had to enable masquerading. It looked like dockerd already did this through iptables , but apparently this needs to be specifically enabled for the firewall zone for iptables masquerading to work: # Masquerading allows for docker ingress and egress (this is the juicy bit)
firewall-cmd --zone=public --add-masquerade --permanent
# Specifically allow incoming traffic on port 80/443 (nothing new here)
firewall-cmd --zone=public --add-port=80/tcp
firewall-cmd --zone=public --add-port=443/tcp
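# The two --add-port rules above are runtime-only; firewalld discards runtime-only
# changes on a reload, so to keep them across reloads and reboots you would
# typically also add them to the permanent configuration:
firewall-cmd --permanent --zone=public --add-port=80/tcp
firewall-cmd --permanent --zone=public --add-port=443/tcp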
# Reload firewall to apply permanent rules
firewall-cmd --reload Reboot or restart dockerd , and both ingress and egress should work. | {
"source": [
"https://serverfault.com/questions/987686",
"https://serverfault.com",
"https://serverfault.com/users/118577/"
]
} |
987,708 | I am trying to change the Access Control permissions on a specific registry key i'm generating using a batch file. I try using regini.exe to pull the configuration from a .ini file and run into issues. I keep getting this error: Z:\EM\Pre>regini.exe RegistryPermissions.ini
REGINI: CreateKey (\HKEY_CURRENT_CONFIG\Software\E) relative to handle (000000000) failed - 161
REGINI: Failed to load from file 'RegistryPermissions.ini' (161) This is the contents of my .ini file RegistryPermissions.ini: Computer\HKEY_CURRENT_CONFIG\Software\E [1 7] This is the batch script i'm writing to solve a problem: @echo off
:: ==========================================
:: Set E Key
:: ==========================================
:: Date : 11 October 2019
:: Author :
:: Modified Date:
:: Modified By:
::
:: Script Details:
:: --------------
:: This script will:
:: + add the E Registry key to HKCC\Software
:: + set the Key permissions to allow "Everyone" full control
:: + reboot PC
:: ===========================================
::***************************************************************
:: Add E Registry Key to HKCC\Software *
::***************************************************************
REG ADD HKCC\Software\E
::***************************************************************
:: Set the Key to permissions to allow Everyone full control *
::***************************************************************
=====This is where I need help=====
::***************************************************************
:: Reboot PC *
::***************************************************************
goto end
:end I have removed some unnecessary sections of the script.
The important part is changing the permissions on a registry key, with cmd. | After spending a couple of days looking at logs and configurations for the involved components, I was about to throw in the towel and revert back to Fedora 30, where this seems to work straight out of the box. Focusing on firewalling, I realized that disabling firewalld seemed to do the trick, but I would prefer not to do that. While inspecting network rules with iptables , I realized that the switch to nftables means that iptables is now an abstraction layer that only shows a small part of the nftables rules. That means most - if not all - of the firewalld configuration will be applied outside the scope of iptables . I was used to be able to find the whole truth in iptables , so this will take some getting used to. Long story short - for this to work, I had to enable masquerading. It looked like dockerd already did this through iptables , but apparently this needs to be specifically enabled for the firewall zone for iptables masquerading to work: # Masquerading allows for docker ingress and egress (this is the juicy bit)
firewall-cmd --zone=public --add-masquerade --permanent
# Specifically allow incoming traffic on port 80/443 (nothing new here)
firewall-cmd --zone=public --add-port=80/tcp
firewall-cmd --zone=public --add-port=443/tcp
# Reload firewall to apply permanent rules
firewall-cmd --reload Reboot or restart dockerd , and both ingress and egress should work. | {
"source": [
"https://serverfault.com/questions/987708",
"https://serverfault.com",
"https://serverfault.com/users/544222/"
]
} |
988,169 | I have a certain domain I manage, which was moved to DNSmadeeasy a week ago. But sometimes when I do a dig request I get a weird IP back on the A record: 166.62.3.1 Others are reporting the same, from different locations. It seems to be random though, as most report the correct IP. DNSmadeeasy say nothing is wrong in their end as usual, so I have no idea how this IP is getting out there. The domain is: elyseecollective.com.au Some dig results are here | The authoritative answers for the ns{1-6}.maccentrecloud.com.au names point to: ns1.maccentrecloud.com.au. 1800 IN A 208.94.148.4
ns2.maccentrecloud.com.au. 1800 IN A 208.80.124.4
ns3.maccentrecloud.com.au. 1800 IN A 208.80.126.4
ns4.maccentrecloud.com.au. 1800 IN A 208.80.125.4
ns5.maccentrecloud.com.au. 1800 IN A 208.80.127.4
ns6.maccentrecloud.com.au. 1800 IN A 208.94.149.4 But the glue records don't quite match: ns1.maccentrecloud.com.au. 900 IN A 208.94.148.4
ns2.maccentrecloud.com.au. 900 IN A 112.140.180.10
ns3.maccentrecloud.com.au. 900 IN A 208.80.126.4
ns4.maccentrecloud.com.au. 900 IN A 208.80.125.4
ns5.maccentrecloud.com.au. 900 IN A 208.80.127.4
ns6.maccentrecloud.com.au. 900 IN A 208.94.149.4 Update the glue (through the registrar for maccentrecloud.com.au ). ( ns2.maccentrecloud.com.au. / 112.140.180.10 responds differently, and the bad glue puts it into the mix of who should be queried) | {
"source": [
"https://serverfault.com/questions/988169",
"https://serverfault.com",
"https://serverfault.com/users/272662/"
]
} |
989,678 | I have an AWS EC2 Ubuntu instance for pet projects. When I tried logging in one day, this error results: ~$ ssh -i"/home/kona/.ssh/aws_kona_id" [email protected] -p22
Enter passphrase for key '/home/kona/.ssh/aws_kona_id':
Received disconnect from [IP address] port 22:2: Too many authentication failures
Disconnected from [IP address] port 22
~$ kona is the only account enabled on this server I've tried rebooting the server, changing my IP address, and waiting. EDIT: kona@arcticjieer:~$ ssh -o "IdentitiesOnly yes" -i"/home/kona/.ssh/aws_kona_id" -v [email protected] -p22
OpenSSH_8.1p1 Debian-1, OpenSSL 1.1.1d 10 Sep 2019
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to ec2-3-17-146-113.us-east-2.compute.amazonaws.com [3.17.146.113] port 22.
debug1: Connection established.
debug1: identity file /home/kona/.ssh/aws_kona_id type -1
debug1: identity file /home/kona/.ssh/aws_kona_id-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_8.1p1 Debian-1
debug1: Remote protocol version 2.0, remote software version OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
debug1: match: OpenSSH_7.6p1 Ubuntu-4ubuntu0.3 pat OpenSSH_7.0*,OpenSSH_7.1*,OpenSSH_7.2*,OpenSSH_7.3*,OpenSSH_7.4*,OpenSSH_7.5*,OpenSSH_7.6*,OpenSSH_7.7* compat 0x04000002
debug1: Authenticating to ec2-3-17-146-113.us-east-2.compute.amazonaws.com:22 as 'kona'
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: algorithm: curve25519-sha256
debug1: kex: host key algorithm: ecdsa-sha2-nistp256
debug1: kex: server->client cipher: [email protected] MAC: <implicit> compression: none
debug1: kex: client->server cipher: [email protected] MAC: <implicit> compression: none
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ecdsa-sha2-nistp256 SHA256:D3sIum9dMyyHNjtnL7Pr4u5DhmP5aQ1jaZ8Adsdma9E
debug1: Host 'ec2-3-17-146-113.us-east-2.compute.amazonaws.com' is known and matches the ECDSA host key.
debug1: Found key in /home/kona/.ssh/known_hosts:41
debug1: rekey out after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: rekey in after 134217728 blocks
debug1: Will attempt key: /home/kona/.ssh/aws_kona_id explicit
debug1: SSH2_MSG_EXT_INFO received
debug1: kex_input_ext_info: server-sig-algs=<ssh-ed25519,ssh-rsa,rsa-sha2-256,rsa-sha2-512,ssh-dss,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521>
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey
debug1: Next authentication method: publickey
debug1: Trying private key: /home/kona/.ssh/aws_kona_id
Enter passphrase for key '/home/kona/.ssh/aws_kona_id':
debug1: Authentications that can continue: publickey
debug1: No more authentication methods to try.
[email protected]: Permission denied (publickey).
kona@arcticjieer:~$ | This error usually means that you’ve got too many keys loaded in your ssh-agent . Explanation: Your ssh client will attempt to use all the keys from ssh-agent one by one before it gets to use the key specified with -i aws_kona_id . Yes, it's a bit counter-intuitive. Because each such attempt counts as an authentication failure and by default only 5 attempts are allowed by the SSH server you are getting the error you see: Too many authentication failures . You can view the identities (keys) attempted with ssh -v . The solution is to tell ssh to only use the identities specified on the command line: ssh -o "IdentitiesOnly yes" -i ~/.ssh/aws_kona_id -v [email protected] If it doesn’t help post the output of that command here. | {
"source": [
"https://serverfault.com/questions/989678",
"https://serverfault.com",
"https://serverfault.com/users/546354/"
]
} |
992,139 | I regularly connect via SSH to a remote server, from an Ubuntu system, on the default port 22.
Let's call the server example.org .
I am sure that this server is configured properly, and I have confirmed that the following issue is independent from my OS and persists across re-installs. There is one particular Wifi access point where, if I connect to the server ( ssh example.org ), I get this: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ED25519 key sent by the remote host is
SHA256:[REDACTED].
Please contact your system administrator.
Add correct host key in /home/user/.ssh/known_hosts to get rid of this message.
Offending ED25519 key in /home/user/.ssh/known_hosts:3
remove with:
ssh-keygen -f "/home/user/.ssh/known_hosts" -R "serv.org"
ED25519 host key for serv.org has changed and you have requested strict checking.
Host key verification failed. The problematic access point belongs to an academic institution, and seems to be more locked-down than commercial ISP networks (for example I can't download torrents on it). If I go back to another network (say, using my phone as an access point), I can connect again. According to Wireshark: The DNS query (to 8.8.8.8 ) for example.org returns the same IP address, even on the problematic access point. The SSH key exchange seems to happen as usual, but the key sent by the server in the "ECDH Key Exchange Reply" indeed has a different fingerprint when I am connecting through the problematic AP. I don't understand what this network is doing.
Blocking port 22 would be one thing, but here I seem to reach the server and get a wrong key as a response. Could this access point be intentionally tampering with the SSH connection?
Is there a way for me to securely use SSH over it despite this?
Should I just avoid using it? | Either this system's host keys are changing or someone/something is MITM'ing the SSH connection. The appropriate course of action is to consider that host as compromised (although it's likely not the host itself, rather the connection) unless/until you have an explanation. You may want to reach out to the system administrator of that AP, advise them of your concerns, and try to track this down with them. | {
"source": [
"https://serverfault.com/questions/992139",
"https://serverfault.com",
"https://serverfault.com/users/345604/"
]
} |
994,804 | I'm trying to set up a website using an Ubuntu machine running nginx. For some reason, I'm able to access the site by the domain name in Safari and Firefox, but in Chrome it's unable to access the server. However, I'm able to use curl, Postman, etc. and I get the index.html back as I'd like to. I found that in Chrome I'm able to access the site using the IP address, and I'm totally lost on where to check next. Here's my configuration file: server {
listen 80 default_server;
listen [::]:80 default_server;
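# Note: only HTTP on port 80 is bound here; there is no "listen 443 ssl" directive, so this server block offers no HTTPS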
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
server_name _;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
}
} I've changed the response code just to make sure that this is the configuration that I'm actually hitting. Any help would be appreciated! Edit: The domain is gwilliam.dev | Your problem is because you are using a .dev domain. The entire .dev top-level domain (TLD) is on the HSTS preload list and that means you must access it using HTTPS. According to your nginx config snippet, you are only providing HTTP bindings, not HTTPS. In fact, I am surprised that you are able to access the domain using Firefox, since Firefox has been forcing .dev to HTTPS since at least mid-2018. You might be using a very old version, in which case you should upgrade immediately. The easiest way to get HTTPS support on your site is LetsEncrypt . Once you have that set up, your site should work in Chrome. | {
"source": [
"https://serverfault.com/questions/994804",
"https://serverfault.com",
"https://serverfault.com/users/551487/"
]
} |
995,130 | My server connects to some strange resources via OpenVPN, and every time the OpenVPN client starts up, an ovpn interface is brought up. I want to expose only selected ports (say, MySQL) to this interface, so I have this rule in my iptables: iptables -A INPUT -i ovpn -p tcp --dport 3306 -j ACCEPT However, because the OpenVPN client can disconnect and reconnect without intervention, the link ID (as shown by ip link show ovpn ) can change. Will the above iptables rule continue to work after the link disappears and appears again (with a different ID)? | Yes it will continue to work, because iptables doesn't use the interface's index but is doing a string comparison with the current interface's name when evaluating the -i / --in-interface parameter. Actually it appears to be always evaluated , even when the parameter is not provided, but the inlined function is quite optimized. By contrast, nftables (the current candidate successor to iptables ) offers two different expressions: iifname : the direct equivalent of -i , comparing the current name, and iif comparing the interface index, which would cause a problem in your use case. When iptables is translated into nftables (either using iptables-translate or iptables-nft for the newer iptables-over-kernel-nftables API), -i gets translated to iifname as expected for compatibility. | {
"source": [
"https://serverfault.com/questions/995130",
"https://serverfault.com",
"https://serverfault.com/users/450575/"
]
} |
997,614 | This question is about setting the correct value of ssl_prefer_server_ciphers while configuring nginx. According to a fairly typical config suggested by Mozilla, the value should be off (source: https://ssl-config.mozilla.org/#server=nginx&server-version=1.17.7&config=intermediate&openssl-version=1.0.1g ). According to nginx's own documentation, one should always set this to on : https://www.nginx.com/blog/nginx-https-101-ssl-basics-getting-started/ (search the document for ssl_prefer_server_ciphers ). I'm stumped as to which advice to follow. Both sources are pretty solid. Can some industry experts chime in regarding when one should turn this off , and when on ? Would also love to know the rationale. | When ssl_prefer_server_ciphers is set to on , the web server owner can control which ciphers are available. The reason why this control was preferred is old and insecure ciphers that were available in SSL, and TLS v1.0 and TLS v1.1. When the server supports old TLS versions and ssl_prefer_server_ciphers is off, an adversary can interfere with the handshake and force the connection to use weak ciphers, therefore allowing decrypting of the connection. The weak ciphersuites have been deprecated in TLS v1.2 and v1.3, which removes the need for server to specify preferred ciphers. The preferred setting in modern setups is ssl_prefer_server_ciphers off , because then the client device can choose his preferred encryption method based on the hardware capabilities of the client device. For example, if the mobile device does not have AES acceleration, it can choose to use ChaCha cipher for better performance. | {
"source": [
"https://serverfault.com/questions/997614",
"https://serverfault.com",
"https://serverfault.com/users/321109/"
]
} |
997,664 | In Poland, it is common for mobile ISPs to offer plans with limited amount of bandwidth per month, with exclusion of some popular apps. So for example all traffic from YouTube is not counted towards the data cap. Aside from net neutrality issues, I am wondering how is this achieved in the HTTPS age? How does the ISP know which packets to count towards the data cap? I know it could be done with just looking at the IP address, but YouTube has a ton of IPs, and I suspect they change all the time. Plus, I wouldn't be surprised if some of YouTube's IPs are shared with other Google services, which are not uncapped by the ISPs... | Firstly they know the YouTube IP address. ISP's have an IP database. For example YouTube's ASN is AS15169. On the server side they would make a grouping for each service.
One of them is the default grouping, and this is the billing group. When you make use of the default group, that usage is recorded in the system. For example, a few YouTube addresses are listed below. root@server ~>whois -h whois.radb.net -- '-i origin AS15169' | grep ^route
route: 192.179.147.0/24
route: 192.179.148.0/23
route: 192.179.148.0/24
route: 192.179.149.0/24
route: 192.179.150.0/23
route: 192.179.150.0/
...
route6: 2607:f8b0:4016::/48
route6: 2604:31C0::/32
route6: 2620:33:c000::/48
route6: 2607:f8b0:4000::/48
route6: 2404:f340::/32 When you are trying to reach YouTube or other YouTube services (Google video storage), your phone will try to reach these IP addresses. The ISP checks the IP address, and if it is inside the YouTube group, they don't apply charges for that group. Another option is checking the SNI header sent at the start of the HTTPS (TLS) connection.
When you make a connection to HTTPS sites, not all the data is encrypted. For example, when you make a search on Google, you can see the URL in your browser like this: https://www.google.com/search?q=hello+world . The encrypted part is /search?q=hello+world and all the page content. So you are reaching a site like www.google.com , but they don't know which page or the content inside of that page. Some ISPs use SNI for this. For example, in Turkey this method is used for making specific internet packages like 5GB internet + 4GB Spotify, or 7GB internet with unlimited WhatsApp. They also use SNI for banning websites. Some websites share the same IP addresses, like wikimedia.com or wikipedia.org . If they try to block Wikipedia by IP address, they block all Wikimedia services. | {
"source": [
"https://serverfault.com/questions/997664",
"https://serverfault.com",
"https://serverfault.com/users/291696/"
]
} |
997,788 | Most if not all server certificates that I work with expire before its issuer, but is it possible for a server certificate to expire after its issuer and does this apply to an intermediate certificate as well (expire after the root certificate)? If so, should a client trust a remote with a expired intermediate certificate while the server certificate hasn't? I've looked into Certification authority root certificate expiry and renewal , but I don't fully understand the answer. | According to the SSL FAQ : the validity (and thus level of trust) of a given certificate is determined by the corresponding validity of the higher-level certificate that signed it. So while it is technically possible to make a certificate which lasts longer than its issuer, it makes no sense, as the chain becomes broken the moment an intermediate (or the root) certificate becomes invalid (for whatever reason). No client should (and none does) trust such a chain. | {
"source": [
"https://serverfault.com/questions/997788",
"https://serverfault.com",
"https://serverfault.com/users/184749/"
]
} |
997,896 | Since I had a lot of trouble finding out how to do this anywhere, I'd like to ask, how do I enable the PowerTools repository in CentOS 8? (equivalent of CodeReady Linux Builder repo in RHEL 8) | You can enable it with the following commands: yum install dnf-plugins-core And then: yum config-manager --set-enabled powertools Or: yum config-manager --set-enabled PowerTools You can also just open /etc/yum.repos.d/CentOS-PowerTools.repo with a text editor and set enabled= to 1 instead of 0 '. Run yum repolist and you'll see it. EDIT: The repo is now powertools instead of PowerTools when enabling it with yum . There was a bug so the developers may set it back to what it was before which is why both are listed. The repo file still has the same name. | {
"source": [
"https://serverfault.com/questions/997896",
"https://serverfault.com",
"https://serverfault.com/users/504628/"
]
} |
997,935 | I use centos7 and /var/log/messages has not worked for a while although I restarted rsyslog.service . Its size was always 0. I then deleted the messages file, and restarted rsyslog again but it was not created automatically. I can not log anything currently. What should I do? | You can enable it with the following commands: yum install dnf-plugins-core And then: yum config-manager --set-enabled powertools Or: yum config-manager --set-enabled PowerTools You can also just open /etc/yum.repos.d/CentOS-PowerTools.repo with a text editor and set enabled= to 1 instead of 0 '. Run yum repolist and you'll see it. EDIT: The repo is now powertools instead of PowerTools when enabling it with yum . There was a bug so the developers may set it back to what it was before which is why both are listed. The repo file still has the same name. | {
"source": [
"https://serverfault.com/questions/997935",
"https://serverfault.com",
"https://serverfault.com/users/550448/"
]
} |
998,415 | Both T3 and T3a instances offer the same configuration, CPU credits and network performance. The only difference that I find is T3 uses Intel processor while T3a uses AMD processor, while both are running at 2.5 GHz. Is this the only reason for reduced cost of T3a instances? How does a customer make a choice between these two? | Per benchmarks at photographerstechsupport.com , t3 instances tend to be 10 to 20% faster than their equivalently spec'd t3a counterparts. I notice that the precision of some of the benchmark values is low enough (i.e. 0.06 vs 0.07, reported only to the nearest hundredth) that the actual difference could be anywhere between ~0 and 26%. However, based on multiple values it does seem the difference is about 15% overall. This is reasonably close to the price difference of ~9%, making either choice a good option, depending on the compute performance need. | {
"source": [
"https://serverfault.com/questions/998415",
"https://serverfault.com",
"https://serverfault.com/users/131820/"
]
} |
998,463 | I own a mailserver that serves a couple of domains for their emails needs. For these domains, I set up SPF, DKIM and DMARC for them so that they pass all the items in mailbox spam tests that I have control over. As a result, my domains score pretty well in sites like MXToolbox.com and Mail-Tester.com . Since I have the tools to set up these policies on my server, and some PHP code lying around that can blast out emails, I've been looking at offering EDM design and blasting services to some of my clients. Naturally, the first thing I looked at was MailChimp, since they are the first alternative that comes to mind for 99% of my clients. One thing that really blew me away was that they are able to offer blasts of up a million emails per month! That averages to 30,000 emails a day! I don't know much about the EDM industry, but I've heard that blasting out large numbers of emails will get your server blacklisted. That really scares me, as my mailserver is currently serving a couple of clients, and if I ruin my reputation with an email blast, my existing clients' emails will be affected too. If I had to blast out MailChimp numbers of emails, what are some of the things I need to take note of, so that I avoid getting my server blacklisted? | Per benchmarks at photographerstechsupport.com , t3 instances tend to be 10 to 20% faster than their equivalently spec'd t3a counterparts. I notice that the precision of some of the benchmark values is low enough (i.e. 0.06 vs 0.07, reported only to the nearest hundredth) that the actual difference could be anywhere between ~0 and 26%. However, based on multiple values it does seem the difference is about 15% overall. This is reasonably close to the price difference of ~9%, making either choice a good option, depending on the compute performance need. | {
"source": [
"https://serverfault.com/questions/998463",
"https://serverfault.com",
"https://serverfault.com/users/493918/"
]
} |
1,000,682 | While opening the query tool via pgAdmin, I am getting this error in a popup: could not send data to server: Socket is not connected could not send SSL negotiation packet: Socket is not connected Does anyone know why this is happening? | Try changing your host connection to "127.0.0.1" instead of "localhost"; it worked for me. | {
"source": [
"https://serverfault.com/questions/1000682",
"https://serverfault.com",
"https://serverfault.com/users/557492/"
]
} |
1,002,315 | Recently my backups have started failing, and I tracked the problem to the file /var/lib/fail2ban/fail2ban.sqlite3 . It is over 500mb. I am not sure whether it has been growing over time or if this is a recent development. How can I get it to a reasonable size and keep it that size? (For the purposes of this let's say under 500mb.) | There is a dbpurgeage parameter in fail2ban.conf , which tells how many days of data to keep in the database. The default is one day ( 1d ), so try to decrease it to a couple of hours: dbpurgeage = 8h This setting is coupled with findtime : it makes no sense to have a findtime longer than dbpurgeage . Edit (2021) : The note below was true at the time of writing. However, nowadays check out neingeist's answer instead: fail2ban 0.11.x, which is starting to become available in Linux distributions (e.g. Debian testing , Ubuntu 20.04 and later, Fedora 33 ), respects the dbpurgeage setting. Obsolete note : By looking at my own fail2ban database, the dbpurgeage setting does not seem to be working. Therefore the only solution is to delete the entries manually. For example, in order to delete last year's entries run: sqlite3 /var/lib/fail2ban/fail2ban.sqlite3 \
"DELETE FROM bans WHERE DATE(timeofban, 'unixepoch') < '2020-01-01'; VACUUM;" (the sqlite3 executable is usually in the homonymous package). There seems to be no way to perform a VACUUM of the database without sqlite performing a copy of the database in the same directory. However, you can copy the file to another filesystem before performing the operation and then copy back the smaller database. | {
"source": [
"https://serverfault.com/questions/1002315",
"https://serverfault.com",
"https://serverfault.com/users/223197/"
]
} |
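A minimal shell sketch to go with the fail2ban entry above, assuming fail2ban 0.10+/0.11 (where fail2ban.local overrides the [Definition] section of fail2ban.conf and time suffixes like 8h are accepted) and the default database path; the retention values and paths are placeholders to adapt.
cat > /etc/fail2ban/fail2ban.local <<'EOF'
[Definition]
# keep only 8 hours of ban history in the database
dbpurgeage = 8h
EOF
systemctl restart fail2ban
# check the database size, purge anything older than 7 days manually, then re-check
du -h /var/lib/fail2ban/fail2ban.sqlite3
sqlite3 /var/lib/fail2ban/fail2ban.sqlite3 \
  "DELETE FROM bans WHERE DATE(timeofban, 'unixepoch') < DATE('now', '-7 days'); VACUUM;"
du -h /var/lib/fail2ban/fail2ban.sqlite3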
1,003,112 | When I do a curl -v to some docker container that I created, I get: * Mark bundle as not supporting multiuse What does it mean? Where is it documented? | From https://github.com/curl/curl/blob/curl-7_82_0/lib/http.c#L4226 : if(conn->httpversion < 20) {
conn->bundle->multiuse = BUNDLE_NO_MULTIUSE;
infof(data, "Mark bundle as not supporting multiuse\n");
} It is a feature of HTTP/2. See, e.g., https://www.cloudflare.com/website-optimization/http2/what-is-http2/ | {
"source": [
"https://serverfault.com/questions/1003112",
"https://serverfault.com",
"https://serverfault.com/users/180457/"
]
} |
Hi guys,
I currently have only one public IP, and need to serve multiple third-party web applications running on different machines in a LAN environment.
How can I forward http/s requests based on server name?
I'm used to setting up Apache virtual hosts to serve multiple sites on the same server, but now I need to map the requests and forward them to LAN machines.
Does Apache provide some module to achieve this?
Does nginx?
Any idea would be welcome,
Regards.
Leandro. | From https://github.com/curl/curl/blob/curl-7_82_0/lib/http.c#L4226 : if(conn->httpversion < 20) {
conn->bundle->multiuse = BUNDLE_NO_MULTIUSE;
infof(data, "Mark bundle as not supporting multiuse\n");
} It is a feature of HTTP/2. See, e.g., https://www.cloudflare.com/website-optimization/http2/what-is-http2/ | {
"source": [
"https://serverfault.com/questions/1003118",
"https://serverfault.com",
"https://serverfault.com/users/549165/"
]
} |
1,003,171 | Use Case: We have several Eaton PDU/PSUs that don't support SSL/TLS authentication. I was tasked with building a SMTP relay server that can take the basic SMTP/25 emails and forward them to our email provider via SSL. Note: The relay host makes the smtps connectione on 465 using stunnel. I am at a point where my SMTP Postfix Relay Server is able to send mail successfully via our email provider, alimail. But I cannot get it to relay emails from other hosts on our network. /etc/postfix/main.cf smtpd_banner = mail01v-la ESMTP
inet_interfaces = all
inet_protocols = ipv4
mynetworks = 127.0.0.0/8, 10.96.80.0/24
relayhost = [127.0.0.1]:5000
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_CApath = /etc/ssl/certs
smtp_use_tls = no
smtp_generic_maps = regexp:/etc/postfix/generic /etc/postfix/sasl_passwd [127.0.0.1]:5000 [email protected]:notifypwd /etc/postfix/generic /^root@(.*)$/ [email protected] /etc/stunnel/stunnel.conf client = yes
foreground = no
[smtps]
accept = 5000
connect = smtp.mxhichina.com:smtps SMTP Telnet to Provider [root@mail01v-la ~]# telnet smtp.mxhichina.com smtp
Trying 205.204.101.152...
Connected to smtp.mxhichina.com.
Escape character is '^]'.
220 smtp.aliyun-inc.com MX AliMail Server
ehlo google.come
250-smtp.aliyun-inc.com
250-STARTTLS
250-8BITMIME
250-AUTH=PLAIN LOGIN XALIOAUTH
250-AUTH PLAIN LOGIN XALIOAUTH
250-PIPELINING
250 DSN Checking Stunnel Connection [root@mail01v-la ~]# telnet 127.0.0.1 5000
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
220 smtp.aliyun-inc.com MX AliMail Server Sending an Email from the Relay Server echo "Stack Body" | mail -s "Test Subject for Stack" [email protected] Results Feb 14 18:30:29 mail01v-la postfix/pickup[4812]: 3194940DE2: uid=0 from=<root>
Feb 14 18:30:29 mail01v-la postfix/cleanup[4865]: 3194940DE2: message-id=<[email protected]>
Feb 14 18:30:29 mail01v-la postfix/qmgr[2606]: 3194940DE2: from=<[email protected]>, size=481, nrcpt=1 (queue active)
Feb 14 18:30:30 mail01v-la postfix/smtp[4867]: 3194940DE2: to=<[email protected]>, relay=127.0.0.1[127.0.0.1]:5000, delay=1.3, delays=0.01/0.01/0.85/0.46, dsn=2.0.0, status=sent (250 Data Ok: queued as freedom)
Feb 14 18:30:30 mail01v-la postfix/qmgr[2606]: 3194940DE2: removed Email Testing with other hosts Random CentOS Server /etc/postfix/main.cf
relayhost = [10.96.80.126]:5000 Result Feb 14 18:06:52 test01v-la postfix/pickup[1247]: BB87C305A42F: uid=0 from=<root>
Feb 14 18:06:52 test01v-la postfix/cleanup[1387]: BB87C305A42F: message-id=<[email protected]>
Feb 14 18:06:52 test01v-la postfix/qmgr[1248]: BB87C305A42F: from=<[email protected]>, size=477, nrcpt=1 (queue active)
Feb 14 18:06:53 test01v-la postfix/smtp[1389]: BB87C305A42F: to=<[email protected]>, relay=10.96.80.126[10.96.80.126]:5000, delay=0.78, delays=0.01/0.01/0.61/0.15, dsn=5.0.0, status=bounced (host 10.96.80.126[10.96.80.126] said: 553 authentication is required (in reply to MAIL FROM command)) Eaton PSU config creds Eaton Result email[17131]: message error -110 in function smtp_start_session test - (Connection timed out) retrying smtp_start_session test email[17131]: Failed to connect to SMTP server 10.96.80.126:5000 with username [email protected] __ This is my first time doing a setup like this. Theres likely a lot of holes in my knowledge that are causing me grief. In a proper setup, do you even need to re-type credentials for any hosts that want to use the relay server? For example, in the eaton smtp config, should it be the creds of the email used in the sasl file? Or a system account permitted for forwarding with postfix? Or an account name defined in the postfix/generic file? A bit lost. Is stunnel even the proper way I should be connecting via ssl/tls? I see starttls available in the telnet prompt for smtp.mxhichina.com. Honestly, I think I'm overcomplicating this or am missing something obvious. If anyone has a better setup to accomodate my use case, it be greatly appreciated as well. Switching SSL connection from Stunnel to Postfix only Results Feb 20 11:27:22 mail01v-la postfix/qmgr[1537]: 6B38AE5EE: from=<[email protected]>, size=479, nrcpt=1 (queue active)
Feb 20 11:27:22 mail01v-la postfix/smtp[1558]: CLIENT wrappermode (port smtps/465) is unimplemented
Feb 20 11:27:22 mail01v-la postfix/smtp[1558]: instead, send to (port submission/587) with STARTTLS
Feb 20 11:27:40 mail01v-la postfix/smtp[1558]: 6B38AE5EE: to=<[email protected]>, relay=smtp.mxhichina.com[205.204.101.152]:465, delay=613, delays=595/0.02/19/0, dsn=4.4.2, status=deferred (lost connection with smtp.mxhichina.com[205.204.101.152] while receiving the initial server greeting)
Feb 20 11:32:22 mail01v-la postfix/qmgr[1537]: A3F736B2: from=<[email protected]>, size=477, nrcpt=1 (queue active) main.cf inet_interfaces = all
inet_protocols = ipv4
mynetworks = 127.0.0.0/8, 10.96.80.0/24
relayhost = [smtp.mxhichina.com]:465
smtp_use_tls = yes
smtp_enforce_tls = yes
smtp_tls_wrappermode = yes
soft_bounce = yes
smtp_sasl_auth_soft_bounce = yes /etc/postfix/sasl_passwd smtp.mxhichina.com [email protected]:notifypwd | From https://github.com/curl/curl/blob/curl-7_82_0/lib/http.c#L4226 : if(conn->httpversion < 20) {
conn->bundle->multiuse = BUNDLE_NO_MULTIUSE;
infof(data, "Mark bundle as not supporting multiuse\n");
} It is a feature of HTTP/2. See, e.g., https://www.cloudflare.com/website-optimization/http2/what-is-http2/ | {
"source": [
"https://serverfault.com/questions/1003171",
"https://serverfault.com",
"https://serverfault.com/users/560159/"
]
} |
1,003,361 | I have deployed my application onto AWS EC2 and I want to implement automation where if I restart my instance or when the Nginx web server is down, it will restart by itself. I do not really know where to start with this. I heard I can use crontab to schedule automatic monitoring and if it is down, it can send email alerts and restart the webserver. | Automatic restarting is a feature of systemd. Override the existing unit file for NGINX by running systemctl edit nginx , then paste in: [Service]
Restart=always Save. If NGINX is down due to, e.g. the OOM killer, it will be restarted after dying.
If you have a configuration error in NGINX, it will not be restarted, of course. To verify this configuration, start the NGINX service with systemctl start nginx , and verify it is running with systemctl status nginx . Kill it via pkill -f nginx . Confirm that NGINX is running anyway with systemctl status nginx . | {
"source": [
"https://serverfault.com/questions/1003361",
"https://serverfault.com",
"https://serverfault.com/users/560394/"
]
} |
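To make the systemd answer above concrete, here is one possible sequence on a systemd-based distro; the RestartSec line is an optional extra I added to space out restart attempts, not something the original answer requires.
sudo systemctl edit nginx     # opens an override file; paste the lines shown as comments below
# [Service]
# Restart=always
# RestartSec=5s
sudo systemctl daemon-reload
sudo systemctl start nginx
sudo systemctl status nginx   # should report active (running)
sudo pkill -f nginx           # simulate a crash
sleep 6
sudo systemctl status nginx   # should be running again thanks to Restart=always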
1,004,549 | We have a bucket with more than 500,000 objects in it. I'm assigned a job where I have to delete files which have a specific prefix. There are around 300,000 files with the given prefix in the bucket. For example, if there are 3 files abc_1file.txt
abc_2file.txt
abc_1newfile.txt I have to delete the files with the abc_1 prefix only.
I didn't find much in the AWS documentation related to this. Any suggestions on how I can automate this? | You can use the aws s3 rm command with the --include and --exclude parameters to specify a pattern for the files you'd like to delete. So in your case, the command would be: aws s3 rm s3://bucket/ --recursive --exclude "*" --include "abc_1*" which will delete all files that match the "abc_1*" pattern in the bucket. The behavior of these parameters is documented here . These instructions assume you have downloaded, installed and configured the AWS CLI tools. | {
"source": [
"https://serverfault.com/questions/1004549",
"https://serverfault.com",
"https://serverfault.com/users/510626/"
]
} |
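Since the include/exclude filters in the entry above delete in bulk, it may be worth previewing the match first; the bucket name and prefix below are placeholders.
# preview which keys would be removed; --dryrun performs no deletion
aws s3 rm s3://my-bucket/ --recursive --exclude "*" --include "abc_1*" --dryrun
# run the real deletion once the preview looks right
aws s3 rm s3://my-bucket/ --recursive --exclude "*" --include "abc_1*"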
1,005,235 | We are acquiring a new Lenovo SR650 server (which will be hosting multiple Oracle DB servers and SAP) and the following storage options have been proposed by the vendors: ThinkSystem 2.5" 1.2TB 10K SAS 12Gb Hot Swap 512n HDD (QTY: 16 disks) ThinkSystem 2.5" 5210 960GB Entry SATA 6Gb Hot Swap QLC SSD (QTY: 16 disks) ThinkSystem 2.5" 5210 1.92TB Entry SATA 6Gb Hot Swap QLC SSD (QTY: 16 disks) We read somewhere that upon sudden power failure, there is a higher chance of total data corruption compared to SAS disks. Which storage option from the above is more suitable from a performance and reliability perspective? We have redundant UPS, along with a dedicated online generator for the data center. Initially we will be hosting 2 SAP servers (Production & Development). Both are virtualized. Each VM's space usage is around 3 TB. In the past, our experience with RAID 5 was not good, and we are using RAID 10 in all of our servers; since moving to RAID 10, we have not encountered any failure in the past few years. Is it a good idea to break the 16 disks into TWO RAID 10 arrays? PRD on the 1st array and DEV on the 2nd array, so that whatever operation (data copy, backup etc.) is in progress, it should not affect the second array? | QLC SSDs are absolutely inadequate for write-heavy workloads such as databases and SAP. I strongly suggest you buy enterprise-grade TLC disks, such as Samsung PM/SM863 and Intel S4510/S4610. I would not go the SAS 10k route unless the SSD system costs too much for your budget. Finally, I would keep all disks in the same RAID 10 array so that production workloads can benefit from the IOPS of all 16 disks. | {
"source": [
"https://serverfault.com/questions/1005235",
"https://serverfault.com",
"https://serverfault.com/users/413319/"
]
} |
1,005,612 | I have a server that's primarily running a Ruby script. Because Ruby (2.7) has a GIL, it is single threaded. My computer (server) has an Intel i3 dual core processor, but due to hyperthreading I see 4 cores. Ruby only utilizes 25% CPU under heavy load. I wanted to see if disabling hyperthreading benefits a programming language that runs on a single thread. Also, my server is running a very minimal desktop environment and it doesn't use more than 2% CPU. So I wanted to make most of the resources available to Ruby. I did a benchmark to see if I really get any performance boost by disabling hyperthreading. Benchmark: I wrote a simple Ruby script that runs a while loop and adds the value of the loop counter to another variable. This program should use 100% of a CPU core: #!/usr/bin/env ruby
$-v = true
LOOPS = ENV['N'].to_i.then { |x| x < 1 ? 100_000_000 : x } + 1
i, j, t = 0, 0, Time.now
puts "Counting till #{LOOPS - 1} and adding values to V..."
while (i += 1) < LOOPS
if i % 10000 == 0
e = Time.now - t
r = LOOPS.*(e)./(i).-(e).round(2)
print "\e[2KN: #{i} | Done: #{i.*(100) / LOOPS}% | Elapsed: #{e.round(2)}s | Estimated Rem: #{r}s\r"
end
j += i
end
puts "\nV = #{j}\nTime: #{(Time.now).-(t).round(2)}s" With Hyperthreading: ⮚ ruby p.rb
Counting till 100000000 and adding values to V...
N: 100000000 | Done: 99% | Elapsed: 4.55s | Estimated Rem: 0.0s
V = 5000000050000000
Time: 4.55s
⮚ ruby p.rb
Counting till 100000000 and adding values to V...
N: 100000000 | Done: 99% | Elapsed: 4.54s | Estimated Rem: 0.0s
V = 5000000050000000
Time: 4.54s
⮚ ruby p.rb
Counting till 100000000 and adding values to V...
N: 100000000 | Done: 99% | Elapsed: 4.67s | Estimated Rem: 0.0s
V = 5000000050000000
Time: 4.67s gnome-system-monitor reported 25% CPU usage by Ruby while the test was running. Without Hyperthreading: [ # echo 0 | tee /sys/devices/system/cpu/cpu{2,3}/online used to disable hyperthreads ] ⮚ ruby p.rb
Counting till 100000000 and adding values to V...
N: 100000000 | Done: 99% | Elapsed: 4.72s | Estimated Rem: 0.0s
V = 5000000050000000
Time: 4.72s
⮚ ruby p.rb
Counting till 100000000 and adding values to V...
N: 100000000 | Done: 99% | Elapsed: 4.54s | Estimated Rem: 0.0s
V = 5000000050000000
Time: 4.54s
⮚ ruby p.rb
Counting till 100000000 and adding values to V...
N: 100000000 | Done: 99% | Elapsed: 4.56s | Estimated Rem: 0.0s
V = 5000000050000000
Time: 4.56s gnome-system-monitor reported 50% CPU usage by Ruby while the test was running. I have even run the test on my laptop, which takes around twice the time it took on my computer. But the result is identical: disabling hyperthreading doesn't help the process do better. And even worse, my laptop gets a bit slower when multitasking. So in the non-hyperthreading mode, Ruby used 2x the CPU power compared to the hyperthreaded mode. But why did it still take the same amount of time to complete the same task? | Your Ruby program did not use 2x the CPU time when running with HT disabled. Rather, as it maximizes one core out of two total cores, gnome-system-monitor will report the utilization as 50%. If, due to HT, the system reports four total cores, one core out of four would be 25%. Disabling HT did cause more variation in your results because fewer resources were available: recent Intel (or AMD) cores are quite wide, so additional threads are often useful to extract 10-20% more aggregate performance. If some background process was automatically executed during the test runs, the system without HT is prone to more variance and lower total throughput. | {
"source": [
"https://serverfault.com/questions/1005612",
"https://serverfault.com",
"https://serverfault.com/users/542534/"
]
} |
1,006,700 | I'm learning more about DNS systems and right now I'm studying a very nice project written in Go and I noticed in the code that it queries for some DNS records towards nowhere/?name=probe-test.dns.nextdns.io Initially I thought that this can't be right, it looks more like an invalid url rather than a domain name but I fired up a console and hit dig nowhere/?name=probe-test.dns.nextdns.io and it returned an A record. $ dig nowhere/?name=probe-test.dns.nextdns.io
; <<>> DiG 9.11.3-1ubuntu1.11-Ubuntu <<>> nowhere/?name=probe-test.dns.nextdns.io
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 53429
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1452
;; QUESTION SECTION:
;nowhere/?name=probe-test.dns.nextdns.io. IN A
;; ANSWER SECTION:
nowhere/?name=probe-test.dns.nextdns.io. 300 IN A 45.90.28.0
;; Query time: 26 msec
;; SERVER: 1.1.1.1#53(1.1.1.1)
;; WHEN: Thu Mar 12 18:55:41 EET 2020
;; MSG SIZE rcvd: 84 Can someone explain to me how this is a valid entry? | It's a wildcard DNS entry . The dns.nextdns.io zone is configured to return 45.90.28.0 as a response to a query for anything in front of dns.nextdns.io . Here are some examples, looking for foo and bar . And for good measure, they might already have those two in their zone because they are commonly used for testing and demos, so let's look for something that they won't have anticipated -- NoWayThisExists . C:\Users\me>nslookup foo.dns.nextdns.io 8.8.8.8
Server: dns.google
Address: 8.8.8.8
Non-authoritative answer:
Name: foo.dns.nextdns.io
Address: 45.90.28.0
C:\Users\me>nslookup bar.dns.nextdns.io 8.8.8.8
Server: dns.google
Address: 8.8.8.8
Non-authoritative answer:
Name: bar.dns.nextdns.io
Address: 45.90.28.0
C:\Users\me>nslookup NoWayThisExists.dns.nextdns.io 8.8.8.8
Server: dns.google
Address: 8.8.8.8
Non-authoritative answer:
Name: NoWayThisExists.dns.nextdns.io
Address: 45.90.28.0 So as long as the stuff you put at the beginning doesn't have any invalid characters that would trip up the DNS resolver, you'll get 45.90.28.0 as a response. And in your case, none of the characters in nowhere/?name=probe-test cause a problem. In fact, you can look up a name consisting of nothing but the symbols / , ? , = , and - . C:\Users\me>nslookup ///???===---.dns.nextdns.io 8.8.8.8
Server: dns.google
Address: 8.8.8.8
Non-authoritative answer:
Name: ///???===---.dns.nextdns.io
Address: 45.90.28.0 | {
"source": [
"https://serverfault.com/questions/1006700",
"https://serverfault.com",
"https://serverfault.com/users/115721/"
]
} |
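For the wildcard entry above, this is roughly what the zone data and a quick check could look like; the zone line is illustrative BIND-style syntax, not nextdns.io's actual configuration.
# illustrative zone file excerpt: the wildcard answers for any name under the zone
# that has no more specific record of its own
#   *.dns.nextdns.io.  300  IN  A  45.90.28.0
# any made-up label then resolves to that address
dig +short anything-at-all.dns.nextdns.io A @8.8.8.8
dig +short 'nowhere/?name=probe-test.dns.nextdns.io' A @8.8.8.8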
1,008,522 | When I visit a website that has Cloudflare, using the website's IP address, I get this message: Error 1003
Direct IP access not allowed
What happened?
You've requested an IP address that is part of the Cloudflare network.
A valid Host header must be supplied to reach the desired website. I am a student. What allows Cloudflare to block direct IP address access? Isn't DNS a layer above the IP address? If so, since Cloudflare is a DNS service, why does Cloudflare have the capacity to block IP addresses? | There's nothing special in the Cloudflare setup. This is just a property of HTTP. When a client opens a URL, there are three important steps: If required, it makes a DNS lookup (or uses another resolution method) to turn a hostname into an IP address. If the URL specifies an IP address for the host, use that. It makes a connection to that IP address on a well-known port number, normally 80 (unless it's overridden in the URL) It asks the server for the page, including the desired hostname . A classical example looks like this: GET /pub/WWW/TheProject.html HTTP/1.1
Host: www.w3.org Consider a large host with many web sites on it. For simplicity let's say it has a single IP address. Hundreds of domain names resolve to this address. How does the server decide which pages to deliver? It uses the host detail given by the client in the HTTP request. If you ask for something it doesn't have or want to give you, it will give you an error response. In your case, the request contains an IP address for the host specifier. GET /whatever HTTP/1.1
Host: a.b.c.d Very many hosts decide not to give out pages when the host is specified by IP address. There's nothing special about Cloudflare here, nor is it to do with DNS. It's about how the server responds to requests for the host specified by IP address, and you can see that this error message specifies that A valid Host header must be supplied . Here's an answer which describes how to configure a server in this way: https://serverfault.com/a/607222 You can easily verify this kind of behaviour by using telnet to connect to a server and issue the HTTP request manually. PS. The same general answer applies to an HTTP S request, but using Server Name Indication in the setup. It's worth noting that Host came in with HTTP 1.1 (1997). Prior to that, the mechanism described here didn't exist, and a server had no way to reliably tell if the client had asked for a name which legitimately resolved to its IP address, or had asked for the host by IP address directly. As this was an important development for the explosive growth in web sites, many older clients were updated to send Host . [Thanks commenters for picking up on details.] | {
"source": [
"https://serverfault.com/questions/1008522",
"https://serverfault.com",
"https://serverfault.com/users/365777/"
]
} |
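A quick way to observe the Host-header behaviour described above with curl; the IP address is a placeholder for whatever the site's name resolves to.
# bare IP in the URL: the Host header is just the IP, so Cloudflare answers with error 1003
curl -sv http://203.0.113.10/ -o /dev/null
# same IP, but with a hostname supplied: the site is served normally
curl -sv -H 'Host: www.example.com' http://203.0.113.10/ -o /dev/null
# for HTTPS, --resolve pins the IP while keeping SNI and the Host header correct
curl -sv --resolve www.example.com:443:203.0.113.10 https://www.example.com/ -o /dev/null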
1,009,961 | I'm trying to install WordPress + LEMP on my Ubuntu 18.04. I have no interest in installing Apache. Why does the PHP installer assume I do? | According to this answer on AskUbuntu: How to install php without Apache webserver? : Ubuntu package details says php ( php7.2 ) depends on libapache2-mod-php7.2 OR php7.2-fpm OR php7.2-cgi . It seems to default to the first package, which itself depends on apache2 . But if you install one of the latter first, and php afterwards, apache2 will not be installed. If you're using nginx, you probably want: sudo apt install php php7.2-fpm | {
"source": [
"https://serverfault.com/questions/1009961",
"https://serverfault.com",
"https://serverfault.com/users/76315/"
]
} |
1,014,754 | Steps Launch PowerShell 7 on Windows 10. Actual result PowerShell 7.0.0
Copyright (c) Microsoft Corporation. All rights reserved.
https://aka.ms/powershell
Type 'help' to get help.
Warning: PowerShell detected that you might be using a screen reader and has disabled PSReadLine for compatibility purposes. If you want to re-enable it, run 'Import-Module PSReadLine'. Expected result No warning is displayed when PowerShell starts, since I am not using a screen reader. Workaround Run the specified command Import-Module PSReadLine . I haven't run this since I first want to understand why the warning is here. $PSVersionTable output: Name Value
---- -----
PSVersion 7.0.0
PSEdition Core
GitCommitId 7.0.0
OS Microsoft Windows 10.0.18362
Platform Win32NT
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0…}
PSRemotingProtocolVersion 2.3
SerializationVersion 1.1.0.1
WSManStackVersion 3.0 Additional info I have Visual Studio 2017, 2019 installed | Set the following registry key: Windows Registry
Computer\HKEY_CURRENT_USER\Control Panel\Accessibility\Blind Access\On to value 0 and reboot. I discovered this alternative solution via the issue mentioned by @Znatz. Source | {
"source": [
"https://serverfault.com/questions/1014754",
"https://serverfault.com",
"https://serverfault.com/users/56929/"
]
} |
1,015,547 | I set up an SSH server online that is publicly accessible by anyone. Therefore, I get a lot of connections from IPs all over the world. Weirdly, none actually try to authenticate to open a session.
I can connect and authenticate myself without any problem. From time to time, I get the error: kex_exchange_identification: Connection closed by remote host in the server logs. What causes that? Here is 30 minutes of SSH logs (public IPs have been redacted): # journalctl SYSLOG_IDENTIFIER=sshd -S "03:30:00" -U "04:00:00"
-- Logs begin at Fri 2020-01-31 09:26:25 UTC, end at Mon 2020-04-20 08:01:15 UTC. --
Apr 20 03:39:48 myhostname sshd[18438]: Connection from x.x.x.207 port 39332 on 10.0.0.11 port 22 rdomain ""
Apr 20 03:39:48 myhostname sshd[18439]: Connection from x.x.x.207 port 39334 on 10.0.0.11 port 22 rdomain ""
Apr 20 03:39:48 myhostname sshd[18438]: Connection closed by x.x.x.207 port 39332 [preauth]
Apr 20 03:39:48 myhostname sshd[18439]: Connection closed by x.x.x.207 port 39334 [preauth]
Apr 20 03:59:36 myhostname sshd[22186]: Connection from x.x.x.83 port 34876 on 10.0.0.11 port 22 rdomain ""
Apr 20 03:59:36 myhostname sshd[22186]: error: kex_exchange_identification: Connection closed by remote host And here is my SSH configuration: # ssh -V
OpenSSH_8.2p1, OpenSSL 1.1.1d 10 Sep 2019
# cat /etc/ssh/sshd_config
UsePAM yes
AddressFamily any
Port 22
X11Forwarding no
PermitRootLogin prohibit-password
GatewayPorts no
PasswordAuthentication no
ChallengeResponseAuthentication no
PrintMotd no # handled by pam_motd
AuthorizedKeysFile .ssh/authorized_keys .ssh/authorized_keys2 /etc/ssh/authorized_keys.d/%u
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ed25519_key
KexAlgorithms [email protected],diffie-hellman-group-exchange-sha256
Ciphers [email protected],[email protected],[email protected],aes256-ctr,aes192-ctr,aes128-ctr
MACs [email protected],[email protected],[email protected],hmac-sha2-512,hmac-sha2-256,[email protected]
LogLevel VERBOSE
UseDNS no
AllowUsers root
AuthenticationMethods publickey
MaxStartups 3:100:60 After searching the web, I have seen references to MaxStartups indicating that it could be the reason for this error, but after changing the default value as shown in my sshd_config and attempting more than 3 connections, the server unambiguously indicates the problem Apr 20 07:26:59 myhostname sshd[31468]: drop connection #3 from [x.x.x.226]:54986 on [10.0.0.11]:22 past MaxStartups So, what causes error: kex_exchange_identification: Connection closed by remote host ? | Weirdly, none actually try to authenticate to open a session. Some spiders and services like Shodan scan public IPv4 addresses for open services, e.g. salt masters, ftp servers, RDPs, and also SSH services. These spiders usually only connect to the services without doing any valid authentication steps. I get the error: kex_exchange_identification : Connection closed by remote host in the server logs. What causes that? I haven't found conclusive answers about that, so... time to browse the source then. In the OpenSSH source code, kex_exchange_identification is a function to exchange server and client identification (duh) , and the specified error happens if the socket connection between the OpenSSH server and client is interrupted ( see EPIPE ), i.e. the client has already closed its connection. | {
"source": [
"https://serverfault.com/questions/1015547",
"https://serverfault.com",
"https://serverfault.com/users/567409/"
]
} |
1,017,443 | I have recently looked into advanced filesystems (Btrfs, ZFS) for data redundancy and availability and got interested in the additional functionality they provide, especially their "self-healing" capabilities against data corruption. However, I think I need to take a step back and try to understand if this benefit outweighs their disadvantages (Btrfs bugs and unresolved issues & ZFS availability and performance impact) for general home/SMB-usage, compared to a conventional mdadm-Raid1 + Ext4 solution. A mirrored backup is available either way. Let's assume I have a couple of file servers which are used for archival purposes and have limited resources, but ECC memory and a stable power source. How likely am I to even encounter actual data corruption making files unreadable? How? Can Ext4 or the system file manager already detect data errors on copy/move operations, making me at least aware of a problem? What happens if one of the mdadm-Raid1 drives holds different data due to one drive having bad sectors? Will I still be able to retrieve the correct file or will the array be unable to decide which file is the correct one and lose it entirely? | Yes, a functional checksummed filesystem is a very good thing. However, the real motivation is not to be found in the mythical "bitrot" which, while it does happen, is very rare. Rather, the main advantage is that such a filesystem provides an end-to-end data checksum , actively protecting you from erroneous disk behavior such as misdirected writes and data corruption related to the disk's own private DRAM cache failing and/or misbehaving due to power supply problems. I experienced that issue first hand, when a Linux RAID 1 array went bad due to a power supply issue. The cache of one disk started corrupting data and the ECC embedded in the disk sectors themselves did not catch anything, simply because the written data were already corrupted and the ECC was calculated on the corrupted data themselves. Thanks to its checksummed journal, which detected something strange and suspended the filesystem, XFS limited the damage; however, some files/directories were irremediably corrupted. As this was a backup machine facing no immediate downtime pressure, I rebuilt it with ZFS. When the problem re-occurred, during the first scrub ZFS corrected the affected block by reading the good copies from the other disks. Result: no data loss and no downtime. These are two very good reasons to use a checksumming filesystem. It's worth noting that data checksumming is so valuable that a device mapper target to provide it (by emulating the T-10 DIF/DIX specs), called dm-integrity , was developed precisely to extend this protection to classical block devices (especially redundant ones such as RAID1/5/6). By virtue of the Stratis project , it is going to be integrated into a comprehensive management CLI/API. However, you have a point that any potential advantage brought by such filesystems should be compared to the disadvantages they inherit. ZFS's main problem is that it is not mainlined into the standard kernel, but otherwise it is very fast and stable. On the other hand, BTRFS, while mainlined, has many important issues and performance problems (the common suggestion for databases or VMs is to disable CoW which, in turn, disables checksumming - which is, frankly, not an acceptable answer). Rather than using BTRFS, I would use XFS and hope for the best, or use dm-integrity protected devices. | {
"source": [
"https://serverfault.com/questions/1017443",
"https://serverfault.com",
"https://serverfault.com/users/500388/"
]
} |
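As a small addendum to the entry above, this is roughly how ZFS's self-healing is exercised and checked; the pool name tank is a placeholder.
# re-read every block, verify checksums, and repair damaged copies from redundancy
zpool scrub tank
# inspect the result: the CKSUM column and the scan line report what was found and repaired
zpool status -v tank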
1,030,288 | Can NFS be reasonably used on production servers as a means of connecting a compute server to a storage server, assuming the connection is over a LAN 1Gbe or 10Gbe connection? There's obviously some network overhead and NFS seems particularly slower with writes if you have sync mode enabled. Otherwise it seems reasonably lightweight and able to scale from what I can tell, but I have little experience with it personally. Am I wrong? The problem is I have a server right now that acts as both the storage and web server but I'm going to end up needing to split the two likely in the future, and considering some requests need to pass through the web application layer for authentication before initializing the file transfer, it gets kind of tricky with this software. A network fs mount is the simplest option I just.. don't know if that's a good one. I also plan to try and utilize local caching with NFS which should improve performance a good bit, but I'm not sure if that's enough. As far as alternatives, there's only iSCSI that I'm aware of as a real competitor, and most people seem to recommend NFS over any of the other lesser known ones. | NFS is fine, barring some specific other criteria are met, namely: The systems involved are both able to use NFS natively. Windows doesn't count here, it kind of works, but it's got a lot of quirks and is often a pain to work with when dealing with NFS in a cross-platform environment (and if it's just Windows, use SMB3, it eliminates most of the other issues with NFS). Note that on the client side, this means kernel-level support, because a user-level implementation either has to deal with the efficiency issues inherent in using something like FUSE, or it has to be linked directly into the application that needs to access the share. You've properly verified how the NFS client handles an NFS server restart. This includes both the OS itself (which should be fine in most cases), and the software that will be accessing the share. In particular, special care is needed on some client platforms when the software using the share holds files open for extended periods of time, as not all NFS client implementations gracefully handle server restarts by explicitly remounting and revalidating locks and file handles like they should (which leads to all kinds of issues for the client software). Note that you should recheck this any time any part of the stack is either upgraded or reconfigured. You're willing to set up proper user/group ID mapping. This is big, because without it you either need to mirror the UID/GID mappings between the systems (doable, but I'd be wary of setting up SSO against an internal network for an internet facing system) or you end up with potentially serious security implications (namely, what you see on one system for permissions does not match what you see on others). You're operating over a secured network link, or are willing to properly set up authentication for the share. Without auth, anybody on the link can access it (and a malicious client can easily side-step the basic UNIX discretionary access controls). Assuming you meet all those criteria, and you have a reasonably fast network, you should be fine. Also, if you can run jumbo frames, do so, they help a lot for any network filesystem or networked block storage. | {
"source": [
"https://serverfault.com/questions/1030288",
"https://serverfault.com",
"https://serverfault.com/users/588055/"
]
} |
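A minimal server/client sketch for the NFS entry above on Linux; the path, network range, hostname and mount options are placeholders to adapt to your own security and consistency needs.
# on the storage server: export a directory to the web/compute hosts
echo '/srv/data 10.0.0.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra
# on the web/compute server: mount it (or add the equivalent fstab entry)
mkdir -p /mnt/data
mount -t nfs -o vers=4.2,noatime storage01:/srv/data /mnt/data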
1,030,366 | I've been trying to add global use of aliases on my Debian 10 instance with no luck. What I've already attempted is adding my aliases to /etc/bash.bashrc as well as adding this snippet to /etc/profile to source it without it working. if [ -f /etc/bash.bashrc ]; then . /etc/bash.bashrc fi In my bash.bashrc: #Aliases
alias l='ls -la'
alias ll='ls -l'
alias la='ls -a' EDIT* How do I create an alias for "ls." For example, ls='ls -CF'. As when I use it as an alias it doesn't work? | NFS is fine, barring some specific other criteria are met, namely: The systems involved are both able to use NFS natively. Windows doesn't count here, it kind of works, but it's got a lot of quirks and is often a pain to work with when dealing with NFS in a cross-platform environment (and if it's just Windows, use SMB3, it eliminates most of the other issues with NFS). Note that on the client side, this means kernel-level support, because a user-level implementation either has to deal with the efficiency issues inherent in using something like FUSE, or it has to be linked directly into the application that needs to access the share. You've properly verified how the NFS client handles an NFS server restart. This includes both the OS itself (which should be fine in most cases), and the software that will be accessing the share. In particular, special care is needed on some client platforms when the software using the share holds files open for extended periods of time, as not all NFS client implementations gracefully handle server restarts by explicitly remounting and revalidating locks and file handles like they should (which leads to all kinds of issues for the client software). Note that you should recheck this any time any part of the stack is either upgraded or reconfigured. You're willing to set up proper user/group ID mapping. This is big, because without it you either need to mirror the UID/GID mappings between the systems (doable, but I'd be wary of setting up SSO against an internal network for an internet facing system) or you end up with potentially serious security implications (namely, what you see on one system for permissions does not match what you see on others). You're operating over a secured network link, or are willing to properly set up authentication for the share. Without auth, anybody on the link can access it (and a malicious client can easily side-step the basic UNIX discretionary access controls). Assuming you meet all those criteria, and you have a reasonably fast network, you should be fine. Also, if you can run jumbo frames, do so, they help a lot for any network filesystem or networked block storage. | {
"source": [
"https://serverfault.com/questions/1030366",
"https://serverfault.com",
"https://serverfault.com/users/470531/"
]
} |
1,030,387 | Two policies; the bucket policy has a "Deny", so I should not be able to do any operations on the bucket,
but I can still list and view bucket objects. Why? Thanks. S3 bucket policy {
"Sid": "S3DenyAccess",
"Effect": "Deny",
"Principal": "*",
"Action": "*",
"Resource": "arn:aws:s3:::<YOURBUCKETHERE>/*"
} IAM user policy {
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowConsoleAccess",
"Action": [
"s3:ListAllMyBuckets",
"s3:GetBucketLocation"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::*"
]
}
]
} | NFS is fine, barring some specific other criteria are met, namely: The systems involved are both able to use NFS natively. Windows doesn't count here, it kind of works, but it's got a lot of quirks and is often a pain to work with when dealing with NFS in a cross-platform environment (and if it's just Windows, use SMB3, it eliminates most of the other issues with NFS). Note that on the client side, this means kernel-level support, because a user-level implementation either has to deal with the efficiency issues inherent in using something like FUSE, or it has to be linked directly into the application that needs to access the share. You've properly verified how the NFS client handles an NFS server restart. This includes both the OS itself (which should be fine in most cases), and the software that will be accessing the share. In particular, special care is needed on some client platforms when the software using the share holds files open for extended periods of time, as not all NFS client implementations gracefully handle server restarts by explicitly remounting and revalidating locks and file handles like they should (which leads to all kinds of issues for the client software). Note that you should recheck this any time any part of the stack is either upgraded or reconfigured. You're willing to set up proper user/group ID mapping. This is big, because without it you either need to mirror the UID/GID mappings between the systems (doable, but I'd be wary of setting up SSO against an internal network for an internet facing system) or you end up with potentially serious security implications (namely, what you see on one system for permissions does not match what you see on others). You're operating over a secured network link, or are willing to properly set up authentication for the share. Without auth, anybody on the link can access it (and a malicious client can easily side-step the basic UNIX discretionary access controls). Assuming you meet all those criteria, and you have a reasonably fast network, you should be fine. Also, if you can run jumbo frames, do so, they help a lot for any network filesystem or networked block storage. | {
"source": [
"https://serverfault.com/questions/1030387",
"https://serverfault.com",
"https://serverfault.com/users/588122/"
]
} |
1,030,406 | I am using a MEMCM Task Sequence to build servers running Windows Server 2019. So far, I have built 22 servers with this OS. At the end of OSD, on 20 of them I have only 10 cipher suites available for use.
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P384
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P256
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P384
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P256
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P384
TLS_RSA_WITH_AES_128_CBC_SHA256
TLS_RSA_WITH_AES_128_CBC_SHA
TLS_RSA_WITH_AES_256_CBC_SHA256
TLS_RSA_WITH_AES_256_CBC_SHA On the two servers with more cipher suites, I have the 31 following cipher suites available. TLS_AES_256_GCM_SHA384
TLS_AES_128_GCM_SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
TLS_RSA_WITH_AES_256_GCM_SHA384
TLS_RSA_WITH_AES_128_GCM_SHA256
TLS_RSA_WITH_AES_256_CBC_SHA256
TLS_RSA_WITH_AES_128_CBC_SHA256
TLS_RSA_WITH_AES_256_CBC_SHA
TLS_RSA_WITH_AES_128_CBC_SHA
TLS_RSA_WITH_3DES_EDE_CBC_SHA
TLS_RSA_WITH_NULL_SHA256
TLS_RSA_WITH_NULL_SHA
TLS_PSK_WITH_AES_256_GCM_SHA384
TLS_PSK_WITH_AES_128_GCM_SHA256
TLS_PSK_WITH_AES_256_CBC_SHA384
TLS_PSK_WITH_AES_128_CBC_SHA256
TLS_PSK_WITH_NULL_SHA384
TLS_PSK_WITH_NULL_SHA256 On the servers with the limited set of cipher suites, I have added the required registry keys to enable TLS 1.2 in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2 and performed a reboot, but there's still nothing more. And on the servers with the 31 cipher suites, I don't know what has been changed so they are available.
I have also tried to use Enable-TlsCipherSuite -Name XXX with no success.
Finally, the servers are updated with the august 2020 updates. Any idea why there are missing ciphers and how I can add them? | NFS is fine, barring some specific other criteria are met, namely: The systems involved are both able to use NFS natively. Windows doesn't count here, it kind of works, but it's got a lot of quirks and is often a pain to work with when dealing with NFS in a cross-platform environment (and if it's just Windows, use SMB3, it eliminates most of the other issues with NFS). Note that on the client side, this means kernel-level support, because a user-level implementation either has to deal with the efficiency issues inherent in using something like FUSE, or it has to be linked directly into the application that needs to access the share. You've properly verified how the NFS client handles an NFS server restart. This includes both the OS itself (which should be fine in most cases), and the software that will be accessing the share. In particular, special care is needed on some client platforms when the software using the share holds files open for extended periods of time, as not all NFS client implementations gracefully handle server restarts by explicitly remounting and revalidating locks and file handles like they should (which leads to all kinds of issues for the client software). Note that you should recheck this any time any part of the stack is either upgraded or reconfigured. You're willing to set up proper user/group ID mapping. This is big, because without it you either need to mirror the UID/GID mappings between the systems (doable, but I'd be wary of setting up SSO against an internal network for an internet facing system) or you end up with potentially serious security implications (namely, what you see on one system for permissions does not match what you see on others). You're operating over a secured network link, or are willing to properly set up authentication for the share. Without auth, anybody on the link can access it (and a malicious client can easily side-step the basic UNIX discretionary access controls). Assuming you meet all those criteria, and you have a reasonably fast network, you should be fine. Also, if you can run jumbo frames, do so, they help a lot for any network filesystem or networked block storage. | {
"source": [
"https://serverfault.com/questions/1030406",
"https://serverfault.com",
"https://serverfault.com/users/419877/"
]
} |
1,031,317 | I just started using Let's Encrypt. The http-01-challenge is simple enough: Make a webserver respond to http://example.com Ask Let's Encrypt for a challenge-file Provide the file unter http://example.com/.well-known/acme-challenge Receive the TLS-certificate for example.com Works like a charm. But how are they making sure that I am really the owner of example.com using an insecure http-connection? Couldn't some admin in my data center (or at my ISP) just request a certificate and intercept the http-requests, Let's Enrypt sends to check the server's identity? | Indeed, there is no infallible protection against a man-in-the-middle attack for the HTTP-01 challenge. Someone who can intercept traffic between the Let's Encrypt validation nodes and your server CAN pass the challenge and get a cert issued. If they can pull off BGP trickery they may not necessarily even be in the middle in the normal sense. This applies to the HTTP-01 challenge and for unsigned domains also the DNS-01 challenge. It's worth noting that this problem is not unique to Lets Encrypt, the validation done by traditional CAs for DV certificates generally has this same problem; they typically offer HTTP, DNS and Email validation options, all of which are susceptible to a MITM attack. What Let's Encrypt has done to mitigate the problem , is to run each validation from multiple test nodes in different data centers, requiring that all the test results agree in order to issue the certificate. (Which I suspect makes them less susceptible to this type of abuse than most of the traditional CAs.) This at least reduces the scope of who might be in "the middle", as large parts of "the middle" will be different as viewed from the different test nodes. This is all about the default behavior of automated domain-validation by Let's Encrypt (and CAs in general), but the domain owner has been given some additional control with the requirement that public CA's must check CAA records. In order to actually take control, the domain owner can take these steps: DNSSEC-sign the zone to ensure that the CAA data is not tampered with (Also protects the challenge itself in the case of DNS-01) Add one or more CAA records to limit issuance of certificates (Particularly CAA records that do not only name the CA that is allowed to issue, but also which account with that CA that is allowed) With a CAA record looking something like this: example.com. IN CAA 0 issue "letsencrypt.org; accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/12346789" the problem is much reduced as an impostor cannot trivially initiate the challenge in the first place. Pay special attention to the accounturi parameter in the CAA data, this is what makes the policy specific to the domain owner's account with the specified CA. A CAA record that only specifies the CA but not an account, while valid, would be of limited help as such a policy still allows any other customer of that CA to request having certificates issued. | {
"source": [
"https://serverfault.com/questions/1031317",
"https://serverfault.com",
"https://serverfault.com/users/418266/"
]
} |
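A hedged sketch for checking the CAA policy from the entry above; the record itself is shown as a comment in zone-file style with placeholder values.
# published record (zone-file style):
#   example.com. IN CAA 0 issue "letsencrypt.org; accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/123456789"
# verify what resolvers actually see
dig +short CAA example.com
# and, if the zone is DNSSEC-signed, check that the answer validates
dig +dnssec CAA example.com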
1,031,988 | On a Windows 2019 Server the drive D: is 100% full (500 Gb used) : I'm trying to understand why the disk is full but I can't because both File Explorer and Total Commander reports no more than 33 Gb used : It's also strange that WinDirStat reports 100% (500 Gb) used in the start summary, but only 33 Gb used after the analysis: Please note that: I'm logged in as Administrator I started WinDirStat with Administrator privileges I tried with both local Administrator and Active Directory Domain Admin I enabled hidden and system files in File Explorer and Total Commander I ran chkdsk on the D: drive without finding any issue I found 33 Gb of data. Where are other 467 Gb? | You could try WizTree (wiztreefree.com), which is similar to WinDirStat but it bypasses the filesystem driver and reads the MFT directly if run as an administrator. It will show space taken by alternate data streams, metadata files ($MFT, $Secure, $BadClus, etc.), and directories you don't have access to. It doesn't appear to show space allocated for directory indexes, and it may miss some other things, but I wouldn't be surprised if the culprit does show up. | {
"source": [
"https://serverfault.com/questions/1031988",
"https://serverfault.com",
"https://serverfault.com/users/177397/"
]
} |
1,032,006 | My computer froze for a long time and I pressed the reset button. After reboot, all FIVE luks-encrypted (LUKS 1) file systems will no longer open. The message I get is "No key available with this passphrase." I am sure I am using the right password. I have been using the same password for all file systems for years. I have backups for all those volumes except one so I would like to analyze my options for it. I have tried 'cryptsetup isLuks' and 'cryptsetup luksDump' on all the file systems and all of them are successful, I mean, they are Luks partitions and I can dump their headers and see their slots. However, on research, I found similar cases where people say their headers have been damaged beyond repair. I don't know how to identify that. How do I do that? Thank you for any information. | You could try WizTree (wiztreefree.com), which is similar to WinDirStat but it bypasses the filesystem driver and reads the MFT directly if run as an administrator. It will show space taken by alternate data streams, metadata files ($MFT, $Secure, $BadClus, etc.), and directories you don't have access to. It doesn't appear to show space allocated for directory indexes, and it may miss some other things, but I wouldn't be surprised if the culprit does show up. | {
"source": [
"https://serverfault.com/questions/1032006",
"https://serverfault.com",
"https://serverfault.com/users/589837/"
]
} |
1,033,101 | I run a small internet based business from home and make a living at it to feed my family, but I'm still a one man show and internet security is far from my area of expertise. Yesterday I received two emails from a guy who calls himself an "ethical hacker" and has identified two vulnerabilities in my system which he says could be exploited by hackers. I believe him. The problem is, at the bottom of each email he says he "expects a bounty to be paid". Is this black mail? Is this his way of saying you'd better pay me or I'm going to wreak havoc? Or is this a typical and legitimate method for people to make a living without any nefarious intentions? EDIT: For more clarification: He gave me two examples of vulnerabilities with screenshots and clear instructions on how to fix those vulnerabilities. One was to change the "?all" part of my SPF record to "-all" to block all other domains from sending emails for my domain. In the other email he explained how my site was able to be shown inside an iframe (enabling a technique called "clickjacking") and he also included an example of the code and instructions on how to prevent it. | A true "ethical hacker" would tell you what issue (s)he found in your system, not ask money for that; (s)he could offer to fix it as a contractor, but that would be after telling you what the actual problem is; and in any case, it's a completely different thing from just trying to scare you into paying. This is plain and simple blackmail. (Also, it's a very real possibility that there is no real vulnerability and someone is just trying to scam you into paying money for nothing). | {
"source": [
"https://serverfault.com/questions/1033101",
"https://serverfault.com",
"https://serverfault.com/users/268333/"
]
} |
1,033,918 | I have hosted an e-commerce site in AWS-EC2 in a t3a.medium instance type. Now that the traffic is increasing CPU utilization is very high and site stops working time and again. Looking into htop, mysql is utilizing maximum CPU. As in the current instance, vcpu is only 2 so I want to change the instance type. But the problem is how do I know which would best fit my system? | It's easy to change the instance types up, down, or sideways. Don't try to guess which one will work the best for you - instead test it out . If you've got t3a.medium now try upgrading to t3a.large . Or to m5.large - it may be cheaper if you're maxing out the t3a.large and constantly running out of CPU credits (check the monitoring tab in the instance details). If your app is memory-hungry look at r5. * instances, if it's cpu-heavy look at c5. * instances. They tend to be cheaper than the general purpose for some workloads. Best practice is to decouple your database from your app/web frontend , don't host them on the same instance. Move the database to AWS RDS service - it will manage it for you, including backups, upgrades, fail over, etc. Once your app is decoupled from the database look at Auto-Scaling - instead of upgrading to a larger and larger instance as your traffic grows you can scale horizontally by adding more and more smaller instances, all configured the same. That allows you to run with fewer instances in quiet times (night, weekends) and add more of the same in busy times. That way your per-hour bill will fluctuate with the traffic load. The bottom line is that there is no "best EC2 instance" - it all depends on your actual usage. The good thing is that there is no long-term commitment in AWS, simply test things out and evolve the architecture as needed. Hope that helps :) | {
"source": [
"https://serverfault.com/questions/1033918",
"https://serverfault.com",
"https://serverfault.com/users/587992/"
]
} |
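One way to test different instance types, as the answer above suggests, is via the AWS CLI; the instance ID and target type are placeholders, and the instance must be stopped (not terminated) before its type can be changed.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 \
  --instance-type "{\"Value\": \"m5.large\"}"
aws ec2 start-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-running --instance-ids i-0123456789abcdef0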
1,035,459 | I'm in the process of shutting down a site, and have replaced the old site with a single "nobody home" page at the root level of the site. Now I need to set up some redirection, so that any request to any part of the site, no matter how complicated, ends up at the root page. I've tried what (I thought) ought to work: Creating an .htaccess file containing: RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^(.*)$ https://www.example.com/ [L,R=301,NE] but it mostly fails: Requests to http://www.example.com still get through, but https://www.example.com/doesnotexist.html throws a 404. (If there was no redirection going on, this would be correct, since that page doesn't exist on the site, but that's the point of the redirection: I want this request to be sent to https://www.example.com .) Arggh. The answer to this is probably obvious to everyone but me; can anyone help out? PS: I'm in a shared hosting situation, so I have to do this with a .htaccess file rather than hacking a full Apache configuration file. | If you are "shutting down a site" then you probably should not be "redirecting" the old site pages to a single page. An HTTP redirect sends a 301 response code, informing users and search engines the pages have moved . (Although mass redirects to a single page are likely to be seen as soft-404s by Google.) Instead, you should be serving a custom "410 Gone" response instead. A 410 informs search engines the pages are gone and not coming back. Your "single page" is the custom error document. For example: ErrorDocument 410 /single-page.html
RewriteEngine On
# Trigger a 410 Gone for all user requests
RewriteCond %{ENV:REDIRECT_STATUS} ^$
RewriteRule ^ - [G] The additional condition that checks against the REDIRECT_STATUS environment variable (which is empty on the initial request, but set to "410" after the RewriteRule is triggered and the HTTP response status is set) is to ensure two things: That the internal subrequest for the error document itself (ie. /single-page.html in this example) does not trigger another 410 (essentially resulting in an endless loop) and thus preventing the custom error document from being served (a default server response would sent instead in this scenario). And also to enable direct requests for /single-page.html itself to also trigger a 410 without creating a rewrite loop. ( Aside: The technique of using REDIRECT_STATUS in this way to detect an already triggered "error state" does not appear to work on LiteSpeed servers unfortunately since the env var is not updated during the request in the case of 4xx responses. However, it is updated for internal rewrites, ie. 200 OK responses, so it's still a good solution to prevent general internal rewrite-loops. It is bizarre why there would be this difference though. Seems like a bug.) If you have images (and/or other external resources) that need to be displayed in the error document then see my answer to the following related question: Using an htaccess file for 410 (gone page) prevents use of images | {
"source": [
"https://serverfault.com/questions/1035459",
"https://serverfault.com",
"https://serverfault.com/users/77729/"
]
} |
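A quick check for the 410 setup in the entry above; per the answer, both a retired URL and the error document itself should return the same status.
# both should print "HTTP/1.1 410 Gone" (the exact protocol string may vary)
curl -sI https://www.example.com/any-old-page.html | head -n 1
curl -sI https://www.example.com/single-page.html | head -n 1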
1,038,712 | I have an app hosted in AWS, my mail service is not on AWS, I'm using a hosting in hostgator due to pricing since I need 500+ simple mail accounts. My DNS points to my email service and it works correctly. The part I'm lost in is that I'm trying to receive emails in a specific address so that my app can process it. Is there a way so that some addresses are sent to a secondary MX record, or if the address is not found in the first will it go look at the second? Or the second priority MX record is only if the first in offline? | MX records are used according to priority value in the records. The record with the lowest priority is used first, then the higher ones until one responds. If there are multiple records with the same priority, one is randomly selected (this is how you generally do load balancing if you have multiple mail servers accepting incoming connections). The MX records only dictates which mail servers are responsible for a specific domain, it doesn't deal with individual recipients. So a sending server will only use secondary records if the primary server doesn't respond to its connection attempts, not if the primary server rejects the message. What you're trying to achieve is only doable at the DNS level if you use a subdomain for messages destined for your application. That way you can have the MX records for example.com point to your mail servers and the MX records for app.example.com to point towards your application. If you need to use the same domain for both, you'll need to configure your mail server to forward e-mail messages to your application. This can usually be done a couple of different ways depending on the mail server/hosting provider. | {
"source": [
"https://serverfault.com/questions/1038712",
"https://serverfault.com",
"https://serverfault.com/users/596661/"
]
} |
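To make the subdomain approach in the answer above concrete, a minimal zone sketch could look like the lines below (the mail host names, the app.example.com label and the IP address are placeholders, not taken from the original post):
; regular mailboxes stay on the existing mail hosting
example.com.           IN MX 10 mail.mailhosting.example.
; anything addressed to user@app.example.com is delivered to the application's own SMTP listener
app.example.com.       IN MX 10 smtp.app.example.com.
smtp.app.example.com.  IN A     203.0.113.10
A sending server only looks up MX records for the domain part of the recipient address, which is why the split has to happen at the (sub)domain level rather than per mailbox.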
1,040,871 | I'm using ubuntu 20.04 with Xen Hypervisor. On my machine I have an SSD that hosts my VM images, and then four sata drives which I have data on. My current set up is to mount the data on my domain0 and then provide that data to the other VMs over network file server. This seems inefficient as all the VMs would have to go though my NIC in order to access the data. Am I correct in that asusmption that this is a large bottleneck? What's the industry standard for providing data that is within the same physical machine? Any advice or improvements to this setup? Is there harm in mounting the data LVM on each of the VMs? My concern with this approach is what would occur if two VMs try to access the same data point simultaneously? Is this setup vulnerable to data corruption? | In general, no, unless you meet one of two very specific constraints. Either: The device needs to be exposed read-only (this MUST be at the device level, not the filesystem level) to all the VMs, and MUST NOT be written to from anywhere at runtime. or: The volume must be formatted using a cluster-aware filesystem, and all VMs must be part of the cluster (and the host system too if it needs access to the data). In general, filesystems that are not cluster-aware are designed to assume they have exclusive access to their backing storage, namely that it’s contents will not change unless they do something to change them. This obviously can cause caching issues if you violate this constraint, but it’s actually far worse than that, because it extends to the filesystem’s internal structures, not just the file data. This means that you can quite easily completely destroy a filesystem by mounting it on multiple nodes at the same time. Cluster-aware filesystems are the traditional solution to this, they use either network-based locking or a special form of synchronization on the shared storage itself to ensure consistency. On Linux, your options are pretty much OCFS2 and GFS2 (I recommend OCFS2 over GFS2 for this type of thing based on personal experience, but YMMV). However, they need a lot more from all the nodes in the cluster to be able to keep things in sync. As a general rule, they have significant performance limitations on many workloads due to the locking and cache invalidation requirements they enforce, they tend to involve a lot of disk and network traffic, and they may not be feature-complete compared to traditional single-node filesystems. I would like to point out that NFS over a local network bridge (the ‘easy’ option to do what you want) is actually rather efficient. Unless you use a rather strange setup or insist on each VM being on it’s own VLAN, the NFS traffic will never even touch your NIC, which means it all happens in-memory, and thus has very little in the way of efficiency issues (especially if you are using paravirtualized networking for the VMs). In theory, if you set up 9P, you could probably get better performance than NFS, but the effort involved is probably not worth it (the performance difference is not likely to be much). Additionally to all of this, there is a third option, but it’s overkill for use on a single machine. You could set up a distributed filesystem like GlusterFS or Ceph. This is actually probably the best option if your data is not inherently colocated with your VMs (that is, you may be running VMs on nodes other than the ones the data is on), as while it’s not as efficient as NFS or 9P, it will give you much more flexibility in terms of infrastructure. | {
"source": [
"https://serverfault.com/questions/1040871",
"https://serverfault.com",
"https://serverfault.com/users/206835/"
]
} |
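As a rough sketch of the NFS-over-a-local-bridge setup that the answer above calls the easy option (the bridge subnet, export path and mount point are assumptions, not from the original question):
# on dom0: /etc/exports, restricted to the VM bridge subnet
/srv/data  192.168.122.0/24(rw,sync,no_subtree_check)
# apply the export table on dom0
exportfs -ra
# in each VM, e.g. via /etc/fstab
192.168.122.1:/srv/data  /data  nfs  defaults,_netdev  0  0
Because this traffic stays on the virtual bridge it never touches the physical NIC, which is the efficiency point made above.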
1,041,756 | I have a Web and I wanted to move its images to AWS S3. Say it's called mypage.com and I can access to an image like this: https://mypage.com/pics/one.jpg I created a bucket called static.mypage.com to put there all the images, so now I can access to the images like this: https://static.mypage.com.s3.eu-west-1.amazonaws.com/pics/one.jpg As it is a very long name, I want use a "shortener" using DNS. So, I'd want to know how to set the CNAME in my DNS provider to make possible that if I go to... https://static.mypage.com/pics/one.jpg ...I'd get the images from the bucket. Thanks! | In the past S3 supported FQDN bucket names - i.e. exactly what you needed. Where FQDN = Fully Qualified Domain Name, i.e. full host name like static.mypage.com . The problem is that this only works with HTTP and not with HTTPS because there is no way to make S3 use a SSL certificate with your bucket name / host name ( static.mypage.com ). You can still do it if you're happy with HTTP-only traffic. Simply create a static.mypage.com CNAME at your registrar pointing to s3.eu-west-1.amazonaws.com . S3 will recognise the Host: header in the request and look into the right S3 bucket. Provided that the objects in the bucket are publicly accessible the URL http ://static.mypage.com/pics/one.jpg should work just fine. However as soon as you access the same over HTTPS you will get a SSL Certificate Validation error because the hostname in the S3 certificate * .s3.eu-west-1.amazonaws.com won't match the expected static.mysite.com . The solution is CloudFront which can sit in front of your S3 and handle the right SSL certificate for it: create a free SSL certificate for static.mysite.com in Amazon Certificate Manager (or upload your 3rd party issued SSL cert to ACM). set up a CloudFront distribution , attach the SSL cert to it configure the CloudFront distribution to use your S3 bucket as the Origin configure a static.mysite.com CNAME at your DNS provider to point to the CF distribution name, e.g. d123456abcdef.cloudfront.net with that your desired URL https ://static.mypage.com/pics/one.jpg should finally work. Also have a look at Routing traffic to a website that is hosted in an Amazon S3 bucket . Hope that helps :) | {
"source": [
"https://serverfault.com/questions/1041756",
"https://serverfault.com",
"https://serverfault.com/users/527957/"
]
} |
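A hedged sketch of the CloudFront-related steps from the answer above using the AWS CLI (bucket, hostname and distribution name are placeholders; note that certificates used by CloudFront must be requested in us-east-1):
# request a free ACM certificate for the custom hostname, validated via DNS
aws acm request-certificate --domain-name static.mypage.com --validation-method DNS --region us-east-1
# after creating the CloudFront distribution (origin = the S3 bucket, alternate
# domain name = static.mypage.com, the ACM certificate attached), add the CNAME:
# static.mypage.com.  IN CNAME  d123456abcdef.cloudfront.net.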
1,041,795 | I've got some files which I would like to sync to a linux server. Only problem is it resets the ownership and group to the current user and group. Unfortunately the options --owner, --group, and --archive are somewhat unworkable because the source is a windows machine. (it's ownership doesn't make sense on the server) Any way to keep destination ownership unchanged? Command i'm using (minimal example): rsync SRC $remote:DEST | In the past S3 supported FQDN bucket names - i.e. exactly what you needed. Where FQDN = Fully Qualified Domain Name, i.e. full host name like static.mypage.com . The problem is that this only works with HTTP and not with HTTPS because there is no way to make S3 use a SSL certificate with your bucket name / host name ( static.mypage.com ). You can still do it if you're happy with HTTP-only traffic. Simply create a static.mypage.com CNAME at your registrar pointing to s3.eu-west-1.amazonaws.com . S3 will recognise the Host: header in the request and look into the right S3 bucket. Provided that the objects in the bucket are publicly accessible the URL http ://static.mypage.com/pics/one.jpg should work just fine. However as soon as you access the same over HTTPS you will get a SSL Certificate Validation error because the hostname in the S3 certificate * .s3.eu-west-1.amazonaws.com won't match the expected static.mysite.com . The solution is CloudFront which can sit in front of your S3 and handle the right SSL certificate for it: create a free SSL certificate for static.mysite.com in Amazon Certificate Manager (or upload your 3rd party issued SSL cert to ACM). set up a CloudFront distribution , attach the SSL cert to it configure the CloudFront distribution to use your S3 bucket as the Origin configure a static.mysite.com CNAME at your DNS provider to point to the CF distribution name, e.g. d123456abcdef.cloudfront.net with that your desired URL https ://static.mypage.com/pics/one.jpg should finally work. Also have a look at Routing traffic to a website that is hosted in an Amazon S3 bucket . Hope that helps :) | {
"source": [
"https://serverfault.com/questions/1041795",
"https://serverfault.com",
"https://serverfault.com/users/348474/"
]
} |
1,041,814 | I am looking for a least latency and max throughput region for Google AppEngine for users in Thailand. According to Cloud locations closest are Jakarta and Hong Kong. Tried to make sense of an underwater cables map information for my question. Could not put an answer together. What I am thinking is a test setup of 2 GAE applications behind a CloudLoadbalancer and measure user experience (somehow). However, maybe someone was able to figure it out already? Is it Jakarta or Hong Kong GCP region which is faster to Thailand users? | In the past S3 supported FQDN bucket names - i.e. exactly what you needed. Where FQDN = Fully Qualified Domain Name, i.e. full host name like static.mypage.com . The problem is that this only works with HTTP and not with HTTPS because there is no way to make S3 use a SSL certificate with your bucket name / host name ( static.mypage.com ). You can still do it if you're happy with HTTP-only traffic. Simply create a static.mypage.com CNAME at your registrar pointing to s3.eu-west-1.amazonaws.com . S3 will recognise the Host: header in the request and look into the right S3 bucket. Provided that the objects in the bucket are publicly accessible the URL http ://static.mypage.com/pics/one.jpg should work just fine. However as soon as you access the same over HTTPS you will get a SSL Certificate Validation error because the hostname in the S3 certificate * .s3.eu-west-1.amazonaws.com won't match the expected static.mysite.com . The solution is CloudFront which can sit in front of your S3 and handle the right SSL certificate for it: create a free SSL certificate for static.mysite.com in Amazon Certificate Manager (or upload your 3rd party issued SSL cert to ACM). set up a CloudFront distribution , attach the SSL cert to it configure the CloudFront distribution to use your S3 bucket as the Origin configure a static.mysite.com CNAME at your DNS provider to point to the CF distribution name, e.g. d123456abcdef.cloudfront.net with that your desired URL https ://static.mypage.com/pics/one.jpg should finally work. Also have a look at Routing traffic to a website that is hosted in an Amazon S3 bucket . Hope that helps :) | {
"source": [
"https://serverfault.com/questions/1041814",
"https://serverfault.com",
"https://serverfault.com/users/481623/"
]
} |
1,042,604 | We have deployed a web application on an m5.xlarge EC2 instance and when we try to buy an annual or 3 years reserved license, AWS recommends based on our current usage it is recommended to purchase 54 t2.nano instances instead of the m5.xlarge we have now. It calculates and shows a difference in the overall cost and shows that going with that option is more profitable to us. The thing I can't understand is what does it mean to buy 54 t2.nano instead of one m5.xlarge? Does it mean we need to host the application in all 54 nano EC2 servers and then put it through an ELB? I am a bit confused here about what to do | There's a couple of things to understand: Reserved Instances are just a billing construct . AWS will try to match the purchased reserved instances against your running instances at the billing time. I.e. you don’t assign RIs to your actual EC2 instances, you get the discount automatically. Reserved Instances capacity doesn’t have to match the running instances. The price for t2.medium is the same as for 2x t2.small or 8x t2.nano . So if you purchase 32x t2.nano it would fully cover the price of 1x t2.xlarge . From the billing perspective it’s the same. On the other hand t2. anything won't be applied against m5. anything - they are a different instance class. You can buy 2x m5.large instead of 1x m5.xlarge reserved instance - same thing from a billing perspective. Now why does it recommend 54x t2.nano ? Probably it found out that your actual needs are somewhere between t2.xlarge and t2.2xlarge - and it's best expressed as 54x t2.nano . Depending on your application you may or may not be able to spread the load over a number of smaller instances . I wouldn't go to 54x t2.nano but perhaps 3x t2.large could be a good option? You can then set up auto-scaling to remove some of the nodes during quiet times and save. And even use Spot Instances and save even more. However for both ASG and Spot you'll need some automation in place. For a much greater flexibility look at AWS Saving Plans - with that you'll be able to migrate your application to newer instance types, mix and match instance types, etc. With Reserved Instances you're locked to a particular instance class in a particular region. With Saving Plans you only commit to a certain spend per month and it's up to you how you use it. Hope that helps :) | {
"source": [
"https://serverfault.com/questions/1042604",
"https://serverfault.com",
"https://serverfault.com/users/600633/"
]
} |
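The arithmetic behind the answer above can be sanity-checked with the usual AWS size-normalization factors (assumed here from memory: nano = 0.25, small = 1, large = 4, xlarge = 8, 2xlarge = 16); none of these numbers come from the original question:
# 32 x t2.nano = 32 * 0.25 = 8    units, i.e. the same RI spend as 1 x t2.xlarge
# 54 x t2.nano = 54 * 0.25 = 13.5 units, i.e. between t2.xlarge (8) and t2.2xlarge (16)
echo "54 * 0.25" | bc
This matches the answer's guess that the measured need sits somewhere between t2.xlarge and t2.2xlarge.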
1,045,825 | My server hosts 2 different websites which lately have been slow to load on browsers. After looking at potential causes and solutions online, I came across this thread . One of the advised solutions was to disable IPv6 which I'd like to try, however, As I have no proficiency in this area, I first want to make sure I understand exactly what this will do (what are the consequences of doing so), and if OK then know what is the safe way to do so without breaking my server/websites. | No, do not disable IPv6. It breaks things, turns users without v4 away, and makes more work for your future project to use IPv6. A host with only IPv4 addresses and routes will serve web sites just fine without doing anything. v4 and dual stack hosts can still get to it. You cite forum posts circa 2008. And even then disabling IPv6 was an uninformed shot in the dark, I see no v6 in use in their terminal output. Further, in the time since, IPv6 has gone from early adopters to mature and widely deployed. Use web site performance analytics to measure what is slow. Tools exist to simulate real user load on a slow connection, or to instrument actual user performance in their browser. Use F12 browser developer tools, for example. Prove exactly which requests. Then trace requests through your web server stack and see what is involved in serving it. | {
"source": [
"https://serverfault.com/questions/1045825",
"https://serverfault.com",
"https://serverfault.com/users/603910/"
]
} |
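For the measure-what-is-slow advice in the answer above, a quick client-side check can time the individual phases of a single request before reaching for full analytics tooling (the URL is a placeholder):
curl -o /dev/null -s -w 'dns: %{time_namelookup}s connect: %{time_connect}s tls: %{time_appconnect}s ttfb: %{time_starttransfer}s total: %{time_total}s\n' https://www.example.com/
If name resolution really were the bottleneck it would show up in the dns figure directly, which is a far more targeted step than disabling IPv6.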
1,047,763 | Why do administrators mostly use +a alongside +mx in SPF records?
This is the example: @ 10800 IN TXT "v=spf1 +a +mx -all" Isn't it enough to only use the +mx mechanism, e.g.: @ 10800 IN TXT "v=spf1 +mx -all" I thought handling email is the MX record's task, not the A record's. Can anyone explain a scenario where anyone would use +a then? | Frankly, because they have copied the configuration from some tutorial or example configuration without knowing the basic principles of SPF. Sometimes it's desired that e.g. both a web server in a and incoming mail exchanges in mx are also used for sending mail, but not nearly always. It's better to favor mechanisms with fewer additional DNS queries: ip4 / ip6 over a and a over mx ( RFC 7208, 10.1.1 ) And even if, for easier administration ( 10.1.2 ), a mechanism is chosen, it's not always a mx or a , but e.g. a:relay.example.com . | {
"source": [
"https://serverfault.com/questions/1047763",
"https://serverfault.com",
"https://serverfault.com/users/430178/"
]
} |
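Turning the answer above into concrete records, a minimal SPF without the unnecessary a mechanism might look like one of the following (the IP address and relay host are placeholders):
@ 10800 IN TXT "v=spf1 mx -all"
@ 10800 IN TXT "v=spf1 ip4:192.0.2.25 a:relay.example.com -all"
These are alternatives, not records to publish together; only one SPF record should exist per name.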
1,048,792 | There are plenty of videos and descriptions about how to use an RJ45 crimping tool. I've seen a few, and I have even tested a crimping tool, but I still don't understand how it does what it does. What exactly happens to the eight wires inside the crimping tool, and why isn't it necessary to remove the insulation from the eight wires before putting the cable into the crimping tool? | When you crimp a cable the metal contact pads inside the RJ45 head will "cut" into the isolated ethernet cable. Most one time used RJ45 connectors will also have a bit in the middle of the connector that will get crushed during crimping, ensuring that the cable doesn't get pulled out easily from the contact pads. There are some RJ45 connectors that allow you to remove the head, but these are most of the time used in a commercial environment (and these are most of the time significantly more expensive). However these heads usually do not require any tools. Similar approach is taken when you use a patch panel, where the patch panel's metal contact pads will cut open the isolation of the ethernet cable and create a circuit by penetrating through the plastic of the cable. Source of image: https://www.vpi.us/installation/assemble-cat5e-rj45-plg-flt.html | {
"source": [
"https://serverfault.com/questions/1048792",
"https://serverfault.com",
"https://serverfault.com/users/39827/"
]
} |
1,049,316 | Can a single DNS response contain both A records and cname records? If so, would it be considered unusual or is it typical behavior? | If the A record(s) that you refer to are for the canonical name (the "target" of the CNAME record) rather than the query name, then this is perfectly normal. It would however be in violation of the standards to return CNAME and A (or any other record) for the same name. Valid example: foo.example.com. 3600 IN CNAME bar.example.com.
bar.example.com. 3600 IN A 192.0.2.1 Invalid example (not discouraged, invalid ): foo.example.com. 3600 IN CNAME bar.example.com.
foo.example.com. 3600 IN A 192.0.2.1 | {
"source": [
"https://serverfault.com/questions/1049316",
"https://serverfault.com",
"https://serverfault.com/users/611848/"
]
} |
1,049,324 | I have all the npm packages installed for a particular user (i.e. not root, under /home/otheruser/*). I am using monit to check whether the program is running. In this case it's pm2, which is in /home/otheruser/.nvm/versions/node/v5.2.0/bin/pm2 I cannot use pm2 from other users, even as root and even when using the full path in the terminal. It did not give any output; nothing happened, like below. root@server:~$ /home/otheruser/.nvm/versions/node/v5.2.0/bin/pm2 list The same happens whenever I run node modules as different users, like root@server:~$ /home/otheruser/.nvm/versions/node/v5.2.0/bin/forever -v
root@server:~$ /home/otheruser/.nvm/versions/node/v5.2.0/bin/db-migrate -v
root@server:~$ /home/otheruser/.nvm/versions/node/v5.2.0/bin/pm2 -v And I get the below in the syslog node[5791]: No AX.25 port data configured
node[5791]: No AX.25 port data configured
node[5791]: No AX.25 port data configured How to get that working from other users | If the A record(s) that you refer to are for the canonical name (the "target" of the CNAME record) rather than the query name, then this is perfectly normal. It would however be in violation of the standards to return CNAME and A (or any other record) for the same name. Valid example: foo.example.com. 3600 IN CNAME bar.example.com.
bar.example.com. 3600 IN A 192.0.2.1 Invalid example (not discouraged, invalid ): foo.example.com. 3600 IN CNAME bar.example.com.
foo.example.com. 3600 IN A 192.0.2.1 | {
"source": [
"https://serverfault.com/questions/1049324",
"https://serverfault.com",
"https://serverfault.com/users/548642/"
]
} |
1,049,330 | I am trying to install GDAL on my CentOS 8 machine. I tried the following command: sudo yum install gdal-libs And it threw the following error: Last metadata expiration check: 0:05:58 ago on Sun 10 Jan 2021 10:52:18 PM EST.
Error:
Problem: conflicting requests
- nothing provides libdap.so.25()(64bit) needed by gdal-libs-3.0.4-5.el8.x86_64
- nothing provides libdapclient.so.6()(64bit) needed by gdal-libs-3.0.4-5.el8.x86_64
- nothing provides libdapserver.so.7()(64bit) needed by gdal-libs-3.0.4-5.el8.x86_64
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages) Please, anyone, suggest to me, how to solve this error? | If the A record(s) that you refer to are for the canonical name (the "target" of the CNAME record) rather than the query name, then this is perfectly normal. It would however be in violation of the standards to return CNAME and A (or any other record) for the same name. Valid example: foo.example.com. 3600 IN CNAME bar.example.com.
bar.example.com. 3600 IN A 192.0.2.1 Invalid example (not discouraged, invalid ): foo.example.com. 3600 IN CNAME bar.example.com.
foo.example.com. 3600 IN A 192.0.2.1 | {
"source": [
"https://serverfault.com/questions/1049330",
"https://serverfault.com",
"https://serverfault.com/users/575779/"
]
} |
1,050,958 | I am helping run a website that has been blocked for political reasons by the same Russian agency that has previously tried blocking Telegram (RosKomNadzor). This is not the first time it happens, and previously we would just change the domain, but this has its own implications and loss in readership. They are blocking only the domain name, not the IP (we're using Cloudflare anyways). We're using HTTPS, but ISPs are still somehow able to get the DNS information about a request coming our way from their clients. Technically, we can suggest our readers to configure their /etc/hosts , but that is not a viable option. Is there something that could be done on our server's side to encrypt/obfuscate the DNS information without users making any changes/installing software? Or is waiting for DNS over HTTPS to become mainstream our only option? From Russia with love. | Unfortunately, circumventing censorship is better addressed on the client side, so there aren't many server side settings that could help with that. You could advise your users to use a VPN, Tor , and/or public DNS with DNS-over-HTTPS ( RFC 8484 ) or DNS-over-TLS ( RFC 7858 ). You make the assumption that the censorship method has something to do with DNS, but have you actually tested this? Did you know that the server name indication (SNI, RFC 6066, 3 ) in the ClientHello is unencrypted and may also be used to block the TLS connection? Luckily, TLS Encrypted Client Hello ( draft-ietf-tls-esni-09 ) is on its way and can help with that. More reading on the subject: Seth Schoen: ESNI: A Privacy-Protecting Upgrade to HTTPS (EFF) Matthew Prince: Encrypting SNI: Fixing One of the Core Internet Bugs (Cloudflare) ( We don't usually add any greetings to our Q/A posts, but your 007 reference is golden! ) | {
"source": [
"https://serverfault.com/questions/1050958",
"https://serverfault.com",
"https://serverfault.com/users/613926/"
]
} |
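As a client-side illustration of the DNS-over-HTTPS suggestion in the answer above, using Cloudflare's public resolver (the queried name is a placeholder, and this is only a sketch of what an affected reader could try):
# resolve a name over HTTPS, bypassing the ISP's plaintext resolver
curl -s -H 'accept: application/dns-json' 'https://cloudflare-dns.com/dns-query?name=example.com&type=A'
As the answer points out, this only protects the lookup itself; the SNI in the TLS handshake remains visible unless Encrypted Client Hello is deployed.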
1,052,564 | I am able to log in via mysql -u myuser -p mydb -h localhost with this: grant all privileges on mydb.* to myuser@'%' identified by
'1234567890123456789012345678901234567890123456789012345678901234567890123456789'; But not after I do this: grant all privileges on mydb.* to myuser@'%' identified by
'12345678901234567890123456789012345678901234567890123456789012345678901234567890'; Where is this hard limit of 79 characters for a database password coming from? | As has been covered by Mircea Vutcovici, the password is only stored after hashing, which means it will have fixed length when stored. Ie, it's not obvious that there should be such a limitation. I believe what was encountered may rather be a limitation imposed specifically by the mysql client application. The get_tty_password function seems to read the password into char buff[80]; , which would imply 79 characters + null termination. https://github.com/MariaDB/server/blob/b4fb15ccd4f2864483f8644c0236e63c814c8beb/mysys/get_password.c#L155 (Does the limitation even exist if you use a different client?) | {
"source": [
"https://serverfault.com/questions/1052564",
"https://serverfault.com",
"https://serverfault.com/users/321088/"
]
} |
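One way to test the it-is-the-tty-prompt hypothesis from the answer above is to supply the long password outside the interactive prompt, for example through a client option file (the layout below is only a sketch; user, database and password are the ones from the question):
# ~/.my.cnf (chmod 600)
[client]
user=myuser
password=12345678901234567890123456789012345678901234567890123456789012345678901234567890
# then simply
mysql mydb -h localhost
If login succeeds this way but fails via the prompt, the 79-character limit is indeed in the client's password-reading buffer rather than in the server.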
1,057,780 | I created website for someone, but also someone (I guess some SEO guy) told this person that I made big mistake because there are missing DNS records on domain (mx, SPF, dmarc). Now I need to "fix" my error. Thing is, of course these records are used for Email purposes, but there is NO email in this domain (just simple free Gmail account). So, is there any reason to add these records anyway? How they should look like? Only reason I can think of is preventing SPAM using my domain identity. But I thought that SPAM filters are not going to pass email from my domain anyway if these records are missing, so what's the point? | The point would largely boil down to being a good citizen and reducing abuse, like making your domain less useful for spammers to impersonate and to make it immediately clear to others that mail is not deliverable there. If the claim is accurate that the domain is not used for either sending or receiving email at all, you could add something like this: domain.example. IN MX 0 .
domain.example. IN TXT "v=spf1 -all"
_dmarc.domain.example. IN TXT "v=DMARC1; p=reject; aspf=s; adkim=s;" This indicates that inbound mail is not accepted ( null MX ), and that any mail sent from the domain should be rejected ( SPF policy that lists no allowed senders + DMARC policy enforces From-header alignment). | {
"source": [
"https://serverfault.com/questions/1057780",
"https://serverfault.com",
"https://serverfault.com/users/623407/"
]
} |
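Once records like the ones in the answer above are published, they can be sanity-checked from any machine (replace domain.example with the real domain):
dig +short MX domain.example          # expect: 0 .
dig +short TXT domain.example         # expect: "v=spf1 -all"
dig +short TXT _dmarc.domain.example  # expect the DMARC policy string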
1,057,848 | I want to replace my on premise server which contains a file server and DC and migrate it to Azure. My on premise LAN is connected with a VPN gateway to my Azure Virtul Network. Is it possible to have two DC servers on Azure (nothing on premise) and my on premise computers in the Azure VMs domain ? Regards. | The point would largely boil down to being a good citizen and reducing abuse, like making your domain less useful for spammers to impersonate and to make it immediately clear to others that mail is not deliverable there. If the claim is accurate that the domain is not used for either sending or receiving email at all, you could add something like this: domain.example. IN MX 0 .
domain.example. IN TXT "v=spf1 -all"
_dmarc.domain.example. IN TXT "v=DMARC1; p=reject; aspf=s; adkim=s;" This indicates that inbound mail is not accepted ( null MX ), and that any mail sent from the domain should be rejected ( SPF policy that lists no allowed senders + DMARC policy enforces From-header alignment). | {
"source": [
"https://serverfault.com/questions/1057848",
"https://serverfault.com",
"https://serverfault.com/users/623513/"
]
} |
1,057,863 | I'm looking for a way to push out commands to all workstations. The scenario is as follows: I often go to environments that I am not familiar with to audit the network. Part of that is a network scan, but to use our specific tools we need to configure a couple of things on every workstation (enable wmi access, enable file and printer sharing, etc.). We have a batch file we can run on every computer, but this solution does not scale well as you can imagine. I've included the commands we run below. Ideally, there would be a way to push out the batch file to run one time on all computers connected to the domain. Alternatively, we could create a new batch file that creates GPO that does the same things, but this is something that I have not done before. Any help is really appreciated! rem Allow the device to be pingable through Windows Firewall
netsh firewall set icmpsetting type=ALL mode=enable
netsh advfirewall firewall add rule name="ICMP Allow incoming V4 echo request" protocol=icmpv4:8,any dir=in action=allow
netsh advfirewall firewall add rule name="ICMP Allow incoming V6 echo request" protocol=icmpv6:8,any dir=in action=allow
rem Turn on File and Printer Sharing
netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=Yes
rem Allow WMI access through Windows Firewall
netsh firewall set service type=remoteadmin mode=enable
netsh advfirewall firewall set rule group="windows management instrumentation (wmi)" new enable=yes
rem Add user account
net user [REDACTED] /add
net localgroup Administrators [REDACTED] /add
Rem Set WMI Permissions
sc sdset SCMANAGER D:(A;;CCLCRPRC;;;AU)(A;;CCLCRPWPRC;;;SY)(A;;KA;;;BA)S:(AU;FA;KA;;;WD)(AU;OIIOFA;GA;;;WD)
ECHO End of script
PAUSE | The point would largely boil down to being a good citizen and reducing abuse, like making your domain less useful for spammers to impersonate and to make it immediately clear to others that mail is not deliverable there. If the claim is accurate that the domain is not used for either sending or receiving email at all, you could add something like this: domain.example. IN MX 0 .
domain.example. IN TXT "v=spf1 -all"
_dmarc.domain.example. IN TXT "v=DMARC1; p=reject; aspf=s; adkim=s;" This indicates that inbound mail is not accepted ( null MX ), and that any mail sent from the domain should be rejected ( SPF policy that lists no allowed senders + DMARC policy enforces From-header alignment). | {
"source": [
"https://serverfault.com/questions/1057863",
"https://serverfault.com",
"https://serverfault.com/users/623545/"
]
} |
1,067,229 | I know the .dev top-level domain requires all sites to support only encrypted HTTPS connections, disallowing any HTTP connections. Are there other such TLDs? | A direct answer to this would eventually become outdated if more top-level domains start enforcing HTTPS using HTTP Strict Transport Security (HSTS, RFC 6797 ). Technically this is an HSTS policy of a TLD submitted to the preloading list. It started with Google's new TLDs , The HSTS preload list can contain individual domains or subdomains and
even top-level domains (TLDs), which are added through the HSTS
website . The TLD is the last part of the domain name, e.g., .com , .net , or .org . Google operates 45 TLDs , including .google , .how , and .soy . In 2015 we created the first secure TLD when we added .google to
the HSTS preload list, and we are now rolling out HSTS for a larger
number of our TLDs, starting with .foo and .dev . and there has even been preliminary thoughts on the possibility of protecting the entire .gov in the future: Zooming out even further: it’s technically possible to preload HSTS
for an entire top-level domain (e.g. “ .gov ”), as Google first did with .google . As a relatively small, centrally managed top-level domain,
perhaps someday .gov can get there. To know the current situation, one must consult the Chromium HSTS Preloaded list . The preloaded list is also available on Chromium's GitHub mirror ; especially the raw version is best for curl or wget . The list is a non-standard JSON with comment lines. It is possible to analyse it with jq after removing the comments with e.g. sed . Here, the jq gives all domain names on the preloaded list and the grep reduces it into TLDs : cat transport_security_state_static.json \
| sed 's/^\s*\/\/.*//' \
| sed '/^$/d' \
| jq -r '.entries[]|select(.include_subdomains==true)|"\(.name)"' \
| grep -P "^\.?[a-z]*\.?$" To search for public suffixes instead of TLDs: cat transport_security_state_static.json \
| sed 's/^\s*\/\/.*//' \
| sed '/^$/d' \
| jq '.entries[]' \
| jq 'select((.policy=="public-suffix") and (.include_subdomains==true))' \
| jq -r '"\(.name)"' | {
"source": [
"https://serverfault.com/questions/1067229",
"https://serverfault.com",
"https://serverfault.com/users/142214/"
]
} |
1,067,230 | I have multiple services running on my server which will be accessed via nginx and encrypted by certbot.
If I want to access my service via http://example.com , I get redirected to http(s)://example.com , which is great. However, if I type in my IP address:port, I won't get redirected to my domain.
This is my abc.com file in /etc/nginx/sites-enabled server {
server_name abc.com; #example: mysite.xyz
#access_log /var/log/nginx/<servicename>.access.log;
#error_log /var/log/nginx/<servicename>.error.log;
location / {
proxy_pass http://127.0.0.1:9000; # here you define the address, which is used by nginx to access your service
proxy_http_version 1.1;
proxy_cache_bypass $http_upgrade;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;
} # this is the port you use to access the proxied service
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/abc.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/abc.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
if ($host = abc.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
server_name abc.com;
listen 80;
return 404; # managed by Certbot
} Can someone tell me, or point me in the right direction, what I need to change in my abc.com file in order to also redirect requests via IP address:port to https://example.com ? I am grateful for any help! Edit:
I have made my services reachable via localhost which solved my problem. Thank you all for your contributions! | A direct answer to this would eventually become outdated if more top-level domains start enforcing HTTPS using HTTP Strict Transport Security (HSTS, RFC 6797 ). Technically this is an HSTS policy of a TLD submitted to the preloading list. It started with Google's new TLDs , The HSTS preload list can contain individual domains or subdomains and
even top-level domains (TLDs), which are added through the HSTS
website . The TLD is the last part of the domain name, e.g., .com , .net , or .org . Google operates 45 TLDs , including .google , .how , and .soy . In 2015 we created the first secure TLD when we added .google to
the HSTS preload list, and we are now rolling out HSTS for a larger
number of our TLDs, starting with .foo and .dev . and there has even been preliminary thoughts on the possibility of protecting the entire .gov in the future: Zooming out even further: it’s technically possible to preload HSTS
for an entire top-level domain (e.g. “ .gov ”), as Google first did with .google . As a relatively small, centrally managed top-level domain,
perhaps someday .gov can get there. To know the current situation, one must consult the Chromium HSTS Preloaded list . The preloaded list is also available on Chromium's GitHub mirror ; especially the raw version is best for curl or wget . The list is a non-standard JSON with comment lines. It is possible to analyse it with jq after removing the comments with e.g. sed . Here, the jq gives all domain names on the preloaded list and the grep reduces it into TLDs : cat transport_security_state_static.json \
| sed 's/^\s*\/\/.*//' \
| sed '/^$/d' \
| jq -r '.entries[]|select(.include_subdomains==true)|"\(.name)"' \
| grep -P "^\.?[a-z]*\.?$" To search for public suffixes instead of TLDs: cat transport_security_state_static.json \
| sed 's/^\s*\/\/.*//' \
| sed '/^$/d' \
| jq '.entries[]' \
| jq 'select((.policy=="public-suffix") and (.include_subdomains==true))' \
| jq -r '"\(.name)"' | {
"source": [
"https://serverfault.com/questions/1067230",
"https://serverfault.com",
"https://serverfault.com/users/752195/"
]
} |
1,069,102 | I check /var/log/secure and I have these logs: Jul 9 13:02:56 localhost sshd[30624]: Invalid user admin from 223.196.172.1 port 37566
Jul 9 13:02:57 localhost sshd[30624]: Connection closed by invalid user admin 223.196.172.1 port 37566 [preauth]
Jul 9 13:03:05 localhost sshd[30626]: Invalid user admin from 223.196.174.150 port 61445
Jul 9 13:03:05 localhost sshd[30626]: Connection closed by invalid user admin 223.196.174.150 port 61445 [preauth]
Jul 9 13:03:16 localhost sshd[30628]: Invalid user admin from 223.196.169.37 port 62329
Jul 9 13:03:24 localhost sshd[30628]: Connection closed by invalid user admin 223.196.169.37 port 62329 [preauth]
Jul 9 13:03:29 localhost sshd[30630]: Invalid user admin from 223.196.169.37 port 64099
Jul 9 13:03:30 localhost sshd[30630]: Connection closed by invalid user admin 223.196.169.37 port 64099 [preauth]
Jul 9 13:03:45 localhost sshd[30632]: Invalid user admin from 223.196.174.150 port 22816
Jul 9 13:03:46 localhost sshd[30632]: Connection closed by invalid user admin 223.196.174.150 port 22816 [preauth]
Jul 9 13:06:17 localhost sshd[30637]: Invalid user admin from 223.196.168.33 port 33176
Jul 9 13:06:17 localhost sshd[30637]: Connection closed by invalid user admin 223.196.168.33 port 33176 [preauth]
Jul 9 13:07:09 localhost sshd[30639]: Invalid user admin from 223.196.173.152 port 61780
Jul 9 13:07:25 localhost sshd[30641]: Invalid user admin from 223.196.168.33 port 54200
Jul 9 13:07:26 localhost sshd[30641]: Connection closed by invalid user admin 223.196.168.33 port 54200 [preauth]
... It seems someone tries to log in by SSH. I disable login by root user and enable public/private key login, but is this a DDoS attack? And does it use RAM/CPU? What should I do? | That's just the normal Internet background noise of people scanning for vulnerable servers. You can add an iptables rule to rate limit incoming connections (e.g. four in four minutes) for a simple fix (but that will also lock you out if you open too many connections or someone forges SYN packets originating from your address): iptables -A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --update --seconds 240 --hitcount 4 --name ssh-v4 --mask 255.255.255.255 --rsource -j REJECT --reject-with tcp-reset
iptables -A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name ssh-v4 --mask 255.255.255.255 --rsource -j ACCEPT The proper solution is to use a tool like fail2ban that parses the log file for failed logins and creates firewall rules on demand -- a bit more work to set up, but it requires an established connection and a failed authentication to trigger, so it will not react to forged connection attempts or successful logins like the simple approach does. | {
"source": [
"https://serverfault.com/questions/1069102",
"https://serverfault.com",
"https://serverfault.com/users/588926/"
]
} |
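A minimal fail2ban configuration for the sshd jail mentioned in the answer above could look like the following; the thresholds are arbitrary example values, not recommendations from the original answer:
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 4
findtime = 600
bantime  = 3600
After editing, restart the service (e.g. systemctl restart fail2ban) and check the jail with fail2ban-client status sshd.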
1,074,627 | I'm aware of NAT table. I just want to know what happens if two clients in a private local area network want to download exactly the same resource on the same port? In other words , When a packet comes from the server, how can the router decide which client is supposed to get this packet? If I'm not wrong, the incoming packet from the server has destination IP address of the router which is public and is the same for both, and also the destination's port number which happens to be the same as well in this case. Is there any mechanism in router or server to detect this ? or is this behavior even possible at the first place? I've searched questions like this , which makes sense that the error raises because the port is busy but I'm asking about two separate systems. Update : From comments I realized that I wasn't clear enough so let me say it again with an example: I just care about devices' "source" port. Assume I have two laptops ( 192.168.2.10 and 192.168.2.11 ), both of them are downloading same file from the same server somewhere in the internet. Each of them has an operating system which generates a random port so the source IP and source port would be something like: 192.168.2.10:6321 and 192.168.2.11:7132 . I thought that in NAT, router will set it's (public)IP address along with the ports from laptops so if the public IP address of the home router is 65.82.23.32 , these two packages will get these source IP and source port respectively : 65.82.23.32:6321 and 65.82.23.32:7132 . Now when the response gets back, router can figure out which packet is for which laptop from the port numbers right ? so far so good. But what happens if accidentally or intentionally two laptops generate exactly the same source port? for example : 192.168.2.10:6000 and 192.168.2.11:6000 . Now router will set it's public IP address as the source IP address just as before, but now if it tries to use those port numbers, those packages will have exactly the same source IP and source port number, like : 65.82.23.32:6000 and 65.82.23.32:6000 . This is where I got confused that when the response comes back, how router can decide which packet is for which laptop ? After @mfinni's answer, I noticed that this is not how PAT works! The NAT device (here router) will assign unique ports to each individual laptop(private IP address), then the packets sent out with these unique ports(for example 7777 and 7778 ). So when response gets back, it's clear that which packet is for which laptop from the ports, then router will convert these 65.82.23.32:7777 , 65.82.23.32:7778 to --> 192.168.2.10:6000 , 192.168.2.11:6000 respectively. | A TCP connection (which underlies HTTP and many other protocols) is uniquely (at a given point in time) defined by 4 parameters: The local IP The local port The remote IP The remote port Even if you make the same request twice simultaneously from the same computer, even with the two IP addresses identical and the destination port identical, the source port will be different. Likewise, if you have two requests coming from two devices going through the same NAT device, the NAT device will use different source ports. Depending on the device, it may either keep the original source ports (and only change one if there's a conflict), or always assign a new source port independently of the original source port. 
The NAT device will then keep for each connection a mapping in its translation table which states that external connection (external IP, external source port, destination IP, destination port) is mapped to internal connection (internal host IP, internal host source port, destination IP, destination port). | {
"source": [
"https://serverfault.com/questions/1074627",
"https://serverfault.com",
"https://serverfault.com/users/675167/"
]
} |
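On a Linux-based NAT box the translation table described in the answer above can be inspected directly, assuming the conntrack-tools package is installed (addresses and ports below are illustrative, loosely following the question's example):
# list tracked TCP connections
conntrack -L -p tcp
# an entry shows both directions, roughly (abridged):
# src=192.168.2.10 dst=203.0.113.5 sport=6000 dport=443  src=203.0.113.5 dst=65.82.23.32 sport=443 dport=7777
The second tuple is the reply direction, where the translated source port (7777 here) is what the router matches on when the response comes back.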
1,079,199 | If I try to access our HTTPS server that has certbot-issued certificate from debian 9, I get the following error: # curl -v https://hu.dbpedia.org/
* Trying 195.111.2.82...
* TCP_NODELAY set
* Connected to hu.dbpedia.org (195.111.2.82) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS alert, Server hello (2):
* SSL certificate problem: certificate has expired
* Curl_http_done: called premature == 1
* stopped the pause stream!
* Closing connection 0
curl: (60) SSL certificate problem: certificate has expired However, if I try the same command from debian 10, it succeeds. I tried to simply copy all ca-certificates from a debian 10 VM to the debian 9 VM (to /usr/local/share/ca-certificates) with rsync, and then ran update-ca-certificates which seemingly added 400+ certificates. Unfortunately, it did not help. This is no wonder, as it seems that there are the same certificates on both debian 9 and 10, apparently. My question is : How can I access sites with certbot certificates from debian 9 machines without ignoring certificate verification altogether | First off, Debian 9 is EOL. But as the clients may not be under your control, you may of course want to try to cater to them in this breakage. I assume that while the question only mentions certbot , that it's really specifically about Letsencrypt. (The tool certbot itself is an ACME protocol client which is also used with other ACME-based CAs, so there is some room for confusion here.) The problem at hand would appear to be the combination of: The old Letsencrypt root ("DST Root CA X3") has expired The new default LE chain tries to be "extra compatible" by presenting an optional extension of the chain where the new root ("ISRG Root X1") is presented as a cross-signed intermediate for the old root (as very old Android versions still accept the expired root, but do not have the new root) Openssl 1.0 has a bug causing it to just try the first chain it sees and if it doesn't like that, it doesn't look at any other possibilities (ie, the shorter new chain ending at X1 vs the longer "compatibility" extension of that chain going through X1 to X3). libcurl3 on Debian 9 is linked to libssl 1.0 If you instead present the new LE certificate chain that does not try to be extra compatible, just ending at the new root (X1), it allows libssl 1.0 to work (but then you lose compatibility with really old Android instead). Other than that, other CAs (ACME or otherwise) is probably the option to consider. | {
"source": [
"https://serverfault.com/questions/1079199",
"https://serverfault.com",
"https://serverfault.com/users/79123/"
]
} |
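To see which chain a server actually sends, and to switch to the shorter chain discussed in the answer above, something along these lines can be used; certbot's --preferred-chain option is assumed to be available in the installed version, so treat this as a sketch rather than a recipe:
# inspect the chain presented by the server (look at the "Certificate chain" section)
openssl s_client -connect hu.dbpedia.org:443 -servername hu.dbpedia.org </dev/null
# on the web server, re-issue using the short chain that ends at ISRG Root X1
certbot renew --preferred-chain "ISRG Root X1"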
1,080,911 | We are working on a software solution and some of our providers are really CentOS 7 centered. CentoS 7 will continue to produce through the remainder of the RHEL 7 life cycle, which will end sometime in 2024. CentOS 8 will receive updates till December 2021. CentOS Stream was announced by Red Hat but is apparently not a replacement for CentOS. I am not very into diving in this if options are uncertain in the near future with CentOS. Question: what are the options for CentOS 7 users when RHEL 7 reach its end of life and users need a production ready server? | If RHEL binary compatibility is not strictly required and if using in-tree kernel modules only (i.e.: no out-of-tree kmods are required), CentOS Stream should remain a viable option. Otherwise you can use one of the new RHEL clones, such as AlmaLinux , RockyLinux or even Oracle Unbreakable Linux (in this case, be sure to select the RHEL-compatible kernel rather than its own customized kernel). Personal note: I am using RockyLinux with no issues at all (I migrated from a CentOS 8 box with the migrate2rocky script ) but, as always, your mileage may vary. Finally, if you are sure to need fewer than 16 RHEL instances, you can use plain simple Red Hat Enterprise Linux from Red Hat's free tier (with no support, obviously). EDIT: as wisely suggested in other answers, migrating to a different distributions as Debian, Ubuntu, etc. is a very reasonable approach. I did the same (rebuilding with latest Ubuntu LTS) in environments where RHEL compatibility was not required. Debian and Ubuntu officially support in-place upgrade paths while most RHEL clones only have unofficial support - RHEL itself and Oracle Unbreakable Linux being the exceptions, with fully supported leapp upgrades - but things are changing now . | {
"source": [
"https://serverfault.com/questions/1080911",
"https://serverfault.com",
"https://serverfault.com/users/543723/"
]
} |
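As a sketch of the CentOS-to-Rocky path mentioned in the answer above (the script name and its -r flag are quoted from memory; verify both against the upstream rocky-linux/rocky-tools repository and take a full backup or snapshot first):
# on the CentOS 8 machine, with the migrate2rocky script downloaded from the upstream repository
sudo bash migrate2rocky.sh -r
sudo reboot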
1,080,950 | I'm facing an extremely weird issue with one server: it randomly freezes/hangs with no output on the console, does not respond to keyboard shortcuts, and requires a cold boot. When it comes back from the cold boot there are no errors at all on the boot screen. It is not freezing under heavy load: CPU is around 9-20% at the time of the crash, with a load average around 2-5 (12-core CPU)
and 128 GB RAM. We checked the logs and nothing shows up, no kernel panics or anything else that relates to the issue. In the logs around all of the freezes we do see the normal OOM reaper killing PHP processes (users reaching their limits), nothing too abusive, but it is always around an OOM event.
Sometimes when the server freezes the log shows the current time, and sometimes a few lines from an older date appear after the time of the crash, and then it stops. Nothing in the logs points to a software cause or heavy load, just normal operation. This is an upgraded machine replacing an old one that was stable for years.
The freezes are random: it could be after a week of uptime, or two days, or three weeks, etc. We also tried to extract a vmcore dump of a freeze, but nothing is captured there. The server just freezes with no screen output; it is still running but not pingable, SSH is unreachable, and the KVM console also shows no output at all. Could it be related to faulty hardware? My suspicion is faulty RAM. I'm extremely lost with this issue.
Thanks | If RHEL binary compatibility is not strictly required and if using in-tree kernel modules only (i.e.: no out-of-tree kmods are required), CentOS Stream should remain a viable option. Otherwise you can use one of the new RHEL clones, such as AlmaLinux , RockyLinux or even Oracle Unbreakable Linux (in this case, be sure to select the RHEL-compatible kernel rather than its own customized kernel). Personal note: I am using RockyLinux with no issues at all (I migrated from a CentOS 8 box with the migrate2rocky script ) but, as always, your mileage may vary. Finally, if you are sure to need fewer than 16 RHEL instances, you can use plain simple Red Hat Enterprise Linux from Red Hat's free tier (with no support, obviously). EDIT: as wisely suggested in other answers, migrating to a different distributions as Debian, Ubuntu, etc. is a very reasonable approach. I did the same (rebuilding with latest Ubuntu LTS) in environments where RHEL compatibility was not required. Debian and Ubuntu officially support in-place upgrade paths while most RHEL clones only have unofficial support - RHEL itself and Oracle Unbreakable Linux being the exceptions, with fully supported leapp upgrades - but things are changing now . | {
"source": [
"https://serverfault.com/questions/1080950",
"https://serverfault.com",
"https://serverfault.com/users/884227/"
]
} |
1,081,024 | Within my laptop PC, I set a systemd service that make a OpenVPN connection to my home, and let it automatically start on boot, so that I can access my home server anywhere. The trouble is that when I'm home already, it still connects to VPN, and confuse the route table of the laptop, therefore I can't access the server when I'm at home. Is there a way, I can let a systemd service start conditionally? Thanks! | If RHEL binary compatibility is not strictly required and if using in-tree kernel modules only (i.e.: no out-of-tree kmods are required), CentOS Stream should remain a viable option. Otherwise you can use one of the new RHEL clones, such as AlmaLinux , RockyLinux or even Oracle Unbreakable Linux (in this case, be sure to select the RHEL-compatible kernel rather than its own customized kernel). Personal note: I am using RockyLinux with no issues at all (I migrated from a CentOS 8 box with the migrate2rocky script ) but, as always, your mileage may vary. Finally, if you are sure to need fewer than 16 RHEL instances, you can use plain simple Red Hat Enterprise Linux from Red Hat's free tier (with no support, obviously). EDIT: as wisely suggested in other answers, migrating to a different distributions as Debian, Ubuntu, etc. is a very reasonable approach. I did the same (rebuilding with latest Ubuntu LTS) in environments where RHEL compatibility was not required. Debian and Ubuntu officially support in-place upgrade paths while most RHEL clones only have unofficial support - RHEL itself and Oracle Unbreakable Linux being the exceptions, with fully supported leapp upgrades - but things are changing now . | {
"source": [
"https://serverfault.com/questions/1081024",
"https://serverfault.com",
"https://serverfault.com/users/508420/"
]
} |
1,081,032 | I was disabling TLS 1.0 and 1.1 (finally) which worked wonderfully for all but one user. When this user tries to connect to an SQLExpress server it forces her to use TLS 1.0 and refuses 1.2 if I or any other user attempts the same connection the communication is in TLS 1.2 with no problems. This users computer is running Windows 20h2 and the server is running windows Server 2016 Datacenter. | If RHEL binary compatibility is not strictly required and if using in-tree kernel modules only (i.e.: no out-of-tree kmods are required), CentOS Stream should remain a viable option. Otherwise you can use one of the new RHEL clones, such as AlmaLinux , RockyLinux or even Oracle Unbreakable Linux (in this case, be sure to select the RHEL-compatible kernel rather than its own customized kernel). Personal note: I am using RockyLinux with no issues at all (I migrated from a CentOS 8 box with the migrate2rocky script ) but, as always, your mileage may vary. Finally, if you are sure to need fewer than 16 RHEL instances, you can use plain simple Red Hat Enterprise Linux from Red Hat's free tier (with no support, obviously). EDIT: as wisely suggested in other answers, migrating to a different distributions as Debian, Ubuntu, etc. is a very reasonable approach. I did the same (rebuilding with latest Ubuntu LTS) in environments where RHEL compatibility was not required. Debian and Ubuntu officially support in-place upgrade paths while most RHEL clones only have unofficial support - RHEL itself and Oracle Unbreakable Linux being the exceptions, with fully supported leapp upgrades - but things are changing now . | {
"source": [
"https://serverfault.com/questions/1081032",
"https://serverfault.com",
"https://serverfault.com/users/615663/"
]
} |
1,084,176 | Do I need a GPU on a text and console only server? No GPU as in no iGPU and dGPU. Im going to be using SSH, so I dont need a display out. Im using Linux, but the OS shouldn't affect the results | You do not need one, but you will be very hard pressed to get a proper server WITHOUT one, even if it is a low end one. This is simply a matter of what people OFFER — not what you want. Server boards mostly are prepared for attaching a screen, and if anything you will value that the next time you must upgrade the BIOS. SSH will not work for that. So, you get a server that not only has a GPU, but also has IPMI functionality allowing you to see both a "virtual screen" as well as using this screen over HTML (mostly) to allow you to see and configure the BIOS. You may say "you use Linux" — but somehow you must install it, and sometimes you must update the BIOS, and things being as they are, those operations are not happening via Linux over SSH. Also at times of failure you may realize "using SSH" means "not having any idea how to get to the server as the SSH interface is failing for whatever reason". At these points a non-networked access (via a browser on a separate management system) is a lifesaver. It has been many years since I have seen a proper server that does not have IPMI and a GPU. You literally have to look into the garbage bin these days — if anyone has something without that is proper please inform me, but I am mostly looking at SuperMicro and am not aware of any board that would fit the OP's demands. | {
"source": [
"https://serverfault.com/questions/1084176",
"https://serverfault.com",
"https://serverfault.com/users/808837/"
]
} |
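To illustrate the out-of-band access the answer above describes, a few typical ipmitool calls against a BMC are sketched below (the BMC address and credentials are placeholders):
# power state and remote power control via the BMC, no OS or SSH involved
ipmitool -I lanplus -H 192.0.2.50 -U admin -P secret chassis power status
ipmitool -I lanplus -H 192.0.2.50 -U admin -P secret chassis power cycle
# text console redirection (Serial over LAN), useful exactly when SSH is unreachable
ipmitool -I lanplus -H 192.0.2.50 -U admin -P secret sol activate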
1,084,317 | Linux systems sometimes remount the root file system as read-only, e.g. if there's an I/O error. I have a machine that becomes useless when this happens, and I end up rebooting it manually. Is there a way to make Linux just automatically reboot when this happens? A read-only mount is useless to me. | I deduce you are using ext3 or ext4 as the file system. If so, you can mount it with the errors=panic option and configure watchdog to reboot your system in case a panic happens. While more complex than roelvanmeer's answer (which I upvoted), it has the added bonus of working for all panic-level kernel crashes. As suggested by NikitaKipriyanov , setting the panic=5 kernel boot option can be a simpler alternative to watchdog (which has more configuration options but is slightly more complex as a result). | {
"source": [
"https://serverfault.com/questions/1084317",
"https://serverfault.com",
"https://serverfault.com/users/67975/"
]
} |
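Pulling the pieces of the answer above together into one hedged sketch (the device name and the 5-second timeout are placeholders):
# /etc/fstab: make an I/O error on the root fs trigger a kernel panic
/dev/sda1  /  ext4  defaults,errors=panic  0  1
# make any panic reboot the machine after 5 seconds instead of hanging,
# either as the kernel boot parameter panic=5 or at runtime:
sysctl -w kernel.panic=5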
1,084,322 | I'm migrating a site currently hosted on github pages to AWS. The site in question is on an aws instance, with an elastic ip of http://13.40.0.39/ . The elastic IP does correctly show the site. I want the domain www.whitewaterwriters.com to show the site. I created a hosted zone in root 53. I changed the nameservers at my domain provider (qiq) and confirmed it later with whois: they have correctly changed. As per the screenshot I created a record to go to the correct IP address. However, when I access the site I get "Hmm, We're having trouble finding that site' and using the Dig command I get server fail. I tried 'test record' in AWS and got: Error occurred
Bad request.
(InvalidInput 400: Record name 'https://www.whitewaterwriters.com/' is not valid for hosted zone: https\072\057\057www.whitewaterwriters.com\057.) My question is: what causes this particular error in AWS (which seems not to currently return any SE results)? And more generally, what would be the next step for debugging DNS issues in AWS? | I deduce you are using ext3 or ext4 as the file system. If so, you can mount it with the errors=panic option and configure watchdog to reboot your system in case a panic happens. While more complex than roelvanmeer's answer (which I upvoted), it has the added bonus of working for all panic-level kernel crashes. As suggested by NikitaKipriyanov , setting the panic=5 kernel boot option can be a simpler alternative to watchdog (which has more configuration options but is slightly more complex as a result). | {
"source": [
"https://serverfault.com/questions/1084322",
"https://serverfault.com",
"https://serverfault.com/users/135245/"
]
} |
1,086,065 | I have read about security vulnerabilities related to Log4j. How do I check if Log4j is installed on my server?
My specific servers use Ubuntu 18.04.6 LTS . I have installed many third-party packages and maybe some of them contain it. Is there a command to run on my server to check if Log4j is installed? | Try this script to get a hint: echo "checking for log4j vulnerability..."
OUTPUT="$(locate log4j|grep -v log4js)"
if [ "$OUTPUT" ]; then
echo "[WARNING] maybe vulnerable, those files contain the name:"
echo "$OUTPUT"
fi
OUTPUT="$(dpkg -l|grep log4j|grep -v log4js)"
if [ "$OUTPUT" ]; then
echo "[WARNING] maybe vulnerable, dpkg installed packages:"
echo "$OUTPUT"
fi
if [ "$(command -v java)" ]; then
echo "java is installed, so note that Java applications often bundle their libraries inside jar/war/ear files, so there still could be log4j in such applications."
fi
echo "If you see no output above this line, you are safe. Otherwise check the listed files and packages." Make sure your locate database is up to date with updatedb . Or better check and run the enhanced script from GitHub which also searches inside packed Java files . Run in one line with wget https://raw.githubusercontent.com/rubo77/log4j_checker_beta/main/log4j_checker_beta.sh -q -O - | bash I am not sure if there could be compiled Java Programs running on the server without java being installed though? Or even compiled versions where the source files aren't even found inside packed archives any more? There is also a lot development on GitHub , where you find attacks and countermeasures. This takes to much time for me. I am looking for someone I can transfer the ownership of the repository on GitHub | {
"source": [
"https://serverfault.com/questions/1086065",
"https://serverfault.com",
"https://serverfault.com/users/312300/"
]
} |
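As a further hedged sketch of the "search inside packed Java files" idea mentioned in the answer above (the class name is the well-known JndiLookup class shipped by vulnerable log4j 2.x builds; the scan root and lack of exclusions are assumptions):
# list every .jar on the filesystem that bundles JndiLookup.class
find / -type f -name '*.jar' 2>/dev/null | while read -r jar; do
  unzip -l "$jar" 2>/dev/null | grep -q 'JndiLookup.class' && echo "[WARNING] $jar bundles JndiLookup.class"
done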
1,086,113 | Does the Apache webserver (apache2) use log4j? I have Apache2 2.4.38 (Debian) installed on Raspberry Pi OS (64bit) and found some strange records in my log regarding CVE-2021-44228 from kryptoslogic-cve-2021-44228.com (honeypot/scanner), dataastatistics.com (offline & malicious?) and a8fvkc.dnslog.cn (I don't know what this is). What should I do now? (1) nothing, because apache2 is not affected by CVE-2021-44228; (2) format everything and wait for a few days before I install a patched version; (3) "check" if there are any new .class files and, if there are not, continue operation (and use manual patches like log4j2.formatMsgNoLookups = TRUE ); (4) something else. Logs: 139.59.99.80 - - [12/Dec/2021:00:34:47 +0100] "GET / HTTP/1.1" 301 512 "-" "${jndi:ldap://http80useragent.kryptoslogic-cve-2021-44228.com/http80useragent}"
139.59.99.80 - - [12/Dec/2021:00:34:48 +0100] "GET / HTTP/1.1" 200 5932 "http://79.232.126.49/" "${jndi:ldap://http80useragent.kryptoslogic-cve-2021-44228.com/http80useragent}"
139.59.99.80 - - [12/Dec/2021:01:51:38 +0100] "GET /$%7Bjndi:ldap://http80path.kryptoslogic-cve-2021-44228.com/http80path%7D HTTP/1.1" 301 654 "-" "Kryptos Logic Telltale"
139.59.99.80 - - [12/Dec/2021:01:51:39 +0100] "GET /$%7bjndi:ldap:/http80path.kryptoslogic-cve-2021-44228.com/http80path%7d HTTP/1.1" 404 8456 "http://79.232.126.49/$%7Bjndi:ldap://http80path.kryptoslogic-cve-2021-44228.com/http80path%7D" "Kryptos Logic Telltale"
139.59.99.80 - - [12/Dec/2021:01:51:39 +0100] "GET /$%7bjndi:ldap:/http80path.kryptoslogic-cve-2021-44228.com/http80path%7d HTTP/1.1" 404 8456 "http://79.232.126.49/$%7Bjndi:ldap://http80path.kryptoslogic-cve-2021-44228.com/http80path%7D" "Kryptos Logic Telltale"
139.59.99.80 - - [11/Dec/2021:18:35:25 +0100] "GET / HTTP/1.1" 200 5932 "-" "${jndi:ldap://http443useragent.kryptoslogic-cve-2021-44228.com/http443useragent}"
139.59.99.80 - - [11/Dec/2021:20:13:11 +0100] "GET /$%7Bjndi:ldap://http443path.kryptoslogic-cve-2021-44228.com/http443path%7D HTTP/1.1" 404 8456 "-" "Kryptos Logic Telltale"
191.101.132.152 - - [11/Dec/2021:22:55:33 +0100] "GET /?api=${jndi:ldap://[*****mydomain****].774dda06.dataastatistics.com/help} HTTP/1.1" 400 5115 "${jndi:ldap://[*****mydomain****].774dda06.dataastatistics.com/help}" "${jndi:ldap://[*****mydomain****].774dda06.dataastatistics.com/help}"
191.101.132.152 - - [11/Dec/2021:22:55:33 +0100] "GET /?api=${jndi:ldap://[*****mydomain****].774dda06.dataastatistics.com/help} HTTP/1.1" 400 5115 "${jndi:ldap://[*****mydomain****].774dda06.dataastatistics.com/help}" "${jndi:ldap://[*****mydomain****].774dda06.dataastatistics.com/help}"
191.101.132.152 - - [11/Dec/2021:22:55:33 +0100] "GET /?api=${jndi:ldap://[*****mydomain****].774dda06.dataastatistics.com/help} HTTP/1.1" 400 5115 "${jndi:ldap://[*****mydomain****].774dda06.dataastatistics.com/help}" "${jndi:ldap://[*****mydomain****].774dda06.dataastatistics.com/help}"
191.101.132.152 - - [11/Dec/2021:22:55:34 +0100] "POST /api/v2 HTTP/1.1" 400 5115 "${jndi:ldap://[*****mydomain****].774dda06.dataastatistics.com/help}" "${jndi:ldap://[*****mydomain****].774dda06.dataastatistics.com/help}"
191.101.132.152 - - [11/Dec/2021:22:55:34 +0100] "POST /api/v2 HTTP/1.1" 400 5115 "${jndi:ldap://[*****mydomain****].774dda06.dataastatistics.com/help}" "${jndi:ldap://[*****mydomain****].774dda06.dataastatistics.com/help}"
137.184.106.119 - - [10/Dec/2021:20:38:18 +0100] "GET / HTTP/1.1" 301 568 "-" "${jndi:ldap://a8fvkc.dnslog.cn/a}"
137.184.106.119 - - [10/Dec/2021:20:38:20 +0100] "GET / HTTP/1.1" 200 6576 "-" "${jndi:ldap://a8fvkc.dnslog.cn/a}"
137.184.106.119 - - [10/Dec/2021:20:38:21 +0100] "GET /favicon.ico HTTP/1.1" 200 7103 "-" "${jndi:ldap://a8fvkc.dnslog.cn/a}" | The Apache HTTP Server is not written in Java, it does not use the log4j library, so it is not affected by CVE-2021-44228. Your log files are from the access log, they show people scanning for the log4j vulnerability. | {
"source": [
"https://serverfault.com/questions/1086113",
"https://serverfault.com",
"https://serverfault.com/users/944484/"
]
} |
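If you want to see how much of this scanning traffic is hitting the server described above, a simple grep of the access log is enough (the Debian/Raspberry Pi OS default log path is an assumption here):
# count the ${jndi:...} probes and list the source IPs
grep -c 'jndi:' /var/log/apache2/access.log
grep 'jndi:' /var/log/apache2/access.log | awk '{print $1}' | sort | uniq -c | sort -rn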
1,086,119 | A user returns after a couple of months; the AD profile was deactivated and the Microsoft 365 license was canceled. It doesn't matter that mails will be gone since it's been more than 30 days, but what do I need to do to restore email functionality? What I have tried:
Activated user in AD.
Given the user a new 365 license in 365 admin center.
Ran Enable-Mailbox for the user and tried migrating the mailbox, but got an error about the user not being a mailbox. In the on-prem EAC the mailbox shows up; in EAC online it doesn't. The Microsoft 365 admin center shows the user but no mailbox. What should I do? | The Apache HTTP Server is not written in Java, it does not use the log4j library, so it is not affected by CVE-2021-44228. Your log files are from the access log, they show people scanning for the log4j vulnerability. | {
"source": [
"https://serverfault.com/questions/1086119",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
1,100,250 | Historically, I mostly used 0.0.0.0/0 for "match every IP address". Recently, I saw a 0.0.0.0/1 subnet filter. What is the difference between 0.0.0.0/0 and 0.0.0.0/1 and what's the practical use of 0.0.0.0/1 ? | The 0.0.0.0/0 matches every IP address, whereas 0.0.0.0/1 only matches half of them ( 0.0.0.0-127.255.255.255 ) and requires 128.0.0.0/1 as its pair to match the rest ( 128.0.0.0-255.255.255.255 ). In basic routing, the smallest available subnet containing the IP address takes precedence . This rule comes from RFC 4632, 5.1 . It is typical that there will be overlapping networks as, for example, 192.168.1.0/24 is part of 192.168.0.0/16 , which is – just like any IP address – part of 0.0.0.0/0 . Therefore, by splitting the 0.0.0.0/0 into smaller chunks one can constrain the interface to take precedence over any other interface that has default route 0.0.0.0/0 , without playing with metric values. This is a common technique with VPNs that would not want data to bypass the tunnel. The same logic is the reason you could still use resources from your local subnet (e.g., /24 ) while the VPN is on – if no other methods are used to enforce everything gets tunneled. Likewise, the entire IPv4 address space could be divided into even smaller subnets, e.g. in four chunks: 0.0.0.0/2 ( 0.0.0.0-63.255.255.255 ) 64.0.0.0/2 ( 64.0.0.0-127.255.255.255 ) 128.0.0.0/2 ( 128.0.0.0-191.255.255.255 ) 192.0.0.0/2 ( 192.0.0.0-255.255.255.255 ) Or eight with 0.0.0.0/3 , 32.0.0.0/3 , 64.0.0.0/3 , 96.0.0.0/3 , 128.0.0.0/3 , 160.0.0.0/3 , 192.0.0.0/3 & 224.0.0.0/3 , etc., etc. | {
"source": [
"https://serverfault.com/questions/1100250",
"https://serverfault.com",
"https://serverfault.com/users/207252/"
]
} |
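A hedged sketch of the VPN technique described in the answer above, on Linux (the tunnel interface name and gateway address are placeholders):
# both halves are more specific than 0.0.0.0/0, so they take precedence without touching metrics
ip route add 0.0.0.0/1 via 10.8.0.1 dev tun0
ip route add 128.0.0.0/1 via 10.8.0.1 dev tun0
This is essentially what OpenVPN's redirect-gateway def1 option does.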
1,100,263 | We launch workloads in GCE using Managed Instance Groups (MIG), which oversee the lifecycle and health of these VMs. New VMs are provisioned with a startup script (bash), which, on rare occasions, fails in some way. However, the VM is still able to start, launch its workload, and pass its health checks. Is there some setting in GCE / MIGs that says "if the init script does not execute successfully, kill the VM, and recreate it"? I could shut down if an error is trapped, e.g.: ...
exception() {
echo 'startup script error; shutting down!'
shutdown -h now
}
trap 'exception' ERR
... But was hoping there was a more managed option. | The 0.0.0.0/0 matches every IP address, whereas 0.0.0.0/1 only matches half of them ( 0.0.0.0-127.255.255.255 ) and requires 128.0.0.0/1 as its pair to match the rest ( 128.0.0.0-255.255.255.255 ). In basic routing, the smallest available subnet containing the IP address takes precedence . This rule comes from RFC 4632, 5.1 . It is typical there will be overlapping networks as, for example, 192.168.1.0/24 is part of 192.168.0.0/16 , which is – just like any IP address – part of 0.0.0.0/0 . Therefore, by splitting the 0.0.0.0/0 into smaller chunks one can constrain the interface to take precedence over any other interface that has default route 0.0.0.0/0 , without playing with metric values. This is a common technique with VPNs that would not want data to bypass the tunnel. The same logic is the reason you could still use resources from your local subnet (e.g., /24 ) while the VPN is on – if no other methods are used to enforce everything gets tunneled. Likewise, the entire IPv4 address space could be divided into even smaller subnets, e.g. in four chunks: 0.0.0.0/2 ( 0.0.0.0-63.255.255.255 ) 64.0.0.0/2 ( 64.0.0.0-127.255.255.255 ) 128.0.0.0/2 ( 128.0.0.0-191.255.255.255 ) 192.0.0.0/2 ( 192.0.0.0-255.255.255.255 ) Or eight with 0.0.0.0/3 , 32.0.0.0/3 , 64.0.0.0/3 , 96.0.0.0/3 , 128.0.0.0/3 , 160.0.0.0/3 , 192.0.0.0/3 & 224.0.0.0/3 , etc., etc. | {
"source": [
"https://serverfault.com/questions/1100263",
"https://serverfault.com",
"https://serverfault.com/users/472219/"
]
} |
1,100,677 | I work in a small IT department in a medium-sized enterprise (up to 200 users). Thanks to home office and our growing field workforce, it has become more challenging to supervise and manage our client PCs. One option we figured out to eliminate all our worries is to use reboot-to-restore software like Deep Freeze or HDGuard. During our investigations, we found use-cases for that kind of software in an educational or public environment but next to no usage in corporate IT. Why is that so? What are the downsides of reboot-to-restore software, specifically in the context of a corporate IT environment? | Mostly you see that kind of software used on public access computers - schools, kiosks, things like that where people use the computer for a limited amount of time and don't want to leave any information behind when they are done. For a typical office computer, it would have a ton of downsides:
- User profiles get regenerated every reboot, which means long logins
- Outlook is going to download your entire mailbox on every boot
- Constant 2FA prompts, because all the indicators that you regularly log in from this computer get removed
- Users can't customize any settings (which might be a good thing in some situations, but not all)
- Can't save passwords, bookmark websites, or stay logged into anything
- Anything people save is gone, unless the software lets you unfreeze certain folders. Even then, someone's going to lose their work occasionally
Most of those are advantages on computers that are used by dozens of people a day and only used once, but big disadvantages for someone's daily use computer. There are some use cases for it, but overall it tends to cause more problems than it solves. How would you feel if it was your office computer that reset to its initially imaged settings every time it reboots? If you really want to give it a try, start with doing it to the IT department's computers and see how well it works. | {
"source": [
"https://serverfault.com/questions/1100677",
"https://serverfault.com",
"https://serverfault.com/users/966031/"
]
} |
1,104,054 | (If there is a better place to ask this, let me know.) A lot of email servers limit the combined size of files attached to incoming emails. Like, for example, maybe some email server disallows messages larger than 20MB; it would fail if an attachment is 22MB, and it would also fail if one attachment is 10MB and a second attachment is 12MB. Most email servers implement some sort of limit like this example, but the total allowed size varies depending on which. Here's the part I don't understand though - does that size limit only apply to attachments? Or does it apply to the total size of the email? Like, let's say that a hypothetical email has an attachment that is exactly 19.98MB. But, the body of the email message is 0.03MB. Assuming that the receiving email server has a limit of 20MB, would that arrive? Or fail to arrive? Or maybe whether that particular example succeeds or fails depends on which email server is hypothetically receiving it? | Technically, there is no difference between the message body and the attachments. Both the body and the attachments are equally parts of a multipart or multipurpose email message, as defined in RFC 2045 . The message size limits typically apply to the total length of the message including all the parts and the headers. Such a limit can also be advertised with a Message Size Declaration reply to the EHLO command, and RFC 1870 also provides a definition for the size limit in section 5 : The message size is defined as the number of octets, including
CR-LF pairs, but not the SMTP DATA command's terminating dot or
doubled quoting dots, to be transmitted by the SMTP client after
receiving reply code 354 to the DATA command. The fixed maximum message size is defined as the message size of
the largest message that a server is ever willing to accept. An
attempt to transfer any message larger than the fixed maximum
message size will always fail. This is, e.g., implemented in both Postfix and Exim with message_size_limit configuration parameter; Microsoft Exchange has multiple types of message size limits . Also notice that binary attachments must be converted to 7-bit ASCII text. Base64 encoding is used for this, causing
37% overhead (33% by the encoding itself; 4% more by the inserted line breaks; RFC 2045, 6.8 ). For this reason you may not be able to send attachments even though they are smaller than the limit. | {
"source": [
"https://serverfault.com/questions/1104054",
"https://serverfault.com",
"https://serverfault.com/users/555248/"
]
} |
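A quick way to see the base64 growth described in the answer above for yourself (the 1,000,000-byte test size is arbitrary):
# 1,000,000 random bytes become roughly 1.35 MB once base64-encoded (4/3 expansion plus line breaks)
head -c 1000000 /dev/urandom | base64 | wc -c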
1,105,616 | I enabled DNSSEC on my primary domain about a week ago. It's not a major website or anything -- just my personal domain name that I use for email and the like (TLD: com ; DNSSEC algorithm 13; authoritative DNS provider: Cloudflare). Over the last 24 hours, the domain has received 15,605 queries. In response, it has dished out 15,601 NOERROR response codes and a total of 4 NXDOMAIN response codes. How are NXDOMAIN responses still possible? What could be generating them? Personally I cannot trigger one no matter what query I attempt, and my understanding is that DNSSEC should, at least in theory, eliminate this response code entirely. Am I incorrect? | TL;DR The lack of NXDOMAIN responses for Cloudflare hosted domains is a consequence of their specific DNSSEC implementation (using so called "black lies") and not a design of the DNSSEC protocol itself; hence observations will be different with other providers doing DNSSEC. Initial questions How are NXDOMAIN responses still possible? Why wouldn't they be possible? DNSSEC or not, if you query for a name that doesn't exist, you get NXDOMAIN reply back. my understanding is that DNSSEC should, at least in theory, eliminate this response code entirely Why? And from where do you get that feeling? Live example with a DNSSEC enabled domain icann.org is DNSSEC enabled right now. If I query for a name that does not exist under it, I get a NXDOMAIN : $ dig NS icann.org +short
b.icann-servers.net.
c.icann-servers.net.
ns.icann.org.
a.icann-servers.net.
$ dig @a.icann-servers.net does-not-exist-foobar.icann.org
; <<>> DiG 9.18.4 <<>> @a.icann-servers.net does-not-exist-foobar.icann.org
; (1 server found)
;; global options: +cmd
;; Sending:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 38891
;; flags: rd ad; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 98228e9e0c5ef4e6
;; QUESTION SECTION:
;does-not-exist-foobar.icann.org. IN A
;; QUERY SIZE: 72
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 38891
^^^^^^^^ DNSSEC is an extension of DNS in the sense that for a non validating resolver, answers are not different, even if the domain is DNSSEC enabled. So all return codes work in the same way. Explanations about NSEC/NSEC3/RRSIG What it does change, that you can see if adding +dnssec to dig (which doesn't mean "activate DNSSEC" but means "display DNSSEC related records - those are RRSIG , NSEC and NSEC3 - as they are normally not displayed), is that the AUTHORITY section in case of the NXDOMAIN gives further explanations with NSEC or NSEC3 records: ;; AUTHORITY SECTION:
icann.org. 1h IN SOA sns.dns.icann.org. noc.dns.icann.org. (
2022070670 ; serial
10800 ; refresh (3 hours)
3600 ; retry (1 hour)
1209600 ; expire (2 weeks)
3600 ; minimum (1 hour)
)
j93jujiqg7ge3616mub4r5bei85poet9.icann.org. 1h IN NSEC3 1 0 5 9714B5ACB8F7A193 (
J9HKD4G746GMUTGGUV6AM37GSJAD6NRR
A NS SOA MX TXT AAAA RRSIG DNSKEY NSEC3PARAM )
tdr1at6eafsrigdrlj6atpb2dge2aof0.icann.org. 1h IN NSEC3 1 0 5 9714B5ACB8F7A193 (
TE4FB4PVMU1GQNPG9P01ID48U1BTN2G4
A RRSIG )
lsrp57e1pe333jadkpdgh3v1i8vs80rd.icann.org. 1h IN NSEC3 1 0 5 9714B5ACB8F7A193 (
LT4I8S7OTQ7ACOSF73M7LHCIC7C1J17I
A RRSIG )
icann.org. 1h IN RRSIG SOA 7 2 3600 (
20220804192816 20220714153322 3425 icann.org.
NMcD1TeozFyCRDlmqFMoM/V/VmWQUmRNIH0/igPzdj2S
hemnQHeXDOudBxsUgE/DpSV4KHsgqLQKdgbQruqCO7Dt
iLK1bCLBZs38LdOadyJs3jWjjuJ9+mEnLXTsqMeeMllw
YFL6pPyo1TfChZm05KJ+DJNw0SHJw3MWBRtV4iI= )
j93jujiqg7ge3616mub4r5bei85poet9.icann.org. 1h IN RRSIG NSEC3 7 3 3600 (
20220724054620 20220703065347 58935 icann.org.
gmo0VP8k9Li9lutMA3uTrMfABMmFBN23GonYo72Twk9l
wGYqFvlU/naN0KKtEd3g+zOiYB0Jb1J1270Dveew/vYa
hTmeMYrwUbEt9gZYCvi74zm6Ss0cQ8uxJ5bZw70nZ7oU
LAtWYVGJMgupfjtne6021AJoLNB1CaMhFwo+TPo= )
tdr1at6eafsrigdrlj6atpb2dge2aof0.icann.org. 1h IN RRSIG NSEC3 7 3 3600 (
20220724101659 20220703045347 58935 icann.org.
hGsUeE4di9yFuDMq8ly1YQEs1OvOFAHVctOQrs6Poixl
STqcErjC20V2CI0YApX6SbiI8AP/dqMjBm3fZh91mtDf
aSrZypfScBEO/KVdlqbW9G+y8VR65ryjTAA7TZIzqN+z
7YyTAESWb8E7T4NCtQPPwYpjl/S9krbEGSiKfaw= )
lsrp57e1pe333jadkpdgh3v1i8vs80rd.icann.org. 1h IN RRSIG NSEC3 7 3 3600 (
20220724151521 20220703105347 58935 icann.org.
P9qwkFoGkCd+m3aDQkzF/g7SJfn/byt6d4zugLzRKuH1
rLmYZdlJNOC+fI1saCZySarsP9KavFSBzw6S9GMLobQJ
hTVpu1ZUkEP9BMOZo28eeRLrGvAbrVb7aB9CWl9TgUMc
2+s4nG87HTvD2TCJHmyPC1mIbBLYmJoa7iGLGiI= ) NSEC3 is more complicated (less human friendly) as it uses hashes of domain names. But what all the above means in summary is that the name I requested does not exists because it lands between two names that exist (but can't be seen immediately, because hashed), and that no wildcard exists (which is why you have three NSEC3 records). The RRSIG records sign the NSEC3 ones, so all the above allows a resolving nameserver to indeed double check the NXDOMAIN is legit and not introduced by some on-path attacker, because all the NSEC3 and RRSIG records match the expectations. Simpler example with NSEC case Let us take a domain DNSSEC enabled with NSEC instead of NSEC3 : the root itself :-) If I do dig @g.root-servers.net foobar. +dnssec right now I get NXDOMAIN , again for the same reasons as above and that TLD does not exist (yet?) But let us look in the results and especially one NSEC record: foo. 1d IN NSEC food. NS DS RRSIG NSEC This is an affirmative signed (there is a corresponding RRSIG record) assertion from the nameserver telling me that foobar does not exist in zone, because both foo and food exists, but nothing in between. And per DNSSEC ordering rules foobar would sort between foo and food and hence the above proves that foobar does not exist. Incidentally it proves that a lots of other names do not exist, and some resolver could cache this NSEC and derives answer without requesting anything. Why? Because if I know that nothing exists between foo and food I immediately know that fooa doesn't exist, nor fooa42 or foobie or fooccc or similar… Back to CloudFlare specific case CloudFlare implements "DNSSEC White Lies" AND "Black Lies", see https://www.cloudflare.com/dns/dnssec/dnssec-complexities-and-considerations/ and https://blog.cloudflare.com/black-lies/ for their own various reasons (in part because they do dynamic signatures generation, they generate the RRSIG records at the moment the request come, and not in advance; this is a compromise, both cases have advantages and drawbacks). What does that mean? They fake existence of ALL names, hence there is almost never an NXDOMAIN . Let us see one example: $ dig dwewgewfgewfee-32cewcewcew-2284.cloudflare.com @ns3.cloudflare.com. +dnssec
; <<>> DiG 9.18.4 <<>> dwewgewfgewfee-32cewcewcew-2284.cloudflare.com @ns3.cloudflare.com. +dnssec
;; global options: +cmd
;; Sending:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9469
;; flags: rd ad; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags: do; udp: 4096
; COOKIE: fd8d36048320c848
;; QUESTION SECTION:
;dwewgewfgewfee-32cewcewcew-2284.cloudflare.com. IN A
;; QUERY SIZE: 87
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9469
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 4, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags: do; udp: 1232
;; QUESTION SECTION:
;dwewgewfgewfee-32cewcewcew-2284.cloudflare.com. IN A
;; AUTHORITY SECTION:
cloudflare.com. 5m IN SOA ns3.cloudflare.com. dns.cloudflare.com. (
2282614227 ; serial
10000 ; refresh (2 hours 46 minutes 40 seconds)
2400 ; retry (40 minutes)
604800 ; expire (1 week)
300 ; minimum (5 minutes)
)
dwewgewfgewfee-32cewcewcew-2284.cloudflare.com. 5m IN NSEC \000.dwewgewfgewfee-32cewcewcew-2284.cloudflare.com. RRSIG NSEC (I removed the RRSIG records). So what does that tell? First: NOERROR and not NXDOMAIN instead, so the resolver tells me the name I query for exists (but maybe not for the type I asked, A which is default dig type, and this is valid and known as NODATA which means NOERROR but no content either, no ANSWER section, as it happens when the name exists, but not that type). The AUTHORITY part and specifically that NSEC record tells me that there are no names between dwewgewfgewfee-32cewcewcew-2284.cloudflare.com. (the name I asked for in fact, so not the previous one, just mine), and \000.dwewgewfgewfee-32cewcewcew-2284.cloudflare.com. which may look like a strange name but 1) is totally valid (it is not a valid hostname because \000 means byte value 0 which has to be encoded as \000 for DNS operations, but still a valid domain names, as domain names in the DNS specifications can be any arbitrary bytes) and 2) is, with DNSSEC ordering algorithm, the name "right after" my name (so basically the range of the two names do not include any other name in between). The RRSIG NSEC part at the end of the NSEC record means that there are no record type A on the name but there are record types RRSIG and NSEC , which makes sense because I am exactly looking at the NSEC record of that name, and as we are in DNSSEC land, of course there is an RRSIG . So this is called a "lie" because the nameserver is replying to you: this name exists, but not this record type. And no matter which record type you ask for (except NSEC and RRSIG ) the nameserver will tell you: "this name does not exist for this record type".
At the end, if it does not exist for any record type (besides NSEC and RRSIG ) it is really as if it (the name) does not exist at all, but it is just presented in a different way for reasons quickly detailed below. I recommend reading the second link but the gist of it explaining things is (I am skipping the whole points regarding NSEC / NSEC3 and wildcard records, with all the details on "closest encounter" and so on, but those are important if going deep on NSEC stuff): NSEC3 was a “close but no cigar” solution to the problem. While it’s true that it made zone walking harder, it did not make it impossible. (which is why they don't use NSEC3 and keep NSEC but then still need another solution to avoid walking the zone and hence enumerating all names) There are two problems with negative answers: The first is that the authoritative server needs to return the >previous and next name. As you’ll see, this is computationally >expensive for CloudFlare, and as you’ve already seen, it can leak >information about a zone. The second is that negative answers require two NSEC records and >their two subsequent signatures (or three NSEC3 records and three >NSEC3 signatures) to authenticate the nonexistence of one name. >This means that answers are bigger than they need to be. So that part above is the basic explanation of why wanting to avoid using NXDOMAIN and "emulating" it with success ( NOERROR ) but at the same time responding negatively to any query (name+type for any type requested). The other point, again very specific to CloudFlare, is that it is difficult in their case to compute the "next" name (because NSEC is really giving a "range" of two names, as a link between two things existing), so instead of using the real next name as existing in their storage, they compute the mimimal "next" one following the DNSSEC algorithm, hence the strange name above with \000. as prefix, a name that obviously don't exist either, so if you query for it you will get again the same kind of reply, but this time with an NSEC record listing on right \001. or \000.\000. in fact, etc. and so on... Further down: For an NXDOMAIN, we always return \000.(the missing name) as the next name, and because we return an NSEC directly on the missing name, we do not have to return an additional NSEC for the wildcard. This way we only have to return SOA, SOA RRSIG, NSEC and NSEC RRSIG, and we do not need to search the database or precompute dynamic answers. The goal reached with all that is smaller replies. And this is important in DNS land, because of various problems around fragmentation. From their example they go from 1096 bytes to just 357 bytes with black lies, cutting almost 2/3, quite an accomplishment! 
All the above may become a "standard" in the future, for those wanting to do the same, as they wrote a document that can become maybe an IETF RFC one day: https://datatracker.ietf.org/doc/html/draft-valsorda-dnsop-black-lies Do note it has consequences though: NXDOMAIN is an important signal: various other stuff is built on top of that, see RFC 8020 "NXDOMAIN: There Really Is Nothing Underneath" and RFC 8198 "Aggressive Use of DNSSEC-Validated Cache", so not having this signal anymore can have side effects (and it wouldn't be a good idea to change other recursive resolvers to try finding out if the authoritative side is using black lies and then consider them, that would be brittle; that point is exactly discussed in the draft above) it also impacts ENT or "Empty Non Terminal", where a name has to exist in the DNS tree not because it has any type attached to it, but just because there are names below it; see https://www.ietf.org/archive/id/draft-huque-dnsop-blacklies-ent-01.html for more details on that topic no implementation is free of bugs, and DNSSEC is complicated, and tricks around DNSSEC are even more so complicated; now I am not sure anymore and I can't find references, but I think there was a bug in the beginning, and the returned types (in the NSEC bitmap) were not computed correctly, hence breaking some stuff. Will try to update this if I do find back what I am thinking I have seen, but I could be delusional (easy to be with DNSSEC...); in fact I think it is related to the observation that all their initial examples did put far more types in NSEC last section, where now they put only RRSIG and NSEC . See https://indico.dns-oarc.net/event/40/contributions/899/attachments/862/1563/nsec-bitmaps.pdf for live examples of errors in NSEC bitmaps and their consequences Ah no in fact I remembered right, a bug in this NSEC bitmap is right at the source of a recent Slack outage :-), but it was not on Cloudflare fault, it was AWS Route53 where the problem was. See https://www.potaroo.net/ispcol/2021-12/oarc36.pdf for those details, but in short: Now you can lie with NSEC records, [..] But what a server should never do
is return an empty bit-vector in the NSEC record. Because some resolvers, including Google’s Public
DNS service interpret an empty NSEC bit-vector as claiming that there are no resource records at all for
that domain name. This is not a Google DNS bug. It's a perfectly legitimate interpretation of the
DNSSEC specification. The problem that Slack encountered was that the Route 53 server was returning
a NSEC response with an almost empty RR-type bit-vector when the wildcard entry was used to form
the response and the query type was not defined for the wildcard resource. This was a bug in the Route
53 implementation. So, in short, lying does have bad consequences sometimes :-)
(and/or: DNSSEC is complicated, and wildcards in the DNS do create all sorts of complications too; in fact DNSSEC + wildcards + CNAME records are like 3 sure signs of apocalypse somehow...). This is only ONE way to do things, the consequences (almost no NXDOMAIN responses) are absolutely not a consequence of the protocol (DNSSEC) but just of their implementation. So don't take this as granted at all, it will be different with other providers. But does it really change anything for you as owner of the zone or users of it? Not so much. Why were you so worried about NXDOMAIN responses :-) ? PS: for a theoretical paper on DNSSEC lies: https://casey.byu.edu/papers/2019_pam_dnssec_lies.pdf for a presentation summarizing things (among others): https://www.slideshare.net/apnic/signing-dnssec-answers-on-the-fly-at-the-edge-challenges-and-solutions | {
"source": [
"https://serverfault.com/questions/1105616",
"https://serverfault.com",
"https://serverfault.com/users/973129/"
]
} |
1,105,634 | Are there good CLI utilities to see things like CPU, RAM, and Network Traffic over time? I know CPU can be looked at via a command like top but curious if there is any way to collect historical trends automagically. I basically want Datadog for a personal project but am trying to avoid paying for it (already did a free trial) | TL;DR The lack of NXDOMAIN responses for Cloudflare hosted domains is a consequence of their specific DNSSEC implementation (using so called "black lies") and not a design of the DNSSEC protocol itself; hence observations will be different with other providers doing DNSSEC. Initial questions How are NXDOMAIN responses still possible? Why wouldn't they be possible? DNSSEC or not, if you query for a name that doesn't exist, you get NXDOMAIN reply back. my understanding is that DNSSEC should, at least in theory, eliminate this response code entirely Why? And from where do you get that feeling? Live example with a DNSSEC enabled domain icann.org is DNSSEC enabled right now. If I query for a name that does not exist under it, I get a NXDOMAIN : $ dig NS icann.org +short
b.icann-servers.net.
c.icann-servers.net.
ns.icann.org.
a.icann-servers.net.
$ dig @a.icann-servers.net does-not-exist-foobar.icann.org
; <<>> DiG 9.18.4 <<>> @a.icann-servers.net does-not-exist-foobar.icann.org
; (1 server found)
;; global options: +cmd
;; Sending:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 38891
;; flags: rd ad; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 98228e9e0c5ef4e6
;; QUESTION SECTION:
;does-not-exist-foobar.icann.org. IN A
;; QUERY SIZE: 72
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 38891
^^^^^^^^ DNSSEC is an extension of DNS in the sense that for a non validating resolver, answers are not different, even if the domain is DNSSEC enabled. So all return codes work in the same way. Explanations about NSEC/NSEC3/RRSIG What it does change, that you can see if adding +dnssec to dig (which doesn't mean "activate DNSSEC" but means "display DNSSEC related records - those are RRSIG , NSEC and NSEC3 - as they are normally not displayed), is that the AUTHORITY section in case of the NXDOMAIN gives further explanations with NSEC or NSEC3 records: ;; AUTHORITY SECTION:
icann.org. 1h IN SOA sns.dns.icann.org. noc.dns.icann.org. (
2022070670 ; serial
10800 ; refresh (3 hours)
3600 ; retry (1 hour)
1209600 ; expire (2 weeks)
3600 ; minimum (1 hour)
)
j93jujiqg7ge3616mub4r5bei85poet9.icann.org. 1h IN NSEC3 1 0 5 9714B5ACB8F7A193 (
J9HKD4G746GMUTGGUV6AM37GSJAD6NRR
A NS SOA MX TXT AAAA RRSIG DNSKEY NSEC3PARAM )
tdr1at6eafsrigdrlj6atpb2dge2aof0.icann.org. 1h IN NSEC3 1 0 5 9714B5ACB8F7A193 (
TE4FB4PVMU1GQNPG9P01ID48U1BTN2G4
A RRSIG )
lsrp57e1pe333jadkpdgh3v1i8vs80rd.icann.org. 1h IN NSEC3 1 0 5 9714B5ACB8F7A193 (
LT4I8S7OTQ7ACOSF73M7LHCIC7C1J17I
A RRSIG )
icann.org. 1h IN RRSIG SOA 7 2 3600 (
20220804192816 20220714153322 3425 icann.org.
NMcD1TeozFyCRDlmqFMoM/V/VmWQUmRNIH0/igPzdj2S
hemnQHeXDOudBxsUgE/DpSV4KHsgqLQKdgbQruqCO7Dt
iLK1bCLBZs38LdOadyJs3jWjjuJ9+mEnLXTsqMeeMllw
YFL6pPyo1TfChZm05KJ+DJNw0SHJw3MWBRtV4iI= )
j93jujiqg7ge3616mub4r5bei85poet9.icann.org. 1h IN RRSIG NSEC3 7 3 3600 (
20220724054620 20220703065347 58935 icann.org.
gmo0VP8k9Li9lutMA3uTrMfABMmFBN23GonYo72Twk9l
wGYqFvlU/naN0KKtEd3g+zOiYB0Jb1J1270Dveew/vYa
hTmeMYrwUbEt9gZYCvi74zm6Ss0cQ8uxJ5bZw70nZ7oU
LAtWYVGJMgupfjtne6021AJoLNB1CaMhFwo+TPo= )
tdr1at6eafsrigdrlj6atpb2dge2aof0.icann.org. 1h IN RRSIG NSEC3 7 3 3600 (
20220724101659 20220703045347 58935 icann.org.
hGsUeE4di9yFuDMq8ly1YQEs1OvOFAHVctOQrs6Poixl
STqcErjC20V2CI0YApX6SbiI8AP/dqMjBm3fZh91mtDf
aSrZypfScBEO/KVdlqbW9G+y8VR65ryjTAA7TZIzqN+z
7YyTAESWb8E7T4NCtQPPwYpjl/S9krbEGSiKfaw= )
lsrp57e1pe333jadkpdgh3v1i8vs80rd.icann.org. 1h IN RRSIG NSEC3 7 3 3600 (
20220724151521 20220703105347 58935 icann.org.
P9qwkFoGkCd+m3aDQkzF/g7SJfn/byt6d4zugLzRKuH1
rLmYZdlJNOC+fI1saCZySarsP9KavFSBzw6S9GMLobQJ
hTVpu1ZUkEP9BMOZo28eeRLrGvAbrVb7aB9CWl9TgUMc
2+s4nG87HTvD2TCJHmyPC1mIbBLYmJoa7iGLGiI= ) NSEC3 is more complicated (less human friendly) as it uses hashes of domain names. But what all the above means in summary is that the name I requested does not exists because it lands between two names that exist (but can't be seen immediately, because hashed), and that no wildcard exists (which is why you have three NSEC3 records). The RRSIG records sign the NSEC3 ones, so all the above allows a resolving nameserver to indeed double check the NXDOMAIN is legit and not introduced by some on-path attacker, because all the NSEC3 and RRSIG records match the expectations. Simpler example with NSEC case Let us take a domain DNSSEC enabled with NSEC instead of NSEC3 : the root itself :-) If I do dig @g.root-servers.net foobar. +dnssec right now I get NXDOMAIN , again for the same reasons as above and that TLD does not exist (yet?) But let us look in the results and especially one NSEC record: foo. 1d IN NSEC food. NS DS RRSIG NSEC This is an affirmative signed (there is a corresponding RRSIG record) assertion from the nameserver telling me that foobar does not exist in zone, because both foo and food exists, but nothing in between. And per DNSSEC ordering rules foobar would sort between foo and food and hence the above proves that foobar does not exist. Incidentally it proves that a lots of other names do not exist, and some resolver could cache this NSEC and derives answer without requesting anything. Why? Because if I know that nothing exists between foo and food I immediately know that fooa doesn't exist, nor fooa42 or foobie or fooccc or similar… Back to CloudFlare specific case CloudFlare implements "DNSSEC White Lies" AND "Black Lies", see https://www.cloudflare.com/dns/dnssec/dnssec-complexities-and-considerations/ and https://blog.cloudflare.com/black-lies/ for their own various reasons (in part because they do dynamic signatures generation, they generate the RRSIG records at the moment the request come, and not in advance; this is a compromise, both cases have advantages and drawbacks). What does that mean? They fake existence of ALL names, hence there is almost never an NXDOMAIN . Let us see one example: $ dig dwewgewfgewfee-32cewcewcew-2284.cloudflare.com @ns3.cloudflare.com. +dnssec
; <<>> DiG 9.18.4 <<>> dwewgewfgewfee-32cewcewcew-2284.cloudflare.com @ns3.cloudflare.com. +dnssec
;; global options: +cmd
;; Sending:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9469
;; flags: rd ad; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags: do; udp: 4096
; COOKIE: fd8d36048320c848
;; QUESTION SECTION:
;dwewgewfgewfee-32cewcewcew-2284.cloudflare.com. IN A
;; QUERY SIZE: 87
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9469
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 4, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags: do; udp: 1232
;; QUESTION SECTION:
;dwewgewfgewfee-32cewcewcew-2284.cloudflare.com. IN A
;; AUTHORITY SECTION:
cloudflare.com. 5m IN SOA ns3.cloudflare.com. dns.cloudflare.com. (
2282614227 ; serial
10000 ; refresh (2 hours 46 minutes 40 seconds)
2400 ; retry (40 minutes)
604800 ; expire (1 week)
300 ; minimum (5 minutes)
)
dwewgewfgewfee-32cewcewcew-2284.cloudflare.com. 5m IN NSEC \000.dwewgewfgewfee-32cewcewcew-2284.cloudflare.com. RRSIG NSEC (I removed the RRSIG records). So what does that tell? First: NOERROR and not NXDOMAIN instead, so the resolver tells me the name I query for exists (but maybe not for the type I asked, A which is default dig type, and this is valid and known as NODATA which means NOERROR but no content either, no ANSWER section, as it happens when the name exists, but not that type). The AUTHORITY part and specifically that NSEC record tells me that there are no names between dwewgewfgewfee-32cewcewcew-2284.cloudflare.com. (the name I asked for in fact, so not the previous one, just mine), and \000.dwewgewfgewfee-32cewcewcew-2284.cloudflare.com. which may look like a strange name but 1) is totally valid (it is not a valid hostname because \000 means byte value 0 which has to be encoded as \000 for DNS operations, but still a valid domain names, as domain names in the DNS specifications can be any arbitrary bytes) and 2) is, with DNSSEC ordering algorithm, the name "right after" my name (so basically the range of the two names do not include any other name in between). The RRSIG NSEC part at the end of the NSEC record means that there are no record type A on the name but there are record types RRSIG and NSEC , which makes sense because I am exactly looking at the NSEC record of that name, and as we are in DNSSEC land, of course there is an RRSIG . So this is called a "lie" because the nameserver is replying to you: this name exists, but not this record type. And no matter which record type you ask for (except NSEC and RRSIG ) the nameserver will tell you: "this name does not exist for this record type".
At the end, if it does not exist for any record type (besides NSEC and RRSIG ) it is really as if it (the name) does not exist at all, but it is just presented in a different way for reasons quickly detailed below. I recommend reading the second link but the gist of it explaining things is (I am skipping the whole points regarding NSEC / NSEC3 and wildcard records, with all the details on "closest encounter" and so on, but those are important if going deep on NSEC stuff): NSEC3 was a “close but no cigar” solution to the problem. While it’s true that it made zone walking harder, it did not make it impossible. (which is why they don't use NSEC3 and keep NSEC but then still need another solution to avoid walking the zone and hence enumerating all names) There are two problems with negative answers: The first is that the authoritative server needs to return the >previous and next name. As you’ll see, this is computationally >expensive for CloudFlare, and as you’ve already seen, it can leak >information about a zone. The second is that negative answers require two NSEC records and >their two subsequent signatures (or three NSEC3 records and three >NSEC3 signatures) to authenticate the nonexistence of one name. >This means that answers are bigger than they need to be. So that part above is the basic explanation of why wanting to avoid using NXDOMAIN and "emulating" it with success ( NOERROR ) but at the same time responding negatively to any query (name+type for any type requested). The other point, again very specific to CloudFlare, is that it is difficult in their case to compute the "next" name (because NSEC is really giving a "range" of two names, as a link between two things existing), so instead of using the real next name as existing in their storage, they compute the mimimal "next" one following the DNSSEC algorithm, hence the strange name above with \000. as prefix, a name that obviously don't exist either, so if you query for it you will get again the same kind of reply, but this time with an NSEC record listing on right \001. or \000.\000. in fact, etc. and so on... Further down: For an NXDOMAIN, we always return \000.(the missing name) as the next name, and because we return an NSEC directly on the missing name, we do not have to return an additional NSEC for the wildcard. This way we only have to return SOA, SOA RRSIG, NSEC and NSEC RRSIG, and we do not need to search the database or precompute dynamic answers. The goal reached with all that is smaller replies. And this is important in DNS land, because of various problems around fragmentation. From their example they go from 1096 bytes to just 357 bytes with black lies, cutting almost 2/3, quite an accomplishment! 
All the above may become a "standard" in the future, for those wanting to do the same, as they wrote a document that can become maybe an IETF RFC one day: https://datatracker.ietf.org/doc/html/draft-valsorda-dnsop-black-lies Do note it has consequences though: NXDOMAIN is an important signal: various other stuff is built on top of that, see RFC 8020 "NXDOMAIN: There Really Is Nothing Underneath" and RFC 8198 "Aggressive Use of DNSSEC-Validated Cache", so not having this signal anymore can have side effects (and it wouldn't be a good idea to change other recursive resolvers to try finding out if the authoritative side is using black lies and then consider them, that would be brittle; that point is exactly discussed in the draft above) it also impacts ENT or "Empty Non Terminal", where a name has to exist in the DNS tree not because it has any type attached to it, but just because there are names below it; see https://www.ietf.org/archive/id/draft-huque-dnsop-blacklies-ent-01.html for more details on that topic no implementation is free of bugs, and DNSSEC is complicated, and tricks around DNSSEC are even more so complicated; now I am not sure anymore and I can't find references, but I think there was a bug in the beginning, and the returned types (in the NSEC bitmap) were not computed correctly, hence breaking some stuff. Will try to update this if I do find back what I am thinking I have seen, but I could be delusional (easy to be with DNSSEC...); in fact I think it is related to the observation that all their initial examples did put far more types in NSEC last section, where now they put only RRSIG and NSEC . See https://indico.dns-oarc.net/event/40/contributions/899/attachments/862/1563/nsec-bitmaps.pdf for live examples of errors in NSEC bitmaps and their consequences Ah no in fact I remembered right, a bug in this NSEC bitmap is right at the source of a recent Slack outage :-), but it was not on Cloudflare fault, it was AWS Route53 where the problem was. See https://www.potaroo.net/ispcol/2021-12/oarc36.pdf for those details, but in short: Now you can lie with NSEC records, [..] But what a server should never do
is return an empty bit-vector in the NSEC record. Because some resolvers, including Google’s Public
DNS service interpret an empty NSEC bit-vector as claiming that there are no resource records at all for
that domain name. This is not a Google DNS bug. It's a perfectly legitimate interpretation of the
DNSSEC specification. The problem that Slack encountered was that the Route 53 server was returning
a NSEC response with an almost empty RR-type bit-vector when the wildcard entry was used to form
the response and the query type was not defined for the wildcard resource. This was a bug in the Route
53 implementation. So, in short, lying does have bad consequences sometimes :-)
(and/or: DNSSEC is complicated, and wildcards in the DNS do create all sorts of complications too; in fact DNSSEC + wildcards + CNAME records are like 3 sure signs of apocalypse somehow...). This is only ONE way to do things, the consequences (almost no NXDOMAIN responses) are absolutely not a consequence of the protocol (DNSSEC) but just of their implementation. So don't take this as granted at all, it will be different with other providers. But does it really change anything for you as owner of the zone or users of it? Not so much. Why were you so worried about NXDOMAIN responses :-) ? PS: for a theoretical paper on DNSSEC lies: https://casey.byu.edu/papers/2019_pam_dnssec_lies.pdf for a presentation summarizing things (among others): https://www.slideshare.net/apnic/signing-dnssec-answers-on-the-fly-at-the-edge-challenges-and-solutions | {
"source": [
"https://serverfault.com/questions/1105634",
"https://serverfault.com",
"https://serverfault.com/users/973627/"
]
} |
1,109,042 | Emails sent from all 3 email addresses I have set up in the Rackspace Cloudways Add-On are ending up in Spam in GMail. When I "View Original Message" in GMail, I see... SPF: NEUTRAL with IP 173.203.187.81
DMARC: 'FAIL' ... where 173.203.187.81 is an IP address Rackspace. My DNS provider is CloudFlare. There is a DMARC policy set up, which is the following... _dmarc.boldstatements.com.au v=DMARC1; p=none; ruf=mailto:[email protected]; rua=mailto:[email protected] Cloudways provided me with this DKIM TXT record... 20220817-maluhsjy._domainkey v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC0PqtvPuYkElqS+b80iEj4aepAdf6n+CDXRFTG/1Q8RMdw/D6hNmQpv8FCTyIuplZt/qTxBbBFrPLJK5tp7bqkSEG2YpPSnHDCGihaOCsRkJP0aAbnuQRmjHq6H0yCwtJKjRhW7H4pbjx9/LA6dXIaw4N1emtSLWcGejVrhVZ+CwIDAQAB When I use dnschecker.org and a few other DNS lookup tools, to look at my TXT records I get the following strange output... \239\187\191v=spf1 a mx include:_spf.elasticemail.com include:emailsrvr.com -all These characters \239\187\191 are definitely not in the TXT record in CloudFlare. CloudWays' support claims that these characters are causing the DMARC to fail, but since they don't appear on other DNS checkers such as https://mxtoolbox.com/ and https://www.whatsmydns.net/ , and since SPF is returning "neutral", I suspect they are actually a bug in https://dnschecker.org/ and the DNS checker that Cloudways support are using. Any thoughts? NOTES Thanks to Patrick Mevzek's answer below I was able to find a solution. Just a quick description of how I ended up in this mess in the first place: Basically I copy pasted the values of the DNS records from a Cloudways support chat window straight into Cloudflare. And to remove the characters I needed to copy the value from Cloudflare into Notepad++, change the encoding to ANSII, which made the extraneous characters appear, delete them, then change back to UTF-8 (just in case), and paste back into CloudFlare. | FWIW, \239\187\191 is DNS encoding of 3 bytes of decimal values 239, 187, 191 which maps in hexadecimal to EF BB BF , which is the UTF-8 encoding of Unicode codepoint U+FEFF , which is ZERO WIDTH NO-BREAK SPACE I suspect this TXT record was created by copy and pasting from somewhere and some "smart" behavior and obviously the space is not visible on screen in some UI but did land up in the TXT record. This has to be cleaned, aka removed. As for: These characters \239\187\191 are definitely not in the TXT record in CloudFlare. and but since they don't appear on other DNS checkers such as https://mxtoolbox.com/ and https://www.whatsmydns.net/ , I suspect that various systems (mistakenly but there is unfortunately no standard there at all, DNS was invented far before Unicode/UTF-8 and then lots of things like SPF just decided to abuse TXT records) just consider the TXT record content to be a string in UTF-8 so they decode it and display it, but obviously the "zero-width space" is not visible on any HTML page. A better UI would take care of that and display that properly and/or warn about it. An even better UI would even more so just remove that character when the record is added, since it is obviously wrong here (but the obvious in TXT record is limited… you have to see it is v=spf1 or similar and then act accordingly).
Which now gives me a good idea on what I should fix in my own UI, thanks for the idea :-) | {
"source": [
"https://serverfault.com/questions/1109042",
"https://serverfault.com",
"https://serverfault.com/users/563451/"
]
} |
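A hedged way to double-check the byte interpretation given in the answer above, and to spot such invisible bytes in a value before pasting it into a DNS provider's UI (the hexdump invocation and the sample value are illustrative):
# decimal 239 187 191 is hexadecimal EF BB BF, the UTF-8 encoding of U+FEFF:
printf '%x %x %x\n' 239 187 191
# piping a suspect TXT value through hexdump reveals hidden bytes before the visible "v=spf1":
printf '\xef\xbb\xbfv=spf1 ...' | hexdump -C | head -n 2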
1,114,123 | If I have several AWS EC2 and Azure instances running in separate regions, and I am using RabbitMQ to exchange messages between them, should I worry about adding TLS and encrypting those connections? In other words if server A is on AWS us-east for example and server B is in Azure, how bad will it be if they exchange information without it being encrypted? Only the internet service provider and Amazon/Microsoft will be able to see that unencrypted data, correct? I will obviously encrypt anything that deals with the client. I am just curious about 2 backend servers talking to each other. Edit Thanks for the help guys. I know how to encrypt the connection and also how to set up a VPN. Sorry I phrased the question incorrectly. I just wanted to know who will be able to see that traffic between those servers. Why will it be risky? I know it will be risky, I believe you lol. I just want to know why. Also, how bad will it be to generate my own SSL certificates and trust them on each server? | Should you encrypt data between 2 servers in the cloud? Yes. Modern security thinking is that you don't consider your own network / datacenter as more trusted (than your WAN or the regular internet). Traditionally one would allow for more relaxed security standards in the datacenter, within the "secure" perimeter of your own network. Both internal systems and users would be trusted, implicitly expected to be secure and never abusive or malicious. One only added, for example, TLS for connections crossing the perimeter and borders of your "secure" internal network. Nowadays the increasingly more prevalent security concept is one of " zero trust " , which abandons the concept of secure and trusted internal networks/systems/users and applies the same rigorous level of security everywhere, regardless. So for two back-end servers exchanging information with each other:
- both servers and all their services should be configured with TLS certificates (for server authentication and transport encryption)
- their communication should be encrypted
- clients should authenticate to services (with username/password, a token, a client certificate or whatever is suitable)
- your applications/(micro-)services should still do input validation and not trust the input from the internal clients/backend-systems to always be correct and safe to use verbatim
- etc. etc.
In response to your edit: I just wanted to know who will be able to see that traffic between those servers (server A is on AWS us-east for example and server B is in
Azure) Unless Amazon and Microsoft have their own physical datacenter interlinks, traffic between AWS and Azure clouds will be routed over the public internet and/or transit one or more network segments operated by third parties. The exact path your traffic takes, and which third parties those are, can change at any moment due to how routing protocols and the internet work. When you don't set up transport encryption, that traffic will be in clear text and anybody with access to any segment can trivially eavesdrop. | {
"source": [
"https://serverfault.com/questions/1114123",
"https://serverfault.com",
"https://serverfault.com/users/387296/"
]
} |
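Regarding the question's edit about generating your own certificates: the answer above only says the services should be configured with TLS certificates, so the following is just one hedged sketch of provisioning a self-signed certificate for an internal endpoint (the hostname and file names are placeholders, OpenSSL 1.1.1+ is assumed for -addext, and an internal or public CA is usually the better long-term choice):
# self-signed certificate for one backend endpoint
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout backend-b.key -out backend-b.crt \
  -subj "/CN=backend-b.internal.example" \
  -addext "subjectAltName=DNS:backend-b.internal.example"
# copy backend-b.crt to the peer and configure it as the trusted certificate there
# (for RabbitMQ, that would be the cacertfile entry in the client's ssl_options)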
1,114,141 | I have successfully configured pam_radius on a Ubuntu client so that users are asked for an OTP. The radius server is an NPS with Azure MFA extension. The OTP is checked against Azure. It works well, but I'd rather not send the user credentials to the NPS, so that only the OTP is checked. Also it would be nice to ask the user for OTP before the password. I read elsewhere ( https://learn.microsoft.com/en-us/answers/questions/20921/mfa-nps-error.html ) that if we choose "Accepting users without validating credentials" on the NPS (in addition to "skip_passwd" on pam_radius_auth configuration), this would work - but it doesn't. Is this because pam_radius will always try to authenticate with both password and OTP? Or maybe NPS will always ask for a password? But on the other hand, on pam_radius_auth documentation it says that skip_pass will send a null as password in that case, so why am I still asked for the password? Best,
Francis | Should you encrypt data between 2 servers in the cloud? Yes. Modern security thinking is that you don't consider your own network / datacenter as more trusted (than your WAN or the regular internet). Traditionally one would allow for more relaxed security standards in the datacenter, within the "secure" perimeter of your own network. Both internal systems and users would be trusted, implicitly expected to be secure and never abusive or malicious. One only added for example TLS for connections crossing the perimeter and borders of your "secure" internal network. Nowadays the increasingly more prevalent security concept is one of " zero trust " , which abandons the concept of a secure and trusted internal networks/systems/users and applies the same rigorous level of security everywhere, regardless. So for two back-end servers exchanging information with each-other: both servers and and all their services should be configured with TLS certificates (for server authentication and transport encryption) their communication should be encrypted clients should authenticate to services (with username password, a token, client certificate or whatever is suitable) your applications/(micro-)services should still do input validation and not trust the input from the internal clients/backend-systems to always be correct and safe to use verbatim. etc. etc. In response to your edit I just wanted to know who will be able to see that traffic between those servers (server A is on AWS us-east for example and server B is in
Azure) Unless Amazon and Microsoft have their own physical datacenter interlinks, traffic between AWS and Azure clouds will be routed over the public internet and/or transit one or more network segments operated by third parties. The exact path your traffic takes and which third parties that are can change at any moment due to how routing protocols and the internet work. When you don't set up transport encryption that traffic will be in clear text and anybody with access to any segment can trivially eavesdrop. | {
"source": [
"https://serverfault.com/questions/1114141",
"https://serverfault.com",
"https://serverfault.com/users/573139/"
]
} |
1,114,150 | Does changing t2 micro free tier to unlimited cause also unlimited out internet data transfer? | Should you encrypt data between 2 servers in the cloud? Yes. Modern security thinking is that you don't consider your own network / datacenter as more trusted (than your WAN or the regular internet). Traditionally one would allow for more relaxed security standards in the datacenter, within the "secure" perimeter of your own network. Both internal systems and users would be trusted, implicitly expected to be secure and never abusive or malicious. One only added for example TLS for connections crossing the perimeter and borders of your "secure" internal network. Nowadays the increasingly more prevalent security concept is one of " zero trust " , which abandons the concept of a secure and trusted internal networks/systems/users and applies the same rigorous level of security everywhere, regardless. So for two back-end servers exchanging information with each-other: both servers and and all their services should be configured with TLS certificates (for server authentication and transport encryption) their communication should be encrypted clients should authenticate to services (with username password, a token, client certificate or whatever is suitable) your applications/(micro-)services should still do input validation and not trust the input from the internal clients/backend-systems to always be correct and safe to use verbatim. etc. etc. In response to your edit I just wanted to know who will be able to see that traffic between those servers (server A is on AWS us-east for example and server B is in
Azure) Unless Amazon and Microsoft have their own physical datacenter interlinks, traffic between AWS and Azure clouds will be routed over the public internet and/or transit one or more network segments operated by third parties. The exact path your traffic takes and which third parties that are can change at any moment due to how routing protocols and the internet work. When you don't set up transport encryption that traffic will be in clear text and anybody with access to any segment can trivially eavesdrop. | {
"source": [
"https://serverfault.com/questions/1114150",
"https://serverfault.com",
"https://serverfault.com/users/990203/"
]
} |
1,114,335 | Currently, DMARC only requires aligned DKIM or SPF. However spoofing SPF is relatively simple for an experienced hacker: You should only control a single IP address in the often large SPF range of e-mail service providers (Microsoft, Google, Mailchimp, ...). It may be even possible to legally do so if the list contains out of date IP addresses. Or you can try to use a bug/hole in the sender verification performed by those service providers. At least some providers do not perform a very secure sender domain verification. The essential problem with SPF is that it whitelists an IP that is shared by many clients of such a service providers. At the other hand, the DKIM key is probably secured much better by those service providers and it is (often) linked to a single customer. Or at least, it should be much easier to secure a DKIM key than to ensure that a hacker could not send an e-mail from one of the allowed SPF IP addresses with a sender address chosen by the hacker. So, wouldn't it be beneficial that DMARC is extended to allow specifying that DKIM should be aligned? Or does a successor of DMARC exists to enforce DKIM alignment? Partially related questions: DMARC Alignment: Enforce messages pass BOTH SPF and DKIM (It's not a duplicate as my question is whether it is a good DMARC design that we couldn't enforce DKIM). Can DMARC's SPF alignment be spoofed? (About the possibility of spoofing aligned SPF: spoofing SPF is easier than spoofing DKIM ). | Should you encrypt data between 2 servers in the cloud? Yes. Modern security thinking is that you don't consider your own network / datacenter as more trusted (than your WAN or the regular internet). Traditionally one would allow for more relaxed security standards in the datacenter, within the "secure" perimeter of your own network. Both internal systems and users would be trusted, implicitly expected to be secure and never abusive or malicious. One only added for example TLS for connections crossing the perimeter and borders of your "secure" internal network. Nowadays the increasingly more prevalent security concept is one of " zero trust " , which abandons the concept of a secure and trusted internal networks/systems/users and applies the same rigorous level of security everywhere, regardless. So for two back-end servers exchanging information with each-other: both servers and and all their services should be configured with TLS certificates (for server authentication and transport encryption) their communication should be encrypted clients should authenticate to services (with username password, a token, client certificate or whatever is suitable) your applications/(micro-)services should still do input validation and not trust the input from the internal clients/backend-systems to always be correct and safe to use verbatim. etc. etc. In response to your edit I just wanted to know who will be able to see that traffic between those servers (server A is on AWS us-east for example and server B is in
Azure) Unless Amazon and Microsoft have their own physical datacenter interlinks, traffic between AWS and Azure clouds will be routed over the public internet and/or transit one or more network segments operated by third parties. The exact path your traffic takes and which third parties that are can change at any moment due to how routing protocols and the internet work. When you don't set up transport encryption that traffic will be in clear text and anybody with access to any segment can trivially eavesdrop. | {
"source": [
"https://serverfault.com/questions/1114335",
"https://serverfault.com",
"https://serverfault.com/users/941896/"
]
} |
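For the DMARC question above: DMARC as currently specified has no tag that requires DKIM specifically; the adkim tag only switches alignment checking between relaxed and strict. A quick way to see what a domain currently publishes is to pull its _dmarc TXT record. The sketch below assumes Python with the dnspython package and uses example.com as a placeholder domain.

```python
# Fetch and parse the DMARC record a domain currently publishes
# (assumes the dnspython package; example.com is a placeholder).
import dns.resolver

def dmarc_policy(domain: str) -> dict:
    answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    rdata = next(iter(answers))
    record = "".join(part.decode() for part in rdata.strings)
    # A record looks like "v=DMARC1; p=reject; adkim=s; aspf=s; rua=mailto:..."
    return dict(
        field.strip().split("=", 1)
        for field in record.split(";")
        if "=" in field
    )

print(dmarc_policy("example.com"))
```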
1,114,344 | I am running a local web app on my dev machine, and I want to reach it from a test machine (a phone). I set up Nginx to listen on port 8888. My test machine can reach my dev machine at this port. Requests that should go to the local web application are reverse-proxied from port 8888 to the local web app port 3000. These requests work fine. Requests that should go to the internet are forward-proxied and resolved by 8.8.8.8. But these requests can only be HTTP. Nginx does not seem to be able to handle forward-proxied HTTPS requests. This setup for HTTP works: server {
listen 8888;
listen [::]:8888;
server_name local.myapp.be myapp.com;
access_log /var/log/nginx/myapp/access.log;
error_log /var/log/nginx/myapp/error.log;
location / {
proxy_pass http://local.myapp.be:3000;
proxy_redirect http://local.myapp.be:3000 $scheme://$host:8888;
proxy_set_header Host $host;
}
}
server {
listen 8888 default_server;
listen [::]:8888 default_server;
access_log /var/log/nginx/default/access.log;
error_log /var/log/nginx/default/error.log;
location / {
resolver 8.8.8.8;
proxy_pass http://$http_host$uri$is_args$args;
}
} I then tried removing the second server block and adding this to nginx.conf at http block level: stream {
resolver 8.8.8.8;
server {
listen 8888;
ssl_preread on;
proxy_connect_timeout 5s;
proxy_pass $ssl_preread_server_name:$server_port;
}
} But that does not seem to work. How can I setup an Nginx proxy to handle both reverse-proxy and forward-proxy (http & https) requests correctly? | Should you encrypt data between 2 servers in the cloud? Yes. Modern security thinking is that you don't consider your own network / datacenter as more trusted (than your WAN or the regular internet). Traditionally one would allow for more relaxed security standards in the datacenter, within the "secure" perimeter of your own network. Both internal systems and users would be trusted, implicitly expected to be secure and never abusive or malicious. One only added for example TLS for connections crossing the perimeter and borders of your "secure" internal network. Nowadays the increasingly more prevalent security concept is one of " zero trust " , which abandons the concept of a secure and trusted internal networks/systems/users and applies the same rigorous level of security everywhere, regardless. So for two back-end servers exchanging information with each-other: both servers and and all their services should be configured with TLS certificates (for server authentication and transport encryption) their communication should be encrypted clients should authenticate to services (with username password, a token, client certificate or whatever is suitable) your applications/(micro-)services should still do input validation and not trust the input from the internal clients/backend-systems to always be correct and safe to use verbatim. etc. etc. In response to your edit I just wanted to know who will be able to see that traffic between those servers (server A is on AWS us-east for example and server B is in
Azure) Unless Amazon and Microsoft have their own physical datacenter interlinks, traffic between AWS and Azure clouds will be routed over the public internet and/or transit one or more network segments operated by third parties. The exact path your traffic takes and which third parties that are can change at any moment due to how routing protocols and the internet work. When you don't set up transport encryption that traffic will be in clear text and anybody with access to any segment can trivially eavesdrop. | {
"source": [
"https://serverfault.com/questions/1114344",
"https://serverfault.com",
"https://serverfault.com/users/557586/"
]
} |
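One detail worth keeping in mind with the Nginx question above: a phone or browser configured to use a forward proxy sends HTTPS traffic as an HTTP CONNECT request, which the stock nginx http module does not implement (third-party modules such as ngx_http_proxy_connect_module exist for that, and the stream/ssl_preread approach only sees raw TLS, not CONNECT). A client-side probe makes the difference visible; the addresses below are placeholders and the sketch assumes Python with the requests library on the test machine, with local.myapp.be resolving to the dev machine.

```python
# Client-side probe from the test machine (placeholder addresses; assumes the
# requests library and that local.myapp.be resolves to the dev machine).
import requests

PROXY = "http://192.0.2.10:8888"            # the dev machine acting as proxy
proxies = {"http": PROXY, "https": PROXY}

# Reverse-proxied app: hit the vhost directly on port 8888.
print(requests.get("http://local.myapp.be:8888/", timeout=5).status_code)

# Forward-proxied plain HTTP: requests sends the absolute URI to the proxy.
print(requests.get("http://example.com/", proxies=proxies, timeout=5).status_code)

# Forward-proxied HTTPS: requests issues an HTTP CONNECT to the proxy first,
# which stock nginx's http module does not implement.
try:
    print(requests.get("https://example.com/", proxies=proxies, timeout=5).status_code)
except requests.RequestException as exc:
    print("HTTPS via the forward proxy failed:", exc)
```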
1,116,427 | We have a server with 64GB of total RAM, applications are using typically a maximum of 30GB of that available RAM. One of those applications deals with a lot of flat files, and we're having throughput issues, namely waiting on disk I/O. While exploring possible solutions the idea of a RAM disk came up. The problem I have with a RAM disk is the inherent volatility. I've found separate documentation on RAM disks, RAID 1 configuration, and Logical Mirrored Volumes to group physical disks, but I can't seem to find any documentation that suggests if either of these disk replication solutions can be used with a RAM disk. More importantly since the idea is to have the RAM disk be available for read/write, and have the physical disk "shadowing" the RAM disk, catching up with writes, we would want the RAM disk to be the "primary" disk for all reads/writes. To note, we would like to avoid merely RAM caching the files with the OS, but if we can get the same performance as a stand-alone RAM disk, that could work. We initially avoided this since often times certain files will not be accessed for long periods of time, but still need the read/write speed on-demand. | To note, we would like to avoid merely RAM caching the files with the OS, but if we can get the same performance as a stand-alone RAM disk, that could work. We initially avoided this since often times certain files will not be accessed for long periods of time, but still need the read/write speed on-demand. You could use vmtouch to solve your problem. This is a utility which allows you to pin certain files or even entire directories and everything under them in the page cache so they do not get evicted, even if they are not accessed for long periods of time (which was your initial reason for not simply relying on the page cache). This requires at most the same amount of memory as your RAM disk, or less in practice. You'll still be using the page cache, but it will result in similar performance to using a RAM disk for everything (actually superior performance as the MD driver will not be involved). | {
"source": [
"https://serverfault.com/questions/1116427",
"https://serverfault.com",
"https://serverfault.com/users/994009/"
]
} |
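To make the vmtouch suggestion above more tangible: the effect it relies on is simply having the files' pages resident in the page cache. The sketch below pre-faults a file into the cache from Python using only the standard library (Linux, Python 3.8+ for mmap.madvise). Unlike vmtouch it does not lock the pages, so they can still be evicted under memory pressure; the path is a placeholder.

```python
# Pre-fault a file into the Linux page cache (standard library only, Python 3.8+).
# Unlike vmtouch this does not lock the pages, so they can still be evicted.
import mmap
import os

def prewarm(path: str) -> None:
    fd = os.open(path, os.O_RDONLY)
    try:
        size = os.fstat(fd).st_size
        if size == 0:
            return
        with mmap.mmap(fd, size, prot=mmap.PROT_READ) as mm:
            mm.madvise(mmap.MADV_WILLNEED)      # ask the kernel to read ahead
            for offset in range(0, size, mmap.PAGESIZE):
                mm[offset]                      # touch each page so it is resident
    finally:
        os.close(fd)

prewarm("/srv/flatfiles/hot.dat")               # placeholder path
```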
1,116,484 | There are various services running on our machines, e.g. cassandra, datadog, etc. Occasionally, we need to change the configuration, and we wish to automate the propagation of the config files and restarts. We use Jenkins to automate the workflow for our application software, and were thinking of using this for services as well. We do not wish the server Jenkins runs on to have remote root (or even sudo) access to the host server. I was wondering if we could safely change the owner of /etc/cassandra to cassandra, /etc/datadog to dd-agent, etc., because it would help us to automate. (Actually, is it recommended that such folders/files should be owned by the appropriate user, and that having root as owner is wrong?) | To note, we would like to avoid merely RAM caching the files with the OS, but if we can get the same performance as a stand-alone RAM disk, that could work. We initially avoided this since often times certain files will not be accessed for long periods of time, but still need the read/write speed on-demand. You could use vmtouch to solve your problem. This is a utility which allows you to pin certain files or even entire directories and everything under them in the page cache so they do not get evicted, even if they are not accessed for long periods of time (which was your initial reason for not simply relying on the page cache). This requires at most the same amount of memory as your RAM disk, or less in practice. You'll still be using the page cache, but it will result in similar performance to using a RAM disk for everything (actually superior performance as the MD driver will not be involved). | {
"source": [
"https://serverfault.com/questions/1116484",
"https://serverfault.com",
"https://serverfault.com/users/955977/"
]
} |
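Whatever ownership model is chosen for the question above, it helps to record the current owners and modes of the config trees before changing anything, so the change can be reviewed and reverted. The standard-library sketch below uses the example paths from the question; the datadog path may differ per install.

```python
# Record current ownership and modes before changing anything (paths from the question).
import grp
import os
import pwd

for path in ("/etc/cassandra", "/etc/datadog"):
    if not os.path.exists(path):
        print(path, "does not exist on this host")
        continue
    st = os.stat(path)
    print(
        path,
        "owner:", pwd.getpwuid(st.st_uid).pw_name,
        "group:", grp.getgrgid(st.st_gid).gr_name,
        "mode:", oct(st.st_mode & 0o777),
    )
```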
1,116,532 | I am looking for an example configuration in which CoreDNS will read the CNAME record from the file (file plugin) and then resolve it using a custom resolver (forward plugin?).
A client should not get a CNAME record but only A records. For example: if the client asks CoreDNS for test.r1.svc then CoreDNS gets the CNAME record test IN CNAME test.r2.svc. then asks the foreign resolver 10.11.12.13:53 for test.r2.svc and responds to the client with the A records. Is this possible? This config is not working for me: Corefile # root
. {
log
errors
}
r1.svc {
file r1.svc
forward r2.svc 10.11.12.13:53
log
errors
} r1.svc file $ORIGIN r1.svc.
@ 3600 IN SOA sns.dns.icann.org. noc.dns.icann.org. (
202211241713 ; serial
7200 ; refresh (2 hours)
3600 ; retry (1 hour)
1209600 ; expire (2 weeks)
3600 ; minimum (1 hour)
)
3600 IN NS a.iana-servers.net.
3600 IN NS b.iana-servers.net.
test IN CNAME test.r2.svc. | To note, we would like to avoid merely RAM caching the files with the OS, but if we can get the same performance as a stand-alone RAM disk, that could work. We initially avoided this since often times certain files will not be accessed for long periods of time, but still need the read/write speed on-demand. You could use vmtouch to solve your problem. This is a utility which allows you to pin certain files or even entire directories and everything under them in the page cache so they do not get evicted, even if they are not accessed for long periods of time (which was your initial reason for not simply relying on the page cache). This requires at most the same amount of memory as your RAM disk, or less in practice. You'll still be using the page cache, but it will result in similar performance to using a RAM disk for everything (actually superior performance as the MD driver will not be involved). | {
"source": [
"https://serverfault.com/questions/1116532",
"https://serverfault.com",
"https://serverfault.com/users/435181/"
]
} |
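Independent of how the Corefile above ends up being structured, it is useful to check what the server actually hands back for test.r1.svc, i.e. whether the answer section still contains the CNAME or only flattened A records. The sketch below assumes Python with the dnspython package and a CoreDNS listener on 127.0.0.1:53; adjust the address and port to your setup.

```python
# Ask the CoreDNS instance directly and inspect the answer section
# (assumes the dnspython package; adjust the address/port to your listener).
import dns.message
import dns.query
import dns.rdatatype

COREDNS_ADDR = "127.0.0.1"      # placeholder
COREDNS_PORT = 53               # placeholder

query = dns.message.make_query("test.r1.svc.", dns.rdatatype.A)
response = dns.query.udp(query, COREDNS_ADDR, port=COREDNS_PORT, timeout=2)

for rrset in response.answer:
    print(rrset)

if any(rrset.rdtype == dns.rdatatype.CNAME for rrset in response.answer):
    print("answer still contains the CNAME")
else:
    print("answer contains only flattened records")
```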
1,119,336 | I have a web site where users can upload files. I do not want those files to be publicly accessible. I have seen that some people create a folder (say my_secret_folder) at the same level as the www directory. Then, they upload files (with a PHP script) using: $destination = $_SERVER['DOCUMENT_ROOT'] . "/../my_secret_folder/" . $filename; where $destination is the full path of the uploaded file. Then, only the PHP script (from within the www folder) is allowed to access the file. Is it good practice to upload files to a folder outside the public www folder? | Yes, it is a good practice. Placing them outside the webroot means that the files will not be publicly exposed by a simple configuration mistake in the web server, thus adding another barrier to exposure - and it has essentially zero cost to implement. | {
"source": [
"https://serverfault.com/questions/1119336",
"https://serverfault.com",
"https://serverfault.com/users/617854/"
]
} |
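The pattern endorsed in the answer above - uploads stored outside the document root, with the application mediating every download - looks roughly like the following. The original question uses PHP; this sketch uses Python/Flask purely for brevity, and the directory path and access check are placeholders.

```python
# Serve uploads stored outside the document root only after an application-level
# check (Python/Flask used for brevity; directory and auth check are placeholders).
from flask import Flask, abort, send_from_directory

app = Flask(__name__)
UPLOAD_DIR = "/var/private_uploads/my_secret_folder"    # outside the web root

def user_may_access(filename: str) -> bool:
    # Placeholder: consult the session / database ACL here.
    return False

@app.route("/files/<path:filename>")
def serve_upload(filename: str):
    if not user_may_access(filename):
        abort(403)
    # send_from_directory rejects paths that would escape UPLOAD_DIR.
    return send_from_directory(UPLOAD_DIR, filename)
```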
1,123,895 | The chrony documentation warns BE WARNED: Certain software will be seriously affected by such jumps in the system
time. (That is the reason why chronyd uses slewing normally.) Documentation But the documentation gives no examples. What are examples of software that will be seriously affected? Are the OS or any background processes at risk? | This is a bit of an open question, but let me give some examples: databases - most of them rely heavily on precise time for storing records, indexes, etc. security - precise time is very important for mapping actions to times, and gaps or duplicated time are not acceptable digital signing - the timestamp is usually part of the signed document, so a wrong time may invalidate the signature scheduling software - may skip jobs or run them twice depending on the direction of the time jump clustering software - probably any cluster will need to be in sync, and a jump on one or more nodes may have unpredictable results. | {
"source": [
"https://serverfault.com/questions/1123895",
"https://serverfault.com",
"https://serverfault.com/users/606674/"
]
} |
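A tiny illustration of why stepped clocks bite the kinds of software listed above: anything that measures durations with the wall clock can see negative or doubled intervals when the time jumps, which is why interval measurement should use a monotonic clock. This is plain standard-library Python and is not tied to chrony itself.

```python
# Durations measured with the wall clock are vulnerable to clock steps;
# durations measured with the monotonic clock are not.
import time

start_wall = time.time()        # can jump if the system clock is stepped
start_mono = time.monotonic()   # never jumps backwards

time.sleep(0.5)                 # stand-in for real work

print("wall-clock duration:", time.time() - start_wall)        # may be wrong across a step
print("monotonic duration:", time.monotonic() - start_mono)    # always ~0.5s
```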
1,123,899 | Sieve vacation answers fine, but uses the from: field rather than the reply-to: field, which would make much more sense (to me). Using sieve with roundcube, the sieve script is require ["vacation"];
# rule:[rep]
if header :contains "subject" "Software"
{
vacation :subject "reply!!" "abc";
} How do I hint sieve to answer to reply-to? | This is a bit of an open question, but let me give some examples: databases - most of them rely heavily on precise time for storing records, indexes, etc. security - precise time is very important for mapping actions to times, and gaps or duplicated time are not acceptable digital signing - the timestamp is usually part of the signed document, so a wrong time may invalidate the signature scheduling software - may skip jobs or run them twice depending on the direction of the time jump clustering software - probably any cluster will need to be in sync, and a jump on one or more nodes may have unpredictable results. | {
"source": [
"https://serverfault.com/questions/1123899",
"https://serverfault.com",
"https://serverfault.com/users/503157/"
]
} |
13 | There has been a long-standing issue in the Skeptics community that has been recently re-raised and re-branded by Phil Plait's TAM8 talk . It concerns traditionally hostile and/or mocking responses to untenable claims. (I am trying to use emotionally neutral terms. Did I succeed?) There has been plenty of discussion about whether that is a natural and honest response, a legitimate tactic of persuasion, or whether it is self-defeating. I feel the appropriate answer may well be different on the blogosphere, in a town hall meeting, at a dinner party, on a date or here on this Q&A site. So my question is, do we, as a community, have an official position on: mocking and/or hostile answers to the original poster? mocking and/or hostile comments about real world people? mocking and/or hostile comments about real world ideas? | "Bunch of dicks" equals a failed site. It's really as simple as that. Be nice. Treat others with the same respect you’d want them to treat you. We’re all here to learn together. Be tolerant of others who may not know everything you know. It's not optional or reserved for people you agree with; it is a basic tenet of the site. Any hostile behavior or ad hominem attacks should not be tolerated. I would seriously consider applying that tenet EQUALLY to #2 and #3: The community should reject and down-vote disproportionate, mocking behavior towards any opposing ideas or people in the guise of making a VALID argument. If you want this site to be successful, it has to be about objective, factual information. There's a " Back It Up! Principle " on these sites which has to apply ten-fold to a site like this … and it has to apply to BOTH sides of the issues. If you want to make unsubstantiated claims, expect to be called on it. But just as equally, if you want to rant and rave and do a bunch of hand-waving as the skeptic, expect to be EQUALLY be called out about it as well. Otherwise, this site will fail. Trust me on this one: Users will leave this site in droves if its primary purpose is for members to pat each other on the back and tell each other how smart they are. That type of clique-ish behavior starts when "the best quip" or "best put-down" curries favor and popularity from the community. Don't encourage those activities with your support. The best way to respond is with a polite "We don't do that here." Let's keep the questions (and answers) canonical and authoritative. If you can resist the urge to browbeat those who hold opposing ideas (whether they're on this site or not), this site will thrive. | {
"source": [
"https://skeptics.meta.stackexchange.com/questions/13",
"https://skeptics.meta.stackexchange.com",
"https://skeptics.meta.stackexchange.com/users/23/"
]
} |
63 | At some point we will have to decide if religious questions are on-topic, and where exactly we will draw the line between on-topic and off-topic questions. There is a significant overlap between the skeptic and atheist communities, so these questions will inevitably be asked here. I'll put up some example questions to have some basis to discuss this: What is the evidence for the existence of God? Is there a soul? How can the christian god be benevolent if there is so much suffering in the world? Is the world only 6000 years old as written in the bible? Does faith healing work? | I would personally consider purely religious questions off-topic here. We can't extend our focus indefinitely, and many of those questions are more about faith than about science. Religion is a topic many people care deeply about, and allowing those questions could cause unnecessary friction. Questions about claims that relate to the natural world, made by religious groups or people, I would consider on-topic. If the religion or its proponents make claims about science, those claims deserve the appropriate scrutiny. This would include, for example, questions about creationism and faith healing. Those are fair game, even if we decide religion is off-topic. Those kinds of questions should not be able to hide behind a no-religion rule. So, from my example questions I would consider 1-3 off-topic, and 4-5 on-topic. | {
"source": [
"https://skeptics.meta.stackexchange.com/questions/63",
"https://skeptics.meta.stackexchange.com",
"https://skeptics.meta.stackexchange.com/users/5/"
]
} |
367 | New users with less than 10 reputation can only add one link per post . As we want to encourage users to cite their claims, this restriction can be counterproductive sometimes. It would be nice if that part of the new user restrictions could be removed on Skeptics. As an example for the irritation this feature causes see this answer , the author posted most of the answer on her blog, as she had too many links in her answer. I don't really see the point of this restriction, spam is spam, no matter if there is one link to the spammer's site, or 10 links. It doesn't prevent the spammers from spamming, and even as a damage-control mechanism it really doesn't do anything useful. We expect users to cite their answers here, the kind of answer we really like to see here just can't be done with two links. We tell the users at many points that they should add cites for their claims, and when they do it they are rewarded by the notice that they have inserted too many links. This restriction is very hostile on Skeptics as a good answer here almost always needs more than 1 or 2 links. This leads to us commenting on first answers with a request for more references, which the users cannot easily comply until they get upvotes, giving them a bad first impression of our site. But if people vote correctly, they won't get upvotes because they are missing important references. Of course they could insert the references unlinked, but that leads to more work as another user has to edit the post, and it is also unreasonable to expect a new user to know how to work around those restrictions. | This has been changed, the limit for the number of links for new users is 50 on Skeptics right now instead of 2. | {
"source": [
"https://skeptics.meta.stackexchange.com/questions/367",
"https://skeptics.meta.stackexchange.com",
"https://skeptics.meta.stackexchange.com/users/5/"
]
} |
1,001 | This answer was down voted because of the choice of units. The answer was giving a calculation in imperial units, rather than SI units. Can we get a consensus on the use of units of measurement , please? Is the down vote of an answer due to its use of non-metric units OK? Is it desirable? Should such an answer be edited? Or should we, on the other hand, prefer imperial units or some other system? | Units should preferably be SI, being an international standard. The metric system as a fallback is acceptable, and it is preferable in situations where it’s more common (e.g. minutes, hours, days, months or years instead of seconds for long durations). The rationale is simple: imperial units (or any other non-metric system) are not used outside the US and virtually not understood outside the US, the UK and perhaps Down Under. While the language on these boards is exclusively English, the audience is still international and imperial units should be considered too localised. For instance, I have no idea, not even a ballpark estimate, of how much a gallon is, and imagine that most people outside the US have the same problem. SI, on the other hand, is an international standard and the standard for scientific communication (just like English). It should be universally understood, even in the US. I propose the following guidelines: When quoting from elsewhere, preserve the original units , but supply a translation. For everything else, use SI or the metric system . Edit existing answers to supply SI or metric units, preserving the original author’s notation where necessary (see above). | {
"source": [
"https://skeptics.meta.stackexchange.com/questions/1001",
"https://skeptics.meta.stackexchange.com",
"https://skeptics.meta.stackexchange.com/users/82/"
]
} |