source_id | question | response | metadata |
---|---|---|---|
463,847 | I get a notification that says "Additional Multimedia Codecs Required" from Gnome Software, if I try to close it, it reappears, if I kill gnome-software, it just restarts instantly and displays the same message, I can't really uninstall Gnome, because I need it for testing things. | I have the same problem here (on Ubuntu/KDE, if that matters). I can't find out who is trying to get those codecs ( ps fx just says it's started by systemd ), and I can't find out which package it is trying to install (just a useless sad smiley in a window) A quick trick to at least stop the notifications: instead of killing gnome-software , STOP it: pkill -STOP gnome-software The process will stay alive but won't be able to do anything, so the system won't attempt restarting it, and yet it won't be able to push those notifications. After you do this, close the remaining notifications and they won't come back (until you reboot, that is). Do pkill -CONT gnome-software to restart it if needed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/463847",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/306623/"
]
} |
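A quick way to confirm the stop/continue trick worked is to check the process state; this is a minimal sketch assuming procps-style ps and pgrep are available:
```sh
pkill -STOP gnome-software
# a "T" in the STAT column means the process is stopped, not dead
ps -o stat=,comm= -p "$(pgrep -d, -x gnome-software)"
# later, if you need Gnome Software again:
pkill -CONT gnome-software
```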
463,855 | Relatively often there are new kernel upgrades. But every time I install them, I do not see any difference between before and after. What do they exactly provide? How can I feel their presence on me? Are they really needed? | It’s a good thing that you don’t notice any difference when upgrading the kernel; the Linux kernel is always supposed to be backwards-compatible. Now obviously there are differences. You can get some idea by reading the “human” changelogs on Kernel Newbies; the changes tend to fall under four large headings: security fixes (including fixes for high-profile issues such as the Spectre variants, Meltdown etc.) new hardware support, or improved hardware support new features (new file systems etc.) refactoring, e.g. improvements to the design and architecture, or performance improvements In most cases you’ll only notice changes which enable previously-unsupported hardware which you happen to have, or enable new features on your hardware. Other changes will be invisible, either because they’re supposed to be (security fixes and refactoring) or because they require support from applications or libraries before they make any difference. In some cases even improved hardware support won’t be apparent immediately; for example, improved OpenGL support in your GPU’s drivers requires support in Mesa too. The presence of security fixes in nearly all kernel releases means they really are needed: you should be tracking either the latest version in general, or the latest version in whatever stable branch you’re using (assuming it’s supported). The safest approach is to use your distribution’s kernel, again assuming you’re using a supported release of your distribution. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/463855",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/202490/"
]
} |
463,861 | unset myVariable i;while [ -z "$i" ]; do read myVariable; echo "myVariable : '$myVariable'"; i=foo;done;echo "myVariable : '$myVariable'"
(the unset is there to allow replaying the command) Press any key + ENTER, you'll get:
myVariable : '[what you typed]'
myVariable : '[what you typed]'
The value of myVariable exists outside of the while loop. Now try:
tmpFile=$(mktemp);echo -e 'foo\nbar\nbaz' >> "$tmpFile"
while read myVariable; do otherVariable=whatever; echo "myVariable : '$myVariable', otherVariable : '$otherVariable'";done < "$tmpFile";echo "myVariable : '$myVariable', otherVariable : '$otherVariable'";rm "$tmpFile"
you'll get:
myVariable : 'foo', otherVariable : 'whatever'
myVariable : 'bar', otherVariable : 'whatever'
myVariable : 'baz', otherVariable : 'whatever'
myVariable : '', otherVariable : 'whatever'
The value of myVariable is lost when leaving the loop. Why is there a different behaviour? Is there a scope trick I'm not aware of? NB: running GNU bash, version 4.4.12(1)-release (x86_64-pc-linux-gnu) | Regarding "while read myVariable; do" and "The value of myVariable is lost when leaving the loop": No, myVariable has the value it got from the last read. The script reads from the file, until it gets to the position after the last newline. After that, the final read call gets nothing from the file, sets myVariable to the empty string accordingly, and exits with a false value since it didn't see the delimiter (newline). Then the loop ends. You can get a nonempty value from the final read if there's an incomplete line after the last newline:
$ printf 'abc\ndef\nxxx' | { while read a; do echo "got: $a"; done; echo "end: $a"; }
got: abc
got: def
end: xxx
Or use while read a || [ "$a" ]; do ... to handle the final line fragment within the loop body. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/463861",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/197400/"
]
} |
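A minimal sketch of the workaround mentioned at the end of the answer, showing that the || [ "$a" ] test keeps a trailing line fragment inside the loop (the data file name is illustrative):
```sh
printf 'abc\ndef\nxxx' > data.txt          # note: no newline after "xxx"
while IFS= read -r a || [ "$a" ]; do
    echo "got: $a"                         # now prints abc, def AND xxx
done < data.txt
```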
463,890 | I am analyzing one script and I cannot find online what chgrp 0 means in the following line: find $1 -follow -exec chgrp 0 {} So: find $1 takes the parameter, -follow causes find to follow symlinks, -exec executes the command, chgrp 0 changes group ownership — but what does 0 do? {} stands for all items found by the find command. Please correct me if I am wrong with something. | chgrp 0 file will change the group ownership of the file file to the group with GID 0 which in nearly all cases in Linux is the root group (in BSD, this is nearly always wheel ). So your find command will search in the path provided by the first positional parameter ( $1 ) for all filesystem objects contained therein, follow any symbolic links to their targets, and make the group owner of those objects GID 0 , root (or wheel ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/463890",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/193007/"
]
} |
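To round out the quoted command and verify the result, a hedged sketch (GNU stat assumed for the verification step; terminating -exec with + instead of \; only changes how many chgrp processes get spawned):
```sh
find "$1" -follow -exec chgrp 0 {} +   # batch files into few chgrp calls
stat -c '%G %n' "$1"/*                 # spot-check top-level entries: group should read "root" (or "wheel" on BSD)
```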
463,904 | Is there any advantage/disadvantage of initializing the value of a bash variable in the script, either before the main code, or local variables in a function before assigning the actual value to it? Do I need to do something like this:
init(){
    name=""
    name=$1
}
init "Mark"
Is there any risk of variables being initialized with garbage values (if not initialized) and that having a negative effect of the values of the variables? | There is no benefit to assigning an empty string to a variable and then immediately assigning another variable string to it. An assignment of a value to a shell variable will completely overwrite its previous value. There is, to my knowledge, no recommendation that says that you should explicitly initialize variables to empty strings. In fact, doing so may mask errors under some circumstances (errors that would otherwise be apparent if running under set -u , see below). An unset variable, unused since the start of a script or explicitly unset by running the unset -v command on it, will have no value. The value of such a variable will be nothing. If used as "$myvariable" , you will get the equivalent of "" , and you would never get "garbage data". If the shell option nounset is set with either set -o nounset or set -u , then referencing an unset variable will cause the shell to produce an error (and a non-interactive shell would terminate):
$ set -u
$ unset -v myvariable
$ echo "$myvariable"
/bin/sh: myvariable: parameter not set
or, in bash :
$ set -u
$ unset -v myvariable
$ echo "$myvariable"
bash: myvariable: unbound variable
Shell variables will be initialized by the environment if the name corresponds to an existing environment variable. If you expect that you are using a variable that may be initialized by the environment in this way (and if it's unwanted), then you may explicitly unset it before the main part of your script:
unset -v myvariable # unset so that it doesn't inherit a value from the environment
... which would also remove it as an environment variable, or you may simply ignore its initial value and overwrite it with an assignment (which would make the environment variable change value too). You would never encounter uninitialized garbage in a shell variable (unless, as stated, that garbage already existed in an environment variable by the same name). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/463904",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/245789/"
]
} |
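A short sketch tying the answer's points together — set -u makes a genuinely unset variable fatal, while a plain assignment needs no prior empty-string initialization:
```sh
#!/bin/sh
set -u               # referencing an unset variable is now an error
unset -v name        # drop any value inherited from the environment
# echo "$name"       # would abort here: "name: parameter not set"
name=$1              # first assignment; no name="" needed beforehand
echo "$name"
```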
463,913 | I am running under Debian stable, Cinnamon DE and I have some files that I would like to delete with a command line (for now I delete these files with Nemo). For example, these .txt files begin with '?' in the shell and in Nemo this '?' is replaced by a carriage return:
$@debian: ls
ssolveIncpUL46pK ?ssolveIncpUL46pK.txt
I tried:
rm ?ss*
rm \?ss*
rm \ ss*
| The character is not a question mark. The ls utility will replace non printable characters with ? . It is further unclear whether the non printable character really is the first character in the filename or whether there may be one or several spaces before that. Would you want to delete both those files, you could match the "bad part" with * and then specify the rest of the visible filename more closely:
rm -i ./*ssolve*
This would first expand the given pattern to all the filenames matching it, and then rm would remove them. Be more specific and specify a longer part of the filename if there are files that you don't want to delete that matches the above short pattern, e.g. with
rm -i ./*ssolveIncpUL46pK*
This is assuming that you are located in the same directory as the files that you want to delete. The -i option to rm makes it ask for confirmation before actually deleting anything. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/463913",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/186283/"
]
} |
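Before deleting, it can help to see the real bytes behind the ? that ls prints; a hedged sketch (ls -b is GNU ls, the od trick is portable, and the rm pattern is the one from the answer):
```sh
ls | LC_ALL=C od -c | head      # dump the directory listing byte by byte
ls -b                           # GNU ls: show C-style escapes like \r or \n instead of ?
rm -i ./*ssolveIncpUL46pK*      # then delete, confirming each match
```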
463,917 | I tried to restrict the number of restarts of a service (in a container). The OS version is CentOs 7.5 , the service file is pretty much as below (removed some parameters for reading convenience). It should be pretty straight forward as some other posts pointed out (Post of Server Fault restart limit 1 , Post of Stack Overflow restart limit 2 ). Yet StartLimitBurst and StartLimitIntervalSec never work for me. I tested it in several ways: I checked the service PID and killed the service with kill -9 **** several times. The service always gets restarted after 20s! I also tried to mess up the service file so that the container never runs. Still, it doesn't work, the service just keeps restarting. Any idea?
[Unit]
Description=Hello Fluentd
After=docker.service
Requires=docker.service
StartLimitBurst=2
StartLimitIntervalSec=150s
[Service]
EnvironmentFile=/etc/environment
ExecStartPre=-/usr/bin/docker stop "fluentd"
ExecStartPre=-/usr/bin/docker rm -f "fluentd"
ExecStart=/usr/bin/docker run fluentd
ExecStop=/usr/bin/docker stop "fluentd"
Restart=always
RestartSec=20s
SuccessExitStatus=143
[Install]
WantedBy=multi-user.target
| StartLimitIntervalSec= was added as part of systemd v230. In systemd v229 and below, you can only use StartLimitInterval= . You will also need to put StartLimitInterval= and StartLimitBurst= in the [Service] section - not the [Unit] section. To check your systemd version on CentOS, run rpm -q systemd . If you ever upgrade to systemd v230 or above, the old names in the [Service] section will continue to work. Source: https://lists.freedesktop.org/archives/systemd-devel/2017-July/039255.html You can have this problem without seeing any error at all, because systemd ignores unknown directives. systemd assumes that many newer directives can be ignored and still allow the service to run. It is possible to manually check a unit file for unknown directives. At least it seems to work on recent systemd:
$ systemd-analyze verify foo.service
/etc/systemd/system/foo.service:9: Unknown lvalue 'FancyNewOption' in section 'Service'
| {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/463917",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/306685/"
]
} |
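A sketch of the fix for the pre-v230 systemd shipped with CentOS 7, with the old directive names moved into [Service], plus the checks from the answer; the unit path is hypothetical:
```sh
# In /etc/systemd/system/fluentd.service, use the old names, under [Service]:
#   [Service]
#   StartLimitInterval=150s
#   StartLimitBurst=2
rpm -q systemd                                               # confirm the installed version
sudo systemctl daemon-reload
systemd-analyze verify /etc/systemd/system/fluentd.service   # flags unknown directives
```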
463,969 | I installed a snmpd into a CentOS 7 minimal installation for system parameters search, for instance: snmpget -v 2c -c public 127.0.0.1 .1.3.6.1.2.1.2.2.1.2 for the above command I get the following result: IF-MIB::ifDescr = No Such Object available on this agent at this OID when i execute: snmpwalk -v 2c -c public 127.0.0.1 to check if the IF-MIB is loaded by snmpd, i get the following result: SNMPv2-MIB::sysDescr.0 = STRING: Linux vm_test.whatever.com 3.10.0-862.6.3.el7.x86_64 #1 SMP Tue Jun 26 16:32:21 UTC 2018 x86_64 SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.10 DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (90641) 0:15:06.41 SNMPv2-MIB::sysContact.0 = STRING: Root <root@localhost> (configure /etc/snmp/snmp.local.conf) SNMPv2-MIB::sysName.0 = STRING: vm_test.whatever.com SNMPv2-MIB::sysLocation.0 = STRING: Unknown (edit /etc/snmp/snmpd.conf) SNMPv2-MIB::sysORLastChange.0 = Timeticks: (4) 0:00:00.04 SNMPv2-MIB::sysORID.1 = OID: SNMP-MPD-MIB::snmpMPDCompliance SNMPv2-MIB::sysORID.2 = OID: SNMP-USER-BASED-SM-MIB::usmMIBCompliance SNMPv2-MIB::sysORID.3 = OID: SNMP-FRAMEWORK- MIB::snmpFrameworkMIBCompliance SNMPv2-MIB::sysORID.4 = OID: SNMPv2-MIB::snmpMIB SNMPv2-MIB::sysORID.5 = OID: TCP-MIB::tcpMIB SNMPv2-MIB::sysORID.6 = OID: IP-MIB::ip SNMPv2-MIB::sysORID.7 = OID: UDP-MIB::udpMIB SNMPv2-MIB::sysORID.8 = OID: SNMP-VIEW-BASED-ACM-MIB::vacmBasicGroup SNMPv2-MIB::sysORID.9 = OID: SNMP-NOTIFICATION-MIB::snmpNotifyFullCompliance SNMPv2-MIB::sysORID.10 = OID: NOTIFICATION-LOG-MIB::notificationLogMIB SNMPv2-MIB::sysORDescr.1 = STRING: The MIB for Message Processing and Dispatching. SNMPv2-MIB::sysORDescr.2 = STRING: The management information definitions for the SNMP User-based Security Model. SNMPv2-MIB::sysORDescr.3 = STRING: The SNMP Management Architecture MIB. SNMPv2-MIB::sysORDescr.4 = STRING: The MIB module for SNMPv2 entities SNMPv2-MIB::sysORDescr.5 = STRING: The MIB module for managing TCP implementations SNMPv2-MIB::sysORDescr.6 = STRING: The MIB module for managing IP and ICMP implementations SNMPv2-MIB::sysORDescr.7 = STRING: The MIB module for managing UDP implementations SNMPv2-MIB::sysORDescr.8 = STRING: View-based Access Control Model for SNMP. SNMPv2-MIB::sysORDescr.9 = STRING: The MIB modules for managing SNMP Notification, plus filtering. SNMPv2-MIB::sysORDescr.10 = STRING: The MIB module for logging SNMP Notifications. SNMPv2-MIB::sysORUpTime.1 = Timeticks: (4) 0:00:00.04 SNMPv2-MIB::sysORUpTime.2 = Timeticks: (4) 0:00:00.04 SNMPv2-MIB::sysORUpTime.3 = Timeticks: (4) 0:00:00.04 SNMPv2-MIB::sysORUpTime.4 = Timeticks: (4) 0:00:00.04 SNMPv2-MIB::sysORUpTime.5 = Timeticks: (4) 0:00:00.04 SNMPv2-MIB::sysORUpTime.6 = Timeticks: (4) 0:00:00.04 SNMPv2-MIB::sysORUpTime.7 = Timeticks: (4) 0:00:00.04 SNMPv2-MIB::sysORUpTime.8 = Timeticks: (4) 0:00:00.04 SNMPv2-MIB::sysORUpTime.9 = Timeticks: (4) 0:00:00.04 SNMPv2-MIB::sysORUpTime.10 = Timeticks: (4) 0:00:00.04 HOST-RESOURCES-MIB::hrSystemUptime.0 = Timeticks: (872972) 2:25:29.72 HOST-RESOURCES-MIB::hrSystemUptime.0 = No more variables left in this MIB View (It is past the end of the MIB tree) the output tells me that IF-MIB is not being checked, but if execute the command: snmptranslate -Dinit_mib .1.3.2>&1 | grep MIBDIR to check the mibdirs (directories) and MIB's found (Seen MIBS) i get the following result: registered debug token init_mib, 1 init_mib: Seen MIBDIRS: Looking in '/root/.snmp/mibs:/usr/share/snmp/mibs' for mib dirs ... 
init_mib: Seen MIBS: Looking in ':HOST-RESOURCES-MIB:HOST-RESOURCES- TYPES:UCD-DISKIO-MIB:TCP-MIB:UDP-MIB:MTA-MIB:NETWORK-SERVICES-MIB:SCTP- MIB:RMON-MIB:EtherLike-MIB:LM-SENSORS-MIB:SNMPv2-MIB:IF-MIB:IP- MIB:NOTIFICATION-LOG-MIB:DISMAN-EVENT-MIB:DISMAN-SCHEDULE-MIB:UCD-SNMP- MIB:UCD-DEMO-MIB:SNMP-TARGET-MIB:NET-SNMP-AGENT-MIB:SNMP-MPD-MIB:SNMP- USER-BASED-SM-MIB:SNMP-FRAMEWORK-MIB:SNMP-VIEW-BASED-ACM-MIB:SNMP- COMMUNITY-MIB:IPV6-ICMP-MIB:IPV6-MIB:IPV6-TCP-MIB:IPV6-UDP-MIB:IP-FORWARD- MIB:NET-SNMP-PASS-MIB:NET-SNMP-EXTEND-MIB:UCD-DLMOD-MIB:SNMP-NOTIFICATION- MIB:SNMPv2-TM:NET-SNMP-VACM-MIB' for mib files ... init_mib: Seen PREFIX: Looking in '.1.3.6.1.2.1' for prefix .. and if you look carefully, the IF-MIB is there tagged as Seen Mibs. Why is it not showing up in the snmpwalk command? and why does the OID related to the IF-MIB doesn't exist in this agent? is this something permission related? OS related? | The SNMP daemon upon installation in CentOS is configured by default to answer to queries of a restricted MIB tree view using the "public" community for security reasons. As configured by default, the default "public" MIB (sub)tree allowed views are only .1.3.6.1.2.1.1 and .1.3.6.1.2.1.25.1.1 ; if you look closely the IF-MIB address space is .1.3.6.1.2.1.2. So querying objects on that MIB address space is not allowed by default. It also explains why that snmpwalk command of yours shows only a very restricted view. Consequently, to get the SNMP daemon/service answering to your queries, you have firstly to configure a new view (and for security reasons better also a new community) on the configuration file /etc/snmp/snmpd.conf . We shall then configure a "private" community for security reasons, and widen the MIB tree space which can be queried. As such, add to /etc/snmp/snmpd.conf rocommunity private 127.0.0.1 .1 Where 127.0.0.1 is the IP address which can make queries, and .1 the whole MIB tree. ro community also certifies you can only make read queries, which are more secure. After configuring the snmpd.conf file, you have to restart the SNMP service, as in: sudo service snmpd restart or sudo systemctl restart snmpd Now for the query. If you are not asking for a MIB leaf node, you cannot use snmpget . You have to use snmpwalk for it to walk the MIB tree as in: $ snmpwalk -v 2c -c private 127.0.0.1 .1.3.6.1.2.1.2.2.1.2IF-MIB::ifDescr.1 = STRING: loIF-MIB::ifDescr.2 = STRING: eth0IF-MIB::ifDescr.3 = STRING: eth1IF-MIB::ifDescr.4 = STRING: eth2 On the other hand, if you need to query a leaf node of the MIB tree, for instance, your second interface in the system, you do: $ snmpget -v 2c -c private 127.0.0.1 .1.3.6.1.2.1.2.2.1.2.2IF-MIB::ifDescr.2 = STRING: eth0 PS Obviously in production systems, you call your community name something other than private. PS2. The fact that you install a MIB file, is that you are installing dictionaries that translate numbers to readable text for humans and scripts/network monitoring software alike. Not having a MIB installed does not prevent from querying a specific MIB subtree in numeric form if the security context for accessing that SNMP community allows it | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/463969",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/275866/"
]
} |
464,010 | Suppose I type and run the following command: sha256sum ubuntu-18.04.1-desktop-amd64.iso After a delay, this outputs the following: 5748706937539418ee5707bd538c4f5eabae485d17aa49fb13ce2c9b70532433 ubuntu-18.04.1-desktop-amd64.iso Then, I realize that I should have typed the following command to more rapidly assess whether the SHA‐256 hash matches: sha256sum ubuntu-18.04.1-desktop-amd64.iso | grep 5748706937539418ee5707bd538c4f5eabae485d17aa49fb13ce2c9b70532433 Is there a way to act on the first output without using the sha256sum command to verify the checksum a second time (i.e., to avoid the delay that would be caused by doing so)? Specifically: I'd like to know how to do this using a command that does not require copy and pasting of the first output's checksum (if it's possible). I'd like to know the simplest way to do this using a command that does require copy and pasting of the first output's checksum. (Simply attempting to use grep on a double‐quoted pasted checksum (i.e., as a string) doesn't work.) | You can create a simple function in your .bashrc or .zshrc configurations and run it in the following way: sha256 <expected-sha-256-sum> <name-of-the-file> It will compare the expected sha256 sum with the actual one in a single command. The function is:
sha256() {
  printf '%s %s\n' "$1" "$2" | sha256sum --check
}
Please find more details here . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/464010",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/283615/"
]
} |
464,130 | sudo su - will elevate any user (sudoer) with root privilege. su - anotheruser will switch to the user environment of the target user, with target user privileges. What does sudo su - username mean? | Just repeating both @dr01 and @OneK's answers because they are both missing some fine details: su - username asks the system to start a new login session for the specified user. The system will require the password for the user "username" (even if it's the same as the current user). sudo su - username will do the same, but first ask the system to be elevated to super user mode, after which su will not ask for "username"'s password because a super user is allowed to change into any other user without knowing their password. That being said, sudo in itself enforces security by checking the /etc/sudoers file to make sure the current user is allowed to gain super user permissions, and possibly verifying the current user's password. I would also like to comment that to gain a super user login session, please use sudo -i (or sudo -s ) as sudo su - is just silly: it's asking sudo to give super user permissions to su so that su can start a login shell for the super user - when sudo can achieve the same result by itself. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/464130",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/62659/"
]
} |
464,184 | I used to think that file changes are saved directly into the disk, that is, as soon as I close the file and decide to click/select save. However, in a recent conversation, a friend of mine told me that is not usually true; the OS (specifically we were talking about Linux systems) keeps the changes in memory and it has a daemon that actually writes the content from memory to the disk. He even gave the example of external flash drives: these are mounted into the system (copied into memory) and sometimes data loss happens because the daemon did not yet save the contents into the flash memory; that is why we unmount flash drives. I have no knowledge about operating systems functioning, and so I have absolutely no idea whether this is true and in which circumstances. My main question is: does this happen like described in Linux/Unix systems (and maybe other OSes)? For instance, does this mean that if I turn off the computer immediately after I edit and save a file, my changes will be most likely lost? Perhaps it depends on the disk type -- traditional hard drives vs. solid-state disks? The question refers specifically to filesystems that have a disk to store the information, even though any clarification or comparison is well received. | if I turn off the computer immediately after I edit and save a file, my changes will be most likely lost? They might be. I wouldn't say "most likely", but the likelihood depends on a lot of things. An easy way to increase performance of file writes, is for the OS to just cache the data, tell (lie to) the application the write went through, and then actually do the write later. This is especially useful if there's other disk activity going on at the same time: the OS can prioritize reads and do the writes later. It can also remove the need for an actual write completely, e.g., in the case where a temporary file is removed quickly afterwards. The caching issue is more pronounced if the storage is slow. Copying files from a fast SSD to a slow USB stick will probably involve a lot of write caching, since the USB stick just can't keep up. But your cp command returns faster, so you can carry on working, possibly even editing the files that were just copied. Of course caching like that has the downside you note, some data might be lost before it's actually saved. The user will be miffed if their editor told them the write was successful, but the file wasn't actually on the disk. Which is why there's the fsync() system call , which is supposed to return only after the file has actually hit the disk. Your editor can use that to make sure the data is fine before reporting to the user that the write succeeded. I said, "is supposed to", since the drive itself might tell the same lies to the OS and say that the write is complete, while the file really only exists in a volatile write cache within the drive. Depending on the drive, there might be no way around that. In addition to fsync() , there are also the sync() and syncfs() system calls that ask the system to make sure all system-wide writes or all writes on a particular filesystem have hit the disk. The utility sync can be used to call those. Then there's also the O_DIRECT flag to open() , which is supposed to "try to minimize cache effects of the I/O to and from this file." 
Removing caching reduces performance, so that's mostly used by applications (databases) that do their own caching and want to be in control of it.( O_DIRECT isn't without its issues, the comments about it in the man page are somewhat amusing.) What happens on a power-out also depends on the filesystem. It's not just the file data that you should be concerned about, but the filesystem metadata. Having the file data on disk isn't much use if you can't find it. Just extending a file to a larger size will require allocating new data blocks, and they need to be marked somewhere. How a filesystem deals with metadata changes and the ordering between metadata and data writes varies a lot. E.g., with ext4 , if you set the mount flag data=journal , then all writes – even data writes – go through the journal and should be rather safe. That also means they get written twice, so performance goes down. The default options try to order the writes so that the data is on the disk before the metadata is updated. Other options or other filesystem may be better or worse; I won't even try a comprehensive study. In practice, on a lightly loaded system, the file should hit the disk within a few seconds. If you're dealing with removable storage, unmount the filesystem before pulling the media to make sure the data is actually sent to the drive, and there's no further activity. (Or have your GUI environment do that for you.) | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/464184",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96121/"
]
} |
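On the command line the flushing behaviour described above can be requested explicitly; a hedged sketch assuming GNU coreutils (sync with a file argument needs coreutils >= 8.24), with illustrative filenames:
```sh
cp big.iso /media/usb/
sync /media/usb/big.iso                          # fsync() just that file before returning
sync                                             # or flush everything system-wide
dd if=big.iso of=/media/usb/big.iso conv=fsync   # dd variant: fsync the output before exiting
```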
464,217 | I have a fresh Alpine Linux 3.8.0 installed on a local disk, dual booted with Ubuntu 18.04. While trying to solve some GUI localization issue, I've entered a wrong keymap in setup-keymap . Sadly, after rebooting, this caused all typed letters to be displayed as squares, for example: Alpine login: øøøøø123 My username and password consist of lowercase English letters and digits. When typing letters, the result is garbage, but digits work fine. Now, because of this, I'm not able to login again and revert the keymap setting. Previously, the keymap was set to us , and everything (almost) was working fine. How can I revert the keymap setting back to us , without having to login to Alpine? Thanks in advance! | | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/464217",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/246490/"
]
} |
464,225 | Yes, yes, I know you're probably like "Hey, there are hundreds of other people asking this same question", but that's not true; I'm not trying to do something like this:
foo="example1"
bar="example2"
foobar="$foo$bar"
I'm trying to do this:
foo="example1"
$foo="example2"
But whenever I attempt this, I get an error message that says:
bash: example1=example2: command not found
Any suggestions? Is this even possible? | Here are some examples:
declare [-g] "${foo}=example2"
declare -n foo='example1'; foo='example2'
eval "${foo}=example2"
mapfile -t "${foo}" <<< 'example2'
printf -v "${foo}" '%s' 'example2'
IFS='' read -r "${foo}" <<< 'example2'
typeset [-g] "${foo}=example2"
As other users said, be careful with eval and with indirect assignments in general. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/464225",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/306934/"
]
} |
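A runnable sketch of two of the listed methods (bash; declare -n needs bash >= 4.3):
```sh
foo="example1"
printf -v "$foo" '%s' "example2"   # assigns to the variable *named by* $foo
echo "$example1"                   # -> example2

declare -n ref="$foo"              # nameref: ref now aliases example1
ref="example3"
echo "$example1"                   # -> example3
```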
464,232 | How can I disable word-splitting during command substitution? Here's a simplified example of the problem:
4:00PM /Users/paymahn/Downloads ❯❯❯ cat test.txt
hello\nworld
4:00PM /Users/paymahn/Downloads ❯❯❯ echo $(cat test.txt )
hello
world
4:00PM /Users/paymahn/Downloads ❯❯❯ echo "$(cat test.txt )"
hello
world
4:01PM /Users/paymahn/Downloads ❯❯❯ echo "$(cat "test.txt" )"
hello
world
What I want is for echo $(cat test.txt) (or some variant of that which includes command substitution) to output hello\nworld . I found https://www.gnu.org/software/bash/manual/html_node/Command-Substitution.html which says at the bottom: "If the substitution appears within double quotes, word splitting and filename expansion are not performed on the results." but I can't seem to make sense of that. I would have thought that one of the examples I already tried conformed to that rule but I guess not. | Having a literal \n get changed to a newline isn't about word splitting, but echo processing the backslash. Some versions of echo do that, some don't... Bash's echo doesn't process backslash-escapes by default (without the -e flag or xpg_echo option), but e.g. dash's and Zsh's versions of echo do.
$ cat test.txt
hello\nworld
$ bash -c 'echo "$(cat test.txt)"'
hello\nworld
$ zsh -c 'echo "$(cat test.txt)"'
hello
world
Use printf instead:
$ bash -c 'printf "%s\n" "$(cat test.txt)"'
hello\nworld
$ zsh -c 'printf "%s\n" "$(cat test.txt)"'
hello\nworld
See also: Why is printf better than echo? Regardless of that, you should put the quotes around the command substitution to prevent word splitting and globbing in sh-like shells. (zsh only does word splitting (not globbing) upon command substitution (not upon parameter or arithmetic expansions) except in sh-mode.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/464232",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/306941/"
]
} |
464,361 | /dev/log is the default entry for system logging. In the case of a systemd implementation (this case) it's a symlink to /run/systemd/journal/dev-log . It used to be the receiving end of a unix socket handled by the syslog daemon.
~$ echo "hello" > /dev/log
bash: /dev/log: No such device or address
~$ fuser /dev/log
~$ ls -la /dev/log
lrwxrwxrwx 1 root root 28 Aug 23 07:13 /dev/log -> /run/systemd/journal/dev-log
What is the explanation for the error that pops up when you try to write to it, and why isn't there a process holding that file (the output from fuser /dev/log is empty)? The logging does work normally on the system.
~$ logger test
~$ journalctl --since=-1m
-- Logs begin at Thu 2018-05-24 04:23:46 CEST, end at Thu 2018-08-23 13:07:25 CEST. --
Aug 23 13:07:24 alan-N551JM alan[12962]: test
Extending with comment suggestions:
~$ sudo fuser /dev/log
/run/systemd/journal/dev-log: 1 311
~$ ls -lL /dev/log
srw-rw-rw- 1 root root 0 Aug 23 07:13 /dev/log | To add some additional info to the accepted (correct) answer, you can see the extent to which /dev/log is simply a UNIX socket by writing to it as such:
lmassa@lmassa-dev:~$ echo 'This is a test!!' | nc -u -U /dev/log
lmassa@lmassa-dev:~$ sudo tail -1 /var/log/messages
Sep 5 16:50:33 lmassa-dev journal: This is a test!!
On my system, you can see that the journald process is listening to this socket:
lmassa@lmassa-dev:~$ sudo lsof | grep '/dev/log'
systemd 1 root 29u unix 0xffff89cdf7dd3740 0t0 1445 /dev/log
systemd-j 564 root 5u unix 0xffff89cdf7dd3740 0t0 1445 /dev/log
It got my message and did its thing with it (i.e. appending to the /var/log/messages file). Note that because the syslog protocol that journald is speaking expects datagrams (think UDP), not streams (think TCP), if you simply try writing into the socket directly with nc you'll see an error in the syscall (and no log show up). Compare:
lmassa@lmassa-dev:~$ echo 'This is a test!!' | strace nc -u -U /dev/log 2>&1 | grep connect -B10 | egrep '^(socket|connect)'
socket(AF_UNIX, SOCK_DGRAM, 0) = 4
connect(4, {sa_family=AF_UNIX, sun_path="/dev/log"}, 10) = 0
lmassa@lmassa-dev:~$ echo 'This is a test!!' | strace nc -U /dev/log 2>&1 | grep connect -B10 | egrep '^(socket|connect)'
socket(AF_UNIX, SOCK_STREAM, 0) = 3
connect(3, {sa_family=AF_UNIX, sun_path="/dev/log"}, 10) = -1 EPROTOTYPE (Protocol wrong type for socket)
Note I elided some syscalls for clarity. The important point here is that the first call specified SOCK_DGRAM, which is what the /dev/log socket expects (since this is how the socket /dev/log was created originally), whereas the second did not, so we got an error. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/464361",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29436/"
]
} |
464,366 | I have a file index.txt with data like:
2013/10/13-121 f19f26f09691c2429cb33456cf64f867
2013/10/17-131 583d3936c814c1bf4e663fe1688fe4a3
2013/10/20-106 0f7082e2bb7224aad0bd7a6401532f56
2013/10/10-129 33f7592a4ad22f9f6d63d6a17782d023
......
And a second file in CSV format with data like:
2013/10/13-121, DLDFWSXDR, 15:33, 18:21, Normal, No, 5F2B0121-4F2B-481D-B79F-2DC827B85093/22551965
2013/10/17-131, DLDFWXDR, 11:05, 15:08, Normal, No, 5F2B0121-4F2B-481D-B79F-2DC827B85093/22551965
2013/10/20-106, DLSDXDR, 12:08, 13:06, Normal, No, 5F2B0121-4F2B-481D-B79F-2DC827B85093/22551965
2013/10/10-129, DLXDAE, 15:33, 18:46, Normal, No, 5F2B0121-4F2B-481D-B79F-2DC827B85093/22551965
Now I need a solution to add the MD5SUM at the end of the CSV when the ID (first column) is a match in the index file or vice versa, so at the end the file should look like this:
2013/10/13-121, DLDFWSXDR, 15:33, 18:21, Normal, No, 5F2B0121-4F2B-481D-B79F-2DC827B85093/22551965, f19f26f09691c2429cb33456cf64f867
2013/10/17-131, DLDFWXDR, 11:05, 15:08, Normal, No, 5F2B0121-4F2B-481D-B79F-2DC827B85093/22551965, 583d3936c814c1bf4e663fe1688fe4a3
2013/10/20-106, DLSDXDR, 12:08, 13:06, Normal, No, 5F2B0121-4F2B-481D-B79F-2DC827B85093/22551965, 0f7082e2bb7224aad0bd7a6401532f56
2013/10/10-129, DLXDAE, 15:33, 18:46, Normal, No, 5F2B0121-4F2B-481D-B79F-2DC827B85093/22551965, 33f7592a4ad22f9f6d63d6a17782d023 | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/464366",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/297313/"
]
} |
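One common approach to this kind of keyed join is a two-pass awk; this is only a sketch assuming the exact formats shown in the question (whitespace-separated index file, comma-plus-space CSV), with hypothetical filenames:
```sh
awk 'NR==FNR { md5[$1] = $2; next }      # pass 1: index.txt -> key/checksum map
     { key = $1; sub(/,$/, "", key)      # pass 2: first CSV field, minus its trailing comma
       print $0 ", " md5[key] }' index.txt file.csv
```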
464,392 | If I run a command like this one: find / -inum 12582925 Is there a chance that this will list two files on separate mounted filesystems (from separate partitions) that happen to have been assigned the same number? Is the inode number unique on a single filesystem, or across all mounted filesystems? | An inode number is only unique on a single file system. One example you'll run into quickly is the root inode on ext2/3/4 file systems, which is 2:
$ ls -id / /home
2 / 2 /home
If you run (assuming GNU find )
find / -printf "%i %p\n" | sort -n | less
on a system with multiple file systems you'll see many, many duplicate inode numbers (although you need to take the output with a pinch of salt since it will also include hard links). When you're looking for a file by inode number, you can use find 's -xdev option to limit its search to the file system containing the start path, if you have a single start path:
find / -xdev -inum 12582925
will only find files with inode number 12582925 on the root file system. ( -xdev also works with multiple start paths, but then its usefulness is reduced in this particular case.) It's the combination of inode number and device number ( st_dev and st_ino in the stat structure, %D %i in GNU find 's -printf ) that identifies a file uniquely (on a given system). If two directory entries have the same inode and dev number, they refer to the same file (though possibly through two different mounts of a same file system for bind mounts). Some find implementations also have a -samefile predicate that will find files with the same device and inode number. Most [ / test implementations also have a -ef operator to check that two file paths refer to the same file (after symlink resolution though). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/464392",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9041/"
]
} |
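Two quick ways to compare the device+inode pair the answer describes, sketched with placeholder paths (stat -c is GNU stat; -ef is the common test/[ extension mentioned above):
```sh
stat -c 'dev=%d ino=%i %n' /some/file /other/file   # same pair => same file
[ /some/file -ef /other/file ] && echo "same file"
```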
464,407 | I am using multiple tools developed by the suckless people which are not configured via a config file but via their source code (in this case C) and then simply installed through make install . So I am maintaining my own repos (need continuous changes) of these programs. The question is where should i put these repos? Directorys like /usr or /usr/local/share are for reference purpose. Is it /opt , /srv or should i just collect them somewhere in my home directory? | If you’re installing the software in /usr/local , I would use /usr/local/src — that’s the local variant of /usr/src , of which the FHS says Source code may be placed in this subdirectory, only for reference purposes. with a footnote adding that Generally, source should not be built within this hierarchy. It’s your system though so in my opinion /usr/local/src is fair game. What is the "/usr/local/src" folder meant for? has more on the topic; read this answer in particular. The general idea is to do your work in your home directory, but ensure that the source code to anything installed in /usr/local be at least copied for reference in /usr/local/src , so that local binaries can be rebuilt without needing a specific user’s home directory. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/464407",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/291711/"
]
} |
464,445 | I had a problem with debian testing today when I was running an apt update with the following return:
E: Problem executing scripts APT::Update::Post-Invoke-Success 'if /usr/bin/test -w /var/cache/app-info -a -e /usr/bin/appstreamcli; then appstreamcli refresh-cache > /dev/null; fi'
E: Sub-process returned an error code | After Googling I found some people saying that removing the package appstream will fix the problem. This solution will work but in many cases this approach will remove packages that you may not want to remove. An alternate solution is to comment out the three last lines in the /etc/apt/apt.conf.d/50appstream file like this:
...
#APT::Update::Post-Invoke-Success {
# "if /usr/bin/test -w /var/cache/app-info -a -e /usr/bin/appstreamcli; then appstreamcli refresh-cache > /dev/null | true; fi";
#};
and then save the file and run apt-get update again. After running a system upgrade there is a chance that the package appstream will have an upgrade that can fix this error, and a message like this may appear:
Configuration file '/etc/apt/apt.conf.d/50appstream'
==> Modified (by yourself or by a script) since the installation.
==> The package distributor has released an updated version.
What do you want to do? Your options are:
Y or I: install the version of the maintainer package
N or O: keep the currently installed version
D: Show differences between versions
Z: start a shell to examine the situation
You should say Y to upgrade the file you have modified. I'm posting this to help others that may search for this error. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/464445",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/305113/"
]
} |
464,475 | I want to search multiple strings in two files. If one string is found in both files, then do one thing. If one string is found in only one file, then do another thing. My commands are the following:
####This is for the affirmative sentence in both files
if grep -qw "$users" "$file1" && grep -qw "$users" "$file2"; then
####This is for the affirmative sentence in only one file, and negative for the other one
if grep -qw "$users" "$file1" ! grep -qw "$users" "$file2"; then
Is this a correct way to deny and affirm the statements? P.S. I'm using the KSH shell. Thank you in advance. | Try this:
if grep -wq -- "$user" "$file1" && grep -wq -- "$user" "$file2"; then
    echo "string avail in both files"
elif grep -wq -- "$user" "$file1" "$file2"; then
    echo "string avail in only one file"
fi
grep can search for patterns in multiple files, so no need to use an OR/NOT operator. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/464475",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/294877/"
]
} |
464,486 | I am trying to use rsync (on Linux Mint) to backup to an external USB drive of type msdos (as shown in the drive properties), using a command: rsync -av ~/Documents/rsynctest/ /media/myname/PC/rsynctest --delete However it is copying some files that have not been changed since I last run this command. What is going on here and is there a straight forward solution, without having to reformat the drive? Adding the "i" flag causes the outputs of lines such as: .f...p..... CBCTest/bin/Debug/CBCTest | Try this: if grep -wq -- "$user" "$file1" && grep -wq -- "$user" "$file2" ; then echo "string avail in both files"elif grep -wq -- "$user" "$file1" "$file2"; then echo "string avail in only one file"fi grep can search for patterns in multiple files, so no need to use an OR/NOT operator. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/464486",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/186571/"
]
} |
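The p in the .f...p..... itemized output marks a permissions difference, and msdos/FAT filesystems cannot store Unix permissions (and keep modification times at 2-second resolution), so rsync sees such files as changed on every run. A hedged workaround is to stop syncing attributes the target cannot represent:
```sh
rsync -rtv --modify-window=1 --delete \
    ~/Documents/rsynctest/ /media/myname/PC/rsynctest
# -rtv instead of -av: drops -p/-o/-g/-l/-D, which FAT cannot store;
# --modify-window=1 tolerates FAT's coarse timestamps
```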
464,493 | Is it possible to grep the buffer? For instance if I execute a build process in my VxWorks shell, linux shell, etc, is there a rolling file that contains buffer (e.g. data while scrolling)/previous output? | Try this: if grep -wq -- "$user" "$file1" && grep -wq -- "$user" "$file2" ; then echo "string avail in both files"elif grep -wq -- "$user" "$file1" "$file2"; then echo "string avail in only one file"fi grep can search for patterns in multiple files, so no need to use an OR/NOT operator. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/464493",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/164222/"
]
} |
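Terminal scrollback generally isn't backed by a file you can grep, but you can create one; a minimal sketch for a Linux shell using util-linux script (the build command and log name are placeholders):
```sh
script -c 'make all' build.log   # run the build, recording all terminal output to build.log
grep -n 'error' build.log
```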
464,496 | Does anyone understand why I might get these results? Note the discrepancy in file size between the two commands below:
$ ls -lh gauss_landmarks_0000.npy
-rw-rw-r-- 1 dparks dparks 1.1G Aug 16 12:43 gauss_landmarks_0000.npy
$ du -h gauss_landmarks_0000.npy
20M gauss_landmarks_0000.npy
This occurs on the machine shown below:
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.3 LTS
Release: 16.04
Codename: xenial
The results on my own Linux Mint laptop appear as expected:
$ lsb_release -a
No LSB modules are available.
Distributor ID: LinuxMint
Description: Linux Mint 18.3 Sylvia
Release: 18.3
Codename: sylvia
| It is probably a sparse file. That means that not all blocks are allocated and the file uses much less space than the file size suggests. On read the missing blocks will read as zero. You can also use the -s option to ls to see the allocated size, it should be the same as the size reported by du . Edit: If you have a file that you know or suspect contains many zero bytes but is not sparse, you can use cp --sparse=always to make it sparse, potentially saving a lot of disk space.
cp --sparse=always -p file new_file
| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/464496",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9035/"
]
} |
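A quick sketch to reproduce the ls/du discrepancy with a deliberately sparse file (GNU coreutils assumed):
```sh
truncate -s 1G sparse.bin   # 1 GiB apparent size, no blocks allocated
ls -lhs sparse.bin          # first column: allocated size (~0)
du -h sparse.bin            # ~0 as well
```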
464,530 | I would like to ask a question about the output from sar -q . I'd appreciate it if someone could help me out with understanding runq-sz . I have a system with 8 CPU threads on RHEL 7.2 .
[ywatanabe@host2 ~]$ cat /proc/cpuinfo | grep processor | wc -l
8
Below is the sar -q result from my system, but runq-sz seems to be low compared to ldavg-1 .
             runq-sz  plist-sz  ldavg-1  ldavg-5  ldavg-15  blocked
05:10:01 PM        0       361     0.29     1.68      2.14        0
05:11:01 PM        0       363     1.18     1.61      2.08        2
05:12:01 PM        0       363     7.03     3.15      2.58        1
05:13:01 PM        0       365     8.12     4.15      2.96        1
05:14:01 PM        3       371     7.40     4.64      3.20        1
05:15:01 PM        2       370     7.57     5.26      3.51        1
05:16:01 PM        0       366     8.42     5.90      3.84        1
05:17:01 PM        0       365     8.78     6.45      4.16        1
05:18:01 PM        0       363     7.05     6.40      4.28        2
05:19:02 PM        1       364     8.05     6.74      4.53        0
05:20:01 PM        0       367     7.96     6.96      4.74        1
05:21:01 PM        0       367     7.86     7.11      4.93        1
05:22:01 PM        1       366     7.84     7.31      5.14        0
From man sar , I was thinking that runq-sz represents the number of tasks inside the run queue whose state is TASK_RUNNING, which corresponds to the R state in ps : "runq-sz Run queue length (number of tasks waiting for run time)." What does runq-sz actually represent? | This man page has a more detailed explanation of this property: runq-sz The number of kernel threads in memory that are waiting for a CPU to run. Typically, this value should be less than 2. Consistently higher values mean that the system might be CPU-bound. Interpreting results: As is the case with many "indicators" you have to use them in combination with one another to interpret if there's a performance issue or not. This particular indicator indicates if your system is starved for CPU time. Whereas the load1,5,15 indicate processes that are in the run queue, but are being forced to wait for time to run. The load1,5,15 variety tells you the general trend of the system and if it's got a lot of processes waiting (ramping up load) vs. trending down. But processes can wait for a variety of things with load1,5,15; typically it's I/O that's blocking when you see high load1,5,15 times. With runq-sz, you're waiting for time on a CPU. References: How to Check Queue Activity (sar -q) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/464530",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/206866/"
]
} |
464,574 | ssh-add alone is not working: Error connecting to agent: No such file or directory How should I use that tool? | You need to initialize ssh-agent first. You can do this in multiple ways.Either by starting a new shell ssh-agent bash or by evaluating the script returned by ssh-agent in your current shell. eval "$(ssh-agent)" I suggest using the second method, because you keep all your history and variables. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/464574",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/287413/"
]
} |
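Putting the answer together as one session sketch (the key path is illustrative):
```sh
eval "$(ssh-agent)"        # starts the agent, exports SSH_AUTH_SOCK / SSH_AGENT_PID
ssh-add ~/.ssh/id_rsa      # now reaches the agent instead of erroring out
ssh-add -l                 # list the loaded keys
```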
464,626 | I made an associative array as follows. To give a few details, the keys refer to specific files because I will be using this array in the context of a larger script (where the directory containing the files will be a getopts argument).
declare -A BAMREADS
echo "BAMREADS array is initialized"
BAMREADS[../data/file1.bam]=33285268
BAMREADS[../data/file2.bam]=28777698
BAMREADS[../data/file3.bam]=22388955
echo ${BAMREADS[@]}  # Output: 22388955 33285268 28777698
echo ${!BAMREADS[@]} # Output: ../data/file1.bam ../data/file2.bam ../data/file3.bam
So far, this array seems to behave as I expect. Now, I want to build another associative array based on this array. To be specific: my second array will have the same keys as my first one but I want to divide the values by a variable called $MIN. I am not sure which of the following strategies is best and I can't seem to make either work. Strategy 1: copy the array and modify the array?
MIN=33285268
declare -A BRAMFRACS
echo "BAMFRACS array is initialized"
BAMFRACS=("${BAMREADS[@]}")
echo ${BAMFRACS[@]}  # Output: 22388955 33285268 28777698
echo ${!BAMFRACS[@]} # Output: 0 1 2
This is not what I want for the keys. Even if it works, I would then need to perform the operation I mentioned on all the values. Strategy 2: build the second array when looping through the first.
MIN=33285268
declare -A BRAMFRACS
echo "BAMFRACS array is initialized"
for i in $(ls $BAMFILES/*bam)
do
    echo $i
    echo ${BAMREADS[$i]}
    BAMFRACS[$i] = ${BAMREADS[$i]}
done
echo ${BAMFRACS[@]}
echo ${!BAMFRACS[@]}
When I run this, I get the following error which I am unsure how to solve:
../data/file1.bam
33285268
script.bash: line 108: BAMFRACS[../data/file1.bam]: No such file or directory
../data/file2.bam
28777698
script.bash: line 108: BAMFRACS[../data/file2.bam]: No such file or directory
../data/file3.bam
22388955
script.bash: line 108: BAMFRACS[../data/file3.bam]: No such file or directory
Thanks | Build the new array from the old:
MIN=33285268
declare -A BRAMFRACS
for key in "${!BAMREADS[@]}"; do
    BRAMFRACS[$key]=$(( ${BAMREADS[$key]} / MIN ))
done
Comments on your code: Your first suggested code does not work as it copies the values from the associative array to the new array. The values automatically get the keys 0, 1 and 2 but the original keys are not copied. You need to copy the array key by key as I have shown above. This way you assign the wanted value to the correct key. Your second suggested code contains a syntax error in that it has spaces around = in an assignment. This is where the errors that you see come from. variable = value is interpreted as "the command variable executed with the operands = and value ". If you wish to iterate over a set of pathnames, don't use ls . Instead just do for pathname in "$BAMFILES"/*bam; do . Quote your variable expansions. Consider using printf instead of echo to output variable data. Related: Why *not* parse `ls`? When is double-quoting necessary? Security implications of forgetting to quote a variable in bash/POSIX shells Why does my shell script choke on whitespace or other special characters? Why is printf better than echo? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/464626",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/295143/"
]
} |
464,629 | I am frequently using a command line program that I provide with arguments that contain parentheses. For simplicity, let's say I'm writing echo 'bar(1,3)' I would like to omit the quotes. However, if I do that I get syntax error near unexpected token '(' . I guess this is related to subshells. I am willing to disable those, if that's the only way. (Though a subshell cannot be started if it is not at the beginning of the command anyway, so that is no reason to forbid parentheses in the arguments, as far as I can tell.) | ( and ) are special token characters in the syntax of bash that are used in a number of operators including:
(...) subshell construct
func() compound-command function definitions
$(...) command substitution
<(...) , >(...) process substitution
((...)) arithmetic evaluation construct
$((...)) arithmetic expansion
a=(...) , a+=(...) array assignment operators
@(...) , +(...) , *(...) , ?(...) glob operators (with extglob)
[[ (a && b) || (c && d) ]] grouping conditional expression operators
[[ ... =~ ...(...)... ]] regexp operator
echo a=(b) is a syntax error, but not export a=(b) . echo a) is a syntax error, unless there was an opening ( in the previous lines as part of one of the constructs above. While it may be possible to write a readline hook that adds quotes where needed around ( , ) to avoid a syntax error, it would be considerable effort as it would mean doing a full parsing of the shell syntax. A better approach may be to use a shortcut that quotes the current word when you realise too late that it contains characters special to the shell. With zsh (assuming emacs mode):
bindkey -s '\e#' '\C@\eb\Cx\Cx\e"'
To have Alt+# quote the current word. Or an approximation with bash (also assuming emacs mode):
bind "\"\e#\": \"'\e \eb'\C-X\C-X\""
| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/464629",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106476/"
]
} |
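If the goal is merely to avoid typing quotes, backslash-escaping only the two metacharacters works too; a minimal illustration in bash or zsh:

    $ echo bar\(1,3\)
    bar(1,3)

The comma needs no escaping; only ( and ) are special to the shell here.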
464,652 | Is there any difference between doing:

    echo "hahaha" >> file1

and

    echo "hahaha" | tee -a file1

? Yes, I noticed that I cannot write to write-protected files with sudo echo, but I can if I sudo tee. Thanks. | There's no difference in the sense that the data in the file will be the same if echo and tee are executed successfully and if the file is writable by the current user. The tee command would additionally produce output on its standard output, showing the text that would also be appended to the file. This would not happen in the first command. Another difference is that if the file can not be written to, then the first command, with the redirection, would not even run the echo, whereas the echo would run in the second command, but tee would fail in writing to the file (tee would still produce text on the terminal though). This could be significant in the case where you run some long running process that produces output:

    long_running_thing >>file

This would not even start long_running_thing if file was not writable.

    long_running_thing | tee -a file

This would execute long_running_thing and it would run to the end, but no output would be saved into file if it wasn't writable (and the output would additionally be written to the terminal from tee). The next thing to be aware of, which you hinted at in the end of the question, is that

    sudo echo hello >>file

won't work if file isn't writable by the current user. This is because the redirection is processed before the command is executed (see above). To append to a root-owned file, use

    echo hello | sudo tee -a file

Here, we run tee as root. The echo does not need to be executed by root, but the utility that actually writes to the file needs to be executed as root (or as whatever user owns the file) if it's not owned by the current user. Another possibility would be to use

    sudo sh -c 'echo hello >>file'

or

    echo hello | sudo sh -c 'cat >>file'

This would use a redirection to append data to the file, but in this case, the shell that performs the redirection is running as root, so it would not fail in appending/creating the file due to restrictive permissions/ownership (it may still fail if e.g. file is the name of a directory). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/464652",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/304183/"
]
} |
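A common concrete use of the sudo tee pattern above, appending a line to a root-owned file while silencing tee's copy on stdout (the file and line here are just an illustrative example):

    echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf >/dev/null

The >/dev/null applies to tee's own standard output, so it only suppresses the echo to the terminal; the privileged append to the file still happens.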
464,818 | I need to print the count of a matching string at the end of each line. An example for matching foo:

    foo,bar,foo,foo
    bar,foo,bar,bar
    foo,foo,bar,bar

Result:

    foo,bar,foo,foo,3
    bar,foo,bar,bar,1
    foo,foo,bar,bar,2

I have checked this link (How to count the number of a specific character in each line?) but no luck. | We can use awk with gsub to get the count of occurrences.

    awk '{print $0","gsub(/foo/,"")}' file

Output:

    foo,bar,foo,foo,3
    bar,foo,bar,bar,1
    foo,foo,bar,bar,2
| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/464818",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
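One subtlety with the one-liner above: gsub(/foo/,"") also deletes the matches from $0, so the shown output relies on $0 being evaluated before gsub runs, and that evaluation order inside a concatenation is not guaranteed across awk implementations. A sketch of a variant that side-steps this by replacing each match with itself (& stands for the matched text in awk replacements):

    awk '{n = gsub(/foo/, "&"); print $0 "," n}' file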
464,930 | Only read -r is specified by POSIX; read -n NUM, used to read NUM characters, is not. Is there a portable way to automatically return after reading a given number of characters from stdin? My usecase is printing prompts like this:

    Do the thing? [y/n]

If possible, I'd like to have the program automatically proceed after typing y or n, without needing the user to press enter afterwards. | Reading one character means reading one byte at a time until you get a full character. To read one byte with the POSIX toolchest, there's dd bs=1 count=1. Note however that reading from a terminal device, when that device is in icanon mode (as it generally is by default), only ever returns when you press Return (a.k.a. Enter), because until then the terminal device driver implements a form of line editor that allows you to use Backspace or other editing characters to amend what you enter, and what you enter is made available to the reading application only when you submit that line you've been editing (with Return or Ctrl+D). For that reason, ksh's read -n/N or zsh's read -k, when they detect stdin is a terminal device, put that device out of the icanon mode, so that bytes are available to read as soon as they are sent by the terminal. Now note that ksh's read -n n only reads up to n characters from a single line; it still stops when a newline character is read (use -N n to read n characters). bash, contrary to ksh93, still does IFS and backslash processing for both -n and -N. To mimic zsh's read -k or ksh93's read -N1 or bash's IFS= read -rN 1, that is, read one and only one character from stdin, POSIXly:

    readc() { # arg: <variable-name>
      if [ -t 0 ]; then
        # if stdin is a tty device, put it out of icanon, set min and
        # time to sane value, but don't otherwise touch other input
        # or local settings (echo, isig, icrnl...). Take a backup of
        # the previous settings beforehand.
        saved_tty_settings=$(stty -g)
        stty -icanon min 1 time 0
      fi
      eval "$1="
      while
        # read one byte, using a work around for the fact that command
        # substitution strips the last character.
        c=$(dd bs=1 count=1 2> /dev/null; echo .)
        c=${c%.}

        # break out of the loop on empty input (eof) or if a full character
        # has been accumulated in the output variable (using "wc -m" to count
        # the number of characters).
        [ -n "$c" ] &&
          eval "$1=\${$1}"'$c
            [ "$(($(printf %s "${'"$1"'}" | wc -m)))" -eq 0 ]'
      do
        continue
      done
      if [ -t 0 ]; then
        # restore settings saved earlier if stdin is a tty device.
        stty "$saved_tty_settings"
      fi
    }
| {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/464930",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/226269/"
]
} |
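A sketch of the y/n prompt from the question built on the readc function above, under the same assumptions (POSIX sh, stdin is a terminal):

    printf 'Do the thing? [y/n] '
    readc answer
    printf '\n'
    case $answer in
      [Yy]) echo 'doing the thing' ;;
      *)    echo 'not doing the thing' ;;
    esac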
464,998 | We have legacy C code used to allow less privileged users to run custom scripts with escalated privilege. It has the SUID bit set. The code restricts the PATH env to a specific folder and then uses the system() api to execute the script with a restricted shell:

    /bin/bash -r -c "script <arg>"

As the path is restricted, it can execute only scripts from that specific folder. Now, knowing all the pitfalls for command injection with the system() api, what measures can be taken to avoid command injection? This is used in many places in various scripts etc, so we don't want to do a completely new implementation and risk regressions. | Because it's hard to get right, I'd suggest removing the SUID bit from your code. Change your C code to use sudo. By using sudo, the harder aspects of secure system programming are already done for you. Then you can carefully construct a sudo configuration, using visudo, that does the bare minimum required to perform the task and constrain this to the required users/groups. After configuring sudo, get someone other than you to test it and try to break the intended constraints. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/464998",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/307585/"
]
} |
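A minimal sketch of what such a sudo policy could look like; every group name and path here is hypothetical, and the file should only be edited through visudo:

    # sudo visudo -f /etc/sudoers.d/scriptrunner
    # allow members of group 'appops' to run only the vetted scripts, as root,
    # without a password; no other commands are granted
    %appops ALL=(root) NOPASSWD: /opt/approved-scripts/*.sh

The wrapper (or the users directly) would then invoke sudo /opt/approved-scripts/whatever.sh instead of a SUID binary calling system(). Note that wildcards in sudoers have pitfalls of their own (they also match arguments), so listing scripts explicitly is safer.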
465,023 | Why doesn't echo $1 print $1 in this simple bash script?

    #!/bin/bash
    # function.sh
    print_something () {
      echo $1
    }
    print_something

    $ ./function.sh 123   -> why doesn't it print '123' as a result?
| Positional parameters refer to the script's arguments in the main level of the script, but to function arguments in a function body. So print_something Something would actually print Something. If you want to pass the script's arguments to a function, you must do that explicitly. Use print_something "$1" to pass the first argument, or print_something "$@" to pass all of them, though the function in the example only uses the first one. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/465023",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/307604/"
]
} |
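The corrected script in full, for completeness, assuming the invocation shown in the question:

    #!/bin/bash
    # function.sh
    print_something () {
      echo "$1"
    }
    print_something "$1"

    $ ./function.sh 123
    123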
465,028 | I have a file with the following input. The numbers separated by dots represent addresses. Any number in an address can be one or more digits, as follows:

    [112.112.112.112;3.3.3.3;44.44.44.44]
    [6.6.6.6;17.17.17.17;88.88.88.88]

I want to extract each address without the semicolons and brackets (addresses are separated by semicolon ;), and insert each address on its own line in a new file to produce this output:

    112.112.112.112
    3.3.3.3
    44.44.44.44
    6.6.6.6
    17.17.17.17
    88.88.88.88

As a first step, I tried grep to extract the addresses as follows:

    grep -E '\d+\.\d+\.\d+\.\d+' myfile.txt > newfile.txt

But it does not print anything. | Extended Regex (-E or egrep) does not know about \d. Use -P as suggested by @Alexander or use -E with [0-9] or [[:digit:]] instead. Add -o to select the matches only instead of whole matching lines. This will also break up the single matches into new lines.

    grep -Eo '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' myfile.txt

or

    grep -Eo '[[:digit:]]+\.[[:digit:]]+\.[[:digit:]]+\.[[:digit:]]+' myfile.txt

Using Perl Regex (-P):

    grep -Po '\d+\.\d+\.\d+\.\d+' myfile.txt

If you change + to * you can also use Basic Regex:

    grep -o '[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*' myfile.txt
| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/465028",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/299440/"
]
} |
465,081 | I had a strange situation where I found a number of files and folders that had 000 permissions set. This was easily repairable via:

    sudo find . -perm 000 -type f -exec chmod 664 {} \;
    sudo find . -perm 000 -type d -exec chmod 775 {} \;

Unfortunately I suddenly realized the problem was a bit more complicated, with some odd permissions such as 044 and some other strange settings. It turns out that these are strewn about and unpredictable. Is there a way to search for permissions such as 0** or other such very limiting permission configurations? | I'd use something like this:

    find . ! -perm -u=r ! -perm -u=w ! -perm -u=x -ls

Or if you prefer the octal notation:

    find . ! -perm -400 ! -perm -200 ! -perm -100 -ls

Unfortunately, no idea how to express that as a single -perm option. The syntax above is standard except for the -ls part (common but not POSIX), which you can replace with -exec ls -disl {} + on systems where find doesn't support -ls to get a similar output. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/465081",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47012/"
]
} |
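With GNU find (not POSIX), the three negated tests can likely be collapsed into one, since -perm /mode matches when any of the given bits are set:

    find . ! -perm /u=rwx -ls

That is, "none of the owner's read/write/execute bits are set", which is exactly the 0** class the question asks about.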
465,084 | I am just wondering. KDE sends me a notification if there is 10% battery life left on my keyboard, which is wireless. But is there a way to get the whole battery status data? | Battery information is provided to desktop environments by UPower; this includes the battery information for some keyboards and mice. You can see what your computer knows about its batteries by running

    upower --dump

For example, on my desktop with a wireless Logitech mouse, it shows (among other things)

    Device: /org/freedesktop/UPower/devices/mouse_0003o046Do101Bx0006
      native-path:          /sys/devices/pci0000:00/0000:00:14.0/usb1/1-1/1-1:1.2/0003:046D:C52B.0003/0003:046D:101B.0006
      vendor:               Logitech, Inc.
      model:                M705
      serial:               XXXXXXXX
      power supply:         no
      updated:              Mon 27 Aug 2018 15:41:36 CEST (106 seconds ago)
      has history:          yes
      has statistics:       no
      mouse
        present:             yes
        rechargeable:        yes
        state:               discharging
        warning-level:       none
        percentage:          25%
        icon-name:           'battery-low-symbolic'

On my laptop, it shows the laptop batteries, and the battery status of connected battery-powered devices. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/465084",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/307653/"
]
} |
465,100 | Recently I developed the habit of killing processes with fuser -k -n tcp $PORT, which can hardly kill the wrong process. I prefer this over fiddling with a pidfile that may or may not still be there or may or may not contain the correct pid (OK, I am a bit dramatic here :-) Yet the typical stop script I stumble over still uses a pidfile. Am I missing an important feature of the pidfile approach or a misfeature of the fuser approach? My best guess is that fuser is not available. Though judging by search engine results, bsd, debian, suse, centos, aix, solaris all seem to have it. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/465100",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140503/"
]
} |
465,103 | I'm trying to define different login shells for different users of an AD domain, as described here. The aim is to deny members of a particular group from logging in while allowing them to do SSH tunneling. Below is the file /etc/sssd/sssd.conf. MYDOMAIN.GLOBAL is the default domain provided by the AD. The config below defines a test domain MYDOMAIN_TEST.GLOBAL, which is not in the AD, as the domain for these limited users. (This is just a configuration for testing: later, in the MYDOMAIN_TEST.GLOBAL domain section, override_shell = /bin/zsh will be replaced by override_shell = /sbin/nologin.)

    [sssd]
    domains = MYDOMAIN.GLOBAL,MYDOMAIN_TEST.GLOBAL
    config_file_version = 2
    services = nss, pam

    [nss]
    default_shell = /bin/bash

    [domain/MYDOMAIN.GLOBAL]
    ad_server = ad.mydomain.global
    ad_domain = MYDOMAIN.GLOBAL
    ldap_user_search_filter = (memberOf=CN=AdminsGroup,OU=Groups,DC=MYDOMAIN,DC=GLOBAL)
    id_provider = ad
    simple_allow_groups = [email protected]
    override_shell = /bin/bash

    [domain/MYDOMAIN_TEST.GLOBAL]
    ad_server = ad.mydomain.global
    ad_domain = MYDOMAIN.GLOBAL
    ldap_user_search_filter = (memberOf=CN=LimitedGroup,OU=Groups,DC=MYDOMAIN,DC=GLOBAL)
    id_provider = ad
    simple_allow_groups = [email protected]
    override_shell = /bin/zsh

A member of MYDOMAIN.GLOBAL is able to login via SSH, while a member of MYDOMAIN_TEST.GLOBAL can't and gets a "Permission denied, please try again" or an "Authentication failed" error. The sssd logfiles don't show any error. Why is that? Does MYDOMAIN_TEST.GLOBAL need to be present in the AD? If yes, is it possible to somehow bypass this and configure sssd with different "local categories" of users to do what I want? (Note: Apparently this can be done with nslcd, as per this question and this other question, but it requires an LDAP server, and configuring it to use an AD is another can of worms.) | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/465103",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34039/"
]
} |
465,114 | The question may be unclear and too broad, but any answer, however short or broad it may be, is very appreciated. Some time ago I asked to install zsh on our production server. My company's admins replied that it cannot be done since this is a production, not a dev server. I have used zsh & oh-my-zsh for the last 5 years and am very attached to it; zsh makes my work easier and faster. But that wasn't enough of an argument. I am not strong in security or other issues that may be involved here, so my questions are:

 * Is installing zsh on a production server harmless to the system?
 * Would you allow installing zsh in this case, and why (not)?
 * Does it make sense to restrict users to bash on a production server?
| zsh is just a shell, it doesn't start any service, it doesn't come with any setuid command. So the mere installation of the package is not going to do anything until somebody or something actually uses it. Since it doesn't need any privilege, any user can install it on their own in their home directory or wherever they have access to. If it wasn't production quality, one could argue that making it the login shell of an admin user would introduce some risk, but with its much saner syntax than other shells like bash or tcsh, I would argue that it would improve matters considerably. Though its primary usage is as an interactive shell, I'd argue that writing scripts in zsh would likely make safer and more reliable scripts than bash scripts (see how many people unknowingly write bash scripts in zsh syntax when they forget to quote their variables, and how changing the shebang from #! /bin/bash to #! /bin/zsh - would fix many of the bugs in their scripts). In any case, installing zsh is among the first things that I have been doing for all server deployments in all the places I've worked at. However, I generally don't make it the login shell of any user by default, as it's not uncommon for some software to expect a bash-centric environment. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/465114",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194249/"
]
} |
465,128 | I want to keep monitoring a specific job on a Slurm workload-manager cluster. I tried to use the watch command and grep the specific id. If the job id is 4138, I tried

    $> watch squeue -u mnyber004 | grep 4138
    $> squeue -u mnyber004 | watch grep 4138

but they don't work. The second command works for the first few seconds, but stops working when watch refreshes. A better idea please? | You have to quote the command

    watch 'squeue -u mnyber004 | grep 4138'
| {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/465128",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/250290/"
]
} |
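Without the quotes, the pipe is interpreted by the interactive shell rather than by watch, so only part of the pipeline ends up being re-run. For this kind of polling, two watch flags are worth knowing; for illustration:

    watch -n 10 -d 'squeue -u mnyber004 | grep 4138'   # refresh every 10 s, highlight changes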
465,132 | The interesting part is I found that I was able to login with xfce but not with either lightdm or gdm. (I am using Ubuntu 16.04) | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/465132",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/296827/"
]
} |
465,136 | I can find many questions with answers in the other direction, but unfortunately not in the one I would like for my replacements: I intend to replace a char, such as #, in a string, such as test#asdf, with a sequence, such as {0..10}, to get a sequence of strings, in this example test0asdf test1asdf test2asdf test3asdf test4asdf test5asdf test6asdf test7asdf test8asdf test9asdf test10asdf. I tried, from others:

 * echo '_#.test' | tr # {0..10} (throws usage)
 * echo '_#.test' | sed -r 's/#/{0..10}/g' (returns _{0..10}.test)
 * echo '_#.test' | sed -r 's/#/'{0..10}'/g' (works for the first, afterwards I get sed: can't read (...) no such file or directory)

What is a working approach to this problem? Edit, as I may not comment yet: I have to use # in the string in which this character should be replaced, as the string is passed from another program. I could first replace it with another char though. | The {0..10} zsh operator (now also supported by a few other shells including bash) is just another form of csh-style brace expansion. It is expanded by the shell before calling the command. The command doesn't see those {0..10}. With tr '#' {0..10} (quoting that # as otherwise it's parsed by the shell as the start of a comment), tr ends up being called with ("tr", "#", "0", "1", ..., "10") as arguments, and tr doesn't expect that many arguments. Here, you'd want:

    echo '_'{0..10}'.test'

for echo to be passed "_0.test", "_1.test", ..., "_10.test" as arguments. Or if you wanted that # to be translated into that {0..10} operator, transform it into shell code to be evaluated:

    eval "$(echo 'echo _#.test' | sed 's/#/{0..10}/')"

where eval is being passed echo _{0..10}.test as arguments. (Not that I would recommend doing anything like that.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/465136",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/247361/"
]
} |
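Since the string containing # arrives from another program, a loop avoids the eval entirely; a sketch assuming bash (or ksh93/zsh) for the ${var//pattern/replacement} substitution:

    s='_#.test'
    for i in {0..10}; do
      printf '%s\n' "${s//\#/$i}"    # backslash: '#' at the start of the pattern is otherwise an anchor
    done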
465,144 | I have a large number of machines whose availability I need to check via cron every hour or so. There are nearly 1000 machines, split between 4-5 name series followed by a number for each node. Say from ab1000 to ab1200, from bs3000 to bs3892, from zx7800 to zx8900 etc. Currently I'm using a simple ping script, as I can't install any monitoring software on these nodes (I don't have approval for that). In my code I'm reading a file where I update all the machine names one by one (trust me, I need to do this every day since the machine names change very frequently), and I'm wondering if I can use a regex to specify the machines, as it would ease my life a lot. Say for example: ab1*, zx[7-8]* etc. I tried to use the same in the input file but it didn't help much. Also one more issue: sometimes one or two machines are down permanently and I don't need to count them every time, so I need to keep them excluded from my alert list. Also let me know if there is anything else I can do to make the alert more robust, like an alert that gives a list such as "3/300 zx are down with machine names zx7701, zx7702, zx7703".

    cat /tmp/node.txt
    zx7800
    zx7801
    zx7802
    .........
    zx8900
    bs3000
    bs3001

    cat nodecheck.sh
    for node in `cat /tmp/node.txt`
    do
      count=0
      count=$(ping -c 3 $node | grep "100%packet loss"|wc -l)
      if [ $count -ne 0 ]
      then
        echo "$node" >> /tmp/nodedown.txt
      fi
    done
| | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/465144",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/249219/"
]
} |
465,170 | gdrive has a sub-command list which prints a list of files like the following example:

    gdrive list

Output:

    Id                                  Name                      Type   Size     Created
    1sV3_a1ySV0-jbLxhA8NIEts1KU_aWa-5   info.pdf                  bin    10.0 B   2018-08-27 20:26:20
    1h-j3B5OLryp6HkeyTsd9PJaAtKK_GYyl   2018-12-ss-scalettapass   dir             2018-08-27 20:26:19

I'm trying to parse this output using tools like awk and sed without success. The problems are empty 'fields' in the size column and the dynamic widths of the columns. Has anybody an idea how to parse this output? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/465170",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78673/"
]
} |
465,182 | I'm trying to get the 2-digit month and 2-digit year a file was modified, but it's not working.

    modified=$(stat -c %y "$line"); # modified="2018-08-22 14:39:36.400469308 -0400"
    if [[ $modified =~ ".{2}(\d{2})-(\d{2})" ]]; then
      echo ${BASH_REMATCH[0]}
      echo ${BASH_REMATCH[1]
    fi

Bash demo: http://rextester.com/DJSPH52792 RegEx demo: https://regex101.com/r/UEOlMO/1 What am I doing wrong? | First, the quotes suppress the meaning of the special characters in the regex (online manual):

    An additional binary operator, =~, is available, ... Any part of the pattern
    may be quoted to force the quoted portion to be matched as a string. ...
    If you want to match a character that's special to the regular expression
    grammar, it has to be quoted to remove its special meaning.

The manual goes on to recommend putting the regex in a variable to prevent some clashes between the shell parsing and the regex syntax. Second, \d doesn't do what you think it does; it just matches a literal d. Also note that ${BASH_REMATCH[0]} contains the whole matching string, and indexes 1 and up contain the captured groups. I'd also strongly suggest using four-digit years, so:

    modified=$(stat -c %y "$file")
    re='^([0-9]{4})-([0-9]{2})'
    if [[ $modified =~ $re ]]; then
      echo "year: ${BASH_REMATCH[1]}"
      echo "month: ${BASH_REMATCH[2]}"
    else
      echo "invalid timestamp"
    fi

For a file modified today, that gives year: 2018 and month: 08. Note that numbers with a leading zero will be considered octal by the shell and possibly other utilities. (Four-digit years have fewer issues if you ever need to handle dates from the 1900's, and they're easier to recognize as years and not days of month.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/465182",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/228822/"
]
} |
465,266 | I use the zsh shell as the default shell on both Ubuntu and Arch. I configured a shortcut (the up arrow) to autocomplete from history in my zsh shell, using the following line in my .zshrc:

    bindkey "^[[A" history-beginning-search-backward

However, when I source my .zshrc and/or reboot in Ubuntu, the shortcut does not work (I only get the previous command, no matter what I started typing), whereas on Arch it works fine (I only get the last command starting with what I typed). Does anyone know how to solve this? | On most xterm-like terminals, Up (and it's similar for most navigation keys) sends either ␛[A or ␛OA depending on whether the terminal has been put in keypad transmit mode or not. The smkx and rmkx terminfo entries can be used to put a terminal in or out of that mode. The kcuu1 (key cursor up by 1) terminfo entry describes the sequence sent by Up when in keypad transmit mode, that is ␛OA. Debian and derivatives have a /etc/zsh/zshrc file that does a

    function zle-line-init () {
      emulate -L zsh
      printf > /dev/tty '%s' ${terminfo[smkx]}
    }

which puts the terminal in that mode when zle is active, which means you can now rely on the terminfo database to know what character sequences keys transmit. The file also defines a $key associative array based on the terminfo entries to help you map those to widgets. So on those systems, you can do:

    (($+key[Up])) && bindkey $key[Up] history-beginning-search-backward

For something that works both on systems where the terminal is in keypad transmit mode and on those that aren't or don't have the $key hash, you can do:

    bindkey $terminfo[kcuu1] history-beginning-search-backward
    bindkey ${terminfo[kcuu1]/O/[} history-beginning-search-backward

See also:

 * My cursor keys do not work (ncurses FAQ)
 * Why can't I use the cursor keys in (whatever) shell? (xterm FAQ)
| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/465266",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/307418/"
]
} |
465,273 | I want to debug a script that contains NetworkManager-dispatcher variables like DEVICE_IFACE, IP4_GATEWAY. The NetworkManager manual describes these variables but doesn't mention how to debug them. I searched a lot but I am unable to figure out how to get the value of these variables. When I echo on the command line, like echo ${DEVICE_IFACE}, I get no value. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/465273",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/228934/"
]
} |
465,305 | If you were to globally set

    alias ':(){ :|:& };:'='echo fork bomb averted'

would that be an effective security strategy to avoid the Bash fork bomb execution, or would there still be a way to execute it? I suppose the question cashes out to: is there a way to execute a command when it's aliased to something else? | The two, no, three, ... Amongst the main obstacles to that are:

 1. It's not a valid name for an alias. Bash's online manual: "The characters ... and any of the shell metacharacters or quoting characters listed above may not appear in an alias name." (, ), &, | and whitespace are out in Bash 4.4.
 2. That particular string is not the only way to write a fork bomb in the shell, just famous because it looks obscure. For example, there's no need to call the function : instead of something actually composed of letters.
 3. If you could set the alias, the user could unset the alias, circumvent it by escaping the alias name on the command line, or disable aliases altogether, possibly by running the function in a script (Bash doesn't expand aliases in noninteractive shells).
 4. Even if the shell is restricted enough to stop all versions of a fork bomb, a general purpose system will have other programmable utilities that can recurse and fork off subprocesses. Got Perl or a C compiler? Easy enough. Even awk could probably do it. Even if you don't have those installed, you'll also need to stop the user from bringing in compiled binaries from outside the system, or running /bin/sh, which probably needs to be a fully operational shell for the rest of the system to function.

Just use ulimit -u (i.e. RLIMIT_NPROC) or equivalent to restrict the number of processes a user can start. On most Linux systems there's pam_limits that can set the process count limit before any commands chosen by the user are started. Something like this in /etc/security/limits.conf would put a hard limit of 50 processes on all users:

    *               hard    nproc           50

(Stephen Kitt already mentioned point 1, Jeff Schaller mentioned 2 and 3.) | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/465305",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/288980/"
]
} |
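Point 1 is easy to verify interactively; on bash 4.4 the alias definition is rejected outright, with output roughly like this:

    $ alias ':(){ :|:& };:'='echo fork bomb averted'
    bash: alias: `:(){ :|:& };:': invalid alias name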
465,322 | at 18:00 shutdown now and shutdown 18:00 , are they starting the same service? Do they work the same way? | at 18:00 shutdown now creates an "at" job, which is performed at the specified time by the at daemon or perhaps the cron daemon, depending on your system. shutdown 18:00 starts a process in your shell that waits until the specified time and then performs the shutdown. This command can be terminated if e.g. your shell session is terminated. The net result in most cases will be the same: the system is shutdown at 18:00. One difference is that if you use at , the job will be stored and if the system is shutdown by some other means before 18:00, upon booting again the job will still be waiting to be run; if the time is already passed, the shutdown will be performed immediately which could be quite unexpected. Another difference is that shutdown 18:00 will create a /run/nologin file 5 minutes before the scheduled time to prevent people logging in after that moment. Also broadcast messages will be sent to warn logged in users that the system is about to be shutdown. You need to take account these differences to decide which to use. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/465322",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/307871/"
]
} |
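For concreteness, the two variants side by side, with the matching ways to inspect and cancel them (both need root; the -c cancel form is the systemd shutdown behaviour):

    echo 'shutdown now' | at 18:00   # queued job: list with atq, delete with atrm <jobid>
    shutdown 18:00                   # scheduled shutdown: cancel with shutdown -c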
465,368 | Trying to remove a string across multiple files in a directory with sed. The folder contains a large number of sql files, all with table names that I need to remove. For instance, one of the files looks like this:

    INSERT INTO staging.eav_attribute_set (attribute_set_id, entity_type_id, attribute_set_name, sort_order) VALUES (1, 1, 'Default', 2);
    INSERT INTO staging.eav_attribute_set (attribute_set_id, entity_type_id, attribute_set_name, sort_order) VALUES (2, 2, 'Default', 2);
    INSERT INTO staging.eav_attribute_set (attribute_set_id, entity_type_id, attribute_set_name, sort_order) VALUES (3, 3, 'Default', 1);
    INSERT INTO staging.eav_attribute_set (attribute_set_id, entity_type_id, attribute_set_name, sort_order) VALUES (4, 4, 'Default', 1);

I need to remove staging. from all lines. I've tried the following from the directory where the files are:

    sed -i 's/staging.//g' *
    sed -i 's/staging\.//g' *
    sed -i 's|staging.||g' *

But receive the following:

    sed: 1: "eav_attribute_set ...": unterminated substitute pattern
| With FreeBSD sed (as found on macOS), you need:

    sed -i '' 's/staging\.//g' ./*
| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/465368",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/307929/"
]
} |
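The error arises because BSD sed takes the word after -i as the backup suffix, so the sed script was being read from the next argument instead. For comparison, the GNU sed (Linux) spelling, and an idiom that works on both:

    sed -i 's/staging\.//g' ./*         # GNU sed: -i takes an optional, attached suffix
    sed -i.bak 's/staging\.//g' ./*     # GNU and BSD: in-place edit, keeping *.bak backups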
465,411 | I want to find *.json2 files, check whether they have already been processed, and if not, execute a POST. I have

    1.json2
    2.json2  2.json2.ml
    3.json2  3.json2.ml

I use this command:

    find . -type f -name '*.json2' | if [{}.ml]; then -exec sh -c 'curl -X POST -H "Content-Type: application/json" -d @{} https://api.myweb.com/api > {}.ml' \;

I want to execute it only for the files that don't have the .ml counterpart. Thx | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/465411",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/300751/"
]
} |
465,563 | The ifconfig command dumps a lot of information at you, especially if you have a lot of interfaces, and you don't know where they come from. I've read through the "Ifconfig Command - Explained in Detail" tutorial page, which gives a great rundown on most of the information in ifconfig. But it doesn't contain all the information I want (and the article could also be outdated after its release in 2006). Using ip addr show eth0:

    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 08:00:27:e2:80:18 brd ff:ff:ff:ff:ff:ff
        inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
           valid_lft forever preferred_lft forever

I find it tough to parse some of the output. Under eth0:

 * <…> describes… the interface capabilities? Uncertain where I can find the full set of options, uncertain what they're called, no idea what to google. What are the other options?
 * state UP: I know there's also state DOWN and state RUNNING. These are all software constructs, right? Nothing is physically changing when I run ip link set dev eth0 down, right? So how does the kernel act differently when this state changes? Does this state change?
 * group default: interface groups. What is the unique problem they solve?

Under inet:

 * What does scope global mean? How can a private IP have a global scope? What am I missing?

What is the grammar of this command's output? | Here are the parts that I can already parse, for reference for anyone else with the same question. eth0 is the interface name; it can be any string.

 * mtu 1500: maximum transmission unit = 1500 bytes. This is the largest size that a frame sent over this interface can be. This number is usually limited by the Ethernet protocol's cap of 1500. If you send a larger packet and it arrives at an Ethernet interface, then the frame will get fragmented and its payload transmitted in two or more packets. Not really any benefit to that, so it's best to follow standards.
 * qdisc pfifo_fast: queuing discipline = three pipes of first in first out. This determines how an interface chooses which packet to transmit next, when it's being overloaded.
 * group default: interface groups give a single interface to clients by combining the capabilities of the aggregated interfaces on them.
 * qlen 1000: transmission queue length = 1000 packets. The 1000th packet will be queued, the 1001st will be dropped.
 * link/ether: means the link layer protocol is Ethernet.
 * brd: means broadcast. This is the address that the device will set as the destination when it sends a broadcast. An interface sees all traffic on the wire it's sitting on, but is polite enough to only read data addressed to it. The way you address an interface is by using its specific address, or the broadcast address.
 * inet: means the network layer protocol is internet (ipv4). inet6 would mean IPv6.
 * lft: stands for lifetime. If you get this address through DHCP, then you'll have a valid lifetime for your lease on the IP address. And just to make handoffs a little bit easier, a (probably) shorter preferred lifetime.
| {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/465563",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/248906/"
]
} |
465,583 | I have a string: 6.40.4 and a second one: 6.40 I am using var=$(echo $versionfull | cut -d'.' -f3) to get the third digit from the first string in bash. What does this command return for the second one? It looks empty but either [ -z $var ] or [ $var == "" ] does not work. I want to give it a value of 0 in case of a second string. | You could use var=$(echo "${versionfull}.0" | cut -d'.' -f3) . In the first case, versionfull will contain 6.40.4.0, ignoring the padding and returning 4 as needed. In the second case, the .0 will be padded and returned. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/465583",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/308075/"
]
} |
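Two notes on the original tests: [ $var == "" ] breaks when var is empty because it collapses to [ == "" ] (a missing operand), and == is a bashism anyway; the portable operator is =. Quoted, the empty-check works and can supply the default directly:

    var=$(echo "$versionfull" | cut -d'.' -f3)
    [ -z "$var" ] && var=0

or, leaving the padding trick from the answer aside, plain parameter expansion does the same: var=${var:-0}.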
465,589 | We want to delete old files with the following jobs, all at the same time at midnight:

    0 0 * * * root [[ -d /var/log/ambari-metrics-collector ]] && find /var/log/ambari-metrics-collector -type f -mtime +10 -regex '.*\.log.*[0-9]$' -delete
    0 0 * * * root [[ -d /var/log/kO ]] && find /var/log/Ko -type f -mtime +10 -regex '.*\.log.*[0-9]$' -delete
    0 0 * * * root [[ -d /var/log/POE ]] && find /var/log/POE -type f -mtime +10 -regex '.*\.log.*[0-9]$' -delete
    0 0 * * * root [[ -d /var/log/REW ]] && find /var/log/REW -type f -mtime +10 -regex '.*\.log.*[0-9]$' -delete

Is it OK to run all of them at the same time? Does cron run them step by step, or all of them in parallel? | Yes, it is perfectly acceptable to have cron schedule multiple jobs at the same time. Computers do nothing simultaneously, however, and they will be started in the order present in the cron table. However, they will not be run in sequence; they will be started one after the other within a few milliseconds of midnight -- simultaneously for all practical purposes. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/465589",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
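If the four near-identical entries ever drift apart, a single entry with a loop does the same work; a sketch (one crontab line, directory list taken from the question):

    0 0 * * * root for d in /var/log/ambari-metrics-collector /var/log/kO /var/log/POE /var/log/REW; do [ -d "$d" ] && find "$d" -type f -mtime +10 -regex '.*\.log.*[0-9]$' -delete; done

Note the [ ... ] test instead of [[ ... ]]: system crontabs typically run /bin/sh, where [[ is not guaranteed to exist.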
465,669 | The kernel contains a filesystem, nsfs. snapd creates a nsfs mount under /run/snapd/ns/<snapname>.mnt for each installed snap. ls shows it as a 0 byte file. The kernel source code does not seem to contain any documentation or comments about it. The main implementation seems to be here and the header file here. From that, it seems to be namespace related. A search of the repo does not even find Kconfig entries to enable or disable it... What is the purpose of this filesystem and what is it used for? | As described in the kernel commit log linked to by jiliagre above, the nsfs filesystem is a virtual filesystem making Linux-kernel namespaces available. It is separate from the /proc "proc" filesystem, where some process directory entries reference inodes in the nsfs filesystem in order to show which namespaces a certain process (or thread) is currently using. The nsfs doesn't get listed in /proc/filesystems (while proc does), so it cannot be explicitly mounted. mount -t nsfs ./namespaces fails with "unknown filesystem type". This is because nsfs is tightly interwoven with the proc filesystem. The filesystem type nsfs only becomes visible via /proc/$PID/mountinfo when bind-mounting an existing(!) namespace filesystem link to another target. As Stephen Kitt rightly suggests above, this is to keep namespaces existing even if no process is using them anymore. For example, create a new user namespace with a new network namespace, then bind-mount it, then exit: the namespace still exists, but lsns won't find it, since it's not listed in /proc/$PID/ns anymore, yet it exists as a (bind) mount point.

    # bind mount only needs an inode, not necessarily a directory ;)
    touch mynetns

    # create new network namespace, show its id and then bind-mount it, so it
    # is kept existing after the unshare'd bash has terminated.
    # output: net:[##########]
    NS=$(sudo unshare -n bash -c "readlink /proc/self/ns/net && mount --bind /proc/self/ns/net mynetns") && echo $NS

    # notice how lsns cannot see this namespace anymore: no match!
    lsns -t net | grep ${NS:5:-1} || echo "lsns: no match for net:[${NS:5:-1}]"

    # however, findmnt does locate it on the nsfs...
    findmnt -t nsfs | grep ${NS:5:-1} || echo "no match for net:[${NS:5:-1}]"
    # output: /home/.../mynetns nsfs[net:[##########]] nsfs rw

    # let the namespace go...
    echo "unbinding + releasing network namespace"
    sudo umount mynetns
    findmnt -t nsfs | grep ${NS:5:-1} || echo "findmnt: no match for net:[${NS:5:-1}]"

    # clean up
    rm mynetns

Output should be similar to this one:

    net:[4026532992]
    lsns: no match for net:[4026532992]
    /home/.../mynetns nsfs[net:[4026532992]] nsfs rw
    unbinding + releasing network namespace
    findmnt: no match for net:[4026532992]

Please note that it is not possible to create namespaces via the nsfs filesystem, only via the syscalls clone() (CLONE_NEW...) and unshare. The nsfs only reflects the current kernel status w.r.t. namespaces, but it cannot create or destroy them. Namespaces automatically get destroyed whenever there isn't any reference to them left: no processes (so no /proc/$PID/ns/...) AND no bind-mounts either, as we've explored in the above example. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/465669",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28015/"
]
} |
465,681 | I'm on Manjaro Gnome 3.28.3. In my setup, they do the same thing: list thumbnails of all opened applications, and holding Super or Alt while pressing Tab switches between applications. So I want Super Tab and Alt Tab to do different things, like having Super Tab switch applications only on the current workspace. | By default, they appear to both be assigned to "Switch applications". They can be re-assigned using the keyboard preferences:

 1. Open the menu in the top-right-hand corner of your main screen.
 2. Click on the "Settings" button (the left-most button in that menu).
 3. Choose "Devices" in the left-hand column.
 4. Choose "Keyboard". This leads you to a list of supported keyboard shortcuts, with their assigned keys; you can click on any entry to change it.

The "Switch applications" entry is somewhat strange: it only shows Super Tab, but it is also assigned to Alt Tab and will be disabled if you re-assign the latter. However it can then be re-assigned to whatever you want. I use Super Tab to switch applications, Alt Tab to switch windows, and Ctrl Super Tab to switch windows inside an application. You can assign nearly any key to any of the supported shortcuts, but you can't add new shortcuts. On my system, running GNOME 3.26, "Switch windows" only shows windows on the current workspace, whereas "Switch applications" shows all applications across all workspaces. Note that GNOME Tweaks has an "Alternatetab" extension which can also be used to adjust the behaviour of the window switcher. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/465681",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/232089/"
]
} |
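The same re-assignment can be scripted with gsettings; a sketch using the window-manager keybinding schema (verify the keys on your version with gsettings list-recursively org.gnome.desktop.wm.keybindings):

    gsettings set org.gnome.desktop.wm.keybindings switch-applications "['<Super>Tab']"
    gsettings set org.gnome.desktop.wm.keybindings switch-windows      "['<Alt>Tab']"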
465,683 | I have a text file which has only integer values, i.e. different integers on different lines (say from 1 to 47). I have made a script that reads each line and takes the value present on it. If a condition is met, I want to echo a statement. Contents of a.txt (one integer per line):

    1
    2
    3
    4
    5
    ... and so on till 47.

Output I want: as soon as it reads 5, output "Step Completed is 5" (without the double quotes). This should happen for 5, 10, 15, 20, ... till 45. Here is the code, but it doesn't seem to work:

    #!/bin/bash
    while IFS= read -r line; do
      if [[ $line=="5" ]] ; then
        echo "Step Completed is:" $var
      fi
    done < "$1"

Also I want to echo the same statement for every 5 integer values, i.e. as soon as the script reads 5, it should echo "Step completed is 5"; as soon as it reads 10, it should echo "Step Completed is 10". Like this. To run the script I am using the command:

    . ./al.sh a.txt
| | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/465683",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/306868/"
]
} |
465,695 | I have a variable which stores a string, the output of a sed command. I want to execute a set of commands only if this string value matches either of 2 other strings. I used the code below:

    #! /bin/ksh
    request="Request"
    fault="Fault"
    while read lines; do
      category=`echo $lines|sed -n -e 's/.*Summary: Value//p'| awk '{print $1}'`
      if [ ! -z "$category" ]
      then
        if($category = $request)
        then
          echo $category
        fi
      fi
    done<content.txt

But it's giving me an error:

    sample.sh: Request: not found

The variable category will have either the value Request or the value Order. Can someone point out the error or a solution to this? If the inner if is eliminated, echo $category prints the exact string value. | Simple answer: to do so you can use this syntax:

    if [ "$category" = "$request" ]

Spacing is important. Side note, you should use the modern way of doing command substitution (see article 1 and the posix article) and replace

    category=`echo $lines|sed -n -e 's/.*Summary: value//p'| awk '{print $1}'`

with

    category=$(echo $lines|sed -n -e 's/.*Summary: value//p'| awk '{print $1}')
| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/465695",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/306611/"
]
} |
465,714 | Is there a portable test for whether a variable (whose name is known statically in a script) is unset (which is different from empty)? | For scalar variables, in standard (POSIX) sh syntax:

    if [ "${var+set}" != set ]; then
      echo var is not set
    fi

Or fancier albeit less legible things like

    if [ -z "${var++}" ]; then
      echo var is unset
    fi

Or:

    if ${var+false}; then
      echo var is unset
    fi

For array variables (not that arrays are portable), in zsh or yash, that would return unset unless the array is assigned any list including the empty list, while in bash or ksh that would return unset unless the element of index 0 is set. Same for associative arrays (for key "0"). Note that except in zsh (when not in sh emulation), export var or readonly var declares the variable but doesn't give it any value, so shells other than zsh will report var as unset there (unless var had been assigned a value before the call to export/readonly). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/465714",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23692/"
]
} |
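A quick interactive check of the first form, covering the unset, set-but-empty, and set cases:

    unset var;  [ "${var+set}" != set ] && echo 'unset'           # prints unset
    var=;       [ "${var+set}" != set ] || echo 'set but empty'   # prints set but empty
    var=x;      [ "${var+set}" != set ] || echo 'set'             # prints set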
465,719 | We added a new virtual disk as sdb (from the vCenter) in order to increase the /var partition. We have Red Hat version 7.5. After we add the disk we get:

    pvs
      PV         VG   Fmt  Attr PSize   PFree
      /dev/sda2  vg00 lvm2 a--  <39.51g 60.00m
      /dev/sdb        lvm2 ---   80.00g 80.00g

Now we initialize the disk or partition:

    pvcreate /dev/sdb

Now we want to add this to volume group vg00:

    vgextend vg00 /dev/sdb
      Couldn't create temporary archive name.

Why do we get "Couldn't create temporary archive name."? | Most likely because your root partition is full. Try clearing out some files in the root (/) partition before running vgextend. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/465719",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
465,758 | I have a coworker who says you need to be careful extracting tarballs because they can make changes you don't know about. I always thought a tarball was just a hierarchy of compressed files, so if you extract it to /tmp/example/ it can't possibly sneak a file into /etc/ or anything like that. | Different tar utilities behave differently in this regard, so it's good to be careful. For a tar file that you didn't create, always list the table of contents before extracting it.

Solaris tar: "The named files are extracted from the tarfile and written to the directory specified in the tarfile, relative to the current directory. Use the relative path names of files and directories to be extracted. Absolute path names contained in the tar archive are unpacked using the absolute path names, that is, the leading forward slash (/) is not stripped off." In the case of a tar file with full (absolute) path names, such as:

    /tmp/real-file
    /etc/sneaky-file-here

... if you extract such a file, you'll end up with both files.

GNU tar: "By default, GNU tar drops a leading / on input or output, and complains about file names containing a .. component. There is an option that turns off this behavior: --absolute-names, -P: Do not strip leading slashes from file names, and permit file names containing a .. file name component." ... if you extract a fully-pathed tar file using GNU tar without using the -P option, it will tell you:

    tar: Removing leading / from member names

and will extract the file into subdirectories of your current directory.

AIX tar: says nothing about it, and behaves as the Solaris tar -- it will create and extract tar files with full/absolute path names.

HP-UX tar (better online reference welcomed): "WARNINGS: There is no way to restore an absolute path name to a relative position."

OpenBSD tar: "-P: Do not strip leading slashes (/) from pathnames. The default is to strip leading slashes." There are -P options implemented for tar on macOS, FreeBSD and NetBSD as well, with the same semantics, with the addition that tar on FreeBSD and macOS will "refuse to extract archive entries whose pathnames contain .. or whose target directory would be altered by a symlink" without -P.

schilytools star: "-/: Don't strip leading slashes from file names when extracting an archive. Tar archives containing absolute pathnames are usually a bad idea. With other tar implementations, they may possibly never be extracted without clobbering existing files. Star for that reason, by default strips leading slashes from filenames when in extract mode." | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/465758",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20840/"
]
} |
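The "always list the table of contents first" advice above, turned into a concrete pre-flight check; a sketch that flags absolute paths and .. components before extracting:

    tar -tf suspicious.tar | grep -E '^/|(^|/)\.\.(/|$)' \
      && echo 'archive contains absolute or ..-escaping paths' \
      || tar -xf suspicious.tar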
465,845 | The L7-filter project appears to be 15 years old, requires kernel patches with no support for kernels past version 2.6, and most of the pattern files it has appear to have been written in 2003. Usually when there's a project that is that old, and that popular, there are new projects to replace it, but I can't find anything more recent for Linux that does layer 7 filtering. Am I not looking in the right places? Was the idea of layer 7 filtering abandoned entirely for some reason? I would think that these days, with more powerful hardware, this would be even more practical than it used to be. | You must be talking of (the former) project Application Layer Packet Classifier for Linux, which was implemented as patches for the 2.4 and the 2.6 kernels. The major problem with this project is that the technology which it proposed to control quickly outpaced the usefulness and efficacy of the implementation. The members of the project also had no time (and money) to invest in keeping up with the advancements of the technology, as far as I remember, and then sold the rights to the implementation, which killed for good an already problematic project. The challenges this project/technology has faced over the years are, in no particular order:

 * adapting the patches to the 3.x/4.x kernel versions;
 * scarcity of processing power - in several countries, nowadays the speed of even domestic gigabit broadband will demand ASICs to do efficient layer 7 traffic-shaping;
 * bittorrent started using heavy obfuscation;
 * HTTPS started being used heavily to encapsulate several protocols and/or to avoid detection;
 * peer-to-peer protocols stopped using fixed ports, and started trying to get their way through any open/allowed port;
 * the rise of ubiquitous VoIP and real-time video, which makes traffic very sensitive to even small time delays;
 * the widespread use of VPN connections.

Heavy R&D was then invested into professional traffic-shaping products. The state of the art ten years ago already involved specific ASICs and (heavy use of) heuristics for detecting encrypted/obfuscated traffic. At present, besides more than a decade of experience in advanced heuristics, and with the advancement of global broadband, traffic-shaping (and firewall) vendors are also sharing global data peer-to-peer in real time to enhance the efficacy of their solutions. They are combining advanced heuristics with real-time profiling and data shared from thousands of locations in the world. It would be very difficult to put together an open source product that works as efficiently as an Allot NetEnforcer. With open source solutions, it is no longer usual to protect infrastructure bandwidth health by trying to traffic-shape according to the type/nature of traffic each IP address is using at the network level. Nowadays, for generic traffic control and protecting the bandwidth capacity of the infrastructure, the usual strategy (besides firewalling), without advanced traffic-shaping hardware, is allocating a small part of the bandwidth per IP address. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/465845",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/91667/"
]
} |
465,870 | I have a specific order of files that I want to list if they exist; around 40 files. Some kind of precedence. So I tried:
ls -1d /opt/foo/lib.jar /opt/bar/lib.jar
I expected this to list /opt/foo/lib.jar first if both exist. But actually it prints the bar first and the foo after that. Is there some way to make ls list the entries in the order given in parameters? Or some alternative approach with find ? | With GNU ls , you could try the -U option: -U : do not sort; list entries in directory order (though here, we're not listing the content of directories, so the part that matters is do not sort ).
$ ls -1dU /opt/foo/lib.jar /opt/bar/lib.jar
/opt/foo/lib.jar
/opt/bar/lib.jar
Slightly more portable (works with GNU and FreeBSD ls , but not with traditional ls implementations and is not POSIX either), you can use ls -1df :
$ ls -1df /opt/foo/lib.jar /opt/bar/lib.jar
/opt/foo/lib.jar
/opt/bar/lib.jar
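If you would rather not depend on ls behaviour at all, a plain shell loop (a minimal sketch) also preserves the argument order and silently skips files that do not exist:
for f in /opt/foo/lib.jar /opt/bar/lib.jar; do
    [ -e "$f" ] && printf '%s\n' "$f"
done
This needs no GNU extensions at all. | {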
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/465870",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5801/"
]
} |
465,878 | I'm trying to parse subdirectory names, but the output breaks the names up into individual words. Eg. If the subdirectory name is "Hello World", the output will be:
.
Hello
World
The following code works, but the output includes the current directory, which I don't want:
find "$my_dir" -maxdepth 1 -type d -print0 | while IFS= read -rd '' dir; do echo "$dir"; done
I'm trying to include an if statement that eliminates the current directory from the output, but it seems the code still sees individual words for each subdirectory name:
find "$my_dir" -maxdepth 1 -type d -print0 | while IFS= read -rd '' dir; do
    if ["$dir" != "."]; then
        echo "$dir"
    fi
done | With GNU ls , you could try the -U option: -U : do not sort; list entries in directory order (though here, we're not listing the content of directories, so the part that matters is do not sort ).
$ ls -1dU /opt/foo/lib.jar /opt/bar/lib.jar
/opt/foo/lib.jar
/opt/bar/lib.jar
Slightly more portable (works with GNU and FreeBSD ls , but not with traditional ls implementations and is not POSIX either), you can use ls -1df :
$ ls -1df /opt/foo/lib.jar /opt/bar/lib.jar
/opt/foo/lib.jar
/opt/bar/lib.jar | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/465878",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/308202/"
]
} |
465,903 | JavaScript has a function for this:
'world'.startsWith('w')
true
How can I test this with shell? I have this code:
if [ world = w ]
then
    echo true
else
    echo false
fi
but it fails because it is testing for equality. I would prefer using a builtin, but any utilities from this page would be acceptable: http://pubs.opengroup.org/onlinepubs/9699919799/idx/utilities.html | If your shell is bash: within double brackets, the right-hand side of the == operator is a pattern unless fully quoted:
if [[ world == w* ]]; then
    echo true
else
    echo false
fi
Or more tersely:
[[ world == w* ]] && echo true || echo false [*]
If you are not targeting bash specifically: use the case statement for pattern matching
case "world" in
    w*) echo true ;;
    *) echo false ;;
esac
[*] but you need to be careful with the A && B || C form because C will be executed if either A fails or B fails. The if A; then B; else C; fi form will only execute C if A fails.
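Another strictly POSIX option, using parameter expansion instead of pattern matching (a minimal sketch): if stripping the prefix changes the string, the string must start with that prefix:
s=world
if [ "${s#w}" != "$s" ]; then
    echo true
else
    echo false
fi
The same idea works for longer prefixes, e.g. "${s#wor}". | {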
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/465903",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17307/"
]
} |
465,914 | I have a following script that will take input (source path) from user and it will attach the volume inside docker container
echo -n "Enter the source path: "
read path
docker run -v $path:/opt/$path/ fedora
The problem is, I want to make a loop, so that the user can provide multiple source paths and they can be attached to the docker container. E.g.
docker run -v $path1:/opt/$path1 -v $path2:/opt/$path2
etc.; the number of $path variables depends on the user's input. | In bash , you should use an array to hold the paths as read from the user. In general, it's better to keep separate strings (pathnames) separate rather than to concatenate them into a single string that you later have to correctly parse to extract the original constituent strings.
#!/bin/bash
echo 'Enter paths, one by one followed by Enter. End input with Ctrl+D' >&2
mypaths=()
while IFS= read -r -p 'Path: ' thepath; do
    mypaths+=( -v "$thepath:/opt/$thepath" )
done
docker run "${mypaths[@]}" fedora
Here, the user is prompted to input a path several times until they press Ctrl+D . The paths entered are saved in the mypaths array, which is laid out in such a way that docker may use it directly. Once there are no more paths to read, the docker command is called. The "${mypaths[@]}" will be expanded to the individually quoted elements of the mypaths array. Since the entries of the array are stored the way they are (with -v as a separate element before each specially formatted pathname:/opt/pathname string), this will be correctly interpreted by the shell and by docker . The only characters that will not be tolerated in pathnames by the above code are newlines, since these are separating the lines read by read . The above script would also accept input redirected from a text file containing a single path per line of input. Note that the quoting is important. Without the double quotes around the variable expansions, you would not be able to use paths containing whitespace, and you would potentially also have issues with paths containing characters special to the shell. Related: Why does my shell script choke on whitespace or other special characters? For non- bash ( sh ) shells:
#!/bin/sh
echo 'Enter paths, one by one followed by Enter. End input with Ctrl+D' >&2
set --
while printf 'Path: ' >&2 && IFS= read -r thepath; do
    set -- "$@" -v "$thepath:/opt/$thepath"
done
docker run "$@" fedora
Here we use the list of positional parameters instead of an array (since arrays other than $@ are not available in sh in general), but the workflow is otherwise identical apart from printing the prompt explicitly with printf . Implementing the suggestion at the end of Stéphane Chazelas' comment , so that the script takes pathnames on its command line instead of reading them from its standard input. This allows the user to pass arbitrary pathnames to the script, even those that read can't easily read or a user can't easily type on a keyboard. For bash using an array:
#!/bin/bash
for pathname do
    mypaths+=( -v "$pathname:/opt/$pathname" )
done
docker run "${mypaths[@]}" fedora
For sh using the list of positional parameters:
#!/bin/sh
for pathname do
    shift
    set -- "$@" -v "$pathname:/opt/$pathname"
done
docker run "$@" fedora
Both of these would be run like
./script.sh path1 path2 path3 ... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/465914",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/277542/"
]
} |
465,937 | For some reason, the first partition of my VPS (running Debian 8) is aligned to sector 63 (instead of 2048)
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 314572800s
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number  Start      End         Size        Type     File system     Flags
 1      63s        79971569s   79971507s   primary  ext4            boot
 2      79971570s  83875364s   3903795s    primary  linux-swap(v1)
        83875365s  314572799s  230697435s           Free Space
Now I want to resize the partitions to allocate the free space, unfortunately fdisk starts the first sector at 2048. But as I've read here it is possible to force fdisk to start at 63 using this command.
fdisk -c=dos -u=cylinders /dev/sda
How safe is this? Moreover, as this method is deprecated, does this harm the performance of my VPS? | When extending the size, as it involves deleting the partition, you will have to recreate it again at whatever number it starts. Otherwise it won't be recognized at best, and at worst there can be data corruption. If that VPS is a template for creating other VMs, I would take the trouble to recreate/move the beginning *and* the data/sectors to sector 2048. As it is a VM, if you do want to move the partition, I would not exactly move it, I would create a partition on the side, copy the data and boot with the copied partition. That is the beauty of working with virtual machines, you have more room to test things. PS. As for my strictly personal opinion, the small gain on performance does not make it worth moving it from sector 63. I would wait for the machine to become retired, it will happen sooner or later. As for partition alignment: You want to leave the partition aligned to the 4096 byte boundary. That way real sectors are mostly certain to be aligned with virtual sectors, and your hypervisor/VM will extract better performance from the hardware. For understanding why unaligned partitions are a performance problem, see this image from purestorage.com: Consulting the white papers of a storage specialist in the industry to have a better idea what are the current best practices: Modern vendor-supported operating systems (OS) from Microsoft and Linux distributors such as Red Hat no longer require adjustments to align the file system partition with the blocks of the underlying storage system in a virtual environment. (e.g. "leave the default settings alone") However to continue replying to the original question, visiting a couple of linked white papers: Aligning your partitions to a 4K boundary in both the VMDK and the LUN is a recommended best practice And also: In the output for each device, multiply the start by sector size (normally 512 in the fdisk output), and then divide it by 4096. If the result is an integer (whole number), it is ALIGNED, if not, it is MISALIGNED. So, checking your question about creating a partition at sector 63:
512 * 63 / 4096 = 7.875 => MISALIGNED
I would probably use and leave the default in the future at 2048. Let's check it:
512 * 2048 / 4096 = 256 => ALIGNED
References: FAQ: Guest VM file system partition/disk alignment for VMware vSphere, other virtual environments, and NetApp storage systems How to correct guest VM data partition alignment in a VMware vSphere 5.x environment How to align blocks in VMWare ESX
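To script the alignment check described above (a minimal sketch using plain shell arithmetic; it assumes 512-byte logical sectors, as in the fdisk output):
start=63
if [ $(( start * 512 % 4096 )) -eq 0 ]; then echo ALIGNED; else echo MISALIGNED; fi
Running it again with start=2048 confirms that the modern default is aligned. | {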
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/465937",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/308419/"
]
} |
465,964 | Example csv:
AAA, BBB, CCC, DDD, EEE, FFF, GGG, HHH
Now I want to copy column 2 (BBB) and add it in front of column 3, so the file looks like:
AAA, BBB, BBB, CCC, DDD, EEE, FFF, GGG, HHH | $ cat test.txt
AAA, BBB, CCC, DDD, EEE, FFF, GGG, HHH
AAA, BBB, CCC, DDD, EEE, FFF, GGG, HHH
AAA, BBB, CCC, DDD, EEE, FFF, GGG, HHH
$ awk -F, '{$2=$2","$2}1' OFS=, test.txt
AAA, BBB, BBB, CCC, DDD, EEE, FFF, GGG, HHH
AAA, BBB, BBB, CCC, DDD, EEE, FFF, GGG, HHH
AAA, BBB, BBB, CCC, DDD, EEE, FFF, GGG, HHH
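For reference, a sed equivalent (a sketch; it duplicates the second comma-separated field in place, keeping the space that follows the comma):
sed 's/^\([^,]*,\)\([^,]*,\)/\1\2\2/' test.txt
This captures the first two fields, up to and including their commas, and repeats the second one. | {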
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/465964",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/297313/"
]
} |
465,977 | I get an issue using SENDMAIL on Ubuntu. All emails are going into the SPAM folder. I'm using NodeJS and the Nodemailer module. My code :
var transporter = nodemailer.createTransport({
    sendmail: true,
    newline: 'unix',
    path: '/usr/sbin/sendmail'
});
transporter.sendMail({
    from: "[email protected]",
    to: "[email protected]",
    subject: "test",
    html: "test"
}); | If you're sending with a gmail address but not through gmail's mail system using proper authentication your mail will be considered a spoofing attempt by many mail servers. Best practices for sending mails from a program:
Only use sender addresses that you actually control.
Only send from a properly configured mail server (static ip, correct forward and reverse DNS) or use a smarthost.
Otherwise your mails are indistinguishable from typical spams sent via hacked servers using fake sender addresses, and you shouldn't be surprised that they are classified as spam. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/465977",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/181789/"
]
} |
465,991 | What is the fastest command line way to merge the different lines of files? For example, I have two files:
a.txt:
foo bar
foo
bar
b.txt
foo
foo
bar
lineby
bar
And I would like to get the following output:
foo bar
foo
bar
lineby
Is there any fast way to merge files like the example above? (The order of the lines isn't important) | Use awk seen if you don't want to sort the file:
$ awk '!seen[$0]++' a.txt b.txt
foo bar
foo
bar
lineby
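If sorted output is acceptable, the usual alternative is simply:
sort -u a.txt b.txt
which merges both files and drops duplicate lines, at the cost of losing the original input order. | {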
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/465991",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/295635/"
]
} |
466,068 | I have a file of genomic data that is approximately 5 million lines long and should have only the characters A, T, C, and G in it. The problem is, I know how large the file should be, but it's slightly larger than that. Which means, something went wrong in an analysis, or there are lines that contain something other than genomic data. Is there a way to find any line that has something other than an A, T, C, or G? Due to the nature of the file, any other letter, spaces, numbers, symbols shouldn't be present. I've gone through searching symbol by symbol, so I was hoping there would be an easier way. | First of all, you definitely do not want to open the file in an editor (it's much too large to edit that way). Instead, if you just want to identify whether the file contains anything other than A , T , C and G , you may do that with
grep '[^ATCG]' filename
This would return all lines that contain anything other than those four characters. If you would want to delete these characters from the file, you may do so with
tr -c -d 'ATCG\n' <filename >newfilename
(if this is the correct way to "correct" the file or not, I don't know) This would remove all characters in the file that are not one of the four, and it would also retain newlines ( \n ). The edited file would be written to newfilename . If it's a systematic error that has added something to the file, then this could possibly be corrected by sed or awk , but we don't yet know what your data looks like. If you have the file open in vi or vim , then the command
/[^ATCG]
will find the next character in the editing buffer that is not a A , T , C or G . And
:%s/[^ATCG]//g
will remove them all.
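To also see where the offending lines are, and how many there are, grep can help further (nothing new here, just two standard flags):
grep -n '[^ATCG]' filename    # print each bad line prefixed with its line number
grep -c '[^ATCG]' filename    # just count the bad lines
This makes it easy to judge whether the extra data is a few stray lines or something systematic. | {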
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/466068",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/293255/"
]
} |
466,083 | Consider this scenario: User ssh into a system and does whatever he/she wants. Then schedules a shutdown using:
sudo shutdown -h +1
Finally closes the ssh session. Now /run/nologin has been created and no one can login anymore, but something comes up and we want to ssh back to the system before it goes down. Is it possible to remotely cancel the scheduled shutdown when we are not permitted to login any more? | Besides using the "root" account to make a new ssh connection, we can actually use PAM to allow specific users or groups to log in. The PAM configuration of sshd is located at:
/etc/pam.d/sshd
which is responsible for what you are looking for. By editing this file and using pam_succeed_if.so we can allow a specific user or group to log in even when /run/nologin exists on the machine. pam_succeed_if.so is designed to succeed or fail authentication based on characteristics of the account belonging to the user being authenticated or values of other PAM items. One use is to select whether to load other modules based on this test. So we use it to decide whether we should load the pam_nologin.so module or not, based on the username or user group. Open the file using your favorite text editor:
$ sudo vi /etc/pam.d/sshd
And find these lines:
# Disallow non-root logins when /etc/nologin exists.
account    required     pam_nologin.so
Add this line between them:
account [default=1 success=ignore] pam_succeed_if.so quiet user notingroup sudo
So now the lines should look like this:
# Disallow non-root logins when /etc/nologin exists.
account [default=1 success=ignore] pam_succeed_if.so quiet user notingroup sudo
account    required     pam_nologin.so
Now users who are in the sudo group can log in even when /run/nologin exists. And to allow a specific user:
account [default=2 success=ignore] pam_succeed_if.so quiet user != username
For more flexible conditions check out:
man pam_succeed_if
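Once you are back in, the pending shutdown itself can be cancelled on systemd-based systems with:
sudo shutdown -c
Cancelling the scheduled shutdown should also clear /run/nologin (it is created shortly before the scheduled time), so other users can log in again. | {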
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/466083",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64321/"
]
} |
466,209 | I need to modify a script which is part of a programme I downloaded. However, when I try to view the script with vim, it is full of symbols, numbers and letters placed randomly. Is there anything I can do to read this script? This is part of the script: ELF > @ @ J @ 8 @ @ @ @ @ @ À À @ @ @ @ TB TB XB XBa XBa \ ˜† €B €Ba €Ba à à @ @ Påtd „( „(A „(A ´ ´ Qåtd /lib64/ld-linux-x86-64.so.2 GNU % 8 ) # 7 $ . ' " , 1 * 6 5 3 / 2 % - 0 ! ( + 4 & ± A ! ® * 9 × ñ « P z â ó ³ í a ¥ 3 ¢ Æ ? s Š X ð é ö # N t 9 Ü M ) Š £ z [ - : S B Ô 3 e Ô P t : € ‘ \ È ò – É ² ï Û h : û ¦ A ÀFa ) E [ † Á H â d & ÈFa ä û Ð þ ‹ – libgfortran.so.3 _gfortran_st_write_done __gmon_start__ _Jv_RegisterClasses _gfortran_transfer_integer _gfortran_st_read _gfortran_st_inquire _gfortran_set_args _gfortran_iargc _gfortran_st_rewind _ITM_deregisterTMCloneTable _gfortran_pow_i4_i4 _ITM_registerTMCloneTable _gfortran_st_write _gfortran_st_read_done _gfortran_transfer_integer_write _gfortran_compare_string _gfortran_set_options _gfortran_st_close _gfortran_getarg_i4 _gfortran_transfer_character_write _gfortran_transfer_real_write _gfortran_transfer_logical_write _gfortran_stop_string _gfortran_transfer_real _gfortran_st_open _gfortran_transfer_character libm.so.6 truncf cosf sinf sqrtf powf log10f libgcc_s.so.1 __powisf2 libquadmath.so.0 libc.so.6 fflush exit sprintf _IO_putc fopen strncmp strncpy signal getpid calloc strlen memset stdout fputs memcpy fclose stderr fprintf memmove _IO_getc __libc_start_main free /cm/shared/apps/mpich2/3.2/gcc/lib:/cm/shared/apps/fftw/gcc/64/3.3.4/lib/ GLIBC_2.2.5 GCC_4.0.0 GFORTRAN_1.0 GFORTRAN_1.4 p ui  `Z' Î Æ ui  €eù Ø „eù å `Da ÀFa . ÈFa 4 €Da ˆDa Da ˜Da Da ¨Da °Da ¸Da ÀDa ÈDa ÐDa ØDa àDa èDa ðDa øDa Ea Ea Ea Ea Ea (Ea 0Ea 8Ea @Ea HEa PEa XEa `Ea ! hEa " pEa # xEa $ €Ea % ˆEa & Ea ' ˜Ea ( Ea ) ¨Ea * °Ea + ¸Ea , ÀEa - ÈEa / ÐEa 0 ØEa 1 àEa 2 èEa 3 ðEa 5 øEa 6 Fa 7 Hƒìè[ èZ èõ HƒÄÃÿ5z1! ÿ%|1! @ ÿ%z1! h éàÿÿÿÿ%r1! h éÐÿÿÿÿ%j1! h éÀÿÿÿÿ%b1! h é°ÿÿÿÿ%Z1! h é ÿÿÿÿ%R1! h éÿÿÿÿ%J1! h é€ÿÿÿÿ%B1! h épÿÿÿÿ%:1! h é`ÿÿÿÿ%21! h éPÿÿÿÿ%*1! h é@ÿÿÿÿ%"1! h é0ÿÿÿÿ%1! h é ÿÿÿÿ%1! h éÿÿÿÿ%1! h é ÿÿÿÿ%1! h éðþÿÿÿ%ú0! h éàþÿÿÿ%ò0! h éÐþÿÿÿ%ê0! h éÀþÿÿÿ%â0! h é°þÿÿÿ%Ú0! h é þÿÿÿ%Ò0! h éþÿÿÿ%Ê0! h é€þÿÿÿ%Â0! h épþÿÿÿ%º0! h é`þÿÿÿ%²0! h éPþÿÿÿ%ª0! h é@þÿÿÿ%¢0! h é0þÿÿÿ%š0! h é þÿÿÿ%’0! h éþÿÿÿ%Š0! h é þÿÿÿ%‚0! h éðýÿÿÿ%z0! h éàýÿÿÿ%r0! h! éÐýÿÿÿ%j0! h" éÀýÿÿÿ%b0! h# é°ýÿÿÿ%Z0! h$ é ýÿÿÿ%R0! h% éýÿÿÿ%J0! h& é€ýÿÿÿ%B0! h' épýÿÿÿ%:0! h( é`ýÿÿÿ%20! h) éPýÿÿÿ%*0! h* é@ýÿÿÿ%"0! h+ é0ýÿÿÿ%0! h, é ýÿÿÿ%0! h- éýÿÿÿ%0! h. é ýÿÿÿ%0! h/ éðüÿÿÿ%ú/! h0 éàüÿÿ1íI‰Ñ^H‰âHƒäðPTIÇÀ@A HÇÁPA HÇÇA°@ èWýÿÿôHƒìH‹.! H…ÀtÿÐHƒÄø¿Fa UH-¸Fa HƒøH‰åw]ø H…Àtô]¿¸Fa ÿà€ ¸¸Fa UH-¸Fa HÁøH‰åH‰ÂHÁê?HÐHÑøu]ú H…Òtô]H‰Æ¿¸Fa ÿ†€=ù/! u_UH‰åS»pBa HëhBa HƒìH‹ã/! HÁûHƒëH9Øs$fD HƒÀH‰Å/! ÿÅhBa H‹·/! H9Ørâè5ÿÿÿÆž/! HƒÄ[]À Hƒ=0+! 
t¸ H…ÀtU¿xBa H‰åÿÐ]é+ÿÿÿ é#ÿÿÿUH‰åH‰}è‰uä‹MäHcÉH‰Èº ‹Eä‰EøÇEô ‹Eø‰Eüƒ}ü ~.‹Eü‰EôH‹Uè‹EüƒèH˜¶< uƒ}ü”À¶Àƒmü…ÀuëÒ‹Eô]ÃUH‰åHƒì`H‰}ØH‰uÐH‰UÈH‰MÀL‰E¸D‰M´‹E´H˜I‰ÂA» ‹U´H‹EȉÖH‰Çè ‰EøH‹EØ‹ ‰EôH‹EØó‹Eøó*ÀH‹EÀóóYÂóXÁóEðH‹EÐóH‹EÀóóà óYÂó\È(ÁóEìH‹EÐóH‹EÀó óXÁóEè¿@A ¸ èTš HMèHUðHuìHEôHÇD$DA HÇ$DA A¹@A A¸@A H‰Ç¸ èZ³ H‹E¸H‰Ç¸ è š ‹Eø…À~4‹T ‰EüLMøLEüH‹MÈH‹UÀH‹uÐH‹EØ‹}´‰<$H‰Ç¸ èV« ÉÃUH‰åH‰}è‰uä‹MäHcÉH‰Èº ‹Eä‰Eø‹Eø‰Eüƒ}ü ~3‹Eü‰EôH‹Uè‹EüƒèH˜¶< t‹Eôëƒ}ü”À¶Àƒmü…ÀuëÍ‹Eô]ÃUH‰åHƒìpH‰}ÈH‰uÀH‰U¸H‰M°L‰E¨L‰M H‹E@Ç H‹E ‹ ‰EàH‹E(‹ ‰EØH‹E0‹ ‰EÜH‹E8‹ ‰EÔL‹MÀL‹E¸H‹MÈH‹U°HuØHEàH}èH‰<$H‰Çèy L‹MÀL‹E¸H‹MÈH‹U°HuÔHEÜH}äH‰<$H‰ÇèQ óEàóMÜ.Áz.Át%óEÔóMØó\ÁóMÜóUàó\Êó^ÁóEøóEØóMÔ.Áz.Át%óEÜóMàó\ÁóMÔóUØó\Êó^ÁóEô‹Eè…Àu‹Eä…À„œ ‹Eè™ÁêЃà)Ѓøu‹Eä™ÁêЃà)Ѓø„° ‹Eè‰ÂÁêÐÑø™ÁêЃà)Ѓøu‹Eä‰ÂÁêÐÑø™ÁêЃà)Ѓøtx‹EèP…ÀHÂÁø™ÁêЃà)Ѓøu‹EäP…ÀHÂÁø™ÁêЃà)Ѓøt<‹EèP…ÀHÂÁø™ÁêЃà)Ѓøu#‹EäP…ÀHÂÁø™ÁêЃà)Ѓøuéì ‹Uè‹Eä9ÂuéÝ ‹Eè…Àu‹Eä‰Eüë‹Eè‰Eü‹Eü™ÁêЃà)Ѓøu-H‹EÈó óMàó\ÁóYEøóMØóXÁóEìH‹EÈ‹ ‰Eð‹Eü‰ÂÁêÐÑø™ÁêЃà)Ѓøu-H‹EÀó óMàó\ÁóYEøóMØóXÁóEìH‹EÀ‹ ‰Eð‹EüP…ÀHÂÁø™ÁêЃà)Ѓøu-H‹E°ó óMØó\ÁóYEôóMàóXÁóEðH‹E°‹ ‰Eì‹EüP…ÀHÂÁø™ÁêЃà)Ѓøu-H‹E¸ó óMØó\ÁóYEôóMàóXÁóEðH‹E¸‹ ‰Eì‹Eè9Eüu9‹Eð‰Eà‹Eì‰EØL‹MÀL‹E¸H‹MÈH‹U°HuØHEàH}èH‰<$H‰Çèo é‹ýÿÿ‹Eð‰EÜ‹Eì‰EÔL‹MÀL‹E¸H‹MÈH‹U°HuÔHEÜH}äH‰<$H‰Çè6 éRýÿÿ‹EàH‹U¨‰‹EÜH‹U‰‹EØH‹U ‰‹EÔH‹U‰H‹E@Ç ÉÃUH‰åH‰}øH‰uðH‰UèH‰MàL‰EØL‰MÐH‹EÇ H‹EøóH‹Eàó .ÁvH‹EÇ ëH‹Eøó H‹EÐó.ÁvH‹EÇ H‹EðóH‹EØó .ÁvH‹E‹ PH‹E‰ë&H‹Eðó H‹Eèó.ÁwëH‹E‹ PH‹E‰]ÃUH‰åAUATSHìˆ H‰½øþÿÿH‰µðþÿÿH‰•èþÿÿH‰àþÿÿL‰…ØþÿÿL‰Ðþÿÿ‹EpH˜I‰ÄA½ ‹ExH˜I‰ÂA» H‹E | The "script" is not actually a script at all, but a compiled executable binary file. This is evident from the fact that it's not a text file and that it contains an ELF header. This means that to change it, you would have to locate its source code (which may not be available on your machine, and in some cases it may not available publicly at all), change it, and recompile the binary executable. How you do this depends on where you got the program from, what programming language its source is written in (probably Fortran using MPI judging from the library names and paths present in the binary output), and what build system it uses to build. This may require installing one or several additional pieces of software on your system for rebuilding the executable, along with any dependencies (libraries and headers) that the source may be using. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/466209",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/259909/"
]
} |
466,215 | Consider:
#!/bin/ksh
db2 connect to MKTETLPS user ....... using ........
db2 "select count(*) from etl.IDM_COLLAPSE_ORG_DEE c where c.IDM_PROCESS_STEP = 'I' and priority in ( '1','2','3','4','5') and c.update_ts < (current timestamp - 60 minutes) with ur" > l.txt
$a = /is115/idm/dsproj/scripts/l.txt
 if [ $a -gt 0 ]; then
    db2 "update etl.idm_collapse_org_dee set idm_process_step = NULL where priority in ('1','2','3','4','5') and idm_process_step ='I'"
 else
    echo "All is well"
fi
I am running the above script and am receiving the below error. How can I fix it?
./CORCleanup1.sh[8]: =: not found.
./CORCleanup1.sh[10]: test: 0403-004 Specify a parameter with this command.
All is well
DB20000I The SQL command completed successfully.
DB20000I The TERMINATE command completed successfully.
db2 connect reset
db2 terminate
exit | Variable assignments must not include $ and spaces around the = . I also would double quote the assignment. So the variable assignment should look as follows.
a="/is115/idm/dsproj/scripts/l.txt"
From further reading the script, it looks like you rather want to store the content of the file l.txt in $a rather than the file path itself. For that purpose you could use the assignment as follows.
read -r a < /is115/idm/dsproj/scripts/l.txt
( read -r reads the first line of the file, strips the leading and trailing spaces and tabs (assuming the default value of $IFS ) and stores it in the supplied variable) You also may want to double quote the $a variable in the if statement.
if [ "$a" -gt 0 ];
You can also use https://www.shellcheck.net/ to check the syntax of your script. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/466215",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/308605/"
]
} |
466,242 | Is it possible to use xargs to invoke a command so that the last argument of the command is fixed? My attempt:
printf '%s\n' a b c d | xargs -I{} echo {} LAST
ends up doing
echo a LAST
echo b LAST
echo c LAST
echo d LAST
I want for xargs to invoke
echo a b c d LAST    # fit as many as you can but always finish with LAST
Is this possible to do, preferably in a portable way? | tl;dr; this is how you could do it portably, without -I and other broken fancy options:
$ echo a b c d f g | xargs -n 2 sh -c 'echo "$@" LAST' sh
a b LAST
c d LAST
f g LAST
$ seq 1 100000 | xargs sh -c 'echo "$#" LAST' sh
23692 LAST
21841 LAST
21841 LAST
21841 LAST
10785 LAST
The problem with the -I option is that it's broken by design, and there is no way around it:
$ echo a b c d f g | xargs -I {} -n 1 echo {} LAST
a b c d f g LAST
$ echo a b c d f g | xargs -I {} -n 2 echo {} LAST
{} LAST a b
{} LAST c d
{} LAST f g
But they're probably covered, because that's what the standard says: -I replstr^[XSI] [Option Start] Insert mode: utility is executed for each line from standard input , taking the entire line as a single argument , inserting it in arguments for each occurrence of replstr. And it doesn't say anything about the interaction with the -n and -d options, so they're free to do whatever they please. This is how it is on an (older) FreeBSD, less unexpected but non-standard:
fzu$ echo a b c d f g | xargs -I {} -n 2 echo {} LAST
a b LAST
c d LAST
f g LAST
fzu$ echo a b c d f g | xargs -I {} -n 1 echo {} LAST
a LAST
b LAST
c LAST
d LAST
f LAST
g LAST | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/466242",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23692/"
]
} |
466,244 | What are the consequences for a ext4 filesystem when I terminate a copying cp command by typing Ctrl + C while it is running? Does the filesystem get corrupted? Is the partition's space occupied by the incomplete copied file still usable after deleting it? And, most importantly, is terminating a cp process a safe thing to do? | This is safe to do, but naturally you may not have finished the copy. When the cp command is run, it makes syscalls that instruct the kernel to make copies of the file. A syscall, or system call, is a function that an application can use to request a service from the kernel, such as reading or writing data to the disk. The userspace process simply waits for the syscall to finish. If you were to trace the calls from cp ~/hello.txt /mnt , it would look like:
open("/home/user/hello.txt", O_RDONLY) = 3
open("/mnt/hello.txt", O_CREAT|O_WRONLY, 0644) = 4
read(3, "Hello, world!\n", 131072) = 14
write(4, "Hello, world!\n", 14) = 14
close(3) = 0
close(4) = 0
This repeats for each file that is to be copied. No corruption will occur because of the way these syscalls work. When syscalls like these are entered, the fatal signal will only take effect after the syscall has finished , not while it is running (in fact, signals only arrive during a kernelspace to userspace context switch). Note that some syscalls, like read() , can be terminated early. Because of this, forcibly killing the process will only cause it to terminate after the currently running syscall has returned. This means that the kernel, where the filesystem driver lives, is free to finish the operations that it needs to complete to put the filesystem into a sane state. Any I/O of this kind will never be terminated in the middle of operation, so there is no risk of filesystem corruption. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/466244",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/233964/"
]
} |
466,389 | From The Linux Programming Interface: Where is the kernel stack (mentioned in the quote below) in the above diagram? Is it the top part "Kernel (mapped into process virtual memory, but not accessible to program)" in the above diagram?
The term user stack is used to distinguish the stack we describe here from the kernel stack. The kernel stack is a per-process memory region maintained in kernel memory that is used as the stack for execution of the functions called internally during the execution of a system call. (The kernel can't employ the user stack for this purpose since it resides in unprotected user memory.)
Where are "Frames for C run-time startup functions" and "Frame for main()" (mentioned in the diagram below) in the above diagram? Is "argv, environ" in the above diagram "Frames for C run-time startup functions", "Frame for main()", or part of either? What is the lowest segment between 0x00000000 and 0x08048000 used for? Thanks. | There is not a kernel stack. For each thread, there is a memory region that is used as stack space when the process makes a system call. There are also separate "interrupt stacks", one per CPU, which are used by the interrupt handler. These memory areas reside in the kernel address space (above 0xc0000000 in your figure. The stack frames (C runtime frames, the frame for main, etc.) are part of the stack. The process arguments ( argv ) and the environment are separate areas, and are not part of the stack. The area between 0x0 and 0x08048000 (about 128 MB) is not used for anything. Originally, the i386 System V ABI reserved this area for the stack, but Linux does things differently. Leaving the area unused does not waste RAM, only address space, because the area is not mapped. Note that this information is almost totally obsolete by now, since it describes how things are done on the 32-bit x86 architecture. 32-bit only x86 machines are hard to find today, and distributions are phasing out support for them. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/466389",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
466,496 | Consider the following (with sh being /bin/dash ):
$ strace -e trace=process sh -c 'grep "^Pid:" /proc/self/status /proc/$$/status'
execve("/bin/sh", ["sh", "-c", "grep \"^Pid:\" /proc/self/status /"...], [/* 47 vars */]) = 0
arch_prctl(ARCH_SET_FS, 0x7fcc8b661540) = 0
clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7fcc8b661810) = 24865
wait4(-1, /proc/self/status:Pid: 24865
/proc/24864/status:Pid: 24864
[{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 24865
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=24865, si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
exit_group(0) = ?
+++ exited with 0 +++
There's nothing unusual, grep replaced a process forked (here via clone() ) from the main shell process. So far so good. Now with bash 4.4:
$ strace -e trace=process bash -c 'grep "^Pid:" /proc/self/status /proc/$$/status'
execve("/bin/bash", ["bash", "-c", "grep \"^Pid:\" /proc/self/status /"...], [/* 47 vars */]) = 0
arch_prctl(ARCH_SET_FS, 0x7f8416b88740) = 0
execve("/bin/grep", ["grep", "^Pid:", "/proc/self/status", "/proc/25798/status"], [/* 47 vars */]) = 0
arch_prctl(ARCH_SET_FS, 0x7f8113358b80) = 0
/proc/self/status:Pid: 25798
/proc/25798/status:Pid: 25798
exit_group(0) = ?
+++ exited with 0 +++
Here what's apparent is that grep assumes the pid of the shell process and there is no apparent fork() or clone() call. The question is, then, how does bash achieve such acrobatics without either of the calls? Note, however, that a clone() syscall appears if the command contains shell redirection, such as df > /dev/null | The sh -c 'command line' form is typically used by things like system("command line") , ssh host 'command line' , vi 's ! , cron , and more generally anything that is used to interpret a command line, so it's pretty important to make it as efficient as possible. Forking is expensive, in CPU time, memory, allocated file descriptors... Having a shell process lying about just waiting for another process before exiting is just a waste of resources. Also, it makes it difficult to correctly report the exit status of the separate process that would execute the command (for instance, when the process is killed). Many shells will generally try to minimize the number of forks as an optimisation. Even non-optimised shells like bash do it in the sh -c cmd or (cmd in subshell) cases. Contrary to ksh or zsh, it doesn't do it in bash -c 'cmd > redir' or bash -c 'cmd1; cmd2' (same in subshells). ksh93 is the shell that goes the furthest in avoiding forks. There are cases where that optimisation cannot be done, like when doing:
sh < file
Where sh can't skip the fork for the last command, because more text could be appended to the script whilst that command is running. And for non-seekable files, it can't detect the end-of-file as that could mean reading too much too early from the file. Or:
sh -c 'trap "echo Ouch" INT; cmd'
Where the shell may have to run more commands after the "last" command has been executed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/466496",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85039/"
]
} |
466,550 | I would like to find all the .html files in a folder and append [file](./file.html) to another file called index.md . I tried the following command:
ls | awk "/\.html$/" | xargs -0 -I @@ -L 1 sh -c 'echo "[${@@%.*}](./@@)" >> index.md'
But it can't substitute @@ inside the command? What am I doing wrong?
Note: Filename can contain valid characters like space
Clarification: index.md would have each line with [file](./file.html) where file is the actual file name in the folder | Just do:
for f in *.html; do printf '%s\n' "[${f%.*}](./$f)"; done > index.md
Use set -o nullglob ( zsh , yash ) or shopt -s nullglob ( bash ) for *.html to expand to nothing instead of *.html (or report an error in zsh ) when there's no html file. With zsh , you can also use *.html(N) or in ksh93 ~(N)*.html . Or with one printf call with zsh :
files=(*.html)
rootnames=(${files:r})
printf '[%s](./%s)\n' ${rootnames:^files} > index.md
Note that, depending on which markdown syntax you're using, you may have to HTML-encode the title part and URI-encode the URI part if the file names contain some problematic characters. Not doing so could even end up introducing a form of XSS vulnerability depending on context. With ksh93, you can do it with:
for file in *.html; do
  title=${ printf %H "${file%.*}"; }
  title=${title//$'\n'/"<br/>"}
  uri=${ printf '%#H' "$file"; }
  uri=${uri//$'\n'/%0A}
  printf '%s\n' "[$title]($uri)"
done > index.md
Where %H ¹ does the HTML encoding and %#H the URI encoding, but we still need to address newline characters separately. Or with perl :
perl -MURI::Encode=uri_encode -MHTML::Entities -CLSA -le '
  for (<*.html>) {
    $uri = uri_encode("./$_");
    s/\.html\z//;
    $_ = encode_entities $_;
    s:\n:<br/>:g;
    print "[$_]($uri)"
  }'
Using <br/> for newline characters. You may want to use  instead or more generally decide on some form of alternative representation for non-printable characters. There are a few things wrong in your code:
parsing the output of ls
use a $ meant to be literal inside double quotes
Using awk for something that grep can do (not wrong per se, but overkill)
use xargs -0 when the input is not NUL-delimited
-I conflicts with -L 1 . -L 1 is to run one command per line of input but with each word in the line passed as separate arguments, while -I @@ runs one command for each line of input with the full line (minus the trailing blanks, and quoting still processed) used to replace @@ .
using {} inside the code argument of sh ( command injection vulnerability )
In sh , the var in ${var%.*} is a variable name , it won't work with arbitrary text.
use echo for arbitrary data.
If you wanted to use xargs -0 , you'd need something like:
printf '%s\0' * | grep -z '\.html$' | xargs -r0 sh -c '
  for file do
    printf "%s\n" "[${file%.*}](./$file)"
  done' sh > file.md
Replacing ls with printf '%s\0' * to get a NUL-delimited output
awk with grep -z (GNU extension) to process that NUL-delimited output
xargs -r0 (GNU extensions) without any -n / -L / -I , because while we're at spawning a sh , we might as well have it process as many files as possible
have xargs pass the words as extra arguments to sh (which become the positional parameters inside the inline code), not inside the code argument.
which means we can more easily store them in variables (here with for file do which loops over the positional parameters by default) so we can use the ${param%pattern} parameter expansion operator.
use printf instead of echo .
It goes without saying that it makes little sense to use that instead of doing that for loop directly over the *.html files like in the top example. ¹ It doesn't seem to work properly for multibyte characters in my version of ksh93 though (ksh93u+ on a GNU system) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/466550",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/220462/"
]
} |
466,572 | So I ran a ls -F or ls -al on my /usr/bin directory and some of my files showed up with a red background and white text. What does this mean? | With the GNU implementation of ls , the meaning of the colours depends on the setting of the LS_COLORS environment variable, typically set up with the dircolors command. A (combination of) numeric code(s) determines which colours get used to indicate a particular file type:
# Attribute codes:
# 00=none 01=bold 04=underscore 05=blink 07=reverse 08=concealed
# Text color codes:
# 30=black 31=red 32=green 33=yellow 34=blue 35=magenta 36=cyan 37=white
# Background color codes:
# 40=black 41=red 42=green 43=yellow 44=blue 45=magenta 46=cyan 47=white
A white text on a red background is defined with a combination of 37;41
Use echo "$LS_COLORS" to investigate and find that:
su=37;41
thus SETUID files are white text on a red background (which happens to be the default) dircolors --print-database gives a more verbose and readable output for the default settings in absence of any customisation:
SETUID 37;41 # file that is setuid (u+s)
STICKY 37;44 # dir with the sticky bit set (+t) and not other-writable
The only other default usage for a red highlight is blue text on a red background for directories with the sticky bit set.
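To check this quickly on your own system, split $LS_COLORS on colons and look for the su entry (a small sketch):
echo "$LS_COLORS" | tr ':' '\n' | grep '^su='
which on a default setup prints su=37;41, the rule that styles setuid files. | {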
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/466572",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/233811/"
]
} |
466,578 | In this answer I had some code which read:
if [[ $ZSH_VERSION ]]; then
This was edited to be:
if [[ -n $ZSH_VERSION ]]; then
Update: I just saw the edit comment: [[ x ]] didn't work until recently in zsh I looked through the zsh release notes and couldn't find reference to this. Which zsh version first allowed [[ x ]] ? | From the zsh 5.5.1 docs for CONDITIONAL EXPRESSIONS For compatibility, if there is a single argument that is not syntactically significant, typically a variable, the condition is treated as a test for whether the expression expands as a string of non-zero length. In other words, [[ $var ]] is the same as [[ -n $var ]]. It is recommended that the second, explicit, form be used where possible. With the source tree around,
% grep -rl 'if there is a single argument' .
./Doc/Zsh/cond.yo
% git blame ./Doc/Zsh/cond.yo | grep 'if there is a single argument'
d082827c83 (Jun T 2014-05-18 22:03:35 +0900 198) For compat...
Inspection of git log shows that the code change went in a bit earlier than the documentation:
commit 9d47e8398d299e53ffe4e7ddf3731d2fedae9948
...
Date: Tue May 13 08:16:50 2014 -0700

    32609: [[ $var ]] behaves as [[ -n $var ]] for bash/ksh compatibility
The mapping of the ChangeLog file to git tag is not clear to me, but it appears zsh 5.0.6 (Thu Aug 28 19:07:04 2014 +0100) is the first version with this change. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/466578",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143394/"
]
} |
466,599 | I have the following code in the middle of a script to confirm whether we want to resume the script or not.
read -r -p "Would you like to continue [Y/N] : " i
case $i in
    [yY])
        echo -e "Resuming the script";;
    [nN])
        echo -e "Skipped and exit script"
        exit 1;;
    *)
        echo "Invalid Option" ;;
esac
I would like to know whether there is any way to recall the switch-case if the input option is invalid? | Do your input in a loop. Exit the loop with break (or exit as the case may be) if you get a valid response from the user.
while true; do
    read -p 'Continue? yes/no: ' input
    case $input in
        [yY]*)
            echo 'Continuing'
            break
            ;;
        [nN]*)
            echo 'Ok, exiting'
            exit 1
            ;;
        *)
            echo 'Invalid input' >&2
    esac
done
As a utility function:
ask_continue () {
    while true; do
        read -p 'Continue? yes/no: ' input
        case $input in
            [yY]*)
                echo 'Continuing'
                break
                ;;
            [nN]*)
                echo 'Ok, exiting'
                exit 1
                ;;
            *)
                echo 'Invalid input' >&2
        esac
    done
}
A variation of the utility function that allows exiting through EOF (e.g. pressing Ctrl+D ):
ask_continue () {
    while read -p 'Continue? yes/no: ' input; do
        case $input in
            [yY]*)
                echo 'Continuing'
                return
                ;;
            [nN]*)
                break
                ;;
            *)
                echo 'Invalid input' >&2
        esac
    done
    echo 'Ok, exiting'
    exit 1
}
Here, there are three ways out of the loop:
The user enters "yes", in which case the function returns.
The user enters "no", in which case we break out of the loop and execute exit 1 .
The read fails due to something like encountering an end-of-input or some other error, in which case the exit 1 is executed.
Instead of exit 1 you may want to use return 1 to allow the caller to decide what to do when the user does not want to continue. The calling code may then look like
if ! ask_continue; then
    # some cleanup, then exit
fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/466599",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
466,653 | From APUE:
The real user ID and real group ID of a process identify who we really are. These two fields are taken from our entry in the password file when we log in. Normally, these values don't change during a login session, although there are ways for a superuser process to change them
Can a superuser process change the real user ID and real group ID of a process, so that the relation between the real user ID and real group ID doesn't match those in the password file? For example, if user Tim isn't a member of group ocean per the password file, can a superuser process change the real user ID and real group ID of a process to be Tim and ocean respectively? | Yes, a superuser process can change its real user ID and real group ID to any value it desires. The values in /etc/passwd and /etc/shadow are the configuration for what values should be set, but not a limitation of possible values.
Edit #1
It means programs like login will read the values from the files, so the files are configuration files or input files. They are not constraints on what a program can do. A superuser process can pass any value to the kernel, and the kernel will not check any files. A program could call
setgid (54321);
setuid (12345);
and this would work, even if neither of the id's are mentioned in any file. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/466653",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
466,702 | We can see that atop logs are created each day and take a lot of space:
ls -l /var/log/atop/
total 1634632
-rw-r--r-- 1 root root 127992086 Aug 30 01:49 atop_20180829
-rw-r--r-- 1 root root 262277153 Aug 31 00:00 atop_20180830
-rw-r--r-- 1 root root 321592670 Sep 1 00:00 atop_20180831
-rw-r--r-- 1 root root 330041977 Sep 2 00:00 atop_20180901
-rw-r--r-- 1 root root 269040388 Sep 3 00:00 atop_20180902
-rw-r--r-- 1 root root 274807097 Sep 4 00:00 atop_20180903
-rw-r--r-- 1 root root 85426960 Sep 4 06:03 atop_20180904
-rw-r--r-- 1 root root 0 Sep 4 06:03 daily.log
How can I limit the atop logs, for example to only 5 logs (the last 5 days)? | In RH/CentOS atop is not being regulated by logrotate . In /usr/share/atop/atop.daily there is an example script to deal with atop log file rotation. The script has a find line deleting logs older than 28 days, as in:
# delete logfiles older than four weeks
# start a child shell that activates another child shell in
# the background to avoid a zombie
#
( (sleep 3; find $LOGPATH -name 'atop_*' -mtime +28 -exec rm {} \;)& )
You can copy that script to /etc/cron.daily and change the number of days to 5.
( (sleep 3; find $LOGPATH -name 'atop_*' -mtime +5 -exec rm {} \;)& )
Dealing with daily files can also be a bit inconvenient. Using the above script, if you do not intend to do a pure daily rotation, you can also edit /etc/sysconfig/atop and change the duration, for instance to 10 minutes, as in:
INTERVAL=600
As an alternative , if you do want to keep rotating it daily, you can create a logrotate file at /etc/logrotate.d/atop as in:
/var/log/atop/atop_20[0-9][0-9][0-9][0-9][0-9][0-9] {
    missingok
    daily
    nodateext
    rotate 5
    ifempty
    nocreate
    postrotate
        /usr/bin/find /var/log/atop/ -maxdepth 1 -mount -name atop_20\[0-9\]\[0-9\]\[0-9\]\[0-9\]\[0-9\]\[0-9\]\* -mtime +40 -exec /bin/rm {} \;
    endscript
}
If you are doing the logrotate version, you need to keep the daily files, and do not change the INTERVAL parameter. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/466702",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
466,711 | I started to learn Bash scripting and I'm using Bash scripting tutorial. There it says: Before Bash interprets (or runs) every line of our script it first checks to see if any variable names are present. For every variable it has identified, it replaces the variable name with its value. Then it runs that line of code and begins the process again on the next line. So does Bash first run through the whole script to find variables? I'm not sure whether this is what the author tried to say, but if yes, I guess it is not correct? When I execute:
#!/bin/bash
echo "hello $USERR"
USERR=John
I get hello as result. If I run:
#!/bin/bash
USERR=John
echo "hello $USERR"
then I get hello John as result. | So does Bash first run through the whole script to find variables? Nope. As you yourself discovered in your example, Bash scripts are executed from top to bottom. A good practice is to define all variables that you need at the top of your script. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/466711",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/307920/"
]
} |
466,720 | I'm trying to list the connected monitors using xrandr, which returns the following information:
Screen 0: minimum 320 x 200, current 1366 x 768, maximum 8192 x 8192
eDP-1 connected primary 1366x768+0+0 (normal left inverted right x axis y axis) 344mm x 193mm
   1366x768      60.06*+
   1360x768      59.80    59.96
   1280x720      60.00    59.99    59.86    59.74
   1024x768      60.04    60.00
   960x720       60.00
   928x696       60.05
   896x672       60.01
   1024x576      59.95    59.96    59.90    59.82
   960x600       59.93    60.00
   960x540       59.96    59.99    59.63    59.82
   800x600       60.00    60.32    56.25
   840x525       60.01    59.88
   864x486       59.92    59.57
   800x512       60.17
   700x525       59.98
   800x450       59.95    59.82
   640x512       60.02
   720x450       59.89
   700x450       59.96    59.88
   640x480       60.00    59.94
   720x405       59.51    58.99
   684x384       59.88    59.85
   680x384       59.80    59.96
   640x400       59.88    59.98
   576x432       60.06
   640x360       59.86    59.83    59.84    59.32
   512x384       60.00
   512x288       60.00    59.92
   480x270       59.63    59.82
   400x300       60.32    56.34
   432x243       59.92    59.57
   320x240       60.05
   360x202       59.51    59.13
   320x180       59.84    59.32
DP-1 disconnected (normal left inverted right x axis y axis)
HDMI-1 disconnected (normal left inverted right x axis y axis)
But I don't know why the VGA port was labeled as DP-1 instead of VGA-1, while the HDMI port was clearly labeled as HDMI-1. So does the Linux kernel label the VGA, DVI and DisplayPort ports as "DP"? | This answer on AskUbuntu seems relevant to your question. Basically, the VGA port you see is just a built-in adapter for the native DP port. In this case, xrandr correctly shows you the installed hardware which is a DisplayPort . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/466720",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/228308/"
]
} |
466,722 | Is it possible to increase the length of time-slices, which the Linux CPU scheduler allows a process to run for? How could I do this?
Background knowledge
This question asks how to reduce how frequently the kernel will force a switch between different processes running on the same CPU. This is the kernel feature described as "pre-emptive multi-tasking". This feature is generally good, because it stops an individual process hogging the CPU and making the system completely non-responsive. However switching between processes has a cost , therefore there is a tradeoff. If you have one process which uses all the CPU time it can get, and another process which interacts with the user, then switching more frequently can reduce delayed responses. If you have two processes which use all the CPU time they can get, then switching less frequently can allow them to get more work done in the same time.
Motivation
I am posting this based on my initial reaction to the question How to change Linux context-switch frequency? I do not personally want to change the timeslice. However I vaguely remember this being a thing, with the CONFIG_HZ build-time option. So I want to know what the current situation is. Is the CPU scheduler time-slice still based on CONFIG_HZ ? Also, in practice build-time tuning is very limiting. For Linux distributions, it is much more practical if they can have a single kernel per CPU architecture, and allow configuring it at runtime or at least at boot-time. If tuning the time-slice is still relevant, is there a new method which does not lock it down at build-time? | For most RHEL7 servers, RedHat suggest increasing sched_min_granularity_ns to 10ms and sched_wakeup_granularity_ns to 15ms. ( Source . Technically this link says 10 μs, which would be 1000 times smaller. It is a mistake). We can try to understand this suggestion in more detail.
Increasing sched_min_granularity_ns
On current Linux kernels, CPU time slices are allocated to tasks by CFS, the Completely Fair Scheduler. CFS can be tuned using a few sysctl settings.
kernel.sched_min_granularity_ns
kernel.sched_latency_ns
kernel.sched_wakeup_granularity_ns
You can set sysctl's temporarily until the next reboot, or permanently in a configuration file which is applied on each boot. To learn how to apply this type of setting, look up "sysctl" or read the short introduction here . sched_min_granularity_ns is the most prominent setting. In the original sched-design-CFS.txt this was described as the only "tunable" setting, "to tune the scheduler from 'desktop' (low latencies) to 'server' (good batching) workloads." In other words, we can change this setting to reduce overheads from context-switching, and therefore improve throughput at the cost of responsiveness ("latency"). I think of this CFS setting as mimicking the previous build-time setting, CONFIG_HZ . In the first version of the CFS code, the default value was 1 ms, equivalent to 1000 Hz for "desktop" usage. Other supported values of CONFIG_HZ were 250 Hz (the default), and 100 Hz for the "server" end. 100 Hz was also useful when running Linux on very slow CPUs, this was one of the reasons given when CONFIG_HZ was first added as a build setting on X86 . It sounds reasonable to try changing this value up to 10 ms (i.e. 100 Hz), and measure the results. Remember the sysctls are measured in ns . 1 ms = 1,000,000 ns.
We can see this old-school tuning for 'server' was still very relevant in 2011, for throughput in some high-load benchmark tests: https://events.static.linuxfound.org/slides/2011/linuxcon/lcna2011_rajan.pdf
And perhaps a couple of other settings
The default values of the three settings above look relatively close to each other. It makes me want to keep things simple and multiply them all by the same factor :-). But I tried to look into this and it seems some more specific tuning might also be relevant, since you are tuning for throughput. sched_wakeup_granularity_ns concerns "wake-up pre-emption". I.e. it controls when a task woken by an event is able to immediately pre-empt the currently running process. The 2011 slides showed performance differences for this setting as well. See also "Disable WAKEUP_PREEMPT" in this 2010 reference by IBM , which suggests that "for some workloads" this default-on feature "can cost a few percent of CPU utilization". SUSE Linux has a doc that suggests setting this to larger than half of sched_latency_ns will effectively disable wake-up pre-emption, and then "short duty cycle tasks will be unable to compete with CPU hogs effectively". The SUSE document also suggests some more detailed descriptions of the other settings. You should definitely check what the current default values are on your own systems though. For example the default values on my system seem slightly different to what the SUSE doc says. https://www.suse.com/documentation/opensuse121/book_tuning/data/sec_tuning_taskscheduler_cfs.html If you experiment with any of these scheduling variables, I think you should also be aware that all three are scaled (multiplied) by 1+log_2 of the number of CPUs. This scaling can be disabled using kernel.sched_tunable_scaling . I could be missing something, but this seems surprising e.g. if you are considering the responsiveness of servers providing interactive apps and running at/near full load, and how that responsiveness will vary with the number of CPUs per server.
Suggestion if your workload has large numbers of threads
I also came across a 2013 suggestion, for a couple of other settings, that may gain significant throughput if your workload has large numbers of threads. (Or perhaps more accurately, it re-gains the throughput which they had obtained on pre-CFS kernels).
" Two Necessary Kernel Tweaks " - discussion on PostgreSQL mailing list.
" Please increase kernel.sched_migration_cost in virtual-host profile " - Red Hat Bug 969491.
Ignore CONFIG_HZ
I think you don't need to worry about what CONFIG_HZ is set to. My understanding is it is not relevant on current kernels, assuming you have reasonable timer hardware. See also commit 8f4d37ec073c, "sched: high-res preemption tick" , found via this comment in a thread about the change: https://lwn.net/Articles/549754/ . (If you look at the commit, I wouldn't worry that SCHED_HRTICK depends on X86 . That requirement seems to have been dropped in some more recent commit).
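For reference, reading and changing one of these tunables at runtime looks like this (a minimal sketch; 10000000 ns is the 10 ms value from the RHEL suggestion quoted at the top):
sysctl kernel.sched_min_granularity_ns
sudo sysctl -w kernel.sched_min_granularity_ns=10000000
To make a value persistent across reboots, put a line such as kernel.sched_min_granularity_ns = 10000000 in a file under /etc/sysctl.d/. | {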
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/466722",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29483/"
]
} |
466,747 | Let's say, from kernel 2.6 onwards, I watch all the running processes on the system. Are the PIDs of the children always greater than their parents' PIDs? Is it possible to have special cases of "inversion"? | No, for the very simple reason that there is a maximum numerical value the PID can have. If a process has the highest PID, no child it forks can have a greater PID. The alternative to giving the child a lower PID would be to fail the fork() altogether, which wouldn't be very productive. The PIDs are allocated in order, and after the highest one is used, the system wraps around to reusing the (free) lower ones, so you can get lower PIDs for a child in other cases too. The default maximum PID on my system ( /proc/sys/kernel/pid_max ) is just 32768, so it's not hard to reach the condition where the wraparound happens.
$ echo $$
27468
$ bash -c 'echo $$'
1296
$ bash -c 'echo $$'
1297
If your system were to allocate PIDs randomly ( like OpenBSD appears to do ) instead of consecutively (like Linux), there would be two options. Either the random choice was made over the whole space of possible PIDs, in which case it would be obvious that a child's PID can be lower than the parent's. Or, the child's PID would be chosen by random from the values greater than the parent's PID, which would on average put it halfway between the parent's PID and the maximum. Processes forking recursively would then quickly reach the maximum and we'd be at the same point as mentioned above: a new fork would need to use a lower PID to succeed. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/466747",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12985/"
]
} |
466,770 | It's best to use public keys for SSH. So my sshd_config has PasswordAuthentication no . Some users never log in, e.g. a sftp user with shell /usr/sbin/nologin . Or a system account. So I can create such a user without a password with adduser gary --shell /usr/sbin/nologin --disabled-password . Is that a good/bad idea? Are there ramifications I've not considered? | If you have root access to the server and can regenerate ssh keys for your users in case they lose them AND you're sure a user (as a person) won't have multiple user accounts and they need to switch between those on an SSH session (well, they can also open multiple SSH sessions if the need arises) AND they will never need "physical" (via keyboard+monitor or via remote console for a VM) access to the server AND no users have password-gated sudo access (i.e. they either don't have sudo access at all, or have sudo access with NOPASSWD ) I think you'll be good. We have many servers at work configured like this (only some accounts need access to the VM via vmware remote console, the others connect only via SSH with pubkey auth). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/466770",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/291147/"
]
} |
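A sketch of provisioning such a key-only account, reusing the adduser invocation from the question; gary_key.pub is a stand-in for whatever public key the user supplies:

    adduser gary --shell /usr/sbin/nologin --disabled-password
    install -d -m 700 -o gary -g gary /home/gary/.ssh
    install -m 600 -o gary -g gary gary_key.pub /home/gary/.ssh/authorized_keys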
466,781 | I have a collection of files with names like these: Ace of Aces (Europe).mp4 Action Fighter (USA, Europe) (v1.2).mp4 Addams Family, The (Europe).mp4 Aerial Assault (USA).mp4 After Burner (World).mp4 Air Rescue (Europe).mp4 Aladdin (Europe).mp4 Alex Kidd - High-Tech World (USA, Europe).mp4 Alex Kidd in Miracle World (USA, Europe) (v1.1).mp4 Alex Kidd in Shinobi World (USA, Europe).mp4 Alex Kidd - The Lost Stars (World).mp4 Alf (USA).mp4 Alien 3 (Europe).mp4 Alien Storm (Europe).mp4 Alien Syndrome (USA, Europe).mp4 Altered Beast (USA, Europe).mp4 American Baseball (Europe).mp4 American Pro Football (Europe).mp4 Andre Agassi Tennis (Europe).mp4 Arcade Smash Hits (Europe).mp4 Assault City (Europe) (Light Phaser).mp4 (I have these filenames listed in a text file.) I have to get away from parentheses, so I would like to rename the files to remove the parentheses and the text between them, so the filenames will be as follows: Ace of Aces.mp4 Action Fighter.mp4 Addams Family, The.mp4 Aerial Assault.mp4 After Burner.mp4 Air Rescue.mp4 Aladdin.mp4 Alex Kidd - High-Tech World.mp4 Alex Kidd in Miracle World.mp4 Alex Kidd in Shinobi World.mp4 Alex Kidd - The Lost Stars.mp4 Alf.mp4 Alien 3.mp4 Alien Storm.mp4 Alien Syndrome.mp4 Altered Beast.mp4 American Baseball.mp4 American Pro Football.mp4 Andre Agassi Tennis.mp4 Arcade Smash Hits.mp4 Assault City.mp4 Ideally, any solution should alert me if I have, for example, files like Aladdin (Europe).mp4 and Aladdin (Asia).mp4, or Ghostbusters (1984).mp4 and Ghostbusters (2016).mp4, and not destroy any files. | Using Perl's rename command line: Use this: rename -n 's/\s*\([^\)]+\)//g' *.mp4 Remove -n (aka dry run) when the output looks satisfactory. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/466781",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/309038/"
]
} |
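To address the collision worry in the question (Aladdin (Europe).mp4 vs Aladdin (Asia).mp4), a rough pre-flight check is to apply the same substitution to the names and look for duplicates; any name printed here would be produced by more than one source file:

    printf '%s\n' *.mp4 | sed 's/ *([^)]*)//g' | sort | uniq -d

Many builds of the Perl rename also refuse to overwrite an existing target unless you pass --force, but verify how the copy on your system behaves before relying on that.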
466,798 | I have one file like: ID_SOUR_CALENDAR BIGINT NOT NULL DEFAULT 0 COMPRESS 0 ,UNIQUE PRIMARY INDEX ( CALENDAR_DATE );ID ,ID_SOUR ,PRIMARY INDEX ( CALENDAR_DATE ); I want to replace the ',' by ')' in the line just before the line containing PRIMARY. The result should be: ID_SOUR_CALENDAR BIGINT NOT NULL DEFAULT 0 COMPRESS 0 )UNIQUE PRIMARY INDEX ( CALENDAR_DATE );ID ,ID_SOUR )PRIMARY INDEX ( CALENDAR_DATE ); | Using GNU sed : sed 'N;s/,\(\s*\n.*PRIMARY\)/)\1/;P;D' fileID_SOUR_CALENDAR BIGINT NOT NULL DEFAULT 0 COMPRESS 0 )UNIQUE PRIMARY INDEX ( CALENDAR_DATE );ID ,ID_SOUR )PRIMARY INDEX ( CALENDAR_DATE ); N Read/append the next line of input into the pattern space. P Print up to the first embedded newline of the current pattern space. D Delete up to the first embedded newline in the pattern space. Start next cycle, but skip reading from the input if there is still data in the pattern space. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/466798",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/174248/"
]
} |
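For comparison, a sketch of the same edit in awk, buffering the previous line and patching its trailing comma whenever the current line contains PRIMARY:

    awk '/PRIMARY/{sub(/,[[:space:]]*$/,")",prev)} NR>1{print prev} {prev=$0} END{print prev}' file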
466,807 | I would like to transfer a zip file from my local server to a remote server. I used the following command, which contains host, port, and username as follows: scp "$somepath/${file}.zip" "$ftp_user"@"ftp_server":upload/ In this command: ftp_user=royal ftp_server=np.royal.com This command is failing to connect. Would you please advise me? | The second expansion in your command is missing its $ sign: "ftp_server" is passed to scp as the literal hostname ftp_server, which cannot be resolved, so the connection fails. Expand both variables: scp "$somepath/${file}.zip" "$ftp_user@$ftp_server:upload/" With ftp_user=royal and ftp_server=np.royal.com this connects to np.royal.com as user royal and copies the archive into the upload/ directory under that user's home. Note that despite the ftp_ prefix in the variable names, scp runs over SSH, not FTP, so the remote host must accept SSH connections; if it listens on a non-standard port, pass it with scp -P portnumber. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/466807",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/191010/"
]
} |
466,825 | I have a text file contain numbers as follows: 34 77 1716 150?.2 67.5892 11.9691 23 1?6 8335 78 0 0 0 0 0 0 036 79 0 0 0 0 0 0 037 80 0 0 ? ? 0 0 038 81 0 0 0 ? 0 0 039 82 0 0 0 0 ? 0 ?40 85 169 152.8 81.5917 22.3759 18 118 10041 251 1412 131? 97.7358 16.6563 37 126 8942 252 578 488.5 88.?502 23.9728 29 124 9543 253 585 518.6 95.4444 19.6661 19 119 10044 254 576 533.2 96.4271 18.5693 13 119 10645 255 1424 1313.3 94.7584 21.7414 14 146 132 I would like to replace every ? with 0 and every 0 with ? at the same time, so the table above look like this: 34 77 1716 15?0.2 67.5892 11.9691 23 106 8335 78 ? ? ? ? ? ? ?36 79 ? ? ? ? ? ? ?37 8? ? ? 0 0 ? ? ?38 81 ? ? ? 0 ? ? ?39 82 ? ? ? ? 0 ? 04? 85 169 152.8 81.5917 22.3759 18 118 1??41 251 1412 1310 97.7358 16.6563 37 126 8942 252 578 488.5 88.05?2 23.9728 29 124 9543 253 585 518.6 95.4444 19.6661 19 119 1??44 254 576 533.2 96.4271 18.5693 13 119 1?645 255 1424 1313.3 94.7584 21.7414 14 146 132 How can I do it? | I think, since you are only swapping singular characters, tr may be a good tool for the job. Try something like this: tr '0?' '?0' < log.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/466825",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
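Since tr reads standard input, a sketch for updating the file in place, assuming the data lives in log.txt as in the answer (never redirect a command's output straight back onto its own input file):

    tr '0?' '?0' < log.txt > log.swapped && mv log.swapped log.txt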
466,837 | When I run FOO=$(ssh -L 40000:localhost:40000 [email protected] cat /foo) I get the contents of /foo, but then it disconnects. What I'd like to do is somehow get the content of /foo and keep the connection open so that port 40000 is still forwarded to the same server. Is this possible? You might ask, why not just issue two ssh connections like this: FOO=$(ssh [email protected] cat /foo) ssh -L 40000:localhost:40000 [email protected] -f -N In my situation, the reason I can't do this is because the IP (1.2.3.4) is a load balancer that forwards the connection to a number of random backends. Each time I ssh to 1.2.3.4 I get a different machine, and the contents of /foo are different for every machine. Moreover, the data I send over the forwarded port (40000) depends on the contents of /foo. If I grab the contents of /foo on machine A and then send data over port 40000 to machine B, things don't work. | What you are describing is known as SSH multiplexing. I use that setup in a devops setting for caching my connections to any VMs. In that way I reuse (cache) the same connection for up to 30 minutes, without renegotiating the entire SSH connection (and authenticating the user) for each new command. It gives me a huge boost in speed when sending multiple commands in a row to a VM/server. The setup is done on the client side; for a cache of 30 minutes, it can be done in /etc/ssh/ssh_config as: ControlPath ~/.ssh/cm-%r@%h:%p ControlMaster auto ControlPersist 30m The MaxSessions parameter, also in ssh_config, defines how many simultaneous multiplexed sessions are allowed; the default value is 10. If you need more simultaneous cached connections, you might want to change it. For instance, for a maximum of 20 cached connections: MaxSessions 20 For more information, see OpenSSH/Cookbook/Multiplexing An advantage of SSH multiplexing is that the overhead of creating new TCP connections is eliminated. ... The second and later connections will reuse the established TCP connection over and over and not need to create a new TCP connection for each new SSH connection. Also see Using SSH Multiplexing SSH multiplexing is the ability to carry multiple SSH sessions over a single TCP connection Without multiplexing, every time that command is executed your SSH client must establish a new TCP connection and a new SSH session with the remote host. With multiplexing, you can configure SSH to establish a single TCP connection that is kept alive for a specific period of time, and SSH sessions are established over that connection. This can result in speed increases that can add up when repeatedly running commands against remote SSH hosts. Lastly, as the multiplexing keeps the TCP connection open between the client and the server, you have the guarantee that you are talking with the same machine behind the load balancer, as long as the cache is open/active. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/466837",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/309086/"
]
} |
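Once multiplexing is configured as above, the cached master connection can be inspected or torn down explicitly; user@1.2.3.4 stands in for whatever destination you use:

    ssh -O check [email protected]   # is a master connection currently alive?
    ssh -O exit [email protected]    # close it, e.g. to force the load balancer to pick a new backend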
466,872 | So I am creating this bash script but it is giving me the error "name or service not known". The code is: #!/bin/bash if [ "$1" == "" ]; then echo "Hello"; echo "Bye"; else for x in 'seq 1 254'; do ping -c 1 $1.$x; done; fi | You're using single quotes instead of backticks for your seq statement. for x in 'seq 1 254' Will give the loop a single literal item, the string seq 1 254 (it is never executed as a command), so ping ends up being asked to resolve a host ending in .seq, which fails with "name or service not known". for x in `seq 1 254` Will result in: 1, 2, 3, 4, 5, 6, etc. Single quotes are for literals, backticks are for command substitution. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/466872",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/309115/"
]
} |
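With the quoting fixed, two common ways to write that loop today; $(...) is the modern replacement for backticks, and the brace-expansion form needs no external seq at all:

    for x in $(seq 1 254); do ping -c 1 "$1.$x"; done
    for x in {1..254}; do ping -c 1 "$1.$x"; done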
466,916 | When you have modified /etc/ssh/sshd_config, you may execute systemctl restart sshd.service to reflect the change. At least in my environment, systemctl restart ssh.service also works. And systemctl --all list-units ssh* tells me there isn't any service with the name sshd.service. Then why is sshd.service so widely used and actually valid? (I know the name of the ssh daemon is sshd, but that alone doesn't seem a sufficient reason.) I executed the following commands on Linux Mint 19 (Ubuntu-based) and Volumio 2 (Raspbian-based), both of which are based on Debian. systemctl restart sshd.service; echo $? #=> 0 systemctl restart ssh.service; echo $? #=> 0 systemctl --no-legend --all list-units ssh* #=> only ssh.service exists | The ssh service has always been named ssh in /etc/services, probably regardless of the distribution, because it's the SSH protocol, not the sshd daemon. It then made sense, at least in the Debian implementation and thus Debian derivatives, that the same name was chosen to start the service as ... service ssh start which translated into the System-V style /etc/init.d/ssh. This was kept in systemd, again for consistency, since the service can be started interchangeably the old way or the systemd way. Still, an alias is also defined for compatibility with other distributions which made a different choice: [Install] WantedBy=multi-user.target Alias=sshd.service So both can be used on Debian and derivatives and they represent the same service. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/466916",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/291779/"
]
} |
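A quick way to confirm the aliasing on a given machine; on a Debian-based system the first command should list both names for the same unit:

    systemctl show -p Names ssh.service
    systemctl cat ssh.service | grep -i '^alias'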
466,938 | I have a iso file named ubuntu.iso . I can mount it with the command: mount ubuntu.iso /mnt . After mounting it, I can see it from the outout of the command df -h : /dev/loop0 825M 825M 0 100% /mnt . However, if I execute the command mount -o loop ubuntu.iso /mnt , I'll get the same result. As I know, loop device allows us to visit the iso file as a device, I think this is why we add the option -o loop . But I can visit my iso file even if I only execute mount ubuntu.iso /mnt . So I can't see the difference between mount and mount -o loop . | Both versions use loop devices, and produce the same result; the short version relies on “cleverness” added to mount in recent years. mount -o loop tells mount explicitly to use a loop device; it leaves the loop device itself up to mount , which will look for an available device, set it up, and use that. (You can specify the device too with e.g. mount -o loop=/dev/loop1 .) The cleverness is that, when given a file to mount, mount will automatically use a loop device to mount it when necessary — i.e. , the file system isn’t specified, or libblkid determines that the file system is only supported on block devices (and therefore a loop device is needed to translate the file into a block device). The loop device section of the mount man page has more details. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/466938",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/145824/"
]
} |
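To see the loop device that either mount form set up behind the scenes (the exact output format varies between util-linux versions):

    losetup -a
    # e.g.: /dev/loop0: []: (/root/ubuntu.iso)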
466,961 | Hey folks, I have the following script, where I just try to print an emoji; however, when I execute the script I don't see the emoji, but when I do printf emoji-utf-code from the console it works. Am I missing something? #!/usr/bin/env bash UNICORN='\U1F984\n' # this does not work when I run the script: printf ${UNICORN} printf '\U1F984\n' echo "Riding an ${UNICORN}" # but when I type the printf command with the UTF-8 code in the console it works. PS: How could I add a shell here so I could run the script? I have seen it on other posted questions. EDIT 1: corrected code after some comments. Still getting garbled output on the console (screenshot omitted). | printf '\U1F984\n' Versions 4.1 and earlier of the Bourne Again shell do not understand \U and \u escape sequences in the format argument of the built-in printf command. To use them, you need version 4.2 or later. This addition is in the Bourne Again shell's release notes for version 4.2, in 2011. Alternatively, use the Z shell version 4.1.1 or later. The Z shell gained this extension to printf several years earlier, in 2003. The 93 Korn shell has also had this extension for some time. You can of course convert the code point into UTF-8 and print the UTF-8 directly as a sequence of octal-encoded octets, which should work with any unextended standard-conformant printf: printf '\360\237\246\204\n' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/466961",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/309162/"
]
} |
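A sketch of a script that degrades gracefully on older shells, testing the running bash version before using \U and otherwise falling back to the octal UTF-8 form from the answer:

    if (( BASH_VERSINFO[0] > 4 || (BASH_VERSINFO[0] == 4 && BASH_VERSINFO[1] >= 2) )); then
        printf '\U1F984\n'
    else
        printf '\360\237\246\204\n'   # the same code point, pre-encoded as UTF-8 octets
    fi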
466,986 | After using Linux for quite some time now I want to try out FreeBSD. I created a bootable USB stick and want to play around a bit in the live CD mode. The first problem I encounter is that I don't know how to get the wifi to work. Running sysctl net.wlan.devices yields an empty net.wlan.devices: . I guess this means that the module for my wifi adapter is not loaded? Most of the stuff I find on enabling wifi requires changing some configs and rebooting, but I guess that's not that easy on a live USB. Now my question is: How do I enable wifi? How do I know which module I need to load? I am using a ThinkPad L480 (which is not listed on the laptops page). Is FreeBSD even compatible with it? | According to ThinkPad L480 Tech Specs, it features an Intel® Dual Band 8265 Wireless AC (2 x 2) wifi adapter, which should be supported by the iwm driver. You should be able to load the driver and firmware at runtime without rebooting: kldload if_iwm kldload iwm8265fw Check if they loaded successfully with kldstat. If the modules aren't listed, I guess you are out of luck until someone adds support for your card. If they are, read on. The rest is nicely explained in the Wireless Networking chapter of the FreeBSD Handbook; here are the exact lines you need: ifconfig wlan0 create wlandev iwm0 ifconfig wlan0 up scan You should be able to see a list of wifi networks: ifconfig wlan0 list scan You will need to create /etc/wpa_supplicant.conf (assuming your wifi network is RSN/WPA2): network={ ssid="yournetwork" psk="yournetworkpass" } Append the following to /etc/rc.conf (note: the Handbook example uses wlans_ath0, but for this card the parent device is iwm0): wlans_iwm0="wlan0" ifconfig_wlan0="WPA DHCP" Bring up the interface: service netif restart ...and you should be good to go. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/466986",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/211901/"
]
} |
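On an installed system (as opposed to the live USB), the driver and firmware can be loaded at every boot by appending to /boot/loader.conf, following FreeBSD's modulename_load convention:

    if_iwm_load="YES"
    iwm8265fw_load="YES"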
466,999 | I've been seeing this in a lot of docker-entrypoint.sh scripts recently, and can't find an explanation online. My first thoughts are that it is something to do with signaling but that's a pretty wild guess. | The "$@" bit will expand to the list of positional parameters (usually the command line arguments), individually quoted to avoid word splitting and filename generation ("globbing"). The exec will replace the current process with the process resulting from executing its argument. In short, exec "$@" will run the command given by the command line parameters in such a way that the current process is replaced by it (if the exec is able to execute the command at all). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/466999",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/233811/"
]
} |
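A minimal docker-entrypoint.sh sketch showing why the idiom matters: because exec replaces the shell, the real command becomes the container's PID 1 and receives signals (e.g. SIGTERM from docker stop) directly, which confirms the signalling hunch in the question:

    #!/bin/sh
    set -e
    echo "running one-time setup..."   # placeholder for migrations, config templating, etc.
    exec "$@"                          # hand over the process; nothing after this line ever runs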
467,001 | I have two machines, "sender" and "receiver". Sender runs the following command each night: zfs send -i bpool/backups@2018-09-04 bpool/backups@2018-09-05 | ssh receiver /sbin/zfs receive bpool/backups This sends the latest snapshot of bpool/backups from sender to receiver. (Dates are automatically generated each night.) If someone (on receiver) does as little as: cd /bpool/backups ; ls it breaks the nightly backup job, with the following error: root@sender:~# zfs send -i bpool/backups@2018-09-04 bpool/backups@2018-09-05 | ssh receiver /sbin/zfs receive bpool/backups cannot receive incremental stream: destination bpool/backups has been modified since most recent snapshot warning: cannot send 'bpool/backups@2018-09-04': Broken pipe (I assume this is because of updated atimes, or similar.) How can I stop this from happening? (If I made receiver:/bpool/backups read-only, how would the receive work?) | zfs recv -F will force the receiving dataset to roll back to the previous received snapshot. Turning off atime will only address the issue of people examining the files on the backup, but if there are any other changes, you'll want to use the -F flag instead. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/467001",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17288/"
]
} |
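Putting the answer together with the nightly job from the question; additionally, setting readonly=on on the destination blocks casual POSIX-level writes while, on the ZFS implementations I'm aware of, zfs receive is still allowed to update the dataset:

    zfs send -i bpool/backups@2018-09-04 bpool/backups@2018-09-05 | ssh receiver /sbin/zfs receive -F bpool/backups
    ssh receiver /sbin/zfs set readonly=on bpool/backups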