Columns: source_id (int64, 1 to 74.7M), question (string, 0 to 40.2k chars), response (string, 0 to 111k chars), metadata (dict)
451,797
I'm trying to install VirtualBox 5.2 (I got the .deb file from the website) on my Debian 9 stretch. Currently I have the kernel Linux 4.9.0. When I run:

sudo dpkg -I virtualbox-5.2_5.2.12-122591~Debian~stretch_amd64.deb

I get the following error message:

This system is currently not set up to build kernel modules.
Please install the Linux kernel "header" files matching the current kernel for adding new hardware support to the system.
This system is currently not set up to build kernel modules.
Please install the Linux kernel "header" files matching the current kernel for adding new hardware support to the system.

There were problems setting up VirtualBox. To re-start the set-up process, run
/sbin/vboxconfig
as root.

But when I try to install or upgrade the header files with sudo apt-get install linux-headers-$(uname -r), Linux tells me that the header files are already installed. And when I run /sbin/vboxconfig I get the same error message as above. Could anyone help with this issue? Thanks!
You should be able to do this using socat and the ProxyCommand option for ssh. ProxyCommand configures the ssh client to use a proxy process for communicating with your server. socat establishes two-way communication between STDIN/STDOUT (socat and the ssh client) and your UNIX socket.

ssh -o "ProxyCommand socat - UNIX-CLIENT:/home/username/test.sock" foo
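If you use this regularly, the same proxy can be set up once in your ssh client configuration instead of on the command line. A minimal sketch, assuming the same socket path as above and a made-up host alias foo:

# ~/.ssh/config
Host foo
    ProxyCommand socat - UNIX-CLIENT:/home/username/test.sock

With that entry in place, a plain ssh foo goes through the socket.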
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/451797", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/297006/" ] }
451,825
  pid  name            tid    mod          state    data
--------------------------------------------------------------------------------
39523  srv0051_0001_0  39642  20-10:59:28  Working  820000:500196:500077
43137  srv0051_0005_0  43156  20-10:59:28  Working  820000:4250501:840057
43895  srv0051_0006_0  43903  20-10:59:28  Working  820000:4250501:840057
47523  srv0051_0009_0  47547  20-10:59:28  Working  600005:4250501:4250846
48841  srv0051_0010_0  48851  20-10:59:28  Working  600005:4290000:4290000
58182  srv0051_0020_0  58188  20-10:59:28  Working  820000:4250501:840057
 8297  srv0079_0008_0  8316   20-10:59:27  Working  600005:3070001:3050012

pid,name,tid,mod,state,appnbr,request,tasknbr,appctx,username
39523,srv0051_0001_0,39642,20-10:59:28,Working,820000,500196,500077
43137,srv0051_0005_0,43156,20-10:59:28,Working,820000,4250501,840057
43895,srv0051_0006_0,43903,20-10:59:28,Working,820000,4250501,840057
47523,srv0051_0009_0,47547,20-10:59:28,Working,600005,4250501,4250846
48841,srv0051_0010_0,48851,20-10:59:28,Working,600005,4290000,4290000
58182,srv0051_0020_0,58188,20-10:59:28,Working,820000,4250501,840057
8297,srv0079_0008_0,8316,20-10:59:27,Working,600005,3070001,3050012
sed '
  # delete the 2nd line
  2d

  # remove any leading whitespace
  s/^[[:blank:]]\+//

  # on line 1, replace "data" with other words
  1s/data/appnbr request tasknbr appctx username/

  # replace any sequences of whitespace with comma
  s/[[:blank:]]\+/,/g

  # replace the 3rd and subsequent colons
  s/:/,/3g' file

Requires GNU sed for the s///3g action.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/451825", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/275232/" ] }
451,975
I have a production unit in which a java process has become a zombie and has remained there for some time now. If the unit is restarted, it will be cleared. However, the unit has not been restarted and another java process is up and running. Is there any issue if this zombie state remains as it is without clearing it? Will it have any effect (performance or slowness)?
A zombie process won't have any effect on performance or slowness, as zombie processes don't use up any system resources.

Note: practically, it is still using its PID (which is a limited resource), and the kernel data structures for the process are still allocated. Usually this won't matter much, but the kernel memory usage can be significant on systems with very limited memory.

Problem caused by zombie processes: each zombie process retains its process ID. Linux systems have a finite number of process IDs – 32767 by default on 32-bit systems. If zombies are accumulating at a very quick rate, the entire pool of available PIDs will eventually become assigned to zombie processes, preventing other processes from launching.

Note: on 64-bit systems you can increase the maximum PID, see https://unix.stackexchange.com/a/16884/170373

However, a few zombie processes hanging around are no problem – although they do indicate a bug with their parent process on your system.

Explanation: when a process dies on Linux, it isn't all removed from memory immediately; its process descriptor stays in memory. The process's status becomes EXIT_ZOMBIE and the process's parent is notified that its child process has died with the SIGCHLD signal. The parent process is then supposed to execute the wait() system call to read the dead process's exit status and other information. This allows the parent process to get information from the dead process. After wait() is called, the zombie process is completely removed from memory. This normally happens very quickly, so you won't see zombie processes accumulating on your system. However, if a parent process isn't programmed properly and never calls wait(), its zombie children will stick around in memory until they're cleaned up.

Resolution: you can't kill zombie processes as you can kill normal processes with the SIGKILL signal, because zombie processes are already dead. One way to reap a zombie is by sending the SIGCHLD signal to its parent process. This signal tells the parent process to execute the wait() system call and clean up its zombie children. Send the signal with the kill command, replacing pid in the command below with the parent process's PID:

kill -s SIGCHLD pid

When the process that created the zombies ends, init inherits the zombie processes and becomes their new parent. (init is the first process started on Linux at boot and is assigned PID 1.)

Note: from Linux 3.4 onwards, processes can issue the prctl() system call with the PR_SET_CHILD_SUBREAPER option, and as a result they, not process #1, will become the parent of their orphaned descendant processes. Refer to https://unix.stackexchange.com/a/177361/5132

init then executes the wait() system call to clean up its zombie children, so init will make short work of the zombies. You can restart the parent process after closing it.
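To see whether any zombies are present, and which parent owns them, you can list processes in state Z before sending SIGCHLD. A minimal sketch (column selection may vary slightly between ps implementations):

# list zombie processes together with their parent PID
ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/'

The PPID column is the pid to use in the kill -s SIGCHLD command above.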
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/451975", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/250201/" ] }
451,982
cpupower sometimes fails to execute with this error:

cpupower: error while loading shared libraries: libcpupower.so.0: cannot open shared object file: No such file or directory

I've compiled and installed the latest cpupower tool from sources on my workstation. The Makefile install command installs the libs in /usr/local/lib and my LD_LIBRARY_PATH is set accordingly:

syl@WorkStation-T3500:~$ echo $LD_LIBRARY_PATH
:/usr/local/lib/

lrwxrwxrwx 1 root root    20 juin 26 11:46 libcpupower.so -> libcpupower.so.0.0.1
lrwxrwxrwx 1 root root    20 juin 26 11:46 libcpupower.so.0 -> libcpupower.so.0.0.1
-rwxr-xr-x 1 root root 77048 juin 26 11:46 libcpupower.so.0.0.1

A simple cpupower info query works fine:

syl@WorkStation-T3500:~$ cpupower frequency-info
analyzing CPU 0:
  driver: intel_pstate
  CPUs which run at the same hardware frequency: 0
  CPUs which need to have their frequency coordinated by software: 0
  maximum transition latency: Cannot determine or is not supported.
  hardware limits: 1.20 GHz - 3.20 GHz
  available cpufreq governors: performance powersave
  current policy: frequency should be within 1.20 GHz and 3.20 GHz. The governor "powersave" may decide which speed to use within this range.
  current CPU frequency: Unable to call hardware
  current CPU frequency: 1.20 GHz (asserted by call to kernel)
  boost state support:
    Supported: yes
    Active: yes

Nevertheless, here is what happens when I try to set some policy:

syl@WorkStation-T3500:~$ sudo cpupower frequency-set --governor userspace
cpupower: error while loading shared libraries: libcpupower.so.0: cannot open shared object file: No such file or directory

May I ask you for some hints about this strange issue? All the best, Sylvain
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/451982", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/297115/" ] }
452,011
I have a huge log file compressed in .gz format and I want to just read the first line of it without uncompressing it, to just check the date of the oldest log in the file. The logs are of the form:

YYYY-MM-DD Log content asnsenfvwen eaifnesinrng
YYYY-MM-DD Log content asnsenfvwen eaifnesinrng
YYYY-MM-DD Log content asnsenfvwen eaifnesinrng

I just want to read the date in the first line, which I would do like this for an uncompressed file:

read logdate otherstuff < logfile.gz
echo $logdate

Using zcat is taking too long.
Piping zcat ’s output to head -n 1 will decompress a small amount of data, guaranteed to be enough to show the first line, but typically no more than a few buffer-fulls (96 KiB in my experiments): zcat logfile.gz | head -n 1 Once head has finished reading one line, it closes its input, which closes the pipe, and zcat stops after receiving a SIGPIPE (which happens when it next tries to write into the closed pipe). You can see this by running (zcat logfile.gz; echo $? >&2) | head -n 1 This will show that zcat exits with code 141, which indicates it stopped because of a SIGPIPE (13 + 128). You can add more post-processing, e.g. with AWK, to only extract the date: zcat logfile.gz | awk '{ print $1; exit }' (On macOS you might need to use gzcat rather than zcat to handle gzipped files.)
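If the goal is to put that first date into a shell variable (as in the original read attempt), the same pipeline can feed a command substitution. A small sketch, assuming the date is the first whitespace-separated field:

logdate=$(zcat logfile.gz | awk '{ print $1; exit }')
echo "$logdate"

Only the start of the file is decompressed, for the same reason as with head above.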
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/452011", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/295951/" ] }
452,048
I have some web applications that I wrote with python flask. I know which port I started each one on and each was started using nohup. Each one was started with something like

nohup python mywebapp.py &

When I look at my processes with ps, I only see something like

 36697 ?        60-21:36:16 python
 36971 ?        63-19:11:43 python
 37038 ?        65-06:57:22 python
 37312 ?        54-23:33:16 python
 37442 ?        54-09:14:57 python
 37716 ?        47-19:45:17 python
 68019 ?        00:29:24 python
146568 ?        00:20:57 python
146699 ?        00:17:08 python
150622 ?        00:32:20 python

If I need to stop one particular web application, how can I get from a port number back to a python process id so that I can kill the process?
You can use lsof to find the process id associated with a known port number:

lsof -i :port

(replace port with the port number in question). Alternatively, you may wish to use netstat, which can display all network connections, routing tables, interface statistics, masquerade connections, and multicast memberships. Try:

netstat -tulpn
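To go straight from a port number to a kill in one step, lsof can print just the PID with -t. A sketch, using 5000 as a stand-in for whatever port the flask app listens on:

# -t prints only the PID(s), suitable for command substitution
kill "$(lsof -t -i :5000)"

Check the output of lsof -i :5000 first if several processes might share the port.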
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/452048", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/293070/" ] }
452,079
I've just done a dist upgrade on my Debian SID computer, and the sound disappeared. I can play an audio file as root, but not as a normal user. I've checked that I'm in the "audio" group. I've checked to see if anything is muted by running alsamixer, but all lines are up and running. Here are the audio outputs I have on the computer:

$ lspci | grep Audio
00:1b.0 Audio device: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller (rev 05)
01:00.1 Audio device: NVIDIA Corporation GK107 HDMI Audio Controller (rev a1)

I have an unplugged HDMI output (the NVIDIA controller). My audio headers are plugged into my Intel controller. I can see this driver when I run alsamixer, but only see the HDMI output from my graphics card in the pavucontrol output devices. When listing my audio sinks, I only get one null device:

$ pacmd list-sinks
1 sink(s) available.
  * index: 2
        name: <auto_null>
        driver: <module-null-sink.c>
        flags: DECIBEL_VOLUME LATENCY FLAT_VOLUME DYNAMIC_LATENCY
        state: IDLE
        suspend cause:
        priority: 1000
        volume: front-left: 56362 / 86% / -3,93 dB, front-right: 55706 / 85% / -4,23 dB
                balance -0,01
        base volume: 65536 / 100% / 0,00 dB
        volume steps: 65537
        muted: no
        current latency: 5,63 ms
        max request: 6 KiB
        max rewind: 6 KiB
        monitor source: 2
        sample spec: s16le 2ch 44100Hz
        channel map: front-left,front-right
                     Stéréo
        used by: 0
        linked by: 1
        configured latency: 40,00 ms; range is 0,50 .. 2000,00 ms
        module: 20
        properties:
                device.description = "Sortie factice"
                device.class = "abstract"
                device.icon_name = "audio-card"

I don't really understand deeply how the sound system works in Debian, but I think I understand that my normal user does not have the right to access the Intel chip. From here, I'm stuck and can't really find out what to do...

EDIT: I managed to get back the sound by disabling auto-mute in alsamixer. This setting was somehow changed during the system update. But after a reboot the problem came back, even though auto-mute was still off. Thanks to dirkt's answer, I think I have found the source of this issue: when running aplay -L as the normal user and as root, I noticed some differences:

$ aplay -L
default
    Playback/recording through the PulseAudio sound server
sysdefault:CARD=PCH
    HDA Intel PCH, ALC887-VD Analog
    Default Audio Device

$ sudo aplay -L
default:CARD=PCH
    HDA Intel PCH, ALC887-VD Analog
    Default Audio Device
sysdefault:CARD=PCH
    HDA Intel PCH, ALC887-VD Analog
    Default Audio Device

It seems that the default card is not the same when running as a simple user. I could get some sound when selecting the sysdefault card instead of default as a normal user:

$ aplay -D sysdefault sound.wav

But now I'm a bit stuck. I think I've pinpointed the source of the issue, but cannot find out how to solve it...
I solved this issue for me by running: sudo apt-get remove --purge timidity Please make sure this will only remove timidity and timidity-daemon . As far as I understand the daemon grabs some resources on startup and prevents the other services from finding the sound cards. I think timidity has something to do with MIDI files so make sure you do not need this when uninstalling the packages and make sure that you know what you uninstall. Hope this resolves the issue for some users. You might want to reinstall timidity in the future when the above mentioned issues are resolved.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/452079", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/297181/" ] }
452,093
I have two .csv files that I need to match based on column 1. The two file structures look like this.

FILE1
gopAga1_00004004-RA,1122.825534, -2.497919969, 0.411529843
gopAga1_00010932-RA,440.485381, 1.769511316, 0.312853434
gopAga1_00007012-RA, 13.37565185, -1.973108929, 0.380227982
etc...

FILE2
gopAga1_00004004-RA, ENSACAP00000013845
gopAga1_00009937-RA, ENSACAP00000000905
gopAga1_00010932-RA, ENSACAP00000003279
gopAga1_00000875-RA, ENSACAP00000000296
gopAga1_00010837-RA, ENSACAP00000011919
gopAga1_00007012-RA, ENSACAP00000012682
gopAga1_00017831-RA, ENSACAP00000016147
gopAga1_00005588-RA, ENSACAP00000011117
etc..

This is my current command that I am running using join (formatted from what I have read in other threads here):

join -1 1 -2 1 -t , -a 1 -e "NA" -o "2.2,1.1,1.2,1.3" <(sort -k 1 healthy_vs_unhealthy_de.csv) <(sort RBH.csv) > output.txt

However, every time I run this it only writes the first row to the output. Does anyone know why my code is running like this and not actually merging the two files based on the GOP ID?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/452093", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/297189/" ] }
452,123
Trying to trouble-shoot this error which pertains to microcode, my card from lspci shows:

Network controller: Intel Corporation Centrino Advanced-N 6205 [Taylor Peak] (rev 34)

system.log shows:

iwlwifi: Detected Intel(R) Centrino(R) Advanced-N 6205 AGN, REV=0xB0

When I run modinfo, I get (a lot of stuff cut off):

description: Intel(R) Wireless WiFi driver for Linux
## lots of stuff...
firmware: iwlwifi-6000g2b-6.ucode
firmware: iwlwifi-6000g2a-6.ucode
firmware: iwlwifi-6050-5.ucode
firmware: iwlwifi-6000-6.ucode
## lots of stuff...
## NO iwlwifi-6205-#.ucode
srcversion: 6BA065AF04F0DFDB8D91DBF

But none of those show 6205. Which .ucode is system.log referring to when it says:

iwlwifi: loaded firmware version 18.168.6.1 op_mode iwldvm

There are two that I could assume:

firmware: iwlwifi-6050-5.ucode
firmware: iwlwifi-6000-6.ucode

But is there a way to know for certain?
This is documented at the Linux Wireless wiki:

------------------------------------------------------------------------------
Device            | Kernel   | Module  | Firmware                             |
----------------- | -------- | ------- | ------------------------------------ |
Intel® Centrino®  | 2.6.36+  | iwldvm  | 17.168.5.1, 17.168.5.2 or 17.168.5.3 |
Advanced-N 6205   | -------- |         | ------------------------------------ |
                  | 3.2+     |         | 18.168.6.1                           |
------------------------------------------------------------------------------

This table reflects the minimal version of each firmware hosted at the linux-firmware.git repository that is known to work with that module and kernel version. In your specific case, the file is iwlwifi-6000g2a-6.ucode.

modinfo will show all firmware files that can be used by that module (and that could support other hardware). The wireless wiki is a pretty reliable way to get information about hardware and firmware.

dmesg | grep firmware could help you probe what firmware file is being used.

Example 1 - firmware was loaded correctly:

[   12.860701] iwlagn 0000:03:00.0: firmware: requesting lbm-iwlwifi-5000-1.ucode
[   12.949384] iwlagn 0000:03:00.0: loaded firmware version 8.24.2.12

Example 2 - missing firmware file d101m_ucode.bin:

[   77.481635] e100 0000:00:07.0: firmware: requesting e100/d101m_ucode.bin
[  137.473940] e100: eth0: e100_request_firmware: Failed to load firmware "e100/d101m_ucode.bin": -2
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/452123", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3285/" ] }
452,148
I have a csv file with 17 columns and a million rows. I want to search for a specific string in the 16th column and replace all the instances of that string with another string. Since the rest of my program uses bash script, I thought of using awk instead of a Python search & replace. My current OS is Rhel6. The following is a sample of my data:

SUBSCRIBER_ID|ACCOUNT_CATEGORY|ACCOUNT_ACTIVATION_DATE|PACKAGE_NAME|PACKAGE_TYPE|DURATION|ACTIVE_DATE|INACTIVE_DATE|STB_NO|PRIMARY_SECONDARY|MODEL_TYPE|VC_NO|MULTIROOM|STB_TYPE|IPKG|SERVICE_STATE|CURRENT_STATUS
1001098068|ResidentialRegular|01/20/2007|Annual package 199 May17 pack|Basic Package|Annual|08/28/2017||027445053518|Primary|Pace - 31|000223871682|Yes|AMP|Package 199 pack|Market1|Active
1001098068|ResidentialRegular|01/20/2007|Annual Pack|Premium Package|Annual|08/28/2017||027445053518|Primary|Pace - 31|000223871682|Yes|AMP|English Movies pack|Market1|Active
1001098068|ResidentialRegular|01/20/2007|Annual SingleUnit Jun17 Pack|Secondary Pack|Annual|08/28/2017||032089364015|Secondary|Kaon|000017213968|Yes|AMP|SingleUnit|Market2|Active

In this the 16th column is the Market, wherein I want to change Market1 to MarketPrime. The name of the file is marketinfo_2018-06-26.csv. I tried the following code:

awk -F '| +' '{gsub("Market1","MarketPrime",$16); print}' OFS="|" marketinfo_2018-06-26.csv > marketinfo_2018-06-26.csv

This runs without any output, but the string Market1 still remains.
awk -F '|' -v OFS='|' '$16 == "Market1" { $16 = "MarketPrime" }1' file.csv >new-file.csv The only real issue in your code is that you set the input file separator to not just | but to spaces as well. This will make the spaces count as field separators in the data and it will be incredibly hard to figure out what the correct field number is (since some fields contain a variable number of spaces). You also can not redirect into the same filename as you use to read from. Doing so would cause the shell to first truncate (empty) the output file, and your awk program would have no data to read. Your code does a regular expression replacement. This is ok, but you need to be aware that if the 16th field happens to be something like Market12 or TheMarket1 , it would trigger the substitution due to the missing anchor points. It would be safer to use ^Market1$ as the expression to replace, or to use a string comparison. The awk command above uses only | as a field separator and then does a string comparison with the 16th field. If that field is Market1 , it is set to MarketPrime . The trailing 1 at the end of the awk code causes every record (modified or not) to be printed.
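A quick way to confirm the substitution took, under the assumption that the file layout matches the sample above, is to tally the 16th column of the new file:

# count the distinct values in column 16 (skipping the header line)
awk -F'|' 'NR > 1 { print $16 }' new-file.csv | sort | uniq -c

Market1 should no longer appear in the counts.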
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/452148", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/247049/" ] }
452,231
I need to write a small shell script that will execute a python script and get the results. When I try to run it this way it works:

#!/bin/sh
/usr/bin/python /etc/scripts/backup.py
result=$?
if [ $result -gt 0 ]; then
  echo 'PROBLEM';
else
  echo 'OK';
fi

But if I try to do it this way, it fails:

#!/bin/sh
if [ $(/usr/bin/python /etc/scripts/backup.py) -gt 0 ]; then
  echo 'PROBLEM';
else
  echo 'OK';
fi

It returns:

lab-1:/etc/scripts# ./audit_test_wrapper.sh
sh: 0: unknown operand
OK

And the results are wrong. It should be printing "PROBLEM" instead of OK.

What I've tried so far:

if [ $(/usr/bin/python /etc/scripts/backup.py) -ne 0 ];

returns:

lab-1:/etc/scripts# ./audit_test_wrapper.sh
sh: 0: unknown operand
OK

And I also tried this:

if [ "$(/usr/bin/python /etc/scripts/backup.py)" -gt 0 ];

It returns:

lab-1:/etc/scripts# ./audit_test_wrapper.sh
sh: out of range
OK

Can someone point me in the right direction? Thanks.
A command does not return its exit status ($?) on standard output. A command substitution ($(...)) captures the standard output. Instead, just do

#!/bin/sh
if ! /usr/bin/python /etc/scripts/backup.py; then
    echo 'PROBLEM'
else
    echo 'OK'
fi

This would pick the first branch and print PROBLEM if the python script exited with a non-zero exit status. You may read the if statement as "If the script did not succeed". If you want to get rid of the !:

#!/bin/sh
if /usr/bin/python /etc/scripts/backup.py; then
    echo 'OK'
else
    echo 'PROBLEM'
fi

You may read the if statement as "If the script did succeed".

The error that you get comes from the fact that the script doesn't output anything, which means that the command substitution will be empty. Since it's unquoted, the command that the shell will try to execute will look like

if [ -gt 0 ]

This is a syntax error. When quoting the command substitution, you essentially try to run

if [ "" -gt 0 ]

and the shell is not able to compare the empty string as an integer.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/452231", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52684/" ] }
452,234
Following input file:

#Report Nr. 2343215
#Errors 3243
#Date: (Timestampt)
#Informaiton
#
# Headers
# Specs
DLSLWD 0 0 0 0 Jun 22 01:51:16PM 2018
#List of Objects
#
# Headers
# Paths
Files not found /var/xxxxx
Files not found /etc/xxxxx
Files not found /mnt/xxxxx
Files not found /safd/xxxxx
#
#Reports
#
Error-Number 123
Error Number 12345
#

What I need is an awk that pipes the "List of Objects" into a new file:

#List of Objects
#
# Headers
# Paths
Files not found /var/xxxxx
Files not found /etc/xxxxx
Files not found /mnt/xxxxx
Files not found /safd/xxxxx
#

And the "Reports" into a different file:

#Reports
#
Error-Number 123
Error Number 12345
#

It's a match for #List of Objects + 3 lines until the "first" #. Same for the Reports: match #Reports + 1 line until the "first" #. At first I tried something like:

awk '/#List of Objects/,/#Reports/'

for the list of objects, followed by:

awk '/#Reports/,0'

to get the data from #Reports until EOF. But because #Reports and #List of Objects are both OPTIONAL and not in every input file, I can't use #Reports as the "END-Pattern". So I have to match on the # but ignore the first X occurrences of # after the matching pattern.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/452234", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/297313/" ] }
452,249
I'm trying to run centos+systemd Docker container as described here https://hub.docker.com/_/centos/ . docker build --rm -t local/c7-systemd c7-systemd Dockerfile: FROM centos:7 ENV container docker RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \ systemd-tmpfiles-setup.service ] || rm -f $i; done); \ rm -f /lib/systemd/system/multi-user.target.wants/*;\ rm -f /etc/systemd/system/*.wants/*;\ rm -f /lib/systemd/system/local-fs.target.wants/*; \ rm -f /lib/systemd/system/sockets.target.wants/*udev*; \ rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \ rm -f /lib/systemd/system/basic.target.wants/*;\ rm -f /lib/systemd/system/anaconda.target.wants/*; VOLUME [ "/sys/fs/cgroup" ] CMD ["/usr/sbin/init"] docker build --rm -t local/c7-systemd-httpd c7-systemd-httpd Dockerfile: FROM local/c7-systemd RUN echo "myproxy" >> /etc/yum.conf RUN yum -y install httpd; yum clean all; systemctl enable httpd.service EXPOSE 80 CMD ["/usr/sbin/init"] docker run -ti --cap-add SYS_ADMIN --security-opt seccomp:unconfined -v /sys/fs/cgroup:/sys/fs/cgroup:ro -p 80:80 local/c7-systemd /bin/bash I have also tried with --privileged but every time I get this: [root@e29ecfb082d8 /]# systemctl status Failed to get D-Bus connection: Operation not permitted I'm running it in Cygwin, Docker version 18.03.1-ce, build 9ee9f40 (Docker for Windows). Could you please say if there are any ways to get a working centos7+systemd container with this configuration?
I got a working container with https://hub.docker.com/r/centos/systemd/ docker build --rm --no-cache -t c7-systemd-off c7-systemd-off Dockerfile: FROM centos/systemd RUN echo "myproxy" >> /etc/yum.conf RUN yum -y install httpd; yum clean all; systemctl enable httpd.service EXPOSE 80 CMD ["/usr/sbin/init"] docker run --privileged --name c7 -v /sys/fs/cgroup:/sys/fs/cgroup:ro -p 80:80 -d c7-systemd-off docker exec -it c7 /bin/bash
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/452249", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/297323/" ] }
452,320
I am just starting to learn how to use rsync, as I am trying to copy files from one server to another. I am using the following command:

rsync -avzP [email protected]:/public_html/abc/ /www/abc

After entering the other server's password, I then get the following message:

stdin: is not a tty
receiving incremental file list
rsync: change_dir "/public_html/abc" failed: No such file or directory (2)

sent 8 bytes  received 101 bytes  8.72 bytes/sec
total size is 0  speedup is 0.00
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1655) [Receiver=3.1.1]

However, directory abc does exist, I can browse it etc. There are no spaces in the name. Any ideas what this may be caused by?
You need to add the connection details, e.g. for SSH use --rsh=ssh. Try:

rsync -avzP --rsh=ssh [email protected]:/public_html/abc/ /www/abc

And make sure the paths are correct. Are these paths absolute or relative: /public_html/abc/ and /www/abc?
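One quick way to check the remote side, assuming the same user and host as above, is to list both the absolute path and the path relative to the login home directory:

ssh [email protected] 'ls -d /public_html/abc public_html/abc'

Whichever form exists is the one to use in the rsync source argument (a leading / means the path starts at the filesystem root, not at the home directory).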
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/452320", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/297388/" ] }
452,321
I was reading through all the things that are run during bootup and have seen that after mounting the rootfs, /sbin/fsck.ext4 is run and after that systemd is run. I was wondering where or how fsck is run, because I searched for it in the kernel source code and couldn't find it, and it's not part of the init scripts. So what runs fsck? The distro I am using is Mint. EDIT: In this image it is shown that fsck is run after mounting the root file system.
Edit 2: checked sources

I've found the ubuntu initramfs-tools sources. Here you can see clearly that the Begin: "Mounting root file system" message is printed first, but in the mount_root function fsck is run before the actual mounting. I have omitted some non-relevant code, just to indicate the order. (If you inspect the linked sources you will also find the other scripts reported in the screenshot.)

/init line 256

log_begin_msg "Mounting root file system"
# Always load local and nfs (since these might be needed for /etc or
# /usr, irrespective of the boot script used to mount the rootfs).
. /scripts/local
. /scripts/nfs
. /scripts/${BOOT}
parse_numeric ${ROOT}
maybe_break mountroot
mount_top
mount_premount
mountroot
log_end_msg

/scripts/local @line 244

mountroot()
{
        local_mount_root
}

/scripts/local @line 131

local_mount_root()
{
        # Some code omitted

        # FIXME This has no error checking
        [ -n "${FSTYPE}" ] && modprobe ${FSTYPE}

        checkfs ${ROOT} root "${FSTYPE}"

        # FIXME This has no error checking
        # Mount root
        mount ${roflag} ${FSTYPE:+-t ${FSTYPE} }${ROOTFLAGS} ${ROOT} ${rootmnt}
        mountroot_status="$?"
        if [ "$LOOP" ]; then
                if [ "$mountroot_status" != 0 ]; then
                        if [ ${FSTYPE} = ntfs ] || [ ${FSTYPE} = vfat ]; then
                                panic "<Error message omitted>"
                        fi
                fi

                mkdir -p /host
                mount -o move ${rootmnt} /host

        # Some code omitted
}

Original answer, retained for historical reasons

Two options:

Root is mounted read-only during boot and the init implementation is running fsck. Systemd is the init implementation on Mint, and since you already checked if it exists there, this option does not apply.

/sbin/fsck.ext4 is run in the "early user space", set up by an initramfs. This is most probably the case on your system.

Systemd

Even if you noticed that /sbin/fsck.ext4 was run before systemd, I want to elaborate a bit. Systemd is perfectly capable of running fsck itself, on a read-only mounted filesystem. See the [email protected] documentation. Most probably this service is not enabled by default in Mint, since it would be redundant with the early user space one.

Initramfs

I don't know which implementation of an initramfs Mint is running, but I will use dracut as an example (used in Debian, openSuse and more). It states the following in its mount preparation documentation:

When the root file system finally becomes visible: Any maintenance tasks which cannot run on a mounted root file system are done. The root file system is mounted read-only. Any processes which must continue running (such as the rd.splash screen helper and its command FIFO) are hoisted into the newly-mounted root file system.

And maintenance tasks include fsck. As further evidence, there is a dracut cmdline option to switch off fsck:

rd.skipfsck  skip fsck for rootfs and /usr. If you're mounting /usr read-only and the init system performs fsck before remount, you might want to use this option to avoid duplication

Implementations of initramfs

A dynamic (udev based) and flexible initramfs can be implemented using the systemd infrastructure. Dracut is such an implementation, and there are probably distros out there that want to write their own. Another option would be a script based initramfs. In such a case busybox ash is used as the scripting shell, maybe even replacing udev with mdev, or maybe just completely static. I found some people being dropped to a busybox shell due to some fsck error in Mint, so this implementation could apply to Mint.
If you really want to know for sure, try to decompress the initramfs file in /boot and see what's in there. It might also be possible to see it mounted under /initramfs .
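On Debian-based systems such as Mint, initramfs-tools ships a helper that lists the archive contents without unpacking it by hand; a quick check (assuming the usual /boot/initrd.img naming) would be:

lsinitramfs /boot/initrd.img-$(uname -r) | grep -i fsck

If fsck and the fsck.ext4 helper show up in that listing, the early user space is indeed where the check is run.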
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/452321", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/297180/" ] }
452,368
I have a little problem. I have a brand new Red Hat Linux Server and I installed Docker CE for CentOS/Red Hat with the official repositories for Docker CE. Now I see that docker creates the container under /var/lib/docker , but my problem is that I use an extra partition for my data under /data/docker . How can I change the default root directory for Docker in CentOS/Red Hat? I tried a few HOWTOs but I get the same problem. I can’t find the configuration. For example, I search for the following files: /etc/default/docker (I think only for Debian/Ubuntu) /etc/systemd/system/docker.service.d/override.conf (I can't find on my system) /etc/docker/daemon.json (I can't find on my system) If I get the docker info I see: Docker Root Dir: /var/lib/docker
Stop all running docker containers and then the docker daemon. Move the /var/lib/docker directory to the place where you want to have this data. For you it would be:

mv /var/lib/docker /data/

and then create a symlink for this docker directory in the /var/lib path:

ln -s /data/docker /var/lib/docker

Start the docker daemon and containers.
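An alternative worth considering, since the question already mentions /etc/docker/daemon.json: recent Docker releases let you point the daemon at a different directory with the data-root option instead of using a symlink. This is a sketch under the assumption that your Docker version supports that key; create the file if it does not exist, then restart the daemon:

# /etc/docker/daemon.json
{
  "data-root": "/data/docker"
}

systemctl restart docker

Existing images and containers would still need to be moved (or re-pulled) into the new location.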
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/452368", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/297426/" ] }
452,391
I'm a bit puzzled why this is not working as intended. My goal is to map the caps lock key to control. I'm using Debian. For this I'm using the following command:

/usr/bin/setxkbmap -layout "$(setxkbmap -print | awk -F + '/xkb_symbols/ {print $2}')" -option ctrl:nocaps

which works perfectly fine if I execute it via the terminal. However, I want this to be done at startup or login, rather than always executing it manually. I've tried adding this command to the automatic startup session applications in XFCE as well as putting the command in my ~/.profile. However, neither option seems to work. I still have to execute it manually (after which it is correctly mapped). What am I doing wrong?
The reason that the setxkbmap command didn't work after adding it to ~/.profile is that this file is read by your shell (which is probably bash) only when a login shell is started. In X, terminal emulators don't start login shells. You could add setxkbmap to your ~/.bashrc if you use Bash, but there is a better way available on Debian systems: modify the XKBOPTIONS setting in your /etc/default/keyboard, for example:

root@debian:/home/ja# cat /etc/default/keyboard
# KEYBOARD CONFIGURATION FILE
# Consult the keyboard(5) manual page.
XKBMODEL="pc105"
XKBLAYOUT="us"
XKBVARIANT=""
XKBOPTIONS="ctrl:nocaps"
BACKSPACE="guess"

Now run this command as described in man 7 keyboard:

udevadm trigger --subsystem-match=input --action=change

You don't even have to restart lightdm. The next time lightdm is started, the settings in /etc/default/keyboard will be applied automatically. I've just tested it on my Debian 9.4.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/452391", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18180/" ] }
452,393
I have faced a problem in changing the permissions for a specific group of users on my system. Suppose I have four users u1, v1 and u2, v2 and two groups g1 and g2; u1, v1 are in group g1 and u2, v2 are in group g2, respectively. Now I want a file "foo.bar" to be accessible (rwx) only by u1 in group g1. I typed the following for this purpose:

sudo chown u1:g1 foo.bar
sudo chmod 700 foo.bar

but I have the following questions:

First: what is the use of giving a file ownership to a group when we can restrict other users using the chmod command (the third field in file permissions rwx-rwx-rwx)? What is the application of chown here? Let me give an example for this question: I only want "foo.bar" to be accessible to g1 users and not to any other users on the system. Can it be done just by using the chown command? If not, what is its application?

Second: how can we give specific permissions to a specific group? For example, I want the g1 group to have only read permission on any files owned by it, so that when I type the following in the terminal

sudo chown :g1 foo.bar

without any more commands, the u1 and v1 users have only read permission on this file. Is this possible only with chown? I hope I made my points clear to you all.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/452393", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/297445/" ] }
452,483
I have a file of size 7GB. It has two date-time values per row, and I want to use awk to get the time difference between those two DateTimes. Below is how my file looks:

A B C D E
18/06/28 09:19:07 295 141536 18-06-28 09:17:47
18/06/28 09:20:07 268 1160 18-06-28 09:18:58
18/06/28 09:21:07 317 1454 18-06-28 09:19:47
18/06/28 09:22:07 275 1491 18-06-28 09:20:59
18/06/28 09:23:07 320 1870 18-06-28 09:21:07
18/06/28 09:24:07 310 1869 18-06-28 09:22:30
18/06/28 09:25:07 150 693 18-06-28 09:23:28
18/06/28 09:26:07 414 2227 18-06-28 09:24:34

I want the difference between (AB) and (E). I tried this:

cat filename | awk -F " " '{print date -d ($1$2)-($5)}'

The output should be the time difference between the two datetimes. For example, for the first row the difference will be 1min 20sec.
Using GNU awk:

gawk '
  function dt2epoch(date, time, timestr) {
    timestr = "20" substr(date,1,2) " " substr(date,4,2) " " substr(date,7,2) \
              " " substr(time,1,2) " " substr(time,4,2) " " substr(time,7,2)
    return mktime(timestr)
  }
  function epoch2hms(t) { return strftime("%H:%M:%S", t, 1) }
  function abs(n) {return n<0 ? -1*n : n}
  NR == 1 {next}
  { print epoch2hms(abs(dt2epoch($5,$6) - dt2epoch($1,$2))) }' file

outputs

00:01:20
00:01:09
00:01:20
00:01:08
00:02:00
00:01:37
00:01:39
00:01:33

With perl, I'd use the DateTime ecosystem:

perl -MDateTime::Format::Strptime -lane '
  BEGIN {$f = DateTime::Format::Strptime->new(pattern => "%y-%m-%d %H:%M:%S")}
  next if $. == 1;
  $F[0] =~ s{/}{-}g;
  $t1 = $f->parse_datetime("$F[0] $F[1]");
  $t2 = $f->parse_datetime("$F[4] $F[5]");
  $d = $t1->subtract_datetime($t2);
  printf "%02d:%02d:%02d\n", $d->hours, $d->minutes, $d->seconds;' file

A much faster perl version, that does not require any non-core modules:

perl -MTime::Piece -lane '
  next if $. == 1;
  $t1 = Time::Piece->strptime("$F[0] $F[1]", "%y/%m/%d %H:%M:%S");
  $t2 = Time::Piece->strptime("$F[4] $F[5]", "%y-%m-%d %H:%M:%S");
  $diff = gmtime(abs($t1->epoch - $t2->epoch));
  print $diff->hms;' file

or, alternate output:

$ perl -MTime::Piece -lane '
  next if $. == 1;
  $t1 = Time::Piece->strptime("$F[0] $F[1]", "%y/%m/%d %H:%M:%S");
  $t2 = Time::Piece->strptime("$F[4] $F[5]", "%y-%m-%d %H:%M:%S");
  print abs($t1 - $t2)->pretty;' file
1 minutes, 20 seconds
1 minutes, 9 seconds
1 minutes, 20 seconds
1 minutes, 8 seconds
2 minutes, 0 seconds
1 minutes, 37 seconds
1 minutes, 39 seconds
1 minutes, 33 seconds
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/452483", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/148738/" ] }
452,487
I have some data where the 4th column will either be frz or - . I would like to find all lines where the 4th column is frz only if the 4th column on the next line is - and then print both lines. Sample input:

2018-04-09T14:15:23.366Z 7 multi - uuid1 uuid2 uuid3 -
2018-04-09T14:15:23.978Z 8 multi frz uuid1 uuid3 - -
2018-04-09T14:29:35.826Z 8 multi frz uuid1 uuid3 uuid2 -
2018-04-09T17:19:01.901Z 8 multi frz uuid1 uuid3 uuid2 -
2018-06-03T22:12:38.688Z 8 multi - uuid1 uuid3 uuid2 -
2018-06-28T00:35:54.338Z 9 multi - uuid1 uuid2 - -
2018-06-28T00:47:51.679Z 9 multi - uuid1 uuid2 uuid3 -
2018-06-28T00:47:51.720Z 10 multi - uuid1 uuid3 - -
2018-06-28T00:47:58.863Z 10 multi - uuid1 uuid3 uuid2 -
2018-06-28T16:29:01.624Z 10 multi frz uuid1 uuid3 uuid2 -
2018-06-28T17:29:01.624Z 10 multi - uuid1 uuid3 uuid2 -

Expected output:

2018-04-09T17:19:01.901Z 8 multi frz uuid1 uuid3 uuid2 -
2018-06-03T22:12:38.688Z 8 multi - uuid1 uuid3 uuid2 -
2018-06-28T16:29:01.624Z 10 multi frz uuid1 uuid3 uuid2 -
2018-06-28T17:29:01.624Z 10 multi - uuid1 uuid3 uuid2 -

I've found a few awk commands to print the line after a match but I can't figure out how to match both lines and print both. What I currently have:

$ awk 'f{print;f=0} $4=="frz"{f=1}' input
2018-04-09T14:29:35.826Z 8 multi frz uuid1 uuid3 uuid2 -
2018-04-09T17:19:01.901Z 8 multi frz uuid1 uuid3 uuid2 -
2018-06-03T22:12:38.688Z 8 multi - uuid1 uuid3 uuid2 -
2018-06-28T17:29:01.624Z 10 multi - uuid1 uuid3 uuid2 -
How about: awk '$4=="-" && prev4=="frz" {print prevline; print} {prev4 = $4; prevline=$0}' file
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/452487", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237982/" ] }
452,569
I would like to ask if there is an out-of-the-box multicore equivalent for a '| sort | uniq -c | sort -n' command? I know that I can use the procedure below:

split -l5000000 data.tsv '_tmp';
ls -1 _tmp* | while read FILE; do sort $FILE -o $FILE & done;
sort -m _tmp* -o data.tsv.sorted

But it feels a bit overwhelming.
GNU sort has a --parallel flag: sort --parallel=8 data.tsv | uniq -c | sort --parallel=8 -n This would use eight concurrent processes/threads to do each of the two sorting steps. The uniq -c part will still be using a single process. As Stéphane Chazelas points out in comments, the GNU implementation of sort is already parallelised (it's using POSIX threads), so modifying the number of concurrent threads is only needed if you want it to use more or fewer threads than what you have cores. Note that the second sort will likely get much less data than the first, due to the uniq step, so it will be much quicker. You may also (possibly) improve sorting speed by playing around with --buffer-size=SIZE and --batch-size=NMERGE . See the sort manual. To further speed the sorting up, make sure that sort writes its temporary files to a fast filesystem (if you have several types of storage attached). You may do this by setting the TMPDIR environment variable to the path of writable directory on such a mountpoint (or use sort -T directory ).
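If the machine has memory and fast scratch space to spare, the buffer and temporary-directory options mentioned above can be combined with --parallel. A sketch, where /mnt/fast-tmp stands in for whatever fast, writable mount point is available:

sort --parallel=8 -S 40% -T /mnt/fast-tmp data.tsv | uniq -c | sort --parallel=8 -S 40% -n -T /mnt/fast-tmp

-S 40% (short for --buffer-size) lets each sort use up to 40% of RAM for its buffer before spilling to the temporary directory.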
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/452569", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/223835/" ] }
452,590
I'm a big fan of Terminator . By default, the preferences menu is accessible by a right click on the terminal. I just configured Terminator to do a copy paste on right click (similar like Putty). Unfortunately, that makes the preferences menu is not accessible anymore. What is the solution?
It's the (awkward) shift +middle-click combo that brings up Terminator context menu for me when putty_paste_style = True . This is in contrast to the docs, both the terminator_config manpage and the online manual , which state middle-click alone will bring up the Context Menu. $ terminator --versionterminator 1.91$ cat /etc/lsb-release DISTRIB_ID=UbuntuDISTRIB_RELEASE=18.04DISTRIB_CODENAME=bionicDISTRIB_DESCRIPTION="Ubuntu 18.04.1 LTS" The mentioned shift + F10 combo also works, but also stuffs garbage (;2~) onto the command line.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/452590", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15041/" ] }
452,602
I have a sequence file:

$ cat file
CACCGTTGCCAAACAATG
TTAGAAGCCTGTCAGCCT
CATTGCTCTCAGACCCAC
GATGTACGTCACATTAGA
ACACGGAATCTGCTTTTT
CAGAATTCCCAAAGATGG

I want to calculate the longest stretch of C+T. I could only count total C+T, but I want the longest stretch.

$ cat file | awk '{ print $0, gsub(/[cCtT]/,"",$1)}'
CACCGTTGCCAAACAATG 9
TTAGAAGCCTGTCAGCCT 10
CATTGCTCTCAGACCCAC 12
GATGTACGTCACATTAGA 8
ACACGGAATCTGCTTTTT 11
CAGAATTCCCAAAGATGG 7

The expected result would be to show the longest C+T stretch:

CACCGTTGCCAAACAATG 9 2
TTAGAAGCCTGTCAGCCT 10 3
CATTGCTCTCAGACCCAC 12 5
GATGTACGTCACATTAGA 8 2
ACACGGAATCTGCTTTTT 11 6
CAGAATTCCCAAAGATGG 7 5
FWIW here's a way to do it in perl, using max from List::Util:

$ perl -MList::Util=max -lpe '$_ .= " " . max 0, map length, /[CT]+/gi' file
CACCGTTGCCAAACAATG 2
TTAGAAGCCTGTCAGCCT 3
CATTGCTCTCAGACCCAC 5
GATGTACGTCACATTAGA 2
ACACGGAATCTGCTTTTT 6
CAGAATTCCCAAAGATGG 5
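Since the question asked about awk, here is one possible sketch of the same idea in plain awk (not from the original answer): split each line on anything that is not C or T and report the longest remaining piece.

awk '{
  n = split(toupper($1), runs, /[^CT]+/)   # pieces made only of C/T
  longest = 0
  for (i = 1; i <= n; i++)
    if (length(runs[i]) > longest) longest = length(runs[i])
  print $0, longest
}' file

This prints each sequence followed by the length of its longest C+T run.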
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/452602", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/229133/" ] }
452,620
Is moving a file via the mv command between two filesystems an atomic operation?
See the EXDEV error (in man 2 rename): EXDEV oldpath and newpath are not on the same mounted filesystem. (Linux permits a filesystem to be mounted at multiple points, but rename() does not work across different mount points, even if the same filesystem is mounted on both.) You can't move between file-systems with a system call, so what mv does is a user-space copy and delete, which is never atomic.
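You can watch this happen by tracing mv across two different filesystems; a sketch, with /mnt/other standing in for any mount point on another filesystem:

strace -e trace=rename,renameat,renameat2 mv somefile /mnt/other/ 2>&1 | grep EXDEV

The trace should show the rename call failing with EXDEV, after which mv falls back to copying the file and removing the source.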
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/452620", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/297603/" ] }
452,629
I had a folder /acme , that contained data. I found out that I was supposed to have already mounted /dev/centos/lv_acme to /acme . I did a minimum amount of research on the web, and it appeared that mounting did not erase data. (Though I now assume I misunderstood what I read.) I executed the command mount /dev/centos/lv_acme /acme As I'm sure you've figured by now, /acme no longer contains my data. Is there any way of recovering the data that was in /acme ?
Easy:

umount /acme

Your original /acme directory is simply "hidden" under the mount point. If nothing prevents you from unmounting the dir, you can umount it, copy the data elsewhere and remount it.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/452629", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/222436/" ] }
452,634
When using find, how can I drop the original filename extension (i.e. .pdf) from the second pair of -exec braces ({})? For example:

find ~/Documents -regex 'LOGIC.*\.pdf' -exec pdf2svg {} {}.svg \;

Input filename: ~/Documents/LOGIC-P_OR_Q.pdf
Output filename: ~/Documents/LOGIC-P_OR_Q.pdf.svg
Desired filename: ~/Documents/LOGIC-P_OR_Q.svg
You can use an "in-line" shell script, and parameter expansion : -exec sh -c 'pdf2svg "$1" "${1%.pdf}.svg"' sh {} \; or (more efficiently, if your find supports it) -exec sh -c 'for f; do pdf2svg "$f" "${f%.pdf}.svg"; done' sh {} +
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/452634", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/108199/" ] }
452,655
I'm running Ubuntu 18.04 on an MSI GE63 Stealth 8RE, with an NVIDIA GTX 1060. There's a good amount of screen tearing when watching videos, and I found several sources online telling me that creating a file in /etc/modprobe.d/ with options nvidia_drm modeset=1 would resolve the issue. Lo and behold, it did! No more screen tearing! It fixed the Prime Synchronization issues. However , for some reason, I was no longer able to connect to my HDMI monitor. The output of xrandr --query is as follows: Screen 0: minimum 8 x 8, current 1920 x 1080, maximum 32767 x 32767eDP-1-1 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 344mm x 194mm 1920x1080 60.02*+ 60.01 59.97 59.96 59.93 1680x1050 59.95 59.88 1600x1024 60.17 1400x1050 59.98 1600x900 59.99 59.94 59.95 59.82 1280x1024 60.02 1440x900 59.89 1400x900 59.96 59.88 1280x960 60.00 1440x810 60.00 59.97 1368x768 59.88 59.85 1360x768 59.80 59.96 1280x800 59.99 59.97 59.81 59.91 1152x864 60.00 1280x720 60.00 59.99 59.86 59.74 1024x768 60.04 60.00 960x720 60.00 928x696 60.05 896x672 60.01 1024x576 59.95 59.96 59.90 59.82 960x600 59.93 60.00 960x540 59.96 59.99 59.63 59.82 800x600 60.00 60.32 56.25 840x525 60.01 59.88 864x486 59.92 59.57 800x512 60.17 700x525 59.98 800x450 59.95 59.82 640x512 60.02 720x450 59.89 700x450 59.96 59.88 640x480 60.00 59.94 720x405 59.51 58.99 684x384 59.88 59.85 680x384 59.80 59.96 640x400 59.88 59.98 576x432 60.06 640x360 59.86 59.83 59.84 59.32 512x384 60.00 512x288 60.00 59.92 480x270 59.63 59.82 400x300 60.32 56.34 432x243 59.92 59.57 320x240 60.05 360x202 59.51 59.13 320x180 59.84 59.32 DP-1-1 disconnected (normal left inverted right x axis y axis)HDMI-1-1 disconnected (normal left inverted right x axis y axis) I'd like to not have screen tearing, but I'd also like to be able to use my HDMI port. Does anyone have a suggestion as to what I can do to resolve this issue?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/452655", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/169049/" ] }
452,663
I'm migrating to Linux, and I need to convert the following Windows cmd command:

fc file1.txt file2.txt | find /i "no se han encontrado diferencias" > nul && set equal=yes

I think fc can be replaced by diff or comm, and find with grep, but I don't know how to do the && part; maybe an if statement...
Taking a guess as to what those Windows commands do, I'd say the equivalent in a POSIX sh script would be: equal=nocmp -s file1 file2 && equal=yes which would set the equal variable to yes if the two files can be read and have identical content (byte-to-byte). As an alternative to cmp -s , on some systems including Linux-based ones, you can use diff -q . diff -q ( q for quiet ), contrary to most cmp -s ( s for silent ) would report an error message if any of the files could not be read. While the GNU implementations of diff and cmp both first check to see if the two files are paths to the same file (including as hard or symbolic links one of the other) or are of different sizes to save having to read them, the busybox implementation of cmp does not while busybox diff does. So on those systems using busybox , you may prefer diff -q for performance reason.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/452663", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/258555/" ] }
452,673
The diff command compares two files to see if there are any differences. Can the same be used to compare two zip files, i.e. whether there is any difference in the data, like counts etc., in the individual files inside the zipped files?
You will have to unzip them (if only in memory) to compare the two. A cool way I have seen to do this with diff is: diff -y <(unzip -l file1.zip) <(unzip -l file2.zip) That will show you if there are any files contained in one and not the other
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/452673", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/292156/" ] }
452,723
Let's suppose I've declared the following variables:

$ var='$test'
$ test="my string"

If I print their contents I see the following:

$ echo $var
$test
$ echo $test
my string

I'd like to find a way to print the content of the content of $var (which is the content of $test). So I tried to do the following:

$ echo $(echo $var)
$test

But here the result is $test and not "my string"... Is it possible to print the content of the content of variables using bash?
You can accomplish this using bash's indirect variable expansion (as long as it's okay for you to leave out the $ from your reference variable):

$ var=test
$ test="my string"
$ echo "$var"
test
$ echo "${!var}"
my string

See 3.5.3 Shell Parameter Expansion in the Bash manual.
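If you are on bash 4.3 or newer, namerefs are another option worth knowing; this sketch assumes that bash version and, as above, keeps the reference variable without the $:

$ declare -n ref=test     # ref becomes a name reference to test
$ test="my string"
$ echo "$ref"
my string

Assignments to $ref would also write through to test, which indirect expansion with ${!var} cannot do.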
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/452723", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/103357/" ] }
452,757
As most of you have done many times, it's convenient to view long text using less : some_command | less Now its stdin is connected to a pipe (FIFO). How can it still read commands like up/down/quit?
As mentioned by William Pursell , less reads the user’s keystrokes from the terminal. It explicitly opens /dev/tty , the controlling terminal; that gives it a file descriptor, separate from standard input, from which it can read the user’s interactive input. It can simultaneously read data to display from its standard input if necessary. (It could also write directly to the terminal if necessary.) You can see this happen by running some_command | strace -o less.trace -e open,read,write less Move around the input, exit less , and look at the contents of less.trace : you’ll see it open /dev/tty , and read from both file descriptor 0 and whichever one was returned when it opened /dev/tty (likely 3). This is common practice for programs wishing to ensure they’re reading from and writing to the terminal. One example is SSH, e.g. when it asks for a password or passphrase. As explained by schily , if /dev/tty can’t be opened, less will read from its standard error (file descriptor 2). less ’s use of /dev/tty was introduced in version 177, released on April 2, 1991. If you try running cat /dev/tty | less , as suggested by Hagen von Eitzen , less will succeed in opening /dev/tty but won’t get any input from it until cat closes it. So you’ll see the screen blank, and nothing else until you press Ctrl C to kill cat (or kill it in some other way); then less will show whatever you typed while cat was running, and allow you to control it.
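less is not special here; any program can read the pipe on standard input and the keyboard from /dev/tty at the same time. A minimal shell sketch of the same idea (it needs a real terminal to run):

printf 'one\ntwo\n' | while read -r line; do
    printf 'displaying: %s  (press Enter for the next line)\n' "$line" > /dev/tty
    read -r dummy < /dev/tty
done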
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/452757", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/211239/" ] }
452,809
According to this answer by schily , less reads navigation commands from stderr if it's not able to open /dev/tty . This seems puzzling, since I've never seen anything write to another program's stderr stream, and I don't know how I would even accomplish that. What is the purpose of stderr being open for both reading and writing? And if this is useful, how do I make use of it on modern systems? (Is there some arcane syntax to pipe something into stderr instead of stdin, for instance?)
I was surprised at first. However, after reading the answers and doing a little investigation, it seems simple. So here is what I have found (in the end there was no surprise). Before redirection, stdin, stdout, and stderr are, as expected, connected to the same device.

#ctrl-alt-delor:~$
#↳ ll /dev/std*
lrwxrwxrwx 1 root root 15 Jun 3 20:58 /dev/stderr -> /proc/self/fd/2
lrwxrwxrwx 1 root root 15 Jun 3 20:58 /dev/stdin -> /proc/self/fd/0
lrwxrwxrwx 1 root root 15 Jun 3 20:58 /dev/stdout -> /proc/self/fd/1
#ctrl-alt-delor:~$
#↳ ll /proc/self/fd/*
lrwx------ 1 richard richard 64 Jun 30 19:14 /proc/self/fd/0 -> /dev/pts/12
lrwx------ 1 richard richard 64 Jun 30 19:14 /proc/self/fd/1 -> /dev/pts/12
lrwx------ 1 richard richard 64 Jun 30 19:14 /proc/self/fd/2 -> /dev/pts/12

Therefore, after most redirections (that is, if stderr is not redirected), stderr is still connected to the terminal, and it can therefore be read to get keyboard input. The only things stopping the files being used in the unexpected direction are convention, and the fact that pipes are unidirectional. Another example, try:

cat | less

This goes wrong after a page, when less tries to read the terminal (this is not a surprise, as cat is also reading the terminal). /dev/tty is more mysterious; it is not a link into /proc/self.

#ctrl-alt-delor:~$
#↳ ll /dev/tty
crw-rw-rw- 1 root tty 5, 0 Jun 29 09:18 /dev/tty

See "What relations are between my current controlling terminal and `/dev/tty`?" for an explanation. Thanks to @StephenKitt for the link.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/452809", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88106/" ] }
452,865
Given that zsh can clobber all files given the command: >* I'm thinking that setting the option noclobber would be a good idea. I can always use >| file if I want to use the default clobber behaviour in both bash and zsh. (zsh also allows the alternative syntax >!file ). I'm guessing noclobber is unset by default because of POSIX compatibility, but just to be sure: Are there any downsides to setting noclobber ? Is there anyway to set noclobber only for the interactive shell?
The reason noclobber is not set by default is tradition. As a matter of user interface design, it's a good idea to make “create this new file” the easy action and to put an extra hurdle on the more dangerous action, “either create a new file or overwrite an existing file”. Thus noclobber is a good idea (> to create a new file, >| to potentially overwrite an existing file) and it would likely have been the default if the shell had been designed a few decades later. I strongly recommend using the following in your interactive shell startup file (.bashrc or .zshrc):

set -o noclobber
alias cp='cp -i'
alias mv='mv -i'

In each case (redirection, copying, moving), the goal is to add an extra hurdle when the operation may have the side effect of erasing some existing data, even though erasing existing data is not the primary goal of the operation. I don't put rm -i in this list because erasing data is the primary goal of rm. Do note that noclobber and -i are safety nets. If they trigger, you've done something wrong. So don't use them as an excuse to not check what you're overwriting! The point is that you should have checked that the output file doesn't exist. If you're told file exists: foo or overwrite 'foo'?, it means you made a mistake and you should feel bad and be more careful. In particular, don't get into the habit of saying y if prompted to overwrite (arguably, the aliases should be alias cp='yes n | cp -i' mv='yes n | mv -i', but pressing Ctrl+C makes the output look better): if you did mean to overwrite, cancel the command, move or remove the output file, and run the command again. It's also important not to get into the habit of triggering those safeties, because if you do, one day you'll be on a machine which doesn't have your configuration, and you'll lose data because the protections you were counting on aren't there. noclobber will only be set for interactive shells, since .bashrc or .zshrc is only read by interactive shells. Of course you shouldn't change shell options in a way that would affect scripts, since it could break those scripts.
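For a quick feel of the difference, here is roughly what an interactive bash session looks like with the option on (the exact error wording may vary between shells):

$ set -o noclobber
$ echo hi > file
$ echo again > file
bash: file: cannot overwrite existing file
$ echo again >| file    # explicitly allow clobbering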
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/452865", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143394/" ] }
452,907
I have a pair of Bluetooth headphones paired to the computer. Some apps are capable of playing through them if selected in pavucontrol. Others, though, won't let me select them. In fact, I can't change the output on those apps at all. I can click the button and see a list (as seen in screenshot), but if I select a different option it just goes straight back to HD-Audio Generic. Other apps (such as Spotify, as seen in the screenshot) allow me to switch from one device to another without issue. What's going on? I've also tried to change the sink via the command line using pactl but for some reason it returns "Invalid Argument." A pretty much identical situation is described in this thread , but unfortunately it was never answered. Here's the sink-input data from pactl : Sink Input #8 Driver: protocol-native.c Owner Module: 11 Client: 24 Sink: 0 Sample Specification: float32le 2ch 44100Hz Channel Map: front-left,front-right Format: pcm, format.sample_format = "\"float32le\"" format.channels = "2" format.channel_map = "\"front-left,front-right\"" Corked: no Mute: no Volume: front-left: 65536 / 100% / 0.00 dB, front-right: 65536 / 100% / 0.00 dB balance 0.00 Buffer Latency: 54807 usec Sink Latency: 23177 usec Resample method: copy Properties: media.role = hex: phonon.streamid = hex: media.name = "Playback Stream" application.name = "bioshock.i386" native-protocol.peer = "UNIX socket client" native-protocol.version = "26" application.process.id = "10390" application.process.user = "john" application.process.host = "strangelove" application.process.binary = "bioshock.i386" application.language = "C" window.x11.display = ":0" application.process.machine_id = [redacted] application.process.session_id = "2" module-stream-restore.id = "sink-input-by-application-name:bioshock.i386" I'm running Linux Mint 18.3 "Sylvia", KDE Plasma 5.8.9, KDE framework 5.36.0, and pulseaudio 8.3. The stubborn app that won't switch devices is BioShock Infinite, from Steam. I also tested The Talos Principle (also from Steam), 64-bit version, and it wouldn't allow me to change the output either.
I finally found the solution: https://steamcommunity.com/app/93200/discussions/0/864959809826195633/ It seems that some apps use something called OpenALsoft to control audio, and it has a configuration option that inhibits sink changes. To disable the option, you can create a config file, ~/.alsoftrc, containing:

[pulse]
allow-moves=yes
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/452907", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/238645/" ] }
452,962
I am trying to download a huge python package and suddenly I hit this space crunch issue. When I run df -h command, it shows: [root@darwin ~]# df -hFilesystem Size Used Avail Use% Mounted on/dev/vda1 5.0G 5.0G 0.0G 100% /devtmpfs 7.9G 0 7.9G 0% /devtmpfs 7.9G 0 7.9G 0% /dev/shmtmpfs 7.9G 17M 7.9G 1% /runtmpfs 7.9G 0 7.9G 0% /sys/fs/cgrouptmpfs 1.6G 0 1.6G 0% /run/user/0 I can see only /dev/vda1 has reached 100% of its capacity. But the other filesystems are free, why can't they be used for package installation?
devtmpfs contains nodes which are populated by the kernel with information about devices, etc. tmpfs is actually stored in memory even though it appears as a mounted file system. The contents of tmpfs can be swapped out to the swap space but it will all disappear when the system is rebooted. You can probably clear some space but that's only a temporary solution as logs, data, and installing more packages will only fill it up once again. 5GB just isn't enough space in the longterm. The only way to permanently resolve your issue is to add more permanent storage space to your system.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/452962", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/297582/" ] }
452,978
I use Nautilus to explore my files. I use a Debian-based OS with KDE Plasma 5. I use the keyboard a lot. When I press the key up when navigating files, if I'm already at the extremity of the list of files, Nautilus sends a big system beep which I will hear at 100% volume through my headphones. My reaction is comparable to getting electrified. I have placed the following lines in ~/.bashrc for the sudo (root) user and for my regular desktop user: # Turn off system beep in console:xset b offxset b 0 0 0 However, despite the beep going away from some places in the OS (such as erasing an empty line in the gnome-terminal), it's still in Nautilus. I believe it's because Nautilus doesn't source any of the .bashrc or because it ignores the xset commands. How do I fix this? What I need might be at a deeper level than the .bashrc , some file that is executed by everything, but which can still control the sound. Otherwise, disabling the sound another way or replacing it could be interesting.
Short of muting the sound entirely or disconnecting your headphones, there is no system-wide setting for events which will be followed by all applications. In your case especially, since you’re using Nautilus on a KDE system, you’ll run into issues since Nautilus won’t follow your desktop’s configured behaviour. Nautilus uses GNOME’s settings. If you have the GNOME control centre, you can disable sound effects there — go to the sound settings, and disable sound effects. Alternatively, run dconf-editor , go to “org/gnome/desktop/sound”, and disable “event-sounds” and “input-feedback-sounds”. You can do this from the command line too, see How to turn off alert sounds/sound effects on Gnome from terminal? for details.
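For reference, the same two keys named above can also be flipped from a terminal with gsettings (this assumes the GNOME/GSettings schemas are installed, which they normally are alongside Nautilus):

gsettings set org.gnome.desktop.sound event-sounds false
gsettings set org.gnome.desktop.sound input-feedback-sounds false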
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/452978", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/244375/" ] }
453,025
I have a CSV file (in which the field separator is indeed comma) with 8 columns and a few million rows. Here's a sample: 1000024447,38111220,201705,181359,0,12,1,30901064458324,38009543,201507,9,0,1,1,12981064458324,38009543,201508,9,0,2,1,90017 What's the fastest way to print the sum of all numbers in a given column, as well as the number of lines read? Can you explain what makes it faster?
GNU datamash

$ datamash -t, count 3 sum 3 < file
3,604720

Some testing

$ time gawk -F',' '{ sum += $3 } END{ print sum, NR }' longfile
604720000000 3000000
real    0m2.851s
user    0m2.784s
sys     0m0.068s

$ time mawk -F',' '{ sum += $3 } END{ print sum, NR }' longfile
6.0472e+11 3000000
real    0m0.967s
user    0m0.920s
sys     0m0.048s

$ time perl -F, -nle '$sum += $F[2] }{ print "$.,$sum"' longfile
3000000,604720000000
real    0m3.394s
user    0m3.364s
sys     0m0.036s

$ time { cut -d, -f3 <longfile |paste -s -d+ - |bc ; }
604720000000
real    0m1.679s
user    0m1.416s
sys     0m0.248s

$ time datamash -t, count 3 sum 3 < longfile
3000000,604720000000
real    0m0.815s
user    0m0.716s
sys     0m0.036s

So mawk and datamash appear to be the pick of the bunch.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/453025", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46796/" ] }
453,144
In bash, I can write: caller 0 and receive the caller context's: Line number Function Script name This is extremely useful for debugging. Given: yelp () { caller 0; } I can then write yelp to see what code lines are being reached. I can implement caller 0 in bash as: echo "${BASH_LINENO[0]} ${FUNCNAME[1]} ${BASH_SOURCE[1]" How can I get the same output as caller 0 in zsh ?
I don't think there's a builtin command equivalent, but some combination of these four variables from the zsh/Parameter module can be used: funcfiletrace This array contains the absolute line numbers and corresponding file names for the point where the current function, sourced file, or (if EVAL_LINENO is set) eval command was called. The array is of the same length as funcsourcetrace and functrace , but differs from funcsourcetrace in that the line and file are the point of call, not the point of definition, and differs from functrace in that all values are absolute line numbers in files, rather than relative to the start of a function, if any. funcsourcetrace This array contains the file names and line numbers of the points where the functions, sourced files, and (if EVAL_LINENO is set) eval commands currently being executed were defined. The line number is the line where the ‘ function name ’ or ‘ name () ’ started. In the case of an autoloaded function the line number is reported as zero. The format of each element is filename:lineno . For functions autoloaded from a file in native zsh format, where only the body of the function occurs in the file, or for files that have been executed by the source or ‘ . ’ builtins, the trace information is shown as filename:0 , since the entire file is the definition. The source file name is resolved to an absolute path when the function is loaded or the path to it otherwise resolved. Most users will be interested in the information in the funcfiletrace array instead. funcstack This array contains the names of the functions, sourced files, and (if EVAL_LINENO is set) eval commands. currently being executed. The first element is the name of the function using the parameter. The standard shell array zsh_eval_context can be used to determine the type of shell construct being executed at each depth: note, however, that is in the opposite order, with the most recent item last, and it is more detailed, for example including an entry for toplevel, the main shell code being executed either interactively or from a script, which is not present in $funcstack . functrace This array contains the names and line numbers of the callers corresponding to the functions currently being executed. The format of each element is name:lineno . Callers are also shown for sourced files; the caller is the point where the source or ‘ . ’ command was executed. Comparing: foo.bash : #! /bin/bashyelp() { caller 0}foo () { yelp}foo foo.zsh : #! /bin/zshyelp() { print -l -- $funcfiletrace - $funcsourcetrace - $funcstack - $functrace}foo () { yelp}foo The results: $ bash foo.bash7 foo foo.bash$ zsh foo.zshfoo.zsh:7foo.zsh:10-foo.zsh:2foo.zsh:6-yelpfoo-foo:1foo.zsh:10 So, the corresponding values are in ${funcfiletrace[1]} and ${funcstack[-1]} . Modifying yelp to: yelp() { print -- $funcfiletrace[1] $funcstack[-1]} The output is: foo.zsh:7 foo which is quite close to bash's 7 foo foo.bash
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/453144", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143394/" ] }
453,196
I have one particular server that is exhibiting strange behaviour when using tr. Here is an example from a working server: -bash-3.2$ echo "abcdefghijklmnopqrstuvwxyz1234567890"|tr -d [a-z]1234567890-bash-3.2$ That makes perfect sense to me. This, however, is from the 'special' server: [root@host~]# echo "abcdefghijklmnopqrstuvwxyz1234567890"|tr -d [a-z]abcdefghijklmnpqrstuvwxyz1234567890 As you can see, deleting all lower case characters fails. BUT, it has deleted the letter 'o' The interesting part is the following two examples, which make no sense to me whatsoever: [root@host~]# echo "abcdefghijklmnopqrstuvwxyz1234567890"|tr -d [a-n]opqrstuvwxyz1234567890[root@host~]# echo "abcdefghijklmnopqrstuvwxyz1234567890"|tr -d [a-o]abcdefghijklmnpqrstuvwxyz1234567890[root@host~]# (again, the 'o' is deleted in the last example) Does anyone have any idea what is going on here? I can't reproduce on any other linux box that I am using.
You have a file named o in the current directory.

foo> ls
foo> echo "abcdefghijklmnopqrstuvwxyz1234567890"|tr -d [a-z]
1234567890
foo> touch o
foo> echo "abcdefghijklmnopqrstuvwxyz1234567890"|tr -d [a-z]
abcdefghijklmnpqrstuvwxyz1234567890

The shell will expand the [a-z] string if a match is found. This is called pathname expansion; according to man bash:

Pathname Expansion
    After word splitting, unless the -f option has been set, bash scans each word for the characters *, ?, and [. ... (...) bash will perform expansion.
[...]
    Matches any one of the enclosed characters.
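The usual fix is to keep the shell from treating the argument as a glob; with tr the brackets are not needed at all, so the plain quoted range is the cleaner form:

echo "abcdefghijklmnopqrstuvwxyz1234567890" | tr -d 'a-z'
1234567890

(Quoting '[a-z]' would also stop the pathname expansion, but tr would then delete literal [ and ] characters as well.)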
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/453196", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/298058/" ] }
453,222
In Linux, is it possible to see football in VLC player without www browser from address https://areena.yle.fi/tv/ohjelmat/30-901?play=1-50003218
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/453222", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/298091/" ] }
453,234
Where can I find reference for less regex search patterns? I want to search file with less using \d to find digits, but it does not seem to understand this wildcard. I tried to find a reference for less regex patterns, but could not find anything, not on man pages and not on the Internet.
less 's man page says: /pattern Search forward in the file for the N-th line containing the pattern. N defaults to 1. The pattern is a regular expression, as recognized by the regular expression library supplied by your system. so the accepted syntax may depend on your system. Off-hand, it seems to accept extended regular expressions on my Debian system, see regex(7) , and Why does my regular expression work in X but not in Y? \d is from Perl, and isn't supported by all regex engines. Use [0-9] or [[:digit:]] to match digits. (Their exact behaviour may depend on the locale.)
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/453234", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/298100/" ] }
453,294
I am using OpenSSH version 7.4p1, in CVE database I found that cpe:/a:openbsd:openssh:7.4:p1 is vulnerable to CVE-2017-15906 https://www.cvedetails.com/cve/CVE-2017-15906/ . Does this mean that for sure my version is affected or is it possible that this version has the same number but is already patched? How can I verify this?
CentOS is just rebuilt RHEL, so your system is safe if you updated to openssh-7.4p1-16.el7 or similar, which is shipped in CentOS 7. There is a CVE database in the Red Hat access portal: https://access.redhat.com/security/cve/cve-2017-15906 with links to the erratas fixing the issues and a listing of the packages fixing the specific issue: https://access.redhat.com/errata/RHSA-2018:0980 Similarly, you can get the changelog of your installed package and it should list something related to this CVE number. Disclaimer: I was fixing that package in this RHEL version.
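A hedged way to check the changelog on the installed system itself (the package name may differ slightly, e.g. openssh vs openssh-server):

rpm -q --changelog openssh | grep CVE-2017-15906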
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/453294", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/298139/" ] }
453,338
The time only print the the execution second time of a command. If there is a solution like IPython's powerful timeit magic command, great.
zsh's time uses the TIMEFMT variable to control the format. By default, this is %J %U user %S system %P cpu %*E total, which produces the following.

$ time sleep 2
sleep 2  0.00s user 0.00s system 0% cpu 2.002 total

This does produce millisecond accuracy (at least for total), so perhaps your system has a different default set (lagging distro?), or has modified TIMEFMT. Have a look at the manual page for possible formats. I use the following in ~/.zshrc:

TIMEFMT=$'\n================\nCPU\t%P\nuser\t%*U\nsystem\t%*S\ntotal\t%*E'

which produces the following.

$ time sleep 2
================
CPU     0%
user    0.003
system  0.000
total   2.006
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/453338", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34173/" ] }
453,364
On a shared server, I would like to have some very low priority users such that whenever an other user (also without root privileges) needs the resources, they can kill any of the low priority users' processes. Is it possible to allow something like that?
Give the other users permission to kill the processes as the low priority user through sudo -u lowpriouser /bin/kill PID A user can only signal their own processes, unless they have root privileges. By using sudo -u a user with the correct set-up in the sudoers file may assume the identity of the low priority user and kill the process. For example: %killers ALL = (lowpriouser) /bin/kill This would allow all users in the group killers to run /bin/kill as lowpriouser . See also the sudoers manual on your system. On an OpenBSD system, the same can be done through the native doas utility with a configuration like permit :killers as lowpriouser cmd /bin/kill Then doas -u lowpriouser /bin/kill PID See the manuals for doas and doas.conf .
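To make this convenient for users in the killers group, the sudo call could be wrapped in a tiny script; killlow is a hypothetical name, and the sketch assumes the sudoers entry shown above:

#!/bin/sh
# usage: killlow PID [SIGNAL]    e.g.  killlow 12345 TERM
pid=$1
sig=${2:-TERM}
exec sudo -u lowpriouser /bin/kill -s "$sig" "$pid"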
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/453364", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18200/" ] }
453,414
Whenever I add my 4G modem to my raspberry, it gets on top of the default routes ou ip route list , however I want everything to go through wlan, and only use the 4G modem to receive SSH connections. I've found this answer on how to disable the default routes. however, after reboot, the 4G modem comes back to the top. How do I make wlan0 to always be the first rule on default? UPDATE: Here's the dmesg output when I connect the USB dongle: [426102.910168] usb 1-1.5.1: new full-speed USB device number 6 using dwc_otg[426103.046670] usb 1-1.5.1: not running at top speed; connect to a high speed hub[426103.056674] usb 1-1.5.1: New USB device found, idVendor=12d1, idProduct=1f01[426103.056693] usb 1-1.5.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3[426103.056704] usb 1-1.5.1: Product: HUAWEI_MOBILE[426103.056714] usb 1-1.5.1: Manufacturer: HUAWEI_MOBILE[426103.056724] usb 1-1.5.1: SerialNumber: 0123456789ABCDEF[426103.121355] usb-storage 1-1.5.1:1.0: USB Mass Storage device detected[426103.122875] scsi host0: usb-storage 1-1.5.1:1.0[426103.987177] usb 1-1.5.1: USB disconnect, device number 6[426105.470211] usb 1-1.5.1: new full-speed USB device number 7 using dwc_otg[426105.606666] usb 1-1.5.1: not running at top speed; connect to a high speed hub[426105.615673] usb 1-1.5.1: New USB device found, idVendor=12d1, idProduct=14dc[426105.615692] usb 1-1.5.1: New USB device strings: Mfr=1, Product=2, SerialNumber=0[426105.615703] usb 1-1.5.1: Product: HUAWEI_MOBILE[426105.615713] usb 1-1.5.1: Manufacturer: HUAWEI_MOBILE[426105.766297] usb-storage 1-1.5.1:1.2: USB Mass Storage device detected[426105.766768] scsi host0: usb-storage 1-1.5.1:1.2[426105.855053] cdc_ether 1-1.5.1:1.0 eth1: register 'cdc_ether' at usb-3f980000.usb-1.5.1, CDC Ethernet Device, 0c:5b:8f:27:9a:64[426105.855593] usbcore: registered new interface driver cdc_ether[426106.785653] scsi 0:0:0:0: Direct-Access HUAWEI TF CARD Storage 2.31 PQ: 0 ANSI: 2[426106.803758] sd 0:0:0:0: Attached scsi generic sg0 type 0[426106.820687] sd 0:0:0:0: [sda] Attached SCSI removable disk Here's ip addr eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 0c:5b:8f:27:9a:64 brd ff:ff:ff:ff:ff:ff inet6 fe80::584f:751f:bb3e:e26b/64 scope link valid_lft forever preferred_lft forever UPDATE 2 I attached it a few more times until it showed the eth1 route: [10787.229141] usb 1-1.5: new full-speed USB device number 7 using dwc_otg[10787.363515] usb 1-1.5: New USB device found, idVendor=05e3, idProduct=0606[10787.363533] usb 1-1.5: New USB device strings: Mfr=1, Product=2, SerialNumber=0[10787.363544] usb 1-1.5: Product: USB Hub 2.0[10787.363555] usb 1-1.5: Manufacturer: ALCOR[10787.365166] hub 1-1.5:1.0: USB hub found[10787.369831] hub 1-1.5:1.0: 4 ports detected[10797.419094] usb 1-1.5.1: new full-speed USB device number 8 using dwc_otg[10797.555636] usb 1-1.5.1: not running at top speed; connect to a high speed hub[10797.565759] usb 1-1.5.1: New USB device found, idVendor=12d1, idProduct=1f01[10797.565777] usb 1-1.5.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3[10797.565789] usb 1-1.5.1: Product: HUAWEI_MOBILE[10797.565799] usb 1-1.5.1: Manufacturer: HUAWEI_MOBILE[10797.565808] usb 1-1.5.1: SerialNumber: 0123456789ABCDEF[10797.630477] usb-storage 1-1.5.1:1.0: USB Mass Storage device detected[10797.631101] scsi host0: usb-storage 1-1.5.1:1.0[10798.472745] usb 1-1.5.1: USB disconnect, device number 8[10799.469081] usb 1-1.5.1: new full-speed USB device number 9 using 
dwc_otg[10799.630768] usb 1-1.5.1: not running at top speed; connect to a high speed hub[10799.646891] usb 1-1.5.1: New USB device found, idVendor=12d1, idProduct=14dc[10799.646909] usb 1-1.5.1: New USB device strings: Mfr=1, Product=2, SerialNumber=0[10799.646920] usb 1-1.5.1: Product: HUAWEI_MOBILE[10799.646930] usb 1-1.5.1: Manufacturer: HUAWEI_MOBILE[10799.814489] usb-storage 1-1.5.1:1.2: USB Mass Storage device detected[10799.815008] scsi host0: usb-storage 1-1.5.1:1.2[10799.897788] cdc_ether 1-1.5.1:1.0 eth1: register 'cdc_ether' at usb-3f980000.usb-1.5.1, CDC Ethernet Device, 0c:5b:8f:27:9a:64[10799.898127] usbcore: registered new interface driver cdc_ether[10800.889652] scsi 0:0:0:0: Direct-Access HUAWEI TF CARD Storage 2.31 PQ: 0 ANSI: 2[10800.910585] sd 0:0:0:0: Attached scsi generic sg0 type 0[10800.923297] sd 0:0:0:0: [sda] Attached SCSI removable disk Here's route -n Destination Gateway Genmask Flags Metric Ref Use Iface0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 wlan00.0.0.0 192.168.8.1 0.0.0.0 UG 207 0 0 eth10.0.0.0 192.168.1.1 0.0.0.0 UG 303 0 0 wlan0169.254.0.0 0.0.0.0 255.255.0.0 U 202 0 0 eth0169.254.0.0 0.0.0.0 255.255.0.0 U 204 0 0 docker0169.254.0.0 0.0.0.0 255.255.0.0 U 206 0 0 veth4557ad2172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0192.168.1.0 0.0.0.0 255.255.255.0 U 303 0 0 wlan0192.168.8.0 0.0.0.0 255.255.255.0 U 207 0 0 eth1 See that I did ifmetric wlan0 in order to be able to use the wlan0 to ssh into my raspberry UPDATE 09/10: allow-hotplug wlan0iface wlan0 inet dhcp up ifmetric wlan0 0 wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf This won't make my wlan0 have metric 0. What am I doing wrong?
For changing the routing priority for an interface, you change metrics. By default, all are 0, which is the highest priority. So, you can do:

allow-hotplug eth1
iface eth1 inet dhcp
    up ifmetric eth1 30

To use ifmetric in Debian, you have got to install it:

sudo apt-get install ifmetric

ifmetric: Set routing metrics for a network interface. ifmetric is a Linux tool for setting the metrics of all IPv4 routes attached to a given network interface at once. This may be used to change the priority of routing IPv4 traffic over the interface. Lower metrics correlate with higher priorities. The metric 0 means the highest priority route and is the default one. A larger metric value means lower priority routes. The IP address of the active interface with the lowest metric value becomes the originating one. See ifmetric(8).
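Once the interface is up you can confirm which default route wins; the output looks roughly like this (addresses taken from the question, shown only as an illustration):

$ ip route show default
default via 192.168.1.1 dev wlan0
default via 192.168.8.1 dev eth1 metric 30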
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/453414", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271974/" ] }
453,436
According to man tmux to check the launch option: -C: Start in control mode (see the CONTROL MODE section). Given twice (-CC) disables echo. Then in the control mode section of the man tmux , there is the following description: CONTROL MODE tmux offers a textual interface called control mode. This allows applications to communicate with tmux using a simple text-only protocol. In control mode, a client sends tmux commands or command sequences terminated by newlines on standard input. Each command will produce one block of output on standard output. An output block consists of a %begin line followed by the output (which may be empty). The output block ends with a %end or %error. %begin and matching %end or %error have two arguments: an integer time (as seconds from epoch) and command number. For example: %begin 1363006971 2 0: ksh* (1 panes) [80x24] [layout b25f,80x24,0,0,2] @2 (active) %end 1363006971 2 The refresh-client -C command may be used to set the size of a client in control mode. In control mode, tmux outputs notifications. A notification will never occur inside an output block. I'm not sure what it means, but at least as far as I try a few commands and try to see the looks and feels of it via ( tmux -CC ), it looks like the same as when I launch via tmux new-session . So what is the "control mode" and what makes it different from the normal mode? EDIT I found that the session and the window that was launched via the control mode ( -CC ) does not react to the keyboard shortcut of the tmux commands, such as window split. So what is the point of using the control mode in the first place?
I'm on a Mac and I use iTerm2. As far as I know it's the only terminal emulator that has tmux integration. You start by doing tmux -CC and iTerm will control your tmux session. This means you can use iTerm2 normally as you usually do (CMD-D to split a window vertically, CMD-SHIFT-D split it horizontally). You can use your mouse to reposition the panes instead of using C-b { . You don't need to use the prefix at all. You don't have any problems with copy and paste either when you're dealing with panes. tl;dr Using tmux -CC allows you to use tmux "natively" on terminals that support it. So far I haven't seen any linux terminals that support it, only iTerm2 on a Mac.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/453436", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73917/" ] }
453,491
As i run these commands to install any packages with yum or dnf: > sudo -c 'yum(or dnf) install [package name]' > sudo yum(or dnf) install [package name] I get this error: Last metadata expiration check: 0:01:34 ago on Thu 05 Jul 2018 12:27:36 AM +0430. No match for argument: [package name] Error: Unable to find a match Any solution?
yum repolist will display the active repo list, I suspect the packages your looking for are not in the base/update/extras repositories and you may need to add additional repositories. A good way to find out is to google search the package your looking for to get an idea of repository you need to have setup or install. A lot of repositories do have a RPM file that will install the repository for your or a "how to" for adding the repository. Examples below Red Hat has made the documentation free to read 9.5.2. Setting [repository] Options IUS repo setup IUS Getting Started yum repolist example: Loaded plugins: fastestmirror, ovlDetermining fastest mirrors * base: mirror.its.sfu.ca * extras: mirror.it.ubc.ca * updates: centos.mirror.rafal.cabase | 3.6 kB 00:00:00 extras | 3.4 kB 00:00:00 updates | 3.4 kB 00:00:00 (1/4): base/7/x86_64/group_gz | 166 kB 00:00:00 (2/4): extras/7/x86_64/primary_db | 150 kB 00:00:00 (3/4): updates/7/x86_64/primary_db | 3.6 MB 00:00:00 (4/4): base/7/x86_64/primary_db | 5.9 MB 00:00:01 repo id repo name statusbase/7/x86_64 CentOS-7 - Base 9911extras/7/x86_64 CentOS-7 - Extras 314updates/7/x86_64 CentOS-7 - Updates 946repolist: 11171
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/453491", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/298117/" ] }
453,499
I would like to start a tmux session on a separate window in iTerm2. Now I'm writing my own configuration script to launch the session. tmux new-session -s dev -n main -dtmux send-keys -t dev "cd $DL" C-mtmux split-window -h -t devtmux split-window -v -t dev -p 30tmux resize-pane -x 70 -y 20tmux attach -t dev This starts a new session but the window is on the window I execute the script, not the new, separate window in iTerm2. So I changed the first line ( tmux new-session -s dev -n main -d ) to tmux -CC new -t dev , but then although the session starts in a new window, it does not have the split and the resize. It seems to only open the new session in a new window and that's all. How can I make it launched in a new window with all the initial settings including the directory change, split window, etc...?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/453499", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73917/" ] }
453,502
Trying to understand this piece of code: if [ -f /etc/bashrc ]; then . /etc/bashrcfi I'm not sure what the -f means exactly.
The relevant man page to check for this is that of the shell itself, bash, because -f is functionality that the shell provides; it's a bash built-in. On my system (CentOS 7), the fine man page covers it. The grep may not give the same results on other distributions. Nevertheless, if you run man bash and then search for '-f' it should give the results you require.

$ man bash | grep -A1 '\-f file$'
       -f file
              True if file exists and is a regular file.
$
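In a script the test is normally used directly in an if; for example (the path is just an illustration):

if [ -f /etc/bashrc ]; then
    echo "/etc/bashrc exists and is a regular file"
fi

Related single-letter tests include -e (exists, any type), -d (directory), -r (readable) and -x (executable).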
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/453502", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/281244/" ] }
453,547
I know env is a shell command, it can be used to print a list of the current environment variables. And as far as I understand, RANDOM is alsoa environment variable. So why, when I launch env on Linux, does the output not include RANDOM ?
RANDOM is not an environment variable. It's a shell variable maintained by some shells. It is generally not exported by default. This is why it doesn't show up in the output of env . Once it's been used at least once, it would show up in the output of set , which, by itself, lists the shell variables (and functions) and their values in the current shell session. This behaviour is dependent on the shell and using pdksh on OpenBSD, RANDOM would be listed by set even if not previously used. The rest of this answer concerns what could be expected to happen if RANDOM was exported (i.e. turned into an environment variable). Exporting it with export RANDOM would make it an environment variable but its use would be severely limited as its value in a child process would be "random but static" (meaning it would be an unchanging random number). The exact behaviour differs between shells. I'm using pdksh on OpenBSD in the example below and I get a new random value in each awk run (but the same value every time within the same awk instance). Using bash , I would get exactly the same random value in all invocations of awk . $ awk 'BEGIN { print ENVIRON["RANDOM"], ENVIRON["RANDOM"] }'25444 25444$ awk 'BEGIN { print ENVIRON["RANDOM"], ENVIRON["RANDOM"] }'30906 30906 In bash , the exported value of RANDOM would remain static regardless of the use of RANDOM in the shell (where each use of $RANDOM would still give a new value). This is because each reference to the shell variable RANDOM in bash makes the shell access its internal get_random() function to give the variable a new random value, but the shell does not update the environment variable RANDOM . This is similar in behaviour as with other dynamic bash variables, such as LINENO , SECONDS , BASHPID etc. To update the environment variable RANDOM in bash , you would have to assign it the value of the shell variable RANDOM and re-export it: export RANDOM="$RANDOM" It is unclear to me if this would have the additional side effect of re-seeding the random number generator in bash or not (but an educated guess would be that it doesn't).
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/453547", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/262683/" ] }
453,623
I've extracted strings I'm interested in from another file and now have a list like this: StringAStringBStringAStringAStringBStringCStringB How can I extract the number of occurrences each string has using common command-line tools? I would like to end up with a list like this: StringA 3StringB 3StringC 1
Use: sort file | uniq -c Looks simple?
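Note that uniq -c prints the count before the string; if you want exactly the layout asked for in the question, a small awk swap reorders the columns:

$ sort file | uniq -c | awk '{ print $2, $1 }'
StringA 3
StringB 3
StringC 1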
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/453623", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46433/" ] }
453,646
I'm working on a bash script that parses a tab separated file. If the file contains the word "prompt" the script should ask the user to enter a value. It appears that while reading the file, the "read" command is not able to read from standard input as the "read" is simply skipped. Does anybody have a work around for doing both reading from a file as well as from stdin? Note: The script should run on both Git Bash and MacOS. Below is a little code example that fails: #!/bin/bash#for debuggingset "-x"while IFS=$'\r' read -r line || [[ -n "$line" ]]; do [[ -z $line ]] && continue IFS=$'\t' read -a fields <<<"$line" command=${fields[0]} echo "PROCESSING "$command if [[ "prompt" = $command ]]; then read -p 'Please enter a value: ' aValue echo else echo "Doing something else for "$command fidone < "$1" Output: $ ./promptTest.sh promptTest.tsv+ IFS=$'\r'+ read -r line+ [[ -z something else ]]+ IFS=' '+ read -a fields+ command=something+ echo 'PROCESSING something'PROCESSING something+ [[ prompt = something ]]+ echo 'Doing something else for something'Doing something else for something+ IFS=$'\r'+ read -r line+ [[ -z prompt ]]+ IFS=' '+ read -a fields+ command=prompt+ echo 'PROCESSING prompt'PROCESSING prompt+ [[ prompt = prompt ]]+ read -p 'Please enter a value: ' aValue+ echo+ IFS=$'\r'+ read -r line+ [[ -n '' ]] Sample tsv file: $ cat promptTest.tsvsomething elsepromptotherthing nelse
The simplest way is to use /dev/tty as the read for keyboard input. For example:

#!/bin/bash
echo hello | while read line
do
  echo We read the line: $line
  echo is this correct?
  read answer < /dev/tty
  echo You responded $answer
done

This breaks if you don't run this on a terminal, and wouldn't allow for input to be redirected into the program, but otherwise works pretty well. More generally, you could take a new file handle based off the original stdin, and then read from that. Note the exec line and the read:

#!/bin/bash
exec 3<&0
echo hello | while read line
do
  echo We read the line: $line
  echo is this correct?
  read answer <&3
  echo You responded $answer
done

In both cases the program looks a bit like:

% ./y
We read the line: hello
is this correct?
yes
You responded yes

The second variation allows for input to also be redirected:

% echo yes | ./y
We read the line: hello
is this correct?
You responded yes
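Applied to the script in the question, a simplified sketch of just the relevant branch (it reads tab-separated fields directly and leaves out the carriage-return handling of the original):

while IFS=$'\t' read -r -a fields; do
    if [[ "${fields[0]}" == prompt ]]; then
        read -r -p 'Please enter a value: ' aValue < /dev/tty
        echo "you entered: $aValue"
    fi
done < "$1"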
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/453646", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/268673/" ] }
453,700
I have an explicit path to a file: /aaa/bbb/ccc/ddd/eee/fff.txt I need to cd /aaa/bbb and perform an operation on ccc/ddd/eee/fff.txt . I got the first bit figured out: df_test=/aaa/bbb/ccc/ddd/eee/fff.txtcd $( echo ${df_test} | awk -F/ '{print "/"$2"/"$3}' ) How do I then chop off the first two steps in the path, and operate on the remainder? Additionally, I won't know in advance how deep these files will be. I might have /aaa/bbb/ccc/ddd.txt/aaa/bbb/ccc/ddd/eee/fff.txt/aaa/bbb/ccc/ddd/eee/fff/ggg/hhh/iii.txt I just need to change directories to /aaa/bbb, and operate on the remaining relative path.
In a standard shell:

$ path=/aaa/bbb/ccc/ddd/eee/fff.txt
$ tail="${path#/*/*/}"
$ head="${path%/$tail}"
$ echo "$head" "$tail"
/aaa/bbb ccc/ddd/eee/fff.txt

"${path#/*/*/}" is the value of path but with the (shortest) leading part matching /*/*/ removed, that is, your tail part. Then "${path%/$tail}" is path with a slash and the tail part removed. That will produce broken results if the path doesn't have enough components, so you may want to check that first. Alternatively, in Bash we can use a regular expression match within [[ .. ]] and pick up matching pieces:

$ if [[ $path =~ (/[^/]+/[^/]+)/(.*) ]]; then
    echo "${BASH_REMATCH[1]}" "${BASH_REMATCH[2]}"
fi
/aaa/bbb ccc/ddd/eee/fff.txt

[[ ... ]] works as a condition, so it's simple to use an if here to make sure the path has enough components.
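Put back into the context of the question, a minimal sketch would be (do_something stands in for whatever you need to run on the relative path):

df_test=/aaa/bbb/ccc/ddd/eee/fff.txt
tail="${df_test#/*/*/}"
head="${df_test%/$tail}"
cd "$head" && do_something "$tail"    # e.g. ls -l -- "$tail"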
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/453700", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/31778/" ] }
453,703
I need to fire off an alert to the security team if a users AWS access keys exceed 90 days old. I am doing this in bash. So far my script is outputting the keys and the dates like this: AKIAJS7KPHZCQRQ5FJWA : 2016-08-31T15:38:18ZAKIAICDOHVTMEAB6RM5Q : 2018-02-08T03:55:51Z How do I handle determining if the date is past 90 days old using that date format in bash? I am using Ubuntu 18.04. I believe that the date format is ISO 8601. Please confirm/correct if that is wrong as well.
You can use GNU date to convert a date-time string into a number of seconds (since "the epoch", 1st January 1970). From there it's a simple arithmetic comparison:

datetime='2016-08-31T15:38:18Z'
timeago='90 days ago'

dtSec=$(date --date "$datetime" +'%s')   # For "now", use $(date +'%s')
taSec=$(date --date "$timeago" +'%s')

echo "INFO: dtSec=$dtSec, taSec=$taSec" >&2

[ $dtSec -lt $taSec ] && echo too old
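Applied to the key listing shown in the question, a rough sketch that reads lines of the form "KEY : TIMESTAMP" (keys.txt is a placeholder file name):

taSec=$(date --date '90 days ago' +%s)
while read -r key sep datetime; do
    dtSec=$(date --date "$datetime" +%s)
    [ "$dtSec" -lt "$taSec" ] && echo "ALERT: key $key is older than 90 days"
done < keys.txt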
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/453703", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/298424/" ] }
453,749
From this post it is shown that FS:[0x28] is a stack-canary. I'm generating that same code using GCC on this function, void foo () { char a[500] = {}; printf("%s", a);} Specifically, I'm getting this assembly.. 0x000006b5 64488b042528. mov rax, qword fs:[0x28] ; [0x28:8]=0x1978 ; '(' ; "x\x19" 0x000006be 488945f8 mov qword [local_8h], rax...stuff... 0x00000700 488b45f8 mov rax, qword [local_8h] 0x00000704 644833042528. xor rax, qword fs:[0x28] 0x0000070d 7405 je 0x714 0x0000070f e85cfeffff call sym.imp.__stack_chk_fail ; void __stack_chk_fail(void) ; CODE XREF from 0x0000070d (sym.foo) 0x00000714 c9 leave 0x00000715 c3 ret What is setting the value of fs:[0x28] ? The kernel, or is GCC throwing in the code? Can you show the code in the kernel, or compiled into the binary that sets fs:[0x28] ? Is the canary regenerated -- on boot, or process spawn? Where is this documented?
It's easy to track this initialization, as for (almost) every process strace shows a very suspicious syscall during the very beginning of the process run: arch_prctl(ARCH_SET_FS, 0x7fc189ed0740) = 0 That's what man 2 arch_prctl says: ARCH_SET_FS Set the 64-bit base for the FS register to addr. Yay, looks like that's what we need. To find, who calls arch_prctl , let's look for a backtrace: (gdb) catch syscall arch_prctlCatchpoint 1 (syscall 'arch_prctl' [158])(gdb) rStarting program: <program path>Catchpoint 1 (call to syscall arch_prctl), 0x00007ffff7dd9cad in init_tls () from /lib64/ld-linux-x86-64.so.2(gdb) bt#0 0x00007ffff7dd9cad in init_tls () from /lib64/ld-linux-x86-64.so.2#1 0x00007ffff7ddd3e3 in dl_main () from /lib64/ld-linux-x86-64.so.2#2 0x00007ffff7df04c0 in _dl_sysdep_start () from /lib64/ld-linux-x86-64.so.2#3 0x00007ffff7dda028 in _dl_start () from /lib64/ld-linux-x86-64.so.2#4 0x00007ffff7dd8fb8 in _start () from /lib64/ld-linux-x86-64.so.2#5 0x0000000000000001 in ?? ()#6 0x00007fffffffecef in ?? ()#7 0x0000000000000000 in ?? () So, the FS segment base is set by the ld-linux , which is a part of glibc , during the program loading (if the program is statically linked, this code is embedded into the binary). This is where it all happens. During the startup, the loader initializes TLS . This includes memory allocation and setting FS base value to point to the TLS beginning. This is done via arch_prctl syscall . After TLS initialization security_init function is called, which generates the value of the stack guard and writes it to the memory location, which fs:[0x28] points to: Stack guard value initialization Stack guard value write , more detailed And 0x28 is the offset of the stack_guard field in the structure which is located at the TLS start.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/453749", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3285/" ] }
453,753
If you put this link in a browser: https://unix.stackexchange.com/q/453740#453743 it returns this: https://unix.stackexchange.com/questions/453740/installing-busybox-for-ubuntu#453743 However cURL drops the Hash: $ curl -I https://unix.stackexchange.com/q/453740#453743HTTP/2 302cache-control: no-cache, no-store, must-revalidatecontent-type: text/html; charset=utf-8location: /questions/453740/installing-busybox-for-ubuntu Does cURL have an option to keep the Hash with the resultant URL? Essentially Iam trying to write a script that will resolve URLs like a browser - this is whatI have so far but it breaks if the URL contains a Hash: $ set https://unix.stackexchange.com/q/453740#453743$ curl -L -s -o /dev/null -w %{url_effective} "$1"https://unix.stackexchange.com/questions/453740/installing-busybox-for-ubuntu
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/453753", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17307/" ] }
453,757
I am trying to write a Bash script that takes a list of numbers as command line arguments and then outputs the sum of those numbers. So the script would be run as script.sh 1 555 22 122 66 and would then sum them all up. I know they need to be passed to the $@ variable and then likely run through a loop, but I don't understand how to convert the contents of $@ to an int to do the math. I tried to do this: #!/bin/bashfor i in $@do $@+$@ echo "the total is `$@`"done
In general, an argument is converted to an integer automatically if used inside an arithmetic $((...)) expansion. This loop will sum all the arguments: for x; do sum=$((sum+x)); done; echo "$sum" The shell caches all the arguments in separated memory locations as a c program deals with an argv[] array. The shell user does not need to deal directly with that array, the shell helps by assigning them to $1, $2, $3, etc. The shell also abstract such list as "$@". And finally, the syntax for x is a shorthand for for x in "$@" to loop over all arguments. That is assuming that arguments are decimal numbers that do not start with zero, octal numbers starting with zero or hexadecimal numbers that start with 0x , and that the total sum does not overflow (2^63-1 in 64 bit systems) This list: $ ./script 12 021 0xab Will print 200 (the decimal result).
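So the complete script, invoked as script.sh 1 555 22 122 66, could look like this minimal sketch:

#!/bin/bash
sum=0
for x in "$@"; do
    sum=$((sum + x))
done
echo "the total is $sum"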
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/453757", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/297398/" ] }
453,798
I have a bunch of folders which have a subfolder somewhere called 360. find . -name '360' -type d -exec 'echo "{}"' \; output: find: echo "./workspace/6875538616c6/raw/2850cd9cf25b/360": No such file or directory For each found item, I want to do a curl call, and trigger a Jenkins build job.My problem is that ./ part at the start. I should be able to cut it off like this: find . -name '360' -type d -exec 'echo {} | cut -c 2-' \; But because it starts with a ./ it will just be executed ("No such file or directory").How can I get the output from find, without the leading ./ ? update: Here is the whole thing with a jenkins curl call: find reallylongfolderstructure -name '360' -type d -exec 'curl http://user:[email protected]/jenkins/job/jobname/buildWithParameters?token=ourtoken&parameter={}' \; output 08:53:52 find: ‘curl http://user:token@ourdomain/jenkins/job/jobname/buildWithParameters?token=ourtoken&parameter=reallylongfolderstructure/something/lol/360’: No such file or directory
You write because it starts with a ./ it will just be executed ("No such file or directory"). This isn't what's happening. You have provided a single command to the find ... -exec parameter of echo "{}" . Note that this is not echo and the directory found by find ; it's a single command that includes a space in its name. The find command (quite reasonably) cannot execute a command called echo "./workspace/6875538616c6/raw/2850cd9cf25b/360" . Remove the single quotes around the -exec parameter and you may find you don't need any additional changes or workarounds: find . -name '360' -type d -exec echo "{}" \; Similarly here you need to remove the quoting of the entire value passed to -exec . But in this case you still need to quote the storage arguments so the shell cannot interpret & , etc. find reallylongfolderstructure -name '360' -type d -exec curl 'http://user:[email protected]/jenkins/job/jobname/buildWithParameters?token=ourtoken&parameter={}' \;
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/453798", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/298491/" ] }
453,827
Making alias of rm command so that when rm command is executed it will first copy file to /tmp/recycle_bin and then remove the file. But I'm getting issue in the process. Steps performed by me: I wrote below command in ~/.bashrc file: alias rm='cp $@ /tmp/recycle_bin && rm $@' execute it with below command: . ~/.bashrc Open new terminal and execute below command: rm "/home/XXXXX/My_Programs/test_method.py"cp: missing destination file operand after '/tmp/recycle_bin'** got this error message Need to understand: Even if the parameter is passed to rm command as the file which I want to remove. Why i am getting error **missing destination file operand after /tmp/recycle_bin ? The $@ variable in alias command is not populated? How to debug this case and resolve?
An alias can not take arguments and use $@ to access them like that. Alias expansion in bash is a simple text replacement. If you have alias rm='something something', then using rm file1 file2 would execute something something file1 file2, and if the alias included $@, this would be expanded with the command line arguments of the shell, not of the alias. In your case, assuming the shell's list of command line arguments is empty, the alias alias rm='cp $@ /tmp/recycle_bin && rm $@' would execute as cp /tmp/recycle_bin && rm file1 file2 when invoked as rm file1 file2. The cp utility will complain about only having a single operand. You could use a shell function instead:

rm () {
    cp -- "$@" "${TMPDIR:-/tmp}/recycle_bin" && command rm -- "$@"
}

This would copy the indicated files to $TMPDIR/recycle_bin (or /tmp/recycle_bin if TMPDIR is unset or empty) and then delete the files. The command command is used to not cause an infinite recursion. The -- are needed to treat all arguments as filenames rather than as options. Note also that the quoting is important so that filenames are not split on whitespace and so that filename globbing patterns in the arguments are not picking up files that you don't want to remove. A bit more efficient (cp + rm == mv):

rm () {
    mv -- "$@" "${TMPDIR:-/tmp}/recycle_bin"
}

A bit safer (creates the recycle bin if it's not there):

rm () {
    mkdir -p "${TMPDIR:-/tmp}/recycle_bin" && mv -- "$@" "${TMPDIR:-/tmp}/recycle_bin"
}

And even safer, with GNU mv (creates backups in the recycle bin if name collisions occur):

rm () {
    mkdir -p "${TMPDIR:-/tmp}/recycle_bin" && mv -b -- "$@" "${TMPDIR:-/tmp}/recycle_bin"
}

For an alias-only (and GNU-only) variation, see "Create a recycle bin feature without using functions".
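Using the function then looks like this (the file name is only an example):

$ rm notes.txt
$ ls "${TMPDIR:-/tmp}/recycle_bin"
notes.txt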
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/453827", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/264177/" ] }
453,883
I have a file that contains newline characters. I am posting the file via curl to a server that parses it as JSON. It rejects the request due to the newline characters. But when I do: $(echo "$MY_DATA" | sed 's/$//' | tr -d '\n\r') it works, but the newline characters are gone. How can I escape the text so that it keeps the newline characters? I tried tr '\n' '\\n' and sed 's/\n/\\n/g' and neither approach worked.
I assume you want to change raw newline characters to \n (a backslash and an n ). tr '\n' '\\n' would change newlines to backslashes (and then there's an extra n in the second set). sed 's/\n/\\n/g' won't work because sed doesn't load the line-terminating newline into the buffer, but handles it internally. Some alternatives are GNU sed with -z (takes the input as NUL-separated "lines", not newline-separated): sed -z 's/\n/\\n/g' and Perl (unlike sed, it does take the newline in the buffer, so s/// works on it): perl -pe 's/\n/\\n/g' ( tr -d '\n\r' will indeed remove newlines, that's exactly what you're asking it to do.)
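A quick way to sanity-check the Perl variant before wiring it into the curl call (the sample text here is just an illustration):

printf 'line one\nline two\n' | perl -pe 's/\n/\\n/g'
# prints:  line one\nline two\n   -- one physical line containing literal backslash-n sequences

Note that strict JSON also requires escaping backslashes and double quotes, so if jq is available, jq -Rs '.' file is an arguably safer way to turn the whole file into a properly escaped JSON string.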
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/453883", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/264385/" ] }
453,906
I have a text file containing lines like this: This is a thread 139737522087680This is a thread 139737513694976This is a thread 139737505302272This is a thread 139737312270080...This is a thread 139737203164928This is a thread 139737194772224This is a thread 139737186379520 How can I be sure of the uniqueness of every line? NOTE: The goal is to test the file, not to modify it if duplicate lines are present.
[ "$(wc -l < input)" -eq "$(sort -u input | wc -l)" ] && echo all unique
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/453906", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/262070/" ] }
453,998
I have a systemd service that is a console application, meaning that it is controlled by sending commands to its stdin and it outputs information to stdout. How can I set up the systemd service so that I can connect to its stdin and give it commands at any point, then detach from this, and repeat when necessary?
I can think of multiple ways to do this. Of course, each has its own caveats. Probably the most straightforward approach would be to create a simple service with a dedicated tty, similar to this: # /etc/systemd/system/systemd-interactive-simple-tty.service[Unit]Description=Example systemd interactive simple tty serviceAfter=getty.service[Service]# https://www.freedesktop.org/software/systemd/man/systemd.exec.htmlExecStart=/usr/local/sbin/systemd-interactive.bashStandardInput=tty-forceTTYVHangup=yesTTYPath=/dev/tty20TTYReset=yes# https://www.freedesktop.org/software/systemd/man/systemd.service.htmlType=simpleRemainAfterExit=falseRestart=alwaysRestartSec=5s[Install]WantedBy=default.target The following options will work with the above simple service: conspy takes (remote) control of a text mode virtual console. This is probably your best bet (with the above tty service). It's available via most extended package repositories and is simple to use, like this: conspy 20 # hit ESC+ESC+ESC (3 times quickly, to exit) chvt works similarly to conspy but makes /dev/ttyN the foreground (local) terminal. It's part of the kbd collection and is installed by default on virtually every modern Linux distribution. That's why I thought it was worth a mention. The major caveat with chvt is that it requires you to use the attached keyboard, which is probably not what you want. For the above service example, chvt could be used like this: chvt 20 # ALT+F1 to return to /dev/tty1 reptyr uses the ptrace(2) system call to attach to a remote program (via it's PID). This is a completely different approach than conspy & chvt , but would work with the above service definition too. Just keep in mind that reptyr , by itself, doesn't really support 'detaching'. Its termcap support isn't very robust, either. Typically, reptyr is used in conjunction with screen and/or tmux because they provide a more seamless way to 'detach'; I find reptyr is a great, niche tool to move existing PIDs into a screen session or tmux window or pane. That said; I put this option here, albeit last, because it's still possible to use reptyr without screen or tmux . The major caveat is if you break out of the process (e.g ^C), rather than reptyr'ing it (again) to another tty/pty (via another shell). Sending a break to the process may cause it to abort and I'm sure you know the rest. Maybe that's OK, especially if the process isn't critical and the systemd service is configured to Restart=always as I've shown above. If the process 'breaks' then systemd will automatically restart it (another cool feature of systemd!). There are different values for Restart , too. YMMV. 
reptyr is available via most extended package repositories and can be used, like this: reptyr $(systemctl status systemd-interactive-simple-tty.service | grep Main\ PID | awk '{print $3}') # or just reptyr <pid> Another (more complex [meaning there's more that could fail]) approach would be to create a forking service using screen, similar to this: # /etc/systemd/system/systemd-interactive-forking-screen.service[Unit]Description=Example systemd interactive forking screen service[Service]# https://www.freedesktop.org/software/systemd/man/systemd.exec.htmlExecStartPre=-/usr/bin/screen -X -S ${SCREEN_TITLE} kill # [optional] prevent multiple screens with the same nameExecStart=/usr/bin/screen -dmS ${SCREEN_TITLE} -O -l /usr/bin/bash -c /usr/local/sbin/systemd-interactive.bash# https://www.freedesktop.org/software/systemd/man/systemd.service.htmlType=forkingEnvironment=SCREEN_TITLE=systemd-interactiveRemainAfterExit=falseRestart=alwaysRestartSec=5sSuccessExitStatus=1[Install]WantedBy=default.target screen is a full-screen window manager that multiplexes a physical terminal between several processes. It's quite a bit more complex than anything listed in the first, simple option. Personally, I've been using screen for a long, long time and feel comfortable enough to trust it with most things. It's an invaluable tool. The primary advantage over the above is decent termcap support (though not as good as tmux's). That just means that your backspace key, arrows, etc. will work better than with conspy or reptyr . screen is available via most base package repositories, and can be used like this: screen -r systemd-interactive # CTRL-A+D to detach A similar approach to forking screen would be to fork tmux . The systemd service for tmux is almost the same as it is for screen . But, I'm not going to detail this because, well, it's late & I'm tired. Yes, I use tmux a lot more than screen (these days). In fact, I'm writing this in neovim pane in tmux right now. But, I've still used screen for a lot longer. In my experience and opinion, tmux is overkill for something like this. Sure tmux is newer, has more features, and is a MUCH better shell multiplexer than screen but ... it's even more complex. Along with that extra complexity comes some additional instability. More important, to me at least, is that tmux crashes more often than screen. I listed screen as #2 because, if it were me, for something like that I'd probably just use #1 with conspy . Depending on your program; named pipes ... systemd services support them, too! i.e. StandardInput=/path/to/named/pipe| ... and more.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/453998", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/298630/" ] }
454,014
I followed a tutorial on GTK which used this command to generate the build flags: $ pkg-config --cflags --libs gtk+-3.0 This outputs coherent flags. From research, I have found that pkg-config searches for .pc files in /usr/lib/pkgconfig , /usr/share/pkgconfig , in their /usr/local equivalents, and in the folders indicated by the PKG_CONFIG_PATH variable. None of these folders contains the GTK file and the environment variable is not set. Where is pkg-config getting the flags?
It’s finding them in /usr/lib/x86_64-linux-gnu/pkgconfig/gtk+-3.0.pc (assuming you’re on amd64 ). On Debian, pkg-config also searches the multi-arch directory for the target, i.e. /usr/lib/$(dpkg-architecture -q DEB_TARGET_MULTIARCH)/pkgconfig .
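To see exactly where pkg-config is looking, and which package shipped the .pc file it found, these commands may help (libgtk-3-dev is the usual Debian name for the GTK 3 development package, but check on your system):

pkg-config --variable pc_path pkg-config    # print the compiled-in search path
dpkg -S gtk+-3.0.pc                         # which installed package provides the .pc file
dpkg -L libgtk-3-dev | grep '\.pc$'         # all .pc files shipped by the GTK dev package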
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/454014", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/296981/" ] }
454,026
This is another one in a long line of problems of this sort. After a recent upgrade from Fedora 27 to Fedora 28, broadcom-wl stopped working. The Broadcom adapter in question is an older BCM4312. To get this adapter to work, I usually installed broadcom-wl , akmod , akmod-wl , and all dependency packages, including kernel headers, etc. After running akmods --force or rebooting, things were usually fine... This time however, not so. lsmod | grep wl reports the driver is loaded: wl 6463488 0cfg80211 770048 1 wl After running akmods --force I'm getting no errors, the driver supposedly loads fine with modprobe wl , too. But, I still don't have a WiFi adapter visible with ip link show or iwconfig . In my quest to solve the issue of not having WiFi, I've installed unitedrpms ' repo and the broadcom-wl-dkms package from there. Still, no cigar. Not having WiFi on this laptop is most unpleasant, as being tethered to a router with a Cat5e cable is not really practical here for my application. I've retraced all steps that got me WiFi with Fedoras up to this one. However right now, I'm somewhat at a loss, and I ask for further advice what to do now to get that adapter working again. After loading wl with modprobe I'm getting this error with dmesg , which I believe is linked to my problem: [22856.976760] cfg80211: Loading compiled-in X.509 certificates for regulatory database[22856.977471] cfg80211: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'[22856.978252] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2[22856.978257] cfg80211: failed to load regulatory.db
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/454026", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1290/" ] }
454,039
I have a domain that points to the external IP of my webserver. I set up a Pi-Hole server and changed my network settings to use it, and I've verified it is working. I want to redirect the URL of my webserver, which points to the external IP, to the internal IP of the webserver. I used the command sudo pihole -a hostrecord example.com 192.168.0.12 to redirect the domain to the IP. But when I try to connect to it through ping, nslookup, and using a browser it goes to the external IP. I then did the command again, but this time with the domain misspelled. sudo pihole -a hostrecord eexample.com 192.168.0.12 This time it worked and visiting eexample.com worked as expected and it was redirected to the correct IP. I've cleared my computer's DNS cache and Pi-Hole's. Also, when I visited the correct domain on my phone (connected to the network and had mobile data turned off), which had never connected to the site before, it worked as expected. What's going on?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/454039", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/286215/" ] }
454,044
I'm using fold -w 3 to split a line into multiple 3 chars long, however with the GNU implementation, it does not work for text with multi-byte characters it seems. How can I achieve the above with sed ? I've come up with sed -r 's/^(.{0,3})(.*)/\1\n\2/g' however this only does a single replacement: echo "111222333444555666" | sed -r 's/^(.{0,3})(.*)/\1\n\2/g' 111222333444555666 Additional examples: echo "ĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄ" | sed -r 's/^(.{0,3})(.*)/\1\n\2/g' ĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄ And fold with it's corrupting behavior: echo "ĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄ" | fold -w 3 Ą��ĄĄ��ĄĄ�
Short grep approach: echo "ĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄ" | grep -Eo '.{1,3}'ĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄĄ To retain only 3-char sequences: ... | grep -Eo '.{3}'
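One caveat: this relies on running under a multi-byte (UTF-8) locale, so that . matches characters rather than bytes; under the C locale it degrades into byte-splitting, just like fold. For example (assuming your session otherwise uses a UTF-8 locale):

echo "ĄĄĄĄĄĄ" | grep -Eo '.{3}'             # character-aware: ĄĄĄ / ĄĄĄ
echo "ĄĄĄĄĄĄ" | LC_ALL=C grep -Eo '.{3}'    # byte-based: corrupted output, as with fold -w 3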
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/454044", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124109/" ] }
454,197
I am using the find command in bash, and I am trying to understand it since it is part of some code that I am using. In the code, the command in question is - find -L $RAW_DIR -mindepth 2 -maxdepth 2 -name "*.bam" -o -name "*.sam" | wc -l I have been trying to understand this command by searching its components. In essence, I think it is trying to find the number of files that end with .bam or .sam . I think -maxdepth 2 means to search for these files in this folder and its immediate subfolder. What I do not understand is what mindepth -2 does in this case. I looked up mindepth, and the explanation given everywhere is - " Do not apply any tests or actions at levels less than levels (a non-negative integer). '-mindepth 1' means process all files except the command line arguments. " To me this explanation is not very clear. Just like maxdepth -2 means search for subfolders up to a depth of 2, what does mindepth -2 correspondingly mean, in simple language? Also, if mindepth is just the opposite of maxdepth in terms of the direction (which would make intuitive sense), then how do I understand the fact that executing the command above on a folder that does have a .bam file leads to the output 0 , whereas omitting the mindepth part of the command leads to the output 1 ?
Depth 0 is the command line arguments, 1 the files contained within them, 2 the files contained within depth 1, etc. -mindepth N tells to process only files that are at depth >= N, similar to how -maxdepth M tells to process only files are at depth <= M. So if you want the files that are at depth 2, exactly, you need to use both. Your command would match $RAW_DIR/foo/bam.bam , but not $RAW_DIR/bar.bam . Try, e.g. $ mkdir -p a/b/c/d$ find ./a -maxdepth 2./a./a/b./a/b/c$ find ./a -mindepth 2./a/b/c./a/b/c/d$ find ./a -maxdepth 2 -mindepth 2./a/b/c maxdepth with a negative argument doesn't mean anything: $ find ./a -maxdepth -2find: Expected a positive decimal integer argument to -maxdepth, but got ‘-2’
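A side note on the command in the question: it happens to work without explicit grouping because, when no action is given, find wraps the whole expression in an implicit -print. As soon as an explicit action such as -exec or -print0 is added, the -o needs parentheses, so the safer habit is to group the name tests anyway:

find -L "$RAW_DIR" -mindepth 2 -maxdepth 2 \( -name '*.bam' -o -name '*.sam' \) -print | wc -l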
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/454197", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/249637/" ] }
454,289
Why does the following regular expression print valid , when the name is hdpworker01 ? name=hdpworker01[[ $name =~ worker[[:digit:]] ]] && echo valid What I try to do is print valid only if the name matches worker[0-999] . Example expected results: For name=worker01 : valid For name=hdpworker01 : no output
A regular expression is not anchored to the start or end of a string by default. This is different from e.g. filename globbing patterns. This means that the expression may match anywhere in the given string. To have your expression anchored to the start of the string, use ^worker[[:digit:]] To additionally anchor it to the end of the string and to allow for one to three digits, use ^worker[[:digit:]]{1,3}$ If you want to match worker10 but not worker01 or worker003 (no zero-filled numbers), use ^worker([0-9]|[1-9][0-9]{1,2})$
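A small test loop makes the difference easy to see (the sample names are just illustrations):

for name in worker01 hdpworker01 worker7 worker1000; do
    if [[ $name =~ ^worker[[:digit:]]{1,3}$ ]]; then
        echo "$name: valid"
    else
        echo "$name: no match"
    fi
done
# worker01: valid, hdpworker01: no match, worker7: valid, worker1000: no match (four digits)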
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/454289", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
454,318
I dragged and dropped a folder into another one by mistake in FileZilla. ~/big_folder ~/some_other_folder The folder that got moved is a very huge one. It includes hundreds of thousands of files (node_modules, small image files, a lot of folders). What is so weird is that after I release my mouse, the moving is done. The folder "big_folder" is moved into "some_other_folder". ~/some_other_folder/big_folder (there is no big_folder in ~/ after the move) Then I realized the mistake and tried to move it back, but it fails both in FileZilla and in the terminal. So now I have to cp -r to copy the files back, because there is server-side code accessing those files in ~/big_folder , and it takes forever. What should I do? BTW, here is the output from FileZilla (it's the failure of moving back): Status: Renaming '/root/big_folder' to '/root/some_other_folder/big_folder' Status: /root/big_folder -> /root/some_other_folder/big_folder Status: Renaming '/root/some_other_folder/big_folder' to '/root/big_folder' Command: mv "big_folder" "/root/big_folder" Error: mv /root/some_other_folder/big_folder /root/big_folder: received failure with description 'Failure'
If a directory is moved within the same filesystem (the same partition), then all that is needed is to rename the file path of the directory. No data apart from the directory entry for the directory itself has to be altered. When copying directories, the data for each and every file needs to be duplicated. This involves reading all the source data and writing it at the destination. Moving a directory between filesystems would involve copying the data to the destination and removing it from the source. This would take about as long time as copying (duplicating) the data within a single filesystem. If FileZilla successfully renamed the directory from ~/big_folder to ~/some_other_folder/big_folder , then I would revert that using mv ~/some_other_folder/big_folder ~/big_folder ... after first making sure that there were no directory called ~/big_folder (if there was, the move would put big_folder from some_other_folder into the ~/big_folder directory as a subfolder).
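If you want to confirm beforehand that source and destination sit on the same filesystem (so the mv is an instant rename rather than a slow copy-and-delete), comparing device numbers with GNU stat is a quick check; the paths below are the ones from the question:

stat -c '%d  %n' ~/some_other_folder/big_folder ~/
# identical numbers in the first column mean the same filesystem, so this completes almost instantly:
mv ~/some_other_folder/big_folder ~/big_folder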
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/454318", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45317/" ] }
454,470
I want to make a backup of a directory that contains thousands of sub-directories and deep sub-folder paths. In many of these directories there are a big number of files starting with . . I want to make sure that all of the . files in sub-directories and sub-sub-directories and etc. get properly copied. Is it enough to specify include="*" ? Will this cover everything? rsync -rvh --compress-level=0 --stats --progress --include ".*" user@vm:/mnt/storage8/backups ~/data/backup_of_backups/
All files are included by default, so if you want to include all files, don't pass any --include or --exclude option. If you do use patterns, rsync doesn't treat dot files specially. If you wanted to skip dot files, you'd have to explicitly use --exclude='.*' . Note that --include='.*' would only include dot files. This is a shell pattern, where . stands for itself and * means “any sequence of characters”, not a regex where . means “any character” and * means “any number of the preceding character or group”. Without any exclude directive, you still get all files, so an include directive is just pointless, but if you had some exclude directives, --include='.*' would not mean “include all files including dot files”, it would only mean “include dot files” (and on its own it wouldn't recurse into directories whose name don't start with a dot).
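Conversely, if you ever do want to skip the dot files, or simply want to verify what a given filter set would transfer before running it for real, a dry run helps; the paths below reuse the ones from the question:

rsync -rvhn user@vm:/mnt/storage8/backups ~/data/backup_of_backups/                  # -n/--dry-run: list only, everything included by default
rsync -rvh --exclude='.*' user@vm:/mnt/storage8/backups ~/data/backup_of_backups/    # this, by contrast, would skip dot files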
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/454470", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/276131/" ] }
454,553
I have a function in my .bashrc to automatically sudo to open files that aren't writable by me: vim() { if [ -w "$1" ]; then \vim "$1" else sudo env HOME="$HOME" \vim -u ~/.vimrc "$1" fi} When the file needs sudo, it works fine. When it doesn't, it recursively calls this function and uses 100% of 1 CPU until I C-C. From this answer I see there are a few options, all of which I've tried. One actually works: 'vim' "$1" #fails\vim "$1" #failscommand vim "$1" #Works! Why do the other options not work as I would expect them to? (I know this is a duplicate, but it was very hard to find my answer on SO/SE with the current question titles, so I wanted to post a question with a title that I and others could've found by google searching)
Any character in front of an alias will prevent the alias from triggering: alias ls='ls -la'ls foo.txt #will run ls -la foo.txt\ls foo.txt #will run ls foo.txt'ls' foo.txt #will run ls foo.txt'ls foo.txt' #will run ls foo.txt However, that doesn't stop functions, and you need command to reference the underlying builtin if you create a function with the same name. ls () { echo "not an alias"}alias ls='echo "an alias"'ls foo.txt #will echo "an alias"\ls foo.txt #will echo "not an alias"'ls' foo.txt #will echo "not an alias"command ls foo.txt #will actually run `ls` Explanation The problem is that the first two options can only deal with aliases. They are not special redirecting operators that can sense if you have a function or command with the same name. All they do is put something in front of the alias, as aliases only expand if they are the first part of a pipe or command alias v='sudo vim'v x.txt#automatically expands "v" to "sudo vim" and then does the rest of the command# v x.txt ==> sudo vim x.txt Bash just tries to expand the first word of a command using the list of aliases it knows about (which you can get with alias ). Aliases don't take arguments, and can only be the first word (space-separated; vim x.txt won't expand into sudo vimim x.txt using the alias above) in a command to expand properly. However, expansion never happens to things in single quotes: echo '$USER' will print out a literal $USER and not what the variable stands for. Also, bash expands \x to be x (mostly). These aren't extra added-in ways specifically to escape an alias, they're just part of how bash expansion was written. Thus, you can use these methods to let an alias shadow a command and still have access to the actual command.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/454553", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/257010/" ] }
454,590
When running sudo dd if=/dev/sda the internal PC speaker makes sounds. Also all fonts, even the prompt and typing input has different characters such as "♡" or "•". If you need to know more, comment please. What I usually do is of=/dev/null or >>/dev/null for reading performance testing and for spinning up the optical drive (with count=1 iflag=direct skip=500000 ), of which I skip to LBA 500000 to put the laser lens in the center of the data part. But I wanted to try out once what happens if I do not redirect the output anywhere, and that happened. Fun fact: The same happens to Windows too.
This is roughly what happens: Your dd command does not have an of=... argument so it sends data to stdout. And as you are running the command in a terminal the stdout of the running process is connected to the terminal. Terminals can display text and interpret control sequences . Depending on your terminal type there are sequences to change the font or the color or the position of the cursor or to ring the bell (beep) and so on. Your hard disk ( /dev/sda ) contains a lot of different data and there are surely some terminal control sequences in there by pure chance. So you are sending a lot of text & control sequences to your terminal and it dutifully tries to display and interpret it.
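If the terminal is left in a garbled state after such an accident, and for inspecting raw device contents more safely next time, commands along these lines help:

reset                                                      # reinitialise the terminal (or: stty sane; tput reset)
sudo dd if=/dev/sda bs=512 count=1 | hexdump -C | less     # view raw bytes without letting the terminal interpret them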
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/454590", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/270469/" ] }
454,686
After reading some pretty nice answers from this question , I am still fuzzy on why you would want to pretend that you are root without getting any of the benefits of actually being root. So far, what I can gather is that fakeroot is used to give ownership to a file that needs to be owned by root when it is unzipped/untarred. My question is, why can't you just do that with chown? A Google Groups discussion here points out that you need fakeroot to compile a Debian kernel (if you want to do it as an unprivileged user). My comment is that the reason you need to be root in order to compile is probably because read permissions were not set for other users. If so, isn't it a security violation that fakeroot allows for compilation (which means gcc can now read a file that was for root)? This answer here describes that the actual system calls are made with the real uid/gid of the user , so again where does fakeroot help? How does fakeroot stop unwanted privilege escalations on Linux? If fakeroot can trick tar into making a file that was owned by root, why not do something similar with SUID? From what I have gathered, fakeroot is just useful when you want to change the owner of any package files that you built to root. But you can do that with chown, so where am I lacking in my understanding of how this component is supposed to be used?
So far, what I can gather is that fakeroot is used to give ownership to a file that needs to be root when it is unzip/tar'ed. My question, is why can't you just do that with chown? Because you can’t just do that with chown , at least not as a non-root user. (And if you’re running as root, you don’t need fakeroot .) That’s the whole point of fakeroot : to allow programs which expect to be run as root to run as a normal user, while pretending that the root-requiring operations succeed. This is used typically when building a package, so that the installation process of the package being installed can proceed without error (even if it runs chown root:root , or install -o root , etc.). fakeroot remembers the fake ownership which it pretended to give files, so subsequent operations looking at the ownership see this instead of the real one; this allows subsequent tar runs for example to store files as owned by root. How does fakeroot stop unwanted privilege escalations on Linux? If fakeroot can trick tar into making a file that was owned by root, why not do something similar with SUID? fakeroot doesn’t trick tar into doing anything, it preserves changes the build wants to make without letting those changes take effect on the system hosting the build. You don’t need fakeroot to produce a tarball containing a file owned by root and suid; if you have a binary evilbinary , running tar cf evil.tar --mode=4755 --owner=root --group=root evilbinary , as a regular user, will create a tarball containing evilbinary , owned by root, and suid. However, you won’t be able to extract that tarball and preserve those permissions unless you do so as root: there is no privilege escalation here. fakeroot is a privilege de -escalation tool: it allows you to run a build as a regular user, while preserving the effects the build would have had if it had been run as root, allowing those effects to be replayed later. Applying the effects “for real” always requires root privileges; fakeroot doesn’t provide any method of acquiring them. To understand the use of fakeroot in more detail, consider that a typical distribution build involves the following operations (among many others): install files, owned by root ... archive those files, still owned by root, so that when they’re extracted, they’ll be owned by root The first part obviously fails if you’re not root. However, when running under fakeroot , as a normal user, the process becomes install files, owned by root — this fails, but fakeroot pretends it succeeds, and remembers the changed ownership ... archive those files, still owned by root — when tar (or whatever archiver is being used) asks the system what the file ownership is, fakeroot changes the answer to match the ownership it recorded earlier Thus you can run a package build without being root, while obtaining the same results you’d get if you were really running as root. Using fakeroot is safer: the system still can’t do anything your user can’t do, so a rogue installation process can’t damage your system (beyond touching your files). In Debian, the build tools have been improved so as not to require this any more, and you can build packages without fakeroot . This is supported by dpkg directly with the Rules-Requires-Root directive (see rootless-builds.txt ). To understand the purpose of fakeroot , and the security aspects of running as root or not, it might help to consider the purpose of packaging. 
When you install a piece of software from source, for use system-wide, you proceed as follows: build the software (which can be done without privileges) install the software (which needs to be done as root, or at least as a user allowed to write to the appropriate system locations) When you package a piece of software, you’re delaying the second part; but to do so successfully, you still need to “install” the software, into the package rather than onto the system. So when you package software, the process becomes: build the software (with no special privileges) pretend to install the software (again with no special privileges) capture the software installation as a package (ditto) make the package available (ditto) Now a user completes the process by installing the package, which needs to be done as root (or again, a user with the appropriate privileges to write to the appropriate locations). This is where the delayed privileged process is realised, and is the only part of the process which needs special privileges. fakeroot helps with steps 2 and 3 above by allowing us to run software installation processes, and capture their behaviour, without running as root.
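A minimal demonstration of the 'remembered' fake ownership, runnable as a normal user (the file names are arbitrary):

fakeroot sh -c '
  touch testfile
  chown root:root testfile     # succeeds, but only inside the fakeroot environment
  ls -ln testfile              # reports uid/gid 0
  tar cf test.tar testfile     # the archive records root ownership
'
tar tvf test.tar               # still lists root/root ownership
ls -ln testfile                # outside fakeroot the file is owned by your own uid again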
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/454686", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/230582/" ] }
454,694
So, I deleted my home folder (or, more precisely, all files I had write access to). What happened is that I had build="build"...rm -rf "${build}/"*...<do other things with $build> in a bash script and, after no longer needing $build , removed the declaration and all its usages -- except the rm . Bash happily expands to rm -rf /* . Yea. I felt stupid, restored the backup, redid the work I lost. Trying to move past the shame. Now, I wonder: what are techniques to write bash scripts so that such mistakes can't happen, or are at least less likely? For instance, had I written FileUtils.rm_rf("#{build}/*") in a Ruby script, the interpreter would have complained about build not being declared, so there the language protects me. What I have considered in bash, besides corralling rm (which, as many answers in related questions mention, is not unproblematic): rm -rf "./${build}/"* That would have killed my current work (a Git repo) but nothing else. A variant/parameterization of rm that requires interaction when acting outside of the current directory. (Could not find any.) Similar effect. Is that it, or are there other ways to write bash scripts that are "robust" in this sense?
set -u or set -o nounset This would make the current shell treat expansions of unset variables as an error: $ unset build$ set -u$ rm -rf "$build"/*bash: build: unbound variable set -u and set -o nounset are POSIX shell options . An empty value would not trigger an error though. For that, use $ rm -rf "${build:?Error, variable is empty or unset}"/*bash: build: Error, variable is empty or unset The expansion of ${variable:?word} would expand to the value of variable unless it's empty or unset. If it's empty or unset, the word would be displayed on standard error and the shell would treat the expansion as an error (the command would not be executed, and if running in a non-interactive shell, this would terminate). Leaving the : out would trigger the error only for an unset value, just like under set -u . ${variable:?word} is a POSIX parameter expansion . Neither of these would cause an interactive shell to terminate unless set -e (or set -o errexit ) was also in effect. ${variable:?word} causes scripts to exit if the variable is empty or unset. set -u would cause a script to exit if used together with set -e . As for your second question. There is no way to limit rm to not work outside of the current directory. The GNU implementation of rm has a --one-file-system option that stops it from recursively delete mounted filesystems, but that's as close as I believe we can get without wrapping the rm call in a function that actually checks the arguments. As a side note: ${build} is exactly equivalent to $build unless the expansion occurs as part of a string where the immediately following character is a valid character in a variable name, such as in "${build}x" .
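Putting the pieces together, a defensive pattern for the top of such a script might look like this (a sketch, not the only valid combination):

#!/bin/bash
set -u                      # treat expansions of unset variables as errors
build="build"
rm -rf "${build:?}"/*       # refuses to run if $build is ever empty or unset

The ${build:?} guard keeps protecting you even if a later edit removes the assignment, which is exactly the failure mode described in the question.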
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/454694", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17409/" ] }
454,728
I've setup ssh and router port forwarding so I can ssh into a computer on my home network when I'm not at home. Currently I have two entries in my .ssh/config file one for when I'm on my home network and one for when I'm not: Host mycomputer HostName 192.168.X.XHost mycomputerathome HostName my.no-ip.dynamic This works but I'm wondering if I can make things easier on myself. I was hoping there's a way to list multiple HostName entries such that if the first fails it falls back to the second: Host mycomputer HostName 192.168.X.X HostName my.no-ip.dynamic So that it will first try to connect to a host on my local network and if that isn't present, it'll try to connect using my no-ip dynamic host name. I have tried entering two HostNames but running ssh mycomputer just blocks doing nothing. I've turned off password authentication in favor of keys so accidentally connecting to a computer on the local network when I'm not on my home network shouldn't risk my password going anywhere it shouldn't. Is it possible to specify fallback HostNames to try if the first one doesn't work?
It's ugly, but I think you could do it using the exec criterion to Match on the exit status of a port knock e.g. Host mycomputer Match exec "nc -z 192.168.1.11 %p" HostName 192.168.1.11 Match !exec "nc -z 192.168.1.11 %p" HostName my.no-ip.dynamic Note that this can't really tell whether you're on "your" home network - just that you're on a private LAN segment with the same address range that happens to have a service listening on the same address/port.
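To check which HostName actually wins without opening a connection, ssh -G (available in reasonably recent OpenSSH releases) evaluates the configuration, including Match blocks, and prints the resulting options:

ssh -G mycomputer | grep -i '^hostname'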
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/454728", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80529/" ] }
454,847
I am looking for a short an easy way to check whether a remote path points to a directory (or a symlinked directory) or to a readable file. If it is a directory then I can scp multiple specific files from inside that directory to my local system, while if it is a file I can only scp that file. I have the file path given in scp -manner as user@host:/path/to/dir-or-file . Note: On the local system (MobaXterm on Windows) I have bash , scp and ssh available, while the remote system (on which the path sits I want to check on) is a full Linux distro.
You are looking to transfer the file (if it's a file), or parts of the directory (if it's a directory). if ! scp user@host:/path/to/dir-or-file local/paththen scp user@host:/path/to/dir-or-file/some-specific-file local/pathfi If scp is told to transfer a directory without its -r option, it will fail. You may use this to detect whether the given path on the remote host is a directory or not, and then invoke scp a second time to get specific files inside the directory if the first scp couldn't get the pathname as a file. Of course, if you want to transfer the whole directory (if it's a directory) you could just do scp -r user@host:/path/to/dir-or-file local/path which would work regardless of whether the path is for a directory or file.
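An alternative that avoids the deliberately failing first transfer is to ask the remote side whether the path is a directory, at the cost of one extra ssh round trip (a connection failure also returns non-zero, so this assumes the host is reachable):

if ssh user@host '[ -d /path/to/dir-or-file ]'; then
    scp 'user@host:/path/to/dir-or-file/some-specific-file' local/path
else
    scp user@host:/path/to/dir-or-file local/path
fi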
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/454847", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45940/" ] }
454,863
I am just getting into Linux driver development and I have a conceptual question, which I think will help other newcomers to kernel development as well. I am reading through the Linux Device Drivers book and have completed up to Ch. 3 of the book. So far, I have seen that by issuing open , close and other commands to files in the /dev folder, userspace can access kernel functions. Another method of sharing control is via files in /sys , where reading or writing sys files can communicate with the driver. I wanted to know what the use cases are for each method. Are they two approaches to the same task? Does one have any limitations compared to the other? Can someone share practical examples where one might be more useful than the other? I have read the other questions here and they explain dev and sys . While that is helpful, I wanted to get a little more in-depth knowledge of how both differ and how they should be used.
Very roughly: /dev contains device nodes, which in earlier Unix systems was the only way to interact with the kernel. There are two types of these, block devices and character devices. The corresponding API is geared to something that will allow block-based I/O (some kind of disk) or character based I/O (e.g. a serial port). /sys (and /proc ) were added later, possibly inspired by the Plan 9 OS. They provide complete directory subtrees, and the file entries in these subtrees contain text that describes the internal state of the kernel module when read, or, when written, set the internal state. So a typical application would be: You want to write a kernel driver for some kind of storage device? Use a /dev node to access the device itself, and /sys (or /proc ) entries to fine-tune how the storage gets accessed.
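A concrete illustration of the two interfaces, assuming the machine has a disk named sda:

ls -l /dev/sda                        # the leading 'b' marks a block device node under /dev
cat /sys/block/sda/queue/rotational   # a kernel attribute exposed as a tiny text file (1 = spinning disk, 0 = SSD)
cat /sys/block/sda/queue/scheduler    # lists the available I/O schedulers; writing one of the names back (as root) selects it

So /dev gives you the data path (reading and writing the device itself), while /sys gives you small text knobs for inspecting and tuning the driver.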
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/454863", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/299284/" ] }
454,891
My question is not what each field means, my question is how to determine what each field means. The man page simply states "-l use a long listing format" without describing what the long listing format entails.
At the end of the manual page for the GNU coreutils implementation of ls (as found on Linux systems, and some other Unices): SEE ALSO Full documentation at: <http://www.gnu.org/software/coreutils/ls> or available locally via: info '(coreutils) ls invocation' Following the link to the online manual, one sees a section labelled "What information is listed" , which amongst other things describes the long output format in greater detail. On most other systems, the ls manual is self-contained and describes the long format. For example the OpenBSD ls(1) manual . Whatever Unix you are on , the ls manual will hold the information you require, or it will refer to the relevant other manual or on-line document that holds the details. If it does not, you should report this as a documentation bug. Googling for what an option to a command does is hazardous, as many commands have non-standard extensions that could well be implemented differently in different Unices, or even differently depending what version of the tool happens to be installed. What you'd want to do is to read the manual on your system. If the manual is not describing exactly what an option does, or what a format is, either explicitly or by reference to some other documentation, then, as I said above, this would be considered a documentation bug.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/454891", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/298737/" ] }
454,896
I've been tuning my Linux kernel for Intel Core 2 Quad (Yorkfield) processors, and I noticed the following messages from dmesg : [ 0.019526] cpuidle: using governor menu[ 0.531691] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns[ 0.550918] intel_idle: does not run on family 6 model 23[ 0.554415] tsc: Marking TSC unstable due to TSC halts in idle PowerTop shows only states C1, C2 and C3 being used for the package and individual cores: Package | CPU 0POLL 0.0% | POLL 0.0% 0.1 msC1 0.0% | C1 0.0% 0.0 msC2 8.2% | C2 9.9% 0.4 msC3 84.9% | C3 82.5% 0.9 ms | CPU 1 | POLL 0.1% 1.6 ms | C1 0.0% 1.5 ms | C2 9.6% 0.4 ms | C3 82.7% 1.0 ms | CPU 2 | POLL 0.0% 0.1 ms | C1 0.0% 0.0 ms | C2 7.2% 0.3 ms | C3 86.5% 1.0 ms | CPU 3 | POLL 0.0% 0.1 ms | C1 0.0% 0.0 ms | C2 5.9% 0.3 ms | C3 87.7% 1.0 ms Curious, I queried sysfs and found that the legacy acpi_idle driver was in use (I expected to see the intel_idle driver): cat /sys/devices/system/cpu/cpuidle/current_driver acpi_idle Looking at the kernel source code, the current intel_idle driver contains a debug message specifically noting that some Intel family 6 models are not supported by the driver: if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL && boot_cpu_data.x86 == 6) pr_debug("does not run on family %d model %d\n", boot_cpu_data.x86, boot_cpu_data.x86_model); An earlier fork (November 22, 2010) of intel_idle.c shows anticipated support for Core 2 processors (model 23 actually covers both Core 2 Duo and Quad): #ifdef FUTURE_USE case 0x17: /* 23 - Core 2 Duo */ lapic_timer_reliable_states = (1 << 2) | (1 << 1); /* C2, C1 */#endif The above code was deleted in December 2010 commit . Unfortunately, there is almost no documentation in the source code, so there is no explanation regarding the lack of support for the idle function in these CPUs. My current kernel configuration is as follows: CONFIG_SMP=yCONFIG_MCORE2=yCONFIG_GENERIC_SMP_IDLE_THREAD=yCONFIG_ACPI_PROCESSOR_IDLE=yCONFIG_CPU_IDLE=y# CONFIG_CPU_IDLE_GOV_LADDER is not setCONFIG_CPU_IDLE_GOV_MENU=y# CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED is not setCONFIG_INTEL_IDLE=y My question is as follows: Is there a specific hardware reason that Core 2 processors are not supported by intel_idle ? Is there a more appropriate way to configure a kernel for optimal CPU idle support for this family of processors (aside from disabling support for intel_idle )?
While researching Core 2 CPU power states (" C-states "), I actually managed to implement support for most of the legacy Intel Core/Core 2 processors. The complete implementation (Linux patch) with all of the background information is documented here. As I accumulated more information about these processors, it started to become apparent that the C-states supported in the Core 2 model(s) are far more complex than those in both earlier and later processors. These are known as Enhanced C-states (or " CxE "), which involve the package, individual cores and other components on the chipset (e.g., memory). At the time the intel_idle driver was released, the code was not particularly mature and several Core 2 processors had been released that had conflicting C-state support. Some compelling information on Core 2 Solo/Duo C-state support was found in this article from 2006 . This is in relation to support on Windows, however it does indicate the robust hardware C-state support on these processors. The information regarding Kentsfield conflicts with the actual model number, so I believe they are actually referring to a Yorkfield below: ...the quad-core Intel Core 2 Extreme (Kentsfield) processor supportsall five performance and power saving technologies — Enhanced IntelSpeedStep (EIST), Thermal Monitor 1 (TM1) and Thermal Monitor 2 (TM2),old On-Demand Clock Modulation (ODCM), as well as Enhanced C States(CxE). Compared to Intel Pentium 4 and Pentium D 600, 800, and 900processors, which are characterized only by Enhanced Halt (C1) State,this function has been expanded in Intel Core 2 processors (as well asIntel Core Solo/Duo processors) for all possible idle states of aprocessor, including Stop Grant (C2), Deep Sleep (C3), and DeeperSleep (C4). This article from 2008 outlines support for per-core C-states on multi-core Intel processors, including Core 2 Duo and Core 2 Quad (additional helpful background reading was found in this white paper from Dell ): A core C-state is a hardware C-state. There are several core idlestates, e.g. CC1 and CC3. As we know, a modern state of the artprocessor has multiple cores, such as the recently released Core DuoT5000/T7000 mobile processors, known as Penryn in some circles. Whatwe used to think of as a CPU / processor, actually has multiplegeneral purpose CPUs in side of it. The Intel Core Duo has 2 cores inthe processor chip. The Intel Core-2 Quad has 4 such cores perprocessor chip. Each of these cores has its own idle state. This makessense as one core might be idle while another is hard at work on athread. So a core C-state is the idle state of one of those cores. I found a 2010 presentation from Intel that provides some additional background about the intel_idle driver, but unfortunately does not explain the lack of support for Core 2: This EXPERIMENTAL driver supersedes acpi_idle on Intel AtomProcessors, Intel Core i3/i5/i7 Processors and associated Intel Xeonprocessors. It does not support the Intel Core2 processor or earlier. The above presentation does indicate that the intel_idle driver is an implementation of the "menu" CPU governor, which has an impact on Linux kernel configuration (i.e., CONFIG_CPU_IDLE_GOV_LADDER vs. CONFIG_CPU_IDLE_GOV_MENU ). The differences between the ladder and menu governors are succinctly described in this answer . Dell has a helpful article that lists C-state C0 to C6 compatibility: Modes C1 to C3 work by basically cutting clock signals used inside theCPU, while modes C4 to C6 work by reducing the CPU voltage. 
"Enhanced"modes can do both at the same time. Mode Name CPUsC0 Operating State All CPUsC1 Halt 486DX4 and aboveC1E Enhanced Halt All socket LGA775 CPUsC1E — Turion 64, 65-nm Athlon X2 and Phenom CPUsC2 Stop Grant 486DX4 and aboveC2 Stop Clock Only 486DX4, Pentium, Pentium MMX, K5, K6, K6-2, K6-IIIC2E Extended Stop Grant Core 2 Duo and above (Intel only)C3 Sleep Pentium II, Athlon and above, but not on Core 2 Duo E4000 and E6000C3 Deep Sleep Pentium II and above, but not on Core 2 Duo E4000 and E6000; Turion 64C3 AltVID AMD Turion 64C4 Deeper Sleep Pentium M and above, but not on Core 2 Duo E4000 and E6000 series; AMD Turion 64C4E/C5 Enhanced Deeper Sleep Core Solo, Core Duo and 45-nm mobile Core 2 Duo onlyC6 Deep Power Down 45-nm mobile Core 2 Duo only From this table (which I later found to be incorrect in some cases), it appears that there were a variety of differences in C-state support with the Core 2 processors (Note that nearly all Core 2 processors are Socket LGA775, except for Core 2 Solo SU3500, which is Socket BGA956 and Merom/Penryn processors. "Intel Core" Solo/Duo processors are one of Socket PBGA479 or PPGA478). An additional exception to the table was found in this article : Intel’s Core 2 Duo E8500 supports C-states C2 and C4, while the Core 2Extreme QX9650 does not. Interestingly, the QX9650 is a Yorkfield processor (Intel family 6, model 23, stepping 6). For reference, my Q9550S is Intel family 6, model 23 (0x17), stepping 10, which supposedly supports C-state C4 (confirmed through experimentation). Additionally, the Core 2 Solo U3500 has an identical CPUID (family, model, stepping) to the Q9550S but is available in a non-LGA775 socket, which confounds interpretation of the above table. Clearly, the CPUID must be used at least down to the stepping in order to identify C-state support for this model of processor, and in some cases that may be insufficient (undetermined at this time). The method signature for assigning CPU idle information is: #define ICPU(model, cpu) \{ X86_VENDOR_INTEL, 6, model, X86_FEATURE_ANY, (unsigned long)&cpu } Where model is enumerated in asm/intel-family.h . Examining this header file, I see that Intel CPUs are assigned 8-bit identifiers that appear to match the Intel family 6 model numbers: #define INTEL_FAM6_CORE2_PENRYN 0x17 From the above, we have Intel Family 6, Model 23 (0x17) defined as INTEL_FAM6_CORE2_PENRYN . This should be sufficient for defining idle states for most of the Model 23 processors, but could potentially cause issues with QX9650 as noted above. So, minimally, each group of processors that has a distinct C-state set would need to be defined in this list. Zagacki and Ponnala, Intel Technology Journal 12 (3):219-227, 2008 indicate that Yorkfield processors do indeed support C2 and C4. They also seem to indicate that the ACPI 3.0a specification supports transitions only between C-states C0, C1, C2 and C3, which I presume may also limit the Linux acpi_idle driver to transitions between that limited set of C-states. However, this article indicates that may not always be the case: Bear in mind that is the ACPI C state, not the processor one, so ACPIC3 might be HW C6, etc. Also of note: Beyond the processor itself, since C4 is a synchronized effort betweenmajor silicon components in the platform, the Intel Q45 ExpressChipset achieves a 28-percent power improvement. The chipset I'm using is indeed an Intel Q45 Express Chipset. 
The Intel documentation on MWAIT states is terse but confirms the BIOS-specific ACPI behavior: The processor-specific C-states defined in MWAIT extensions can map toACPI defined C-state types (C0, C1, C2, C3). The mapping relationshipdepends on the definition of a C-state by processor implementation andis exposed to OSPM by the BIOS using the ACPI defined _CST table. My interpretation of the above table (combined with a table from Wikipedia , asm/intel-family.h and the above articles) is: Model 9 0x09 ( Pentium M and Celeron M ): Banias: C0, C1, C2, C3, C4 Model 13 0x0D ( Pentium M and Celeron M ): Dothan, Stealey: C0, C1, C2, C3, C4 Model 14 0x0E INTEL_FAM6_CORE_YONAH ( Enhanced Pentium M , Enhanced Celeron M or Intel Core ): Yonah ( Core Solo , Core Duo ): C0, C1, C2, C3, C4, C4E/C5 Model 15 0x0F INTEL_FAM6_CORE2_MEROM (some Core 2 and Pentium Dual-Core ): Kentsfield, Merom, Conroe, Allendale ( E2xxx/E4xxx and Core 2 Duo E6xxx, T7xxxx/T8xxxx , Core 2 Extreme QX6xxx , Core 2 Quad Q6xxx ): C0, C1, C1E, C2, C2E Model 23 0x17 INTEL_FAM6_CORE2_PENRYN ( Core 2 ): Merom-L/Penryn-L: ? Penryn ( Core 2 Duo 45-nm mobile ): C0, C1, C1E, C2, C2E, C3, C4, C4E/C5, C6 Yorkfield ( Core 2 Extreme QX9650 ): C0, C1, C1E, C2E?, C3 Wolfdale/Yorkfield ( Core 2 Quad , C2Q Xeon , Core 2 Duo E5xxx/E7xxx/E8xxx , Pentium Dual-Core E6xxx , Celeron Dual-Core ): C0, C1, C1E, C2, C2E, C3, C4 From the amount of diversity in C-state support within just the Core 2 line of processors, it appears that a lack of consistent support for C-states may have been the reason for not attempting to fully support them via the intel_idle driver. I would like to fully complete the above list for the entire Core 2 line. This is not really a satisfying answer, because it makes me wonder how much unnecessary power is used and excess heat has been (and still is) generated by not fully utilizing the robust power-saving MWAIT C-states on these processors. Chattopadhyay et al. 2018, Energy Efficient High Performance Processors: Recent Approaches for Designing Green High Performance Computing is worth noting for the specific behavior I'm looking for in the Q45 Express Chipset: Package C-state (PC0-PC10) - When the compute domains, Core andGraphics (GPU) are idle, the processor has an opportunity foradditional power savings at uncore and platform levels, for example,flushing the LLC and power-gating the memory controller and DRAM IO,and at some state, the whole processor can be turned off while itsstate is preserved on always-on power domain. 
As a test, I inserted the following at linux/drivers/idle/intel_idle.c line 127: static struct cpuidle_state conroe_cstates[] = { { .name = "C1", .desc = "MWAIT 0x00", .flags = MWAIT2flg(0x00), .exit_latency = 3, .target_residency = 6, .enter = &intel_idle, .enter_s2idle = intel_idle_s2idle, }, { .name = "C1E", .desc = "MWAIT 0x01", .flags = MWAIT2flg(0x01), .exit_latency = 10, .target_residency = 20, .enter = &intel_idle, .enter_s2idle = intel_idle_s2idle, },// {// .name = "C2",// .desc = "MWAIT 0x10",// .flags = MWAIT2flg(0x10),// .exit_latency = 20,// .target_residency = 40,// .enter = &intel_idle,// .enter_s2idle = intel_idle_s2idle, }, { .name = "C2E", .desc = "MWAIT 0x11", .flags = MWAIT2flg(0x11), .exit_latency = 40, .target_residency = 100, .enter = &intel_idle, .enter_s2idle = intel_idle_s2idle, }, { .enter = NULL }};static struct cpuidle_state core2_cstates[] = { { .name = "C1", .desc = "MWAIT 0x00", .flags = MWAIT2flg(0x00), .exit_latency = 3, .target_residency = 6, .enter = &intel_idle, .enter_s2idle = intel_idle_s2idle, }, { .name = "C1E", .desc = "MWAIT 0x01", .flags = MWAIT2flg(0x01), .exit_latency = 10, .target_residency = 20, .enter = &intel_idle, .enter_s2idle = intel_idle_s2idle, }, { .name = "C2", .desc = "MWAIT 0x10", .flags = MWAIT2flg(0x10), .exit_latency = 20, .target_residency = 40, .enter = &intel_idle, .enter_s2idle = intel_idle_s2idle, }, { .name = "C2E", .desc = "MWAIT 0x11", .flags = MWAIT2flg(0x11), .exit_latency = 40, .target_residency = 100, .enter = &intel_idle, .enter_s2idle = intel_idle_s2idle, }, { .name = "C3", .desc = "MWAIT 0x20", .flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED, .exit_latency = 85, .target_residency = 200, .enter = &intel_idle, .enter_s2idle = intel_idle_s2idle, }, { .name = "C4", .desc = "MWAIT 0x30", .flags = MWAIT2flg(0x30) | CPUIDLE_FLAG_TLB_FLUSHED, .exit_latency = 100, .target_residency = 400, .enter = &intel_idle, .enter_s2idle = intel_idle_s2idle, }, { .name = "C4E", .desc = "MWAIT 0x31", .flags = MWAIT2flg(0x31) | CPUIDLE_FLAG_TLB_FLUSHED, .exit_latency = 100, .target_residency = 400, .enter = &intel_idle, .enter_s2idle = intel_idle_s2idle, }, { .name = "C6", .desc = "MWAIT 0x40", .flags = MWAIT2flg(0x40) | CPUIDLE_FLAG_TLB_FLUSHED, .exit_latency = 200, .target_residency = 800, .enter = &intel_idle, .enter_s2idle = intel_idle_s2idle, }, { .enter = NULL }}; at intel_idle.c line 983: static const struct idle_cpu idle_cpu_conroe = { .state_table = conroe_cstates, .disable_promotion_to_c1e = false,};static const struct idle_cpu idle_cpu_core2 = { .state_table = core2_cstates, .disable_promotion_to_c1e = false,}; at intel_idle.c line 1073: ICPU(INTEL_FAM6_CORE2_MEROM, idle_cpu_conroe),ICPU(INTEL_FAM6_CORE2_PENRYN, idle_cpu_core2), After a quick compile and reboot of my PXE nodes, dmesg now shows: [ 0.019845] cpuidle: using governor menu[ 0.515785] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns[ 0.543404] intel_idle: MWAIT substates: 0x22220[ 0.543405] intel_idle: v0.4.1 model 0x17[ 0.543413] tsc: Marking TSC unstable due to TSC halts in idle states deeper than C2[ 0.543680] intel_idle: lapic_timer_reliable_states 0x2 And now PowerTOP is showing: Package | CPU 0POLL 2.5% | POLL 0.0% 0.0 msC1E 2.9% | C1E 5.0% 22.4 msC2 0.4% | C2 0.2% 0.2 msC3 2.1% | C3 1.9% 0.5 msC4E 89.9% | C4E 92.6% 66.5 ms | CPU 1 | POLL 10.0% 400.8 ms | C1E 5.1% 6.4 ms | C2 0.3% 0.1 ms | C3 1.4% 0.6 ms | C4E 76.8% 73.6 ms | CPU 2 | POLL 0.0% 0.2 ms | C1E 1.1% 3.7 ms | C2 0.2% 0.2 ms | C3 3.9% 1.3 ms | C4E 93.1% 26.4 ms 
| CPU 3 | POLL 0.0% 0.7 ms | C1E 0.3% 0.3 ms | C2 1.1% 0.4 ms | C3 1.1% 0.5 ms | C4E 97.0% 45.2 ms I've finally accessed the Enhanced Core 2 C-states, and it looks like there is a measurable drop in power consumption - my meter on 8 nodes appears to be averaging at least 5% lower (with one node still running the old kernel), but I'll try swapping the kernels out again as a test. An interesting note regarding C4E support - My Yorktown Q9550S processor appears to support it (or some other sub-state of C4), as evidenced above! This confuses me, because the Intel datasheet on the Core 2 Q9000 processor (section 6.2) only mentions C-states Normal (C0), HALT (C1 = 0x00), Extended HALT (C1E = 0x01), Stop Grant (C2 = 0x10), Extended Stop Grant (C2E = 0x11), Sleep/Deep Sleep (C3 = 0x20) and Deeper Sleep (C4 = 0x30). What is this additional 0x31 state? If I enable state C2, then C4E is used instead of C4. If I disable state C2 (force state C2E) then C4 is used instead of C4E. I suspect this may have something to do with the MWAIT flags, but I haven't yet found documentation for this behavior. I'm not certain what to make of this: The C1E state appears to be used in lieu of C1, C2 is used in lieu of C2E and C4E is used in lieu of C4. I'm uncertain if C1/C1E, C2/C2E and C4/C4E can be used together with intel_idle or if they are redundant. I found a note in this 2010 presentation by Intel Labs Pittsburgh that indicates the transitions are C0 - C1 - C0 - C1E - C0, and further states: C1E is only used when all the cores are in C1E I believe that is to be interpreted as the C1E state is entered on other components (e.g. memory) only when all cores are in the C1E state. I also take this to apply equivalently to the C2/C2E and C4/C4E states (Although C4E is referred to as "C4E/C5" so I'm uncertain if C4E is a sub-state of C4 or if C5 is a sub-state of C4E. Testing seems to indicate C4/C4E is correct). I can force C2E to be used by commenting out the C2 state - however, this causes the C4 state to be used instead of C4E (more work may be required here). Hopefully there aren't any model 15 or model 23 processors that lack state C2E, because those processors would be limited to C1/C1E with the above code. Also, the flags, latency and residency values could probably stand to be fine-tuned, but just taking educated guesses based on the Nehalem idle values seems to work fine. More reading will be required to make any improvements. I tested this on a Core 2 Duo E2220 ( Allendale ), a Dual Core Pentium E5300 ( Wolfdale ), Core 2 Duo E7400 , Core 2 Duo E8400 ( Wolfdale ), Core 2 Quad Q9550S ( Yorkfield ) and Core 2 Extreme QX9650 , and I have found no issues beyond the afore-mentioned preference for state C2/C2E and C4/C4E. Not covered by this driver modification: The original Core Solo / Core Duo ( Yonah , non Core 2) are family 6, model 14. This is good because they supported the C4E/C5 (Enhanced Deep Sleep) C-states but not the C1E/C2E states and would need their own idle definition. The only issues that I can think of are: Core 2 Solo SU3300/ SU3500 (Penryn-L) are family 6, model 23 and will be detected by this driver. However, they are not Socket LGA775 so they may not support the C1E Enhanced Halt C-state. Likewise for the Core 2 Solo ULV U2100/U2200 ( Merom-L ). However, the intel_idle driver appears to choose the appropriate C1/C1E based on hardware support of the sub-states. Core 2 Extreme QX9650 (Yorkfield) reportedly does not support C-state C2 or C4. 
I have confirmed this by purchasing a used Optiplex 780 and QX9650 Extreme processor on eBay. The processor supports C-states C1 and C1E. With this driver modification, the CPU idles in state C1E instead of C1, so there is presumably some power savings. I expected to see C-state C3, but it is not present when using this driver, so I may need to look into this further.

I managed to find a slide from a 2009 Intel presentation on the transitions between C-states (i.e., Deep Power Down):

In conclusion, it turns out that there was no real reason for the lack of Core 2 support in the intel_idle driver. It is clear now that the original stub code for "Core 2 Duo" only handled C-states C1 and C2, which would have been far less efficient than the acpi_idle function which also handles C-state C3. Once I knew where to look, implementing support was easy. The helpful comments and other answers were much appreciated, and if Amazon is listening, you know where to send the check.

This update has been committed to github. I will e-mail a patch to the LKML soon.

Update: I also managed to dig up a Socket T/LGA775 Allendale (Conroe) Core 2 Duo E2220, which is family 6, model 15, so I added support for that as well. This model lacks support for C-state C4, but supports C1/C1E and C2/C2E. This should also work for other Conroe-based chips (E4xxx/E6xxx) and possibly all Kentsfield and Merom (non Merom-L) processors.

Update: I finally found some MWAIT tuning resources. This Power vs. Performance writeup and this Deeper C states and increased latency blog post both contain some useful information on identifying CPU idle latencies. Unfortunately, this only reports those exit latencies that were coded into the kernel (but, interestingly, only those hardware states supported by the processor):

# cd /sys/devices/system/cpu/cpu0/cpuidle
# for state in `ls -d state*` ; do echo c-$state `cat $state/name` `cat $state/latency` ; done
c-state0/ POLL 0
c-state1/ C1 3
c-state2/ C1E 10
c-state3/ C2 20
c-state4/ C2E 40
c-state5/ C3 20
c-state6/ C4 60
c-state7/ C4E 100

Update: An Intel employee recently published an article on intel_idle detailing MWAIT states.
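For anyone reproducing this on their own hardware, a quick sanity check after booting the patched kernel is to ask sysfs which cpuidle driver is active and which states it registered - a small sketch, nothing here is specific to the patch beyond the state names it should report:

# Which idle driver the kernel is using (should say intel_idle, not acpi_idle)
cat /sys/devices/system/cpu/cpuidle/current_driver

# The states registered for CPU 0
grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/name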
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/454896", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/120445/" ] }
454,912
variable_list="
any:any:-a -b -c any
one:one:-c -b -f -m mul
mul:one:-c -b -f -m mul"

for f in `echo $variable_list`
do
    c1=`echo $f | cut -d':' -f1`;
    c2=`echo $f | cut -d':' -f2`;
    c3=`echo $f | cut -d':' -f3-`;
    echo "c1==>$c1 and c2==>$c2 and c3==>$c3";
    #exit 0;   ### I made a mistake here
done;

Expected output:

c1==>any and c2==>any and c3==>-a -b -c any
c1==>one and c2==>one and c3==>-c -b -f -m mul
c1==>mul and c2==>one and c3==>-c -b -f -m mul

Edit 1: I realize I was careless when testing the script: on the first iteration I had left in an exit 0 to test only the first line, since the real list is much longer. That part was working as it should. Can I achieve the expected output while keeping variable_list in this form, without modifying the format/way of input? (I am using bash.)
Your issue is with the spaces in the data. The shell will split the string into words on all spaces and the for loop will iterate over those words. (For a solution that does not replace variable_list with an array, see the very end of this answer.)

Instead, use a proper array:

variable_list=(
    "any:any:-a -b -c any"
    "one:one:-c -b -f -m mul"
    "mul:one:-c -b -f -m mul"
)

for var in "${variable_list[@]}"; do
    c1=$( cut -d':' -f1 <<<"$var" )
    c2=$( cut -d':' -f2 <<<"$var" )
    c3=$( cut -d':' -f3- <<<"$var" )

    printf 'c1==>%s and c2==>%s and c3==>%s\n' "$c1" "$c2" "$c3"
done

Using an array ensures that you can access each individual set of variables as its own array entry without relying on them being delimited by newlines or some other character. The code is also using "here-strings" in bash to send the string to cut (rather than echo and a pipe).

Or, much more efficiently,

variable_list=(
    "any:any:-a -b -c any"
    "one:one:-c -b -f -m mul"
    "mul:one:-c -b -f -m mul"
)

for var in "${variable_list[@]}"; do
    IFS=':' read -r c1 c2 c3 <<<"$var"

    printf 'c1==>%s and c2==>%s and c3==>%s\n' "$c1" "$c2" "$c3"
done

Setting IFS to a colon for read will make read split the input on colons (rather than on spaces, tabs and newlines).

Note that all the quotation above is significant. Without the double quotes, the shell would perform word splitting and filename globbing on the values of variable_list, var and the three c variables.

Related:

- When is double-quoting necessary?
- Why is printf better than echo?
- Have backticks (i.e. `cmd`) in *sh shells been deprecated?

If all you're after is that specific output, then you may cheat a bit:

variable_list=(
    "any:any:-a -b -c any"
    "one:one:-c -b -f -m mul"
    "mul:one:-c -b -f -m mul"
)

( IFS=':'; set -f; printf 'c1==>%s and c2==>%s and c3==>%s\n' ${variable_list[@]} )

This runs the printf in a subshell so that setting IFS and the -f (noglob) shell option does not affect the rest of the script. Setting IFS to a colon here will make the shell expand the unquoted variable_list array into three sets of three arguments for printf. printf will print the first three according to its format string and then reuse that format for the next set of three arguments, until all arguments have been processed. The set -f prevents the unquoted expansion of variable_list from triggering filename globbing, should there be any filename globbing characters in there.

Using a newline-delimited string:

variable_list="any:any:-a -b -c any
one:one:-c -b -f -m mul
mul:one:-c -b -f -m mul"

while IFS= read -r var; do
    IFS=':' read -r c1 c2 c3 <<<"$var"
    printf 'c1==>%s and c2==>%s and c3==>%s\n' "$c1" "$c2" "$c3"
done <<<"$variable_list"

This reads the data from the string as if it came from a file.

Related: Understanding "IFS= read -r line"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/454912", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81640/" ] }
454,924
I'm trying to get a game server started using SRCDS but whenever I try to get the dependencies using this command:

sudo dpkg --add-architecture i386; sudo apt update; sudo apt -f install mailutils postfix curl wget file bzip2 gzip unzip bsdmainutils python util-linux ca-certificates binutils bc tmux lib32gcc1 libstdc++6 libstdc++6:i386 lib32tinfo5

I get the following error:

Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 lib32gcc1 : Depends: libc6-i386 (>= 2.2.4) but it is not going to be installed
 lib32tinfo5 : Depends: libc6-i386 (>= 2.16) but it is not going to be installed
 libstdc++6:i386 : Depends: libc6:i386 (>= 2.18) but it is not going to be installed
                   Depends: libgcc1:i386 (>= 1:4.2) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

I've tried using fixes around the internet, such as apt-get -f install, which didn't work. I still get the same error. I tried updating my sources.list to the following, but that also didn't work.

#------------------------------------------------------------------------------#
# OFFICIAL DEBIAN REPOS
#------------------------------------------------------------------------------#

###### Debian Main Repos
deb http://deb.debian.org/debian/ stable main contrib non-free
deb-src http://deb.debian.org/debian/ stable main contrib non-free

deb http://deb.debian.org/debian/ stable-updates main contrib non-free
deb-src http://deb.debian.org/debian/ stable-updates main contrib non-free

deb http://deb.debian.org/debian-security stable/updates main
deb-src http://deb.debian.org/debian-security stable/updates main

deb http://ftp.debian.org/debian stretch-backports main
deb-src http://ftp.debian.org/debian stretch-backports main

Anyone have any ideas?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/454924", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/299342/" ] }
454,928
Ubuntu 16.04

bash -version
GNU bash, version 4.4.0(1)-release (x86_64-unknown-linux-gnu)

I would like to grep for 2 patterns and then have them listed side by side. At the moment, this is what I have:

root@tires ~ # grep -e tire_id -e appID /path/to/*/vehicle/production.json
/path/to/000001_000002/vehicle/production.json:    "tire_id": "1305436516186552",
/path/to/000001_000002/vehicle/production.json:    "appID": "1164562920689523",
/path/to/000001_000079/vehicle/production.json:    "tire_id": "1815123428733289",
/path/to/000001_000079/vehicle/production.json:    "appID": "18412365908966538",
/path/to/000001_000088/vehicle/production.json:    "tire_id": "138477888324",

This is what I would like to have although anything similar would work actually.

root@tires ~ # grep -e tire_id -e appID /path/to/*/vehicle/production.json
/path/to/000001_000002/vehicle/production.json: tire_id: 1305436516186552, appID: 1164562920689523
/path/to/000001_000079/vehicle/production.json: tire_id: 1815123428733289, appID: 18412365908966538

File example here:

{
  "socal": "https://xxx.xxxxx.xxx",
  "ip": "xxx.xxx.xxx.xxx",
  "tire_id": "213275925375485",
  "client": {
    "platform": "xx",
    "clientID": "xxxxx",
    "serviceID": "xxxxx",
    "service_id": XXXX,
    "vendor": "default"
  },
  "locale": "en_US",
  "cdc": {
    "appID": "233262274090443",
    "isdel": "ORdiZBMAQS2ZBCnTwZDZD",
  },
  "attachments": {
    "output": "attachments",
    "public": false,
  },
}
The right way, with the jq tool, for valid JSON documents:

Sample file1.json:

{
  "socal": "https://xxx.xxxxx.xxx",
  "ip": "xxx.xxx.xxx.xxx",
  "tire_id": "213275925375485",
  "client": {
    "platform": "xx",
    "clientID": "xxxxx",
    "serviceID": "xxxxx",
    "service_id": "XXXX",
    "vendor": "default"
  },
  "locale": "en_US",
  "cdc": {
    "appID": "233262274090443",
    "isdel": "ORdiZBMAQS2ZBCnTwZDZD"
  },
  "attachments": {
    "output": "attachments",
    "public": false
  }
}

Sample file2.json:

{
  "socal": "https://xxx.xxxxx.xxx",
  "ip": "xxx.xxx.xxx.xxx",
  "tire_id": "1305436516186552",
  "client": {
    "platform": "xx",
    "clientID": "xxxxx",
    "serviceID": "xxxxx",
    "service_id": "XXXX",
    "vendor": "default"
  },
  "locale": "en_US",
  "cdc": {
    "appID": "1164562920689523",
    "isdel": "ORdiZBMAQS2ZBCnTwZDZD"
  },
  "attachments": {
    "output": "attachments",
    "public": false
  }
}

And the solution itself:

jq -r 'input_filename + " tire_id: \(.tire_id) appID: \(.cdc.appID)"' file*.json

The output:

file1.json tire_id: 213275925375485 appID: 233262274090443
file2.json tire_id: 1305436516186552 appID: 1164562920689523
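If you would rather have delimited output for further processing, the same fields can be emitted as CSV with jq's @csv filter - a small variation on the command above, shown only as a sketch:

jq -r '[input_filename, .tire_id, .cdc.appID] | @csv' file*.json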
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/454928", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210789/" ] }
454,943
I've installed Lubuntu (not Ubuntu) in a VMWare Workstation VM. Version 18.04. I'm getting a startup warning message I always get on my Ubuntu installs: piix4_smbus: 000:00:07.3 SMBus Host Controller not enabled! However, my usual fix of adding blacklist i2c-piix4 to blacklist.conf doesn't appear to work on the Lubuntu install. Any idea why it doesn't work in Lubuntu and/or how to remove the warning from Lubuntu startup?
After blacklisting piix4_smbus , run update-initramfs -u . I don't remember off the top of my head which storage controller drivers are used in a VMware virtual machine, but ata_piix is a very likely candidate. If the initramfs generator only does simple string matching on module names, it might be picking up i2c-piix4 in addition to the ata_piix storage driver and including it into initramfs. And so it could get loaded before the system can see the root filesystem and its /etc/modprobe.d/blacklist.conf . Updating the initramfs will include the files in /etc/modprobe.d/ into initramfs, so piix4_smbus should then be blacklisted during the initramfs boot phase too.
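To confirm that this is what is happening, you can check whether the module was bundled into the current initramfs before and after regenerating it - a sketch, assuming Lubuntu's initramfs-tools (which provides lsinitramfs) is in use:

# Was i2c-piix4 copied into the initramfs?
lsinitramfs /boot/initrd.img-$(uname -r) | grep -i piix

# Rebuild the initramfs so the blacklist entry in /etc/modprobe.d/ is honoured early
sudo update-initramfs -u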
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/454943", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/299368/" ] }
454,957
I have an Anaconda Python virtual environment set up, and if I run my project while that virtual environment is activated, everything runs great. But I have a cronjob configured to run it every hour. I piped the output to a log because it wasn't running correctly.

crontab -e:

10 * * * * bash /work/sql_server_etl/src/python/run_parallel_workflow.sh >> /home/etlservice/cronlog.log 2>&1

I get this error in the cronlog.log:

Traceback (most recent call last):
  File "__parallel_workflow.py", line 10, in <module>
    import yaml
ImportError: No module named yaml

That is indicative of the cronjob somehow running the file without the virtual environment activated. To remedy this I added a line to the /home/user/.bash_profile file:

conda activate ~/anaconda3/envs/sql_server_etl/

Now when I log in, the environment is activated automatically. However, the problem persists.

I tried one more thing. I changed the cronjob (and I also tried this in the bash file the cronjob runs) to explicitly activate the environment each time it runs, but to no avail:

10 * * * * conda activate ~/anaconda3/envs/sql_server_etl/ && bash /work/sql_server_etl/src/python/run_parallel_workflow.sh >> /home/etlservice/cronlog.log 2>&1

Of course, nothing I've tried has fixed it. I really know nothing about Linux, so maybe there's something obvious I need to change. So, is there any way to specify that the cronjob should run under a virtual environment?
Posted a working solution (on Ubuntu 18.04) with detailed reasoning on SO. The short form is:

1. Copy the snippet appended by Anaconda in ~/.bashrc (at the end of the file) to a separate file ~/.bashrc_conda

As of the Anaconda 2020.02 installation, the snippet reads as follows:

# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/home/USERNAME/anaconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "/home/USERNAME/anaconda3/etc/profile.d/conda.sh" ]; then
        . "/home/USERNAME/anaconda3/etc/profile.d/conda.sh"
    else
        export PATH="/home/USERNAME/anaconda3/bin:$PATH"
    fi
fi
unset __conda_setup
# <<< conda initialize <<<

Make sure that:

- The path /home/USERNAME/anaconda3/ is correct.
- The user running the cronjob has read permissions for ~/.bashrc_conda (and no other user can write to this file).

2. In crontab -e add lines to run cronjobs on bash and to source ~/.bashrc_conda

Run crontab -e and insert the following before the cronjob:

SHELL=/bin/bash
BASH_ENV=~/.bashrc_conda

3. In crontab -e include at the beginning of the cronjob conda activate my_env; as in the example

Example of an entry for a script that would execute at 12:30 each day on the Python interpreter within the conda environment:

30 12 * * * conda activate my_env; python /path/to/script.py; conda deactivate

And that's it. You may want to check from time to time that the snippet in ~/.bashrc_conda is up to date in case conda updates its snippet in ~/.bashrc.
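To verify the BASH_ENV approach is working before wiring up the real job, a throwaway cron entry that just records which interpreter gets resolved can help - a sketch, assuming the environment is named my_env:

* * * * * conda activate my_env; which python >> /tmp/cron_conda_check.log 2>&1; conda deactivate

If the log shows the python under ~/anaconda3/envs/my_env/bin, the activation is being picked up; remember to remove the test entry afterwards.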
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/454957", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/299389/" ] }
454,962
A volume intended for use by my user was created at OS installation with root ownership and my user lacks write permissions. Some solutions I've read about include: changing ownership of the mount point with chown adding group write permissions with chmod adding user or users mount option in /etc/fstab . What is the best practice for this situation, and what are the implications of each approach?
If it's in /etc/fstab , then it will mount at boot. As only root has write permissions,you'll need to modify it so that the user has those permissions. The best way is: chown -R user /mnt/point where user represents your user name (or user ID),and, obviously, /mnt/point represents the mount point of your file system. If the root group has write permission as well and you want another group to have it then you can use: chown -R user : group /mnt/point If the root group doesn't have write access, then you can use chmod next: chmod -R 775 /mnt/point That will give write permission to the group if it's not there and read and execute to everyone else. You can modify the 775 to give whatever permissions you want to everyone else as that will be specified by the third number. To better cover what you asked in your comment below: You can add the user option to /etc/fstab , but that only allows the file system to be mounted by any user. It won't change the permissions on the file system,which is why you need chown and/or chmod . You can go ahead and add the user option so that a regular user without sudo can mount it should it be unmounted. For practicality, the best option here is chown as it gives the user the needed permissions instantly. The chmod command can be used afterwards if the permissions need to be modified for others.
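If you also want the filesystem to be mountable (and unmountable) by your regular user, the user option mentioned above goes in the options field of the corresponding /etc/fstab entry - an illustrative line only, with the device, mount point and filesystem type as placeholders:

/dev/sdb1  /mnt/point  ext4  defaults,user  0  2

Note that this only affects who may run mount; the chown and/or chmod steps above are still what give you write access to the files.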
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/454962", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/211182/" ] }
455,013
I'd like to have a file on my computer that stores a particular token (rather than having them just exported to the shell env). As such, I'd like that the token can only be read by sudo, so access to it requires authorisation. How can I write a file that can only be read by sudo?
Note that sudo is not synonymous with root/superuser. In fact, the sudo command lets you execute commands as virtually any user, as specified by the security policy:

$ sudo whoami
root
$ sudo -u bob whoami
bob

I assume you meant to create a file that only the root user can read:

# Create the file
touch file

# Change permissions of the file
# '600' means only the owner has read and write permissions
chmod 600 file

# Change owner of the file
sudo chown root:root file

When you need to edit the content of the file:

# Replace 'nano' with your preferred editor
sudo nano file

See how only root can read the file:

$ cat file
cat: file: Permission denied
$ sudo cat file
foo bar baz
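The create/chmod/chown sequence can also be collapsed into one step with install, which creates the file with the requested owner and mode directly - a sketch; the path /etc/mytoken is just a made-up example:

# Create an empty file owned by root, readable and writable only by root
sudo install -m 600 -o root -g root /dev/null /etc/mytoken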
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/455013", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50703/" ] }
455,125
I'm trying to execute the code below, but when I try to use my function in the if statement I get the error -bash: [: too many arguments. Why is it happening? Thank you in advance!

notContainsElement () {
  local e match="$1"
  shift
  for e; do [[ "$e" == "$match" ]] && return 1; done
  return 0
}

list=( "pears" "apples" "bananas" "oranges" )
blacklist=( "oranges" "apples" )
docheck=1

for fruit in "${list[@]}"
do
  if [ notContainsElement "$fruit" "${blacklist[@]}" -a $docheck = 1 ]
  then
    echo $fruit
  fi
done
When using if [ ... ] you are actually using the [ utility (which is the same as test but requires that the last argument is ]). [ does not know how to run your function; it expects strings. Fortunately, you don't need to use [ at all here (for the function at least):

if [ "$docheck" -eq 1 ] && notContainsElement "$fruit" "${blacklist[@]}"; then
    ...
fi

Note that I'm also checking the integer first, so that we may avoid calling the function at all if $docheck is not 1.

This works because if takes an arbitrary command and decides what to do from the exit status of that command. Here we use a [ ... ] test together with a call to your function, with && in-between, creating a compound command. The compound command's exit status would be true if both the [ ... ] test and the function returned zero as their exit statuses, signalling success.

As a style note, I would not have the function test whether the array does not contain the element but whether it does contain the element, and then

if [ "$docheck" -eq 1 ] && ! contains "$fruit" "${blacklist[@]}"; then
    ...

Having a function test a negative will mess up logic in cases where you do want to test whether the array contains the element (if ! notContainsElement ...).
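For completeness, a positive contains helper along the lines suggested above could look like this - only a sketch, mirroring the structure of the asker's original function:

contains () {
    local e match="$1"
    shift
    for e; do
        [[ "$e" == "$match" ]] && return 0
    done
    return 1
}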
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/455125", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/299520/" ] }
455,129
TL;DR;I want to use Linux Mint GRUB instead of Kali Linux. Is there a way to disable the GRUB of Kali Linux and used GRUB of Linux Mint? Linux Mint GRUB is installed on a separate partion (/dev/sdb4)Kali Linux GRUB is installed together in its / directory (/dev/sdb2) I recently installed Kali Linux on my Laptop with Linux Mint installed. However, after installation, the GRUB used is Kali (determined by the kali background image) and my old GRUB (Linux Mint) was overwritten(?). I also checked the partition where I installed Kali Linux and found a separate boot folder. During installation of Kali, option to install new boot loader was not given. I understand there are commands like grub-install or something that I can run on my host system, in this case Linux Mint. However, I also read that during installtion of GRUB it writes something in the master boot record. $ lsblkNAME SIZE RO TYPE MOUNTPOINTsda 698.7G 0 disk # extra disk for files└─sda1 698.7G 0 part /media/user/Shared #Samba sharesdb 489.1G 0 disk # main disk├─sdb1 4G 0 part [SWAP] # this is shared between 2 distro├─sdb2 200G 0 part # Partition for Kali Linux├─sdb3 200G 0 part / # Partition for Linux Mint├─sdb4 976M 0 part /boot/efi # EFI partition (from Mint installation)└─sdb5 84.1G 0 part /media/user/DPartition # partition shared between distro I want to use the GRUB installed in /dev/sdb4 instead of the GRUB installed in the /dev/sdb2:/boot. Is there a way to disable or delete the GRUB in Kali Linux? Any help will do. Thanks!!! P.S. I don't have a live CD right now, I just used a borrowed flash disk to boot and install linux from USB. -edit-I can boot from Mint and Kali using the GRUB of Kali linux. I understand since I am using UEFI I can just remove the GRUB from Kali and my laptop will just boot from /dev/sdb4 however I cannot risk it since I don't have a live cd to use right now. Is there a way to do this without using a live cd?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/455129", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/299518/" ] }
455,137
On my CentOS, a yum update brings up the following: (6/38): iwl1000-firmware-39.31.5.1-62.el7_39.31.5.1-62.2.el7_5.noarch.drpm (7/38): iwl105-firmware-18.168.6.1-62.el7_18.168.6.1-62.2.el7_5.noarch.drpm (8/38): iwl135-firmware-18.168.6.1-62.el7_18.168.6.1-62.2.el7_5.noarch.drpm (9/38): iwl2000-firmware-18.168.6.1-62.el7_18.168.6.1-62.2.el7_5.noarch.drpm (10/38): iwl2030-firmware-18.168.6.1-62.el7_18.168.6.1-62.2.el7_5.noarch.drpm (11/38): iwl3160-firmware-22.0.7.0-62.el7_22.0.7.0-62.2.el7_5.noarch.drpm etc. These being so-called "firmware packages". For example, let's find a few of them that are installed: rpm --query --all | grep firmware and then query its info: rpm --query --info iwl105-firmware-18.168.6.1-62.2.el7_5.noarch and we get: Summary : Firmware for Intel(R) Centrino Wireless-N 105 Series AdaptersDescription :This package contains the firmware required by the iwlagn driverfor Linux to support the iwl105 hardware. Usage of the firmwareis subject to the terms and conditions contained inside the providedLICENSE file. Please read it carefully. Ok, well. I don't even have that kind of hardware, as this is a VM. So, question: What do the firmware packages actually do? Are they "one-shot" installs that run an opaque executable (immediately? on next boot?) which checks whether the hardware exists, pumps binary code into the hardware's flash if the hardware is there (maybe while asking the user; on Windows at least, hardware flashing is always fraught with DOS windows that pop up, EULAs that have to been clicked through, and progress bars that have to be endured), and then marks the package as "installed". Do they modify the initramfs so that a binary blob is loaded by a kernel module or something happens at the next boot?
Loadable firmware is typically not "one-shot" installs that are written to flash on the device. The firmware is loaded into volatile storage on the device and needs to be done each time the host computer is turned on. The device does not function before the firmware is loaded. The firmware can be written to RAM on the device, in which case it contains code and data for the processor on the device, but it can also be a bit stream defining the logic of a field-programmable logic array (FPGA), or some combination of both. On the other hand, firmware in flash memory is typically pre-programmed on the devices, and only needs to be rewritten if there is an update to the firmware from the manufacturer. This is typically done via other mechanisms, like a separate executable that is run by the user. There are a few reasons why manufacturers want to use RAM instead of flash memory. First of all, it makes it possible to design a single version of the hardware, but at the same time deliver several versions of the product (for different market areas, for example). If the product is expected to be field-upgraded frequently, it may be easier to handle the firmware upgrades this way than go through the trouble of creating a program for upgrading the flash memory on the device. This program should have a nice user interface and be designed to be as user friendly as possible, since it is typically intended run by the end user of the product. Some devices with flash storage often run code from RAM anyway, and they just copy the contents of the flash to RAM when the device is started, in which case the flash chip just sits idle most of the time and is an extra expense.
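If you are curious which firmware blobs a particular driver would request at runtime, modinfo can list them, and the blobs shipped by the firmware packages sit under /usr/lib/firmware - a sketch; iwlwifi is just an example module and will not be loaded in a VM without that hardware:

# Firmware files a module declares it may load
modinfo -F firmware iwlwifi

# Firmware files installed by the firmware packages
ls /usr/lib/firmware | head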
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/455137", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52261/" ] }
455,155
I have this scenario:

first_arg="$1";

if foo; then
  shift 1;
  node foo.js "$@"
elif bar; then
  shift 1;
  node bar.js "$@"
elif baz; then
  shift 1;
  node baz.js "$@"
else
  node default.js "$@"
fi

I would like to turn the above into this:

first_arg="$1";
shift 1;

if foo; then
  node foo.js "$@"
elif bar; then
  node bar.js "$@"
elif baz; then
  node baz.js "$@"
else
  unshift 1;
  node default.js "$@"
fi

but I am not sure if there is an operator like unshift, which I just made up. One workaround might be this:

node default.js "$first_arg" "$@"

but when I tried that, I got weird behavior.
You could save your arguments in an array:

args=( "$@" )    # use double quotes

shift 1

if foo; then
  node foo.js "$@"
elif bar; then
  node bar.js "$@"
elif baz; then
  node baz.js "$@"
else
  node default.js "${args[@]}"
fi
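If you want something closer to a literal "unshift", you can also push the saved first argument back onto the positional parameters with set -- in the fallback branch - a sketch of just that branch:

else
  # restore the argument removed by the earlier shift
  set -- "$first_arg" "$@"
  node default.js "$@"
fi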
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/455155", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/113238/" ] }
455,156
Can we use read() , write() on a directory just like on any other file in Unix/Linux? I have a confusion here because directories are also considered as files.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/455156", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/299542/" ] }
455,189
Installed Ubuntu with SIP disabled on MacBook 2017 - 0 issues, booted in seconds. I have been building it out and created a problem while trying to make the WiFi work. At some point (it was very late) a combination of 3 things happened: I enabled SIP I attempted to install Broadcom 4360x drivers I reinstalled Touch pad Drivers from GitHub repository These are the items : [ +0.001007] input: Apple Inc. iBridge as /devices/pci0000:00/0000:00:14.0/usb1/1-3/1-3:1.2/0003:05AC:8600.0001/input/input7[ +0.057765] hid-generic 0003:05AC:8600.0001: input,hidraw0: USB HID v1.01 Keyboard [Apple Inc. iBridge] on usb-0000:00:14.0-3/input2[ +0.000196] hid-generic 0003:05AC:8600.0002: hiddev0,hidraw1: USB HID v1.01 Device [Apple Inc. iBridge] on usb-0000:00:14.0-3/input3[ +0.000230] PKCS#7 signature not signed with a trusted key[ +0.000002] PKCS#7 signature not signed with a trusted key[ +0.000288] appletb: Touchbar usb device added; dev=0003:05AC:8600.0001[ +0.000004] appletb: releasing current hid driver 'hid-generic' and [ +0.002784] ACPI: Dynamic OEM Table Load:[ +0.000010] ACPI Exception: AE_NO_MEMORY, SSDT 0xFFFF948D2BD80800 Table is duplicated (20170831/tbdata-562)[ +0.000000] No Local Variables are initialized for Method [GCAP][ +0.000000] Initialized Arguments for Method [GCAP]: (1 arguments defined for method invocation) located a UUID issue where the boot was taking 2 minutes after installing Kali on same device. It was related to the swap file ID changing. Kali loads in 2.2 seconds on same device, also Debian, all green down the line.
PKCS#7 signature not signed with a trusted key This message is typically coming from a piece of hardware. In your case it's likely the Nvidia graphics card that's emitting this. This issue is discussed here in more detail, where 2 users were actually experiencing this issue, titled: PKCS Signature error/warnings running dmesg on Ubuntu Mate 18.04 . If you search the internet you'll come across dozens of people that are also experiencing this issue. From the looks of it this issue is still ongoing: After Upgrade to Mate 18.04 boot problems - not trusted key NOTE: The issue seems to be associated with Ubuntu 18.04. Source of message Further searches for this message led me to this source code: ubuntu-xenial-kernel/certs/system_keyring.c . These lines are the ones emitting this: if (!trusted) { pr_err("PKCS#7 signature not signed with a trusted key\n"); ret = -ENOKEY;} Further searches will take you to sites that touch on signed kernel modules, such as this one - MODSIGN: Use PKCS#7 for module signatures (2) Makes use of the PKCS#7 facility to provide module signatures. Secure boot Based on this AU Q&A titled: How to install module.ko module without kernel signature or kernel rebuild in Ubuntu 16.04? it was suggested you could either disable secure boot or sign the modules. You either disable secure boot or sign the kernel module. To disable secure boot you can follow directions in this Ubuntu wiki page titled: Testing Secure Boot . References Kernel module signing facility
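A quick way to check whether Secure Boot is actually what is triggering the message - a sketch; mokutil comes from the mokutil package on Ubuntu:

# Report the Secure Boot state as seen by the firmware
mokutil --sb-state

# Or check what the kernel logged at boot
dmesg | grep -i 'secure boot'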
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/455189", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/299550/" ] }
455,221
I'm using a installation scripts that needs setcap and it's not found. What package contains it? libcap2 is already installed.
Searching for setcap

I believe setcap is contained in this package libcap2-bin. I found this by googling for "debian setcap" which led me to this man page: https://manpages.debian.org/jessie/libcap2-bin/setcap.8.en.html

The title of the man page tells you which package it resides in:

/ jessie / libcap2-bin / setcap(8)

Now that we "think" we know the package's name we can search for it: https://packages.debian.org/jessie/libcap2-bin

If you scroll down to the bottom of that page you'll see all the various architectures. Click the link for amd64: https://packages.debian.org/jessie/amd64/libcap2-bin/filelist

Found it

And there's setcap:

File list of package libcap2-bin in jessie of architecture amd64

/sbin/capsh
/sbin/getcap
/sbin/getpcaps
/sbin/setcap
/usr/share/doc/libcap2-bin/README.Debian
/usr/share/doc/libcap2-bin/changelog.Debian.gz
/usr/share/doc/libcap2-bin/changelog.gz
/usr/share/doc/libcap2-bin/copyright
/usr/share/man/man1/capsh.1.gz
/usr/share/man/man1/getpcaps.1.gz
/usr/share/man/man5/capability.conf.5.gz
/usr/share/man/man8/getcap.8.gz
/usr/share/man/man8/pam_cap.8.gz
/usr/share/man/man8/setcap.8.gz
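You can get the same answer without leaving the terminal - a sketch; apt-file has to be installed and its index updated first:

# If setcap is already present, ask dpkg which package owns it
dpkg -S "$(command -v setcap)"

# Otherwise, search the archive index for the file
sudo apt-get install apt-file && sudo apt-file update
apt-file search bin/setcap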
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/455221", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3853/" ] }
455,223
You can not use pivot_root on an initramfs rootfs, you will get Invalid Argument. You can only pivot real filesystems. Indeed: Fedora Linux 28 - this uses the dracut initramfs. Boot into an initramfs shell, by adding rd.break as an option on the kernel command line. cd /sysroot usr/bin/pivot_root . mnt -> pivot_root fails with "Invalid argument", corresponding to an errno value of EINVAL . There is no explanation for this in man 2 pivot_root : EINVAL put_old is not underneath new_root . Why does it fail? And as the next commenter replied, "Then how would Linux exit early user space?"
Unlike the initrd , Linux does not allow to unmount the initramfs . Apparently this helped keep the kernel code simple. Instead of pivot_root , you can use the switch_root command. It implements the following procedure. Notice that switch_root deletes all the files on the old root, to free the initramfs memory, so you need to be careful where you run this command. initramfs is rootfs: you can neither pivot_root rootfs, nor unmount it. Instead delete everything out of rootfs to free up the space (find -xdev / -exec rm '{}' ';'), overmount rootfs with the new root (cd /newmount; mount --move . /; chroot .), attach stdin/stdout/stderr to the new /dev/console, and exec the new init. Note the shell commands suggested are only rough equivalents to the C code. The commands won't really work unless they are all built in to your shell, because the first command deletes all the programs and other files from the initramfs :-). Rootfs is a special instance of ramfs (or tmpfs, if that's enabled), which is always present in 2.6 systems. You can't unmount rootfs for approximately the same reason you can't kill the init process; rather than having special code to check for and handle an empty list, it's smaller and simpler for the kernel to just make sure certain lists can't become empty. https://github.com/torvalds/linux/blob/v4.17/Documentation/filesystems/ramfs-rootfs-initramfs.txt
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/455223", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29483/" ] }
455,261
I'm working with ROS, which has been installed on my Ubuntu correctly. To run the ROS, we have to first source /opt/ros/kinetic/setup.bash then execute roscore . If I execute roscore without source setup.bash , the command roscore can't be found. Now, I want to execute the ROS while the system starts up. I've read this link: https://askubuntu.com/questions/814/how-to-run-scripts-on-start-up It seems that I only need to create a custom service file and put it into /etc/systemd/system/ . But still I'm not sure what to do because I need to source setup.bash to setup some necessary environmental variables before executing roscore . Is it possible to set environmental variables in the service file? For my need, I have to set these environmental variables not only for the execution of roscore but also for the whole system. I have another idea, which is that I set these environmental variables in /etc/profile and write a service file only for the command roscore , will it work?
Normally systemd services have only a limited set of environment variables, and things in /etc/profile, /etc/profile.d and bashrc-related files are not set. To add environment variables for a systemd service you have different possibilities. The examples that follow assume that roscore is at /opt/ros/kinetic/bin/roscore, since systemd services must have the binary or script configured with a full path.

One possibility is to use the Environment option in your systemd service; a simple systemd service would be as follows.

[root@localhost ~]# cat /etc/systemd/system/ros.service
[Unit]
Description=ROS Kinetic
After=sshd.service

[Service]
Type=simple
Environment="One=1" "Three=3"
Environment="Two=2"
Environment="Four=4"
ExecStart=/opt/ros/kinetic/bin/roscore

[Install]
WantedBy=multi-user.target

You can also put all the environment variables into a file that can be read with the EnvironmentFile option in the systemd service.

[root@localhost ~]# cat /etc/systemd/system/ros.env
One=1
Three=3
Two=2
Four=4

[root@localhost ~]# cat /etc/systemd/system/ros.service
[Unit]
Description=ROS Kinetic
After=sshd.service

[Service]
Type=simple
EnvironmentFile=/etc/systemd/system/ros.env
ExecStart=/opt/ros/kinetic/bin/roscore

[Install]
WantedBy=multi-user.target

Another option would be to make a wrapper script for your ros binary and call that wrapper script from the systemd service. The script needs to be executable. To ensure that, run chmod 755 /opt/ros/kinetic/bin/roscore.startup after creating that file.

[root@localhost ~]# cat /opt/ros/kinetic/bin/roscore.startup
#!/bin/bash
source /opt/ros/kinetic/setup.bash
roscore

[root@localhost ~]# cat /etc/systemd/system/ros.service
[Unit]
Description=ROS Kinetic
After=sshd.service

[Service]
Type=simple
ExecStart=/opt/ros/kinetic/bin/roscore.startup

[Install]
WantedBy=multi-user.target

Note that you need to run systemctl daemon-reload after you have edited the service file to make the changes active. To enable the service on system boot, you have to enter systemctl enable ros.

I am not familiar with the roscore binary and it might be necessary to change Type= from simple (which is the default and normally not needed) to forking in the first two examples.

For normal logins, you could copy or symlink /opt/ros/kinetic/setup.bash to /etc/profile.d/ros.sh, which should be sourced on normal logins.
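After editing the unit, a quick check that the variables actually reach the service can save some guessing - a sketch, using the ros service name from the examples above:

sudo systemctl daemon-reload
sudo systemctl restart ros

# Show the environment settings systemd will apply to the service
systemctl show ros -p Environment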
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/455261", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/145824/" ] }
455,276
I'm trying to set up a chroot environment with only bash in it. This is what I have so far:

[root@free]# tree .
.
├── bin -> usr/bin/
├── dev
├── etc
│   ├── bash.bash_logout
│   ├── bash.bashrc
│   ├── inputrc
│   └── profile
├── lib -> usr/lib/
├── lib64 -> usr/lib64/
├── proc
├── sys
└── usr
    ├── bin
    │   └── bash
    ├── lib
    │   ├── libc.so
    │   ├── libc.so.6
    │   ├── libdl.so
    │   ├── libdl.so.2
    │   ├── libncursesw.so.6
    │   ├── libreadline.so
    │   ├── libreadline.so.7
    │   ├── libreadline.so.7.0
    │   ├── libtinfo.so
    │   └── libtinfo.so.6
    └── lib64
        └── ld-linux-x86-64.so.2

11 directories, 16 files

ldd lists the following for bash:

[root@free]# ldd /bin/bash
        linux-vdso.so.1 (0x00007ffd388a3000)
        libreadline.so.7 => /usr/lib/libreadline.so.7 (0x00007fa6e0baa000)
        libdl.so.2 => /usr/lib/libdl.so.2 (0x00007fa6e09a6000)
        libc.so.6 => /usr/lib/libc.so.6 (0x00007fa6e05ea000)
        libncursesw.so.6 => /usr/lib/libncursesw.so.6 (0x00007fa6e037d000)
        /lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007fa6e10d8000)

Entering the chroot environment already works (I have no name! is fine since I have not yet copied the passwd file):

[root@free jail]# chroot .
[I have no name!@jail]#

The problem is that if I type p e backspace w d, the line will look like:

[I have no name!@jail]#pe wd

Executing it with Enter will execute pwd and print /. Also the arrow keys (left and right) act weird, printing multiple characters but not moving the cursor: p d left w leads to pdwd. This does not happen in bash outside of the chroot.

How can I fix this? Did I forget to copy a library or something? Is it a libreadline problem (I already copied etc/inputrc)? Or could it be a libncursesw problem?
I took a look at a script to generate minimal chroots and noticed that you're missing the /usr/share/terminfo/ directory, which is used by libcurses and deals with terminal command sequences. In addition to some other files that will likely be necessary ( /etc/resolv.conf , etc.), that's what I'd look at trying.
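A minimal way to test that theory - a sketch; xterm is just an example, copy whichever entry matches the TERM you actually use:

# From outside the jail: provide a terminfo entry for your terminal type
mkdir -p usr/share/terminfo/x
cp /usr/share/terminfo/x/xterm* usr/share/terminfo/x/

# Then enter the chroot with a matching TERM
TERM=xterm chroot .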
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/455276", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/299599/" ] }
455,284
Someone told me that it's possible to group machines you want to shutdown together, and that you can turn them off using the standard shutdown command in bash. How would one go about setting this up?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/455284", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3570/" ] }
456,294
When I debug an executable program with arguments arg1 arg2 with gdb, I perform the following sequence:

gdb
file ./program
run arg1 arg2
bt
quit

How can I do the same from a single command line in a shell script?
You can pass commands to gdb on the command line with the option -ex. You need to repeat this for each command. This can be useful when your program needs to read stdin, so you don't want to redirect it. E.g., for od -c:

echo abc | gdb -ex 'break main' -ex 'run -c' -ex bt -ex cont -ex quit od

So in particular for your question, you can use:

gdb -ex 'run arg1 arg2' -ex bt -ex quit ./program
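If this is meant to run unattended (no interactive prompt left at the end), gdb's batch mode combined with --args is convenient - a sketch; both are standard gdb options:

# Run the program under gdb, print a backtrace if it stops, then exit
gdb --batch -ex run -ex bt --args ./program arg1 arg2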
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/456294", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/226971/" ] }
456,320
I use a CentOS shared server environment with Bash.

ll "$HOME"/public_html/cron_daily/ brings:

./
../
-rwxr-xr-x 1 user group 181 Jul 11 11:32 wp_cli.sh*

I don't know why the filename has an asterisk at the end. I don't recall adding it, and when I tried to change it I got this output:

[~/public_html]# mv cron_daily/wp_cli.sh* cron_daily/wp_cli.sh
+ mv cron_daily/wp_cli.sh cron_daily/wp_cli.sh
mv: `cron_daily/wp_cli.sh' and `cron_daily/wp_cli.sh' are the same file

This error might indicate why my cPanel cron job failed:

Did I do anything wrong when changing the file or when running the cPanel cron command? Because both operations seem to fail.
The asterisk is not actually part of the filename. You are seeing it because the file is executable and your alias for ll includes the -F flag: -F Display a slash ('/') immediately after each pathname that is a directory, an asterisk ('*') after each that is executable, an at sign ('@') after each symbolic link, an equals sign (`=') after each socket, a percent sign ('%') after each whiteout, and a vertical bar ('|') after each that is a FIFO. As Kusalananda mentioned you can't glob all scripts in a directory with cron like that. With run-parts, you can call "$HOME"/public_html/cron_daily/ to execute all scripts in the directory (not just .sh) or loop through them as mentioned in this post .
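If the goal of the cron entry is "run every script in that directory", a small loop in the script the cron job calls avoids relying on globbing in the cron command itself - a sketch:

# Run every .sh file in the directory from a single cron job
for f in "$HOME"/public_html/cron_daily/*.sh; do
    bash "$f"
done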
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/456320", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273994/" ] }
456,328
I want to find three patterns in a list. I tried typing $ pip3 list | grep -ei foo -ei bar -ei baz but the shell throws a broken pipe error and a large Traceback . How do I grep for multiple patterns passed from a list that is piped to grep ?
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/456328", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/289865/" ] }