source_id | question | response | metadata |
---|---|---|---|
224,098 | My colleague ran grep | crontab . After that, all jobs disappeared. It looks like he was trying to run crontab -l . So what happened after running the command grep | crontab ? Can anyone explain? | crontab can install a new crontab for the invoking user (or for a named user, when run as root) by reading it from STDIN. That is what happened in your case: grep without any arguments prints an error message on STDERR as usual, while its STDOUT stays empty. You piped that empty STDOUT into the STDIN of crontab, so an empty crontab was installed and your jobs were gone. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/224098",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129224/"
]
} |
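A quick way to see this behavior safely, and to guard against it, is to back up the crontab before experimenting. A minimal sketch (the backup path is just an example; on most implementations an empty STDIN installs an empty crontab):

```bash
# save the current crontab before doing anything risky
crontab -l > ~/crontab.backup

# this reproduces the accident: empty stdin installs an empty crontab
true | crontab

crontab -l                  # now reports an empty (or no) crontab

# restore from the backup
crontab ~/crontab.backup
```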
224,105 | I don't seem to be able to pass an environment variable to a chroot:
$ sudo apt-get install debootstrap dchroot
$ sudo debootstrap trusty mychroot
$ sudo chroot mychroot /bin/bash -c "MY_VAR=5; echo ${MY_VAR}"
$ | Use single quotes, so that the outer shell does not expand ${MY_VAR} (to an empty string) before chroot even runs: $ sudo chroot mychroot /bin/bash -c 'MY_VAR=5; echo ${MY_VAR}' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/224105",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32951/"
]
} |
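The difference is purely about which shell expands ${MY_VAR}; a small demonstration, no chroot needed:

```bash
unset MY_VAR

# double quotes: the OUTER shell expands ${MY_VAR} (empty) before bash -c runs
bash -c "MY_VAR=5; echo ${MY_VAR}"    # prints an empty line

# single quotes: the INNER shell expands it, after the assignment took effect
bash -c 'MY_VAR=5; echo ${MY_VAR}'    # prints 5
```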
224,113 | I just bought a USB wi-fi dongle using the MediaTek MT7610U (RT2860) chipset, and the drivers from their website, http://www.mediatek.com/en/downloads1/downloads/mt7610u-usb/ , won't compile, failing with this error:
ktweed@PC-BL100TA ~/Desktop/drivers $ make
make -C tools
make[1]: Entering directory `/home/ktweed/Desktop/drivers/tools'
gcc -g bin2h.c -o bin2h
make[1]: Leaving directory `/home/ktweed/Desktop/drivers/tools'
/home/ktweed/Desktop/drivers/tools/bin2h
chipset = mt7650u
chipset = mt7630u
chipset = mt7610u
cp -f os/linux/Makefile.6 /home/ktweed/Desktop/drivers/os/linux/Makefile
make -C /lib/modules/3.2.0-89-generic/build SUBDIRS=/home/ktweed/Desktop/drivers/os/linux modules
make[1]: Entering directory `/lib/modules/3.2.0-89-generic/build'
make[1]: *** No rule to make target `modules'. Stop.
make[1]: Leaving directory `/lib/modules/3.2.0-89-generic/build'
make: *** [LINUX] Error 2
I tried using files modified according to this site, with the same error: http://earthwithsun.com/questions/738096/how-to-install-mediatek-mt7610u-rt2860-driver
lsusb output:
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 002: ID 0408:13fd Quanta Computer, Inc.
Bus 001 Device 003: ID 148f:7601 Ralink Technology, Corp.
The Ralink one is the USB wifi. ifconfig output:
eth0 Link encap:Ethernet HWaddr 00:0d:5e:ed:d3:4e UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:42
lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:5538 errors:0 dropped:0 overruns:0 frame:0 TX packets:5538 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:609333 (609.3 KB) TX bytes:609333 (609.3 KB)
I'm running Linux Mint 13 XFCE 32-bit. I've tried everything, and nothing works. If anyone could help, that would be great. Thanks! Edit: The CD that came with the USB wifi has drivers for a Realtek 8188CUS, and those don't work either. Does this help? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/224113",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108372/"
]
} |
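The error "No rule to make target `modules'" inside /lib/modules/$(uname -r)/build typically means the kernel build tree (the headers package) is missing or incomplete. A hedged first step on a Debian/Mint system, before touching the vendor driver itself, might be:

```bash
# install the toolchain and the headers matching the running kernel
sudo apt-get install build-essential linux-headers-$(uname -r)

# confirm the kernel build tree now exists
ls /lib/modules/$(uname -r)/build

# then retry the vendor driver build
make clean && make
```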
224,156 | I installed Debian Jessie with default partitioning on my SSD drive. My current disk partitioning looks like this: As I have 16GB of RAM, I assume I don't need swap . But since I have other disk drives, I may create a swapfile on one of the other drives instead. Can you tell me what steps I should take to remove the swap partition correctly and permanently, so it no longer occupies disk space? I wish to delete the swap partition as I currently have only a 128GB SSD. Here is what I tried, rebooting each time; each of these steps was either not permanent or did not do anything: Using the swapoff utility: swapoff --all Using the GParted utility: right-clicking the swap partition and clicking Swapoff. Commenting out the swap partition's UUID in the following file: /etc/fstab Commenting out the swap partition's UUID in the following file: /etc/initramfs-tools/conf.d/resume Running these commands in the end (both in this and the opposite order): update-grub; update-initramfs -u | If you have GParted open, close it. Its Swapoff feature does not appear to be permanent. Open a terminal and become root ( su ); if you have sudo enabled, you may for example use sudo -i (see man sudo for all options): sudo -i Turn off the particular swap partition and/or all of the swaps: swapoff --all Make 100% sure the particular swap partition is off: cat /proc/swaps Open this file in a text editor you are skilled in, e.g. nano if unsure: nano /etc/fstab Comment out / remove the swap partition's UUID, e.g.: # UUID=1d3c29bb-d730-4ad0-a659-45b25f60c37d none swap sw 0 0 Open this file in a text editor you are skilled in, e.g. nano if unsure: nano /etc/initramfs-tools/conf.d/resume Comment out / remove the previously identified swap partition's UUID, e.g.: # RESUME=UUID=1d3c29bb-d730-4ad0-a659-45b25f60c37d Don't close the terminal, as you will need it later anyway. Note: The next steps differ depending on whether you rely on the CLI or a GUI. GUI: Open up GParted, either from the menu, or more conveniently from the terminal we have open: gparted If you don't have it installed, you may do so; afterwards run the previous command again: apt-get install gparted Choose your drive from the top-right menu. As GParted reactivates the swap partition upon launch, you will have to right-click the particular swap partition and click Swapoff -> this is applied immediately. Delete the swap partition with right click -> Delete. You must apply the change now. Resize your main / other partition with right click -> Resize/Move. You must apply the change now. Back in the terminal, let's recreate the boot images: update-initramfs -u -k all Update GRUB: update-grub You may reboot now if you wish to test that the machine boots up. Encryption note: If your swap partition is encrypted, then you also need to comment out the related line in /etc/crypttab , otherwise cryptsetup will keep you waiting for 90 seconds during boot. Thanks frank for this addition. CLI: I will check in VMs whether my solution works, then I will share it. In the meantime, see this answer . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/224156",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
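Condensed into a single terminal session, the non-GParted part of the procedure looks roughly like this sketch. The UUID is the example value from the answer; substitute your own from /etc/fstab or blkid:

```bash
sudo -i                                  # become root
swapoff --all                            # disable all active swap
cat /proc/swaps                          # verify: the list should now be empty

# comment out the swap entries (back up the files first)
cp /etc/fstab /etc/fstab.bak
sed -i 's|^UUID=1d3c29bb-d730-4ad0-a659-45b25f60c37d|#&|' /etc/fstab
sed -i 's|^RESUME=|#&|' /etc/initramfs-tools/conf.d/resume

update-initramfs -u -k all               # rebuild the boot images
update-grub                              # refresh the boot loader config
```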
224,186 | Given the name of a binary (or any other program), how do I find out which package provides this binary? Note: assuming one uses apt / dpkg for package management. Edit: In addition to the correct answer below, I'd like to add some further information. In the question above I was assuming that the corresponding package was installed . If this is not the case, there is a package called apt-file which can do the job anyway. Searching for the mysqldump tool can be done with $ apt-file --regexp search .*mysqldump$ Resulting in:
mariadb-client-10.0: /usr/bin/mysqldump
mysql-client-5.5: /usr/bin/mysqldump
This solution was found here, but I thought it could be of use to mention it here as well. | You want dpkg . Specifically, the -S option will find which package owns a file. An example:
$ dpkg -S /usr/bin/whereis
util-linux: /usr/bin/whereis
The example shows that util-linux is the package which contains /usr/bin/whereis . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/224186",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93225/"
]
} |
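In practice you often have only the command name, not its full path. Combining the two approaches in a small sketch:

```bash
# find the package that owns a command on $PATH (package must be installed)
dpkg -S "$(command -v mysqldump)"

# if the package is NOT installed, fall back to apt-file
sudo apt-get install apt-file && sudo apt-file update
apt-file search bin/mysqldump
```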
224,227 | I made an alias to save some keystrokes when working with systemd: $ alias sctl='systemctl' However, this breaks tab completion for the subcommands. Is it possible to alias a command without breaking tab completion? | First find out which completion function is used for the systemctl command: complete | grep " systemctl$" The output looks like this: complete -F _functionname systemctl Then use: complete -F _functionname sctl to register the function for the completion of your alias. Now, when you type sctl <tab><tab> , the same suggestions as when you type systemctl will appear. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/224227",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53613/"
]
} |
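Put together in ~/.bashrc it might look like the sketch below. The names _completion_loader and _systemctl are assumptions: bash-completion loads completions lazily on many systems, and the actual function name should be verified with complete -p systemctl (after pressing Tab once on systemctl):

```bash
# ~/.bashrc
alias sctl='systemctl'

# force-load the lazy completion for systemctl, if bash-completion provides it
_completion_loader systemctl 2>/dev/null

# reuse systemctl's completion function for the alias
# (replace _systemctl with whatever `complete -p systemctl` reports)
complete -F _systemctl sctl
```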
224,229 | I'm trying to do some calculation in a shell script with CPU usage, which is a floating-point number. But when I subtract this number I get an error. See the following code and error. Code:
#!/bin/sh
CPU_IDLE=98.67
echo $CPU_IDLE
CPU_USAGE=$(( 100 - $CPU_IDLE ))
echo $CPU_USAGE
Error:
./poc.sh: line 14: 100 - 98.67 : syntax error: invalid arithmetic operator (error token is ".67 ") | Neither bash nor ksh can perform floating point arithmetic ( ksh93 supports it, if I remember correctly). I recommend switching to zsh or running an external tool like bc :
$ CPU_IDLE=98.67
$ echo "$CPU_IDLE"
98.67
$ CPU_USAGE=$( bc <<< "100 - $CPU_IDLE" )
$ echo "$CPU_USAGE"
1.33 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/224229",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80908/"
]
} |
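If bc is not installed, awk (required by POSIX and virtually always present) does the same job; a minimal sketch:

```bash
CPU_IDLE=98.67

# pass the shell value in with -v instead of interpolating into the program text
CPU_USAGE=$(awk -v idle="$CPU_IDLE" 'BEGIN { printf "%.2f\n", 100 - idle }')
echo "$CPU_USAGE"   # 1.33
```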
224,240 | The only references to i915 I can find are indeed to the Linux kernel driver for the Intel chips. Intel just seems to call them HD Graphics whatever. Intel 915 seems to refer to some Pentium 4 chipsets, but they are unrelated to the current graphics architecture. | Well, that P4 chipset is the reason for the driver name. Starting with i810 , Intel outsourced the driver to Tungsten Graphics, but commissioned it as an open source one for Linux. The first 915 chipset was released in June 2004 and soon after 1 , a driver for this chipset was added to the linux kernel (see also the 2.6.9-rc2 changelog). The driver name was, you guessed it, i915 :
+#define DRIVER_AUTHOR "Tungsten Graphics, Inc."
+
+#define DRIVER_NAME "i915"
+#define DRIVER_DESC "Intel Graphics"
+#define DRIVER_DATE "20040405"
This was consistent with previous names of drivers that supported various Intel graphics chipset families (e.g. i810 , i830 2 ). Later on, support for other chipset families (including HD Graphics) was added to the same driver, which means that nowadays i915 supports a long list 3 of Intel graphics chipsets. 1: as you can see in this message from David Airlie to Linus Torvalds and Andrew Morton 2: in fact, i830 was replaced by i915 in 2.6.39, see also the initial patch linked in another message from David to Linus 3: that list from wikipedia wasn't updated to include Broadwell & Skylake chipsets | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/224240",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126364/"
]
} |
224,257 | I have a computer (in fact, a Banana Pi Pro) with a touchscreen which I have configured to emulate the right click via xorg.conf :
Section "InputClass"
 Identifier "Touchscreen"
 Option "EmulateThirdButton" "1"
 Option "EmulateThirdButtonTimeout" "750"
 Option "EmulateThirdButtonThreshold" "30"
EndSection
This works really well. But sometimes, when I want to use a real mouse, these settings become quite annoying, because long left mouse clicks are converted to right mouse clicks. Also, drag selection becomes imprecise because of the 30 pixel threshold. I wonder if it's possible to disable the right click emulation when the mouse is used: Is it possible to modify the Xorg configuration at runtime to alter the "InputClass" section? If not, is it possible to apply this section only to one particular input device (the touchscreen)? If the only way is to update xorg.conf and restart the server, what would be the least painful way to do it? Ideally it would be nice to preserve the applications which are already running, but I doubt it's possible. Is there a program which does what I want without changing xorg.conf ? Like in this question , where xrandr is used to dynamically configure parameters which are static when configured via xorg.conf . | xinput controls input settings. It has the same role for input that xrandr has for the display. Run xinput list to list devices. Each device has a name and a numerical ID. You can use either this name or this ID to list the properties of the corresponding device. Device IDs can depend on the order in which the devices are detected, so to target a specific device, use its name. For example, I have a mouse as device 8; here's an excerpt of its properties:
$ xinput list-props 8
…
 Evdev Third Button Emulation (280): 0
 Evdev Third Button Emulation Timeout (281): 1000
 Evdev Third Button Emulation Button (282): 3
 Evdev Third Button Emulation Threshold (283): 20
…
So I can use either of the following commands to turn on third button emulation for this device:
xinput set-prop 8 280 1
xinput set-prop 8 'Evdev Third Button Emulation' 1
There is a hierarchy of devices, which xinput list represents graphically. Applying a property to a device also applies it to its children. For example, you can apply a property to all pointing devices by applying it to the root pointer Virtual core pointer . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/224257",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106927/"
]
} |
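Applied to the question, a small sketch that keeps emulation on for the touchscreen but turns it off for a mouse, per device. The device names here are placeholders; copy the exact strings from xinput list:

```bash
#!/bin/bash
# device names are examples: use the exact names shown by `xinput list`
TOUCH='Touchscreen'
MOUSE='Logitech USB Optical Mouse'

xinput set-prop "$TOUCH" 'Evdev Third Button Emulation' 1   # keep long-press = right click
xinput set-prop "$MOUSE" 'Evdev Third Button Emulation' 0   # real mouse: plain clicks only
```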
224,277 | Background I'm copying some data CDs/DVDs to ISO files to use them later without the need of them in the drive. I'm looking on the Net for procedures and I found a lot: Use of cat to copy a medium: http://www.yolinux.com/TUTORIALS/LinuxTutorialCDBurn.html cat /dev/sr0 > image.iso Use of dd to do so (apparently the most widely used): http://www.linuxjournal.com/content/archiving-cds-iso-commandline dd if=/dev/cdrom bs=blocksize count=count of=/path/to/isoimage.iso Use of just pv to accomplish this: See man pv for more information, although here's an excerpt of it:
Taking an image of a disk, skipping errors: pv -EE /dev/sda > disk-image.img
Writing an image back to a disk: pv disk-image.img > /dev/sda
Zeroing a disk: pv < /dev/zero > /dev/sda
I don't know if all of them should be equivalent, although I tested some of them (using the md5sum tool) and, at least, dd and pv are not equivalent. Here's the md5sum of both the drive and generated files using each procedure: md5 of dd procedure: 71b676875b0194495060b38f35237c3c md5 of pv procedure: f3524d81fdeeef962b01e1d86e6acc04 EDIT: That output was from another CD than the output given. In fact, I realized there are some interesting facts I provide as an answer. In fact, the size of each file is different compared to the others. So, is there a best procedure to copy a CD/DVD, or am I just using the commands incorrectly? More information about the situation. Here is more information about the test case I'm using to check the procedures I've found so far: isoinfo -d i /dev/sr0 Output: https://gist.github.com/JBFWP286/7f50f069dc5d1593ba62#file-isoinfo-output-19-aug-2015 dd to copy the media, with output checksums and file information. Output: https://gist.github.com/JBFWP286/75decda0a67605590d32#file-dd-output-with-md5-and-sha256-19-aug-2015 pv to copy the media, with output checksums and file information. Output: https://gist.github.com/JBFWP286/700a13fe0a2f06ce5e7a#file-pv-output-with-md5-and-sha256-19-aug-2015 Any help will be appreciated! | All of the following commands are equivalent. They read the bytes of the CD /dev/sr0 and write them to a file called image.iso .
cat /dev/sr0 >image.iso
cat </dev/sr0 >image.iso
tee </dev/sr0 >image.iso
dd </dev/sr0 >image.iso
dd if=/dev/cdrom of=image.iso
pv </dev/sr0 >image.iso
cp /dev/sr0 image.iso
tail -c +1 /dev/sr0 >image.iso
Why would you use one over the other? Simplicity. For example, if you already know cat or cp , you don't need to learn yet another command. Robustness. This one is a bit of a variant of simplicity. How much risk is there that changing the command is going to change what it does? Let's see a few examples: Anything with redirection: you might accidentally put a redirection the wrong way round, or forget it. Since the destination is supposed to be a non-existing file, set -o noclobber should ensure that you don't overwrite anything; however you might overwrite a device if you accidentally write >/dev/sda (for a CD, which is read-only, there's no risk, of course). This speaks in favor of cat /dev/sr0 >image.iso (hard to get wrong in a damaging way) over alternatives such as tee </dev/sr0 >image.iso (if you invert the redirections or forget the input one, tee will write to /dev/sr0 ). cat : you might accidentally concatenate two files. That leaves the data easily salvageable. dd : i and o are close on the keyboard, and somewhat unusual. There's no equivalent of noclobber , of= will happily overwrite anything. The redirection syntax is less error-prone. cp : if you accidentally swap the source and the target, the device will be overwritten (again, assuming a non read-only device). If cp is invoked with some options such as -R or -a which some people add via an alias, it will copy the device node rather than the device content. Additional functionality. The one tool here that has useful additional functionality is pv , with its powerful reporting options. But here you can check how much has been copied by looking at the size of the output file anyway. Performance. This is an I/O-bound process; the main influence on performance is the buffer size: the tool reads a chunk from the source, writes the chunk to the destination, repeats. If the chunk is too small, the computer spends its time switching between tasks. If the chunk is too large, the read and write operations can't be parallelized. The optimal chunk size on a PC is typically around a few megabytes, but this is obviously very dependent on the OS, on the hardware, and on what else the computer is doing. I made benchmarks for hard disk to hard disk copies a while ago, on Linux, which showed that for copies within the same disk, dd with a large buffer size has the advantage, but for cross-disk copies, cat won over any dd buffer size. There are a few reasons why you find dd mentioned so often. Apart from performance, they aren't particularly good reasons. In very old Unix systems, some text processing tools couldn't cope with binary data (they used null-terminated strings internally, so they tended to have problems with null bytes; some tools also assumed that characters used only 7 bits and didn't process 8-bit character sets properly). I'm not sure if this ever was a problem with cat (it was with more line-oriented tools such as head , sed , etc.), but people tended to avoid it on binary data because of its association with text processing. This is not a problem on modern systems such as Linux, OSX, *BSD, or anything that's POSIX-compliant. There's a sort of myth that dd is somewhat “lower level” than other tools such as cat and accesses devices directly. This is completely false: dd and cat and tee and the others all read bytes from their input and write the bytes to their output. The real magic is in /dev/sr0 . dd has an unusual command line syntax, so explaining how it works gives more of an opportunity to shine than explaining cat /dev/sr0 does. Using dd with a large buffer size can have better performance, but it is not always the case (see some benchmarks on Linux ). A major risk with dd is that it can silently skip some data . I think dd is safe as long as skip or count are not passed, but I'm not sure whether this is the case on all platforms. But it has no advantage except for performance. So just use pv if you want its fancy progress report, or cat if you don't. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/224277",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
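Whichever tool you pick, verifying the image against the medium settles the "are they equivalent?" question for your particular drive. A minimal sketch (status=progress needs a reasonably recent GNU dd, coreutils 8.24 or later):

```bash
# copy with progress reporting and an explicit, large block size
dd if=/dev/sr0 of=image.iso bs=1M status=progress

# compare checksums of the medium and the image; they should match
md5sum /dev/sr0 image.iso
```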
224,286 | How can I set English as the default language in Debian? I've installed Debian in a different language. $ sudo update-locale LANG=en_US.UTF-8 LANGUAGE=en_US update-locale: Error: invalid locale settings: LANGUAGE=en_US LANG=en_US.UTF-8 | Using sudo dpkg-reconfigure locales should work. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/224286",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
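dpkg-reconfigure locales is interactive. The error in the question typically means the locale has not been generated yet; the non-interactive equivalent is roughly this sketch:

```bash
# enable the locale in /etc/locale.gen, then generate it and set it as default
sudo sed -i 's/^# *en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen
sudo locale-gen
sudo update-locale LANG=en_US.UTF-8
```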
224,306 | I'm running a script that needs root privileges. One of its functions is mounting an attached user HDD by using the udisksctl utility. I'm using it like udisksctl mount -b /dev/sdX --options umask=0000 but when it succeeds, it automatically mounts it to /media/root/<LABEL> . But since it mounts it in root's directory, a normal user can't use it or unmount it, etc. How can I achieve this? Is anything wrong with my umask usage, or is it about env variables? (My distro is Ubuntu; I'm using udisksctl for cross-platform reasons.) | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/224306",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129386/"
]
} |
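udisks chooses the /media/<user>/<LABEL> mount point based on the user who calls it, so a hedged approach is to invoke udisksctl as the desktop user rather than as root. From a root script started via sudo, something like this sketch (note: polkit policy may still require that user to have an active local session):

```bash
# inside a root script: mount on behalf of the invoking (non-root) user,
# so udisks places the mount under /media/$SUDO_USER/<LABEL>
# ($SUDO_USER is only set when the script was started via sudo)
sudo -u "$SUDO_USER" udisksctl mount -b /dev/sdX1
```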
224,333 | I am having trouble formatting nested for loops for long strings of input. I am currently using the below nested for loops to name output files from a separate program that I have omitted:
#!/bin/bash
for hemi in lh rh
do
 for measure in thickness area volume meancurv gauscurv foldind curvind
 do
 echo ${hemi}"--"${measure}
 done
done
I would like to be able to format the loops like below, to facilitate running multiple different loops with much longer array lists that may need to be modified. Only needing to modify the list in one place will greatly reduce the chance of errors/omissions etc.:
#!/bin/bash
hemi=( lh rh )
measure=( thickness area volume meancurv gauscurv foldind curvind )
for((i=${hemi[${i}]};i<2;i++));do
 for((j=${measure[${j}]};j<7;j++));do
 echo ${hemi[${i}]}"--"${measure[${j}]}
 done
done
This is the output I am getting, and I am not able to figure out what I am doing wrong:
lh--thickness
lh--area
lh--volume
lh--meancurv
lh--gauscurv
lh--foldind
lh--curvind
./for_loop_testing.sh: line 7: ((: j=: syntax error: operand expected (error token is "=") | Let me offer a simplified version:
for i in "${hemi[@]}"; do
 for j in "${measure[@]}"; do
 echo "$i--$j"
 done
done
The point is that "${array[@]}" expands to all array elements, so here we just iterate over all elements, put them into variables i and j , and later print them. As a side note: don't forget to put double quotes around variables and arrays, especially if any element can contain whitespace. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/224333",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129191/"
]
} |
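If you specifically want the C-style counter form the question attempted, the fix is to start the counter at 0 and bound it by the array length, as in this sketch:

```bash
#!/bin/bash
hemi=( lh rh )
measure=( thickness area volume meancurv gauscurv foldind curvind )

# ${#array[@]} is the element count, so the lists can grow freely
for (( i = 0; i < ${#hemi[@]}; i++ )); do
  for (( j = 0; j < ${#measure[@]}; j++ )); do
    echo "${hemi[i]}--${measure[j]}"
  done
done
```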
224,370 | I am using Zabbix for monitoring my environment, and zabbix_agentd executes one custom script as user zabbix every 60 seconds; it uses sudo to run this script as root . In /var/log/auth.log I see every 60 seconds:
Aug 11 17:40:32 my-server sudo: pam_unix(sudo:session): session opened for user root by (uid=0)
Aug 11 17:40:32 my-server sudo: pam_unix(sudo:session): session closed for user root
I want to stop this message from flooding my log. I added the following line to the /etc/pam.d/sudo file, immediately before session required pam_unix.so : session [success=1 default=ignore] pam_succeed_if.so service in sudo quiet uid = 0 and the message disappeared. But the problem is that this way I have suppressed every PAM message when someone executes a script with sudo as root . I want to stop the message only for user zabbix (not all other users). sudo knows that the zabbix user wants to execute the script with root privileges; is there any way to tell PAM that? How can I tell PAM not to log for a specific user when using sudo ? Note : I tried filtering the messages in syslog; although this works, it has the same problem as the above, namely that it is too indiscriminate, as the log message does not indicate which user is becoming root. | You seem pretty close with your PAM conf line: session [success=1 default=ignore] pam_succeed_if.so service in sudo quiet uid = 0 Looking at the manual page for pam_succeed_if , I think you want to test that the requesting user ( ruser ) is zabbix . So I suggest: session [success=1 default=ignore] pam_succeed_if.so quiet uid = 0 ruser = zabbix That will suppress the next test when user zabbix becomes root (but no other transitions). I've tested this with a pair of my own users. Remove the uid = 0 test in the above if you want to keep quiet about zabbix becoming any user, rather than just root. I removed the service in sudo test: it's redundant given that this line is in /etc/pam.d/sudo . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/224370",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
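In context, the relevant fragment of /etc/pam.d/sudo would look like this sketch; success=1 skips exactly one module, so the rule must sit directly above the pam_unix session line whose logging it suppresses:

```
# /etc/pam.d/sudo (fragment)
# skip the next module (pam_unix session logging) when zabbix becomes root
session [success=1 default=ignore] pam_succeed_if.so quiet uid = 0 ruser = zabbix
session required pam_unix.so
```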
224,380 | I have a RHEL7 server with Apache Tomcat 7.0 installed, and after a recent update to RHEL7.1 all of the logging to ${catalina.base}/logs/catalina.out stopped. However, I am receiving the logs inside journalctl: if I type journalctl -u tomcat I do get the logging. Is there any way for me to get the logging also to catalina.out?
cat /usr/share/tomcat/logs/catalina.out
no output
journalctl -u tomcat
Aug 20 10:07:14 server.example.com server[26435]: at org.apache.jasper.servlet.JspServletWrapper.handleJspException(JspServletWrapper.java:549)
Aug 20 10:07:14 server.example.com server[26435]: at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:455)
Aug 20 10:07:14 server.example.com server[26435]: at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:390)
Aug 20 10:07:14 server.example.com server[26435]: at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:334)
Aug 20 10:07:14 server.example.com server[26435]: at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
Aug 20 10:07:14 server.example.com server[26435]: at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
Aug 20 10:07:14 server.example.com server[26435]: at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
... | Jamie's answer is correct that you can force rsyslog to log what's going on with tomcat. However, that doesn't answer why tomcat 7 on rhel 7 does not log to catalina.out. Or if it does, why it logs to both catalina.out and a dated catalina log file (if you are not using an RPM install). First, in the past, around 7.0.42, Red Hat's scripts used catalina.out because their scripts were mimicking the behavior of RHEL 6. As far as I'm aware, they were using "forking" for the service for systemd . When 7.0.56 was released, they changed that entirely by making new scripts and wrappers in /usr/libexec/tomcat to force tomcat to run in simple mode instead of forked , allowing systemd to have control of the PID and stdout and stderr to go to the journal. There is still a catalina.$DATE.log file in /var/log/tomcat , but the information is more limited than a normal catalina.out . Second, let's look at /etc/tomcat/logging.properties. You'll see it sorts out the logs in a specific way between catalina, localhost, manager, host-manager. You'll also notice that it has rsyslog facility support too, and basically how it "deals" with it. What it comes down to is the ConsoleHandler in that file. Changing those would change the behavior of the logs in /var/log/tomcat . journalctl -u tomcat will show you everything that catalina.out is supposed to have. As far as I'm aware, without modifying Red Hat's wrappers in /usr/libexec/tomcat, there isn't a sure-fire way to make everything go to just catalina.out. If you do modify those scripts and an update comes out, your changes will be overwritten. If you want catalina.out for sure, go with Jamie's rsyslog example configuration. Just know that it will not only fill that file; the systemd journal will also keep the same information. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/224380",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92287/"
]
} |
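The "Jamie's answer" referenced above is not included in this extract, but the rsyslog approach it describes would look roughly like this sketch. The programname value is an assumption; match it against what journalctl actually shows (the excerpt above suggests "server"):

```
# /etc/rsyslog.d/tomcat.conf -- route tomcat's journal output to a file
:programname, isequal, "server"  /usr/share/tomcat/logs/catalina.out
& stop
```

Then restart rsyslog with systemctl restart rsyslog.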
224,415 | In Linux, a finished execution of a command such as cp or dd doesn't mean that the data has been written to the device. One has to, for example, call sync , or invoke the "Safely Remove" or "Eject" function on the drive. What's the philosophy behind such an approach? Why isn't the data written at once? Is there no danger that the write will fail due to an I/O error? | What's the philosophy behind such an approach? Efficiency (better usage of the disk characteristics) and performance (allows the application to continue immediately after a write). Why isn't the data written at once? The main advantage is that the OS is free to reorder and merge contiguous write operations to improve their bandwidth usage (fewer operations and fewer seeks). Hard disks perform better when a small number of large operations are requested, while applications tend to issue a large number of small operations instead. Another clear optimization is that the OS can drop all but the last write when the same block is written multiple times in a short period of time, or even skip some writes altogether if the affected file has been removed in the meantime. These asynchronous writes are done after the write system call has returned. This is the second and most user-visible advantage: asynchronous writes speed up applications, as they are free to continue their work without waiting for the data to actually be on disk. The same kind of buffering/caching is also implemented for read operations, where recently or often read blocks are retained in memory instead of being read again from the disk. Is there no danger that the write will fail due to an I/O error? Not necessarily. That depends on the file system used and the redundancy in place. An I/O error might be harmless if the data can be saved elsewhere. Modern file systems like ZFS can self-heal bad disk blocks. Note also that I/O errors do not crash modern OSes. If they happen during data access, they are simply reported to the affected application. If they happen during structural metadata access and put the file system at risk, it might be remounted read-only or made inaccessible. There is also a slight data loss risk in case of an OS crash, a power outage, or a hardware failure. This is the reason why applications that must be 100% sure the data is on disk (e.g. databases/financial apps) perform less efficient but more secure synchronous writes. To mitigate the performance impact, many applications still use asynchronous writes but eventually sync them when the user explicitly saves a file (e.g. vim, word processors). On the other hand, a very large majority of users and applications do not need nor care about the safety that synchronous writes provide. If there is a crash or power outage, the only risk is often to lose at worst the last 30 seconds of data. Unless there is a financial transaction involved, or something similar that would imply a cost much larger than 30 seconds of their time, the huge gain in performance that asynchronous writes allow (which is not an illusion but very real) largely outweighs the risk. Finally, synchronous writes are not enough to protect the written data anyway. Should your application really need to be sure its data cannot be lost whatever happens, data replication on multiple disks and in multiple geographical locations needs to be put in place to resist disasters like fire, flooding, etc. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/224415",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61003/"
]
} |
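In practice you rarely need a global sync; flushing just the files or file system you care about is usually enough. A short sketch of the common options (passing a file argument to sync, and its -f flag, need GNU coreutils 8.24 or later):

```bash
cp big.iso /mnt/usb/ && sync          # classic: flush everything before unplugging

sync /mnt/usb/big.iso                 # fsync just this one file
sync -f /mnt/usb/big.iso              # ...or the whole file system containing it

dd if=big.iso of=/mnt/usb/big.iso bs=1M conv=fsync   # dd: fsync before exiting
```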
224,440 | I have a CentOS VM that will not let me log back in after I lock the screen. I can enter my password as many times as I'd like, and even try to "switch users", but the VM just ignores everything and keeps me logged out. I then have to do a restart. I have both a domain user and a local user; both are ignored when trying to log back in. Root does not allow a lock screen, and another person using this VM has both local and domain accounts as well. However, he can log back in, but only with his local account. Besides root (obviously), all accounts have the same privileges. Any idea what's going on, or how to fix it? CentOS 6.7 | In vSphere, hold down the left mouse button over the screen and move up. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/224440",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129482/"
]
} |
224,447 | A PDB file contains numerous paragraphs of conformations of a protein. Each conformation starts with the keyword ATOM and ends with the keyword END . I am trying to read the file in bash such that I read every line from ATOM until END, but I don't want to read the word END. I want to do this for each conformation (paragraph) and store each paragraph in an array. The file looks somewhat like this:
ATOM line 1...
ATOM line 2...
ATOM line 3...
# More lines....
END
ATOM line 1...
ATOM line 2...
ATOM line 3...
# more lines...
END
An ATOM to END is one conformation. I want to be able to read each conformation into an array, including the ATOM lines but excluding the END . I can read text between two keywords exclusive of both words, but I don't know how to include the starting word and exclude the end word. Also, reading each conformation into an array such that conf[0] = first conformation , conf[1] = second conformation , and so on, doesn't work. Code:
#!/bin/bash
filename='coor.pdb'
echo Start
i=0
while read line; do
 conf[$i]=$(sed -n '/ATOM/,/END/{//!p}')
 i=i+1
done < $filename
echo $conf[0] > first_frame.data | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/224447",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129488/"
]
} |
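One hedged way to do what the question asks, reading each ATOM...END block (without the END line) into one element of a bash array:

```bash
#!/bin/bash
filename='coor.pdb'

confs=()      # confs[0] = first conformation, confs[1] = second, ...
current=''
while IFS= read -r line; do
  if [[ $line == END* ]]; then
    confs+=("$current")    # close out this conformation
    current=''
  else
    current+="$line"$'\n'  # keep every line, including the ATOM ones
  fi
done < "$filename"

printf '%s' "${confs[0]}" > first_frame.data   # write the first conformation
```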
224,457 | I'm looking into writing my own init.d scripts to control several services running on my Linux server. I came across an example online which contained: nohup $EXEC_SCRIPT 0<&- &> $LOG_FILE & echo $! > $PID_FILE From what I understand: nohup catches the hangup signal; $EXEC_SCRIPT is a variable containing the command to be run; 0<&- &> I have not come across before; $LOG_FILE is similar to $EXEC_SCRIPT but contains the log file path; & starts logging to $LOG_FILE in the background? $! is the PID of the last background command; > writes the result of $! to the $PID_FILE . I can work through it with this knowledge, but the 0<&- &> is completely throwing me off. I don't like to include things that I don't at least partially understand first. | These are redirections. 0<&- closes file descriptor 0 (standard input). &> redirects both stdout and stderr (in this case to the logfile). Are you sure there was no echo before $! ? $! would be interpreted as a command and most probably result in a -bash: 18552: command not found | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/224457",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140895/"
]
} |
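Putting the answer's explanation back onto the original line, annotated (a sketch; quoting the variables is also a good idea in case the paths contain spaces):

```bash
nohup "$EXEC_SCRIPT" 0<&- &> "$LOG_FILE" &   # 0<&- closes stdin; &> sends stdout+stderr
                                             # to the log; the trailing & backgrounds it
echo $! > "$PID_FILE"                        # $! is the PID of that background job
```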
224,478 | I have read a lot of articles and SE questions regarding how and where the default $TERM environment variable gets set. Unfortunately, in Debian 8.1 I can't seem to find where the default $TERM variable is set when logging in to the system from tty1 . I would love to be pointed in the right direction if this is indeed a duplicate question, but the following questions didn't seem to provide an answer: tmux, TERM and 256 colours support Where does the TERM environment variable default get set? Is it correct to set the $TERM variable manually? Edit When I log in via tty1 , here is what $TERM is set to:
$> echo $TERM
linux
Listing of /usr/lib/systemd/ ; note that there is no system directory here:
$> ls -al
total 28
drwxr-xr-x 7 root root 4096 Aug 19 13:37 .
drwxr-xr-x 44 root root 4096 Aug 20 14:28 ..
drwxr-xr-x 2 root root 4096 Aug 19 13:37 catalog
drwxr-xr-x 2 root root 4096 May 26 02:07 network
drwxr-xr-x 2 root root 4096 Aug 19 13:37 ntp-units.d
drwxr-xr-x 2 root root 4096 Aug 19 13:37 user
drwxr-xr-x 2 root root 4096 May 26 02:07 user-generators | I suppose TERM is set to linux for the init process (pid 1) by the Linux kernel here and there . You can see it in /proc/1/environ (sorry, the following output is from Ubuntu 15.04):
$ sudo strings /proc/1/environ
HOME=/
init=/sbin/init
recovery=
TERM=linux
BOOT_IMAGE=/boot/vmlinuz-3.19.0-25-generic.efi.signed
PATH=/sbin:/usr/sbin:/bin:/usr/bin
PWD=/
rootmnt=/root
On Debian/Ubuntu systemd based systems it gets propagated to child getty processes by definitions in /lib/systemd/system/getty@.service :
[Service]
# the VT is cleared by TTYVTDisallocate
ExecStart=-/sbin/agetty --noclear %I $TERM
So you might be able to override TERM on the kernel command line. Try editing /etc/default/grub , then run update-grub and reboot:
GRUB_CMDLINE_LINUX="TERM=vt100" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/224478",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42920/"
]
} |
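A slightly cleaner way to inspect pid 1's environment without strings, since /proc/*/environ is NUL-delimited (a small sketch):

```bash
# translate NUL separators to newlines, then filter for TERM
sudo tr '\0' '\n' < /proc/1/environ | grep '^TERM='
```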
224,487 | Using Debian 8 Jessie. I don't like geoclue and I want to remove it. I tried to do: apt-get remove geoclue* I got: The following packages will be REMOVED: empathy geoclue-2.0 gnome gnome-clocks gnome-core gnome-maps task-gnome-desktop Is there a way to remove it without uninstalling the whole of gnome ? I don't have any window manager besides gnome, and I don't want to brick my install. Thanks | On a Debian based system you can safely neuter geoclue by sending the results of its privacy-disrespecting behavior right down to a black hole of nothingness. Do the following: elevate your system privileges via the sudo command or log in as root, and run:
sudo systemctl disable geoclue.service
sudo systemctl mask geoclue.service
Once done, reboot the host, and when the system boots back up, check the output of:
$ sudo systemctl status geoclue.service
● geoclue.service
Loaded: masked (/dev/null; bad)
Active: inactive (dead)
That's it! No more chatty behavior to Mozilla servers to figure out your geo location. This does not fix a fundamental issue with the gnome foundation and the geoclue developers' clueless disregard for a user's privacy, as well as 'twisting' the user's arm into using gnome+geoclue by default and not providing an option to use gnome-desktop without this geo-tracing component, but at least you will effectively cripple the function of geoclue while continuing to use gnome-desktop without any issues. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/224487",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33436/"
]
} |
224,499 | I've been struggling to get my IRC-Bouncer/Notification Script working. This is meant to be a script which will automatically login to a remote machine, attach to a screen session (or start one if none currently exists) running weechat, while simultaneously opening another ssh connection which uses netcat to read notifications from a socket-file to which a weechat add-on-script exports my notification messages. These notifications are then fed into lib-notify (via notify-send) so that I can be alerted of activity in weechat. Here's the script: #!/bin/bashBOUNCER="[email protected]"function irc_notify() { ssh $BOUNCER "nc -k -l -U /tmp/weechat.notify.sock" | \ while read type message; do notify-send -i weechat -u critical "$(echo -n $type | base64 -di -)" "$(echo -n $message | base64 -di -)" done}# Start listening for notificationsirc_notify &# Attach to remote IRC Bouncer Screenssh $BOUNCER -t 'screen -d -R -S irc weechat'# Cleanup Socket Listenerecho "cleaning up notification socket listener…"ssh $BOUNCER 'pkill -f -x "nc -k -l -U /tmp/weechat.notify.sock"' The setup actually works really well, except for one major glitch. Only two notifications were making it through to my notification manager, per invocation of the script. After that: nothing. So to eliminate issues with the notification script in weechat, I removed the second ssh invocation (the one which attaches to the screen session and starts weechat) and replaced it with a read command to block execution while I tested. Then, using irb on the remote machine, I sent messages to the socket with ruby. However, even when I was sending the messages manually, still only two messages would appear before it stopped working . strace showed me some interesting behavior (when I attached to the forked process) where it seemed that the messages stopped being terminated by new-line characters after the first or second message. But after a few more, they stopped appearing in strace all-together. At this point I decided to see if there was something in my script that was causing strange behavior. So on the command-line I simply invoked the ssh connection ( ssh $BOUNCER "nc -k -l -U /tmp/weechat.notify.sock" ) directly. And lo-and-behold, all the messages I was sending manually were appearing (still base64 encoded, of course). So then I added on the logic to decode every message as it came in, like I had in my script, and it also worked perfectly for every message. Including when I fed these messages into notify-send. So at this point I decided that something strange must be happening when I forked the function . But I observed no difference in effectiveness when I back-grounded the commands in the terminal. So I wondered if perhaps something strange was happening because it was being run from within a script. That's when things got weird… I began by breaking the logic out of the function, and just invoking it directly, with an ampersand at the end of the piped commands. Like so: ssh $BOUNCER "nc -k -l -U /tmp/weechat.notify.sock" | \ while read type message; do notify-send -i weechat -u critical "$(echo -n $type | base64 -di -)" "$(echo -n $message | base64 -di -)" done & As soon as I did that, the messages suddenly started working. And as soon as I reverted the change, I was back to square one with the same strange two-message-only behavior. But this fix, introduced some other strange behavior. Once I was inside the screen session, I would have to hit each key multiple times before it was registered by the program. 
As if there was a race-condition fighting over STDIN. Figuring perhaps the two SSH sessions were fighting over it (though I wasn't sure why) I tried to close and/or occupy STDIN on the first ssh command through various means. Such as by piping in : | before it, or appedning <&- or </dev/null after the SSH portion of the pipe. And while that did seem to resolve the race-condition, this reintroduced the two-message-only behavior. Thinking that it might have something to do with being under multiple-layers of sub-processing , I then attempted to reproduce this by wrapping the SSH call with bash -c like so: bash -c 'ssh $BOUNCER "nc -k -l -U /tmp/weechat.notify.sock" &' . And this too exhibited the two-message-only behavior. I also went ahead and tested this directly on the remote machine (SSHing to localhost, and wrapping in two bash -c invocations) and witnessed the same broken behavior. It also does not seem to be related to double forking causing orphaned processes. As it does not seem to matter if the process ends up orphaned or not. I also verified this is also happening under zsh . It seems that this is somehow related to the way which STDIN and STDOUT are treated when a process is run under layers of sub-processing. Repro. Instructions & strace Output: In order to simplify debugging I removed SSH from the picture and wrote two simplified test scripts which successfully reproduced the behavior entirely locally. Using the Juergen Nickelsen's socket command I created a local UNIX Domain Socket ( socket -l -s ./test.sock ), and once again was able to send test messages to it with irb using the following chunk of Ruby code: require 'socket'require 'base64'SOCKET = './test.sock'def send(subtitle, message) UNIXSocket.open(SOCKET) do |socket| socket.puts "#{Base64.strict_encode64(subtitle)} #{Base64.strict_encode64(message)}" endendsend('test', 'hi')send('test', 'hi')send('test', 'hi')send('test', 'hi')send('test', 'hi')send('test', 'hi') The First Script only backgrounded the piped expression (which, as previously stated, processed an unlimited number of messages): #!/bin/bash # to aid in cleanup when using Ctrl-C to exit stracetrap "pkill -f -x 'nc -k -l -U $HOME/test.sock'; exit" SIGINT # Start listening for notificationsnc -k -l -U $HOME/test.sock | \ while read type message; do # write messages to a local file instead of sending to notification daemon for simplicity. echo "$(echo -n $type | base64 -di -)" "$(echo -n $message | base64 -di -)" >> /tmp/msg done & read And produced the following output when run with strace -f : http://pastebin.com/SMjti3qW The Second Script backgrounded the wrapping function (which triggers the 2-and-done behavior): #!/bin/bash# to aid in cleanup when using Ctrl-C to exit stracetrap "pkill -f -x 'nc -k -l -U $HOME/test.sock'; exit" SIGINT# Start listening for notificationsfunction irc_notify() { nc -k -l -U $HOME/test.sock | \ while read type message; do # write messages to a local file instead of sending to notification daemon for simplicity. echo "$(echo -n $type | base64 -di -)" "$(echo -n $message | base64 -di -)" >> /tmp/msg done}irc_notify &read And which, in turn, produced this following output when run with strace -f : http://pastebin.com/WsrXX0EJ One thing which stands out to me when looking at the strace output from the above scripts is the output specific to the nc command . Which seems to show one of the main differences between the execution of these two scripts. 
The First Script's "working" nc strace output:
accept(3, {sa_family=AF_FILE, NULL}, [2]) = 4
poll([{fd=4, events=POLLIN}, {fd=0, events=POLLIN}], 2, -1) = 1 ([{fd=4, revents=POLLIN|POLLHUP}])
read(4, "dGVzdA== aGk=\n", 2048) = 14
write(1, "dGVzdA== aGk=\n", 14) = 14
poll([{fd=4, events=POLLIN}, {fd=0, events=POLLIN}], 2, -1) = 1 ([{fd=4, revents=POLLIN|POLLHUP}])
read(4, "", 2048) = 0
shutdown(4, 0 /* receive */) = 0
close(4) = 0
[the same accept/poll/read/write/close cycle repeats for each of the six messages]
accept(3,
The Second Script's "2-and-done" behavior seen in nc strace output:
accept(3, {sa_family=AF_FILE, NULL}, [2]) = 4
poll([{fd=4, events=POLLIN}, {fd=0, events=POLLIN}], 2, -1) = 2 ([{fd=4, revents=POLLIN|POLLHUP}, {fd=0, revents=POLLHUP}])
read(4, "dGVzdA== aGk=\n", 2048) = 14
write(1, "dGVzdA== aGk=\n", 14) = 14
shutdown(4, 1 /* send */) = 0
close(0) = 0
poll([{fd=4, events=POLLIN}, {fd=-1}], 2, -1) = 1 ([{fd=4, revents=POLLIN|POLLHUP}])
read(4, "", 2048) = 0
shutdown(4, 0 /* receive */) = 0
close(4) = 0
accept(3, {sa_family=AF_FILE, NULL}, [2]) = 0
poll([{fd=0, events=POLLIN}, {fd=0, events=POLLIN}], 2, -1) = 2 ([{fd=0, revents=POLLIN|POLLHUP}, {fd=0, revents=POLLIN|POLLHUP}])
read(0, "dGVzdA== aGk=\n", 2048) = 14
write(1, "dGVzdA== aGk=\n", 14) = 14
read(0, "", 2048) = 0
shutdown(0, 1 /* send */) = 0
close(0) = 0
poll([{fd=0, events=POLLIN}, {fd=-1}], 2, -1) = 1 ([{fd=0, revents=POLLNVAL}])
poll([{fd=0, events=POLLIN}, {fd=-1}], 2, -1) = 1 ([{fd=0, revents=POLLNVAL}])
poll([{fd=0, events=POLLIN}, {fd=-1}], 2, -1) = 1 ([{fd=0, revents=POLLNVAL}])
.......[truncated].......
I'm not where I'd like to be in regards to my strace output readability, so I'm not exactly sure what these different outputs imply, aside from the fact that one is obviously working while the other is not. As I've dug through the larger strace output, it also seems that messages after the first two are no longer terminated by a newline? But again, I'm not sure what that implies, or if I'm reading it correctly. And I definitely do not understand how the different sub-processing techniques, or even closing STDIN, could be affecting this behavior. Any idea what I'm encountering here? -- tl;dr I'm trying to figure out why running my notification listener under more than one layer of sub-processing results in only two messages being processed, and why not doing so results in a race condition on STDIN. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/224499",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/25944/"
]
} |
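Regarding the question itself: the second strace shows nc polling and then closing fd 0 (stdin), after which accept() reuses fd 0, which is consistent with nc and the surrounding shell fighting over stdin. A hedged workaround is to keep nc away from stdin entirely. OpenBSD netcat has -d ("do not attempt to read from stdin"); flag support varies between netcat variants, and this sketch has not been verified against the exact environment in the question:

```bash
irc_notify() {
    # -d detaches nc from stdin (OpenBSD netcat); </dev/null is belt-and-braces
    ssh "$BOUNCER" "nc -d -k -l -U /tmp/weechat.notify.sock" </dev/null | \
    while read -r type message; do
        notify-send -i weechat -u critical \
            "$(echo -n "$type" | base64 -d)" \
            "$(echo -n "$message" | base64 -d)"
    done
}
```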
224,500 | In Debian, AFAIK some packages are maintained in Subversion (famously team-pkg-gnome), while some are maintained in git, and others in some other VCS. Is there a way to know where the source of a package is without doing an apt-get source $PACKAGENAME ? I tried three methods and all failed:
$ apt show $PACKAGENAME
$ aptitude show $PACKAGENAME
$ apt-cache show $PACKAGENAME
None of the above are able to give this information. Is there any way to get it? I need to know the source repo name and whether it is in git, svn or some other version control. | A lot of packages include this in their control information in the Vcs-* fields. You can see it easily (and without downloading the source package) using apt-cache showsrc .
$ apt-cache showsrc gnome-disk-utility
⋮
Vcs-Browser: https://salsa.debian.org/gnome-team/gnome-disk-utility
Vcs-Git: https://salsa.debian.org/gnome-team/gnome-disk-utility.git
⋮
So in this case, you could do a git clone https://salsa.debian.org/gnome-team/gnome-disk-utility.git to download the source, or browse it on the web at https://salsa.debian.org/gnome-team/gnome-disk-utility . Not all packages have Vcs-* fields. They can be missing because the maintainer doesn't use a VCS (or at least not a public one), or hasn't added the headers for some other reason. apt-get source packagename will always get the source, though not via a VCS. Similarly, you can browse all the source online at http://sources.debian.net/ (Note: you will need deb-src lines in your sources.list file.) Update: Stephen Kitt points out debcheckout -d ; first time I'd heard of it too. See (and upvote) Stephen Kitt's answer for details. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/224500",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50490/"
]
} |
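A compact workflow combining the two, as a sketch (debcheckout ships in the devscripts package and clones straight from the package's recorded Vcs-* URL):

```bash
# show only the VCS-related fields for a package
apt-cache showsrc gnome-disk-utility | grep '^Vcs-'

# clone from the Vcs-* URL recorded in the package
sudo apt-get install devscripts
debcheckout gnome-disk-utility
```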
224,548 | I am trying to pass a "var name" to a function, have the function transform the value the variable with such "var name" contains and then be able to reference the transformed object by its original "var name". For example, let's say I have a function that converts a delimited list into an array and I have a delimited list named 'animal_list'. I want to convert that list to an array by passing the list name into the function and then reference, the now array, as 'animal_list'. Code Example: function delim_to_array() { local list=$1 local delim=$2 local oifs=$IFS; IFS="$delim"; temp_array=($list); IFS=$oifs; # Now I have the list converted to an array but it's # named temp_array. I want to reference it by its # original name.}# ----------------------------------------------------animal_list="anaconda, bison, cougar, dingo"delim_to_array ${animal_list} ","# After this point I want to be able to deal with animal_name as an array.for animal in "${animal_list[@]}"; do echo "NAME: $animal"done# And reuse this in several places to converted lists to arrayspeople_list="alvin|baron|caleb|doug"delim_to_array ${people_list} "|"# Now I want to treat animal_name as an arrayfor person in "${people_list[@]}"; do echo "NAME: $person"done | Description Understanding this will take some effort. Be patient. The solution will work correctly in bash. Some "bashims" are needed. First: We need to use the "Indirect" access to a variable ${!variable} . If $variable contains the string animal_name , the "Parameter Expansion": ${!variable} will expand to the contents of $animal_name . Lets see that idea in action, I have retained the names and values you used where possible to make it easier for you to understand: #!/bin/bashfunction delim_to_array() { local VarName=$1 local IFS="$2"; printf "inside IFS=<%s>\n" "$IFS" echo "inside var $VarName" echo "inside list = ${!VarName}" echo a\=\(${!VarName}\) eval a\=\(${!VarName}\) printf "in <%s> " "${a[@]}"; echo eval $VarName\=\(${!VarName}\)}animal_list="anaconda, bison, cougar, dingo"delim_to_array "animal_list" ","printf "out <%s> " "${animal_list[@]}"; echoprintf "outside IFS=<%s>\n" "$IFS"# Now we can use animal_name as an arrayfor animal in "${animal_list[@]}"; do echo "NAME: $animal"done If that complete script is executed (Let's assume its named so-setvar.sh), you should see: $ ./so-setvar.shinside IFS=<,>inside var animal_listinside list = anaconda, bison, cougar, dingoa=(anaconda bison cougar dingo)in <anaconda> in <bison> in <cougar> in <dingo> out <anaconda> out <bison> out <cougar> out <dingo> outside IFS=< >NAME: anacondaNAME: bisonNAME: cougarNAME: dingo Understand that "inside" means "inside the function", and "outside" the opposite. The value inside $VarName is the name of the var: animal_list , as a string. The value of ${!VarName} is show to be the list: anaconda, bison, cougar, dingo Now, to show how the solution is constructed, there is a line with echo: echo a\=\(${!VarName}\) which shows what the following line with eval executes: a=(anaconda bison cougar dingo) Once that is eval uated, the variable a is an array with the animal list. In this instance, the var a is used to show exactly how the eval affects it. And then, the values of each element of a are printed as <in> val . 
And the same is executed in the outside part of the function as <out> val That is shown in these two lines: in <anaconda> in <bison> in <cougar> in <dingo>out <anaconda> out <bison> out <cougar> out <dingo> Note that the real change was executed in the last eval of the function. That's it, done. The var now has an array of values. In fact, the core of the function is one line: eval $VarName\=\(${!VarName}\) Also, the value of IFS is set as local to the function, which makes it return to the value it had before executing the function without any additional work. Thanks to Peter Cordes for the comment on the original idea. That ends the explanation; hope it's clear. Real Function If we remove all the unneeded lines to leave only the core eval and only create a new variable for IFS, we reduce the function to its minimal expression: delim_to_array() { local IFS="${2:-$' :|'}" eval $1\=\(${!1}\);} Setting the value of IFS as a local variable allows us to also set a "default" value for the function. Whenever the value needed for IFS is not sent to the function as the second argument, the local IFS takes the "default" value. I felt that the default should be space ( ) (which is always a useful splitting value), the colon (:), and the vertical line (|). Any of those three will split the values. Of course, the default could be set to any other values that fit your needs. Edit to use read : To reduce the risk of unquoted values in eval, we can use: delim_to_array() { local IFS="${2:-$' :|'}" # eval $1\=\(${!1}\); read -ra "$1" <<<"${!1}"}test="fail-test"; a="fail-test"animal_list='bison, a space, {1..3},~/,${a},$a,$((2+2)),$(echo "fail"),./*,*,*'delim_to_array "animal_list" ","printf "<%s>" "${animal_list[@]}"; echo $ so-setvar.sh<bison>< a space>< {1..3}><~/><${a}><$a><$((2+2))><$(echo "fail")><./*><*><*> Most of the values set above for the var animal_list do fail with eval. But they pass the read without problems. Note: It is perfectly safe to try the eval option in this code as the values of the vars have been set to plain text values just before calling the function. Even if really executed, they are just text. Not even a problem with ill-named files: as pathname expansion is the last expansion, there will be no variable expansion re-executed over the pathname expansion. Again, with the code as is , this is in no way a validation for general use of eval . Example To really understand what this function does and how it works, I re-wrote the code you posted using this function: #!/bin/bashdelim_to_array() { local IFS="${2:-$' :|'}" # printf "inside IFS=<%s>\n" "$IFS" # eval $1\=\(${!1}\); read -ra "$1" <<<"${!1}";}animal_list="anaconda, bison, cougar, dingo"delim_to_array "animal_list" ","printf "NAME: %s\t " "${animal_list[@]}"; echopeople_list="alvin|baron|caleb|doug"delim_to_array "people_list"printf "NAME: %s\t " "${people_list[@]}"; echo $ ./so-setvar.shNAME: anaconda NAME: bison NAME: cougar NAME: dingo NAME: alvin NAME: baron NAME: caleb NAME: doug As you can see, the IFS is set only inside the function, it is not changed permanently, and therefore it does not need to be re-set to its old value. Additionally, the second call "people_list" to the function takes advantage of the default value of IFS; there is no need to set a second argument. « Here be Dragons » ¯\_(ツ)_/¯ Warnings 01: As the (eval) function was constructed, there is one place in which the var is exposed unquoted to the shell parsing. That allows us to get the "word splitting" done using the IFS value.
But that also exposes the values of the vars (unless some quoting prevents that) to: "brace expansion", "tilde expansion", "parameter, variable and arithmetic expansion", "command substitution", and "pathname expansion", in that order. And process substitution <() >() on systems that support it. An example of each (except the last) is contained in this simple echo (be careful): a=failed; echo {1..3} ~/ ${a} $a $((2+2)) $(ls) ./* That is, any string that starts with {~$`<> or could match a file name, or contains ?*[] is a potential problem. If you are sure that the variables do not contain such problematic values, then you are safe. If there is the potential to have such values, the ways to answer your question are more complex and need more (even longer) descriptions and explanations. Using read is an alternative. Warnings 02: Yes, read comes with its own share of «dragons». Always use the -r option; it is very hard for me to think of a condition where it is not needed. The read command can get only one line. Multi-lines, even by setting the -d option, need special care. Or the whole input will be assigned to one variable. If the IFS value contains a space, leading and trailing spaces will be removed. Well, the complete description should include some detail about the tab , but I'll skip it. Do not pipe | data to read. If you do, read will be in a sub-shell. All variables set in a sub-shell do not persist upon returning to the parent shell. Well, there are some workarounds, but, again, I'll skip the detail. I didn't mean to include the warnings and problems of read, but by popular request, I had to include them, sorry. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/224548",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111730/"
]
} |
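On bash 4.3 or newer, the same pass-by-name idea can be written without eval at all by using a nameref. A minimal sketch, not part of the original answer (one caveat: do not call it with a caller variable literally named ref, since the nameref would become circular):

delim_to_array() {
    local -n ref=$1              # ref becomes an alias for the caller's variable
    local IFS=${2:-$' :|'}
    read -ra ref <<< "$ref"      # split the scalar value and store the array back through the nameref
}

animal_list="anaconda, bison, cougar, dingo"
delim_to_array animal_list ","
printf '<%s>' "${animal_list[@]}"; echo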
224,616 | Examining man test , I see that under synopsis for test are the possibilities test EXPRESSION and test . What does [ EXPRESSION ], [ ] and [OPTION mean below ? Why are the brackets empty and why does [OPTION miss a bracket ? Can someone interpret this for me ? NAME test - check file types and compare valuesSYNOPSIS test EXPRESSION test [ EXPRESSION ] [ ] [ OPTION | [ is another name for test . All three of those lines are command lines that run test with some options. In the first line, this is standard testing: [ 5 -gt 4 ] is the same as test 5 -gt 4 . In the second, the expression is omitted, which means to exit false ( 0 arguments: Exit false (1) ). For the third case you are, I guess, using GNU coreutils. In GNU test the help text contains this note: NOTE: [ honors the --help and --version options, but test does not.test treats each of those as it treats any other nonempty STRING. This is a non-POSIX extension ; the motivation seems to be that test is required to treat those arguments as strings like any other. [ is able to distinguish the option case from the string case by the presence of the closing ] bracket. Note that your shell will likely provide its own [ , and so you'll have to run /bin/\[ to use this version. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/224616",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128739/"
]
} |
224,623 | I'm trying to execute the telnet command and get the output in a shell script. But when the port is not open or the IP does not exist, the command takes too much time to respond. Is it possible to limit the maximum try time on telnet? telnet host port | This will run for no more than 2 seconds: []# echo quit | timeout --signal=9 2 telnet [SERVER] [PORT] | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/224623",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80908/"
]
} |
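If nc (netcat) is available, the same reachability test can be done with a built-in timeout and a useful exit status. A sketch (the -z and -w options are from traditional/OpenBSD netcat and may differ between netcat variants):

if nc -z -w 2 "$SERVER" "$PORT"; then
    echo "port open"
else
    echo "closed or timed out"
fi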
224,625 | I have a kind of problem with john on Kali x86. To make a long story short, I can't manage to run john under Kali Linux 2.0 installed into LV, but as I remember I was able to run an older version of john in an older Kali in a VM. The error is: Sorry, SSE2 is required for this build Well, Kali runs on a capable enough CPU I think, so I looked it up and for each core it came out as: root@kali:~# cat /proc/cpuinfo model name : Intel(R) Core(TM)2 Duo CPU T5800 @ 2.00GHz microcode : 0xa4 cpu MHz : 800.000 cache size : 2048 KB flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm lahf_lm dtherm And, the result of dmidecode : root@kali:~# dmidecode -t 4 # dmidecode 2.12 SMBIOS 2.4 present. Handle 0x001E, DMI type 4, 35 bytes Processor Information Socket Designation: CPU Type: Central Processor Family: Pentium M Manufacturer: Intel(R) Corporation ID: FD 06 00 00 FF FB EB BF Signature: Type 0, Family 6, Model 15, Stepping 13Flags: FPU (Floating-point unit on-chip) VME (Virtual mode extension) DE (Debugging extension) PSE (Page size extension) TSC (Time stamp counter) MSR (Model specific registers) PAE (Physical address extension) MCE (Machine check exception) CX8 (CMPXCHG8 instruction supported) APIC (On-chip APIC hardware supported) SEP (Fast system call) MTRR (Memory type range registers) PGE (Page global enable) MCA (Machine check architecture) CMOV (Conditional move instruction supported) PAT (Page attribute table) PSE-36 (36-bit page size extension) CLFSH (CLFLUSH instruction supported) DS (Debug store) ACPI (ACPI supported) MMX (MMX technology supported) FXSR (FXSAVE and FXSTOR instructions supported) SSE (Streaming SIMD extensions) SSE2 (Streaming SIMD extensions 2) SS (Self-snoop) HTT (Multi-threading) TM (Thermal monitor supported) PBE (Pending break enabled)Version: Intel(R) Core(TM)2 Duo CPU T5800 @ 2.00GHzVoltage: 1.6 VExternal Clock: 800 MHzMax Speed: 2000 MHzCurrent Speed: 1200 MHzStatus: Populated, EnabledUpgrade: <OUT OF SPEC>L1 Cache Handle: 0x0021L2 Cache Handle: 0x001FL3 Cache Handle: Not ProvidedSerial Number: Not SpecifiedAsset Tag: FFFFPart Number: Not Specified The result of uname root@kali:~# uname -a Linux kali 4.0.0-kali1-686-pae #1 SMP Debian 4.0.4-1+kali2 (2015-06-03) i686 GNU/Linux The result of gcc version root@kali:~# gcc --version gcc (Debian 4.9.2-10) 4.9.2 However, I installed John the Ripper 1.8 besides the one that came preloaded with the Kali Linux distribution and I didn't run into any SSE2 check during the compiling process. So, what is SSE2 in general? Can SSE2 be used in x86 processors? Why do those builds like john need SSE2? EDIT: Why can those builds not be run on a system which has SSE2? Thanks in advance. | What is SSE2 in general ? SSE2 is an extended, specialized instruction sub-set of the Intel x86 instruction set. These instructions are dedicated to SIMD (Single-Instruction Multiple Data), which means that in one instruction they can handle several data elements thanks to specific extra-wide registers (namely the XMM registers, which are 128 bits wide). An XMM register can be split into sixteen 8-bit, eight 16-bit, four 32-bit or two 64-bit integer lanes, or hold four single-precision or two double-precision floating-point values. Can SSE2 be used in x86 processors ? Any relatively recent Intel x86 processor has the SSE2 instruction set.
If you want to check if your CPU has it, just do: $> cat /proc/cpuinfo | grep flags | tail -n 1flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm ida arat epb pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt You can see here all the instruction sub-sets that your processor has built in. You should find sse2 in the list (as is the case here). Why do those builds like john need SSE2 ? SSE instructions are really useful for handling signal processing and highly parallelized algorithms. In the case of John the Ripper, the SSE2 instruction set is used to parallelize the hash-function brute-force algorithm. It computes several hash attempts in one instruction to speed up the exploration of the key-space (or to exhaust the dictionary). Why can those builds not be run on a system which has SSE2 ? It is very likely linked to a software reason. Either you installed a 32-bit system on a 64-bit CPU (i386 over amd64), or you may not have the compile tools that are able to handle SSE2 instruction sets. It also may be because the build-system of John has a flaw and failed to properly detect the capabilities of your system. But you do not give enough information about your system to solve the problem. If you want to install john , you had better use the pre-compiled package that comes with your distribution (this is a standard package in almost any mainstream distribution now). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/224625",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110201/"
]
} |
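A compact, scriptable form of the /proc/cpuinfo check from the answer above, usable as a shell test (Linux-specific, since it relies on /proc):

if grep -qw sse2 /proc/cpuinfo; then
    echo "SSE2 available: an SSE2 john build should run"
else
    echo "no SSE2: use a non-SSE2 build"
fi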
224,627 | I am building a script to find out when a list of servers was last fully updated with yum update . I can find it by a history |grep "yum update"|head -n 1 however, the problem is that a user could have launched it but didn't type "y" in the prompt. Another way I tried was with yum history ID | Login user | Date and time | Action(s) | Altered------------------------------------------------------------------------------- 109 | <xyz user> | 2015-08-20 07:18 | Erase | 1 E< 108 | root <root> | 2015-08-18 08:56 | Update | 3 > 107 | root <root> | 2015-08-14 07:39 | Update | 1 106 | root <root> | 2015-08-14 07:38 | Update | 1 105 | root <root> | 2015-08-14 07:38 | Update | 3 104 | root <root> | 2015-08-13 07:31 | Update | 1 103 | root <root> | 2015-08-11 05:46 | Update | 1 102 | root <root> | 2015-08-11 05:46 | Update | 2 101 | root <root> | 2015-08-11 05:45 | Update | 3 100 | root <root> | 2015-08-11 05:45 | Update | 3 99 | root <root> | 2015-08-10 20:41 | Update | 1 98 | root <root> | 2015-08-05 02:35 | Update | 1 97 | root <root> | 2015-05-14 10:52 | Update | 1 96 | root <root> | 2015-05-01 02:59 | Obsoleting | 2 95 | root <root> | 2015-04-09 16:06 | Update | 1 < 94 | <xyz.user> | 2015-03-28 08:49 | Update | 1 >< 93 | <xyz.usert> | 2015-03-28 08:14 | Erase | 3 >< 92 | <xyz.user> | 2015-03-13 07:46 | Install | 6 >< 91 | <xyz.user> | 2015-03-13 05:45 | I, U | 24 > 90 | root <root> | 2015-03-04 01:24 | Update | 3 But I can't find a way of determining the date the yum update was launched and was successful. Since if I check, for example, the transaction ID 108 which is marked as "Update" launched on the 18th, I don't find the command yum update for that particular date : history |grep 2015 |grep "yum update" 5182 20150313-054444 > yum update Another path I tried was with /var/log/yum.log but yum.log will show installs and updates also. If a package is updated while installing a package, e.g. yum install varnish , and it requires an update of certain packages, e.g. (varnish-libs-2.1.5-5.el6.i686, 3.0.7-1.el6.i686 etc), this will be shown as updated in the yum.log Is there a way to find the date a yum update was launched and was successful? | You almost answered your question. Here is a way you can find the latest 5 updated packages: grep Updated: /var/log/yum.log | tail -5 Output example: Aug 05 13:28:34 Updated: virt-manager-common-1.1.0-9.git310f6527.fc21.noarchAug 05 13:28:34 Updated: glusterfs-libs-3.5.5-2.fc21.i686Aug 05 13:28:35 Updated: virt-manager-1.1.0-9.git310f6527.fc21.noarchAug 05 13:28:36 Updated: virt-install-1.1.0-9.git310f6527.fc21.noarchAug 05 13:28:38 Updated: glusterfs-3.5.5-2.fc21.i686 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/224627",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129073/"
]
} |
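To get a single date out of yum history programmatically, one sketch is to take the Date column of the most recent transaction whose Action contains "U" (Update, or combined "I, U"), assuming the table layout shown in the question (newest transaction first, pipe-separated columns):

yum history list all 2>/dev/null | awk -F'|' 'NF >= 5 && $4 ~ /U/ { gsub(/^ +| +$/, "", $3); print $3; exit }'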
224,631 | Let's say that I've done a find for .gif files and got a bunch of files back. I now want to test them to see if they are animated gifs. Can I do this via the command line? I have uploaded a couple of examples in the following, in case you want to experiment on them. Animated GIF image Static GIF Image | This can easily be done using ImageMagick identify -format '%n %i\n' -- *.gif12 animated.gif1 non_animated.gif identify -format %n prints the number of frames in the gif; for animated gifs, this number is bigger than 1. (ImageMagick is probably readily available in your distro's repositories for an easy install) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/224631",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27368/"
]
} |
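To classify a whole tree of GIFs found by find (as in the question), one robust sketch counts identify's one-line-per-frame output instead of parsing %n, since %n printing differs between ImageMagick versions:

find . -name '*.gif' -print0 | while IFS= read -r -d '' f; do
    frames=$(identify -- "$f" 2>/dev/null | wc -l)   # identify prints one line per frame
    if [ "$frames" -gt 1 ]; then
        printf 'animated: %s\n' "$f"
    else
        printf 'static:   %s\n' "$f"
    fi
done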
224,668 | I'm trying to make a script (using Perl, but it isn't necessary) that will only count the number of Established, Time_Wait, and Closed_Wait connections on a system and print them in terminal. So far I've figured out that I can use : netstat -ant | awk '{print $6}' | sort | uniq -c | sort -n in order to print all of the connections, but when I run this from a script it will not print in terminal and it also gives me some connections that I am not looking for such as Listen and Foreign. The reason why it must only show Established, Time_Wait, and Closed_Wait is because the script is being used by a monitoring program that will fail if any other connection types show up. Can anyone make a suggestion? Thanks! | Your script can be slightly modified to only process the states you need (ESTABLISHED, TIME_WAIT and CLOSE_WAIT, as netstat spells them): netstat -ant | awk '/ESTABLISHED|TIME_WAIT|CLOSE_WAIT/ {print $6}' | \ sort | uniq -c | sort -n A further step would be to do everything with awk , e.g. : netstat -ant | awk '/ESTABLISHED|TIME_WAIT|CLOSE_WAIT/ {count[$6]++}END { for(s in count) { printf("%12s : %6d\n", s, count[s]); }}' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/224668",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117159/"
]
} |
224,697 | I have a file with a list of emails in it and each line has an email in it. I want to remove lines that contain the string example.com or test.com . I tried this: sed -i 's/example\.com//ig' file.txt But it will remove only the string example.com ; how can I remove the entire line? | With GNU sed: sed '/example\.com/d;/test\.com/d' -i file.txt will remove the lines with example.com and test.com . From man sed : d Delete pattern space. Start next cycle. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/224697",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81823/"
]
} |
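With GNU sed, the two deletions can also be combined into a single extended-regex alternation:

sed -E -i '/example\.com|test\.com/d' file.txt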
224,722 | I have a file with lines similar to the below: 00.10.000200.30 I want to replace the lines with a single 0 to something else like 0.0001 . For the lines with a decimal and digits after, I don't want to alter them. I've tried the below variations of sed , but it either does nothing, or it changes all the zeros, even ones after the decimals: sed 's/0/x/g' filesed 's/^0/x/g' filesed 's/0^/x/g' filesed 's/^0^/x/g' filesed 's/"0"/x/g' filesed 's/[0]/x/g' file It seems sed cannot handle a single 0 character. Also, I do not have the replace command on my system. What else can I do? | To replace a single 0 on a line: sed 's/^0$/x/' ^ matches the beginning of the line $ matches the end of a line So the above command matches the beginning of a line, followed by 0, followed by the end of the line. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/224722",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129212/"
]
} |
224,771 | I read about how to update the vim statusline here . And I am able to update it successfully. But, I would like to retain the format of default vim statusline and just add some more info to it e.g. file-size, file-type, etc. Vim default status line is: <file-name> line_num,col_num %file How could I do the following? I would like to add info after the file-name Display the current format of statusline ( :set statusline displays nothing) I tried: set statusline+=%y But this overwrites the entire statusline and just displays the file-type ( %y ). Any hints? | Like @muru said, it doesn't seem to be possible to exactly simulate the default status line when statusline is set, as the code for rendering it does things that can't be specified in the statusline setting. It is possible to get pretty close, however. Here is a reasonable approximation of the way the default status line looks when ruler is enabled: :set statusline=%f\ %h%w%m%r\ %=%(%l,%c%V\ %=\ %P%) The main difference is the positioning of the line and column numbers. If it's possible to simulate the default spacing logic, I haven't been able to figure out a way to do it. Perhaps this will be close enough for your purposes. I use a split version of this in my own .vimrc to place Syntastic status line info in the middle of what looks like a normal vim status line with ruler: " start of default statuslineset statusline=%f\ %h%w%m%r\ " NOTE: preceding line has a trailing space character" Syntastic statuslineset statusline+=%#warningmsg#set statusline+=%{SyntasticStatuslineFlag()}set statusline+=%*" end of default statusline (with ruler)set statusline+=%=%(%l,%c%V\ %=\ %P%) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/224771",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17265/"
]
} |
224,779 | What's wrong with this script?I'm trying to define A1=1, B1=1, C1=1 LIST="A B C"for x in $LISTdo "$x"1=1done and the result is: ./x.: line 7: A1=1: command not found./x.: line 7: B1=1: command not found./x.: line 7: C1=1: command not found | A variable assignment has the form of a variable name, followed by the equal sign, followed by the (optional) value. This is a valid assignment: ABC=123 "$x"1=1 is not a valid assignment, because "$x"1 is not a variable name. It may be eval uated to a variable name, but it isn't. The shell, in fact, believes it is a command. One way for doing what you want to achieve is this: eval "$x"1=1 Another way in bash (but not in other shells) is: declare "$x"1=1 Or also (again bash-only): let "$x"1=1 (There is no much difference in your case.) But, as Jakuje noted in the comments , you probably want to go with arrays, if your shell has them (ksh, bash or zsh). For completeness: eval executes arbitrary commands. So, if on the right side of the equal sign you have a variable that expands to some command, that command will be executed. The following code: x=ay='$(echo hello)'eval "$x=$y" is equivalent to a=hello . declare is a bash builtin to assign variables and won't execute any command. The following code: x=ay='$(echo hello)'declare "$x=$y" is equivalent to a='$(echo hello)' . let is similar to declare , in that it doesn't execute commands. But contrary to declare , let may be used for arithmetic operations: let a="1 + 2" is equivalent to a=3 . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/224779",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129799/"
]
} |
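A sketch of the array approach suggested at the end of the answer, which usually removes the need for generated names like A1, B1, C1 altogether (associative arrays need bash 4+):

declare -A val
for x in A B C; do
    val[${x}1]=1            # keys A1, B1, C1 live inside one array
done
echo "${val[A1]} ${val[B1]} ${val[C1]}"   # prints: 1 1 1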
224,835 | Consider a single tar file from an external system which contains some directories with various attributes which I want to retain such as permissions, mtimes, etc. How can I easily take a subset of these files as a regular user (not root)? Looking for something like: tar -f some.tar.gz --subset subdir/ | ssh remote@system tar xvz It is also essential that the main attributes (ownership, group, mode, mtime) in this tar archive are retained. What about other attributes in a tar file such as extended header keywords ? Bonus points for a solution that avoids use of a temporary directory in case this subdir contains huge files. | bsdtar (based on libarchive) can filter tar (and some other archives) from stdin to stdout. It can for example pass through only filenames matching a pattern, and can do s/old/new/ renaming. It's already packaged for most distros, for example as bsdtar in Ubuntu. sudo apt-get install bsdtar # or aptitude, if you have it.# example from the man page:bsdtar -c -f new.tar --include='*foo*' @old.tgz#create new.tar containing only entries from old.tgz containing the string ‘foo’bsdtar -czf - --include='*foo*' @- # filter stdin to stdout, with gzip compression of output. Note that it has a wide choice of compression formats for input/output, so you don't have to manually pipe through gunzip / lz4 yourself. My searching also found this streaming tar modify tool which appears to want you to define the archive changes you want using javascript. (I think the whole thing is written in js). https://github.com/mafintosh/tar-stream | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/224835",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8250/"
]
} |
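Applied to the pipeline in the question, a sketch (the --subset option imagined in the question does not exist; --include does the filtering, and since the matching entries pass through unmodified, the archived ownership/mode/mtime metadata is preserved in the stream):

bsdtar -czf - --include='subdir/*' @some.tar.gz | ssh remote@system 'tar xvzf -'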
224,874 | Given input: hello: world foo bar bazbar:baz: bin boop bop fiz bang beepbap: bim bam bopboatkeeper: poughkeepsie I would like to sort it into most words at the top, to least at the end, like so: baz: bin boop bop fiz bang beephello: world foo bar bazbap: bim bam bopboatkeeper: poughkeepsiebar: How would I do this with sort or some other tool? | You could do something like: awk '{print NF,$0}' file | sort -nr | cut -d' ' -f 2- We use awk to prefix the number of fields to each line. We then sort by that number and remove it with cut . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/224874",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/70397/"
]
} |
224,887 | I want to automate a Linux build but eventually get to a point where I need to run what seems to be a very manual step: make menuconfig . This seems to synchronize configs between the OS and kernel configs? cp git-tracked-config .configmake defconfig make menuconfig # <- how to automate/script this?make V=s Basically, how can I remove the call to make menuconfig for a build script? As an aside, this is in reaction to a build error that seems to happen when I run without ever calling make menuconfig: make[1]: *** No rule to make target `include/config/auto.conf', needed by `include/config/kernel.release'. Stop. Which seems to be there is a missing rule in a makefile perhaps because the makefile itself does NOT exist or the makefile has not been generated/morphed to contain that rule but that is a separate question. There could be a smarter way to approach this altogether. Are there other configs that I'm not tracking but should (e.g. oldconfig)? | The Linux kernel build-system provides many build targets; the best way to learn about them is probably to do a make help : Configuration targets: config - Update current config utilising a line-oriented program nconfig - Update current config utilising a ncurses menu based program menuconfig - Update current config utilising a menu based program xconfig - Update current config utilising a QT based front-end gconfig - Update current config utilising a GTK based front-end oldconfig - Update current config utilising a provided .config as base localmodconfig - Update current config disabling modules not loaded localyesconfig - Update current config converting local mods to core silentoldconfig - Same as oldconfig, but quietly, additionally update deps defconfig - New config with default from ARCH supplied defconfig savedefconfig - Save current config as ./defconfig (minimal config) allnoconfig - New config where all options are answered with no allyesconfig - New config where all options are accepted with yes allmodconfig - New config selecting modules when possible alldefconfig - New config with all symbols set to default randconfig - New config with random answer to all options listnewconfig - List new options olddefconfig - Same as silentoldconfig but sets new symbols to their default value kvmconfig - Enable additional options for guest kernel support tinyconfig - Configure the tiniest possible kernel As jimmij says in the comments, the interesting parts are in the oldconfig related targets. Personally, I would recommend you to go for silentoldconfig (if nothing changed in the .config file) or olddefconfig (if you updated your .config file with a new kernel). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/224887",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32951/"
]
} |
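Putting the pieces together, a sketch of a fully non-interactive build flow using olddefconfig (which answers any new config symbols with their defaults, so no prompt ever appears; assumes a kernel tree recent enough to have that target):

cp git-tracked-config .config
make olddefconfig        # resolve new symbols with defaults, no interaction
make -j"$(nproc)"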
224,907 | I have been wondering if there is any Unix command that does not make any system call . For example if we take into instance ls it also uses a lot of system calls, like open , read , write . We use the strace command to find those calls. | /bin/true and /bin/false don't have to make any system calls other than the required exit(2) . (GNU true and false are dynamically linked ELF executables, so of course you get the usual boiler-plate startup). If you write a program in asm that tries to return from _start , or that just keeps executing, it will segfault when execution jumps to an unmapped page. So you technically can avoid calling exit() , but it's not worth it. :P You may have read that the simplest implementation of true(1) is an empty file. This only works if called from the shell. touch truechmod 755 true./trueecho $? 0strace ./true write(2, "strace: exec: Exec format error\n", 32) = 32 exit_group(1) If you want to make your own minimal true implementation, then on amd64 you can do: cat > exit64.s <<EOF .section .text .globl _start _start: xor %edi, %edi mov $231, %eax # exit(0) syscall # movl $1,%eax # 32bit int80 can call the 32bit ABI system calls from long mode # int $0x80EOFas exit64.s -o exit64.o && ld exit64.o -o exit64strace ./exit64 execve("./exit64", ["./exit64"], [/* 69 vars */]) = 0 exit_group(0) = ? +++ exited with 0 +++file exit64 exit64: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, not stripped Unlike GNU true , this one doesn't support --help and --version ! (e.g. command true --version to skip the shell-builtin.) See also A Whirlwind Tutorial on Creating Really Teensy ELF Executables for Linux if you thought this was neat. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/224907",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128949/"
]
} |
224,910 | I was doing a distribution upgrade on kali from 1.1.0 to 2.0 using: sudo apt-get dist-upgrade Upgrade was of 2 GB and after 1 GB download, I wanted to download openssh-server, so I did an apt-get install, but an upgrade message was shown ( apt-get upgrade ) ; since it required only 74MB/1GB download, I preferred to do it first, and after installation, I rebooted my laptop and was welcomed with this message: I can't login into gui mode but cli mode is available. I have no idea how to fix it, please help. P.S.: During installation I was asked for some greeting file; it said to choose between downloaded or system file along with a message that system file was modified, so I preferred to choose system file. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/224910",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129382/"
]
} |
224,969 | Is there a constant variable in awk that stores today's date? If not, is there a way that can store today's date for daily use? Let's say we have the below file: boo,foo,2016-08-30foo,boo,2016-07-31 And I need to compare the date $3 in the file, with today's date, regardless what it is. i.e below script awk -F, '{if($3>"2015-08-23"){print $0}}' where 2015-08-23 will be replaced by the current date. | There are no built-in functions in standard awk to get a date, but the date can easily be assigned to a variable. awk -F, -v date="$(date +%Y-%m-%d)" '$3>date' or in an awk script BEGIN { str = "date +%Y-%m-%d"; str | getline date; close(str);}$3>date gawk does have built-in time functions , and strftime can be used. gawk -F, 'BEGIN{date=strftime("%Y-%m-%d")}$3>date' | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/224969",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/123325/"
]
} |
224,979 | It seems that Linux supports changing the owner of a symbolic link (i.e. lchown ) but changing the mode/permission of a symbolic link (i.e. lchmod ) is not supported . As far as I can see this is in accordance with POSIX. However, I do not understand why one would support either one of these operations but not both. What is the motivation behind this? | Linux, like most Unix-like systems (Apple OS/X being one of the rare exceptions), ignores permissions on symlinks, for instance when it comes to resolving their targets. However, ownership of symlinks, like other files, is relevant when it comes to the permission to rename or unlink their entries in directories that have the t bit set, such as /tmp . To be able to remove or rename a file (symlink or not) in /tmp , you need to be the owner of the file. That's one reason one might want to change the ownership of a symlink (to grant or remove permission to unlink/rename it). $ ln -s / /tmp/x$ rm /tmp/x# OK removed$ ln -s / /tmp/x$ sudo chown -h nobody /tmp/x$ rm /tmp/xrm: cannot remove ‘/tmp/x’: Operation not permitted Also, as mentioned by Mark Plotnick in his now deleted answer , backup and archive applications need lchown() to restore symlinks to their original owners. Another option would be to switch euid and egid before creating the symlink, but that would not be efficient and would complicate rights management on the directory the symlink is extracted in. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/224979",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30497/"
]
} |
224,988 | GNU is a Unix-like operating system. That means it is a collection of many programs: applications, libraries, developer tools, even games. The development of GNU, started in January 1984, is known as the GNU Project. Many of the programs in GNU are released under the auspices of the GNU Project; those we call GNU packages. But where does the GNU word come from? | GNU is a recursive acronym, G NU's N ot U NIX It was chosen because: The name “GNU” was chosen because it met a few requirements; first, it was a recursive acronym for “GNU's Not Unix”, second, because it was a real word, and third, it was fun to say (or Sing). See this GNU webpage for more historical information on the the name. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/224988",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
224,992 | I read that there are two folders for unit files (not in user mode). /usr/lib/systemd/system/: units provided by installed packages/etc/systemd/system/: units installed by the system administrator Conflicting with this understanding is the answer to this question: How to write startup script for Systemd . Can someone fill in the missing information so that I understand what is going on? ( UPDATE: The answer has been updated, and my understanding no longer conflicts with it. ) Also, it seems that the scripts are organized in subfolders within the /etc/systemd/system/ folder: getty.target.wantsmulti-user.target.wants In another location I read that there are other locations. It seems these are for user-specific services. /usr/lib/systemd/user/ where services provided by installed packages go./etc/systemd/user/ where system-wide user services are placed by the system administrator.~/.config/systemd/user/ where the user puts their own services. Update 2015-08-31: For the sake of others, here is a link to a related question I recently asked: Where do I put scripts executed by systemd units? | The best place to put system unit files: /etc/systemd/system Just be sure to add a target under the [Install] section, read "How does it know?" for details. UPDATE : /usr/local/lib/systemd/system is another option, read "Gray Area" for details. The best place to put user unit files: /etc/systemd/user or $HOME/.config/systemd/user but it depends on permissions and the situation. Note also that user services will only run while a user session is active. The truth is that systemd units (or as the intro sentence calls them, "unit configurations") can go anywhere —provided you are willing to make manual symlinks and you are aware of the caveats. It makes life easier to put the unit where systemctl daemon-reload can find it for some good reasons: Using a standard location means that systemd generators will find them and make them easy to enable at boot with systemctl enable . This is because your unit will automatically be added to a unit dependency tree (a unit cache). You do not need to think about permissions, because only the right privileged users can write to the designated areas. How does it know? And how exactly does systemctl enable know where to create the symlink? You hard code it within the unit itself under the [Install] section. Usually there is a line like [Install]WantedBy = multi-user.target that corresponds to a predefined place on the filesystem. This way, systemctl knows that this unit is dependent on a group of unit files called multi-user.target ("target" is the term used to designate unit dependency groups. You can list all groups with systemctl list-units --type target ). The group of unit files to be loaded with a target is put in a targetname.target.wants directory. This is just a directory full of symlinks (or the real thing). If your [Install] section says it is WantedBy the multi-user.target , but a symlink to it does not exist in the multi-user.target.wants directory, then it will not load. When the systemd unit generators add your unit file to the dependency tree cache at boot (you can manually trigger generators with systemctl daemon-reload ), it automatically knows where to put the symlink—in this case in the directory /etc/systemd/system/multi-user.target.wants/ should you enable it. Key Points in the Manual: Additional units might be loaded into systemd ("linked") from directories not on the unit load path. See the link command for systemctl(1).
Under systemctl, look for Unit File Commands Unit File Load Path Please read and understand the first sentence in the following quote from man systemd.unit (because it implies that all of the paths I mention here may not apply to you if your systemd was compiled with different paths): Unit files are loaded from a set of paths determined during compilation, described in the two tables below. Unit files found in directories listed earlier override files with the same name in directories lower in the list. When the variable $SYSTEMD_UNIT_PATH is set, the contents of this variable overrides the unit load path. If $SYSTEMD_UNIT_PATH ends with an empty component (":"), the usual unit load path will be appended to the contents of the variable. Table 1 and Table 2 from man systemd.unit are good. Load paths when running in system mode ( --system ). /etc/systemd/system Local configuration /run/systemd/system Runtime units /usr/lib/systemd/system Units of installed packages (or /lib/systemd/system in some cases, read man systemd.unit ) Load path when running in user mode ( --user ) There is a difference between per-user units and global (all-user) units. User-dependent $XDG_CONFIG_HOME/systemd/user User configuration (only used when $XDG_CONFIG_HOME is set) $HOME/.config/systemd/user User configuration (only used when $XDG_CONFIG_HOME is not set) $XDG_RUNTIME_DIR/systemd/user Runtime units (only used when $XDG_RUNTIME_DIR is set) $XDG_DATA_HOME/systemd/user Units of packages that have been installed in the home directory (only used when $XDG_DATA_HOME is set) $HOME/.local/share/systemd/user Units of packages that have been installed in the home directory (only used when $XDG_DATA_HOME is not set) --global (all users) Units that apply to all users--meaning owned by each user, too. So each user can stop these services even if an administrator enables them at boot. /etc/systemd/user Local configuration for all users ( systemctl --global enable userunit.service ) /usr/lib/systemd/user Units of packages that have been installed system-wide for all users (or /lib/systemd/system in some cases, read man systemd.unit) /run/systemd/user Runtime units Gray Area On the one hand, the File Hierarchy Standard (also man file-hierarchy ) specifies that /etc is for local configurations that do not execute binaries. On the other hand it specifies that /usr/local/ "is for use by the system administrator when installing software locally". You could also argue (if not just for the purpose of organization) that all system unit files should go under /usr/local/lib/systemd/system , but this is intended for unit files that are part of "software" not from a package manager. The corresponding systemd user units that are system-wide could go under /usr/local/lib/systemd/user . Transient Unit Another forgotten place is nowhere at all! Perhaps a lesser-known program is systemd-run . You can use it to run transient units on the fly. See man systemd-run . For example, to schedule a reboot tomorrow morning at 4 a.m. (you might need --force to ensure a reboot happens): systemd-run -u restart --description="Restarts machine" --on-calendar="2020-12-18 04:00:00" systemctl --force reboot This will yield a transient unit file restart.service and a corresponding timer (because of --on-calendar , as indicated by transient=yes in the resulting transient unit definition). /run/systemd/transient/restart.service # This is a transient unit file, created programmatically via the systemd API.
Do not edit.[Unit]Description=Restarts machine[Service]ExecStart=ExecStart="/usr/bin/systemctl" "--force" "reboot" Note that there is also the more dangerous double force option --force --force , which tells the kernel to halt immediately (and, if you do not know what you're doing, unsafely, because it is almost equivalent to cutting the power). | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/224992",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33386/"
]
} |
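A minimal concrete example tying the above together: a unit dropped in the local-configuration directory and wired into multi-user.target by systemctl enable (the unit and binary names here are made up for illustration):

cat > /etc/systemd/system/myjob.service <<'EOF'
[Unit]
Description=Example local service

[Service]
ExecStart=/usr/local/bin/myjob

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable myjob.service   # creates the symlink in /etc/systemd/system/multi-user.target.wants/
systemctl start myjob.service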
225,003 | On Windows computers, there is a simple "hide the modes this screen does not support" button. How can I get the available video modes supported (accepted) by my actually connected screen via Linux command line ? I.e: I would like to answer to this question: "Would my 1280x1024 video mode be supported by this screen" I have read about the hwinfo program, but it seems not to be included on Ubuntu anymore. The other method I tested uses vbetool , but I think it is not the appropriate way: luis@Terminus:~$ sudo vbetool vbemode get16673 And I have read too about a method implying the commands execution on GRUB menu (like vbeinfo ), but I would like to find some inside-Linux way. Answers generic for any Linux distro are preferred. If not possible, Ubuntu or Kali are accepted. | Have you tried "xrandr" ? When run without any option, xrandr shows the names of different outputs available on the system (LVDS, VGA-0, etc.) and resolutions available on each Demo output :* $ xrandr -d :0 Screen 0: minimum 64 x 64, current 1920 x 975, maximum 16384 x 16384 VGA-0 connected primary 1920x975+0+0 0mm x 0mm 1920x975 60.0*+ 1600x1200 60.0 1440x1050 60.0 1280x960 60.0 1024x768 60.0 800x600 60.0 640x480 60.0 *Note that you can specify which X display to print info about (-d, --display), as I've done here because I ran the command over SSH (without any X-forwarding). There does need to be at least one X display for xrandr to be of any use. For more info, check out "man xrandr". | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/225003",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/57439/"
]
} |
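To answer the question's concrete test ("would my 1280x1024 video mode be supported?") in a script, a sketch that greps xrandr's mode list (mode lines are indented and followed by refresh rates):

if xrandr -d :0 | grep -q ' 1280x1024 '; then
    echo "1280x1024 is offered by a connected output"
else
    echo "1280x1024 is not listed"
fi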
225,062 | I ran the minimal Debian installation inside a VirtualBox instance, installed X11 and Awesome window manager manually (without any custom configuration yet) and installed VirtualBox additions as well (and enabled shared clipboard in settings). However, copy-pasting text from xterm terminal still doesn't seem to work: CTRL + C is sent as a signal to a terminal, and Shift + Insert inserts the text that I had selected (which probably means that it got copied to some buffer somehow), but it is still unavailable from the host operating system. | X11 uses two buffers: PRIMARY and CLIPBOARD . To copy/paste to the CLIPBOARD buffer you can often use CTRL-C and CTRL-V . You can insert into the PRIMARY buffer by selecting text and paste from it by pressing the middle mouse button. If you want to use the CLIPBOARD buffer, put this in your ~/.Xresources file and use Ctrl + Shift + C and Ctrl + Shift + V to copy/paste from/to the CLIPBOARD buffer in xterm: xterm*VT100.Translations: #override \ Ctrl Shift <Key>V: insert-selection(CLIPBOARD) \n\ Ctrl Shift <Key>C: copy-selection(CLIPBOARD) You need to run xrdb -merge ~/.Xresources after putting that into the file. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/225062",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66352/"
]
} |
225,083 | I want to find the path for directory name bbb where the parent directory is called aaa For example /aaa/bbb/tmp/aaa/bbb/usr/bin/aaa/bbb/home/aaa/bbb/home/aaa/xxx/bbb So I wrote something like this: find /*/aaa -name bbb On some platforms it works and on some it doesn't, and in any case the /aaa/bbb is not found because there is no parent directory to aaa I guess I could run find / -name bbb | grep \/aaa but I wonder if there is something smarter..? | Use the -path option for this case: find / -type d -path '*/aaa/bbb' From the man page for find: File name matches shell pattern pattern. The metacharacters do not treat / or . specially; so, for example, find . -path "./sr*sc" will print an entry for a directory called `./src/misc' (if one exists). Cross-Platform Compatibility Edit: I just noticed the aix and hp-ux tags. You don’t specify which version of find you’re using and the above information applies to GNU find. However, the man page also specifies that, The predicate -path is also supported by HP-UX find and will be in a forthcoming version of the POSIX standard. so it looks like -path can be used in HP-UX but not with AIX (at least not AIX 7.1 , the latest release as of 2015-08-24).
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/225083",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42081/"
]
} |
225,117 | I have many files I need to search for some string. I'm using grep -rl 'pattern' * to find files that contain the pattern. However, I'm only interested in the file count - if the string occurs in more than N files, I want grep to stop immediately after hitting the Nth match (as searching through the whole file hierarchy is a long operation). It would be nice if it returned some meaningful exit code, but if that's impossible then I can just pipe it to wc without a problem. How can I tell grep to stop searching other files after matching the Nth file? | You can pipe grep's result to head . Note that in order to ensure stopping after the Nth match, you need to use stdbuf to make sure grep doesn't buffer its output: stdbuf -oL grep -rl 'pattern' * | head -n10 As soon as head has consumed 10 lines, it terminates, and grep will receive SIGPIPE the next time it writes to the pipe after head is gone. This assumes that no file names contain newlines. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/225117",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129998/"
]
} |
225,130 | I have a file which contains approximately 5 million records, as follows: - 1223423,21,foo,data1,data2,data3,data4,data5,45,267,index14234234,34,bar,cat1,cat2,cat3,cat4,cat5,34,2323,index2325423,23,foo,data1,data2,data3,data4,data5,23,1232,index32131,23,bar,cat1,cat2,cat3,cat4,cat5,22,4334,index41231,43,cat,val1,val2val3,val4,val5,96,4598,index54596,87,cat,val1,val2val3,val4,val5,08,234,index6 Desired output : foo,data1,data2,data3,data4,data5 : index1,index3bar,cat1,cat2,cat3,cat4,cat5 : index2,index4cat,val1,val2val3,val4,val5 : index5,index6 | You can pipe grep result to head . Note, that in order to ensure stopping after Nth match, you need to using stdbuf to make sure grep don't buffer its output: stdbuf -oL grep -rl 'pattern' * | head -n10 As soon as head consumed 10 lines, it terminated and grep will receive SIGPIPE because it still output something to pipe while head was gone. This assumed that no file names contain newline. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/225130",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/123822/"
]
} |
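A minimal awk sketch for the grouping shown in this question's desired output, assuming the group key is always fields 3 to 8 and the index is field 11 of the comma-separated input (the output order of the groups is unspecified):

awk -F, '{
    key = $3
    for (i = 4; i <= 8; i++) key = key "," $i      # build the key from fields 3-8
    idx[key] = (key in idx) ? idx[key] "," $11 : $11
}
END { for (k in idx) print k " : " idx[k] }' file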
225,138 | Question relevant to RedHat/CentOS. I'm a little confused between package and library and how to pin them to a fixed version. There's an X11 rollover bug in recent libxcb versions that affects my C programs. libxcb-1.5-1 doesn't have that bug. So I did: # yum remove libxcb-devel# rpm -Uvh --oldpackage libxcb-1.5-1.el6.i686.rpm And now the bug is gone and my software works fine. Fast forward a few weeks and the bug is back. # yum info libxcb-develLoaded plugins: refresh-packagekit, securityAvailable PackagesName : libxcb-develArch : i686Version : 1.9.1Release : 2.el6Size : 1.0 MRepo : sl-security But is that the installed version or the version available through the repo ? How can I tell which version of libxcb is currently installed ? $ ll /usr/lib/libxcb.so*lrwxrwxrwx 1 root root 15 Aug 6 03:38 /usr/lib/libxcb.so.1 -> libxcb.so.1.1.0*-rwxr-xr-x 1 root root 130752 Oct 14 2014 /usr/lib/libxcb.so.1.1.0*$ readelf -d /usr/lib/libxcb.so.1.1.0 | grep SONAME 0x0000000e (SONAME) Library soname: [libxcb.so.1] So I'm confused between the version number shown by yum, the number on the .so and the number I want... | If you want to know what version is installed , just run: rpm -q libxcb-devel If you want to prevent upgrades to this package, you can add the package to the yum exclude configuration. Add the following to the main section in /etc/yum.conf : exclude=libxcb-devel The library version (e.g., in libxcb.so.1.1.0 ) very seldom tracks the package version, and is meant to track API changes. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/225138",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92092/"
]
} |
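For pinning to an exact version (rather than excluding all updates), the versionlock plugin is the usual yum tool. A sketch, assuming the plugin package is available in your repositories:

yum install yum-plugin-versionlock
yum versionlock libxcb libxcb-devel   # lock the currently installed versions
yum versionlock list                  # show current locks
yum versionlock clear                 # remove all locks later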
225,179 | How can I show a spinner until a command finishes its job? In other words, if I am running a script, I want to show a spinner while the script is running, and the spinner should disappear when the script finishes its job. Below is common spinner code: i=1sp="/-\|"echo -n ' 'while truedoprintf "\b${sp:i++%${#sp}:1}"done How can I link the previous spinner code to a command so that it shows the spinner while the command is running and the spinner disappears when the command finishes its job? If I include the command inside the loop it will loop with the spinner, so what is the solution in this case? | Have your while loop watch for your real command to exit. I'll assume a Linux environment that has /proc entries for each PID, but you could slice it other ways: #!/bin/bash# your real command here, instead of sleepsleep 7 &PID=$!i=1sp="/-\|"echo -n ' 'while [ -d /proc/$PID ]do printf "\b${sp:i++%${#sp}:1}"done | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/225179",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
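A more portable variant of the same idea, using kill -0 (which merely tests that the PID still exists) instead of Linux's /proc; the fractional sleep assumes GNU sleep:

sleep 7 &                 # your real command here
PID=$!
sp='/-\|'
i=1
printf ' '
while kill -0 "$PID" 2>/dev/null; do
    printf '\b%s' "${sp:i++%${#sp}:1}"
    sleep 0.1             # slow the spinner down a bit
done
wait "$PID"               # collect the command's exit status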
225,201 | Is there a way to hide the exit code of the background jobs that have finished? For example, if I run: sleep 5& After the 5s, if I hit return the terminal gives me: [1]+ Done sleep 5 Is there a way to hide/suppress the last notification? Particularly I have to do this:nohup script >> file.txt &And after that I don't want anything more on the terminal (stdout) | $ sleep 5 &[1] 1234$ disown | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/225201",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/103313/"
]
} |
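For the exact case in the question, the two can be combined: nohup keeps the job alive after logout, and disown removes it from the shell's job table so no "Done" notice is ever printed:

nohup script >> file.txt 2>&1 &
disown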
225,217 | When it comes to passwd/user-password-crypted statement in a preseed file, most examples use an MD5 hash. Example: # Normal user's password, either in clear text#d-i passwd/user-password password insecure#d-i passwd/user-password-again password insecure# or encrypted using an MD5 hash.#d-i passwd/user-password-crypted password [MD5 hash] From Debian's Appendix B. Automating the installation using preseeding . A few sources show that it's also possible to use SHA-512: Try using a hashed password like this: $ mkpasswd -m sha-512 [...] And then in your preseed file: d-i passwd/user-password-crypted password $6$ONf5M3F1u$bpljc9f1SPy1w4J2br[...] From Can't automate user creation with preseeding on AskUbuntu . This is slightly better than MD5, but still doesn't resist well against brute force and rainbow tables. What other algorithms can I use? For instance, is PBKDF2 supported, or am I limited by the algorithms used in /etc/shadow , that is MD5, Blowfish, SHA-256 and SHA-512 ? | You can use anything which is supported in the /etc/shadow file. The string given in the preseed file is just put into /etc/shadow. To create a salted password to make it more difficult just use mkpasswd with the salt option (-S): mkpasswd -m sha-512 -S $(pwgen -ns 16 1) mypassword$6$bLyz7jpb8S8gOpkV$FkQSm9YZt6SaMQM7LPhjJw6DFF7uXW.3HDQO.H/HxB83AnFuOCBRhgCK9EkdjtG0AWduRcnc0fI/39BjmL8Ee1 In the command above the salt is generated by pwgen . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/225217",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11229/"
]
} |
225,219 | On Debian 8.1, I'm using a Bash feature to detect whether the stackoverflow.com website is reachable: (echo >/dev/tcp/stackoverflow.com/80 ) &>/dev/null || echo "stackoverflow unreachable" This is Bash-specific and will not work in sh , the default shell of cron . If we, on purpose, try the script in sh , we get: $ /bin/sh: 1: cannot create /dev/tcp/stackoverflow.com/80: Directory nonexistent Hence, if I only put the following in my personal crontab (without setting SHELL to /bin/bash ) via crontab -e , I expect that once per minute, the script will be executed, and I therefore expect to also get the above error sent per mail once per minute: * * * * * (echo >/dev/tcp/stackoverflow.com/80) &>/dev/null || echo "stackoverflow unreachable" And indeed, exactly as expected, we see from /var/log/syslog that the entry is executed once per minute: # sudo grep stackoverflow /var/log/syslog Aug 24 18:58:01 localhost CRON[13719]: (mat) CMD ((echo >/dev/tcp/stackoverflow.com/80) &>/dev/null || echo "stackoverflow unreachable")Aug 24 18:59:01 localhost CRON[13723]: (mat) CMD ((echo >/dev/tcp/stackoverflow.com/80) &>/dev/null || echo "stackoverflow unreachable")Aug 24 19:00:01 localhost CRON[13727]: (mat) CMD ((echo >/dev/tcp/stackoverflow.com/80) &>/dev/null || echo "stackoverflow unreachable")... During the last ~2 hours, this was executed more than 120 times already, as I can verify with piping the output to wc -l . However , from these >120 times the shell command (to repeat: the shell command is invalid for /bin/sh ) has been executed, I only got three e-mails: The first one at 19:10:01, the second at 20:15:01, and the third at 20:57:01. The content of all three mails reads exactly as expected and contains exactly the error message that is to be expected from running the script in an incompatible shell (on purpose). For example, the second mail I received reads (and the other two are virtually identical): From [email protected] Mon Aug 24 20:15:01 2015From: [email protected] (Cron Daemon)To: [email protected]: Cron (echo >/dev/tcp/stackoverflow.com/80)&>/dev/null || echo "stackoverflow unreachable"... /bin/sh: 1: cannot create /dev/tcp/stackoverflow.com/80: Directory nonexistent` From /var/log/mail.log , I see that these three mails were the only mails sent and received in the last hours. Thus, where are the >100 additional mails we would expect to receive from cron due to the above output that is created by the erroneous script? To summarize: Mail is configured correctly on this system, I can send and receive mails without problem with /usr/bin/sendmail . Cron is set up correctly, notices the task as expected and executes it precisely at the configured times. I have tried many other tasks and scheduling options, and cron executed them all exactly as expected. The script always writes output (see below) and we thus expect cron to send the output to me via mail for each invocation. The output is mailed to me only occasionally, and apparently ignored in most cases. There are many ways to work around the obvious mistake that led to the above observations: I can set SHELL=/bin/bash in my crontab . I can create a heartbeat.sh with #!/bin/bash , and invoke that. I can invoke the script with /bin/bash -c ... within crontab . etc., all fixing the mistake of using a Bash-specific feature within sh . However, all of this does not address the core issue of this question, which is that in this case, cron does not reliably send mails even though the script always creates output. 
I have verified that the script always creates output by creating wrong.sh (which again on purpose uses the unsuitable /bin/sh shell, to produce the same error that cron should see):

#!/bin/sh
(echo >/dev/tcp/stackoverflow.com/80) &>/dev/null || echo "stackoverflow unreachable"

Now I can invoke the script in a loop and see if there ever is a case where it finishes without creating output. Using Bash:

$ while true; do [[ -n $(./wrong.sh 2>&1 ) ]]; echo $?; done | grep -v 0

Even in thousands of invocations, I could not reproduce a case where the script finishes without creating output. What may be the cause of this unpredictable behaviour? Can anyone reproduce this? To me, it looks like there may be a race condition where cron can miss a script's output, possibly primarily involving cases where the error stems from the shell itself. Thank you! | Upon further testing, I suspect the & is messing with your results. As you point out, &>/dev/null is Bash syntax, not sh syntax. As a result, sh is creating a subshell and backgrounding it. Sure, the subshell's echo creates stderr, but my theory is that cron is not catching the subshell's stderr, and that backgrounding the subshell always completes successfully, thus bypassing your || echo ... and leaving the cron job with no output and thus no mail. Based on my reading of the vixie-cron source, it would seem that the job's stderr and stdout would be captured by cron, but it must be getting lost by the subshell. Test it yourself in a /bin/sh environment (assuming you do not have a file named 'bar' here):

(grep foo bar) &
echo $? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/225219",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130058/"
]
} |
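To make the parsing difference central to the answer above concrete (a sketch, assuming dash or another strict POSIX sh as /bin/sh ): Bash treats &> as a single operator redirecting both stdout and stderr, while POSIX sh splits the same text at the & into two commands:

(echo >/dev/tcp/stackoverflow.com/80) &   # subshell is backgrounded; its failure is never tested
>/dev/null                                # empty command with stdout redirected, exits 0

The || therefore hangs off the second, always-successful command, so the echo "stackoverflow unreachable" branch never runs in the foreground job that cron watches; any mail content comes only from the backgrounded subshell's stderr, which is consistent with the intermittent mails observed in the question.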
225,236 | I occasionally see things like: cat file | wc | cat > file2 Why do this? When will the results (or performance) differ (favourably) from simply: cat file | wc > file2 | cat file | wc | cat > file2 would usually be two useless uses of cat as that's functionally equivalent to: < file wc > file2 However, there may be a case for: cat file | wc -c over < file wc -c That is to disable the optimisation that many wc implementations do for regular files. For regular files, the number of bytes in the file can be obtained without having to read the whole content of the file, but just doing a stat() system call on it and retrieving the size as stored in the inode. Now, one may want the file to be read, for instance because:

- the stat() information cannot be trusted (like for some files in /proc or /sys on Linux):
$ < /sys/class/net/lo/mtu wc -c
4096
$ cat /sys/class/net/lo/mtu | wc -c
6
- one wants to check how much of the data can be read (as in the case of a failing hard drive).
- one just wants to obtain benchmarks on how fast the data can be read.
- one wants the content of the file to be cached in memory.

Of course, those are exceptions. In the general case, you'd rather use < file wc -c for performance reasons. Now, you can imagine even more far-fetched scenarios where one may want to use cat file | wc | cat > file2 :

- maybe wc has an apparmor profile or other security mechanism that prohibits it from reading or writing to files while it's allowed for cat (that would be unheard of)
- maybe cat is able to deal with large (as in > 2^32 bytes) files, but not wc on that system (things like that have been needed for some commands on some systems in the past).
- maybe one wants wc (and the first cat ) to run and read the whole file (and be killed at the very last minute) even if file2 can't be opened for writing.
- maybe one wants to hide the failure (exit status) of opening or reading the content of file . Though wc < file > file2 || : would make more sense.
- maybe one wants to hide (from the output of lsof (list open files)) the fact that he's getting a word count from file or that he's storing a word count in file2 . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/225236",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/62835/"
]
} |
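The wc optimisation described above is easy to observe (typical behaviour of GNU wc , shown with strace ): with a redirected regular file, wc -c can fstat() / lseek() to the end instead of reading the data, while a pipe forces it to read() everything:

strace -e trace=read wc -c < /etc/services        # few or no read() calls on fd 0
cat /etc/services | strace -e trace=read wc -c    # one read() per buffer of piped data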
225,319 | I want to debug/test a program in Eclipse that uses a Redis server, so I decided to turn the server into a user service to have the privilege of running it. What bothers me is that I can start or stop the service but not enable/disable it. The error I get is: Failed to execute operation: No such file or directory Original unit in /usr/lib/systemd/system:

[Unit]
Description=Advanced key-value store
After=network.target

[Service]
User=arkos
ExecStart=/usr/bin/redis-server /etc/arkos/arkos-redis.conf
ExecStop=/usr/bin/redis-cli shutdown

[Install]
WantedBy=multi-user.target

Edited and moved to /usr/lib/systemd/user:

[Unit]
Description=Advanced key-value store

[Service]
ExecStart=/usr/bin/redis-server /etc/arkos/arkos-redis.conf
ExecStop=/usr/bin/redis-cli shutdown

[Install]
WantedBy=default.target

Systemctl status:

● arkos-redis.service - Advanced key-value store
   Loaded: loaded (/usr/lib/systemd/user/arkos-redis.service; disabled; vendor preset: enabled)
   Active: active (running) since Tue 2015-08-25 09:19:25 UTC; 1min 55s ago
  Process: 644 ExecStop=/usr/bin/redis-cli shutdown (code=exited, status=1/FAILURE)
 Main PID: 736 (redis-server)
   CGroup: /user.slice/user-1000.slice/[email protected]/arkos-redis.service
           └─736 /usr/bin/redis-server *:0

Aug 25 09:19:25 arkos-vagrant redis-server[736]: [Redis ASCII-art startup banner]
Aug 25 09:19:25 arkos-vagrant redis-server[736]: 736:M 25 Aug 09:19:25.471 # Server started, Redis version 3.0.3
Aug 25 09:19:25 arkos-vagrant redis-server[736]: 736:M 25 Aug 09:19:25.472 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
Aug 25 09:19:25 arkos-vagrant redis-server[736]: 736:M 25 Aug 09:19:25.472 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
Aug 25 09:19:25 arkos-vagrant redis-server[736]: 736:M 25 Aug 09:19:25.472 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
Aug 25 09:19:25 arkos-vagrant redis-server[736]: 736:M 25 Aug 09:19:25.472 * The server is now ready to accept connections at /tmp/arkos-redis.sock

Journalctl:

Aug 25 09:19:25 arkos-vagrant redis-server[736]: 736:M 25 Aug 09:19:25.470 # You requested maxclients of 10000 requiring at least 10032 max file descrip
Aug 25 09:19:25 arkos-vagrant redis-server[736]: 736:M 25 Aug 09:19:25.470 # Redis can't set maximum open files to 10032 because of OS error: Operation
Aug 25 09:19:25 arkos-vagrant redis-server[736]: 736:M 25 Aug 09:19:25.470 # Current maximum open files is 4096. maxclients has been reduced to 4064 to
Aug 25 09:19:25 arkos-vagrant redis-server[736]: [Redis ASCII-art banner: Redis 3.0.3 (00000000/0) 64 bit, Running in standalone mode, Port: 0, PID: 736, http://redis.io]
Aug 25 09:19:25 arkos-vagrant redis-server[736]: 736:M 25 Aug 09:19:25.471 # Server started, Redis version 3.0.3
Aug 25 09:19:25 arkos-vagrant redis-server[736]: 736:M 25 Aug 09:19:25.472 # WARNING overcommit_memory is set to 0! Background save may fail under low m
Aug 25 09:19:25 arkos-vagrant redis-server[736]: 736:M 25 Aug 09:19:25.472 # WARNING you have Transparent Huge Pages (THP) support enabled in your kerne
Aug 25 09:19:25 arkos-vagrant redis-server[736]: 736:M 25 Aug 09:19:25.472 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sy
Aug 25 09:19:25 arkos-vagrant redis-server[736]: 736:M 25 Aug 09:19:25.472 * The server is now ready to accept connections at /tmp/arkos-redis.sock | Symlink issue? I had a similar error message when using symbolic links. Apparently systemd doesn't follow symbolic links; the solution is simply to copy or move the file. User service? I believe that you need to add --user to the command line for units in user/ : sudo systemctl --user enable arkos-redis.service | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/225319",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126513/"
]
} |
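For the user-service route in the answer above, a minimal per-user workflow looks like this (a sketch; the unit name is from the question, and user units are normally kept in ~/.config/systemd/user rather than /usr/lib/systemd/user ):

mkdir -p ~/.config/systemd/user
cp /usr/lib/systemd/user/arkos-redis.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable arkos-redis.service
systemctl --user start arkos-redis.service

Note that systemctl --user talks to the caller's own systemd instance, so it is normally run without sudo .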
225,323 | How can I identify whether a directory was created by the Linux system or by a user like root? Example: in /etc there is a directory named sys which is created by Linux. And I logged in as root and created the directory sys1. Then how can I differentiate them? Example: var is a system-created directory, whereas test was created by the root user.

drwxrwxrwx 34 root root 4096 Aug 25 22:52 var
drwxr-xr-x  2 root root 4096 Aug 25 23:19 test | I don't think that there is a concept of a "directory created by the system". When you're installing your system, the installation media often gets the job done for you; you see the result (e.g. the /etc directory created), but that really is done by a user who happened to run a script. Anything created by the "system" could be treated as created by root, but there's no way of telling whether that was automated or not. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/225323",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/114633/"
]
} |
225,338 | I want to download the patch series RFC PATCH 00/26 i.MX5/6 IPUv3 CSI/IC . In patchwork I can get access to individual patches: https://patchwork.linuxtv.org/patch/24331/ . But downloading 26 patches and then applying them one by one gets tedious. Is there a way to download the complete patch series with patchwork or by other means? The question How do I get a linux kernel patch set from the mailing list? suggests marc.info and lkml.org for downloading individual patches, but I want the whole series at once. How do I do that? | The patchwork project information page at https://patchwork.linuxtv.org/project/linux-media/ has a couple of links at the bottom to pwclient and a sample .pwclientrc . Once you set these up, you can use pwclient list to search for patches and pwclient git-am to apply them. The awkward part is that there's apparently no single command to search and apply in one go. I used a combination of the two to get (for example) Philipp Zabel's recent IPU CSI patch series like this:

pwclient list -w "Philipp Zabel" -s New v2 -f %{id} | egrep '^[0-9]' | xargs pwclient git-am | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/225338",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5694/"
]
} |
225,350 | I am trying to find out the triplet for my device, because I am trying to cross-compile, but there is no gcc installed on the target device and I am not allowed to install it. With gcc installed I could just write gcc -dumpmachine . Is it possible to find this information without gcc? | You can get a lot of information by means of uname and also by checking with file the type of executables:

$ gcc -dumpmachine
x86_64-linux-gnu
$ uname -o -m
x86_64 GNU/Linux
$ file /usr/bin/file
/usr/bin/file: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=d8ac02806880576708bf189c064fca78ea89f1d0, stripped

If your device doesn't have file installed, copy a binary executable from it to another Linux computer and run file there. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/225350",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124191/"
]
} |
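For a rough guess at the triplet without any compiler, the pieces can be assembled from the answer's commands (a sketch; the -linux-gnu suffix assumes a glibc-based distro):

echo "$(uname -m)-linux-gnu"

On ARM this is not enough, because hard-float and soft-float systems differ only in the ABI suffix ( arm-linux-gnueabihf vs arm-linux-gnueabi ); there the interpreter shown by file is the giveaway, e.g. an ld-linux-armhf.so.3 interpreter indicates the gnueabihf triplet.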
225,356 | I wrote a script that contains an until loop. This loop should run until a boolean variable is set to true from outside the loop. Unfortunately the loop ignores that the variable has been set to true and keeps running. Here are a few lines which reproduce this problem:

boolean=false
{ sleep 5 && boolean=true && echo "boolean is true now" ; } &
{ until [ "$boolean" = true ] ; do sleep 1 && echo $boolean ; done ; } &&
echo "boolean is true now: $boolean"

The output this generates is:

false
false
false
false
boolean is true now
false
false
false
...

How can I make the loop exit when the boolean is set to true ? | The & character makes a background process. A background process is executed asynchronously in a subshell. Variables can be passed from a parent shell to a subshell, not the other way around. However, you can do a workaround if you really need the value set in the child shell:

boolean=$(mktemp) && echo "false" >$boolean
{ sleep 5 && echo true >$boolean && echo "boolean is true now" ; } &
{ until [ "$(cat $boolean)" = "true" ] ; do sleep 1 && cat $boolean ; done ; } &&
echo "boolean is true now: $(cat $boolean)"

That makes a temporary file whose content is the boolean value. In the until loop that file is checked until it contains true. Notice: I recommend doing the following in your script (if possible):

{ sleep 5 && echo "background process done" ; } &
wait
echo "continue in foreground."

wait waits for the background process to finish. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/225356",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129329/"
]
} |
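If the parent only needs to pause until the background job is done, the wait suggestion from the answer can replace the flag file entirely (plain Bash, adapted from the question's snippet):

boolean=false
{ sleep 5 && echo "background work done" ; } &
wait            # blocks until the background job exits
boolean=true    # set in the parent shell, where it is visible
echo "boolean is true now: $boolean"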
225,401 | I check service status with systemctl status service-name . By default, I see only a few rows, so I add -n50 to see more. Sometimes, I want to see the full log, from the start. It could have thousands of rows. Now, I check it with -n10000 , but that doesn't look like a neat solution. Is there an option to check the full systemd service log, similar to the less command? | Just use the journalctl command, as in: journalctl -u service-name.service Or, to see only log messages for the current boot: journalctl -u service-name.service -b For things named <something>.service , you can actually just use <something> , as in: journalctl -u service-name But for other sorts of units (sockets, targets, timers, etc.), you need to be explicit. In the above commands, the -u flag is short for --unit , and specifies the name of the unit in which you're interested. -b is short for --boot , and restricts the output to only the current boot so that you don't see lots of older messages. See the journalctl man page for more information. | {
"score": 11,
"source": [
"https://unix.stackexchange.com/questions/225401",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32942/"
]
} |
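A few related invocations that build on the answer (standard journalctl options):

journalctl -u service-name.service -f             # follow new messages, like tail -f
journalctl -u service-name.service --since today  # limit output by time
journalctl -u service-name.service --no-pager     # dump everything without the pager

By default journalctl already pipes its output through a pager, so plain journalctl -u service-name.service gives exactly the less-like behaviour asked about.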
225,412 | I installed Apache using this script https://gist.github.com/Benedikt1992/e88c2114fee15422a4eb The system is a freshly installed CentOS 6.7 minimal system. After installation I can find the apache in /usr/local/apache2/ but I can't start the apache with service or enable start on boot with chkconfig . What am I missing? | | {
"score": 11,
"source": [
"https://unix.stackexchange.com/questions/225412",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/125010/"
]
} |
225,445 | I have a file called install.sh , and within this file I write something to the $HOME/.bashrc file; after that I must call the source command. In a terminal I can type source $HOME/.bashrc , but I can't do this in the script. If I write this to the file, then I get the following error: ./install.sh: 1: ./install.sh: source: not found I am using Ubuntu 12.04 x64. Any suggestions on how to do that? | If you want to program a bash script, then change your shebang (first line of the script file) to #!/bin/bash | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/225445",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130238/"
]
} |
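If the script must stay /bin/sh -compatible, the portable equivalent of source is the dot command (a POSIX special built-in):

. "$HOME/.bashrc"

source is a Bash (and csh) spelling; dash, the /bin/sh on Ubuntu 12.04, only knows . . Note that sourcing .bashrc from a non-interactive sh can still fail on Bash-specific lines inside it, so the shebang fix in the answer is usually the better route.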
225,451 | I see a lot of posts out there that say you type in sudo su to get an interactive prompt with root privileges, and I see equally many posts debating the pros and cons of sudo -i vs sudo su . Here I'm sitting scratching my head over why people don't just write su ... After all, su is short for substitute user and switches to root by default, so is there really any need at all to write sudo su ? | If you can simply use su , you should. But in most modern (desktop) Linux distributions (for example Ubuntu) the root user is disabled and has no password set. Therefore you cannot switch to the root user with su (you can try). You have to use sudo to run su with root privileges: sudo su . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/225451",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74489/"
]
} |
225,537 | In Linux, every single entity is considered a FILE. If I do vim <cd-Name> then vim will open the directory content in its editor, because it does not differentiate between files and directories. But today while working, I encountered a thing which I am curious to know about. I planned to open a file from a nested directory: vim a/b/c/d/file But instead of vim , I typed cd a/b/c/d/ and hit TAB twice, and the command was showing only the available directories of the "d" directory rather than files. Doesn't the cd command honour " everything is a file "? Or am I missing something? | The " Everything is a file " phrase defines the architecture of the operating system. It means that everything in the system, from processes, files, directories, sockets, pipes, ... is represented by a file descriptor abstracted over the virtual filesystem layer in the kernel. The virtual filesystem is an interface provided by the kernel. Hence the phrase was corrected to say " Everything is a file descriptor ". Linus Torvalds himself corrected it again a bit more precisely: " Everything is a stream of bytes ". However, every "file" also has an owner and permissions you may know from regular files and directories. Therefore classic Unix tools like cat, ls, ps, ... can query all those "files", and there is no need to invent special mechanisms beyond the plain old tools, which all use the read() system call. For example, in Microsoft's OS family there are multiple different read() system calls (I heard about 15) for the various file types, and every one of them is a bit different. When everything is a file, you don't need that. To your question: Of course there are different file types . In Linux there are 7 file types . The directory is one of them. But the utilities can distinguish them from each other. For example, the completion function of the cd command (when you press TAB ) only lists directories, because the stat() system call (see man 2 stat ) returns a struct with a field called st_mode . The POSIX standard defines what that field can contain:

S_ISREG(m)   is it a regular file?
S_ISDIR(m)   directory?
S_ISCHR(m)   character device?
S_ISBLK(m)   block device?
S_ISFIFO(m)  FIFO (named pipe)?
S_ISLNK(m)   symbolic link? (Not in POSIX.1-1996.)
S_ISSOCK(m)  socket? (Not in POSIX.1-1996.)

The cd command completion function just displays "files" where the S_ISDIR flag is set. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/225537",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4843/"
]
} |
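The st_mode distinction described above can be seen directly from the shell (assuming GNU coreutils for stat ; the [ -d test is POSIX and boils down to the same S_ISDIR check on the stat() result):

stat -c %F /etc           # prints "directory"
stat -c %F /etc/passwd    # prints "regular file"
[ -d /etc ] && echo "cd completion would offer this one"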
225,541 | I use Mac's ssh to get access to a server. I find the emacs on the server can not recognize the ALT key. My server is RHEL with emacs23. | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/225541",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130297/"
]
} |
225,549 | I've been looking for the QEMU Guest Agent for Ubuntu 12.04 LTS. It seems like the Guest Agent is included in the official Repository from Ubuntu 14.04 and up ( http://packages.ubuntu.com/trusty/qemu-guest-agent ). Is there a way to get the Guest Agent running in 12.04? Update Compiling and/or installing qemu-guest-agent from the Trusty Repos seems to be the solution. While testing different VMs I noticed that the hosts have different OS versions (one with Precise/KVM and the other with Trusty/Spice). So my problem seems to be related to the combination of host and guest OSes. I have opened another question for this! | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/225549",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102403/"
]
} |
225,596 | Please consider the file below:

foo,1000,boo,A
foo,1000,boo,B
foo,1001,boo,B
foo,1002,boo,D

And we have the rules below:

If $2 equals 1000, $4 should equal A
If $2 equals 1001, $4 should equal B
If $2 equals 1002, $4 should equal C

I want to apply the above rules in a single awk command, printing the record whenever $4 does not obey. The desired output would be:

foo,1000,boo,B
foo,1002,boo,D

I tried with: awk -F, '{if(($2==1000 && $4!=A) || ($2==1001 && $4!=B) || ($4==1002 && $4!=C)){print $0}}' | Use this: awk -F, '($2==1000 && $4!="A") || ($2==1001 && $4!="B") || ($2==1002 && $4!="C")' file Inside each pair of parentheses is one of the 3 conditions; if any of them applies, the line will be printed. The two tests inside each pair are connected with an AND, so both must apply. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/225596",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/123325/"
]
} |
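Why the attempt in the question fails: in awk, a bare A (or B , C ) is an uninitialized variable that compares as the empty string, so $4!=A is true for every non-empty field; string constants must be quoted, as in the answer. (There is also a typo: $4==1002 where $2==1002 was meant.) A quick demonstration:

echo A | awk '{print ($1!=A), ($1!="A")}'    # prints "1 0": unquoted A is empty, "A" matches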
225,612 | On my local machine, I have /sys/block/sda1/stat . On an Amazon machine, I have /sys/block/xvda1/stat . When I run cat /sys/block/sda1/stat or cat /sys/block/xvda1/stat both give 11 fields of output. What is the difference between /sys/block/sda1/stat and /sys/block/xvda1/stat files? | The /sys directory is generally where the sysfs filesystem is mounted, which contains information about devices and other kernel information. The files in /sys/block contain information about block devices on your system. Your local system has a block device named sda , so /sys/block/sda exists. Your Amazon instance has a device named xvda , so /sys/block/xvda exists. The /sys/block/<dev>/stat file provides several statistics about the state of block device <dev> . It consists of a single line of text containing 15 decimal values separated by whitespace:

Name             units         description
----             -----         -----------
read I/Os        requests      number of read I/Os processed
read merges      requests      number of read I/Os merged with in-queue I/O
read sectors     sectors       number of sectors read
read ticks       milliseconds  total wait time for read requests
write I/Os       requests      number of write I/Os processed
write merges     requests      number of write I/Os merged with in-queue I/O
write sectors    sectors       number of sectors written
write ticks      milliseconds  total wait time for write requests
in_flight        requests      number of I/Os currently in flight
io_ticks         milliseconds  total time this block device has been active
time_in_queue    milliseconds  total wait time for all requests
discard I/Os     requests      number of discard I/Os processed
discard merges   requests      number of discard I/Os merged with in-queue I/O
discard sectors  sectors       number of sectors discarded
discard ticks    milliseconds  total wait time for discard requests

So, each block device will have its own stat istics file, hence the different values. See kernel docs for more details. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/225612",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119746/"
]
} |
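To attach names to the values on an 11-field kernel like the one in the question (a sketch; adjust the device path, and note that newer kernels append the four discard fields, so scripts should not assume a fixed count):

read -r rio rmerge rsect rticks wio wmerge wsect wticks inflight ioticks timeinq \
    < /sys/block/xvda1/stat
echo "reads=$rio writes=$wio in_flight=$inflight"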
225,684 | When installing packages via yum on a RHEL Server 6.6 system, I get the following error:

$ sudo yum install foo
Loaded plugins: product-id, rhnplugin, security, subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Install Process
Error: xz compression not available

Search engine searches suggest that the pyliblzma package is missing. I can't install this via sudo yum install pyliblzma because I run into the same xz compression not available error. Instead, I downloaded the RPM archive and installed it via rpm :

$ wget http://download.fedoraproject.org/pub/epel/6/SRPMS/pyliblzma-0.5.3-3.el6.src.rpm
$ sudo rpm -ivh pyliblzma-0.5.3-3.el6.src.rpm

It appears to not have been installed successfully; the following returns no results: $ rpm -qa | grep pyliblzma What should I do to correctly install pyliblzma and fix the xz compression not available error? | To fix this:
1. yum remove epel-release
2. Re-download the epel-release RPM
3. Install it
4. yum install pyliblzma | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/225684",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38132/"
]
} |
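One detail worth knowing about the steps in the question: pyliblzma-0.5.3-3.el6.src.rpm is a source RPM. Installing it with rpm -ivh only unpacks the sources under the rpmbuild tree; it registers no package, which is why rpm -qa | grep pyliblzma prints nothing. Building and installing from it would look like this (a sketch, assuming the rpm-build tooling is present):

rpmbuild --rebuild pyliblzma-0.5.3-3.el6.src.rpm
sudo rpm -ivh ~/rpmbuild/RPMS/*/pyliblzma-*.rpm

Fixing the epel-release repository as in the answer avoids this entirely.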
225,687 | If a Unix (POSIX) process receives a signal, a signal handler will run. What will happen to it in a multithreaded process? Which thread receives the signal? In my opinion, the signal API should be extended to handle that (i.e. the thread of the signal handler should be able to be determined), but hunting for info on the net I only found years-long flame wars on the linux kernel mailing list and on different forums. As I understood, Linus' concept differed from the POSIX standard, and first some compat layer was built, but now Linux follows the POSIX model. What is the current state? | The entry in POSIX on " Signal Generation and Delivery " in "Rationale: System Interfaces General Information" says Signals generated for a process are delivered to only one thread. Thus, if more than one thread is eligible to receive a signal, one has to be chosen. The choice of threads is left entirely up to the implementation both to allow the widest possible range of conforming implementations and to give implementations the freedom to deliver the signal to the "easiest possible" thread should there be differences in ease of delivery between different threads. From the signal(7) manual on a Linux system: A signal may be generated (and thus pending) for a process as a whole (e.g., when sent using kill(2) ) or for a specific thread (e.g., certain signals, such as SIGSEGV and SIGFPE, generated as a consequence of executing a specific machine-language instruction are thread directed, as are signals targeted at a specific thread using pthread_kill(3) ). A process-directed signal may be delivered to any one of the threads that does not currently have the signal blocked. If more than one of the threads has the signal unblocked, then the kernel chooses an arbitrary thread to which to deliver the signal. And in pthreads(7) : Threads have distinct alternate signal stack settings. However, a new thread's alternate signal stack settings are copied from the thread that created it, so that the threads initially share an alternate signal stack (fixed in kernel 2.6.16). From the pthreads(3) manual on an OpenBSD system (as an example of an alternate approach): Signal handlers are normally run on the stack of the currently executing thread. (I'm currently not aware of how this is handled when multiple threads are executing concurrently on a multi-processor machine) The older LinuxThreads implementation of POSIX threads only allowed distinct single threads to be targeted by signals. From pthreads(7) on a Linux system: LinuxThreads does not support the notion of process-directed signals: signals may be sent only to specific threads. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/225687",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52236/"
]
} |
225,736 | If I compile a program using gcc and try to execute it from the bash shell, what is the exact sequence of steps followed by bash to execute it? I know fork() , execve() , the loader, the dynamic linker (and other things) are involved, but can someone give an exact sequence of steps and some suitable reading reference? Edit: From the answers, it seems the question could imply many possibilities. I want to narrow it down to a simple case: (test.c just prints hello world)

$ gcc test.c -o test
$ ./test

What will be the steps in the above case ( ./test ), specifically relating to bash starting the program in a child process, doing loading, linking etc.? | Well, the exact sequence may vary, as there might be a shell alias or function that first gets expanded/interpreted before the actual program gets executed, and then differences for a qualified filename ( /usr/libexec/foo ) versus something that will be looked for through all the directories of the PATH environment variable (just foo ). Also, the details of the execution may complicate matters, as foo | bar | zot requires more work for the shell (some number of fork(2) , dup(2) , and, of course, pipe(2) , among other system calls), while something like exec foo is much less work, as the shell merely replaces itself with the new program (i.e., it doesn't fork ). Also important are process groups (especially the foreground process group, all PIDs of which eat SIGINT when someone starts mashing on Ctrl + C ), sessions, and whether the job is going to be run in the background, monitored ( foo & ), or background, ignored ( foo & disown ). I/O redirection details will also change things, e.g., if standard input is closed by the shell ( foo <&- ), or whether a file is opened as stdin ( foo < blah ). strace or similar will be informative about the specific system calls made along this process, and there should be man pages for each of those calls. Suitable system-level reading would be any number of chapters from Stevens's "Advanced Programming in the UNIX Environment", while a shell book (e.g., "From Bash to Z Shell") will cover the shell side of things in more detail. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/225736",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63934/"
]
} |
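For the narrow ./test case in the question, the usual sequence is roughly this (a sketch of the common glibc/ELF path; details vary by bash version and platform):

1. bash parses the line and finds no alias, function, or builtin named ./test .
2. fork() (implemented via clone() on Linux) creates a child shell process.
3. The child sets up redirections and job control, then calls execve("./test", ...) .
4. The kernel maps the ELF segments and the interpreter named in the binary ( /lib64/ld-linux-x86-64.so.2 or similar).
5. The dynamic linker maps and relocates libc, resolves symbols, runs constructors, then jumps to the entry point, which eventually calls main() .
6. On exit, the kernel sends SIGCHLD ; bash reaps the child with a wait() -family call and stores the status in $? .

You can watch most of this with:

strace -f -e trace=execve,clone,wait4 bash -c ./test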
225,798 | I have a lot of filenames listed in a .txt file called to_be_archived_files.txt , and its content is:

~/Documents/dir1/a.html
~/Documents/dir1/b.html
~/Documents/dir1/c/1.html

How do I add all of them into an archive.tar.gz with tar ? I want something like: tar czf archive.tar.gz -FROM-FILELIST=to_be_archived_files.txt How can I achieve that? | Use the -T option: -T, --files-from FILE get names to extract or create from FILE Note that it doesn't work with ~ as an alias for the home directory; you need to specify the folder explicitly. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/225798",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45317/"
]
} |
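Putting the answer together for the list in the question, with the ~ caveat handled by sed (assumes GNU tar, whose -T - reads the file list from stdin):

sed "s|^~|$HOME|" to_be_archived_files.txt | tar czf archive.tar.gz -T -

or, if the list already contains absolute paths:

tar czf archive.tar.gz -T to_be_archived_files.txt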
225,802 | To debug a JACK/Pulseaudio issue, I want to understand when and why the pulseaudio daemon is started by systemd (on Fedora). Using: $ ps -o'pid,ppid,args' `pgrep pulse` I see that the pulseaudio daemon is being started by systemd (pid=1):

PID  PPID COMMAND
2738    1 /usr/bin/pulseaudio --start

However, I was unable to find any unit file on my system containing pulseaudio or even just pulse . My specific questions are: A) Is there a way to determine the systemd unit that caused the creation of a specific process (in my example output, process 2738, the PA daemon)? B) Are there alternative approaches to find out which unit-dependency chain or other settings of systemd resulted in the invocation of /usr/bin/pulseaudio --start ? | A) Is there a way to determine the systemd unit that caused the creation of a specific process (in my example output, process 2738, the PA daemon)? Sure. You can run systemctl status <pid> and systemd will find you the unit that contains that PID. For example, on my system I find a dnsmasq process:

# ps -fe | grep dnsmasq
nobody 18834 1193 0 Aug25 ? 00:00:10 /usr/sbin/dnsmasq ...

Who started it?

# systemctl status 18834
● NetworkManager.service - Network Manager
   Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2015-08-25 11:07:40 EDT; 1 day 21h ago
 Main PID: 1193 (NetworkManager)
   Memory: 1.1M
   CGroup: /system.slice/NetworkManager.service
           ├─ 1193 /usr/sbin/NetworkManager --no-daemon
           ├─ 1337 /sbin/dhclient -d -q -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-wlp3s0....
           ├─18682 /usr/libexec/nm-openvpn-service
           ├─18792 /usr/sbin/openvpn --remote ovpn-phx2.redhat.com 443 tcp --nobind --dev redhat --de...
           └─18834 /usr/sbin/dnsmasq --no-resolv --keep-in-foreground --no-hosts --bind-interfaces --...

I also have a pulseaudio process:

# ps -fe | grep pulseaudio
lars 2948 1 0 Aug25 ? 00:06:20 /usr/bin/pulseaudio --start

Running systemctl status 2948 , I see:

● session-3.scope - Session 3 of user lars
   Loaded: loaded (/run/systemd/system/session-3.scope; static; vendor preset: disabled)
  Drop-In: /run/systemd/system/session-3.scope.d
           └─50-After-systemd-logind\x2eservice.conf, 50-After-systemd-user-sessions\x2eservice.conf, 50-Description.conf, 50-SendSIGHUP.conf, 50-Slice.conf
   Active: active (running) since Tue 2015-08-25 11:09:23 EDT; 1 day 21h ago
   CGroup: /user.slice/user-1000.slice/session-3.scope

This tells me that pulseaudio was started from my desktop login session, rather than explicitly via systemd. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/225802",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130481/"
]
} |
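The same information is available without systemctl by reading the process's cgroup file (standard /proc interface):

cat /proc/2948/cgroup

On a systemd machine, one of the lines names the owning slice/scope/service, e.g. 1:name=systemd:/user.slice/user-1000.slice/session-3.scope , which is the same cgroup lookup systemctl status <pid> performs.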
225,803 | I want to delete all the folders that are older than 1 day with the following command: find /test -mmin +1440 | xargs rm -rf But the output of find lists /test itself (and would remove it accordingly). How can I find only the subdirectories of /test ? ( -maxdepth / -mindepth are not available in AIX.) | As @meuh said in his comment, you could use /test/* instead of /test . Your command could then look similar to this: find /test/* -type d -mmin +1440 | xargs rm -rf In this case only the subfolders of /test would be removed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/225803",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42081/"
]
} |
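A refinement of the answer's command that avoids two pitfalls (a sketch using POSIX -prune and -exec , which AIX find should have; -mmin is assumed to work since the question's own command uses it):

find /test/* -type d -prune -mmin +1440 -exec rm -rf {} \;

-prune stops find from descending into the first-level directories it is about to delete, and -exec avoids the word-splitting problems plain xargs has with names containing spaces.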
225,886 | I have two files, one of them has 10 records and the other one 15; the problem is that when I cat them together, FNR always equals NR . For example, consider the files below:

File1:
1,boo
2,foo
3,boo

File2:
1,boo
2,foo
3,boo
4,foo

When applying the following code: cat File1 File2 | awk -F, '{print FNR}' The result should be:

1
2
3
1
2
3
4

But what I actually have is that FNR = 1 to 7. | Use this instead: awk -F, '{print FNR}' file1 file2 The FNR variable in awk gives the number of records for each input file. But when you use cat .. | awk , awk reads from the stdin file descriptor, therefore awk sees only 1 "file". Try this to understand better ( FILENAME contains the current file being processed):

$ awk -F, '{print FILENAME" "FNR}' file1 file2
file1 1
file1 2
file1 3
file2 1
file2 2
file2 3
file2 4
$ cat file1 file2 | awk -F, '{print FILENAME" "FNR}'
- 1
- 2
- 3
- 4
- 5
- 6
- 7

As you can see in the first example, there are 2 files being processed; in the second example the FILENAME is - , indicating the standard input. Edit: See the following example:

$ cat file1 | awk -F, '{print FILENAME" "FNR}' - file1 <(cat file1)
- 1
- 2
- 3
file1 1
file1 2
file1 3
/dev/fd/63 1
/dev/fd/63 2
/dev/fd/63 3

First, awk reads from stdin (the cat file1 part, passed as the - parameter to awk). Then the regular file1 . Finally the <(cat file1) part, which creates a pipe ( /dev/fd/63 ), from which awk reads. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/225886",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/123325/"
]
} |
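A compact way to see NR (the global record number) and FNR (the per-file record number) side by side:

awk '{print NR, FNR, FILENAME}' file1 file2

NR keeps counting across files while FNR resets at each new file; the widely used FNR==NR test for "still reading the first file" relies on exactly this difference.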
225,899 | I am trying to compile my own kernel but I am having some issues with understanding. I am using the latest kernel available from kernel.org. My issue is at current: I have zero clue on what options I would need to ensure this kernel is correct for an armv7 (armhf) architecture. Does anyone have a list or a link to other sources which can aid me in the specific endeavour of compiling a kernel for arm7? Yes I have searched this previously but I have not found anything I think is of use. I am compiling this kernel on the same architecture I wish to run it on (if that is of any importance) | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/225899",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129282/"
]
} |
225,902 | How can I get a list of packages last installed / upgraded by pacman / yaourt in Arch Linux, including the timestamp? | To get a list of last installed packages, you can run: grep -i installed /var/log/pacman.log Example output of last installed packages:

[2015-08-24 15:32] [ALPM] warning: /etc/pamac.conf installed as /etc/pamac.conf.pacnew
[2015-08-24 15:32] [ALPM] installed python-packaging (15.3-1)
[2015-08-24 15:32] [ALPM] installed python2-packaging (15.3-1)
[2015-08-25 10:37] [ALPM] installed ttf-ubuntu-font-family (0.80-5)
[2015-08-25 10:43] [ALPM] installed ttf-google-fonts (20150805.r201-1)
[2015-08-25 10:44] [ALPM] installed ttf-ubuntu-font-family (0.80-5)
[2015-08-26 17:39] [ALPM] installed mozilla-extension-gnome-keyring-git (0.10.r36.378d9f3-1)

To get a list of last upgraded packages, you can run: grep -i upgraded /var/log/pacman.log Example output of last upgraded packages:

[2015-08-27 10:00] [ALPM] upgraded libinput (0.99.1-1 -> 1.0.0-1)
[2015-08-27 10:00] [ALPM] upgraded python2-mako (1.0.1-1 -> 1.0.2-1)
[2015-08-27 16:03] [ALPM] upgraded tdb (1.3.6-1 -> 1.3.7-1)
[2015-08-27 16:03] [ALPM] upgraded ldb (1.1.20-1 -> 1.1.21-1)
[2015-08-27 16:03] [ALPM] upgraded python2-mako (1.0.2-1 -> 1.0.2-2)

To get a list of last installed or upgraded packages, you can run: grep -iE 'installed|upgraded' /var/log/pacman.log Example output of last installed or upgraded packages:

[2015-08-25 09:56] [ALPM] upgraded jdk (8u51-2 -> 8u60-1)
[2015-08-25 10:37] [ALPM] installed ttf-ubuntu-font-family (0.80-5)
[2015-08-25 10:43] [ALPM] installed ttf-google-fonts (20150805.r201-1)
[2015-08-25 10:44] [ALPM] installed ttf-ubuntu-font-family (0.80-5)
[2015-08-26 17:39] [ALPM] installed mozilla-extension-gnome-keyring-git (0.10.r36.378d9f3-1)
[2015-08-27 10:00] [ALPM] upgraded curl (7.43.0-1 -> 7.44.0-1)
[2015-08-27 10:00] [ALPM] upgraded gc (7.4.2-2 -> 7.4.2-3)
[2015-08-27 10:00] [ALPM] upgraded kmod (21-1 -> 21-2)
[2015-08-27 10:00] [ALPM] upgraded libinput (0.99.1-1 -> 1.0.0-1)
[2015-08-27 10:00] [ALPM] upgraded python2-mako (1.0.1-1 -> 1.0.2-1)
[2015-08-27 16:03] [ALPM] upgraded tdb (1.3.6-1 -> 1.3.7-1)
[2015-08-27 16:03] [ALPM] upgraded ldb (1.1.20-1 -> 1.1.21-1)
[2015-08-27 16:03] [ALPM] upgraded python2-mako (1.0.2-1 -> 1.0.2-2) | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/225902",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24036/"
]
} |
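If the expac utility is installed, the same history can be pulled from the local package database instead of the log (a sketch; expac ships in the Arch repos, and the format strings are assumptions worth checking in its man page):

expac --timefmt='%Y-%m-%d %T' '%l\t%n' | sort | tail -20

Here %l is the install/upgrade date recorded for each package and %n the package name, so this lists the 20 most recently touched packages.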
225,929 | My /proc/sys/kernel/sysrq contains the number 502, but Alt+SysRq+... seems not to work on my HP Pavilion laptop. How can I fix that? Update 1: By the way: neither my print-screen key nor any other key has an additional label like "SysRq". Update 2: Hardware model: HP Pavilion 17 Notebook PC Keyboard layout: German, QWERTZ | Most laptops require pressing Fn to get the SysRq key. Pressing Fn usually doesn't affect the Alt key (at least the left one) but may affect the letter that you press after SysRq . Fortunately, you don't need to press SysRq and the third key together; it's enough to hold Alt down. The following sequence works on all the laptops I've seen:

1. Press and hold Alt .
2. Press Fn , press the SysRq key, and release both.
3. Briefly press the letter or punctuation key, e.g. S to sync.
4. Release Alt .

The SysRq key is usually the same key as PrintScreen . If your keyboard doesn't have a key labeled SysRq or PrintScreen , it may not have a key that sends the scan code that Linux expects. For the purpose of magic SysRq, the SysRq key is whichever key sends the scan code 99. With a PS/2 keyboard (including a laptop's internal keyboard), to find out what scan code a key sends, log into a text console (press Ctrl + Alt + F1 to switch to a text console, and usually Ctrl + Alt + F7 to go back to the GUI), and run the command showkey -s . showkey -s displays the scan code of each key as you type it. That's usually one byte (two hexadecimal digits) for a key press, then another byte for the key release. For a few keys, you'll get a byte sequence consisting of two bytes starting with e0 . Press the key you're interested in, then wait 10 seconds and showkey will exit. Now that you've identified a scan code, run setkeycodes … 99 as root to assign that scan code to key code 99. For example, if showkey -s prints 0xe0 0x6f 0xe0 0xef for the key you've chosen, run setkeycodes e06f 99 . To make this change permanent, either add the setkeycodes command to /etc/rc.local , or configure udev to change the keycode mappings . Configuring udev is the only solution for a USB keyboard; setkeycodes doesn't affect USB keyboards. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/225929",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89224/"
]
} |
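Two quick checks that separate keyboard problems from kernel configuration (standard sysrq interfaces):

cat /proc/sys/kernel/sysrq                  # 502 is a bitmask enabling only some functions
echo 1 | sudo tee /proc/sys/kernel/sysrq    # temporarily allow all SysRq functions
echo s | sudo tee /proc/sysrq-trigger       # trigger a sync without touching the keyboard

If the sysrq-trigger write produces the expected kernel log entry (check with dmesg ) but the key combination does not, the problem is the keyboard scan code, as described in the answer.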
225,931 | I'm running Kali Linux on VirtualBox. I'm using bridged adapter, WNA with adapter type Intel Pro/1000 MT in the VBox network settings. My internet works flawlessly as long as my eth0 mac is the same as the one in the Network settings (option to change it is disabled, grayed out). Whenever I change my mac either by using macchanger or by turning off the network interface, changing it and turning the interface on, I lose the connection. After changing mac I'm also not able to ping the host and vice versa, 0 packets received. What should I do to make it work? Thank you in advance. | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/225931",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130395/"
]
} |
225,943 | I need to write a shell script that runs in this way: ./myscript arg1 arg2_1 arg2_2 arg2_3 ....... arg2_# There is a for loop inside the script: for i in $@ However, as I know, $@ includes $1 up to $($#-1). But for my program $1 is distinctly different from $2 $3 $4 etc. I would like to loop from $2 to the end... How do I achieve this? Thank you:) | First, note that $@ without quotes makes no sense and should not be used. $@ should only be used quoted ( "$@" ) and in list contexts. for i in "$@" qualifies as a list context, but here, to loop over the positional parameters, the canonical, most portable and simpler form is:

for i
do something with "$i"
done

Now, to loop over the elements starting from the second one, the canonical and most portable way is to use shift :

first_arg=$1
shift # short for shift 1
for i
do something with "$i"
done

After shift , what used to be $1 has been removed from the list (but we've saved it in $first_arg ) and what used to be in $2 is now in $1 . The positional parameters have been shifted 1 position to the left (use shift 2 to shift by 2...). So basically, our loop is looping from what used to be the second argument to the last. With bash (and zsh and ksh93 , but that's it), an alternative is to do:

for i in "${@:2}"
do something with "$i"
done

But note that it's not standard sh syntax so should not be used in a script that starts with #! /bin/sh - . In zsh or yash , you can also do:

for i in "${@[3,-3]}"
do something with "$i"
done

to loop from the 3rd to the 3rd last argument. In zsh , $@ is also known as the $argv array. So to pop elements from the beginning or end of the array, you can also do:

argv[1,3]=() # remove the first 3 elements
argv[-3,-1]=()

( shift can also be written 1=() in zsh ) In bash , you can only assign to the $@ elements with the set builtin, so to pop 3 elements off the end, that would be something like: set -- "${@:1:$#-3}" And to loop from the 3rd to the 3rd last:

for i in "${@:3:$#-5}"
do something with "$i"
done

POSIXly, to pop the last 3 elements of "$@" , you'd need to use a loop:

n=$(($# - 3))
for arg do
  [ "$n" -gt 0 ] && set -- "$@" "$arg"
  shift
  n=$((n - 1))
done | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/225943",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118930/"
]
} |
225,972 | I started using fedora 22 and started learning dnf only to find out about two annoying facts: Almost every time I install or upgrade anything it has to rebuild the whole repository metadata cache all over. If I stopped the download of packages at 99% and re-ran the install command it would download them all over again! This is really annoying because I have a slow internet connection that drops every half an hour or so (It's an rtl8723be module, does anybody have a fix?), so dnf essentially becomes unusable. How can I resolve that? | Look at the keepcache parameter. I believe that it goes in /etc/dnf/dnf.conf and should read keepcache=1 or keepcache="true" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/225972",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80638/"
]
} |
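A minimal /etc/dnf/dnf.conf reflecting the answer (the [main] section is where dnf reads global options):

[main]
keepcache=1

With keepcache=1 , downloaded packages stay under /var/cache/dnf , so an interrupted transaction can resume from the already-fetched packages instead of downloading everything again.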
225,977 | I'm looking to use the terminal more and more, and I'd like to find a terminal calendar app that can sync with Google calendar. I'm running ubuntu 14.04 | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/225977",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130584/"
]
} |
226,027 | How can I encrypt a large file with a public key so that no one other than the holder of the private key is able to decrypt it? I don't want to use GPG! | This could be used to encrypt a file mypic.png , given you already have a private/public keypair in ccbild-key.pem / ccbild-crt.pem . (You can find a guide to creating a keypair in this answer .)

# encrypt
openssl smime -encrypt -aes-256-cbc -binary -in mypic.png -outform DER -out mypic.png.der ccbild-crt.pem
# decrypt
openssl smime -decrypt -binary -in mypic.png.der -inform DER -out mypic.png -inkey ccbild-key.pem

Note that the settings may not reflect best practice in the selection of crypto standard (in particular if you read this in the future); it may also not be a good choice performance-wise. (We only use it for sub-1M files in our application.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/226027",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118331/"
]
} |
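If you do not have a keypair yet, a self-signed certificate suitable for the smime commands above can be generated like this (a sketch; 3650 days, rsa:4096 and the CN are arbitrary choices):

openssl req -x509 -newkey rsa:4096 -days 3650 -nodes \
    -keyout ccbild-key.pem -out ccbild-crt.pem -subj '/CN=file-encryption'

Internally, smime -encrypt generates a fresh random AES key for the file and encrypts only that key with the RSA public key, which is why the scheme scales to large files.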
226,037 | So I need to sort the IP addresses and then sort my lines by them. I can sort IP addresses in a file by using: sort -n -t . -k1,1 -k2,2 -k 3,3 -k4,4 If my file looks like:

add method1 a.b.c.d other thing
add method2 e.f.g.h other thing2
add method2 a.b.c.d other thing2
add method5 j.k.l.m other thing5
add method3 a.b.c.d other thing3
add method1 e.f.g.h other thing2

But in this case, field 1 will be:

add method1 a
add method2 e
add method2 a
add method5 j
add method3 a
add method1 e

And field 4 will be:

d other thing
h other thing2
d other thing2
m other thing5
d other thing3
h other thing2

How, and with what tools, should I sort my IP addresses and then sort my lines by them? Thanks in advance. EDIT: Example modified. There are several lines with the same IP address but with different text, in random order. | This script copies the IP address from field 3 using awk to the start of the line with a "%" separator, then does the sort on the IP address now in the first field, then removes the added part.

awk '{print $3 " % " $0}' |
sort -t. -n -k1,1 -k2,2 -k3,3 -k4,4 |
sed 's/[^%]*% //'

If the field with the IP address is not a constant, you can auto-detect it on each line. Replace the awk above with:

awk '{ for(i=1;i<=NF;i++) if($i~/^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$/)break; print $i " % " $0 }' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/226037",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130636/"
]
} |
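With GNU sort, the decorate/undecorate step above can be skipped entirely, because version sort orders dotted quads numerically (assumption: GNU coreutils sort; the V key modifier is not POSIX):

sort -k3,3V file

This sorts the lines directly on the IP address in the third whitespace-separated field.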
226,069 | Here is my initial situation: In a folder named for example Father , some files are stored in the following way: Father contains 24 child folders (let's call them Child1 , Child2 , ...), and each one of them has 2 files in it, file1.avi and file1.nfo for the first child, file2.avi and file2.nfo for the second one, etc. What I'd like to do is to have all the .avi files in the Father folder. I don't care here about the other files being lost. So far, the best I've managed is cp -R ./*.avi . , but it did not pull the files out of the subfolders and furthermore took really long to process. How should I write it? | The glob * can be used to match not only plain files, but also directories, so the command you are looking for is mv ./*/*.avi . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/226069",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130650/"
]
} |
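One caveat about the mv ./*/*.avi . command in the answer: if two subfolders contain files with the same name, the later move silently overwrites the earlier one. GNU mv can refuse to clobber (assumption: GNU coreutils):

mv -n ./*/*.avi .    # -n = no-clobber; or use -i to be asked interactively

Afterwards, ls ./*/*.avi shows anything that was left behind because of a name collision.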
226,082 | I need to get the word that follows a given word in a string. I tried to write a script but it's not working. It would also be good if you could suggest any other alternatives to achieve this. Script:

#!/bin/bash
opts="OPTS=\"-name user -age 20 -where Asia -eats Brains\""
echo $opts
ok="0"
for word in $opts; do
  if [ "$word" = "-where" ] ; then
    if [ "$ok" = "1" ] ; then
      echo $word
      break
    fi
    ok="1"
  fi
done

I want to get the word after "-where". But the above script is not working, and I don't understand where I'm going wrong. Thanks. | I suggest hiring grep for this job:

$ OPTS="\"-name user -age 20 -where Asia -eats Brains\""
$ grep -Po -- '-where \K\w*' <<< "$OPTS"
Asia

Explanation: -P : perl compatible regular expression -o : show only matching parts \K : drop everything before that point \w* : match word constituents (synonym for [_[:alnum:]] ) To add " to the list of matching characters:

$ grep -Po -- '-eats \K[_\"[:alnum:]]*' <<< $OPTS
Brains" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/226082",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48634/"
]
} |
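A plain-awk sketch that does the same without PCRE support, walking the fields and printing the one after the marker (the OPTS value here is simplified to a plain whitespace-separated string, as in the question):
OPTS="-name user -age 20 -where Asia -eats Brains"
awk '{ for (i = 1; i < NF; i++) if ($i == "-where") print $(i+1) }' <<< "$OPTS"
This prints Asia. Note that <<< is a bash herestring, not POSIX sh.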
226,090 | I have a Bash script of a thousand lines, each containing an ffmpeg command. I start this with source script and it runs just fine. However, when I try to take control of this script in various ways, things go completely awry: If I do Ctrl + Z to pause the whole thing, only the current ffmpeg command is paused, and the next one is started! Not what I wanted! If I do Ctrl + C to stop everything, the script jumps to the next ffmpeg command, and I have to press once for every line in the script to finally stop everything. Sheer hell. I tried using ps -ef from another shell to locate the source command to pause/kill it from there, but it does not exist in the list. So how can I pause/stop the parent script the way I wish? Or possibly, how can I execute the script in a different way to begin with that gives me proper control over it? | Try running the script as a script instead of sourcing it: $ bash <scriptname> | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/226090",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40335/"
]
} |
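Beyond running it as a child process, a hedged sketch of two extra safeguards: a trap so the first Ctrl+C aborts the whole batch, and ffmpeg's -nostdin option so it stops consuming terminal keystrokes (the file names are placeholders):
#!/bin/bash
trap 'exit 130' INT               # one Ctrl+C ends the entire batch
ffmpeg -nostdin -i in1.avi out1.mp4
ffmpeg -nostdin -i in2.avi out2.mp4
Run it with bash script.sh ; as a separate process it can also be paused with Ctrl+Z or killed by PID from another shell.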
226,164 | That is literal, {fd} isn't a placeholder. I have a script that does this, and does not source in anything, nor does it reference {fd} anywhere else. Is this valid bash? exec {fd}</dev/watchdog | Rather than having to pick a file descriptor and hope it's available:
exec 4< /dev/watchdog   # Was 4 in use? Who knows?
this notation asks the shell to pick a file descriptor that isn't currently in use, open the file for reading on that descriptor, and assign the number to the given variable ( fd ).
$ exec {fd}< /dev/watchdog
$ echo $fd
10 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/226164",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43342/"
]
} |
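A small sketch of the rest of the life cycle of such an automatically allocated descriptor, reading from it by number and closing it again. It uses /etc/hostname as a harmless stand-in, since merely opening /dev/watchdog can arm the watchdog timer:
exec {fd}< /etc/hostname   # shell stores the chosen descriptor number in $fd
read -r line <&"$fd"       # read a line from that descriptor
echo "fd=$fd line=$line"
exec {fd}<&-               # close the descriptor named by the variable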
226,206 | I have a bunch of output going through sed and awk. How can I prefix the output with START and suffix the answer with END? For example, if I have
All this code
on all these lines
and all these
How could I get:
START
All this code
on all these lines
and all these
END
? My attempt was: awk '{print "START";print;print "END"}' but I got:
START
All this code
END
START
on all these lines
END
START
and all these
END
| This works, as indicated by jasonwryan : awk 'BEGIN{print "START"}; {print}; END{print "END"}' | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/226206",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
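The same effect is available in plain shell, without awk, by grouping commands around the pipeline. A sketch, where some_command stands in for the sed/awk pipeline from the question:
some_command | { echo "START"; cat; echo "END"; }
The braces run the three commands in one group, so they all write to the same output stream in order.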
226,219 | I am under the following restrictions: I have a 1.0 GB .zip file on my computer which contains one file, a disk image of raspbian . When uncompressed, this file is 3.2 GB large and named 2015-11-21-raspbian-jessie.img . After having downloaded the zip file, I have just under 1.0 GB of storage space on my computer, not enough space to extract the image to my computer. This file needs to be uncompressed and written to an SD card using plain old dd . Is it possible for me to write the image to the SD card under these restrictions? I know it's possible to pipe data through tar and then pipe that data elsewhere; however, will this still work for the zip file format, or does the entire archive need to be uncompressed before any files are accessible? | Use unzip -p ( -p extracts files to stdout): unzip -p 2015-11-21-raspbian-jessie.zip 2015-11-21-raspbian-jessie.img | dd of=/dev/sdb bs=1M | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/226219",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5769/"
]
} |
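A variant of the same pipeline with progress reporting and an explicit flush, useful on slow SD cards. Here /dev/sdX is a placeholder you must replace with your actual card device, and status=progress needs GNU coreutils 8.24 or newer:
unzip -p 2015-11-21-raspbian-jessie.zip 2015-11-21-raspbian-jessie.img \
  | sudo dd of=/dev/sdX bs=4M status=progress conv=fsync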
226,226 | I'm using Mac OS X 10.10. I would like to know what problems are caused by this: user ALL=(ALL) /usr/bin/vim /etc/httpd/confs/httpd.conf I think this is in /etc/sudoers. I also have CentOS. Does this command behave in the same (bad) way? | It means folk can leverage this to get a root shell, thereby bypassing your security, e.g. :!/bin/sh from within vim . Or :r /etc/shadow and :w /etc/shadow . And so on... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/226226",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130770/"
]
} |
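The standard fix is to grant sudoedit instead of the editor itself: sudoedit copies the file to a temporary location, runs the user's editor unprivileged on the copy, and writes it back as root, so :! shell escapes no longer run as root. A sketch of the sudoers line:
user ALL=(ALL) sudoedit /etc/httpd/confs/httpd.conf
The user then runs sudoedit /etc/httpd/confs/httpd.conf rather than sudo vim ... .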
226,267 | How can I create a script to count the processes run by each user in ps aux ? I used ps aux | awk '{print $1}' | grep root | wc -l but that counts only the root user's processes. I want to list the number of processes for each user. I need something like this:
root 20
jamshi 15
| ps -fo user | sort | uniq -c is worth a try. The command ps -eo user=|sort|uniq -c will list process counts by user:
ps -eo user=|sort|uniq -c
      2 avahi
      1 kernoops
      1 messagebus
      1 nobody
    231 root
      1 statd
      5 steve
      1 syslog
If you need to flip the column order for readability, pipe it through awk '{ print $2 " " $1 }' | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/226267",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130790/"
]
} |
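An equivalent single-pass sketch in awk, which avoids the sort and keeps per-user tallies in an array (output order is unspecified):
ps -eo user= | awk '{ count[$1]++ } END { for (u in count) print u, count[u] }'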
226,276 | I need to know if a process with a given PID has opened a port, without using external commands. I must therefore use the /proc filesystem. I can read the /proc/$PID/net/tcp file, for example, and get information about TCP ports opened by the process. However, in a multithreaded process, the /proc/$PID/task/$TID directory will also contain a net/tcp file. My question is: do I need to go over all the threads' net/tcp files, or will the ports opened by threads be written into the process's net/tcp file? | I can read the /proc/$PID/net/tcp file for example and get information about TCP ports opened by the process. That file is not a list of tcp ports opened by the process . It is a list of all open tcp ports in the current network namespace, and for processes running in the same network namespace is identical to the contents of /proc/net/tcp . To find ports opened by your process, you would need to get a list of socket descriptors from /proc/<pid>/fd , and then match those descriptors to the inode field of /proc/net/tcp . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/226276",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130795/"
]
} |
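A shell sketch of the descriptor-to-inode matching the answer describes, for illustration only (the OP wants to avoid external commands, so a real program would use readlink(2) and parse the file itself; the PID is a hypothetical example, and field 10 of /proc/net/tcp is the socket inode on data lines):
pid=1234                      # hypothetical PID
for fd in /proc/"$pid"/fd/*; do
  link=$(readlink "$fd") || continue
  case $link in
    'socket:['*)              # this descriptor is a socket
      inode=${link//[^0-9]/}  # keep only the digits: socket:[12345] -> 12345
      awk -v ino="$inode" '$10 == ino' /proc/net/tcp
      ;;
  esac
done
The local address column of the matched lines is hex ip:port, so port numbers still need converting from hexadecimal.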
226,310 | Background: I have groups of files in their own directory which I merge into one file in order of their filename. I call them t1.txt, t2.txt, t3.txt... I merge them in order of the integer. Situation: For various reasons, I want to get away from the filename as the metadata for later file merge ops. Action: I'm thinking of moving to a file merging system that orders the file merge by the date/time of the file creation (obviously, I'll have to create the files in order of later merge). Question: Will date/time sorted file merging be reliable? Are there hidden gotchas? Some of the files will be created only tenths of a second apart, or less--is this an Achilles' heel? Is there something different I should consider for ordering merges? Date/time seems elementary to me. OTOH, what seems simple and straightforward at first often ends up being more complicated than envisioned. So I ask. | Most Unix systems don't track file creation times. They track a file's modification time, which is updated each time the file is written to. If the files are written sequentially when they are created (i.e. the first file is fully written before the second file is created) and not modified later, then the order of the modification times will be the same as the order of the file creations, but in more complex scenarios, this may not be the same. In addition to the modification time (mtime), there are two other file timestamps on any Unix system: the access time (atime) and the inode change time (ctime). The access time is updated when the file is read, but some systems (in particular Linux by default) don't always update it for performance reasons. The inode change time is updated when some metadata about the file changes (name, permissions, etc.; also when the file is written to, but not when it's read, even if the atime changes). Neither the atime nor the ctime would be useful to you. Many historical Unix systems tracked file timestamps with a resolution of one second. Modern Unix systems often have a better resolution, but this requires that several actors pay attention to it: The kernel you're using must support this finer time resolution. The filesystem must be able to store this finer time resolution. Any component in the chain (e.g. NFS server for a file on NFS) must support this finer time resolution. Any tool used to copy the files around (archiver, network synchronizer, …) must be able to preserve the finer time resolution, not just the seconds. The application reading the file times must take sub-second resolution into account. Classic Unix programming interfaces don't support sub-second resolution on file timestamps, so applications need to use a relatively modern API ( standardized in POSIX:2008 — still relatively recent as its adoption was not very fast). Even if everybody in the chain supports nanosecond timestamps, files will only have distinct timestamps if they're actually created more than one clock tick apart — just because the kernel records nanoseconds doesn't guarantee that it will notice that more than one nanosecond has passed between the two file creations: reading the clock takes time, so it isn't done all the time. If you have a single thread opening the file, writing data and closing the file before moving on to the next file, then I think in practice any existing system that does record sub-second resolution will write different timestamps, but you are taking a small risk. 
(When different threads are writing to files, even with microsecond resolution, timestamp collisions are possible — but usually in that case you wouldn't be able to rely on the ordering for anything.) So it's possible, and it's reliable as long as computers don't get much faster than they are now, providing that all the tools you use do support sub-second resolution. But you are at the mercy of a clock glitch or of a tool you hadn't reviewed for subsecond timestamp support. I recommend relying on file names, there's less that can go wrong. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/226310",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129403/"
]
} |
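To check what resolution your own stack actually records, GNU coreutils can display the full timestamps. A quick sketch using the t1.txt naming from the question:
stat -c '%y %n' t*.txt                # full mtime, including fractional seconds
ls -lt --time-style=full-iso t*.txt   # newest first, with sub-second digits shown
If every file shows .000000000, some layer (filesystem or kernel) is truncating to whole seconds.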
226,315 | I have an sh file that I would like to be able to open from the terminal at any time. I would like to type "studio" into the terminal and have Android Studio open. I recall using ln -s to do this, but I have forgotten how and have already wasted much time searching the web. Also, in which directory is the created symbolic link kept? Here is my attempt, which results in "command not found":
ricardo@debian:~$ ln -s /opt/android-studio/bin/studio.sh studio
ricardo@debian:~$ studio
bash: studio: command not found
| The command you ran created a symbolic link in the current directory. Judging by the prompt, the current directory is your home directory. Creating symbolic links to executable programs in your home directory is not particularly useful. When you type the name of a program, the shell looks for it in the directories listed in the PATH environment variable . To see the value of this variable, run echo $PATH . The directories are separated by a colon ( : ). A typical path is /home/ricardo/bin:/usr/local/bin:/usr/bin:/bin but there's a lot of variation out there. You need to create this symbolic link in one of the directories listed in $PATH . If you want to make the command available to all users, create the link in /usr/local/bin : sudo ln -s /opt/android-studio/bin/studio.sh /usr/local/bin/studio If you want to make the command available only to you (which is the only possibility if you don't have administrator privileges), create the link in ~/bin (the bin subdirectory of your home directory). ln -s /opt/android-studio/bin/studio.sh ~/bin/studio If your distribution doesn't put /home/ricardo/bin in your PATH (where /home/ricardo is your home directory), create it first with mkdir ~/bin , and add it to your PATH by adding the following line to ~/.profile (create the file if it doesn't exist): PATH=~/bin:$PATH The .profile file is read when you log in. You can read it in the current terminal by running . ~/.profile (this only applies to programs started from that terminal). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/226315",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128695/"
]
} |
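Putting the answer's per-user steps together as one runnable sequence, with a verification at the end (a sketch; command -v simply reports where the shell now finds studio ):
mkdir -p ~/bin
ln -s /opt/android-studio/bin/studio.sh ~/bin/studio
export PATH=~/bin:$PATH    # current shell only; the ~/.profile line makes it permanent
command -v studio          # should print something like /home/ricardo/bin/studio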
226,327 | I just found out by accident that CTRL + 4 closes programs reading stdin input from the command-line. This is how it looks when I type CTRL + 4 or CTRL + / into programs reading stdin:
$ cat
wefwef
wefwef
^\Quit
$ bc
bc 1.06.95
Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
^\Quit
$
I get ^\Quit displayed and then the program closes. What is the difference of this compared to using ^C or ^D ? What does ^\Quit do? Edit : Found out that CTRL + \ does the very same thing. | Ctrl+4 sends ^\ Terminals send characters (or more precisely bytes), not keys. When a key that represents a printable character is pressed, the terminal sends that character to the application. Most function keys are encoded as escape sequences: sequences of characters that start with character number 27. Some keychords of the form Ctrl + character , and a few function keys, are sent as control characters — in the ASCII character set , which all modern computers use as a basis (Unicode, ISO Latin- n , etc. are all supersets of ASCII), 33 characters are control characters: characters number 0 through 31 and 127. Control characters are not printable, but intended to have an effect in applications; for example character 10, which is Control-J (commonly written ^J), is a newline character, so when a terminal displays that character, it moves the cursor to the next line, rather than displaying a glyph. The escape character itself is a control character, ^[ (value 27). There aren't enough control characters to cover all Ctrl + character keychords. Only letters and the characters @[\]^_? have a corresponding control character. When you press Ctrl + 4 or Ctrl + $ (which I presume is Ctrl + Shift + 4 ), the terminal has to pick something to send. Depending on the terminal and its configuration, there are several common possibilities: The terminal ignores the Ctrl modifier and sends the character 4 or $ . The terminal sends an escape sequence that encodes the exact key and modifiers that were pressed. The terminal sends some other control character. Many terminals send control characters for some keys in the digit row:
Ctrl + 2 → ^@
Ctrl + 3 → ^[
Ctrl + 4 → ^\
Ctrl + 5 → ^]
Ctrl + 6 → ^^
Ctrl + 7 → ^_
Ctrl + 8 → ^?
I don't know where this particular convention arose. Ctrl + | sends the same character because it's Ctrl + Shift + \ and the terminal sends ^\ whether the shift key was pressed or not. ^\ quits The terminal itself (more precisely, the generic terminal support in the kernel) interprets a few control characters specially. This interpretation can be configured to map different characters or turned off by applications that want to process the characters by themselves. One well-known such interpretation is that ^M, the character sent by the Return key, sends the current line to the application, if the terminal is in cooked mode , in which applications receive input line by line. A few characters send signals to the application in the foreground. ^C sends the interrupt signal (SIGINT), which conventionally tells the application to stop what it's doing and read the user's next command. Non-interactive applications usually exit. ^\ sends the quit signal (SIGQUIT), which conventionally tells the application to exit as soon as possible without saving anything; many applications don't override the default behavior, which is to kill the application immediately¹. 
So when you press Ctrl + 4 (or anything that sends the ^\ character) in cat or bc , neither of which overrides the default behavior, the application is killed. The terminal itself prints the ^\ part of the message: it's a visual depiction of the character that you typed, and the terminal is in cooked mode and with echo turned on (characters are displayed by the terminal as soon as you type them, as opposed to non-echo mode where the characters are only sent to the application, which may or may not choose to display them). The Quit part comes from bash: it notices that its child process died from a quit signal, and that's its way of letting you know. Shells handle all common signals, so that if you type ^\ in a shell, you don't kill your session, you just get a new prompt, same as ^C. You can play with terminal settings with the stty command. ¹ And traditionally generate a core dump , but many systems disable that by default nowadays. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/226327",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128859/"
]
} |
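A few stty / trap one-liners to experiment with what the answer describes, from viewing which key is bound to SIGQUIT to handling or disabling it (run each separately):
stty -a | tr ';' '\n' | grep quit     # show the current quit character (usually ^\)
trap 'echo caught SIGQUIT' QUIT       # handle the signal in the current bash instead
stty quit undef                       # stop the terminal from sending SIGQUIT at all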
226,348 | How can I filter out the 2-line block for each line matching the grep regex? This is my minimal test:
SomeTestAAAA
EndTest
SomeTestABCD
EndTest
SomeTestDEFG
EndTest
SomeTestAABC
EndTest
SomeTestACDF
EndTest
And obviously I tried e.g. grep -vA 1 SomeTestAA which doesn't work. Desired output is:
SomeTestABCD
EndTest
SomeTestDEFG
EndTest
SomeTestACDF
EndTest
| You can use grep with -P (PCRE): grep -P -A 1 'SomeTest(?!AA)' file.txt (?!AA) is the zero-width negative lookahead pattern ensuring that there is no AA after SomeTest . Test:
$ grep -P -A 1 'SomeTest(?!AA)' file.txt
SomeTestABCD
EndTest
SomeTestDEFG
EndTest
SomeTestACDF
EndTest
| {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/226348",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7139/"
]
} |
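Where grep lacks PCRE support (e.g. BusyBox or some BSDs), an awk sketch gives the same result by printing each non-excluded marker line together with the line after it:
awk '/SomeTest/ && !/SomeTestAA/ { print; if ((getline nxt) > 0) print nxt }' file.txt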
226,395 | I have a newly installed Manjaro system and it works fine most of the time, but it has frozen a couple of times at random. What logs should I look in (and what should I look for) to try to diagnose the problem? I know it's quite a broad question, but there must be somewhere that would be a good starting point. | The two most common causes of crashes are video driver bugs and bad RAM. You can look for clues in logs in /var/log . Video problems are logged in /var/log/Xorg.0.log . Problems detected by the kernel are logged in /var/log/kern.log or /var/log/messages or some other file depending on the distribution; I don't know which file Manjaro uses. However, if your system crashes, it often doesn't get a chance to write to the logs. Do run a memory test. Install Memtest86+ (Arch Linux has it as a package , so you should have it on Manjaro as well). Reboot and select “memory test” at the Grub prompt and let it run for at least one full pass. If you suspect a video driver problem, try using the free driver if you were using the proprietary driver, or vice versa. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/226395",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38572/"
]
} |
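Since Manjaro uses systemd, the journal is often more useful than the classic /var/log files. A sketch of pulling errors from the boot during which the freeze happened (this requires the journal to be persistent across reboots):
journalctl -b -1 -p err    # error-priority messages from the previous boot
journalctl -k -b -1        # kernel messages from the previous boot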
226,420 | If I know that a partition is for example /dev/sda1 , how can I get the name of the disk ( /dev/sda in this case) that contains the partition? The output should be only a path to the disk (like /dev/sda ). It shouldn't require string manipulation, because I need it to work for different disk types. | You can observe in /sys the block device for a given partition name. For example, /dev/sda1:
$ ls -l /sys/class/block/sda1
lrwxrwxrwx 1 root root /sys/class/block/sda1 -> \
 ../../devices/pci0000:00/.../ata1/host0/target0:0:0/0:0:0:0/block/sda/sda1
A script to take arg /dev/sda1 and print /dev/sda is:
part=$1
part=${part#/dev/}
disk=$(readlink /sys/class/block/$part)
disk=${disk%/*}
disk=/dev/${disk##*/}
echo $disk
I don't have lvm etc to try out, but there is probably some similar path. There is also lsblk :
$ lsblk -as /dev/sde1
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sde1   8:65   1 7.4G  0 part
`-sde  8:64   1 7.4G  0 disk
and as @don_crissti said you can get the parent directly by using -o pkname to get just the name column, -n to remove the header, and -d to not include holder devices or slaves: lsblk -ndo pkname /dev/sda1 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/226420",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130905/"
]
} |
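A tiny wrapper around the lsblk variant that prints the full path the question asks for (a sketch; pkname needs a reasonably recent util-linux):
disk=$(lsblk -ndo pkname /dev/sda1) && echo "/dev/$disk"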
226,421 | The command fc allows visual editing of the previous command. If I change my mind in the editor, how do I stop the command from being executed? In vim, typing :q! or :q both result in the command being executed, and Ctrl-C doesn't work either. Is the only option to delete the command in the editor and then :wq ? | From vi you can type :cq to exit without saving and with a non-zero return code. In this case the command will not be repeated. Alternatively, you can usually suspend the editor with ctrl-z which gets you back to the shell without redoing the command. You still have to fg to restart the editor, but the tmp file will no longer be around, so you can safely quit the editor. Or you can kill -9 % the suspended editor. I agree, it could be easier. Of course, you can always edit lines within bash using vi or emacs commands. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/226421",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130906/"
]
} |
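A quick way to see why :cq works: per the answer, it is the non-zero editor exit status that makes fc discard the buffer. A demonstration sketch ( vim -c runs the given command after startup):
vim -c 'cq'; echo $?    # prints 1, which fc treats as "abort, do not execute"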