Columns: source_id (int64, values 1 to 74.7M), question (string, 0 to 40.2k chars), response (string, 0 to 111k chars), metadata (dict)
I need to sum numbers from a file line by line. The file (one number per line):
1.0
0.46
0.67
I want to sum them, then divide by 3. I currently have:
while IFS= read -r var
do
 x=$(($var + $x)) | bc -l
done < "file.txt"
echo "$x / 3"
My error: -bash: 1.0 + 0: syntax error: invalid arithmetic operator (error token is ".0 + 0")
Bash/shell arithmetic can't handle floating point. You can accomplish your task using awk:
awk '{sum = sum + $1} END {print sum/3}' file
This will read through your file and add each line to sum. After it finishes reading the file, it prints sum divided by 3.
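If you'd rather stay in the shell, here is a rough sketch of the same calculation done with bc line by line (assuming the file really holds one number per line):
sum=0
while IFS= read -r var; do
    # bc does the floating-point work; shell $((...)) cannot
    sum=$(echo "$sum + $var" | bc -l)
done < file.txt
echo "$sum / 3" | bc -l
The awk version is still preferable, since it runs a single process instead of one bc per line.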
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/515923", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/142522/" ] }
515,935
When I run tmux new-window from a script outside of my current tmux session, how can I specify which session the new window should be associated with?
Though not very clearly documented, it turns out windows can be specified as session-name:window-number, so running tmux new-window -t SESSION: creates a new window in that session. Either the session name or its default number may be used.
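For example, from a script outside tmux (the session name "work" here is only an illustration):
tmux new-window -t work: -n logs 'tail -f /var/log/syslog'
The trailing colon lets tmux pick the next free window index in the work session, -n names the window, and the final argument is the command to run in it.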
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/515935", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63744/" ] }
515,967
I am executing the script below:
LOGDIR=~/curl_result_$(date |tr ' :' '_')
mkdir $LOGDIR
for THREADNO in $(seq 20)
do
for REQNO in $(seq 20)
do
 time curl --verbose -sS http://dummy.restapiexample.com/api/v1/create --trace-ascii ${LOGDIR}/trace_${THREADNO}_${REQNO} -d @- <<EOF >> ${LOGDIR}/response_${THREADNO} 2>&1
{"name":"emp_${THREADNO}_${REQNO}_$(date |tr ' :' '_')","salary":"$(echo $RANDOM%100000|bc)","age":"$(echo $RANDOM%100000|bc)"}
EOF
echo -e "\n-------------------------------" >> ${LOGDIR}/response_${THREADNO}
done 2>&1 | grep real > $LOGDIR/timing_${THREADNO} &
done
After some time, if I check the number of bash processes, it shows 20 (not 1 or 21):
ps|grep bash|wc -l
The question is: since I have not used brackets "()" to enclose the inner loop, a new shell process should not be spawned. I want to avoid creating new shells as the CPU usage nears 100%. I don't know if it matters, but I am using Cygwin.
Because you have piped the loop into grep , it must be run in a subshell. This is mentioned in the Bash manual : Each command in a pipeline is executed in its own subshell, which is a separate process (see Command Execution Environment ) It is possible to avoid that with the lastpipe shell option for the final command in the pipeline, but not any of the others. In any case, you've put the whole pipeline into the background, which also creates a subshell. There is no way around this. What you're doing inherently does require separate shell processes in order to work: even ignoring the pipeline, creating a background process requires creating a process. If your issue is the CPU usage, that's caused by running everything at once. If you remove the & after the grep , all the commands will run in sequence instead of simultaneously. There will still be subshells created (for the pipeline), but those are not themselves the main issue in that case. If you need them to run simultaneously, the increased CPU usage is the trade-off you've chosen.
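A minimal sketch of the lastpipe behaviour mentioned above (bash 4.2 or newer; it only takes effect when job control is off, i.e. in a non-interactive script):
#!/bin/bash
shopt -s lastpipe
total=0
printf '1\n2\n3\n' | while read -r n; do total=$((total + n)); done
# total survives because the while loop ran in the current shell, not a subshell
echo "$total"    # prints 6
Without lastpipe, total would still be 0 at this point.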
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/515967", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/233634/" ] }
516,088
I have this file: abcdef I want the last line to be copied to before the first line, so I'll get the following result: fabcdef How can I do it in sed or awk ?
Pure sed:
sed 'H;1h;$!d;G' file.txt
All lines are collected in the hold space, then appended to the last line:
- H appends the current line to the hold space. Unfortunately this would start the hold space with a newline, so...
- 1h: for the first line, the hold space gets overwritten with that line, without a preceding newline.
- $!d means: if this is not (!) the last line ($), delete (d) the line, because we don't want to print it yet.
- G is only executed for the last line, because for the other lines the d stopped processing. Now all the lines (first to last) collected in the hold space get appended to the current (last) line with the G command, and everything gets printed at the end of the script.
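If the hold-space juggling feels too cryptic, an awk equivalent that buffers the file in an array would be something like:
awk '{a[NR]=$0} END{print a[NR]; for(i=1;i<=NR;i++) print a[i]}' file.txt
or, stepping outside sed/awk entirely:
{ tail -n 1 file.txt; cat file.txt; }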
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/516088", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/233964/" ] }
516,102
I have a Thinkpad L380 with Intel 8265, on which I installed Debian 9 base system (from a netinstall USB stick). I downloaded https://wireless.wiki.kernel.org/_media/en/users/drivers/iwlwifi-8265-ucode-22.361476.0.tgz , which had the file iwlwifi-8265-22.ucode, which I copied to /lib/firmware and rebooted the machine. The output of dmesg | grep iwlfifi has the following lines. (There might be some typos, since Ihad to read it from the laptop and type it here, since iwlwifi isn't working on it, and I can't connect it to ethernet for want of a mini-RJ45 connector!) firmware: failed to load iwlwifi-8265-26.ucode (-2)Direct firmware load for iwlwifi-8265-26.ucode failed with error -2 Similarly for -25, -24 and -23. Then firmware: direct-loading firmware iwlwifi-8265-22.ucodeloaded firmware version 22.361476.0 op_mode iwlmvmDetected Intel(R) Dual Band Wireless AC 8265 REV=0x230L1 enabled - LTR enabledL1 enabled - LTR enabledwlp2s0 renamed frm wlan0 Looking at many online discussions/blogs discussing similar problems, I tried modinfo iwlwifi | more which gave (the first few line) filename: /lib/modules/4.9.0-8-amd64/kernel/drivers/net/wireless/intel/iwlwifi/iwlwifi.kolicence: GPLauthor : ....descrption: Intel(R) Wireless Wifi driver for Linux......firmware: iwlwifi-8265-26.ucode... Where can I find version -26 (or anything above -22)? The discussion https://forums.intel.com/s/question/0D50P0000490P1ASAU/iwlwifi8265-linux-firmware?language=en_US mentions that the drivers are maintained at kernel.org and asks users to contact the support people at kernel.org, but there I could only find -22. Is there any workaround for this problem?
Pure sed : sed 'H;1h;$!d;G' file.txt All lines are collected in the hold space, then appended to the last line: H appends the current line to the hold space. Unfortunally this will start the hold space with a newline, so 1h for the first line, the hold space gets overwritte with that line without preceeding newline $!d means if this is not ( ! ) the last line ( $ ), d elete the line, because we don't want to print it yet G is only executed for the last line, because for other lines the d stopped processing. Now all lines (first to last) we collected in the hold space get appended to the current (last) line with the G command and everything gets printed at the end of the script.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/516102", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/350500/" ] }
516,106
I installed a new virtual machine yesterday. Previously, I download the guest addition iso from https://download.virtualbox.org/virtualbox/ , from within virtual box. And then mount the iso and run the VBoxLinuxGuestAdditions.run , but from yesterday, I can't access this site. Secondly (Virtualbox 6+ onwards), I can't install the guest additions from the Insert Guest Additions CD Image under the Devices tab. But nevertheless, I tried once more, and I get: The network operation failed with the following error: During network request: Wrong SSL certificate format. I need to test a software with the guest addition. I am on Arch Linux and the VirtualBox version is 6.0.6 r129722 . What's wrong with VirtualBox? Is there an alternative way to download the guest additions?
From the comments: The current version 6.0.6 of the guest additions iso can be downloaded from http://download.virtualbox.org/virtualbox/6.0.6/VBoxGuestAdditions_6.0.6.iso https://download.virtualbox.org/virtualbox/6.0.6/VBoxGuestAdditions_6.0.6.iso ( Edit: https works again) But get the hash from the https site, https://www.virtualbox.org/download/hashes/6.0.6/MD5SUMS and check it.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/516106", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274717/" ] }
516,161
Suppose I have two files which look as follows:
$ cat search_file.txt
This line contains kwd1.
This line contains kwd2.
This line contains no match.
This line contains no match.
This line contains kwd5.
$ cat search_kwd.sh
grep kwd1 search_file.txt
grep kwd2 search_file.txt
grep kwd3 search_file.txt
grep kwd4 search_file.txt
grep kwd5 search_file.txt
When I run search_kwd.sh, I get:
$ sh search_kwd.sh
This line contains kwd1.
This line contains kwd2.
This line contains kwd5.
I want to print a string whenever grep does not get a match. The output would look like:
$ sh search_kwd.sh
This line contains kwd1.
This line contains kwd2.
string
string
This line contains kwd5.
How do I go about doing this in bash?
grep exits with a non-zero code when nothing is found. From man grep: Normally the exit status is 0 if a line is selected, 1 if no lines were selected, and 2 if an error occurred. So you can use:
grep kwd3 search_file.txt || echo "string"
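Applied to the whole script, a sketch replacing the five separate grep lines could be a simple loop (keyword list copied from the question):
for kwd in kwd1 kwd2 kwd3 kwd4 kwd5; do
    grep "$kwd" search_file.txt || echo "string"
done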
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/516161", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/290191/" ] }
516,185
I have a server in a locked-down environment with no egress to the internet, and 2 interfaces: a physical eth0, and a vlan iface eth0.101 /etc/network/interfaces contains a post-up command to enable a route to a specific net block via the vlan iface, like so: post-up ip route add 10.1.0.0/24 via 10.1.2.1 dev eth0.101 During switch failover testing, we noticed that the route was lost ( RTNETLINK answers: Network is unreachable. ) which makes sense. However, once the network came back online, the route was not added to the interface again. I understand why - the interface didn't go down , it just lost access to that net. How can I configure an interface to restore routes to networks that where lost, but, to quote the old song, have now been found? We use Debian 9 and have a service definition [email protected] for each interface, which uses ifup commands to bring the device up / down. But again, the device, and the link to the switch, never faltered. I mention this in case extra systemd options can be leveraged.
A routing table will make your route permanent (to avoid adding it again manually after a switch failover).
First, create a named routing table. As an example, we could use "mgmt":
echo '200 mgmt' >> /etc/iproute2/rt_tables
For a bit more detail on the solution above: the kernel supports many routing tables and refers to them by unique integers numbered 0-255; a name, mgmt, is also defined for the table. Below is a look at a default /etc/iproute2/rt_tables, showing that some numbers are reserved. The choice of 200 in this answer is arbitrary; one might use any number that is not already in use, 1-252.
# reserved values
255 local
0 unspec
Second, edit your post-up rules (under /etc/network/interfaces) like this:
post-up ip route add 10.1.0.0/24 dev eth0.101 table mgmt
post-up ip route add default via 10.1.2.1 dev eth0.101 table mgmt
post-up ip rule add from 10.1.0.0/24 table mgmt
post-up ip rule add to 10.1.0.0/24 table mgmt
Alternatively, another solution could be a background bash script that checks whether the route exists and adds it back if it is missing. The script could check the result of ip route add 10.1.0.0/24 via 10.1.2.1 dev eth0.101 and could be set up in a loop or a cron job:
ip route add 10.1.0.0/24 via 10.1.2.1 dev eth0.101
if [ $? -eq 0 ]; then
  echo "Route added again"
  sleep 10; command-to-call-the-script-again
else
  echo "Route exists"
  sleep 10; command-to-call-the-script-again
fi
Source: what is the best way to add a permanent route?
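A simpler variant of that watchdog idea: ip route replace succeeds whether or not the route is already present, so a cron entry can just reassert it periodically. A sketch as an /etc/cron.d entry (the one-minute interval is only an example):
* * * * * root ip route replace 10.1.0.0/24 via 10.1.2.1 dev eth0.101
Being idempotent, it needs no exit-status check at all.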
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/516185", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/193601/" ] }
516,203
I need to do some work on 700 network devices using an expect script. I can get it done sequentially, but so far the runtime is around 24 hours. This is mostly due to the time it takes to establish a connection and the delay in the output from these devices (old ones). I'm able to establish two connections and have them run in parallel just fine, but how far can I push that? I don't imagine I could do all 700 of them at once, surely there's some limit to the no. of telnet connections my VM can manage. If I did try to start 700 of them in some sort of loop like this: for node in `ls ~/sagLogs/`; do foo & done With CPU 12 CPUs x Intel(R) Xeon(R) CPU E5649 @ 2.53GHz Memory 47.94 GB My question is: Could all 700 instances possibly run concurrently? How far could I get until my server reaches its limit? When that limit is reached, will it just wait to begin the next iteration off foo or will the box crash? I'm running in a corporate production environment unfortunately, so I can't exactly just try and see what happens.
Could all 700 instances possibly run concurrently? That depends on what you mean by concurrently. If we're being picky, then no, they can't unless you have 700 threads of execution on your system you can utilize (so probably not). Realistically though, yes, they probably can, provided you have enough RAM and/or swap space on the system. UNIX and it's various children are remarkably good at managing huge levels of concurrency, that's part of why they're so popular for large-scale HPC usage. How far could I get until my server reaches its limit? This is impossible to answer concretely without a whole lot more info. Pretty much, you need to have enough memory to meet: The entire run-time memory requirements of one job, times 700. The memory requirements of bash to manage that many jobs (bash is not horrible about this, but the job control isn't exactly memory efficient). Any other memory requirements on the system. Assuming you meet that (again, with only 50GB of RAM, you still ahve to deal with other issues: How much CPU time is going to be wasted by bash on job control? Probably not much, but with hundreds of jobs, it could be significant. How much network bandwidth is this going to need? Just opening all those connections may swamp your network for a couple of minutes depending on your bandwidth and latency. Many other things I probably haven't thought of. When that limit is reached, will it just wait to begin the next iteration off foo or will the box crash? It depends on what limit is hit. If it's memory, something will die on the system (more specifically, get killed by the kernel in an attempt to free up memory) or the system itself may crash (it's not unusual to configure systems to intentionally crash when running out of memory). If it's CPU time, it will just keep going without issue, it'll just be impossible to do much else on the system. If it's the network though, you might crash other systems or services. What you really need here is not to run all the jobs at the same time. Instead, split them into batches, and run all the jobs within a batch at the same time, let them finish, then start the next batch. GNU Parallel ( https://www.gnu.org/software/parallel/ ) can be used for this, but it's less than ideal at that scale in a production environment (if you go with it, don't get too aggressive, like I said, you might swamp the network and affect systems you otherwise would not be touching). I would really recommend looking into a proper network orchestration tool like Ansible ( https://www.ansible.com/ ), as that will not only solve your concurrency issues (Ansible does batching like I mentioned above automatically), but also give you a lot of other useful features to work with (like idempotent execution of tasks, nice status reports, and native integration with a very large number of other tools).
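If you want to stay with plain bash, a rough sketch of the batching idea (batch size 25 is only an example) could look like:
batch=25
i=0
for node in ~/sagLogs/*; do
    foo "$node" &                     # whatever per-node work you already do
    (( ++i % batch == 0 )) && wait    # let each batch finish before starting the next
done
wait
With GNU Parallel, the equivalent would be roughly parallel -j 25 foo ::: ~/sagLogs/* .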
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/516203", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/302078/" ] }
516,276
I would like to restore the horizontal style in-application menu bar in KDE Plasma 5. Somehow I ended up with a single "control" menu item like this: Using that requires two mouse clicks instead of one. What I would like is the horizontal style in-application menu (just below the title bar) like this: What are the steps for switching back? The other questions on this topic discuss Global Menus, which I do not believe is related to my question about in-application menus. I am not using global (Unity or Mac style) menus. Other questions / answers I looked at include: https://unix.stackexchange.com/a/426002 https://unix.stackexchange.com/a/489727 https://askubuntu.com/questions/30180/menubar-hidden-in-all-kde-apps Pressing CTRL-M does not resolve it. That appears to be a different issue. I have a menu, it's just not the one I want.
Operating System: Kubuntu 19.04
KDE Plasma Version: 5.15.5
KDE Frameworks Version: 5.57.0
Qt Version: 5.12.2
Kernel Version: 5.0.0-15-generic
OS Type: 64-bit
I've checked with Dolphin, Kate and Gwenview. I think the problem is with the ☰ button in your title bar. Its presence suppresses the appearance of the horizontal menu. Remove it by opening System Settings > Application Style > Window Decorations > Buttons tab and dragging the ☰ button off your title bar into the space below. In Kate and Gwenview, the horizontal menu can now be toggled on/off with Ctrl + M . In Dolphin, Ctrl + M now toggles between the horizontal menu and the Control (☰) button.
Edit: And it's the same with the Fedora 30 KDE Spin:
Operating System: Fedora 30
KDE Plasma Version: 5.15.5
KDE Frameworks Version: 5.58.0
Qt Version: 5.12.1
Kernel Version: 5.0.17-300.fc30.x86_64
OS Type: 64-bit
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/516276", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15010/" ] }
516,350
If I have a folder of files with their filenames as the date they were created:
2019_04_30.txt
2019_04_15.txt
2019_04_10.txt
2019_02_20.txt
2019_01_05.txt
How would I compare the file names against today's current date?
$ date "+%Y_%m_%d"
>>> 2019_04_30
If the file name's date is more than 30 days old, then delete it. I would expect to end up with:
2019_04_30.txt
2019_04_15.txt
2019_04_10.txt
I don't have to follow this naming convention, I could use a more suitable date format.
Here is a bash solution.
f30days=$(date +%s --date="-30 days")
for file in 20*.txt; do
    fdate=$(echo $file | tr _ -)
    fsec=$(date +%s --date=${fdate/.txt/})
    if [[ $fsec -lt $f30days ]]; then
        echo "rm $file"
    fi
done
I ended it with "echo rm $file" instead of really deleting your files, so you can test the result first.
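Since the names are zero-padded YYYY_MM_DD, plain string comparison also works, so a variant that skips the conversion to epoch seconds could be (sketch):
cutoff=$(date +%Y_%m_%d --date="-30 days")
for file in 20*.txt; do
    # lexicographic comparison is enough because the date fields are fixed-width
    [[ ${file%.txt} < $cutoff ]] && echo "rm $file"
done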
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/516350", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/343249/" ] }
516,381
I formatted an external hard drive (sdc) to ntfs using parted, creating one primary partition (sdc1). Before formatting the device SystemRescueCd was installed on the external hard drive using the command dd in order to be used as a bootable USB. However when listing devices with lsblk -f I am still getting the old FSTYPE (iso9660) and LABEL (sysrcd-5.2.2) for the formatted device (sdc): NAME FSTYPE LABEL UUID MOUNTPOINTsda ├─sda1 ntfs System Reserved ├─sda2 ntfs ├─sda3 ntfs ├─sda4 sdc iso9660 sysrcd-5.2.2 └─sdc1 ntfs sysrcd-5.2.2 /run/media/user/sysrcd-5.2.2 As shown in the output of lsblk -f only the FSTYPE of the partition sdc1 is correct, the sdc1 partition LABEL, sdc block device FSTYPE and LABEL are wrong. The nautilus GUI app is also showing the old device label (sysrcd-5.2.2). After creating a new partition table, parted suggested I reboot the system before formatting the device to ntfs , but I decided to unmount sdc instead of rebooting. Could it be that the kernel is still using the old FSTYPE and LABEL because I haven't rebooted the system? Do I have to reboot the system to get rid of the old FSTYPE and LABEL? As an alternative to rebooting is there a way to rename the FSTYPE and LABEL of a block device manually so that I can change them to the original FSTYPE and LABEL that shipped with the external hard drive?
From the output of lsblk -f in the original post I suspected that the signature of the installed SystemRescueCd was still present in the external hard drive. So I ran the command wipefs /dev/sdc and wipefs /dev/sdc1 which printed information about sdc and all partitions on sdc : [root@fedora user]# wipefs /dev/sdcDEVICE OFFSET TYPE UUID LABELsdc 0x8001 iso9660 sysrcd-5.2.2sdc 0x1fe dos [root@fedora user]# wipefs /dev/sdc1DEVICE OFFSET TYPE UUID LABELsdc1 0x3 ntfs sdc1 0x1fe dos The above printout confirmed that the iso9660 partition table created by SystemRescueCd was still present. lsblk was using the TYPE and LABEL of the iso9660 partition table instead of the dos (Master Boot Record) partition table. To get lsblk to display the correct partition table the iso9660 partition table had to be deleted. Note that dd can also be used to wipe out a partition-table signature from a block (disk) device but dd could also wipe out other partition tables. Because we want to target only a particular partition-table signature for wiping, wipefs was preferred since unlike dd , with wipefs we would not have to recreate the partition table again. The -a option of the command wipefs erases all available signatures on the device but the -t option of the command wipefs when used together with the -a option restricts the erasing of signatures to only a certain type of partition table. Below we wipe the iso9660 partition table. The -f ( --force ) option is required when erasing a partition-table signature on a block device. [root@fedora user]# wipefs -a -t iso9660 -f /dev/sdc/dev/sdc: 5 bytes were erased at offset 0x00008001 (iso9660): 43 44 30 30 31 After erasing the iso9660 partition table we check the partition table again to confirm that the partition table iso9660 was erased: [root@fedora user]# wipefs /dev/sdcDEVICE OFFSET TYPE UUID LABELsdc 0x1fe dos [root@fedora user]# wipefs /dev/sdc1DEVICE OFFSET TYPE UUID LABELsdc1 0x3 ntfs 34435675G36Y4776 sdc1 0x1fe dos But now that the problematic iso9660 partition table has been erased lsblk is now using the UUID of the partition as the mountpoint directory name since the previously used label of the iso9660 partition-table no longer exists: NAME FSTYPE LABEL UUID MOUNTPOINTsda ├─sda1 ntfs System Reserved ├─sda2 ntfs ├─sda3 ntfs ├─sda4 sdc └─sdc1 ntfs 34435675G36Y4776 /run/media/user/34435675G36Y4776 we can check which volumes (i.e. partitions) have labels in the directory /dev/disk/by-label which lists all the partitions that have a label: [root@fedora user]# ls -l /dev/disk/by-labeltotal 0lrwxrwxrwx. 1 root root 10 Apr 30 19:47 'System\x20Reserved' -> ../../sda1 The ntfs file system on the partition sda1 is the only partition that has a label To make the directory name of the mountpoint more human readable we change the label for the ntfs file system on the partition sdc1 from nothing (empty string) to a "new label". The commands for changing the label for a file system depend on the file system 1 2 . For an ntfs file system changing the label is done with the command ntfslabel : ntfslabel /dev/sdc1 "new-label" Now after changing the label on the ntfs file system lsblk uses the "new-label" as the name of the directory of the mountpoint: NAME FSTYPE LABEL UUID MOUNTPOINTsda ├─sda1 ntfs System Reserved ├─sda2 ntfs ├─sda3 ntfs ├─sda4 sdc └─sdc1 ntfs new-label /run/media/user/new-label Notice: also that the device sdc no longer has a file system type and label just like all the other block devices (e.g. sda). 
Only partitions should have a file system type since the file system is on the partition not the device, and label since the column header LABEL is the file system label not the device label.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/516381", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/289865/" ] }
516,384
I'm trying to convert some .jb2e images, which I extracted from a PDF, into a proper, common image file format like PNG or JPG. I tried using jbig2dec , but that told me jbig2dec FATAL ERROR Not a JBIG2 file header What else can I try? I'm using Devuan ASCII (~= Debian Stretch).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/516384", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34868/" ] }
516,404
I know there's a syntax you can use, like:
grep -oP '.word1.*?word2'
but this doesn't work on multiple lines. So here's an example input:
user1:x:1001:1001::/home/user1home:/bin/bash
user2:x:1002:1002::/home/user2home:/bin/bash
user3:x:1003:1003::/home/user3home:/bin/bash
user4:x:1004:1004::/home/user4home:/bin/bash
The command I tried to use was:
grep -oP '.1002:1002.*?user4home'
My desired output would be something like this:
1002:1002::/home/user2home:/bin/bash
user3:x:1003:1003::/home/user3home:/bin/bash
user4:x:1004:1004::/home/user4home
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/516404", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/350735/" ] }
516,419
When I boot without the USB mounted I get this message even though the OS is installed in the hard drive: wee 0>find --set -root /grldr(0x80,0)wee 0> /grldrwee 13>
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/516419", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/350753/" ] }
516,423
I went from Windows 7 to Debian 9, copying most of the files I'm using for my projects from an NTFS drive. I see that : all the folders I have copied are now with rights drwxrwxrwx instead of drwxr-xr-x . all the files have those rights too, instead of -rw-r--r-- . Is there an easy way to correct this, recursively ? a chmod I think, but I'm not used with its parameters. Files and folders shall have differents rights.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/516423", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/350549/" ] }
516,472
I'm using a Makefile such that the default target prints a help document for the other options. It would be great if I could add an EOF (heredoc) section instead of multiple echos. Is it possible to use EOF in a Makefile like you can in a bash script?
If you're using GNU make, you can use the define syntax for multi-line variables:
define message =
You can also include make macros like $$@ = $@
in a multi-line variable
endef

help:; @ $(info $(message)) :
By playing with the value function and the export directive, you can also include a whole script in your Makefile, which will allow you to use a multi-line command without turning on .ONESHELL globally:
define _script
echo SHELL is $SHELL, PID is $$
cat <<'EOF'
literally $SHELL and $$
EOF
endef

export script = $(value _script)

run:; @ eval "$$script"
will give
SHELL is /bin/sh, PID is 12434
literally $SHELL and $$
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/516472", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61262/" ] }
516,588
I am trying to figure out, whether or not it's possible to make sure, that if a user uses a browser and types in a domain name that ends on either "se" or "ru", they will be denied access to that site. PS: this is a school assignment, and my teacher demands that I make use of tcp wrapper, so downloading some module that will do the trick is out of the question, unfortunately
No, it is not possible. (It might be a trick question :-). TCP Wrapper (tcp_wrappers_7.6.tar.gz) Wietse Venema's network logger, also known as TCPD or LOG_TCP. These programs log the client host name of incoming telnet, ftp, rsh, rlogin, finger etc. requests. To fetch a website, a web browser makes an outgoing request. (And web browsers do not abuse libwrap for a purpose it is not intended for.)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/516588", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/350908/" ] }
516,651
I'm setting up a PXE server to automate deployment of KVM guests. KVM hypervisor host: Fedora 29 KVM guests: Centos 7 During the installation I face a problem /sbin/dmsquash-live-root: write error: No space left on device and after this some "timeout scripts" are started with the following fail of the installation. Quick overview of the environment: DHCP server is alright dhcpd.conf subnet 172.31.0.0 netmask 255.255.255.0 { range 172.31.0.51 172.31.0.120; default-lease-time 1800; max-lease-time 3600; next-server 172.31.0.32; filename "pxelinux/pxelinux.0"; option routers 172.31.0.1; option subnet-mask 255.255.255.0; option broadcast-address 172.31.0.255; option domain-name-servers 172.31.0.2; option domain-name "corp.example.com";} VM actually gets IP address and TFTP server IP address TFTP server is also alright [root@kickstart ~]# ll /var/lib/tftpboot/pxelinux/total 57872-rw-r--r--. 1 root root 52584760 Apr 29 17:07 initrd.img-rw-r--r--. 1 root root 26759 Apr 29 17:02 pxelinux.0drwxr-xr-x. 2 root root 21 May 1 13:48 pxelinux.cfg-rwxr-xr-x. 1 root root 6639904 Apr 29 17:07 vmlinuz Kickstart file [root@kickstart ~]# cat /var/lib/tftpboot/pxelinux/pxelinux.cfg/defaultdefault Linuxprompt 1timeout 10display boot.msglabel Linux menu label ^Install Centos MA MAN menu default kernel vmlinuz append initrd=initrd.img ks=http://kickstart.corp.example.com/anaconda/anaconda-ks.cfg VM actually gets vmlinuz and initrd.img anaconda-ks.cfg is pretty standard I believe ignoredisk --only-use=sdakeyboard 'us'rootpw --iscrypted $1$tg.NYz9t$GnRVNLuQdB6mperFmUdwL.lang en_UShalttimezone America/New_Yorktextnetwork --bootproto=dhcp --device=eth0network --hostname=test1.corp.example.comurl --url="http://kickstart.corp.example.com/install" # Apache serverauth --useshadow --passalgo=sha512firewall --enabled --port=sshselinux --enforcingskipxbootloader --location=mbr --boot-drive=sdaautopart --type=lvmclearpart --none --initlabel Installation source is an Apache server It's available on the network. <VirtualHost *:80>DocumentRoot /www/docs/kickstart.corp.example.comServerName kickstart.corp.example.comOptions +Indexes</VirtualHost> I noticed "SATA link down" messages (see screenshot above) and problem with mounting /dev/loop0 but I don't know how to interpret it. I don't know where to dig further.
At this point, the guest has successfully booted the kernel and is running in initramfs environment. The installer initramfs is loading a squashfs file, which would be located at <CentOS DVD root>/LiveOS/squashfs.img . In this case, I believe it might be loading it from http://kickstart.corp.example.com/install/LiveOS/squashfs.img - or it might even be loading it over the internet from the CentOS package repository servers. (If the latter is true, you can add a boot option inst.stage2=http://kickstart.corp.example.com/install to the append line in /var/lib/tftpboot/pxelinux/pxelinux.cfg/default to enforce loading it from a local source.) Since the root filesystem is not yet mounted, it would be loading it into a RAM disk. At this point the installer UI is not started yet, and the local disks haven't been touched at all, although the kernel has detected that /dev/vda is present. On an old CentOS 7 ISO image I have at hand, the squashfs.img file is 352 MiB in size. An up-to-date version is likely to be a bit larger than that; the output of curl (the tool that is actually doing the downloading) encapsulated in the messages logged by dracut-initqueue suggests that your squashfs.img is 432 MiB in size, and the download gets aborted at about the 75% point because there is not enough space (in the ramdisk, I assume). Since the squashfs.img download was incomplete, mounting it will fail, and then the RAM disk will still be 100% full, causing the No space left on device error message. How much RAM does your guest VM have assigned to it? If the VM is tiny, you might be running out of memory.
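Putting that suggestion into the PXE config, the append line in /var/lib/tftpboot/pxelinux/pxelinux.cfg/default would become something like (one line, sketch only):
append initrd=initrd.img inst.stage2=http://kickstart.corp.example.com/install ks=http://kickstart.corp.example.com/anaconda/anaconda-ks.cfg
Giving the guest more memory (2 GB or so) is the other obvious knob, since the squashfs image has to sit in RAM before the installer can touch the disk.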
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/516651", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/350951/" ] }
516,732
I have a file with lines, just like this:
A
B
C
I want to create a duplicate file in bash that contains each line merged with the copy of next line, like:
A;B
B;C
C;
Using awk:
awk 'prev{ print prev ";" $0 } { prev = $0 } END { if (NR) print prev ";" }'
which, with your input, gives
A;B
B;C
C;
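Another way to pair each line with the next is to paste the file against a copy of itself shifted by one line (sketch; the trailing echo supplies the empty partner for the last line):
paste -d';' file <(tail -n +2 file; echo)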
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/516732", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/351016/" ] }
516,774
Here is a simple script which is curling https://unix.stackexchange.com/ and storing the result into an array, which is working fine. #!/usr/local/bin/bash[ -f pgtoscrap ] && { rm pgtoscrap; };curl -o pgtoscrap https://unix.stackexchange.com/;declare -a arr;fileName="pgtoscrap";exec 10<&0exec < $fileNamelet count=0while read LINE; do arr[$count]=$LINE ((count++))doneexec 0<10 10<&- But, each time I run this script; I get some error for the wrong file descriptor. ./shcrap./shcrap: line 14: 10: No such file or directory I think I don't understand well how to use exec command in a loop correctly. Can someone explain? -- Update after implementing mapfile for Bash 4 it became much simpler -- #!/usr/local/bin/bash## Pass a parameter as e.g. ./linkscrapping.bash https://unix.stackexchange.com/mapfile -t arr < <(curl -s $1); ## Doing exec stuff with process substitutionregex="<a[[:print:]]*<\/a>"; ELEMENTS=${#arr[@]}; firstline=0;for((i=0;i<$ELEMENTS;i++)); do if [[ ${arr[${i}]} =~ $regex ]]; then [[ $firstline<1 ]] && { echo ${BASH_REMATCH[0]} > scrapped; let firstline=$firstline+1; } || { echo ${BASH_REMATCH[0]} >> scrapped; } fidonepg2scrap="scrapped"; mapfile -t arr2 < <(cat $pg2scrap);regex="href=[\"\'][0-9a-zA-Z\:\/\.]+"; ELEMENTS2=${#arr2[@]}; line2=0for ((i=0;i<$ELEMENTS2;i++)); do if [[ ${arr2[${i}]} =~ $regex ]]; then [[ $line2<1 ]] && { echo ${BASH_REMATCH[0]#href=\"} > links; (( line2++ )); } || { echo ${BASH_REMATCH[0]#href=\"} >> links; } fidone; cat links;
It surely has to do with how you close the file descriptor that you had opened earlier for stdin. Using the below should just be fine:
exec 10<&-
When you do 0<10 , you instruct the shell to look for and to slurp in the contents of a file named 10 in your current directory, which makes no sense in this context. In bash you can also use an alternate form, exec 10>&- , which achieves the same purpose of closing the descriptor. But that said, you don't need to use exec on a random file descriptor to read your input; you can just read your input with the process substitution technique in bash, of the form < <() , as
while IFS= read -r line; do
    arr["$count"]="$line"
    ((count++))
done < <(cat pgtoscrap)
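If you do want to keep the save-and-restore pattern from the question, a corrected sketch of the exec dance would be:
exec 10<&0          # duplicate the current stdin onto fd 10
exec < "$fileName"  # point stdin at the file
while read -r LINE; do
    arr[$count]=$LINE
    ((count++))
done
exec 0<&10 10<&-    # restore stdin from fd 10, then close fd 10
The original script's exec 0<10 failed because, without the &, 10 is treated as a filename rather than a descriptor.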
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/516774", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/318405/" ] }
516,777
this is Apache httpd 2.4. I am trying to port a legacy installation from 2.2 to 2.4 and clean it up a bit in the process. There are a lot off directories with identical setup, interspersed with a few dirs that differ. In essence, it looks like this: <directory /var/www/aaa > # lots of stuff</directory><directory /var/www/bbb > # lots of stuff</directory><directory /var/www/ > # lots of stuff</directory><directory /var/www/ccc > # lots of other stuff</directory> I'd love to avoid redundant lines, so I'm wishing for some sort of shorthand, like: <directory /var/www/aaa ><directory /var/www/bbb ><directory /var/www/ > # lots of stuff</directory><directory /var/www/ccc > # lots of other stuff</directory> But <Directory> doesn't support multiple dirs in any way that I can find. So <DirectoryMatch> seems appropriate, like this: <directoryMatch /var/www/(aaa|bbb|) > # lots of stuff</directory><directoryMatch /var/www/ccc > # lots of other stuff</directory> This fails, as the /var/www/(aaa|bbb|) matches /var/www/ccc in partial. And contrary to <Directory> , <DirectoryMatch> is not processed in the order shortest directory component to longest. The manual claims that "within each group the sections are processed in the order they appear in the configuration files", but even if I switch the sections, the result is still the same (that /var/www/ccc is hit by " # lots of stuff " instead of by " # lots of other stuff "). In this example, I can of course extend the regexp ( /var/www/(aaa|bbb|(?!ccc)) ), but the real configuration is unfortunately much more complex than this. Working with the regex is probably still a possibility, but the result will hardly be easy to understand or maintain. So can someone give me a hint if there is some way to do this, beside the DirectoryMatch? And perhaps an explanation for why Apache seems to prefer the partial match above? Thanks.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/516777", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/351043/" ] }
516,800
I have put
set -g mouse on
set -g mouse-select-pane on
in my .tmux.conf file, and then I open tmux and create a new pane and I can't switch panes with the mouse. Is there something else I need to configure? I am using tmux 2.9 on Mac using zsh.
Here is a clarification of Mark Volkmann's answer. First make sure that you have set -g mouse on in your .tmux.conf file, and that you have sourced the file by running tmux source <whatever config file> . I have found the other line you have in your config to be unnecessary. Once you have the config file set up, run: $ exit tmux$ tmux kill-server$ tmux NOTE: In a previous version of this answer I listed Volkmann's answer as "horrible and unreadable". I do not find this to be correct nor do I find it an acceptable way to treat another person. Please accept my apology for being rude and inconsiderate.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/516800", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/351061/" ] }
516,863
I am trying to align these columns super+t sticky togglesuper+Shift+space floating togglesuper+Shift+r restartsuper+Shift+d mode $mode_launchersuper+Shift+c reloadsuper+r mode resizesuper+Return i3-sensible-terminalsuper+q killsuper+n Nautilus scratchpad showsuper+m neomutt scratchpad showsuper+minus scratchpad showsuper+f fullscreen togglesuper+c bar mode togglesuper+button2 killsuper+alt+x systemctl -i suspendsuper+alt+v cmussuper+alt+m neomuttsuper+alt+c ~/bin/editinvimsuper+alt+b ranger I have tried to use awk but no luck there.The preferred format is like super+Shift+d mode $mode_launcher super+alt+c ~/bin/editinvimsuper+alt+b ranger
You could re-print the first column, left-justified in a suitably wide field: $ awk '{$1 = sprintf("%-30s", $1)} 1' filesuper+t sticky togglesuper+Shift+space floating togglesuper+Shift+r restartsuper+Shift+d mode $mode_launchersuper+Shift+c reloadsuper+r mode resizesuper+Return i3-sensible-terminalsuper+q killsuper+n Nautilus scratchpad showsuper+m neomutt scratchpad showsuper+minus scratchpad showsuper+f fullscreen togglesuper+c bar mode togglesuper+button2 killsuper+alt+x systemctl -i suspendsuper+alt+v cmussuper+alt+m neomuttsuper+alt+c ~/bin/editinvimsuper+alt+b ranger If you want to choose a suitable width automatically based on the length of column 1, then: awk ' NR==FNR {w = length($1) > w ? length($1) : w; next} {$1 = sprintf("%-*s", w+4, $1)} 1' file file
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/516863", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273967/" ] }
516,870
Is there any way to add custom commands to /bin ? For example, I use docker container ls a lot, and would like to turn this into a shortcut command, like dcls . If I add a file named dcls to /bin and inside the file, specify the exact command docker container ls , I assume this wouldn't work. What is the right way, if there is one, to do something like this?
An easy way for a shortcut is to define an alias alias dcls='docker container ls' This will execute docker container ls when you enter dcls and the command alias lists your defined aliases. To remove this alias use unalias dcls . If you use bash, you can save the alias in your ~/.bashrc or ~/.bash_aliases . If your ~/.bash_aliases is not read on startup, you can add this line to your ~/.bashrc : [ -f ~/.bash_aliases ] && . ~/.bash_aliases
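If the shortcut ever needs to accept extra arguments, a small shell function (also placed in ~/.bashrc) is the usual alternative to an alias:
dcls() {
    docker container ls "$@"
}
Usage: dcls -a lists stopped containers too, since the extra flag is passed straight through to docker.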
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/516870", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/351127/" ] }
516,918
In zsh, is it possible to direct the trace output from set -x ( set -o xtrace ) to a file descriptor other than stderr? I'm looking for an equivalent of $BASH_XTRACEFD or a way to emulate the same behaviour.
As of zsh 5.7, the answer is no. The trace output always goes to stderr. Source: reading the source. The trace output is written to the file xtrerr , which looks promising, but the only assignments to xtrerr are to stderr , to a copy of it, or to NULL . It should be possible to write a dynamically loadable module that sets xtrerr , but writing a module outside the zsh source tree is not easy. A possible workaround is to emulate xtrace with a DEBUG trap . This provides the same basic information in most cases, but I'm sure there are plenty of corner cases where xtrace would be hard or impossible to emulate perfectly. One difference is that the inheritance of the xtrace option and the inheritance of traps follow different rules in some circumstances related to functions, subshells, emulate , etc. Proof-of-concept:
trap 'print -r -- "+$0:$LINENO>$ZSH_DEBUG_CMD" >>trace_file' DEBUG
Or maybe a bit more sophisticated (untested):
zmodload zsh/system
sysopen -a -o create -u xtrace_fd trace_file
trap 'syswrite -o $xtrace_fd "+$0:$LINENO>$ZSH_DEBUG_CMD"' DEBUG
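As a usage sketch of the DEBUG-trap workaround (plain zsh, no extra modules), you would enable it around the code you want traced and then reset the trap:
# start emulated tracing into trace_file
trap 'print -r -- "+$0:$LINENO>$ZSH_DEBUG_CMD" >>trace_file' DEBUG
some_function_under_test   # hypothetical function name
trap - DEBUG               # stop tracing
tail trace_file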
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/516918", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143394/" ] }
516,925
I don't want to remove the service, I just want to avoid its start on boot. I still need the option to start it manually later (with the systemctl start <service> command). I tried to use systemctl disable <service> . It doesn't work, because it removes the service. There is another possibility. In its service file, the WantedBy line in the [Install] section could be commented out (and then, systemctl daemon-reload ):

[Install]
#WantedBy=multi-user.target

It works in the case of my own services, because their service files were written by me. However, the service files belonging to the distribution are in /lib/systemd/system . Files in this directory are managed by the OS, i.e. they would be overwritten by updates, other parts of the system might assume that these are unmodified, and so on. Simply editing system files outside of /etc is a bad practice, and I don't want to do that. I don't want to edit configuration files in my /lib . What to do?
systemctl disable is the correct way to do this; it still allows starting a unit manually, even if it doesn’t appear in systemctl --all ’s output — to list all startable units, you should run systemctl list-unit-files instead. To render a unit un-startable, you need to mask it.

$ sudo systemctl stop unbound
$ sudo systemctl status unbound
● unbound.service - Unbound DNS server
   Loaded: loaded (/lib/systemd/system/unbound.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Fri 2019-05-03 13:12:41 CEST; 5s ago
     Docs: man:unbound(8)
 Main PID: 5320 (code=exited, status=0/SUCCESS)
$ sudo systemctl disable unbound
$ sudo systemctl status unbound
● unbound.service - Unbound DNS server
   Loaded: loaded (/lib/systemd/system/unbound.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:unbound(8)
$ sudo systemctl start unbound
$ sudo systemctl status unbound
● unbound.service - Unbound DNS server
   Loaded: loaded (/lib/systemd/system/unbound.service; disabled; vendor preset: enabled)
   Active: active (running) since Fri 2019-05-03 13:13:14 CEST; 1s ago
     Docs: man:unbound(8)
  Process: 30513 ExecStartPre=/usr/lib/unbound/package-helper chroot_setup (code=exited, status=0/SUCCESS)
  Process: 30518 ExecStartPre=/usr/lib/unbound/package-helper root_trust_anchor_update (code=exited, status=0/SUCCESS)
 Main PID: 30525 (unbound)
    Tasks: 1 (limit: 4915)
   CGroup: /system.slice/unbound.service
           └─30525 /usr/sbin/unbound -d

If you really want to, you can override system-provided services defined in /lib by adding files in /etc , and change their desired target; systemctl edit yourunit will do the right thing: it opens an editor, allowing you to override only the settings you care about, and it will store the result in the right place, as an override “snippet”. Updates made to non-overridden settings in the system-provided services ( e.g. by package upgrades) will be taken into account transparently.
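For completeness, masking is what makes a unit truly un-startable: it links the unit to /dev/null so that even a manual systemctl start fails, and it is reversible. Shown here for the same unbound unit used above:

sudo systemctl mask unbound     # start/enable now fail until unmasked
sudo systemctl unmask unbound   # undo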
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/516925", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52236/" ] }
516,931
How do I install Wine on Fedora 20? Other versions of Fedora or other Linuces are not an option. I would prefer to use a package manager, but can also build from source.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/516931", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9320/" ] }
516,949
Problem

I have a script to be run at 1:32 am, so I set a cronjob by

$ crontab -e

And in the editing file, I have

32 1 * * * /home/user/.scripts/midnightjobs

where "user" is my user name. However, it did not work.

Attempts I made

I tried adding a logging function in my script, and hoped to see what was wrong. It seems like the script never ran. I also tried adding another cronjob at 7:59am:

0 8 * * * /home/user/.scripts/midnightjobs

And it works! The script ran, and did output a log file at 8 am.

My guess

I believe I have been very careful.. and based on my second attempt, my best guess is that my laptop (running on an archlinux) secretly falls asleep at nights, failing to run the cronjob.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/516949", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/291142/" ] }
517,165
If the current user only has execute (--x) permissions on a file, under which user does the interpreter (specified by #!/path/to/interpreter at the beginning of the file) run? It couldn't be the current user, because he doesn't have permission to read the file. It couldn't be root, because then arbitrary code included in the interpreter would gain root access. As which user, then, does the interpreter process run? Edit: I think my question assumes that the file has already been read enough to know which interpreter it specifies, when in reality it wouldn't get that far. The current shell (usually b/a/sh) interpreting the command to execute the target file would attempt to read it, and fail.
If the user has no read permission on an executable script, then trying to run it will fail, unless she has the CAP_DAC_OVERRIDE capability (eg. she's root):

$ cat > yup; chmod 100 yup
#! /bin/sh
echo yup
^D
$ ./yup
/bin/sh: 0: Can't open ./yup

The interpreter (whether failing or successful) will always run as the current user, ignoring any setuid bits or setcap extended attributes of the script. Executable scripts are different from binaries in the fact that the interpreter should be able to open and read them in order to run them. However, notice that they're simply passed as an argument to the interpreter, which may not try to read them at all, but do something completely different:

$ cat > interp; chmod 755 interp
#! /bin/sh
printf 'you said %s\n' "$1"
^D
$ cat > script; chmod 100 script
#! ./interp
nothing to see here
^D
$ ./script
you said ./script

Of course, the interpreter itself may be a setuid or cap_dac_override=ep -setcap binary (or pass down the script's path as an argument to such a binary), in which case it will run with elevated privileges and could ignore any file permissions.

Unreadable setuid scripts on Linux via binfmt_misc

On Linux you can bypass all the restrictions on executable scripts (and wreck your system ;-)) by using the binfmt_misc module:

As root:

# echo ':interp-test:M::#! ./interp::./interp:C' \
    > /proc/sys/fs/binfmt_misc/register
# cat > /tmp/script <<'EOT'; chmod 4001 /tmp/script # just exec + setuid
#! ./interp
id -u
EOT

As an ordinary user:

$ echo 'int main(void){ dup2(getauxval(AT_EXECFD), 0); execl("/bin/sh", "sh", "-p", (void*)0); }' | cc -include sys/auxv.h -include unistd.h -x c - -o ./interp
$ /tmp/script
0

Yuppie! More information in Documentation/admin-guide/binfmt-misc.rst in the kernel source. The -p option may cause an error with some shells (where it could be simply dropped), but is needed with newer versions of dash and bash in order to prevent them from dropping privileges even if not asked for.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/517165", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/317071/" ] }
517,255
I need a new server that will be storing a lot of data, I know that Debian 10 stable will be released soon. But wanted to know if I install Debian 10 right now and start configuring the server, then once Debian 10 gets a stable release, I will be able to upgrade to the stable version easily using apt upgrade ?
Actually you won't even have to upgrade or do anything, it will happen on its own: buster will become stable . If you're using the code name buster rather than the archive name testing for the repositories, then there's nothing to do. If the installation used testing you can change it to buster yourself (in /etc/apt/sources.list* ). When the stable "pointer" will move from stretch (the codename for Debian 9) to buster (codename for future Debian 10), buster will become the stable version of Debian, without any intervention from your side. You'll only notice it because the rate of upgrades will probably slow down. Note: of course, as OP and wurtel said, usual upgrades have still to be done regularly (for example with apt update + apt upgrade ) for anything to happen.
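For reference, pinning to the codename simply means that your /etc/apt/sources.list entries name buster rather than testing or stable. A typical pair of lines looks something like the following (the exact mirror and component list will differ on your system, so adapt rather than copy):

deb http://deb.debian.org/debian buster main
deb http://security.debian.org/debian-security buster/updates main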
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/517255", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41804/" ] }
517,350
Why do selinux policies apply to commands (e.g: logrotate) running from cronjobs, but not when run directly from command line? When I run logrotate manually from the command line, it works perfectly. But when it runs from the cronjob, I get an error in the audit.log alerting me that selinux prevented access to www, etc. Why is that? And how can I simulate it running from the cronjob to test?
When cron runs logrotate , SELinux confines it to a logrotate_t "type". That "type" is restricted from modifying other file types (aka "escaping the confinement"). When you run logrotate, you're (most likely) starting from an "unconfined" type, which means what it says -- the logrotate process is permitted to modify files. You might also want logrotate to restart or signal processes (via postrotate, for example); that activity may also be confined by SELinux. My suggestion here is to tell SELinux to allow ("permit") the logrotate_t type to escape the confinement, with: semanage permissive -a logrotate_t Doing so is a moderate solution, in-between turning SELinux off and fine-tuning a policy that allows exactly the confinement escapes that you need (perhaps with custom labeling). To revert this change, use semanage permissive -d logrotate_t . The best way to simulate a cron-initiated process is to put the jobs into cron. Alternatively, I'm aware of runcon , although I wasn't able to use it successfully.
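If you later want the narrower fix instead of a permissive domain, the usual recipe is to generate a local policy module from the denials recorded in the audit log. A rough sketch (the module name local-logrotate is arbitrary):

grep logrotate /var/log/audit/audit.log | audit2allow -M local-logrotate
semodule -i local-logrotate.pp

Review the generated local-logrotate.te before loading the module, so you only allow the accesses you actually intend.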
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/517350", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/276132/" ] }
517,355
I installed telegram-desktop via Flatpak and would like to auto start the messenger when logging into Gnome 3 (or Unity as a fact). Is there a way to robustly do so?
Starting from the answer given by @intika I found a solution I like more. Instead of replicating the content of the existing desktop-file in /var/lib/flatpak/exports/share/applications/org.telegram.desktop.desktop I linked it inside my personal ~/.config/autostart/ . Works like a charm :-)
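Concretely, that link can be created like this; the exported .desktop path is the one mentioned above (for a per-user Flatpak installation it lives under ~/.local/share/flatpak/exports/... instead):

mkdir -p ~/.config/autostart
ln -s /var/lib/flatpak/exports/share/applications/org.telegram.desktop.desktop ~/.config/autostart/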
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/517355", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52680/" ] }
517,370
I want to use a program in the shebang, so I create a script named <myscript> with: #!<mypgm> I also want to be able to run <mypgm> directly from the command prompt. <mypgm> args... So far, no issue. I want to be able to run <myscript> from the command prompt with arguments. <myscript> blabla In turn, the shebang makes <mypgm> being called with the following arguments: <mypgm> <myscript> blabla Now, I need to know when <mypgm> <myscript> blabla is called using the shebang, or not: myscript blabla # uses the shebang-or-<mypgm> myscript blabla # directly in the command prompt. I looked at the environment variables (edit: <=== wrong assertion (¬,¬”) ), at the process table (parent process too) but didn't find any way to make a difference. The only thing I found so far is: grep nonvoluntary_ctxt_switches /proc/$$/status When this line is just after the shebang, the value is often 2 (sometimes 3) when called through the shebang, and 1 (sometimes 2) with the direct call. Being unstable and dependent on process scheduling (the number of times the process was taken off from its CPUs), I am wondering if anybody here might have a better solution.
Instead of having myprg magically detect whether it is being used in a shebang, why not make that explicit by using a command-line flag (such as -f ) to pass it a file as a script? From your example in the comments:

In the calc theoretical example above, calc PI + 1 should return 4.14159... Now adding the support for the shebang (i.e. a filename as the first parameter) would return the calculation contained in the file.

Make calc take a script file through -f and then create scripts with:

#!/usr/local/bin/calc -f
$1 + 1

Let's say you call this file addone.calc and make it executable. Then you can call it with:

$ ./addone.calc PI
4.141592...

That call will translate into an invocation of /usr/local/bin/calc -f ./addone.calc PI , so it's pretty clear which argument is a script file and which is a parameter to the script. This is similar to how awk and sed behave. A similar (but opposite) approach is to have calc take a script file argument by default (which simplifies its use with a shebang), but add a command-line flag to use it with an expression from an argument. This is similar to how sh -c '...' works.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/517370", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/211496/" ] }
517,379
I have a directory with 100,000 files. These are audio files which I need to move to three different directories, train , dev and test . In the order of 80% , 10% and 10% respectively. mv `ls | head -500` ./subfolder1/ This moves files if we know the number by default but not by percent of total no of files. I wonder if there's a neater way to split dir to three.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/517379", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/351588/" ] }
517,385
During an upgrade of Debian 8 to 9 I was asked about the PAM Modules package installation. As I didn't know the correct answer, I chose option 1, Unix Authentication. Now GDM3 doesn't start anymore and my journalctl reads the following, repeatedly:

May 06 08:32:03 jesacer gdm3[809]: Unable to kill session worker process
May 06 08:32:03 jesacer gdm-launch-environment][5948]: pam_unix(gdm-launch-environment:session): session opened for user Debian-gdm by (uid=0)
May 06 08:32:03 jesacer /usr/lib/gdm3/gdm-wayland-session[5951]: Activating service name='org.freedesktop.systemd1'
May 06 08:32:03 jesacer /usr/lib/gdm3/gdm-wayland-session[5951]: Activated service 'org.freedesktop.systemd1' failed: Process org.freedesktop.systemd1 exited with status 1
May 06 08:32:03 jesacer /usr/lib/gdm3/gdm-wayland-session[5951]: Unable to register display with display manager
May 06 08:32:03 jesacer gdm-launch-environment][5948]: pam_unix(gdm-launch-environment:session): session closed for user Debian-gdm
May 06 08:32:03 jesacer gdm3[809]: Child process -5951 was already dead.
May 06 08:32:03 jesacer gdm3[809]: Child process 5948 was already dead.
May 06 08:32:03 jesacer gdm3[809]: Unable to kill session worker process
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/517385", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153059/" ] }
517,542
How to know the IP address of some host somename I can ssh to? If I do nslookup on this host it says "no answer". How can ssh resolve it's name then? Neither /etc/hosts nor .ssh/config explanation worked. EDIT Sorry somename is fully qualified. ssh somename.somedomain works, while ping somename.somedomain and nslookup somename.somedomain don't
Nslookup is a program to query Internet domain name servers . Nslookup is very good for querying DNS servers but it does not give you the whole picture when it comes to name resolution. On Linux name resolution is most commonly controlled by NSS which is configured by /etc/nsswitch.conf . Specifically, this configuration contains a hosts entry. For example:

hosts: files dns

In the above entry you can see that the first thing to be queried is files followed by dns , meaning that /etc/hosts will be queried before DNS. Other options exist including LDAP , Multicast DNS and WINS . Answering your question directly, SSH resolves the hostname to an IP address using NSS (pulling results from multiple sources) where nslookup only queries the DNS. You can check to see which IP NSS resolves a hostname to using getent. For example, to resolve somename :

getent hosts somename

Also, in the case of SSH you can configure host-specific information in /etc/ssh/ssh_config and ~/.ssh/config . This will even let you specify an IP address for a hostname, entirely skipping name resolution. The following tells SSH to use 192.168.1.25 for both dev and dev.example.com . SSH will use this address whether or not these names exist as DNS names for a different IP:

# contents of $HOME/.ssh/config
Host dev dev.example.com
    HostName 192.168.1.25
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/517542", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28089/" ] }
517,746
This is my sample text. grep w , grep ^w and grep '^[ ]w' are working just fine. [user@linux ~]$ grep w text.txtwhitespace 0 whitespace 1 whitespace 2[user@linux ~]$[user@linux ~]$ grep ^w text.txtwhitespace 0[user@linux ~]$ With 1 space [user@linux ~]$ grep '^[ ]w' text.txt whitespace 1[user@linux ~]$ With 2 space, but getting the same output [user@linux ~]$ grep '^[ ]w' text.txt whitespace 1[user@linux ~]$ According to https://regex101.com/ , ^[ ]{0,} is the right syntax to find whitespace at the start of a line. However, it doesn't work well with GNU grep on Linux. I'm getting the error Invalid regular expression : [user@linux ~]$ grep ^[ ]{0,}w text.txtgrep: Invalid regular expression[user@linux ~]$ And these do not return anything at all [user@linux ~]$ grep '^[ ]{0}w' text.txt[user@linux ~]$ grep '^[ ]{1}w' text.txt[user@linux ~]$ grep '^[ ]{2}w' text.txt[user@linux ~]$ grep '^[ ]{0,}w' text.txt[user@linux ~]$ Question: Would it be possible to use ^[ ]{0,} with GNU grep? If yes, what was wrong with my previous syntax?
There are various issues here. First of all, the expression ^[ ]w means: find the start of the line, then exactly one space, then a w . So it's actually working perfectly. If you want it to match one or more spaces, you need to add a qualifier to the [ ] character class: $ grep '^[ ]\+w' text.txt whitespace 1 whitespace 2 The + means "one or more". The default flavor of regular expressions used by grep are called BRE (basic regular expressions) and in that regex flavor, the + needs to be escaped, hence the \+ above * . Alternatively, you can use ERE (extended regular expressions) by passing the -E flag, or PCRE (Perl compatible regular expressions) by passing the -P flag. With these regex flavors, you don't need to escape the + for it to act as a quantifier: $ grep -P '^[ ]+w' text.txt whitespace 1 whitespace 2$ grep -E '^[ ]+w' text.txt whitespace 1 whitespace 2 The next issue, and the more important one, is that you are not quoting the regular expression. Quoting is necessary to ensure that the regex is passed to grep as is and not first interpreted by the shell. However, since you are not quoting it, it is getting expanded by the shell before it is passed to grep . You can examine this by using the set -x option to have the shell print what it is doing: $ set -x$ grep ^[ ]{0,}w text.txt+ grep '^[' ']0w' ']w' text.txtgrep: Invalid regular expression First, because there is a space between the ^[ and the ] , the shell is interpreting this as two separate arguments: ^[ and ]{0,}w . But the {} are used in the shell for brace expansion. For example: $ echo foo{a,b}fooa foob But when the second part of an expansion is empty, you get: $ echo foo{a,}fooa foo So, the expansion ]{0,}w becomes: $ echo ]{0,}w]0w ]w And as a result, and as you can see in the output of the set -x above, these three arguments are what is actually passed to grep : '^[' ']0w' ']w' But if you do quote them, they will need to be escaped when using BRE, just like the + above: $ grep '^[ ]\{2\}w' text.txt whitespace 2 One final note: the [ ] is exactly the same as , there's no point in using a character class for a single character. Putting all this together, to match exactly one space at the beginning of the line, use: $ grep '^ w' text.txt whitespace 1 To match one or more, use: $ grep '^ \+w' text.txt whitespace 1 whitespace 2 Or: $ grep -E '^ +w' text.txt whitespace 1 whitespace 2 or $ grep -P '^ +w' text.txt whitespace 1 whitespace 2 To match a specific number range (e.g. 0, 1 or 2 spaces): $ grep '^ \{0,3\}w' text.txt whitespace 0 whitespace 1 whitespace 2 or $ grep -P '^ {0,3}w' text.txt whitespace 0 whitespace 1 whitespace 2 or $ grep -E '^ {0,3}w' text.txt whitespace 0 whitespace 1 whitespace 2 And to match a specific number, either set that number in the {} as shown above, or just repeat the character N times: $ grep '^ \{2\}w' text.txt whitespace 2$ grep '^ w' text.txt whitespace 1$ grep '^ w' text.txt whitespace 2 And always quote your regular expressions! * Actually, in POSIX BRE, the + has no special meaning, but the BRE implemented by GNU grep does recognize it if it is escaped.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/517746", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/349707/" ] }
517,759
I'm learning how to create services with systemd. I get this error: .service: Start request repeated too quickly. I can't start the service any more; it was working yesterday. What am I doing wrong? (root@Kundrum)-(11:03:19)-(~)$nano /lib/systemd/system/swatchWATCH.service 1 [Unit] 2 Description=Monitor Logfiles and send Mail reports 3 After=syslog.target network.target 4 5 [Service] 6 Type=simple 7 ExecStart=/usr/bin/swatch --config-file=/home/kristjan/.swatchrc --input-record-separator="\n \n " --tail-file=/var/log/snort/alert --daemon 8 Restart=on-failure 9 StartLimitInterval=310 StartLimitBurst=1001112 [Install]13 WantedBy=multi-user.target StartLimitInterval and StartLimitBurst I added after trying to fix it. My system is Debian 9.8 Stretch all updates.
First, if this is a custom service, it belongs in /etc/systemd/system . /lib/systemd is intended for package-provided files. Second, the service is likely crashing and systemd is attempted to restart it repeatedly, so you need to figure out why it's crashing. Check the service logs with: journalctl -e -u swatchWATCH It's possible there will be some extra detail in the main journal: journalctl -e Finally, check to see it runs directly on the CLI ok: /usr/bin/swatch --config-file=/home/kristjan/.swatchrc --input-record-separator="\n \n " --tail-file=/var/log/snort/alert --daemon I see you are using a --daemon option. That's often a mistake with systemd. Systemd daemonizes for you. Try removing this option. If all else fails, review what changed since yesterday when it was working.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/517759", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36440/" ] }
517,773
I have a mess in my photo library. I have files like these:

image-1.jpg
image-1.jpeg
image-2.jpg

Now I want to delete all photos with the extension .jpeg when there is a file with the same name but with the extension .jpg. How can I do this?
for f in *.jpeg; do
    [ -e "${f%.*}.jpg" ] && echo rm -- "$f"
done

(remove echo if happy). With zsh and one rm invocation:

echo rm -- *.jpeg(e'{[ -e $REPLY:r.jpg ]}')

(change * to **/* to do that recursively, add the D glob qualifier, if you also want to consider hidden files or files in hidden directories).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/517773", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/328946/" ] }
517,814
I have a KSH ( not Bash ) script that I want to preempt with a file count check. If there's no files I want to print "EMPTY" then exit. Otherwise proceed. The problem is when I perform the file count, and it is EMPTY, then it breaks. Code: #################################################### Test to see if files exist in Drop Folder###################################################CONTENTS=$(ls ${gp_path}ALLSTUFF*.zip)if [[ ${#CONTENTS[@]} -eq 0 ]]; then print 'EMPTY' exit 0else print 'NOT EMPTY'fi If not empty, it works. If empty, I get the below error then the system breaks. I would like to have it just report EMPTY and exit 0: Error: /nas/Opt/databox/folder/ALLSTUFF*.zip not found What am I doing wrong? Attempt #2 I tried this as well, but I got the same result: if [ "$(ls ${gp_path}ALLSTUFF*.zip)" ]; then print 'NOT EMPTY'else print 'EMPTY' exit 0fi
You can make the error message disappear with 2>/dev/null inside the ls . You can then check to see if $CONTENTS is empty with -z :

CONTENTS=$(ls -d -- "${gp_path}ALLSTUFF"*.zip 2>/dev/null)
if [ -z "$CONTENTS" ]; then
    print 'EMPTY'
    exit 0
else
    print 'NOT EMPTY'
fi
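A variant that avoids parsing ls output entirely is to let the shell expand the glob into the positional parameters and test the first match; in ksh an unmatched pattern is left as a literal string, so the -e test fails when nothing matched. A sketch of that approach:

set -- "${gp_path}"ALLSTUFF*.zip
if [ -e "$1" ]; then
    print 'NOT EMPTY'
else
    print 'EMPTY'
    exit 0
fi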
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/517814", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/315200/" ] }
517,872
I have a "service" that originates from an /etc/init.d/XX script, and a systemd wrapper is generated for it. It doesn't autostart in any runlevel, and when I run systemctl --all or systemctl list-unit-files --all it doesn't show up in any list. My hunch is that because it has no dependencies, hasn't been started, it isn't "loaded" into systemd (so not enabled, not loaded) so systemd doesn't list it. Is there a way to get a full list of all possible services, even those not yet started and that aren't auto-started? Or do a systemctl search equivalent? This related question only asks for a list of services that will be attempted at boot time. The man page for systemctl under "--all" says To list all units installed on the system, use the list-unit-files command instead but these disabled units do not show up in the output of list-unit-files .
OK I asked "the systemd guys" and they said in old versions it didn't list them all, despite the man page saying this: -a, --all When listing units, show all loaded units, regardless of their state, including inactive units. When showing unit/job/manager properties, show all properties regardless whether they are set or not. To list all units installed on the system, use the list-unit-files command instead. Most of the functionality listed above didn't seem to actually happen with "my" systemd version systemd 219 (CentOS 6). But it seems fixed in newer versions of systemctl on Debian 9 which has version systemd 232 , perhaps that's why. You can still control them like normal (status, start, stop) with the older systemd version if the init.d script does specify runlevels but is absent from all /etc/rc?.d dirs (a la chkconfig --del my_service or its equivalent systemctl disable my_service , or if chkconfig was never run initially on it) then the systemd-sysv-generator with older versions of systemd it doesn't [show up in the lists][2] if you run systemctl list-units --all etc. even though it's working fine and still controllable. Or if they're normal services but disabled they seem to also not show up in any lists, despite being still functional. ref: https://lists.freedesktop.org/archives/systemd-devel/2019-May/042555.html
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/517872", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8337/" ] }
517,924
Input: 1hghh2bh4h2okkokolkopk3uhjunfjvn4 Expected output: 1234 So, I need to have only 1st, 5th, 9th, 13th value of the file in the output file. How to do this?
Using AWK:

awk '!((NR - 1) % 4)' input > output

Figuring out how this works is left as an exercise for the reader.
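For the record: (NR - 1) % 4 is 0 only on lines 1, 5, 9, 13 and so on, and a true pattern with no action makes awk print the line. If GNU sed is available, its step address does the same job:

sed -n '1~4p' input > output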
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/517924", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/347601/" ] }
517,948
I figured out the timestamp of a specific rm -r command run on an Ubuntu machine from history . Now, I want to get the IP of the user who executed this command. Is there any way I can do this?
You can use the last command, which lists users' last login/logout times:

> last
root     pts/1        10.1.6.120       Tue Jan 28 05:59   still logged in
root     pts/0        10.1.6.120       Tue Jan 28 04:08   still logged in
root     pts/0        10.1.6.120       Sat Jan 25 06:33 - 08:55  (02:22)
root     pts/1        10.1.6.120       Thu Jan 23 14:47 - 14:51  (00:03)
root     pts/0        10.1.6.120       Thu Jan 23 13:02 - 14:51  (01:48)
root     pts/0        10.1.6.120       Tue Jan  7 12:02 - 12:38  (00:35)

wtmp begins Tue Jan  7 12:02:54 2014
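To line a session up against the timestamp you found in history, it helps to see full dates; the util-linux version of last can also restrict the time window (flag support varies between implementations, and the dates below are placeholders, so treat this as a sketch):

last -F
last --since "2019-05-08 00:00" --until "2019-05-09 00:00"

The session whose login/logout window contains the command's timestamp is the one you are after; its third column is the remote IP.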
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/517948", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/145513/" ] }
518,001
In the Cinnamon Desktop: What command or code is run in response to Alt + F2 ? In what file is this association stored? What command or code is run in response to the r command in the command prompt window opened by Alt + F2 ?
Since posting this question, and with the help of the discussion following the earlier posted answer, I found the following answer in the Cinnamon source code : /** * cinnamon_global_reexec_self: * @global: A #CinnamonGlobal * * Restart the current process. Only intended for development purposes. */ void cinnamon_global_reexec_self (CinnamonGlobal *global) { meta_restart (); } I have implemented access to this function as a bash command ( restartcinnamon ) by adding the following line to my .bashrc file: alias restartcinnamon='dbus-send --type=method_call --print-reply \\ --dest=org.Cinnamon /org/Cinnamon org.Cinnamon.Eval \\ string:'\''global.reexec_self()'\''' \\ /usr/bin/dbus-send
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/518001", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/352143/" ] }
518,179
My Ubuntu/Debian-based Linux update POSIX shell script seems to require that I not double quote the string variable with the stored command, which is being executed. As I don't understand this issue, I'd like to ask why that is? And if the code is correct as it is? Warning: SC2086, wiki , "Double quote to prevent globbing and word splitting." The script follows, the problematic part is highlighted: #!/bin/sh# exit script when it tries to use undeclared variablesset -u# color definitionsreadonly red=$(tput bold)$(tput setaf 1)readonly green=$(tput bold)$(tput setaf 2)readonly yellow=$(tput bold)$(tput setaf 3)readonly white=$(tput bold)$(tput setaf 7)readonly color_reset=$(tput sgr0)# to create blocks of texts, I separate them with this linereadonly block_separator='----------------------------------------'step_number=0execute_jobs (){ while [ ${#} -gt 1 ] do job_description=${1} job_command=${2} step_number=$(( step_number + 1 )) printf '%s\n' "Step #${step_number}: ${green}${job_description}${color_reset}" printf '%s\n' "Command: ${yellow}${job_command}${color_reset}" printf '%s\n' "${white}${block_separator}${color_reset}" # RUN THE ACTUAL COMMAND # ShellCheck warns me I should double quote the parameter # If I did, it would become a string (I think) and I'd get 'command not found' (proven) # As I don't understand the issue, I left it as I wrote it, without quotes ### shellcheck disable=SC2086 if sudo ${job_command} # <-- HERE then printf '\n' else printf '%s\n\n' "${red}An error occurred.${color_reset}" exit 1 fi shift 2 done}execute_jobs \ 'configure packages' 'dpkg --configure --pending' \ 'fix broken dependencies' 'apt-get --assume-yes --fix-broken install' \ 'update cache' 'apt-get update' \ 'upgrade packages' 'apt-get --assume-yes upgrade' \ 'upgrade packages with possible removals' 'apt-get --assume-yes dist-upgrade' \ 'remove unused packages' 'apt-get --assume-yes --purge autoremove' \ 'clean up old packages' 'apt-get autoclean'
You can’t quote the variable here: if sudo ${job_command} because you do want word splitting. If you quote, the command becomes (for your first step) if sudo "dpkg --configure --pending" and sudo looks for the command dpkg --configure --pending , rather than the command dpkg with the arguments --configure and --pending , as indicated by its error message: sudo: dpkg --configure --pending: command not found (try it with extra spaces to make it more explicit). With the quotes omitted, the shell splits the arguments, and everything works as expected.
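If you would rather satisfy ShellCheck and not rely on word splitting of an unquoted expansion, one alternative (a sketch, not the only option) is to hand the string to a child shell and let it do the parsing:

if sudo sh -c "$job_command"
then
    printf '\n'
else
    printf '%s\n\n' "${red}An error occurred.${color_reset}"
    exit 1
fi

The trade-off is that each job string is then interpreted as full shell syntax, which also means it may contain pipes or redirections.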
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/518179", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126755/" ] }
518,222
$ uname -r
5.0.9-301.fc30.x86_64
$ findmnt /
TARGET SOURCE    FSTYPE OPTIONS
/      /dev/vda3 ext4   rw,relatime,seclabel
$ sudo fstrim -v /
fstrim: /: the discard operation is not supported

Same VM, but after switching the disk from VirtIO to SATA:

$ findmnt /
TARGET SOURCE    FSTYPE OPTIONS
/      /dev/sda3 ext4   rw,relatime,seclabel
$ sudo fstrim -v /
/: 5.3 GiB (5699264512 bytes) trimmed

The virtual disk is backed by a QCOW2 file. I am using virt-manager / libvirt. libvirt-daemon is version 4.7.0-2.fc29.x86_64. My host is currently running a vanilla kernel build 5.1 (ish), so it's a little bit "customized" at the moment, but I built it starting from a stock Fedora kernel configuration. Is there a way to enable discard support on VirtIO somehow? Or does the code just not support it yet? I don't necessarily require the exact instructions how to enable it, but I am surprised and curious and I would like a solid answer :-).
Apparently discard wasn't supported on that setting. However it can work if you change the disk from "VirtIO" to "SCSI", and change the SCSI controller to "VirtIO". I found a walkthrough . There are several walkthroughs; that was just the first search result. This new option is called virtio-scsi . The other, older system is called virtio-block or virtio-blk . I also found a great thread on the Ubuntu bug tracker . It points out that virtio-blk starts supporting discard requests in Linux 5.0. It says this also requires support in QEMU, which was committed on 22 Feb 2019. Therefore in future versions, I think we will automatically get both VirtIO and discard support. Currently my virt-manager does not create virtio-scsi disks by default, even when it knows I am installing Fedora 29; it only creates the basic "VirtIO" disks. I do not know if there is any disadvantage of switching to virtio-scsi . I guess virtio-scsi provides the same sort of performance advantage as virtio-blk , when compared to emulated SATA. (I do not see an option to use NVME protocol anywhere in virt-manager :-P, with or without VirtIO). The oVirt website has some nice propaganda , which mentions some limitations in virtio-blk virtio-scsi can be used in pass-through mode to a SCSI LUN, and can use various new SCSI command features without needing modifications in virtio-scsi . If you are not specifically using SCSI pass-through, then any new commands will require new support in QEMU, but not in the virtio-scsi code. virtio-scsi includes support for multiple queues. (I am not clear whether this is also helpful for efficiency on single-queue hardware, or not).
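On the libvirt side, once QEMU and the guest kernel both support it, discard is requested per disk in the domain XML (edit it with virsh edit <domain>); the relevant attribute is discard='unmap' on the driver element, roughly like this (the rest of the disk element stays as generated by virt-manager):

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' discard='unmap'/>
  ...
</disk>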
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/518222", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29483/" ] }
518,256
I would like to delete everything from a folder except csv files I am trying with a bash script and I am getting this error: syntax error near unexpected token `(' This is my script : PATH=/tmp/ run_spark_local rm -v !($PATH*.csv) cp -r $PATH /data/logs/ I have also tried rm -v !("$PATH*.csv")
You should avoid setting the PATH variable. This is used by your shell to find valid commands, setting it to /tmp/ is going to prevent the script from being able to find the rm and cp commands altogether. You can accomplish what you want with the following find command: find /tmp -not -name '*.csv' -not -path /tmp -exec rm -vr {} \; Note: this will delete any subdirectories under /tmp as well. If you do not want this you must change to: find /tmp -not -name '*.csv' -type f -exec rm -v {} \; Another note: This will still recurse into the subdirectories and delete the files in them. If you do not want this you can use the maxdepth argument: find /tmp -not -name '*.csv' -maxdepth 1 -type f -exec rm -v {} \; Extra note: I would never run a find ... -exec command that you find online without verifying it will do what you need it to do first. You should run: find /tmp -not -name '*.csv' -not -path /tmp And verify it is finding only the files you want before adding the -exec rm -vr {} \; bit.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/518256", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/352339/" ] }
518,266
I am running Arch Linux on a Raspberry Pi. Suddenly: I am unable to ping to a website. I am unable to access a website from the browser. I have two more computers (all running Arch Linux) connected to the Internet, where I can ping and use the Internet. Also, /etc/resolv.conf is identical on the other computers: nameserver 10.230.252.252nameserver 203.147.88.2nameserver 8.8.8.8search domain.name I can use VNC. I can also ping to 8.8.8.8.When trying to access DuckDuckGo on Chromium I get: This site can’t be reachedduckduckgo.com’s server IP address could not be found.DNS_PROBE_FINISHED_NXDOMAIN I have an active Internet connection. What's wrong?
Although I've never had problem with my other x86_64 PC all running Arch Linux, this frequently happens till date with Arch Linux ARM when running NetworkManager. The problem is like you are connected to wifi, but you can't ping or use the internet but you can access all the computers on the local network, and even use remote desktop sharing software. There is a high chance that something went wrong while your ping or your browser tries to resolve the host. I can think of 3 solutions: Solution 1 I believe this is a problem on the thousands of the Raspberry Pi systems running Archlinux ARM and using NetworkManger. In my case /etc/resolv.conf was a broken symlink to ../run/systemd/resolve/stub-resolv.conf . NetworkManager can't populate the symlink, and the /etc/resolv.conf is empty. We have to: Remove the broken symlink: # rm /etc/resolv.conf Create an /etc/NetworkManager/conf.d/dns.conf file with the contents: [main]dns=nonemain.systemd-resolved=false Restart NetworkManager: sudo systemctl restart NetworkManager This should fix the issue, if not follow Solution 2. Solution 2 In case the above didn't fix the issue for you, you can temporarily populate /etc/resolv.conf by: sudo systemctl restart systemd-resolved && sudo systemctl stop systemd-resolved The reason this works is because probably something is messing up the /etc/resolv.conf file. The above command should overwrite the contents, but again, you should look at what causing the issue. Solution 3 If you can't get your /etc/resolv.conf back, just create a new /etc/resolv.conf (delete if an empty old one or symbolic link exists) and paste the code: search domain.namenameserver 8.8.8.8nameserver 1.1.1.1nameserver 1.0.0.1 Note, in the first line, you can also use your router's IP address, for example ( nameserver 192.168.43.1 in my case) which will make other systems pingable on the same network. It's not a good idea to generate resolv like this, but I had a bad time with the NetworkManager's auto-generated resolv. Systemd-resolvd also generates wrong ones, even on my PC. A bit weird, here I am using google's primary dns and cloudflare's primary dns, you can use 8.8.8.8 with 8.8.4.4 or 1.1.1.1 with 1.0.0.1. Although that step works, but you may want to stop NetworkManager from overwriting the file whenever it restarts: Add this entry to /etc/NetworkManager/NetworkManager.conf [main]dns=nonesystemd-resolved=false They worked for my installations on Raspberry Pi 3 model B. Hope this will work for you, too.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/518266", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274717/" ] }
518,318
I'm on a second-hand MacBook Pro from late 2013 (Mojave 10.14.3) and when I type arch on the Terminal, I get back i386 . Shouldn't it be a x86_64 ? Did the seller misrepresent the item? Please see the screenshot below of 'About this Mac' .
According to this SO answer , arch distinguishes between PowerPC ( ppc ) and Intel ( i386 ), not between 32- and 64-bit kernels on x86. So in this context, i386 means an x86 CPU. Check the output of uname -m to find out your machine type. (On Linux, arch is the equivalent of uname -m .) See also this Ask Different Q&A .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/518318", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55703/" ] }
518,424
I have copied a folder using rsync including symlinks, hard links, permissions, deleting files on destination et cetera. They should be pretty identical. One folder is on a USB drive and the other on a local disk. If I run: du -bls on both folders, the size comes up as slightly different. My du supports --apparent-size and it is applied by -s and -l should count the content of the hard links. How can this difference be explained and how do I get the actual total? Both file systems are ext4, the only difference is that the USB drive is encrypted. EDIT: I digged down to find the folders that were actually different, I found one and the content is not special (no block device, no pipes, no hard or symlinks, no zero bytes files), the peculiarity may be having several small files within it. The difference is 872830 vs 881022 of this particular folder. I also ran du -blsc in both folders and the result is the same in this case. Some extra details on the commands I used: $ du -Pbsl $LOCALDIR $USBDIR | cut -f1872830881022$ du -Pbslc $LOCALDIR/*[...]868734 total$ du -Pbslc $USBDIR/*[...]868734 total$ ls -la $USBDIR | wc 158 1415 9123$ ls -la $LOCALDIR | wc 158 1415 9123$ diff -sqr --no-dereference $LOCALDIR $USBDIR | grep -v identical[No output and all identical if I remove the grep]
Since you have copied the files using rsync and then compared the two sets of files using diff , and since diff reports no difference, the two sets of files are identical. The size difference can then probably be explained by the sizes of the actual directory nodes within the two directory structures. On some filesystems, the directory is not truncated if a file or subdirectory is deleted, leaving a directory node that is slightly larger than what's actually needed. If you have, at some point, kept many files that were later deleted, this might have left large directory nodes. Example: $ mkdir dir$ ls -ld dirdrwxr-xr-x 2 kk wheel 512 May 11 17:09 dir $ touch dir/file-{1..1000}$ ls -ld dirdrwxr-xr-x 2 kk wheel 20480 May 11 17:09 dir $ rm dir/*$ ls -ld dirdrwxr-xr-x 2 kk wheel 20480 May 11 17:09 dir$ du -h .20.0K ./dir42.0K .$ ls -Rdir./dir: Notice how, even though I deleted the 1000 files I created, the dir directory still uses 20 KB.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/518424", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/157911/" ] }
518,432
I was installing Manjaro GNOME 18.0.4 for my sister. However, when I tried to update all packages using pacman -Syu , the update fails due to two signature errors: dunst package with signature by "Matti Hyttinen <[email protected]>" notification-daemon with signature by "Brett Cornwall <[email protected]>" Interestingly, there wasn't even anything I could have messed up, as this was the first thing I did after installation (and I reinstalled Manjaro, as it could have been a problem in installation). Additionally, it appears that both problematic packages are for notifications. I - of course - already tried to look up the problem, and the solution I found over and over was rm -r /etc/pacman.d/gnupgpacman-key --initpacman-key --populate archlinux manjaropacman-key --refresh-keys as root. But this solution does not work at all in this case. Full error message (Same with dunst ): $ sudo pacman -S notification-daemonresolving dependencies...looking for conflicting packages...Packages (1) notification-daemon-3.20.0-3Total Download Size: 0.05 MiBTotal Installed Size: 0.74 MiB:: Proceed with installation? [Y/n] :: Retrieving packages... notification-daemon... 52.4 KiB 64.7K/s 00:01 [######################] 100%(1/1) checking keys in keyring [######################] 100%(1/1) checking package integrity [######################] 100%error: notification-daemon: signature from "Brett Cornwall <[email protected]>" is unknown trust:: File /var/cache/pacman/pkg/notification-daemon-3.20.0-3-x86_64.pkg.tar.xz is corrupted (invalid or corrupted package (PGP signature)).Do you want to delete it? [Y/n] error: failed to commit transaction (invalid or corrupted package (PGP signature))Errors occurred, no packages were upgraded. Edit: I changed all SigLevel options (4 in total) in /etc/pacman.conf to SigLevel = Never , ran pacman -Syu and changed SigLevel options back. The system is now up-to-date, but the problem is still there.
Solution:

1. open /etc/pacman.conf
2. change all SigLevel entries to Never (comment the old ones out)
3. pacman -Syu
4. change /etc/pacman.conf back
5. rm -r /etc/pacman.d/gnupg
6. pacman-key --init
7. pacman-key --populate archlinux manjaro
8. pacman-key --refresh-keys
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/518432", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/252673/" ] }
518,496
The following command achieves my goal by grepping the BTC price from a specific exchange:

curl -sS https://api.binance.com/api/v1/ticker/price?symbol=BTCUSDT | jq -r '.price'

The output at the moment is 7222.25000000 but I would like to get it as 7222.25
Pass the price through tonumber :

curl -sS 'https://api.binance.com/api/v1/ticker/price?symbol=BTCUSDT' |
  jq -r '.price | tonumber'

This would convert the price from a string to a number, removing the trailing zeros. See the manual for jq .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/518496", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/227199/" ] }
518,568
I accidentally killed my ssh-agent , how do I restart it without having to reconnect ? I tried this but it does not work : $ eval $(ssh-agent -s)Agent pid 8055 Then, I open a new Gnome terminal with CTRL+SHIFT+N from the previous terminal window and type : $ ssh-addCould not open a connection to your authentication agent. But if I open a new Gnome terminal from my first Gnome terminal by typing : $ gnome-terminal & then this new window is able to connect to the ssh-agent . Is it not possible for all my Gnome terminals to "see" the ssh-agent without having to reconnect to the PC/server ?
This doesn't work as you supposed. ssh-agent overwrites the configuration. TO FIX THIS--- Find agent: eval "$(ssh-agent -s)" Agent pid 9546 Kill PID: kill -9 9546 THEN YOU CHECK ssh [email protected] [email protected] It should work now.
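An alternative that avoids killing anything: the agent you started is fine, the other terminals simply never received its environment variables. A common workaround (the file location is arbitrary) is to write the agent environment to a file once and source it from every shell that needs it:

# in one terminal: start the agent and record its environment
ssh-agent -s > ~/.ssh/agent.env
. ~/.ssh/agent.env
ssh-add

# in any other terminal (or from ~/.bashrc)
. ~/.ssh/agent.env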
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/518568", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135038/" ] }
518,571
I recently purchased a new laptop which features an Intel Wireless-AX200 Networking device for Wi-Fi and Bluetooth connectivity. I installed elementary OS 5.0 "Juno" along with the latest stable linux kernel (5.1.1), as I've heard that it should come with support for the aforementioned networking device However, I cannot seem to find any drivers related to the device, either within /lib/firmware or within the 5.1.1 source (looking for anything prepended with "iwlwifi"). Looking through lspci reveals the following devices, none of which appear to be a wireless or bluetooth device: 00:00.0 Host bridge: Intel Corporation 8th Gen Core Processor Host Bridge/DRAM Registers (rev 07)00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) (rev 07)00:02.0 VGA compatible controller: Intel Corporation Device 3e9b00:04.0 Signal processing controller: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor Thermal Subsystem (rev 07)00:12.0 Signal processing controller: Intel Corporation Cannon Lake PCH Thermal Controller (rev 10)00:14.0 USB controller: Intel Corporation Cannon Lake PCH USB 3.1 xHCI Host Controller (rev 10)00:14.2 RAM memory: Intel Corporation Cannon Lake PCH Shared SRAM (rev 10)00:15.0 Serial bus controller [0c80]: Intel Corporation Device a368 (rev 10)00:16.0 Communication controller: Intel Corporation Cannon Lake PCH HECI Controller (rev 10)00:1b.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port 20 (rev f0)00:1d.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port 9 (rev f0)00:1d.4 PCI bridge: Intel Corporation Device a334 (rev f0)00:1e.0 Communication controller: Intel Corporation Device a328 (rev 10)00:1f.0 ISA bridge: Intel Corporation Device a30d (rev 10)00:1f.3 Audio device: Intel Corporation Cannon Lake PCH cAVS (rev 10)00:1f.4 SMBus: Intel Corporation Cannon Lake PCH SMBus Controller (rev 10)00:1f.5 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH SPI Controller (rev 10)01:00.0 VGA compatible controller: NVIDIA Corporation Device 1f10 (rev a1)01:00.1 Audio device: NVIDIA Corporation Device 10f9 (rev a1)01:00.2 USB controller: NVIDIA Corporation Device 1ada (rev a1)01:00.3 Serial bus controller [0c80]: NVIDIA Corporation Device 1adb (rev a1)02:00.0 Network controller: Intel Corporation Device 2723 (rev 1a)03:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM98104:00.0 PCI bridge: Intel Corporation JHL7540 Thunderbolt 3 Bridge [Titan Ridge 2C 2018] (rev 06)05:00.0 PCI bridge: Intel Corporation JHL7540 Thunderbolt 3 Bridge [Titan Ridge 2C 2018] (rev 06)05:01.0 PCI bridge: Intel Corporation JHL7540 Thunderbolt 3 Bridge [Titan Ridge 2C 2018] (rev 06)05:02.0 PCI bridge: Intel Corporation JHL7540 Thunderbolt 3 Bridge [Titan Ridge 2C 2018] (rev 06)06:00.0 System peripheral: Intel Corporation JHL7540 Thunderbolt 3 NHI [Titan Ridge 2C 2018] (rev 06)45:00.0 USB controller: Intel Corporation JHL7540 Thunderbolt 3 USB Controller [Titan Ridge 2C 2018] (rev 06) Has anyone been able to locate these aforementioned drivers within the 5.1 or 5.1.1 source? I've also just emailed [email protected], and can update this post with their response as it comes in. UPDATE I received a response from Intel. 
The driver itself has not made it into the kernel, therefore they suggested using their backport driver (which has now made wi-fi accessible on my laptop [mid-2019 Razer Blade]) https://wireless.wiki.kernel.org/en/users/drivers/iwlwifi/core_release & Here
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/518571", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/352621/" ] }
518,655
How can I achieve cmd >> file1 2>&1 1>>file2 That is, the stdout and stderr should redirect to one file (file1) and only stdout (file2) should redirect to another (both in append mode)?
Problem is that when you redirect your output, it's not available anymore for the next redirect. You can pipe to tee in a subshell to keep the output for the second redirection: ( cmd | tee -a file2 ) >> file1 2>&1 or if you like to see the output in terminal: ( cmd | tee -a file2 ) 2>&1 | tee -a file1 To avoid adding the stderr of the first tee to file1 , you should redirect the stderr of your command to some file descriptor (e.g. 3), and later add this to stdout again: ( 2>&3 cmd | tee -a file2 ) >> file1 3>&1# or( 2>&3 cmd | tee -a file2 ) 3>&1 | tee -a file1 (thanks @fra-san)
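A quick way to sanity-check the file-descriptor-3 variant is a throwaway function standing in for cmd (a sketch; cmd here is purely illustrative):
cmd() { echo out; echo err >&2; }
rm -f file1 file2
( 2>&3 cmd | tee -a file2 ) >> file1 3>&1
cat file1   # contains both "out" and "err" (relative order may vary)
cat file2   # contains only "out"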
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/518655", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/352141/" ] }
518,712
Example: File name: ENSG00000000003 ENSG00000000003 43120.829491094ENSG00000000005 39604.4956791524ENSG00000000419 7645.05624570546ENSG00000000457 2157.49855156382ENSG00000000460 3317.98417717746ENSG00000000938 6327.40515535397 Expected output; ideally, the filename is preceded by a tab: ENSG00000000003ENSG00000000003 43120.829491094ENSG00000000005 39604.4956791524ENSG00000000419 7645.05624570546ENSG00000000457 2157.49855156382ENSG00000000460 3317.98417717746ENSG00000000938 6327.40515535397 I want to do this in loop for my 45000 files together
I would use the standard UNIX editor ( of course! ): for f in ENSG*do printf '1i\n\t%s\n.\nw\nq\n' "$f" | ed -s "$f"done This sends a small script of commands to ed , namely: at line 1, insert ( i ) some text; the text is passed through printf as the filename, preceded by a tab ( \t ) after inserting that text ( . ), save the file to disk ( w ) and quit ( q ) If, indeed, the number of files exceeds the command-line limit, then you could use a find command; adjust the parameters (starting directories, filenames, etc) as needed: find . -name 'ENSG*' -exec sh -c 'printf "1i\n\t%s\\n.\nw\nq\n" "$1" | ed -s "$1" ' findsh {} \; The core solution is the same, but wrapped in what I call a "find shell" -- find executes sh -c ... for each (single) filename that matches; the findsh string is a stub name for $0 and the filename is passed to that shell in place of the {} curly braces. The shell itself then has the filename as parameter $1 , so that's what the printf and ed commands use.
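If ed is not available, a rough equivalent using a temporary file should work too (a sketch; it assumes enough free space and that no matching *.tmp files already exist):
for f in ENSG*; do
    printf '\t%s\n' "$f" | cat - "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done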
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/518712", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/323523/" ] }
518,745
I need to turn a text file that has a single line per file name and output separated by a single space into specific blocks that have lines that are equal to 60 characters in length. Like this: >Directory1/file3 CTTSCCCTTTTTSEEEEECGGGSCEEEEECCCSSBCCCSCCCCCTTTCCCCCCCCSCBCCCCCCCCSCTTSCCCTTTTTSEEEEECGGGSCEEEEECCCSSBCCCSCCCCCTTTCCCCCCCCSCBCCCCCCCCSCTTSCCCTTTTTSEEEEECGGGSCEEEEECCCSSBCCCSCCCCCTTTCCCCCCCCSCBCCCCCCCCS>Directory1/file4 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA...... turn into >Directory1/file3CTTSCCCTTTTTSEEEEECGGGSCEEEEECCCSSBCCCSCCCCCTTTCCCCCCCCSCBCCCCCCCCSCTTSCCCTTTTTSEEEEECGGGSCEEEEECCCSSBCCCSCCCCCTTTCCCCCCCCSCBCCCCCCCCSCTTSCCCTTTTTSEEEEECGGGSCEEEEECCCSSBCCCSCCCCCTTTCCCCCCCCSCBCCCCCCCCS>Directory1/file4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA...... How do I go about this?
Try: $ awk '{print $1; for (i=1;i<=length($2);i=i+60) print substr($2,i,60)}' file>Directory1/file3CTTSCCCTTTTTSEEEEECGGGSCEEEEECCCSSBCCCSCCCCCTTTCCCCCCCCSCBCCCCCCCCSCTTSCCCTTTTTSEEEEECGGGSCEEEEECCCSSBCCCSCCCCCTTTCCCCCCCCSCBCCCCCCCCSCTTSCCCTTTTTSEEEEECGGGSCEEEEECCCSSBCCCSCCCCCTTTCCCCCCCCSCBCCCCCCCCS>Directory1/file4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA How it works: print $1 This prints the first field on the line. for (i=1;i<=length($2);i=i+60) print substr($2,i,60) For the second field on the line, we print 60 characters at a time until we reach the end of the field.
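Roughly the same result can be had with fold, assuming the first field never contains spaces (a sketch):
while read -r name seq; do
    printf '%s\n' "$name"
    printf '%s\n' "$seq" | fold -w 60
done < file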
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/518745", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/169966/" ] }
518,767
I have to move the unique key, which is in the 3rd column of each record, to the first column. Based on this key, each record has a different total number of columns (i.e. number of fields). The file content is 10,,FH,1834,1010 (newline) 11,10,BH,9899,1010 (newline) 21,11,TH,1010,345 (newline) 22,11,DA,34.65 (newline) 23,11,DA,76.89 (newline) 24,11,CC,1010 (newline) 25,11,CC,1011 (newline) 13,10,FT,200.68 (newline) Note: I added (newline) because somehow, when I was pasting the file, all the records came out on the same line. I wrote the awk logic below awk -F',' -v OFS=, '{printf "%s" ,$3 OFS; for(i=1;i<=NF;i++) if(i!=3) printf "%s",$i OFS;printf ORS}' test1.csv and am getting this output ,H,10,,1834,1010 ,H,11,10,9899,1010 ,H,21,11,1010,345KW ,U,22,11,34.65 ,U,23,11,76.89 ,H,24,11,1010 ,H,25,11,1011 ,T,13,10,200.68 The desired output is this FH,10,,1834,1010 (newline)BH,11,10,9899,1010 (newline)TH,21,11,1010,345 (newline)..... ... The issue with my command is that one character of the 3rd field gets replaced with a comma.
Try: $ awk '{print $1; for (i=1;i<=length($2);i=i+60) print substr($2,i,60)}' file>Directory1/file3CTTSCCCTTTTTSEEEEECGGGSCEEEEECCCSSBCCCSCCCCCTTTCCCCCCCCSCBCCCCCCCCSCTTSCCCTTTTTSEEEEECGGGSCEEEEECCCSSBCCCSCCCCCTTTCCCCCCCCSCBCCCCCCCCSCTTSCCCTTTTTSEEEEECGGGSCEEEEECCCSSBCCCSCCCCCTTTCCCCCCCCSCBCCCCCCCCS>Directory1/file4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA How it works: print $1 This prints the first field on the line. for (i=1;i<=length($2);i=i+60) print substr($2,i,60) For the second field on the line, we print 60 characters at a time until we reach the end of the field.
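For the column reordering asked about above (moving field 3 to the front), a minimal awk sketch, assuming plain comma-separated input: it builds the output record in a variable first, so no stray separator ends up at the start or end of the line.
awk 'BEGIN{FS=OFS=","} {out=$3; for (i=1; i<=NF; i++) if (i!=3) out=out OFS $i; print out}' test1.csv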
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/518767", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/352811/" ] }
518,800
I have a directory, say /var/tmp/abc, which has 4 files:
12345-ram-3e3r5-io9490-89adu9.csv
45434-dam-qwe35-to9490-43adu9.csv
11234-cam-yy3r5-ro9490-85adu9.csv
14423-sam-hh3r5-uo9490-869du9.csv
I want to rename all the CSV files (find all the files & rename them) in the shortest possible way (probably a one-liner) to this:
XXXXX-ram-3e3r5-io9490-89adu9.csv
XXXXX-dam-qwe35-to9490-43adu9.csv
XXXXX-cam-yy3r5-ro9490-85adu9.csv
XXXXX-sam-hh3r5-uo9490-869du9.csv
rename -n 's/(\w+)/XXXXX/' *.csv remove the -n when happy.
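If the Perl rename utility is not installed, a rough shell equivalent (a sketch; it assumes every name contains at least one dash, and -n refuses to overwrite an existing target):
for f in *.csv; do
    mv -n -- "$f" "XXXXX-${f#*-}"
done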
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/518800", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/190945/" ] }
518,928
I am trying to execute a script on remote servers, passing the script as the last argument ntrs exec-all-ubuntu --exec `cat << 'EOF' echo "$(pwd)" echo "$foobar"EOF` The problem is that the values in the text are sent as separate arguments, echo is the first arg and the pwd value is a second separate arg, but I want just one argument as a string The arguments end up looking like this: [ '--exec', 'echo', '"$(pwd)"', 'echo', '"$foobar"' ] but I am looking for something literal with newlines: [ '--exec', ' echo "$(pwd)"\n\n echo "$foobar"\n ' ] I also tried using this: ntrs exec-all-ubuntu --exec `read -d << EOF select c1, c2 from foo where c1='something'EOF` but that string is empty
You can simply use a regular string with embedded newlines: ntrs exec-all-ubuntu --exec ' echo "$(pwd)" echo "$foobar"'
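The here-document from the question can also be kept, as long as it is captured with a quoted command substitution so the whole text stays a single argument (a sketch; the quoted 'EOF' keeps $(pwd) and $foobar from being expanded locally):
ntrs exec-all-ubuntu --exec "$(cat <<'EOF'
    echo "$(pwd)"
    echo "$foobar"
EOF
)"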
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/518928", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/113238/" ] }
518,935
Every terminal window I open automatically attaches to a new tmux session. Closing the shell inside such a tmux session should detach the tmux client for the terminal window to close. This can be done by setting this option: set -g detach-on-destroy on However, when I kill the current session from the session overview ( Ctrl + b , w , x , y ), the tmux client also detaches. Instead, I would like it to stay attached so I can select another session from the session overview. The question is, how can I have tmux detach when the session exits because the process it's running (i.e. the shell) exits but stay attached when the session is killed from the session overview?
You can simply use a regular string with embedded newlines: ntrs exec-all-ubuntu --exec ' echo "$(pwd)" echo "$foobar"'
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/518935", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99676/" ] }
518,948
When building a new software that will store a directory of configuration files in /etc , should the directory be named software_name.d or just software_name ? I can't seem to find good style guides regarding this subject. Debian is the primary distribution I have in mind but more general style guides or guidelines are welcome. 'Official' guidance is preferred to personal opinions.
The typical use of a .d directory is to hold multiple config files (typically sharing a common file extension, such as *.conf ) which are then merged or composed to produce a single logical configuration. This mechanism is typically equivalent to one where the multiple config files were concatenated to form a single config file. Applications typically evolved to use .d directories in order to play nicer with Linux software packages (deb/rpm) and package managers. A .d directory is nicer towards the use of packages by distributions, since then packages can simply "drop" a standalone file into a directory (which is a natural operation, as packages manage/contain files), rather than having to edit a shared configuration file to add a stanza to plug in that particular package. (In earlier days of Linux distributions, you'll see packages doing exactly that, editing existing configuration files, through post-install and pre-uninstall scripts.) The use of a configuration directory named /etc/software_name is more common when the application uses multiple configuration files for different purposes (and perhaps even using different file formats, such as JSON, ini files, etc.) In that case, you might want to use a directory to group all the configuration files for that same application, but if you're not expecting other packages (or system administrators) to want to extend the configuration by "dropping" new files into a directory, then you won't use a .d directory. Some systems end up using both actually. For example, yum (the Red Hat package manager) has its own configuration file, but it also has a .d directory for its own repos ( /etc/yum.repos.d .) I believe apt is also similar, so you could look at that one if you're on Debian. You can see that setup makes sense, since you might want to manage adding/removing repositories by "dropping" new files into a .d directory (either from packages you push, or through a configuration management system) without having to modify the existing one (which might be getting updates from the O.S. distribution itself.)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/518948", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/353006/" ] }
519,013
I want to assign the string contained in $value to multiple BASH variables. Actually the example I gave before ( var1=var2=...=$value ) was not reflecting exactly what I wanted.So far, I found this but it only works if $value is an integer: $ let varT=varz=var3=$value How can I do that?
In my opinion you're better off just doing the more readable: var1="$value" var2="$value" var3="$value" var4="$value" var5="$value" var6="$value" var7="$value" var8="$value" var9="$value" var10="$value" But if you want a very short way of accomplishing this then try: declare var{1..10}="$value" Edited: using brace expansions instead of printf and declare instead of eval, which could be dangerous depending on what's in $value. EDIT1: You could still use brace expansions in the new case: declare var{T,z,3}="$value" It's safer than the printf approach in the comments because it can handle spaces in $value.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/519013", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135038/" ] }
519,040
I have a Debian 9 system running. It has an SSD connected as well as a Fibre Channel link to SAN storage. As far as I can see, both are visible as /dev/sdX devices. How can I find out which device is the local disk and which is the SAN storage? Where is the storage configured in the system?
More convenient way is to use lsscsi utility. From documentation about FC: For FC devices (logical units), the '--transport' option will show the port name and the port identifier instead of the SCSI INQUIRY "strings". For example: $ lsscsi -g[3:0:0:0] enclosu HP A6255A HP04 - /dev/sg3[3:0:1:0] disk HP 36.4G ST336753FC HP00 /dev/sdd /dev/sg4[3:0:2:0] disk HP 36.4G ST336753FC HP00 /dev/sde /dev/sg5$ lsscsi -g --transport[3:0:0:0] enclosu fc:0x50060b00002e48a3,0x0b109b - /dev/sg3[3:0:1:0] disk fc:0x21000004cf97de68,0x0b109f /dev/sdd /dev/sg4[3:0:2:0] disk fc:0x21000004cf97e385,0x0b10a3 /dev/sde /dev/sg5 lsscsi uses sysfs (from Introduction section of documentation): The lsscsi command scans the sysfs pseudo file system that was introduced in the 2.6 Linux kernel series. Since most users have permissions to read sysfs (usually mounted at /sys ) then meta information can be found on some or all SCSI devices without a user needing elevated permissions to access special files (e.g. /dev/sda ). The lsscsi command can also show the relationship between a device's primary node name, its SCSI generic (sg) node name and its kernel name.
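Without lsscsi, sysfs alone often gives the answer (a sketch): a LUN reached over Fibre Channel resolves through an fc_host/rport path, while a local SATA or NVMe disk resolves through an ata or nvme controller.
for d in /sys/block/sd*; do
    printf '%s -> %s\n' "${d##*/}" "$(readlink -f "$d")"
done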
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/519040", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/215038/" ] }
519,203
Is it possible to sort between two strings in a large file ? e.g. Current file is as: 0cf Front Brake 0d0 Rear Brake 0ce Handle BarsHUT 03 VR Controls 009 Vest 001 Belt 002 Body Suit 020 Stereo Enable 003 Flexor 007 Hand Tracker 004 Glove 006 Head Mounted Display 008 Oculometer 00a Animatronic Device 000 Unidentified 021 Display Enable 005 Head TrackerHUT 04 Sport Controls 000 Unidentified 002 Golf Club 001 Baseball Bat And the desired output is as: 0ce Handle Bars 0cf Front Brake 0d0 Rear BrakeHUT 03 VR Controls 000 Unidentified 001 Belt 002 Body Suit 003 Flexor 004 Glove 005 Head Tracker 006 Head Mounted Display 007 Hand Tracker 008 Oculometer 009 Vest 00a Animatronic Device 020 Stereo Enable 021 Display EnableHUT 04 Sport Controls 000 Unidentified 001 Baseball Bat 002 Golf Club Here, Section HUT 03 VR Controls and HUT 04 Sports Controls is sorted out. In given file, Section headers starts with non-space characters while section content always begins with space or tab. Since this file have 100+ sections then it will not be feasible to hard-code section name in script/command
In Python: #!/usr/bin/python3with open("file.txt", "r") as ins: lines = [] for line in ins: if line.startswith((" ", "\t")): lines.append(line) else: lines.sort() print(*lines, end = "", sep = "") print(line, end = "") lines = [] lines.sort() print(*lines, end = "", sep = "") This sorts all the sections (separately), not only those between two specific lines.
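The same section-by-section sort can be written with GNU awk, relying on gawk's asort() (a sketch):
gawk '
    function flush(   n, i) { n = asort(buf); for (i = 1; i <= n; i++) print buf[i]; delete buf }
    /^[ \t]/ { buf[length(buf) + 1] = $0; next }
             { flush(); print }
    END      { flush() }
' file.txt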
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/519203", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4843/" ] }
519,310
I made a bash script temp.sh with the following content: age=0;((age++)); When I run it as a normal user, it runs fine. But when I run it as root I get an error: ./temp.sh: 4: ./temp.sh: age++: not found Why is that?
In the absence of a hashbang, /bin/sh is likely being used. Some POSIX shells do support the ++ and -- operators, and ((...)) for arithmetic evaluations, but are not required to. Since you have not included a hashbang in your example I will assume you are not using one and therefore your script is likely running in a POSIX shell that does not support said operator. Such a shell would interpret ((age++)) as the age++ command being run inside two nested sub-shells. When you run it as a "normal" user it is likely being interpreted by bash or another shell that does support said operator and ((...)) . Related: Which shell interpreter runs a script with no shebang? To fix this you can add a hashbang to your script: #!/bin/bashage=0((age++)) Note: You do not need to terminate lines with ; in bash/shell. To make your script portable to all POSIX shells you can use the following syntax: age=$((age + 1))age=$((age += 1))
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/519310", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8563/" ] }
519,315
I want to use printf to print a variable. It might be possible that this variable contains a % percent sign. Minimal example: $ TEST="contains % percent"$ echo "${TEST}"contains % percent$ printf "${TEST}\n"bash: printf: `p': invalid format charactercontains $ ( echo provides the desired output.)
Use printf in its normal form: printf '%s\n' "${TEST}" From man printf : SYNOPSISprintf FORMAT [ARGUMENT]... You should never pass a variable to the FORMAT string as it may lead to errors and security vulnerabilities. Btw: if you want to have % sign as part of the FORMAT, you need to enter %% , e.g.: $ printf '%d%%\n' 100100%
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/519315", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/296048/" ] }
519,321
On a new Debian system I installed JRE using sudo apt-get install default-jre Then I downloaded a jar-file and tried running it: java -jar server.jar The file threw an exception: java.lang.unsupportedclassversionerror unsupported major.minor version 52.0 So I started looking for a solution and followed a user's instruction:
sudo rm /usr/bin/java
sudo ln -s /Library/Internet\ Plug-Ins/JavaAppletPlugin.plugin/Contents/Home/bin/java /usr/bin
Afterwards I realized that I don't even have a directory /Library and therefore this link isn't pointing anywhere. Now I can't even run java anymore:
java -jar server.jar
-bash: /usr/bin/java: No such file or directory
I already tried uninstalling JRE and reinstalling it:
apt-get remove default-jre
apt-get update
apt-get install default-jre
Which didn't change anything though. Java seems to be installed:
find /usr -name java
/usr/share/bash-completion/completions/java
/usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java
/usr/lib/jvm/java-7-openjdk-amd64/bin/java
To restore your /usr/bin/java link, you should run sudo update-java-alternatives -a If you were using Debian 9 (as mentioned initially), you shouldn’t have run into these issues, since OpenJDK 8 is the default there and OpenJDK 7 isn’t even available. To fix things so that you can run version 52 classes ( i.e. Java 8 classes), install OpenJDK 8: sudo apt install openjdk-8-jre On Debian 8 you can install OpenJDK 8 from backports: echo deb http://archive.debian.org/debian jessie-backports main | sudo tee /etc/apt/sources.list.d/jessie-backports.listecho 'Acquire::Check-Valid-Until "false";' | sudo tee -a /etc/apt/apt.confsudo apt updatesudo apt -t jessie-backports install openjdk-8-jre (See Failed to fetch jessie backports repository for details.) You’ll then need to specifically choose OpenJDK 8 as the default: sudo update-java-alternatives -s java-1.8.0-openjdk-amd64 (To see the possible values, run /usr/sbin/update-java-alternatives -l .)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/519321", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/353322/" ] }
519,338
I wrote the code:
// a.c
#include <stdlib.h>
int main () { system("/bin/sh"); return 0;}
compiled it with the command: gcc a.c -o a.out and added the setuid bit on it:
sudo chown root.root a.out
sudo chmod 4755 a.out
On Ubuntu 14.04, when I ran it as a general user, I got root privileges, but on Ubuntu 16.04 I still got the current user's shell. Why is it different?
What changed is that /bin/sh either became bash or stayed dash which got an additional flag -p mimicking bash's behaviour. Bash requires the -p flag to not drop setuid privilege as explained in its man page : If the shell is started with the effective user (group) id not equal to the real user (group) id, and the -p option is not supplied, no startup files are read, shell functions are not inherited from the environment, the SHELLOPTS, BASHOPTS, CDPATH, and GLOBIGNORE variables, if they appear in the environment, are ignored, and the effective user id is set to the real user id . If the -p option is supplied at invocation, the startup behavior is the same, but the effective user id is not reset. Before, dash didn't care about this and allowed setuid execution (by doing nothing to prevent it). But Ubuntu 16.04's dash 's manpage has an additional option described, similar to bash : -p priv Do not attempt to reset effective uid if it does not match uid. This is not set by default to help avoid incorrect usage by setuid root programs via system(3) or popen(3). This option didn't exist in upstream (which might not be have been reactive to a proposed patch * ) nor Debian 9 but is present in Debian buster which got the patch since 2018. NOTE: as explained by Stéphane Chazelas, it's too late to invoke "/bin/sh -p" in system() because system() runs anything given through /bin/sh and so the setuid is already dropped. derobert 's answer explains how to handle this, in the code before system() . * more details on history here and there .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/519338", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/353342/" ] }
519,469
I'm trying to recursively search a string with grep but I get this: $ grep -r "stuff" *grep: unrecognized option '---corporate-discount.csv'Usage: grep [OPTION]... PATTERN [FILE]...Try 'grep --help' for more information. How can I prevent Bash from passing files starting with - as argument?
First, note that the interpretation of arguments starting with dashes is up to the program being started, grep or other. The shell has no direct way to control it. Assuming you want to process such files (and not ignore them completely), grep , along with most programs, recognizes -- as indicating the end of options, so grep -r -e "stuff" -- * will do what you want. The -e is there in case stuff starts with a - as well. Alternatively, you can also use: grep -r -e "stuff" ./* That latter one would also avoid the problem if there was a file called - in the current directory. Even after the -- separator, grep interprets - as meaning stdin, while ./- is the file called - in the current directory.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/519469", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/353460/" ] }
519,471
The output of the command below is weird to me. Why does it not give me back element 5? $ echo '0,1,2,3,4,5' | while read -d, i; do echo $i; done01234 I would expect '5' to be returned as well. Running GNU bash, version 4.2.46(2)-release (x86_64-redhat-linux-gnu) . Adding a comma works, but my input data does not have a comma. Am I missing something?
With read , -d is used to terminate the input lines (i.e. not to separate input lines). Your last "line" contains no terminator, so read returns false on EOF and the loop exits (even though the final value was read). echo '0,1,2,3,4,5' | { while read -d, i; do echo "$i"; done; echo "last value=$i"; } (Even with -d , read also uses $IFS , absorbing whitespace including the trailing \n on the final value that would appear using other methods such as readarray ) The Bash FAQ discusses this, and how to handle various similar cases: Bash Pitfalls #47 IFS=, read [...] BashFAQ 001 How can I read a file [...] line-by-line BashFAQ 005 How can I use array variables?
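A common workaround (a sketch) is to let the loop body run one more time when read hits end-of-file but has still filled the variable:
echo '0,1,2,3,4,5' | while read -d, i || [ -n "$i" ]; do echo "$i"; done
This prints 0 through 5, because the final read fails (no delimiter before EOF) yet still leaves 5 in $i.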
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/519471", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5352/" ] }
519,567
In the case below, the report command must always be executed but I need to get an exit status 1 if the test command fails: test;reportecho $?0 How can I do it in a single bash line without creating a shell script?
Save and reuse $? . test; ret=$?; report; exit $ret If you have multiple test commands and you want to run them all, but keep track of whether one has failed, you can use bash's ERR trap. failures=0trap 'failures=$((failures+1))' ERRtest1test2if ((failures == 0)); then echo "Success"else echo "$failures failures" exit 1fi
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/519567", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/170163/" ] }
519,601
I'm considering running an X11 client on an existing server, and then having a thin 'client' (X11 server) on a Raspberry Pi or similar, as a development environment / general computing. However, occasionally I need to plug in a USB scanner (or flash drive, etc.). Can they be shared cleanly over X, or would the only way be something like saned (or samba, etc.)? I can ssh from client to server, but I can't/don't want to allow the server to access services like that on the 'client'.
Save and reuse $? . test; ret=$?; report; exit $ret If you have multiple test commands and you want to run them all, but keep track of whether one has failed, you can use bash's ERR trap. failures=0trap 'failures=$((failures+1))' ERRtest1test2if ((failures == 0)); then echo "Success"else echo "$failures failures" exit 1fi
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/519601", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62835/" ] }
519,613
The server is listening on port '8765' and requires a SSH key for authentication. I can mount the remote directory using this command: sshfs -o idmap=user,port=8765 stephen@server:/export/usb2T /mnt/usb2T The server recognizes my SSH public key. I have seen that as an fstab entry for the standard SSH port this will be : stephen@server:/export/inbox /mnt/inbox fuse.sshfs defaults,_netdev 0 0 But I will need to add the server's listening port and the client user's SSH public key. How do I do that?
The entry in /etc/fstab you're looking for is: Using the ,port=PORTNUMBER and ,IdentityFile=/root.ssh/id_rsa options: sshfs#USER@IP-ADDRESS:/export/inbox /mnt/inbox fuse.sshfs delay_connect,_netdev,user,IdentityFile=/root.ssh/id_rsa,idmap=user,allow_other,default_permissions,port=PORTNUMBER,uid=0,gid=0,rw,nosuid,nodev 0 0 Mounting directory through ssh with SSHFS from remote By setting up SSH keys (as described above), you don't have to type your password when mounting. This will make mounting much simpler and can even be done using a script or automatically when you login to the local computer. As with SSH, all traffic between the local computer and remote computer is encrypted. If you are the admin on the local computer, you can configure the system to do this when the computer boots up so it will always be mounted. You need to modify /etc/fstab by adding a line like this (all on one line, though): You'll also need to setup SSH keys to do this so you don't have to type in a password. Consult the SSHFS man page for an explanation of the options.If you find that the fstab line above isn't working correctly (causing an error message at boot), you can modify it to this (note the addition of noauto): sshfs#USER@IP-ADDRESS: /export/inbox fuse defaults,user,noauto,uid=einstein,gid=einstein,allow_other,IdentityFile=/home/alfred/.ssh/id_dsa 0 0 sshfs#USER@IP-ADDRESS: /export/inbox fuse defaults,user,uid=USER,gid=USER,allow_other,IdentityFile=/home/USER/.ssh/id_dsa 0 0 Mead's Guide to the Secure Shell (SSH) How to mount sshfs remote directory in fstab Automount sshfs using fstab without mount -a SSHFS accepts many command-line options that you may want to check out. For example, if the SSH server on the remote computer was running on port 12345 instead of port 22, you would do this: sshfs USER@IP-ADDRESS: /export/inbox -p PORTNUMBER Here are the command-line options: SSHFS options: -p PORT equivalent to '-o port=PORT' -Cequivalent to '-o compression=yes'-F ssh_configfile specifies alternative ssh configuration file -1equivalent to '-o ssh_protocol=1'-o reconnect reconnect to server -o delay_connect delay connection to server -o sshfs_sync synchronous writes -o no_readahead synchronous reads (no speculative readahead) -o sshfs_debug print some debugging information -o cache=BOOL enable caching {yes,no} (default: yes) -o cache_timeout=N sets timeout for caches in seconds (default: 20) -o cache_X_timeout=N sets timeout for {stat,dir,link} cache -o workaround=LIST colon separated list of workarounds none no workarounds enabled all all workarounds enabled [no]rename fix renaming to existing file (default: off) [no]nodelaysrv set nodelay tcp flag in ssh (default: off) [no]truncate fix truncate for old servers (default: off) [no]buflimit fix buffer fillup bug in server (default: on)-o idmap=TYPE user/group ID mapping, possible types are: none no translation of the ID space (default) user only translate UID of connecting user file translate UIDs/GIDs based upon the contents of uidfile and gidfile -o uidfile=FILE file containing username:uid mappings for idmap=file -o gidfile=FILE file containing groupname:gid mappings for idmap=file -o nomap=TYPE with idmap=file, how to handle missing mappings ignore don't do any re-mapping error return an error (default) -o ssh_command=CMD execute CMD instead of 'ssh' -o ssh_protocol=N ssh protocol to use (default: 2) -o sftp_server=SERV path to sftp server or subsystem (default: sftp) -o directport=PORT directly connect to PORT bypassing ssh -o slave communicate over 
stdin and stdout bypassing network -o transform_symlinks transform absolute symlinks to relative -o follow_symlinks follow symlinks on the server -o no_check_root don't check for existence of 'dir' on server -o password_stdin read password from stdin (only for pam_mount!) -o SSHOPT=VAL ssh options (see man ssh_config) man/1/sshfs
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/519613", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/251630/" ] }
519,690
An example of my file looks like this: 201012,720,201011,119,201710,16 Output I want: 201012,720201011,119201710,16
Using a Sed loop: sed -e 's/,/\n/2' -e 'P;D' file Ex. $ echo '201012,720,201011,119,201710,16' | sed -e 's/,/\n/2' -e 'P;D'201012,720201011,119201710,16 This replaces the second , with \n , then prints and deletes up the \n , repeatedly until the substitution is no longer successful. BSD doesn't understand newline as \n in right side of s commands, this is a workaround for ksh,bash,zsh shells: sed -e :a -e $'s/,/\\\n/2' -e 'P;D' file Or, a general solution for (old) seds: sed ':as/,/\/2P;D' file
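An awk take on the same split, assuming every line holds an even number of fields (a sketch):
awk -F, '{ for (i = 1; i < NF; i += 2) print $i "," $(i + 1) }' file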
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/519690", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/353642/" ] }
519,734
I have a folder with lots of images named " clip01234-randomlongstring.png ", where 01234 is a random five digit number. I also have an array " clipnumbers " with a list of integers. Now I want to create a list "files" containing all file names which match the numbers in the " clipnumbers " array. How would I do that? The resulting output should be something I can process in the same way as my current list (of all files) I create with: files=($(printf "%s\n" *.* | sort -V | tr '\n' ' '))
In a loop: shopt -s nullglobfiles=()for number in "${clipnumbers[@]}"; do printf -v pattern 'clip%s-*.png' "$number" files+=( $pattern )done This loops over the numbers and creates a filename globbing pattern for each. The pattern is expanded to add the filenames matching it to the array files . The nullglob shell option makes non-matching patterns expand to nothing (as opposed to remain unexpanded). Using find (for recursion into all directories beneath the current directory, and for performing some action on each found file): patterns=()for number in "${clipnumbers[@]}"; do printf -v pattern 'clip%s-*.png' "$number" patterns+=( -o -name "$pattern" )donefind . -type f \( "${patterns[@]:1}" \) -exec action-to-perform-on-files {} \; The :1 removes the initial -o from the list in patterns in the expansion. This combines searching for the files with performing some action on them. It would fail if your clipnumbers array contains many thousands of numbers (the argument list would become too long).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/519734", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/300636/" ] }
519,758
Update 2019-05-21 19:37 EST : My motherboard is on the latest BIOS available, released 2019-03-06 , but still has the install problems described below. Update : I burned the Arch ISO to a CD then tried booting from it, both in UEFI and legacy. Same type of result: Original question : I used dd to put this Arch ISO (Version 2019.05.02) on a USB stick, then attempted to boot from it on my desktop computer. When the Arch menu comes up, I choose "Boot Arch Linux (x86_64)." But what follows is a bunch of error messages, then the process just hangs there doing nothing. Here's a pic: The messages start off as "AMD-Vi: Completion-Wait loop timed out" The messages include "kernel panic." My motherboard is an MSI B450 Tomahawk with a Ryzen 5 2600 CPU. I've tried booting via UEFI and legacy with the same result. How do I install Arch Linux?
Linux Kernel With MSI B450 The kernel fails in this case because of its IOMMU feature support; you can use a specific kernel adjustment (parameter) to fix your booting issue. This video demonstrates how to edit/apply the kernel parameters; here are some possible solutions, try the different proposed parameters and choose the one that best matches your needs. You may also turn off SVE in the BIOS. Possible Solutions: Kernel Parameters iommu=off iommu=off and amd_iommu=fullflush amd_iommu=off mem_encrypt=off amdgpu.runpm=0 pci=noats Involved Technology Definition Kernel Parameters: (aka Boot Options) Kernel command line parameters are parameters that you pass on to the kernel during the boot process to adjust its features or capabilities. IOMMU: a memory management unit that basically increases performance and security; additional details can be found here IOMMU State: on, off or fullflush (details in the linked article) mem_encrypt: adds support for Secure Memory Encryption (SME) and defines the memory encryption mask that will be used in subsequent patches to mark pages as encrypted. amdgpu.runpm=0: disables graphical power management in the Linux kernel (it will then be handled at the hardware/firmware/BIOS level) pci=noats: disables PCI Address Translation Services Note After the install you will need to be very careful about kernel updates Advanced technical users may build their own kernel with this or that patch Arch Boot Disk: To apply the parameters to the boot disk, on the boot menu, push "tab" to edit the boot command, hit space (to add a space), then write the parameter, for instance "iommu=off" without quotes, then hit enter to boot Sources: launchpad , freedesktop , freedesktop , freedesktop , askubuntu , wikipedia , artofcode , archlinux , linuxfoundation , fclose , youtube , youtube
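To keep the working parameter after the installation, assuming GRUB ends up as the bootloader (a sketch; iommu=off stands for whichever parameter you settled on), append it to the default command line and regenerate the configuration:
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=off"
# then, as root:
grub-mkconfig -o /boot/grub/grub.cfg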
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/519758", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/148559/" ] }
519,782
I start off in an empty directory.
$ touch aFile
$ ls
aFile
Then I ls two arguments, one of which isn't in this directory. I redirect both output streams to a file named output . I use >> in order to avoid writing simultaneously.
$ ls aFile not_exist >>output 2>>output
$ cat output
ls: cannot access 'not_exist': No such file or directory
aFile
This seems to work. Are there any dangers to this approach?
No, it's not just as safe as the standard >>bar 2>&1 . When you're writing foo >>bar 2>>bar you're opening the bar file twice with O_APPEND , creating two completely independent file objects[1], each with its own state (pointer, open modes, etc). This is very much unlike 2>&1 which is just calling the dup(2) system call, and makes the stderr and stdout interchangeable aliases for the same file object. Now, there's a problem with that: O_APPEND may lead to corrupted files on NFS filesystems if more than one process appends data to a file at once. This is because NFS does not support appending to a file, so the client kernel has to simulate it, which can't be done without a race condition. You usually can count on the probability of the file like bar in foo >>bar 2>&1 being written to at the same time from two separate places being quite low. But by your >>bar 2>>bar you just increased it by a dozen orders of magnitude, without any reason. [1] "Open File Descriptions" in POSIX lingo.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/519782", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/353638/" ] }
519,789
The /usr/bin/printf util argument list length is limited to the shell's maximum command line length, ( i.e. getconf ARG_MAX , on my system that'd be 2097152 ); example: # try using a list that's way too long/usr/bin/printf '%s\n' $(seq $(( $(getconf ARG_MAX) * 2 ))) | tail -1 Output: bash: /usr/bin/printf: Argument list too long Today I'm informed that shell builtin printf s don't have that limit; test: printf '%s\n' $(seq $(( $(getconf ARG_MAX) * 2 ))) | tail -1 Output: 4194304 Questions: A skim of man bash dash doesn't seem to say much about this advantage of builtin printf . Where is it documented? Do builtin printf s ( e.g. bash ) have an argument list maximum length in chars, and if so, what is that length?
It's not really the manual's job to advocate the use of any particular utility. It should primarily describe the available built-in utilities. The advantages of using a built-in utility over an external one is primarily speed and the availability of extended features ( printf in bash can, for example, write directly into a variable with -v varname , which no external printf could ever do). Executing external utilities is slow in comparison to executing a built-in utility, especially if done often in e.g. a loop, and, as you have noticed, they also allow for longer argument lists (this is not something that only the built-in printf allows, but all built-in utilities). The length of the argument list to the built-in printf utility in bash is limited by the resource restrictions on the bash process itself. On some systems, this may even mean that you could use most of the available RAM to construct its command line argument list. The documentation from where you would find these various bits of information is The bash source code , where you will see that the argument list for printf is a dynamically allocated linked list, and also that it's not using execve() to run printf (which is what limits the length of an argument list when running an external utility). An example of a shell where printf in not a built-in utility is the ksh shell of OpenBSD. The utility can also be disabled in bash using enable -n printf .
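To see, or change, which printf a given bash session will use (a sketch):
type -a printf      # lists the builtin first, then /usr/bin/printf
enable -n printf    # disable the builtin in the current shell
command -v printf   # now resolves to /usr/bin/printf
enable printf       # restore the builtin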
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/519789", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/165517/" ] }
519,914
VAR=a,b,c,d# VAR=$(echo $VAR|tr -d '\n')echo "[$VAR]"readarray -td, ARR<<< "$VAR"declare -p ARR Result: [a,b,c,d]declare -a ARR=([0]="a" [1]="b" [2]="c" [3]=$'d\n') How can I tell readarray not to add the final newline \n ? What is the meaning of the latest $ symbol?
The implicit trailing new-line character is not added by the readarray builtin, but by the here-string ( <<< ) of bash , see Why does a bash here-string add a trailing newline char? . You can get rid of that by printing the string without the new-line using printf and read it over a process-substitution technique < <() readarray -td, ARR < <(printf '%s' "$VAR")declare -p ARR would properly generate now declare -a ARR=([0]="a" [1]="b" [2]="c" [3]="d")
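If you prefer to keep the here-string, another sketch (it needs bash 4.3 or newer for negative array indices) simply trims the last element afterwards:
readarray -td, ARR <<< "$VAR"
ARR[-1]=${ARR[-1]%$'\n'}    # strip the newline the here-string appended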
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/519914", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/199331/" ] }
519,920
How do I get the list of days on which no files were received? I have used the command below to get the file count along with the date: find . -maxdepth 1 -type f -printf '%TY-%Tm-%Td\n' | awk '{array[$0]+=1}END{ for(val in array) print val" "array[val] }'|sort Output:
2019-05-09 1
2019-05-10 3
2019-05-13 2
2019-05-14 5
2019-05-15 1
2019-05-16 2
2019-05-17 1
2019-05-20 2
I need the missing days to show up with a count of 0 as well. For example: 2019-05-12 0
The implicit trailing new-line character is not added by the readarray builtin, but by the here-string ( <<< ) of bash , see Why does a bash here-string add a trailing newline char? . You can get rid of that by printing the string without the new-line using printf and read it over a process-substitution technique < <() readarray -td, ARR < <(printf '%s' "$VAR")declare -p ARR would properly generate now declare -a ARR=([0]="a" [1]="b" [2]="c" [3]="d")
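For the question above about days with no files, a rough sketch that fills in the gaps (it assumes GNU date, and counts.txt stands for the date/count output already produced):
start=2019-05-09 end=2019-05-20
d=$start
while [ "$d" != "$(date -I -d "$end + 1 day")" ]; do
    grep "^$d " counts.txt || echo "$d 0"
    d=$(date -I -d "$d + 1 day")
done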
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/519920", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/353883/" ] }
519,921
Using the "--parents" option works, but it copies the complete directory structure, whereas I need to preserve only the last directory level: find /tmp/data/ -type f -name "*.txt" -exec cp --parents {} /u01/ABC/ \; Output that I get:
/u01/ABC/tmp/data/a/1.txt
/u01/ABC/tmp/data/b/1.txt
/u01/ABC/tmp/data/c/1.txt
Output that I need:
/u01/ABC/a/1.txt
/u01/ABC/b/1.txt
/u01/ABC/c/1.txt
The implicit trailing new-line character is not added by the readarray builtin, but by the here-string ( <<< ) of bash , see Why does a bash here-string add a trailing newline char? . You can get rid of that by printing the string without the new-line using printf and read it over a process-substitution technique < <() readarray -td, ARR < <(printf '%s' "$VAR")declare -p ARR would properly generate now declare -a ARR=([0]="a" [1]="b" [2]="c" [3]="d")
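For the question above (keeping only the immediate parent directory when copying), a minimal sketch:
find /tmp/data/ -type f -name '*.txt' -exec sh -c '
    for f in "$@"; do
        parent=${f%/*}; parent=${parent##*/}
        mkdir -p "/u01/ABC/$parent" && cp -- "$f" "/u01/ABC/$parent/"
    done' sh {} +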
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/519921", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/353852/" ] }
520,026
If I run FOO=bar docker run -it -e FOO=$FOO debian env That environment variable is not set in the command output for the env command. PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binHOSTNAME=03f3b59c0aabTERM=xtermFOO=HOME=/root But if I run FOO=bar; docker run -i -t --rm -e FOO=$FOO debian:stable-slim envPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binHOSTNAME=672bfdcde93cTERM=xtermFOO=barHOME=/root Then the variable is available from the container and also exported into my current shell environment. echo $FOObar I expect this behavior with export FOO=bar but why does that happen with ; too?
No, FOO=bar; does not export the variable into my environment A var is set in the (present) environment only if it was previously exported: $ export foo$ foo=bar$ env | grep foofoo=bar A variable is set in the environment of a command when it is placed before the command. Like foo=bar command . And it only exists while the command runs. $ foo=bar bash -c 'echo "foo is = $foo"'foo is = bar The var is not set for the command line (in the present shell): $ foo=bar bash -c echo\ $foo Above, the value of $foo is replaced with nothing by the present running shell , thus: no output. Your command: $ FOO=bar docker run -it -e FOO=$FOO debian env is converted to the actual string: $ FOO=bar docker run -it -e FOO= debian env by the present running shell . If, instead, you set the variable (in the present running shell) with foo=bar before running the command, the line will be converted to: $ FOO=bar; docker run -it -e FOO=bar debian env A variable set to the environment of a command is erased when the command returns: $ foo=bar bash -c 'echo'; echo "foo was erased: \"$foo\"" Except when the command is a builtin in some conditions/shells: $ ksh -c 'foo=bar typeset baz=quuz; echo $foo'bar
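That also explains a one-line variant which does work (a sketch): with -e FOO and no value, docker forwards FOO from its own environment, and the FOO=bar prefix has already put it there.
FOO=bar docker run --rm -e FOO debian env | grep FOO
# FOO=bar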
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/520026", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/31762/" ] }
520,031
I have a file where the first column is the key. The rows can have up to 2800 delimiters. I need to pivot the data from rows to columns. Below is the sample input and required output. Source File 123,A,B,,,,AC,DF,,,,,,,,,,,,n 567,A,B,,C,D,,,,,,,,, 789,C,B Output 123,A123,B123,123,..123,AC123,DF567,A567,B567,C567,D567,789,C89,B Please advise.
No, FOO=bar; does not export the variable into my environment A var is set in the (present) environment only if it was previously exported: $ export foo$ foo=bar$ env | grep foofoo=bar A variable is set in the environment of a command when it is placed before the command. Like foo=bar command . And it only exists while the command runs. $ foo=bar bash -c 'echo "foo is = $foo"'foo is = bar The var is not set for the command line (in the present shell): $ foo=bar bash -c echo\ $foo Above, the value of $foo is replaced with nothing by the present running shell , thus: no output. Your command: $ FOO=bar docker run -it -e FOO=$FOO debian env is converted to the actual string: $ FOO=bar docker run -it -e FOO= debian env by the present running shell . If, instead, you set the variable (in the present running shell) with foo=bar before running the command, the line will be converted to: $ FOO=bar; docker run -it -e FOO=bar debian env A variable set to the environment of a command is erased when the command returns: $ foo=bar bash -c 'echo'; echo "foo was erased: \"$foo\"" Except when the command is a builtin in some conditions/shells: $ ksh -c 'foo=bar typeset baz=quuz; echo $foo'bar
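For the row-to-column pivot described in the question above, a minimal awk sketch, assuming plain comma-separated rows:
awk -F, '{ for (i = 2; i <= NF; i++) print $1 "," $i }' file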
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/520031", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/219056/" ] }
520,045
When starting up my machine, I am prompted with a terminal asking me to log in, rather than the nice GUI that I am used to. When I log in, I am able to run startx and everything works smoothly. I added the following to my ~/zprofile but it only ran once I was logged in.
if [[ ! $DISPLAY && $XDG_VTNR -eq 1 ]]; then
    startx
fi
How can I get the login screen that I am used to to appear again?
The GUI is loaded by systemd when the init system is systemd, which is the case on Ubuntu. Here is a nice answer about the subject. systemctl get-default lets you see which target is set for startup, either multi-user.target or graphical.target. To enable X at startup time you can use:
sudo systemctl enable graphical.target --force
sudo systemctl set-default graphical.target
And to disable it:
sudo systemctl enable multi-user.target --force
sudo systemctl set-default multi-user.target
Note that /etc/X11/default-display-manager contains the display manager used by default (this file is not required though). Also find here how to set up the default display manager; this is required as well. For a more detailed answer, more information about the setup is required (what desktop are you using, KDE/GNOME, and which display manager, lightdm/sddm, etc.)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/520045", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/353963/" ] }
520,078
I want to recursively look for every *.pdf file in a directory ~/foo whose base name matches the name of the file's parent directory. For instance, suppose that the directory structure ~/foo looks like this foo├── dir1│   ├── dir1.pdf│   └── dir1.txt├── dir2│   ├── dir2.tex│   └── spam│   └── spam.pdf└── dir3 ├── dir3.pdf └── eggs └── eggs.pdf Running my desired command would return ~/foo/dir1/dir1.pdf~/foo/dir2/spam/spam.pdf~/foo/dir3/dir3.pdf~/foo/dir3/eggs/eggs.pdf Is this possible using find or some other core utility? I assume this is doable using the -regex option to find but I'm not sure how to write the correct pattern.
With GNU find : find . -regextype egrep -regex '.*/([^/]+)/\1\.pdf' -regextype egrep use egrep style regex. .*/ match grand parent directires. ([^/]+)/ match parent dir in a group. \1\.pdf use backreference to match file name as parent dir. update One (myself for one) might think that .* is greedy enough, it's unnecessary to exclude / from parent matching: find . -regextype egrep -regex '.*/(.+)/\1\.pdf' Above command won't work well, because it mathches ./a/b/a/b.pdf : .*/ matches ./ (.+)/ matches a/b/ \1.pdf matches a/b.pdf
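If -regextype is not available (non-GNU find), a portable sketch compares each basename with its parent directory name instead:
find ~/foo -type f -name '*.pdf' -exec sh -c '
    for f in "$@"; do
        dir=${f%/*}
        [ "${f##*/}" = "${dir##*/}.pdf" ] && printf "%s\n" "$f"
    done' sh {} +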
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/520078", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92703/" ] }
520,231
I've recently begun supporting Linux installed on devices with built-in nvme ssds. I noticed the device files had an extra number, beyond a number identifying the drive number and the partition number. IDE/SATA/SCSI drives normally only have a drive letter and partition number. For example: /dev/nvme0n1p2 I got to wondering what the n1 part was, and after a bit of searching, it looks like that identifies an nvme 'namespace'. The definitions for it were kind of vague: "An NVMe namespace is a quantity of non-volatile memory (NVM) that can be formatted into logical blocks." So, does this act like a partition that is defined at the hardware controller level, and not in an MBR or GPT partition table? Can a namespace span multiple physical nvme ssd's? E.g. can you create a namespace that pools together storage from multiple ssd's into a single logical namespace, similar to RAID 0? What would you do with an NVME namespace that you can't already achieve using partition tables or LVM or a filesystem that can manage multiple volumes (like ZFS, Btrfs, etc)? Also, why does it seem like the namespace numbering starts at 1 instead of 0? Is that just something to do with how NVME tracks the namespace numbers at a low level (e.g. partitions also start at 1, not 0, because that is how the standard for partition numbers was set, so the Linux kernel just uses whatever the partition number that is stored on disk is - I guess nvme works the same way?)
In NVM Express and related standards, controllers give access to storage divided into one or more namespaces. Namespaces can be created and deleted via the controller, as long as there is room for them (or the underlying storage supports thin provisioning), and multiple controllers can provide access to a shared namespace. How the underlying storage is organised isn’t specified by the standard, as far as I can tell. However typical NVMe SSDs can’t be combined, since they each provide their own storage and controller attached to a PCI Express port, and the access point is the controller, above namespaces — thus a namespace can’t group multiple controllers (multiple controllers can provide access to a shared namespace). It’s better to think of namespaces as something akin to SCSI LUNs as used in enterprise storage (SANs etc.). Namespace numbering starts at 1 because that’s how per-controller namespace identifiers work. Namespaces also have longer, globally-unique identifiers. Namespaces can be manipulated using the nvme command, which provides support for low-level NVMe features including: formatting, which performs a low-level format and allows various features to be used (secure erase, LBA format selection...); attaching and detaching, which allows controllers to be attached to or detached from a namespace (if they support it and the namespace allows it). Attaching and detaching isn’t something you’ll come across in laptop or desktop NVMe drives. You’d use it with NVMe storage bays such as those sold by Dell EMC, which replace the iSCSI SANs of the past. See the NVM Express standards for details (they’re relatively easy to read), and this NVM Express tutorial presentation for a good introduction.
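A few read-only nvme-cli invocations show this structure in practice (a sketch; the device names are illustrative):
nvme list                  # controllers, their namespaces and sizes
nvme id-ctrl /dev/nvme0    # controller details, including how many namespaces it supports
nvme list-ns /dev/nvme0    # namespace identifiers present on that controller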
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/520231", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/58182/" ] }
520,341
I have ssh and openconnect installed but when I proceed to start or stop the ssh service, I get the following error: Failed to start ssh.service: Unit ssh.service not found. Also, when I try sudo apt-get install ssh I get the following: sudo apt-get install sshReading package lists... DoneBuilding dependency tree Reading state information... DoneThe following additional packages will be installed: ncurses-term openssh-server openssh-sftp-server ssh-import-idSuggested packages: ssh-askpass rssh molly-guard monkeysphereThe following NEW packages will be installed: ncurses-term openssh-server openssh-sftp-server ssh ssh-import-id0 upgraded, 5 newly installed, 0 to remove and 193 not upgraded.Need to get 640 kB of archives.After this operation, 5.237 kB of additional disk space will be used.Do you want to continue? [Y/n] Which I find confusing. If I do which ssh , I get: /usr/bin/ssh How can the binary be there if apt-get thinks the package is not installed? Also, when calling ssh <valid-IP-address> , I get the following error: ssh: connect to host port 22: No route to host But if I use openconnect and connect to a VPN, ssh work without problems. What am I missing? I'm running Ubuntu 16.04.
The ssh binary, the SSH client, is provided by the openssh-client package, which is installed on your system. The ssh service runs the SSH server, provided by the openssh-server package, which isn’t installed on your system. The ssh package is a meta-package which installs both the client and the server.
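So, to get the ssh service the error message refers to, installing and enabling the server is enough (a sketch):
sudo apt-get install openssh-server
sudo systemctl enable --now ssh    # the server unit is named "ssh" on Ubuntu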
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/520341", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/345250/" ] }
520,368
I made a user once with the --disabled-login option. I then had to change the password so I could log in as the user and test some stuff. I now want to disable the login again, and I saw this post: what does `adduser --disabled-login` do? So I used sudo passwd user and then set it to ! . However, when I tried to log in I could log in using the password ! . So how do I disable it again? Note that I also tried * as the password.
I think that your confusion stems from the fact you don't understand what ! does. Encrypted passwords are stored in /etc/shadow . For example, aftercreating a new user named new-user and giving it 12345678 passwordwe get this entry: $ sudo cat /etc/shadow(...)new-user:$6$zVbJcpZE$Bqnxr5cDkwjKOE06iAZu7/qIuH9UGXex28TU/aD0osft9DfdPVzcVwq2j410YxoPlZR310.heZyxaQq4iwWy9.:18038:0:99999:7::: You can now switch to new-user by doing su new-user and typing 12345678 as the password. You can disable a password for new-user by prepending it with ! like that: $ sudo cat /etc/shadow(...)new-user:!$6$zVbJcpZE$Bqnxr5cDkwjKOE06iAZu7/qIuH9UGXex28TU/aD0osft9DfdPVzcVwq2j410YxoPlZR310.heZyxaQq4iwWy9.:18038:0:99999:7::: From now on you will not be able to switch to new-user even afterproviding the correct password: $ su new-userPassword:su: Authentication failure Notice though that modifying /etc/shadow manually is very dangerousand not recommended. You can achieve the same with sudo passwd -l new-user . As man passwd says: -l, --lock Lock the password of the named account. This option disables a password by changing it to a value which matches no possible encrypted value (it adds a ´!´ at the beginning of the password). For example: $ sudo passwd -l new-userpasswd: password expiry information changed. However, notice that passwd -l does not disable the account , itonly disables password and that means that user can still log in thesystem using other methods as man passwd explains: Note that this does not disable the account. The user may still be able to login using another authentication token (e.g. an SSH key). To disable the account, administrators should use usermod --expiredate 1 (this set the account's expire date to Jan 2, 1970). Users with a locked password are not allowed to change their password.
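Following that advice, a sketch of disabling the account as a whole rather than only its password (someuser is illustrative):
sudo usermod --lock --expiredate 1 someuser     # lock the password and expire the account
sudo usermod --unlock --expiredate '' someuser  # undo both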
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/520368", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/324368/" ] }
520,373
I want to know how Linux uses the routing table when reaching out to another IP. As I understand it, when we run "ssh 192.168.21.2" the system will create a TCP packet with destination IP 192.168.21.2 and consult the routing table for the next hop. The routing table entries are shown below: Destination Gateway Genmask Flags Metric Ref Use Iface0.0.0.0 192.168.16.1 0.0.0.0 UG 0 0 0 br0169.254.0.0 0.0.0.0 255.255.0.0 U 1005 0 0 br0169.254.0.0 0.0.0.0 255.255.0.0 U 1006 0 0 br1192.168.16.0 0.0.0.0 255.255.240.0 U 0 0 0 br0192.168.66.0 0.0.0.0 255.255.255.128 U 0 0 0 br1 Here the system finds the destination's network address, matches it against the routing table, and if a match is found sends the packet to the respective gateway via the interface. My doubt is how and when the system calculates the network address of the destination IP in order to match it against the routing table. I may be entirely wrong about this flow; I need an explanation to understand how the packet leaves the system.
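In outline (this is the general IPv4 route lookup logic, not anything specific to your distribution): the kernel does not pre-compute one fixed "network address" for the destination. When a route has to be resolved for a packet (for example when ssh calls connect()), the destination IP is tested against each candidate route by ANDing it bit-wise with that route's Genmask and comparing the result with the route's Destination column; of all routes that match, the most specific one (longest prefix) wins, and 0.0.0.0 with mask 0.0.0.0 is the fallback default. For 192.168.21.2: 192.168.21.2 AND 255.255.240.0 = 192.168.16.0, which matches the 192.168.16.0 entry, so the packet goes out br0 directly (a Gateway of 0.0.0.0 means on-link, with the next hop resolved by ARP); the default gateway 192.168.16.1 would only be used if no more specific route matched. You can see exactly this decision for any address with ip route get 192.168.21.2 .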
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/520373", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/202905/" ] }
520,523
I need to replace a string in a (large) file on Mac OS X 10.10. My file looks like this: Y16-TUL-SUB_ Y16-TUL-SUB_ Y16-TUL-SUB_ Y16-TUL-SUB_ Y16-TUL-SUB-Y16-TUL-SUB_ Y16-TUL-SUB_ and I need to replace Y16_TUL_SUB_ with Y16-TUL-SUB- . The file name could be test.txt . I have tried a lot of the different suggestions with sed , awk and python . E.g. this: #!/usr/bin/env python import sys import os import tempfile tmp=tempfile.mkstemp() with open(sys.argv[1]) as fd1, open(tmp[1],'w') as fd2: for line in fd1: line = line.replace('Y16_TUL_SUB_','Y16-TUL-SUB-') fd2.write(line) os.rename(tmp[1],sys.argv[1]) or sed (supposedly for the Mac) with find : find . -type f -name test.txt | xargs sed -i "" "s/Y16_TUL_SUB_/Y16-TUL-SUB-/g' or sed : sed -i -e "s/Y16_TUL_SUB_/Y16-TUL-SUB/g" test.txt or awk : awk '{gsub(/Y16_TUL_SUB_/,"Y16-TUL-SUB")}' test.txt All these commands ran and either returned empty output files or did not change anything in the original file. What am I doing wrong?
Here is how to do it with sed on OS X. I've created a test file: $ cat test.txttasdasdasdasdasdasdasddddddddffdfdfdfdfdf Calling sed (the empty '' is a required parameter): $ sed -i '' "s/as/replaced_ad/g" test.txt Show the output file: $ cat test.txttreplaced_addreplaced_addreplaced_addreplaced_addreplaced_addreplaced_addreplaced_adddddddddffdfdfdfdfdf sed on OS X differs slightly from GNU sed . Use man sed in the OS X terminal to see how to use sed .
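Applied to the file in the question, that would be something like (assuming the literal text in the file really is Y16-TUL-SUB_ with hyphens, as in the sample; the commands in the question search for Y16_TUL_SUB_ with underscores, which never matches and would explain why nothing changed): sed -i '' 's/Y16-TUL-SUB_/Y16-TUL-SUB-/g' test.txt . Note that the empty '' after -i is required by BSD/macOS sed but must be omitted with GNU sed.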
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/520523", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/354164/" ] }
520,539
I have a CSV file like this: 2019.04.15;3.75;2019.04.29;-5.17;2019.05.01;7.5;2019.05.06;0.5;2019.05.13;0.25;2019.05.20;-8.5; I want to get the sum of the positive and the negative values in the second column. I solved this with the following pipes using awk and grep : plus=$(awk -F';' '{print $2};' "$file" | grep --invert-match "-" | awk '{s+=$1}END{print s}')minus=$(awk -F';' '{print $2};' "$file" | grep "-" | awk '{s+=$1}END{print s}') I'm pretty sure awk can do it on its own with a single command; the question is what that would look like.
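One awk invocation can produce both sums (a sketch, assuming the second ;-separated field is always a plain number): awk -F';' '$2 >= 0 {plus += $2} $2 < 0 {minus += $2} END {print "plus=" plus, "minus=" minus}' "$file" . awk does the floating-point arithmetic itself, so the grep and bc steps are not needed; on the sample data this prints plus=12 minus=-13.67.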
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/520539", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/240990/" ] }
520,587
The common unix utility program grep is an abbreviation of "globally search a regular expression and print", which makes grep a case of "does what it says on the tin". The name informs us of the functionality of the program. Is there a similar history behind the name of 'xargs'? Does the name have any meaning that helps as mnemonic in the same way as grep does?
The description in the Unix System 5 administrator's reference manual, the X/Open Portability Guide , and the System V Interface Definition is: xargs — construct argument list(s) and execute command These pre-date by years Wolfram Roesler's 1993 Unix Acronym List which calls it "extended arguments". As does Gordon A. Moffet's xargs clone published in 1986 whose manual says: xargs — execute a command with many arguments However : Whilst the System 5 doco and the clone doco might lead one to conclude that yes "x" relates to "execute", "extend" per Wolfram Roesler's Unix Acronym List is in fact more likely the case. Herb Gellis's own commentary on the subject implies that xe wrote it to extend the then limit of 512 bytes on filename expansion in the Mashey shell. Herb Gellis is apparently still alive. You could ask xem. ☺
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/520587", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/354394/" ] }
520,592
Add double quotes around the 1st occurrence of = after the delimiter | in a file. Input: Ver=7|errmsg=0=sucess,1=failue I want the output to be Ver"="7|errmsg"="0=success,1=failue Only the 1st occurrence of = after the delimiter | should be wrapped in double quotes. With awk we are able to achieve this, but somehow I am not able to make those changes in place in the file by using awk -i inplace. Can we do it with sed or any other approach that makes the changes in place?
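sed can make the change in place (a sketch for GNU sed; the first expression quotes the first = at the start of the line and the second quotes the first = after every | ): sed -i 's/^\([^|=]*\)=/\1"="/; s/|\([^|=]*\)=/|\1"="/g' file . On the sample input this produces Ver"="7|errmsg"="0=sucess,1=failue . If you would rather keep your existing awk program, note that -i inplace needs GNU awk 4.1 or later, so gawk -i inplace ... file may work where plain awk does not.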
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/520592", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/354401/" ] }
520,597
I want to reduce the size of a video to be able to send it via email and such. I looked at this question: How can I reduce a video's size with ffmpeg? where I got good advice on how to reduce it. The problem is that I need to manually calculate the bitrate, and even then I had to finish off with manual trial and error. Is there any good way to use ffmpeg (or another tool) to reduce the size of a video to a target size? Note due to a comment from frostschutz: Worrying too much about size and going for a fixed filesize regardless of video length / resolution / framerate / content will not give satisfactory results most of the time... Upload it somewhere, email the link. If you must encode it, set a quality level, leave the bitrate dynamic. If it's obvious that size will not be met, cancel the encode and adapt quality level accordingly. Rinse and repeat. Good advice in general, but it does not suit my use case. Uploading to an external link is not an option due to technical limitations, one of them being that the receiver does not have HTTP access. I should also clarify that I'm NOT looking for exactly the size I'm specifying. It's enough if it is reasonably close to what I want (maybe up to 5 or 10% lower or so), but it should be guaranteed that it will not exceed my target limit.
@philip-couling's answer is close but missing several pieces: The desired size in MB needs to be multiplied by 8 to get bit rate For bit rates, 1Mb/s = 1000 kb/s = 1000000 b/s; the multipliers aren't 1024 (there is another prefix, KiB/s, which is 1024 B/s, but ffmpeg doesn't appear to use that) Truncating or rounding down a non-integer length (instead of using it precisely or rounding up) will result in the file size slightly exceeding target size The -b flag specifies the average bit rate: in practice the encoder will let the bit rate jump around a bit, and the result could overshoot your target size. You can however specify a max bit rate by using -maxrate and -bufsize For my use-case I also wanted a hardcoded audio bitrate and just adjust video accordingly (this seems safer too, I'm not 100% certain ffmpeg's -b flag specifies bitrate for video and audio streams together or not). Taking all these into account and limiting to strictly 25MB: file="input.mp4"target_size_mb=25 # 25MB target sizetarget_size=$(( $target_size_mb * 1000 * 1000 * 8 )) # target size in bitslength=`ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 "$file"`length_round_up=$(( ${length%.*} + 1 ))total_bitrate=$(( $target_size / $length_round_up ))audio_bitrate=$(( 128 * 1000 )) # 128k bit ratevideo_bitrate=$(( $total_bitrate - $audio_bitrate ))ffmpeg -i "$file" -b:v $video_bitrate -maxrate:v $video_bitrate -bufsize:v $(( $target_size / 20 )) -b:a $audio_bitrate "${file}-${target_size_mb}mb.mp4" Note that when -maxrate and -bufsize limit the max bit rate, by necessity the average bit rate will be lower, so the video will undershoot the target size by as much as 5-10% in my tests (on a 20s video at various target sizes). The value of -bufsize is important, and the calculated value used above (based on target size) is my best guess. Too small and it will vastly lower quality and undershoot target size by like 50%, but too large and I think it could potentially overshoot target size? To give the encoder more flexibility if you don't have a strict maximum file size, removing -maxrate and -bufsize will result in better quality, but can cause the video to overshoot the target size by 5% in my tests. More info in the docs , where you will see this warning: Note: Constraining the bitrate might result in low quality output if the video is hard to encode. In most cases (such as storing a file for archival), letting the encoder choose the proper bitrate is the constant quality or CRF-based encoding.
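If tracking the target closely matters, two-pass encoding usually hits an average bitrate more accurately than a single constrained pass; a sketch reusing the variables from the script above: ffmpeg -y -i "$file" -c:v libx264 -b:v $video_bitrate -pass 1 -an -f null /dev/null && ffmpeg -i "$file" -c:v libx264 -b:v $video_bitrate -pass 2 -c:a aac -b:a 128k "${file}-${target_size_mb}mb.mp4" .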
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/520597", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/236940/" ] }
520,598
I have the data below and need to count or total the values starting from a specific field. In the sample below the fields to compute start at Field5. For example, the Field5 total is 42, the Field6 total is 107, etc. Input file: A,B,C,D,42,52,23,101,5,1,0,0,0A,B,C,D,0,52,3,0,5,0,0,0,0A,B,C,D,0,1,4,0,4,0,0,0,0A,B,C,D,0,2,2,0,1,0,0,0,0 Expected output: Field5=42, Field6=107, Field7=32, Field8=101, Field9=15, Field10=1 I am trying to do it with this code, and I know it's wrong: echo "A,B,C,D,42,52,23,101,5,1,0,0,0A,B,C,D,0,52,3,0,5,0,0,0,0A,B,C,D,0,1,4,0,4,0,0,0,0A,B,C,D,0,2,2,0,1,0,0,0,0" | awk -F"," '{for(i=0;i<=NF;i++){C++}} END {print i"=" C}'
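One way with a single awk call (a sketch, assuming every line has the same comma-separated layout): awk -F',' '{for (i = 5; i <= NF; i++) sum[i] += $i} END {for (i = 5; i <= NF; i++) printf "Field%d=%d%s", i, sum[i], (i < NF ? ", " : "\n")}' file . This prints every field from the 5th to the last, including the zero totals for Field11 to Field13; wrap the printf in if (sum[i]) if you only want the non-zero ones.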
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/520598", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/354226/" ] }
520,625
I just compiled transmission{-daemon,-cli} on my Debian 10 Buster and installed it, with some minor and major problems. One of those minor ones is an apparent failure to set the UDP receive and send buffers, as per the log: [2019-05-23 12:45:40.950] UDP Failed to set receive buffer: requested 4194304, got 425984 (tr-udp.c:84)[2019-05-23 12:45:40.950] UDP Please add the line "net.core.rmem_max = 4194304" to /etc/sysctl.conf (tr-udp.c:89)[2019-05-23 12:45:40.950] UDP Failed to set send buffer: requested 1048576, got 425984 (tr-udp.c:95)[2019-05-23 12:45:40.950] UDP Please add the line "net.core.wmem_max = 1048576" to /etc/sysctl.conf (tr-udp.c:100) I would like the client to show its maximum performance, so I am curious how to tune these two settings it proposes on my system. 4 and 1 MiB do not seem like much to me, but since I am no networking guy, please elaborate on whether I can tune them to even higher numbers. Hardware ISP link speed: Connection configuration: public static IPv4 with forwarded ports. For the sake of completeness, let me mention the other components as well; I don't know if this info is important here or not: Server: Dell PowerEdge T20 CPU: Intel Xeon E3-1225v3 3.2GHz 4C/4T RAM: 32 GiB ECC DDR3 System drive: SSD
Following this old article helped. Let me mention the claimed solution first: Open this text file as root, be aware it is one of those important system files: /etc/sysctl.conf An alternative option is to create a new config file to hold the parameters to override, but this question is not per se about how else to make your config, and hence we solve this issue directly. I added these two lines since I have enough memory; if you are for instance on an embedded system, you might want to reconsider applying these lines ( we're setting 16 MiB for the receive buffer and 4 MiB for send buffer ), I can't tell the actual running memory requirements yet: net.core.rmem_max = 16777216net.core.wmem_max = 4194304 If you decided to add those lines above, you can re-read the config with: sysctl -p There is no need for a reboot to take effect. Now, let me quote that web page on this: This message tries to tell us, that for some reason, Transmission would like to have 4 Megabytes of receive buffer and 1 Megabyte send buffer for its UDP socket. It turns out that the support for µTP, the µTorrent transport protocol, is implemented using a single socket. By tuning the two variables, higher throughput can be achieved more easily using µTP. Since we're using a single UDP socket to implement multiple µTP sockets,and since we're not always timely in servicing an incoming UDP packet,it's important to use a large receive buffer. The send buffer is probablyless critical, we increase it nonetheless.
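Before committing the values, you can check what is currently in effect with sysctl net.core.rmem_max net.core.wmem_max and try a new value at runtime with, for example, sudo sysctl -w net.core.rmem_max=16777216 . Changes made with -w are lost on reboot, which is why they go into /etc/sysctl.conf (or a file under /etc/sysctl.d/) to persist.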
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/520625", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126755/" ] }
520,629
Is there a workaround for Debian Bug #838871 ? Problem: I want to have a network configuration on Debian with the following properties: Automatically ifup network interface when a cable is plugged in boots without blocking for a long time when no cable is connected on boot not switching my init system The standard way to do this would be the following snippet in /etc/network/interfaces : allow-hotplug eth0iface eth0 inet dhcp However, this leads to the problem described in the linked bug report: The boot process blocks for >1min when no network cable is plugged in with the following message: configuring network interfaces... ifup: waiting for lock on /run/network/ifstate.eth0 A workaround given in this question seems to be changing allow-hotplug to auto : auto eth0iface eth0 inet dhcp This effectively makes the boot blocking message disappear, however, the system now blocks right before a login prompt is shown at tty1. This time dhclient is blocking because it tries to get a dhcp reply on eth0, which is not connected, and waits for several attempts to timeout. The login prompt only appears after the dhclient timeouts. For users with a graphical DE, this might not be a problem, as they don't need to log in on tty1, instead their DE boots and they never see the dhclient message. Another workaround would probably be to use network-manager . Ideally, I would prefer to not use networkmanager, but as a last resort, I tried that. However, on Debian buster, network-manager's dependency chain conflicts with sysvinit-core , which is my init system. The last alternative which comes to my mind is to not configure eth0 in interfaces(5). This makes all boot blocks go away, however, I need to manually ifup eth0 after plugging in an ethernet cable. Any better ideas? UPDATE: To address the quote from @sourcejedi, "allow-hotplug" is specified to "start interface when the kernel detects a hotplug event from the interface" in the Debian docs under Debian networking . Related questions: Good detailed explanation of /etc/network/interfaces syntax? and What is a hotplug event from the interface?
Before network-manager , the well-known way to "automatically ifup the network interface when a cable is plugged in" was ifplugd . (Note the original author :-P). ifplugd is still available in Debian. I do not have any recent experience with it. Firstly, you would remove the auto eth0 or allow hotplug eth0 line from /etc/network/interfaces . You would still need your line iface eth0 inet dhcp . (This line depends on what the name of your network interface is, and also if you want to add ipv6 support, etc). To configure ifplugd to bring up the interface, edit /etc/default/ifplugd to set INTERFACES= to include the name of your network interface. Alternatively, it suggests you could use the value auto . I do not know how well auto works on any recent system :-). https://manpages.debian.org/buster/ifplugd/ifplugd.conf.5.en.html This feature was never provided by allow-hotplug : Note that the check for the link state has not always been there, and in any case was only done at boot time. It never supported the case where there was no cable connected at boot, and where you plugged in the cable at a later time. -- Message #20 The sources which contradict this are just wrong. If you want this feature, you need to run a daemon which waits for "netlink" events.[*] The Debian ifupdown package does not include any daemon. allow-hotplug relies on the udev daemon, which does not read the necessary netlink events. The udev daemon only reads udev "hotplug" events ("uevent"s). There is no "uevent" when an Ethernet device detects a link status change. You can verify this using udevadm monitor . The Linux kernel developers made a deliberate decision not to provide a "uevent" for this. See: Re: Q: netdev: generate kobject uevent on network events . [*] Pedant: technically ifplugd works by polling the link status at regular intervals. So it does not necessarily rely on "netlink" events. This distinction is pointed out by netplug , which does use "netlink" events. netplug does not have all the same features as ifplugd .
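A minimal sketch of the resulting configuration (assuming the interface really is named eth0): /etc/network/interfaces keeps only the iface eth0 inet dhcp stanza, with no auto or allow-hotplug line for it; /etc/default/ifplugd gets INTERFACES="eth0"; then sudo service ifplugd restart makes ifplugd start watching the link, so plugging the cable in brings the interface up shortly afterwards and unplugging it takes it down again.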
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/520629", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/354422/" ] }
520,630
Consider a file having these values: foobootoo and another one: foo,1foo,2boo,1 soo,1 How do I get only the first match for each value from the 2nd file, so that the output will be: foo,1boo,1
How about $ awk -F, 'NR==FNR {a[$1]; next} $1 in a {print; delete a[$1]}' file1 file2foo,1boo,1
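A variant that does the same thing without modifying the lookup array while iterating (sketch): awk -F, 'NR==FNR {want[$1]; next} ($1 in want) && !seen[$1]++' file1 file2 . Here seen[] remembers which keys have already been printed, so only the first matching line for each value in file1 gets through.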
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/520630", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123325/" ] }