source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
459,942 | I install Debian a lot. To do this I have a fully-automated preseed.cfg; at the end of the preseed, it downloads and runs a postinstall.sh script from my TFTP server, which does some additional customization. I'm in the process of switching from GNOME to LXQTE, and using SDDM instead of GDM. However, SDDM tries to start too quickly for my hardware. To get around this, I've been using systemctl edit sddm to add the following: [Service]ExecStartPre=/bin/sleep 5 This works great, and I'd like to automate this process by adding it to the postinstall.sh script. However, I can't figure out how to pass the file contents to systemctl edit via a bash script. How can I do this? | You can override the $SYSTEMD_EDITOR environment variable to use a different command other than your editor when running systemctl edit . For instance, using something like SYSTEMD_EDITOR='cp /path/to/source.file' seems to work OK (even though it's pretty ugly, expecting the last argument to be appended there by systemd!) For your particular case, you could use: $ { echo "[Service]"; echo "ExecStartPre=/bin/sleep 5"; } >~/tmp/sddm-override.conf$ sudo env SYSTEMD_EDITOR="cp $HOME/tmp/sddm-override.conf" systemctl edit sddm But all that systemctl edit really does is create an override file (in its case, named override.conf ) under the /etc/systemd/system/<service>.service.d/ directory, which is created if it does not exist... So doing that directly is also a totally accepted approach. (See mentions of "drop-in" and "override" in the man page for systemd.unit for more details.) So, in your case, this would be an appropriate solution: $ sudo mkdir -p /etc/systemd/system/sddm.service.d/$ { echo "[Service]"; echo "ExecStartPre=/bin/sleep 5"; } | sudo tee /etc/systemd/system/sddm.service.d/10-startup-delay.conf$ sudo systemctl daemon-reload Which drops a file with the expected contents in the "drop-in" directory for your unit, in which case you can also name it appropriately after what it tries to accomplish. UPDATED: As @GracefulRestart points out, you need a systemctl daemon-reload after adding a drop-in directly. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/459942",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/205533/"
]
} |
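The answer above boils down to writing a drop-in file, which is also the easiest thing to automate from the preseed's postinstall.sh. A minimal sketch of the relevant postinstall fragment (the drop-in file name `10-startup-delay.conf` is illustrative; any `*.conf` name works):

```bash
# Create the drop-in directory for sddm.service and add the start-up delay.
mkdir -p /etc/systemd/system/sddm.service.d

# printf avoids the portability quirks of echo -e in /bin/sh scripts.
printf '[Service]\nExecStartPre=/bin/sleep 5\n' \
    > /etc/systemd/system/sddm.service.d/10-startup-delay.conf

# Make systemd notice the new drop-in.
systemctl daemon-reload
```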
459,944 | [fakename]$ help timetime: time [-p] pipeline Report time consumed by pipeline's execution... From this, it seems that time is a Bash builtin. However, I cannot find a description of it on this page: https://www.gnu.org/software/bash/manual/html_node/Shell-Builtin-Commands.html#Shell-Builtin-Commands . Why is this the case? | It is described in the "Shell Grammar/Pipelines" subsection of the bash manpage . It is also described in thelink that you provided in the Pipelines section, where it is indexed under "Reserved Words" . Pipelines A pipeline is a sequence of one or more commands separated by one of the control operators | or |&. The format for a pipeline is: [time [-p]] [ ! ] command [ | or |& command2 ... ] The standard output of command is connected via a pipe to the standard input of command2. This connection is performed before any redirections specified by the command (see REDIRECTION below). If |& is used, the standard error of command is connected to command2's standard input through the pipe; it is shorthand for 2>&1 |. This implicit redirection of the standard error is performed after any redirections specified by the command. The return status of a pipeline is the exit status of the last command, unless the pipefail option is enabled. If pipefail is enabled, the pipeline's return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully. If the reserved word ! precedes a pipeline, the exit status of that pipeline is the logical negation of the exit status as described above. The shell waits for all commands in the pipeline to terminate before returning a value. If the time reserved word precedes a pipeline, the elapsed as well as user and system time consumed by its execution are reported when the pipeline terminates. The -p option changes the output format to that specified by POSIX. The TIMEFORMAT variable may be set to a format string that specifies how the timing information should be displayed; see the description of TIMEFORMAT under Shell Variables below. Each command in a pipeline is executed as a separate process (i.e., in a subshell). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/459944",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/209133/"
]
} |
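Because `time` is a reserved word rather than an ordinary builtin, its behaviour is easiest to see interactively. A few illustrative commands (the report format depends on TIMEFORMAT):

```bash
type time                      # bash reports: time is a shell keyword
TIMEFORMAT='elapsed: %R s'     # customise the keyword's output format
time sleep 1                   # times a single command
time sleep 1 | cat             # times the whole pipeline, not just sleep
```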
459,950 | I'm trying to mount a device but without success.The strange thing is that the mount command succeeds and return exit code 0, but the device is not mounted.Any idea on why this happens or how to investigate it?Please see the example below: [root@mymachine ~]# blkid -o listdevice fs_type label mount point UUID-----------------------------------------------------------------------------------------/dev/xvda1 xfs / 29342a0b-e20f-4676-9ecf-dfdf02ef6683/dev/xvdy ext4 /vols/data 72c23c30-2704-42ec-9518-533c182e2b22/dev/xvdb swap <swap> 990ff722-158c-4ad5-963a-0bc9e1e2b17a/dev/xvdx ext4 (not mounted) 956b5553-d8b4-4ffe-830c-253e1cb10a2f[root@mymachine ~]# grep /dev/xvdx /etc/fstab/dev/xvdx /vols/data5 ext4 defaults 0 0[root@mymachine ~]# mount -a; echo $?0[root@mymachine ~]# blkid -o listdevice fs_type label mount point UUID-----------------------------------------------------------------------------------------/dev/xvda1 xfs / 29342a0b-e20f-4676-9ecf-dfdf02ef6683/dev/xvdy ext4 /vols/data 72c23c30-2704-42ec-9518-533c182e2b22/dev/xvdb swap <swap> 990ff722-158c-4ad5-963a-0bc9e1e2b17a/dev/xvdx ext4 (not mounted) 956b5553-d8b4-4ffe-830c-253e1cb10a2f[root@mymachine ~]# mount /dev/xvdx /vols/data5; echo $?0[root@mymachine ~]# blkid -o listdevice fs_type label mount point UUID-----------------------------------------------------------------------------------------/dev/xvda1 xfs / 29342a0b-e20f-4676-9ecf-dfdf02ef6683/dev/xvdy ext4 /vols/data 72c23c30-2704-42ec-9518-533c182e2b22/dev/xvdb swap <swap> 990ff722-158c-4ad5-963a-0bc9e1e2b17a/dev/xvdx ext4 (not mounted) 956b5553-d8b4-4ffe-830c-253e1cb10a2f[root@mymachine ~]# Full fstab: [root@mymachine ~]# cat /etc/fstab## /etc/fstab# Created by anaconda on Mon May 1 18:59:01 2017## Accessible filesystems, by reference, are maintained under '/dev/disk'# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info#UUID=29342a0b-e20f-4676-9ecf-dfdf02ef6683 / xfs defaults 0 0/dev/xvdb swap swap defaults,nofail 0 0/dev/xvdy /vols/data ext4 defaults 0 0/dev/xvdx /vols/data5 ext4 defaults 0 0 | Normally mount doesn't return 0 if there have been problems. When I had a similar problem, the reason was that systemd unmounted the filesystem immediately after the mount. You can try strace mount /dev/xvdx /vols/data5 to see the result of the syscall. You can also try mount /dev/xvdx /vols/data5; ls -li /vols/data5 to see whether something is mounted immediately after the mount command. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/459950",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/303326/"
]
} |
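To confirm the "systemd unmounts it again" theory from the answer, it helps to look at the mount unit systemd generates from /etc/fstab; for /vols/data5 the generated unit name would be vols-data5.mount. A rough diagnostic sequence:

```bash
mount /dev/xvdx /vols/data5
findmnt /vols/data5                        # empty output = already unmounted again

# Check the generated mount unit and the journal around the mount attempt.
systemctl status vols-data5.mount
journalctl -b -u vols-data5.mount --no-pager | tail -n 20

# After any /etc/fstab edit, regenerate the mount units before retrying.
systemctl daemon-reload
mount -a
```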
459,991 | Ubuntu 18 I've done a ton of research and am close to pulling this picture together but can't quite understand: How can I configure systemd-resolved for mdns? My goal specifically: to bring up a server on a 10.0.0.0/16 network for the new server to give itself some arbitrary name like foo1 to be able to connect to that server from another machine on the same network using the name foo1 Can anyone tell me please how to make this happen specifically using systemd-resolved? thanks So far I have configured resolved.conf at follows on ubuntu@ip-10-0-0-229:/etc$ --> CHROME -> cat /etc/systemd/resolved.conf# This file is part of systemd.## systemd is free software; you can redistribute it and/or modify it# under the terms of the GNU Lesser General Public License as published by# the Free Software Foundation; either version 2.1 of the License, or# (at your option) any later version.## Entries in this file show the compile time defaults.# You can change settings by editing this file.# Defaults can be restored by simply deleting this file.## See resolved.conf(5) for details[Resolve]#DNS=#FallbackDNS=#Domains=LLMNR=yesMulticastDNS=yes#DNSSEC=no#Cache=yes#DNSStubListener=yesubuntu@ip-10-0-0-229:/etc$ --> CHROME -> | This is a late answer but I still think that can help someone because there are few infos on this topic. I also wasted time on this problem. Changing the /etc/systemd/resolved.conf is just a part of the work. After your changed it you still need to resolve this puzzle: Multicast DNS will be enabled on a link only if the per-link and theglobal setting is on. And if you know the trick... it's easy. sudo systemd-resolve --set-mdns=yes --interface=wlan0 wlan0 is the interface where the mDNS is requested. After that, you can see that mDNS is activated: $ sudo systemd-resolve --status wlan0 Link 3 (wlan0) Current Scopes: none DefaultRoute setting: no LLMNR setting: yes MulticastDNS setting: yes #<--- BINGO! DNSOverTLS setting: no DNSSEC setting: allow-downgrade DNSSEC supported: yes Restart systemd-resolved via systemctl: sudo systemctl restart systemd-resolved And mDNS is working! (before) ~ ❯ ping homebridge.local ping: cannot resolve homebridge.local: Unknown host (after) ~ ❯ ping homebridge.local PING homebridge.local (192.168.1.9): 56 data bytes64 bytes from 192.168.1.9: icmp_seq=0 ttl=64 time=21.721 ms64 bytes from 192.168.1.9: icmp_seq=1 ttl=64 time=22.429 ms | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/459991",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117536/"
]
} |
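For an unattended setup, the same steps can be scripted. A sketch assuming the interface is eth0 and the machine should answer as foo1.local (both names are placeholders; on newer systemd releases resolvectl replaces the systemd-resolve command used in the answer):

```bash
# Global switch (same effect as editing /etc/systemd/resolved.conf by hand).
sed -i 's/^#\?MulticastDNS=.*/MulticastDNS=yes/' /etc/systemd/resolved.conf

# The name this host should announce over mDNS.
hostnamectl set-hostname foo1

# Per-link switch for the interface facing the 10.0.0.0/16 network.
resolvectl mdns eth0 yes

systemctl restart systemd-resolved

# From another machine on the same subnet:
ping foo1.local
```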
459,996 | There are some log files created after a certain amount of time with the time stamp. /mylog/pathLog_file_2018-07-19-22-55-31Z.tgzLog_file_2018-07-20-01-29-11Z.tgzLog_file_2018-07-20-10-36-49Z.tgzLog_file_2018-07-21-18-26-36Z.tgz I need to delete older logs based on date. For example, I want only last 5 days logs and older logs should be deleted. Num of log files created daily varies. How to achieve this? | You can do with mtime (modified time) in find command. find /mylog/path -mindepth 1 -mtime +5 -delete -mindepth 1 means process all files except the command line arguments. -mtime +5 will check for the files modified 5 days ago. -delete will delete | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/459996",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/250201/"
]
} |
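Because `-delete` is irreversible, it may be worth restricting the match to the log file pattern from the question and doing a dry run first. A cautious variant:

```bash
# Dry run: list files older than 5 days that match the naming scheme.
find /mylog/path -mindepth 1 -type f -name 'Log_file_*.tgz' -mtime +5 -print

# Same expression with -delete once the listing looks right.
find /mylog/path -mindepth 1 -type f -name 'Log_file_*.tgz' -mtime +5 -delete
```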
460,030 | I am trying to mass set a few user account passwords using chpasswd . The passwords should be generated randomly and printed to stdout (I need to write them down or put them in a password store), and also passed into chpasswd . Naively, I would do this like this { echo student1:$(head /dev/urandom | tr -dc A-Za-z0-9 | head -c 13 ; echo '') echo student2:$(head /dev/urandom | tr -dc A-Za-z0-9 | head -c 13 ; echo '')} | tee >(chpasswd) However I worry about passing the new password as a commandline argument to echo , because arguments are usually visible to other users in ps -aux (although I never saw any echo line appear in ps ). Is there an alternative way of prepending a value to my returned password, and then passing it into chpasswd ? | Your code should be safe as echo won't show up in the process table since it's a shell built-in. Here's an alternative solution: #!/bin/bashn=20paste -d : <( seq -f 'student%.0f' 1 "$n" ) \ <( tr -cd 'A-Za-z0-9' </dev/urandom | fold -w 13 | head -n "$n" ) |tee secret.txt | chpasswd This creates your student names and passwords, n of them, without passing any passwords on any command line of any command. The paste utility glues together several files as columns and inserts a delimiter in-between them. Here, we use : as the delimiter and give it two "files" (process substitutions). The first one contains the output of a seq command that creates 20 student usernames, and the second contains the output of a pipeline that creates 20 random strings of length 13. If you have a file with usernames already generated: #!/bin/bashn=$(wc -l <usernames.txt)paste -d : usernames.txt \ <( tr -cd 'A-Za-z0-9' </dev/urandom | fold -w 13 | head -n "$n" ) |tee secret.txt | chpasswd These will save the passwords and usernames to the file secret.txt instead of showing the generated passwords in the terminal. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/460030",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34167/"
]
} |
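One detail worth adding to the accepted script: secret.txt is created with the default umask, so the generated passwords may end up readable by other users. A lightly hardened variant of the same idea (student count and file names unchanged from the answer):

```bash
#!/bin/bash
n=20
umask 077            # secret.txt becomes readable by its owner only

paste -d : <( seq -f 'student%.0f' 1 "$n" ) \
           <( tr -cd 'A-Za-z0-9' </dev/urandom | fold -w 13 | head -n "$n" ) |
    tee secret.txt | chpasswd
```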
460,078 | we have the following file : more value.js"\n#\n#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with t his work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \ "License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/lic enses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS I S\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n#\n#\n#\nkafka.logs.dir=logs\n\nlog4j.rootLogger=INFO, stdout\n\nlog4j.appender.stdout=org.apache.log4j.ConsoleAppen der\nlog4j.appender.stdout.layout=org.apache.log4j.PatternLayout\nlog4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n\n\nlog4j.appender.kafkaA ppender=org.apache.log4j.DailyRollingFileAppender\nlog4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH\nlog4j.appender.kafkaAppender.File=${kafka.log s.dir}/server.log\nlog4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout\nlog4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c )%n\n\nlog4j.appender.stateChangeAppender=org.apache.log4j.DailyRollingFileAppender\nlog4j.appender.stateChangeAppender.DatePattern='.'yyyy-MM-dd-HH\nlog4j .appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log\nlog4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout\nlog4j.appe nder.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n\n\nlog4j.appender.requestAppender=org.apache.log4j.DailyRollingFileAppender\nlog4j.appe nder.requestAppender.DatePattern='.'yyyy-MM-dd-HH\nlog4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log\nlog4j.appender.requestAppender. layout=org.apache.log4j.PatternLayout\nlog4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n\n\nlog4j.appender.cleanerAppender=org.apac he.log4j.DailyRollingFileAppender\nlog4j.appender.cleanerAppender.DatePattern='.'yyyy-MM-dd-HH\nlog4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-c leaner.log\nlog4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout\nlog4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n \n\nlog4j.appender.controllerAppender=org.apache.log4j.DailyRollingFileAppender\nlog4j.appender.controllerAppender.DatePattern='.'yyyy-MM-dd-HH\nlog4j.appe nder.controllerAppender.File=${kafka.logs.dir}/controller.log\nlog4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout\nlog4j.appender.cont rollerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n\n\n# Turn on all our debugging info\n#log4j.logger.kafka.producer.async.DefaultEventHandler=DEBUG , kafkaAppender\n#log4j.logger.kafka.client.ClientUtils=DEBUG, kafkaAppender\n#log4j.logger.kafka.perf=DEBUG, kafkaAppender\n#log4j.logger.kafka.perf.Produ cerPerformance$ProducerThread=DEBUG, kafkaAppender\n#log4j.logger.org.I0Itec.zkclient.ZkClient=DEBUG\nlog4j.logger.kafka=INFO, kafkaAppender\nlog4j.logger. 
kafka.network.RequestChannel$=WARN, requestAppender\nlog4j.additivity.kafka.network.RequestChannel$=false\n\n#log4j.logger.kafka.network.Processor=TRACE, r equestAppender\n#log4j.logger.kafka.server.KafkaApis=TRACE, requestAppender\n#log4j.additivity.kafka.server.KafkaApis=false\nlog4j.logger.kafka.request.log ger=WARN, requestAppender\nlog4j.additivity.kafka.request.logger=false\n\nlog4j.logger.kafka.controller=TRACE, controllerAppender\nlog4j.additivity.kafka.c ontroller=false\n\nlog4j.logger.kafka.log.LogCleaner=INFO, cleanerAppender\nlog4j.additivity.kafka.log.LogCleaner=false\n\nlog4j.logger.state.change.logger =TRACE, stateChangeAppender\nlog4j.additivity.state.change.logger=false" by the following trick , we can convert the "\n" to new lines and create the file.txt var=` cat value.js `echo -e "$var" | tee -a /tmp/file.txt"### Licensed to the Apache Software Foundation (ASF) under one# or more contributor license agreements. See the NOTICE file# distributed with this work for additional information# regarding copyright ownership. The ASF licenses this file# to you under the Apache License, Version 2.0 (the# \"License\"); you may not use this file except in compliance# with the License. You may obtain a copy of the License at## http://www.apache.org/licenses/LICENSE-2.0## Unless required by applicable law or agreed to in writing,# software distributed under the License is distributed on an# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY# KIND, either express or implied. See the License for the# specific language governing permissions and limitations# under the License.###kafka.logs.dir=logslog4j.rootLogger=INFO, stdoutlog4j.appender.stdout=org.apache.log4j.ConsoleAppenderlog4j.appender.stdout.layout=org.apache.log4j.PatternLayoutlog4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%nlog4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppenderlog4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HHlog4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.loglog4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayoutlog4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%nlog4j.appender.stateChangeAppender=org.apache.log4j.DailyRollingFileAppenderlog4j.appender.stateChangeAppender.DatePattern='.'yyyy-MM-dd-HHlog4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.loglog4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayoutlog4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%nlog4j.appender.requestAppender=org.apache.log4j.DailyRollingFileAppenderlog4j.appender.requestAppender.DatePattern='.'yyyy-MM-dd-HHlog4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.loglog4j.appender.requestAppender.layout=org.apache.log4j.PatternLayoutlog4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%nlog4j.appender.cleanerAppender=org.apache.log4j.DailyRollingFileAppenderlog4j.appender.cleanerAppender.DatePattern='.'yyyy-MM-dd-HHlog4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.loglog4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayoutlog4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%nlog4j.appender.controllerAppender=org.apache.log4j.DailyRollingFileAppenderlog4j.appender.controllerAppender.DatePattern='.'yyyy-MM-dd-HHlog4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.loglog4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayoutlog4j.appender.controllerAppender.layout.ConversionPattern=[%d] 
%p %m (%c)%n# Turn on all our debugging info#log4j.logger.kafka.producer.async.DefaultEventHandler=DEBUG, kafkaAppender#log4j.logger.kafka.client.ClientUtils=DEBUG, kafkaAppender#log4j.logger.kafka.perf=DEBUG, kafkaAppender#log4j.logger.kafka.perf.ProducerPerformance$ProducerThread=DEBUG, kafkaAppender#log4j.logger.org.I0Itec.zkclient.ZkClient=DEBUGlog4j.logger.kafka=INFO, kafkaAppenderlog4j.logger.kafka.network.RequestChannel$=WARN, requestAppenderlog4j.additivity.kafka.network.RequestChannel$=false#log4j.logger.kafka.network.Processor=TRACE, requestAppender#log4j.logger.kafka.server.KafkaApis=TRACE, requestAppender#log4j.additivity.kafka.server.KafkaApis=falselog4j.logger.kafka.request.logger=WARN, requestAppenderlog4j.additivity.kafka.request.logger=falselog4j.logger.kafka.controller=TRACE, controllerAppenderlog4j.additivity.kafka.controller=falselog4j.logger.kafka.log.LogCleaner=INFO, cleanerAppenderlog4j.additivity.kafka.log.LogCleaner=falselog4j.logger.state.change.logger=TRACE, stateChangeAppenderlog4j.additivity.state.change.logger=false" my question is how to convert back the file.txt to the previous format as value.js file ? | Instead of echo , you could use the jq JSON parsing tool: jq -r . < file.js > file.txt It would also have the advantage of removing the enclosing " , and turning the \" into " . To convert back to a JSON string: jq -Rs . < file.txt > newfile.js For the more generic question about converting newlines to \n , you can use perl : perl -pe 's/\n/\\n/' The difference with sed 's/\n/\\n/' which wouldn't work is that perl includes the trailing newline in the record that s operates on, but not sed . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/460078",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
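A round trip is a quick sanity check that the jq conversion is lossless. Sketch, using the file names from the question:

```bash
# JSON string -> plain text (unescapes \n and \" and strips the quotes).
jq -r . < value.js > file.txt

# Plain text -> JSON string again.
jq -Rs . < file.txt > value.roundtrip.js

# Compare with the original; any difference is usually just a trailing newline.
diff value.js value.roundtrip.js
```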
460,099 | I have a bash script, where I call exit somewhere to skip the rest of the script when getopts doesn't recognize an option or doesn't find an expected option argument. while getopts ":t:" opt; do case $opt in t) timelen="$OPTARG" ;; \?) printf "illegal option: -%s\n" "$OPTARG" >&2 echo "$usage" >&2 exit 1 ;; :) printf "missing argument for -%s\n" "$OPTARG" >&2 echo "$usage" >&2 exit 1 ;; esacdone# reset of the script I source the script in a bash shell. When something is wrong, the shell exits. Is there some way other than exit to skip the rest of the script but without exiting the invoking shell? Replacing exit with return doesn't work like for a function call, and the rest of the script will runs. Thanks. | Use return . The return bash builtin will exit the sourced script without stopping the calling (parent/sourcing) script. From man bash: return [n] Causes a function to stop executing and return the value specified by n to its caller. If n is omitted, the return status is that of the last command executed in the function body. … If return is used outside a function, but during execution of a script by the . (source) command, it causes the shell to stop executing that script and return either n or the exit status of the last command executed within the script as the exit status of the script. … | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/460099",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
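Applied to the getopts loop from the question, the change is simply exit → return; resetting OPTIND also matters when the file is sourced repeatedly in the same interactive shell. A minimal sketch:

```bash
# sourced_script.sh -- use with:  . ./sourced_script.sh -t 10
OPTIND=1                     # reset between repeated sourcings in one shell
while getopts ":t:" opt; do
    case $opt in
        t)  timelen="$OPTARG" ;;
        \?) printf 'illegal option: -%s\n' "$OPTARG" >&2; return 1 ;;
        :)  printf 'missing argument for -%s\n' "$OPTARG" >&2; return 1 ;;
    esac
done
echo "rest of the script runs, timelen=$timelen"
```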
460,155 | I formatted an external hard disk on my ubuntu linux system with exfat. First I installed the exfat utilities: sudo apt-get install parted exfat-utils Then I partitioned the disk with a mbr boot record and one primary partition using parted Finally I formatted the partition with mkfs.exfat -n ShareDisk /dev/sdX1 Then I copied about 300 GB of data onto the disk. Everything worked fine on my linux machine - so far so uneventful. However, when I plug the disk into my Mac, it says it cannot handle that file system and proposes to initialize or eject it. Now I explicitly chose exfat so the disk would work with any operating system and I have been successfully using exfat formatted disks on my Mac before. | I just spent the better part of a day solving this problem. Apparently, Mac OS is quite picky about how the partition was created and with which flags.I was able to solve the problem by Converting the boot record to GPT using sudo gdisk /dev/sdx as suggested here . Just exit gdisk right away with w . It will warn about overwriting your drive. In my case answering with Y worked fine without losing data. Please make sure that you have backed up your date before doing this (no backup, no pity). Setting the msftdata data on the exfat partition (in my case partition number 1): sudo parted /dev/sdX and then set 1 msftdata on . Afterwards my Mac opened the partition without complaints. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/460155",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/98775/"
]
} |
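The two steps from the answer, gathered into one sequence for reference. Both operate on the partition table, so /dev/sdX must be the external disk and a backup should exist before running them:

```bash
# 1. Convert the MBR disk to GPT (gdisk is interactive; 'w' writes the changes).
sudo gdisk /dev/sdX

# 2. Mark partition 1 as a Microsoft basic data partition so macOS accepts it.
sudo parted /dev/sdX set 1 msftdata on

# 3. Verify the flag took effect.
sudo parted /dev/sdX print
```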
460,182 | I am trying to compare two files using awk and I would like to print data from both files as output. The files I am comparing are as follows. File1: gene feature id fc a gene MSTRG.1.1 b gene MSTRG.1.2 c gene MSTRG.2.1 d gene MSTRG.3.1 File2: MSTRG.1.1 ALLMI MSTRG.3.1 COTJA MSTRG.4.1 SORCY I have been using the following command: $ awk -F '\t' 'BEGIN{OFS=FS} NR==FNR {a[$1]=$1; next} $3 in a {print $1}' File2 File1 I would like the output to be: a ALLMIc COTJAd SORCY, However, currently I am only getting the following as output: a c d Both files are tab delimited so I am not sure why my command isn't working? | awk solution How about this. Doesn't give the exact output you offer, but I'm unsure why d SORCY , would printed, as d is MSTRG3.1 , which is COTJA . Anyway, here goes. Starter-for-ten. Works fine on GNU Awk v4.0.2. $ awk 'NR==FNR{a[$1]=$2}NR!=FNR&&FNR>1&&a[$3]{print $1,a[$3]}' file2 file1a ALLMId COTJA$ If NR is same as FNR, we're on the first file, so populate the array. If NR isn't the same as FNR, we're on the 2nd file, so once we're past the first record of this file (header), and if field 3 exists in the array, print it. "golfed" awk solution Less readable, but shorter code. awk 'NR==FNR{a[$1]=$2}a[$3]{print$1,a[$3]}' file{2,1} join solution Alternatively, if you're not particular about needing it achieved using awk , just use join . $ join -1 3 -2 1 -o "1.1 2.2" file1 file2a ALLMId COTJA$ Join the files using field 3 from file 1 ( -1 3 ), and field 1 from file 2 ( -2 1 ). And then print field 1 from file1, and field2 from file2. Bingo. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/460182",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/266312/"
]
} |
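join expects both inputs sorted on the join field; the sample files happen to be ordered already, but sorting on the fly (and skipping File1's header line) makes the command safer on real data. A sketch:

```bash
join -1 3 -2 1 -o '1.1 2.2' \
     <(tail -n +2 File1 | sort -k3,3) \
     <(sort -k1,1 File2)
```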
460,196 | I have a server containing 10 hard disks. Device /dev/sdh is reporting uncorrectable read errors on btrfs scrub. How can I determine which physical disk corresponds to /dev/sdh ? I know I can get the disks' model numbers and serial numbers with hdparm -I /dev/sd? and I can get mountpoints with findmnt or lsblk . However, I am not finding a way to connect /dev/sdh to a hard disk by serial number, which is what I need. | lsscsi On servers where I have a lot of HDDs I've traditionally used lsscsi to determine which HDD is plugged into which port. You can use this output to get the names + the device & generic device names: $ lsscsi -g[0:0:0:0] disk ATA Hitachi HDT72101 A3AA /dev/sda /dev/sg0[2:0:0:0] disk ATA Hitachi HDS72101 A39C /dev/sdb /dev/sg1[4:0:0:0] disk ATA Maxtor 6L200P0 1G20 /dev/sdc /dev/sg2[12:0:0:0] disk WD My Passport 25E2 4005 /dev/sde /dev/sg5[12:0:0:1] enclosu WD SES Device 4005 - /dev/sg6 And use this to get the list of ports on your MB that correspond to the above devices: $ lsscsi -H[0] ahci[1] ahci[2] ahci[3] ahci[4] pata_atiixp[5] pata_atiixp[12] usb-storage You can also use the verbose output instead: $ lsscsi --verbose[0:0:0:0] disk ATA Hitachi HDT72101 A3AA /dev/sda dir: /sys/bus/scsi/devices/0:0:0:0 [/sys/devices/pci0000:00/0000:00:11.0/host0/target0:0:0/0:0:0:0][2:0:0:0] disk ATA Hitachi HDS72101 A39C /dev/sdb dir: /sys/bus/scsi/devices/2:0:0:0 [/sys/devices/pci0000:00/0000:00:11.0/host2/target2:0:0/2:0:0:0][4:0:0:0] disk ATA Maxtor 6L200P0 1G20 /dev/sdc dir: /sys/bus/scsi/devices/4:0:0:0 [/sys/devices/pci0000:00/0000:00:14.1/host4/target4:0:0/4:0:0:0][12:0:0:0] disk WD My Passport 25E2 4005 /dev/sde dir: /sys/bus/scsi/devices/12:0:0:0 [/sys/devices/pci0000:00/0000:00:13.2/usb2/2-3/2-3:1.0/host12/target12:0:0/12:0:0:0][12:0:0:1] enclosu WD SES Device 4005 - dir: /sys/bus/scsi/devices/12:0:0:1 [/sys/devices/pci0000:00/0000:00:13.2/usb2/2-3/2-3:1.0/host12/target12:0:0/12:0:0:1] NOTE: The port that it's plugged into is the first digit in this block, [0] vs. [4] in the lsscsi -H output, for example. lshw I've also been able to use lshw for this because it tells you which ports etc. a particular HDD is plugged into so it's easier to figure out which one is which in a system that has multiples. Below you can see /dev/sda along with its serial number: $ lshw -c disk -c storage *-storage description: SATA controller product: SB7x0/SB8x0/SB9x0 SATA Controller [IDE mode] vendor: Advanced Micro Devices, Inc. [AMD/ATI] physical id: 11 bus info: pci@0000:00:11.0 logical name: scsi0 logical name: scsi2 version: 00 width: 32 bits clock: 66MHz capabilities: storage pm ahci_1.0 bus_master cap_list emulated configuration: driver=ahci latency=64 resources: irq:22 ioport:c000(size=8) ioport:b000(size=4) ioport:a000(size=8) ioport:9000(size=4) ioport:8000(size=16) memory:fbbff800-fbbffbff *-disk:0 description: ATA Disk product: Hitachi HDT72101 vendor: Hitachi physical id: 0 bus info: scsi@0:0.0.0 logical name: /dev/sda version: A3AA serial: STF604MH0AD4PB size: 931GiB (1TB) capabilities: partitioned partitioned:dos configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512 signature=0005edc1 You can figure out which is which based on the coordinates of their respective bus info & physical id. smartctl The other method I've used in the past is smartctl . You can query each device independently to find out it's serial number, make & model and figure out which device it is once you open up the case. 
$ smartctl -i /dev/sdasmartctl 5.43 2016-09-28 r4347 [x86_64-linux-2.6.32-642.6.2.el6.x86_64] (local build)Copyright (C) 2002-12 by Bruce Allen, http://smartmontools.sourceforge.net=== START OF INFORMATION SECTION ===Model Family: Hitachi Deskstar 7K1000.BDevice Model: Hitachi HDT721010SLA360Serial Number: STF604MH0AD4PBLU WWN Device Id: 5 000cca 349c4b953Firmware Version: ST6OA3AAUser Capacity: 1,000,204,886,016 bytes [1.00 TB]Sector Size: 512 bytes logical/physicalDevice is: In smartctl database [for details use: -P show]ATA Version is: 8ATA Standard is: ATA-8-ACS revision 4Local Time is: Thu Aug 2 21:11:01 2018 EDTSMART support is: Available - device has SMART capability.SMART support is: Enabled ledctl/ledmon On higher end rackmounted servers you can use ledctl to light up the LED for a given HDD through its /dev/ device name. ledctl usage # ledctl locate=/dev/rssda will blink drive LED# ledctl locate={ /dev/rssda /dev/rssdb } will blink both drive LEDs# ledctl locate_off=/dev/rssda will turn off the locate LED References Using ledmon/ledctl utilities on Linux to manage backplane LEDs for PCIE SSD Software RAID drives 12 Storage Enclosure LED Utilities for MD Software RAIDs | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/460196",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15010/"
]
} |
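When no extra packages are available, the /dev/disk/by-id symlinks already map each sdX node to a name containing the model and serial number, which is usually enough to label the physical drive. Sketch:

```bash
# Model + serial for every whole disk (partition links filtered out).
ls -l /dev/disk/by-id/ | grep -v part

# Or query just the suspect device from the question:
udevadm info --query=property --name=/dev/sdh | grep -E '^ID_(MODEL|SERIAL)='
```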
460,205 | I'm seeing this error when running apt update in containers, if that is useful information. apt spits out: System error resolving 'archive.ubuntu.com:80' - getaddrinfo (16: Device or resource busy) I tried looking at the glibc source but I could not understand what was going on. [glibc.git] / resolv / getaddrinfo_a.c | lsscsi On servers where I have a lot of HDDs I've traditionally used lsscsi to determine which HDD is plugged into which port. You can use this output to get the names + the device & generic device names: $ lsscsi -g[0:0:0:0] disk ATA Hitachi HDT72101 A3AA /dev/sda /dev/sg0[2:0:0:0] disk ATA Hitachi HDS72101 A39C /dev/sdb /dev/sg1[4:0:0:0] disk ATA Maxtor 6L200P0 1G20 /dev/sdc /dev/sg2[12:0:0:0] disk WD My Passport 25E2 4005 /dev/sde /dev/sg5[12:0:0:1] enclosu WD SES Device 4005 - /dev/sg6 And use this to get the list of ports on your MB that correspond to the above devices: $ lsscsi -H[0] ahci[1] ahci[2] ahci[3] ahci[4] pata_atiixp[5] pata_atiixp[12] usb-storage You can also use the verbose output instead: $ lsscsi --verbose[0:0:0:0] disk ATA Hitachi HDT72101 A3AA /dev/sda dir: /sys/bus/scsi/devices/0:0:0:0 [/sys/devices/pci0000:00/0000:00:11.0/host0/target0:0:0/0:0:0:0][2:0:0:0] disk ATA Hitachi HDS72101 A39C /dev/sdb dir: /sys/bus/scsi/devices/2:0:0:0 [/sys/devices/pci0000:00/0000:00:11.0/host2/target2:0:0/2:0:0:0][4:0:0:0] disk ATA Maxtor 6L200P0 1G20 /dev/sdc dir: /sys/bus/scsi/devices/4:0:0:0 [/sys/devices/pci0000:00/0000:00:14.1/host4/target4:0:0/4:0:0:0][12:0:0:0] disk WD My Passport 25E2 4005 /dev/sde dir: /sys/bus/scsi/devices/12:0:0:0 [/sys/devices/pci0000:00/0000:00:13.2/usb2/2-3/2-3:1.0/host12/target12:0:0/12:0:0:0][12:0:0:1] enclosu WD SES Device 4005 - dir: /sys/bus/scsi/devices/12:0:0:1 [/sys/devices/pci0000:00/0000:00:13.2/usb2/2-3/2-3:1.0/host12/target12:0:0/12:0:0:1] NOTE: The port that it's plugged into is the first digit in this block, [0] vs. [4] in the lsscsi -H output, for example. lshw I've also been able to use lshw for this because it tells you which ports etc. a particular HDD is plugged into so it's easier to figure out which one is which in a system that has multiples. Below you can see /dev/sda along with its serial number: $ lshw -c disk -c storage *-storage description: SATA controller product: SB7x0/SB8x0/SB9x0 SATA Controller [IDE mode] vendor: Advanced Micro Devices, Inc. [AMD/ATI] physical id: 11 bus info: pci@0000:00:11.0 logical name: scsi0 logical name: scsi2 version: 00 width: 32 bits clock: 66MHz capabilities: storage pm ahci_1.0 bus_master cap_list emulated configuration: driver=ahci latency=64 resources: irq:22 ioport:c000(size=8) ioport:b000(size=4) ioport:a000(size=8) ioport:9000(size=4) ioport:8000(size=16) memory:fbbff800-fbbffbff *-disk:0 description: ATA Disk product: Hitachi HDT72101 vendor: Hitachi physical id: 0 bus info: scsi@0:0.0.0 logical name: /dev/sda version: A3AA serial: STF604MH0AD4PB size: 931GiB (1TB) capabilities: partitioned partitioned:dos configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512 signature=0005edc1 You can figure out which is which based on the coordinates of their respective bus info & physical id. smartctl The other method I've used in the past is smartctl . You can query each device independently to find out it's serial number, make & model and figure out which device it is once you open up the case. 
$ smartctl -i /dev/sdasmartctl 5.43 2016-09-28 r4347 [x86_64-linux-2.6.32-642.6.2.el6.x86_64] (local build)Copyright (C) 2002-12 by Bruce Allen, http://smartmontools.sourceforge.net=== START OF INFORMATION SECTION ===Model Family: Hitachi Deskstar 7K1000.BDevice Model: Hitachi HDT721010SLA360Serial Number: STF604MH0AD4PBLU WWN Device Id: 5 000cca 349c4b953Firmware Version: ST6OA3AAUser Capacity: 1,000,204,886,016 bytes [1.00 TB]Sector Size: 512 bytes logical/physicalDevice is: In smartctl database [for details use: -P show]ATA Version is: 8ATA Standard is: ATA-8-ACS revision 4Local Time is: Thu Aug 2 21:11:01 2018 EDTSMART support is: Available - device has SMART capability.SMART support is: Enabled ledctl/ledmon On higher end rackmounted servers you can use ledctl to light up the LED for a given HDD through its /dev/ device name. ledctl usage # ledctl locate=/dev/rssda will blink drive LED# ledctl locate={ /dev/rssda /dev/rssdb } will blink both drive LEDs# ledctl locate_off=/dev/rssda will turn off the locate LED References Using ledmon/ledctl utilities on Linux to manage backplane LEDs for PCIE SSD Software RAID drives 12 Storage Enclosure LED Utilities for MD Software RAIDs | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/460205",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/55528/"
]
} |
460,220 | My Makefile: FULL_VERSION ?= 1.2.3MINOR_VERSION := $(shell echo "${FULL_VERSION%.*}")test: echo $(MINOR_VERSION) Running make test gives nothing, I want to get 1.2 . I know I can get it via sed/grep but I'm looking for a more elegant solution, seems there's nothing simpler than bash parameter expansion | You'd need to first store the value in a shell variable: MINOR_VERSION := $(shell v='$(FULL_VERSION)'; echo "$${v%.*}") (assuming $(FULL_VERSION) doesn't contain single quotes) Now that calls sh , not bash . ${var%pattern} is a standard sh operator (comes from ksh ). If you wanted to use bash -specific operators, you'd need to tell make to call bash instead of sh with SHELL = bash Beware however that many systems don't have bash installed by default which would make your Makefile non-portable (but then, some systems don't have GNU make either and you're already using some GNUisms there)). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/460220",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146366/"
]
} |
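The `${v%.*}` expansion the answer relies on is easy to test in a plain shell before wiring it into the Makefile (where every `$` that must reach the shell is doubled). A quick check with the version string from the question:

```bash
v=1.2.3
echo "${v%.*}"      # 1.2  -- remove the shortest suffix matching ".*"
echo "${v%%.*}"     # 1    -- remove the longest such suffix, for comparison
```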
460,221 | Im trying to install and run minikube, and to do this I need to install virtualbox. Im trying to install virtualbox on Ubuntu 18.04. I already had virtualbox installed, but when I would try to run it, or minikube, I would get the following error: WARNING: The character device /dev/vboxdrv does not exist. Please install the virtualbox-dkms package and the appropriate headers, most likely linux-headers-generic. Steps taken to resolve this issue sudo apt-get purge virtualboxsudo apt-get install virtualbox This resulted in: Reading package lists... DoneBuilding dependency tree Reading state information... DoneThe following additional packages will be installed: virtualbox-qtSuggested packages: vde2 virtualbox-guest-additions-isoThe following NEW packages will be installed: virtualbox virtualbox-qt0 upgraded, 2 newly installed, 0 to remove and 1 not upgraded.Need to get 25.7 MB of archives.After this operation, 108 MB of additional disk space will be used.Do you want to continue? [Y/n] yGet:1 http://ucmirror.canterbury.ac.nz/ubuntu bionic-updates/multiverse amd64 virtualbox amd64 5.2.10-dfsg-6ubuntu18.04.1 [17.1 MB]Get:2 http://ucmirror.canterbury.ac.nz/ubuntu bionic-updates/multiverse amd64 virtualbox-qt amd64 5.2.10-dfsg-6ubuntu18.04.1 [8,580 kB] Fetched 25.7 MB in 14s (1,820 kB/s) Selecting previously unselected package virtualbox.(Reading database ... 338152 files and directories currently installed.)Preparing to unpack .../virtualbox_5.2.10-dfsg-6ubuntu18.04.1_amd64.deb ...Unpacking virtualbox (5.2.10-dfsg-6ubuntu18.04.1) ...Selecting previously unselected package virtualbox-qt.Preparing to unpack .../virtualbox-qt_5.2.10-dfsg-6ubuntu18.04.1_amd64.deb ...Unpacking virtualbox-qt (5.2.10-dfsg-6ubuntu18.04.1) ...Processing triggers for mime-support (3.60ubuntu1) ...Processing triggers for ureadahead (0.100.0-20) ...Processing triggers for desktop-file-utils (0.23-1ubuntu3.18.04.1) ...Setting up virtualbox (5.2.10-dfsg-6ubuntu18.04.1) ...vboxweb.service is a disabled or a static unit, not starting it.Job for virtualbox.service failed because the control process exited with error code.See "systemctl status virtualbox.service" and "journalctl -xe" for details.invoke-rc.d: initscript virtualbox, action "restart" failed.● virtualbox.service - LSB: VirtualBox Linux kernel module Loaded: loaded (/etc/init.d/virtualbox; generated) Active: failed (Result: exit-code) since Fri 2018-08-03 17:03:20 NZST; 14ms ago Docs: man:systemd-sysv-generator(8) Process: 30224 ExecStart=/etc/init.d/virtualbox start (code=exited, status=1/FAILURE)Aug 03 17:03:20 anton-ThinkPad-T510 systemd[1]: Starting LSB: VirtualBox Linux kernel module...Aug 03 17:03:20 anton-ThinkPad-T510 virtualbox[30224]: * Loading VirtualBox kernel modules...Aug 03 17:03:20 anton-ThinkPad-T510 virtualbox[30224]: * No suitable module for running kernel foundAug 03 17:03:20 anton-ThinkPad-T510 virtualbox[30224]: ...fail!Aug 03 17:03:20 anton-ThinkPad-T510 systemd[1]: virtualbox.service: Control process exited, code=exited status=1Aug 03 17:03:20 anton-ThinkPad-T510 systemd[1]: virtualbox.service: Failed with result 'exit-code'.Aug 03 17:03:20 anton-ThinkPad-T510 systemd[1]: Failed to start LSB: VirtualBox Linux kernel module.Processing triggers for bamfdaemon (0.5.3+18.04.20180207.2-0ubuntu1) ...Rebuilding /usr/share/applications/bamf-2.index...Processing triggers for systemd (237-3ubuntu10.3) ...Processing triggers for man-db (2.8.3-2) ...Processing triggers for shared-mime-info (1.9-2) ...Processing triggers for gnome-menus 
(3.13.3-11ubuntu1) ...Processing triggers for hicolor-icon-theme (0.17-2) ...Setting up virtualbox-qt (5.2.10-dfsg-6ubuntu18.04.1) ...Processing triggers for ureadahead (0.100.0-20) ... I have also checked that I have installed the required dependecies that are mentioned in the error: sudo apt-get install virtualbox-dkmsReading package lists... DoneBuilding dependency tree Reading state information... Donevirtualbox-dkms is already the newest version (5.2.10-dfsg-6ubuntu18.04.1).0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.sudo apt-get install linux-headers-genericReading package lists... DoneBuilding dependency tree Reading state information... Donelinux-headers-generic is already the newest version (4.15.0.29.31).0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded. | You'd need to first store the value in a shell variable: MINOR_VERSION := $(shell v='$(FULL_VERSION)'; echo "$${v%.*}") (assuming $(FULL_VERSION) doesn't contain single quotes) Now that calls sh , not bash . ${var%pattern} is a standard sh operator (comes from ksh ). If you wanted to use bash -specific operators, you'd need to tell make to call bash instead of sh with SHELL = bash Beware however that many systems don't have bash installed by default which would make your Makefile non-portable (but then, some systems don't have GNU make either and you're already using some GNUisms there)). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/460221",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/251802/"
]
} |
460,266 | I have a use case where I need to read in multiple variables at the start of each iteration and read in an input from the user into the loop. Possible paths to solution which I do not know how to explore -- For assignment use another filehandle instead of stdin Use a for loop instead of ... | while read ... ... I do not know how to assign multiple variables inside a for loop echo -e "1 2 3\n4 5 6" |\while read a b c; do echo "$a -> $b -> $c"; echo "Enter a number:"; read d ; echo "This number is $d" ; done | If I got this right, I think you want to basically loop over lists of values, and then read another within the loop. Here's a few options, 1 and 2 are probably the sanest. 1. Emulate arrays with strings Having 2D arrays would be nice, but not really possible in Bash. If your values don't have whitespace, one workaround to approximate that is to stick each set of three numbers into a string, and split the strings inside the loop: for x in "1 2 3" "4 5 6"; do read a b c <<< "$x"; read -p "Enter a number: " d echo "$a - $b - $c - $d ";done Of course you could use some other separator too, e.g. for x in 1:2:3 ... and IFS=: read a b c <<< "$x" . 2. Replace the pipe with another redirection to free stdin Another possibility is to have the read a b c read from another fd and direct the input to that (this should work in a standard shell): while read a b c <&3; do printf "Enter a number: " read d echo "$a - $b - $c - $d ";done 3<<EOF1 2 34 5 6EOF And here you can also use a process substitution if you want to get the data from a command: while read a b c <&3; ...done 3< <(echo $'1 2 3\n4 5 6') (process substitution is a bash/ksh/zsh feature) 3. Take user input from stderr instead Or, the other way around, using a pipe like in your example, but have the user input read from stderr (fd 2) instead of stdin where the pipe comes from: echo $'1 2 3\n4 5 6' |while read a b c; do read -u 2 -p "Enter a number: " d echo "$a - $b - $c - $d ";done Reading from stderr is a bit odd, but actually often works in an interactive session. (You could also explicitly open /dev/tty , assuming you want to actually bypass any redirections, that's what stuff like less uses to get the user's input even when the data is piped to it.) Though using stderr like that might not work in all cases, and if you're using some external command instead of read , you'd at least need to add a bunch of redirections to the command. Also, see Why is my variable local in one 'while read' loop, but not in another seemingly similar loop? for some issues regarding ... | while . 4. Slice parts of an array as needed I suppose you could also approximate a 2D-ish array by copying slices of a regular one-dimensional one: data=(1 2 3 4 5 6)n=3for ((i=0; i < "${#data[@]}"; i += n)); do a=( "${data[@]:i:n}" ) read -p "Enter a number: " d echo "${a[0]} - ${a[1]} - ${a[2]} - $d "done You could also assign ${a[0]} etc. to a , b etc if you want names for the variables, but Zsh would do that much more nicely . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/460266",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79961/"
]
} |
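Option 2 generalizes naturally to reading the records from a file while the prompt still uses stdin. A sketch with a hypothetical records.txt holding one whitespace-separated triple per line:

```bash
#!/bin/bash
while read -r a b c <&3; do
    read -r -p "Enter a number for $a: " d
    echo "$a - $b - $c - $d"
done 3< records.txt
```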
460,321 | I've created a shell script daemon, based on the code of this answer . I've written a systemd service file with the following contents: [Unit]Description=My DaemonAfter=network.target[Service]Type=forkingPIDFile=/run/daemon.pidExecStart=/root/bin/daemon.shExecReload=/bin/kill -1 -- $MAINPIDExecStop=/bin/kill -- $MAINPIDTimeoutStopSec=5KillMode=process[Install]WantedBy=multi-user.target Just before the while loop starts, I'm creating a PID file (it is going to be created by the daemon, not by the child, or the parent, because they don't come to this point): echo $$ > /run/daemon.pid; . It works fine, but each time I call systemctl status daemon.service , I get the following warning: daemon.service: PID file /run/daemon.pid not readable (yet?) after start: No such file or directory If I insert the PID creaton statement echo $$ > /run/daemon.pid; at the very beginning of the script (which will be used by the children and parent, too), I get the following warning: daemon.service: PID 30631 read from file /run/daemon.pid does not exist or is a zombie. What would be the best approach for creating the PID file without getting any warning messages by systemd? | So the problem you're seeing here is because when Type=forking is in use, then the pid file must be created (with the correct pid) before the parent process exits. If you create the pidfile from the child, then it will race with the exit of the parent and in some (many?) cases will cause the first error you're seeing. If you create the pidfile writing $$ to it before you start the child, then it will have the pid of the parent, which will have exited, so you'll see the other error. One way to do this correctly is to write the pidfile from the parent, just before exiting. In that case, write $! (and not $$ ), which returns the pid of the last process spawned in background. For example: #!/bin/bash# Run the following code in background:( while keep_running; do do_something done) &# Write pid of the child to the pidfile:echo "$!" >/run/daemon.pidexit This should work correctly... HOWEVER , there's a much better way to accomplish this! Read on... Actually, the whole point of systemd is to daemonize processes and run them in background for you... By trying to do that yourself, you're just preventing systemd from doing it for you. Which is making your life much harder at the same time... Instead of using Type=forking , simply write your shell script to run in foreground and set up the service to use Type=simple . You don't need any pidfiles then. Update your /root/bin/daemon.sh to simply do this: #!/bin/bash# Run the following code in foreground:while keep_running; do do_somethingdone (NOTE: Perhaps daemon.sh is not the best name for it at this point... Since that would imply it runs in background. Maybe name it something more appropriate, related to what it actually does.) Then update the .service file to use Type=simple (which would actually be used by default here, so you could even omit it.) [Service]Type=simpleExecStart=/root/bin/daemon.shExecReload=/bin/kill -1 -- $MAINPIDExecStop=/bin/kill -- $MAINPIDTimeoutStopSec=5KillMode=process BTW, you can probably drop ExecStop= , since killing the process with a signal is the default behavior as well... systemd's Type=forking is really there only for legacy programs that only work that way and can't be easily fixed to work in foreground... It's hacky and inefficient. 
The whole point of systemd (and some of its alternatives, predecessors) is to do the forking and daemonizing itself and let services just worry about doing what they need to do! :-) I hope you find this helpful... And I really hope you choose to let systemd do the heavy lifting for you! It's much more efficient that way. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/460321",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/264975/"
]
} |
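The unit's ExecReload=/bin/kill -1 line only does something useful if the foreground script reacts to SIGHUP. A sketch of the loop with a reload trap (the work inside the loop is a placeholder):

```bash
#!/bin/bash
# Foreground service body for Type=simple; systemd tracks this PID directly.
reload=0
trap 'reload=1' HUP            # ExecReload sends SIGHUP to $MAINPID

while true; do
    if (( reload )); then
        reload=0
        echo "reloading configuration"     # re-read config files here
    fi
    echo "doing one unit of work"          # placeholder for the real job
    sleep 5
done
```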
460,366 | I have a awk script where i want to be able to pass N arguments into it and also read from stdin. I would like to be able to do something like tail -f logfile | my_cool_awk_scipt var1 var2 var3 ... varN And then use these variables inside the script. #!/bin/awk -fBEGIN { print "AWK Script Starting" print ARGV[1]} { if ($0 < ARGV[1]) print $0 else if ($0 < ARGV[2]) print $0 + ARGV[2] } If i try to pass the variables as it stands it print ARGV[1] and then hits awk: ./my_cool_awk_script:4: fatal: cannot open file `var1' for reading (No such file or directory) I can do, tail -f logfile | my_cool_awk_scipt -v var1=var1 -v var2=var2 -v var3=var3 ... varN=varN but this is a bit limiting and verbose. I know I can also wrap this in a shell script but am unsure a clean way to embed what I have into something like that. | The moment awk hits the main body of the script, after BEGIN , it's going to want to read the filenames specified in ARGV[x]. So just nuke 'em. $ cat a.awk#!/bin/awk -fBEGIN {print "AWK Script Starting"ZARGV[1]=ARGV[1]ZARGV[2]=ARGV[2]ARGV[1]=""ARGV[2]=""}{ if ($0 < ZARGV[1]) print $0 else if ($0 < ZARGV[2]) print $0 + ZARGV[2]}$ Example: $ cat logfile12345$ ./a.awk 3 4 <logfileAWK Script Starting127$ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/460366",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/179802/"
]
} |
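The same trick, generalized to any number of arguments and written as an inline awk call that mirrors the question's pipeline: stash every argument in an array, then blank ARGV so awk keeps reading the piped stdin instead of trying to open "3" and "4" as files.

```bash
tail -f logfile | awk '
BEGIN {
    print "AWK Script Starting"
    for (i = 1; i < ARGC; i++) { args[i] = ARGV[i]; ARGV[i] = "" }
}
{
    if      ($0 < args[1]) print $0
    else if ($0 < args[2]) print $0 + args[2]
}' 3 4
```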
460,422 | I get some standard output in a terminal and try to use the arrow keys to scroll up but instead it gives me previous commands. Page up and page down do nothing. Using the scroll bar is extremely difficult because it moves about a page per micro-inch. Please tell me there is a way to get sensible scrolling using the arrow keys (or something equivalent) on a Ubuntu terminal? I see nothing in the preferences for scrolling. | You can use ⇧ Shift + PgUp and ⇧ Shift + PgDown to scroll in most terminals. The addition of ⇧ Shift stops the keypress from being sent through the terminal to applications, as of course happens if you just press PgUp and PgDown unmodifed. These must, moreover, be the PgUp and PgDown on the editing keypad, not the ones on the calculator keypad. ⇐ This is the editing keypad . If you have a laptop without a full 104/105/106/107/109-key keyboard, you will have to find its equivalent on your laptop keyboard, wherever that is. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/460422",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/303691/"
]
} |
460,441 | Ubuntu, sed remove ze if ze is alone on the line The closest I got is sed "s/\bze\b//g" and sed "s/^\bze\b//g" which does not produce the desired result. Before sed: ze.comexample.zezero.comze After sed: ze.comexample.zezero.com | You need line anchors rather than word boundaries $ sed '/^ze$/d' fileze.comexample.zezero.com | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/460441",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/302603/"
]
} |
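If sed is not a hard requirement, grep can express the same whole-line match; -x anchors the pattern to the entire line just as ^…$ does:

```bash
# Keep every line that is not exactly "ze".
grep -vxF 'ze' file

# In-place variant of the sed answer (GNU sed).
sed -i '/^ze$/d' file
```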
460,462 | I am looking for a way to echo names and values of all env variables that start with nlu_setting, so the output might look like: nlu_setting_json=truenlu_setting_global=0nlu_setting_bar=foo does anyone know how to do this? | for var in "${!nlu_setting_@}"; do printf '%s=%s\n' "$var" "${!var}"done The expansion ${!nlu_setting_@} is a bash -specific expansion that returns a list of variable names matching a particular prefix. Here we use it to ask for all names that start with the string nlu_setting_ . We loop over these names and output the name along with the value of that variable. We get the value of the variable using variable indirection ( ${!var} ). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/460462",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/113238/"
]
} |
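If only the names are needed, or as an alternative to the indirect expansion, bash's compgen builtin can also list variables by prefix. Sketch:

```bash
# Just the matching names:
compgen -v nlu_setting_

# Or the same name=value listing as above:
for var in $(compgen -v nlu_setting_); do
    printf '%s=%s\n' "$var" "${!var}"
done
```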
460,478 | $ su -Password: # echo $PATH/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin# exitlogout$ suPassword: # echo $PATH/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games I have no idea why /bin and /sbin are not added to $PATH , if I do the plain su. This used to be the case. How can I fix this? I did notice that: -rw-r--r-- 1 root root 0 Jan 8 2018 /etc/environment But otherwise my system seems normal. EDIT: I forgot the obligatory uname -a Linux rpi3 4.17.0-1-arm64 #1 SMP Debian 4.17.8-1 (2018-07-20) aarch64 GNU/Linux EDIT2: $ cat /etc/issueDebian GNU/Linux buster/sid \n \l all of the packages are from the "testing" repo, since "stable" ones don't work very well on aarch64. | Very recently (with version 2.32-0.2 of util-linux from 27 Jul 2018) Debian switched to a different su implementation, see bug 833256 . The "new" su is from util-linux while the "old" one was contained in the login package and originated from src:shadow Quoting from util-linux/NEWS.Debian.gz : The two implementations are very similar but have some minor differences (and there might be more that was not yet noticed ofcourse), e.g. new 'su' (with no args, i.e. when preserving the environment) also preserves PATH and IFS, while old su would always reset PATH and IFS even in 'preserve environment' mode. su '' (empty user string) used to give root, but now returns an error. previously su only had one pam config, but now 'su -' is configured separately in /etc/pam.d/su-l The first difference is probably the most user visible one. Doing plain 'su' is a really bad idea for many reasons, so using 'su -' is strongly recommended to always get a newly set up environment similar to a normal login. If you want to restore behaviour more similar to the previous one you can add 'ALWAYS_SET_PATH yes' in /etc/login.defs. The previously used su implementation behaved differently regarding PATH . This is also discussed in this bug report, see 833256#80 . The new su preserves PATH if not invoked with su - . In short: Debian's old su behaved like su - , at least regarding PATH . With the new implementation you should almost always use su - , similar to other distributions. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/460478",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89452/"
]
} |
460,480 | I've just installed UFW 0.35 on Ubuntu 16.04: root@localhost:/etc# ufw --versionufw 0.35Copyright 2008-2015 Canonical Ltd. and root@localhost:/etc# ufw app listAvailable applications: OpenSSH I would like to allow access to Apache on both port 80 and 443, with the command $ ufw allow "Apache Full" but I got an error ERROR: Could not find a profile matching 'Apache Full' | You are likely receiving that error because there has not been a profile created for 'Apache Full'. You can see which profiles exist on your system by checking the directory: /etc/ufw/applications.d/ To create a profile known as 'Apache Full' create a file in the above directory using the following syntax (from the man page): [Apache Full] title=<title> description=<description> ports=80/tcp,443/tcp Next, you will update ufw app: ufw app update "Apache Full" Now you should be able to run the command from your question: ufw allow "Apache Full" To confirm that your profile is included in ufw's rules run: ufw status | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/460480",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/301526/"
]
} |
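The same steps as they might appear in a provisioning script. The profile file name below is arbitrary, and the ports line follows the stock Apache profile that the apache2 package normally ships, so writing it by hand is only needed when that file is missing:

```bash
cat > /etc/ufw/applications.d/apache-full <<'EOF'
[Apache Full]
title=Web Server (HTTP,HTTPS)
description=Apache web server, plain and TLS ports
ports=80,443/tcp
EOF

ufw app update "Apache Full"
ufw allow "Apache Full"
ufw status
```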
460,533 | A software I installed inserted a line in my profile that reads: [ -s "$SOME_FILE" ] && \. "$SOME_FILE" I know dot . is synonymous with source , so I suspect this is just sourcing the file, but I have never seen \. before; does it do something else? Edit, regarding DVs: searching for "backslash dot" leads to questions regarding ./ when calling executable files, and man source leads to a manpage where \. does not appear. I don't know what else to try, hence the question. Edit 2: see related questions Why start a shell command with a backslash Backslash at the beginning of a command Why do backslashes prevent alias expansion Run a command that is shadowed by an alias | A backslash outside of quotes means “interpret the next character literally during parsing”. Since . is an ordinary character for the parser, \. is parsed in the same way as . , and invokes the builtin . (of which source is a synonym in bash). There is one case where it could make a difference in this context. If a user has defined an alias called . earlier in .profile , and .profile is being read in a shell that expands aliases (which bash only does by default when it's invoked interactively), then . would trigger the alias, but \. would still trigger the builtin, because the shell doesn't try alias expansion on words that were quoted in any way. I suspect that . was changed to \. because a user complained after they'd made an alias for . . Note that \. would invoke a function called . . Presumably users who write functions are more knowledgeable than users who write aliases and would know that redefining a standard command in .profile is a bad idea if you're going to include code from third parties. But if you wanted to bypass both aliases and functions, you could write command . . The author of this snippet didn't do this either because they cared about antique shells that didn't have the command builtin, or more likely because they weren't aware of it. By the way, defining any alias in .profile is a bad idea because .profile is a session initialization script, not a shell initialization script. Aliases for bash belong in .bashrc . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/460533",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45354/"
]
} |
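A short interactive demonstration of the difference described above (the alias is deliberately contrived and only for illustration — don't actually shadow . in your startup files):
    $ alias .='echo intercepted:'     # alias shadowing the builtin, in an interactive bash
    $ echo 'echo sourced' > /tmp/f.sh
    $ . /tmp/f.sh                     # unquoted word: the alias wins
    intercepted: /tmp/f.sh
    $ \. /tmp/f.sh                    # any quoting defeats alias expansion
    sourced
    $ command . /tmp/f.sh             # bypasses aliases and functions
    sourced
    $ unalias .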
460,567 | For example, [fakename]$ type echoecho is a shell builtin But man echo gives me the GNU coreutils version of echo . What's the easiest way to tell if the man page I'm looking at is the correct one, i.e the one for the utility I'd get if I directly invoked it? | You don't, really. Not without knowledge external to the man page. In the case of echo (and printf , and test , ...), it's often a shell builtin, so you'll need to know that and read the shell's documentation. (And echo is notoriously different in different implementations, use printf instead .) In most, if not all shells, you can find if something is a builtin with type command , e.g. type echo will print echo is a shell builtin . ( type is specified by POSIX but e.g. fish supports it too, as non-POSIXy as it is.) In Bash, you'd then read man bash , the online documentation , or use the builtin command help (which is specific to Bash, and which you need to know exists). Even if the command is not a builtin, it's possible that there are several commands with the same name, rename being a famous example (see Why is the rename utility on Debian/Ubuntu different than the one on other distributions, like CentOS? ). Now, your OS should have the correct man page for the actually installed utility, and e.g. in Debian, the "alternatives" system updates the corresponding man pages also when the command alternatives are changed. But if you read an online man page , you'll need to be aware of it. Many utilities have a command line option like --version which might tell you what implementation that command is. (But not nearly all utilities have it. I think it's a GNUism originally, so GNU utilities have it, as well as those that happened to copy the custom.) In the case of rename , it happens to work in telling two different implementations apart: debian$ rename --version/usr/bin/rename using File::Rename version 0.20centos$ rename --versionrename (util-linux-ng 2.17.2) Besides that, your system might have an alias or a function with the same name of a utility, usually to modify the behaviour of the utility. In that case, the defaults presented in a man page might not apply. Aliases for ls are common, as are aliases adding -i to rm or mv . But type foo would also tell you if foo is an alias or function. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/460567",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/209133/"
]
} |
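A few quick checks that follow from the answer above, using only standard bash facilities and common GNU conventions:
    $ type -a echo              # every resolution: builtin first, then the binaries found on PATH
    $ help echo                 # documentation for the bash builtin
    $ man 1 echo                # documentation for the external (coreutils) binary
    $ /usr/bin/echo --version   # many, but not all, external tools identify themselves this way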
460,579 | What I'm looking for is a way to pass in arbitrary data through STDIN, and have it get tar'd up as if it's a normal file. I tried this $ echo test | tar -cf test.tar /dev/stdin and that gives me a test.tar file with the contents dev/stdin . The stdin file is a link to /proc/self/fd/0 . What I want instead is for dev/stdin inside the TAR file to be a regular file with the text test inside. A similar thing happens with: $ tar -cf test.tar <(echo test) but the names are different. Is this doable? | I don't think you can do what you want here. The problem with your approach is that tar deals in files and directory trees, which you're not providing it with commands such as this: $ echo test | tar -cf test.tar /dev/stdin Even when you attempt to "wrap" your strings in temporary files using subshells such as this: $ tar -cf test.tar <(echo test) You can see your content is still being TAR'ed up using these temporary file descriptors: $ tar tvf test.tar lr-x------ vagrant/vagrant 0 2018-08-04 23:52 dev/fd/63 -> pipe:[102734] If your intent is just to compress strings, you need to get them into a file context. So you'd need to do something like this: $ echo "test" > somefile && tar -cf /tmp/test.tar somefile You can see the file's present inside of the TAR file: $ tar tvf /tmp/test.tar -rw-rw-r-- vagrant/vagrant 5 2018-08-05 00:00 somefile Replicating data using tar Most who have been working with Unix for several years will likely have seen this pattern: $ (cd /; tar cf - .)|(cd /mnt/newroot; tar pxvf -) I used to use this all the time to replicate data from one location to another. You can use this over SSH as well. Other methods are discussed in this U&L Q&A titled: clone root directory tree using busybox . Backing up / There's also this method if your intent is to back up the entire HDD: $ sudo tar -cvpzf backup.tar.gz --exclude=/backup.tar.gz --one-file-system / References 4 Ways to Back Up Your Entire Hard Drive on Linux | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/460579",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/303809/"
]
} |
460,595 | I have below scenario like: if [file exists]; then exit elif recheck if file exist (max 10 times) if found exit else recheck again as per counter fi | There are many ways to do this loop. With ksh93 syntax (also supported by zsh and bash ): for (( i=0; i<10; ++i)); do [ -e filename ] && break sleep 10done For any POSIX-like shell: n=0while [ "$n" -lt 10 ] && [ ! -e filename ]; do n=$(( n + 1 )) sleep 10done Both of the loops sleep 10 seconds in each iteration before testing the existence of the file again. After the loop has finished, you will have to test for existence of the file a last time to figure out whether the loop exited due to running 10 times or due to the file appearing. If you wish, and if you have access to inotify-tools, you may replace the sleep 10 call with inotifywait -q -t 10 -e create ./ >/dev/null This would wait for a file creation event to occur in the current directory, but would time out after 10 seconds. This way your loop would exit as soon as the given filename appeared (if it appeared). The full code, with inotifywait (replace with sleep 10 if you don't want that), may look like for (( i=0; i<10; ++i)); do [ -e filename ] && break inotifywait -q -t 10 -e create ./ >/dev/nulldoneif [ -e filename ]; then echo 'file appeared!'else echo 'file did not turn up in time'fi | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/460595",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/190945/"
]
} |
460,615 | I run Linux Live CD and I need to extract a specific file from a wim-archive that is located on a disk drive. I know a full path to the file in the archive: xubuntu@xubuntu:~$ 7z l winRE.wim | grep -i bootrec.exe2009-08-28 15:02:29 ....A 299008 134388 Windows/System32/BootRec.exe I am short on disk space and do not have a possibility to unpack the whole archive. How could I extract that specific file from the archive? I tried the -i option, but that did not work: xubuntu@xubuntu:~$ 7z x -i Windows/System32/BootRec.exe winRE.wim Error:Incorrect command line | The man 7z page says: -i[r[-|0]]{@listfile|!wildcard} Include filenames You need to explicitly specify ! before the file name and protect the switch from bash expansion with single quotes: 7z x '-i!Windows/System32/BootRec.exe' winRE.wim xubuntu@xubuntu:~$ 7z x '-i!Windows/System32/BootRec.exe' winRE.wim7-Zip [64] 9.20 Copyright (c) 1999-2010 Igor Pavlov 2010-11-18p7zip Version 9.20 (locale=en_US.UTF-8,Utf16=on,HugeFiles=on,4 CPUs)Processing archive: winRE.wimExtracting Windows/System32/BootRec.exeEverything is OkSize: 299008Compressed: 227817568 (You can avoid keeping the full path by using the e function letter: 7z e '-i!Windows/System32/BootRec.exe' winRE.wim .) BTW, if you do not protect the -i option with single quotes or protect it with double quotes, you get an error: xubuntu@xubuntu:~$ 7z x "-i!Windows/System32/BootRec.exe" winRE.wim bash: !Windows/System32/BootRec.exe: event not found | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/460615",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73766/"
]
} |
460,658 | I'm new to shell scripting and I'd like to know if there is maybe a better solution then the one I figured out: I want to check if a user is in a list and if yes than the script should terminate with the exit_program function: $USER is defined as somebody who logs into the system My solution (it works) is: $IGNORE_USER="USER1 USER2 USER3"if [ ! -z "$IGNORE_USER" ]; thenfor usr in $IGNORE_USER do if $USER = $usr; then exit_program "bye bye" fi donefi | That script does not work. You have a syntax error on the first line in that an assignment to IGNORE_USER should not dereference the variable with $ . There is another syntax error in your if statement. Use [ "string1" = "string2" ] to compare strings. Your code relies on using $IGNORE_USER unquoted. This splits the string on whitespaces, which is what you want to do. In the most general case, this is not what you want to do though, as the items in the list may well contain whitespace characters that should be preserved. The shell would also perform filename generation (globbing) on the values in the string if it's used unquoted. It would be better to use an array for this as you're dealing with separate items (usernames). Whenever you want to treat separate items as separate items, don't put them inside a single string. Doing so would potentially make it hard to distinguish one item from another. Suggestion: ignore=( 'user1' 'user2' 'user3' )for u in "${ignore[@]}"; do if [ "$USER" = "$u" ]; then exit_program 'bye bye' fidone This assumes that exit_program takes care of exiting the program. If not, add exit after calling exit_program . There is no need to test whether the ignore array is empty as the loop would not run a single iteration if it was. In the code above, "${ignore[@]}" (note the double quotes) would expand to the list of usernames, each username individually quoted and protected from further word splitting and filename generation. Related: The ShellCheck website For a version that is not specific to bash , but that would run in any POSIX-like shell: set -- 'user1' 'user2' 'user3'for u do if [ "$USER" = "$u" ]; then exit_program 'bye bye' fidone This uses the list of positional parameters as the list of usernames to ignore instead of an array. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/460658",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/303844/"
]
} |
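When the list of names is fixed, a case statement is another common POSIX-shell idiom for the same membership test (a sketch; substitute the real user names):
    case $USER in
        user1|user2|user3)
            exit_program 'bye bye'
            ;;
    esac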
460,716 | This question addresses the first pass of ddrescue on the device to be rescued. I had to rescue a 1.5TB hard disk. The command I used is: # ddrescue /dev/sdc1 my-part-img my-part-map When the rescue is started (with no optional parameters) on a goodarea of the disk, the read rate (" current rate ") stays around 18 MB/s. It occasionally slows a bit, but then comes back to this speed. However, when it encounters a bad area of the disk, it may slow downsignificantly, and then it never comes back to the 18 MB/s, but staysaround 3 MB/s, even after reading 50 GB of good disk with no problem. The strange part is that, when it is currently scanning a gooddisk area at 3 MB/s, if I stop ddrescue and restart it, it restarts at the higher reading rate of 18 MB/s. Iactually saved about 2 days by stopping and restarting ddrescue when it was going at 3 MB/s, which I had to do 8 times to finish the first pass. My question is: why is it that ddrescue will not try to go back to thehighest speed on its own. Given the policy, explicitly stated in the documentation, of doing first and fast theeasy areas, that is what should be done, and the behavior I observedseems to me to be a bug. I have been wondering whether this can be dealt with with the option -a or --min-read-rate=… but the manual is so terse that I was notsure. Besides, I do not understand on what basis one should choose aread rate for this option. Should it be the above 18 MB/s? Still, even with an option to specify it, I am surprised this is not done by default. Meta note Two users have voted to close the question for being primarily opinionbased. I would appreciate knowing in what sense it is? I describe with some numerical precision the behavior of animportant piece of software on an actual example, showing clearly thatit does not meet a major design objective stated in its documentation(doing the easy parts as quickly as possible), and that very simplereasoning could improve that. The software is well know, from a very trusted source, with precisealgorithms, and I expect that most defects were weeded out long ago.So I am asking experts for a possible known reason for this unexpectedbehavior, not being an expert myself on this issue. Furthermore, I ask whether one of the options of the software shouldbe used to resolve the issue, which is even more a very precisequestion. And I ask for a detailed aspect (how to choose the parameterfor this option) since I did not find documentation for that. I am asking for facts that I need for my work, not opinions. And Imotivate it with experimental facts, not opinions. | I have been wondering whether this can be dealt with with the option -a or --min-read-rate= ... but the manual is so terse that I was not sure. Besides, I do not understand on what basis one should choose a read rate for this option. Should it be the above 18 MB/s? The --min-read-rate= option should help. Modern drives tend to spend a lot of time in their internal error checking, so while the rate slows down extremely, this isn't reported as error condition. even after reading 50 GB of good disk with no problem. Which also means: you don't even know if there are problems anymore. The drive might have a problem, and decide to not report it. Now, ddrescue supports using a dynamic --min-read-rate= value, from info ddrescue : If BYTES is 0 (auto), the minimum read rate is recalculated every second as (average_rate / 10). But in my experience, the auto setting doesn't seem to help much. 
Once the drive gets stuck, especially if that happens right at the beginning, I guess the average_rate never stays high enough for it to be effective. So in a first pass when you want to grab as much data as possible, fast areas first, I just set it to average_rate / 10 manually, average_rate being what the drive's average rate would be if it was intact. So for example you can go with 10M here (for a drive that is supposed to go at ~100M/s) and then you can always go back and try your luck with the slow areas later. the behavior I observed seems to me to be a bug. If you have a bug then you have to debug it. It's hard to reproduce without having the same kind of drive failure. It could just as well be the drive itself that is stuck in some recovery mode. When dealing with defective drives, you also have to check dmesg if there are any odd things happening, such as bus resets and the like. Some controllers are also worse at dealing with failing drives than others. Sometimes manual intervention just can't be avoided. Even then, I am surprised this is not done by default. Most programs don't come with sane defaults. dd still uses 512 byte blocksize by default, which is the "wrong" choice in most cases... What is considered sane might also change over time. I am asking for facts that I need for my work, not opinions. Having good backups is better than having to rely on ddrescue . Getting data off a failing drive is a matter of luck in the first place. Data recovery involves a lot of personal experience and thus - opinions. Most recovery tools we have are also stupid. The tool does not have an AI that reports to a central server, and goes like "Oh I've seen this failure pattern on this particular drive model before, so let's change our strategy...". So this part has to be done by humans. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/460716",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38442/"
]
} |
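For concreteness, a first pass along the lines discussed above might look like this — the 10M figure is only an assumption (roughly average_rate/10 for a drive that normally sustains ~100 MB/s), and the same image and map files are reused on every run:
    # ddrescue --min-read-rate=10M /dev/sdc1 my-part-img my-part-map
    # ddrescue --retry-passes=3 /dev/sdc1 my-part-img my-part-map    # later, to go back for the slow/bad areas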
460,718 | I have an excel file that I converted to csv. When converted, it looks like the following example (Please note that there are 100+ columns in the csv. This is a minified version): ,Product," ",Citty," ",Price,Name," ",Location," ",Per Unit,banana," ",CA," ",5.7,apple," ",FL," ",2.3 I need to write a script that will take the first & second line and "merge" them together based on their comma position: ,Product Name," "" ",Citty Location," "" ",Price Per Unit,banana," ",CA," ",5.7,apple," ",FL," ",2.3 I've looked at other questions on here and stack overflow, but the answers don't seem to pertain to this weird column-by-column situation for just the first 2 lines of the file. As an additional unrelated task, I'd also like to get rid of the empty columns in the csv and fix the spelling error so that it looks like this: Product Name,City Location,Price Per Unitbanana,CA,5.7apple,FL,2.3 (The csv currently has a tab surrounded by quotes between every actual column of data except for the first column, which is just empty followed by a comma). I will be receiving the csv with the spelling error multiple times, so I would like to programmatically fix the error in the script. Please also note that the columns may not always be in the order shown above, so I need to dynamically check each column name for the error during the script. | Try this $ awk -F, 'NR<2{split(gensub(/Citty/,"City","g",$0),a,FS)}NR==2{for(b=2;b<=NF;b+=2){c=c a[b]" "$b","}print gensub(/,$/,"",1,c)}NR>2{print gensub(/(^,|" *",)/,"","g",$0)}' inpProduct Name,City Location,Price Per Unitbanana,CA,5.7apple,FL,2.3$ Same code is more readable if split across a few lines : $ awk -F, '> NR<2{split(gensub(/Citty/,"City","g",$0),a,FS)}> NR==2{for(b=2;b<=NF;b+=2){c=c a[b]" "$b","}print gensub(/,$/,"",1,c)}> NR>2{print gensub(/(^,|" *",)/,"","g",$0)}' inpProduct Name,City Location,Price Per Unitbanana,CA,5.7apple,FL,2.3$ If 1st line, split the line into array elements within a. Fix the Citty->City typo. If 2nd line, starting with the 2nd column, print the corresponding column from 1st line together with this column. Repeat for each column, going in 2 column increments. Strip the trailing , . After 2nd line, replace any leading , or any "<spaces>", with an empty string and then print the result. Tested ok on GNU Awk 4.0.2 Try it online! | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/460718",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/303325/"
]
} |
460,774 | I need to split the output from ps , which is spaced separated. #!/bin/bashA=$(ps -r -e -o pcpu=,comm= | head -1)B=${A[0]}C=${A[1]}printf '%3s %s\n' $B $(basename $C) Output should be: 42 bar Instead, I get: usage: basename ... Why doesn't this work, and most importantly, how do I make it work? | Others have already noted what the error in your code is and correctly suggested that for your initial placeholder data, an array would be the better choice of data structure, along with how to make sure you split the string correctly etc. Now that we know what your actual command is that you're parsing, we can be slightly more creative with suggestions for improvement. The following script will take each of the lines of output of your ps command and read it as two space-delimited bits. The body of the loop output the read bits in different ways: #!/bin/bashps -r -e -o pcpu=,comm= |while IFS=' ' read -r pcpu comm; do printf 'pcpu=%s,\tcomm=%s,\tbasename of comm=%s\n' \ "$pcpu" "$comm" "${comm##*/}"done Here, comm will hold everything after the first sequence of spaces in the output of ps (the initial spaces, before the first column, would be trimmed off). You may obviously insert your head -n 1 as a part of the initial pipeline if you wish. Note that in some shells, including bash , the loop is running in a subshell, so any variables created there will not be available after the pipeline has finished. There are two solutions to this in bash : Enable the lastpipe shell option in the script with shopt -s lastpipe , or Read the data into the loop with a process substitution: while IFS=' ' read ... # ...done < <( ps ... ) Example run: $ bash script.shpcpu=0.0, comm=tmux, basename of comm=tmuxpcpu=0.0, comm=sh, basename of comm=shpcpu=0.0, comm=sh, basename of comm=shpcpu=0.0, comm=bash, basename of comm=bashpcpu=0.0, comm=bash, basename of comm=bashpcpu=0.0, comm=bash, basename of comm=bashpcpu=0.0, comm=ps, basename of comm=pspcpu=0.0, comm=sh, basename of comm=sh | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/460774",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40335/"
]
} |
460,836 | I'm running this loop to check and print some things every second. However, because the calculations take maybe a few hundred milliseconds, the printed time sometimes skip a second. Is there any way to write such a loop that I am guaranteed to get a printout every second? (Provided, of course, that the calculations in the loop take less than a second :)) while true; do TIME=$(date +%H:%M:%S) # some calculations which take a few hundred milliseconds FOO=... BAR=... printf '%s %s %s\n' $TIME $FOO $BAR sleep 1done | To stay a bit closer to the original code, what I do is: while true; do sleep 1 & ...your stuff here... wait # for sleepdone This changes the semantics a little: if your stuff took less than a second, it will simply wait for the full second to pass. However, if your stuff takes longer than a second for any reason, it won't keep spawning even more subprocesses with never any end to it. So your stuff never runs in parallel, and not in the background, so variables work as expected too. Note that if you do start additional background tasks as well, you'll have to change the wait instruction to only wait for the sleep process specifically. If you need it to be even more accurate, you'll probably just have to sync it to the system clock and sleep ms instead of full seconds. How to sync to system clock? No idea really, stupid attempt: Default: while sleep 1do date +%Ndone Output: 003511461 010510925 016081282 021643477 028504349 03... (keeps growing) Synced: while sleep 0.$((1999999999 - 1$(date +%N))) do date +%N done Output: 002648691 001098397 002514348 001293023 001679137 00... (stays same) | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/460836",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40335/"
]
} |
460,845 | I have a script bash that converts this file "origin.txt" cxx-yyy-zzz-999-1112018-01-1T00:10:54.412Z2018-01-5T00:01:19.447Z1111-6b54-eeee-rrrr-tttt2018-01-1T00:41:38.867Z2018-01-5T01:14:55.744Z1234456-1233-6666-mmmm-121232018-01-1T00:12:37.152Z2018-01-5T00:12:44.307Z to cxx-yyy-zzz-999-111,2018-01-1T00:10:54.412Z,2018-01-5T00:01:19.447Z1111-6b54-eeee-rrrr-tttt,2018-01-1T00:41:38.867Z,2018-01-5T01:14:55.744Z1234456-1233-6666-mmmm-12123,2018-01-1T00:12:37.152Z,2018-01-5T00:12:44.307Z How could I do it in bash with AWK? | Every record here is three consecutive input lines (an id and two timestamps), so the job is simply to join every group of three lines with commas. With awk: awk '{ printf "%s%s", $0, (NR % 3 ? "," : "\n") }' origin.txt which prints each line followed by a comma, and a newline after every third line. The terse form awk 'ORS=NR%3?",":"\n"' origin.txt does the same thing. If you are not tied to awk, paste can do it as well: paste -d, - - - < origin.txt | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/460845",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/303997/"
]
} |
460,887 | I need to speed up a script that essentially determines whether or not all the "columns" for each row are the same, then writes a new file containing either one of the identical elements, or a "no_match". The file is comma delimited, consists of around 15,000 rows, and contains varying numbers of "columns". For example: 1-694-59,4-59,4-59,4-61,4-61,4-611-46,1-464-59,4-59,4-59,4-61,4-61,4-616-1,6-15-51,5-514-59,4-59 Writes a new file: 1-69no_match1-46no_match6-15-514-59 Deleting the second and fourth rows because they contain non-identical columns. Here is my far from elegant script: #!/bin/bashind=$1 #file innum=`wc -l "$ind"|cut -d' ' -f1` #number of lines in 'file in'echo "alleles" > same_alleles.txt #new file to write to#loop over every line of 'file in'for (( i =2; i <= "$num"; i++));do #take first column of row being looped over (string to check match of other columns with) match=`awk "FNR=="$i" {print}" "$ind"|cut -d, -f1` #counts how many matches there are in the looped row match_num=`awk "FNR=="$i" {print}" "$ind"|grep -o "$match"|wc -l|cut -d' ' -f1` #counts number of commas in each looped row comma_num=`awk "FNR=="$i" {print}" "$ind"|grep -o ","|wc -l|cut -d' ' -f1` #number of columns in each row tot_num=$((comma_num + 1)) #writes one of the identical elements if all contents of row are identical, or writes "no_match" otherwise if [ "$tot_num" == "$match_num" ]; then echo $match >> same_alleles.txt else echo "no_match" >> same_alleles.txt fidone#END Currently, the script takes around 11 min to do all ~15,000 rows. I'm not really sure how to speed this up (I'm honestly surprised I could even get it to work). Any time knocked off would be fantastic. Below is a smaller excerpt of 100 rows that could be used: allele4-391-46,1-46,1-464-394-4,4-4,4-4,4-43-23,3-23,3-233-21,3-214-34,4-343-334-4,4-4,4-44-59,4-593-23,3-23,3-231-451-46,1-463-23,3-23,3-234-611-83-74-44-59,4-59,4-591-18,1-183-21,3-213-23,3-23,3-233-23,3-23,3-233-30,3-30-34-39,4-394-612-704-38-2,4-38-21-69,1-69,1-69,1-69,1-691-694-59,4-59,4-59,4-61,4-61,4-611-46,1-464-59,4-59,4-59,4-61,4-61,4-616-1,6-15-51,5-514-59,4-591-183-71-694-30-44-391-691-694-393-23,3-23,3-234-392-53-30-34-59,4-59,4-593-21,3-214-59,4-593-94-59,4-59,4-594-31,4-311-46,1-461-46,1-46,1-465-51,5-513-484-31,4-313-74-614-59,4-59,4-59,4-61,4-61,4-614-38-2,4-38-23-21,3-211-69,1-69,1-693-23,3-23,3-234-59,4-593-483-481-46,1-463-23,3-23,3-233-30-3,3-30-31-46,1-46,1-463-643-73,3-734-41-183-71-46,1-461-34-612-704-59,4-595-51,5-513-49,3-494-4,4-4,4-44-31,4-311-691-69,1-69,1-694-393-21,3-213-333-93-484-59,4-594-59,4-594-39,4-393-21,3-211-18 My script takes ~ 7 sec to complete this. | $ awk -F, '{ for (i=2; i<=NF; ++i) if ($i != $1) { print "no_match"; next } print $1 }' file1-69no_match1-46no_match6-15-514-59 I'm sorry, but I did not even look at your code, there was too much going on. When you find yourself calling awk three times in the body of a loop on the same data, you will have to look at other ways to do it more efficiently. Also, if you involve awk , you don't need grep and cut as awk would easily be able to do their tasks (which are not needed in this case though). The awk script above reads a comma-delimited line at a time and compares each field with the first field. If any of the tests fails, the string no_match is printed and the script continues with the next line. If the loop finishes (without finding a mismatch), the first field is printed. 
As a script: #!/usr/bin/awk -fBEGIN { FS = "," }{ for (i=2; i<=NF; ++i) if ($i != $1) { print "no_match" next } print $1} FS is the input field separator, also settable with the -F option on the command line. awk will split each line on this character to create the fields. NF is the number of fields in the current record ("columns on the line"). $i refers the the i:th field in the current record, where i may be a variable or a constant (as in $1 ). Related: Why is using a shell loop to process text considered bad practice? DRY variation: #!/usr/bin/awk -fBEGIN { FS = "," }{ output = $1 for (i=2; i<=NF; ++i) if ($i != output) { output = "no_match" break } print output} | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/460887",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/293654/"
]
} |
460,890 | I'm using bash. I have a CSV file with two columns of data that looks roughly like this num_logins,day 253,2016-07-01 127,2016-07-02 I want to swap the first and second columns (making the date column the first one). So I tried this awk ' { t = $1; $1 = $2; $2 = t; print; } ' /tmp/2016_logins.csv However, the results are outputting the same . What am I missing in my awk statement above to get things to switch properly? | Here you go: awk -F, '{ print $2 "," $1 }' sampleData.csv | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/460890",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/166917/"
]
} |
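The original attempt appeared to print the line unchanged because awk was still splitting on whitespace, so $1 held the whole line and $2 was empty; once the input and output separators are both set to a comma, the swap-in-place version from the question works as intended:
    awk 'BEGIN { FS = OFS = "," } { t = $1; $1 = $2; $2 = t; print }' /tmp/2016_logins.csv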
460,985 | I want to add an array with elements and value into an existing json file using jq. I already have a file (input.json) with { "id": 9, "version": 0, "lastUpdTs": 1532371267968, "name": "Training"} I want to add this into another groups array into this json (orig.json) [ { "name": "JAYS", "sourceConnection": { "name": "ORACLE_connection", "connectionType": "JDBC", "commProtocol": "JDBC" }, "checked": true, "newlyAdded": false, "id": null, "groups": [], "displayName": "SCOTT", "defaultLevel": "MANAGED" }] The end result should look like [ { "name": "JAYS", "sourceConnection": { "name": "ORACLE_connection", "connectionType": "JDBC", "commProtocol": "JDBC" }, "checked": true, "newlyAdded": false, "id": null, "groups": [ { "id": 9, "version": 0, "lastUpdTs": 1532371267968, "name": "Training" } ], "displayName": "SCOTT", "defaultLevel": "MANAGED" }] I know how to add elements into an array, but not sure how to pass in from a file. jq '.[].groups += [{"INPUT": "HERE"}]' ./orig.json | jq has a flag for feeding actual JSON contents with its --argjson flag. What you need to do is, store the content of the first JSON file in a variable in jq 's context and update it in the second JSON jq --argjson groupInfo "$(<input.json)" '.[].groups += [$groupInfo]' orig.json The part "$(<input.json)" is shell re-direction construct to output the contents of the file given and with the argument to --argjson it is stored in the variable groupInfo . Now you add it to the groups array in the actual filter part. Putting it in another way, the above solution is equivalent of doing this jq --argjson groupInfo '{"id": 9,"version": 0,"lastUpdTs": 1532371267968,"name": "Training" }' \ '.[].groups += [$groupInfo]' orig.json | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/460985",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/304099/"
]
} |
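An equivalent that skips the shell substitution entirely is jq's --slurpfile, which reads input.json itself and binds the variable to an array of its contents (so no extra brackets are needed around it):
    jq --slurpfile groupInfo input.json '.[].groups += $groupInfo' orig.json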
460,988 | I am having a file xyz.sh JAVA_OPTS="-Xmx3072M" RESOLVED_HEAP_SIZE="2048M" RESOLVED_OFF_HEAP_SIZE="256M" Expected - I want to replace -Xmx3072M with -Xmx4096M but is not compulsory that value of JAVA_OPTS="-Xmx3072" will always be same it can be -Xmx1234 or -Xmx5120 . | Since the old number is not fixed, match the option itself plus any digits (and an optional size suffix) and replace the whole token: sed -E -i.bak 's/-Xmx[0-9]+[KkMmGg]?/-Xmx4096M/' xyz.sh Here -E enables extended regular expressions, [0-9]+ matches whatever value happens to be there ( 3072 , 1234 , 5120 , ...), and -i.bak edits the file in place while keeping a backup copy. Run it without -i.bak first to preview the result. If the value always ends in M , a plain BRE works too: sed 's/-Xmx[0-9]*M/-Xmx4096M/' xyz.sh | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/460988",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/301199/"
]
} |
461,044 | I need to add two consecutive spaces to a variable. for e.g I want to write Feb 5 (with two spaces in-between) in a variable. I am using the following command but I get Feb 5 instead. I am using AIX. START_DATE=`echo $PREV_MONTH" "$START_DAY` | You don't need the backtick construct here. Simply START_DATE="$PREV_MONTH $START_DAY" Now remember that you need to enclose $START_DATE inside " for it to retain spacing $ PREV_MONTH=Feb$ START_DAY=7$ START_DATE="$PREV_MONTH $START_DAY"$ echo "$START_DATE"Feb 7$ echo $START_DATEFeb 7 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/461044",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/302621/"
]
} |
461,058 | I'm using following script for Shortest Sub-string Match in String handling. filename="bash.string.txt"echo ${filename#*.} It gives following output. string.txt Here is an explanation of above example (Link: https://www.thegeekstuff.com/2010/07/bash-string-manipulation ): The above example deletes the shortest match of $substring from front of $string. In the first echo statement substring ‘*.’ matches the characters and a dot, and # strips from the front of the string, so it strips the substring “bash.” from the variable called filename. Then I changed the code as below: filename="bashshell.string.txt"echo ${filename#*.} I just extended the first string from bash. to bashshell. and expecting the output "bashshell.txt" according to explanation given above. But instead it gives me same output as first example. i.e. string.txt So do I misunderstood the concept? If yes than how it actually works? | The concept is fine; the misunderstanding is about what the pattern *. matches. ${filename#*.} removes the shortest leading portion of the value that matches *. , i.e. everything up to and including the first dot — whatever text happens to sit before that dot. For bashshell.string.txt the shortest prefix ending in a dot is "bashshell.", so stripping it again leaves string.txt . The expansion never looks for the literal word "bash"; it only looks for the first dot. To remove the longest prefix match instead, use ${filename##*.} , which strips up to the last dot and gives txt ; the mirror forms work from the end of the string: ${filename%.*} gives bashshell.string and ${filename%%.*} gives bashshell . If the output you actually wanted was bashshell.txt , no single strip can drop a piece out of the middle — combine two expansions instead, e.g. echo "${filename%%.*}.${filename##*.}" . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/461058",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/184505/"
]
} |
461,077 | I have a csv file with data similar to this: "1b579a5e-9701-40eb-bd36-2bc65169da99","week14_Friday-019","6907eaad-1aff-4d26-9088-ba20374b67c0","2181-019","f20af5bb-c716-42e0-9b9d-cbolf5bfecea","15-BIO-2001","COLLEGE Bio 1","d39330be-df56-4365-8fb4-37e68d040c52","Which engine has the smaller efficiency?","{choices:[a","b","c","d],"type:MultipleChoice}","{solution:[0],"selectAll:false,{"selectMultiple:false",}"type:MultipleChoice}","2016-04-25 00:30:19.000","1922ac5a-6ff6-4ea4-9078-6df4d85d294f","{solution:[0],"type:MultipleChoice}","1","1116911f-8ee5-45c3-b173-a6be681bb15a","FakeLastName","FakeFirstName","[email protected]","Student" I want to remove the double-quotes ", but only if they are inside of the curly braces {}, preferably with sed or awk. Desired output is: "1b579a5e-9701-40eb-bd36-2bc65169da99","week14_Friday-019","6907eaad-1aff-4d26-9088-ba20374b67c0","2181-019","f20af5bb-c716-42e0-9b9d-cbolf5bfecea","15-BIO-2001","COLLEGE Bio 1","d39330be-df56-4365-8fb4-37e68d040c52","Which engine has the smaller efficiency?","{choices:[a,b,c,d],type:MultipleChoice}","{solution:[0],selectAll:false,{selectMultiple:false,}type:MultipleChoice}","2016-04-25 00:30:19.000","1922ac5a-6ff6-4ea4-9078-6df4d85d294f","{solution:[0],type:MultipleChoice}","1","1116911f-8ee5-45c3-b173-a6be681bb15a","FakeLastName","FakeFirstName","[email protected]","Student" Any help would be greatly appreciated. Thanks! | It would be easier with sed : sed -e :1 -e 's/\({[^}]*\)"\([^}]*}\)/\1\2/g; t1' Or perl : perl -pe 's{\{.*?\}}{$& =~ s/"//gr}ge' Note that it assumes there's no nested {...} . To handle nested {...} , you can use perl 's recursive regexp capabilities: perl -pe 's(\{(?:[^{}]++|(?0))*\})($& =~ s/"//gr)ge' With sed , working our way outwards to escape the inner {...} s before removing the " s: sed 's/_/_u/g :1 s/\({[^{}]*\){\([^{}]*\)}/\1_<\2_>/g; t1 :2 s/\({[^}]*\)"\([^}]*}\)/\1\2/g; t2 s/_</{/g; s/_>/}/g;s/_u/_/g' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/461077",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/304180/"
]
} |
461,080 | Let's say I want to compare the files with the same filename between these two directories: /tmp/datadir/dir1/dir2/ and /datadir/dir1/dir2/ It is required to sort them before comparing. I am currently in the /tmp directory, and I tried to run this find with the exec option: find datadir -type f -exec sdiff -s <( sort {} ) <( sort "/"{} ) \; But, it gives me an error sort: open failed: {}: No such file or directory sort: open failed: /{}: No such file or directory However, the below command works fine, but the data is not sorted for proper comparison. find data -type f -exec sdiff -s {} "/"{} \; How can I fix this problem? | The replacement of {} is done by find and happens after the evaluation of the process substitution ( <(…) ) and such in your code. Using bash -c : find datadir -type f -exec \ bash -c 'sdiff -s <( sort "$1" ) <( sort "/$1" )' bash {} \; Or to avoid running one bash instance per file: find datadir -type f -exec bash -c ' for file do sdiff -s <( sort "$file" ) <( sort "/$file" ) done' bash {} + | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/461080",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/239859/"
]
} |
461,095 | I am running sudo tcpdump -i enp0s31f6 -n port 67 and port 68 on one terminal and running sudo dhclient -r on another. During this, I see nothing on first terminal. What I am doing wrong? Both terminals are on the same machine. I wish to sniff DHCP communication on the same machine, which does it. | The most likely explanation is that dhclient -r never put a DHCP packet on the wire, so there was nothing for tcpdump to capture. dhclient -r only sends a DHCPRELEASE when it finds a lease that dhclient itself obtained for the interface; on a typical desktop where NetworkManager (or systemd-networkd) manages DHCP with its own client, dhclient -r just exits quietly. Name the interface explicitly and trigger a full exchange while the capture is running: sudo dhclient -r enp0s31f6 followed by sudo dhclient -v enp0s31f6 should make the DISCOVER/OFFER/REQUEST/ACK packets show up in the tcpdump window. The capture filter itself is fine for ordinary client/server traffic (those datagrams carry both port 67 and port 68); 'port 67 or port 68' is simply the more forgiving variant. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/461095",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28089/"
]
} |
461,097 | I execute below command: ulimit -a And it gives output as: core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 14881 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 14881 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited Question is: What happens once this limit reached ? How do I come to know that limit has been reached and now I need to execute some steps ? e.g. If max-user-processes reaches 819200, then does it mean that new process will not start ? OR system will gracefully close most idle process to free up some space ? Or may be something else ? The mentioned numbers/limitations does add any overhead to system performance ? | These are per-process resource limits (the values set with setrlimit() and inherited by child processes), not system-wide counters that trigger any cleanup. When a process bumps into one of them, the system call that would exceed the limit simply fails with an error and the program has to cope: with "open files" exhausted, open() fails with EMFILE ("Too many open files"); when "max user processes" is reached, fork() fails with EAGAIN (a shell typically prints "fork: retry: Resource temporarily unavailable"); exceeding the file size limit delivers SIGXFSZ, and exceeding the cpu time limit delivers SIGXCPU. Nothing is gracefully closed or killed on your behalf, and new processes or descriptors are refused until existing ones go away. You normally find out that a limit was hit from exactly those error messages in the application or its logs. The limits themselves add no measurable overhead — they are just numbers the kernel checks while it is already servicing the relevant system call. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/461097",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4843/"
]
} |
461,113 | Linux is only a kernel, and if users want to use it, then they need a complete distribution. That being said, how were the first versions of Linux used when there were no Linux distributions? | In the early stages of Linux, Linus Torvalds released the Linux kernel source in an alpha state to signal to others that work towards a new Unix-like kernel was in development. By that time, as @RalfFriedi stated, the Linux kernel was cross-compiled in Minix. As for usable software, Linus Torvalds also ported utilities to distribute along with the Linux kernel in order for others to test it. These programs were mainly bash and gcc , as described by LINUX's History by Linus Torvalds . Per the the Usenet post : From: [email protected] (Linus Benedict Torvalds) Newsgroups: comp.os.minixSubject: What would you like to see most in minix?Summary: small poll for my new operating system Message-ID: <[email protected]> Date: 25 Aug 91 20:57:08 GMTOrganization: University of Helsinki Hello everybody out there using minix - I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I'd like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things). I've currently ported bash(1.08) and gcc(1.40), and things seem to work. This implies that I'll get something practical within a few months, and I'd like to know what features most people would want. Any suggestions are welcome, but I won't promise I'll implement them :-) Linus distributed the kernel and core utility programs in a diskette format for users to try it and possibly to contribute to it. Afterwards, there were H.J. Lu's Boot-root floppy diskettes. If this could be called a distribution, then it would gain the fame of being the first distribution capable of being installed on hard disk. These were two 5¼" diskette images containing the Linux kernel and the minimum tools required to get started. So minimal were these tools that to be able to boot from a hard drive required editing its master boot record with a hex editor. Eventually the number of utilities grew larger than the maximum size of a diskette. MCC Interim Linux was the first Linux distribution to be used by people with slightly less technical skills by introducing an automated installation and new utilities such as fdisk . MCC Interim Linux was a Linux distribution first released in February 1992 by Owen Le Blanc of the Manchester Computing Centre (MCC), part of the University of Manchester. The first release of MCC Interim Linux was based on Linux 0.12 and made use of Theodore Ts'o's ramdisk code to copy a small root image to memory, freeing the floppy drive for additional utilities diskettes.[2] He also stated his distributions were "unofficial experiments", describing the goals of his releases as being: To provide a simple installation procedure. To provide a more complete installation procedure. To provide a backup/recovery service. To back up his (then) current system. To compile, link, and test every binary file under the current versions of the kernel, gcc, and libraries. To provide a stable base system, which can be installed in a short time, and to which other software can be added with relatively little effort. After the MCC precursor, SLS was the first distribution offering the X Window System in May of 1992. 
Notably, the competitor to SLS, the mythical Yggdrasil , debuted in December of 1992. Other major distributors followed as we know them today, notably Slackware in July of 1993 (based on SLS) and Debian in December of 1993 until the first official version 1.1 release in December of 1995. Image credits: * Boot/Root diskettes image from: https://www.maketecheasier.com/ * yggdrasil diskette image from: https://yggdrasilblog.wordpress.com/ | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/461113",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/304221/"
]
} |
461,170 | * is roughly a wild card character with a unlimited amount of length. ? is roughly a wild card character for one or zero length. Is there a difference between using * vs ?* when searching for strings in bash ? | The difference is that in bash (as you tagged the question) * matches any string with length zero or more characters, while ?* matches a string with at least 1 character. Consider for example two files: file.txt and xfile.txt and try to list them with ls ?*file.txt or ls *file.txt . One real case scenario when I use such construct is to list hidden files. Very often I just do ls .??* Double question marks are here to prevent listing the current directory . and the parent directory .. , like it would be with a simpler form ls .* . I need to point here that my .??* is not perfect; for example filenames with only two characters, like .f , don't match this pattern. More reliable solution is ls {..?,.[!.]}* , but usually that is too much to type for me. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/461170",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/230085/"
]
} |
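A throwaway demonstration of the difference (directory and file names are arbitrary):
    $ mkdir /tmp/globdemo && cd /tmp/globdemo
    $ touch file.txt xfile.txt
    $ ls *file.txt       # * may match the empty string, so both names match
    file.txt  xfile.txt
    $ ls ?*file.txt      # at least one character is required before "file.txt"
    xfile.txt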
461,267 | Unix provides standard output and standard error, which can be redirected independently. $ ls /not-existls: cannot access '/not-exist': No such file or directory$ ls /not-exist > redirect.outls: cannot access '/not-exist': No such file or directory$ ls /not-exist 2> redirect.err$ I heard there's a story somewhere on the Web, which gives a fun reason why this separation was implemented. It involves the computerized typesetting that early Unix was used for (and Unix pipelines, I think). I failed to find it right now. Would anyone like to link that story here, to associate it with the relevant tags and make it easier to find? | There's Steve C. Johnson's 2013 account of this, as a user, where users complain about phototypesetting and — lo! — the problem is fixed two days later. But Douglas McIlroy told the tale slightly differently a quarter of a century earlier. In McIlroy's version, standard error was a natural consequence of Ken Thompson's famous all-nighter introduction of the Unix command pipeline. In the world of Unix prior to the pipeline, the fact that errors would be sent to the file where standard output had been redirected to was "trouble". But after the introduction of the pipeline, this behaviour "became intolerable when the output was sent to an unsuspecting process". McIlroy recounts that Dennis Ritchie introduced the standard error mechanism to finally rectify this "shortly" after Sixth Edition. Also, McIlroy had of course been working on the idea of pipelines in Unix for a fair while, by this point, including a number of proposals over the period of at least 2 years; having invented the garden hosepipe metaphor half a decade earlier than that. The concept of a separate stream distinct from the pipeline streams did not magically appear from nothing in just a couple of days. Further reading Steve C. Johnson (2013-12-11). Graphic Systems C/A/T phototypesetter . TUHS mailing list. The Unix Heritage Society. M Douglas McIlroy (1987). A Research UNIX Reader: Annotated Excerpts from the Programmer’s Manual, 1971–1986 . AT&T Bell Laboratories Computing Science Technical Report #139. p. 9. ( archive ) " 'I'm going to do it,' and so he did ". The Creation of the UNIX Operating System . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/461267",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29483/"
]
} |
461,275 | I'm trying to make a script that will unzip a password protected file, the password being the name of the file that I will get when unzipping Eg. file1.zip contains file2.zip and it's password is file2.file2.zip contains file3.zip and it's password is file3 How do I unzip file1.zip , and read the name of file2.zip so it can be entered in the script? Here's a screenshot of what I meant , I just need bash to read that output in order to know the new password(In this case the password is 13811). Here's what I've done so far #!/bin/bash echo First zip name: read firstfile pw=$(zipinfo -1 $firstfile | cut -d. -f1) nextfile=$(zipinfo -1 $firstfile) unzip -P $pw $firstfile rm $firstfile nextfile=$firstfile Now how can I make it do the loop? | If you don't have and cannot install zipinfo for any reason, you can imitate it by using unzip with -Z option. To list the contents of the zip use unzip -Z1 : pw="$(unzip -Z1 file1.zip | cut -f1 -d'.')" unzip -P "$pw" file1.zip Put it to a loop: zipfile="file1.zip"while unzip -Z1 "$zipfile" | head -n1 | grep "\.zip$"; do next_zipfile="$(unzip -Z1 "$zipfile" | head -n1)" unzip -P "${next_zipfile%.*}" "$zipfile" zipfile="$next_zipfile"done or a recursive function: unzip_all() { zipfile="$1" next_zipfile="$(unzip -Z1 "$zipfile" | head -n1)" if echo "$next_zipfile" | grep "\.zip$"; then unzip -P "${next_zipfile%%.*}" "$zipfile" unzip_all "$next_zipfile" fi}unzip_all "file1.zip" -Z zipinfo(1) mode. If the first option on the command line is -Z, the remaining options are taken to be zipinfo(1) options. See the appropriate manual page for a description of these options. -1 : list filenames only, one per line. This option excludes all others; headers, trailers and zipfile comments are never printed. It is intended for use in Unix shell scripts. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/461275",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/304382/"
]
} |
461,283 | According to Microsoft , When files are copied from NTFS drives to FAT drives, some file time stamp rounding has to occur; the file time stamp is rounded up to the next even second. (snip) NTFS time stamp: 7 hours 31 min 0 sec 001. FAT time stamp becomes 7 hours 31 min 2 sec 000. However, man rsync says --modify-window When comparing two timestamps, rsync treats the timestamps as being equal if they differ by no more than the modify-window value. This is normally 0 (for an exact match), but you may find it useful to set this to a larger value in some situations. In particular, when transferring to or from an MS Windows FAT filesystem (which represents times with a 2-second resolution), --modify-window=1 is useful (allowing times to differ by up to 1 second). I think --modify-window=2 is the correct option, because not "rounding" is executed but "ceiling" is done. Could anyone please tell me whether I am correct? Relevant or irrelevant info: In my environment, the resolution of mtime of files in FAT32 USB is 1 second, and "flooring" is done, though I don't know the reason. The USB is formatted with fdisk and mkfs -t fat -F 32 . Files are transferred from Linux Mint to Volumio. I check the timestamp, using date -r +%s.%N . Suppliment: I found another information. A reliable mail thread of rsync says timestamps will always be a problem on vfat. It has a 1 or 2 second resolution so --modify-window=2 is a common solution. but this contradicts with man rsync and there are many accepted answers on StackExchange which recommend --modify-window=1 . Now I'm confused. | Just to avoid any confusion about how the modify_window works, it's checked in either direction. (If you want to read this in the source code, check util.c :: cmp_time().) That means, if A is newer than B, it checks if A is still newer than B + modify_window. if B is newer than A, it checks if B is still newer than A + modify_window. So let's say the original A has the time 123, but your backup filesystem is lousy so copy B ends up with either time 122 (making A newer than B), or time 124 (making B newer than A). What happens with modify_window = 1? If A (123) is newer than B (122), it checks if A (123) is still newer than B (122+1 = 123). If B (124) is newer than A (123), it checks if B (124) is still newer than A (123+1 = 124). In both cases it turns out to be identical, so modify_window = 1 is sufficient for the time to deviate by one second in either direction. According to the rsync manpage, this is supposed to be good enough(tm) for FAT32. According to the documentation you cited (turning 122 into 124, what the heck), it's not good enough. So this is inconclusive. By experimentation, using NTFS(-3g) and FAT32 in Linux, modify_window = 1 seems to work fine. My test setup was thus: truncate -s 100M ntfs.img fat32.imgmkfs.ntfs -F ntfs.imgmkfs.vfat -F 32 fat32.imgmount -o loop ntfs.img /tmp/ntfs/mount -o loop fat32.img /tmp/fat32/ So, a 100M NTFS/FAT32 filesystem. Create a thousand files with a variety of timestamps: cd /tmp/ntfsfor f in {000..999}do sleep 0.0$RANDOM # widens the timestamp range touch "$f"done For example: # stat --format=%n:%y 111 222 333111:2018-08-10 20:19:10.011984300 +0200222:2018-08-10 20:19:13.553878700 +0200333:2018-08-10 20:19:17.765753000 +0200 According to you, 20:19:10.011 should come out as 2018-08-10 20:19:12.000 . So let's see what happens. First, copy all of these files over to FAT32. 
# rsync -a /tmp/ntfs/ /tmp/fat32/ Then I noticed the timestamps are actually accurate, until you umount and re-mount: # umount /tmp/fat32# mount -o loop fat32.img /tmp/fat32 Compare: # stat --format=%n:%y /tmp/{ntfs,fat32}/{111,222,333}/tmp/ntfs/ 111:2018-08-10 20:19:10.011984300 +0200/tmp/fat32/ 111:2018-08-10 20:19:10.000000000 +0200/tmp/ntfs/ 222:2018-08-10 20:19:13.553878700 +0200/tmp/fat32/ 222:2018-08-10 20:19:12.000000000 +0200/tmp/ntfs/ 333:2018-08-10 20:19:17.765753000 +0200/tmp/fat32/ 333:2018-08-10 20:19:16.000000000 +0200 So this pretty much looks like it got floored to me. I don't know if Windows would do it the same way, but this is what happens using Linux and rsync. What rsync would do when copying again: # rsync -av --dry-run /tmp/ntfs/ /tmp/fat32sending incremental file list./000001002035036...963964997998999 So there are some gaps in the list but in general, it would re-copy quite a lot of files. With --modify-window=1 , the list is empty: # rsync -av --dry-run --modify-window=1 /tmp/ntfs/ /tmp/fat32/sending incremental file list./ So, at least for Linux, the man page is accurate. The offset seems to be never larger than 1. (Well, one plus fraction, but that is ignored as well.) So, should you be using --modify-time=2 anyway? Not until you can show experimentally that this is actually a possible condition. Even then, it's hard to tell. This is an awful hack in the first place and the larger the time window, the more likely that genuine modifications will be missed. Even --modify-time=1 already ignores changes that can't be related to the way FAT32 timestamps get rounded - since it goes in both directions, but FAT32 only ever floors, and rsync ignores this when copying to FAT32 (target files can only be older), and vice versa when copying from FAT32 (target files can only be newer). An option to handle this better does not seem to exist. I also tried to track this behavior down in the kernel sources, unfortunately the comments (in linux/fs/fat/misc.c) don't give much to go on. /* * The epoch of FAT timestamp is 1980. * : bits : value * date: 0 - 4: day (1 - 31) * date: 5 - 8: month (1 - 12) * date: 9 - 15: year (0 - 127) from 1980 * time: 0 - 4: sec (0 - 29) 2sec counts * time: 5 - 10: min (0 - 59) * time: 11 - 15: hour (0 - 23) */ So according to this, FAT timestamp uses 5 bits for seconds, so you get only 32 possible states, of which 30 are used. The conversion is done with a simple bit shift. in fs/fat/misc.c :: fat_time_unix2fat() /* 0~59 -> 0~29(2sec counts) */ tm.tm_sec >>= 1; So 0 is 0, 1 is 0, 2 is 1, 3 is 1, 4 is 2, and so on... in fs/fat/misc.c :: fat_time_fat2unix() second = (time & 0x1f) << 1; Reverse of the above, and the 0x1f is the bitmask to only grab bits 0-4 of the FAT time which represents 0-29 seconds. If this is any different than it should be, there is nothing about it in the comments that I could see. An interesting post by Raymond Chen about why Windows would go to the trouble of rounding the times up: https://blogs.msdn.microsoft.com/oldnewthing/20140903-00/?p=83 Okay, but why does the timestamp always increase to the nearest two-second interval? Why not round to the nearest two-second interval? That way, the timestamp change is at most one second. Because rounding to the nearest interval means that the file might go backward in time, and that creates its own problems. (Causality can be such a drag.) According to this, the Windows xcopy tool has a /D flag which says "only copy source files if newer than destination file". 
Basically what rsync --update or cp --update would do. Rounding the time down, making files seem to be created 1 second in the past, as it happens in Linux, would cause files to be copied all over again every time you run the command. Rounding time up fixes that. OTOH the Windows solution just gives you the same headache when copying those files back. It would copy files that are made out to be newer than they really are, and then you have to be careful the roundup doesn't happen twice. No matter what you do, it's always wrong, a filesystem that can't store timestamps properly is just a bother. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/461283",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/291779/"
]
} |
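As a follow-up to the rsync/FAT32 answer above, here is a minimal sketch for re-running the deviation check on your own pair of trees (GNU stat assumed; the two mount points are the hypothetical ones used in the answer):
cd /tmp/ntfs || exit
for f in *; do
  a=$(stat -c %Y "$f")               # source mtime, whole seconds
  b=$(stat -c %Y "/tmp/fat32/$f")    # mtime of the copy after umount/re-mount
  d=$(( a - b ))
  if [ "$d" -lt 0 ] || [ "$d" -gt 1 ]; then
    echo "$f: mtime deviates by ${d}s"   # anything outside 0..1 would argue for a larger --modify-window
  fi
done
On the test setup described above this prints nothing, which is consistent with --modify-window=1 being enough.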
461,285 | I need to transpose x and y axis, of a file of 450.000 × 15.000 tab separated fields, so I tried first with a small 5 × 4 test file named A.txt: x column1 column2 column3row1 0 1 2row2 3 4 5row3 6 7 8row4 9 10 11 I tried this: for i in {1..4}; do cut -f"$i" A.txt | paste -s; done > At.txt but it does not work fine. The output is: X row1 row2 row3 row4column1 0 3 6 9column2 1 4 7 10column3 2 5 8 11 | Just to avoid any confusion about how the modify_window works, it's checked in either direction. (If you want to read this in the source code, check util.c :: cmp_time().) That means, if A is newer than B, it checks if A is still newer than B + modify_window. if B is newer than A, it checks if B is still newer than A + modify_window. So let's say the original A has the time 123, but your backup filesystem is lousy so copy B ends up with either time 122 (making A newer than B), or time 124 (making B newer than A). What happens with modify_window = 1? If A (123) is newer than B (122), it checks if A (123) is still newer than B (122+1 = 123). If B (124) is newer than A (123), it checks if B (124) is still newer than A (123+1 = 124). In both cases it turns out to be identical, so modify_window = 1 is sufficient for the time to deviate by one second in either direction. According to the rsync manpage, this is supposed to be good enough(tm) for FAT32. According to the documentation you cited (turning 122 into 124, what the heck), it's not good enough. So this is inconclusive. By experimentation, using NTFS(-3g) and FAT32 in Linux, modify_window = 1 seems to work fine. My test setup was thus: truncate -s 100M ntfs.img fat32.imgmkfs.ntfs -F ntfs.imgmkfs.vfat -F 32 fat32.imgmount -o loop ntfs.img /tmp/ntfs/mount -o loop fat32.img /tmp/fat32/ So, a 100M NTFS/FAT32 filesystem. Create a thousand files with a variety of timestamps: cd /tmp/ntfsfor f in {000..999}do sleep 0.0$RANDOM # widens the timestamp range touch "$f"done For example: # stat --format=%n:%y 111 222 333111:2018-08-10 20:19:10.011984300 +0200222:2018-08-10 20:19:13.553878700 +0200333:2018-08-10 20:19:17.765753000 +0200 According to you, 20:19:10.011 should come out as 2018-08-10 20:19:12.000 . So let's see what happens. First, copy all of these files over to FAT32. # rsync -a /tmp/ntfs/ /tmp/fat32/ Then I noticed the timestamps are actually accurate, until you umount and re-mount: # umount /tmp/fat32# mount -o loop fat32.img /tmp/fat32 Compare: # stat --format=%n:%y /tmp/{ntfs,fat32}/{111,222,333}/tmp/ntfs/ 111:2018-08-10 20:19:10.011984300 +0200/tmp/fat32/ 111:2018-08-10 20:19:10.000000000 +0200/tmp/ntfs/ 222:2018-08-10 20:19:13.553878700 +0200/tmp/fat32/ 222:2018-08-10 20:19:12.000000000 +0200/tmp/ntfs/ 333:2018-08-10 20:19:17.765753000 +0200/tmp/fat32/ 333:2018-08-10 20:19:16.000000000 +0200 So this pretty much looks like it got floored to me. I don't know if Windows would do it the same way, but this is what happens using Linux and rsync. What rsync would do when copying again: # rsync -av --dry-run /tmp/ntfs/ /tmp/fat32sending incremental file list./000001002035036...963964997998999 So there are some gaps in the list but in general, it would re-copy quite a lot of files. With --modify-window=1 , the list is empty: # rsync -av --dry-run --modify-window=1 /tmp/ntfs/ /tmp/fat32/sending incremental file list./ So, at least for Linux, the man page is accurate. The offset seems to be never larger than 1. (Well, one plus fraction, but that is ignored as well.) So, should you be using --modify-time=2 anyway? 
Not until you can show experimentally that this is actually a possible condition. Even then, it's hard to tell. This is an awful hack in the first place and the larger the time window, the more likely that genuine modifications will be missed. Even --modify-time=1 already ignores changes that can't be related to the way FAT32 timestamps get rounded - since it goes in both directions, but FAT32 only ever floors, and rsync ignores this when copying to FAT32 (target files can only be older), and vice versa when copying from FAT32 (target files can only be newer). An option to handle this better does not seem to exist. I also tried to track this behavior down in the kernel sources, unfortunately the comments (in linux/fs/fat/misc.c) don't give much to go on. /* * The epoch of FAT timestamp is 1980. * : bits : value * date: 0 - 4: day (1 - 31) * date: 5 - 8: month (1 - 12) * date: 9 - 15: year (0 - 127) from 1980 * time: 0 - 4: sec (0 - 29) 2sec counts * time: 5 - 10: min (0 - 59) * time: 11 - 15: hour (0 - 23) */ So according to this, FAT timestamp uses 5 bits for seconds, so you get only 32 possible states, of which 30 are used. The conversion is done with a simple bit shift. in fs/fat/misc.c :: fat_time_unix2fat() /* 0~59 -> 0~29(2sec counts) */ tm.tm_sec >>= 1; So 0 is 0, 1 is 0, 2 is 1, 3 is 1, 4 is 2, and so on... in fs/fat/misc.c :: fat_time_fat2unix() second = (time & 0x1f) << 1; Reverse of the above, and the 0x1f is the bitmask to only grab bits 0-4 of the FAT time which represents 0-29 seconds. If this is any different than it should be, there is nothing about it in the comments that I could see. An interesting post by Raymond Chen about why Windows would go to the trouble of rounding the times up: https://blogs.msdn.microsoft.com/oldnewthing/20140903-00/?p=83 Okay, but why does the timestamp always increase to the nearest two-second interval? Why not round to the nearest two-second interval? That way, the timestamp change is at most one second. Because rounding to the nearest interval means that the file might go backward in time, and that creates its own problems. (Causality can be such a drag.) According to this, the Windows xcopy tool has a /D flag which says "only copy source files if newer than destination file". Basically what rsync --update or cp --update would do. Rounding the time down, making files seem to be created 1 second in the past, as it happens in Linux, would cause files to be copied all over again every time you run the command. Rounding time up fixes that. OTOH the Windows solution just gives you the same headache when copying those files back. It would copy files that are made out to be newer than they really are, and then you have to be careful the roundup doesn't happen twice. No matter what you do, it's always wrong, a filesystem that can't store timestamps properly is just a bother. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/461285",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/303171/"
]
} |
461,308 | I updated my SSH port from 22 to 6433 and now I can't SSH into my machine. I updated this line in /etc/ssh/sshd_config : # If you want to change the port on a SELinux system, you have to tell# SELinux about this change.# semanage port -a -t ssh_port_t -p tcp #PORTNUMBER#Port 22 to # If you want to change the port on a SELinux system, you have to tell# SELinux about this change.# semanage port -a -t ssh_port_t -p tcp #PORTNUMBER#Port 6433 I restarted my ssh service using $ service sshd restart no errors were returned. Open up a new Terminal tab and run: $ ssh [email protected] -p6433 which returns: ssh: connect to host ip.address port 6433: No route to host Not sure how to go about fixing? update - SELinux is not enabled | Thanks to @Vinod I got on the right track, achieved by doing: $ firewall-cmd --permanent --add-port=6433/tcp$ firewall-cmd --reload now I can SSH into my server. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/461308",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/226212/"
]
} |
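A few sanity checks that go with the answer above if the new port still looks blocked (firewalld and iproute2 assumed to be installed; the SELinux line only applies on systems where SELinux is enforcing):
sudo firewall-cmd --list-ports                 # runtime config: is 6433/tcp listed?
sudo firewall-cmd --permanent --list-ports     # permanent config
sudo ss -tlnp | grep sshd                      # is sshd actually listening on the new port?
# sudo semanage port -a -t ssh_port_t -p tcp 6433   # only needed when SELinux is enforcing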
461,433 | I'm using a systemd timer and unit to automatically trigger a backup job. But currently it runs only at one moment in the evening. Is it possible to have it run at multiple moments by declaring it in the same timer? This is how it's now: [Unit]Description=Run luky-borg-backup every night[Timer]OnCalendar=21:00AccuracySec=1hPersistent=yes[Install]WantedBy=timers.target Should be something like this: [Unit]Description=Run luky-borg-backup every night[Timer]OnCalendar=10:00,21:00AccuracySec=1hPersistent=yes[Install]WantedBy=timers.target | Is it possible to have it run at multiple moments by declaring it in the same timer? Yes. See this excerpt from man systemd.timer (my emphasis): OnCalendar= Defines realtime (i.e. wallclock) timers with calendar event expressions. See systemd.time(7) for more information on the syntax of calendar event expressions. Otherwise, the semantics are similar to OnActiveSec= and related settings. Note that timers do not necessarily expire at the precise time configured with this setting, as it is subject to the AccuracySec= setting below. May be specified more than once . Here is a working example that I use: [Timer]OnCalendar=Mon-Sun *-*-* 23:00:00OnCalendar=Mon-Sun *-*-* 06:00:00 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/461433",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/244418/"
]
} |
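To confirm that both OnCalendar= lines from the answer above are parsed as intended, newer systemd releases ship systemd-analyze calendar, and systemctl list-timers shows the next elapse; the timer name here is assumed from the unit description in the question:
systemd-analyze calendar 'Mon-Sun *-*-* 23:00:00' 'Mon-Sun *-*-* 06:00:00'   # prints the next elapse for each expression
systemctl list-timers luky-borg-backup.timer                                 # NEXT/LEFT columns show the nearest trigger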
461,523 | The following syntax will remove the files under hive folder: /usr/bin/find /var/log/hive -type f -print -delete I am trying to do the following: Remove the oldest files under /var/log/hive only if folder size is more than 10G NOTE - the deletion process will stop when size under hive folder is exactly 10G , so purging process will start if size is more then 10G Can we create this solution with find command or maybe another approach? | On a GNU system, you could do something like: cd /var/log/hive && find . -type f -printf '%T@ %b :%p\0' | sort -zrn | gawk -v RS='\0' -v ORS='\0' ' BEGIN {max = 10 * 1024 * 1024 * 1024} # 10GiB; use max=10e9 for 10GB {du += 512 * $2} du > max { sub("[^:]*:", ""); print }' | xargs -r0 echo rm -f That is sort the regular files by last modification time (from newest to oldest), then count their cumulative disk usage (here assuming there are no hard links) and delete every file when we've passed the 10GiB threshold. Note that it doesn't take into account the size of the directory files themselves. It only considers the disk usage of regular files. Remove echo when satisfied with the result. On one line: find . -type f -printf '%T@ %b :%p\0' |sort -zrn|gawk -vRS='\0' -vORS='\0' '{du+=512*$2};du>10*(2^30){sub("[^:]*:","");print}'|xargs -r0 echo rm -f To delete only *.wsp files when the cumulative disk usage of all regular files goes over 10GiB, you'd want to list the non-wsp files first. And at the same time, we can also account for the disk usage of directories and other non-regular files we were missing earlier: cd /var/log/hive && find . \( -type f -name '*.wsp' -printf WSP -o -printf OTHER \) \ -printf ' %T@ %b :%p\0' | sort -zk 1,1 -k2,2rn | gawk -v RS='\0' -v ORS='\0' ' BEGIN {max = 10 * 1024 * 1024 * 1024} # 10 GiB {du += 512 * $3} du > max && $1 == "WSP" { sub("[^:]*:", ""); print }' | xargs -r0 echo rm -f | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/461523",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
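After running the real deletion from the answer above (with the echo removed), a rough way to confirm the directory is back under the threshold (GNU du assumed):
du -s --block-size=1 /var/log/hive                 # total bytes; compare against 10*2^30
find /var/log/hive -type f -name '*.wsp' | wc -l   # how many candidate files remain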
461,689 | I am not sure if this question should go here or in reverseengineering.stackexchange.com Quoting from wikipedia : In the 8086 processor, the interrupt table is called IVT (interrupt vector table). The IVT always resides at the same location in memory, ranging from 0x0000 to 0x03ff, and consists of 256 four-byte real mode far pointers (256 × 4 = 1024 bytes of memory). This is what I find in qemu monitor: (qemu) xp/128xw 00000000000000000: 0xf000ff53 0xf000ff53 0xf000e2c3 0xf000ff530000000000000010: 0xf000ff53 0xf000ff53 0xf000ff53 0xf000ff530000000000000020: 0xf000fea5 0xf000e987 0xf000d62c 0xf000d62c0000000000000030: 0xf000d62c 0xf000d62c 0xf000ef57 0xf000d62c0000000000000040: 0xc0005526 0xf000f84d 0xf000f841 0xf000e3fe0000000000000050: 0xf000e739 0xf000f859 0xf000e82e 0xf000efd20000000000000060: 0xf000d648 0xf000e6f2 0xf000fe6e 0xf000ff530000000000000070: 0xf000ff53 0xf000ff53 0xf0006aa4 0xc00089300000000000000080: 0xf000ff53 0xf000ff53 0xf000ff53 0xf000ff530000000000000090: 0xf000ff53 0xf000ff53 0xf000ff53 0xf000ff5300000000000000a0: 0xf000ff53 0xf000ff53 0xf000ff53 0xf000ff5300000000000000b0: 0xf000ff53 0xf000ff53 0xf000ff53 0xf000ff5300000000000000c0: 0xf000ff53 0xf000ff53 0xf000ff53 0xf000ff5300000000000000d0: 0xf000ff53 0xf000ff53 0xf000ff53 0xf000ff5300000000000000e0: 0xf000ff53 0xf000ff53 0xf000ff53 0xf000ff5300000000000000f0: 0xf000ff53 0xf000ff53 0xf000ff53 0xf000ff530000000000000100: 0xf000ec59 0xf000ff53 0xf000ff53 0xc00067300000000000000110: 0xf000ff53 0xf000ff53 0xf000ff53 0xf000ff530000000000000120: 0xf000ff53 0xf000ff53 0xf000ff53 0xf000ff530000000000000130: 0xf000ff53 0xf000ff53 0xf000ff53 0xf000ff530000000000000140: 0xf000ff53 0xf000ff53 0xf000ff53 0xf000ff530000000000000150: 0xf000ff53 0xf000ff53 0xf000ff53 0xf000ff530000000000000160: 0xf000ff53 0xf000ff53 0xf000ff53 0xf000ff530000000000000170: 0xf000ff53 0xf000ff53 0xf000ff53 0xf000ff530000000000000180: 0x00000000 0x00000000 0x00000000 0x000000000000000000000190: 0x00000000 0x00000000 0x00000000 0xf000ff5300000000000001a0: 0xf000ff53 0xf000ff53 0xf000ff53 0xf000ff5300000000000001b0: 0xf000ff53 0xf000ff53 0xf000ff53 0xf000ff5300000000000001c0: 0xf000d611 0xf000ec4e 0xf000ec4e 0xf000ec4e00000000000001d0: 0xf000d61a 0xf000d623 0xf000d608 0xf000ec4e00000000000001e0: 0xf000ff53 0x00000000 0xf000ff53 0xf000ff5300000000000001f0: 0xf000ff53 0xf000ff53 0xf000ff53 0xf000ff53 I am not sure what to make of those values. It does not look like an interrupt descriptor table (dereferencing those values gives all nulls). So what am I actually looking at here? | Whatever your firmware left it containing. On an ideal modern system, the processor never enters real mode at all, as I explained in this SU Q&A titled: What mode do modern 64-bit Intel chip PCs run the boot sector in? , the first KiB of physical memory is as irrelevant as Johan Myréen made it out to be in another answer here. But many modern firmwares (still) have compatibility support , meaning that they can drop back (yes, back , given that they went directly from unreal mode to protected mode) from protected mode to real mode in order to run system softwares that are written for real mode, such as old style PC/AT boot programs in MBRs and VBRs; and they provide the old real mode firmware APIs and set up all of the data structures for those APIs, that the aforementioned system softwares rely upon. One of those data structures is the real mode IVT. 
The old real mode firmware APIs are based upon int instructions, and the real mode IVT is populated by the firmware as part of its initialization with pointers to the various firmware handling routines for those instructions. Protected mode system softwares do not need the old real mode firmware APIs, and never run the processor in real mode, so the real mode IVT in the first 1KiB of physical memory is unused. (v8086 protected mode does not address physical address 00000000 and upwards, remember. It addresses logical addresses 00000000 and upwards, which are translated by page tables.) In modern EFI systems, the firmware hands over a memory map of physical memory to the operating system bootstrap, telling it which parts are reserved to the firmware for its own protected mode API purposes, and which parts the operating system is free to just go ahead and use for its pool of physical memory. In theory, the first page of physical memory can be in the latter category. In practice, firstly, firmwares often mark the first page of physical memory as "boot services code", meaning that an operating system can claim it and just go ahead and use it as part of its physical memory pool, but only after the boot-time services of the EFI firmware have been shut down by the operating system and the firmware reduced to providing its run-time services only. An example of this can be seen in the Linux kernel log (with the add_efi_memmap option) shown by Finnbarr P. Murphy: [ 0.000000] efi: mem00: type=3, attr=0xf, range=[0x0000000000000000-0x0000000000001000) (0MB) which xe decodes with another program in a more human-readable form as: [#00] Type: EfiBootServicesCode Attr: 0xF Phys: 0000000000000000-0000000000001000 Virt: 0000000000000000-0000000000001000 In practice, secondly, Linux explicitly ignores this range of physical memory even if the firmware says that it can go ahead and use it. You'll find that on both EFI and non-EFI firmwares alike, once Linux has the physical memory map it patches it ( in a function named trim_bios_range ), resulting in kernel log messages such as: [ 0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved This is not so much to cope with modern EFI firmwares, where the real mode IVT is not part of the firmware API, as it is to cope with old PC98 firmwares, where it is part of the firmware API but the firmwares report it (via that self-same API) as physical memory available to be blithely overwritten by the operating system. So whilst in theory that range of physical memory could contain arbitrary code or data, depending from the momentary needs of the kernel memory allocators and demand-paged virtual memory; in practice Linux just leaves it untouched as the firmware originally set it up. And on your system the firmware had populated it with real mode IVT entries. Real mode IVT entries are just 16:16 far pointers, of course, and if you look at your memory using a 2-byte hexdump you can actually see this pretty clearly. Some examples: Most of your IVT entries point to F000:FF53, an address in the real mode firmware ROM area. It is probably a dummy routine that does nothing more than an iret . IVT entry 1E points to F000:6AA4, a table in that same ROM area. IVT entry 1F points to C000:8930, a table in the real mode video ROM firmware area. IVT entry 43 points to C000:6730, another table in the real mode video ROM firmware area. Further reading Finnbarr P. Murphy (2012-08-18). UEFI Memory V E820 Memory . fpmurphy.com. 
What mode do modern 64-bit Intel chip PCs run the boot sector in? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/461689",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/262263/"
]
} |
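Decoding the dump words from the answer above into segment:offset form takes only a little shell arithmetic; a sketch using a few of the quoted dwords (the high 16 bits are the segment, the low 16 bits the offset):
for dword in 0xf000ff53 0xf0006aa4 0xc0008930 0xc0006730; do
  printf '%s -> %04X:%04X\n' "$dword" "$(( dword >> 16 ))" "$(( dword & 0xffff ))"
done
# prints F000:FF53, F000:6AA4, C000:8930 and C000:6730, matching the entries discussed above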
461,775 | I would like to create a directory name for each month. I know, after some playing with the shell, that: date -d 1/01 +%b # Gives Jandate -d 2/01 +%b # Gives Feb.date -d 12/01 +%b # Gives Dec So I have used brace expansion, echo {1..12}/01 and tried to xargs it: echo {1..12}/01 | xargs -n 1 -I {} date -d {} +%b But it fails miserably :/ (afterwards I'd like to apply mkdir ). How can I do this? | With -I , xargs gets one argument per line as opposed to the default of one argument per (blank or newline delimited, possibly quoted) word without -I (and implies -n ). So in your example date is called only once with {} expanded to the whole output of echo (which is on one line), minus the trailing newline. Here you can do (note that that -d is a GNU extension): printf '%s\n' {1..12}/01 | xargs -I {} date -d {} +%b | xargs mkdir -- (note that it won't work correctly in locales where month name abbreviations contain spaces or quote characters; with GNU xargs , you can work around that by using xargs -d '\n' mkdir -- ) Now, to get the list of month abbreviations in your locale, querying the locale directly would make more sense: (IFS=';'; set -o noglob; mkdir -- $(locale abmon)) (see also locale -k LC_TIME to see all the locale data in the LC_TIME category). Or natively in zsh : zmodload zsh/langinfomkdir -- ${(v)langinfo[(I)ABMON_*]} At least on GNU systems, in some locales, month abbreviations are padded to fixed width with spaces: $ LC_ALL=et_EE.UTF-8 locale title abmonEstonian locale for Estoniajaan ;veebr;märts;apr ;mai ;juuni;juuli;aug ;sept ;okt ;nov ;dets$ LC_ALL=zh_TW.UTF-8 locale title abmonChinese locale for Taiwan R.O.C. 1月; 2月; 3月; 4月; 5月; 6月; 7月; 8月; 9月;10月;11月;12月 You may want to remove that padding. The leading spaces would be removed by xargs -I , but not the trailing ones. With zsh : zmodload zsh/langinfoset -o extendedglobmkdir -- ${${${(v)langinfo[(I)ABMON*]}##[[:space:]]#}%%[[:space:]]#} | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/461775",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/114640/"
]
} |
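For comparison with the answer above, a plain loop with GNU date (the same date -d form as in the question) sidesteps the xargs quoting issues entirely; note that any whitespace padding in the current locale's abbreviations would still end up inside the directory names, so the locale-based variants above remain the more robust option:
for m in {1..12}; do
  d=$(date -d "$m/01" +%b) && mkdir -p -- "$d"
done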
461,783 | In my zsh script, I would like to find out, whether my working directory starts with /cygdrive/?/... or matches exactly /cygdrive/? (where the drive letter (?) can be any letter different from the letter c), and if it does, retrieve into two variables the /cygdrive/? part and the remaining /... . Example: If my working directory is /cygdrive/r/abc/xyz , I would like to have the variable head set to /cygdrive/r and the variable tail set to /abc/xyz . If PWD is just, say, /cygdrive/r , the variable tail should be empty. I prefer a solution using zsh internal commands only, i.e. without the need of spawning a process. I came up with the following solution, which does the job, but I don't like it: if [[ $PWD == /cygdrive/[abd-z]* ]]then local head=${PWD:0:11} local tail=${PWD#/cygdrive/?} ....fi In particular, I don't like the calculation of head, with the hardcoded value of 11, and I'm wondering whether there might perhaps be a completely different approach, which would be more elegant. UPDATE: I am aware that my if condition would also be true, if PWD is, for instance, /cygdrive/foo , but for my application, I don't consider this a problem. Of course if you can suggest a better alternative for writing the condition, which exactly does what I want, I appreciate any comment. | if [[ $PWD =~ '^(/cygdrive/[abd-z])(.*)' ]]; then head=$match[1] tail=$match[2]fi Same with globs: set -o extendedglobif [[ $PWD = (#b)(/cygdrive/[abd-z])(*) ]]; then head=$match[1] tail=$match[2]fi Globs also have the advantage of using zsh 's own pattern matching where d-z only matches on defghijklmnopqrstuvwxyz , while =~ would use your system's regexps where [d-z] may very well match on many more characters (like é or even sequences of characters like dzs in Hungarian locales). Doing a set -o rematchpcre would cause =~ to use PCRE which are more reasonable in that regard. To not match on /cygdrive/foo : if [[ $PWD =~ '^(/cygdrive/[abd-z])(/.*)?$' ]]; then head=$match[1] tail=$match[2]fi With globs: set -o extendedglobif [[ $PWD = (#b)(/cygdrive/[abd-z])(/*|) ]]; then head=$match[1] tail=$match[2]fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/461783",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95236/"
]
} |
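For readers who need the same split in bash rather than zsh, the regex variant above translates to BASH_REMATCH; the same caveat about what [d-z] matches in your locale applies:
if [[ $PWD =~ ^(/cygdrive/[abd-z])(/.*)?$ ]]; then
  head=${BASH_REMATCH[1]}
  tail=${BASH_REMATCH[2]}
fi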
461,802 | how to run Linux's factor in parallel, i.e. utilize all CPU cores? I have tried to run factor <prime number> but unfortunately only one CPU core is being utilized. | if [[ $PWD =~ '^(/cygdrive/[abd-z])(.*)' ]]; then head=$match[1] tail=$match[2]fi Same with globs: set -o extendedglobif [[ $PWD = (#b)(/cygdrive/[abd-z])(*) ]]; then head=$match[1] tail=$match[2]fi Globs also have the advantage of using zsh 's own pattern matching where d-z only matches on defghijklmnopqrstuvwxyz , while =~ would use your system's regexps where [d-z] may very well match on many more characters (like é or even sequences of characters like dzs in Hungarian locales). Doing a set -o rematchpcre would cause =~ to use PCRE which are more reasonable in that regard. To not match on /cygdrive/foo : if [[ $PWD =~ '^(/cygdrive/[abd-z])(/.*)?$' ]]; then head=$match[1] tail=$match[2]fi With globs: set -o extendedglobif [[ $PWD = (#b)(/cygdrive/[abd-z])(/*|) ]]; then head=$match[1] tail=$match[2]fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/461802",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/304799/"
]
} |
461,804 | When I am assigning .* to a variable , it is assigning all hidden files. [root@s1 ~]# a=".*"[root@s1 ~]# echo $a. .. .bash_history .bash_logout .bash_profile .bashrc .cache .config .cshrc .history .lesshst .mozilla .pki .rnd .ssh .tcshrc .viminfo .virsh .xauth6SHzeY .xauthhAVYfm .xauthI6Cte3 .xauthk7ea35 .xauthlXtiZ9 .Xauthority .xauthQm7mJ8 .xauthTpWbxP .xauthY9KsdC I expect below output : .* How to escape it, thanks. I tried below and it gives output [root@s1 ~]# a='".*"' [root@s1 ~]# echo $a ".*" ".*" but not .* | if [[ $PWD =~ '^(/cygdrive/[abd-z])(.*)' ]]; then head=$match[1] tail=$match[2]fi Same with globs: set -o extendedglobif [[ $PWD = (#b)(/cygdrive/[abd-z])(*) ]]; then head=$match[1] tail=$match[2]fi Globs also have the advantage of using zsh 's own pattern matching where d-z only matches on defghijklmnopqrstuvwxyz , while =~ would use your system's regexps where [d-z] may very well match on many more characters (like é or even sequences of characters like dzs in Hungarian locales). Doing a set -o rematchpcre would cause =~ to use PCRE which are more reasonable in that regard. To not match on /cygdrive/foo : if [[ $PWD =~ '^(/cygdrive/[abd-z])(/.*)?$' ]]; then head=$match[1] tail=$match[2]fi With globs: set -o extendedglobif [[ $PWD = (#b)(/cygdrive/[abd-z])(/*|) ]]; then head=$match[1] tail=$match[2]fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/461804",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/283762/"
]
} |
461,829 | How are services installed by the package manager (in /usr/lib/systemd/ ) supposed to be used in Debian (9.5)? I can see loads of services installed in /usr/lib/systemd/user , but these are not available via the normal systemctl status my-service command. Am I supposed to manually copy the services I want to use into /etc/systemd/user/ ? | Systemd allows you to have a user-specific services for each user in addition to the system-wide services. Those can be managed by regular systemctl command ( stop , start , status , edit , enable , ...) if you add the --user flag. If you have for example /usr/lib/systemd/user/syncthing.service , the system-wide service manager doesn't know about it, but the one for users does: ↪ ls /usr/lib/systemd/user/syncthing.service Permissions Size User Date Modified Name.rw-r--r-- 285 root 13 Jun 20:40 /usr/lib/systemd/user/syncthing.service↪ systemctl status syncthing Unit syncthing.service could not be found.↪ systemctl --user status syncthing ● syncthing.service - Syncthing - Open Source Continuous File Synchronization Loaded: loaded (/usr/lib/systemd/user/syncthing.service; enabled; vendor preset: enabled) Active: active (running) since Tue 2018-07-24 17:10:22 CEST; 2 weeks 3 days ago Docs: man:syncthing(1) Main PID: 815 (syncthing) CGroup: /user.slice/user-1000.slice/[email protected]/syncthing.service └─815 /usr/bin/syncthing -no-browser -no-restart -logflags=0 systemctl status --user can be used to list all currently active units for the current user. It's also possible for users to define their own units if they are unable to write to /usr/lib/systemd/user or for the administrator to define user units local to the system, by adding them to one of the directories listed for this purpose in systemd.unit(5) : ~/.config/systemd/user/ /etc/systemd/user/ $XDG_RUNTIME_DIR/systemd/user/ ~/.local/share/systemd/user/ … | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/461829",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31513/"
]
} |
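Day-to-day management of such units looks exactly like the system-wide case, only with --user; for instance, reusing the syncthing unit from the answer above:
systemctl --user enable --now syncthing.service   # enable and start in one step
systemctl --user list-units --type=service        # everything the user manager is running
systemctl --user daemon-reload                    # after editing units under ~/.config/systemd/user/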
461,850 | Recently I made a bash executable file with permissions 722 as I am almost perpetually root. The contents of the file are as follows: #!/home/nolan/Documents/test/listFiles[ $# -lt 1 ] && dirFocus = "" || dirFocus = $1dirSize=$(ls -a $dirFocus | wc -w)for ((a = 1; a <= $dirSize; a++)) ; do i = 1 for ITEM in $(ls -a $dirFocus); do declare -i i declare -i a if [ $a -eq $i ]; then echo "$a : $ITEM" fi i = $[ $i + 1 ] donedone When run in terminal using: root @ /home/nolan/Documents/test: bash listFiles1 : .2 : ..3 : apple4 : dirCheck5 : ifTest6 : ifTest.txt7 : listFiles8 : myscript9 : nolan.txt10 : pointer_to_apple11 : scriptx.sh12 : Stuff13 : weekend14 : weekend215 : weekend3 I receive this outcome as expected.However the second I do: root @ /home/nolan/Documents/test: ./listFilesbash: ./listFiles: /home/nolan/Documents/test/listFiles: bad interpreter: Toomany levels of symbolic links Is the error I get.What exactly is going wrong? I've checked other forums but they don't seem to pertain to my situation. | The first line of a script is the "shebang" line. It tells kernel (program loader) what program to run to interpret the script. Your script tries to run itself to interpret the script, which in turn calls itself to interpret the interpreter, and so on ad infinity. When you run the script with bash filename , the kernel isn't invoked and bash is used to run the script which works. Put #! /bin/bash to the first line and all will be fine. BTW, create a user with limited privileges to experiment with the system. As root , you can easily destroy everything beyond repair. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/461850",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/304727/"
]
} |
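Inspecting and repairing the interpreter line of the script from the question can also be done in place (GNU sed assumed for -i):
head -n 1 listFiles                        # show the current shebang
sed -i '1s|^#!.*|#!/bin/bash|' listFiles   # replace it with a normal interpreter
head -n 1 listFiles                        # confirm the change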
462,017 | Look at the following: $ echo .[].aliases[]..$ echo .[].foo[]..$ echo .[].[]..$ echo .[].xyz[]..$ echo .xyz[].xyz[].xyz[].xyz[]$ echo .xyz[].[].xyz[].[] Apparently this seems to be globbing something, but I don’t understand how the result comes together. From my understanding [] is an empty character class. It would be intuitive if it matched only the empty string; in this case, I’d expect bash to reproduce in its entirety since nothing matches it in this directory, but also match things like ..aliases (in the first example), or nothing at all; in this case, I’d expect bash to reproduce the string in total, too. This is with GNU bash, version 4.4.23(1)-release. | The [ starts a set. A set is terminated by ] . But there is a way to have ] as part of the set, and that is to specify the ] as the first character. As an empty set doesn't make any sense, this is not ambiguous. So your examples are basically all a dot followed by a set that contains a dot, therefore it matches two dots. The later examples don't find any files and are therefore returned verbatim. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/462017",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20834/"
]
} |
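The special role of ] as the first character of a set, described in the answer above, is easy to see in a scratch directory:
cd "$(mktemp -d)"
touch 'x]y' xay
echo x[]]y     # the ] right after [ is literal, so only x]y matches
echo x[]a]y    # the set contains ] and a, so both files match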
462,044 | I recently switched from win10 to Ubuntu because I don't use it for gaming anymore.When I installed it, I only had access to my desktop(gdm3) but a resolution of 800x600 which looked awful. Now that I installed all neccessary drivers, I get my full 1920x1080 resolution but I'm unable to access the GUI. (I can't get out of tty1.) SOLVED: Use Integrated Graphics (intel) as primary graphics: sudo prime-select intel This is what it says: >sudo startxX.Org X Server 1.19.6Release Date: 2017-12-20X Protocol Version 11, Revision 0Build Operating System: Linux 4.4.0-119-generic x86_64 UbuntuCurrent Operating System: Linux <machine name> 4.15.0-30-generic #32-Ubuntu SMP Thu Jul 26 17:42:43 UTC 2018 x86_64Kernel command line: BOOT_IMAGE=/boot/vmlinuz-4.15.0-30-generic root=UUID=<uuid> ro quiet splash pci=noaer 3Build Date: 13 April 2018 08:07:36PMxorg-server 2:1.19.6-1ubuntu4 (For technical support please see http://www.ubuntu.com/support)Current version of pixman: 0.34.0 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (II) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.(==) Log file: /var/log/Xorg.2.log", Time: Sat Aug 11 20:56:06 2018(==) Using config file: "/etc/X11/xorg.conf"(==) Using config directory: /etc/X11/xorg.conf.d"(==) Using system config directory "/usr/share/X11/xorg.conf.d" And that is where it freezes and doesn't generate any output anymore. I let it like that for 2 hours with no change whatsoever. I'm using a MSI notebook with a gtx960M and Ubuntu 18.04.Further information on request. Can someone tell me how to fix this please? Because I really need this notebook for my work... Driver output: >ubuntu-drivers devices== sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0 ==modalias : pci:v000010DEd0000139Bsv00001462sd00001150BbcO3sc02i00vendor : NVIDIA Corporationmodel : GM107M [GeForce GTX 960M]driver : nvidia-driver-390 - third-party freedriver : nvidia-driver-396 - third-party free recommendeddriver : xserver-xorg-video-nouveau - distro free builtin Freeze#2 [ OK ] Started Daily apt download activities.[ OK ] Listening on UUID daemon activation socket.[ OK ] Started Discard unused blocks once a week. Starting Socket activation for snappy daemon.[ OK ] Started CUPS Scheduler.[ OK ] Listening on ACPID Listen Socket.[ OK ] Started Message of the Day.[ OK ] Started Daily apt upgrade and clean activities[ OK ] Started ACPI Events Check.[ OK ] Reached target Paths.[ OK ] Started Trigger anacron every hour.[ OK ] Listening on CUPS Scheduler.[ OK ] Reached target Timers.[ OK ] Listening on D-Bus System Message Bus Socket.[ OK ] Listening on Socket activation for snappy daemon.[ OK ] Reached target Sockets.[ OK ] Reached target Basic System. Starting System Logging Service... Starting LSB: Record successful boot for GRUB... Starting Thermal Daemon Service... Starting LSB: Speech Dispatcher...[ OK ] Started D-Bus System Message Bus.[ OK ] Reached target Login Prompts. Starting Accounts Service... Starting Login Service... Starting Avahi mDNS/DNS-SD Stack...[ OK ] Started Run anacron Jobs.[ OK ] Started CUPS Scheduler. Starting rng-tools.service... Starting WPA supplicant... Starting LSB: automatic crash report generation... Starting Network Manager... Starting Restore /etc/resolv.conf if the system crashed before the ppp link was shut down...[ OK ] Started irqbalance daemon. 
Starting Dispatcher daemon for systemd-networkd... Starting Detect the available GPUs and deal with any system changes...[ OK ] Started Set the CPU Frequency Scaling governor. Starting Bluetooth service...[ OK ] Started Regular background program processing daemon. Starting Disk Manager...[ OK ] Started ACPI event daemon. Starting Save/Restore Sound Card State... Starting Modem Manager... Starting Snappy daemon...[ OK ] Started System Logging Service.[ OK ] Started Restore /etc/resolv.conf if the system crashed before the ppp link was shut down.[ OK ] Started Save/Restore Sound Card State.[ OK ] Started Thermal Daemon Service.[ OK ] Started Login Service.[ OK ] Started LSB: Speech Dispatcher.[ OK ] Started LSB: automatic crash report generation.[ OK ] Started rng-tools.service. Starting Authorization Manager...[ OK ] Started Detect the available GPUs and deal with any system changes.[ OK ] Started Bluetooth service.[ OK ] Reached target Bluetooth.[ OK ] Started Avahi mDNS/DNS-SD Stack.[ OK ] Started Make remote CUPS printers available locally. Starting Hostname service...[ OK ] Started LSB: Record successful boot for GRUB.[ OK ] Started Authorization Manager.[ OK ] Started Raise network interfaces.[ OK ] Started Accounts service.[ OK ] Started Modem Manager.[ OK ] Started Hostname Service.[ OK ] Started Disk Manager. nvidia-smi / uname -r >nvidia-smiSat Aug 11 23:01:59 2018+-----------------------------------------------------------------------------+| NVIDIA-SMI 396.51 Driver Version: 396.51 ||-----------------------------------------------------------------------------+| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC || Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. ||===============================+======================+======================|| 0 GeForce GTX 960M Off | 00000000:01:00.0 Off | N/A || N/A 49C P8 N/A / N/A | 13MiB / 2004MiB | 0% Default |+-----------------------------------------------------------------------------++-----------------------------------------------------------------------------+| Processes: GPU Memory || GPU PID Type Process name Usage ||=============================================================================|| 0 1164 G /usr/lib/xorg/Xorg 7MiB || 0 2023 G /usr/bin/gnome-shell 5MiB |+-----------------------------------------------------------------------------+>uname -r4.15.0-30-generic lspci >lspci -knn | grep VGA -A300:02.0 VGA compatible controller [0300]: Intel Corporation HD Graphics 530 [8086:191b] (rev 06) Subsystem: Micro-Star International Co., Ltd. [MSI] HD Graphics 530 [1462:115b] Kernel modules: i91500:14.0 USB controller [0c03]: Intel Corporation Sunrise Point-H USB 3.0 xHCI Controller [8086:a12f] (rev 31) Thanks in advance, Lydia | Remove nvidia drivers first: sudo apt purge nvidia-* Next allow Ubuntu to install recommended 396 driver: sudo ubuntu-drivers autoinstall Reboot your laptop: sudo reboot Since the Ubuntu 18.04 has been used, enable graphical environment by default: sudo systemctl set-default graphical.target If you want to start Gnome Desktop from a current session without GUI (multi-user environment), just execute: sudo systemctl start gdm3.service Update Since hybrid graphics is in use, install nvidia-prime to switch between intel and nvidia graphics (it can be installed already): sudo apt install nvidia-prime Check which graphics card is being used: prime-select query You can see intel or nvidia as output of the command. 
If you see intel , switch to nvidia : sudo prime-select nvidia Reboot and check if graphics work normally. If prime-select query returns nvidia , try to switch to intel : sudo prime-select intel Reboot and see if everything is OK. If nothing helps, please post the output of the following command in the question text: sudo lshw -c display | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/462044",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/304976/"
]
} |
462,065 | Okay, first thing first, I'm new to Linux and I'm using Linux Mint. I learned that when I want to add more directories to my PATH (specifically my home directory) I need a Bash command that looks like PATH=$PATH:~ , right? The question is why should I need to put $PATH there? It stands for system path, right? Will it work if I only type PATH=~ ? I mean I only want to add my home directory to the existing PATH directories. | In Unix certain environment variables, such as $PATH are special in the sense that they're a list of items, not just a single item. With these types of lists, a colon ( : ) separates the items in the list. For $PATH you can see this if you merely print it: $ printenv PATH/sbin:/bin:/usr/sbin:/usr/bin If you want to add additional items to this, you have to include the previous list plus the new item. That's effectively what you're doing when you say PATH=$PATH:<new item> . $ PATH=$PATH:/path/to/some/dir$ printenv PATH/sbin:/bin:/usr/sbin:/usr/bin:/path/to/some/dir Keep in mind that these changes are local only to the shell where you ran them. If you want your changes to $PATH to persist between reboots, or show up in other instances of your shell, you need to add them to a configuration file so that they'll get setup as part of your defaults. Typically for user's this is what you'd do to these files ~/.bashrc & ~/.bash_profile : export PATH=$PATH:$HOME/bin:$HOME/somedir Adding a line such as this will modify your $PATH . Alternative to $PATH usage If you'd simply like to be able to run scripts and executables that are not in your $PATH that can be easily solved by using this method instead of adding to $PATH . Here's a scenario, lets say we have an executable such as this: $ ls -l helloexec.bash-rwxr-xr-x 1 user1 user1 31 Aug 12 07:45 helloexec.bash But it's not on the $PATH so we cannot run it: $ helloexec.bashbash: helloexec.bash: command not found... So you're thinking, oh, I have to add it to my $PATH to be able to run it. But instead, you can run any executable that's in the current directory like so: $ ./helloexec.bashhello bash In Unix type operating systems, it's imperative that you internalize this method of interacting with your scripts and executables, rather than insist that they all be on the $PATH . Dangers of adding to $PATH In your examples you show that you'd like to add ~ to your $PATH . I've seen many users do this over the years, or want to, thinking that it'll be a huge convenience & time saver to just put this directory, directly on their $PATH . This is typically not a good approach to things. Rather, you should think long and hard about where you want to store executables in Linux/Unix, and only add directories that are critically necessary to have such a prominent place such as being on $PATH . Most will typically add the system directories, and then add a $HOME/bin to the $PATH and leave it at that. Putting more things on the $PATH can lead to unintended consequences such as commands not working as expected or even worse, creating a situation that allows a system to be more easily compromised. For example, say you downloaded some script from a website, and hadn't realized that your web browser was changed to save files to $HOME . This downloaded file, is now in a position that it can be invoked by a would be attacker. 
Alternatively if you have the order of your $PATH in such a state that ~ comes before other directories, such as something like this: $ printenv PATH/home/vagrant:/sbin:/bin:/usr/sbin:/usr/bin And we accidentally downloaded an executable such as this: $ cat ps#!/bin/bash/bin/ps -eaf | grep -v "spyware" Now when someone runs ps , they're using this version and not the intended /bin/ps . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/462065",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/304995/"
]
} |
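Two quick ways to audit what a name will actually resolve to once you start editing $PATH, which makes shadowing problems like the fake ps in the answer above easy to spot:
type -a ps                   # every 'ps' found along $PATH, in search order
command -v ps                # the single one that would actually run
echo "$PATH" | tr ':' '\n'   # the PATH entries themselves, one per line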
462,156 | How do you find the line number in Bash where an error occurred? Example I create the following simple script with line numbers to explain what we need. The script will copy files from cp $file1 $file2cp $file3 $file4 When one of the cp commands fail then the function will exit with exit 1 . We want to add the ability to the function to also print the error with the line number (for example, 8 or 12). Is this possible? Sample script 1 #!/bin/bash234 function in_case_fail {5 [[ $1 -ne 0 ]] && echo "fail on $2" && exit 16 }78 cp $file1 $file29 in_case_fail $? "cp $file1 $file2"101112 cp $file3 $file413 in_case_fail $? "cp $file3 $file4"14 | Rather than use your function, I'd use this method instead: $ cat yael.bash#!/bin/bashset -eE -o functracefile1=f1file2=f2file3=f3file4=f4failure() { local lineno=$1 local msg=$2 echo "Failed at $lineno: $msg"}trap 'failure ${LINENO} "$BASH_COMMAND"' ERRcp -- "$file1" "$file2"cp -- "$file3" "$file4" This works by trapping on ERR and then calling the failure() function with the current line number + bash command that was executed. Example Here I've not taken any care to create the files, f1 , f2 , f3 , or f4 . When I run the above script: $ ./yael.bashcp: cannot stat ‘f1’: No such file or directoryFailed at 17: cp -- "$file1" "$file2" It fails, reporting the line number plus command that was executed. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/462156",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
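A stripped-down version of the mechanism from the answer above is handy for seeing how $LINENO and $BASH_COMMAND behave before wiring the trap into a real script:
#!/bin/bash
trap 'echo "failed at line $LINENO: $BASH_COMMAND" >&2' ERR
false                 # fires the trap
ls /no/such/path      # fires it again, reporting this command and its line
echo "still running"  # without set -e the script keeps going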
462,159 | I've seen numerous questions, answers, and guides relating to use of dd, cat, and clonezilla to facilitate partition and device cloning. Rather than continue those discussions I'm hoping to give very targeted questions to dd behavior within this post. I'm using Ubuntu 18.04 LTS live, booted from USB. I have a 500gb HDD with Windows7 (/dev/sda). I've resized the partitions such that slightly more than 420gb is unallocated. I aim to backup the boot table and partitions to a 120gb SSD (/dev/sdb). The SSD has no partitions and shows 110gb unallocated. sudo dd if=/dev/sda of=/dev/sdb hits failure and conveys 'No space left on device'. Spot checking partitions using gparted, I see /dev/sdb contains the same partitions, labels, size, and unused. The only noticeable difference is unallocated space. I am able to boot Windows from the SSD but am left with a few questions related to dd behavior. did dd attempt to copy device content which was not allocated? does dd have an order of operations? (something like partition 1->99, then misc drive blocks) by chance have I just been lucky so far in my immediate use of this Windows drive? am I being overly paranoid in thinking the drive is missing content? For what it's worth, I do plan to retain the HDD for the foreseeable future. | dd does not know anything about partitions or any other structure on the disk. It doesn't recognize "allocated" or "unallocated" parts of the disk. From dd 's point of view, a device given as parameter consists of a series of blocks. Thus, if you run dd if=/dev/sda of=/dev/sdb , dd will copy blocks starting from the beginning of /dev/sda to the correspondingly numbered blocks on /dev/sdb , until it has read all blocks from /dev/sda or until it hits an error. In your example, because /dev/sdb was smaller in size than /dev/sda , dd was terminated with the error message "No space left on device" (on /dev/sdb ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/462159",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/305071/"
]
} |
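Before any raw dd clone it is worth comparing the device sizes yourself, since, as the answer above explains, dd will happily start copying and only complain once it runs out of target blocks:
lsblk -b -o NAME,SIZE,TYPE /dev/sda /dev/sdb   # sizes in bytes; the target must be at least as large as the source
sudo blockdev --getsize64 /dev/sdb             # same figure for a single device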
462,184 | Say I have these Bash commands: $ sha="$(git log --all --format=format:%H -n 1 -- .npp.json)"$ git branch --contains "$sha" | tr -d " *" right now that might log something like: masterdevremotes/origin/foo my question is - is there some Bash utility that can concatenate all the output for me, so that I get something like this: master:dev:remotes/origin/foo the utility might look like: $ git branch --contains "$sha" | tr -d " *" | concat ":" of course the final value would need to be echoed, so it might look like: $ result="$(git branch --contains "$sha" | tr -d " *" | concat ":")"$ echo "$result" | If you're asking how to change masterdevremotes/origin/foo to master:dev:remotes/origin/foo then tr '\n' : would be the classical UNIXy way to do it. As for trimming a possible final newline, you could save the output to a variable first via $() and the $() will remove it, or you can do the substitution, save the result in a variable and then do variable=${variable%:} to trim it the final colon. (See https://stackoverflow.com/questions/1654021/how-can-i-delete-a-newline-if-it-is-the-last-character-in-a-file for more options.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/462184",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/113238/"
]
} |
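As a small addition to the answer above, paste can do the joining itself and avoids the trailing delimiter that tr leaves behind, so no separate trim step is needed:
result=$(git branch --contains "$sha" | tr -d ' *' | paste -s -d :)
echo "$result"    # e.g. master:dev:remotes/origin/foo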
462,275 | I have noticed that after a package installation via apt-get in Debian the service in systemd is enabled by default. However, in other distributions, such as Arch Linux, the service in that package is disabled by default. My questions are: On what does this behavior depend? Is it some setting in the package manager or the package itself decides whether it is enabled or not? I mean on Debian it looks like systemctl enable docker.service was executed after installation. And on Arch-linux the docker.service is disabled. How can I change it? | As the systemd preset blurb states , this is a policy choice made by distributors: On Fedora all services stay off by default, so that installing a package will not cause a service to be enabled (with some exceptions). On Debian all services are immediately enabled by default, so that installing a package will cause its service(s) to be enabled right-away. In theory, systemd distributions use the preset system for deciding whether a service should be enabled after package installation, running systemctl preset rather than systemctl enable in package post-install maintenance scripts; and applying your local overrides to the distribution policy is as simple as creating your own higher priority presets in /etc/systemd/system-preset/ . (The Arch doco is rather misleading, here. The usual case is to create an individual local preset file that addresses specific services.) In practice, some systemd distributions do not use the preset system for this, and applying your local overrides to systemd is a matter of employing the distributions' own mechanisms, if they even actually have such. Further reading Raphaël Hertzog (2014-12-08). deb-systemd-helper does not respect systemd Preset files . Debian Bug #772555. " Enable installed units by default ". systemd . Arch wiki. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/462275",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/305195/"
]
} |
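Building on the answer above, a local override of the distribution policy is just a small preset file; lower-numbered files take precedence, and the unit name here is only an example:
printf 'disable docker.service\n' |
  sudo tee /etc/systemd/system-preset/00-local-disable-docker.preset
sudo systemctl preset docker.service    # re-apply the preset policy to that unit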
462,364 | A google search reveals this slashdot story yielding this github repository that hasn't had a commit since 2016 . There are 22,602 forks listed on github.com but these are going to be mostly (if not virtually all) simply development forks for torvalds/linux . I have read before that Linux has become quite crufty. It seems to me that, at least in terms of user experience, Linux has become much more polished than I remember 10+ years ago (obviously this is not an accurate assessment of the kernel; I am only now reading K&R and have never dipped into the kernel source except a cursory glance that yielded a "whoa, I can't understand a line of this", but I am aware of a great amount of development regarding linux-on-the-laptop features in the kernel, for example). Regardless, I know I've seen BSD people complaining about Linux cruft. Considering the neovim fork of vim based on vim's cruft, I would think similar efforts would be rewarding for the kernel. What prompts this question was this article on LWN discussing attempts to compile Linux with clang. I've read that the kernel uses many quirks/special features specific to gcc for optimization (though the linked article seems to downplay them compared to my memory), and I began to wonder if anyone had attempted to refactor/fork the kernel to make it more portable, or at least compilable outside of the gnu-environment. I also understand that gcc iteself is crufty, and Linus himself has criticized it. I know I am not alone in my personal distaste for RMS and GNU and interest in Linux devoid of GNU; I am aware of Alpine Linux which does without gnu tools, but the kernel is still compiled with gcc, isn't it? There are many references to alternative toolchains and userland software, but I am specifically wondering about the kernel and whether there are forks that eliminate gcc/gnu dependencies--consider this a subsidiary question of the title--it seems to me it would be a waste to ask it separately. | I am specifically wondering about the kernel and whether there are forks that eliminate gcc/gnu dependencies-- No-one is going to finish the work syncing up clang & Linux and then maintain that as a long-term fork . Especially when there's such interest and willingness from mainline. Unless it was a small part of a big visible project, which you would have found. (As you did...). consider this a subsidiary question of the title--it seems to me it would be a waste to ask it separately. Android , as mentioned by the first answer. Some parts get merged so maybe it's not as bad as it was. I'm not really up to date on this. Mainline certainly gets work for some ARM CPU related stuff e.g. BIG.little. And Android repeatedly rebase on mainline versions; Google is not going to fall too far behind. But it's a long running fork. It doesn't run on "upstream first" rules. They're carrying a lot of hardware support. IMO "Android" and "Google" are good pointers to the levels of resource you need for something that justifies being called a fork of Linux. Android has also been a problem in that devices running it ship with kernels containing large amounts (often millions of lines) of out-of-tree code. -- https://lwn.net/Articles/738225/ Also RHEL kernels, which have terrifying names like 2.6.32-754 as of 2018. These are not just security updates; they will include support for some new hardware, while at the same time aiming to provide closer behaviour to the base kernel version e.g. 2.6.32. 
I believe fork is an appropriate word for the resources RH require to maintain this. Running well on a wide range of recent hardware is costly, and hence valuable. That's mostly what the Linux kernel project is, and I'd say it's the single most important factor in understanding these two forks. You might compare the code size of vim and Linux on openhub.net and think, oh, Linux is "only" about 20x the size. However the difference in number of commits is significantly larger; the rate of churn is quite ferocious. Also, it's not just that kernel code is harder to get right... though it is... but also it's hardware support. When your apparently harmless refactoring ends up breaking a few devices, you need to have access to those specific devices to debug and fix the problem. Hardware support makes me think of https://en.wikipedia.org/wiki/Embeddable_Linux_Kernel_Subset :-P. In this world of server virtualization, you might also think there would be fork(s) optimized for it, as they would not need to keep up with so much different hardware. I can't think of any good examples though. You could look up "unikernels"; there seem to be several not based on Linux. linux-rt / PREEMPT_RT also comes to mind as an out of tree patch set. This is a patch set which is rebased on successive mainline versions. 200 KB (compressed) of specialist code is a respectable patch set. Some large pieces of it have been merged in, at at least one point. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/462364",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/288980/"
]
} |
462,416 | After installing Linux Mint 19 I wanted to check how vsinc affects fps in Linux, so I typed this command: CLUTTER_SHOW_FPS=1 cinnamon --replace After some time I accidentally pressed Ctrl + Z and paused that process. Immediately my Bash shell and everything except the mouse cursor froze, so I can't type the fg command. Is there a way to unpause that process without rebooting and should I use Ctrl + C next time to properly exit that process? | Switch to a new TTY. See How to switch between tty and xorg session? for tips on how to switch TTYs. Determine the PID of the cinnamon process: ps -e | grep cinnamon Send this process the SIGCONT signal with kill -SIGCONT [pid] | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/462416",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/305327/"
]
} |
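Once you are on the other TTY, the two steps from the answer above can also be collapsed into a single command:
pgrep -a cinnamon      # confirm the stopped process and its PID
pkill -CONT cinnamon   # send SIGCONT to every matching process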
462,463 | I want to edit my server files as a user in the apache group So, I need to chmod all files to 0775 as user apache . But, I am unable to su apache . I don't know the password for it, and entering empty password does not work. | To switch to apache user su -s /bin/bash apache | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/462463",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142579/"
]
} |
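If your own account has sudo rights, the same shell can be reached without su and without apache having a password:
sudo -u apache /bin/bash   # a shell running as the apache user
getent passwd apache       # the account's login shell is usually nologin, which is why su needs -s above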
462,509 | I'm on a remote server which is a Ubuntu 16.04.1 LTS and I need to list only directories in this folder called "NewsData"I found that ls -d */ is a good command to list folders, however it works in some folders and not in others. Below is the sample output (venv_p3.5) anjali@momo:/scratche/home/anjali$ ls -d */archive/ DownloadImages/ fixed/ getNews/ html/ log/ MonumentData/ NewsData/ Pytorch-finetuning/ src/ TestData/ TrainData/ TrainData2/ venv_p3.5/ VGG16FeatureExtraction/(venv_p3.5) anjali@momo:/scratche/home/anjali$ cd NewsData/(venv_p3.5) anjali@momo:/scratche/home/anjali/NewsData$ ls -d */ls: invalid option -- '/'Try 'ls --help' for more information.(venv_p3.5) anjali@momo:/scratche/home/anjali/NewsData$ ls - A B C cleanData.py D E F G getList.py H I J K L M N O P Q R S sequentialNumbering.sh T U V W Y Z Why does this happen? how can I fix this? | The directory contains a subdirectory named - , so that expansion of */ by the shell includes -/ , which is being misinterpreted as a command option. You can avoid this by marking the end of options explicitly using -- i.e. ls -d -- */ or by prepending the glob with a path ls -d ./*/ | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/462509",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/305416/"
]
} |
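The failure from the question and both fixes in the answer above are easy to reproduce in a throwaway directory:
cd "$(mktemp -d)"
mkdir -- - A B
ls -d */      # the expanded "-/" is taken as an option: invalid option -- '/'
ls -d -- */   # fine: -- marks the end of options
ls -d ./*/    # fine: every expanded name now starts with ./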
462,515 | I would like to check if the first command line argument ( $1 ) has an minimum amount of 10 characters and if it is empty. The script is called as: ./myscript.sh 2018-08-14 I tried this but it doesn't work timestamp="$1"# check if command line argument is empty or not presentif [ "$1" == "" ] || [ $# -gt 1 ]; then echo "Parameter 1 is empty" exit 0elif [! "${#timestamp}" -gt 10 ]; then echo "Please enter at least a valid date" echo "Example: 2018-08-14" exit 0else echo "THIS IS THE VALID BLOCK"fi | Well, if [ "$1" == "" ] || [ $# -gt 1 ]; then echo "Parameter 1 is empty" First, use = instead of == . The former is standard, the latter a bashism (though I think it's from ksh, too). Second, the logic here isn't right: if $# is greater than one, then parameter 1 probably isn't empty (it might be set to the empty string, though). Perhaps you meant "$#" -lt 1 , though that would also imply that "$1" = "" . It should be enough to test [ "$1" = "" ] , or [ "$#" -lt 1 ] . elif [! "${#timestamp}" -gt 10 ]; then Here, the shell would try to run a command called [! (literally). You need a space in between, so [ ! "${#timestamp}" -gt 10 ] . But that's the same as [ "${#timestamp}" -le 10 ] , which would also catch strings of exactly 10 characters, like 2018-08-14 . So maybe you want [ "${#timestamp}" -ne 10 ] . ( != instead of -ne would also work, even though it's a string comparison.) if ... exit 0 It's customary to return with a non-zero exit code in case of an error, so use exit 1 in the error branches. You could also use case or [[ .. ]] to pattern match the argument against the expected format, e.g.: case "$1" in "") echo date is empty exit 1;; [0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]) echo date is ok;; *) echo "date is _not_ ok" exit 1;;esac That would also reject arguments like abcdefghij , even though it's 10 characters long. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/462515",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/305422/"
]
} |
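A minimal standalone sketch of the case-based validation suggested in the answer, assuming the argument should always be a date in YYYY-MM-DD form:

    #!/bin/sh
    case $1 in
        "") echo "Parameter 1 is empty" >&2; exit 1 ;;
        [0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]) echo "date is ok" ;;
        *) echo "Please enter a valid date, e.g. 2018-08-14" >&2; exit 1 ;;
    esac

Unlike a plain length check, this also rejects ten-character strings that are not dates at all.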
462,589 | I have some files here that I copied over to this linux box using the python module pysftp: [jm@spartan tf]$ ls -latotal 0drwxrwxrwx. 3 jm jm 69 Aug 14 13:50 .drwxrwxrwt. 9 root root 238 Aug 14 13:49 ..-rwxrwxrwx. 1 jm jm 0 Aug 14 13:49 .\gitkeepdrwxrwxrwx. 2 jm jm 6 Aug 14 13:50 .\innerfile-rwxrwxrwx. 1 jm jm 0 Aug 14 13:50 .\innerfile\gitkeep[jm@spartan tf]$ rm .\gitkeeprm: cannot remove ‘.gitkeep’: No such file or directory They're hidden, so I'm still trying to figure out how to copy then over so they aren't hidden, but in the mean time I want to delete them, but I'm unable to. What's going on here? I'm on CentOS 7. | What you really need to do is to fix your script so it converts the windows paths to Unix paths. One relatively easy way to do that is to take path separators out of the equation: rather than providing full path names to copy, you recursively walk directories, creating target directories on the remote side and specifying only the filename rather than the full path. :) But until you get to that point, you need to protect the backslashes from the shell. You can do that by quoting using single quotes (backslashes are interpreted for some characters inside double quotes). Note specifically that the wildcard is outside of the quotes so the shell treats it as a wildcard rather than a literal * : :) rm -rv '.\'* Or you can do that by escaping the backslash (which would also work in double quotes, but the double quotes aren't needed here): rm -rv .\\* I would suggest that, before you remove stuff using a wildcard, you always first run ls with the same arguments, then use the up arrow to recall the last command, where you can change the ls to an rm . That way you can see the list of files before it's removed, preventing a potentially big mistake. :) I'm also a big fan of using -v with rm in cases like this. sauer@lightning:/tmp> ls -vr .\\*.\innerfile\gitkeep .\gitkeep.\innerfile:sauer@lightning:/tmp> rm -vr .\\*removed '.\gitkeep'removed directory '.\innerfile'removed '.\innerfile\gitkeep' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/462589",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/299389/"
]
} |
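To see the quoting difference from the answer in isolation, here is a throwaway demonstration (the file name is invented):

    touch '.\demofile'
    ls .\demofile       # error: the shell strips the backslash and looks for .demofile
    ls '.\demofile'     # works: single quotes preserve the backslash
    rm -v '.\demofile'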
462,597 | I'm writing a script to configure new debian installs while finding the best solution to confirming that a user exists in the script, the best way I found gives me wierd output. PROBLEM: id -u $var and id -u $varsome give the same output even though var has a value (the username) and varsome has no value [19:49:24][username] ~ ~↓↓$↓↓ var=`whoami`[19:53:38][username] ~ ~↓↓$↓↓ id -u $var1000[19:53:42][username] ~ ~↓↓$↓↓ echo $?0[19:53:49][username] ~ ~↓↓$↓↓ id -u $varsome1000[19:09:56][username] ~ ~↓↓$↓↓ echo $?0[20:10:18][username] ~ ~↓↓$↓↓ bash --versionGNU bash, version 4.4.12(1)-release (x86_64-pc-linux-gnu)Copyright (C) 2016 Free Software Foundation, Inc.Licens GPLv3+: GNU GPL version 3 eller senere <http://gnu.org/licenses/gpl.html>This is free software; you are free to change and redistribute it.There is NO WARRANTY, to the extent permitted by law.[20:27:08][username] ~ ~↓↓$↓↓ cat /etc/os-release PRETTY_NAME="Debian GNU/Linux 9 (stretch)"NAME="Debian GNU/Linux"VERSION_ID="9"VERSION="9 (stretch)"ID=debianHOME_URL="https://www.debian.org/"SUPPORT_URL="https://www.debian.org/support"BUG_REPORT_URL="https://bugs.debian.org/" I got the command from this question on stackoverflow: Check Whether a User Exists QUESTIONS: What is happening here? Is there a better way you can find to verify a user exist's in a script? Pointers on the script well appreciated | Since the variable expansion wasn't quoted, the empty word that results from $varsome being expanded is removed completely. Let's make a function that prints the number of arguments it gets and compare the quoted and non-quoted case: $ args() { echo "got $# arguments"; }$ var=""$ args $vargot 0 arguments $ args "$var"got 1 arguments The same happens in your case with id : id -u $var is exactly the same as just id -u when var is empty. Since id doesn't see a username, it by default prints the current user's information. If you quote "$var" , the result is different: $ var=""$ id -u "$var"id: ‘’: no such user With that fixed, you can use id to find if a user exists. (We don't need the outputs here though, so redirect them away.) check_user() { if id -u "$1" >/dev/null 2>&1; then echo "user '$1' exists" else echo "user '$1' does not exist" fi}check_user rootcheck_user asdfghjkl That would print user 'root' exists and user 'asdfghjkl' does not exist . This is a bit of the inverse of the usual problems that come from the unexpected word splitting of unquoted variables. But the basic issue is the same and fixed by what half the answers here say: always quote the variable expansions (unless you know you want the unquoted behaviour). See: When is double-quoting necessary? Why does my shell script choke on whitespace or other special characters? BashGuide on Word Splitting | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/462597",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36440/"
]
} |
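If getent is available (it usually is on glibc-based distributions), a slightly simpler existence test than parsing id output is possible; this is only a sketch, and "alice" is a placeholder name:

    user_exists() {
        getent passwd "$1" >/dev/null
    }
    user_exists alice && echo "user alice exists"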
462,602 | What is the default password for Kali on Windows 10 via Windows Subsystem for Linux? | Traditional Kali Searching for this via Google it appears to be toor for the root user. Notice it's just the name root backwards which is a typical hacker thing to do on compromised systems, as an insider's joke. If you happened to provide a password during the installation, then this would be the password to use here instead of the default toor . Kali on WSL NOTE: WSL = Windows Subsystem for Linux . In this particular flavor of Kali the root password appears to be randomly generated for the root user. To get into root you simply use sudo su instead. Reference: Thread: Unable to 'su root' in kali on WSL I'm sure the root password is randomly generated in WSL. It's irrelevant though, just type Code: sudo su What's WSL? So there are various flavors to Kali. You can download it and install it natively as a bare OS, you can also go into the Window's App Store and install it as an addon. For the past few weeks, we’ve been working with the Microsoft WSL team to get Kali Linux introduced into the Microsoft App Store as an official WSL distribution and today we’re happy to announce the availability of the “Kali Linux” Windows application. For Windows 10 users, this means you can simply enable WSL, search for Kali in the Windows store, and install it with a single click. This is especially exciting news for penetration testers and security professionals who have limited toolsets due to enterprise compliance standards. For an overview of what limitations there are in WSL see this U&L Q&A titled: Attempting to run a regular tunnel in Debian version 9.5 Linux . References Kali Linux Default Passwords Is there a default password of Kali Linux OS after first installation? I cannot log into Kali Linux after installing it. How can I log in? Install the Windows Subsystem for Linux | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/462602",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/305490/"
]
} |
462,663 | What purpose does the [ -n "$PS1" ] in [ -n "$PS1" ] && source ~/.bash_profile; serve? This line is included in a .bashrc of a dotfiles repo . | This is checking whether the shell is interactive or not. In this case, only sourcing the ~/.bash_profile file if the shell is interactive. See "Is this Shell Interactive?" in the bash manual, which cites that specific idiom. (It also recommends checking whether the shell is interactive by testing whether the $- special variable contains the i character, which is a better approach to this problem.) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/462663",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/305528/"
]
} |
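The $- based variant recommended by the manual, as mentioned in the answer, looks like this when placed at the top of a .bashrc (sketch only):

    case $- in
        *i*) ;;       # interactive: carry on
        *) return ;;  # non-interactive: stop sourcing here
    esac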
462,670 | How do I set the default profile that is used after each boot, in PulseAudio? When I boot, sound doesn't work. If I open the PulseAudio Volume Control app, and go to the Configuration pane and select "Analog Surround 4.0 Output" from the Profile drop-down menu, then sound works again. However this only lasts until the next reboot. How do I configure the system to use that profile in the future after reboots? | Add the following to /etc/pulse/default.pa : set-card-profile <cardindex> <profilename> How do we figure out what to use as cardindex and as profilename ? Here's one way. Configure the card so everything is working. The cardindex will usually be 0, but you can find it by running pacmd list-cards and looking at the line index: ... . To find the profilename , use pacmd list-cards | grep 'active profile' The name of the current profile should appear in the output. Remove the angle brackets (the < and > ). You can test your configuration by running pactl set-card-profile <cardindex> <profilename> from the command line to see if it sets the profile correctly, then add it to /etc/pulse/default.pa . Since the index name is dynamic (it can change your PCI device index if you boot with a USB audio device plugged in), you could use <symbolic-name> instead of <index> (if you run pacmd list-cards , the symbolic name is right below the index). Also, the command might fail if the device is missing when starting pulseaudio so it might worth to wrap the command with an .ifexists clause: .ifexists <symbolic-name>pactl set-card-profile <symbolic-name> <profilename>.endif | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/462670",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9812/"
]
} |
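Pulling the steps of this answer together, a filled-in version might look like the following; the card name and profile here are hypothetical and must be replaced with whatever your own pacmd list-cards output shows:

    pacmd list-cards | grep -E 'index:|name:|active profile'
    pactl set-card-profile alsa_card.pci-0000_00_1b.0 output:analog-surround-40
    # once that works, persist it by adding this line to /etc/pulse/default.pa:
    #   set-card-profile alsa_card.pci-0000_00_1b.0 output:analog-surround-40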
462,686 | Say I have 'a.txt'. I would like to copy the file and open the newly copied file using vi: cp a.txt b.txt; vi b.txt How can I combine the two commands into one? | You can use vi itself to do the copy, by opening a.txt then saving the contents to b.txt (effectively copying it) and then switching to b.txt. Putting it all together: vi -c 'w b.txt' -c 'e#' a.txt This is equivalent to running vi a.txt , followed by the :w b.txt command (inside vi ), which will save the contents to a file named b.txt. But vi will still be editing a.txt at this point, so you follow up with the :e# command, which means "edit alternative file" (or "edit last file"), and given vi has just touched b.txt, it will switch to editing that file. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/462686",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/210520/"
]
} |
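If you would rather keep it as two ordinary commands, a tiny shell function does the same job; this is just an alternative sketch, not what the answer above does:

    cpvi() {
        cp -- "$1" "$2" && vi "$2"
    }
    cpvi a.txt b.txt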
462,737 | Unlike htop, top is more difficult for me, and I would like to know how to kill any process from within top. Thanks in advance, my Stack Overflow friends! | Press k for kill and enter the PID and the signal to send. Some common signals (Number, Name (short name), Description, Used for):
    0   SIGNULL (NULL)   Null        Check access to pid
    1   SIGHUP (HUP)     Hangup      Terminate; can be trapped
    2   SIGINT (INT)     Interrupt   Terminate; can be trapped
    3   SIGQUIT (QUIT)   Quit        Terminate with core dump; can be trapped
    9   SIGKILL (KILL)   Kill        Forced termination; cannot be trapped
    15  SIGTERM (TERM)   Terminate   Terminate; can be trapped
    24  SIGSTOP (STOP)   Stop        Pause the process; cannot be trapped. This is the default if no signal is provided to the kill command.
    25  SIGTSTP (STP)    Terminal    Stop/pause the process; can be trapped
    26  SIGCONT (CONT)   Continue    Run a stopped process | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/462737",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/283800/"
]
} |
462,757 | What are the benefits of running a docker container inside a VM vs running docker containers on bare metal (on the host directly)? I have heard of companies running docker containers inside of a VM, particularly it has been mentioned in docker conferences that some organizations are doing it. Why? ( Comparing Docker container running on host vs Docker container running inside KVM on host ) Both Docker and KVM have ways to save their current state, no added benefit here Both Docker and KVM can be provided separate IP's for network use Both Docker and KVM separate running programs and installs from conflicting with host running processes Both Docker and KVM provide easy ways to scale with enterprise growth Both Provide simple methods of moving instances to different hosts So why would anyone run Docker inside a KVM? Wouldn't they be taking a unnecessary performance hit from the KVM? | Regarding your main points: Both Docker and KVM have ways to save their current state, no added benefit here Except that how they store their state is different, and one method or the other may be more efficient. Also, you can't reliably save 100% of the state of a container. Both Docker and KVM can be provided separate IP's for network use Depending on what VM and container system you use, this may be easier to set up for VM's than for containers. This is especially true if you want a dedicated layer 2 interface for the VM/container, which is almost always easier to do with a VM. Both Docker and KVM separate running programs and installs from conflicting with host running processes VM's do it better than containers. Containers are still making native system calls to the host OS. That means they can potentially directly exploit any bugs in those system calls. VM's have their own OS, so they're much better isolated. Both Docker and KVM provide easy ways to scale with enterprise growth This is about even, though I've personally found that VM's done right scale a bit better than containers done right (most likely because VM's done right offload the permissions issues to the hardware, while containers need software to handle it). Both Provide simple methods of moving instances to different hosts No, not exactly. Both can do offline migration, but a lot of container systems can't do live migration (that is, moving a running container from one host to another). Live migration is very important for manageability reasons if you're running at any reasonable scale (Need to run updates on the host? Migrate everything to another system, reboot the host, migrate everything off of the second host to the first, reboot that, rebalance.). Some extra points: VM's generally have easier to work with high-availability options. This isn't to say that containers don't have such options, just that they're typically easier to work with and adapt application code to with VM's. VM's are a bit easier to migrate directly to and from cloud hosting (you don't have to care to quite the same degree what the underlying hosting environment is like). VM's let you run a different platform from the host OS. Even different Linux distributions have sufficient differences in their kernel configuration that stuff written for one is not completely guaranteed to work on another. VM's give you better control of the potential attack surface. With containers, you just can't get rid of the fact that the code for your host OS is still in memory, and therefore a potential attack vector. 
With VM's, you're running an isolated OS, so you can strip it down to the absolute minimum of what you actually need. Running a group of related containers together in a VM gives you an easy foolproof way to start and stop that group of containers together. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/462757",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/230085/"
]
} |
462,900 | I am using CentOS 7. I am typing the command ip addr show eth0 but it replies Device "eth0" does not exist . | In CentOS, network interfaces are named differently. So they aren't called eth0 or eth1 , but rather have names like eno1 or enp2s0 . ( Source. ) Run ip addr to see how these interfaces are named on your system. These names are defined in /etc/sysconfig/network-scripts/ifcfg-<iface> . You can change their names if you really want to, but I don't recommend it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/462900",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/156974/"
]
} |
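To list the interface names the system actually has (so you know what to use in place of eth0), either of these should work on CentOS 7; nmcli assumes NetworkManager is installed:

    ip -o link show | awk -F': ' '{print $2}'
    nmcli device status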
462,904 | There are 200-plus files in a folder, where some of the files contain the following pattern in their records: ABCD<Space><tab><Space>,EFGH,<SPACE>, . Without amending or replacing it, I just want to know the number of files with this format. | In CentOS, network interfaces are named differently. So they aren't called eth0 or eth1 , but rather have names like eno1 or enp2s0 . ( Source. ) Run ip addr to see how these interfaces are named on your system. These names are defined in /etc/sysconfig/network-scripts/ifcfg-<iface> . You can change their names if you really want to, but I don't recommend it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/462904",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/303591/"
]
} |
462,955 | I am reading "Advanced Bash-Scripting Guide" at the moment.There is some script that not working correctly on my machine: HNAME=news-15.net # Notorious spammer.# HNAME=$HOST# Debug: test for localhost.count=2 # Send only two pings.if [[ `ping -c $count "$HNAME"` ]]thenecho ""$HNAME" still up and broadcasting spam your way."elseecho ""$HNAME" seems to be down. Pity."fi It always prints $HNAME" still up and broadcasting spam your way. - even with non-existing IP addresses. Can someone clarify what is the problem? It works correctly when I use a hostname instead of an IP address. | if [[ `ping -c $count "$HNAME"` ]] This runs ping in a command substitution (using the old backtick syntax, instead of the saner $(...) ). The resulting output is put on the command line, and then [[ .. ]] , tests to see if the output is empty. (That is, regular output. Command substitution doesn't capture error output, which would still be sent to the script's error output.) If ping outputs anything, the test succeeds. E.g. on my system: $ ping -c1 1.2.3.4 2>/dev/nullPING 1.2.3.4 (1.2.3.4) 56(84) bytes of data.--- 1.2.3.4 ping statistics ---1 packets transmitted, 0 received, 100% packet loss, time 0ms Since 1.2.3.4 is a valid IP address, ping tries to contact it. With an invalid IP address or a nonexisting hostname, it would only print an error, and the standard output would be empty. A better way to test if a host is up would be to test the exit status of ping , and redirect its output away: host=1.2.3.4if ping -c1 "$host" >/dev/null 2>&1; then echo "'$host' is up"else echo "'$host' does not respond (or invalid IP address/hostname)"fi Note that the quoting in the echo commands is off: echo ""$HNAME" seems to be down." This has an empty quoted string "" , then an unquoted parameter expansion $HNAME , and then the quoted string " seems to be down." . For various reasons, it's better to quote all paremeter expansions, so use "$var blahblah" , or "\"$var\" blahblah" if you want to put quotes around the variable in the output. See: When is double-quoting necessary? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/462955",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/300842/"
]
} |
462,960 | I'm currently exploring and experimenting with my linux just for fun and educational purposes. I have deleted the content of the /etc/bash.bashrc and /etc/profile and I don't have any configuration in the home directory for both root and local users, I also perform a reboot. However when i run printenv PATH the values of the path was still there which is: $PATH=usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin:/usr/games what might be defining the $PATH? EDIT: I forgot to mention that I'm using Kali linux. I've found one of the culprit is in the /etc/profile.d/kali.sh as mentioned by a guy named flyingdrifter in the comment section of this thread -> Complete view of where the PATH variable is set in bash (although his distribution is LinuxMint). Now the $PATH variable has been reduced to $PATH=/usr/local/bin:/usr/bin:/bin:/usr/games which means that there are probably one last file that setting the PATH variable. | if [[ `ping -c $count "$HNAME"` ]] This runs ping in a command substitution (using the old backtick syntax, instead of the saner $(...) ). The resulting output is put on the command line, and then [[ .. ]] , tests to see if the output is empty. (That is, regular output. Command substitution doesn't capture error output, which would still be sent to the script's error output.) If ping outputs anything, the test succeeds. E.g. on my system: $ ping -c1 1.2.3.4 2>/dev/nullPING 1.2.3.4 (1.2.3.4) 56(84) bytes of data.--- 1.2.3.4 ping statistics ---1 packets transmitted, 0 received, 100% packet loss, time 0ms Since 1.2.3.4 is a valid IP address, ping tries to contact it. With an invalid IP address or a nonexisting hostname, it would only print an error, and the standard output would be empty. A better way to test if a host is up would be to test the exit status of ping , and redirect its output away: host=1.2.3.4if ping -c1 "$host" >/dev/null 2>&1; then echo "'$host' is up"else echo "'$host' does not respond (or invalid IP address/hostname)"fi Note that the quoting in the echo commands is off: echo ""$HNAME" seems to be down." This has an empty quoted string "" , then an unquoted parameter expansion $HNAME , and then the quoted string " seems to be down." . For various reasons, it's better to quote all paremeter expansions, so use "$var blahblah" , or "\"$var\" blahblah" if you want to put quotes around the variable in the output. See: When is double-quoting necessary? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/462960",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/305401/"
]
} |
462,984 | I've written 2 php scripts one worker and one master. If I run master manually it checks a rabbitmq system and spawns 1 worker for each queue it detects. Then if I run master again it checks see if the each worker process is still running if it is then it does nothing if not then it restarts it. Fairly simple concept I'd have thought. I achieved it using nohup and & in the master for each child it spawns, now as I'm sure you can guess i want to run master every 60 seconds to check the queues are alive and respawn them if they're not. This gives me the problem of I can't use nohub in cron jobs and just using & on its own doesn't seem to work. As far as I can tell the workers aren't even execing with the call I've tried creating a separate shell script that then calls master.php also didn't work. I tried creating it as an upstart task then remembered that in ubuntu 17+ upstart is removed. I'm open to any suggestions of different ways of doing this but whatever route I take it must allow my master.php to spawn the work.php files as headless background processes. | if [[ `ping -c $count "$HNAME"` ]] This runs ping in a command substitution (using the old backtick syntax, instead of the saner $(...) ). The resulting output is put on the command line, and then [[ .. ]] , tests to see if the output is empty. (That is, regular output. Command substitution doesn't capture error output, which would still be sent to the script's error output.) If ping outputs anything, the test succeeds. E.g. on my system: $ ping -c1 1.2.3.4 2>/dev/nullPING 1.2.3.4 (1.2.3.4) 56(84) bytes of data.--- 1.2.3.4 ping statistics ---1 packets transmitted, 0 received, 100% packet loss, time 0ms Since 1.2.3.4 is a valid IP address, ping tries to contact it. With an invalid IP address or a nonexisting hostname, it would only print an error, and the standard output would be empty. A better way to test if a host is up would be to test the exit status of ping , and redirect its output away: host=1.2.3.4if ping -c1 "$host" >/dev/null 2>&1; then echo "'$host' is up"else echo "'$host' does not respond (or invalid IP address/hostname)"fi Note that the quoting in the echo commands is off: echo ""$HNAME" seems to be down." This has an empty quoted string "" , then an unquoted parameter expansion $HNAME , and then the quoted string " seems to be down." . For various reasons, it's better to quote all paremeter expansions, so use "$var blahblah" , or "\"$var\" blahblah" if you want to put quotes around the variable in the output. See: When is double-quoting necessary? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/462984",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122528/"
]
} |
463,034 | I am trying to learn how to use getopts so that I can have scripts with parsed input (although I think getopts could be better). I am trying to just write a simple script to return partition usage percentages. The problem is that one of my bash functions does not seem to like that I reference $1 as an variable within the function. The reason I reference $1 is because the get_percent function can be passed a mount point as an optional argument to display instead of all of the mount points. The script #!/usr/bin/bashset -eset -uset -o pipefailget_percent(){ if [ -n "$1" ] then df -h $1 | tail -n +2 | awk '{ print $1,"\t",$5 }' else df -h | tail -n +2 | awk '{ print $1,"\t",$5 }' fi}usage(){ echo "script usage: $(basename $0) [-h] [-p] [-m mount_point]" >&2}# If the user doesn't supply any arguments, we run the script as normalif [ $# -eq 0 ];then get_percent exit 0fi# ... The Output $ bash thing.shthing.sh: line 8: $1: unbound variable$ bash -x thing.sh+ set -e+ set -u+ set -o pipefail+ '[' 0 -eq 0 ']'+ get_percentthing.sh: line 8: $1: unbound variable | set -u will abort exactly as you describe if you reference a variable which has not been set. You are invoking your script with no arguments, so get_percent is being invoked with no arguments, causing $1 to be unset. Either check for this before invoking your function, or use default expansions ( ${1-default} will expand to default if not already set to something else). | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/463034",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/139546/"
]
} |
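Applying the suggested default expansion to the function from the question gives something like this (a sketch; the rest of the script is unchanged):

    get_percent(){
        if [ -n "${1-}" ]; then
            df -h "$1" | tail -n +2 | awk '{ print $1,"\t",$5 }'
        else
            df -h | tail -n +2 | awk '{ print $1,"\t",$5 }'
        fi
    }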
463,072 | The Ubuntu man page for apt-key includes the following note regarding apt-key add : Note: Instead of using this command a keyring should be placed directly in the /etc/apt/trusted.gpg.d/ directory with a descriptive name and either "gpg" or "asc" as file extension. I don't think I've ever seen this advice anywhere else. Most projects that host their own repositories say to download their key file and add it with apt-key . What is the motivation behind this advice? Is this an Ubuntu-ism, or does it apply to any APT-based distro? | Those projects have outdated instructions. I know this because I publish a Debian repository and I updated my instructions when I found out about the changes in Debian 9 APT. Indeed, this part of the manual is now out of date, as it is the wrong directory. This is not really to do with .d directories and more to do with preventing a cross-site vulnerability in APT. The older system used separate keyring files for convenience, but this is now a necessity for security; your security. This is the vulnerability. Consider two repository publishers, A and B. In the world of Debian 8 and before, both publishers' keys went in the single global keyring on users' machines. If publisher A could somehow arrange to supplant the repository WWW site of publisher B, then A could publish subversive packages, signed with A's own key , which APT would happily accept and install. A's key is, after all, trusted globally for all repositories. The mitigation is for users to use separate keyrings for individual publishers , and to reference those keyrings with individual Signed-By settings in their repository definitions. Specifically, publisher A's key is only used in the Signed-By of repository A and publisher B's key is only used in the Signed-By of repository B. This way, if publisher A supplants publisher B's repository, APT will not accept the subversive packages from it since they and the repository are signed by publisher A's key not by publisher B's. The /etc/apt/trusted.gpg.d mechanism at hand is an older Poor Man's somewhat flawed halfway house towards this, from back in 2005 or so, that is not quite good enough. It sets up the keyring in a separate file, so that it can be packaged up and just installed in one step by a package manager (or downloaded with fetch / curl / wget ) as any other file. (The package manager handles preventing publisher A's special this-is-my-repository-keyring package from installing over publisher B's, in the normal way that it handles file conflicts between packages in general.) But it still adds it to the set of keys that is globally trusted for all repositories. The full mechanism that exists now uses separate, not globally trusted, keyring files in /usr/share/keyrings/ . My instructions are already there. ☺ There are moves afoot to move Debian's own repositories to this mechanism, so that they no longer use globally trusted keys either. You might want to have a word with those "most projects" that you found. After all, they are currently instructing you to hand over global access to APT on your machine to them. Further reading Daniel Kahn Gillmor (2017-05-02). Please ship release-specific keys separately outside of /etc/apt/trusted.gpg.d/ . Debian bug #861695. Daniel Kahn Gillmor (2017-07-27). debian sources.list entries should have signed-by options pointing to specific keys . Debian bug #877012. "Sources.list entry" . Instructions to connect to a third-party repository . Debian wiki. 2018. 
Why isn't it a security risk to add to sources.list? Debian 9, APT, and "GPG error: ... InRelease: The following signatures were invalid:" | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/463072",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121291/"
]
} |
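In practice the per-repository setup described above looks roughly like this; the URL, suite and file names are placeholders for whatever the third-party publisher actually provides:

    wget -qO- https://example.com/repo/key.asc | gpg --dearmor | \
        sudo tee /usr/share/keyrings/example-archive-keyring.gpg >/dev/null
    echo 'deb [signed-by=/usr/share/keyrings/example-archive-keyring.gpg] https://example.com/debian stable main' | \
        sudo tee /etc/apt/sources.list.d/example.list
    sudo apt update

The signed-by option limits that key to that one repository, which is the whole point of the answer.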
463,126 | I'm currently on Ubuntu 16.04 LTS. As of writing this, 18.04 LTS is available. However, I do not wish to upgrade to it.Instead, I would like to upgrade to 17.04 LTS. I've done: sudo apt updatesudo apt dist-upgrade Many tutorials suggest sudo do-release-upgrade as the next step. But I believe that would upgrade to the latest distro and not the target 17.04.How do I go about this? | To answer your question, I don’t think Ubuntu officially supports upgrades to releases other than either the latest release or the latest LTS. It might be possible to upgrade to a specific release by changing the appropriate code name in /etc/apt/sources.list and running apt update && apt dist-upgrade , but that won’t take into account any upgrade step performed by the do-release-upgrade tool (if any). However in your specific case, 17.04 isn’t an LTS, and is already out of support . 16.04 is still supported; if you don’t want to upgrade to 18.04 you should stick with 16.04. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/463126",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/305986/"
]
} |
463,137 | I have a file. I want to select the lines that do not start with www. and do not contain slashes, and output the result into result.txt . To achieve the first requirement: grep -v '^www\.' myfile.txt > result.txt . To achieve the second, I will take result.txt and execute: grep -v '/' result.txt > result2.txt Is there any better shortcut to execute several commands on the file and store the result into one output file, result.txt ? I am aware of | to execute several commands, where the output of the command on the left of | is the input of the command on the right of | . What I do not know is, in the case of grep or any other command, should I use the file name in the command on the right of | . In other words, should it be: grep -v '^www\.' myfile.txt | grep -v '/' > result.txt OR grep -v '^www\.' myfile.txt | grep -v '/' myfile.txt > result.txt | To answer your question, I don’t think Ubuntu officially supports upgrades to releases other than either the latest release or the latest LTS. It might be possible to upgrade to a specific release by changing the appropriate code name in /etc/apt/sources.list and running apt update && apt dist-upgrade , but that won’t take into account any upgrade step performed by the do-release-upgrade tool (if any). However in your specific case, 17.04 isn’t an LTS, and is already out of support . 16.04 is still supported; if you don’t want to upgrade to 18.04 you should stick with 16.04. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/463137",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/299440/"
]
} |
463,146 | How can I remove the nanoseconds from every line in a file? The data looks like this (the filename is test.csv): ip,time,name 1.1.1.1,2018-08-17 15:05:52:016469121,1.13.0-0007 1.1.1.2,2018-08-17 15:05:52:016469121,1.13.0-0007 | To answer your question, I don’t think Ubuntu officially supports upgrades to releases other than either the latest release or the latest LTS. It might be possible to upgrade to a specific release by changing the appropriate code name in /etc/apt/sources.list and running apt update && apt dist-upgrade , but that won’t take into account any upgrade step performed by the do-release-upgrade tool (if any). However in your specific case, 17.04 isn’t an LTS, and is already out of support . 16.04 is still supported; if you don’t want to upgrade to 18.04 you should stick with 16.04. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/463146",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/305805/"
]
} |
463,147 | I have the following command to extract strings that do not start with www. and do not contain / : grep -v -e '^www\.' -e '/' test2.txt But I want the above in addition to matching the somestring.somestring pattern, which can be achieved with this command: grep -e '^[^\.]*\.[^\.]*$' How do I put all of these into one line? | To answer your question, I don’t think Ubuntu officially supports upgrades to releases other than either the latest release or the latest LTS. It might be possible to upgrade to a specific release by changing the appropriate code name in /etc/apt/sources.list and running apt update && apt dist-upgrade , but that won’t take into account any upgrade step performed by the do-release-upgrade tool (if any). However in your specific case, 17.04 isn’t an LTS, and is already out of support . 16.04 is still supported; if you don’t want to upgrade to 18.04 you should stick with 16.04. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/463147",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/299440/"
]
} |
463,198 | I want to search for the lines that contains any of the following characters: : / / ? # [ ] @ ! $ & ' ( ) * + , ; = % | grep "[]:/?#@\!\$&'()*+,;=%[]" Within a bracketed expression, [...] , very few character are "special" (only a very small subset, like ] , - and ^ , and the three combinations [= , [: and [. ). When including ] in [...] , the ] must come first (possibly after a ^ ). I opted to put the ] first and the [ last for symmetry. The only other thing to remember is that a single quoted string can not include a single quote, so we use double quotes around the expression. Since we use a double quoted string, the shell will poke around in it for things to expand. For this reason, we escape the $ as \$ which will make the shell give a literal $ to grep , and we escape ! as \! too as it's a history expansion in bash (only in interactive bash shells though). Would you want to include a backslash in the set, you would have to escape it as \\ so that the shell gives a single backslash to grep . Also, if you want to include a backtick ` , it too must be escaped as \` as it starts a command substitution otherwise. The command above would extract any line that contained at least one of the characters in the bracketed expression. Using a single quoted string instead of a double quoted string, which gets around most of the annoyances with what characters the shell interprets: grep '[]:/?#@!$&'"'"'()*+,;=%[]' Here, the only thing to remember, apart from the placing of the ] , is that a single quoted string can not include a single quote, so instead we use a concatenation of three strings: '[]:/?#@!$&' "'" '()*+,;=%[]' Another approach would be to use the POSIX character class [[:punct:]] . This matches a single character from the set !"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ , which is a larger set than what's given in the question (it additionally contains "-.<>^_`{|}~ ), but is all the "punctuation characters" that POSIX defines. LC_ALL=C grep '[[:punct:]]' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/463198",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/299440/"
]
} |
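A quick sanity check of the [[:punct:]] variant from the answer, using throwaway input:

    printf '%s\n' 'plain words only' 'a?b' 'value;list' | LC_ALL=C grep '[[:punct:]]'

Only the second and third lines are printed, since the first contains no punctuation characters.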
463,422 | I have tried nvme_load=yes in place of quiet --- with Ubuntu 18.04.1 and Xubuntu 18.04. df -h results in only the system-generated mounts and the installation media. The weekly live (including firmware) image of Debian also fails to discover the SSD. I have located more information on the SSD; I found this information in the System Profiler on OS X High Sierra. Apple SSD Controller: APPLE SSD AP1024M: Capacity: 1 TB (1,000,555,581,440 bytes), TRIM Support: Yes, Model: APPLE SSD AP1024M, Revision: 177.77.7, Serial Number: C02829600M9JPD216, Link Width: x4, Link Speed: 8.0 GT/s, Detachable Drive: No, BSD Name: disk0, Partition Map Type: GPT (GUID Partition Table), Removable Media: No, S.M.A.R.T. status: Verified. lsblk from a Xubuntu 18.04.1 live installer does not show any PCIe or NVMe devices. Note: the installer and GParted fail to list it. Typically these both require an unmounted drive to work with. So, it simply does not see the SSD. I read that this system uses a PCIe SSD, though I'm not sure how to load a kernel module to allow use of it. | It's currently not possible to install anything except Windows 10 on Apple computers equipped with the T2 chip . This security chip makes it impossible to see the internal drive; Apple generously made an exception only for Windows 10 (but only if you install it via Boot Camp). A possible option could be Linux installed on a USB/Thunderbolt external drive; unfortunately I tried this only with Windows, but it worked (though the internal drive was not visible). Update: changing the Secure Boot option makes no difference. Update 2 (July 2019): custom patching of the Linux kernel seems to do the trick , unfortunately it's a pretty nerdy solution. Source | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/463422",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/197972/"
]
} |
463,477 | I need to run a remote script using ssh via Ruby ( net/ssh ) to recursively copy a folder and exclude a subfolder. I am looking for the fastest way to do it so rsync is not good. Also, I understand that ssh uses sh and not bash . In bash I do: cp -r srcdir/!(subdir) dstdir and it works fine. However when I launch the script via ssh I receive the error sh: 1: Syntax error: "(" unexpected because it is using sh . I have checked the sh man page, but there is no option to exclude files. Is it my assumption of ssh using sh correct?Any alternative suggestion? EDIT 1: In case it is useful, the output of sudo cat /etc/shells is the following: # /etc/shells: valid login shells/bin/sh/bin/dash/bin/bash/bin/rbash/usr/bin/tmux/usr/bin/screen EDIT 2: OK. So bash it is available and that does not seems to be the problem. I have verified that the ssh is actually using bash . The issue seems to be related to the escaping of parenthesis or exclamation mark.I have tried to run the command from the shell (macos) and this is the actual command: ssh -i .ssh/key.pem [email protected] 'mkdir /home/ubuntu/OpenFOAM/ubuntu-4.1/run/LES_New-Area_residuals2/N; cp -r /home/ubuntu/OpenFOAM/ubuntu-4.1/run/LES_New-Area_residuals2/mesh/!\(constant\) /home/ubuntu/OpenFOAM/ubuntu-4.1/run/LES_New-Area_residuals2/N; ln -s /home/ubuntu/OpenFOAM/ubuntu-4.1/run/LES_New-Area_residuals2/mesh/constant /home/ubuntu/OpenFOAM/ubuntu-4.1/run/LES_New-Area_residuals2/N/constant' In this way I receive a different error cp: cannot stat '/home/ubuntu/OpenFOAM/ubuntu-4.1/run/LES_New-Area_residuals2/mesh/!(constant)': No such file or directory EDIT 3: Based on the comments I have changed my command adding extglob If I use ssh -i .ssh/key.pem [email protected] 'shopt -s extglob; mkdir /home/ubuntu/OpenFOAM/ubuntu-4.1/run/LES_New-Area_residuals2/N; cp -r /home/ubuntu/OpenFOAM/ubuntu-4.1/run/LES_New-Area_residuals2/mesh/!\(constant\) /home/ubuntu/OpenFOAM/ubuntu-4.1/run/LES_New-Area_residuals2/N; ln -s /home/ubuntu/OpenFOAM/ubuntu-4.1/run/LES_New-Area_residuals2/mesh/constant /home/ubuntu/OpenFOAM/ubuntu-4.1/run/LES_New-Area_residuals2/N/constant' I receive the following error: cp: cannot stat '/home/ubuntu/OpenFOAM/ubuntu-4.1/run/LES_New-Area_residuals2/mesh/!(constant)': No such file or directory If I do not escape the parenthesis I get bash: -c: line 0: syntax error near unexpected token `(' | SSH runs your login shell on the remote system, whatever that is. But !(foo) requires shopt -s extglob , which you might not have set on the remote. Try this to see if SSH runs Bash on the remote side: ssh me@somehost 'echo "$BASH_VERSION"' If that prints anything, but your startup scripts don't set extglob , you can do it by hand on the command passed to ssh : ssh me@somehost 'shopt -s extglob echo srcdir/!(subdir)' # orssh me@somehost $'shopt -s extglob\n echo srcdir/!(subdir)' extglob affects the parsing of the command line, and only takes effect after a newline, so we have to put a literal newline there, a semicolon isn't enough. ssh me@somehost 'shopt -s extglob; echo srcdir/!(subdir)' Also not that if you escape the parenthesis with backslashes, they lose their special properties, like any other glob characters. This is not what you want to do in this case. $ touch foo bar; shopt -s extglob; set +o histexpand$ echo *bar foo$ echo !(foo)bar$ echo \**$ echo !\(foo\)!(foo) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/463477",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106210/"
]
} |
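The same fix applied to the original copy command, with placeholder host and paths, would look like this (it still assumes the remote login shell is bash):

    ssh me@somehost 'shopt -s extglob
    cp -r srcdir/!(subdir) dstdir'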
463,548 | While looking through /proc/[PID]/fd/ folder of various processes, I found curious entry for dbus lrwx------ 1 root root 64 Aug 20 05:46 4 -> anon_inode:[eventpoll] Hence the question, what are anon_inode s ? Are these similar to anonymous pipes ? | Everything under /proc is covered in the man proc . This section covers anon_inode . For file descriptors for pipes and sockets, the entries will be symbolic links whose content is the file type with the inode. A readlink(2) call on this file returns a string in the format: type:[inode] For example, socket:[2248868] will be a socket and its inode is 2248868. For sockets, that inode can be used to find more information in one of the files under /proc/net/ . For file descriptors that have no corresponding inode (e.g., file descriptors produced by epoll_create(2) , eventfd(2) , inotify_init(2) , signalfd(2) , and timerfd(2)) , the entry will be a symbolic link with contents of the form anon_inode:<file-type> In some cases, the file-type is surrounded by square brackets. For example, an epoll file descriptor will have a symbolic link whose content is the string anon_inode:[eventpoll] . For more on epoll I discuss them here - What information can I find out about an eventpoll on a running thread? . For additional information on anon_inode 's - What is an anonymous inode in Linux? . Basically there is/was data on disk that no longer has a filesystem reference to access it. An anon_inode shows that there's a file descriptor which has no referencing inode. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/463548",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85039/"
]
} |
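You can inspect these entries for any process you own; for example (the PID here is hypothetical):

    ls -l /proc/self/fd          # the shell's own descriptors
    readlink /proc/1234/fd/4     # prints e.g. anon_inode:[eventpoll] for an epoll fd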
463,582 | Is there any command or option to read user hashed password from /etc/shadow file? | No this isn't possible. It's a one way hash that's been salted. All you can do is take a dictionary of words, hash them using the same crypt function and see if their results match what's in /etc/shadow . Tools such as John the Ripper automated this process, but that's effectively all they're doing to crack a password. John the Ripper is a fast password cracker, currently available for many flavors of Unix, Windows, DOS, and OpenVMS. Its primary purpose is to detect weak Unix passwords. Besides several crypt(3) password hash types most commonly found on various Unix systems, supported out of the box are Windows LM hashes, plus lots of other hashes and ciphers in the community-enhanced version. There are countless others - https://en.wikipedia.org/wiki/Password_cracking . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/463582",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/287827/"
]
} |
463,590 | When attempting to boot Debian, it stops at the following message. It seems to me the crucial line of failure is "Unable to load firmware patch rtl_nic/rtl8168d-1.fw (-2)". The lines under that seem like it's unable because it can't connect to the internet. This machine has started up before. This boot problem came into existence after I had to reset the computer because of an infinity loop in a subthread, and it didn't allow me to switch process anymore. It also used to be able to connect to the internet and install packages/use firefox just fine. Possible related to Debian stretch failed to load firmware rtl_nic/rtl8168g-3.fw (-2) , and I'd love to attempt the same fix. However, that failure was non-lethal and the user was able to boot into debian and fix it. My problem happens at a moment I cannot use Ctrl+Alt+FX to get into a terminal, and it does not progress beyond this screen. Any other google results I got also had the user being able to boot into GUI/Terminal just fine. My apologies for the screenshot being a bit out of focus. The screen also started flickering, which made it hard to get a good photo. I don't have an installer USB, but I could make one. I'm not that experienced with unix in general or debian, so please keep that in mind for answers. Anybody got pointers how to recover this installation ? | No this isn't possible. It's a one way hash that's been salted. All you can do is take a dictionary of words, hash them using the same crypt function and see if their results match what's in /etc/shadow . Tools such as John the Ripper automated this process, but that's effectively all they're doing to crack a password. John the Ripper is a fast password cracker, currently available for many flavors of Unix, Windows, DOS, and OpenVMS. Its primary purpose is to detect weak Unix passwords. Besides several crypt(3) password hash types most commonly found on various Unix systems, supported out of the box are Windows LM hashes, plus lots of other hashes and ciphers in the community-enhanced version. There are countless others - https://en.wikipedia.org/wiki/Password_cracking . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/463590",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/306383/"
]
} |
463,654 | So I made a script that adds users to the system, and I wanted to force the length of the usernames to 8 characters or below. #!/bin/bash# Only works if you're rootfor ((a=1;a>0;a)); do if [[ "$UID" -eq 0 ]]; then echo "Quit this shit anytime by pressing CTRL + C" read -p 'Enter one usernames: ' USERNAME nrchar=$(echo ${USERNAME} | wc -c) echo $nrchar spcount=$(echo ${USERNAME} | tr -cd ' ' | wc -c) echo $spcount if [[ "${nrchar}" -ge 8 ]]; then echo "You may not have more than 8 characters in the username" elif [[ "${spcount}" -gt 0 ]]; then echo "The username may NOT contain any spaces" else read -p 'Enter one names of user: ' COMMENT read -s -p 'Enter one passwords of user: ' PASSWORD useradd -c "${COMMENT}" -m ${USERNAME} echo ${PASSWORD} | passwd --stdin ${USERNAME} passwd -e ${USERNAME} fi echo "------------------------------------------------------" else echo "You're not root, so GTFO!" a=0 fidone That's the full script, but I think the problem is only here somewhere: read -p 'Enter one usernames: ' USERNAME nrchar=$(echo ${USERNAME} | wc -c) echo $nrchar So the problem with this is, whenever I enter a 8 character username, the nrchar variable seems to always add one more character to it, as so: [vagrant@localhost vagrant]$ sudo ./exercise2-stuffs.shQuit this shit anytime by pressing CTRL + CEnter one usernames: userdoi190You may not have more than 8 characters in the username------------------------------------------------------Quit this shit anytime by pressing CTRL + CEnter one usernames: ^C[vagrant@localhost vagrant]$ Even if I leave it blank, it still somehow counts one character: [vagrant@localhost vagrant]$ sudo !.sudo ./exercise2-stuffs.shQuit this shit anytime by pressing CTRL + CEnter one usernames:10Enter one names of user: How to identify this problem? | Even if I leave it blank, it still somehow counts one character [. . .] Could somebody please help me identify this problem? Try printf instead of echo $ echo "" | wc -m1$ printf "" | wc -m0 Using echo , wc will count a newline character. Or, perhaps better, use pure Bash without piping to wc : $ string=foobar$ echo "${#string}"6 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/463654",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/304183/"
]
} |
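Applied to the script in the question, the length test then needs no external commands at all; this sketch uses > 8 so that exactly eight characters is still allowed, per the stated requirement:

    read -p 'Enter one username: ' USERNAME
    if (( ${#USERNAME} > 8 )); then
        echo "You may not have more than 8 characters in the username"
    fi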
463,676 | OS = CentOS 7.3 If I try to run the 'df' command on this server with an option (e.g. -h -l) it hangs and leaves a zombie process. I cannot Ctrl+z out back to a prompt. If I run 'df' against specific mount points that are found in FSTAB the command works successfully (e.g. df /home). How do I go about troubleshooting this? | Even if I leave it blank, it still somehow counts one character [. . .] Could somebody please help me identify this problem? Try printf instead of echo $ echo "" | wc -m1$ printf "" | wc -m0 Using echo , wc will count a newline character. Or, perhaps better, use pure Bash without piping to wc : $ string=foobar$ echo "${#string}"6 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/463676",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/166222/"
]
} |
463,807 | I'm trying to clear my Chromium browser cache using CLI on Gallium OS (based on Xubuntu ). What command should I use? | On Linux you can clear the browser cache using the following command, after closing the browser: Chromium The browser cache is stored in different directories depending on whether Chromium is installed as a regular package or as a snap. Run $ which chromium to find out which one your system is using. Examples: /usr/bin/chromium -> traditional package/snap/bin/chromium -> snap Installed from regular package rm -rf ~/.cache/chromium Installed from snap rm -rf ~/snap/chromium/common/.cache/chromium Google Chrome rm -rf ~/.cache/google-chrome | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/463807",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/286107/"
]
} |
463,813 | I need to find and delete files older than 1 week on the development unit. There is only a limited number of utilities available on this unit; find 's -mtime predicate is not available. How do I check for all files which are older than x days in this case? | -mtime is a standard predicate of find (contrary to -delete ), but it looks like you have a stripped-down version of busybox , where the FEATURE_FIND_MTIME feature has been disabled at build time. If you can rebuild busybox with it enabled, you should be able to do: find . -mtime +6 -type f -exec rm -f {} + Or if FEATURE_FIND_DELETE is also enabled: find . -mtime +6 -type f -delete If not, other options could be to use find -newer (assuming FEATURE_FIND_NEWER is enabled) on a file that is set to have a one-week-old modification time: touch -d "@$(($(date +%s) - 7 * 86400))" ../ref && find . -type f ! -newer ../ref -exec rm -f {} + Or if -newer is not available but sh 's [ supports -nt : touch -d "@$(($(date +%s) - 7 * 86400))" ../ref && find . -type f -exec sh -c ' for f do [ "$f" -nt ../ref ] || printf "%s\0" "$f"; done' sh {} + | xargs -0 rm -f | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/463813",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/250201/"
]
} |