236,935
I'm using Fedora 22 and dnf-1.1.2-4.fc22.noarch. As a use-case scenario: I found that the strace package is not installed. I want to figure out if this package belongs to any group, so I can also install other software that I'll probably need for similar tasks. I found this brute-force way (grepping for 3 spaces because group names start with this indent):
dnf grouplist | grep '   ' | while read line; do dnf groupinfo "$line"; done
Then redirect this output to a file, search for a package name, and find a group name there.
Since Fedora 26, the following works:
dnf repoquery --groupmember <pkg-name>
See the bug report where this feature was implemented.
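For the strace example from the question this becomes a single command (the shape of the output is an assumption; it depends on your repository metadata):
$ dnf repoquery --groupmember strace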
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/236935", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139031/" ] }
236,937
While installing Java on Red Hat I'm facing some problems; I've attached the screen print. I unzipped the file using the tar command. Once I created the soft link for it, I should be able to see the Java version, but in this case I'm not able to.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/236937", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139032/" ] }
236,953
I need to find out what kind of script runs fsck during boot on CentOS 7. I know that all scenarios are located in the /etc/rc.d directory, but I have no idea where this script is located.
I know that all scenarios are located in /etc/rc.d directory.
What you know is wrong. Welcome to CentOS 7. The world has changed. In particular, your base of Red Hat Enterprise Linux 7 has changed. You are using a systemd Linux operating system. A lot of the received wisdom about Linux is not true for such systems. fsck is not run by any script at all on systemd Linux operating systems. The native format for systemd is the unit, which can be amongst other things a service unit or a mount unit. systemd's service management proper operates solely in terms of those, which it reads from one of nine directories where (system-wide) .service and .mount files can live. /etc/systemd/system, /run/systemd/system, /usr/local/lib/systemd/system, and /usr/lib/systemd/system are four of those directories. Your /etc/fstab database is converted into mount units by a program named systemd-fstab-generator. This program is listed in the /usr/lib/systemd/system-generators/ directory and is thus run automatically by systemd early in the bootstrap process at every boot, and again every time that systemd is instructed to re-load its configuration later on. This program is a generator, a type of ancillary utility whose job is to create unit files on the fly, in a tmpfs where three more of those nine directories (which are intended to be used only by generators) are located. systemd-fstab-generator generates .mount units that mount the volumes. These in their turn reference .service units that run fsck. Those fsck service units don't themselves exist as files in the filesystem (not even in a tmpfs), and are not the products of a generator. They are instantiated by systemd from a template service unit file, named systemd-fsck@.service, using the device name as the service unit instance name. The instantiation happens because of the Requires= and After= references to systemd-fsck@device.service from the generated .mount units. This instantiated template is a service that runs a program named systemd-fsck, which sets up a client-server connection for displaying progress information and then in its turn runs fsck. systemd-fsck is a compiled C program, not an interpreted script.
Further reading:
"New Features: System and Services". Red Hat Enterprise Linux 7 Release Notes. Red Hat.
Stephen Wadeley (2014). "8. Managing Services with systemd". Red Hat Enterprise Linux 7 System Administrators' Guide. Red Hat.
systemd-fstab-generator. systemd manual pages. Freedesktop.org.
systemd-fsck@.service. systemd manual pages. Freedesktop.org.
systemd.mount. systemd manual pages. Freedesktop.org.
https://unix.stackexchange.com/a/204075/5132
https://unix.stackexchange.com/a/196014/5132
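Beyond the reading list, this is straightforward to verify on a running CentOS 7 machine with standard systemd tools (the generator output directory is one of the tmpfs locations mentioned above):
$ systemctl cat systemd-fsck@.service      # the template unit that wraps fsck
$ ls /run/systemd/generator                # .mount units written by systemd-fstab-generator
$ systemctl list-units --type=mount        # the mount units systemd is actually managing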
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/236953", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/137094/" ] }
236,954
I have trouble connecting lftp to an FTPS (FTP over SSL, not SFTP!) server (FTP Server Ultimate, PRO version) running on an Android phone. Technical details, Linux part. Following https://superuser.com/questions/623236/simple-command-to-connect-to-ftps-server-on-linux-command-line I've created the following lftp config file and source it in the following way:
$ cat lftps_config
user photos PASSWORD
set ftps:initial-prot ""
set ftp:ssl-force true
set ftp:ssl-protect-data true
set ssl:verify-certificate no
open ftps://192.168.1.103:43210
$ lftp
lftp :~> source lftps_config
lftp 192.168.1.103:~> dir
ls at 0 [530 Login incorrect.]
while in the "FTP Server Ultimate Pro" logs I see:
2015-10-18 10:10:13 [photosXYZ] - 192.168.1.123 (JBTTAX) - "" and *** are not allowed combination...
2015-10-18 10:10:13 [photosXYZ] - 192.168.1.123 (JBTTAX) New connection...
Could you help me set up an FTP over SSL (FTPS) connection on Linux using lftp (or another command-line tool with good mirror capability)? FTR, I use:
$ lftp -v | tail -n 1
Libraries used: Readline 6.3, Expat 2.1.0, GnuTLS 3.4.5, zlib 1.2.8
which according to the documentation has FTPS capability (GnuTLS implies it). For the curious, more context. My final goal: have some directories automatically backed up (both locally and remotely) from my Android phone to a Linux workstation, laptop etc. Android: FTPS server (FTP over SSL, not SFTP!), starting automatically when I enter my home wifi; when away, using DDNS (Dynamic DNS). Linux: lftp (or another command-line tool) that backs up stuff from the phone - it might be triggered by some cron-like automation that, in the presence of my phone's FTPS server, would trigger an automatic backup. The Android part I (at least I thought I) solved with FTP Server Ultimate (to be specific, the PRO version). The server is capable of running an FTPS server and starting it automatically on a given SSID or BSSID. When I am travelling it can update DDNS automatically, which makes reachability from my home servers easy.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/236954", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9689/" ] }
236,960
AFAICT, neither the man page for GNU grep , nor info grep , deigns to spell out what --color=auto means. I must be one of the very few people on the planet for which the meaning of this option is not immediately obvious. I surmise that --color=auto "is SOMEWHERE in-between" --color=never and --color=always , but that still leaves too much unspecified.
The rules are the same as for ls , which does a better job documenting it in man ls . Quoting: Using color to distinguish file types is disabled both by default and with --color=never. With --color=auto, ls emits color codes only when standard output is connected to a terminal. The LS_COLORS environment variable can change the settings. Use the dircolors command to set it. So it will make the command only add the color formatting when the output is going to a terminal and not, say, when it is going to a pipe where the program consuming the pipe might not handle the color formatting well.
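A quick way to see the difference for yourself (GNU grep assumed, as in the question):
$ grep --color=auto root /etc/passwd          # stdout is a terminal: matches are highlighted
$ grep --color=auto root /etc/passwd | cat    # stdout is a pipe: no escape codes are emitted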
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/236960", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10618/" ] }
236,985
I have VirtualBox 5 installed and working in Fedora for some 5-6 weeks with no problems after following this guide. However, after I ran a dnf update yesterday it stopped working. VirtualBox itself launches, but when I try to launch the VM here is what I get:
The virtual machine 'MyVM' has terminated unexpectedly during startup with exit code 1 (0x1).
Result Code: NS_ERROR_FAILURE (0x80004005)
Component: Machine
Interface: IMachine
With some further instructions on drill down:
Kernel Driver is not installed (rc=-1908)
The VirtualBox Linux kernel driver (vboxdrv) is not loaded...
So here is what I tried so far without luck:
1. Checked what I have installed:
$ dnf list installed | grep kmod-VirtualBox*
akmod-VirtualBox.x86_64 4.3.30-1.fc22 @rpmfusion-free-updates
kmod-VirtualBox-4.1.10-200.fc22.x86_64.x86_64
kmod-VirtualBox-4.1.7-200.fc22.x86_64.x86_64
2. Checked what is available in the repo:
$ dnf provides kmod-VirtualBox
Last metadata expiration check performed 0:03:30 ago on Sun Oct 18 10:37:47 2015.
kmod-VirtualBox-4.3.30-1.fc22.x86_64 : Metapackage which tracks in VirtualBox kernel module for newest kernel
Repo : rpmfusion-free-updates
kmod-VirtualBox-4.3.28-1.fc22.x86_64 : Metapackage which tracks in VirtualBox kernel module for newest kernel
Repo : rpmfusion-free
3. Tried to install the updated kmod:
$ sudo dnf install --allowerasing kmod-VirtualBox-4.3.30-1.fc22.x86_64
Last metadata expiration check performed 1:43:30 ago on Sun Oct 18 09:05:58 2015.
Error: nothing provides kernel-uname-r = 4.0.8-300.fc22.x86_64 needed by kmod-VirtualBox-4.0.8-300.fc22.x86_64-4.3.30-1.fc22.x86_64
4. Ran uname to check the current version:
$ uname -r
4.2.3-200.fc22.x86_64
No matter what I try, I keep getting this same error that nothing provides an outdated kernel. As far as I understand, it shouldn't. I ran dnf clean all and dnf clean metadata but it didn't help. I also already ran dnf update virtualbox and it tells me I have the latest version installed. Any ideas how to solve this issue? Note: I also tried running dnf update kmod-VirtualBox but nothing happens; it tells me something like "Nothing to Do."
This happens from time to time because the current kmod package sometimes isn't in the repository yet. You don't have to reinstall VirtualBox completely, but uninstalling the kmod packages might be necessary:
# dnf remove kmod-VirtualBox-*
However, you do not want to uninstall the akmod package, because this is your alternative. If you install the required akmod packages (and no pre-built kmod packages), your system will build the VirtualBox kernel modules when necessary (after a kernel update), so this should always work - unlike the pre-built kmod packages, which aren't always available. Install/update the akmod package and the kernel headers required for building:
# dnf install akmod-VirtualBox kernel-devel
You can start the build process manually:
# akmods
You may have to force a rebuild (see below):
# akmods --force
The modules service should not print any error messages anymore:
# systemctl restart systemd-modules-load
VirtualBox should now be able to start VMs, even after kernel updates. The build process might fail if there are still old kmod packages installed. In this case, uninstall them one by one and run akmods again. Update: This question is still relevant, even on Fedora 25. Note that akmods may have to be run with the --force option as shown above, especially when running the build manually. If you forget this option, it might simply show a warning and not do anything (Bug 4485):
Ignoring VirtualBox-kmod as it failed earlier [WARNING]
This may also be the reason why VirtualBox sometimes won't start any VMs ("Kernel driver not installed") after a kernel update and subsequent reboot, even though all required packages are installed. Sometimes, the akmods tool complains that the previous build attempt was not successful and simply shows a warning instead of starting a new build. If this happens during a reboot, when the VirtualBox modules should be rebuilt automatically, you'd find this warning later in your system log and you will have to run akmods manually with the --force option, so that it'll actually start the build process that was supposed to run during the reboot. See bug 4485.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/236985", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/129469/" ] }
237,050
I am making a backup of some files and am creating a checksum file, checksums.txt , to later check the integrity of my data. However pasting each checksum manually into the file is inconvenient since my backup consists of hundreds of files, so asking around I got the suggestion to simplify the process by redirecting the output to the checksum file: $ md5sum file > checksums.txt or $ sha512sum file > checksums.txt However doing this replaces the contents of the checksums.txt file with the checksum of, in this example, the file file; I would like, instead, to append the checksum of file to checksums.txt without deleting its contents. So my question is: how do I do this? Just one more thing. Since I am a basic user, try (only if possible) to make easy-to-follow suggestions.
If you want to append to a file you have to use >> . So your examples would be $ md5sum file >> checksums.txt and $ sha512sum file >> checksums.txt
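Since the backup consists of hundreds of files, appending them one at a time is still tedious. A minimal sketch (assuming the files live in the current directory) that writes every checksum in one pass and verifies them later:
$ sha512sum * > checksums.txt      # one checksum line per file
$ sha512sum -c checksums.txt       # later: re-check every file against the stored sums
One caveat: if checksums.txt already exists from an earlier run, the * glob will pick it up too, so delete it first or keep it in another directory.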
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/237050", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/93996/" ] }
237,063
I was happily watching a TV show episode and 5 minutes later I'm left with a fried computer. I was using elementary OS. Here is, step by step, what just happened: I was using VLC when suddenly the video stopped working, and after 10-20 seconds a warning message from VLC popped up, saying something like it couldn't play the file. Then everything started freezing really fast; I could barely do one task, minimize and maximize some windows, and 10 seconds later the system completely froze. I forced the computer to shut down by holding the button and started it again. It starts, shows the Acer logo, then the Windows startup menu; I press Escape so Grub can show up - here's where I have elementary OS as well as Ubuntu. Surprise. Frozen black screen and Grub does not show up. After a few seconds I get a screen saying error: unknown filesystem. Entering rescue mode... and a grub rescue> prompt. I shut down and restart again 3-4 more times and the same thing keeps happening. Then the 4th or 5th time I restart the computer, not even the Windows startup menu is showing up; it just froze at the Acer logo. Shutdown again and now the Windows menu is up, press Escape, and same story with Grub. It's gone. Again that error. I shut down again and it stalls for a while on the Acer screen but finally the Windows menu shows up. And I'm like, well, "would even Windows work?". Nope, it doesn't. Trying to start Windows just brings back the Acer screen and it's locked there as I am typing this. Latest update: when I start up the computer there's a weird repeating cracking sound. Fried hard drive?
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/237063", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64512/" ] }
237,072
Recently, I installed Kali Linux 2.0 as a third OS on my Dell Latitude E7240. I used Unetbootin to make a bootable USB of Kali Linux. When I booted from it, it gave me options I wasn't used to. I chose Default the first time and was met with a black screen, so I manually shut my computer off, rebooted, and then chose Live Encrypted USB Persistence. After a while of outputting stuff while booting, which I did not understand at all, I finally got into a live session of Kali Linux 2.0. From there, I searched in the applications and found Install Kali, which I clicked. I was given a graphical interface which removed the dash and the bar at the top, and only showed me the installer, which was partially cut off. So, for what I believe were two things (one of which was configuring the network), I could not see any options and blindly hit Enter. However, I'm fairly certain nothing went wrong here, as the installation carried through smoothly and asked me mostly what I would expect to be asked while installing. However, since I already have Ubuntu, the first time I selected no for installing Grub; then it said I had to make my OS bootable and so I had to install something somewhere, so I just chose my hard drive, /dev/sda, rather than entering the device manually, which I don't have any experience with. I finished the installation successfully, then rebooted. My Ubuntu Grub loaded, but I didn't see Kali Linux. I tried following tutorials to add it to Grub, but had no luck. So, I reinstalled Kali, this time choosing to install Grub. Then I rebooted, and the Kali Grub showed up. However, now I get warnings when booting into Kali (Using Kali Linux) and when booting into Ubuntu. This post is about booting into Ubuntu. When I boot into Ubuntu, I get the warnings shown in the following image: EDIT2: New image with more of the warnings (the warnings that flash for less than a second). I don't think I have experienced any problems yet. However, just to be safe, I would like to know what this means. So what does that mean? If I am missing information, please tell me, and I will add it. EDIT1: This all began AFTER I installed Kali Linux.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/237072", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139098/" ] }
237,080
Here's another question that is impossible to search for: how to interpret $+commands[foobar] ? I assume that it is a variant of $commands[foobar] , but who knows. (With zsh, at least I never know.) I'd also like to know how one would search for the answer to this question, either in the zsh documentation or online.
That is documented under the Parameter Expansion section of the zsh documentation:
${+name}
If name is the name of a set parameter '1' is substituted, otherwise '0' is substituted.
Example:
$ unset foo
$ if (( $+foo )); then echo set; else echo not set; fi
not set
$ foo=1
$ if (( $+foo )); then echo set; else echo not set; fi
set
In $+commands[foobar], zsh checks whether commands[foobar] is a set parameter, i.e. whether the commands array has an entry for foobar.
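A practical use of this with the commands association (provided by the zsh/parameter module, which maps command names to their paths) is testing whether a program is available before calling it:
$ if (( $+commands[git] )); then echo "git is at $commands[git]"; fi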
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/237080", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10618/" ] }
237,105
I have a script which generates several output files and uses these output files during runtime. The following are some of the files generated by the script: apple.txt, section_fruit_out.csv, section_fruit_out_lookup.csv, food_lookup.csv, section_fruit_lookup.csv. I have a piece of code as below:
nawk 'FNR == NR && NF!=0 {x[$1] = $1; next;} {FS=OFS=","} FNR>1{if ($2 in x) {($6 = "apple")} } 1' apple.txt section_fruit_out.csv > section_fruit_out_lookup.csv
nawk 'BEGIN { FS = OFS = ","; } FNR == NR { x[$1] = $2; next; } { if ($7 in x && $6 == "") { $6 = x[$7]; } else if ($6 == "" && $7 != "") { $6 = "TO_BE_DEFINED" } } 1' food_lookup.csv section_fruit_out_lookup.csv > section_fruit_lookup.csv
This code mainly handles the expected job. But the script does not work as expected if the apple.txt file is empty (this file is generated by database queries). If the apple.txt file is empty, the output file (section_fruit_out_lookup.csv) of the first nawk section is also generated empty. Since section_fruit_out_lookup.csv is generated empty and it is used by the second nawk command, the second nawk command also generates an empty output file (section_fruit_lookup.csv). How can I bypass the first nawk command if the apple.txt file is empty and make the second nawk command use the file section_fruit_out.csv instead of section_fruit_out_lookup.csv?
The test utility can check for a non-empty file: test -s file is true if the file exists and has a size greater than zero.
if test -s apple.txt
then
    ## apple.txt non-empty
    code ...
else
    ## file empty
fi
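Wired into the pipeline from the question, it could look like the following sketch (the nawk programs are abbreviated as '...', and the variable name input is mine):
if test -s apple.txt
then
    nawk '...' apple.txt section_fruit_out.csv > section_fruit_out_lookup.csv
    input=section_fruit_out_lookup.csv
else
    input=section_fruit_out.csv
fi
nawk '...' food_lookup.csv "$input" > section_fruit_lookup.csv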
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/237105", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124682/" ] }
237,160
I am still new at this. I would like to ask: how do I invert a cut? For example, given ./24feb/frfr, I want the result after the cut command to be ./feb/frfr. How do I do it?
% echo ./24feb/frfr | cut -c 1-2,5-
./feb/frfr
That would be the inverse of cut -c 3-4; that is, it outputs all characters (bytes with current versions of GNU cut) of each line except the 3rd and 4th. The GNU implementation of cut also has a --complement option for that:
cut --complement -c 3-4
To remove the first sequence of decimal digits, you can use sed instead:
sed 's/[0-9]\{1,\}//'
To remove it only if it's in 3rd position:
sed 's/^\(..\)[0-9]*/\1/'
Or to be very explicit about which pattern should trigger the removal:
sed 's|^\(./\)[0-9]*\([[:lower:]]\{3\}/\)|\1\2|'
That only removes the <0-or-more-digits> in a line matching: ./<0-or-more-digits><3-lowercase-letters>/<anything>.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/237160", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139133/" ] }
237,221
If I understand the Linux philosophy correctly, sudo should be used sparingly, and most operations should be performed as an under-privileged user. But that doesn't seem to make sense, since I'm always having to input sudo, whether I'm managing packages, editing config files, installing a program from source, or what have you. These are not even technical tasks, just things a regular user does. It reminds me very much of Windows' UAC, which people either disable or configure to not require a password (just a click). Furthermore, many people's Windows accounts are administrator accounts as well. Also, I've seen some people display commands that require sudo privileges without sudo. Do they have their system configured in such a way that sudo is not required?
You mentioned these system administration functions - managing packages, editing config files, installing a program from source - as things "a regular user does". In a typical multiuser system these are not ordinary user actions; a systems administrator would worry about them. Ordinary users (not "under-privileged") can then use the system without worrying about its upkeep. On a home system, yes, you end up having to administer the system as well as use it. Is it really such a hardship to use sudo? Remember that if it's just your system there's no reason why you can't either pop into a root shell (sudo -s - see this post for an overview of various means of getting a root shell) and/or configure sudo not to prompt for a password.
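For reference, a minimal sketch of such a sudoers rule (edit it with visudo; "alice" is a placeholder user name):
alice ALL=(ALL) NOPASSWD: ALL
This removes the password prompt for every command, so it trades away exactly the safety net sudo normally provides.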
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/237221", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139248/" ] }
237,236
I am trying to find out the files in which the following flag is used: TRACE_WANTED. However, I don't want the search to look inside .c and .h files. How can I issue a command that excludes *.c and *.h files from the find? Here is the typical command I am using:
find ./ -iname *.c -exec grep -iHrn TRACE_WANTED {} \;
GNU grep has the ability to exclude globs from its recursive searches built in. Try: grep -iHrn --exclude='*.c' --exclude='*.h' TRACE_WANTED This searches recursively starting from the current directory, just like your find command. It excludes all *.c and *.h files.
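If you'd rather keep the find-based approach from your question, a rough equivalent is to let find do the exclusion and drop grep's -r (the + batching runs far fewer grep processes than \;):
find . -type f ! -name '*.c' ! -name '*.h' -exec grep -iHn TRACE_WANTED {} +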
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/237236", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139258/" ] }
237,252
I have many CSV files in one directory which have various lengths. I'd like to put the second to last line of each file into one file. I tried something like tail -2 * | head -1 > file.txt , then realized why that doesn't work. I'm using BusyBox v1.19.4. Edit: I do see the similarity with some other questions, but this is different because it's about reading multiple files. The for loop in Tom Hunt's answer is what I needed and hadn't thought of before.
for i in *; do tail -2 "$i" | head -1; done >>file.txt That should be sh (and hence Busybox) compatible, but I don't have a non-bash available for testing ATM. Edited in accord with helpful comments.
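Restricted to the CSV files mentioned in the question (and with a single >, since the file is created once rather than appended to across runs), that becomes:
for i in *.csv; do tail -2 "$i" | head -1; done > file.txt
One caveat: for a file with only one line, tail -2 just prints that line, so such files contribute their first line rather than nothing.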
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/237252", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124698/" ] }
237,297
Let's say I have a variable foo, that is:
foo=`echo ab cd ef gh`
If you echo foo, you get:
$ echo $foo
ab cd ef gh
Now, I want to remove ef from $foo. What is the fastest way to do that?
Assuming your variable contains at least one occurrence of ef, POSIXly:
$ printf '%s\n' "${foo%ef*}${foo##*ef}"
ab cd  gh
In bash, ksh variants (excluding posh), zsh and yash, you can use:
$ printf '%s\n' "${foo/ef}"
to remove the first occurrence of ef, or "${foo//ef}" to remove all occurrences.
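A worked example in bash; note that removing just ef leaves behind the blank that preceded it, so including the space in the pattern gives a cleaner result:
$ foo='ab cd ef gh'
$ echo "${foo/ ef}"
ab cd gh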
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/237297", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139283/" ] }
237,409
I had a qemu virtual machine which crashed several times because the HDD on the hypervisor had no space left. This made me wonder: is there a possibility to set up logging/debugging for qemu virtual machines? I tried to start the virtual machine with the -D /tmp/qemu-debug-log option:
qemu-system-i386 -D /tmp/qemu-debug-log -monitor pty -device e1000,netdev=tap0 -netdev tap,id=tap0 -m 512M -display vnc=:1 -drive file=FreeBSD10.2
..but this did not even create a /tmp/qemu-debug-log file. In addition, qemu does not seem to write to /var/log/messages or the kernel ring buffer (dmesg). What are the best practices for enabling logging for qemu virtual machines?
The qemu command accepts a simple -D switch which sets the log file. So, for example, including -D ./log.txt will create "log.txt" in your working directory. You can access more logging/debugging options via the QEMU Monitor (e.g. qemu -monitor stdio).
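-D only chooses the destination file; which events are logged is selected with -d (run qemu-system-i386 -d help for the list of log items; availability of individual items such as guest_errors depends on your qemu build, so treat them as an assumption). A sketch combining both with the command from the question:
qemu-system-i386 -d guest_errors,unimp -D /tmp/qemu-debug-log -monitor pty ...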
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/237409", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33060/" ] }
237,412
So, I'm using a system with Fedora 20 installed, running a KDE 4.14.7 desktop. (I don't have root access, so please no complaints about why I don't just upgrade my distro.) I have just installed Eclipse Mars.1 (in my home directory), and it runs fine, but the tooltips I get (e.g. hover line error descriptions) appear as black-on-black (!) I've seen some similar complaints online, from several years ago, about this problem on Ubuntu systems, but that's not me... also, note that I'm not root, so I can't change any KDE or GTK system-wide defaults, only personal settings. What can I do? Notes: I've seen suggestions to use "gnome-color-chooser" and fiddle with its settings, but I don't have that. I've tried changing the KDE tooltip background, with no effect. I've tried the Eclipse Color Theme add-on, and with some themes the background is dark gray, or the foreground color is dark gray; but I still can't edit just that (and dark gray on black is not good enough either). None of these suggestions have worked either.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/237412", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34868/" ] }
237,443
I know how to pass arguments into a shell script. These arguments are declared in AWS Data Pipeline and passed through. This is what a shell script would look like:
firstarg=$1
secondarg=$2
How do I do this in Python? Is it the exact same?
This worked for me:
import sys
firstarg = sys.argv[1]
secondarg = sys.argv[2]
thirdarg = sys.argv[3]
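A slightly more defensive sketch of the same idea, since indexing sys.argv past the number of supplied arguments raises an IndexError:
import sys
if len(sys.argv) < 4:
    sys.exit('usage: {} first second third'.format(sys.argv[0]))
firstarg, secondarg, thirdarg = sys.argv[1:4]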
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/237443", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/133594/" ] }
237,460
I have a virtual private server on which I would like to run a web server while the server is connected to a VPN service. When the VPN connection to my provider is not up, I can do anything I want with this server: ssh, scp, http etc. Once openvpn is running and connected to the provider's VPN service, the server is not accessible by any means, and of course for a good reason. The picture is something like this:
My VPS ------------ +----------------+ / \ | | / Internet / 101.11.12.13 | 50.1.2.3|-----------------\ cloud /----<--- me@myhome | | / \ | 10.80.70.60| / \ +----------------+ \ \ : \_____________/ : : : : : : : : +------------------+ : | 10.80.70.61 | : | \ | : | \ | : | 175.41.42.43:1197|..............: | 175.41.42.43:yy| | ..... | | 175.41.42.43:xx| +------------------+
Legend: a plain line means no VPN connection present; a dotted (......) line means the VPN connection is established.
Things to clarify: All IP addresses and port numbers above and below are fictitious. The lines with port numbers xx, yy and anything in between are my assumption, not something that I know for a fact. I set up a cron job which runs every minute and pings another VPS of mine, running apache2. In the apache2 logs, I can see the origin IP address changing from 50.1.2.3 to 175.41.42.43 when the VPN is active, so the VPN is working fine. The OpenVPN logs show:
UDPv4 link remote: [AF_INET]175.41.42.43:1197
[ProviderName] Peer Connection Initiated with [AF_INET]175.41.42.43:1197
TUN/TAP device tun0 opened
do_ifconfig, tt->ipv6=0, tt->did_ifconfig_ipv6_setup=0
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 10.80.70.60 peer 10.80.70.61
At this point, I would like to be able to ssh from myhome to My VPS in the picture, while the VPN is up, using PuTTY. In the past, at one of my workplaces, I was given a very strange sequence to ssh into one extremely secure server, which had three @ signs in the string. So it was jumping from box to box, as I imagine, but since the jump boxes were running some version of Windows and a proprietary app on them, there was no visibility for me to see what was happening under the wraps, so I did not pay much attention. Now I am beginning to realize I may be in the same or a similar situation. Using the IP addresses and ports in the diagram and/or log snippet, can someone tell me how I can traverse through this tunnel and access my server?
You get locked out of your VPS because once the VPN service is up, your ssh packets get routed via the VPN, not via your VPS's public IP 50.1.2.3. Let's assume your server's:
Public IP is 50.1.2.3 (as per your example setup)
Public IP subnet is 50.1.2.0/24
Default gateway is probably 50.1.2.1
eth0 is the device to the gateway
Do the following using iproute2:
ip rule add table 128 from 50.1.2.3
ip route add table 128 to 50.1.2.0/24 dev eth0
ip route add table 128 default via 50.1.2.1
Then run your OpenVPN client config:
openvpn --config youropenvpn-configfile.ovpn &
You will then be able to ssh into your server while your server is connected to the VPN service. You would need to add the appropriate iptables filters to restrict access to your public IP to ssh:22 sessions only. To understand these commands in detail, see the related answers.
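For the ssh-only restriction mentioned at the end, one hedged iptables sketch (addresses as in the example; adapt before relying on it):
iptables -A INPUT -d 50.1.2.3 -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -d 50.1.2.3 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -d 50.1.2.3 -j DROP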
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/237460", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62499/" ] }
237,462
When running df /nfs/mount/point, I expect it to be faster than df | grep /nfs/mount/point, because it will not stat all other mount points. But strace shows that stat is executed on all NFS mounts, and only then is the output shown for the specific mount point. Is this a bug? Or is there any deeper reason for going over all mount points? I am seeing this with df version 8.4, on CentOS 6.6, with a 2.6.32 kernel. Sample output (with edits to remove company information):
$ strace df /home/user1/some/Directory
~
~
stat("/home/user2", {st_mode=S_IFDIR|0755, st_size=12288, ...}) = 0
stat("/home/user3", {st_mode=S_IFDIR|0777, st_size=20480, ...}) = 0
stat("/home/user4", {st_mode=S_IFDIR|0777, st_size=36864, ...}) = 0
stat("/home/user5", {st_mode=S_IFDIR|0755, st_size=663552, ...}) = 0
stat("/software/bin", {st_mode=S_IFDIR|0755, st_size=12288, ...}) = 0
stat("/scratch/space", {st_mode=S_IFDIR|0777, st_size=8192, ...}) = 0
stat("/eng/tools", {st_mode=S_IFDIR|0755, st_size=20480, ...}) = 0
~
~
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/237462", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54246/" ] }
237,513
The problem: I was trying to find shell scripting information without an Internet connection, i.e. via man pages. Specifically, I was looking at how to pass and use parameters. man bash does not contain all I need (shell scripting is missing). Incidentally, browsing the Internet, I found out that I want to read the Bash Reference Manual (and yes, I have already found all I need online). The same result could have been achieved by looking at the end of the man page, where one can find the following section:
SEE ALSO
Bash Reference Manual, Brian Fox and Chet Ramey
The Gnu Readline Library, Brian Fox and Chet Ramey
The Gnu History Library, Brian Fox and Chet Ramey
Portable Operating System Interface (POSIX) Part 2: Shell and Utilities, IEEE
sh(1), ksh(1), csh(1)
emacs(1), vi(1)
readline(3)
The first item, Bash Reference Manual, is actually what I want to read. How do I navigate to that reference? Where do I find the rest of the documentation? It looks like one always has to rely on the network for retrieving meaningful information. Please enlighten me with the man way. There must be something I am missing.
On a Debian system, the Bash Reference Manual is in the bash-doc package. It's probably similarly packaged in other distros. The Gnu Readline Library and The Gnu History Library manuals are both in the readline-doc package. You can read them with info, but IMO info itself is ghastly and almost unusable, with a terrible user interface - pinfo is a better alternative: apt-get install pinfo on Debian-based systems, e.g. pinfo bash or pinfo history. info navigation works in some inscrutable fashion; pinfo navigates info docs in a manner similar to a text-mode web browser like lynx. Amongst other benefits, forward and back keys actually work as expected. The pinfo project page, including access to source code, is at https://alioth.debian.org/projects/pinfo/. Portable Operating System Interface (POSIX) Part 2: Shell and Utilities is a for-sale PDF document available from http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6880751&filter%3DAND%28p_Publication_Number%3A6880749%29 (this is dated 1993) - but you can also find a 2007 draft PDF of the spec at http://www.open-std.org/jtc1/sc22/open/n4217.pdf and probably many other places. Newer versions may also be available; these were just the first I found with Google.
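Once a doc package is installed, you can list and open its files locally, e.g. (paths are typical for Debian, so treat them as an assumption):
$ dpkg -L bash-doc | grep -i html
$ info bash        # or: pinfo bash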
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/237513", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38879/" ] }
237,520
With slabtop I get the following output (50 lines):
$ slabtop -sc -o
 Active / Total Objects (% used) : 110864927 / 111473562 (99.5%)
 Active / Total Slabs (% used) : 2826375 / 2826375 (100.0%)
 Active / Total Caches (% used) : 83 / 121 (68.6%)
 Active / Total Size (% used) : 48207397.02K / 48498057.95K (99.4%)
 Minimum / Average / Maximum Object : 0.01K / 0.43K / 16.00K
 OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
10855309 10855309 100% 1.07K 374321 29 11978272K zfs_znode_cache
10893059 10893059 100% 0.85K 294407 37 9421024K dnode_t
412694 410756 99% 16.00K 206347 2 6603104K zio_buf_16384
12502304 12290713 98% 0.50K 390697 32 6251152K kmalloc-512
12776610 12743989 99% 0.29K 232302 55 3716832K dmu_buf_impl_t
10855309 10855309 100% 0.27K 374321 29 2994568K sa_cache
370776 370718 99% 8.00K 92694 4 2966208K kmalloc-8192
3269280 3028688 92% 0.32K 66720 49 1067520K taskstats
10898853 10898853 100% 0.08K 213703 51 854812K selinux_inode_security
12161344 12148434 99% 0.06K 190021 64 760084K kmalloc-64
3257058 3255733 99% 0.19K 77549 42 620392K dentry
5577558 5519367 98% 0.09K 132799 42 531196K kmalloc-96
92872 82421 88% 4.00K 11609 8 371488K kmalloc-4096
1962464 1953470 99% 0.12K 61327 32 245308K kmalloc-128
6021888 6021888 100% 0.03K 47046 128 188184K kmalloc-32
8356 8346 99% 12.00K 4178 2 133696K zio_buf_12288
1026675 1026675 100% 0.10K 26325 39 105300K blkdev_ioc
7955456 7955456 100% 0.01K 15538 512 62152K kmalloc-8
31744 23790 74% 1.00K 992 32 31744K kmalloc-1024
2040 2008 98% 10.00K 680 3 21760K zio_buf_10240
1332 1318 98% 14.00K 666 2 21312K zio_buf_14336
3150 3094 98% 5.00K 525 6 16800K zio_buf_5120
2050 1984 96% 6.00K 410 5 13120K zio_buf_6144
6480 5958 91% 2.00K 405 16 12960K kmalloc-2048
1596 1548 96% 7.00K 399 4 12768K zio_buf_7168
20075 20075 100% 0.58K 365 55 11680K inode_cache
7413 7279 98% 1.50K 353 21 11296K zio_buf_1536
15925 15818 99% 0.64K 325 49 10400K proc_inode_cache
3360 3252 96% 2.50K 280 12 8960K zio_buf_2560
2660 2574 96% 3.00K 266 10 8512K zio_buf_3072
8192 8192 100% 1.00K 256 32 8192K xfs_inode
2295 2208 96% 3.50K 255 9 8160K zio_buf_3584
67899 66971 98% 0.10K 1741 39 6964K buffer_head
27008 13057 48% 0.25K 844 32 6752K kmalloc-256
59904 59904 100% 0.11K 1664 36 6656K sysfs_dir_cache
2156 2019 93% 2.84K 196 11 6272K task_struct
2625 2497 95% 2.06K 175 15 5600K sighand_cache
9072 9005 99% 0.57K 324 28 5184K radix_tree_node
3584 3341 93% 1.12K 128 28 4096K signal_cache
19992 18791 93% 0.19K 476 42 3808K kmalloc-192
16095 15519 96% 0.21K 435 37 3480K vm_area_struct
124440 124440 100% 0.02K 732 170 2928K fsnotify_event_holder
1798 1305 72% 1.09K 62 29 1984K zio_cache
But when piping to tail I only get 23 lines:
$ slabtop -sc -o | tail -n+0
 Active / Total Objects (% used) : 110863370 / 111473331 (99.5%)
 Active / Total Slabs (% used) : 2826376 / 2826376 (100.0%)
 Active / Total Caches (% used) : 83 / 121 (68.6%)
 Active / Total Size (% used) : 48207346.77K / 48498099.95K (99.4%)
 Minimum / Average / Maximum Object : 0.01K / 0.43K / 16.00K
 OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
10855309 10855309 100% 1.07K 374321 29 11978272K zfs_znode_cache
10893059 10893059 100% 0.85K 294407 37 9421024K dnode_t
412694 410756 99% 16.00K 206347 2 6603104K zio_buf_16384
12502304 12290595 98% 0.50K 390697 32 6251152K kmalloc-512
12776610 12743989 99% 0.29K 232302 55 3716832K dmu_buf_impl_t
10855309 10855309 100% 0.27K 374321 29 2994568K sa_cache
370776 370718 99% 8.00K 92694 4 2966208K kmalloc-8192
3269280 3028688 92% 0.32K 66720 49 1067520K taskstats
10898853 10898853 100% 0.08K 213703 51 854812K selinux_inode_security
12161344 12148483 99% 0.06K 190021 64 760084K kmalloc-64
3257058 3255733 99% 0.19K 77549 42 620392K dentry
5577558 5519367 98% 0.09K 132799 42 531196K kmalloc-96
92872 82417 88% 4.00K 11609 8 371488K kmalloc-4096
1962464 1953501 99% 0.12K 61327 32 245308K kmalloc-128
6021888 6021888 100% 0.03K 47046 128 188184K kmalloc-32
8356 8346 99% 12.00K 4178 2 133696K zio_buf_12288
The same can be confirmed by piping to wc directly:
$ slabtop -sc -o | tail -n+0 | wc -l
23
Where is the rest of the output?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/237520", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81989/" ] }
237,531
If you issue the ls -all command some files are displayed with the timestamp containing the year without the time and others with the timestamp containing the time but not the year. Why does this happen? Is the timestamp representative of the time the file was created at?
By default, file timestamps are listed in abbreviated form, using a date like ‘Mar 30 2002’ for non-recent timestamps, and a date-without-year and time like ‘Mar 30 23:45’ for recent timestamps. This format can change depending on the current locale as detailed below. A timestamp is considered to be recent if it is less than six months old, and is not dated in the future. If a timestamp dated today is not listed in recent form, the timestamp is in the future, which means you probably have clock skew problems which may break programs like make that rely on file timestamps. Source: http://www.gnu.org/software/coreutils/manual/coreutils.html#Formatting-file-timestamps To illustrate: $ for i in {1..7}; do touch -d "$i months ago" file$i; done$ ls -ltotal 0-rw-r--r-- 1 terdon terdon 0 Sep 21 02:38 file1-rw-r--r-- 1 terdon terdon 0 Aug 21 02:38 file2-rw-r--r-- 1 terdon terdon 0 Jul 21 02:38 file3-rw-r--r-- 1 terdon terdon 0 Jun 21 02:38 file4-rw-r--r-- 1 terdon terdon 0 May 21 02:38 file5-rw-r--r-- 1 terdon terdon 0 Apr 21 2015 file6-rw-r--r-- 1 terdon terdon 0 Mar 21 2015 file7
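If you would rather always see complete timestamps regardless of age, GNU ls can be told so explicitly:
$ ls -l --full-time
or with a custom format, e.g. ls -l --time-style=+'%Y-%m-%d %H:%M'.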
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/237531", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/122887/" ] }
237,547
From what I have read, one way to speed up the emacs startup is to run emacs --daemon on login and then open files using emacsclient instead of emacs, which will access the running emacs server instead of creating a new emacs instance. However, I prefer not to put programs in my autostart unless absolutely necessary, as a way to speed up the login process. Is there a robust way to detect if an emacs server is running? This would let me write a simple script that would spawn the emacs server the first time I open a file with emacs.
#!/bin/sh
if emacs_daemon_is_not_running # <-- How do I do this?
then
    emacs --daemon
fi
emacsclient -c "$@"
You shouldn't even need to test if emacs is already running or not. emacsclient can start the emacs daemon if it's not already running. From emacsclient(1) : -a, --alternate-editor=EDITOR if the Emacs server is not running, run the specified editor instead. This can also be specified via the `ALTERNATE_EDITOR' environment variable. If the value of EDITOR is the empty string, run `emacs --daemon' to start Emacs in daemon mode, and try to connect to it. I use an alias, ge , for editing files, defined like this: alias ge="emacsclient -c -n --alternate-editor=\"\""
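With that option, the wrapper script from the question collapses to a sketch like this (the same idea as the alias, in script form):
#!/bin/sh
exec emacsclient -c --alternate-editor="" "$@"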
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/237547", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23960/" ] }
237,573
So this should be simple. I am trying to test a condition at the top of a script and bail out of the entire script when the condition fails, and I have two statements I want to execute when I bail. With a lone exit and no second statement, it's fine, but no matter how I add a second statement, I can't get it to completely exit. The implicit cosmetic rule is that this must be all on one line. I discovered this when this one-liner change to a script did not work as intended. The script kept going. First let me show what does work. The following line completely exits if the $USER variable isn't 'x', and that's good. I know this works because typing this line into a terminal window will close that terminal window (unless your user id really is 'x'), so it's really doing a top-level exit: [ $USER = 'x' ] || exit 1 So this is good and just as I want, except I want to echo a message before exiting; however, if I try to add the echo, the exit no longer occurs or rather it seems to occur "differently," like perhaps in a bash function context or something. The next line will not close your terminal, and this is bad. [ $USER = 'x' ] || (echo "Time to bail" ; exit 1) I thought maybe the semi-colon was getting eaten by echo, but no, the exit does seem to be getting hit. How do I know? I changed the code above and then echoed $? and I saw whatever value I put where you see "1" above. And of course I was viewing these values in a terminal window that I wanted to be closed, and it wasn't closed. The next variation also shows a second way to perform an echo and a second statement, but again the exact same behavior occurs when an exit is used: [ $USER = 'x' ] || (echo "Time to bail" && exit 1) I'm hoping someone here is going to make this all not only not strange but sensible-seeming. So is this not possible to do on one line? ( bash --version : GNU bash, version 4.3.30(1)-release )
What you're searching for is something like this:
[ "$USER" = "x" ] || { echo "Time to bail"; exit 1; }
The { list; } statement executes the commands in the given list in the current shell context. No subshell is created, unlike with the ( list ) notation. An exit call between parentheses will exit that subshell and not the parent shell itself. The examples in your question with the if-statement on one line or multiple lines are syntactically correct; I cannot reproduce that behavior. It doesn't matter how many lines there are; the if-statement never starts a subshell in its body. BTW: I added double quotes to the variable in the condition, because when the variable $USER is empty, the construct would expand to [ = "x" ], which is not valid. With double quotes it expands to [ "" = "x" ].
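The difference is easy to demonstrate: an exit between parentheses only ends the subshell, while a brace group runs in the current shell:
$ ( exit 7 ); echo "still here, subshell returned $?"
still here, subshell returned 7
$ { exit 7; }     # this really does end the current shell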
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/237573", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/107551/" ] }
237,594
I'm looking for a command to check from the terminal whether any GUI is installed on my Ubuntu. I couldn't find any satisfying answer.
dpkg -l|grep xserver will tell you if X11 (core system for most GUIs) is installed. To check if any desktops are installed, you will have to guess, as there are just too many. Try something like: dpkg -l|egrep -i "(kde|gnome|lxde|xfce|mint|unity|fluxbox|openbox)" | grep -v library
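On releases that use systemd (an assumption; older Ubuntu versions used Upstart), another quick heuristic is to check whether the machine boots into a graphical target at all:
$ systemctl get-default
graphical.target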
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/237594", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139500/" ] }
237,603
I'm prepending the Unix epoch with "nanosecond" precision to the output of my command as below:
$ command | while read line; do d=`date +%s%N`; echo $d $line; done > file
I looked around to find out how to turn "nanoseconds" into "milliseconds". For example, I followed the solution given here. So, I tried both suggested approaches:
$ command | while read line; do d=`echo $(($(date +%s%N)/1000000))`; echo $d $line; done > file
$ command | while read line; do d=`date +%s%N | cut -b1-13`; echo $d $line; done > file
However, in both cases when I insert the file into InfluxDB and query my database I get this:
time
1970-01-01T00:24:05.419982325Z
1970-01-01T00:24:05.419982344Z
1970-01-01T00:24:05.419982371Z
1970-01-01T00:24:05.419982378Z
1970-01-01T00:24:05.419982388Z
1970-01-01T00:24:05.419982401Z
Update: When I use the epoch with nanosecond accuracy (date +%s%N), I get this:
time
2015-10-21T08:59:59.978902683Z
2015-10-21T08:59:59.982615836Z
2015-10-21T08:59:59.983958069Z
2015-10-21T08:59:59.98805317Z
2015-10-21T08:59:59.99717678Z
2015-10-21T09:00:00.028624495Z
I'm expecting an output such as:
2015-10-21T09:12:10.001327Z
Please let me know if you have any solution.
You could use bc and printf:
printf "%0.f" "$(bc <<<"$(date +"%s.%N")*1000")"
This gives the number of milliseconds since January 1970. I didn't use the scale=n option of bc on purpose, because that would not round the value; instead it cuts the rest away (I know, it's pedantic). bc reads from a file or from standard input. <<< is a here string which expands the contents and supplies them as standard input to bc. This is given to printf to round the value. See this as an example:
$ d=$(date "+%s.%N")
$ echo $d; bc <<<"scale=0; ($d*1000)/1"; printf "%0.f" "$(bc <<<"$d*1000")"
1445423229.512731661 # plain date
1445423229512        # bc with scale
1445423229513        # bc/printf
In the loop it would then look like this:
command | while read line; do
    d=$(printf "%0.f" "$(bc <<<"$(date +"%s.%N")*1000")")
    echo "$d $line"
done >file
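If your date is the GNU implementation (which +%N already implies), there is also a shortcut worth knowing: a digit placed between % and N truncates the nanosecond field, so %3N yields milliseconds directly, without bc:
$ date +%s%3N
1445423229512    (example output; note this truncates rather than rounds)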
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/237603", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/136557/" ] }
237,605
I am not able to handle special characters. I have the following perl script:
while (@mapping_array[$i]) {
    chomp(@mapping_array[$i]);
    my @core = split(/ /, $mapping_array[$i]);
    @core[0] =~ tr/ //ds; ## Deleting blank spaces
    @core[1] =~ tr/ //ds;
    system("perl -pi -e 's/@core[0]/@core[1]/' $testproc ");
    print "@core[0] \n";
    print "@core[1] \n";
    $i++;
}
The issue is that my @core[0] variable could be a simple string like abc or a more complex one like TEST[1]. My script works as expected for abc, replacing it with the value of @core[1], but it fails if my @core[0] is TEST[1]. Using ? instead of / in the substitution operator doesn't help. How can I do this correctly?
Sounds like you're looking for quotemeta. As explained in perldoc -f quotemeta:
quotemeta EXPR
Returns the value of EXPR with all the ASCII non-"word" characters backslashed. (That is, all ASCII characters not matching "/[A-Za-z_0-9]/" will be preceded by a backslash in the returned string, regardless of any locale settings.) This is the internal function implementing the "\Q" escape in double-quoted strings.
So, your script would be (note that array elements should be specified as $foo[N], not @foo[N]):
chomp(@mapping_array);
while ($mapping_array[$i]) {
    my @core = split(/ /, $mapping_array[$i]);
    $core[0] =~ tr/ //ds; ## Deleting blank spaces
    $core[1] =~ tr/ //ds;
    my ($k, $l) = (quotemeta($core[0]), quotemeta($core[1]));
    system("perl -pi -e 's/$k/$l/' $testproc ");
    print "$core[0] \n$core[1] \n";
    $i++;
}
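To see what quotemeta makes of the troublesome value from the question:
$ perl -e 'print quotemeta("TEST[1]"), "\n"'
TEST\[1\]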
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/237605", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139503/" ] }
237,619
I just stumbled (again) over this:
# only in bash
NORM=$'\033[0m'
NORM=$'\e[0m'
# only in dash
NORM='\033[0m'
# only in bash and busybox
NORM=$(echo -en '\033[0m')
The goal is to include special characters in the string, not only for output using echo but also for piping into a CLI tool etc. In the specific use case above, using $(tput ...) is probably the best way, but I'm asking for a general escaping solution with minimal requirements for external tools but maximum compatibility. Normally, I help myself with conditions like [ -z "$BASH_VERSION" ], but: I didn't find an easy way to detect busybox yet; a normal variable assignment in 5 lines (using if/else/fi) looks like overkill; I prefer simple solutions.
What you want is "$(printf ...)". Stephane has already written an excellent expose of printf vs echo, more of an article than a mere answer, so I won't repeat the whole thing here. Keynotes pertinent to the current question are:

 - Stick to POSIX features and it is very portable, and
 - it is frequently a shell builtin, in which case you have no external calls or dependencies.

I will also add that it took me quite a while (okay, just a few weeks) to get around to switching from echo, because I was used to echo and thought printf would be complicated. (What are all those % signs about, huh?) As it turns out, printf is actually extremely simple and I don't bother with echo anymore for anything but fixed text with a newline at the end.

Printf Made Easy

There are vast numbers of options for printf. You can print numbers to specific decimal places of accuracy. You can print multiple fields, each with a specified fixed width (or a minimum width, or a maximum width). You can cause a shell string variable which contains the character sequences \t or \n to be printed with those character sequences interpreted as tabs and newlines. You can do all these things, and you should know they are possible so you can look them up when you need them, but in the majority of cases the following will be all you need to know:

 - printf takes as its first argument a string called the "format". The format string can specify how further arguments are to be handled (i.e. how they will be formatted).
 - Further arguments, if not referenced at all* within the format argument, are ignored. Since alphanumeric characters (and others) can be embedded in the format argument and will print as-is, it may look like printf is doing the same thing as echo -n but for some unknown reason ignoring all but the first argument. That's really not the case. For example, try printf some test text. In this example some is actually taken as the format, and since it doesn't contain anything special and doesn't tell printf what to do with the rest of the arguments, they are ignored and all you get printed is some.
 - % followed by a specific letter needs to be used within the format string (the first argument to printf) to specify what type of data the subsequent arguments contain. %s means "string" and is what you will use most often.
 - \n or \t within the format translate to newline and tab characters respectively.

That's really all you need to use printf productively. See the following code block for some very simple illustrative examples:

    $ var1="First"
    $ var2="Second"
    $ var3="Third"
    $ printf "$var1" "$var2" "$var3"   # WRONG
    First$   # Only the first arg is printed, without a trailing newline
    $
    $ printf '%s\n' "$var1"   # %s means that the next arg will be formatted as a literal string with any special characters printed exactly as-is.
    First
    $
    $ printf '%s' "$var1" "$var2" "$var3"   # When more args are included than the format calls for, the format string is reused. This example is equivalent to using '%s%s%s' as the format.
    FirstSecondThird$   # All three args were printed; no trailing newline.
    $
    $ printf '%s\t%s\t%s\n' "$var1" "$var2" "$var3"
    First   Second  Third   # Tab separation with trailing newline. This format is very explicit about what to do with three arguments.

Now see what happens if four are used:

    $ var4="Fourth"
    $ printf '%s\t%s\t%s\n' "$var1" "$var2" "$var3" "$var4"
    First   Second  Third   # The specified format is reused after the three expected args,
    Fourth                  # so this line has two trailing tabs.
    $
    $ printf '%s\n' "$var1" "$var2" "$var3"   # This format reuse can be used to advantage in printing a list, for example.
    First
    Second
    Third
    $
    $ printf '%s\t' "$var1" "$var2" "$var3" ; printf '\n'   # Here is a dual command that could have args added without changing the format string...
    First   Second  Third
    $ printf '%s\t' "$var1" "$var2" "$var3" "$var4" ; printf '\n'
    First   Second  Third   Fourth   # ...as you can see here.
    $   # It does print a trailing tab before the newline, however.

* Of course, if you include a single-argument format specifier sequence such as %s, your whole format string is reused as many times as needed to handle all arguments provided. See examples.
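Tying this back to the escaping question: a minimal sketch for storing an actual escape byte in a variable using only printf, which should work identically in bash, dash and busybox sh (POSIX printf interprets \ddd octal escapes in the format string, and the command substitution strips the trailing newline):

    NORM=$(printf '\033[0m')        # the variable now holds the real ESC byte
    printf '%s' "$NORM" | od -c     # inspect the stored bytes to verify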
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/237619", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55508/" ] }
237,635
I have a question: is it possible to get the header of a website by using telnet? The website looks like domain.name.server.com/~USER (just an example), and I want to get its header via telnet.

    telnet domain.name.server.com/~USER 80    <-- doesn't work
    telnet domain.name.server.com 80          <-- works, but I need to get ~USER

Is there any possibility to do this?
Use telnet domain.name.server.com 80 then

    HEAD /~USER HTTP/1.1
    Host: domain.name.server.com

(Then you have to hit Enter once more.) Now it should show you the header of this page. For a real-life example:

    $ telnet unix.stackexchange.com 80
    Trying 198.252.206.16...
    Connected to unix.stackexchange.com.
    Escape character is '^]'.
    HEAD /questions/237635/using-telnet-to-get-website-header HTTP/1.1
    Host: unix.stackexchange.com

    HTTP/1.1 200 OK
    Cache-Control: public, no-cache="Set-Cookie", max-age=60
    Content-Length: 70679
    Content-Type: text/html; charset=utf-8
    Expires: Wed, 21 Oct 2015 19:27:43 GMT
    Last-Modified: Wed, 21 Oct 2015 19:26:43 GMT
    Vary: *
    X-Frame-Options: SAMEORIGIN
    X-Request-Guid: dbf9d0f6-0ca4-423f-98f0-4cdf2bf51bf1
    Set-Cookie: prov=08886524-c640-40ad-a0ee-246db3219228; domain=.stackexchange.com; expires=Fri, 01-Jan-2055 00:00:00 GMT; path=/; HttpOnly
    Date: Wed, 21 Oct 2015 19:26:43 GMT
    Connection closed by foreign host.
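If you'd rather script this than type into an interactive telnet session, a non-interactive sketch with printf and nc (netcat) behaves the same way — the host and path below are the placeholders from the question:

    printf 'HEAD /~USER HTTP/1.1\r\nHost: domain.name.server.com\r\nConnection: close\r\n\r\n' \
        | nc domain.name.server.com 80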
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/237635", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139525/" ] }
237,636
I'm trying to run ADB on a Linux server with multiple users, where I am not root (to play with my Android emulator). The adb daemon writes its logs to the file /tmp/adb.log, which unfortunately seems to be hard-coded into ADB, and this situation is not going to change. So adb is failing to run, giving the obvious error: cannot open '/tmp/adb.log': Permission denied. This file was created by another user and /tmp has the sticky bit on. If I start adb with adb nodaemon server, making it write to stdout, no errors occur (I also set its port to a unique value to avoid conflicts). My question is: is there some way to make ADB write to a file other than /tmp/adb.log? More generally, is there a way to create a sort of process-specific symlink? I want to redirect all file accesses to /tmp/adb.log to, say, a file ~/tmp/adb.log. Again, I am not root on the server, so chroot, mount -o rbind and chmod are not valid options. If possible, I'd like not to modify the ADB sources, but if there is no other solution, I'll do that. P.S. For the specific ADB case I can resort to running adb nodaemon server with nohup and output redirection, but the general question is still relevant.
LD_PRELOAD isn't too difficult, and you don't need to be root. Interpose your own C routine which is called instead of the real open() in the C library. Your routine checks if the file to open is "/tmp/adb.log" and calls the real open with a different filename. Here's your shim_open.c:

    /*
     * capture calls to a routine and replace with your code
     * gcc -Wall -O2 -fpic -shared -ldl -o shim_open.so shim_open.c
     * LD_PRELOAD=/.../shim_open.so cat /tmp/adb.log
     */
    #define _FCNTL_H 1      /* hack for open() prototype */
    #define _GNU_SOURCE     /* needed to get RTLD_NEXT defined in dlfcn.h */
    #include <stdlib.h>
    #include <stdio.h>
    #include <string.h>
    #include <dlfcn.h>

    #define OLDNAME "/tmp/adb.log"
    #define NEWNAME "/tmp/myadb.log"

    int open(const char *pathname, int flags, mode_t mode)
    {
        static int (*real_open)(const char *pathname, int flags, mode_t mode) = NULL;

        if (!real_open) {
            real_open = dlsym(RTLD_NEXT, "open");
            char *error = dlerror();
            if (error != NULL) {
                fprintf(stderr, "%s\n", error);
                exit(1);
            }
        }
        if (strcmp(pathname, OLDNAME) == 0)
            pathname = NEWNAME;
        fprintf(stderr, "opening: %s\n", pathname);
        return real_open(pathname, flags, mode);
    }

Compile it with gcc -Wall -O2 -fpic -shared -ldl -o shim_open.so shim_open.c and test it by putting something in /tmp/myadb.log and running LD_PRELOAD=/.../shim_open.so cat /tmp/adb.log. Then try the LD_PRELOAD on adb.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/237636", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/104533/" ] }
237,638
So I goofed when using sshfs, and the folder I was using as a mountpoint for the server has been borked. The server wasn't unmounted correctly (I think due to a network dropout). Consequently, when I ls my /Volumes/, where I had originally made the mountpoint folder, I now get an I/O error:

    joehealey@Joes-MacBook-Pro:/Volumes$ ls -al
    ls: mountpoint: Input/output error
    total 24
    drwxrwxrwt@  7 root      admin   238 21 Oct 13:08 ./
    drwxr-xr-x  37 root      wheel  1326  3 Oct 12:38 ../
    -rw-r--r--@  1 joehealey admin  6148 22 Sep  2014 .DS_Store
    drwxr-xr-x   1 joehealey staff  8192 28 Jul 20:04 BOOTCAMP/
    lrwxr-xr-x   1 root      admin     1 15 Oct 08:52 Macintosh HD@ -> /
    drwxrwxrwx   0 root      wheel     0 21 Oct 13:08 MobileBackups/
    joehealey@Joes-MacBook-Pro:/Volumes$ mkdir mountpoint
    mkdir: mountpoint: File exists
    joehealey@Joes-MacBook-Pro:/Volumes$

I've seen similar problems in threads such as this, where the suggestions are to nuke the whole disk etc. I'm not so concerned by this that I'm prepared to go that far, so I'm just wondering if there is any way to force-remove and resolve this specific instance.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/237638", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139523/" ] }
237,728
Let's say I have a directory with files a1, a2, a3, b1, b2, b3. I only want to match files that start with a but don't contain 3 . I tried ls -I "*3" *a* but it returns a1 a2 a3 , even though I don't want it to match a3 . Is this possible with ls ?
Just:

    shopt -s extglob
    ls a!(*3*)

shopt -s extglob activates extended globbing. a matches the leading a, !() negates the match inside the parentheses, and *3* is a 3 with anything before or after it.

    $ touch 1 2 3 a1 a2 a3 b1 b2 b3 aa1 aa2 aa3 a2a a3a
    $ ls a!(*3*)
    a1  a2  a2a  aa1  aa2
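If extglob isn't available (e.g. in a plain POSIX sh), a sketch with GNU or BSD find selects the same files:

    find . -maxdepth 1 -name 'a*' ! -name '*3*'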
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/237728", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3120/" ] }
237,735
I'm passing data from STDIN into an array using read like so:

    prompt$ cat myfile
    a "bc" "d e" f
    prompt$ read -a arr < myfile

But read doesn't appear to pay attention to the quoted strings and gives me an array of 5 elements:

    prompt$ echo ${#arr[@]}
    5
    prompt$ echo ${arr[@]:0}
    a "bc" "d e" f
    prompt$ echo ${arr[2]}
    "d
    prompt$ echo ${arr[3]}
    e"

I'm using the default IFS setting: \t\n in bash. There are several ways to accomplish the task using different tools, but I'm surprised that read doesn't support quoted strings. Any other suggestions for getting a delimited list with quotes into an array?
I can't think of a very good way to do what you are asking for, but, if you know that your input file is going to contain space-separated tokens that are valid syntax for bash, then something like the following could work:

    declare -a arr="($(<myfile))"
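For illustration, a quick check of the result against the file from the question — with the caution that the file's contents go through full shell parsing here (including command substitution), so this should only be used on trusted input:

    $ cat myfile
    a "bc" "d e" f
    $ declare -a arr="($(<myfile))"
    $ echo "${#arr[@]}"
    4
    $ printf '%s\n' "${arr[2]}"
    d e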
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/237735", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139579/" ] }
237,778
According to the manual, I should be able to specify TLS version 1.2 when using wget. When I try, it fails:

    $ wget https://site --no-check-certificate --secure-protocol=TLSv1_2
    wget: --secure-protocol: Invalid value ‘TLSv1_2’.

If I use wget https://site --no-check-certificate --secure-protocol=TLSv1, it works just fine. Version information:

    $ wget --version
    GNU Wget 1.15 built on linux-gnu.

    +digest +https +ipv6 +iri +large-file +nls +ntlm +opie +ssl/openssl

    Wgetrc:
        /etc/wgetrc (system)
    Locale: /usr/share/locale
    Compile: gcc -DHAVE_CONFIG_H -DSYSTEM_WGETRC="/etc/wgetrc"
        -DLOCALEDIR="/usr/share/locale" -I. -I../../src -I../lib
        -I../../lib -D_FORTIFY_SOURCE=2 -I/usr/include -g -O2
        -fstack-protector --param=ssp-buffer-size=4 -Wformat
        -Werror=format-security -DNO_SSLv2 -D_FILE_OFFSET_BITS=64 -g -Wall
    Link: gcc -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat
        -Werror=format-security -DNO_SSLv2 -D_FILE_OFFSET_BITS=64 -g -Wall
        -Wl,-Bsymbolic-functions -Wl,-z,relro -L/usr/lib -lssl -lcrypto
        -ldl -lz -lidn -luuid ftp-opie.o openssl.o http-ntlm.o ../lib/libgnu.a

    Copyright (C) 2011 Free Software Foundation, Inc.
    License GPLv3+: GNU GPL version 3 or later
    <http://www.gnu.org/licenses/gpl.html>.
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.

    Originally written by Hrvoje Niksic <[email protected]>.
    Please send bug reports and questions to <[email protected]>.
As written on the project page of wget, the secure protocols TLSv1_1 and TLSv1_2 were only added in wget version 1.16.1; your wget 1.15 does not support them. Resources: http://savannah.gnu.org/forum/forum.php?forum_id=8159
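Until you can upgrade wget, a workaround sketch with curl, whose -k flag mirrors --no-check-certificate:

    curl --tlsv1.2 -k https://site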
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/237778", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40449/" ] }
237,817
I have a process that receives a video file (RAW) and transcodes it with FFMPEG, generating three resultant files (at different resolutions). I'm using a distributed task queue system (Celery) to run each FFMPEG process in a different asynchronous task. The three tasks run according to the flow:

 1. Convert video
 2. Upload result to a bucket in the cloud
 3. Delete result

And a last task uploads the RAW video (used for transcoding) to the bucket and deletes it. If I start the three tasks asynchronously and delete the RAW file just after, will the tasks (that are using the RAW file) be interrupted by deleting the file? PS: I'm assuming that the RAW file is loaded into memory and opened three times when the transcoding tasks are started.
The assumption that the complete RAW file is in memory is not true. Normally, when a file is opened the process gets a file descriptor which can be used to read/write the file. When a file that is still open in some process is deleted, the file is not actually removed instantly; it is only deleted once no process holds a handle (file descriptor) to it anymore. You can use lsof to see if the file still has handles; when you delete such a file it is often listed with the text (deleted) appended to the line. Disk space is also not reclaimed while a deleted file remains open, so it is safe to keep using the file as long as it is open. When the deleted file no longer has active file descriptors, the filesystem will reclaim the consumed disk space.
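A minimal shell demonstration of this behaviour (the filename is arbitrary):

    $ echo 'some data' > raw.bin
    $ exec 3< raw.bin    # hold an open file descriptor on it
    $ rm raw.bin         # directory entry gone, data still reachable
    $ cat <&3            # reading via the old descriptor still works
    some data
    $ exec 3<&-          # closing the last descriptor lets the filesystem reclaim the space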
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/237817", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87428/" ] }
237,854
When you paste some command in terminal, it will sometimes automatically execute the command (just like if the "Enter" key was pressed), sometimes not. I've been using Linux for ages, pasted thousands of commands in various consoles on many distros, and I am still unable to tell if the command I'm about to paste will be executed automatically or not. What triggers this behavior?
It's the return character in the text you are copying that's triggering the automatic execution. Let's take a different example; copy these lines all at once and paste them into your terminal:

    echo "Hello";
    echo "World";

If you look in your terminal, you will not see this:

    $ echo "Hello";
    echo "World";

You will see this (there may also be a line saying World):

    $ echo "Hello";
    Hello
    $ echo "World";

Instead of waiting for all the input to be pasted in, the first line executes (and for the same reason, the second line may or may not do so as well). This is because there is a RETURN character between the two lines. When you press the ENTER key on your keyboard, all you are doing is sending the character with the ASCII value of 13. That character is detected immediately by your terminal, which knows it has special instructions to execute what you have typed so far. When stored on your computer or printed on your screen, the RETURN character is just like any other letter of the alphabet, number, or symbol. This character can be deleted with backspace, or copied to the clipboard just like any other regular character. The only difference is that when your browser sees the character, it knows that instead of printing a visible character, it should treat it differently, and has special instructions to move the next set of text down to the next line. The RETURN character and the SPACE character (ASCII 32), along with a few other seldom-used characters, are known as "non-printing characters" for this reason. Sometimes when you copy text from a website, it's difficult to copy only the text and not the return at the end (and this is often made more difficult by the styling on the page).

Experiment time! Below you will find two commands that will illustrate the problem, and that you can "practice" on. Start your cursor right before echo and drag until the highlight is right before the arrow:

    echo "Wait for my signal...";<- End cursor here right after the semicolon

And now try the second command. Start your cursor right before echo and drag down until the cursor is on the second line, right in front of the <- arrow. Copy it, and then paste it into your terminal:

    echo 'Go go go!';
    <- End cursor here right before the arrow

Depending on your browser, it may not even be visible that the text you selected went over two lines. But when you paste it into the terminal, you will find that it executes the line, because it found a RETURN character in the copied text.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/237854", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54995/" ] }
237,861
I am trying to set up a LAN chat between two users on a Linux server, where neither of them is root. I have tried these two methods:

    write account_name          (on both computers)

and:

    nc -l port_number           (on the first computer)
    nc IP_address port_number   (on the second computer)

But the problem is that whenever I am typing something and the person on the other side hits Enter, it breaks my line too. E.g. I am typing "This is just a sim[Enter]ple text", and the other person's Enter breaks my line. Is there a way to fix that? Or another way I can set up this chat?
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/237861", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139074/" ] }
237,877
I have this code in a tool I am currently building:

    while [ $# -gt 0 ]; do
        case "$1" in
            --var1=*) var1="${1#*=}" ;;
            --var2=*) var1="${1#*=}" ;;
            --var3=*) var1="${1#*=}" ;;
            *) printf "***************************\n * Error: Invalid argument.*\n ***************************\n"
        esac
        shift
    done

I have many options to add, but five of my options should be saved as arrays. So if I call the tool, let's say from the shell, using something like this:

    ./tool --var1="2" --var1="3" --var1="4" --var1="5" --var2="6" --var3="7"

How can I save the value of var1 as an array? Is that possible? And, if so, what is the best way to deal with these arrays in terms of efficiency if I have too many of them?
If on Linux (with the util-linux utilities including getopt installed, or the one from busybox), you can do:

    declare -A opt_spec
    var1=() var2=() var4=false
    unset var3
    opt_spec=(
      [opt1:]='var1()'  # opt with argument, array variable
      [opt2:]='var2()'  # ditto
      [opt3:]='var3'    # opt with argument, scalar variable
      [opt4]='var4'     # boolean opt without argument
    )
    parsed_opts=$(
      IFS=, getopt -o + -l "${!opt_spec[*]}" -- "$@"
    ) || exit
    eval "set -- $parsed_opts"
    while [ "$#" -gt 0 ]; do
      o=$1; shift
      case $o in
        (--) break;;
        (--*)
          o=${o#--}
          if ((${opt_spec[$o]+1})); then
            # opt without argument
            eval "${opt_spec[$o]}=true"
          else
            o=$o:
            case "${opt_spec[$o]}" in
              (*'()') eval "${opt_spec[$o]%??}+=(\"\$1\")";;
              (*)     eval "${opt_spec[$o]}=\$1"
            esac
            shift
          fi
      esac
    done
    echo "var1: ${var1[@]}"

That way, you can call your script as:

    my-script --opt1=foo --opt2 bar --opt4 -- whatever

And getopt will do the hard work of parsing it, handling -- and abbreviations for you. Alternatively, you could rely on the type of the variable instead of specifying it in your $opt_spec associative array definition:

    declare -A opt_spec
    var1=() var2=() var4=false
    unset var3
    opt_spec=(
      [opt1:]=var1  # opt with argument
      [opt2:]=var2  # ditto
      [opt3:]=var3  # ditto
      [opt4]=var4   # boolean opt without argument
    )
    parsed_opts=$(
      IFS=, getopt -o + -l "${!opt_spec[*]}" -- "$@"
    ) || exit
    eval "set -- $parsed_opts"
    while [ "$#" -gt 0 ]; do
      o=$1; shift
      case $o in
        (--) break;;
        (--*)
          o=${o#--}
          if ((${opt_spec[$o]+1})); then
            # opt without argument
            eval "${opt_spec[$o]}=true"
          else
            o=$o:
            case $(declare -p "${opt_spec[$o]}" 2> /dev/null) in
              ("declare -a"*) eval "${opt_spec[$o]}+=(\"\$1\")";;
              (*)             eval "${opt_spec[$o]}=\$1"
            esac
            shift
          fi
      esac
    done
    echo "var1: ${var1[@]}"

You can add short options like:

    declare -A long_opt_spec short_opt_spec
    var1=() var2=() var4=false
    unset var3
    long_opt_spec=(
      [opt1:]=var1  # opt with argument
      [opt2:]=var2  # ditto
      [opt3:]=var3  # ditto
      [opt4]=var4   # boolean opt without argument
    )
    short_opt_spec=(
      [a:]=var1
      [b:]=var2
      [c]=var3
      [d]=var4
    )
    parsed_opts=$(
      IFS=; short_opts="${!short_opt_spec[*]}"
      IFS=, getopt -o "+$short_opts" -l "${!long_opt_spec[*]}" -- "$@"
    ) || exit
    eval "set -- $parsed_opts"
    while [ "$#" -gt 0 ]; do
      o=$1; shift
      case $o in
        (--) break;;
        (--*)
          o=${o#--}
          if ((${long_opt_spec[$o]+1})); then
            # opt without argument
            eval "${long_opt_spec[$o]}=true"
          else
            o=$o:
            case $(declare -p "${long_opt_spec[$o]}" 2> /dev/null) in
              ("declare -a"*) eval "${long_opt_spec[$o]}+=(\"\$1\")";;
              (*)             eval "${long_opt_spec[$o]}=\$1"
            esac
            shift
          fi;;
        (-*)
          o=${o#-}
          if ((${short_opt_spec[$o]+1})); then
            # opt without argument
            eval "${short_opt_spec[$o]}=true"
          else
            o=$o:
            case $(declare -p "${short_opt_spec[$o]}" 2> /dev/null) in
              ("declare -a"*) eval "${short_opt_spec[$o]}+=(\"\$1\")";;
              (*)             eval "${short_opt_spec[$o]}=\$1"
            esac
            shift
          fi
      esac
    done
    echo "var1: ${var1[@]}"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/237877", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/137406/" ] }
237,896
I have two scripts: running_script script_one I need to get the PID for the/any instances of running_script running under a username, and then pkill to stop the running_script and daughter processes. We expected something like: ps -fu will | grep running_script to find the running_script process(es). However checking the PID against the ps command output show that the cmd as: "bin/bash" for the running_script process. running_script runs as a detached process( & operator) which starts script_one . I print the PID-s at the start to compare with ps command's output. running_script &echo $! -- $BASHPID In the real use-case, we won't have PIDs for some running_script processes running. Also, script_one may or may not be a detached process from the running_script parent. For the purposes of the exercise, script_one just does loops. while [ true ]do echo " $0 - 35sec ..." sleep 35done However that's just the example. The requirement is to get PID for the parent, running_script process(es). Is there an option on ps or another command that can give me the name of the script file and the PID? Or a method to set a name that can be searched for? In the final use-case, there could be several instances of running_script so picking them out by name seems the best option to date. example I thought it might help to show what the ps command shows, since most responses appear to think that's going to work. I ran this example just a while ago. $ ./running_script &$ echo $! - $BASHPID9047 - 3261$ ps -ef | grep will UID PID PPID C STIME TTY TIME CMD will 8862 2823 0 22:48 ? 00:00:01 gnome-terminal will 8868 8862 0 22:48 ? 00:00:00 gnome-pty-helper will 8869 8862 0 22:48 pts/4 00:00:00 bash* will 9047 3261 0 22:55 pts/2 00:00:00 /bin/bash will 9049 9047 0 22:55 pts/2 00:00:00 /bin/bash will 9059 2886 0 22:56 pts/0 00:00:00 man pgrep will 9070 9059 0 22:56 pts/0 00:00:00 pager -s will 10228 9049 0 23:31 pts/2 00:00:00 sleep 35 will 10232 8869 0 23:31 pts/4 00:00:00 ps -ef will 10233 8869 0 23:31 pts/4 00:00:00 grep --colour=auto william I have marked PID #9047, is simply shows: - will 9047 3261 0 22:55 pts/2 00:00:00 /bin/bash Is there something like a a " jobname " attribute I could set on linux?
Try

    pgrep -f running_script

The -f option matches against the whole command line, not just the process name.
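To narrow the match to one user as in the question, pgrep and pkill also take a -u option — a sketch using the names from the question:

    pgrep -u will -f running_script    # list the matching PIDs
    pkill -u will -f running_script    # send SIGTERM to those processes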
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/237896", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79098/" ] }
237,914
I have a 2013 Retina MacBook Pro, and I really want to install Debian on it. I have the know-how and have had at least three Debian systems before this. I am very knowledgable with the command-line and Linux's inner workings, and partitioning isn't an issue for me. So, I just have one question before I install Debian. My dad has warned me that Linux, in particular, can make laptop batteries explode and/or ruin hardware on MacBooks. I find this very strange, but don't really have any research to disprove it. I can't seem to find anything about it on the Internet, so can someone help me out?
Laptop batteries typically have onboard firmware to control safe charging & discharging of the battery, report battery charge level to the OS, and prevent thermal runaway , which is what will cause an Li-ion battery to explode (or more accurately, catch fire). Most modern ones also contain mechanical failsafes to prevent such fires & explosions. This firmware is stored on the battery, separate from the OS. While it can be updated from the OS (although this depends on the battery & laptop), it's not something that is altered when installing a new OS or something that is typically ever tampered with unless done so by the user running a battery firmware update. The only thing changing OS will affect is the load on the system & the hardware drivers used, not the safety features of the battery. Load on the system in and of itself will not normally cause issues with the battery other than faster discharging. Interestingly, according to this forbes article , there was actually a vulnerability in Apple laptops (running OSX, not Linux) that could do nasty things to the firmware on the batteries - perhaps your Dad has read something like that which is why he seems to think the OS can do this? (It's more than likely been fixed since 2011 when the article was written). EDIT - in conclusion, aside from possible attack vectors for battery firmware hacks, the choice of OS alone cannot cause a battery to explode.
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/237914", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/133466/" ] }
237,917
I have a Bluetooth headset, a Suicen AX610. My system is Debian 8.1. I want to use the headset, but I can't: Debian 8.1 can find the headset, but it can't pair with it. I try to set Connection to on, but it does not work. I haven't installed any package, because I looked at many websites and each one says to set up Bluetooth in a different way. For instance, install a single package:

    $ sudo apt-get install bluez-tools

or install a lot of packages:

    $ sudo apt-get install bluez-audio pavucontrol bluez-firmware bluez-tools

or:

    $ sudo apt-get install bluez-utils bluez-gnome bluez-alsa

Can anyone help me with this issue? My bluetooth folder:

    $ ls /etc/bluetooth/
    input.conf  main.conf  network.conf  proximity.conf

=========================================================================

I solved this problem, but my solution isn't good and I can't explain it; I used the trial and error method.

Install a lot of packages:

    $ sudo apt-get install gnome-bluetooth pulseaudio pulseaudio-module-bluetooth pavucontrol blueman bluetooth bluez

Edit /etc/default/bluetooth to enable the following:

    HID2HCI_ENABLED=1
    HID2HCI_UNDO=1

Get ???

    $ hcitool con
    Connections:
        < ACL 00:11:67:00:52:55 handle 2 state 1 lm MASTER AUTH ENCRYPT

Create .asoundrc:

    $ sudo pico ~/.asoundrc

    pcm.bluetooth {
        type bluetooth
        device "00:11:67:00:52:55"
        profile "auto"
    }
    pcm.pulse { type pulse }
    ctl.pulse { type pulse }
    pcm.!default { type pulse }
    ctl.!default { type pulse }

Reboot the system, pair via the system tray, run

    $ sudo killall pulseaudio

and pair again.
You might just need to delete the pairing, then in a terminal enter

    sudo pactl load-module module-bluetooth-discover

then pair with the headset.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/237917", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139706/" ] }
237,939
I need a bash script to source a file that is encrypted, as the file being sourced contains sensitive information. I would like the script to prompt for the GPG passphrase and then run, sourcing the encrypted file. I cannot figure out how to do this though. There must be user input for the passphrase, as I don't want to store a key on the server with the encrypted file. Looking into some different methods, I do not want to decrypt the file, source the non-encrypted file, then delete it after. I would like to reduce the chance of leaving an non-encrypted file behind, if something went wrong in the script. Is there a way to get the GPG output of a file to source it this way? Possibly collecting STDOUT and parsing it (if GPG can even output the contents this way). Also, if there is another way to encrypt a file that shell scripts can use, I am not aware of it, but open to other possibilities.
You can do this using process substitution:

    . <(gpg -qd "$encrypted_filename")

Here's an example:

    % cat > to-source <<< 'echo "Hello"'
    % . ./to-source
    Hello
    % gpg -e -r [email protected] to-source
    % . <(gpg -qd to-source.gpg)
    Hello

gpg -d does not persist the file to disk, it just outputs it to stdout. <() uses a FIFO, which also does not result in the actual file data being written to disk. In bash, . and source are synonyms, but . is more portable (it's part of POSIX), so I've used it here. Note, however, that <() is not as portable -- as far as I know it's only supported in bash, zsh, ksh88, and ksh93. pdksh and mksh have coprocesses which can have the same effect.
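On shells without process substitution, a roughly equivalent POSIX sketch — with the same property that the decrypted text passes through the shell rather than the disk, and the same caveat that the file's contents are executed in the current shell:

    eval "$(gpg -qd "$encrypted_filename")"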
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/237939", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34433/" ] }
237,965
Okay, it's easy to create an SSH key pair with ssh-keygen, but how do I generate with ssh-keygen an SSH pair which allows me to use AES-256-CBC? The default is always AES-128-CBC. I already tried different parameters, like:

    ssh-keygen -b 4096 -t rsa -Z aes-256-cbc

but they didn't work. Any idea how to do so?
You do not generate the key used by aes when you use ssh-keygen . Since aes is a symmetric cipher , its keys do not come in pairs. Both ends of the communication use the same key. The key generated by ssh-keygen uses public key cryptography for authentication. From the ssh-keygen manual: ssh-keygen generates, manages and converts authentication keys for ssh(1). ssh-keygen can create RSA keys for use by SSH protocol version 1 and DSA, ECDSA, Ed25519 or RSA keys for use by SSH protocol version 2. From the ssh manual: Public key authentication works as follows: The scheme is based on public-key cryptography, using cryptosystems where encryption and decryption are done using separate keys, and it is unfeasible to derive the decryption key from the encryption key. The idea is that each user creates a public/private key pair for authentication purposes. The server knows the public key, and only the user knows the private key. ssh implements public key authentication protocol automatically, using one of the DSA, ECDSA, Ed25519 or RSA algorithms. The problem with public key cryptography is that it is quite slow. Symmetric key cryptography is much faster and is used by ssh for the actual data transfer. The key used for the symmetric cryptography is generated on the fly after the connection was established (quoting from the sshd manual): For protocol 2, forward security is provided through a Diffie-Hellman key agreement. This key agreement results in a shared session key. The rest of the session is encrypted using a symmetric cipher, currently 128-bit AES, Blowfish, 3DES, CAST128, Arcfour, 192-bit AES, or 256-bit AES. The client selects the encryption algorithm to use from those offered by the server. Additionally, session integrity is provided through a cryptographic message authentication code (hmac-md5, hmac-sha1, umac-64, umac-128, hmac-ripemd160, hmac-sha2-256 or hmac-sha2-512). If you wish to use aes256-cbc you need to specify it on the command line using the -c option, in its most basic form this would look like this: $ ssh -c aes256-cbc user@host You can also specify your preferred selection of ciphers in ssh_config , using a comma-separated list. Tinkering with the defaults, is, however, not recommended since this is best left to the experts. There are lots of considerations and years of experience that went into the choice of defaults by the OpenSSH developers.
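To see which symmetric ciphers your OpenSSH client actually supports before picking one on the command line (the -Q query flag is available in OpenSSH 6.3 and later):

    ssh -Q cipher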
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/237965", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139737/" ] }
238,055
Is there something like: mdadm --verify <device> Or similar command, which would read all sectors of all drives of a Software RAID array in any mdadm implemented RAID to verify the array is doing just fine? Please include important steps like the need of un-mounting the array if applicable.
You can do the following:

    echo check > /sys/block/mdX/md/sync_action

This will force the MD subsystem to perform a check of /dev/mdX. This is what checkarray eventually does, after a number of extra checks; the above also works on systems without such a utility. Note that with a mounted filesystem the check nearly always reports a number of inconsistent blocks. Remember to unmount the filesystem first, if possible, to avoid those inconsistencies. Note that the above command can be particularly useful for newly created arrays, which checkarray skips.
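To follow the check and inspect its outcome, a sketch using the standard md proc/sysfs interfaces (mdX as above):

    cat /proc/mdstat                            # shows progress while the check runs
    cat /sys/block/mdX/md/mismatch_cnt          # non-zero after the check means inconsistencies were found
    echo idle > /sys/block/mdX/md/sync_action   # aborts a running check if needed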
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/238055", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126755/" ] }
238,080
Is there a concise way of testing for array support by the local Bourne-like shell at command line ? This is always possible: $ arr=(0 1 2 3);if [ "${arr[2]}" != 2 ];then echo "No array support";fi or testing for $SHELL and shell version: $ eval $(echo "$SHELL --version") | grep version and then reading the man page, assuming I have access to it. (Even there, writing from /bin/bash , I am assuming that all Bourne-like shells admit the long option --version , when that breaks for ksh for instance .) I am looking for a simple test that could be unattended and incorporated in a Usage section at beginning of script or even before calling it.
Assuming you want to restrict to Bourne-like shells (many other shells like csh, tcsh, rc, es or fish support arrays, but writing a script compatible at the same time with Bourne-like shells and those is tricky and generally pointless, as they are interpreters for completely different and incompatible languages), note that there are significant differences between implementations.

The Bourne-like shells that support arrays (in chronological order of when support was added) are:

ksh88 (the last evolution of the original ksh, the first one implementing arrays; ksh88 is still found as ksh on most traditional commercial Unices, where it's also the basis for sh):

 - arrays are one-dimensional.
 - arrays are defined as set -A array foo bar, or set -A array -- "$var" ... if you can't guarantee that $var won't start with a - or +.
 - array indices start at 0.
 - individual array elements are assigned as a[1]=value.
 - arrays are sparse. That is, a[5]=foo will work even if a[0,1,2,3,4] are not set, and will leave them unset.
 - ${a[5]} to access the element of indice 5 (not necessarily the 6th element if the array is sparse). The 5 there can be any arithmetic expression.
 - array size and subscript is limited (to 4096).
 - ${#a[@]} is the number of assigned elements in the array (not the greatest assigned indice).
 - there is no way to know the list of assigned subscripts (other than testing the 4096 elements individually with [[ -n "${a[i]+set}" ]]).
 - $a is the same as ${a[0]}. That is, arrays somehow extend scalar variables by giving them extra values.

pdksh and derivatives (that's the basis for the ksh and sometimes sh of several BSDs, and was the only opensource ksh implementation before ksh93's source was freed). Mostly like ksh88, but note:

 - some old implementations didn't support set -A array -- foo bar (the -- wasn't needed there).
 - ${#a[@]} is one plus the indice of the greatest assigned indice: (a[1000]=1; echo "${#a[@]}") outputs 1001 even though the array has only one element.
 - in newer versions, array size is no longer limited (other than by the size of integers).
 - recent versions of mksh have a few extra operators inspired from bash, ksh93 or zsh, like assignments a la a=(x y), a+=(z), ${!a[@]} to get the list of assigned indices.

zsh. zsh arrays are generally better designed and take the best of ksh and csh arrays. As you can see from the zsh 2.0 announcement in 1991, the design was inspired by tcsh rather than ksh. They have some resemblance to ksh arrays, but with significant differences:

 - indices start at 1, not 0 (except in ksh emulation); that's consistent with the Bourne array (the positional parameters $@, which zsh also exposes as its $argv array) and csh arrays.
 - they are a separate type from normal/scalar variables. Operators apply differently to them, and like you'd generally expect. $a is not the same as ${a[0]} but expands to the non-empty elements of the array ("${a[@]}" for all the elements like in ksh).
 - they are normal arrays, not sparse arrays. a[5]=1 works but assigns the empty string to all the elements from 1 to 4 if they were not assigned. So ${#a[@]} (same as ${#a}, which in ksh is the size of the element of indice 0) is the number of elements in the array and the greatest assigned indice.
 - associative arrays are supported.
 - a great number of operators to work with arrays is supported, too big to list here.
 - arrays are defined as a=(x y). set -A a x y also works for compatibility with ksh, but set -A a -- x y is not supported unless in ksh emulation (the -- is not needed in zsh emulation).

ksh93 (here describing latest versions). ksh93, a rewrite of ksh by the original authors, long considered experimental, can now be found on more and more systems now that it has been released as FOSS. For instance, it's the /bin/sh (where it replaced the Bourne shell; /usr/xpg4/bin/sh, the POSIX shell, is still based on ksh88) and ksh of Solaris 11. Its arrays extend and enhance ksh88's:

 - a=(x y) can be used to define an array, but since a=(...) is also used to define compound variables (a=(foo=bar bar=baz)), a=() is ambiguous and declares a compound variable, not an array.
 - arrays are multi-dimensional (a=((0 1) (0 2))) and array elements can also be compound variables (a=((a b) (c=d d=f)); echo "${a[1].c}").
 - an a=([2]=foo [5]=bar) syntax can be used to define sparse arrays at once.
 - maximum array index raised to 4,194,303.
 - not to the extent of zsh, but a great number of operators is supported as well to manipulate arrays.
 - "${!a[@]}" to retrieve the list of array indices.
 - associative arrays are also supported as a separate type.

bash. bash is the shell of the GNU project. It's used as sh on recent versions of OS/X and some GNU/Linux distributions. bash arrays mostly emulate ksh88 ones, with some features of ksh93 and zsh:

 - a=(x y) supported. set -A a x y not supported. a=() creates an empty array (no compound variables in bash).
 - "${!a[@]}" for the list of indices.
 - a=([foo]=bar) syntax supported, as well as a few others from ksh93 and zsh.
 - recent bash versions also support associative arrays as a separate type.

yash. It's a relatively recent, clean, multi-byte aware POSIX sh implementation. Not in wide use. Its arrays are another clean API, similar to zsh's:

 - arrays are not sparse.
 - array indices start at 1.
 - defined (and declared) with a=(var value).
 - elements inserted, deleted or modified with the array builtin. array -s a 5 value to modify the 5th element would fail if that element was not assigned beforehand.
 - the number of elements in the array is ${a[#]}, ${#a[@]} being the size of the elements as a list.
 - arrays are a separate type. You need a=("$a") to redefine a scalar variable as an array before you can add or modify elements.
 - "$array" expands to all the elements of the array as-is, which makes them much easier to use than in other shells (cmd "$array" to call cmd with the elements of the array as arguments, compared to cmd "${array[@]}" in ksh/bash/zsh; zsh's cmd $array is close but strips empty elements).
 - arrays are not supported when invoked as sh.

So, from that you can see that detecting array support, which you could do with:

    if (unset a; set -A a a; eval "a=(a b)"; eval '[ -n "${a[1]}" ]') > /dev/null 2>&1
    then
      array_supported=true
    else
      array_supported=false
    fi

is not enough to be able to use those arrays. You'd need to define wrapper commands to assign arrays as a whole and individual elements, and make sure you don't attempt to create sparse arrays. Like:

    unset a
    array_elements() { eval "REPLY=\"\${#$1[@]}\""; }
    if (set -A a -- a) 2> /dev/null; then
      set -A a -- a b
      case ${a[0]}${a[1]} in
        --)
          set_array() { eval "shift; set -A $1"' "$@"'; }
          set_array_element() { eval "$1[1+(\$2)]=\$3"; }
          first_indice=0;;
        a)
          set_array() { eval "shift; set -A $1"' -- "$@"'; }
          set_array_element() { eval "$1[1+(\$2)]=\$3"; }
          first_indice=1;;
        --a)
          set_array() { eval "shift; set -A $1"' "$@"'; }
          set_array_element() { eval "$1[\$2]=\$3"; }
          first_indice=0;;
        ab)
          set_array() { eval "shift; set -A $1"' -- "$@"'; }
          set_array_element() { eval "$1[\$2]=\$3"; }
          first_indice=0;;
      esac
    elif (eval 'a[5]=x') 2> /dev/null; then
      set_array() { eval "shift; $1=("'"$@")'; }
      set_array_element() { eval "$1[\$2]=\$3"; }
      first_indice=0
    elif (eval 'a=(x) && array -s a 1 y && [ "${a[1]}" = y ]') 2> /dev/null; then
      set_array() { eval "shift; $1=("'"$@")'; }
      set_array_element() {
        eval "
          $1=(\${$1+\"\${$1[@]}"'"})
          while [ "$(($2))" -ge "${'"$1"'[#]}" ]; do
            array -i "$1" "$2" ""
          done'
        array -s -- "$1" "$((1+$2))" "$3"
      }
      array_elements() { eval "REPLY=\${$1[#]}"; }
      first_indice=1
    else
      echo >&2 "Array not supported"
    fi

And then you access array elements with "${a[$first_indice+n]}", the whole list with "${a[@]}", and use the wrapper functions (array_elements, set_array, set_array_element) to get the number of elements of an array (in $REPLY), set the array as a whole, or assign individual elements. Probably not worth the effort. I'd use perl, or limit myself to the Bourne/POSIX shell array: "$@".

If the intent is to have some file sourced by the interactive shell of a user to define functions that internally use arrays, here are a few more notes that may be useful. You can configure zsh arrays to be more like ksh arrays in local scopes (in functions or anonymous functions):

    myfunction() {
      [ -z "$ZSH_VERSION" ] || setopt localoption ksharrays
      # use arrays of indice 0 in this function
    }

You can also emulate ksh (improving compatibility with ksh for arrays and several other areas) with:

    myfunction() {
      [ -z "$ZSH_VERSION" ] || emulate -L ksh
      # ksh code more likely to work here
    }

With that in mind, and if you're willing to drop support for yash and ksh88 and older versions of pdksh derivatives, and as long as you don't try to create sparse arrays, you should be able to consistently use:

 - a[0]=foo
 - a=(foo bar) (but not a=())
 - "${a[#]}", "${a[@]}", "${a[0]}"

in those functions that have the emulate -L ksh, while the zsh user can still use his/her arrays normally, the zsh way.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/238080", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72707/" ] }
238,081
I am trying to upgrade my fedora system (21 → 22) using fedup. I removed all old kernels using package-cleanup, but fedup still needs 2MB more on /boot. These are the files in /boot:

    -rw-r--r--. 1 root root 153K Sep 22 17:52 config-4.1.8-100.fc21.x86_64
    drwxr-xr-x. 4 root root 1.0K May 25 09:38 efi
    -rw-r--r--. 1 root root 181K Oct 21  2014 elf-memtest86+-5.01
    drwxr-xr-x. 2 root root 3.0K May 25 09:47 extlinux
    drwxr-xr-x. 6 root root 1.0K Oct 23 13:32 grub2
    -rw-------. 1 root root  38M Aug 18  2014 initramfs-0-rescue-91b91d0aa1ed43eab9d2bcf5b8669540.img
    -rw-r--r--. 1 root root  19M Oct 11 11:58 initramfs-4.1.8-100.fc21.x86_64.img
    -rw-r--r--. 1 root root  41M May 22 05:12 initramfs-fedup.img
    -rw-r--r--. 1 root root 552K May 25 09:51 initrd-plymouth.img
    drwx------. 2 root root  12K Aug 18  2014 lost+found
    -rw-r--r--. 1 root root 179K Oct 21  2014 memtest86+-5.01
    -rw-------. 1 root root 3.0M Sep 22 17:52 System.map-4.1.8-100.fc21.x86_64
    -rwxr-xr-x. 1 root root 5.0M Aug 18  2014 vmlinuz-0-rescue-91b91d0aa1ed43eab9d2bcf5b8669540
    -rwxr-xr-x. 1 root root 5.7M Sep 22 17:52 vmlinuz-4.1.8-100.fc21.x86_64
    -rw-r--r--. 1 root root 5.7M May 21 18:46 vmlinuz-fedup

initramfs-0-rescue-... is taking up the most space. This was created when I upgraded my OS from the last version (Fedora 20); I guess this file can be removed. Is there a way to remove it without manually deleting it with rm? If not this file, which other file can be safely deleted? (There is a folder called /efi/EFI/fedora/fonts, but I think the rescue files are the most dispensable.)
The vmlinuz-0-rescue-* and initramfs-0-rescue-* files can be safely removed with rm . They're not owned by any package, and to my knowledge there isn't any tool for deleting them (although you can create new ones with dracut ). After removing, run grub2-mkconfig -o /boot/grub2/grub.cfg to regenerate your grub config so they don't show up in the boot menu. These images are the largest, by the way, because they are machine-independent — they'll boot on any system. The other kernel/ramfs combinations leave out some modules not needed for the hardware on the machine they were installed on, and may not be portable to other systems. The rescue image lets you fix that if need be. (As for other files, you can also remove the fedup ones. Those were used in the upgrade, and should have been removed automatically.)
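If you ever want a rescue entry back after deleting it, a sketch with dracut — the output filename here is arbitrary, and --no-hostonly makes the new image machine-independent like the original; note this only rebuilds the initramfs, so the matching kernel image is copied by hand:

    dracut --no-hostonly "/boot/initramfs-0-rescue-custom.img" "$(uname -r)"
    cp "/boot/vmlinuz-$(uname -r)" /boot/vmlinuz-0-rescue-custom
    grub2-mkconfig -o /boot/grub2/grub.cfg    # add it back to the boot menu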
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/238081", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139833/" ] }
238,097
Can Nano save the current position of the cursor at exit and, when you reopen the file, restore the old cursor position, like vim does?
On Ubuntu 2018, put this in ~/.nanorc:

    set positionlog

Just as a tip, I also have these:

    set tabsize 4
    set tabstospaces
    set autoindent
    set smooth
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/238097", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41011/" ] }
238,132
This is for "research" not pragmatic purposes -- I want to know how this is supposed to work, since my guess below does not. In other words, I do not want an answer that involves /etc/network/interfaces or anything else distro specific, or NetworkManager. Please do not close this as a duplicate of a question which provides such answers. I'm trying to connect two GNU/Linux systems with a regular (not cross-over) ethernet cable. Rumor has it that this should not be a problem nowadays. What I tried to do is add a private IP for the interface on both machines:

    ip addr add 10.0.0.1 dev eth0

And 10.0.0.2 on the other machine. Neither one is attached to a network that could be identified this way. I then added routes back and forth:

    ip route add 10.0.0.2 via 10.0.0.1

And vice versa. Subsequently, the output of ip addr and ip route seems to be correct (see below). As per John's comment, I also tried this without adding any route; in this case the ping simply times out. Both machines have iptables wide open; INPUT, OUTPUT, and FORWARD are ACCEPT with no rules. But this is what happens when I try a ping:

    > ping 10.0.0.2
    PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
    From 10.0.0.1 icmp_seq=1 Destination Host Unreachable
    From 10.0.0.1 icmp_seq=2 Destination Host Unreachable

Notice it's the local interface (10.0.0.1) that returns this. What additional steps are needed here and/or where have I gone wrong? The routing table after using ip route ... looks like:

    default via 192.168.0.1 dev wlan0
    10.0.0.2 via 10.0.0.1 dev eth0
    192.168.0.0/24 dev wlan0 proto kernel scope link src 192.168.0.19

Sans ip route ..., it looks the same but without line 2. Output from ethtool (both NICs are identical hardware) looks like:

    ethtool eth0
    Settings for eth0:
        Supported ports: [ TP MII ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
        Advertised pause frame use: Symmetric Receive-only
        Advertised auto-negotiation: Yes
        Link partner advertised link modes:  10baseT/Half 10baseT/Full
                                             100baseT/Half 100baseT/Full
        Link partner advertised pause frame use: Symmetric Receive-only
        Link partner advertised auto-negotiation: Yes
        Speed: 100Mb/s
        Duplex: Full
        Port: MII
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: pumbag
        Wake-on: d
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: yes

The output from ip a for the ethernet NIC looks like:

    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether b8:27:eb:f5:4f:7a brd ff:ff:ff:ff:ff:ff
        inet 10.0.0.2/32 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::ba27:ebff:fef5:4f7a/64 scope link
           valid_lft forever preferred_lft forever
As written in the comments, you need to fix the routing table. The syntax

    ip route add X via Y

is used for gateway traffic, i.e. when traffic to X should be sent via the (usually external) address Y. There needs to be an extra route describing how Y itself can be reached. If Y is your own interface address and you do not solve the problem otherwise, you create a loop and the routing does not work. What you need is for traffic to the other host to be sent directly via the interface (not via a gateway). There are many different possibilities, depending on the netmask you use:

    ip r add 10.0.0.2/32 dev eth0   # only 10.0.0.2 should go via eth0...
    ip r add 10.0.0.0/8 dev eth0    # 10.0.0.0 - 10.255.255.255 should go via eth0
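Putting it together for the two machines in the question, a minimal sketch — assigning the addresses with a netmask makes the kernel create the link route automatically, so no explicit ip route is needed:

    # on the first machine
    ip addr add 10.0.0.1/24 dev eth0
    ip link set eth0 up

    # on the second machine
    ip addr add 10.0.0.2/24 dev eth0
    ip link set eth0 up

    # then, from either side
    ping 10.0.0.2    # or 10.0.0.1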
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/238132", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/25985/" ] }
238,140
I have a file listing paths:

    $ cat file
    /tmp/foldera/folderb/folderc/file1
    /tmp/folderc/folderd/foldere/file2
    /tmp/folderf/folderg/folderh/file3

I need to move these files to

    /tmp/foldera/
    /tmp/folderc/
    /tmp/folderf/

respectively, using a loop or any easy, handy way to do that.
Use a while loop:

    while IFS= read -r l; do
      mv -v -- "$l" "${l%/*/*/*}/"
    done <file

while IFS= read -r l will read the file line by line. mv -v moves the files, and -v tells mv to be verbose. "$l" is the source filename. "${l%/*/*/*}/" is the target directory: it removes 3 slashes and what is between them (/*/*/*) from the back end of the filename. This will produce:

    »/tmp/foldera/folderb/folderc/file1“ -> »/tmp/foldera/file1“
    »/tmp/folderc/folderd/foldere/file2“ -> »/tmp/folderc/file2“
    »/tmp/folderf/folderg/folderh/file3“ -> »/tmp/folderf/file3“
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/238140", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135551/" ] }
238,152
I'm trying to copy a batch of files with scp, but it is very slow. This is an example with 10 files:

    $ time scp cap_* user@host:~/dir
    cap_20151023T113018_704979707.png    100%  413KB 413.2KB/s   00:00
    cap_20151023T113019_999990226.png    100%  413KB 412.6KB/s   00:00
    cap_20151023T113020_649251955.png    100%  417KB 416.8KB/s   00:00
    cap_20151023T113021_284028464.png    100%  417KB 416.8KB/s   00:00
    cap_20151023T113021_927950468.png    100%  413KB 413.0KB/s   00:00
    cap_20151023T113022_567641507.png    100%  413KB 413.1KB/s   00:00
    cap_20151023T113023_203534753.png    100%  414KB 413.5KB/s   00:00
    cap_20151023T113023_855350640.png    100%  412KB 411.7KB/s   00:00
    cap_20151023T113024_496387641.png    100%  412KB 412.3KB/s   00:00
    cap_20151023T113025_138012848.png    100%  414KB 413.8KB/s   00:00
    cap_20151023T113025_778042791.png    100%  413KB 413.4KB/s   00:00

    real    0m43.932s
    user    0m0.074s
    sys     0m0.030s

The strange thing is that the transfer rate is about 413KB/s and the file size is about 413KB, so really it should transfer one file per second; however, it's taking about 4.3 seconds per file. Any idea where this overhead comes from, and is there any way to make it faster?
You could use rsync (over ssh), which uses a single connection to transfer all the source files:

    rsync -avP cap_* user@host:dir

If you don't have rsync (and why not!?) you can use tar with ssh like this, which avoids creating a temporary file (these two alternatives are equivalent):

    tar czf - cap_* | ssh user@host tar xvzfC - dir
    tar cf - cap_* | gzip | ssh user@host 'cd dir && gzip -d | tar xvf -'

The rsync is to be preferred, all other things being equal, because it's restartable in the event of an interruption.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/238152", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38085/" ] }
238,177
I have a file in the following format:

    field1|field2|field3
    field1|"field2|field2"|field3

Notice the second row contains double quotes. The string within the double quotes belongs to field 2. How do I extract this using awk? I've been googling with no results. I tried this with no luck as well:

    FS='"| "|^"|"$' '{print $2}'
If you have a recent version of gawk you're in luck. There's the FPAT feature, documented here:

    awk 'BEGIN { FPAT = "([^|]+)|(\"[^\"]+\")" }
    {
        print "NF = ", NF
        for (i = 1; i <= NF; i++) {
            sub(/"$/, "", $i); sub(/^"/, "", $i)
            printf("$%d = %s\n", i, $i)
        }
    }' file
    NF =  3
    $1 = field1
    $2 = field2
    $3 = field3
    NF =  3
    $1 = field1
    $2 = field2|field2
    $3 = field3
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/238177", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135169/" ] }
238,180
I'm currently studying penetration testing and Python programming. I just want to know how I would go about executing a Linux command in Python. The commands I want to execute are:

    echo 1 > /proc/sys/net/ipv4/ip_forward
    iptables -t nat -A PREROUTING -p tcp --destination-port 80 -j REDIRECT --to-port 8080

If I just use print in Python and run it in the terminal, will it do the same as executing the command as if I were typing it myself and pressing Enter?
You can use os.system(), like this:

    import os
    os.system('ls')

Or in your case:

    os.system('echo 1 > /proc/sys/net/ipv4/ip_forward')
    os.system('iptables -t nat -A PREROUTING -p tcp --destination-port 80 -j REDIRECT --to-port 8080')

Better yet, you can use subprocess's call; it is safer, more powerful and likely faster:

    from subprocess import call
    call('echo "I like potatos"', shell=True)

Or, without invoking a shell:

    call(['echo', 'I like potatos'])

If you want to capture the output, one way of doing it is like this:

    import subprocess
    cmd = ['echo', 'I like potatos']
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    o, e = proc.communicate()
    print('Output: ' + o.decode('ascii'))
    print('Error: ' + e.decode('ascii'))
    print('code: ' + str(proc.returncode))

I highly recommend setting a timeout in communicate, and also capturing the exceptions you can get when calling it. This is very error-prone code, so you should expect errors to happen and handle them accordingly. https://docs.python.org/3/library/subprocess.html
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/238180", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/122522/" ] }
238,228
I need to send keystrokes virtually to a terminal program (like vi or emacs). I want to do something like this:

    echo -e 'iHello, world!\e' | vi

and then have a vi session open with this buffer:

    Hello, world!
    ~
    ~
    ~
    ~
    ~

But that does not work, as vi does not read keystrokes through stdin. I get this error:

    ex/vi: Vi's standard input and output must be a terminal

How can I send a text string to a terminal program as if the string was typed directly on a keyboard?
That's typically what expect was written for: expect -c 'spawn -noecho vi; send "iHello World!\r\33"; interact' While expect was written for TCL in days prior to perl or python being popular, now similar modules for perl or python are also available. Another option is to issue TIOCSTI ioctls to your tty device to insert characters (one byte at a time) in its input queue: perl -le 'require "sys/ioctl.ph"; ioctl(STDIN, &TIOCSTI, $_) for split "", join " ", @ARGV ' $'iHello World!\r\e'; vi That has the benefit of avoiding an extra pseudo-terminal layer in between your terminal emulator and the application (here vi ).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/238228", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/128859/" ] }
238,302
I'm trying to rename a bunch of music tracks in a directory, but I got this error: When moving multiple files, last argument must be a directory This is the script: for file in * ; do mv $file $(echo $file |sed 's/^.\{5\}//g')done This works for a file without whitespace, how would I modify this script?
Use quotes: mv -- "$file" "$(echo "$file" | sed ...)" Else mv sees multiple arguments. A filename called file name with spaces would be 4 arguments for mv . Therefore the error: when moving multiple files, last argument must be a directory . When mv has more than 2 arguments, it's assuming you want to move multiple files to a directory (which would then be the last argument). But however, it looks like you want to remove the first 5 characters from the filename. That can be done simpler with bash : mv -- "$file" "${file:5}" Edit : I added the -- flag, thanks to the comment of @ pabouk . Now file starting with dash - are also correctly processed.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/238302", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/140284/" ] }
238,403
I'm trying to force users logging in through SSH to have a shell inside IP namespace. I've tried replacing the shell in /etc/passwd with something like ip netns exec sshns /bin/bash but it didn't work. Any other ideas? Is it possible at all? Would it be secure or not at all?
The problem with handling this by changing login shell, as in OP's example, is that when user connects to sshd in the main network namespace then even if you get their shell to run inside another namespace but their port forwarding will operate in the default network namespace anyway. My proposed solution also addresses containing port forwarding to the namespace as well as the shell. It is probably limited to using local accounts, as authenticating against remote system over the network (NIS, SMB etc) will probably not work because authentication stage will be executed from within the network namespace. I needed both the shell and port forwarding to operate in the target namespace without creating networking/routing between default and contained namespaces . Here are a few methods/tools to achieve this: xinetd - Thanks Stéphane Chazelas for pointing it out.For a single or static number of namespaces and forwarding to them xinetd seems a better option.e.g. file /etc/xinetd.d/sshd-netns-foo service sshdnetns{ type = UNLISTED socket_type = stream protocol = tcp port = 222 wait = no user = root server = /sbin/ip server_args = netns exec NameSpaceName /usr/sbin/sshd -i} socat -for multiple namespaces where forwarding to them needs to be started/stopped independently socat is a good fit.Run this in the default namespace: socat tcp-listen:222,fork,reuseaddr \ exec:'ip netns exec NameSpaceName /usr/sbin/sshd -i',nofork ncat -if socat is not available then ncat (on my RHEL box as nc ) can do the job.Downside with ncat is the sshd is connected to ncat via a pipe rather than directly to the socket so sshd can not see client IP so logs is less useful.You also end up running an extra intermediate ncat process. ncat --keep-open --sh-exec "exec ip netns exec NameSpaceName /usr/sbin/sshd -i" -l 222 and probably other tools. This accepts SSH connections in the default namespace on a custom port 222 and for each connection starts one time sshd -i inside the target namespace. That solved it for me, but you also have a requirement of limiting users that can login to each namespace. Create a namespace specific sshd config: mkdir -pv /etc/netns/NameSpaceName/cp -Rp /etc/ssh /etc/netns/NameSpaceName/ Add access controls to each sshd_config file, e.g. in default /etc/ssh/sshd_config : AllowUsers user1 user2 ... and in /etc/netns/NameSpaceName/ssh/sshd_config AllowUsers restrictedUser1 restrictedUser2 ... also look at AllowGroups directives now re-create the namespace for the dir binds to become effective My brief tests show user access control works as expected, but I have not really used it much so its is for you to validate. I tried putting separate /etc/passwd , /etc/shadow and /etc/group files into /etc/netns/NameSpaceName/ for having separate list of users, but in my quick test that did not work: useradd test inside the namespace fails. Notes: If you don't like custom port you could dual home e.g. macvlan or just add another IP address and listen on the default port on a dedicated IP. All authentication, shell, subsystem, port forwarding etc is handled by the sshd so I don't have to hack anything else. It does have drawback of running sshd -i like this, read man sshd look for -i option. 
You could easily solve it by running a full time sshd inside the namespace and change the forwarding daemon to something like this: nc --keep-open --sh-exec "exec ip netns exec NameSpaceName nc localhost 22" -l 222 I wonder if mount and/or user namespaces (in addition to network namespaces) could be used to solve it more neatly. I have no experience with those. There probably are better ways to achieve this, I'd be very interested in what others come up with.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/238403", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/140038/" ] }
238,455
I'm wondering if we can combine the honesty of 'du' with the indented formatting of 'tree'. If I want a listing of the sizes of directories: du -hx -d2 ...displays two levels deep and all the size summaries are honest, but there's no indenting of subdirs. On the other hand: tree --du -shaC -L 2 ...indents and colorizes nicely however the reported sizes are a lie. To get the real sizes one must: tree --du -shaC ...which is to say that you only get the true sizes if you let 'tree' show you the entire directory structure. I'd like to be able to always have correct size summaries regardless of how many levels of subdirs I want to actually display. I often do this: tree -du -shaC | grep "\[01;34m" ... which prunes out everything but directories, and indents them nicely ... but there's no easy way to limit the display to just a given number levels (without the summaries lying). Is there a way? Perhaps I've missed the correct switches ...
Also checkout ncdu : http://dev.yorhel.nl/ncdu Its page also lists other "similar projects": gt5 - Quite similar to ncdu, but a different approach. tdu - Another small ncurses-based disk usage visualization utility. TreeSize - GTK, using a treeview. Baobab - GTK, using pie-charts, a treeview and a treemap. Comes with GNOME. GdMap - GTK, with a treemap display. Filelight - KDE, using pie-charts. QDirStat - KDE, with a treemap display. QDiskUsage - Qt, using pie-charts. xdiskusage - FLTK, with a treemap display. fsv - 3D visualization. Philesight - Web-based clone of Filelight.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/238455", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56145/" ] }
238,478
I have a systemd container running, and I can login into it with machinectl login <container> . How can I execute a command inside the container directly, i.e. without first logging in, executing the command, and then logging out? Another way to put it is that I'm looking for the systemd equivalent of: $ docker exec <container> <command> or $ ssh <host> <command>
Try systemd-run : # systemd-nspawn -D <machine-root> -b 3 --link-journal host# systemd-run --machine <machine-name> envRunning as unit run-1356.service.# journalctl --machine <machine-name> -u run-1356 -b -qOct 30 07:45:09 jessie-64 systemd[1]: Started /usr/bin/env.Oct 30 07:45:09 jessie-64 env[37]: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin Excerpt from the manpage : Use shell (see below) or systemd-run(1) with the --machine= switch to directly invoke a single command, either interactively or in the background. (The command shell available since v225 )
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/238478", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/137914/" ] }
238,482
The first 2 lines in dd stats have the following format: a+b records inc+d records out Why 2 numeric values? What does this plus sign mean?It's usually a+0 , but sometimes when I use bigger block size, dd prints 0+b records out
It means full blocks of that bs size plus extra blocks with size smaller than the bs. pushd "$(mktemp -d)"dd if=/dev/zero of=1 bs=64M count=1 # and you get a 1+0dd if=1 of=/dev/null bs=16M # 4+0dd if=1 of=/dev/null bs=20M # 3+1dd if=1 of=/dev/null bs=80M # 0+1_crap=$PWD; popd; rm -rf "$_crap"; unset _crap# frostschutz's caseyes | dd of=/dev/null bs=64M count=1 # 0+1 Edit : frostschutz's answer mentions another case to generate non-full blocks. Worth reading. See also https://unix.stackexchange.com/a/17357/73443 .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/238482", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73160/" ] }
238,489
I'm on Debian 8. While How to set default file permissions for all folders/files in a directory? is about permissions, I'd like something similar for ownership. Whenever I login as root and add a file to a daemons config directory, the ownership of the newly created file is root:root . While this is OK for most situation, here it isn't. I'd like to have the ownership set to daemon:daemon automatically when I create a file somewhere under the config directory. How do I accomplish that?
You can't. You can use chmod to set the sticky bit on a directory ( chmod g+s directory/ ) and that will cause all files created in the directory to be in the same group as the directory itself. But that only affects the group, not the owner. You can also set your umask or ACLs on the directory to affect the default permissions of files created. But you can't automatically set the owner of a file you (root) created to some other user. You have to do that with chown . You're just going to have to get used to the chown , chgrp , and chmod commands.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/238489", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23598/" ] }
238,496
Yesterday I did apt-get update && apt-get upgrade to make my system up-to-date. It went through without any error. After rebooting my Menu Bar 'disappeared', there's just a dark gray bar without any icons on it. When clicking on the place where the Menu Icon would be, it shows all applications installed, but the strange thing is, that there is only the text and the application icons. I then tried to open the terminal, it opened, but it was not visible. (I know that the terminal was there, since the mouse icon changed when hovering over the center of the screen, where the terminal would be.) After that I wanted to take a look at some logs, but when opening the file system the Window was also not visible... Also when entering screensaver mode the display remains black instead of showing GLmatrix. Everything else looks normal, the Desktop Icons are there, and Context Menus are showing. I don't know how to fix this, since I can't see anything when using my terminal, and I don't want to make a mistake, because I have some important data on my Linux Partition. Any help to solve that problem would be appreciated!
You can't. You can use chmod to set the sticky bit on a directory ( chmod g+s directory/ ) and that will cause all files created in the directory to be in the same group as the directory itself. But that only affects the group, not the owner. You can also set your umask or ACLs on the directory to affect the default permissions of files created. But you can't automatically set the owner of a file you (root) created to some other user. You have to do that with chown . You're just going to have to get used to the chown , chgrp , and chmod commands.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/238496", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/136869/" ] }
238,522
I have a .csv file with contents similar to this: BIHAR,PURNIA,DAGARUA,BELGACHHI,BELGACHHI,KARBOLA TOLA,0,0,312,0,0,312,Fully Covered,NO,NO,01_04_2010,241656,312,2123,910,1811.5BIHAR,PURNIA,SRINAGAR,THARI,THARI,ARBANNA,0,0,312,0,0,312,Fully Covered,NO,NO,01_04_2010,244374,312,2123,910,1811.5BIHAR,PURNIA,RUPAULI,DHOBGIDHA-RUPAULI,DHOBHGIDHA-RUPAULI-II,MATELI,0,0,312,0,0,312,Fully Covered,NO,NO,01_04_2010,243748,312,2123,910,1811.5ETCETC,PURNIA,KRITYANAND NAGAR,CHUNAPUR,BANBHAG,BANGALI TOLA KOSHI KINARA,0,0,312,0,0,312,Fully Covered,NO,NO,01_04_2010,242663,312,2123,910,1811.5 I want to grab all the lines that start with BIHAR and then output it to another separate csv file. How do I do that? I have tried using sublime's "Find All" feature and then use the right arrow to the end of the line to highlight them, but unfortunately some lines are much longer than the others so it doesn't work. There are about 100'000 lines in the .txt file. I also tried with sed: sed -n 'BIHAR /myfile.txt' /newfile.txt EDIT: For some reason grep/sed/awk ignores the newlines at the end of each line, and so as a result it only attempts to match the first line and nothing else, how do I fix this?
Try this with GNU sed: sed -n '/^BIHAR/p' file > new_file or with grep: grep '^BIHAR' file > new_file or with awk: awk '/^BIHAR/' file > new_file
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/238522", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/140125/" ] }
238,551
Say someone gains physical access to my computer, and they want to login to my account and see everything I have. Is it possible that they take the hard-drive out of my computer, modify the file /etc/shadow with a new password, and then use it to login? In other words, does the Linux password change by simply modifying /etc/shadow ? (All this assuming that there's no HD volume-encryption involved)
Once they have the hard disk drive they hardly need your password. They simply mount all partitions according to (your) /etc/fstab . The next step is sudo su - "your account id" (if your id is 501, just sudo su - 501 ). Short on using encrypted disk with a good password and all, there is little if any you can do to make your data safe. This "little" include: Do not use plain text password in scripts (for instance a cron job collecting email ( ...=pop("[email protected]","avreyclverpassword") , access to remote hosts, etc.) Do not use password-less gpg and ssh keys. (Re-type them each time or use an agent to store them in memory.)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/238551", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46235/" ] }
238,638
ls --dired -l prints all dir and files along with some number followed by //DIRED// ***66 69 122 131 ....***//DIRED-OPTIONS// --quoting-style=literal What do these numbers in bold mean? --dired is an Emacs option to work with directories but I don't understand the numbers here.
The output of ls -D is meant to be parsed by Emacs' dired mode . From the GNU Coreutils manual ‘-D’‘--dired’ With the long listing ( -l ) format, print an additional line after the main output: //DIRED// beg1 end1 beg2 end2 … The begn and endn are unsigned integers that record the byte position of the beginning and end of each file name in the output. This makes it easy for Emacs to find the names, even when they contain unusual characters such as space or newline, without fancy searching. If directories are being listed recursively ( -R ), output a similar line with offsets for each subdirectory name: //SUBDIRED// beg1 end1 … Finally, output a line of the form: //DIRED-OPTIONS// --quoting-style=word where word is the quoting style (see Formatting the file names). Here is an actual example: $ mkdir -p a/sub/deeper a/sub2$ touch a/f1 a/f2$ touch a/sub/deeper/file$ ls -gloRF --dired a a: total 8 -rw-r--r-- 1 0 Jun 10 12:27 f1 -rw-r--r-- 1 0 Jun 10 12:27 f2 drwxr-xr-x 3 4096 Jun 10 12:27 sub/ drwxr-xr-x 2 4096 Jun 10 12:27 sub2/ a/sub: total 4 drwxr-xr-x 2 4096 Jun 10 12:27 deeper/ a/sub/deeper: total 0 -rw-r--r-- 1 0 Jun 10 12:27 file a/sub2: total 0//DIRED// 48 50 84 86 120 123 158 162 217 223 282 286//SUBDIRED// 2 3 167 172 228 240 290 296//DIRED-OPTIONS// --quoting-style=literal Note that the pairs of offsets on the ‘//DIRED//’ line above delimit these names: f1, f2, sub, sub2, deeper, file. The offsets on the ‘//SUBDIRED//’ line delimit the following directory names: a , a/sub , a/sub/deeper , a/sub2 . Here is an example of how to extract the fifth entry name, deeper , corresponding to the pair of offsets, 222 and 228: $ ls -gloRF --dired a > out$ dd bs=1 skip=222 count=6 < out 2>/dev/null; echodeeper Note that although the listing above includes a trailing slash for the deeper entry, the offsets select the name without the trailing slash. However, if you invoke ls with --dired along with an option like --escape (aka -b ) and operate on a file whose name contains special characters, notice that the backslash is included: $ touch 'a b'$ ls -blog --dired 'a b' -rw-r--r-- 1 0 Jun 10 12:28 a\ b//DIRED// 30 34//DIRED-OPTIONS// --quoting-style=escape If you use a quoting style that adds quote marks (e.g., --quoting-style=c ), then the offsets include the quote marks. So beware that the user may select the quoting style via the environment variable QUOTING_STYLE . Hence, applications using --dired should either specify an explicit --quoting-style=literal option (aka -N or --literal ) on the command line, or else be prepared to parse the escaped names. The numbers are the positions of the file names in the output The begn and endn are unsigned integers that record the byte position of the beginning and end of each file name in the output.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/238638", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77912/" ] }
238,639
I have a files named with YYYYMMDD in the file name, such as file-name-20151002.txt I want to determine if this file was modified after 2015-10-02. Notes: I can do this by looking at the output of ls , but I know that parsing the output of ls is a bad idea. I don't need to find all files dated after a specific date, just need to test one specific file at a time. I am not concerned about the file being modified on the same date after I created it. That is, I just want to know if this file with 20151002 in the name was modified on Oct 03, 2015 or later. I am on MacOs 10.9.5.
Here are some possible ways with : OSX stat : newer () {tstamp=${1:${#1}-12:8}mtime=$(stat -f "%Sm" -t "%Y%m%d" "$1")[[ ${mtime} -le ${tstamp} ]] && printf '%s\n' "$1 : NO: mtime is ${mtime}" || printf '%s\n' "$1 : YES: mtime is ${mtime}"} GNU date : newer () {tstamp=${1:${#1}-12:8}mtime=$(date '+%Y%m%d' -r "$1")[[ ${mtime} -le ${tstamp} ]] && printf '%s\n' "$1 : NO: mtime is ${mtime}" || printf '%s\n' "$1 : YES: mtime is ${mtime}"} zsh only: zmodload zsh/statnewer () {tstamp=${1:${#1}-12:8}mtime=$(zstat -F '%Y%m%d' +mtime -- $1)[[ ${mtime} -le ${tstamp} ]] && printf '%s\n' "$1 : NO: mtime is ${mtime}" || printf '%s\n' "$1 : YES: mtime is ${mtime}"} Usage: newer FILE Example output: file-name-20150909.txt : YES: mtime is 20151026 or file-name-20151126.txt : NO: mtime is 20151026
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/238639", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7723/" ] }
238,640
My rental Linux server doesn't respond to nmap the way I thought it would. When I run nmap it shows three open ports: 80, 443 and 8080. However, I know ports 2083, 22 and 2222 should all be open, as they're used for the web-based C-Panel, SSH and SFTP, respectively. Has my server rental company not opened these ports fully, or is does nmap not give a complete list (by default)?
By default, nmap scans the thousand most common ports. Ports 2083 and 2222 aren't on that list. In order to perform a complete scan, you need to specify "all ports" ( nmap -p 1-65535 , or the shortcut form nmap -p- ). Port 22, on the other hand, is on the list. If nmap isn't reporting it, it's because something's blocking your access, or the SSH server isn't running.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/238640", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/138973/" ] }
238,641
In linux, from /proc/PID/stat , I can get the start_time (22:nd) field, which indicates how long after the kernel booted the process was started. What is a good way to convert that to a seconds-since-the-epoch format? Adding it to the btime of /proc/stat ? Basically, I'm looking for the age of the process, not exactly when it was started. My first approach would be to compare the start_time of the process being investigated with the start_time of the current process (assuming it has not been running for long). Surely there must be way better ways. I didn't find any obvious age-related parameters when looking at https://www.kernel.org/doc/Documentation/filesystems/proc.txt So, What I have currently is: process age = (current_utime - ([kernel]btime + [process]start_time)) Any alternative ways that are more efficient from within a shell script? (Ideally correct across DST changes)
Since version 3.3.0, the ps of procps-ng on Linux has a etimes output field that gives you the elapsed time in seconds since the process was started (which by the way is not necessarily the same thing as the elapsed time since the last time that process executed a command (if at all!) (the time that process has been running the command in the process name), so may not be as useful as you thought). So you can do: ps -o etimes= -p "$pid" For the start time as Unix epoch time (with GNU date ): (export TZ=UTC0 LC_ALL=C; date -d "$(ps -o lstart= -p "$pid")" +%s) Note that you cannot use the modification time of /proc/$pid . That is just the time those files were instantiated which has nothing to do with the start time of the process.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/238641", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5923/" ] }
238,673
Fr. Br. George told in one of his lectures (it's in Russian) that there are some access rights that superuser can not violate. That is there are some access right which can forbid superuser doing something. I was not able to find this information on the Internet and I'm curious what they are. This is probably something related to the system's core execution, isn't it? Maybe he can not stop some system processes? Or maybe he can not run a process in real mode? This question is no related to SELinux (George was talking about it right before the question).
acess denied to root : root can be denied direct network access. This is useful on internet connected hosts, as it requires that you login as smith , then sudo . some stuff root can't do : This is NOT for a lack of privilege. I can't see anything root couldn't do, however some technical issues might be experienced as "forbidden". I am root, why can't I create/delete this file, while ordinary user can? You are on a NFS/samba share, and you weren't give specific ( access= ) authorization. Ordinary user fail to common law. (see local vs remote root below) I am root, why can't I kill this process? There is a pending I/O and physical drive/remote LUN have been disconnected, process can only be killed by reboot. I am root, how do I get archemar's password? You can su - archemar all right, or change archemar's password without knowing the previous one, but you can't read it (short of a key logger), since passwords are stored using a one-way hash. local vs remote root You can be root on your station/PC, and use a company/college/university/provider NFS share. Next you can have only a non-root login on computer exporting NFS. Now cp /bin/bash /nfs/home/me/bashchown root /nfs/home/me/bashchmod u+s /nfs/home/me/bash simply log on NFS server, run ./bash and you are root on company/university server.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/238673", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22362/" ] }
238,700
I scheduled a task using at . The output of the scheduled job was sent to mail . But the output is quite huge; I prefer reading it in a text editor. Additionally, I don't want to forward the mail, I just want to read locally. I hope there is a standard way for all *nix. But I use OS X and RedHat.
You can pipe mail to vim, at least on my test system (RHEL 6.7) it worked. mail | vim - the vim - tells vim to read from standard input you should see output that say's: Vim: reading from stdin... at that point just press the number of the mail item you want to read, IE 1 to read the first message,then press ctrl-d to push it forward.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/238700", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/86960/" ] }
238,738
I want to find files newer than 15 seconds but older than 2 seconds.Here is the script I'm currently using that grabs files newer than 15 seconds: find /my/directory -name '*.jpg' -not -newermt '-15 seconds' Any help is greatly appreciated
You can combine multiple predicates by chaining them. There's no -oldermt , but you can write that as -not -newermt . You want: -newermt '-15 seconds' to say the file is less than 15 seconds old, and -not -newermt '-2 seconds' to say the file is more than 2 seconds old Try: find /my/directory -newermt '-15 seconds' -not -newermt '-2 seconds' Or, to be POSIX compliant: find /my/directory -newermt '-15 seconds' \! -newermt '-2 seconds' Also, just so you (and other readers) are aware, "newer" means modified more recently than, not created more recently than.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/238738", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/140279/" ] }
238,783
I know how to create and use a swap partition but can I also use a file instead? How can I create a swap file on a Linux system?
I myself have on several machines a swap file on mdadm RAID, therefore there's a bit of overhead. But anyway, if you adjust vm.swappiness wisely to a more acceptable value than 60, which is the default, you should have no problem. For instance, I have 32GB RAM server with 32GB swap file on RAID6 with vm.swappiness = 1. Quoting the Wikipedia: vm.swappiness = 1: Kernel version 3.5 and over, as well as Red Hat kernel version 2.6.32-303 and over: Minimum amount of swapping without disabling it entirely. In this example, we create a swap file: 8GB in size Located in /raid1/ Change these two things accordingly to your needs. Open terminal and become root ( su ); if you have sudo enabled, you may also do for example sudo -i ; see man sudo for all options): sudo -i Allocate space for the swap file: dd if=/dev/zero of=/raid1/swapfile bs=1G count=8 Optionally, if your system supports it, you may add status=progress to that command line. Note, that the size specified here in G is in GiB (multiples of 1024). Change permissions of the swap file, so that only root can access it: chmod 600 /raid1/swapfile Make this file a swap file: mkswap /raid1/swapfile Enable the swap file: swapon /raid1/swapfile Verify, whether the swap file is in use: cat /proc/swaps Open a text editor you are skilled in with this file, e.g. nano if unsure: nano /etc/fstab To make this swap file available after reboot, add the following line: /raid1/swapfile none swap sw 0 0
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/238783", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126755/" ] }
238,809
I'm stuck in a quite trivial problem here: how can I make the * symbol in bash mean zero or more , like it does in tools such as sed ? For example, ak* should match any file whose name consists entirely of an a followed by zero or more k s. Its expansion would include a , ak , akk , and akkk , but not akc . I have already tried unsetopt sh_glob in zsh and set -o noglob in bash; they did not produce the desired behavior.
Except for ksh93 , none of the usual shells have regular expressions with the same syntax as sed, awk, etc. that can be used for matching files. Ksh93, bash and zsh have regular expressions with a different syntax that's backward compatible with globs: ? matches any single character (like . in the usual regexp syntax) […] matches a character set in mostly the same way *( FOO ) matches any number of occurrences of FOO (like ( FOO )* in the usual regexp syntax) similarly +( FOO ) matches one or more occurrences, and ?( FOO ) matches zero or one occurrence @( FOO | BAR ) matches either FOO or BAR Matches apply to the whole string, not a substring; if you want a substring, put * at the beginning and at the end This syntax needs to be activated with shopt -s extglob in bash and with setopt ksh_glob in zsh. So in bash you'd write shopt -s extglobls a*(k) See also Why does my regular expression work in X but not in Y? Ksh93, zsh and bash can do regular expression matching with extended regular expressions (basically the syntax of awk) on strings, with the =~ operator of the [[ … ]] construct. This isn't convenient for listing files though, but if you really want it, it can be done. shopt -s dotglob # <<< include dot files, for bashsetopt globdots # <<< include dot files, for zshFIGNORE='@(.|..)' # <<< include dot files, for kshfor x in *; do if [[ $x =~ ^ak*$ ]]; then … fidone
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/238809", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139259/" ] }
238,810
I've been shortly using the following on Linux Debian Jessie to create a "RAM disk": mount -o size=1G -t tmpfs none /mnt/tmpfs But I was told it doesn't reserve memory, which I didn't know. I would like a solution, which does reserve memory.
Except for ksh93 , none of the usual shells have regular expressions with the same syntax as sed, awk, etc. that can be used for matching files. Ksh93, bash and zsh have regular expressions with a different syntax that's backward compatible with globs: ? matches any single character (like . in the usual regexp syntax) […] matches a character set in mostly the same way *( FOO ) matches any number of occurrences of FOO (like ( FOO )* in the usual regexp syntax) similarly +( FOO ) matches one or more occurrences, and ?( FOO ) matches zero or one occurrence @( FOO | BAR ) matches either FOO or BAR Matches apply to the whole string, not a substring; if you want a substring, put * at the beginning and at the end This syntax needs to be activated with shopt -s extglob in bash and with setopt ksh_glob in zsh. So in bash you'd write shopt -s extglobls a*(k) See also Why does my regular expression work in X but not in Y? Ksh93, zsh and bash can do regular expression matching with extended regular expressions (basically the syntax of awk) on strings, with the =~ operator of the [[ … ]] construct. This isn't convenient for listing files though, but if you really want it, it can be done. shopt -s dotglob # <<< include dot files, for bashsetopt globdots # <<< include dot files, for zshFIGNORE='@(.|..)' # <<< include dot files, for kshfor x in *; do if [[ $x =~ ^ak*$ ]]; then … fidone
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/238810", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126755/" ] }
238,856
So I have a script that adds 2 films together using the audio from the $1.audio file. What I would like to do is rename any file in the directory with: *.mp4 To: *.audio Keeping original file name.
You can use the rename command. It's not portable, but it exists in different forms in different distributions. In CentOS/RHEL and probably Fedora: rename .mp4 .audio *.mp4 Should do it. From man rename on CentOS 6: SYNOPSIS rename from to file... rename -VDESCRIPTION rename will rename the specified files by replacing the first occur- rence of from in their name by to. In Ubuntu and probably any Debian variant: rename 's/\.mp4$/.audio/' *.mp4 should do it. From man rename on Ubuntu 14.04: SYNOPSIS rename [ -v ] [ -n ] [ -f ] perlexpr [ files ]DESCRIPTION "rename" renames the filenames supplied according to the rule specified as the first argument. The perlexpr argument is a Perl expression which is expected to modify the $_ string in Perl for at least some of the filenames specified. If a given filename is not modified by the expression, it will not be renamed. If no filenames are given on the command line, filenames will be read via standard input. For example, to rename all files matching "*.bak" to strip the extension, you might say rename 's/\.bak$//' *.bak
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/238856", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/130767/" ] }
238,886
I have shell scripts like #!/bin/bashwhile true;do #Code HERE! #Pushing Data to DB echo "Data to DB"> /root/schip.log 2>&1done This is script is continuously running and gathering info on server and then sending data to DB(TimeStamp DB). I don't why, sometimes the scripts are dieing. In logs I can't see any thing. In same way, I saw in Python script. Python script like this import <stuff>while True: #Code HERE #Push data to DB print "Data to DB" So, what could be the reasons?, how do I prevent? and how can I enable the logs(In python and Shell) to know the reason?. Thanks!
A few things that may cause a shell to exit (not exhaustive): calling the exit utility. Let's not forget about the obvious calling the return utility. In the case of bash that will return only if in a function or sourced file. exec cmd . That will execute cmd in the same process so in effect breaking out of that loop. The script will end when cmd exits. set -e / set -o errexit is enabled (see also the SHELLOPTS environment variable for bash ) and a command exits with an error. set -u / set -o nounset is enabled and an unset variable is referenced. a DEBUG or ERR trap is defined that calls exit . Failing special builtins. Failure of special builtins (like set , : , eval ...) causes the shell to exit. In the case of bash though, that only happens in POSIX mode (like when POSIXLY_CORRECT is in the environment or when invoked as sh ...) and even then not for all special builtins. For instance : > / will cause the shell to exit. as mentioned by @schily , syntax error (like in code that is only reached conditionally). division by 0 (in $((1/x)) or ${array[1/x]} ). internal bash error for instance because some limit is reached: fails to allocate memory fails to fork a process stack size exceeded (for instance when using function recursion) Some other limits in place via ulimit (which may also cause some signals to be sent). killed by a another process. Another process can call kill() to explicitly kill the interpreter of your script. killed by the system. SIGINT/SIGQUIT. If you press ^C / ^\ . SIGHUP. If the terminal is disconnected. SIGSEGV/SIGBUS/SIGILL. The bash command does something wrong (a bug) or failing hardware (memory). SIGPIPE: builtin ( echo , printf ) writing to a now-closed pipe or socket (could also happen for error messages if stderr is a pipe). The first thing to check would be the error messages and the exit status.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/238886", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/96601/" ] }
238,995
tr seems to buffer its input so that this command LongRunningCommand|tr \\n , will only start producing output after a few kilobytes of input from LongRunningCommand have accumulated. Is there a way to force tr to stop this buffering or any other command that can replace new-lines with an other character without buffering? P.S. I've already tried the first two suggestions from Turn off buffering in pipe without success.
Commands generally don't buffer their input. They would do a read() for a large chunk, but when reading from a pipe, if there aren't that many bytes in the pipe, the read() system call will return with as many characters there are and the application will generally work with that if it can. A notable exception to that is mawk which will keep re- read() ing until the input buffer is full. Applications do buffer their output (stdout) though. The usual behaviour is that if the output is going to a tty, then the buffering will be line-wise (that is, it won't start writing to stdout until it has a full line to output, or a block-full for very long line), while for every other type of file, the buffering is by blocks (that is, it won't start writing until is has one block full to write (something like 4KiB/8KiB... depends on the software and system)). So in your case LongRunningCommand likely buffers its output by blocks (since its output is a pipe and not a tty), and tr likely buffers its output by line since its output is probably the terminal. But, since you remove every newline character from its output, it will never output a line, so the buffering will be by block. So here you want to disable buffering for both LongRunningCommand and tr . On GNU or FreeBSD systems: stdbuf -o0 LongRunningCommand | stdbuf -o0 tr '\n' , Note that if you want to join the lines with a comma, a better approach is to use paste -sd , - . That way the output will be terminated by a newline character (you'll probably still need to disable buffering).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/238995", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73271/" ] }
239,002
Why can root 's password can be changed without entering the old password? Is there any benefit to this or is it just an implementation fault? If we issue passwd from a normal user account it first asks for " (Current) Unix Password: " but in the case of root it takes us directly to " Enter new Unix password: ". I don't understand the logic behind this.
Root owns and can write to both /etc/passwd and /etc/shadow anyway. Which does not mean the sysadmin SHOULD know her user's passwords. In fact, she should not know anything else than the root password.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/239002", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
239,018
Not sure why I am getting this. I realize this must be a common question, but can't figure it out. #!/bin/bash #Checks word count of each text file directory and deletes if less than certain amount of words #Lastly, displays number of files delter count = 0 #Set counter to 0 limit = 2000 for file in *.txt do words = wc -w > $file if words < $limit rm $file count = $count + 1 end end print "Number of files deleted: $count"
I'm afraid your script is full of syntax errors. The specific error you are seeing is because you're not closing the for loop correctly, but there are many, many more: You can't have spaces around the = when assigning values to variables (except in arithmetic expressions); In order to save a command's output in a variable, you must use command substitution , either var=`command` or var=$(command) ; When referring to the value of a variable, you must use $var , not var and generally, that needs to be quoted ( "$var" ); When doing an arithmetical comparison , you need to use the -lt of the [ command, not < unless you're using double parentheses; The command > file format will overwrite file with the output of command. You probaly meant to use wc < "$file" and not wc > $file ; You can't add a value to a variable using var=$var+1 unless that variable has been previously declared as an integer, you need ((var=var+1)) , var=$((var+1)) or declare -i var; var=var+1 . For adding 1, you can also use ((var++)) ; Your if syntax is wrong. The right format is if condition; then do something; fi Same goes for the for loop, the right syntax is for loop-specification; do something; done ; There is no print command (not built in bash anyway), only printf and echo ; You should always quote your variables unless there is a good reason not to. So, a working version of your script with slight improvements, would be: #!/bin/bash -# Checks word count of each text file directory and deletes if less than certain amount of words# Lastly, displays number of files deletedcount=0 # Set counter to 0limit=2000for file in *.txtdo words=$(wc -w < "$file") if [ "$words" -lt "$limit" ] then rm -- "$file" ((count++)) fidoneecho "Number of files deleted: $count" Next time, I recommend you familiarize yourself with a language before attempting to code in it. Each language has its own rules and syntax.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/239018", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/121022/" ] }
239,035
This is a sed -specific question; I am well aware it could be done with other tools but I am working on expanding my knowledge of sed . How can I use sed to globally quote (actually backtick) a word that is not specified in the script? The word is held in the hold space. What I want is something like: s/word/`&`/g But the trick is, word will be contained not in the sed script but in the hold space. So it looks something more like: Hgs/^\(.*\)\n\(.*\)\1\(.*\)$/\2`\1`\3/ which will quote one occurrence of the word held in the hold space. I want to quote all of them, but I can't just add a g flag, because of the way this uses backreferences rather than a static regex. Hgs/^\(.*\)\n\(.*\)\1\(.*\)\1\(.*\)$/\2`\1`\3`\1`\4/ This handles two occurrences of the word, but fails on one, and ignores more than one. I thought I could use something clean and simple like: s//`&`/g But that reuses the last used regex , not what it matches. (Which makes sense.) Is there any way in sed to do what I am trying to do? (Actually I would be interested in seeing how easy this would be in perl , but I would still like to see how to do it in sed .) UPDATE Not that it's needed for this question, but I thought I would give a little more context on what exactly I was doing when I came up with this question: I had a big text file of documentation, certain parts of which needed to be condensed and summarized into an asciidoc table. It was pretty easy because of the Description: and Prototype: lines, etc., so I actually wrote a quick sed script to do all the parsing for me. It worked beautifully—but the one thing it was missing was that I wanted to backtick the words in the Description line that matched the arguments listed in the Prototype line. The prototype lines looked something like this: Prototype: some_words_here(and, arg, list,here) There were upwards of 200 different entries in the table I was outputting (and the source documentation included a lot more text than that) and each arglist only needed to be used to backtick-quote matching words on a single line. To make things trickier, some of the args were not in the Description line, some were in more than once, and some arglists were empty(). However, given that sometimes an arg would match a part of a word, which I didn't want to get backticked, and sometimes an arg name was a common word (like from ) which I only wanted to get backticked when it was used in the context of explaining the use of the function, an automated solution wasn't actually a good fit at all and I instead used vim to do the job semi-manually, with the help of some tricky macros. :)
That was a hard one. Assuming you have a file like this: $ cat filewordline with a word and words and wording wordy words. Where: Line 1: is the search pattern that should be held in the hold space and quoted to `word` . Line 2: is the line to seach and replace globally. The sed command: sed -n '1h; 2{x;G;:l;s/^\([^\n]\+\)\n\(.*[^`]\)\1\([^`]\)/\1\n\2`\1`\3/;tl;p}' file Explanation : 1h; save the first line to the hold space (this is wait we want to search for). hold space contains: word 2{...} applies to the second line. x; exchange the pattern space and the hold space. G; append the hold space to the pattern space. In the pattern space we have now: word # I will call this line the "pattern line" from now online with a word and words and wording wordy words. :l; set a label called l as point for later. s/// do the actual search/replace in the pattern space mentioned above: ^\([^\n]\+\)\n search in the "pattern line" for all characters (from the beginning of the line ^ ) which are not a newline [^\n] (one or more times \+ ), until a newline \n . This is now stored in the back-reference \1 . It contains the "pattern line". (.*[^`]) search for any character .* followed by a character, which is not a backtick [^`] . This is stored in \2 . \2 contains now: line with a word and words and wording wordy , until the last occurence of word , because... \1 is the next search term (the back-reference \1 , word ), hence what the "pattern line" contains. ([^`]) this is followed by another character which is not a backtick; saved to reference \3 . If we don't do this (and the part in \2 from above), we would end of in an endless loop quoting the same word , again and again -> ````word```` , because s/// would always be successful and tl; jumps back to :l (see tl; further down). \1\n\2 \1 \3 all of the above is replaced by the back-references. The second \1 is the one we should quote (note the first reference is the "pattern line"). tl; if the s/// was successful (we replaced something) jump to the label called l and start again until there is nothing more to search and replace. This is the case, when all occurences of word are replaced/quoted. p; when all is done, print the altered line (pattern space). The output: $ sed -n '1h; 2{x;G;:l;s/^\([^\n]\+\)\n\(.*[^`]\)\1\([^`]\)/\1\n\2`\1`\3/;tl;p}' filewordline with a `word` and `word`s and `word`ing `word`y `word`s.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/239035", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135943/" ] }
239,055
I have 2 files File 1: 01:12:00,001 Some text01:14:00,003 Some text02:12:01,394 Some text File 2: 01:12:00,001 Some text01:12:01,029 Some text01:13:21,123 Some text I need output as follows: 01:12:00,001 Some text01:12:00,001 Some text01:12:01,029 Some text01:13:21,123 Some text01:14:00,003 Some text02:12:01,394 Some text How can I achieve this?
Because you're asking for the file to be sorted by the order that the fields appear in the file, this is the most basic use of sort : sort file1 file2 > outputfile
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/239055", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81640/" ] }
239,118
I am learning Linux. I was surprised to see that the parameter order seems to matter when making a tarball. tar -cfvz casual.tar.gz snapback.txt bucket.txt gives the error: tar: casual.tar.gz: Cannot stat: No such file or directorytar: Exiting with failure status due to previous errors But if I issue the command like this: tar -cvzf casual.tar.gz snapback.txt bucket.txt the tarball is created without errors Can anyone explain to me why the parameter order matters in this example or where I can find that information to learn why myself? I tried it the way I did in my first example that received an error with the logic of putting the required parameters c and f first followed by my other parameters. I want to completely absorb Linux, which includes understanding why things like this occur. Thanks in advance!
Whether the order matters depends on whether you start the options with a minus $ tar -cfvz casual.tar.gz snapback.txt bucket.txttar: casual.tar.gz: Cannot stat: No such file or directorytar: Exiting with failure status due to previous errors$ tar cfvz casual.tar.gz snapback.txt bucket.txtsnapback.txtbucket.txt This unusual behavior is documented in the man page Options to GNU tar can be given in three different styles. In traditional style ... Any command line words that remain after all options has been processed are treated as non-optional arguments: file or archive member names. ... tar cfv a.tar /etc ... In UNIX or short-option style, each option letter is prefixed with a single dash, as in other command line utilities. If an option takes argument, the argument follows it, either as a separate command line word, or immediately following the option. ... tar -cvf a.tar /etc ... In GNU or long-option style, each option begins with two dashes and has a meaningful name ... tar --create --file a.tar --verbose /etc tar , which is short for "tape archive" has been around before the current conventions were decided on, so it keeps the different modes for compatibility. So to "absorb Linux", I'd suggest a few starting lessons: always read the man page minor differences in syntax are sometimes important the position of items - most commands require options to be the first thing after the command name whether a minus is required (like tar , ps works differently depending on whether there is a minus at the start) whether a space is optional, required, or must not be there ( xargs -ifoo is different from xargs -i foo ) some things don't work the way you'd expect To get the behavior you want in the usual style, put the output file name directly after the f or -f , e.g. $ tar -cvzf casual.tar.gz snapback.txt bucket.txtsnapback.txtbucket.txt or: $ tar -c -f casual.tar.gz -z -v snapback.txt bucket.txt or you could use the less common but easier to read GNU long style: $ tar --create --verbose -gzip --file casual.tar.gz snapback.txt bucket.txt
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/239118", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/109278/" ] }
239,125
I have this: function abash { if [[ -v $1 ]] then atom ~/Shell/$1.sh else atom ~/.bashrc fi} in my ~/.bashrc file, so as to make it easier to use Atom to edit my bash scripts, now the problem is that [[ -v $1 ]] is meant to be checking whether the input $1 exists but it does not appear to be, as even when I provide a valid input (e.g., running abash cd where ~/Shell/cd.sh is a file I want to edit) abash opens up ~/.bashrc . How do I fix this problem? Where did I get the idea for the [[ -v $1]] test? This answer.
bash conditional expression -v var check if shell variable named var is set. When using [[ -v $1 ]] , you actually checked whether a variable named by content of $1 was set. In your example, it means $cd , which was never set. You can simply check if $1 is non-empty string, using -n : function abash { if [[ -n "$1" ]] then atom ~/Shell/"$1.sh" else atom ~/.bashrc fi} Note that var must be a shell variable for -v var work. [[ -v 1 ]] will never work because 1 is denoted for positional parameter .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/239125", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27613/" ] }
239,205
I'm currently working on a PCI device driver for Ubuntu. I have some example code about PCI driver, but I have difficult on understanding the ioremap and file_operation.mmap. The description of file operation mmap: Memory mapping is one of the most interesting features of modern Unix systems. As far as drivers are concerned, memory mapping can be implemented to provide user programs with direct access to device memory. Mapping a device means associating a range of user-space addresses to device memory. Whenever the program reads or writes in the assigned address range, it is actually accessing the device. The description of ioremap: On many systems, I/O memory is not directly accessible in this way at all. So a mapping must be set up first. This is the role of the ioremap function.The function is designed specifically to assign virtual addresses to I/O memory regions. The above description all come from "makelinux". But still I'm not sure if I correctly understand the difference between the two functions. For now, I understand it the way like this: The fops.mmap (file operation mmap) associates a range of user-space addresses to device memory. Which means for a pci device, we do real address map for the device's BAR with fops.mmap .And with ioremap , we do virtual address map for these "real addresses" got from fops.mmap . Could someone tell me if I was wrong? Thx~ PS. I posted this also in Ubuntu community, hope I didn't break any rules.
I suggest you look into the LDD3 book , it is free. It does explain ioremap in chapter 9, page 249. Also look into APIU 3rd edition , chapter 14.8, page 525. Let me summarize, best to my abilities: ioremap is a kernel function that allows to access hardware through a mechanism called I/O mapped memory. There are certain addresses in memory that are intercepted by motherboard between CPU and RAM and redirected to other hardware, like disks or keyboard. Not sure if you can use the usual addressing through pointers or some other kernel functions. I/O memory is simply a region of RAM-like locations that the device makes available to the processor over the bus. This memory can be used for a number of purposes, such as holding video data or Ethernet packets, as well as implementing device registers that behave just like I/O ports (i.e., they have side effects associated with reading and writing them). mmap is a syscall available in user space that maps a process memory region to content of a file, instead of RAM. When you access that mapped region of memory, through usual pointer dereference, kernel translates it to a file operation. Essentially writing to memory becomes writing into a file. It is just a more fancy way of calling write(). Memory-mapped I/O lets us map a file on disk into a buffer in memory so that, when we fetch bytes from the buffer, the corresponding bytes of the file are read. Similarly, when we store data in the buffer, the corresponding bytes are automatically written to the file. This lets us perform I/O without using read or write. (sidenote) I think first is called "IO mapped memory" and second is called "memory mapped IO". No wonder you are confused.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/239205", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/140572/" ] }
239,218
This is a follow up to unix: replace one entire column in one file with a single value from another file I am trying to replace one column of a file (file1) with one specific value from another file (file2). file1 is structured like this: HETATM 8 P FAD B 600 98.424 46.244 76.016 1.00 18.65HETATM 9 O1P FAD B 600 98.634 44.801 75.700 1.00 17.69 O HETATM 10 O2P FAD B 600 98.010 46.640 77.387 1.00 15.59 O HETATM 11 H5B1 FAD B 600 96.970 48.950 72.795 1.00 -1.00 H and I absolutely need to conserve that structure. file2 is structured like this: 1 27, -81.883, 4.05 48, -67.737, 20.01 55, -72.923, 4.04 27, -62.64, 16.0 I noticed that awk is "misbehaving" and looses the format of my pdb file, meaning that instead of: HETATM 1 PA FAD B 600 95.987 47.188 74.293 1.00 -73.248 I get HETATM 1 PA FAD B 600 95.887 47.194 74.387 1.00 -73.248 I have tried: file1="./Min1_1.traj_COP1A_.27.pdb"file2="./COP1A_report1"value="$(awk -F, 'NR==1{print $2;exit}' $file2)"#option 1: replaces the column I want but messes up the formatawk -F ' ' '{$11 = v} 1' v="$value" $file1 >TEST1#option 2: keeps the format but adds the value at the end onlyawk -F ' ', '{$2 = v} 1' v="$value" $file1 >TEST2awk -F, '{$11 = v} 1' v="$value" $file1 >TEST3 I guess it is because a pdb file does not have the same delimiters for all columns and awk is not dealing with that in the manner I want it to. Any ideas how to "tame" awk for this problem or what other command to use?
Use a regex ( [^[:blank:]] i.e. non-blank) and replace the 11 th match: awk '{print gensub (/[^[:blank:]]+/, v, 11)}' v="$value" infile Same with sed : sed "s/[^[:blank:]]\{1,\}/${value}/11" infile Another way, if your file has fixed length fields and you know the "position" of each field (e.g. assuming only spaces in your sample file, the 11th field takes up 4 chars, from 57th to 60th on each line) awk '{print substr($0,1,56) v substr($0,61)}' v=$value file or sed -E "s/^(.{56}).{4}(.*)$/\1${value}\2/" infile
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/239218", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/140431/" ] }
239,271
Looking at terminfo and Parameterized Strings . Some examples from infocmp -1 xterm :
cud=\E[%p1%dB , given argument 13:
\E  => <ESC>
[   => [
%p1 PUSH parameter 1 (13) onto stack
%d  POP and print from stack as signed decimal => 13
Result: <ESC>[13B
csr=\E[%i%p1%d;%p2%dr , given arguments 13, 16:
\E  => <ESC>
[   => [
%i  Increment parameters 1 and 2: ++13, ++16 gives 14, 17
%p1 PUSH parameter 1 (14) onto stack.
%d  POP and print from stack as signed decimal. => 14
;   => ;
%p2 PUSH parameter 2 (17) onto stack.
%d  POP and print from stack as signed decimal. => 17
r   => r
Result: <ESC>[14;17r
But, ... how to read this one? u6=\E[%i%d;%dR After processing \E[%i we have <ESC>[ and incremented parameters 1 and 2 (if any). But the stack is empty. Should not the two %d 's pop and print two numbers from the stack?
The absence of a %p marker is a quirk of ncurses: the terminfo compiler ( tic ) recognizes either terminfo (which uses %p1 to mark parameters) or termcap (which relies upon convention for the parameters). That would be a legal termcap expression. Since tic knows how to process a termcap expression, the string shown is "close enough" that there was no need to translate it further. You can see what ncurses does using tput , e.g., tput u6 40 50 gives (note the reversal of parameters) ^[[51;41R If the expression were given as u6=\E[%i%p2%d;%p1%dR it would have the same result. The u6-u9 capabilities are an early extension documented in ncurses's terminal database :
# INTERPRETATION OF USER CAPABILITIES
#
# The System V Release 4 and XPG4 terminfo format defines ten string
# capabilities for use by applications, <u0>...<u9>. In this file, we use
# certain of these capabilities to describe functions which are not covered
# by terminfo. The mapping is as follows:
#
# u9 terminal enquire string (equiv. to ANSI/ECMA-48 DA)
# u8 terminal answerback description
# u7 cursor position request (equiv. to VT100/ANSI/ECMA-48 DSR 6)
# u6 cursor position report (equiv. to ANSI/ECMA-48 CPR)
#
# The terminal enquire string <u9> should elicit an answerback response
# from the terminal. Common values for <u9> will be ^E (on older ASCII
# terminals) or \E[c (on newer VT100/ANSI/ECMA-48-compatible terminals).
#
# The cursor position request (<u7>) string should elicit a cursor position
# report. A typical value (for VT100 terminals) is \E[6n.
#
# The terminal answerback description (u8) must consist of an expected
# answerback string. The string may contain the following scanf(3)-like
# escapes:
#
# %c Accept any character
# %[...] Accept any number of characters in the given set
#
# The cursor position report (<u6>) string must contain two scanf(3)-style
# %d format elements. The first of these must correspond to the Y coordinate
# and the second to the %d. If the string contains the sequence %i, it is
# taken as an instruction to decrement each value after reading it (this is
# the inverse sense from the cup string). The typical CPR value is
# \E[%i%d;%dR (on VT100/ANSI/ECMA-48-compatible terminals).
#
# These capabilities are used by tack(1m), the terminfo action checker
# (distributed with ncurses 5.0).
Checking that last comment, tack exercises u8 and u9 but does nothing with u6 and u7 . The extension was added early in 1995 :
# 9.3.4 (Wed Feb 22 19:27:34 EST 1995):
# * Added correct acsc/smacs/rmacs strings for vt100 and xterm.
# * Added u6/u7/u8/u9 capabilities.
# * Added PCVT entry.
and while it is included in several entries for completeness (not many: there are 16 occurrences in 18,699 lines of terminfo.src ), there are no well-known users of the feature. In fact, there is one place in ncurses where it could have been written to use it (some ifdef'd debugging code in the tty_update.c file), but that uses hard-coded escape sequences (marked as "ANSI compatible"). The reason for the absence of users would be:
inverting an arbitrary terminfo expression is harder than it may seem
xterm and similar terminals interpret these escape sequences
In ECMA-48 , these are (u7) DSR (device status report) and (u6) CPR (active position report).
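If you want to poke at this yourself, here is a rough sketch. The first command just shows the raw u6-u9 capabilities in the compiled entry (assuming your entry includes them); the rest is a bash-specific way to send the u7 request by hand and parse the u6-style CPR reply:
$ infocmp -1 xterm | grep -E 'u[6-9]='
$ printf '\033[6n' > /dev/tty          # DSR 6: ask the terminal where the cursor is
$ IFS='[;' read -rs -d R _ row col < /dev/tty
$ echo "cursor at row $row, column $col"
The read uses R (the final byte of the CPR reply <ESC>[row;colR) as its delimiter and splits on [ and ; , which is essentially the scanf-style parse the comment block above describes.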
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/239271", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/140633/" ] }
239,295
People say you shouldn't use spaces in Unix file naming. Are there good reasons to not use capital letters in file names (i.e., File_Name.txt vs. file_name.txt )? Or is this just a matter of personal preference?
People say you shouldn't use spaces in Unix file naming. People say a lot of things. There are some tools that may screw up, but hopefully they are few in number at this point in time, since spaces are a virus proliferated by giant consumer proprietary OS corporations and now impossible to avoid. Spaces make specifying filenames on the command line, etc., awkward. That's about it. The only categorically prohibited characters on *nix systems are NUL (don't worry, it's not on your keyboard, or anyone else's) and / , since that is the path separator. 1 Other than that anything goes. Individual path elements (file names) are limited to 255 bytes (a possible complication if you are using extended character sets) and complete paths to 4 KiB. Or is this just a matter of personal preference I would say it is. Most DE's seem to create a slew of capitalized directories in your $HOME ( Downloads , Desktop , Documents -- the D is very popular), so there's nothing bizarre about it. There are also very commonplace traditional files with capitals in them, such as .Xclients and .Xauthority . One benefit of capitalizing things at the beginning is that when listed lexicographically they'll come before lower case things -- at least, with many tools, and subject to locale. I'm a fan of camel case (aka. camelCase) and use it with filenames, e.g., /home/goldilocks/blueSuedeShoes -- never mind what's in there. Definitely a matter of personal preference but it has yet to cause me grief. Java class files tend to contain capitals by nature, because Java class names do. And of course, let's not forget NetworkManager , even if some of us would prefer to. 1. There is a much more delimited, recommended by POSIX "Portable Filename Character Set" that doesn't include the space -- but it does include upper case! POSIX also specifies the more general restriction regarding "the slash character and the null byte" elsewhere in the same document . This reflects, or is reflected in, long standing conventional practices .
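To see the collation point in practice, a quick experiment (the exact ordering under a UTF-8 locale depends on your system's locale tables, so treat the second result as illustrative):
$ touch Notes notes
$ LC_COLLATE=C ls             # plain byte order: capitals sort first
Notes  notes
$ LC_COLLATE=en_US.UTF-8 ls   # locale-aware collation may interleave case
notes  Notes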
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/239295", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139554/" ] }
239,309
I have set up a 2-seat computer. I start one X11 server using the computer's onboard graphics card (intel) and another on the dedicated one (nvidia). Everything runs fine, except opengl. Currently, only the nvidia-seat has opengl due to conflicting files from the nvidia and intel opengl packages in /lib. Is there any way to force one user to use libs from a different path? Every general /lib thing I found affects the whole system (ldconfig). I've also considered FUSE, but I worry about general security and performance issues. chroot is only viable if I don't have to duplicate and maintain all files. unionfs seemed right if it would allow for user-dependent overlays, but I never messed with unionfs and nothing I found suggests it's possible.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/239309", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/140667/" ] }
239,364
I am trying to create a patch with the command git diff sourcefile >/var/lib/laymab/overlay/category/ebuild/files/thepatch.patch When I apply the patch, it gives me:
$ patch -v
GNU patch 2.7.5
$ /usr/bin/patch -p1 </var/lib/laymab/overlay/category/ebuild/files/thepatch.patch
patching file sourcefile
Hunk #1 FAILED at 1 (different line endings).
Hunk #2 FAILED at 23 (different line endings).
Hunk #3 FAILED at 47 (different line endings).
Hunk #4 FAILED at 65 (different line endings).
Hunk #5 FAILED at 361 (different line endings).
5 out of 5 hunks FAILED -- saving rejects to file sourcefile.rej
I tried to apply dos2unix to both the source file and the patch file, but the messages didn't go away... UPD: --ignore-whitespace doesn't help either:
PATCH COMMAND: patch -p1 -g0 -E --no-backup-if-mismatch --ignore-whitespace --dry-run -f < '/var/lib/layman/dotnet/dev-dotnet/slntools/files/remove-wix-project-from-sln-file-v2.patch'
=====================================================
checking file Main/SLNTools.sln
Hunk #1 FAILED at 14 (different line endings).
Hunk #2 FAILED at 49 (different line endings).
Hunk #3 FAILED at 69 (different line endings).
Hunk #4 FAILED at 102 (different line endings).
4 out of 4 hunks FAILED
UPD: found a very good article: https://stackoverflow.com/a/4425433/1709408
I had the same problem using the patch command that comes with MSYS2 on Windows. In my case both the source file and the patch had CRLF line endings, and converting both to LF didn't work either. What worked was the following:
$ dos2unix patch-file.patch
$ patch -p1 < patch-file.patch
$ unix2dos modified-files...
patch will convert the line endings to LF on all the patched files, so it's necessary to convert them back to CRLF. Note: the patch version I'm using is 2.7.5
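A handy way to see which side has which line endings before patching is the file utility (the exact wording varies between file versions, so take this output as a sketch):
$ file patch-file.patch sourcefile
patch-file.patch: unified diff output, ASCII text, with CRLF line terminators
sourcefile:       ASCII text, with CRLF line terminators
If one of the two reports plain LF endings, that mismatch is exactly what produces the "different line endings" hunk failures.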
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/239364", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98794/" ] }
239,458
I have a text file and I want to prepend some text to the first line. I tried something like this: sed -i '1i\'"string" file However, this inserts a new line into the text file.
This should work with GNU sed: sed -i '1s/^/string/' file It differs from your solution in that it does not add the new line. Test: before running the command, the content of file is this:
sometextherealready
After running the command:
stringsometextherealready
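If you don't have GNU sed (e.g. on BSD/macOS, where -i behaves differently), a portable sketch with the same no-extra-newline behaviour is to rebuild the file; printf '%s' deliberately emits no trailing newline, so the string is glued onto the first line exactly like the sed version:
{ printf '%s' "string"; cat file; } > file.new && mv file.new file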
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/239458", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/140741/" ] }
239,478
I'm having a lot of difficulty figuring out how to phrase this so Google-fu is failing. I have a text file with a table of data. I'd like to insert newlines to visually separate subgroups. For example, if I start with:
jan ford
jan trillian
mar trillian
sep marvin
And the first field is my subgroup field then the output should be:
jan ford
jan trillian

mar trillian

sep marvin
I can do something like ^(a-z){3}\t(.*)\n\1\t(.*)$ to identify two lines where the month is the same but I don't know how to match when they're different. Ideally I'd love this to be a regex I can throw into BBedit but I'm open to other solutions.
It looks like bbedit is some kind of paid OSX editor. I'm afraid I've never used it and can't install it, so I can't help you there. Based on the regex you show, it has its own regular expression syntax, so it's unlikely you'll find a solution on a general *nix site using it. However, here are a couple of other options. In both, the idea is to save the first field and print a blank line if it is different from the one seen on the previous line:
$ awk '{if($1!=last && NR>1){print ""}last=$1;}1;' file
jan ford
jan trillian

mar trillian

sep marvin
awk is a scripting language that is designed to deal with field-based data. It will automatically split each line into fields which can then be referred to as $1 , $2 ... $N . So, the script above will save the first field in the variable last , and for each line but the first (that's what the NR>1 means), it will print an empty line if the current first field is not the same as the saved value. The 1; is awk shorthand for "print every line". Alternatively, you could also do this in perl :
$ perl -lape '$F[0] ne $last && $.>1 && print ""; $last=$F[0]' file
jan ford
jan trillian

mar trillian

sep marvin
Here, we're using perl command line switches to do most of the work. The -a makes perl act like awk and split each input line into the array @F . Therefore, $F[0] is the first field. The -l makes perl add a newline to each print call, so print "" just prints an empty line. The -p makes it print each input line after applying the script given by -e . The script itself is exactly the same as the awk one above.
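Both versions assume the rows of a subgroup are already adjacent, as in your sample. If they might not be, sorting on the first field beforehand is enough (a sketch; file is a stand-in name):
$ sort -k1,1 file | awk '{if($1!=last && NR>1){print ""}last=$1;}1;'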
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/239478", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/140762/" ] }
239,479
I know I can use du -h to output the total size of a directory. But when it contains other subdirectories, the output would be something like:
du -h /root/test
....
24K  /root/test/1
64K  /root/test/2
876K /root/test/3
1.1M /root/test/4
15M  /root/test/5
17M  /root/test
I only want the last line because there are too many small directories in the /root/test directory. What can I do?
Add the --max-depth parameter with a value of 0:
du -h --max-depth=0 /root/test
Or, use the -s (summary) option:
du -sh /root/test
Either of those should give you what you want. For future reference, man du is very helpful.
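For the sample above, either form prints just the total. GNU du also accepts -d 0 as a short form of --max-depth=0:
$ du -sh /root/test
17M     /root/test
$ du -h -d 0 /root/test
17M     /root/test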
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/239479", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45317/" ] }
239,489
I keep getting the following error messages in the syslog of one of my servers:
# tail /var/log/syslog
Oct 29 13:48:40 myserver dbus[19617]: [system] Failed to activate service 'org.freedesktop.login1': timed out
Oct 29 13:48:40 myserver dbus[19617]: [system] Activating via systemd: service name='org.freedesktop.login1' unit='dbus-org.freedesktop.login1.service'
Oct 29 13:49:05 myserver dbus[19617]: [system] Failed to activate service 'org.freedesktop.login1': timed out
Oct 29 13:49:05 myserver dbus[19617]: [system] Activating via systemd: service name='org.freedesktop.login1' unit='dbus-org.freedesktop.login1.service'
They seem to correlate to FTP logins on the ProFTPd daemon:
# tail /var/log/proftpd/proftpd.log
2015-10-29 13:48:40,433 myserver proftpd[17872] myserver.example.com (remote.example.com[192.168.22.33]): USER switch: Login successful.
2015-10-29 13:48:40,460 myserver proftpd[17872] myserver.example.com (remote.example.com[192.168.22.33]): FTP session closed.
2015-10-29 13:48:40,664 myserver proftpd[17881] myserver.example.com (remote.example.com[192.168.22.33]): FTP session opened.
2015-10-29 13:49:05,687 myserver proftpd[17881] myserver.example.com (remote.example.com[192.168.22.33]): USER switch: Login successful.
2015-10-29 13:49:05,705 myserver proftpd[17881] myserver.example.com (remote.example.com[192.168.22.33]): FTP session closed.
2015-10-29 13:49:05,908 myserver proftpd[17915] myserver.example.com (remote.example.com[192.168.22.33]): FTP session opened.
The FTP logins themselves seem to work without problems for the user, though. I've got a couple of other servers also running ProFTPd but so far never got these errors. They might be related to a recent upgrade from Debian 7 to Debian 8, though. Any ideas what the messages want to tell me or even what causes them? I already tried restarting the dbus and proftpd daemons and even the server, and made sure that the DBUS socket /var/run/dbus/system_bus_socket exists, but so far the messages keep coming. EDIT: The output of journalctl as requested in the comment:
root@myserver:/home/chammers# systemctl status -l dbus-org.freedesktop.login1.service
● systemd-logind.service - Login Service
   Loaded: loaded (/lib/systemd/system/systemd-logind.service; static)
   Active: active (running) since Tue 2015-10-27 13:23:32 CET; 1 weeks 0 days ago
     Docs: man:systemd-logind.service(8)
           man:logind.conf(5)
           http://www.freedesktop.org/wiki/Software/systemd/logind
           http://www.freedesktop.org/wiki/Software/systemd/multiseat
 Main PID: 467 (systemd-logind)
   Status: "Processing requests..."
   CGroup: /system.slice/systemd-logind.service
           └─467 /lib/systemd/systemd-logind
Oct 28 10:15:25 myserver systemd-logind[467]: New session c3308 of user switch.
Oct 28 10:15:25 myserver systemd-logind[467]: Removed session c3308.
Oct 28 10:15:25 myserver systemd-logind[467]: New session c3309 of user switch.
Oct 28 10:15:25 myserver systemd-logind[467]: Removed session c3309.
Oct 28 10:15:25 myserver systemd-logind[467]: New session c3310 of user switch.
Oct 28 10:15:25 myserver systemd-logind[467]: Removed session c3310.
Oct 28 10:15:25 myserver systemd-logind[467]: New session c3311 of user switch.
Oct 28 10:15:25 myserver systemd-logind[467]: Removed session c3311.
Oct 28 10:19:52 myserver systemd-logind[467]: New session 909 of user chammers.
Oct 28 10:27:11 myserver systemd-logind[467]: Failed to abandon session scope: Transport endpoint is not connected
And more journalctl output:
Nov 03 16:21:19 myserver dbus[19617]: [system] Failed to activate service 'org.freedesktop.login1': timed out
Nov 03 16:21:19 myserver proftpd[23417]: pam_systemd(proftpd:session): Failed to create session: Activation of org.freedesktop.login1 timed out
Nov 03 16:21:19 myserver proftpd[23418]: pam_systemd(proftpd:session): Failed to create session: Activation of org.freedesktop.login1 timed out
Nov 03 16:21:19 myserver proftpd[23417]: pam_unix(proftpd:session): session closed for user switch
Nov 03 16:21:19 myserver proftpd[23418]: pam_unix(proftpd:session): session closed for user switch
Nov 03 16:21:19 myserver proftpd[23420]: pam_unix(proftpd:session): session opened for user switch by (uid=0)
Nov 03 16:21:19 myserver dbus[19617]: [system] Activating via systemd: service name='org.freedesktop.login1' unit='dbus-org.freedesktop.login1.service'
Nov 03 16:21:19 myserver proftpd[23421]: pam_unix(proftpd:session): session opened for user switch by (uid=0)
Restart logind:
# systemctl restart systemd-logind
Beware that restarting dbus will break their connection again.
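After the restart, it is worth confirming that logind came back and that new FTP logins create sessions again; for example:
# systemctl status systemd-logind     # should show "active (running)" with a fresh PID
# journalctl -u systemd-logind -n 20  # expect new "New session ..." lines instead of timeouts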
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/239489", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48404/" ] }
239,527
For a long time I have been trying to fix my .conkyrc configuration file in order to set real transparency. There are many posts out there about it, but none of them helped in my case; it seems the solution depends on many factors (window manager, desktop environment, conky version and probably others). Actually it seems that my environment supports real transparency since it works for my terminal (see screenshot), but conky is using fake transparency (files on the Desktop are covered/overridden). As you can see, I use Metacity as the window manager and Mate as the desktop environment. I installed conky 1.9:
conky -version
Conky 1.9.0 compiled Wed Feb 19 18:44:57 UTC 2014 for Linux 3.2.0-37-generic (x86_64)
And my distro is Mint 17.2 Rafaela:
lsb_release -a
No LSB modules are available.
Distributor ID: LinuxMint
Description: Linux Mint 17.2 Rafaela
Release: 17.2
Codename: rafaela
My .conkyrc currently is as follows:
background yes
use_xft yes
xftfont Roboto:size=9
xftalpha 0.8
update_interval 1
total_run_times 0
own_window yes
own_window_transparent yes
##############################################
# Compositing tips:
# Conky can play strangely when used with
# different compositors. I have found the
# following to work well, but your mileage
# may vary. Comment/uncomment to suit.
##############################################
# no compositor
#own_window_type conky
#own_window_argb_visual no
#
# xcompmgr
#own_window_type conky
#own_window_argb_visual yes
#
# cairo-compmgr
own_window_type desktop
own_window_argb_visual no
##############################################
own_window_hints undecorated,below,sticky,skip_taskbar,skip_pager
double_buffer yes
draw_shades no
draw_outline no
draw_borders no
draw_graph_borders no
stippled_borders 0
#border_margin 5 #not supported
border_width 1
default_color EDEBEB
default_shade_color 000000
default_outline_color 000000
alignment top_right
minimum_size 600 600
maximum_width 900
gap_x 835
gap_y 77
alignment top_right
no_buffers yes
uppercase no
cpu_avg_samples 2
net_avg_samples 2
short_units yes
text_buffer_size 2048
use_spacer none
override_utf8_locale yes
color1 212021
color2 E8E1E6
color3 E82A2A
own_window_argb_value 0
own_window_colour 000000
TEXT
${goto 245}${voffset 25}${font GeosansLight:size=25} Today${goto 124}${voffset -}${font GeosansLight:light:size=70}${time %I:%M}${image .conky/line.png -p 350,27 -s 3x189}${offset 150}${voffset -55}${font GeosansLight:size=17}${time %A, %d %B}${offset 380}${voffset -177}${font GeosansLight:size=25}Systems${font GeosansLight:size=22}${offset 400}${voffset 5}${font GeosansLight:size=15}$acpitemp'C${offset 400}${voffset 10}${cpu cpu0}% / 100%${offset 400}${voffset 4}$memfree / $memmax${font GeosansLight:size=15}${offset 400}${voffset 5}${if_up wlan0}${upspeed wlan0} kb/s / ${totalup wlan0}${endif}${if_up eth0}${upspeed eth0} kb/s / ${totalup eth0}${endif}${if_up ppp0}${upspeed ppp0} kb/s / ${totalup ppp0}${endif}${offset 400}${voffset 5}${if_up wlan0}${downspeed wlan0} kb/s / ${totaldown wlan0}${endif}${if_up eth0}${downspeed eth0} kb/s / ${totaldown eth0}${endif}${if_up ppp0}${downspeed ppp0} kb/s / ${totaldown ppp0}${endif}${goto 373}${voffset -162}${font Dingytwo:size=17}M$font ${goto 373}${voffset 7}${font Dingytwo:size=17}7$font ${goto 373}${voffset 1}${font Dingytwo:size=17}O$font ${goto 373}${voffset 1}${font Dingytwo:size=17}5$font ${goto 373}${voffset 1}${font Dingytwo:size=17}4$font
I've tried many values for the own_window_type param, but none fixed the issue.
Does somebody know how to achieve this, or what other environment factors affect how the .conkyrc parameters must be set?
You just define:
own_window yes
own_window_transparent yes
own_window_type conky
own_window_argb_visual yes
own_window_class override
...and you can get real transparency on the desktop.
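One caveat worth checking: own_window_argb_visual only yields true transparency when a compositing manager is actually running. If plain Metacity without compositing is in use, starting a standalone compositor by hand (assuming xcompmgr is installed; compton would work the same way) and restarting conky is a quick test:
$ xcompmgr -c &
$ killall conky; conky -d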
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/239527", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55995/" ] }
239,533
Is OpenSSH an implementation of an SSH server? Is it also an implementation of an SSH client? Is AutoSSH not an implementation of an SSH server? Is it an implementation of an SSH client?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/239533", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
239,539
I have a small business network, and for this network one computer is the DHCP, DNS, and web server. It uses local domains like machinary.ao or resources.ao . I'd like to use SSL. The computers and network aren't connected to the internet; everything is local only and nothing goes out or comes in. I enabled SSL in apache and I created some certificates for it. The problem is: the browsers don't accept the SSL certificate; all browsers say it is not trusted. Why is that and what should I do? I want to use SSL for security, but how can I make it trusted? I don't want to pay money for SSL certificates for nothing. I already have a local network and I want to make it more secure. How can I make a local Certificate Authority to sign my certificate? Or what should I do?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/239539", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123257/" ] }
239,543
How can I convert a binary file from little-endian to big-endian, and vice versa? I am running RedHat, if relevant.
You cannot do this, because for such a conversion you need to know the meaning of the binary content. If, e.g., there is a string inside a binary file, it must not be converted, and a 4-byte integer may need different treatment than a 2-byte integer. In other words, for a byte-order conversion, you need a data type description.
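To make the point concrete: generic tools can only swap bytes blindly, which is correct for exactly one fixed element width. For example (both sketches assume the file contains nothing but 16-bit or 32-bit integers, respectively):
# 16-bit data: swap every byte pair
dd if=input.bin of=output.bin conv=swab
# 32-bit data: reverse every 4-byte group (GNU binutils)
objcopy -I binary -O binary --reverse-bytes=4 input.bin output.bin
Apply the wrong width, or apply either to a file with embedded strings, and the result is garbage; that is exactly why a data type description is needed.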
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/239543", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/86960/" ] }
239,599
I have noticed that doing a service restart with something like: service sshd restart is very similar to doing something like: pkill -HUP sshd However, the pkill would close my ssh session whereas the service restart would leave it open. This led to my question: does a service restart send a true HUP like the pkill command? And if they do the same thing, why does the service restart leave my ssh session open but the pkill closes it?
No. SIGHUP probably does not mean what you think it does. In the olden days (and I'm talking 1970s here), a terminal was a device that would connect to a UNIX machine over a serial line. This was often done through a modem connection, as in those days, getting a second machine was way more expensive than having to pay for a lot of phone connectivity; and by using a modem line, you could share a machine with someone far away. When you were done using the machine, it was quite necessary to make sure that whatever you were running would be stopped, since machines back then did not have the same amount of resources that today's machines do, and hence the system would help you ensure you did not forget to do so by sending a signal to any processes connected to your serial line when the modem would hang up. This was the 'hangup' signal, the name of which got abbreviated to SIGHUP . At some point, someone figured out that it could sometimes make sense to have a process run continuously, so as to provide some service to the users on the machine. In order to make sure the process would indeed keep running, it was then necessary that it would be detached from the terminal on which it was started, so that when the user would disconnect and the modem would hang up, the process wouldn't be killed. Additionally, if the process would not detach from the serial line, then the terminal would not be released and the next user who tried to use it would not be able to do so. So for those reasons, you detach. Now you have a long-running process which at some point might need to be reconfigured. You could restart it, or you could have the process poll its configuration file every so often. Both waste resources, however. It would be much better if you could tell it when to reread its configuration file. Since there's this one signal which for a daemon is meaningless anyway, why not just reuse that? Right, so that's what happened, and as a result a convention today is indeed for daemons to reread their configuration file when they receive SIGHUP . But that's only a convention, and it is by no means a general rule. The primary and documented purpose of SIGHUP is still to signal the fact that the terminal connection has been severed. For that reason, the default action for any program upon receipt of the SIGHUP signal is still to terminate, even if the process is a daemon. As such, an init implementation cannot just send SIGHUP to random processes that it manages. Yes, in many cases the reload action does end up sending SIGHUP to the daemon through whatever configuration the init system has (be that init scripts, systemd unit files, upstart configuration, or whatever); but it is incorrect to assume that this is what will always happen, and it is therefore also incorrect to say that the two are equivalent. Incidentally, this also explains why sending SIGHUP to an sshd in control of a terminal kills the session; it's because the sshd assumes something killed the connection, and that it therefore must terminate.
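The convention is easy to demonstrate from a shell. By default SIGHUP terminates; a daemon only "reloads" on it because its author installed a handler, roughly like this sketch:
$ sleep 300 & kill -HUP $!      # default disposition: the process dies
[1]+  Hangup                  sleep 300
$ cat reloader.sh
#!/bin/sh
trap 'echo "SIGHUP received: rereading configuration"' HUP
while :; do sleep 1; done
$ ./reloader.sh &               # now kill -HUP <pid> makes it print instead of die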
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/239599", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81926/" ] }
239,601
../../../ is removed when I use it with the URL in the wget command. Please see below:
user $ wget http://n0t.meaning.anything:20000/../../../some/folder/file
--2015-10-29 16:48:13-- http://n0t.meaning.anything:20000/some/folder/file
Resolving n0t.meaning.anything (n0t.meaning.anything)... failed: Name or service not known.
wget: unable to resolve host address ‘n0t.meaning.anything’
user $
You can ignore the second and third lines (because the URL doesn't actually exist). But in the first line you see: --2015-10-29 16:48:13-- http://n0t.meaning.anything:20000/some/folder/file But my command was wget http://n0t.meaning.anything:20000/../../../some/folder/file So you can see that ../../../ was dropped by my shell (or by the wget command). How do I retain the ../../../ in the wget command?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/239601", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/109220/" ] }
239,636
I have a .txt that can be exemplified like this:
NAME | CODE
name1 | 001
name2 | 001
name3 | 002
name4 | 003
name5 | 003
name6 | 003
I need to write a script to split this file according to the CODE column, so in this case I'd get this:
file 1:
NAME | CODE
name1 | 001
name2 | 001
file 2:
NAME | CODE
name3 | 002
file 3:
NAME | CODE
name4 | 003
name5 | 003
name6 | 003
According to some research, using awk would work: $ awk -F, '{print > $2".txt"}' inputfile The thing is, I also need to include the header to the first line and I need the file names to be different. Instead of 001.txt , for example, I need the file name to be something like FILE_$FILENAME_IDK.txt .
You could try it like this:
awk 'NR==1{h=$0; next}!seen[$3]++{f="FILE_"FILENAME"_"$3".txt";print h > f} {print >> f}' infile
The above saves the header in a variable h ( NR==1{h=$0; next} ); then, if $3 has not been seen before ( !seen[$3]++ , i.e. if it's the first time it encounters the current value of $3 ), it sets the filename ( f=... ) and writes the header to that file ( print h > f ). Then it appends the entire line to the file ( print >> f ). It uses the default FS (field separator): blank. If you want to use | as FS (or even a regex with GNU awk ), see cas's comment below.
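Running that on your sample (saved here as infile; adjust the name to taste) should leave three files; a quick check might look like this:
$ awk 'NR==1{h=$0; next}!seen[$3]++{f="FILE_"FILENAME"_"$3".txt";print h > f} {print >> f}' infile
$ head FILE_infile_001.txt FILE_infile_002.txt
==> FILE_infile_001.txt <==
NAME | CODE
name1 | 001
name2 | 001

==> FILE_infile_002.txt <==
NAME | CODE
name3 | 002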
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/239636", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/140841/" ] }