321,709
I was trying to run Flume on Ubuntu 16.04 as a systemd service and have the following in /etc/systemd/system/flume-ng.service:

[Unit]
Description=Apache Flume

[Service]
ExecStart=/usr/bin/nohup /opt/flume/current/bin/flume-ng agent -c /etc/flume-ng/conf -f /etc/flume-ng/conf/flume.conf --name a1 &
ExecStop=/opt/flume/current/bin/flume-ng agent stop

[Install]
WantedBy=multi-user.target

I tried adding the following lines:

StandardOutput=/var/log/flume-ng/log1.log
StandardError=/var/log/flume-ng/log2.log

which didn't work for me. I did run systemctl daemon-reload and systemctl restart flume-ng. Does anyone know how this works?
ExecStart=/usr/bin/nohup …

This is wrong. Remove it. This service is not running in an interactive login session. There is no controlling terminal, or session leader, to send a hangup signal to it in the first place.

ExecStart=… &

This is wrong. Remove it. This is not a shell script. & has no special shell-like meaning, and in any case would be the wrong way to start a service.

StandardOutput=/var/log/flume-ng/log1.log
StandardError=/var/log/flume-ng/log2.log

These are wrong. Do not use these. systemd already sends the standard output and error of the service process(es) to its journal, without any such settings in the service unit. You can view it with

journalctl -e -u flume-ng.service
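Putting those fixes together, a minimal cleaned-up unit might look like the sketch below. The paths and agent name are taken from the question; whether flume-ng stays in the foreground by default (as the implicit Type=simple assumes) is an assumption you should verify:

[Unit]
Description=Apache Flume

[Service]
# No nohup and no trailing "&": systemd supervises the process itself.
ExecStart=/opt/flume/current/bin/flume-ng agent -c /etc/flume-ng/conf -f /etc/flume-ng/conf/flume.conf --name a1
Restart=on-failure

[Install]
WantedBy=multi-user.target

Output and errors land in the journal, readable with journalctl -e -u flume-ng.service.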
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/321709", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66640/" ] }
321,710
I am running Mac OS 10.11.6 El Capitan. There is a link I would like to download programmatically:

https://dev.mysql.com/get/Downloads/MySQL-5.7/mysql-5.7.16-osx10.11-x86_64.dmg

If I paste this URL into any browser (e.g. Safari) the download works perfectly. However, if I try to download the same URL from the command line using curl, it doesn't work: the result is an empty file:

$ ls -lA
$ curl -O https://dev.mysql.com/get/Downloads/MySQL-5.7/mysql-5.7.16-osx10.11-x86_64.dmg
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
$ ls -lA
total 0
-rw-r--r--  1 myname  staff  0 Nov  7 14:07 mysql-5.7.16-osx10.11-x86_64.dmg
$

Of course I can get the file through the browser, but I would like to understand why the curl command above doesn't work. Why can't curl download this file correctly, when it is evidently present on the website and can be correctly accessed and downloaded through a graphical web browser?
There is a redirect on the webserver-side to the following URL: http://cdn.mysql.com//Downloads/MySQL-5.7/mysql-5.7.16-osx10.11-x86_64.dmg . Because it's a CDN, the exact behaviour (whether you get redirected or not) might depend on your location. curl does not follow redirects by default. To tell it to do so, add the -L argument: curl -L -O https://dev.mysql.com/get/Downloads/MySQL-5.7/mysql-5.7.16-osx10.11-x86_64.dmg
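If you want to see the redirect for yourself before downloading, you can ask curl for just the response headers; the exact Location value shown below is illustrative and may differ depending on which CDN node answers:

$ curl -sI https://dev.mysql.com/get/Downloads/MySQL-5.7/mysql-5.7.16-osx10.11-x86_64.dmg | grep -i '^location'
Location: http://cdn.mysql.com//Downloads/MySQL-5.7/mysql-5.7.16-osx10.11-x86_64.dmg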
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/321710", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/198344/" ] }
321,831
I have a property file in which a particular key has comma-delimited values.

$ cat sample.properties
value=alex,raj,kaly,rema

In my shell script I source this property file and use the key to get its values:

. sample.properties
echo $value

Here is what I get as output:

alex,raj,kaly,rema

However I am looking for a space-delimited value like this:

alex raj kaly rema

I know how to do this for a file, however I am not certain about something which is assigned to a variable. Any help is much appreciated.
You can use Bash string manipulation:

$ value=alex,raj,kaly,rema
$ echo $value
alex,raj,kaly,rema

Now replace all , with ' ' (space):

$ echo ${value//,/ }
alex raj kaly rema

$ bash --version
GNU bash, version 3.2.57(1)-release
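If you need the values as separate items rather than one space-separated string (for example, to loop over them safely), a small sketch using read with a comma IFS also works in Bash 3 and later:

# Split the comma-delimited string into an array, then iterate
IFS=',' read -r -a names <<< "$value"
for name in "${names[@]}"; do
  echo "$name"
done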
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/321831", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/161341/" ] }
321,860
Is there a simple command to reverse a hexadecimal number? For example, given the hexadecimal number:

030201

The output should be:

010203

Using the rev command, I get the following:

102030

Update

$ bash --version | head -n1
GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu)
$ xxd -version
xxd V1.10 27oct98 by Juergen Weigert
$ rev --version
rev from util-linux 2.20.1
You can convert it to binary, reverse the bytes, optionally remove trailing newlines (needed for rev < 2.24), and convert it back:

$ xxd -revert -plain <<< 030201 | LC_ALL=C rev | tr -d '\n' | xxd -plain
010203

Using:

$ bash --version | head -n1
GNU bash, version 4.3.42(1)-release (x86_64-redhat-linux-gnu)
$ xxd -version
xxd V1.10 27oct98 by Juergen Weigert
$ rev --version
rev from util-linux 2.28.2

This does not work if the string contains 00 (the NUL byte), because rev will truncate the output at that point, or 0a (newline), because rev reverses each line rather than the entire output.
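A sketch of an alternative that avoids both caveats by never leaving the text domain: reverse the hex digit pairs themselves instead of the decoded bytes (fold and tac here are GNU tools):

$ echo 030201 | fold -w2 | tac | tr -d '\n'; echo
010203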
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/321860", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/161541/" ] }
321,904
So, for now, let's say we only need this to work for Debian-based systems (but I will need to be able to do it for yum in the future). The best I have right now is dpkg-query. So, for example, if I run this:

dpkg-query --show

I'll get a list like this (with a few thousand entries):

...
sudo 1.8.17p1-2
...
vim 2:7.4.1829-1
...

There is no naming convention, though. Some of the packages have the version number in them, some of them have the architecture, e.g. gcc-4.9-base:amd64, but what I want would only have gcc 4.9. Ideally, I would like to be able to get vendor, product, and version information for all of the software installed. Is there any way to do this natively, or does it have to be some kind of "fuzzy" match? I'm open to alternative ways of querying the package manager, or some other method that I am not thinking of. I am not able to install additional packages to accomplish this goal (though I would be interested to see how they work, if such packages exist).
This will list the source packages and versions corresponding to the installed binary packages: dpkg-query --show -f '${source:Package} ${source:Version}\n' | sort -u This is the closest match to individual pieces of software you can get automatically: you'll only see gcc-4.9 once, with the associated version, instead of all the corresponding binary packages. You can't easily retrieve "vendor" information, you'd need to look at the package details ( apt-cache show ... ) or the licensing information (in /usr/share/doc/<package>/copyright — it should point to the "upstream" project, i.e. the "vendor"); this isn't guaranteed to be in machine-readable format so there will be some human parsing involved. You'll still find some source packages whose name contains the (major) version, e.g. gcc-4.9 , gcc-5 etc.; these are unavoidable when packages are designed so that major versions are co-installable, as is the case for GCC. The equivalent RPM command is rpm --qf "%{SOURCERPM}\n" -qa | sort -u
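If an approximation of the "vendor" is enough, the Maintainer field is also available straight from dpkg-query's format strings; bear in mind that for Debian packages it names the package maintainer rather than the upstream project:

dpkg-query --show -f '${source:Package}\t${source:Version}\t${Maintainer}\n' | sort -u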
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/321904", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37211/" ] }
321,931
Ubuntu 16.04. I have a pretty large file, so modifying it by hand is not possible. I'd like to change all occurrences of <tag1>true</tag1> to <tag1>false</tag1>.
You can use

sed -e 's|<tag1>true</tag1>|<tag1>false</tag1>|g' -i file

although I recommend doing the edit to a copy of the file,

sed -e 's|<tag1>true</tag1>|<tag1>false</tag1>|g' file > newfile

and using less to check if the new contents are acceptable; i.e.

less newfile

Edited: Note the g modifier at the end of the pattern. It is necessary if there can be more than one match on a line. When g is present, it means all matches on a line are replaced. Furthermore, instead of complete tags, you could consider just

sed -e 's|>true<|>false<|g' file > newfile

or perhaps

sed -e 's|>[Tt]rue<|>false<|g' file > newfile

which changes both >true< and >True< to >false<. You can use diff to compare the two files, after using one of the commands above. One option is

diff --side-by-side file newfile | less

but it does not really work if the lines are very long. The "unified diff" format is commonly used,

diff -u file newfile | less

where lines beginning with - are from file, lines beginning with + from newfile, and lines beginning with a space are common to both.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/321931", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102919/" ] }
321,944
I am running an Ubuntu 12.04 system. While I am logged in using the i3 window manager how is it possible to login as another user (perhaps using a different window manager) without losing my i3wm session? This is something I can easily do when I am using Unity (and then use Ctrl+F7 and Ctrl+F8 to switch between the two users) but I haven't figured a way to do this from i3wm.
You can ask your display manager to handle this, for instance:

lxdm: lxdm -c USER_SWITCH
lightdm: dm-tool switch-to-greeter
gdm: gdmflexiserver
kdm: kdmctl reserve

(via edgardcastro on Reddit). So on Ubuntu 16.04 (that is, i3 over LightDM), you could bind the following shortcut:

bindsym $mod+Shift+l exec dm-tool switch-to-greeter
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/321944", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24044/" ] }
321,960
My internet provider informed me that my computer is infected with Locky Ransomware. I have Linux Ubuntu. How to remove virus or scan system?
False claims that your computer has been infected by malware and that you should install some particular software to fix it are common scams. They usually pretend to come from someone that many people trust on matters of computer security, such as Microsoft or your service provider. The scam is that they are in fact trying to persuade you to install their malware.

Some malware can be detected over the network, through its network traffic, but mostly you would need local access to the machine. Ransomware in particular can't be detected over the network. And even if your ISP did need to contact you about something (e.g. your computer is involved in a botnet), they'd use the contact address they have for you (e.g. email coming from them); they wouldn't inject content into a web page. Never install something that someone pretending to be from your ISP/Microsoft/… wants you to install.

If by "website is certified" you mean that it shows a green padlock icon in your browser, that only means that they paid a few dollars and that your browser really is connected to the scammer's site. HTTPS only guarantees that the site your browser is talking to is the site it pretends to be (i.e. the correct domain name); it doesn't mean that the site is trustworthy. Having a phone number likewise guarantees nothing beyond the fact that the scammers are paying someone (likely in a country with low wages) to answer the phone.

When you visit a website, your browser sends some basic information about your computer, including the operating system that you're using. The website also knows your IP address, and it can look up which ISP this address belongs to, as well as your approximate geographical location through geolocation services. Scammers take advantage of this to pretend that they actually know something about you. It's a classic trick of confidence tricksters to appear more informed than they really are by using clues that they hope you won't think of. In your case, pretending that you have Windows malware on your Linux machine is clearly wrong, but scammers are looking for the most gullible victims; they don't mind that most people who see the ad will ignore it as an obvious scam.

Such scams are often shown through ads. If the scam appears in an unrelated web page and that web page has a way to report fraudulent ads, do so.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/321960", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/199320/" ] }
321,968
I created a passwordless SSH connection to my remote server from my Mac. It worked(!) and then I closed my terminal, re-opened it, tried again, and got the following (username, my_ip are not real):

ssh -vvv username@my_ip
OpenSSH_7.2p2, LibreSSL 2.4.1
debug1: Reading configuration data /Users/Me/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 20: Applying options for *
debug1: /etc/ssh/ssh_config line 53: Applying options for *
debug2: resolving "my_ip" port 22
debug2: ssh_connect_direct: needpriv 0
debug1: Connecting to my_ip [my_ip] port 22.
debug1: Connection established.
debug1: identity file /Users/Me/.ssh/id_rsa type 1
debug1: key_load_public: No such file or directory
debug1: identity file /Users/Me/.ssh/id_rsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /Users/Me/.ssh/id_dsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /Users/Me/.ssh/id_dsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /Users/Me/.ssh/id_ecdsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /Users/Me/.ssh/id_ecdsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /Users/Me/.ssh/id_ed25519 type -1
debug1: key_load_public: No such file or directory
debug1: identity file /Users/Me/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_7.2
ssh_exchange_identification: read: Connection reset by peer

When I checked my .ssh folder, id_rsa was there but none of the others were. From the error, it looks like I need to somehow create these files but am not sure how to do so. Any help would be appreciated.
debug1: key_load_public: No such file or directory

The line above is not an error, but just a debug message saying that the ssh client is not able to find a separate public key (named ~/.ssh/id_rsa.pub). This file is not needed to connect to the remote server, but it can be useful. The actual error,

ssh_exchange_identification: read: Connection reset by peer

points to an error in the server configuration. The server is running, but fails to accept the SSH connection. Check the server log for more information.
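As an aside, if you want that debug message to go away (or you need the public key, e.g. to install it in another server's authorized_keys), you can regenerate the public half from the private key:

ssh-keygen -y -f ~/.ssh/id_rsa > ~/.ssh/id_rsa.pub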
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/321968", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/138677/" ] }
321,992
I have a 2.5 TB of data that I want to put in a 2TB hard drive to mail somewhere. It's not hopeless, as a very large fraction of the data consists of duplicate files. I am considering using jdupes with the -H option, which will replace duplicate files with hardlinks to a single file. Here's the problem: If I tar a directory containing multiple hard links to other files in the directory tree, will tar "reduplicate" them in the archive file?
This is probably a duplicate of Dereferencing hard links. By default, a single copy of the hardlinked data should be included in your archive.
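You can check this yourself with a quick experiment; with GNU tar, the second link is stored as a zero-size "link to" entry instead of a second copy of the data (the owner and timestamp below are illustrative):

$ echo 'some data' > a
$ ln a b
$ tar -cf test.tar a b
$ tar -tvf test.tar
-rw-r--r-- user/user        10 2016-11-11 12:00 a
hrw-r--r-- user/user         0 2016-11-11 12:00 b link to a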
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/321992", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8474/" ] }
321,997
I have Linux Mint installed on 3 computers at home, and all of them are almost unusably slow whenever Firefox is open. Here is the output from top: [screenshot of top output omitted]. As you can see, "Web Content" and Firefox are collectively using up nearly all of my CPU, and more than 50% (4GB+) of system memory. I have never had this problem in the past with Debian or Ubuntu, but it is occurring on every computer I've installed Mint on so far. This extremely high (near total) CPU/memory usage is constant, and is rendering my computer unusable. Does anyone have ideas about how to fix this? If there is no fix, how can I keep this "Web Content" application from running at all?
This is a common problem, and it wastes battery energy, decreasing unplugged operating time significantly. The cause appears to be very simple: you may have too many tabs open, each running bulky and useless endless-loop JavaScript. Those scripts usually don't originate from the web site you are working with, but from ad-based third parties somewhere else, trying to collect some info from your Firefox session or just to display rotating ads on the side. A simple (but not unique) solution is to install the NoScript plugin, which has an immediate effect: the Web Content process's CPU consumption decreases to almost 0%. So keep NoScript installed in all your Firefoxes, and keep careful track of which domains you actually allow to execute scripts. It's good practice to permanently allow only the original domain's scripts (the "allow" choice), so that the web site you are visiting displays all the useful information correctly, but to keep side or extra domains in "forbid" or "temporarily allow" mode, so that the next Firefox load keeps all those unwelcome scripts banned again.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/321997", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5447/" ] }
322,029
So I know that there is a way to change the color of text for directories, regular files, bash scripts, etc. Is there a way to change the color of a file based on its _file extension_? Example:

$ ls -l
foo.txt [is red]
foo.text [is blue]
foo.secret [is green]
foo.txt [is red]
Yes, using the LS_COLORS variable (assuming GNU ls). The easiest way to manipulate that is to use dircolors:

dircolors --print-database > dircolors.txt

will dump the current settings to dircolors.txt, which you can then edit; once you've added your settings,

eval $(dircolors dircolors.txt)

will update LS_COLORS and export it. You should add that to your shell startup script. To apply the example settings you give, the entries to add to dircolors.txt would be

.txt 00;31
.text 00;34
.secret 00;32
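For a quick one-off test without editing the dircolors database, you can set the variable inline; GNU ls accepts extension rules of the form *.ext=attributes, where 31, 34 and 32 are the ANSI codes for red, blue and green:

LS_COLORS='*.txt=00;31:*.text=00;34:*.secret=00;32' ls --color=always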
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/322029", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/186552/" ] }
322,035
/bin/sh, the Bourne shell created in 1977, used to be the default shell for Unix systems. Nowadays this file still exists, but mostly just as a symbolic link to the default POSIX-compatible shell installed on the system:

on RHEL/CentOS it points to /bin/bash, the Bourne Again shell
on Ubuntu Linux it points to /bin/dash, the Debian Almquist shell
on Debian it points to /bin/dash (6.0 and later; older Debian releases had it point to /bin/bash)

Which made me curious: Is there a Unix system, or Linux distro, that still provides a binary for /bin/sh?
/bin/sh is not always a symlink NetBSD is one system where /bin/sh is not a symlink. The default install includes three shells: the Korn shell, the C shell, and a modified Almquist shell. Of these, the latter is installed only as /bin/sh . Interix (the second POSIX subsystem for Windows NT) does not have /bin/sh as a symlink. A single binary of the MirBSD Korn shell is linked twice as /bin/sh and /bin/mksh . FreeBSD and its derivative TrueOS (formerly PC-BSD) have the TENEX C shell as both /bin/csh and /bin/tcsh , and the Almquist shell as (only) /bin/sh . No symlink there, either. OpenBSD has the (original) C shell as /bin/csh and the PD Korn shell linked thrice as /bin/sh , /bin/ksh , and /bin/rksh . Also no symlink.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/322035", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34039/" ] }
322,038
How can I check if mv is atomic on my fs (ext4)? The OS is Red Hat Enterprise Linux Server release 6.8. In general, how can I check this? I have looked around, and didn't find if my OS is standard POSIX.
Interestingly enough, it seems the answer may be, "It depends". To be clear, mv is specified to:

The mv utility shall perform actions equivalent to the rename() function

The rename function specification states:

This rename() function is equivalent for regular files to that defined by the ISO C standard. Its inclusion here expands that definition to include actions on directories and specifies behavior when the new parameter names a file that already exists.

That specification requires that the action of the function be atomic. But the latest ISO C specification for rename() states:

7.21.4.2 The rename function

Synopsis

#include <stdio.h>
int rename(const char *old, const char *new);

Description

The rename function causes the file whose name is the string pointed to by old to be henceforth known by the name given by the string pointed to by new. The file named old is no longer accessible by that name. If a file named by the string pointed to by new exists prior to the call to the rename function, the behavior is implementation-defined.

Returns

The rename function returns zero if the operation succeeds, nonzero if it fails, in which case if the file existed previously it is still known by its original name.

Surprisingly, note that there is no explicit requirement for atomicity. It may be required somewhere else in the latest publicly-available C Standard, but I haven't been able to find it. If anyone can find such a requirement, edits and comments are more than welcome. See also Is rename() atomic?

Per the Linux man page:

If newpath already exists, it will be atomically replaced, so that there is no point at which another process attempting to access newpath will find it missing. However, there will probably be a window in which both oldpath and newpath refer to the file being renamed.

The Linux man page claims the replacement of the file will be atomic. Testing and verifying that atomicity might be very difficult, though, if that is how far you need to go. You're not clear as to what you mean in your use of "How can I check if mv is atomic". Do you want requirements/specification/documentation that it's atomic, or do you need to actually test it? Note also, the above assumes the two operand file names are on the same file system. I can find no standard restriction on the mv utility to enforce that.
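One practical, if informal, way to check what mv actually does on your system is to trace its system calls. The run below is a sketch; depending on the coreutils/glibc version the call may be renameat or renameat2 instead, and the output format varies by strace version:

$ touch oldname
$ strace -e trace=rename,renameat,renameat2 mv oldname newname
rename("oldname", "newname") = 0
+++ exited with 0 +++

Seeing a single rename-family call, rather than a copy-and-unlink sequence, is what gives a same-filesystem mv its atomicity.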
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/322038", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/199367/" ] }
322,043
I'm trying to access U-Boot environment from Linux. It seems that there is only one tool to achieve that : fw_printenv/fw_setenv . But those tools are only usable on a MTD with UBIFS, and I'm running on a more "classical" file system (FAT for U-Boot, ext4 for Linux). I tried to find a format spec for U-Boot env file, unsuccessfully. Do you guys have an idea of how I could get/set those U-Boot variables from my Linux without MTD/UBI ?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/322043", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/198609/" ] }
322,124
In my day-to-day, I need to ssh to various machines, all of which I have a different private key for. When I start a new shell session, only my default id_rsa is added to the ssh keychain, so I have been running

ssh-add ~/.ssh/*

However this also tries, and fails, to add things like ~/.ssh/config. Using find/grep, how can I go about only adding valid private key files?
Slightly convoluted, but:

for possiblekey in ${HOME}/.ssh/id_*; do
  if grep -q PRIVATE "$possiblekey"; then
    ssh-add "$possiblekey"
  fi
done

You can also add all of your keys to your ~/.ssh/config, each in their own IdentityFile directive outside of a Host directive:

# Global SSH configurations here will be applied to all hosts
IdentityFile ~/.ssh/id_dsa
IdentityFile ~/.ssh/id_project1
IdentityFile ~/.ssh/id_someotherkey

Host somespecifichost.example.com
    IdentityFile ~/.ssh/id_specifichostonlykey

The latter, honestly-better, method has the added perk of not suddenly picking up a new key that you've added without you explicitly adding it to the "keyring" as it were.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/322124", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36669/" ] }
322,203
I am having trouble (only recently) logging into a machine at work. I have always entered my credentials like [email protected] . But it doesn't seem to work anymore... I noticed that after typing the username, and before typing the password it says: [email protected]@machine.domain.local's password: Is this normal that the remote machine is "tagged" on to the end of my username? Or is it the root of my login problem?
I can replicate this to a Debian-based system joined to an Active Directory domain, and I get a successful login with the correct password: ssh -l [email protected] [email protected]@remotehost's password: My guess would be that the remote server has been recently updated from using winbindd to sssd for its AD authentication layer. Why you are getting a permission denied error is not something that can be easily diagnosed without access to the remote host in question. I would start by looking at the authentication log files on the server. In a Debian-based environment that would be /var/log/auth.log , the files corresponding to your client in /var/log/samba , and files under /var/log/sssd . Be aware that the domain usage changed from winbindd to sssd , so any "allowed groups" in /etc/ssh/sshd_config may need adjusting. Update for early 2022. Even if you're using sssd you still need winbind . On my servers I abandoned sssd about a year ago (because I didn't need it to manage login sessions) and reverted to using winbind to talk directly to AD.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/322203", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/148695/" ] }
322,223
I need to copy one disk to another. I tried with the command below and it takes nearly a day to copy 1 TB of disk in Fedora:

dd if=/dev/sda of=/dev/sdb

I have tried the same on a Unix (HP-UX) system with the command below and it completes within a few hours:

dd if=/dev/sda of=/dev/rdsk

What alternative could I use to copy from disk to disk faster?
dd has many (weird) options, see dd(1). You should explicitly state the buffer size, so try

dd if=/dev/sda of=/dev/sdb bs=16M

IIRC, the default buffer size is only 512 bytes. The command above sets it to 16 megabytes. You could try something smaller (e.g. bs=1M), but you should use more than the default (especially on recent disk hardware with sectors of 4 Kbytes, i.e. Advanced Format). I naively recommend some power of two which is at least a megabyte. With the default 512-byte buffer size, I guess (but I could be very wrong) that the hardware requires the kernel to transfer 4K for each 512-byte block. Regarding rdsk, the sd(4) man pages say: "At this time, only block devices are provided. Raw devices have not yet been implemented." Increasing dd's buffer size will give you more performance for read and write operations. All disks now have a hardware read/write buffer, but if you increase dd's buffer size beyond the hardware buffer, performance can decrease again, because dd will still be reading from the first disk into its buffer after the second disk has already written out everything from its own hardware buffer. You may need to set the bs option of dd to a different value for different devices.
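As a side note, reasonably recent GNU dd (coreutils 8.24 and later) can also report live progress, which is reassuring on a terabyte-scale copy:

dd if=/dev/sda of=/dev/sdb bs=16M status=progress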
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/322223", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/125769/" ] }
322,238
I have a file that contains text as follows:

dt=2016-06-30,path=path1,site=US,mobile=1
dt=2016-06-21,path=path2,site=UK,mobile=0

I want to convert it to text with double-quoted values in the key-value pairs, like so:

dt="2016-06-30",path="path1",site="US",mobile="1"
dt="2016-06-21",path="path2",site="UK",mobile="0"

How can I place every value within double quotes, using Sed or any other command?
Using Sed sed -e 's/=\([^,]*\)/="\1"/g' file.txt Using Awk awk -F, -v OFS=, '{for (f=1;f<=NF;f++) {sub(/=/,"&\"",$f); sub(/$/,"\"",$f)}; print}' file.txt Or, shorter: awk -F, -v OFS=, '{for (f=1;f<=NF;f++) {gsub(/=|$/,"&\"",$f)} print}' file.txt Shorter still: awk -F, -v OFS=, '{for (f=1;f<=NF;f++) {gsub(/=|$/,"&\"",$f)}} 1' file.txt Using ex Ex is designed for file editing , but you can preview changes without actually saving them back to the file like so: ex -sc '%s/=\([^,]*\)/="\1"/g | %p | q!' file.txt To actually make the changes and save them to the file, use: ex -sc '%s/=\([^,]*\)/="\1"/g | x' file.txt However if you give a pattern which is found nowhere in the file (e.g. there is no = anywhere in the file) then Ex will not exit automatically. Thus for better robustness I usually use printf to pass commands to Ex: printf '%s\n' '%s/=\([^,]*\)/="\1"/g' %p | ex file.txt And to save changes: printf '%s\n' '%s/=\([^,]*\)/="\1"/g' x | ex file.txt
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/322238", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/199557/" ] }
322,244
What does the centos-release-upstream file mean in CentOS? The centos-release file already tells me that a CentOS 7.2.x release was installed. root# cat /etc/centos-releaseCentOS Linux release 7.2.1511 (Core) root# cat /etc/centos-release-upstream Derived from Red Hat Enterprise Linux 7.2 (Source)root# cat /etc/os-release NAME="CentOS Linux"VERSION="7 (Core)"ID="centos"ID_LIKE="rhel fedora"VERSION_ID="7"PRETTY_NAME="CentOS Linux 7 (Core)"ANSI_COLOR="0;31"CPE_NAME="cpe:/o:centos:centos:7"HOME_URL="https://www.centos.org/"BUG_REPORT_URL="https://bugs.centos.org/"CENTOS_MANTISBT_PROJECT="CentOS-7"CENTOS_MANTISBT_PROJECT_VERSION="7"REDHAT_SUPPORT_PRODUCT="centos"REDHAT_SUPPORT_PRODUCT_VERSION="7"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/322244", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50835/" ] }
322,255
I have a directory full of files. My goal is to append text to the beginning of the file. The text that goes at the beginning is the same for each file. This is my attempt:

#!/bin/bash

for file in `find . -type f -executable`;do
sed '/\#!/bin/bash\/a Hello Word'
fi

done

My script does nothing but crash.
#!/bin/bash

for file in `find . -type f -executable`;do
sed '/\#!/bin/bash\/a Hello Word'
fi

done

This script has a great number of problems. First, it's not a valid Bash script, because you have fi with no corresponding if. Stylistically (after removing the fi line), I would remove the empty line before done and add a space before do. But that's relatively trivial.

Now, as to best practices, you are using backticks for command substitution rather than the recommended modern form, $(...). Backticks are supported purely for historical reasons and are not recommended for any new scripts; see: Have backticks (i.e. `cmd`) in *sh shells been deprecated?

You have a robustness issue in that you are looping over the output of find, a very bad idea, and totally unnecessary. Your script will break on any filenames containing whitespace or special characters. See: Why is looping over find's output bad practice?

If you want your script to be portable, you should stick to POSIX-specified features whenever possible. In particular, the -executable primary to find is not specified by POSIX. Consider using -perm -700 instead. Also in the realm of portability, using the "append" command to Sed (a) without a following \<newline> sequence works in GNU Sed, but is not standard.

You set the file variable in your for loop to the name of each file in turn (assuming no special characters or whitespace in the filenames, which would cause the file variable to contain something which is not a filename), but you never actually use the file variable. Your Sed command is not given any file to run on, so it will attempt to run on standard input. Thus when you run the script it will simply wait for input.

Your Sed script itself is incorrect, independent of the fact that it's missing a filename to operate on. If you use / as a delimiter for a regex (which is most usual), you need to backslash-escape all instances of / which occur within the regex. The only portion of your command which will be read as a regex is /\#!/, and the rest (starting with bin) will be interpreted as a Sed command. Instead, the usual solution would be to replace each / other than the final regex delimiter with \/. (I see that you escaped only the last slash, which should not be escaped.) There is a little-known feature in Sed which you could use to your advantage here: any character (other than a backslash or newline) can be used as a regex delimiter, rather than only a slash. So rather than using /#!\/bin\/bash/ as a Sed address, you could use the equivalent

\:#!/bin/bash:

Now if you've handled all of the above points, you will have a working script. It may not do what you want it to do, but it will actually do something. Such a script would look like this:

#!/bin/bash
find . -type f -perm -700 -exec sed '\:#!/bin/bash:a\
Hello World' {} +

What does this script do? It searches the current directory recursively for all files with the executable bit set (for the owner), and for each such file, prints the entire file with the text Hello World appended after any lines which contain the text #!/bin/bash. Sed is not actually designed for editing files in place; it is the Stream EDitor. GNU Sed will allow you to edit files in place using the -i switch, but I would just use the standard tool ex for file editing. But there is another point here. If you want to add the line Hello World in a Bash script, it won't actually do anything, as Hello is not a valid command name.
Perhaps what you want is to print the text "Hello World" in the Bash script; in other words, to add echo "Hello World", which could make sense. Now we're into the realm of clarifying more exactly what your script is supposed to do.

The Final Script

So my more exact statement of the specifications for this script are:

The script shall find all regular files in the current directory (or any subdirectory, recursively) which have the executable bit set for the owner.
For each such file, the script shall check whether the first line of the file exactly equals the string #!/bin/bash.
Only for files with this exact first line, the script shall insert the exact text echo "Hello World", followed by a newline character, after the first line of the file. (This change shall be saved to the file, not printed to standard out.)

Here is a script matching those exact specifications, using only POSIX tools and features:

#!/bin/sh
find . -type f -perm -700 -exec sh -c '
  for f do
    head -n 1 "$f" | grep -qFx "#!/bin/bash" &&
      printf "%s\n" "1a" "echo \"Hello World\"" . x | ex "$f"
  done
' find-sh {} +
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/322255", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/194756/" ] }
322,309
I have some binaries and some .c extension files in my directory. Here is the output of ls:

arrays.c basic0 basic0.c fromfb fromfb.c oute oute.c segmen.c simp simp.c

Here I want to filter binary files only, so I use

ls | grep -v .c

This command lists all files, and with grep -v I should get every file except those ending with .c. What I expect is:

basic0
fromfb
oute
simp

But what I got:

fromfb
oute
simp

The basic0 binary file is missing. What is the problem with this?
As per man grep, the period . matches any single character. Thus grep .c matches any character followed by c, which is why basic0 is also filtered out: the "ic" inside it matches the pattern. You might be looking for

grep -v \.c

or better

grep -v '\.c$'

where

\. escapes the special meaning of .
c matches a literal c
$ anchors the match at the end of the line (when piped from ls, the output is one file name per line)

As suggested by wildcard, you can also use grep -vF .c. The -F flag tells grep to treat the argument as a simple string, not a regular expression.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/322309", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172333/" ] }
322,336
I have a list of codepoints like 0x13000, 0x1300A. I have to print the corresponding Unicode characters from bash. I've already tried to do it with other commands that I've found searching in the forum (In bash, how can I convert a Unicode Codepoint [0-9A-F] into a printable character?), but they didn't work. I've tried

echo -ne 'x13000/x130FF/' | iconv -f utf-16be

and, using perl on the terminal,

perl -C -e 'print chr 0x130F0'
This does it in two steps:

$ printf "$(printf '\\U%08x' 0x13000)\n"

If you are unable to see the rendered glyph (character image): [screenshot of the rendered hieroglyph omitted]

The two steps are:

- The first formats the codepoint number (0x13000) as 8 hexadecimal digits with \U in front of it.
- The second uses the Bash builtin printf's ability to print Unicode characters.

The output will be adapted to the locale in use. In UTF-8 locales like en_US.utf8, and with a font that can present the correct glyph, the output character will be correctly presented in the console. On this system, the full noto-font package was installed. It contains very nice text fonts, well hinted, and as a plus it also contains glyphs for many, many languages, including the "Noto Sans Egyptian Hieroglyphs" font. This will print the whole character list:

$ printf "$(printf '\\U%08x' 778{24..34})"; echo

The value range is just the hexadecimal values in decimal:

$ printf '%d\n' 0x13000 0x1300A
77824
77834
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/322336", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/199630/" ] }
322,352
I am moving from one server to another and want to bring some of the disks with me. Unfortunately, I do not have enough storage to back up all of the data on the old server. Old server 4 disk RAID5 Bringing two disks from old server to: New server 6 DISK RAID-Z2 (4+2) Old server can take losing one disk but not two. Could I set up the new server as RAID-Z2 (4+2) lacking one disk? Move all data and then add the last disk? Or is there any other way around this?
Yes, it is possible by using fake file-backed disks for your redundant ones. Of course, not supported and you should have a backup, so simulate it first with small files on your old pool to see if everything works as expected. For details see https://www.mail-archive.com/[email protected]/msg22993.html and https://www.mail-archive.com/[email protected]/msg23023.html for details. You can also search online for "create raidz2 degraded" if you have other systems like FreeNAS etc. The important steps (taken from the mailing list archive thread by Tomas Ögren and Daniel Rock) are: Create sparse file with the size of the real disk (let's assume it is 1000 GB in this example): mkfile -n 1000g /tmp/fakedisk1 Create a zpool with the real disks and the sparse file: zpool create -f newpool raidz2 disk1 disk2 disk3 disk4 disk5 /tmp/fakedisk1 Immediately put the sparse files offline so that nobody tries to write on it: zpool offline newpool /tmp/fakedisk1 Your pool will now be degraded, but functioning. Copy your files to the new pool (use ssh or netcat between send and recv if using network instead of directly attached pools): zfs snapshot -r oldpool@nowzfs send -R oldpool@now | zfs recv -Fdu newpool Destroy the old one and replace the sparse files with the now freed up disks: zpool replace newpool /tmp/fakedisk1 disk6 Again, a word of caution depending on your redundancy level (if you use two fake disks on a Z2 or three fake disks on a Z3): Remember: during data migration your are running without safety belts. If a disk fails during migration you will lose data.
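One portability note, as an assumption on my part: mkfile is a Solaris/illumos utility. On Linux (e.g. with ZFS on Linux), you can create the same kind of sparse placeholder file with GNU truncate:

truncate -s 1000G /tmp/fakedisk1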
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/322352", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/199641/" ] }
322,353
Just bought a new computer and assembled it today. Then as always I installed Windows for all the graphics needs (Photoshop and co.), and now I wanted to install a dual boot with Fedora 24. I made a new bootable USB live install medium and selected this UEFI USB in the boot menu. At this point it shows me the regular GRUB menu where I can choose to start Fedora, or check media and start Fedora. I tried both of them. After a few seconds it shows me some output like

[OK] Firewall loaded
[OK] XY loaded
...

and so on. Then the mouse cursor appears with the shell output still in the background. I can move the mouse but no GNOME desktop appears. A few seconds later the mouse cursor disappears and it freezes on the boot output. I can't find any important output at this point. Is there any way to install Fedora without a GUI? Or can I disable the display drivers? My setup:

i7 7600K
ASUS Maximus Hero VIII
MSI GeForce GTX 1070

I tried to start the install with the built-in VGA of my mainboard, but sure enough, there is no output. Do you have any tips for installing Fedora on my new computer? I also tried a few ways to create the boot media: macOS dd, the Windows tool Rufus, the Mac tool UNetbootin, and lastly the Windows tool Fedora Media Writer. So I don't think I have a bad USB install image. Oh, and I also tried other distributions like Ubuntu, which boot just fine. I also tried the newest beta of Fedora 25, but still got the same error as described above.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/322353", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/199645/" ] }
322,459
An alias, such as ll is defined with the alias command. I can check the command with things like type ll which prints ll is aliased to `ls -l --color=auto' or command -v ll which prints alias ll='ls -l --color=auto' or alias ll which also prints alias ll='ls -l --color=auto' but I can't seem to find where the alias was defined, i.e. a file such as .bashrc , or perhaps manually in the running shell. At this point I'm unsure if this is even possible. Should I simply go through all files that are loaded by bash and check every one of them?
Manual definition will be hard to spot (the history logs, maybe), though asking the shell to show what it is doing and then grepping should help find aliases set in an rc file:

bash -ixlc : 2>&1 | grep ...
zsh -ixc : 2>&1 | grep ...

If the shell isn't precisely capturing the necessary options with one of the above invocations (which interactively run the null command), then use script:

script somethingtogrep
thatstrangeshell -x
...
grep ... somethingtogrep

Another option would be to use something like strace or sysdig to find all the files the shell touches, then go grep those manually (handy if the shell or program does not have a -x flag); the standard rc files are not sufficient for a manual filename check if something like oh-my-zsh or site-specific configurations are pulling in code from who knows where (or there may also be environment variables, as sorontar points out in their answer).
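For Bash in particular, you can make the trace itself say which file each alias came from by customizing PS4, which Bash expands for every traced line. A sketch, with ll standing in for whichever alias you're hunting:

PS4='+ ${BASH_SOURCE}:${LINENO}: ' bash -ixlc : 2>&1 | grep 'alias ll='

Matching lines then read like "+ /home/user/.bashrc:42: alias ll='ls -l --color=auto'", pointing straight at the defining file and line.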
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/322459", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1290/" ] }
322,461
Looking around I have found out the following about valid /etc/resolv.conf formatting:

Trailing whitespace is allowed
Leading whitespace is NOT allowed
DNS records are case insensitive, though you may have weird issues in applications that lowercase everything

However, I can't find anywhere whether the resolv.conf keywords are case insensitive or case sensitive. They seem to be lowercase usually, but do they have to be? Is it an error if I find a server where they are in uppercase? A google search turns up this forum thread, where a code example seems to indicate that the keywords are case insensitive. However, there is no link to any authoritative documentation. Are /etc/resolv.conf keywords (such as nameserver) case sensitive?
They are certainly case sensitive in the glibc resolver libraries. Note the use of strncmp (case sensitive compare) rather than strncasecmp (case insensitive compare) in the MATCH function within glibc res_init.c. This code is responsible for reading and parsing the /etc/resolv.conf file.

#define MATCH(line, name) \
    (!strncmp(line, name, sizeof(name) - 1) && \
    (line[sizeof(name) - 1] == ' ' || \
     line[sizeof(name) - 1] == '\t'))

 if ((fp = fopen(_PATH_RESCONF, "rce")) != NULL) {
     /* No threads use this stream.  */
     __fsetlocking (fp, FSETLOCKING_BYCALLER);
     /* read the config file */
     while (fgets_unlocked(buf, sizeof(buf), fp) != NULL) {
         /* skip comments */
         if (*buf == ';' || *buf == '#')
                 continue;
         /* read default domain name */
         if (MATCH(buf, "domain")) {
             if (haveenv)        /* skip if have from environ */
                     continue;
             cp = buf + sizeof("domain") - 1;

Further, a quick example showing how lookup breaks with NAMESERVER rather than nameserver:

# cat /etc/resolv.conf
options timeout:2 attempts:5
; generated by /sbin/dhclient-script
search eu-west-1.compute.internal
nameserver 172.31.0.2
# getent hosts www.google.com
2a00:1450:400b:802::2004 www.google.com
# sed -i 's/nameserver/NAMESERVER/' /etc/resolv.conf
# getent hosts www.google.com
#
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/322461", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135943/" ] }
322,475
Today I fired up my new GTX 1060. First thing I realised: it stutters. Everything runs fine for a good half second, then the image stops for about a fifth of a second. Reaaaallly annoying, unusable. I've not been able to find anyone else with a stuttering GTX 1060. The system is elementaryOS Freya with the latest NVIDIA binary drivers. Now, what should I do? Or what would you do?

Wait for better driver support
Cleanly reinstall everything
Try it with Windows
Send it back and request a new one; GPU broken?
Get an AMD RX 480 instead
Do XYZ to find the cause of my problem

Open for any suggestions!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/322475", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74204/" ] }
322,517
I have a make script to perform 3 tasks:

Import a MySQL database
Move a configuration file
Configure the configuration file

For these tasks, the script requires 3 inputs:

MySQL Host
MySQL Username
MySQL Password

For some reason, whenever I read an input, it saves to the right variable, but the content is the variable name that I save it to without the first letter. I can't seem to find out why it does that. Here's the Makefile:

SHELL := /bin/bash
default:
	@echo "Welcome!";\
	echo -n "Please enter the MySQL host (default: localhost):";\
	read host;\
	host=${host:-localhost};\
	echo -n "Please enter the MySQL username:";\
	read username;\
	echo -n "Please enter the MySQL password:";\
	read -s password;\
	mv includes/config.php.example includes/config.php 2>/dev/null;true;\
	sed 's/"USER", ""/"USER", "$(username)"/g' includes/config.php > includes/config.php;\
	sed 's/"PASSWORD", ""/"PASSWORD", "$(password)"/g' includes/config.php > includes/conf$
	echo $username;\
	echo $password;\
	mysql -u "$username" -p"$password" codeday-team < ./codeday-team.sql;\
	echo "Configuration complete. For further configuration options, check the config file$
	exit 0;

The output is:

Welcome!
Please enter the MySQL host (default: localhost):
Please enter the MySQL username:<snip>
Please enter the MySQL password:sername
assword
ERROR 1045 (28000): Access denied for user 'sername'@'localhost' (using password: YES)
Configuration complete. For further configuration options, check the config file includes/config.php

As you can see, it outputs sername and assword. For the life of me I can't fix this! While the purpose of this post is to solve the read bug, I would appreciate any advice, bug reports, or suggestions. I may award a bounty for them. Thank you for helping!
You're mixing shell and make variables in there. Both make and the shell use $ for their variables. In a Makefile, variables are written $(var), or $v for single-letter variables; in shells they are $var or ${var}. But if you write $var in a Makefile, make will understand it as $(v)ar. If you want to pass a literal $ to the shell, you need to enter it as $$, as in $$var or $${var}, so that it becomes $var or ${var} for the shell.

Also, make runs sh, not bash, to interpret that code (Edit: sorry, I missed your SHELL := /bin/bash above; note that many systems don't have bash in /bin, if they have bash at all, and that := is GNU-specific), so you need to use sh syntax there. echo -n and read -s are zsh/bash syntax, not sh syntax. Best here would be to add a zsh/bash script to do that (and add a build dependency on bash). Something like:

#! /usr/bin/env bash
printf 'Welcome\nPlease enter the MySQL host (default: localhost): '
read host || exit
host=${host:-localhost}
printf "Please enter the MySQL username: "
read username || exit
printf "Please enter the MySQL password:"
IFS= read -rs password || exit
printf '\n'
mv includes/config.php.example includes/config.php 2>/dev/null
repl=${password//\\/\\\\}
repl=${repl//&/\\&}
repl=${repl//:/\\:}
{
  rm includes/config.php &&
    sed 's/"USER", ""/"USER", "'"$username"'"/g
         s:"PASSWORD", "":"PASSWORD", "'"$repl"'":g' > includes/config.php
} < includes/config.php || exit
mysql -u "$username" -p"$password" codeday-team < ./codeday-team.sql || exit
echo "Configuration complete. For further configuration options, check the config file"

It addresses a few more issues:

You need -r when reading the password if you want to allow the user to use backslashes in their password.
You need IFS= if you want to allow the user to have a password that starts or ends in blanks.
The same would apply for the username, but here we're making the assumption that the user won't use anything silly for the username, and the blank stripping and backslash handling could be seen as a feature.
Your ;true after mv doesn't do what you think it does. It doesn't cancel the effect of the errexit option (for make implementations that call the shell with -e); you'd need ||true instead. Here, we're not using errexit but doing the error handling ourselves with || exit where needed.
You need to escape backslash, & and the s:pattern:repl: separator (here :) or it won't work (and could have nasty side effects).
You can't do sed ... < file > file, as file would be truncated before sed is even started. Some sed implementations support a -i or -i '' option for that. Alternatively you could use perl -pi. Here we're doing the equivalent of perl -pi manually (delete and recreate the input file after it has been opened for reading) but without taking care of the file's metadata. Here, it would be better to use the example one as input and the final one as output.

It still doesn't address a few more issues:

The new config.php is created with permissions derived from the current umask, which is likely to make it world readable and owned by the user running make. You may need to adapt the umask and/or change ownership if the includes dir is not otherwise restricted, as that file contains sensitive information.
If the password contains " characters, that will likely break (this time for php). You'd want to either forbid those (by returning an error) or add another layer of escaping for them in the right syntax for that php file. You're likely to have similar problems with backslash, and you may want to exclude non-ASCII or control characters as well.
Passing the password on the command line of mysql is generally a bad idea, as that means it shows in the output of ps -f . It would be better to use:

mysql --defaults-file=<(
  printf '[client]\nuser=%s\npassword="%s"\n' "$username" "$password"
) ...

printf being built-in, it wouldn't show up in ps output. Also:

- the $host variable is not used.
- prompting the user like that means your script cannot easily be automated. You could take the input from arguments or (better for the password) from environment variables instead.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/322517", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/165072/" ] }
322,528
Here I want to write some shell code and my question is, I want to copy one data file multiple times into another new file. For example: File1 contains 3000 lines of data. Now I want this data multiple times in another single file ( File1 * 3 > File2 ). Here I am copying File1 's data 3 times and saving it to File2 . Now File2 contains 9000 lines of data. Thanks.
If you prefer loops:

for i in {1..3}; do cat file1 >> file2; done

[edit] To give the n value dynamically in the loop, put this into myCopyScript.sh (note that bash does not expand variables inside a {1..N} brace expansion, so a C-style loop is used instead):

#!/bin/bash
for ((i = 0; i < $3; i++)); do
    cat "$1" >> "$2"
done

make it executable

chmod u+x myCopyScript.sh

then call it like this:

myCopyScript.sh file1 file2 4711
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/322528", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/199763/" ] }
322,637
With cat I can read from a file to stdout. This way I can for example pipe a file out of a docker container: docker exec my_container cat file > file_on_host When I want to do the opposite, I would need a command that reads from stdin and saves to a file. Is there such a command? docker exec my_container ??? file < file_on_host
Similar to your approach,

docker exec -i my_container dd of=file < file_on_host

which gives you a nice status summary and doesn't write the data to stdout. There are probably a few other options, e.g., cp /dev/stdin file (which might not work, depending on whether your container's OS supports /dev/stdin ) and sh -c "cat > file " .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/322637", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/30206/" ] }
322,720
I have the following systemd unit file in /etc/systemd/system/emacs.service :

[Unit]
Description=Emacs: the extensible, self-documenting text editor
Documentatin=man:emacs(1) info:Emacs

[Service]
Type=forking
ExecStart=/usr/bin/emacs --daemon
ExecStop=/usr/bin/emacsclient --eval "(progn (setq kill-emacs-hook nil) (kill-emacs))"
Restart=always
Environment=DISPLAY=:%i
TimeoutStartSec=0

[Install]
WantedBy=default.target

I want this to start on boot, so I have entered

systemctl enable emacs

However, each time my service reboots, systemctl status emacs shows:

● emacs.service - Emacs: the extensible, self-documenting text editor
   Loaded: loaded (/etc/systemd/system/emacs.service; disabled; vendor preset: enabled)
   Active: inactive (dead)

But then entering systemctl start emacs and checking the status returns:

● emacs.service - Emacs: the extensible, self-documenting text editor
   Loaded: loaded (/etc/systemd/system/emacs.service; disabled; vendor preset: enabled)
   Active: active (running) since Fri 2016-11-11 23:03:59 UTC; 4s ago
  Process: 3151 ExecStart=/usr/bin/emacs --daemon (code=exited, status=0/SUCCESS)
 Main PID: 3154 (emacs)
    Tasks: 2
   Memory: 7.6M
      CPU: 53ms
   CGroup: /system.slice/emacs.service
           └─3154 /usr/bin/emacs --daemon

How can I get this process to successfully start on boot?
I have no idea why but to get this to work I: deleted Environment=DISPLAY=:%i added a User= variable Made sure the correct file was in /etc/systemd/system/emacs.service (earlier it had been a hard link) and re-ran systemctl enable emacs This made it work. EDIT The real problem here is that I had a typo at line 3: Documentatin I found this by checking journalctl . I suggest anyone who has issues with a systemd script do the same as there was no error sent to stderr.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/322720", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73671/" ] }
322,724
I have data that looks like this; for each SNP, it should repeat 5 times with different beta values. But SNP rs11704961 only repeats twice, so I want to delete SNP rows that repeat less than 5 times. I tried to use sort -k 1 | uniq -c , but it considers the whole line for checking duplicates, not the first column.

SNP         R  K  BETA
rs767249    1  1  0.1065
rs767249    1  2  -0.007243
rs767249    1  3  0.02771
rs767249    1  4  -0.008233
rs767249    1  5  0.05073
rs11704961  2  1  0.2245
rs11704961  2  2  0.009203
rs1041894   3  1  0.1238
rs1041894   3  2  0.002522
rs1041894   3  3  0.01175
rs1041894   3  4  -0.01122
rs1041894   3  5  -0.009195
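A minimal two-pass awk sketch of one way to do this, assuming the file is whitespace-separated exactly as shown and that valid SNPs appear 5 times (the filename snps.txt is a placeholder):

awk 'NR == FNR {count[$1]++; next}     # first pass: count rows per SNP
     FNR == 1 || count[$1] >= 5        # second pass: keep header and SNPs with 5+ rows
    ' snps.txt snps.txt > filtered.txt

The file is read twice; during the second pass FNR == 1 matches the header line, so it is kept regardless of the counts.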
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/322724", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/199893/" ] }
322,746
what factors should be considered when choosing among 7zip, xz, gzip, tar, etc. for compressing and archiving files?
I first want to clarify that, of the list you provided, tar is the only one that is not a compression algorithm. tar is short for T ape Ar chive, and is used to create archive files. In short, a single file that consists of one or more files. It is used to bundle files together so that they can be compressed by a compressor that is only able to compress a single file. In terms of availability, zip is widely available across UNIX (Linux/BSD/MacOS) and Windows systems. Therefore a zip file is highly portable. Tools to compress/decompress xz and gzip files are also available on Windows systems, but are more commonly seen and used on UNIX systems. xz and 7zip are known to have a better compression algorithm than gzip , but use more memory and time to compress/decompress. This topic is nicely discussed here . I would recommend using gzip when less memory is available, and compression/decompression speed is a concern. 7zip and xz can be used when space is a concern and compression/decompression speed is not. Some nice benchmarks on these algorithms can be found here . Note: LZMA is the compression algorithm used by 7zip and xz .
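As a rough illustration of those trade-offs (file and directory names are placeholders):

tar -cf project.tar project/        # tar only bundles files; no compression yet
gzip -9 project.tar                 # fast, low memory, modest ratio -> project.tar.gz

# or bundle and compress in one step:
tar -czf project.tar.gz project/    # gzip: quick
tar -cJf project.tar.xz project/    # xz: smaller output, slower, more memory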
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/322746", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/199912/" ] }
322,761
I have a 7gb text file. I need to edit the n first lines of that file (let us assume n=50). I want to do this the following way:

head -n 50 myfile >> tmp
vim tmp    # make necessary edits
# substitute first 50 lines of myfile with the contents of tmp
rm tmp

how do I complete the third step here? better solutions to the general problem are also appreciated. note: there is no GUI in this environment
man tail says:

-n, --lines=[+]NUM
    output the last NUM lines, instead of the last 10; or use -n +NUM to output starting with line NUM

therefore you can do

tail -n +51 myfile >> tmp
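Putting the whole procedure from the question together, as a sketch (note the first redirection is > rather than >> , so tmp starts out empty):

head -n 50 myfile > tmp      # take the first 50 lines
vim tmp                      # edit them
tail -n +51 myfile >> tmp    # append everything from line 51 onward
mv tmp myfile                # replace the original file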
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/322761", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/199928/" ] }
322,802
With the GNOME Disks utility, I can check whether a logical volume is mounted, and where it is mounted. How can I get this information from the command line? Having, for example, the logical volume UUID, I would like to know if it is mounted and where.
Just use lsblk . It prints all disks and their corresponding mount points. Including LVM, MD RAID, etc.
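For the UUID-driven check in the question, one possible invocation (the UUID below is a placeholder):

lsblk -o NAME,UUID,MOUNTPOINT     # list devices with their UUIDs and mount points
findmnt -S UUID=0aa4d8f8-...      # print mount details if that UUID is mounted

findmnt exits non-zero when nothing matches, so it also serves as a mounted/not-mounted test in scripts.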
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/322802", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66295/" ] }
322,817
I cannot figure out how to find the file where a bash function is defined ( __git_ps1 in my case). I experimented with declare , type , which , but nothing tells me the source file. I read somewhere that declare can print the file name and the line number, but it was not explained how. The help page for declare does not say it either. How can I get this information?
If you are prepared to run the function, then you can get the information by using set -x to trace the execution and setting the PS4 variable. Start bash with --debugger or else use shopt -s extdebug to record extra debugging info. Set PS4 , the 'prompt' printed when tracing, to show the source line. Turn on tracing. You can then run your function, and for each line you will get the filename of the function. Use set +x to turn off tracing. So for this case you would run:

bash --debugger
PS4='+ ${BASH_SOURCE[0]} '
set -x ; __git_ps1 ; set +x
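As an aside (a different route, but also relying on extdebug ): once that option is set, declare -F prints the definition location directly, without tracing:

shopt -s extdebug
declare -F __git_ps1      # prints: __git_ps1 <line-number> <source-file>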
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/322817", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54221/" ] }
322,864
Errata: Similar questions about this have been asked, but after searching this for a few days there appears to be no answer to this specific scenario. Description of the problem: The second line in the following bash script triggers the error:

#!/bin/bash
sessionuser=$( ps -o user= -p $$ | awk '{print $1}' )
print $sessionuser

Here is the error message:

Unescaped left brace in regex is deprecated, passed through in regex; marked by <-- HERE in m/%{ <-- HERE (.*?)}/ at /usr/bin/print line 528.

Things I have tried: I have tried every combination of single quotes, back angled single quotes, double quotes, and spacing I could think of, both inside and outside the $() command output capture method. I have tried using $( exec ... ) where ... is the command being attempted here. I have read up on bash, and searched these forums and many others, and nothing seems to illuminate why this error message is happening or how to work around it. If the suggestion given in the error message is followed like this:

sessionuser=$( ps -o user= -p 1000 | awk '\{print $1}' )

It results in the following error message combined with the previous one:

awk: cmd. line:1: \{print $1}
awk: cmd. line:1:  ^ backslash not last character on line
Unescaped left brace in regex is deprecated, passed through in regex; marked by <-- HERE in m/%{ <-- HERE (.*?)}/ at /usr/bin/print line 528.

The message refers to line 528 in /usr/bin/print. Here is that line:

$comm =~ s!%{(.*?)}!$_="'$ENV{$1}'";s/\`//g;s/\'\'//g;$_!ge;

Rationale for my bash script: The string $USER can be rewritten and is therefore not necessarily reliable. The command "whoami" will return different results depending on whether or not privileges have been elevated for the current user. As such there is a need for reliably attaining the current session user's name for portability of scripting, and that is because I am probably not going to keep the same user name forever and would like my scripts to continue working regardless of who I have logged in as. All of that is because user files are being backed up that have huge directory structures and many files. Every once in a while a file with root ownership and permissions will end up in that backup stack for that user. There are lots of reasons why this happens: sometimes it's just because that user backed up a wallpaper or a theme they like from the system directory structure, sometimes it's because a project was compiled by that user and some of its directories or files needed to be set to root ownership and permissions for it to function in some way, and other times it may be due to some other strange unaccounted-for thing. I understand that rsync might be able to handle this problem, but I'd like to understand how to tackle the "Unescaped left brace" in a Bash script problem first. I can study rsync on my own, but after trying for a few days this bash script doesn't appear to have a solution that is easy to discover or illuminate through either online searches or reading the manuals. [UPDATE 01]: Some information was missing from my original post so I'm adding it here. Here are the relevant system specs:

OS: Xubuntu 16.04 x86_64
Bash: GNU bash, version 4.3.46(1)-release (x86_64-pc-linux-gnu)

Source and Rationale for the commands I'm using: 3rd reply down in the following thread: https://stackoverflow.com/questions/19306771/get-current-users-username-in-bash Print vs. Printf: I posted this question using "print" instead of "printf" because the source I copied it from used the "print" syntax. 
After using "printf" I get the same error message with an added error message as output: Unescaped left brace in regex is deprecated, passed through in regex; marked by <-- HERE in m/%{ <-- HERE (.*?)}/ at /usr/bin/print line 528.Error: no such file "sessions_username_here" Where "sessions_username_here" is a replacement of the actual sessions user name for the purpose of keeping the discussion generalized to whatever username could or might be used. [UPDATE FINAL] The chosen solution offered by Stéphane Chazelas clarified all the issues my script was having in a single post. I was mistakenly assuming that the 2nd line of the script since the output was complaining about brackets. To be clear it was the 3rd line that was triggering the warning (see Chazelas post for why and how) and that is probably why everyone was suggesting printf instead of print. I just needed to be pointed at the 3rd line of the script in order to make sense of those suggestions. Things that didn't work as suggested: sessionuser=$(logname) Resulting error message: logname: no login name ...so maybe that suggestion isn't quite as reliable as it might seem on the surface. If user privileges are elevated which is sometimes the case when running scripts then: id -un would output root and not the current session's user name. This would probably be a simple matter of making sure the script drops out of root privileges before execution which could solve this issue but that is beyond the scope of this thread. Things that did or could work as suggested: After I figure out how to verify my script is running in a POSIX environment and somehow de-elevating root privileges, then I could indeed use "id -un" to acquire the current sessions username, but those verifications and de-escilations are beyond the scope of this threads question. For now "without" POSIX verification, privilege testing, and de-escalation the script does what was originally intended to do without error. Here is what that script looks like now: #!/bin/bash sessionuser=$( ps -o user= -p $$ | awk '{printf $1}' ) printf '%s\n' "$sessionuser" Note: The above script if run with elevated privileges still outputs "root" instead of the current sessions username even though the privilege escalated command: sudo ps -o user= -p $$ | awk '{printf $1}' will output the current sessions username and not "root" so even though the scope of this thread is answered I am back to square one with this script. Thanks again to xtrmz, icarus, and especially Stéphane Chazelas who somehow was able catch my misunderstanding of the issue. I'm really impressed with every one here. Thanks for the help! :)
It's the third line ( print $sessionuser ) that causes that error, not the second. print is a builtin command to output text in ksh and zsh , but not bash . In bash , you need to use printf or echo instead. Also note that in bash (contrary to zsh , but like ksh ), you need to quote your variables. So zsh 's: print $sessionuser (though I suspect you meant: print -r -- $sessionuser If the intent was to write to stdout the content of that variable followed by a newline) would be in bash : printf '%s\n' "$sessionuser" (also works in zsh / ksh ). Some systems also have a print executable command in the file system that is used to send something to a printer, and that's the one you're actually calling here. Proof that it is rarely used is that your implementation (same as mine, as part of Debian's mime-support package) has not been updated after perl 's upgrade to work around the fact that perl now warns you about those improper uses of { in regular expressions and nobody noticed. { is a regexp operator (for things like x{min,max} ). Here in %{(.*?)} , that (.*?) is not a min,max , still perl is lenient about that and treats those { literally instead of failing with a regexp parsing error. It used to be silent about that, but it now reports a warning to tell you you probably have a problem in your (here print 's) code: either you intended to use the { operator, but then you have a mistake within. Or you didn't and then you need to escape those { . BTW, you can simply use: sessionuser=$(logname) to get the name of the user that started the login session that script is part of. That uses the getlogin() standard POSIX function. On GNU systems, that queries utmp and generally only works for tty login sessions (as long as something like login or the terminal emulator registers the tty with utmp ). Or: sessionuser=$(id -un) To get the name of one user that has the same uid as the effective user id of the process running id (same as the one running that script). It's equivalent to your ps -p "$$" approach because the shell invocation that would execute id would be the same as the one that expands $$ and apart from zsh (via assignment to the EUID / UID / USERNAME special variables), shells can't change their uids without executing a different command (and of course, of all commands, id would not be setuid). Both id and logname are standard (POSIX) commands (note that on Solaris, for id like for many other commands you'd need to make sure you place yourself in a POSIX environment to make sure you call the id command in /usr/xpg4/bin and not the ancient one in /bin . The only purpose of using ps in the answer you linked to is to work around that limitation of /bin/id on Solaris). If you want to know the user that called sudo , it's via the $SUDO_USER environment variable. That's a username derived by sudo from the real user id of the process that executed sudo . sudo later changes that real user id to that of the target user ( root by default) so that $SUDO_USER variable is the only way to know which it was. Note that when you do: sudo ps -fp "$$" That $$ is expanded by the shell that invokes sudo to the pid of the process that executed that shell, not the pid of sudo or ps , so it will give not give you root here. sudo sh -c 'ps -fp "$$"' Would give you the process that executed that sh (running as root ) which is now either still running sh or possibly ps for sh invocations that don't fork an extra process for the last command. 
That would be the same for a script that does that same ps -p "$$" and that you run as sudo that-script . Note that in any case, neither bash nor sudo are POSIX commands. And there are many systems where neither are found.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/322864", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/200000/" ] }
322,879
Supposing I have two servers: -->Server A has the IP XXX.XXX.XXX.XXX -->Server B has the IP YYY.YYY.YYY.YYY What I want is to redirect trafic from server A (port 80) to server B (port 80). A simple way to do that is to put the following rule with iptables in server A : iptables -t nat -A PREROUTING -p tcp --dport port -j DNAT --to-destination server B:80 However, this simple rule does not work. We must add the following rule: iptables -t nat -A POSTROUTING -j MASQUERADE Why so? Why do we need to add a POSTROUTING rule? After the PREROUTING, the packet must go automatically to server B right?
I'm not an expert in iptables or Linux network scheduling, but I'll try to help! According to the description of the nat ( Network Address Translation ) table in the iptables manual page: "This table is consulted when a packet that creates a new connection is encountered. It consists of three built-ins: PREROUTING (for altering packets as soon as they come in), OUTPUT (for altering locally-generated packets before routing), and POSTROUTING (for altering packets as they are about to go out)." The POSTROUTING chain alters packets just before they go out. The MASQUERADE explanation below I got from The Linux Documentation Project, with your hosts substituted in: I tell machine B that my PPP or Ethernet connected Linux box A is its gateway. When a packet comes into the Linux box A from B , it will assign the packet to a new TCP/IP source port number and insert its own IP address inside the packet header, saving the originals. The MASQ server will then send the modified packet over the PPP/ETH interface onto the Internet. When a packet returns from the Internet into the Linux box A , Linux examines if the port number is one of those ports that was assigned above. If so, the MASQ server will then take the original port and IP address, put them back in the returned packet header, and send the packet to B . The host that sent the packet will never know the difference.
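A sketch of the complete rule pair for the scenario in the question (the placeholder addresses are the ones from the question; SNAT --to-source with server A's address would be the more explicit alternative to MASQUERADE, and IP forwarding must be enabled):

sysctl -w net.ipv4.ip_forward=1
# on server A:
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination YYY.YYY.YYY.YYY:80
iptables -t nat -A POSTROUTING -p tcp -d YYY.YYY.YYY.YYY --dport 80 -j MASQUERADE

Without the POSTROUTING rule, server B sees the client's original source address and replies to it directly, and the client drops replies arriving from an address it never contacted.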
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/322879", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/160856/" ] }
322,883
I am renting a server, running Ubuntu 16.04, at a company, let's name it company.org. Currently, my server is configured like this:

hostname: server737263
domain name: company.org

Here's my FQDN:

user@server737263:~ $ hostname --fqdn
server737263.company.org

This is not surprising. I am also renting a domain name, let's name it domain.org . What I would like to do would be to rename my server as server1.domain.org . This means configuring my hostname as server1 and my domain name as domain.org . How can I do it correctly? Indeed, the manpage for hostname is not clear. To me at least:

HOSTNAME(1) [...]
SET NAME
    When called with one argument or with the --file option, the commands set the host name or the NIS/YP domain name. hostname uses the sethostname(2) function, while all of the three domainname, ypdomainname and nisdomainname use setdomainname(2). Note, that this is effective only until the next reboot. Edit /etc/hostname for permanent change.
[...]
THE FQDN
    You cannot change the FQDN with hostname or dnsdomainname.
[...]

So it seems that editing /etc/hostname is not enough? Because if it really changed the hostname, it would have changed the FQDN. There's also a trick I read to change the hostname with the command sysctl kernel.hostname=server1 , but nothing says whether this is the correct way or an ugly trick. So: What is the correct way to set the hostname? What is the correct way to set the domain name?
Setting your hostname: You'll want to edit /etc/hostname with your new hostname. Then, run sudo hostname $(cat /etc/hostname) . Setting your domain, assuming you have a resolvconf binary: In /etc/resolvconf/resolv.conf.d/head , you'll add the line domain your.domain.name (not your FQDN, just the domain name). Then, run sudo resolvconf -u to update your /etc/resolv.conf (alternatively, just reproduce the previous change in your /etc/resolv.conf ). If you do not have resolvconf , just edit /etc/resolv.conf , adding the domain your.domain.name line. Either way: Finally, update your /etc/hosts file. There should be at least one line starting with one of your IPs (loopback or not), your FQDN and your hostname. Grepping out ipv6 addresses, your hosts file could look like this:

127.0.0.1   localhost
1.2.3.4     service.domain.com service

In response to hostnamectl suggestions piling up in comments: it is not mandatory, nor exhaustive. It can be used as a replacement for steps 1 & 2, IF your OS ships with systemd. Whereas the steps given above are valid regardless of systemd being present (pclinuxos, devuan, ...).
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/322883", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/200018/" ] }
322,941
suppose I have the following entry in my /etc/hosts

192.168.1.10 server1.mydomain.com

and I have a directory SERVER-FILES in the current dir. I want to scp the directory SERVER-FILES somewhere. I type SE and use autocompletion to complete the directory name:

$ scp -rp SE<TAB>

This completion should be totally unambiguous. But zsh autocompletion tries to be too smart, and treats hostnames case-insensitively, and thus attempts to match SE to hostnames:

$ scp -rp SE<TAB>
SERVER-FILES/  server1.mydomain.com

How can I disable this annoying feature, where zsh is trying to match hostnames case-insensitively, and therefore completes SE<TAB> to server1.mydomain.com ? UPDATE: Based on suggestions from @zeppelin , I have changed the following line in the ssh completion file Unix/_ssh :

- compadd -M 'm:{a-zA-Z}={A-Za-z} r:|.=* r:|=*' "$@" $config_hosts
+ compadd "$@" $config_hosts

but that did not help. It has absolutely no effect. And I don't understand the answer from @Tomasz Pala . My zsh completion is not case-insensitive. Please, somebody just tell me what I need to change in /usr/share/zsh/functions/Completion/Unix/_foo to change this behaviour. UPDATE 2 I have finally narrowed the problem down, and found out why the solution from @Tomasz Pala did work for him, but not for me: When I change the Unix/_hosts file on a newly set up machine/user account, the solution works.

scp -r SE<TAB>

The above command ignores server1.mydomain.com in /etc/hosts , and only offers the local directory SERVER-FILES for completion. But this does not work for me on my existing user account, because I have server.mydomain.com in my ~/.ssh/config . When I remove the entry, then everything works as desired. But how can I make this hack work even with my current ~/.ssh/config ?
Second answer tries to explain that you need to do two things:

1. make sure your general matching rules are not case-insensitive ( matcher-list ) - from the updated question it's not,
2. change the last 2 lines of Unix/(Type/)_hosts (the actual location might vary, but not the Unix/_ssh - this one handles ~/.ssh/config hosts, see below) to:

_wanted hosts expl host \
    compadd -M 'm:{a-z}={A-Z} r:|.=* r:|=*' -a "$@" - _hosts

All of this was already summarized in my answer, so simply try doing this without reading all the rationale before. Also, since your global config is not case-insensitive, @zeppelin's answer should also work, although it doesn't use $fpath and also removes small->CAPS matching of the hosts. I did test this with your settings from the update and it works as expected.

Update: remember that zsh keeps its functions loaded, so after modifying the _hosts you need to reload it either by logging in fresh, or:

unfunction _hosts
autoload -Uz _hosts

Also remember that zsh can have the scripts 'compiled' in zwc form ( zcompile [file] ) and if such a file exists and is newer than the source it would be used instead.

Ad. update 2: Handling the ~/.ssh/config defined hosts is actually pretty much the same as for _hosts - depending on your zsh version, in either Unix/(Command/)_ssh or Unix/(Type/)_ssh_hosts change the

compadd -M 'm:{a-zA-Z}={A-Za-z} r:|.=* r:|=*' "$@" $config_hosts

line to

compadd -M 'm:{a-z}={A-Z} r:|.=* r:|=*' "$@" $config_hosts
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/322941", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/155832/" ] }
322,973
I have a monitor that supports a 200hz refresh rate, and would like to be able to use that. When I run xrandr , it shows this option:

$ xrandr
Screen 0: minimum 320 x 200, current 5560 x 1920, maximum 16384 x 16384
DP-1 connected 2560x1080+0+420 (normal left inverted right x axis y axis) 814mm x 346mm
   2560x1080     59.98*+ 200.00   143.94   119.95    99.94    84.96
   1400x1050     74.76    59.98

However when I change my Xorg configuration from:

Section "Monitor"
    Identifier "DP-1"
    Option "PreferredMode" "2560x1080"
    Option "Position" "0 420"
    Option "Primary" "true"
EndSection

To:

Section "Monitor"
    Identifier "DP-1"
    Option "PreferredMode" "2560x1080_200"
    Option "Position" "0 420"
    Option "Primary" "true"
EndSection

The monitor doesn't load, and doesn't warn/error in ~/.local/share/xorg/Xorg.0.log . Is there another way to set my monitor's refresh rate in my Xorg config file?
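Two approaches worth trying, assuming the mode really is available exactly as xrandr reports it (the Modeline timings must come from your own cvt output and are deliberately left out here):

# option 1: select the rate at session startup instead of in xorg.conf
xrandr --output DP-1 --mode 2560x1080 --rate 200

# option 2: give the mode an explicit name that PreferredMode can match
cvt 2560 1080 200        # prints a Modeline with the exact timings
# then, in the Monitor section of xorg.conf:
#   Modeline "2560x1080_200" <timings printed by cvt>
#   Option "PreferredMode" "2560x1080_200"

PreferredMode matches mode names, and the auto-detected names usually carry no refresh-rate suffix, which would explain why "2560x1080_200" silently matched nothing.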
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/322973", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
322,985
I want to store the output from a netcat command into a variable. I tried a lot of different ways, but it doesn't work for me. Can someone help me? The whole scripting thing is whole new for me!

#! /bin/sh
while true;
do
    var = "$(echo "RDTEMP1" | netcat -q2 sanderpi 5033)"
    echo &(var)
    echo "$(date +%Y-%m-%d%t%H:%M:%S)"
done
In shell, setting variables would be done with:

var1=toto
var2="$(echo toto | othercommand)"

You can't have spaces between your variable name, the equal character and the value you're assigning to your variable. Then, to echo a variable, you would do:

echo $var
echo "$var"
echo "${var}"

The & character, in bash/sh, is used for "job control", which is yet another topic... Start by using the following instead, and tell us how it goes:

#! /bin/sh
while true;
do
    var="$(echo "RDTEMP1" | netcat -q2 sanderpi 5033)"
    echo "$var"
    echo "$(date +%Y-%m-%d%t%H:%M:%S)"
done
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/322985", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/200099/" ] }
323,045
I pressed Mod + S and my windows flattened into a stack of bars. How can I undo this action or expand them back into their original configuration? The best route I found was to individually select each window and press Mod + Shift + arrow key to split horizontally. Surely there's a trick I'm missing?
You can switch between three modes: $mod + S for stacking mode $mod + W for tabbed mode $mod + E for standard mode (aka splith/splitv - the one you seek) More info on the User Guide .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/323045", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42894/" ] }
323,085
Let's take the two lines below which give us two different results.

p=$(cd ~ && pwd) ; echo $p
p=$(cd ~ | pwd) ; echo $p

How do the two differ?
In p=$(cd ~ && pwd) : The command substitution, $() , runs in a subshell cd ~ changes directory to ~ (your home), if cd succeeds ( && ) then pwd prints the directory name on STDOUT, hence the string saved on p will be your home directory e.g. /home/foobar In p=$(cd ~ | pwd) : Again $() spawns a subshell The commands on both sides of | run in respective subshells (and both starts off at the same time) so cd ~ is done in a subshell, and pwd in a separate subshell so you would get only the STDOUT from pwd i.e. from where you run the command, this can be any directory as you can imagine, hence p will contain the directory name from where the command is invoked, not your home directory
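A small demonstration of the difference (an illustrative session started from /var):

$ p=$(cd /tmp && pwd); echo "$p"
/tmp
$ p=$(cd /tmp | pwd); echo "$p"
/var

In the second command, pwd runs in its own subshell of the pipeline, whose working directory is still /var, so the cd on the left-hand side of the | has no effect on it.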
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/323085", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17671/" ] }
323,109
I'm using OpenSuse Leap 42.1 and I would like to remap my keyboard. Now, I want to remap one key (the Sleep key) to lock the screen instead of sending the entire PC to sleep. For that reason, I need the console command to lock the screen. I Googled and only found commands that work for Ubuntu/Debian/Fedora/KDE4, but I was unable to find anything that worked for my OpenSuse version. Would you please provide any suggestions?
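A few candidates worth trying from a terminal on a systemd-based system like Leap 42.1 (which one applies depends on the desktop environment in use, so treat these as suggestions rather than a definitive answer):

loginctl lock-session                                  # systemd-logind, desktop-agnostic
qdbus org.freedesktop.ScreenSaver /ScreenSaver Lock    # KDE Plasma
xdg-screensaver lock                                   # generic X fallback

Whichever works can then be bound to the Sleep key in the desktop's shortcut settings.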
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/323109", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/200195/" ] }
323,162
I need to add a new line before any line containing a pattern, where we can assume that the pattern is always the first string of the current line. For example:

This is a
pattern
This is a
pattern

I can add a new line with the sed command

sed -i 's/pattern\+/\n&/g' file

to get the output

This is a

pattern
This is a

pattern

To prevent multiple new lines being added (in case of multiple execution) I want to check whether the line before the pattern is empty. I know I can do that with

if [ "$line" == "" ]; then

But how do I determine the previous line of a matching pattern in the first place? EDIT: Pattern can occur multiple times.
You could store the previous line in the hold space:

sed '
  /^pattern/{
    x
    /./{
      x
      s/^/\
/
      x
    }
    x
  }
  h'

It would be more legible with awk though:

awk '!previous_empty && /pattern/ {print ""}
     {previous_empty = $0 == ""; print}'

Just as the GNU implementation of sed has a -i option for in-place editing, the GNU implementation of awk has -i inplace for that.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/323162", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/133059/" ] }
323,163
I have a csv file that has many lines of timestamps in the following format HH.MM.SS.MS. For example:

00.00.07.38
00.00.08.13
00.00.08.88

The hour is not relevant to me, so I would like to cut it out. How do I remove HH from every line in the file with bash? I can read line by line from the file:

while IFS=, read col1
do
    #remove HH from every line
    #awk -F '[.]' '{print $1}' <<< $col1    #only prints one portion of time
    #echo $col1 | cut -d"." -f2 | cut -d"." -f3 | cut -d"." -f4
done < $file

I have been playing around with awk and cut but was only able to print a specific position, e.g. HH. But how to remove just the HH from the line without creating a new file?
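A couple of possibilities, assuming each line holds just the dot-separated timestamp as in the example:

cut -d. -f2- file.csv             # drop the first field, print MM.SS.MS to stdout
sed -i 's/^[^.]*\.//' file.csv    # same idea, but editing the file in place (GNU sed)

The in-place sed variant avoids creating a new file, which was the stated requirement.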
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/323163", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/164888/" ] }
323,188
I have a systemd / journald running on my board. The system was built by means of yocto ; the systemd version is 216. What I want to get is a kernel boot log, which can be obtained with journalctl -k . But as far as I see, the long version of this option is --dmesg , which leads me to think that this is retrieved from the kernel ring buffer. Obviously if the system runs for days I might not get this information. Is my understanding correct here? The question now is: is there an option for journald to dump this info right after the system has booted? If not, is it sufficient just to call journalctl -k > dmesg.log as the last step of the booting process?
Obviously if the system runs for days I might not get this information. Is my understanding correct here? Yes. It depends on how much log information is generated, but eventually the boot information will scroll off the beginning of both the kernel's ring buffer and the systemd journal. It's no guide to how long it takes on anyone else's systems, but I have systems which have uptimes in the hundreds of days whose boot log data have long since scrolled off the top of the systemd journal. This is one of the disadvantages of having one giant combined log stream that everything fans into and then fans back out from again. So take a leaf from FreeBSD and NetBSD and their derivatives. They all have services that run once, at bootstrap just after local filesystems have mounted, that simply do:

dmesg > /var/run/dmesg.boot

Thus a snapshot of the kernel log as it was at bootstrap is available in /var/run/dmesg.boot even if it has since scrolled off the actual logs. You simply need to write a systemd service that does the same. Use the shell for redirection,

ExecStart=/bin/sh -c "exec dmesg > /run/dmesg.boot"

or use something like Laurent Bercot's redirfd or the nosh toolset's fdredir

ExecStart=/usr/local/bin/fdredir --write 1 /run/dmesg.boot dmesg

Substitute journalctl -k if you want to snapshot the systemd journal rather than just the kernel's log, and make this a Type=oneshot service. Either make it wanted by multi-user.target or make it a DefaultDependencies=no service that is wanted by basic.target . Note that it does not have to be ordered after local filesystem mounts (i.e. local-fs.target ). That ordering is necessary for FreeBSD and OpenBSD because /var/run could be a disc filesystem with them. On systemd operating systems /run is an "API filesystem" that is created at bootstrap before any services. (The approach that I personally prefer is not to have the giant central log stream in the first place. A dedicated service feeds off the kernel log feed alone and logs to a private log directory. That takes a lot longer to reach the point where last bootstrap information scrolls off the top. And it also contains boot logs from prior boots. However, this is a lot more complex to set up in a systemd world than a oneshot that writes a /run/dmesg.boot . It is simple in a daemontools family world, though. It's a trivial exercise in the use of tools such as fifo-listen and klog-read , or socklog . Piping the output through a log dæmon that writes to a private, reliably size-capped, auto-rotated, log directory comes as standard with a daemontools/runit/s6/nosh/perp-managed service.)
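A minimal sketch of such a unit, along the lines described above (the unit name and description are placeholders):

[Unit]
Description=Snapshot the kernel log at bootstrap
DefaultDependencies=no

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'exec dmesg > /run/dmesg.boot'

[Install]
WantedBy=basic.target

Substituting journalctl -k for dmesg snapshots the journal's view of the kernel log instead.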
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/323188", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55407/" ] }
323,194
I'd like to check to make sure a handful of commands are available. If it's not, I'd like to print an error message and then exit. I'd like to do this without checking variables, because it's a small point in the script and I don't want it to sprawl over a bunch of lines. The shape I'd like to use is basically this: rsync --help >> /dev/null 2>&1 || printf "%s\n" "rsync not found, exiting."; exit 1 Unfortunately, the exit 1 is executed regardless of the rsync result. Is there a way to use this perl-type die message in bash, or no?
To directly answer the question, braces group commands together, so:

rsync --help >> /dev/null 2>&1 || { printf "%s\n" "rsync not found, exiting."; exit 1; }

As a suggestion for doing what you want, but in another way:

#!/usr/bin/env bash
for c in rsync ls doesnotexist othercommand grep
do
    if ! type "$c" &> /dev/null
    then
        printf "$c not found, exiting\n"
        exit 1
    fi
done

And if you want to emulate perl's die in shell:

function die {
    printf "%s\n" "$@" >&2
    exit 1
}

# ...
if ! type "$c" &> /dev/null
then
    die "$c not found, exiting"
fi
# ...
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/323194", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/141502/" ] }
323,238
The question is common, but I am not able to solve it with the explanations I found here. Situation:

- 4GB usb stick
- Manjaro operating system
- iso image file of Linux Mint

First, I did:

lsblk                           # and got /dev/sdb for my usb stick; I left it unmounted
dd if=/dev/zero of=/dev/sdb     # filled it up with zeros
fdisk                           # Here, I created a DOS partition table and 1 partition containing the boot flag
mkfs.vfat /dev/sdb              # made the fat filesystem on the usb stick
dd if=linuxmint-18-xfce-64bit.iso.part of=/dev/sdb bs=4M   # Now, I copied the iso image onto the usb stick
echo $?                         # I checked if dd finished without error; the exit status was 0
mount /dev/sdb /mnt             # I mounted the usb stick and listed its content

The content surprised me; it was not the iso image file, but this:

boot
casper
dists
EFI
isolinux
MD5SUMS
pool
preseed
README.diskdefines

Then, I set the boot order in uefi as usb stick first, but it did not work, I only saw my GRUB loader window and started into Manjaro like always.
You should verify the .iso image: Steps to verify an ISO image. The available Linux images come with the .iso extension, not .iso.part . Before unplugging your USB it is recommended to run sync . Here is an example:

dd if=linuxmint-18-xfce-64bit.iso of=/dev/sdb bs=4M status=progress oflag=sync

Edit The sync is to make sure that all the writes are flushed out before the command returns.

- if is the input file (or device), of is the output file (or device)
- bs=4M tells dd to read/write in 4 megabyte chunks for better performance; the default is 512 bytes, which will be much slower
- status=progress : shows periodic transfer statistics.

manpages : dd
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/323238", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102788/" ] }
323,384
I have had this server for a while, and it was present on other questions. A while ago, I changed the network and the gateway IP was changed too. Since then, there is no internet on this machine. I need access to the internet to update the machine and (sometimes) to install packages I need for development. What I've tried:

route add default gw 192.168.1.1 ( https://unix.stackexchange.com/a/259046/133591 )
ip route replace default via 192.168.1.1 ( https://unix.stackexchange.com/a/199070/133591 )
ip route add default via 192.168.1.1 dev eth0 ( https://unix.stackexchange.com/a/259048/133591 )

Editing the file /etc/network/interfaces , to look like below:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
allow-hotplug eth0
#iface eth0 inet dhcp
iface eth0 inet static
    address 192.168.1.205
    netmask 255.255.255.0
    gateway 192.168.1.1
    broadcast 192.168.1.1

And this is the result of all my attempts:

root@webtest:~# route add default gw 192.168.1.1
SIOCADDRT: Network is unreachable
root@webtest:~# ip route replace default via 192.168.1.1
RTNETLINK answers: Invalid argument
root@webtest:~# ip route add default via 192.168.1.1 dev eth0
RTNETLINK answers: Invalid argument
root@webtest:~#

The most bizarre thing is the SIOCADDRT: Network is unreachable error, when I'm clearly connected using SSH, which uses the network. What else should I try? I don't even know what else to do. My system is running Debian 8.2 x64, with a single network interface. Note: I have read How can I change the default gateway? and How to set the Default gateway (which is where I got all those tries from). The accepted answer on How can I change the default gateway? is a FreeBSD-exclusive answer. Running ip addr and ip route gives the following:

root@webtest:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:1a:92:47:00:b5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.205/24 brd 192.168.1.1 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::21a:92ff:fe47:b5/64 scope link
       valid_lft forever preferred_lft forever
root@webtest:~# ip route
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.205
root@webtest:~#

Edit 1: After the change that @Johan Myréen suggested, the result is still the same. 
Below is the updated ip addr with 2 pings:

root@webtest:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:1a:92:47:00:b5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.205/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::21a:92ff:fe47:b5/64 scope link
       valid_lft forever preferred_lft forever
root@webtest:~# ip route
default via 192.168.1.1 dev eth0
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.205
root@webtest:~# ping google.com
ping: unknown host google.com
root@webtest:~# ping facebook.com
ping: unknown host facebook.com
root@webtest:~#
Your broadcast address should be 192.168.1.255 , not 192.168.1.1 . (And once routing is fixed, ping failing with "unknown host" points at name resolution rather than routing; make sure /etc/resolv.conf lists a working nameserver.)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/323384", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/133591/" ] }
323,440
I am looking for a way to concat lines based on the next line. So far the only way I see is to create a shell script that will read line by line and will do something along these lines:

while read line
    if $line does not start with "," and $curr_line is empty
        store line in curr_line
    if $line does not start with "," and $curr_line is not empty
        flush $curr_line to file
        store $line in $curr_line
    if $line starts with ","
        append to $curr_file, flush to file
        empty curr_line
done < file

So I am trying to understand if this could be achieved with sed or even grep with redirection. The rules of the file are simple. There is at max one and only one line starting with "," that needs to be appended to the previous line. ex:

line0
line1
line2
,line3
line4
line5
,line6
line7
,line8
line9
line10
line11

The result file would be

line0
line1
line2,line3
line4
line5,line6
line7,line8
line9
line10
line11
I'd do:

awk -v ORS= '
    NR>1 && !/,/ {print "\n"}
    {print}
    END {if (NR) print "\n"}' < file

That is, only print the newline character that delimits the previous line if the current one does not start with a , . In any case, I wouldn't use a while read loop .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/323440", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27827/" ] }
323,446
I am having an issue where DHCP (I thought, as I read in other similar topics) is clearing the /etc/resolv.conf file on each boot. I am not sure about how to deal with this, since the posts I have found ( 1 , 2 and some others) are for Debian based distros or others, but not Fedora. This is the output of ifcfg-enp0s31f6, so for sure it is DHCP:

cat /etc/sysconfig/network-scripts/ifcfg-enp0s31f6
HWADDR=C8:5B:76:1A:8E:55
TYPE=Ethernet
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
IPV6_AUTOCONF=no
IPV6_DEFROUTE=no
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp0s31f6
UUID=0af812a3-ac8e-32a0-887d-10884872d6c7
ONBOOT=yes
IPV6_PEERDNS=no
IPV6_PEERROUTES=no
BOOTPROTO=dhcp
PEERDNS=yes
PEERROUTES=yes

On the other side, I don't know if Network Manager is doing something else around this. Update: Content of NetworkManager.conf (I have removed the comments since they are useless):

$ cat /etc/NetworkManager/NetworkManager.conf
[main]
#plugins=ifcfg-rh,ibft
dns=none
[logging]
#domains=ALL

Can I get some help with this? It's annoying to be setting up the file again and again on every reboot. UPDATE 2 After a month I'm still having the same issue where the file gets deleted by "something". Here are the steps I followed in order to make a fresh test: Reboot the PC. After the PC gets restarted, open a terminal and try to ping Google servers, of course without success:

$ ping google.com
ping: google.com: Name or service not known

Check the network configuration, where all seems to be fine:

$ cat /etc/sysconfig/network-scripts/ifcfg-enp0s31f6
NAME=enp0s31f6
ONBOOT=yes
HWADDR=C8:5B:76:1A:8E:55
MACADDR=C8:5B:76:1A:8E:55
UUID=0af812a3-ac8e-32a0-887d-10884872d6c7
BOOTPROTO=static
PEERDNS=no
DNS1=8.8.8.8
DNS2=8.8.4.4
DNS3=192.168.1.10
NM_CONTROLLED=yes
IPADDR=192.168.1.66
NETMASK=255.255.255.0
BROADCAST=192.168.1.255
GATEWAY=192.168.1.1
TYPE=Ethernet
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no

Restart the network service:

$ sudo service network restart
[sudo] password for <current_user>:
Restarting network (via systemctl):      [  OK  ]

Try to ping Google servers again, with no success:

$ ping google.com
ping: google.com: Name or service not known

Check for file /etc/resolv.conf :

$ cat /etc/resolv.conf
cat: /etc/resolv.conf: No such file or directory

The file doesn't exist anymore, and this is the problem: something is deleting it on every reboot. Create the file and add the DNS content:

$ sudo nano /etc/resolv.conf

Ping Google servers, this time with success:

$ ping google.com
PING google.com (216.58.192.110) 56(84) bytes of data.
64 bytes from mia07s35-in-f110.1e100.net (216.58.192.110): icmp_seq=1 ttl=57 time=3.87 ms

Any ideas in what could be happening here?
In my experience, /etc/resolv.conf gets regenerated on boot, so any manual changes to it get reset. To work around this, you can create /etc/resolv.conf.head (or .tail depending on which end of the file you want to add to) and insert the custom settings you want in there (usually nameserver changes). Then the contents of that file gets added automatically when /etc/resolv.conf is generated by NetworkManager (or whichever service is in charge of the file on your system). If that doesn't work, you can modify /etc/resolvconf/resolv.conf.d/base -- it stores the "default" content for /etc/resolv.conf .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/323446", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13781/" ] }
323,459
Right now, my /etc/resolv.conf looks like this:

# Generated by NetworkManager
nameserver 10.165.246.33
nameserver 192.135.82.60
nameserver 10.165.74.2

The first two nameservers are automatically configured through DHCP; the last one is the one I added manually, in NetworkManager. It's also the most important one, since it resolves our internal domain names (e.g. build-server-17.our-company-domain.com ). The trouble is, NetworkManager adds it to the bottom of /etc/resolv.conf , so when accessing an intranet URL, my browser tries to resolve it using the first two servers, and it takes ages. How do I make NetworkManager add the manually-configured DNS server before the automatically-configured ones?
I accidentally created a duplicate question here . The answer is there, but essentially, you need to create /etc/dhcp/dhclient.conf if it doesn't already exist, and add:

prepend domain-name-servers [ip address of server];

Don't forget the semicolon at the end! After that, simply rebooting automagically moved the 'nameserver [ip address of server]' line in /etc/resolv.conf up to the top!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/323459", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/200472/" ] }
323,546
I've often seen the rule -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT applied. Whilst I'm not an expert, that particular line concerns me. It's pretty obvious that the rule allows all traffic, with the only exception that the connection has to have been established or be related to an established connection. Scenario:

- I'll allow connections to the default SSH port 22 from the server's LAN in the subnet 192.168.0.0/16 or whatever.
- SuperInsecureApp® exposes something on port 1337 , which I add to my INPUT chain.
- I've added the conntrack rule to accept ESTABLISHED and RELATED from all sources.
- Chain policy is DROP.

So basically that configuration should allow SSH connections from the LAN only, whilst allowing inbound traffic on port 1337 from the world. This is where my confusion blooms. Would the conntrack in any way expose a security flaw that would allow one to get an established connection on 1337 (since it's world open), and then utilize that connection to gain access to the SSH port (or any other port for that matter)?
I would not consider ESTABLISHED and RELATED traffic too open. You may be able to omit RELATED, but should definitely allow ESTABLISHED. Both of these traffic categories use conntrack states. ESTABLISHED connections have already been validated by another rule. This makes it much simpler to implement unidirectional rules. It only allows you to continue transactions on the same port. RELATED connections are also validated by another rule. They only apply to a handful of protocols. Again, they make it much simpler to configure rules. They also ensure proper sequencing of connections where they apply. This actually makes your rules more secure. While this may make it possible to connect on a different port, that port should only be part of a related process like an FTP data connection. Which ports are allowed is controlled by protocol-specific conntrack modules. By allowing ESTABLISHED and RELATED connections, you can concentrate on which new connections you want the firewall to accept. It also avoids broken rules meant to allow return traffic, but which allow new connections. Given you have classified the program on port 1337 as insecure, it should be started using a dedicated non-root user-id. This will limit the damage someone can do if they do manage to crack the application and gain enhanced access. It is highly unlikely a connection on port 1337 could be used to access port 22 remotely, but it is possible that a connection to port 1337 could be used to proxy a connection to port 22. You may want to ensure SSH is secured in depth:

- Use hosts.allow to limit access in addition to the firewall restrictions.
- Prevent root access, or at least require the use of keys and limit their access in the authorized_keys file.
- Audit login failures. A log scanner can send you periodic reports of unusual activity.
- Consider using a tool like fail2ban to automatically block access on repeated access failures.
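A sketch of the rule set the scenario describes, with NEW made explicit so each ACCEPT only admits fresh connections:

iptables -P INPUT DROP
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -s 192.168.0.0/16 -m conntrack --ctstate NEW -j ACCEPT
iptables -A INPUT -p tcp --dport 1337 -m conntrack --ctstate NEW -j ACCEPT

A connection accepted on 1337 is ESTABLISHED only for that particular source/destination/port tuple; the first rule never turns it into access to port 22.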
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/323546", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/155957/" ] }
323,577
I have a file named file.txt . The content of the file is as follows:

sunday
monday
tuesday

I wrote the below script, and it loops just fine if grep cannot find the pattern that was mentioned:

until cat file.txt | grep -E "fdgfg" -C 9999; do sleep 1 | echo "working..."; done

But my requirement is that the above script should loop until the text mentioned in the grep pattern disappears from file.txt . I tried to use the L flag with grep, but it didn't work:

until cat file.txt | grep -EL "sunday" -C 9999; do sleep 1 | echo "working..."; done
From the grep man page:

EXIT STATUS
    Normally the exit status is 0 if a line is selected, 1 if no lines were selected, and 2 if an error occurred. However, if the -q or --quiet or --silent is used and a line is selected, the exit status is 0 even if an error occurred.

So if a line is present, the exit status is 0. Since in bash 0 is true (because the standard "successful" exit status of programs is 0), you should actually have something like:

#!/bin/bash
while grep "sunday" file.txt > /dev/null;
do
    sleep 1
    echo "working..."
done

(grep's -q option would suppress the output without the redirect to /dev/null.) Why exactly are you piping sleep 1 to echo ? Though it works, it doesn't make much sense. If you wanted them inline you could just write sleep 1; echo "working..." , and if you wanted the echo to run before the delay, you could have it before the sleep call, like echo "working..."; sleep 1 .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/323577", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/200547/" ] }
323,604
I have a csv file that contains 3 fields per line:

firstname,lastname,url

I'm trying to access the url via the following pipeline:

grep theName file.csv | cut -d, -f 3

Then I want to add another pipe and use the results of the cut command in a curl command like so:

grep theName file.csv | cut -d, -f 3 | curl > result.txt

The problem is, when I do the above, the curl command throws an error; I assume because curl doesn't have an argument? How can I use the result of cut to curl the resulting url? Thanks in advance. =)
Leverage command substitution, $():

curl "$(grep ... | cut -d, -f 3)"

Here $() will be substituted by the stdout of the command inside it, i.e. grep ... | cut -d, -f 3. This is done by the shell first, so the curl command finally becomes:

curl <the_url>
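If you would rather avoid command substitution, a rough alternative is to hand the URL to curl as an argument via xargs (the output file name here is just an example):

grep theName file.csv | cut -d, -f 3 | xargs curl -o result.txt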
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/323604", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/200569/" ] }
323,610
I understand that reads to /dev/random may block, while reading /dev/urandom is guaranteed not to block. Where does the letter u come into this? What does it signify? Userspace? Unblocking? Micro? Update: Based on the initial wording of the question, there has been some debate over the usefulness of /dev/random vs /dev/urandom . The link Myths about /dev/urandom has been posted three times below, and is summarised in this answer to the question When to use /dev/random vs /dev/urandom .
Unlimited. In Linux, comparing the kernel functions named random_read and random_read_unlimited indicates that the etymology of the letter u in urandom is unlimited . This is confirmed by line 114 : The /dev/urandom device does not have this limit [...] Update: Regarding which came first for Linux, /dev/random or /dev/urandom , @Stéphane Chazelas gave the post with the original patch and @StephenKitt showed they were both introduced simultaneously .
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/323610", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143394/" ] }
323,663
This is the sequence of commands; gedit starts, but it cannot be killed from its process ID:

$ gedit&
$ t=$!
$ echo $t
4824
$ kill $t
bash: kill: (4824) - No such process

It would work just fine for a sleep process, like:

$ sleep 999&
[1] 4881
$ t=$!
$ echo $t
4881
$ kill $t
$ ps -p $t
[1]  Terminated              sleep 999

What's the difference? How can the gedit process be terminated?
The gedit process is already terminated. Remember how Windows applications mainly worked back in the Win16 days before Win32 came along and did away with it: where there were hInstance and hPrevInstance , attempting to run a second instance of many applications simply handed things over to the first instance, and this made things difficult for command scripting tools (like Take Command) because one would invoke an application a second time, it would visibly be there on the screen as an added window, but as far as the command interpreter was concerned the child process that it had just run immediately exited? Well GNOME has brought the Win16 behaviour back for Linux. With GIO applications like gedit , the application behaves as follows: If there's no registered "server" named org.gnome.gedit already on the per-user/per-login Desktop Bus, gedit decides that it's the first instance. It becomes the org.gnome.gedit server and continues to run. If there is a registered "server" named org.gnome.gedit already on the per-user/per-login Desktop Bus, gedit decides that it's a second or subsequent instance. It constructs Desktop Bus messages to the first instance, passing along its command line options and arguments, and then simply exits . So what you see depends from whether you have the gedit server already running. If you haven't, you'll be in sebvieira's shoes and wondering why you aren't seeing the behaviour described. If you have, you'll be in your shoes and seeing the gedit process terminating almost immediately, especially since you haven't given it any command-line options or arguments to send over to the "first instance". Hence the reason that there's no longer a process with that ID. Much fun ensues when, as alluded to above, the per-login Desktop Bus is switched to the "new" style of a per-user Desktop Bus, and suddenly there's not a 1:1 relationship between a Desktop Bus and an X display any more. Single user-bus-wide instance applications suddenly have to be capable of talking to multiple X displays concurrently. Further hilarity ensues when people attempt to run gedit as the superuser via sudo , as it either cannot connect to a per-user Desktop Bus or connects to the wrong (the superuser's) Desktop Bus. There's a proposal to give gedit a command-line option that makes the process that is invoked just be the actual editor application , so that gedit would be useful as the editor pointed-to by the EDITOR environment variable (which it isn't for many common usages of EDITOR , from crontab to git , when it just exits immediately). This proposal has not become reality yet. In the meantime, people have various ways of having a simple second instance of a "lightweight text editor", such as invoking a whole new Desktop Bus instance, private to the invocation of gedit , with dbus-run-session . Of course, this tends to spin up other GNOME Desktop Bus servers on this private bus as they are in turn invoked by gedit , making it not "lightweight" at all. The icing on the cake is when you've followed this recommendation or one like it and interposed a shell function named gedit that immediately removes the gedit process from the shell's list of jobs. Not only does the process terminate rapidly so that you don't see it later with kill or ps , but the shell doesn't even monitor it as a shell-controlled job. Further reading Apps/Gedit/NewSingleInstance . GNOME wiki. 2013. " Description ". GApplication . GNOME Developers' wiki. https://stackoverflow.com/questions/7553452/ Stefan Löffler (2011-05-04). 
doesn't reuse running instance when started from nautilus. Bug #777292. Launchpad.
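To see the dbus-run-session workaround mentioned above in action, a quick sketch (the file name is arbitrary): on a private bus there is no pre-existing org.gnome.gedit server, so the process you start is the one that keeps running, and your shell can kill it as usual:

$ dbus-run-session -- gedit note.txt &
$ t=$!
$ kill $t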
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/323663", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91304/" ] }
323,673
I just installed Windows 8.1 on my system. After installing Windows, as always, GRUB was replaced with the Windows bootloader. So I booted my Fedora live USB and tried to restore GRUB. The installation was successful and it detected all my currently installed systems (Windows, Fedora 24, Ubuntu 16.04). After rebooting I was dropped into grub rescue, so I typed the following:

insmod normal
normal

After that I was given access to GRUB again, this time with all the OSes in the list, including the newly installed Windows 8.1. Is there any way to fix this? Everything is working fine; it's just that at every boot I need to type the above commands.

UPDATE: I kind of screwed up. I ran grub2-mkconfig -o /boot/grub2/grub.cfg; GRUB now loads fine but cannot detect Ubuntu.
Since insmod normal followed by normal gets you to a working menu, GRUB's core image is installed but isn't finding its normal module and configuration automatically, so the usual fix is to reinstall GRUB from your booted Fedora system. A sketch of that fix (assuming a BIOS-mode install; replace /dev/sda with your actual boot disk as shown by lsblk):

sudo grub2-install /dev/sda
sudo grub2-mkconfig -o /boot/grub2/grub.cfg

As for the update where Ubuntu disappeared from the menu: grub2-mkconfig relies on os-prober to detect other installed systems, so make sure it is installed before regenerating the configuration:

sudo dnf install os-prober
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/323673", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/200630/" ] }
323,741
I have a nice new shiny 4k monitor. I can increase the font size for most applications (including awesome); however, there are a few issues:

The wibar vicious widgets show a tiny font, not the one defined in theme.lua.
Any GNOME applications still show the old (aka tiny) font size.

I suspect that setting the font size everywhere will lead to me going insane. Is there a DPI setting within awesome I can use? If not, is there some xrandr magic I can do?
With awesome 4.0 on Debian stretch, no patch (as in the answer by Sardathrion) is needed. To change your DPI and get a proper screen setup, you need to:

1) Create the .Xresources file with your settings, that is:

Xft.dpi: 192

If you are wondering about the right DPI value, see this post. I used the next value that was a multiple of 96. For more interesting settings, check out the informative Arch wiki entry.

2) Include the settings from .Xresources by adding the following line to the file .xinitrc:

xrdb -merge ~/.Xresources
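As for the xrandr magic asked about in the question, there is also a direct flag to change the DPI that X reports; a one-line sketch with the same value as above (toolkit settings like Xft.dpi may still be needed on top of it):

xrandr --dpi 192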
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/323741", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10526/" ] }
323,750
I understand that EOT is ASCII code 4, whereas EOF is encoded as -1 (at least in C). Before I found out that EOF is mapped to -1, I thought it was just a synonym for EOT. Why is EOF mapped to -1 rather than EOT? As far as I can tell, they both do the same thing, which is to terminate a file stream. The only difference I can discern is that EOT also terminates a command in the bash shell. I would like a description of the precise technical differences between these two codes.
Generally, EOF isn't a character; it's the absence of a character. If a program runs on a terminal in canonical mode with default settings (i.e. a plain C program that just uses stdio), it will never see the ASCII character EOT. The terminal driver recognizes that character and creates an EOF condition (which at the low level is a 0 return value from read() ). The stdio library translates that EOF condition into the return value that is appropriate for the function in question (the EOF macro for getchar() , a null pointer for fgets() , etc.) The numeric value of the EOF macro is of no relevance anywhere but in the C library, and it shouldn't influence your understanding of the meaning of the EOF condition.
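A quick terminal sketch of this: press Ctrl-D (which sends the EOT character to the terminal driver) at the start of a line while feeding cat, and no 0x04 byte ever reaches the file; the driver consumes it and cat simply sees the EOF condition (a 0 return from read()):

$ cat > file
hello
(press Ctrl-D here)
$ od -c file
0000000   h   e   l   l   o  \n
0000006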
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/323750", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/188709/" ] }
323,821
It seems that sort behaves strangely on lines such as >>b:

$ cat test
a
>>b
b
c
>
>>
$ sort test
>
>>
a
b
>>b
c

I would expect the >>b line to be the third line output by sort but it is the fifth. Why does it do this and is there a way to make sort give my expected output? I'm using GNU/Linux Ubuntu 16.04.
Sorting algorithms in modern locales are quite complex. Each character (actually collating element, which could consist of a sequence of several characters like the Czech ch) is given a number of collating weights that decide their sorting order. When comparing two strings, the first weight of all the characters is used first, and other weights are used later to decide ties if the two strings sorted the same with the first weights.

For instance, in many locales, e, é and E have the same primary weight (they are of the same equivalence class; they all match [=e=]). So, when comparing for instance echo, été and Enter, in the first pass, e, é and E having the same primary weight, it's the second character that will determine the order (c before n before t). When comparing été, Été, Ete, after the first pass, they all sort the same, so we use the second pass using the secondary weight. In typical GNU locales, the second weight for latin script characters is used to prioritise accents (no accent comes first, then acute, grave, breve, circumflex...). We'll then need to use the third weight to decide between été and Été and that will be based on case (lowercase first in most locales).

There are even characters that end up sorting the same because all their weights are identical. That is used to sort text in a similar way as dictionaries do, as a human would do. In a dictionary, you'll find that space and most punctuation characters are also ignored. For instance, de facto sorts in between debut and devoid. The first weight of the space character is IGNORE.

On a GNU system, you'll find the core collating rules defined in /usr/share/i18n/locales/iso14651_t1_common (the path may vary with the distribution). In there, you'll see:

ifdef UPPERCASE_FIRST
<CAP>
else
<MIN>
endif
[...]
ifdef UPPERCASE_FIRST
[...]
<MIN>   # 10
[...]
else
[...]
<CAP>   # 9
[...]
endif
[...]
order_start <SPECIAL>;forward;backward;forward;forward,position
<U0020> IGNORE;IGNORE;IGNORE;<U0020> # 32 <SP>
[...]
<U003E> IGNORE;IGNORE;IGNORE;<h> # 140 >
[...]
ifdef DIACRIT_FORWARD
order_start <LATIN>;forward;forward;forward;forward,position
else
order_start <LATIN>;forward;backward;forward;forward,position
endif
[...]
<U0065> <e>;<BAS>;<MIN>;IGNORE # 259 e
<U00E9> <e>;<ACA>;<MIN>;IGNORE # 260 é
[...]
<U0045> <e>;<BAS>;<CAP>;IGNORE # 577 E
<U00C9> <e>;<ACA>;<CAP>;IGNORE # 578 É

Which illustrates what we've just said. Both space and > have their first 3 weights set to IGNORE. It's only for strings sorting the same for the first 3 weights that their relative order will be considered (> before space, as that <h> would list before the unspecified collating symbol <U0020>). In locales that define UPPERCASE_FIRST (like /usr/share/i18n/locales/tr_TR), upper case letters will come first (in the 3rd pass). Same with DIACRIT_FORWARD, where some locales like de_DE can decide to reverse the order of diacritics for the 2nd pass.

> and >> sort the same in the 1st, 2nd and 3rd pass. In the 4th, > sorts before >>, as the empty string sorts before everything. >>b sorts after b because they sort the same in the first 3 passes, but in the fourth pass, b is IGNORE, so > is greater. It's less than c in the first pass (> ignored, and b before c)... You get the idea.

Now, if you look at the C locale definition, it's a lot simpler. There's only one weight, and the weight is based on the codepoint value from U+0000 to U+10FFFF. So SPC (U+0020) sorts before > (U+003E), which sorts before b (U+0062), which sorts before c (U+0063). No character is ever ignored.
Note that with the GNU libc at least, the order defined in the C locale definition file is ignored when it comes to comparison functions (strcoll() and co. as used by sort). Regardless of the value of LC_CTYPE, with LC_COLLATE=C, strcoll() is equivalent to strcmp(). That is, the comparison is always on the byte value, even if those bytes correspond to characters whose unicode codepoints sort the other way round (like 0xA4 U+20AC EURO SIGN vs 0xA5 U+00A5 YEN SIGN in the ISO-8859-15 charset), so LC_ALL=C sort and LC_COLLATE=C sort (provided LC_ALL is not otherwise set) will have the same effect there.
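So to get the byte-wise ordering the question expected, force the C locale; with the test file from the question:

$ LC_ALL=C sort test
>
>>
>>b
a
b
c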
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/323821", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124286/" ] }
323,826
file 1:

HOGBRM443983 -2522.00 19800826
HOGBRM445985 -2389.00 19801101
HOUSAM1891409 -1153.00 19811228
HOUSAM2004289 -650.00 19860101
HOUSAM2005991 -843.00 19860109
HOCANM388722 -1546.00 19860116
HOUSAM2007297 -1882.00 19860125
HOUSAM2007389 -1074.00 19860128
HOITAM801038516 -691.00 19860128

Columns 2 and 3 include values and birthdate information (year, month, day) of each id from column 1, respectively. I want to check how many ids exist within each birth year and what the average values (from the second column) of ids are across different years. For example, in file1 there are 2, 1 and 6 ids across years 1980, 1981 and 1986 respectively, so the output should be:

output:
1980 2 -2455.5
1981 1 -1153.00
1986 6 -1114.33

in which the first column shows the year of birth, the second column shows the number of ids within each year, and the third column is the average value of ids across different years. Considering that the real data is indeed huge, any suggestion would be appreciated.
The Awk answer: awk '{y=substr($3,1,4); c[y]++; s[y]+=$2} END {for (y in c) {print y, c[y], (s[y]/c[y])}}' file.txt
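One caveat: for (y in c) visits the years in an unspecified order, so if you want the output sorted by year as in the example, pipe it through sort:

awk '{y=substr($3,1,4); c[y]++; s[y]+=$2} END {for (y in c) {print y, c[y], (s[y]/c[y])}}' file.txt | sort -n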
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/323826", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/133262/" ] }
323,845
I tried a bash script, but it took too long to create a simple 1 MB file. I think the answer lies in using /dev/random or /dev/urandom, but other posts here only show how to add all kinds of data to a file using these things, and I want to add only numbers. So, is there a command that I can use to create a random file of size 1 GB containing only numbers between 0 and 9?

Edit: I want the output to be something like this:

0 1 4 7 ..... 9
8 7 5 8 ..... 8
...
8 7 5 3 ..... 3

The range is 0 - 9, meaning only the numbers 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9. Also I need them to be space separated and 100 per line, up to n number of lines. This n is something I don't care about; I want my final size to be 1 GB.

Edit: I am using Ubuntu 16.04 LTS.
This:

LC_ALL=C tr '\0-\377' \
  '[0*25][1*25][2*25][3*25][4*25][5*25][6*25][7*25][8*25][9*25][x*]' \
  < /dev/urandom |
  tr -d x |
  fold -w 1 |
  paste -sd "$(printf '%99s\\n')" - |
  head -c1G

(assuming a head implementation that supports -c) appears to be reasonably fast on my system.

tr translates the whole byte range (0 to 255, 0 to 0377 in octal): the 25 first bytes as 0, the 25 next ones as 1... then 25 as 9, and the rest (250 to 255) to "x", which we then discard (with tr -d x), as we want a uniform distribution (assuming /dev/urandom has a uniform distribution itself) and so not give a bias to some digits. That produces one digit for 97% of the bytes of /dev/urandom. fold -w 1 makes it one digit per line. paste -s is called with a list of separators that consists of 99 space characters and one newline character, so as to have 100 space-separated digits on each line. head -c1G will get the first GiB (2^30) of that. Note that the last line will be truncated and undelimited. You could truncate to 2^30-1 and add the missing newline by hand, or truncate to 10^9 bytes instead, which is 50 million of those 200-byte lines (head -n 50000000 would also make it a standard/portable command).

These timings (obtained by zsh on a quad-core system) give an indication of where the CPU time is spent:

LC_ALL=C tr '\0-\377' < /dev/urandom  0.61s user 31.28s system 99% cpu 31.904 total
tr -d x  1.00s user 0.27s system 3% cpu 31.903 total
fold -w 1  14.93s user 0.48s system 48% cpu 31.902 total
paste -sd "$(printf '%99s\\n')" -  7.23s user 0.08s system 22% cpu 31.899 total
head -c1G > /dev/null  0.49s user 1.21s system 5% cpu 31.898 total

The first tr is the bottleneck, most of the time spent in the kernel (I suppose for the random number generation). The timing is roughly in line with the rate I can get bytes from /dev/urandom (about 19MiB/s, and here we produce 2 bytes for each 0.97 byte of /dev/urandom at a rate of 32MiB/s). fold seems to be spending an unreasonable amount of CPU time (15s) just to insert a newline character after every byte, but that doesn't affect the overall time as it works on a different CPU in my case (adding the -b option makes it very slightly more efficient; dd cbs=1 conv=unblock seems like a better alternative).

You can do away with the head -c1G and shave off a few seconds by setting a limit on the file size (limit filesize 1024m with zsh, or ulimit -f "$((1024*1024))" with most other shells (including zsh)) instead in a subshell.

That could be improved if we extracted 2 digits for each byte, but we would need a different approach for that. The above is very efficient because tr just looks up each byte in a 256-byte array. It can't do that for 2 bytes at a time, and using things like hexdump -e '1/1 "%02u"' that compute the text representation of a byte using more complex algorithms would be more expensive than the random number generation itself.
Still, if, like in my case, you have CPU cores with time to spare, it may still manage to shave off a few seconds. With:

< /dev/urandom LC_ALL=C tr '\0-\377' '\0-\143\0-\143[x*]' |
  tr -d x |
  hexdump -n250000000 -ve '500/1 "%02u" "\n"' |
  fold -w1 |
  paste -sd "$(printf '%99s\\n')" - > /dev/null

I get (note however that here it's 1,000,000,000 bytes as opposed to 1,073,741,824):

LC_ALL=C tr '\0-\377' '\0-\143\0-\143[x*]' < /dev/urandom  0.32s user 18.83s system 70% cpu 27.001 total
tr -d x  2.17s user 0.09s system 8% cpu 27.000 total
hexdump -n250000000 -ve '500/1 "%02u" "\n"'  26.79s user 0.17s system 99% cpu 27.000 total
fold -w1  14.42s user 0.67s system 55% cpu 27.000 total
paste -sd "$(printf '%99s\\n')" - > /dev/null  8.00s user 0.23s system 30% cpu 26.998 total

More CPU time overall, but better distributed between my 4 CPU cores, so it ends up taking less wall-clock time. The bottleneck is now hexdump.

If we use dd instead of the line-based fold, we can actually reduce the amount of work hexdump needs to do and improve the balance of work between CPUs:

< /dev/urandom LC_ALL=C tr '\0-\377' '\0-\143\0-\143[x*]' |
  tr -d x |
  hexdump -ve '"%02u"' |
  dd bs=50000 count=10000 iflag=fullblock status=none cbs=1 conv=unblock |
  paste -sd "$(printf '%99s\\n')" -

(here assuming GNU dd for its iflag=fullblock and status=none) which gives:

LC_ALL=C tr '\0-\377' '\0-\143\0-\143[x*]' < /dev/urandom  0.32s user 15.58s system 99% cpu 15.915 total
tr -d x  1.62s user 0.16s system 11% cpu 15.914 total
hexdump -ve '"%02u"'  10.90s user 0.32s system 70% cpu 15.911 total
dd bs=50000 count=10000 iflag=fullblock status=none cbs=1 conv=unblock  5.44s user 0.19s system 35% cpu 15.909 total
paste -sd "$(printf '%99s\\n')" - > /dev/null  5.50s user 0.30s system 36% cpu 15.905 total

Back to the random-number generation being the bottleneck.

Now, as pointed out by @OleTange, if you have the openssl utility, you could use it to get a faster (especially on processors that have AES instructions) pseudo-random generator of bytes:

</dev/zero openssl enc -aes-128-ctr -nosalt -pass file:/dev/urandom

On my system it spews 15 times as many bytes per second as /dev/urandom. (I can't comment on how it compares in terms of cryptographically secure source of randomness, if that applies to your use case.)

</dev/zero openssl enc -aes-128-ctr -nosalt -pass file:/dev/urandom 2> /dev/null |
  LC_ALL=C tr '\0-\377' '\0-\143\0-\143[x*]' |
  tr -d x |
  hexdump -ve '"%02u"' |
  dd bs=50000 count=10000 iflag=fullblock status=none cbs=1 conv=unblock |
  paste -sd "$(printf '%99s\\n')" -

now gives:

openssl enc -aes-128-ctr -nosalt -pass file:/dev/urandom < /dev/zero  1.13s user 0.16s system 12% cpu 10.174 total
LC_ALL=C tr '\0-\377' '\0-\143\0-\143[x*]'  0.56s user 0.20s system 7% cpu 10.173 total
tr -d x  2.50s user 0.10s system 25% cpu 10.172 total
hexdump -ve '"%02u"'  9.96s user 0.19s system 99% cpu 10.172 total
dd bs=50000 count=10000 iflag=fullblock status=none cbs=1 conv=unblock  4.38s user 0.20s system 45% cpu 10.171 total
paste -sd "$(printf '%99s\\n')" - > /dev/null

back to hexdump being the bottleneck. As I still have CPUs to spare, I can run 3 of those hexdump in parallel.
</dev/zero openssl enc -aes-128-ctr -nosalt -pass file:/dev/urandom 2> /dev/null |
  LC_ALL=C tr '\0-\377' '\0-\143\0-\143[x*]' |
  tr -d x |
  (hexdump -ve '"%02u"' <&3 & hexdump -ve '"%02u"' <&3 & hexdump -ve '"%02u"') 3<&0 |
  dd bs=50000 count=10000 iflag=fullblock status=none cbs=1 conv=unblock |
  paste -sd "$(printf '%99s\\n')" -

(the <&3 is needed for shells other than zsh that close commands' stdin on /dev/null when run in the background). Now down to 6.2 seconds and my CPUs almost fully utilised.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/323845", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/190496/" ] }
323,894
I need awk to print an underscore in the output. See the example below.

Current Output:

[root@looct ~]# date | awk '{print $2$3$6}'
Nov142016

Required Output:

Nov_14_2016 -----> I need this, is it possible?
date +%b_%d_%Y will do it without an extra process.
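If you specifically want it from awk, as in the question, plain string concatenation with literal underscores works too:

date | awk '{print $2"_"$3"_"$6}'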
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/323894", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98965/" ] }
323,901
I have seen this answer : You should consider using inotifywait, as an example: inotifywait -m /path -e create -e moved_to | while read path action file; do echo "The file '$file' appeared in directory '$path' via '$action'" # do something with the file done The above script watches a directory for creation of files of any type. My question is how to modify the inotifywait command to report only when a file of a certain type/extension is created (or moved into the directory). For example, it should report when any .xml file is created. What I tried : I have run the inotifywait --help command, and have read the command line options. It has --exclude <pattern> and --excludei <pattern> options to EXCLUDE files of certain types (by using regular expressions), but I need a way to INCLUDE just the files of a certain type/extension.
how do I modify the inotifywait command to report only when a file of a certain type/extension is created

Please note that this is untested code since I don't have access to inotify right now. But something akin to this ought to work:

inotifywait -m /path -e create -e moved_to |
    while read directory action file; do
        if [[ "$file" =~ \.xml$ ]]; then    # Does the file end with .xml?
            echo "xml file"                 # If so, do your thing here!
        fi
    done

(Note the \. in the pattern: a bare .*xml$ would also match names that merely end in xml without the dot.)
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/323901", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/200776/" ] }
323,914
Is there a way to dynamically assign environment variables in a systemd service unit file? We have a machine that has 4 GPUs, and we want to spin up multiple instances of a certain service per GPU. E.g.: gpu_service@1:1.service gpu_service@2:1.service gpu_service@3:1.service gpu_service@4:1.service gpu_service@1:2.service gpu_service@2:2.service gpu_service@3:2.service gpu_service@4:2.service ad nauseam So the 1:1, 2:1, etc. are effectively the %i in the service unit file. In order for the service to bind to a particular GPU, the service executable checks a certain environment variable, e.g.: USE_GPU=4 Is there a way I can take %i inside the service unit file and run it through some (shell) function to derive the GPU number, and then I can set the USE_GPU environment variable accordingly? Most importantly, I don't want the hassle of writing multiple /etc/systemd/system/gpu_service@x:y.service/local.conf files just so I can spin up more instances.
If you are careful you can incorporate a small bash script sequence as your exec command in the instance service file, e.g.:

ExecStart=/bin/bash -c 'v=%i; USE_GPU=$${v%:*} exec /bin/mycommand'

The $$ in the string will become a single $ in the result passed to bash, but more importantly will stop ${...} from being interpolated by systemd. (Earlier versions of systemd did not document the use of $$, so I don't know if it was supported then.)
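Putting it together, a sketch of a complete template unit built around that line (paths and the binary name are placeholders from the question):

# /etc/systemd/system/gpu_service@.service
[Unit]
Description=GPU service instance %i

[Service]
ExecStart=/bin/bash -c 'v=%i; USE_GPU=$${v%:*} exec /bin/mycommand'

[Install]
WantedBy=multi-user.target

Starting systemctl start gpu_service@4:2.service then runs the command with USE_GPU=4, since ${v%:*} strips the shortest trailing :* suffix from the instance name.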
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/323914", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26714/" ] }
323,916
Can fold be set to recognize characters instead of bytes? Traditional Chinese characters appear to be encoded in three bytes each (in UTF-8 at least), which means that if fold's -w is not a multiple of three, then the following will occur:

$ cat in.txt
【財經中心、政治中心╱台北報導】看不慣政府施政效率緩慢,鴻海集團董事長郭台銘動念選總統!《壹週刊》報導,在川普勝選當晚,郭召集鴻海高層幹部,進行美國總統大選換人後的應變策略演練,讓人驚訝的是,郭詢問在場幹

$ cat in.txt | fold # -w is 80 by default
【財經中心、政治中心╱台北報導】看不慣政府施政效率緩���,鴻海集團董事長郭台銘動念選總統!《壹週刊》報導,在���普勝選當晚,郭召集鴻海高層幹部,進行美國總統大選換人後的應變策略演練,讓人驚訝的是,郭詢問在場幹

fold's default output is a width of 80 columns, and this results in 26 2/3 characters (26 * 3 + 2, or 80 bytes) being printed on each line. Therefore, -w must be set to a multiple of three in order to avoid character breakage. So, at least for fold, columns=bytes. Again, my question is: can fold be set to honor multi-byte characters? The man page doesn't mention anything about this.
GNU fold and GNU fmt only understand bytes, not characters. To wrap to a certain number of characters, you can use sed:

sed 's/.\{20\}/&\n/g' <in.txt
【財經中心、政治中心╱台北報導】看不慣政
府施政效率緩慢,鴻海集團董事長郭台銘動念
選總統!《壹週刊》報導,在川普勝選當晚,
郭召集鴻海高層幹部,進行美國總統大選換人
後的應變策略演練,讓人驚訝的是,郭詢問在
場幹

If you wanted to break at whitespace (useful for many languages), here's a quick-and-dirty awk script:

awk '
    BEGIN {width = 20}
    NF == 0 {column = 0; print}
    {
        split($0, a);
        for (i in a) {
            w = length(a[i]) + 1;
            column += w;
            if (column > width) {column = w; print ""};
            if (column != w) printf " ";
            printf "%s", a[i];
        }
    }
    END {if (column) print ""}'

In any case make sure that your locale settings are correct. Specifically, LC_CTYPE must designate the right character encoding, e.g. LC_CTYPE=en_US.utf8 or LC_CTYPE=zh_CN.utf8 (any language code that's available on your system will do) for Unicode encoded as UTF-8. Note that this counts characters, not screen width. Even fixed-width fonts can have double-width characters, and this is typically done for Chinese characters, so e.g. a character width of 20 for the text above occupies 40 columns on typical terminals.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/323916", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/186868/" ] }
323,953
I have many directories with names as follows:

geom1 geom10 geom11 geom12 geom13 geom14 geom15 geom16 geom17 geom18 geom19 geom2 geom20 geom3 geom4 geom5 geom6 geom7 geom8 geom9

I would like to rename them to be like this:

geom0000001 geom0000002 geom0000003 geom0000004 geom0000005 geom0000006 geom0000007 geom0000008 geom0000009 geom0000010 geom0000011 geom0000012 geom0000013 geom0000014 geom0000015 geom0000016 geom0000017 geom0000018 geom0000019 geom0000020

I used the following script:

a=1
for i in geom*/; do
    new=$(printf "geom%07d" "$a")
    mv -- "$i" "$new"
    let a=a+1
done

The problem: it moves, for example, geom10 to geom0000002 instead of geom0000010, while geom2 goes to geom0000012 instead of geom0000002. What I want is to rename the directories keeping their own numbers, just in the new format.
Try:

for i in geom*
do
    new=$(printf "geom%07d" "${i##geom}")
    echo "$i" "$new"
done

where the ${i##geom} construct removes geom from the variable. Replace echo with mv if satisfied.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/323953", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112129/" ] }
324,064
Here's my test code:

a=1
echo $a
echo `let ++a`
echo $a

The output that I see is 1, 1. Why doesn't the third line modify the value of a?
Because `...` is equivalent to $(...), which runs in a subshell. Changes to variables in a subshell are lost when the subshell closes.
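To actually increment a, run let in the current shell instead of inside a command substitution:

a=1
echo $a    # 1
let ++a    # runs in the current shell, no subshell
echo $a    # 2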
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/324064", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150277/" ] }
324,073
On a fresh install of Fedora 19 I am attempting to change the password to something simple, like Password01 (this is just a simple testing VM, nothing fancy), but the password complexity requirements prevent me from setting anything easy to remember. How can I bypass the complexity requirements or disable them?

The contents of /etc/pam.d/passwd:

#%PAM-1.0
auth       include    system-auth
account    include    system-auth
password   substack   system-auth
-password  optional   pam_gnome_keyring.so use_authtok
password   substack   postlogin

Even as root I cannot bypass the requirements:

justincase@localhost ~ $ sudo -s
[sudo] password for justincase:
[root@localhost justincase]# passwd justincase
Changing password for user justincase.
New password:
BAD PASSWORD: The password fails the dictionary check - it is based on a dictionary word
Retype new password:
[root@localhost justincase]#
As root you can bypass the requirements. Your example shows this happening:

# passwd justincase
Changing password for user justincase.
New password:
BAD PASSWORD: The password fails the dictionary check - it is based on a dictionary word
Retype new password:
#

Notice it does not repeat the New password prompt but instead asks you to retype the (bad) new password you are entering. If you had continued with the alleged bad password you would have been able to set it as the password for justincase.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/324073", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28893/" ] }
324,074
I would like to find a straightforward method to extract the portion of a string up to where the first digit appears (possibly using regular expressions instead of traversing the string character by character). I am using this to extract package names from rpm -qa without their versions. E.g.:

Parsing: perl-Text-ParseWords-3.30-1.fc22.i686
Result: perl-Text-ParseWords
Preferred Alternative

We could simply modify the rpm query to only output the name:

rpm -qa --queryformat "%{NAME}\n"

Or we can get dirty with a regex

Not exactly "straightforward", but here is a sed regex that should be able to do it:

sed -e 's/\([^\.]*\).*/\1/;s/-[0-9]*$//' <<< "perl-Text-ParseWords-3.30-1.fc22.i686"

This should handle everything except when there is a period in the package name (I don't even think that is allowed). Quick breakdown:

s/\([^\.]*\).*/\1/ grabs everything before the first period. So perl-Text-ParseWords-3.30-1.fc22.i686 becomes perl-Text-ParseWords-3.
s/-[0-9]*$// gets rid of that trailing - and first version digit. So perl-Text-ParseWords-3 becomes perl-Text-ParseWords.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/324074", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171008/" ] }
324,082
I am on Kali Linux. I'm trying to install the Broadcom BCM4360 drivers on my MacBook Pro i7, in live USB mode, but I got some errors. Is there any way to fix this without reinstalling all over again? So, first I installed the headers:

apt-get update
apt-get install linux-headers-$(uname -r|sed 's,[^-]*-[^-]*-,,') broadcom-sta-dkms

After that I removed the possible conflicts:

modprobe -r b44 b43 b43legacy ssb brcmsmac

And got this:

root@kali:/etc/apt# modprobe wl
modprobe: FATAL: Module wl not found in directory /lib/modules/4.6.0-kali1-686-pae

So I tried to install in a different way and I had:

root@kali:/var/lib# apt-get install broadcom-sta-dkms
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  broadcom-sta-dkms
0 upgraded, 1 newly installed, 0 to remove and 1185 not upgraded.
Need to get 0 B/2,207 kB of archives.
After this operation, 14.5 MB of additional disk space will be used.
Selecting previously unselected package broadcom-sta-dkms.
(Reading database ... 335533 files and directories currently installed.)
Preparing to unpack .../broadcom-sta-dkms_6.30.223.271-4_all.deb ...
Unpacking broadcom-sta-dkms (6.30.223.271-4) ...
Setting up broadcom-sta-dkms (6.30.223.271-4) ...
Loading new broadcom-sta-6.30.223.271 DKMS files...
Building for 4.6.0-kali1-686-pae
Module build for kernel 4.6.0-kali1-686-pae was skipped since the
kernel headers for this kernel does not seem to be installed.

and after that I got new errors from modprobe:

root@kali:/var/lib# modprobe wl
modprobe: ERROR: ../libkmod/libkmod-module.c:832 kmod_module_insert_module() could not find module by name='wl'
modprobe: ERROR: could not insert 'wl': Unknown symbol in module, or unknown parameter (see dmesg)
modprobe: ERROR: ../libkmod/libkmod-module.c:977 command_do() Error running install command for wl
modprobe: ERROR: could not insert 'wl': Operation not permitted

My uname -a is:

root@kali:/lib/modules# uname -a
Linux kali 4.6.0-kali1-686-pae #1 SMP Debian 4.6.4-1kali1 (2016-07-21) i686 GNU/Linux

The modules available are listed:

root@kali:/lib/modules# ls
4.6.0-kali1-686-pae  4.8.0-kali1-686  4.8.0-kali1-686-pae  4.8.0-kali1-rt-686-pae

and /var/lib/dkms/broadcom-sta/6.30.223.271/source/dkms.conf follows:

root@kali:/var/lib/dkms/broadcom-sta/6.30.223.271/source# cat dkms.conf
PACKAGE_NAME="broadcom-sta"
PACKAGE_VERSION="6.30.223.271"
MAKE[0]="make KVER=$kernelver"
BUILT_MODULE_NAME[0]="wl"
DEST_MODULE_LOCATION[0]="/updates/dkms"
AUTOINSTALL="YES"
REMAKE_INITRD="YES

Is there any way to fix this without reinstalling all over again?
The key line is in your own output: Module build for kernel 4.6.0-kali1-686-pae was skipped since the kernel headers for this kernel does not seem to be installed. You are running kernel 4.6.0-kali1-686-pae, but the header packages being pulled in (and the extra directories under /lib/modules) are for 4.8.0, so DKMS never builds wl for the kernel you actually booted, and modprobe wl fails. The fix is to make the running kernel and the installed headers match. A sketch (the header package name is an assumption based on the Kali naming scheme; it may no longer be in the repositories):

apt-get install linux-headers-4.6.0-kali1-686-pae
dpkg-reconfigure broadcom-sta-dkms
modprobe -r b44 b43 b43legacy ssb brcmsmac
modprobe wl

If that exact header package is gone, the alternative is to use a newer live image whose kernel matches the available 4.8.0 headers. Also note that on a live USB without persistence everything you install is lost at reboot, so you would have to repeat this each session (or set up a persistence partition, or do a full install).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/324082", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/200841/" ] }
324,141
When I grep the man page of the find command for matches to type it returns a lot of search results that I don't want. Instead I want to use a command that returns only the search results for -type . The command man find | grep -type doesn't work. It returns: grep: invalid option -- 't'
If you want to grep for a pattern beginning with a hyphen, use -- before the pattern you specify:

man find | grep -- -type

If you want more info, for example the entire section describing an option, you could try using sed:

$ man find | sed -n '/-mindepth/,/^$/p'
       -mindepth levels
              Do not apply any tests or actions at levels less than levels (a
              non-negative  integer).   -mindepth  1 means process all files
              except the command line arguments.

However, this won't work for every option you might search for. For example:

$ man find | sed -n '/^[[:space:]]*-type/,/^$/p'
       -type c
              File is of type c:

Not very helpful. Worse, for some options you could be misled into thinking you'd read the whole text about the option when you really hadn't. For example, searching -delete omits the very important WARNING contained as a second paragraph under that heading.

My recommendation is to use a standard call to man with the LESS environment variable set. I use it quite commonly in my answers on this site:

LESS='+/^[[:space:]]*-type' man find

To learn more about how this works, see:

LESS='+/^[[:space:]]*LESS ' man less
LESS='+/\+cmd' man less
LESS='+/\/' man less

If you just want to find the option quickly and interactively in the man page, learn to use less's search capabilities. And also see: How do I use man pages to learn how to use commands?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/324141", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38285/" ] }
324,151
Is there a way to find half-configured packages in Debian? This is coming from Debian stretch - update broken - seems buggy dpkg. I try to see if packages are broken in two ways:

a.

┌─[shirish@debian] - [~] - [5289]
└─[$] alias aptb
aptb='aptitude search '\''~b'\'

The better one:

┌─[shirish@debian] - [~] - [5288]
└─[$] dpkg --audit
┌─[shirish@debian] - [~] - [5289]
└─[$]

Are there any other tools which do the desired/above thing?

Update - I get this:

[$] dpkg -f '${status} ${package}\n' -W | awk '$2 == "half-configured" {print $4}'
dpkg-deb: error: failed to read archive '${status} ${package}\n': No such file or directory
[$] dpkg -f '${status} ${package}\n' -W | awk '$2 == "half-configured" {print $4}'
dpkg-deb: error: failed to read archive '${status} ${package}\n': No such file or directory

Are these the expected outputs?
Quick to type: dpkg -l | grep -v '^ii' This lists any package that's at least a little known to the system but not perfectly installed. If you want parseable output, use dpkg-query with a custom format. Adjust the filter according to your wishes. dpkg-query -f '${status} ${package}\n' -W | awk '$3 != "installed" {print $4}'dpkg-query -f '${status} ${package}\n' -W | awk '$3 == "half-configured" {print $4}'
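As for the Update in the question: dpkg -f is shorthand for dpkg-deb --field, which expects a .deb archive as its argument, hence the "failed to read archive" error; the formatted query has to go through dpkg-query as above. Note also that with this format the state is the third field of the status, so test $3, for example:

dpkg-query -f '${status} ${package}\n' -W | awk '$3 == "half-configured" {print $4}'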
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/324151", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50490/" ] }
324,152
I have a while loop in this form:

while :; do
    performTask1 || break
    performTask2 || break
    performTask3 || break
    ...
    performTaskX || break
done

Basically, I want to break out of the loop whenever one of the tasks errors out. Is there a better way to achieve this? I suppose another way to do this is:

while :; do
    performTask1 && performTask2 ... && performTaskX || break
done

However, this still looks ugly to me.
You can make the task list itself the loop condition; the loop keeps running as long as every task succeeds and stops as soon as one fails (a sketch with the same semantics as your version):

while
    performTask1 &&
    performTask2 &&
    performTask3 &&
    performTaskX
do
    :
done

If you prefer to keep the tasks in the body, another common pattern is to iterate over the task names:

while :; do
    for task in performTask1 performTask2 performTask3 performTaskX; do
        "$task" || break 2
    done
done

where break 2 leaves both the for and the while loop.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/324152", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/141689/" ] }
324,191
I like using monit for the web interface where I can see running monitored processes. I recently upgraded a server to Ubuntu 16.04 and it is using systemd. I have everything else running but I cannot find the right monit commands to control systemd... I used to do /etc/init.d/process start or stop or whatever. That obviously no longer works, so I tried...

systemctl start process

which didn't work either. What can I do here? My monitrc is pasted below... (old style which worked on 14.04)

check process nginx with pidfile /var/run/nginx.pid
    start program = "/etc/init.d/nginx start"
    stop program = "/etc/init.d/nginx stop"

check process sshd with pidfile /var/run/sshd.pid
    start program = "etc/init.d/ssh start"
    stop program = "etc/init.d/ssh stop"
This should work on RHEL / CentOS 7 / Ubuntu 18.04:

check process nginx with pidfile /var/run/nginx.pid
    start program = "/bin/systemctl start nginx"
    stop program = "/bin/systemctl stop nginx"

check process sshd with pidfile /var/run/sshd.pid
    start program = "/bin/systemctl start sshd"
    stop program = "/bin/systemctl stop sshd"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/324191", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/154168/" ] }
324,209
Should I use /dev/random or /dev/urandom ? In which situations would I prefer one over the other?
TL;DR: Use /dev/urandom for most practical purposes. The longer answer depends on the flavour of Unix that you're running.

Linux

Historically, /dev/random and /dev/urandom were introduced at the same time. As @DavidSchwartz pointed out in a comment, using /dev/urandom is preferred in the vast majority of cases. He and others also provided a link to the excellent Myths about /dev/urandom article, which I recommend for further reading. In summary:

The manpage is misleading.
Both are fed by the same CSPRNG to generate randomness (diagrams 2 and 3).
/dev/random blocks when it runs out of entropy, so reading from /dev/random can halt process execution.
The amount of entropy is conservatively estimated, but not counted.
/dev/urandom will never block.
In rare cases very shortly after boot, the CSPRNG may not have had enough entropy to be properly seeded and /dev/urandom may not produce high-quality randomness.
Entropy running low is not a problem if the CSPRNG was initially seeded properly.
The CSPRNG is being constantly re-seeded.
In Linux 4.8 and onward, /dev/urandom does not deplete the entropy pool (used by /dev/random) but uses the CSPRNG output from upstream.
Use /dev/urandom.

Exceptions to the rule

In the Cryptography Stack Exchange's When to use /dev/random over /dev/urandom in Linux, @otus gives two use cases:

1. Shortly after boot on a low entropy device, if enough entropy has not yet been generated to properly seed /dev/urandom.
2. Generating a one-time pad with information theoretic security.

If you're worried about (1), you can check the entropy available in /dev/random. If you're doing (2) you'll know it already :) Note: You can check if reading from /dev/random will block, but beware of possible race conditions.

Alternative: use neither!

@otus also pointed out that the getrandom() system call will read from /dev/urandom and only block if the initial seed entropy is unavailable. There are issues with changing /dev/urandom to use getrandom(), but it is conceivable that a new /dev/xrandom device is created based upon getrandom().

macOS

It doesn't matter, as Wikipedia says: macOS uses 160-bit Yarrow based on SHA1. There is no difference between /dev/random and /dev/urandom; both behave identically. Apple's iOS also uses Yarrow.

FreeBSD

It doesn't matter, as Wikipedia says: /dev/urandom is just a link to /dev/random and only blocks until properly seeded. This means that after boot, FreeBSD is smart enough to wait until enough seed entropy has been gathered before delivering a never-ending stream of random goodness.

NetBSD

Use /dev/urandom, assuming your system has read at least once from /dev/random to ensure proper initial seeding. The rnd(4) manpage says:

/dev/urandom never blocks.
/dev/random sometimes blocks. Will block early at boot if the system's state is known to be predictable.
Applications should read from /dev/urandom when they need randomly generated data, e.g. cryptographic keys or seeds for simulations.
Systems should be engineered to judiciously read at least once from /dev/random at boot before running any services that talk to the internet or otherwise require cryptography, in order to avoid generating keys predictably.
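To check the available entropy mentioned under exception (1), a quick Linux sketch:

cat /proc/sys/kernel/random/entropy_avail

This prints the kernel's current entropy estimate; a very small number shortly after boot on a low-entropy device is the case to worry about.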
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/324209", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143394/" ] }
324,296
I would expect find . -delete to delete the current directory, but it doesn't. Why not?
The members of findutils are aware of it; it's for compatibility with *BSD:

One of the reasons that we skip deletion of "." is for compatibility with *BSD, where this action originated.

The NEWS file in the findutils source code shows that they decided to keep the behavior:

#20802: If -delete fails, find's exit status will now be non-zero. However, find still skips trying to delete ".".

[UPDATE] Since this question became one of the hot topics, I dived into the FreeBSD source code and came out with a more convincing reason. Let's see the find utility source code of FreeBSD:

int
f_delete(PLAN *plan __unused, FTSENT *entry)
{
	/* ignore these from fts */
	if (strcmp(entry->fts_accpath, ".") == 0 ||
	    strcmp(entry->fts_accpath, "..") == 0)
		return 1;
...
	/* rmdir directories, unlink everything else */
	if (S_ISDIR(entry->fts_statp->st_mode)) {
		if (rmdir(entry->fts_accpath) < 0 && errno != ENOTEMPTY)
			warn("-delete: rmdir(%s)", entry->fts_path);
	} else {
		if (unlink(entry->fts_accpath) < 0)
			warn("-delete: unlink(%s)", entry->fts_path);
	}
...

As you can see, if it doesn't filter out dot and dot-dot, then it will reach the rmdir() C function defined by POSIX's unistd.h. Do a simple test: rmdir with a dot/dot-dot argument will return -1:

printf("%d\n", rmdir(".."));

Let's take a look at how POSIX describes rmdir:

If the path argument refers to a path whose final component is either dot or dot-dot, rmdir() shall fail.

No reason was given for why it shall fail. I found that rename explains some reason:

Renaming dot or dot-dot is prohibited in order to prevent cyclical file system paths.

Cyclical file system paths? I looked over The C Programming Language (2nd Edition) and searched for the directory topic, and surprisingly I found the code is similar:

if(strcmp(dp->name,".") == 0 || strcmp(dp->name,"..") == 0)
    continue;

And the comment!

Each directory always contains entries for itself, called ".", and its parent, ".."; these must be skipped, or the program will loop forever.

"loop forever": this is the same as how rename describes it with "cyclical file system paths" above. I slightly modified the code to make it run in Kali Linux, based on this answer:

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <dirent.h>
#include <unistd.h>

void fsize(char *);
void dirwalk(char *, void (*fcn)(char *));

int
main(int argc, char **argv) {
	if (argc == 1)
		fsize(".");
	else
		while (--argc > 0) {
			printf("start\n");
			fsize(*++argv);
		}
	return 0;
}

void fsize(char *name) {
	struct stat stbuf;

	if (stat(name, &stbuf) == -1 ) {
		fprintf(stderr, "fsize: can't access %s\n", name);
		return;
	}
	if ((stbuf.st_mode & S_IFMT) == S_IFDIR)
		dirwalk(name, fsize);
	printf("%8ld %s\n", stbuf.st_size, name);
}

#define MAX_PATH 1024

void dirwalk(char *dir, void (*fcn)(char *))
{
	char name[MAX_PATH];
	struct dirent *dp;
	DIR *dfd;

	if ((dfd = opendir(dir)) == NULL) {
		fprintf(stderr, "dirwalk: can't open %s\n", dir);
		return;
	}
	while ((dp = readdir(dfd)) != NULL) {
		sleep(1);
		printf("d_name: S%sG\n", dp->d_name);
		if (strcmp(dp->d_name, ".") == 0
		    || strcmp(dp->d_name, "..") == 0) {
			printf("hole dot\n");
			continue;
		}
		if (strlen(dir)+strlen(dp->d_name)+2 > sizeof(name)) {
			printf("mocha\n");
			fprintf(stderr, "dirwalk: name %s/%s too long\n", dir, dp->d_name);
		}
		else {
			printf("ice\n");
			(*fcn)(dp->d_name);
		}
	}
	closedir(dfd);
}

Let's see:

xb@dnxb:/test/dot$ ls -la
total 8
drwxr-xr-x 2 xiaobai xiaobai 4096 Nov 20 04:14 .
drwxr-xr-x 3 xiaobai xiaobai 4096 Nov 20 04:14 ..
xb@dnxb:/test/dot$
xb@dnxb:/test/dot$ cc /tmp/kr/fsize.c -o /tmp/kr/a.out
xb@dnxb:/test/dot$ /tmp/kr/a.out .
start
d_name: S..G
hole dot
d_name: S.G
hole dot
    4096 .
xb@dnxb:/test/dot$

It works correctly. Now, what if I comment out the continue instruction?

xb@dnxb:/test/dot$ cc /tmp/kr/fsize.c -o /tmp/kr/a.out
xb@dnxb:/test/dot$ /tmp/kr/a.out .
start
d_name: S..G
hole dot
ice
d_name: S..G
hole dot
ice
d_name: S..G
hole dot
ice
^C
xb@dnxb:/test/dot$

As you can see, I have to use Ctrl+C to kill this infinitely looping program: the '..' directory reads its first entry '..' and loops forever.

Conclusion:

1. GNU findutils tries to be compatible with the find utility in *BSD.
2. The find utility in *BSD internally uses the POSIX-compliant rmdir C function, in which dot/dot-dot is not allowed.
3. The reason rmdir does not allow dot/dot-dot is to prevent cyclical file system paths.
4. The C Programming Language written by K&R shows an example of how dot/dot-dot would lead to a forever-looping program.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/324296", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181058/" ] }
324,359
I have a basic understanding of dotfiles in *nix systems, but I am still quite confused about this Difference between Login Shell and Non-Login Shell? A bunch of different answers (including multiple duplicates) have already addressed the following bullets:

How to invoke a login or non-login shell
How to detect a login or non-login shell
What startup files will be consumed by a login or non-login shell
Referred to documentation (e.g., man bash) for more details

What the answers didn't tell (and also something I'm still confused about) is:

What is the use case of a login or non-login shell? (e.g., I only configured zshrc for zsh and that's enough for most personal dev requirements; I know it's not as simple as what vimrc is to vim)
What is the reason to use a login over a non-login shell (besides consuming different startup files & life cycles)?
The idea is that a user should have (at most) one login shell per host. (Perhaps I should say, one login shell per host per terminal —if you are simultaneously logged in to a host through multiple terminals,you would expect to have multiple login shells.) This would typically (always?) be the first shell you getupon logging in (hence the name). So, this scheme allows you to specifyactions that you want to happen only once per loginand things that you want to happenevery time you start a new (interactive) shell. Normally, every other shell you run after logging inwill be a descendant (a child of a child of a child …) of the login shell,and therefore will inherit many settings(environment variables, umask , etc.) from the login shell. And, accordingly, the idea is that the login initialization files( .login , .profile , etc.) should set the settings that are inheritable,and let .bashrc (or whatever else you use) handle the onesthat aren’t ( set , shopt , non-exported shell variables, etc.) Another notion is that the login initialization files (and only they)should do “heavy lifting”, i.e., resource-intensive actions. For example, you might want to have certain processesrunning in the background whenever you’re logged in(but only one copy (instance) of them). You might want to have some status information(e.g., df or who ) displayed when you login,but not every time you start a new interactive shell. Especially if you have an interactive program/dialog(i.e., one that demands input from you)that you want to run every time you login,you probably don’t want to have it run every time you start a new shell. As an extreme example, twenty years ago Solaris logged you into a single, non-graphical, non-windowed shell. (I believe that it has changed since then.) It was the job of .login or .profile (or whatever)to start the windowing system, with a command like startx . (This was usefulpartly because there were multiple windowing systems available. Different users had different preferences. Some users used different systems in different situations,and we had a dialog in our .profile that asked“Which windowing system do you want to use today?”) Obviously, you wouldn’t want that to runevery time you opened a new window or typed sh . It’s been ages since I’ve used anything other than bash except for edge cases. (For example, I write scripts with #!/bin/sh ,so on some systems, my scripts run with dash ,and on others they run with bash in POSIX mode. A few times a year I run csh / tcsh for a few minutesto see how it handles something, or to answer a question.) If you use multiple shells (e.g., bash and zsh ) on a daily basis,your patterns may be different. If your primary shell (as defined in /etc/passwd ) is bash ,you might want to invoke a zsh login shell,and then perhaps invoke some interactive non-login zsh shellssubordinate to that. You should probably avoid having a login shellthat is subordinate to another login shell of the same type. As mentioned in Difference between Login Shell and Non-Login Shell? ,the OS X Terminal application runs a login shell, so a typical userwill typically have several “login shells” running simultaneously. This is a somewhat different model from the one I have described above,and may require the user to rethinkwhat he does in his .login or .profile (or whatever) file. I don’t know whether the OS X developers have documentedtheir rationale for this design decision. But I can imagine a situation in which this would be useful. 
There was a time when I habitually opened a handful of shell windows when I logged in, and I would set them to different text and background colors (by writing ANSI escape sequences to the screen) to help me keep track of which was which. Terminal colors are an example of something that is not inherited by children-of-children, but does persist within a window. So this is the sort of thing that you would want to do every time you started a new Terminal window, but not every time you start a new interactive shell.
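To make the division of labor concrete, here is a minimal sketch (the specific settings are illustrative assumptions, not taken from the answer above): inheritable, once-per-login settings go in the login file, and per-shell settings go in the rc file.

    # ~/.profile -- read by the login shell only: exported (inheritable)
    # settings and one-time "heavy lifting"
    export PATH="$HOME/bin:$PATH"
    umask 022
    df -h ~        # status info shown once per login

    # ~/.bashrc -- read by every interactive shell: non-inheritable settings
    shopt -s histappend
    set -o noclobber
    alias ll='ls -l'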
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/324359", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/198157/" ] }
324,395
I haven't used my Windows VM in about a week or so; the only thing I can think of that I've done within that week that may have caused this is clearing out my ~/.cache. Anyway, now when I try to launch VirtualBox I get the following error:

    VirtualBox - Critical Error
    Failed to create the VirtualBox COM object.
    The application will now terminate.
    Document is empty.
    Location: '/home/kalenpw/.config/VirtualBox/VirtualBox.xml', line 1 (0), column 1.
    /build/virtualbox-VDAABr/virtualbox-4.3.36-dfsg/src/VBox/Main/src-server/VirtualBoxImpl.cpp[536] (nsresult VirtualBox::init()).

Sure enough, ~/.config/VirtualBox/VirtualBox.xml is in fact empty; the only issue is I can't figure out what that file needs to contain for VirtualBox to work. I've googled the error and everything says I need to change permissions on VirtualBox.xml, but I've already verified my user has permissions. VirtualBox will run if I do sudo virtualbox, so clearly I've messed something up with that file; I am just not sure what. Thanks.
Take a look to see if you have an automatic backup of that file, VirtualBox.xml-prev. If so, use that file to try to get VirtualBox happy again.

    cat VirtualBox.xml-prev > VirtualBox.xml

Or

    rm VirtualBox.xml && cp VirtualBox.xml-prev VirtualBox.xml

Or maybe find the original in a recent backup.
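If no backup survives at all, a hedged fallback (not from the answer above): move the empty file out of the way, let VirtualBox regenerate a fresh one on the next start, and then re-register each machine from its .vbox file; the VM name and path below are hypothetical placeholders.

    mv ~/.config/VirtualBox/VirtualBox.xml ~/VirtualBox.xml.broken
    # after starting VirtualBox once, re-add each VM:
    VBoxManage registervm "$HOME/VirtualBox VMs/WindowsVM/WindowsVM.vbox"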
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/324395", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171715/" ] }
324,515
OverlayFS has a workdir option, beside two other directories lowerdir and upperdir, which needs to be an empty directory. Unfortunately the kernel documentation of overlayfs does not talk much about the purpose of this option:

    The "workdir" needs to be an empty directory on the same filesystem as upperdir.

For read-only overlays the workdir may be omitted, along with the upperdir. This gives me a clue that it has to do with writing the merged files. Please explain what happens in the workdir when files are written or changed in the merged directory. Why is the writable upperdir not enough?
The workdir option is required, and used to prepare files before they are switched to the overlay destination in an atomic action (the workdir needs to be on the same filesystem as the upperdir). Source: http://windsock.io/the-overlay-filesystem/ I would hazard a guess that "the overlay destination" means upperdir . So... certain files (maybe "whiteout" files?) are non-atomically created and configured in workdir and then atomically moved into upperdir .
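For reference, a minimal sketch of how the three directories fit together in a mount command (the paths are illustrative assumptions; upperdir and workdir must live on the same filesystem):

    mkdir -p /lower /upper /work /merged
    mount -t overlay overlay \
        -o lowerdir=/lower,upperdir=/upper,workdir=/work /merged
    # writes to /merged are staged via /work and land in /upper;
    # on reasonably recent kernels a read-only overlay can omit both
    # upperdir and workdir, stacking several lower layers instead:
    mount -t overlay overlay -o lowerdir=/lower1:/lower2 /merged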
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/324515", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29510/" ] }
324,610
I am a student and I keep most of my files on my home computer. Unfortunately, I can't use ssh or scp from the laptop which I use at school because of the firewall. I was thinking about trying to use port 443 because that might be open. My question is: I have multiple computers in my house and so I am using a router. Would it be bad if I were to port forward 443 to my computer? I'm not sure if there are any security issues related to this, or if it would screw anything up when trying to use https from my other computers.
It should work fine; it's no more secure than using a different port for ssh (or less secure, for that matter). And no, inbound TCP sockets are not the same as outbound TCP sockets, so forwarding inbound port 443 to your computer should not interfere with your outbound network traffic (such as https from the other machines).
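A minimal sketch of the setup (hostnames and addresses are placeholders, not from the answer): forward external TCP port 443 on the router to port 22 on the home machine, then point the client at port 443.

    # the home machine's sshd can stay on port 22; the router does the
    # 443 -> 22 translation. From school:
    ssh -p 443 user@home.example.com

    # or persist it in ~/.ssh/config on the laptop:
    Host home
        HostName home.example.com
        Port 443
        User user

Note that some school firewalls inspect traffic and only allow real TLS on port 443, in which case plain ssh may still be blocked.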
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/324610", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/144044/" ] }
324,627
I installed Arch Linux on an Asus C201 Chromebook using this guide (the Debian and Fedora guides for the notebook didn't work for me and resulted in a black screen). This worked well more or less out of the box until I upgraded the system using pacman -Syu. Now the touchpad doesn't work properly anymore.

Behaviour description (go to "Update" below for a miracle solution)

Trying to move the cursor with a single finger seems to trigger scrolling. I can very slowly move the cursor when using one finger and sort of scratch the touchpad with the nail, but this really only moves the cursor a little. I can also press the touchpad and then move the cursor, in which case the cursor moves and highlights.

What I tried so far

I double-checked /etc/X11/xorg.conf.d/70-synaptics.conf and am using the configuration shown in the Arch wiki as an example. I also had a copy of the pre-upgrade synaptics.conf, but this didn't change the behavior either. I uninstalled and reinstalled xf86-input-synaptics. sudo dmesg | grep elan shows the following two lines:

    [    1.6 ] i2c 4-0015: Driver elan_i2c requests probe deferral
    [  408.6 ] elan_i2c 4-0015: invalid report id data (ff)

Summary

Based on the touchpad behaviour, it feels like the upgrade caused havoc with the touchpad configuration. However, the configuration file seems to be OK. Reinstalling the touchpad driver doesn't seem to have an impact. Any other ideas of what I could do?

/etc/X11/xorg.conf.d/50-synaptics.conf

I uninstalled and reinstalled the synaptics driver and this is the config file:

    Section "InputClass"
        Identifier "touchpad"
        Driver "synaptics"
        MatchIsTouchpad "on"
        Option "TapButton1" "1"
        Option "TapButton2" "3"
        Option "TapButton3" "2"
        Option "VertEdgeScroll" "on"
        Option "VertTwoFingerScroll" "on"
        Option "HorizEdgeScroll" "on"
        Option "HorizTwoFingerScroll" "on"
        Option "CircularScrolling" "on"
        Option "CircScrollTrigger" "2"
        Option "EmulateTwoFingerMinZ" "40"
        Option "EmulateTwoFingerMinW" "8"
        Option "CoastingSpeed" "0"
        Option "FingerLow" "30"
        Option "FingerHigh" "50"
        Option "MaxTapTime" "125"
    EndSection

The file I used before the upgrade only had the changes shown in the Arch wiki as a sample configuration.
synclient -l

synclient -l returns:

    Parameter settings:
        LeftEdge                = 120
        RightEdge               = 2884
        TopEdge                 = 88
        BottomEdge              = 1554
        FingerLow               = 30
        FingerHigh              = 50
        MaxTapTime              = 125
        MaxTapMove              = 150
        MaxDoubleTapTime        = 100
        SingleTapTimeout        = 180
        ClickTime               = 100
        EmulateMidButtonTime    = 0
        EmulateTwoFingerMinZ    = 40
        EmulateTwoFingerMinW    = 8
        VertScrollDelta         = 68
        HorizScrollDelta        = 68
        VertEdgeScroll          = 1
        HorizEdgeScroll         = 1
        CornerCoasting          = 0
        VertTwoFingerScroll     = 1
        HorizTwoFingerScroll    = 1
        MinSpeed                = 1
        MaxSpeed                = 1.75
        AccelFactor             = 0.0584283
        TouchpadOff             = 0
        LockedDrags             = 0
        LockedDragTimeout       = 5000
        RTCornerButton          = 0
        RBCornerButton          = 0
        LTCornerButton          = 0
        LBCornerButton          = 0
        TapButton1              = 1
        TapButton2              = 3
        TapButton3              = 2
        ClickFinger1            = 1
        ClickFinger2            = 3
        ClickFinger3            = 2
        CircularScrolling       = 1
        CircScrollDelta         = 0.1
        CircScrollTrigger       = 2
        CircularPad             = 0
        PalmDetect              = 0
        PalmMinWidth            = 10
        PalmMinZ                = 200
        CoastingSpeed           = 0
        CoastingFriction        = 50
        PressureMotionMinZ      = 30
        PressureMotionMaxZ      = 160
        PressureMotionMinFactor = 1
        PressureMotionMaxFactor = 1
        GrabEventDevice         = 0
        TapAndDragGesture       = 1
        AreaLeftEdge            = 0
        AreaRightEdge           = 0
        AreaTopEdge             = 0
        AreaBottomEdge          = 0
        HorizHysteresis         = 17
        VertHysteresis          = 17
        ClickPad                = 1
        RightButtonAreaLeft     = 1502
        RightButtonAreaRight    = 0
        RightButtonAreaTop      = 1346
        RightButtonAreaBottom   = 0
        MiddleButtonAreaLeft    = 0
        MiddleButtonAreaRight   = 0
        MiddleButtonAreaTop     = 0
        MiddleButtonAreaBottom  = 0

Update

I found this thread on an Arch forum which looked pretty close to my problem. The proposed solution was to downgrade xf86-input-synaptics to 1.8.3-4. This mostly solved the issues: the touchpad was usable in general, though I would have needed to change some of the sensitivity settings. When I had tried to use libinput instead of synaptics before, the touchpad didn't work at all; however, I never removed the synaptics xorg.conf file as suggested by @mattia.b89. So I uninstalled synaptics again (this was after downgrading it) and removed the xorg.conf file. From the moment I removed the xorg.conf file and synaptics, after reboot, the touchpad was working; however, scrolling and multitouch didn't work. I don't understand that at all... at this point neither synaptics nor libinput was installed. I then installed libinput, and now scrolling and multitouch work as intended. I haven't tried to get tapping to work yet, but in any case it is functional now. This all feels a bit like magic. I'll leave the bounty open for a little while just to see if someone can explain what just happened. In any case, thanks to @mattia.b89 and @C.W. for helping with this.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/324627", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115315/" ] }
324,676
How do I output a tab character (ASCII hex 0x09) on the terminal window? In all my experiments the tab character is changed to spaces when it appears on the terminal. E.g.

    $ echo -e "xx\t\tyy"
    xx              yy

which is not what I want, since the space between xx and yy is filled with 14 spaces and not 2 tab characters. I tried stty tab0, stty tab1, stty tab2 and stty tab3, but all give the same result. I am using GNOME Terminal 2.16.0 and Red Hat Enterprise Linux Client release 5.9.

Background: I want to mouse-select the text on the terminal and paste it into Excel. When I do this from Emacs (with 0x09 tabs between the fields), the fields end up in different columns. I like this, and want the same behavior when copy-pasting from the terminal. However, at present all the fields end up as one string in the first column. When I have spaces (0x20) between fields in Emacs, the behavior is the same as when copy-pasting from the terminal.
Tab is not a printable character. Tab is a control character that usually advances the cursor (but not at the end of a line), leaving the characters that it's jumping over unchanged. gnome-terminal (and other vte-based emulators) has a special hack by which it tries to preserve tabs for copy-paste purposes; however, it still loses them at a soft linebreak. Other emulators might also have such a hack, but typically they don't. See also the conversation at https://bugzilla.gnome.org/show_bug.cgi?id=769316 .
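A quick way to convince yourself that the tab bytes really are being emitted (a sketch; it's the terminal's rendering, not the program's output, that turns them into spacing):

    $ echo -e "xx\t\tyy" | od -c
    0000000   x   x  \t  \t   y   y  \n
    0000007

So anything that reads the output through a pipe or a file sees the 0x09 bytes intact; only mouse selection in the terminal depends on the emulator's copy-paste behavior.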
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/324676", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89581/" ] }
324,687
I need to encrypt a folder with documents (about 560 KB, if that makes any difference) and I was wondering whether it is possible to accomplish that using the command line. I am searching for something like this:

    encrypt /path/to/folder encryption_method password

that can be decrypted the same way:

    decrypt /path/to/folder encryption_method password
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/324687", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166921/" ] }
324,696
I want a CLI or GUI solution for replacing the file name of each file with the title of that file. I have multiple radio podcasts with file names that do not properly reflect the content. The title is visible in media players under Title, but I have problems seeing those in file managers, so I want to replace the file names with the titles. I am looking for a Linux-native tool; I know how to do this in Wine with programs like Foobar2000. Edit after comment: these are mp3 files. I can see the title data displayed in the info panel of the Dolphin file manager.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/324696", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
324,727
I have the following setup where I use an OpenSSH server to remotely start a certain command using ssh. My authorized_keys file has the following entry:

    command="/path/to/script.sh $SSH_ORIGINAL_COMMAND",no-port-forwarding,no-x11-forwarding,no-agent-forwarding ssh-rsa AAAA…qGDf my_special_key

This means that if anyone connects using that key (e.g. by using ssh -i special_key_file user@server) the script script.sh gets executed on my server. Now there is also the $SSH_ORIGINAL_COMMAND placeholder, which gets replaced by all the extra command line given to the ssh command, i.e. with ssh -i special_key_file user@server foobar, $1 will have foobar in it. To test it I can make my script.sh look like the following:

    #!/bin/sh
    printf '<%s>\n' "$@"

Now for ssh -i special_key_file user@server 'foo bar', just like for ssh -i special_key_file user@server foo bar, I will get the same result:

    <foo>
    <bar>

because of splitting. And if that wasn't bad enough, for ssh -i special_key_file user@server '*' I'm getting a file list:

    <file1>
    <file2>
    …

So apparently the whole extra command line gets inserted into what is inside command=, which is then run in a shell, with all the splitting and globbing steps happening. And apparently I can't use " inside the command="…" part, so I can't put $SSH_ORIGINAL_COMMAND inside double quotes to prevent that from happening. Is there any other solution for me?

BTW, as explained in this dismissed RFE to introduce a $SSH_ESCAPED_ORIGINAL_COMMAND, the ssh protocol is partly to blame, as all the extra command line is transferred as one string. Still, this is no reason to have a shell on the server side do all the splitting, especially if it then also does the glob expansion (I doubt that is ever useful here). Unlike the person who introduced that RFE, I don't care about splitting for my use case; I just want no glob expansion. Could a possible solution have to do with changing the shell environment OpenSSH uses for this task?
Use quotes:

    $ cat bin/script.sh
    #!/bin/sh
    printf '<%s>\n' "$@"

    command="/home/user/bin/script.sh \"${SSH_ORIGINAL_COMMAND}\"" ssh-rsa AA...

    $ ssh -i .ssh/id_rsa.special hamilton '*'
    <*>
    $ ssh -i .ssh/id_rsa.special hamilton 'foo bar'
    <foo bar>

But you will also get:

    $ ssh -i .ssh/id_rsa.special hamilton '*' 'foo bar'
    <* foo bar>

Not sure whether that is a problem for you or not. And I was confused about:

    And apparently I can't use " inside the command="…"

I thought it was some kind of limitation in your task, so I deleted my answer. I'm glad my answer helped you with your task!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/324727", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117599/" ] }
324,754
I am trying to properly visualize the characters that are listed in the /usr/include/X11/keysymdef.h file. It has lines like:

    #define XK_onethird    0x0ab0  /* U+2153 VULGAR FRACTION ONE THIRD */
    #define XK_twothirds   0x0ab1  /* U+2154 VULGAR FRACTION TWO THIRDS */
    #define XK_onefifth    0x0ab2  /* U+2155 VULGAR FRACTION ONE FIFTH */

I would like to display them like:

    #define XK_onethird    0x0ab0  /* ⅓ VULGAR FRACTION ONE THIRD */
    #define XK_twothirds   0x0ab1  /* ⅔ VULGAR FRACTION TWO THIRDS */
    #define XK_onefifth    0x0ab2  /* ⅕ VULGAR FRACTION ONE FIFTH */

I tried:

    $ sed -e 's/U+\([0-9A-Fa-f]\{4\}\)/\u\1/' < /usr/include/X11/keysymdef.h

That just "ignores" the \u. So, boiling it down to some sed test cases with the pilcrow "¶":

    $ echo 00B6 | sed -re $'s/(....)/echo "\u00B6"/e'
    ¶
    # Good, display works, let's get the capture group:
    $ echo 00B6 | sed -re $'s/(....)/echo "\u00B6 \\1"/e'
    ¶ 00B6
    # So far, so good, let's prefix \u again:
    $ echo 00B6 | sed -re $'s/(....)/echo "\u00B6 \u\\1"/e'
    ¶ 00B6
    # Huh? OK, trying double-wrapping:
    $ echo 00B6 | sed -re $'s/(....)/echo "\u00B6 \\u\\1"/e'
    ¶ 00B6
    # Hey, where did the '\\u' go? OK, try something else:
    $ echo 00B6 | sed -re $'s/(....)/echo $(echo "\u00B6 \u\\1")/e'
    ¶ 00B6
    # I give up

(Note: I also just now tried some variations of the above with printf. No change.) What am I missing? Why can't I use the evaluate flag of sed like that?

Edit: I am aware that this can be worked around with while read / echo / eval and solved with other languages/tools, and I appreciated (+1'd) the answers. For this question, however, I'd be most interested in a solution with sed, or in knowing why the above commands produce this output and/or why it is not possible.
With perl:

    perl -CS -pe 's/\bU\+([\dA-Fa-f]{4})\b/chr(hex($1))/eg' /usr/include/X11/keysymdef.h

This tells perl to look for U+0000, interpret the 0000 as a hexadecimal number, and then replace the whole match with the character represented by that number. If you want to replace the contents of the file you can do:

    perl -i -CD -pe 's/\bU\+([\dA-Fa-f]{4})\b/chr(hex($1))/eg' /path/to/file
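As for the "why" in the question (an aside based on the GNU sed and bash manuals, not part of the original answer): sed has no \uXXXX Unicode escape at all. In a GNU sed replacement, \u is a case-conversion escape meaning "uppercase the next character", so \u\1 merely upcases the first character of the capture group, which is a no-op on a hex digit. And in the $'…' test cases, \u00B6 is expanded to ¶ by bash's ANSI-C quoting before sed ever sees the script, which is why the literal pilcrow prints but the dynamic conversion never can.

    # \u in a GNU sed replacement upcases the next character:
    $ echo abcd | sed 's/\(.*\)/\u\1/'
    Abcd
    # so '\u\1' applied to "00B6" just "uppercases" the digit 0: no change.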
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/324754", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/127903/" ] }
324,843
An application wants to access the keyring 'Default Keyring' Chrome/Chromium prompts me for a password each time it opens. I don't know why it isn't integrated directly with the OS to unlock with login, but there isn't any obvious way around it. I read that I need to rm ~/.gnome2/keyrings/default.keyring but I have no such file in my GNOME-less Xfce installation.
This problem has a long history and you can fiddle around with gnome-keyring if you want, but I found that the easier solution is to set that prompt's password to blank, such that it won't ask you anymore:

1. rm ~/.local/share/keyrings/* (you may want to check/backup these files first, if you're not on a fresh install, e.g., cp -r ~/.local/share/keyrings ~/keyrings-backup)
2. Restart Chrome
3. When prompted to create a keyring, continue without entering a password. (Turns out you would have been okay if you did this the first time.)
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/324843", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42894/" ] }
324,929
I have a bash shell variable containing a string formed of multiple words delimited by whitespace. The string can contain escapes, such as escaped whitespace within a word. Words containing whitespace may alternatively be quoted. A shell variable that is used unquoted ($FOO instead of "$FOO") becomes multiple words, but quotes and escapes in the original string have no effect. How can a string be split into words, giving consideration to quoted and escaped characters?

Background

A server offers restricted access over ssh using the ForceCommand option in the sshd_config file to force execution of a script regardless of the command-line given to the ssh client. The script uses the variable SSH_ORIGINAL_COMMAND (which is a string, set by ssh, that contains the command-line provided to the ssh client) to set its argument list before proceeding. So, a user doing

    $ ssh some_server foo 'bar car' baz

will see the script execute, and it will have SSH_ORIGINAL_COMMAND set to foo bar car baz, which would become four arguments when the script does

    set -- ${SSH_ORIGINAL_COMMAND}

Not the desired result. So the user tries again:

    $ ssh some_server foo bar\ car baz

Same result: the backslash in the second argument needs to be escaped for the client's shell so ssh sees it. What about these:

    $ ssh some_server foo 'bar\ car' baz
    $ ssh some_server foo bar\\ car baz

Both work, as would a printf "%q" quoting wrapper that can simplify the client-side quoting. Client-side quoting allows ssh to send the correctly quoted string to the server so that it receives SSH_ORIGINAL_COMMAND with the backslash intact: foo bar\ car baz. However, there is still a problem because set does not consider the quoting or escaping. There is a solution:

    eval set -- ${SSH_ORIGINAL_COMMAND}

but it is unacceptable. Consider

    $ ssh some_server \; /bin/sh -i

Very undesirable: eval can't be used because the input can't be controlled. What is required is the string expansion capability of eval without the execution part.
Use read:

    read -a ssh_args <<< "${SSH_ORIGINAL_COMMAND}"
    set -- "${ssh_args[@]}"

This will parse words from SSH_ORIGINAL_COMMAND into the array ssh_args, treating backslash (\) as an escape character. The array elements are then given as arguments to set. It works with an argument list passed through ssh like this:

    $ ssh some_server foo 'bar\ car' baz
    $ ssh some_server foo bar\\ car baz

A printf "%q" quoting ssh wrapper allows these:

    $ sshwrap some_server foo bar\ car baz
    $ sshwrap some_server foo 'bar car' baz

Here is such a wrapper example:

    #!/bin/bash
    h=$1; shift
    QUOTE_ARGS=''
    for ARG in "$@"
    do
        ARG=$(printf "%q" "$ARG")
        QUOTE_ARGS="${QUOTE_ARGS} $ARG"
    done
    ssh "$h" "${QUOTE_ARGS}"
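One caveat worth noting (my addition, not part of the original answer): read -a honors backslash escapes but not quotes, so a server-side string like foo 'bar car' still splits into three words. A hedged sketch of an alternative that parses quotes too, using xargs' shell-like tokenizer (it breaks on arguments containing newlines):

    # xargs parses single quotes, double quotes and backslashes
    # without executing anything; printf re-emits one word per line
    mapfile -t ssh_args < <(xargs printf '%s\n' <<< "${SSH_ORIGINAL_COMMAND}")
    set -- "${ssh_args[@]}"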
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/324929", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9259/" ] }
325,200
I have jobs that I want to run hourly, but not necessarily all at the same time. I think

    0 * * * * job

means run at minute 0 of every hour, on the dot. I know I can also use

    @hourly job

What is the difference, if any? How can I schedule jobs to run hourly, but not all at the same time?
From crontab(5) : @hourly : Run once an hour, ie. "0 * * * *" . So it's strictly the same. To run a job at a varying point in the hour (or multiple jobs, to spread the load) you can sleep for a random amount of time before starting the job: @hourly sleep $((RANDOM / 10)); dowhatever This sleeps for up to 3276 seconds (nearly an hour), then runs the job. So every time cron starts the job, it waits a different amount of time before actually starting.
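If you'd rather spread the jobs deterministically than randomly, a simple sketch is to give each job its own fixed minute (paths and minute values here are arbitrary placeholders):

    # each line still runs hourly, just offset from the others
    17 * * * * /path/to/job1
    43 * * * * /path/to/job2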
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/325200", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/130767/" ] }
325,202
I work as a sysadmin in a large company and have to maintain several Windows and Linux (Ubuntu 16.04) VMs. Since I want to use zsh instead of bash on the Linux VMs, I have to change my default shell. Now, I log in on Linux with my Windows domain account, which enforces the AD settings; that means I can't change the passwd file or use chsh to change my default shell, so I had to find another way. That way was to enforce the shell in AD with the loginShell attribute. The question is: what happens if I log in on a Linux VM which does not have zsh installed? Does it fall back to bash/sh, does it get stuck, or something else?
Let's try! Shell changed on the server:

    [myserver ~]% getent passwd myuser
    myuser:x:150:150:myuser:/home/myuser:/foo

Let's log in:

    [myclient ~]% ssh myserver
    Received disconnect from myserver: 2: Too many authentication failures for myuser

From the SSH logs on the server:

    Nov 22 09:30:27 myserver sshd[20719]: Accepted gssapi-with-mic for myuser from myclient port 33808 ssh2
    Nov 22 09:30:27 myserver sshd[20719]: pam_unix(sshd:session): session opened for user myuser by (uid=0)
    Nov 22 09:31:18 myserver sshd[20727]: Received disconnect from myclient: 11: disconnected by user
    Nov 22 09:31:18 myserver sshd[20719]: pam_unix(sshd:session): session closed for user myuser
    Nov 22 09:31:20 myserver sshd[20828]: User myuser not allowed because shell /foo does not exist
    Nov 22 09:31:20 myserver sshd[20835]: input_userauth_request: invalid user myuser
    Nov 22 09:31:20 myserver sshd[20835]: Disconnecting: Too many authentication failures for myuser

Key line: User myuser not allowed because shell /foo does not exist. So you can't log in if you don't have a valid shell set.
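If you control the VMs, one hedged workaround (my sketch, not from the answer above): point loginShell at a tiny wrapper that is installed everywhere and that execs zsh when available, falling back to bash otherwise. The path /usr/local/bin/shell-select is a made-up name.

    #!/bin/sh
    # /usr/local/bin/shell-select: exec the preferred shell if present
    if command -v zsh >/dev/null 2>&1; then
        exec zsh "$@"
    else
        exec bash "$@"
    fi

Some services also require the shell to be listed in /etc/shells, so adding the wrapper's path there on each VM would be prudent.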
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/325202", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/201761/" ] }
325,206
This is an exploration question, meaning I'm not completely sure what this question is about, but I think it's about the biggest integer in Bash. Anyhow, I'll define it ostensively.

    $ echo $((1<<8))
    256

I'm producing an integer by shifting a bit. How far can I go?

    $ echo $((1<<80000))
    1

Not this far, apparently. (1 is unexpected, and I'll return to it.) But,

    $ echo $((1<<1022))
    4611686018427387904

is still positive. Not this, however:

    $ echo $((1<<1023))
    -9223372036854775808

And one step further afield,

    $ echo $((1<<1024))
    1

Why 1? And why the following?

    $ echo $((1<<1025))
    2
    $ echo $((1<<1026))
    4

Would someone like to analyse this series?

UPDATE

My machine:

    $ uname -a
    Linux tomas-Latitude-E4200 4.4.0-47-generic #68-Ubuntu SMP Wed Oct 26 19:39:52 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Bash uses intmax_t variables for arithmetic. On your system these are 64 bits in length, so:

    $ echo $((1<<62))
    4611686018427387904

which is 100000000000000000000000000000000000000000000000000000000000000 in binary (1 followed by 62 0s). Shift that again:

    $ echo $((1<<63))
    -9223372036854775808

which is 1000000000000000000000000000000000000000000000000000000000000000 in binary (63 0s), in two's complement arithmetic. To get the biggest representable integer, you need to subtract 1:

    $ echo $(((1<<63)-1))
    9223372036854775807

which is 111111111111111111111111111111111111111111111111111111111111111 in binary.

As pointed out in ilkkachu's answer, shifting takes the offset modulo 64 on 64-bit x86 CPUs (whether using RCL or SHL), which explains the behaviour you're seeing:

    $ echo $((1<<64))
    1

is equivalent to $((1<<0)). Thus $((1<<1025)) is $((1<<1)), $((1<<1026)) is $((1<<2))...

You'll find the type definitions and maximum values in stdint.h; on your system:

    /* Largest integral types.  */
    #if __WORDSIZE == 64
    typedef long int                intmax_t;
    typedef unsigned long int       uintmax_t;
    #else
    __extension__
    typedef long long int           intmax_t;
    __extension__
    typedef unsigned long long int  uintmax_t;
    #endif

    /* Minimum for largest signed integral type.  */
    # define INTMAX_MIN  (-__INT64_C(9223372036854775807)-1)
    /* Maximum for largest signed integral type.  */
    # define INTMAX_MAX  (__INT64_C(9223372036854775807))
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/325206", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
325,216
TL/DR: I'm working in Solaris 10. I have a ls ... | egrep ... command, and I need to know if it outputs any results or not. I could just add a | wc -c to the end, but I need the result (zero or non-zero) to be in the exit code, not in the output. And I can't use if; it's not a bash script, I can only execute a single command.

Long version: I'm writing a maintenance process to compress and remove old log files in a Solaris 10 system. It checks all the .log or .xml files inside a given path, takes the ones which were last modified in a given month, creates a .tar with them, and then removes the original files:

    ls -Egopqt /path/ | egrep -i '2016-10-[0123][0-9] .*(\.log$|\.xml$)' | awk '{ print $7 }' | xargs tar -cvf target.tar

And the same to remove the files, just replacing the last part with:

    | xargs -i rm {}

I'm probably overcomplicating it, but it works. Unless there are no files for a given month; in that case, I get an error saying tar: Missing filenames. How can I check it before attempting to create the tar? I thought of something like this, using wc to check if there is any output or not:

    ls ... | egrep ... | wc -c

which correctly outputs 0 when there aren't any files, and another number otherwise. The problem is: I can't see the output, only the exit code (which is always 0, since there is no error). I'm not doing this in a bash script; I'm working with Siebel CRM: I have a JavaScript function which generates the commands and executes them with Clib.system calls. The only thing I can see is the exit code: 0 for OK, non-zero for an error (I see the actual number, not "non-zero"). Previously I had a similar requirement, to check if a single file exists or not, and this answer helped me to get to this:

    [ -f filename ] && exit 111 || exit 0

I'm successfully getting either 111 or 0, depending on whether filename exists or not. But I can't get it to work with the ls ... | egrep ... | wc command. I've tried using this syntax:

    [[ $( ls -Egopq /path/ | egrep -i ... | wc -c ) -ne 0 ]] && exit 111 || exit 0

But I'm always getting exit code 2; it doesn't matter if there are files or not. What am I doing wrong?

PS: I know I could write a tiny shell script to perform the checks, use a simple if to compare the output, and then return whatever exit code I want. Also, I actually can access a command's output; I'd just need to redirect it to a > tempfile and then read it from Siebel. However, I'd prefer to avoid both of these options, as I'm trying to avoid creating unnecessary (temp or permanent) files.
egrep returns non-zero if no lines were matched.
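In other words, the pipeline itself already yields the exit code you need, with no wc required. A sketch using the question's own command (under the assumption that Solaris 10's egrep follows the usual grep convention: status 0 on a match, 1 on no match, greater than 1 on error):

    ls -Egopqt /path/ | egrep -i '2016-10-[0123][0-9] .*(\.log$|\.xml$)' > /dev/null

Run that as the single command and inspect the exit code: 0 means there were matching files, 1 means there were none.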
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/325216", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/201714/" ] }
325,225
I am wondering how I can get the number of bytes in just one line of a file. I know I can use wc -l to get the number of lines in a file, and wc -c to get the total number of bytes in a file. What I want, however, is to get the number of bytes in just one line of a file. How would I be able to do this?
sed -n 10p myfile | wc -c will count the bytes in the tenth line of myfile (including the linefeed/newline character). A slightly less readable variant, sed -n "10{p;q;}" myfile | wc -c (or sed '10!d;q' or sed '10q;d' ) will stop reading the file after the tenth line, which would be interesting on longer files (or streams). (Thanks to Tim Kennedy and Peter Cordes for the discussion leading to this.) There are performance comparisons of different ways of extracting lines of text in cat line X to line Y on a huge file .
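For completeness, a hedged equivalent without sed; like the sed versions, the count includes the trailing newline of line 10:

    head -n 10 myfile | tail -n 1 | wc -c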
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/325225", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/189989/" ] }
325,346
I have a query on cron jobs: if I execute a command using a cron job, is it possible to display the output in the terminal rather than saving it to an output file? Say, for example:

    */2 * * * * root /bin/ping xx.xx.xx.xx

The output should display in the terminal. I tried it, but it doesn't show in the terminal. Is there anything I need to change in my cron job? Thanks in advance, Vinoth
You can't do this. All cron jobs are run in non-interactive shells, there is no terminal attachment. Hence the concept of /dev/tty or similar is not available in cron .
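That said, a common (fragile) workaround is to write the job's output directly to your terminal's device file. This is a sketch; it assumes you know which pseudo-terminal your session owns and that someone is actually logged in on it:

    # in the interactive session, find the terminal device:
    $ tty
    /dev/pts/0

    # then aim the job's output at that device (use -c so ping terminates):
    */2 * * * * root /bin/ping -c 4 xx.xx.xx.xx > /dev/pts/0 2>&1

The device name changes between logins, and the job fails if that terminal is gone, so this is only suitable for quick debugging.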
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/325346", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/155498/" ] }
325,445
Does anyone know if there is a way to incorporate repetition constraints into ZSH wildcard expressions? For example, to match all files starting with "ABC" following by one or more numbers, using grep one could do: ls | grep -e "ABC[0-9]\+" Is there any way to do this directly with ZSH glob strings? Something along the lines of: ls "ABC[0-9]\+" I've looked through the docs for ZSH and Googled for something like this, but so far have not found any such support. Anyone know if this is possible?
Yes, use ## to match one or more occurrences of [0-9], like:

    ABC[0-9]##

This requires the extendedglob option, which is not enabled in a stock zsh; set it first if needed:

    setopt extendedglob

Example:

    % print -l ABC*
    ABC
    ABC75475
    ABC8
    ABC90
    % print -l ABC[0-9]##
    ABC75475
    ABC8
    ABC90
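As an aside (my addition, not part of the original answer): a single # is the zero-or-more variant, analogous to * in a regular expression, so it also matches the bare prefix:

    % print -l ABC[0-9]#
    ABC
    ABC75475
    ABC8
    ABC90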
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/325445", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39903/" ] }