289,250
POSIX defines the behavior of tools such as grep, awk, sed, etc., which work against text files. Since these are text files, I think there is a problem of character encoding. Question: What character encodings are supported by POSIX? (Or: text files of what encodings can be handled by POSIX-compliant systems?)
There is no specific character encoding mandated by POSIX. The only character in a fixed position is null, which must be 00. What POSIX does require is that all characters from its Portable Character Set exist. The Portable Character Set contains the printable ASCII characters, space, BEL, backspace, tab, carriage return, newline, vertical tab, form feed, and null. Where or how those are encoded is not specified, except that:
- they are all a single byte (8 bits);
- null is represented with all bits zero;
- the digits 0-9 appear contiguously in that order.
It imposes no other restrictions on the representation of characters, so a conforming system is free to support encodings with any representation of those characters, and any other characters in addition. Different locales on the same system can have different representations of those characters, with the exception of . and /, and if an application uses any pair of locales where the character encodings differ, or accesses data from an application using a locale which has different encodings from the locales used by the application, the results are unspecified. The only files that all POSIX-compliant systems are required to treat in the same way are files consisting entirely of null bytes. Files treated as text have their lines terminated by the encoding's representation of the PCS's newline character.
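On a glibc system you can check which encoding a given locale actually uses with locale charmap (the charmap names vary between systems; the output below is from one such machine):

$ locale charmap
UTF-8
$ LC_ALL=POSIX locale charmap
ANSI_X3.4-1968

ANSI_X3.4-1968 is the formal name for plain ASCII, which is one conforming encoding of the Portable Character Set, but not the only possible one.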
{ "source": [ "https://unix.stackexchange.com/questions/289250", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/157713/" ] }
289,385
I tried removing the '.' directory. I thought I could just delete my working directory without having to go into a parent directory. The point of my question is to get some insight into how the Linux system handles file deletion.
Removing the current directory does not affect the file system's integrity or its logical organization. Preventing the removal of . is done to follow the POSIX standard, which states in the rmdir(2) manual page: If the path argument refers to a path whose final component is either dot or dot-dot, rmdir() shall fail. One rationale can be found in the rm manual page: The rm utility is forbidden to remove the names dot and dot-dot in order to avoid the consequences of inadvertently doing something like: rm -r .* On the other hand, explicitly removing the current directory (i.e. by stating its full or relative path) is an allowed operation under Unix, at least since SVR3; it was forbidden from Unix version 7 through SVR2. This is very similar to what happens when you remove a file that is actively being read or written to: processes accessing the deleted file continue their read and write operations just as if nothing had happened. After you have removed a process's current directory, that directory is no longer accessible through its path, but its inode stays present on the file system until the process dies or changes its own directory. Note that the process won't be able to use a path relative to its current directory to change its cwd (e.g. cd ..) because there is no longer a .. entry in its current directory. When someone types rmdir . , they likely expect the current directory entry to be removed, but when a directory is removed (using its path), three directory entries are actually removed: . , .. , and the directory itself. Removing only . and not the directory's own entry would create a non-compliant directory, but as already stated, that is forbidden by the standard. As @Emmanuel rightly pointed out, there is a second reason why removing . is not allowed. There is at least one POSIX-compliant OS (Mac OS X with HFS+) that, with strong restrictions, supports creating hard links to existing directories. In that case, there is no clear way from inside the directory to know which hard link is the one expected to be removed.
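A short demonstration of the difference (exact error messages vary by implementation; /tmp/gone is just an example path):

$ mkdir /tmp/gone && cd /tmp/gone
$ rmdir .
rmdir: failed to remove '.': Invalid argument
$ rmdir /tmp/gone
$ touch f
touch: cannot touch 'f': No such file or directory

The forbidden form fails, the explicit path succeeds even though it names the current directory, and afterwards the unlinked directory can no longer hold new files.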
{ "source": [ "https://unix.stackexchange.com/questions/289385", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/142064/" ] }
289,389
When I am looking to create a new partition table, I have the following options: aix amiga bsd dvh gpt mac msdos pc98 sun loop The default in gparted appears to be msdos which I guess is an 'MBR' partition table. However gpt is more recent, but has less Windows support. I've used Linux for a long time, but I've never really looked into partitioning. What are the various options and their differences? Is there a recommended one for Linux-only disks?
The options correspond to the various partitioning systems supported in libparted ; there's not much documentation , but looking at the source code : aix provides support for the volumes used in IBM’s AIX (which introduced what we now know as LVM); amiga provides support for the Amiga’s RDB partitioning scheme; bsd provides support for BSD disk labels; dvh provides support for SGI disk volume headers; gpt provides support for GUID partition tables; mac provides support for old (pre-GPT) Apple partition tables; msdos provides support for DOS-style MBR partition tables; pc98 provides support for PC-98 partition tables; sun provides support for Sun’s partitioning scheme; loop provides support for raw disk access (loopback-style) — I’m not sure about the uses for this one. As you can see, the majority of these are for older systems, and you probably won’t need to create a partition table of any type other than gpt or msdos . For a new disk, I recommend gpt : it allows more partitions, it can be booted even in pre-UEFI systems (using grub ), and supports disks larger than 2 TiB (up to 8 ZiB for 512-byte sector disks). Actually, if you don’t need to boot from the disk, I’d recommend not using a partitioning scheme at all and simply adding the whole disk to mdadm , LVM, or a zpool, depending on whether you use LVM (on top of mdadm or not) or ZFS.
{ "source": [ "https://unix.stackexchange.com/questions/289389", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
289,391
Most of the files are gone, but I'm still left with these two files: ".RData" and ".Rhistory" Why is this the case? I'm working with R, but I don't know what those files are. Afterwards, I can individually remove them without needing to use sudo.
* only includes visible files. If you want to delete both those and the hidden ones, use: rm -rf * .* The dotglob option With bash, we can change this behavior and unhide files. To illustrate, let's create two files, one hidden and one not: $ touch unhidden .hide1 $ ls * unhidden As you can see, only the unhidden one is shown by ls * . Now let's set the dotglob option: $ shopt -s dotglob $ ls * .hide1 unhidden Both files appear now. We can, of course, turn dotglob off if we want: $ shopt -u dotglob $ ls * unhidden Documentation From man bash : When a pattern is used for pathname expansion, the character "." at the start of a name or immediately following a slash must be matched explicitly, unless the shell option dotglob is set. When matching a pathname, the slash character must always be matched explicitly. In other cases, the ``.'' character is not treated specially. See the description of shopt below under SHELL BUILTIN COMMANDS for a description of the nocaseglob, nullglob, failglob, and dotglob shell options. In other words, pathname expansion ignores files whose names begin with . unless the . is explicitly specified. Safety issues To avoid unpleasant surprises, rm will refuse to remove the current directory . and the parent directory .. even if you specify them on the command line: $ rm -rf .* rm: refusing to remove ‘.’ or ‘..’ directory: skipping ‘.’ rm: refusing to remove ‘.’ or ‘..’ directory: skipping ‘..’
{ "source": [ "https://unix.stackexchange.com/questions/289391", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/142064/" ] }
289,499
For example: $ node -bash: /usr/local/bin/node: No such file or directory $ foo -bash: foo: command not found What's the difference? In both cases, node and foo are invalid commands, but it seems like Unix just can't find the node binary? When uninstalling a program, e.g. node , is there a way to clean this up so that I get $ node -bash: node: command not found EDIT: Results from type command: $ type node node is hashed (/usr/local/bin/node) $ type foo -bash: type: foo: not found
That's because bash remembers your command's location, storing it in a hash table. After you uninstalled node, the hash table isn't cleared; bash still thinks node is at /usr/local/bin/node, skips the PATH lookup, and calls /usr/local/bin/node directly, using execve(). Since node isn't there anymore, execve() fails with the ENOENT error, meaning "no such file or directory", and bash reports that error to you. In bash, you can remove a single entry from the hash table: hash -d node or remove the entire hash table (works in all POSIX shells): hash -r
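You can watch this from the shell itself; hash with no arguments lists the remembered locations (the hit counts and paths below are illustrative):

$ type node
node is hashed (/usr/local/bin/node)
$ hash
hits	command
   2	/usr/local/bin/node
$ hash -d node
$ type node
-bash: type: node: not found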
{ "source": [ "https://unix.stackexchange.com/questions/289499", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/58080/" ] }
289,563
I am using an embedded Arm with a Debian build. How does one list the compiled devices from the device tree? I want to see if a device is already supported. For those reading this, the "Device Tree" is a specification/standard for adding devices to an (embedded) Linux kernel.
The device tree is exposed as a hierarchy of directories and files in /proc. You can cat the files, e.g.: find /proc/device-tree/ -type f -exec head {} + | less Beware: most file contents end with a null character, and some may contain other non-printing characters.
{ "source": [ "https://unix.stackexchange.com/questions/289563", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/109021/" ] }
289,629
Note: I wrote an article on Medium that explains how to create a service, and how to avoid this particular issue: Creating a Linux service with systemd . Original question: I'm using systemd to keep a worker script working at all times: [Unit] Description=My worker After=mysqld.service [Service] Type=simple Restart=always ExecStart=/path/to/script [Install] WantedBy=multi-user.target Although the restart works fine if the script exits normally after a few minutes, I've noticed that if it repeatedly fails to execute on startup, systemd will just give up trying to start it: Jun 14 11:10:31 localhost systemd[1]: test.service: Main process exited, code=exited, status=1/FAILURE Jun 14 11:10:31 localhost systemd[1]: test.service: Unit entered failed state. Jun 14 11:10:31 localhost systemd[1]: test.service: Failed with result 'exit-code'. Jun 14 11:10:31 localhost systemd[1]: test.service: Service hold-off time over, scheduling restart. Jun 14 11:10:31 localhost systemd[1]: test.service: Start request repeated too quickly. Jun 14 11:10:31 localhost systemd[1]: Failed to start My worker. Jun 14 11:10:31 localhost systemd[1]: test.service: Unit entered failed state. Jun 14 11:10:31 localhost systemd[1]: test.service: Failed with result 'start-limit'. Similarly, if my worker script fails several times with an exit status of 255 , systemd gives up trying to restart it: Jun 14 11:25:51 localhost systemd[1]: test.service: Failed with result 'exit-code'. Jun 14 11:25:51 localhost systemd[1]: test.service: Service hold-off time over, scheduling restart. Jun 14 11:25:51 localhost systemd[1]: test.service: Start request repeated too quickly. Jun 14 11:25:51 localhost systemd[1]: Failed to start My worker. Jun 14 11:25:51 localhost systemd[1]: test.service: Unit entered failed state. Jun 14 11:25:51 localhost systemd[1]: test.service: Failed with result 'start-limit'. Is there a way to force systemd to always retry after a few seconds?
I would like to extend Rahul's answer a bit. systemd tries to restart the unit multiple times (StartLimitBurst) and stops trying if that attempt count is reached within StartLimitIntervalSec. Both options belong to the [Unit] section. The default delay between restarts is 100ms (RestartSec), which causes the rate limit to be reached very quickly. systemd won't attempt any more automatic restarts, ever, for units with a Restart policy defined: Note that units which are configured for Restart= and which reach the start limit are not attempted to be restarted anymore; however, they may still be restarted manually at a later point, from which point on, the restart logic is again activated. Rahul's answer helps because the longer delay prevents reaching the error counter within the StartLimitIntervalSec window. The correct answer, though, is to set both RestartSec and StartLimitBurst to reasonable values.
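Applied to the unit from the question, that could look like the sketch below; the values are illustrative rather than recommendations, and on systemd versions older than 230 the option is spelled StartLimitInterval and historically lived in the [Service] section:

[Unit]
Description=My worker
After=mysqld.service
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=5
ExecStart=/path/to/script

[Install]
WantedBy=multi-user.target

Setting StartLimitIntervalSec=0 disables start rate limiting entirely, while RestartSec=5 spaces the attempts out so a crash-looping script doesn't respawn as fast as possible.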
{ "source": [ "https://unix.stackexchange.com/questions/289629", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/30018/" ] }
289,685
How to install VirtualBox Extension Pack to VirtualBox latest version on Linux? I would also like to be able to verify extension pack has been successfully installed and and uninstall it, if I wish.
First, you need to adhere to the VirtualBox Extension Pack Personal Use and Evaluation License. Second, I advise installing this package only if actually needed; here is the description of the VirtualBox Extension Pack functionality: Oracle Cloud Infrastructure integration, USB 2.0 and USB 3.0 Host Controller, Host Webcam, VirtualBox RDP, PXE ROM, Disk Encryption, NVMe. Now, let's download the damn thing: we need to store the latest VirtualBox version in a variable, let's call it LatestVirtualBoxVersion then download the latest version of the VirtualBox Extension Pack; a one-liner follows: LatestVirtualBoxVersion=$(wget -qO - https://download.virtualbox.org/virtualbox/LATEST-STABLE.TXT) && wget "https://download.virtualbox.org/virtualbox/${LatestVirtualBoxVersion}/Oracle_VM_VirtualBox_Extension_Pack-${LatestVirtualBoxVersion}.vbox-extpack" Simplification attribution goes to guntbert. Thank you. You might want to verify its integrity by comparing its SHA-256 checksum, available in the file https://www.virtualbox.org/download/hashes/${LatestVirtualBoxVersion}/SHA256SUMS using sha256sum -c --ignore-missing SHA256SUMS Then, we install it as follows: sudo VBoxManage extpack install --replace Oracle_VM_VirtualBox_Extension_Pack-${LatestVirtualBoxVersion}.vbox-extpack To verify that it has been successfully installed, we may list the installed extension packs: VBoxManage list extpacks To uninstall the extension pack: sudo VBoxManage extpack uninstall "Oracle VM VirtualBox Extension Pack"
{ "source": [ "https://unix.stackexchange.com/questions/289685", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126755/" ] }
289,999
Using command line, I know that I can encrypt a directory with the following command: zip -er Directory.zip /path/to/directory However, this does not encrypt the filenames themselves. If someone runs: unzip Directory.zip and repeatedly enters a wrong password, the unzip command will loop through all of the contained filenames until the correct password is entered. Sample output: unzip Directory.zip Archive: Directory.zip creating: Directory/ [Directory.zip] Directory/sensitive-file-name-1 password: password incorrect--reenter: password incorrect--reenter: skipping: Directory/sensitive-file-name-1 incorrect password [Directory.zip] Directory/sensitive-file-name-2 password: password incorrect--reenter: password incorrect--reenter: skipping: Directory/sensitive-file-name-2 incorrect password [Directory.zip] Directory/sensitive-file-name-3 password: password incorrect--reenter: password incorrect--reenter: skipping: Directory/sensitive-file-name-3 incorrect password and so on. Using command line, is there a way to zip a directory with encryption while also encrypting or hiding the filenames themselves? Thank you.
In a zip file, only the file contents are encrypted. File metadata, including file names, is not encrypted. That's a limitation of the file format: each entry is compressed separately and, if encrypted, encrypted separately. You can use 7-zip instead. It supports metadata encryption (-mhe=on with the Linux command line implementation). 7z a -p -mhe=on Directory.7z /path/to/directory There are 7zip implementations for all major operating systems and most minor ones, but that might require installing extra software (IIRC Windows can unzip encrypted zip files out of the box these days). If requiring 7z for decryption is a problem, you can rely on zip only, by first using it to pack the directory into a single file and then encrypting that file. If you do that, turn off compression of the individual files and instruct the outer zip to compress the inner zip file; you'll get a better compression ratio overall. zip -0 -r Directory.zip /path/to/directory zip -e -n : encrypted.zip Directory.zip
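With header encryption on, even listing the archive requires the password, which gives a quick way to confirm the file names are really hidden:

$ 7z l Directory.7z
...
Enter password (will not be echoed):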
{ "source": [ "https://unix.stackexchange.com/questions/289999", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/175250/" ] }
290,013
I am currently trying to run gprof2dot on the gmon.out created by using the -pg option while compiling. Now I have already done pip install gprof2dot . How am I supposed to run this on the gmon.out file that was created? Using the instructions given on the Github page( gprof main | gprof2dot.py | dot -Tpng -o output.png ), I get the error: bash: gprof2dot.py: command not found Note : My executable is called main .
{ "source": [ "https://unix.stackexchange.com/questions/290013", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166916/" ] }
290,146
I'm working with Bash 3, and I'm trying to form a conditional. In C/C++, it's dead simple: ((A || B) && C). In Bash, it's turning out not so (I think the Git authors must have contributed this code before they moved on to other endeavors). This does not work. Note that <0 or 1> is not a string literal; it means a 0 or 1 (generally coming from grep -i). A=<0 or 1> B=<0 or 1> C=<0 or 1> if [ [ "$A" -eq "0" ] || [ "$B" -ne "0" ] ] && [ "$C" -eq "0" ]; then ... fi It results in: line 322: syntax error near unexpected token `[[' I then tried: A=<0 or 1> B=<0 or 1> C=<0 or 1> if [ ([ "$A" -eq "0" ]) || ([ "$B" -ne "0" ]) ] && [ "$C" -eq "0" ]; then ... fi It results in: line 322: syntax error near unexpected token `[[' Part of the problem is that search results show the trivial examples, not the more complex examples with compound conditionals. How do I perform a simple ((A || B) && C) in Bash? I'm ready to just unroll it and repeat the same commands in multiple blocks: A=<0 or 1> B=<0 or 1> C=<0 or 1> if [ "$A" -eq "0" ] && [ "$C" -eq "0" ]; then ... elif [ "$B" -ne "0" ] && [ "$C" -eq "0" ]; then ... fi
The syntax of bash is not C-like, even if a little part of it is inspired by C. You can't simply try to write C code and expect it to work. The main point of a shell is to run commands. The open-bracket command [ is a command, which performs a single test¹. You can even write it as test (without the final closing bracket). The || and && operators are shell operators; they combine commands, not tests. So when you write [ [ "$A" -eq "0" ] || [ "$B" -ne "0" ] ] && [ "$C" -eq "0" ] that's parsed as [ [ "$A" -eq "0" ] || [ "$B" -ne "0" ] ] && [ "$C" -eq "0" ] which is the same as test [ "$A" -eq "0" || test "$B" -ne "0" ] && test "$C" -eq "0" Notice the unbalanced brackets? Yeah, that's not good. Your attempt with parentheses has the same problem: spurious brackets. The syntax to group commands together is braces. The way braces are parsed requires a complete command before the closing brace, so you'll need to terminate the command inside the braces with a newline or semicolon. if { [ "$A" -eq "0" ] || [ "$B" -ne "0" ]; } && [ "$C" -eq "0" ]; then … There's an alternative way, which is to use double brackets. Unlike single brackets, double brackets are special shell syntax. They delimit conditional expressions. Inside double brackets, you can use parentheses and operators like && and ||. Since the double brackets are shell syntax, the shell knows that when these operators are inside brackets, they're part of the conditional expression syntax, not part of the ordinary shell command syntax. if [[ ($A -eq 0 || $B -ne 0) && $C -eq 0 ]]; then … If all of your tests are numerical, there's yet another way: double parentheses, which delimit arithmetic expressions. Arithmetic expressions perform integer computations with a very C-like syntax. if (((A == 0 || B != 0) && C == 0)); then … You may find my bash bracket primer useful. [ can be used in plain sh. [[ and (( are specific to bash (and ksh and zsh). ¹ It can also combine multiple tests with boolean operators, but this is cumbersome to use and has subtle pitfalls so I won't explain it.
{ "source": [ "https://unix.stackexchange.com/questions/290146", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
290,176
I'm trying to find a way to grep how many messages in a file are older than 14 days and have a value of number of results return Example going with today's date of 20160616 . $grep 'Put Date' filename : Put Date :'20160425' Put Date :'20160501' Put Date :'20160514' Put Date :'20160609' Put Date :'20160610' Put Date :'20160616' The results should see the following are older than 14 days and would return 3 : Put Date :'20160226' Put Date :'20160501' Put Date :'20160514'
{ "source": [ "https://unix.stackexchange.com/questions/290176", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/175411/" ] }
290,242
Given a file with multiple lines, I want to change every space to a dash. I did it like this: #!/bin/bash while read line; do echo "${line// /-}" done This works just fine, but I need a better method!
The standard tr utility does exactly this: tr ' ' '-' <filename.old >filename.new You can use a tool like sponge (from the moreutils package) to do in-place editing (it hides the fact that a temporary file is being used): tr ' ' '-' <filename | sponge filename
{ "source": [ "https://unix.stackexchange.com/questions/290242", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150030/" ] }
290,525
I am reading this intro to the command line by Mark Bates. In the first chapter, he mentions that hard links cannot span file systems. An important thing to note about hard links is that they only work on the current file system. You can not create a hard link to a file on a different file system. To do that you need to use symbolic links, Section 1.4.3. I only know of one filesystem. The one starting from root ( / ). This statement that hard links cannot span over file systems doesn't make sense to me. The Wikipedia article on Unix file systems is not helpful either.
Hopefully I can answer this in a way that makes sense for you. A file system in Linux is generally made up of a partition, formatted in one of various ways (gotta love choice!), that you store your files on. Be they your system files or your personal files... they are all stored on a file system. This part you seem to understand. But what if you partition your hard drive to have more than one partition (think of an apple pie cut up into pieces), or add an additional hard drive (perhaps a USB stick?). For the sake of argument, they all have file systems on them as well. When you look at the files on your computer, you're seeing a visual representation of data on your partition's file system. Each file name corresponds to what is called an inode, which is where your data, behind the scenes, really lives. A hard link lets you have multiple "file names" (for lack of a better description) that point to the same inode. This only works if those hard links are on the same file system. A symbolic link instead points to the "file name", which is then linked to the inode holding your data. Forgive my crude artwork, but hopefully this explains it better.

image.jpg    image2.jpg
        \    /
     [your data]

Here, image.jpg and image2.jpg both point directly to your data. They are both hard links. However...

image.jpg  <-----------  image2.jpg
        \
     [your data]

In this (crude) example, image2.jpg doesn't point to your data; it points to image.jpg... which is a link to your data. Symbolic links can work across file system boundaries (assuming that file system is attached and mounted, like your USB stick). However, a hard link cannot: it knows nothing about what is on your other file system, nor where your data is stored there. Hopefully this helps make better sense.
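You can see the difference yourself with ls -li, which prints inode numbers in the first column (the inode numbers, dates and user below are illustrative):

$ echo data > image.jpg
$ ln image.jpg image2.jpg
$ ln -s image.jpg image3.jpg
$ ls -li image*
1234567 -rw-r--r-- 2 user user 5 Jun 26 12:00 image.jpg
1234567 -rw-r--r-- 2 user user 5 Jun 26 12:00 image2.jpg
1234570 lrwxrwxrwx 1 user user 9 Jun 26 12:00 image3.jpg -> image.jpg

The two hard links share one inode (and the link count shows 2), while the symlink gets its own inode. Trying the same ln across two file systems fails with "Invalid cross-device link", while ln -s succeeds.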
{ "source": [ "https://unix.stackexchange.com/questions/290525", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/175667/" ] }
290,533
I'm running old Debian machine: Distributor ID: Debian Description: Debian GNU/Linux 5.0.2 (lenny) Release: 5.0.2 Codename: lenny I open terminal and run Midnight Commander in it. Now I need to quit by pressing F10. But When I do this I'm getting terminal menu: How to get MC menu and not terminal one by pressing F10?
Go to Edit -> Keyboard Shortcuts and uncheck "Enable the menu shortcut key" to turn it off.
{ "source": [ "https://unix.stackexchange.com/questions/290533", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37277/" ] }
290,938
So I'm trying to give a VM a static IP address, this case has been particularly stubborn. The VM is running on a ESXi cluster with its own public IP range. I had it (sorta) working with an IPv4 address, except it would be reassigned every boot, now after fiddling with nmcli I can't get any IPv4 address assigned to it. The interface is ens32 and I've changed ipv4.addresses to XXX.XXX.120.44/24 (want it to have address 120.44 ), gateway to XXX.XXX.120.1 and set it to manual. Does anyone have any insights to why this isn't working? all the online guides are for the older network service not NetworkManager.
Try: # nmcli con add con-name "static-ens32" ifname ens32 type ethernet ip4 xxx.xxx.120.44/24 gw4 xxx.xxx.120.1 # nmcli con mod "static-ens32" ipv4.dns "xxx.xxx.120.1,8.8.8.8" # nmcli con up "static-ens32" iface ens32 Next, find the other connections and delete them. For example: # nmcli con show NAME UUID TYPE DEVICE ens32 ff9804db5-........ 802-3-ethernet -- static-ens32 a4b59cb4a-........ 802-3-ethernet ens32 # nmcli con del ens32 On the next reboot, you should pick up the static-ens32 connection, as it is the only one available.
{ "source": [ "https://unix.stackexchange.com/questions/290938", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/175924/" ] }
291,065
I try to duplicate a video file x times from the command line by using a for loop, I've tried it like this, but it does not work: for i in {1..100}; do cp test.ogg echo "test$1.ogg"; done
Your shell code has two issues: The echo should not be there. The variable $i ("dollar i") is mistyped as $1 ("dollar one") in the destination file name. To make a copy of a file in the same directory as the file itself, use cp thefile thecopy If you use more than two arguments, e.g. cp thefile theotherthing thecopy then it is assumed that you'd like to copy thefile and theotherthing into the directory called thecopy . In your case with cp test.ogg echo "test$1.ogg" , it specifically looks for a file called test.ogg and one named echo to copy to the directory test$1.ogg . The $1 will most likely expand to an empty string. This is why, when you delete the echo from the command, you get "test.ogg and test.ogg are the same files"; the command being executed is essentially cp test.ogg test.ogg This is probably a mistyping. In the end, you want something like this: for i in {1..100}; do cp test.ogg "test$i.ogg"; done Or, as an alternative i=0 while (( i++ < 100 )); do cp test.ogg "test$i.ogg" done Or, using tee : tee test{1..100}.ogg <test.ogg >/dev/null Note: This would most likely work for 100 copies, but for thousands of copies it may generate a "argument list too long" error. In that case, revert to using a loop.
{ "source": [ "https://unix.stackexchange.com/questions/291065", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124191/" ] }
291,285
The history command lists out all the history for the current session. Like: 1 ls 2 cd /root 3 mkdir something 4 cd something 5 touch afile 6 ls 7 cd .. 8 rm something/afile 9 cd .. 10 ls 11 history In order to search items of interest, I can pipe history with grep like history | grep ls 1 ls 6 ls 10 ls I can also view last 3 commands like: history 3 11 history 12 history | grep ls 13 history 3 But how do I get a specific range of history? For example something like: history range 4 7 4 cd something 5 touch afile 6 ls 7 cd ..
Instead of history, you can use fc, which allows you to select a range: fc -l 4 7
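In bash, fc -l also accepts negative offsets counted back from the current command, and with no arguments it lists roughly the last 16 entries (other shells may differ slightly):

fc -l -5 -2
fc -l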
{ "source": [ "https://unix.stackexchange.com/questions/291285", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176179/" ] }
291,319
Current Environment : mysql> show variables like "%version%"; +-------------------------+------------------------------+ | Variable_name | Value | +-------------------------+------------------------------+ | innodb_version | 5.7.13 | | protocol_version | 10 | | slave_type_conversions | | | tls_version | TLSv1,TLSv1.1 | | version | 5.7.13 | | version_comment | MySQL Community Server (GPL) | | version_compile_machine | x86_64 | | version_compile_os | Linux | +-------------------------+------------------------------+ 8 rows in set (0.01 sec) Password Change command user : mysql> update user set password=PASSWORD("XXXX") where user="root"; ERROR 1054 (42S22): Unknown column 'password' in 'field list' Am I missing something?
In MySQL 5.7, the password field in the mysql.user table was removed; the field is now named authentication_string. First choose the database: mysql> use mysql; Then show the tables: mysql> show tables; You will find the user table; now inspect its fields: mysql> describe user; You will see there is no field named password ; the password field is named authentication_string . So, just do this: update user set authentication_string=password('XXXX') where user='root'; As suggested by @Rui F Ribeiro, alternatively you can run: mysql> SET PASSWORD FOR 'root' = PASSWORD('new_password');
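Note that if you change the table directly with UPDATE rather than using SET PASSWORD, the server keeps using the old credentials until the privilege tables are reloaded:

mysql> UPDATE user SET authentication_string = PASSWORD('XXXX') WHERE user = 'root';
mysql> FLUSH PRIVILEGES;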
{ "source": [ "https://unix.stackexchange.com/questions/291319", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176204/" ] }
291,404
With Bash's source it is possible to execute a script without an execution bit set. This is documented and expected behaviour, but isn't this against the use of an execution bit? I know, that source doesn't create a subshell.
source, or the equivalent but standard dot (.), does not execute the script, but reads the commands from the script file and then executes them, line by line, in the current shell environment. There's nothing against the use of the execution bit, because the shell only needs read permission to read the content of the file. The execution bit is only required when you run the script. There the shell will fork() a new process and then use the execve() function to create a new process image from the script, which is required to be a regular, executable file.
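A quick illustration from an interactive bash session (s.sh is just an example file):

$ echo 'echo hello' > s.sh
$ chmod a-x s.sh
$ ./s.sh
bash: ./s.sh: Permission denied
$ . ./s.sh
hello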
{ "source": [ "https://unix.stackexchange.com/questions/291404", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166118/" ] }
291,570
There is a script I'm working with; it has a line of command like the one below: mytemp=`echo ${sourcedir}|awk -F/ '{printf "/%s/tmp",$2}'`/`basename $0`-$1.$$ At the end of the command we see $$, which produces a number. When I use echo $$ in bash I also see a number, like below: #echo $$ 23019 What exactly is this number, and what is $$ ?
From Advanced Bash-Scripting Guide: $$ is the process ID (PID) of the script itself. $BASHPID is the process ID of the current instance of Bash. This is not the same as the $$ variable, but it often gives the same result.
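A quick way to see the two side by side in bash (the PIDs below are illustrative):

$ echo $$ $BASHPID
23019 23019
$ ( echo $$ $BASHPID )
23019 24501

In the main shell both are the same; in the subshell $$ still reports the parent shell's PID while $BASHPID reports the subshell's own.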
{ "source": [ "https://unix.stackexchange.com/questions/291570", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78188/" ] }
291,729
Operating a standard bash shell on a server, the PS1 prompt defaults to ending in a $ for non-root users, and # for root. IE: ubuntu@server:~$ sudo su root@server:/home/ubuntu# Why is this?
Historically the original /bin/sh Bourne shell would use $ as the normal prompt and # for the root user prompt (and csh would use % ). This made it pretty easy to tell if you were running as superuser or not. # is also the comment character, so anyone blindly re-entering data wouldn't run any real commands. More modern shells (eg ksh, bash) continue this distinction of $ and # although it's less important when you can set more complicated values such as the username, hostname, directory :-)
{ "source": [ "https://unix.stackexchange.com/questions/291729", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89664/" ] }
291,737
On RHEL 6.6, I installed Python 3.5.1 from source. I am trying to install pip3 via get-pip.py, but I get Traceback (most recent call last): File "get-pip.py", line 19177, in <module> main() File "get-pip.py", line 194, in main bootstrap(tmpdir=tmpdir) File "get-pip.py", line 82, in bootstrap import pip zipimport.ZipImportError: can't decompress data; zlib not available It works for the Python 2.6.6 installed. I have looked online for answers, but I cannot seem to find any that works for me. edit: yum search zlib jzlib.i686 : JZlib re-implementation of zlib in pure Java perl-Compress-Raw-Zlib.i686 : Low-Level Interface to the zlib compression library perl-Compress-Zlib.i686 : A module providing Perl interfaces to the zlib compression library perl-IO-Zlib.i686 : Perl IO:: style interface to Compress::Zlib zlib.i686 : The zlib compression and decompression library zlib-debuginfo.i686 : Debug information for package zlib zlib-devel.i686 : Header files and libraries for Zlib development perl-IO-Compress-Zlib.i686 : Perl interface to allow reading and writing of gzip and zip data Name and summary matches only, use "search all" for everything.
Ubuntu 16.10+ and Python 3.7 dev sudo apt-get install zlib1g-dev Note: I only put this here because it was the top search result for the error, but this resolved my issue. Update: also the case for ubuntu 14.04LTS and base kernel at 4.1+
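The question itself is about RHEL 6.6; there the equivalent, judging from the zlib-devel package visible in the question's yum search output, would be to install the zlib headers and then rebuild Python 3.5.1 so its zlib module gets compiled in:

sudo yum install zlib-devel
# then, from the Python source tree:
./configure && make && sudo make install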
{ "source": [ "https://unix.stackexchange.com/questions/291737", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176523/" ] }
291,742
After rebooting, my network interface card (renamed from eth0 to enp0s25 ) is not displayed with the command ifconfig, only in ifconfig -a . Also ping -c 4 google.com only yields unknown host. In my /etc/resolv.conf file, the name server is set to my router which deals with all of the DNS bs. I checked to see if net.enp0s25 is installed at runlevel which it was. I was trying out MATE and dbus/xdm threw alot of error messages after the reboot. Also ping 8.8.8.8 yield network unreachable. Trying to set the interface to up through ifconfig up enp0s25 yields enp0s25 : Host Name lookup Failure.
{ "source": [ "https://unix.stackexchange.com/questions/291742", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176526/" ] }
291,932
I never used tail -F command instead always used tail -f however someone told me that -F is better without much explanation. I looked up man page for tail command. -f output appended data as the file grows; -F Same as --follow=name --retry --retry Keep trying to open a file even when it is or becomes inaccessible It is easy to understand what lower -f does but I do not follow what upper case -F is trying to do. I'd appreciate someone can explain to me the differences.
You describe the GNU tail utility. The difference between these two flags is that if I open a file, a log file for example, like this: $ tail -f /var/log/messages ... and if the log rotation facility on my machine decides to rotate that log file while I'm watching messages being written to it ("rotate" means delete or move to another location etc.), the output that I see will just stop. If I open the file with tail like this: $ tail -F /var/log/messages ... and again, the file is rotated, the output would continue to flow in my console because tail would reopen the file as soon as it became available again, i.e. when the program(s) writing to the log started writing to the new /var/log/messages . On the free BSD systems, there is no -F option, but tail -f will behave like tail -F does on GNU systems, with the difference that you get the message tail: file has been replaced, reopening. in the output when the file you're monitoring disappears and reappears. YOU CAN TEST THIS In one shell session, do $ cat >myfile That will now wait for you to type stuff. Just go ahead and type some gibberish, a few lines. It will all be saved into the file myfile . In another shell session (maybe in another terminal, without interrupting the cat ): $ tail -f myfile This will show the (end of the) contents of myfile in the console. If you go back to the first shell session and type something more, that output will immediately be shown by tail in the second shell session. Now quit cat by pressing Ctrl+D , and remove the myfile file: $ rm myfile Then run the cat again: $ cat >myfile ... and type something, a few lines. With GNU tail , these lines will not show up in the second shell session (where tail -f is still running). Repeat the exercise with tail -F and observe the difference.
{ "source": [ "https://unix.stackexchange.com/questions/291932", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156114/" ] }
291,975
I just downloaded VLC 3.0 Beta (using ubuntu ppa) and I wanted to know how to set it up to stream to chromecast. It's in the repo's NEWS that the feature has been added. Numerous news outlets are covering it. But, there is no example of how to actually use it yet. I know it's not in the GUI (having searched the source code). And, I have no idea how to use the code from the command line. Here is the Ubuntu PPA that I used to install it. However, it shouldn't matter. Nor, should the OS or system matter. It's just software. You can build it yourself or download a binary ("nightly") here .
Building VLC If you have to build vlc yourself, make sure you have --enable-sout --enable-chromecast Using VLC Thus far this feature is not available under the GUI, however you can stream to Chromecast like this, $ vlc --sout="#chromecast{ip=ip_address}" ./video.mp4 You can watch the video at the same time with $ vlc --sout="#duplicate{dst=display,#chromecast{ip=ip_address}}" ./video.mp4 To make matters even better, you can actually add a delay on the video so it better syncs with the audio (sets the delay to 3100ms). $ vlc --sout="#duplicate{dst=display{delay=3100},#chromecast{ip=ip_address}}" ./video.mp4 You can find the list of options support to chromecast here , they currently include ip port http-port mux mime video
{ "source": [ "https://unix.stackexchange.com/questions/291975", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3285/" ] }
292,189
I am running Fedora 24 with Gnome Shell. I try to pair my new Bose QuietComfort 35 over Bluetooth. I started using the Gnome interface. Unfortunately, the connection seems not to hold. It appears as constantly connecting/disconnecting: https://youtu.be/eUZ9D9rGUZY My next step was to perform some checks using the command-line. First, I checked that the bluetooth service is running: $ sudo systemctl status bluetooth ● bluetooth.service - Bluetooth service Loaded: loaded (/usr/lib/systemd/system/bluetooth.service; enabled; vendor preset: enabled) Active: active (running) since dim. 2016-06-26 11:19:24 CEST; 14min ago Docs: man:bluetoothd(8) Main PID: 932 (bluetoothd) Status: "Running" Tasks: 1 (limit: 512) Memory: 2.1M CPU: 222ms CGroup: /system.slice/bluetooth.service └─932 /usr/libexec/bluetooth/bluetoothd juin 26 11:19:24 leonard systemd[1]: Starting Bluetooth service... juin 26 11:19:24 leonard bluetoothd[932]: Bluetooth daemon 5.40 juin 26 11:19:24 leonard bluetoothd[932]: Starting SDP server juin 26 11:19:24 leonard bluetoothd[932]: Bluetooth management interface 1.11 initialized juin 26 11:19:24 leonard bluetoothd[932]: Failed to obtain handles for "Service Changed" characteristic juin 26 11:19:24 leonard systemd[1]: Started Bluetooth service. juin 26 11:19:37 leonard bluetoothd[932]: Endpoint registered: sender=:1.68 path=/MediaEndpoint/A2DPSource juin 26 11:19:37 leonard bluetoothd[932]: Endpoint registered: sender=:1.68 path=/MediaEndpoint/A2DPSink juin 26 11:20:26 leonard bluetoothd[932]: No cache for 08:DF:1F:DB:A7:8A Then, I have tried to follow some explanations from Archlinux wiki with no success. The pairing is failing Failed to pair: org.bluez.Error.AuthenticationFailed : $ sudo bluetoothctl [NEW] Controller 00:1A:7D:DA:71:05 leonard [default] [NEW] Device 08:DF:1F:DB:A7:8A Bose QuietComfort 35 [NEW] Device 40:EF:4C:8A:AF:C6 EDIFIER Luna Eclipse [bluetooth]# agent on Agent registered [bluetooth]# scan on Discovery started [CHG] Controller 00:1A:7D:DA:71:05 Discovering: yes [CHG] Device 08:DF:1F:DB:A7:8A RSSI: -77 [CHG] Device 08:DF:1F:DB:A7:8A UUIDs: 0000febe-0000-1000-8000-00805f9b34fb [CHG] Device 08:DF:1F:DB:A7:8A RSSI: -69 [CHG] Device 08:DF:1F:DB:A7:8A UUIDs: 0000febe-0000-1000-8000-00805f9b34fb [CHG] Device 08:DF:1F:DB:A7:8A UUIDs: 0000110d-0000-1000-8000-00805f9b34fb [CHG] Device 08:DF:1F:DB:A7:8A UUIDs: 0000110b-0000-1000-8000-00805f9b34fb [CHG] Device 08:DF:1F:DB:A7:8A UUIDs: 0000110e-0000-1000-8000-00805f9b34fb [CHG] Device 08:DF:1F:DB:A7:8A UUIDs: 0000110f-0000-1000-8000-00805f9b34fb [CHG] Device 08:DF:1F:DB:A7:8A UUIDs: 00001130-0000-1000-8000-00805f9b34fb [CHG] Device 08:DF:1F:DB:A7:8A UUIDs: 0000112e-0000-1000-8000-00805f9b34fb [CHG] Device 08:DF:1F:DB:A7:8A UUIDs: 0000111e-0000-1000-8000-00805f9b34fb [CHG] Device 08:DF:1F:DB:A7:8A UUIDs: 00001108-0000-1000-8000-00805f9b34fb [CHG] Device 08:DF:1F:DB:A7:8A UUIDs: 00001131-0000-1000-8000-00805f9b34fb [CHG] Device 08:DF:1F:DB:A7:8A UUIDs: 00000000-deca-fade-deca-deafdecacaff [bluetooth]# devices Device 08:DF:1F:DB:A7:8A Bose QuietComfort 35 Device 40:EF:4C:8A:AF:C6 EDIFIER Luna Eclipse [CHG] Device 08:DF:1F:DB:A7:8A RSSI: -82 [CHG] Device 08:DF:1F:DB:A7:8A RSSI: -68 [CHG] Device 08:DF:1F:DB:A7:8A RSSI: -79 [bluetooth]# trust 08:DF:1F:DB:A7:8A Changing 08:DF:1F:DB:A7:8A trust succeeded [bluetooth]# pair 08:DF:1F:DB:A7:8A Attempting to pair with 08:DF:1F:DB:A7:8A [CHG] Device 08:DF:1F:DB:A7:8A Connected: yes Failed to pair: org.bluez.Error.AuthenticationFailed [CHG] Device 08:DF:1F:DB:A7:8A Connected: no I tried to 
disable SSPMode but it seems to have no effect: $ sudo hciconfig hci0 sspmode 0 When I use bluetoothctl, journalctl logs the following: juin 26 11:37:21 leonard sudo[4348]: lpellegr : TTY=pts/2 ; PWD=/home/lpellegr ; USER=root ; COMMAND=/bin/bluetoothctl juin 26 11:37:21 leonard audit[4348]: USER_CMD pid=4348 uid=1000 auid=4294967295 ses=4294967295 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='cwd="/home/lpellegr" cmd="bluetoothctl" terminal=pt juin 26 11:37:21 leonard audit[4348]: CRED_REFR pid=4348 uid=0 auid=4294967295 ses=4294967295 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_fprintd acct="roo juin 26 11:37:21 leonard sudo[4348]: pam_systemd(sudo:session): Cannot create session: Already occupied by a session juin 26 11:37:21 leonard audit[4348]: USER_START pid=4348 uid=0 auid=4294967295 ses=4294967295 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits, juin 26 11:37:21 leonard sudo[4348]: pam_unix(sudo:session): session opened for user root by (uid=0) juin 26 11:38:06 leonard bluetoothd[932]: No cache for 08:DF:1F:DB:A7:8A Unfortunately, I don't understand the output. Any idea or help is welcome. I am pretty lost. The bluetooth receiver I use is a USB dongle from CSL-Computer. Bluetoothctl version is 5.40. I am running kernel 4.5.7-300.fc24.x86_64. Below are the features supported by my bluetooth adapter: hciconfig -a hci0 features hci0: Type: BR/EDR Bus: USB BD Address: 00:1A:7D:DA:71:05 ACL MTU: 310:10 SCO MTU: 64:8 Features page 0: 0xff 0xff 0x8f 0xfe 0xdb 0xff 0x5b 0x87 <3-slot packets> <5-slot packets> <encryption> <slot offset> <timing accuracy> <role switch> <hold mode> <sniff mode> <park state> <RSSI> <channel quality> <SCO link> <HV2 packets> <HV3 packets> <u-law log> <A-law log> <CVSD> <paging scheme> <power control> <transparent SCO> <broadcast encrypt> <EDR ACL 2 Mbps> <EDR ACL 3 Mbps> <enhanced iscan> <interlaced iscan> <interlaced pscan> <inquiry with RSSI> <extended SCO> <EV4 packets> <EV5 packets> <AFH cap. slave> <AFH class. slave> <LE support> <3-slot EDR ACL> <5-slot EDR ACL> <sniff subrating> <pause encryption> <AFH cap. master> <AFH class. master> <EDR eSCO 2 Mbps> <EDR eSCO 3 Mbps> <3-slot EDR eSCO> <extended inquiry> <LE and BR/EDR> <simple pairing> <encapsulated PDU> <non-flush flag> <LSTO> <inquiry TX power> <EPC> <extended features> Features page 1: 0x03 0x00 0x00 0x00 0x00 0x00 0x00 0x00 The pairing works well with EDIFIER Luna Eclipse speakers. I suspect the issue is really related to the headset I am trying to configure.
I have these headphones as well, along with a handy laptop running Fedora 24. After chatting with one of the Bluez developers on IRC, I have things working. Below is what I've found. (Note that I know very little about Bluetooth, so I may be using incorrect terminology for some of this.) The headphones support (or at least say they support) Bluetooth LE but don't support LE for pairing. Bluez does not yet support this and has no way to set the supported BT mode except statically in the configuration file. You can use the headphones over regular Bluetooth just fine, though. This happens to be the reason Bluez 4 works; it doesn't really support LE. So, create /etc/bluetooth/main.conf. Fedora 24 doesn't come with this file, so either fetch a copy from upstream and find the line containing #ControllerMode = dual, changing it to: ControllerMode = bredr or create a new file containing just: [General] ControllerMode = bredr Then restart bluetooth and pair. (I did this manually via bluetoothctl, but just using the Bluetooth manager should work.) Now, this got things working for me, though if you don't force pulseaudio to use the A2DP-Sink protocol, the headphones will announce that you have an incoming call for some reason. However, my mouse requires Bluetooth LE, so I went in and removed the ControllerMode line. And... the headphones still work, as well as the mouse. I guess that once they are paired everything is OK.
{ "source": [ "https://unix.stackexchange.com/questions/292189", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48778/" ] }
292,253
I want to redirect the output of the find command to cat command so I can print the data of the given file. So for example if the output of find is /aFile/readme then the cat should be interpreted as cat ./aFile/readme . How can I do that instantly ? Do I have to use pipes ? I tried versions of this : cat | find ./inhere -size 1033c 2> /dev/null But I guess this is completely wrong? Of course I'm sure that the output is only one file and not multiple files. So how can I do that ? I've searched on Google and couldn't find a solution, probably because I didn't search right :P
You can do this with find alone using the -exec action: find /location -size 1033c -exec cat {} + {} will be replaced by the files found by find , and + will enable us to read as many arguments as possible per invocation of cat , as cat can take multiple arguments. If your find does not have the standard + extension, or you want to read the files one by one: find /location -size 1033c -exec cat {} \; If you want to use any options of cat , do: find /location -size 1033c -exec cat -n {} + find /location -size 1033c -exec cat -n {} \; Here I am using the -n option to get the line numbers.
{ "source": [ "https://unix.stackexchange.com/questions/292253", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176853/" ] }
292,344
Unfortunately bc and calc don't support xor.
Like this: echo $(( 0xA ^ 0xF )) Or if you want the answer in hex: printf '0x%X\n' $(( 0xA ^ 0xF )) On a side note, calc(1) does support xor as a function: $ calc base(16) 0xa xor(0x22, 0x33) 0x11
{ "source": [ "https://unix.stackexchange.com/questions/292344", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/25812/" ] }
292,843
On a Linux VM I would like to TEST the NAGIOS monitoring more deeply than just switching off the VM or disconnecting the virtual NIC; I would like to test or "enforce" a disk space alarm by occupying several % of free space for a short period of time. I know that I could just use dd if=/dev/zero of=/tmp/hd-fillup.zeros bs=1G count=50 or something like that... but this takes time, loads the system, and requires time again when removing the test files with rm. Is there a quick (almost instant) way to fill up a partition that does not load down the system and take a lot of time? I'm thinking about something that allocates space, but does not "fill" it.
The fastest way to create a file in a Linux system is using fallocate : fallocate -l 50G file From man: fallocate is used to manipulate the allocated disk space for a file, either to deallocate or preallocate it. For filesystems which support the fallocate system call, preallocation is done quickly by allocating blocks and marking them as uninitialized, requiring no IO to the data blocks. This is much faster than creating a file by filling it with zeros. Supported for XFS (since Linux 2.6.38), ext4 (since Linux 3.0), Btrfs (since Linux 3.7) and tmpfs (since Linux 3.5).
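For the monitoring test in the question, the whole cycle becomes nearly instant (path and size are just examples):

fallocate -l 50G /tmp/hd-fillup
df -h /tmp
rm /tmp/hd-fillup

The space disappears the moment fallocate returns, df should show the partition filled so the alarm can fire, and rm frees it again instantly.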
{ "source": [ "https://unix.stackexchange.com/questions/292843", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/119964/" ] }
293,304
I have a process that listens on an IP:port - in fact it is spark streaming which connects to a socket. The issue is that I wish to somehow create a server that connects to spark on one port and data is streamed into this server from another port. For example, the spark streaming example uses the netcat utility (for example nc -lk 5005 ). However, I have another service that listens for incoming messages and then spit out a message. So I need some kind of server that can listen to messages from service A and pass them to spark. My service A, relies on sockets. And my spark consumer relies on sockets. Here is what I have done so far is the forwarding from port to port but this does not seem to work: nc -X 4 -x 127.0.0.1:5005 localhost 5006 With the idea that the service A:5005 -> socket -> 5006 -> Spark I cannot seem to find the correct way to make this work. Some answers have suggested the following: socat tcp-l:5005,fork,reuseaddr tcp:127.0.0.1:5006 My spark socket reciever doesn't or cannot seem to connect. I get the error: Error connecting to 127.0.0.1:5006 - java.net.ConnectException: Connection refused
You can't use nc alone to forward traffic: nc has no keep-alive or fork mode, so you must use another tool instead of nc; for example socat or ncat. socat (source code): this command listens on port 5050 and forwards everything to port 2020: socat tcp-l:5050,fork,reuseaddr tcp:127.0.0.1:2020 ncat (read more): Ncat is a feature-packed networking utility which reads and writes data across networks from the command line. Ncat was written for the Nmap Project as a much-improved reimplementation of the venerable Netcat. ncat -l localhost 8080 --sh-exec "ncat example.org 80" And you can use other tools: goproxy (download source code or bin file): listen on port 1234 and forward it to port 4567 on address "1.1.1.1": ./proxy tcp -p ":1234" -T tcp -P "1.1.1.1:4567" gost (download source code and bin, ENGLISH readme): listen on port 1234 and forward it to port 4567 on address "1.1.1.1": ./gost -L tcp://:1234/1.1.1.1:4567 redir (source code): ./redir :1234 1.1.1.1:5678
{ "source": [ "https://unix.stackexchange.com/questions/293304", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/177669/" ] }
293,307
I've encountered the following problem when using debconf to configure a package in Ubuntu 16.04 during installation. More precisely, the package uses debconf to save configuration files, and right after, in the postinst script, a service is started. This service also uses a debconf module to load the configurations saved in the previous step. However, the service started with systemd fails with the error: debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable From what I could find, dpkg is still accessing this file with the debconf frontend, and the service crashes when it tries to start another frontend (the environment variable DEBIAN_HAS_FRONTEND is not passed to the service). I have tried forcing the environment variable DEBIAN_HAS_FRONTEND in the script, but then other errors appear. I think I should force the daemon to start after the dpkg process has ended and debconf has already finished. Any ideas?
{ "source": [ "https://unix.stackexchange.com/questions/293307", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/177689/" ] }
293,495
I am making the check-for-update script for my theme. I have two text files. The first is called "current.txt" and contains the current version: the string 4.1.1. The second is called "latest.txt" and contains the latest version: the string 4.2. So here is the code echo "Checking update"; x=$(cat ./current.txt) y=$(cat ./latest.txt) if [ "$x" -eq "$y" ] then echo There is version $y update else echo Version $x is the latest version fi What it means is: if current.txt is NOT the same as latest.txt, then it will say "there is version 4.2 update"; if not, it will say "version 4.1.1 is the latest version". But when I try to run it, I get this error Checking update ./test.sh: line 4: [: 4.1.1: integer expression expected Version 4.1.1 is the latest version So what am I doing wrong with this?
The test command, also named [ , has separate operators for string comparisons and integer comparisons: INTEGER1 -eq INTEGER2 INTEGER1 is equal to INTEGER2 vs STRING1 = STRING2 the strings are equal and STRING1 != STRING2 the strings are not equal Since your data is not strictly an integer, your test needs to use the string comparison operator. The last realization in the comments was that the "-eq" logic did not match the sense of the if/else echo statements, so the new snippet should be: ... if [ "$x" != "$y" ] then echo There is version $y update else echo Version $x is the latest version fi
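Putting it together, a minimal corrected version of the script from the question (same file names assumed):
#!/bin/sh
echo "Checking update"
x=$(cat ./current.txt)
y=$(cat ./latest.txt)
if [ "$x" != "$y" ]; then
    echo "There is a version $y update"
else
    echo "Version $x is the latest version"
fi
Note this only tests equality; it cannot tell whether 4.1.1 is older or newer than 4.2, which is enough for a plain "update available" check.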
{ "source": [ "https://unix.stackexchange.com/questions/293495", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/177727/" ] }
293,642
I'm currently attempting to follow Hardening Debian for the Desktop Using Grsecurity guide in order to install the 4.5.7 kernel with Grsecurity on my Kali Linux desktop. I am following that list of instructions verbatim , except for the fact that I'm trying to use Grsecurity's test patch for the 4.5.7 kernel and I'm running Kali Linux instead of straight Debian. Every time I attempt to compile the kernel, however, I get this error following the line "CC certs/system_keyring.o": CC certs/system_keyring.o make[2]: *** No rule to make target 'debian/certs/[email protected]', needed by 'certs/x509_certificate_list'. Stop. Makefile:951: recipe for target 'certs' failed make[1]: *** [certs] Error 2 make[1]: Leaving directory '/home/jc/Downloads/linux-4.5.7' debian/ruleset/targets/common.mk:295: recipe for target 'debian/stamp/build/kernel' failed make: *** [debian/stamp/build/kernel] Error 2 I get this error, as I found out, for any kernel even if I apply no patches or modifications, so it has something to do with the tools I'm using to compile the kernel (apparently a system keychain of some sort). Can someone out there tell me how to fix my OS and compile my kernel? P.S. Here is the output of cat /proc/version : Linux version 4.6.0-kali1-amd64 ([email protected]) (gcc version 5.4.0 20160609 (Debian 5.4.0-4) ) #1 SMP Debian 4.6.2-2kali2 (2016-06-28)
I ran into this several years ago on a Debian build. In the .config file you copied from /boot , find and blank out (or comment out) the lines CONFIG_SYSTEM_TRUSTED_KEYS and CONFIG_MODULE_SIG_KEY . During the build you can use your own cert or just use a random one-time cert. Found the above in this thread .
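A non-interactive sketch of the same fix, run from the top of the kernel source tree (the symbol name SYSTEM_TRUSTED_KEYS is taken from mainline Kconfig and may vary between kernel versions, so treat it as an assumption):
scripts/config --set-str SYSTEM_TRUSTED_KEYS ""
Leaving CONFIG_MODULE_SIG_KEY at its default of certs/signing_key.pem makes the build generate a throwaway signing key automatically, matching the "random one time cert" option above.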
{ "source": [ "https://unix.stackexchange.com/questions/293642", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/177987/" ] }
293,647
I am trying to set up a script that will loop over a set of directories, and do one thing when it finds .jpg files, and another when it finds .nef files. The problem is, if a directory does not have .jpg files for example (or .nef) then the glob entry is no longer an expanded glob, but just a string. For example: my_dir="pictures/" ext="JPG" for f in "$my_dir"*."$ext"; do echo $f done if the my_dir folder has .JPG files in it, then they will be echoed correctly on the command line. pictures/one.JPG pictures/two.JPG However, if my_dir has no .JPG files, then the loop will enter for one iteration and echo: pictures/*.JPG how do I construct this so that if the glob has no matches, it does not enter the for loop?
This is normal and default behavior: if globbing fails to match any files/directories, the original globbing pattern is preserved. If you want to get back an empty result instead, you can set the nullglob option in your script as follows: $ shopt -s nullglob $ for f in "$my_dir"*."$ext"; do echo $f; done $ You can disable it afterwards with: $ shopt -u nullglob
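Applied to the loop from the question, a minimal sketch:
shopt -s nullglob
for f in "$my_dir"*."$ext"; do echo "$f"; done
shopt -u nullglob
If the script must stay POSIX (no shopt ), a common guard is to skip the literal pattern inside the loop: for f in "$my_dir"*."$ext"; do [ -e "$f" ] || continue; echo "$f"; done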
{ "source": [ "https://unix.stackexchange.com/questions/293647", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/177986/" ] }
293,775
I want to write a script that merges the contents of several .csv files into one .csv file, i.e. appends the columns of all the other files to the columns of the first file. I had tried doing so using a "for" loop but was not able to proceed with it. Does anyone know how to do this in Linux?
The simplest approach for achieving that would be typing the following command cat *.csv > combined.csv Note that this stacks the files one after another (row-wise). If you need the columns appended side by side, as described in the question, see the paste example below.
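A minimal sketch of the column-wise merge (assuming comma-separated files with matching row counts; the file names are placeholders):
paste -d, first.csv second.csv third.csv > combined.csv
or, for every .csv in the directory: paste -d, *.csv > combined.csv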
{ "source": [ "https://unix.stackexchange.com/questions/293775", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172771/" ] }
293,940
I'm making a script to install my theme. After it finishes installing, it will show the changelog followed by "Press any key to continue", so that users can read the changelog and then press any key to continue.
You can use the read command. If you are using bash : read -p "Press enter to continue" In other shells, you can do: printf "%s " "Press enter to continue" read ans As mentioned in the comments above, this command does actually require the user to press enter ; a solution that works with any key in bash would be: read -n 1 -s -r -p "Press any key to continue" Explanation by Rayne and wchargin -n defines the required character count to stop reading -s hides the user's input -r causes the string to be interpreted "raw" (without considering backslash escapes)
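A minimal sketch of how this might sit at the end of the install script (assuming bash; the changelog file name is a hypothetical placeholder):
cat CHANGELOG.txt
read -n 1 -s -r -p "Press any key to continue"
echo
The final echo just moves the cursor to a fresh line after the key press.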
{ "source": [ "https://unix.stackexchange.com/questions/293940", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/177727/" ] }
294,371
I started learning Bash a couple of days ago. I'm trying to capture the exit status of a grep expression in a variable like this: check=grep -ci 'text' file.sh and the output that I got is No command '-ic' found Should I do it with a pipe command?
Your command, check=grep -ci 'text' file.sh will be interpreted by the shell as "run the command -ci with the arguments text and file.sh , and set the variable check to the value grep in its environment". The shell stores the exit value of most recently executed command in the variable ? . You can assign its value to one of your own variables like this: grep -i 'PATTERN' file check=$? If you want to act on this value, you may either use your check variable: if [ "$check" -eq 0 ]; then # do things for success else # do other things for failure fi or you could skip using a separate variable and having to inspect $? all together: if grep -q -i 'pattern' file; then # do things (pattern was found) else # do other things (pattern was not found) fi (note the -q , it instructs grep to not output anything and to exit as soon as something matches; we aren't really interested in what matches here) Or, if you just want to "do things" when the pattern is not found: if ! grep -q -i 'pattern' file; then # do things (pattern was not found) fi Saving $? into another variable is only ever needed if you need to use it later, when the value in $? has been overwritten, as in mkdir "$dir" err=$? if [ "$err" -ne 0 ] && [ ! -d "$dir" ]; then printf 'Error creating %s (error code %d)\n' "$dir" "$err" >&2 exit "$err" fi In the above code snippet, $? will be overwritten by the result of the [ "$err" -ne 0 ] && [ ! -d "$dir" ] test. Saving it here is really only necessary if we need to display it and use it with exit .
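One more note, since the question used grep -c : with -c , grep prints a count of matching lines on stdout. If that count (rather than the exit status) is what you want in the variable, capture it with command substitution:
check=$(grep -ci 'text' file.sh)
if [ "$check" -gt 0 ]; then echo "found"; fi
This is distinct from $? , which still holds grep's exit status.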
{ "source": [ "https://unix.stackexchange.com/questions/294371", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/178555/" ] }
294,378
I'm trying to perform environment variable replacement through envsubst , but I want to only replace specific variables. From the docs I should be able to tell envsubst to only replace certain variables but I'm failing to be able to do that. For example, if I have a file containing: VAR_1=${VAR_1} VAR_2=${VAR_2} how should I execute envsubst so that it only replaces the reference to ${VAR_1} ?
Per the man page: envsubst [OPTION] [SHELL-FORMAT] If a SHELL-FORMAT is given, only those environment variables that are referenced in SHELL-FORMAT are substituted; otherwise all environment variables references occurring in standard input are substituted. Where SHELL-FORMAT strings are "strings with references to shell variables in the form $variable or ${variable} [...] The variable names must consist solely of alphanumeric or underscore ASCII characters, not start with a digit and be nonempty; otherwise such a variable reference is ignored." . Note that the format ${VAR:-default} is not supported. I mentioned HERE some alternatives that support it along with other features. Anyway, back to gettext envsubst : So, one has to pass the respective variable names to envsubst in a shell format string (obviously, they need to be escaped/quoted so as to be passed literally to envsubst ). Example: input file e.g. infile : VAR1=${VAR1} VAR2=${VAR2} VAR3=${VAR3} and some values like export VAR1="one" VAR2="two" VAR3="three" then running envsubst '${VAR1} ${VAR3}' <infile or envsubst '${VAR1},${VAR3}' <infile or envsubst '$VAR1 $VAR3' <infile outputs VAR1=one VAR2=${VAR2} VAR3=three Or, if you prefer backslash: envsubst \$VAR1,\$VAR2 <infile produces VAR1=one VAR2=two VAR3=${VAR3}
{ "source": [ "https://unix.stackexchange.com/questions/294378", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/178562/" ] }
294,386
I am following a tutorial in computational microbiology: William Lindstrom, Garrett M. Morris, Christoph Weber, and Ruth Huey (2008-01-29). Using AutoDock 4 for Virtual Screening . Scripps Research Institute. On page 10 there is ex01.csh which contains (original unindented, indentation added by StackExchange participants): foreach f (tmp*) echo $f set zid = `grep ZINC $f` if !(-e "$zid".mol2) then set filename = "$zid".mol2 else foreach n (`seq -w 1 99`) if !(-e "$zid"_"$n".mol2) then set filename = "$zid"_"$n".mol2 break endif end endif mv -v $f $filename end I want to run the above commands. I have been trying to figure it out for the last two days but have failed. Every time, at the first step, which is foreach f (tmp*) it says bash: syntax error near unexpected token '(' I know zero about Linux stuff, and am just following what I see in the tutorial. How can I fix my problem?
{ "source": [ "https://unix.stackexchange.com/questions/294386", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/178566/" ] }
294,486
I have a file called file.txt . How can I print the first line only using the grep command?
Although it's an unconventional application of grep, you can do it in GNU grep using grep -m1 "" file.txt It works because the empty expression matches anything, while -m1 causes grep to exit after the first match -m NUM, --max-count=NUM Stop reading a file after NUM matching lines.
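For comparison, the conventional command for this is head -n 1 file.txt ; the grep version above is mainly useful when grep is already required or part of a larger pipeline.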
{ "source": [ "https://unix.stackexchange.com/questions/294486", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/178651/" ] }
294,544
Disclaimerish thingy: I just went through the list of StackExchange sites for about 20 minutes trying to figure out where to post this. If you know any site more suitable, please move this question there. I'm posting this here because unix time got me thinking. So as we all know, there is unix time and there is UTC. Unix time just keeps on ticking, counting seconds – one second per second –, whereas UTC tries to keep time in the human-readable formats we use aligned with Earth's phase in its rotation. To do this, UTC inserts leap seconds from time to time. Since time is relative to the gravitational force the object experiencing time is exposed to, other kinds of acceleration, and relative speed, this leads to 2 questions. Let's get over the simple one first: Where is unix time measured? If Alice and Bob start out agreeing the current time is 1467932496.42732894722748 when they are at the same place (a second of course being defined as 9'192'631'770 cycles of radiation corresponding to the transition between two energy levels of the caesium-133 atom at rest and at 0 K), experience a twin paradox due to Alice living at sea level and Bob living high up in the mountains, or Alice living at the north pole and Bob living at the equator, they won't agree any more. So how is unix time defined precisely? You might not see the problem with UTC at first, because surely everyone can agree on when Earth completed a rotation, no matter whether they are on a mountain, at sea level, on the equator, or at the north pole. (This is of course ignoring continental plate movement, but I think we have that one figured out pretty well: with GPS it's possible to measure plate movement very precisely, and we can assume the stations to sit at set positions in our model.) There might be some time differences, but they don't accumulate. But a second is defined as 9'192'631'770 cycles of radiation corresponding to the transition between two energy levels of the caesium-133 atom at rest and at 0 K, and caesium-133 atoms don't care about Earth's rotation. So UTC decides where to insert a leap second, but there has to be a measured or predicted shift between the phase of Earth's rotation and the time measured somewhere by an atomic clock. Where is that somewhere?
Your headline question doesn't have a real answer; Unix time isn't a real timescale, and isn't "measured" anywhere. It's a representation of UTC, albeit a poor one because there are moments in UTC that it can't represent. Unix time insists on there being 86,400 seconds in every day, but UTC deviates from that due to leap seconds. As to your broader question, there are four important timescales of interest: UT1 (Universal Time), which is calculated by observatories around the world which measure the rotation of the Earth with respect to the fixed stars. With these observations and a little math, we get a more modern version of the old Greenwich Mean Time, which was based on the moment of solar noon at the Royal Observatory in Greenwich. Universal Time is calculated by an organization called the IERS (the International Earth Rotation and Reference Systems Service, formerly the International Earth Rotation Service). TAI (International Atomic Time), which is kept by hundreds of atomic clocks around the world, maintained by national standards bodies and such. The keepers of the clocks that contribute to TAI use time transfer techniques to steer their clocks towards each other, canceling out any small errors of individual clocks and creating an ensemble time; that ensemble is TAI, published by the International Bureau of Weights and Measures (BIPM), the stewards of the SI system of units. To answer your question about time dilation, TAI is defined to be atomic time at sea level (actually, at the geoid, which is a fancier version of the same idea), and each clock corrects for the effects of its own altitude. UTC (Coordinated Universal Time), which was set equal to ten seconds behind TAI on 1 January 1972, and since that date it ticks forward at exactly the same rate as TAI, except when a leap second is added or subtracted. The IERS makes the decision to announce a leap second in order to keep the difference within 0.9 seconds (in practice, within about 0.6 seconds; an added leap second causes the difference to go from −0.6 to +0.4). In theory, leap seconds can be both positive and negative, but because the rotation of the earth is slowing down compared to the standard established by SI and TAI, a negative leap second has never been necessary and probably never will be. Unix time , which does its best to represent UTC as a single number. Every Unix time that is a multiple of 86,400 corresponds to midnight UTC. Since not all UTC days are 86,400 seconds long, but all "Unix days" are, there is an irreconcilable difference that has to be patched over somehow. There's no Unix time corresponding to an added leap second. In practice, systems will either act as though the previous second occurred twice (with the unix timestamp jumping backwards one second, then proceeding forward again), or apply a technique like leap smearing that warps time for a longer period on either side of a leap second. In either case there's some inaccuracy, although at least the second approach is monotonic. In both cases, the amount of time that passes between two distant Unix timestamps a and b isn't equal to b − a ; it's equal to b − a plus the number of intervening leap seconds . Since UT1, TAI, UTC, and the IERS are all worldwide, multinational efforts, there is no single "where". IERS bulletins are published from the Paris Observatory, and the BIPM is also based in Paris, so that's one answer.
An organization that requires precise, traceable time might state their timebase as something like "UTC(USNO)", which means that their timestamps are in UTC and that they're derived from the time at the US Naval Observatory, but given the problems that I mentioned with Unix time, it's basically incompatible with that level of precision—anyone dealing with really precise time will have an alternative to Unix time.
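A worked example of the b − a caveat, using the leap second inserted at the end of 2016-12-31: the Unix timestamp for 2016-12-31 23:59:00 UTC is 1483228740 and for 2017-01-01 00:01:00 UTC it is 1483228860. The difference is 120, yet 121 SI seconds actually elapsed between those two moments, because 23:59:60 existed between them.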
{ "source": [ "https://unix.stackexchange.com/questions/294544", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/147785/" ] }
294,600
So I'm following a tutorial to install OTRS, which is an Open source Ticket Request System. In order to install, it requires 4 GB of swap space. Here are the commands I used: [root@ip-10-0-7-41 ~]# df -h Filesystem Size Used Avail Use% Mounted on /dev/xvda1 50G 14G 37G 27% / devtmpfs 478M 0 478M 0% /dev tmpfs 496M 0 496M 0% /dev/shm tmpfs 496M 13M 484M 3% /run tmpfs 496M 0 496M 0% /sys/fs/cgroup tmpfs 100M 0 100M 0% /run/user/1000 [root@ip-10-0-7-41 ~]# fallocate -l 4G /myswap [root@ip-10-0-7-41 ~]# ls -lh /myswap -rw-r--r--. 1 root root 4.0G Jul 8 08:44 /myswap [root@ip-10-0-7-41 ~]# chmod 600 /myswap [root@ip-10-0-7-41 ~]# mkswap /myswap Setting up swapspace version 1, size = 4194300 KiB no label, UUID=3656082a-148d-4604-96fb-5b4604fa5b2e [root@ip-10-0-7-41 ~]# swapon /myswap swapon: /myswap: swapon failed: Invalid argument You can see the Invalid argument error here. I tried many times in vain to enable it. Can someone please tell me how to fix this error? (I'm running CentOS 7 on an AWS EC2 instance.) [root@ip-10-0-7-41 ~]# df -T | awk '{print $1,$2,$NF}' | grep "^/dev" /dev/xvda1 xfs /
The problem with fallocate(1) is that it uses filesystem ioctls to make the allocation fast and efficient; the disadvantage is that it does not physically allocate the space, but the swapon(2) syscall requires real, physically allocated space. Reference : https://bugzilla.redhat.com/show_bug.cgi?id=1129205 I faced this issue earlier with my box too. So instead of using fallocate , I used dd as the link suggests sudo dd if=/dev/zero of=/myswap count=4096 bs=1MiB then proceeded with the chmod , mkswap & swapon commands. Bingo! It worked.
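For reference, the full sequence from the question with dd substituted in:
sudo dd if=/dev/zero of=/myswap bs=1MiB count=4096
sudo chmod 600 /myswap
sudo mkswap /myswap
sudo swapon /myswap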
{ "source": [ "https://unix.stackexchange.com/questions/294600", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/138782/" ] }
294,625
I have a command which outputs lots of data (say, strace with lots of syscalls, running for a few minutes). Is there any option (e.g. command wrapper or something similar) that would allow me to pause the output of the command (just the output on the screen, I don't mind the command running in the background), then unpause it after I take a look on its output?
You have three options: press control S to stop output, control Q to resume (this is called XON/XOFF) redirect your output to a pager such as less , e.g., strace date | less redirect your output to a file, e.g., strace -o foo date , and browse it later.
{ "source": [ "https://unix.stackexchange.com/questions/294625", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20334/" ] }
294,835
Is there an easy way to substitute/evaluate environment variables in a file? Like let's say I have a file config.xml that contains: <property> <name>instanceId</name> <value>$INSTANCE_ID</value> </property> <property> <name>rootPath</name> <value>/services/$SERVICE_NAME</value> </property> ...etc. I want to replace $INSTANCE_ID in the file with the value of the INSTANCE_ID environment variable, $SERVICE_NAME with the value of the SERVICE_NAME env var. I won't know a priori which environment vars are needed (or rather, I don't want to have to update the script if someone adds a new environment variable to the config file). Thanks!
You could use envsubst (part of gnu gettext ): envsubst < infile will replace the environment variables in your file with their corresponding value. The variable names must consist solely of alphanumeric or underscore ASCII characters, not start with a digit and be nonempty; otherwise such a variable reference is ignored. Some alternatives to gettext envsubst that support ${VAR:-default} and extra features: rust alternative go alternative node.js alternative To replace only certain environment variables, see this question.
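Applied to the config.xml in the question (the values below are hypothetical placeholders), keep in mind that envsubst only substitutes variables that are actually in the environment, i.e. exported:
export INSTANCE_ID=i-0123abcd SERVICE_NAME=myservice
envsubst < config.xml > config.resolved.xml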
{ "source": [ "https://unix.stackexchange.com/questions/294835", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/178917/" ] }
294,956
I installed tmux via apt-get and there is no .tmux.conf file in my home directory, even after I run tmux . I have been trying to follow a tmux tutorial whose first part involves modifying this file, and since I do not have this file I am stuck. How do I get the tmux conf file?
There should be several example configuration files in either /usr/share/doc/tmux/examples or /usr/share/tmux/ . You can copy any of those over to ~/.tmux.conf to test out. Alternatively, you could create a ~/.tmux.conf with the default settings by using this command from within tmux: tmux show -g > ~/.tmux.conf This command works with tmux version 1.8. In older versions of tmux , a bug regarding redirecting stdout to a file might require this command: tmux show -g | cat > ~/.tmux.conf More info can be found here .
{ "source": [ "https://unix.stackexchange.com/questions/294956", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47542/" ] }
295,005
On my system (Darwin 15.5.0), man(1) opens as follows: NAME man - format and display the on-line manual pages The file the page is formatted from, however, is clearly on disk: % man -w man /usr/share/man/man1/man.1 % file `man -w man` /usr/share/man/man1/man.1: troff or preprocessor input text So, "on-line" in this case does not mean "online," as in, "somewhere else accessible over the Internet." Does "on-line" just mean that my system is powered on? If so, why bother specifying that in the first place, i.e., isn't it obvious that I'm reading a page that the formatter processed? Or, when the description was written, was it a huge deal to have a manual on disk because most "manuals" then were paper volumes? Is this usage of "on-line," hyphen and all, still common in computing?
In contrast to a printed (hard-copy) manual, which you could read off-line (while not using a computer). The term dates back (at least) to time-sharing systems. Users may have had a terminal which could be used for typing text, punching paper tapes. But they were only able to use the computer when they were on-line (the "line" referring to the communications link from the terminal to the computer). Lots of English is that way: you likely use terms which on reflection you might not consider up-to-date.
{ "source": [ "https://unix.stackexchange.com/questions/295005", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/179054/" ] }
295,017
I run Xvfb with command: Xvfb :1 -screen 0 100x100x16 -fbdir /tmp And it's working fine. I can connect via VNC, and now under /tmp directory I have Xvfb_screen0 binary file. I thought it will act like /dev/fb0 so I tried to change its settings with fbset like: sudo fbset -fb /tmp/Xvfb_screen0 -xres 500 -yres 500 But the command finishes with error: ioctl FBIOGET_VSCREENINFO: Inappropriate ioctl for device Is there any way to change running Xvfb server resolution?
{ "source": [ "https://unix.stackexchange.com/questions/295017", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/179033/" ] }
295,241
I used this command to add the i386 arch: sudo dpkg --add-architecture i386 And then immediately after, without installing any packages, I tried to remove the i386 arch like so: sudo dpkg --remove-architecture i386 And I got the error: dpkg: error: cannot remove architecture 'i386' currently in use by the database Solutions I have seen so far involve removing i386 packages, but I haven't installed any, and the ones that are installed are vital to the functioning of the OS. What do I do? EDIT, PLEASE READ THE FOLLOWING TO AVOID DESTROYING YOUR OS: Turns out that 64-bit Linux OSes already include the i386 arch, so the command sudo dpkg --add-architecture i386 didn't really do anything.
Run dpkg --get-selections | awk '/i386/{print $1}' And then if happy with them being removed, run apt-get remove --purge `dpkg --get-selections | awk '/i386/{print $1}'` And then retry the dpkg --remove-architecture i386
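As a sanity check on either side of those steps, dpkg --print-foreign-architectures lists the foreign architectures dpkg still knows about; after the i386 packages are purged and the architecture removed, it should print nothing.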
{ "source": [ "https://unix.stackexchange.com/questions/295241", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/122397/" ] }
295,274
I'm trying to change a single word on a specific line in a file, but I'm having some trouble connecting all together. Basically, on one line in my file there is a keyword 'firmware_revision', and on this line (and only this line) I want to replace the word 'test' with the word 'production'. So I can do this: grep 'firmware_revision' myfile.py | sed 's/test/production' This will pick out the line I want and perform the substitution, but I can't figure out how to get this new line into the original file to replace the old line. I obviously cannot just redirect it back to the file, so what should I do? Even if I use temporaries, by using grep to get just the line I need I lose all of the other data in the file, so I can no longer just redirect it all to a temp file then replace the original with the temp. Edit - Someone asked for more information Lets say I have a file full of lines like this [ ('key_name1', str, 'value1', 'Description'), ('key_name2', str, 'value2', 'Description'), ('key_name3', str, 'value3', 'Description'), ('firmware_revision', str, 'my-firmware-name-test', 'Firmware revision name') ] now I want to write a script (ideally a one-liner) that will find the line that contains 'firmware_revision', and changes all instances of the word 'test' on that line to 'production'. The word 'test' might be in other places in that file and I do not want those changed. So to be clear, I want to change the above line to ('firmware_revision', str, 'my-firmware-name-production', 'Firmware revision name') How do I do this?
Try: sed -i.bak '/firmware_revision/ s/test/production/' myfile.py Here, /firmware_revision/ acts as a condition. It is true for lines that match the regex firmware_revision and false for other lines. If the condition is true, then the command which follows is executed. In this case, that command is a substitute command that replaces the first occurrence of test with production . In other words, the command s/test/production/ is executed only on lines which match the regex firmware_revision . All other lines pass through unchanged. By default, sed sends its output to standard out. You, however, wanted to change the file in place. So, we added the -i option. In particular, -i.bak causes the file to be changed in place with a back-up copy saved with a .bak extension. If you have decided that the command works for you and you want to live dangerously and not create a backup, then, with GNU sed (Linux), use: sed -i '/firmware_revision/ s/test/production/' myfile.py By contrast, on BSD (OSX), the -i option must have an argument. If you don't want to keep a backup, provide it with an empty argument. Thus, use: sed -i '' '/firmware_revision/ s/test/production/' myfile.py Edit In the edit to the question, the OP asks for every occurrence of test on the line to be replaced with production . In that case, we add the g option to the substitute command for a global (for that line) replacement: sed -i.bak '/firmware_revision/ s/test/production/g' myfile.py
{ "source": [ "https://unix.stackexchange.com/questions/295274", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/108328/" ] }
296,086
To find the PID of the process to kill, use: pgrep <process command> I then use the kill command to kill the PID returned by pgrep <process command> : kill <PID> Can these commands be combined into one, so I can kill the PID or PIDs returned by pgrep <process command> ? Or is there a method to kill multiple processes by command name? Something like: kill(pgrep <name of process>)
You can use pkill: pkill httpd You may also want to use command substitution (although this isn't as clear): kill $(pgrep command) And you may want to use xargs : pgrep command | xargs kill
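Note that pkill and pgrep match against the process name by default; add -f to match against the full command line instead, e.g. pkill -f 'python myscript.py' (the pattern here is a hypothetical example).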
{ "source": [ "https://unix.stackexchange.com/questions/296086", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65628/" ] }
296,100
Apparently I don't know all the output destinations that are available for use. I know about stdout ( &1 ) and stderr ( &2 ). However, after redirecting both descriptors, I sometimes still get some output in my console! The easiest example I can think of is GNU Parallel; each time I use it, I see a citation notice. Even when I do &2>1 > file , I still see the notice. And the same applies to emerge : when I run emerge and there are some problems, some information isn't printed to stdout or stderr, since I redirect them and it still gets through. I mostly solve these problems by using script , but I am still wondering what's causing this issue.
The syntax you used is wrong. cmd &2>1 >file will be split down as cmd & 2>1 >file This will: Run cmd as a background job with no redirections In a separate process (without a command!) will redirect stderr to a file literally called 1 and redirect stdout to file The syntax you want is: cmd >file 2>&1 The order of operations is important. This will: Redirect stdout to file Redirect stderr to &1 - ie the same filehandle as stdout The result is that both stderr and stdout will be redirected to file . In bash , a simpler non-standard (and so I don't recommend it, on portability grounds) syntax of cmd &> file does the same thing.
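A quick way to see why the order matters: ls /nonexistent > file 2>&1 leaves the error message in file , while ls /nonexistent 2>&1 > file prints the error to the terminal, because stderr was duplicated from the terminal's stdout before stdout was redirected to the file.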
{ "source": [ "https://unix.stackexchange.com/questions/296100", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/129998/" ] }
296,136
I have to delete particular files in each of several directories. Example directories: MKRUW, DKRUW, TKRUW. In each of these I need to enter the default subdirectory (e.g. MKRUW/default/ ) and delete the .dat files there.
{ "source": [ "https://unix.stackexchange.com/questions/296136", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/173191/" ] }
296,141
Many questions like 'How to type the double-quote char (")?' are being asked, and we don't want to clutter our community with the same answer (Type it as \" if not enclosed in ' s, " if enclosed in ' s.) So, the question is here. You can't type special chars into a terminal like normal ones, e.g. this command will fail: echo Updates (11) So, how to type in these chars in the terminal as if they were normal ones? !#$^&*?[](){}<>~;'"\|<space><tab><newline>
That very much depends on the shell. Check your shell manual for details. Also note that some characters are only special in some contexts. For instance, in most shells, * and ? are only special in list contexts, in POSIX or csh-like shells, ~ is only special at the beginning of a word or following some characters like : . Same for = in zsh . In some shells, [ is only special when matched (with some restrictions) by a ] . In some shells like bash or yash , special characters like blank token delimiters also vary with the locale. The quoting operators (to remove the special meaning of those characters) also vary greatly between shells. Bourne-like shells A summary for Bourne-like shells (that is the shells that have been known to be called sh on some system or another since the 80s): Bourne shell Special characters: "'&|;()^`<>$ , space, newline and tab are special in simple command lines when not quoted. # (except in early version) is special at the beginning of a line or following an unquoted space, tab or &|()^<>;` . { and } are only special in that they are shell keywords (so only words in command position). *?[ are special as globbing operators, so only in list contexts. In the case of [ , it's [...] that is the globbing operator, either [ or ] only need to be quoted to remove the special meaning. = is special when in contexts where it's treated as an assignment operator. That is, in a simple command, for all words that do not follow an argument (except after set -k ). Quoting operators \ quotes all special characters except newline ( \<newline> is a way to continue a long logical line onto the next physical line, so that sequence is removed). Note that backticks add extra complexity as within them, \ is used first to escape the closing backtick and help the parser. Inside double quotes, \ may only be used to escape itself, " , $ and ` ( \<newline> is still a line-continuation). Inside a here-document, same except for " . \ is the only way to escape characters inside here documents. "..." double-quotes escape all characters but itself, \ , $ and ` . '...' single quotes escape all characters but itself. POSIX shells POSIX shells behave mostly like the Bourne shell, except that: ^ is no longer a special character ~ is special in some contexts { is allowed to be special so should be quoted. ksh like POSIX except that: {string} is special if string contains an unquoted , (or .. in some cases and with some versions). ksh93 has an additional special quoting operator: $'...' with complex rules. That operator is also found (with some variations) in bash , zsh , mksh and FreeBSD and busybox sh . ksh93 also has a $"..." quoting operator that works like "..." except that the string is subject to localisation (could be configured so it translates to the language of the user). mksh ignores the $ in $"..." . since ksh93r , ksh93 supports csh-style history expansion (not enabled by default) with -H / -o histexpand in interactive shells which makes ^ at the beginning of commands and ! special. ! is then special in some contexts (not when followed by space or TAB nor in here documents) and is not escaped by double quotes. Only backslash (not within double-quotes where \ removes ! its special meaning but is not otherwise removed) and single quotes escape it. bash like ksh93 but: in single-byte character locales, all blank (according to the locale) characters are considered as delimiters (like space or tab). 
In effect, that means you should quote all bytes with the 8th bit set in case they may be a blank character in some locale. csh history expansion is enabled by default in interactive instances, with the same notes as on ksh93 above except that in newer versions of bash , ! is also not special sometimes, when followed by a " . like in csh, % at the start of a command is used to manipulate jobs. %1 puts job number one in foreground instead of running the %1 command. Same with %1 & to put it in background... zsh like ksh93 but: same note as for bash for csh history expansion, except that backslash can be used to escape ! inside double quotes like in csh. = is special as the first character of a word ( =ls expands to /bin/ls ). same note as for bash about % in command position. { and } can also open and close command groups when not delimited (as in {echo text} works like Bourne's { echo text;} ). except for [ alone, [ needs to be quoted even if not closed with a ] . With the extendedglob option enabled, # , ^ and ~ are globbing operators. With the braceccl option, {non-empty-string} is special. $"..." is not supported. as a special quirk, ? is not special when following a % (even quoted or expanded) at the start of a word (to allow the %?name job specification) a rcquotes option (not enabled by default) allows one to enter single quotes as '' inside single quotes à la rc (see below). yash like POSIX except that: all blank characters are considered as delimiters. With the brace-expand option, implements zsh-style brace expansion. same note as for bash and zsh about % in command position (except when in POSIX mode). For all shells, there are some special contexts where quoting works differently. We've already mentioned here documents and backticks, but there's also [[...]] in ksh and a few other shells, POSIX $((...)) , case constructs... Also note that quoting can have other side-effects when it comes to expansions (with double-quotes), or when applied to here document delimiters. It also disables reserved words and affects alias expansion. Summary In Bourne-like shells, !#$^&*?[(){}<>~;'"`|= , SPC, TAB, NEWLINE and some bytes with the 8th bit set are or may be special (at least in some contexts). To remove the special meaning so they are treated literally, you use quoting. Use: '...' to remove the special meaning of every character: printf '%s\n' '\/\/ Those $quoted$ strings are passed literally as single arguments (without the enclosing quotes) to `printf`' \ to remove the special meaning of one character only: printf '<%s>\n' foo bar\ baz #comment Above, only the space character preceded by a \ is passed literally to printf . The other ones are treated by the shell as token delimiters. use "..." to quote characters while still allowing parameter expansion ( $var , $# , ${foo#bar} ...), arithmetic expansion ( $((1+1)) , also $[1+1] in some shells) and command substitution ( $(...) or the old form `...` . Actually, most of the time, you do want to put those expansions inside double quotes in any case . You can use \ within "..." to remove the special meaning of the characters that are still special (but only them). if the string contains a ' character, you can still use '...' for the rest and use other quoting mechanisms that can quote ' like "'" or \' or (where available) $'\'' : echo 'This is "tricky", isn'\''t it?' Use the modern $(...) form of command substitution.
Only use the old `...` for compatibility with the Bourne shell, that is to very old system, and only in variable assignments, as in don't use: echo "`echo "foo bar"`" Which won't work with the Bourne shell or AT&T versions of ksh. Or: echo "`echo \"foo bar\"`" Which will work with Bourne and AT&T ksh, but not with yash ( 2020 edit: only in version 2.41 and earlier though, it's since been changed in 2.42 / bug report / commit ), but use: var=`echo "foo bar"`; echo "$var" which will work with all. Nesting them portably with double quotes is also impossible, so again, use variables. Also beware of the special backslash processing: var=`printf '%s\n' '\\'` Will store only one backslash inside $var , because there's an extra level of backslash processing (for \ , `, and $ (and also " when quoted except in yash )) within backticks so you need either var=`printf '%s\n' '\\\\'` or var=`printf '%s\n' '\\\' instead. Csh family csh and tcsh have a significantly different syntax, though there is still a lot in common with the Bourne shell as they share a common heritage. Special characters: "'&|;()^`<>$ , space, newline and tab are special everywhere when not quoted. # (csh is the shell that introduced # as the comment leader) is special at the beginning of a script or following an unquoted space, tab or newline. *?[ are special as globbing operators so in list contexts {anything} is special except for the special case of a standalone {} (csh is the shell that introduced brace expansion). ! and ^ are special as part of history expansion (again, a csh invention), and quoting rules are special. ~ (tilde expansion also a csh invention) is special in some contexts. Like in bash, zsh, yash, % in command position is used to manipulate jobs (again a csh invention). Quoting operators They are the same as for the Bourne shell, but the behaviour differs. tcsh behaves like csh from the syntax point of view, you'll find that many versions of csh have nasty bugs. Get the latest version of tcsh to get a roughly working version of csh. \ escapes a single character except newline (same as for the Bourne shell). It's the only quoting operator that can escape ! . \<newline> doesn't escape it but transforms it from a command separator to a token separator (like space) "..." escapes all characters except itself, $ , ` , newline and ! . Contrary to the Bourne shell, you can't use \ to escape $ and ` inside "..." , but you can use \ to escape ! or newline (but not itself except when before a ! or newline). A literal ! is "\!" and a literal \! is "\\!" . '...' escapes all characters except itself, ! and newline. Like for double quotes, ! and newline can be escaped with backslash. command substitution is only via the `...` syntax and can hardly be used reliably. variable substitution is also pretty badly designed and error prone. A $var:q operator helps to write more reliable code involving variables. Summary Stay away from csh if you can. If you can't use: single quotes to quote most characters. ! and newline still need a \ . \ can escape most characters "..." can allow some expansions within it, but that's pretty buggy if they embed newline and/or backslash characters, best may be to use single quotes only and $var:q for variable expansion. You'll need to use loops if you want to join elements of an array reliably. rc family rc is the plan9 shell. plan9 code has now been released as FLOSS and its user space software including rc been ported to Linux. 
A clone of rc for Unix was also written in the early 90s by Byron Rakitzis, from which es and akanga derived. That's a shell with a much cleaner and better syntax and the one everyone would be using if we weren't stuck with Bourne-like shells for backward compatibility. rc / akanga Special characters #;&|^$=`'{}()<> , SPC, TAB and NEWLINE are always special when not quoted. *?[ are globbing operators. Quoting operator '...' is the only quoting operator. A literal ' is written with '' within single quotes as in: echo 'it''s so simple isn''t it?' es es could be seen as an experimental shell based on rc . It has a few differences though. The one of interest for this Q/A is that \ is also a quoting operator (that quotes all special characters except newline) and can also be used to introduce escape sequences like \n for newline, \b for backslash... fish fish is a relative newcomer (circa 2005), is primarily intended for interactive use and also has a significantly different syntax from other shells. special characters "'\()$%{}^<>;&| always special when not quoted (note the % (for pid expansion) as a significant difference from other shells, and ` is not special) # (comment) special when following unquoted space, tab, newline or ;&|^<> *? (but not [...] ) globbing operators Quoting operators \ quotes a single special character except newline, but beware it also doubles as a C escape sequence ( \n , \b ...) introducer. IOW, \n is not a quoted n but a newline.
{ "source": [ "https://unix.stackexchange.com/questions/296141", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81848/" ] }
296,297
Passing a password on the command line (to a child process started from my program) is known to be insecure (because it can be seen even by other users with the ps command). Is it OK to pass it as an environment variable instead? What else can I use to pass it? (Except for environment variables.) The easiest solution seems to be a pipe, but this easiest solution is not easy. I program in Perl.
Process arguments are visible to all users, but the environment is only visible to the same user ( at least on Linux , and I think on every modern unix variant). So passing a password through an environment variable is safe. If someone can read your environment variables, they can execute processes as you, so it's game over already. The contents of the environment is at some risk of leaking indirectly, for example if you run ps to investigate something and accidentally copy-paste the result including confidential environment variables in a public place. Another risk is that you pass the environment variable to a program that doesn't need it (including children of the process that needs the password) and that program exposes its environment variables because it didn't expect them to be confidential. How bad these risks of secondary leakage are depends on what the process with the password does (how long does it run? does it run subprocesses?). It's easier to ensure that the password won't leak accidentally by passing it through a channel that is not designed to be eavesdropped, such as a pipe. This is pretty easy to do on the sending side. For example, if you have the password in a shell variable, you can just do echo "$password" | theprogram if theprogram expects the password on its standard input. Note that this is safe because echo is a builtin; it would not be safe with an external command since the argument would be exposed in ps output. Another way to achieve the same effect is with a here document: theprogram <<EOF $password EOF Some programs that require a password can be told to read it from a specific file descriptor. You can use a file descriptor other than standard input if you need standard input for something else. For example, with gpg : get-encrypted-data | gpg --passphrase-fd 3 --decrypt … 3<<EOP >decrypted-data $password EOP If the program can't be told to read from a file descriptor but can be told to read from a file, you can tell it to read from a file descriptor by using a file name like `/dev/fd/3. theprogram --password-from-file=/dev/fd/3 3<<EOF $password EOF In ksh, bash or zsh, you can do this more concisely through process substitution. theprogram --password-from-file=<(echo "$password")
{ "source": [ "https://unix.stackexchange.com/questions/296297", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9158/" ] }
296,299
I am trying to run my application directly from the linux kernel (without using cron or something similar). I've tried using ./init/init.c , but it runs too early: $ dmesg ... [ 0.605657] TEST!!! ... My idea is to launch an application after successful user login, but I cannot find an appropriate function to use.
{ "source": [ "https://unix.stackexchange.com/questions/296299", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/179952/" ] }
296,347
Here's what I did on Debian Jessie: install cron via apt-get install cron put a backup_crontab file in /etc/cron.d/ However, the task never runs. Here are some outputs: /# crontab -l no crontab for root /# cd /etc/cron.d && ls backup_crontab /etc/cron.d# cat backup_crontab 0,15,30,45 * * * * /backup.sh >/dev/null 2>&1 Is there something to do to activate a particular crontab, or to activate the cron "service" itself?
Files in /etc/cron.d need to also list the user that the job is to be run under. i.e. 0,15,30,45 * * * * root /backup.sh >/dev/null 2>&1 You should also ensure the permissions and owner:group are set correctly ( -rw-r--r-- and owned by root:root )
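Concretely, for the file in the question: chmod 644 /etc/cron.d/backup_crontab and chown root:root /etc/cron.d/backup_crontab . Also make sure /backup.sh itself is executable ( chmod +x /backup.sh ), since the cron line runs it directly.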
{ "source": [ "https://unix.stackexchange.com/questions/296347", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/180052/" ] }
296,596
I have a script that generates some output. I want to check that output for any IP address like 159.143.23.12 134.12.178.131 124.143.12.132 if (IPs are found in <file>) then // bunch of actions // else // bunch of actions // Is fgrep a good idea? I have bash available.
Yes, you have lots of options/tools to use. I just tried this, and it works: ifconfig | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b" So you can use grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b" to grep the IP addresses from your output.
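Wrapped into the if/else shape from the question ( -q makes grep silent and just set an exit status; the \b word boundaries assume GNU grep):
if grep -qE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b" file; then
    # bunch of actions
else
    # bunch of actions
fi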
{ "source": [ "https://unix.stackexchange.com/questions/296596", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/152598/" ] }
296,838
eval and exec are both built-in commands of bash(1) that execute commands. I also see exec has a few options, but is that the only difference? What happens to their context?
eval and exec are completely different beasts. (Apart from the fact that both will run commands, but so does everything you do in a shell.) $ help exec exec: exec [-cl] [-a name] [command [arguments ...]] [redirection ...] Replace the shell with the given command. What exec cmd does, is exactly the same as just running cmd , except that the current shell is replaced with the command, instead of a separate process being run. Internally, running say /bin/ls will call fork() to create a child process, and then exec() in the child to execute /bin/ls . exec /bin/ls on the other hand will not fork, but just replaces the shell. Compare: $ bash -c 'echo $$ ; ls -l /proc/self ; echo foo' 7218 lrwxrwxrwx 1 root root 0 Jun 30 16:49 /proc/self -> 7219 foo with $ bash -c 'echo $$ ; exec ls -l /proc/self ; echo foo' 7217 lrwxrwxrwx 1 root root 0 Jun 30 16:49 /proc/self -> 7217 echo $$ prints the PID of the shell I started, and listing /proc/self gives us the PID of the ls that was ran from the shell. Usually, the process IDs are different, but with exec the shell and ls have the same process ID. Also, the command following exec didn't run, since the shell was replaced. On the other hand: $ help eval eval: eval [arg ...] Execute arguments as a shell command. eval will run the arguments as a command in the current shell. In other words eval foo bar is the same as just foo bar . But variables will be expanded before executing, so we can execute commands saved in shell variables: $ unset bar $ cmd="bar=foo" $ eval "$cmd" $ echo "$bar" foo It will not create a child process, so the variable is set in the current shell. (Of course eval /bin/ls will create a child process, the same way a plain old /bin/ls would.) Or we could have a command that outputs shell commands. Running ssh-agent starts the agent in the background, and outputs a bunch of variable assignments, which could be set in the current shell and used by child processes (the ssh commands you would run). Hence ssh-agent can be started with: eval $(ssh-agent) And the current shell will get the variables for other commands to inherit. Of course, if the variable cmd happened to contain something like rm -rf $HOME , then running eval "$cmd" would not be something you'd want to do. Even things like command substitutions inside the string would be processed, so one should really be sure that the input to eval is safe before using it. Often, it's possible to avoid eval and avoid even accidentally mixing code and data in the wrong way.
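One more exec idiom worth knowing: with no command at all, exec applies its redirections to the current shell, so exec > logfile 2>&1 near the top of a script sends all of the script's subsequent output to logfile .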
{ "source": [ "https://unix.stackexchange.com/questions/296838", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21240/" ] }
296,967
I made a backup to an NTFS drive, and well, this backup really proved necessary. However, the NTFS drive messed up permissions. I'd like to restore them to normal w/o manually fixing each and every file. One problem is that suddenly all my text files gained execute permissions, which is wrong ofc. So I tried: sudo chmod -R a-x folder\ with\ restored\ backup/ But it is wrong as it removes the x permission from directories as well which makes them unreadable. What is the correct command in this case?
If you are fine with setting the execute permissions for everyone on all folders: chmod -R -x+X -- 'folder with restored backup' The -x removes execute permissions for all The +X will add execute permissions for all, but only for directories. See Stéphane Chazelas's answer for a solution that uses find to really not touch folders, as requested.
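If you want to be strict and only ever touch regular files, a find-based sketch along these lines should also work (adjust the path to yours): find 'folder with restored backup' -type f -exec chmod a-x {} + This removes the execute bit from regular files only and never modifies the directories' own permission bits.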
{ "source": [ "https://unix.stackexchange.com/questions/296967", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112145/" ] }
297,006
I have a collection of files (*.zip, *.txt, *.tar.gz, *.doc, etc.). These files reside within a path. I want to find all the files (*.txt), then copy only the text files that contain specific words (e.g. LINUX/UNIX). I ran the following: find . -name "*.txt" | grep 'LINUX/UNIX' This command was able to find all the text files, then "grep" filtered the resultant text files by listing only the text files that contain 'LINUX/UNIX'. How can I copy these final files (i.e. the text files that contain 'LINUX/UNIX') to a specific path of choice? I tried to apply xargs: find . -name "*.txt" | grep 'LINUX/UNIX' | xargs cp <to a path> But it didn't work.
Try: grep -rl --null --include '*.txt' LINUX/UNIX . | xargs -0r cp -t /path/to/dest Because this command uses NUL-separation, it is safe for all file names, including those with difficult names that include blanks, tabs, or even newlines. The above requires GNU cp. For MacOS/FreeBSD, try: grep -rl --null --include '*.txt' LINUX/UNIX . | xargs -0 sh -c 'cp "$@" /path/to/dest' sh How it works: grep options and arguments: -r tells grep to search recursively through the directory structure. (On FreeBSD, -r will follow symlinks into directories. This is not true of either OS/X or recent versions of GNU grep.) --include '*.txt' tells grep to only return files whose names match the glob *.txt (including hidden ones like .foo.txt or .txt). -l tells grep to only return the names of matching files, not the match itself. --null tells grep to use NUL characters to separate the file names. (--null is supported by grep under GNU/Linux, MacOS and FreeBSD but not OpenBSD.) LINUX/UNIX tells grep to look only for files whose contents include the regex LINUX/UNIX. The final . tells grep to search in the current directory. You can omit it in recent versions of GNU grep, but then you'd need to pass a -- option terminator to cp to guard against file names that start with -. xargs options and arguments: -0 tells xargs to expect NUL-separated input. -r tells xargs not to run the command unless at least one file was found. (This option is not needed on either BSD or OSX and is not compatible with OSX's xargs.) cp -t /path/to/dest copies the files to the target directory. (-t requires GNU cp.)
{ "source": [ "https://unix.stackexchange.com/questions/297006", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
297,569
I have a Windows 10 host OS where I have installed VMware Workstation 12 Player. I have Xubuntu as a guest OS (virtual machine). The complication is: the text is too small in the guest OS and almost unreadable. The steps that I have already taken to rectify the problem are given below: I have already installed VMware Tools (which is confirmed by hovering on Manage -> Reinstall VMware Tools). I have tried to manually set the resolution in VMware before starting the virtual machine (by manually changing it to 640 by 480 and then to other settings). In VMware Workstation 12 Player I cannot see an option to stretch the guest OS, but I have tried to stretch the guest desktop from within the guest OS. Note: I am using a Dell XPS 15 with a 4K UHD display. Any help in this regard is highly appreciated. If I am unable to explain anything, please let me know; I can provide more details.
It worked for me too on an HP Spectre 4K laptop (Windows 10): Right-click on the VMware Player desktop shortcut icon and click Properties. Move to the Compatibility tab. Check the option "Override high DPI scaling behavior" and select "System (Enhanced)" under "Scaling performed by:". Apply and restart the VM. It should work. Got a result after 5 hours spent on the web.
{ "source": [ "https://unix.stackexchange.com/questions/297569", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/180963/" ] }
297,686
I want to use sed to replace anything in a string between the first AB and the first occurrence of AC (inclusive) with XXX. For example, I have this string (this string is for a test only): ssABteAstACABnnACss and I would like output similar to this: ssXXXABnnACss. I did this with perl: $ echo 'ssABteAstACABnnACss' | perl -pe 's/AB.*?AC/XXX/' ssXXXABnnACss but I want to implement it with sed. The following (using the Perl-compatible regex) does not work: $ echo 'ssABteAstACABnnACss' | sed -re 's/AB.*?AC/XXX/' ssXXXss
Sed regexes match the longest match. Sed has no equivalent of non-greedy. What we want to do is match (1) AB, followed by (2) any amount of anything other than AC, followed by (3) AC. Unfortunately, sed can't do #2, at least not for a multi-character regular expression. Of course, for a single-character regular expression such as @ (or even [123]), we can do [^@]* or [^123]*. And so we can work around sed's limitations by changing all occurrences of AC to @ and then searching for AB, followed by any number of anything other than @, followed by @, like this: sed 's/AC/@/g; s/AB[^@]*@/XXX/; s/@/AC/g' The last part changes unmatched instances of @ back to AC. But this is a reckless approach, because the input could already contain @ characters; by matching them, we could get false positives. However, since no shell variable will ever have a NUL (\x00) character in it, NUL is likely a good character to use in the above work-around instead of @: $ echo 'ssABteAstACABnnACss' | sed 's/AC/\x00/g; s/AB[^\x00]*\x00/XXX/; s/\x00/AC/g' ssXXXABnnACss The use of NUL requires GNU sed. (To make sure that GNU features are enabled, the user must not have set the shell variable POSIXLY_CORRECT.) If you are using sed with GNU's -z flag to handle NUL-separated input, such as the output of find ... -print0, then NUL will not be in the pattern space and NUL is a good choice for the substitution here. Although NUL cannot be in a bash variable, it is possible to include it in a printf command. If your input string can contain any character at all, including NUL, then see Stéphane Chazelas' answer, which adds a clever escaping method.
{ "source": [ "https://unix.stackexchange.com/questions/297686", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66727/" ] }
297,758
In the *nix world, is there a way for a shell script to have information about which program executed it? Example: /path/to/script1 /path/to/script_xyz In this imaginary scenario, script_xyz would have path information ( /path/to/script1 ) or the process PID of the entity that executed it. Note: I'm curious about different solutions and approaches; I don't expect exactly this to be actually possible.
There's often confusion between process forking and execution. When you run, at the prompt of a bash shell, $ sh -c 'exec env ps' the process P1 issuing that $ prompt is currently running bash code. That bash code forks a new process P2 that executes /bin/sh, which then executes /usr/bin/env, which then executes /bin/ps. So P2 has in turn executed code of bash, sh, env and ps. ps (or any other command, like a script we would use instead here) has no way to know that it has been executed by the env command. All it can do is find out what its parent process id is, which in this case would be either P1, or 1 if P1 has died in the interval, or on Linux another process that has been designated as a subreaper instead of 1. It can then query the system for what command that process is currently running (like with readlink /proc/<pid>/exe on Linux) or what arguments were passed to the last command it executed (like with ps -o args= -p <pid>). If you want your script to know what invoked it, a reliable way would be to have the invoker tell it. That could be done, for instance, via an environment variable. For instance, script1 could be written as: #! /bin/sh - INVOKER=$0 script2 & And script2: #! /bin/sh - printf '%s\n' "I was invoked by $INVOKER" # and in this case, we'll probably find the parent process is 1 # (if not now, at least one second later) as script1 exited just after # invoking script2: ps -fp "$$" sleep 1 ps -fp "$$" exit $INVOKER will (generally) contain a path to script1. In some cases, it may be a relative path though, and the path will be relative to the current working directory at the time script1 started. So if script1 changes the current working directory before calling script2, script2 will get wrong information as to what called it. So it may be preferable to make sure $INVOKER contains an absolute path (preferably keeping the basename), like by writing script1 as: #! /bin/sh - mypath=$( mydir=$(dirname -- "$0") && cd -P -- "$mydir" && pwd -P) && mypath=$mypath/$(basename -- "$0") || mypath=$0 ... some code possibly changing the current working directory INVOKER=$mypath script2 In POSIX shells, $PPID will contain the pid of the parent of the process that executed the shell at the time of that shell's initialisation. After that, as seen above, the parent process may change if the process of id $PPID dies. zsh, in the zsh/system module, can query the current parent pid of the current (sub-)shell with $sysparams[ppid]. In POSIX shells, you can get the current ppid of the process that executed the interpreter (assuming it's still running) with ps -o ppid= -p "$$". With bash, you can get the ppid of the current (sub-)shell with ps -o ppid= -p "$BASHPID".
{ "source": [ "https://unix.stackexchange.com/questions/297758", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68350/" ] }
297,792
After some very quick research, it seems Bash is a Turing-complete language. I wonder why Bash is used almost exclusively to write relatively simple scripts. Since a Bash shell comes with Linux, you can run shell scripts without any external interpreter or compiler, as required for other popular computer languages. This is a huge advantage that could compensate for the mediocrity of the language itself in some cases. So, is there a limit to how complex such programs can get? Is pure Bash used to write complex programs? Is it possible to write, say, a file compressor/decompressor in pure Bash? A compiler? A simple video game? Is it so sparsely used just because there are only very limited debugging tools?
it seems Bash is a Turing-complete language The concept of Turing completeness is entirely separate from many other concepts useful in a language for programming in the large: usability, expressiveness, understandability, speed, etc. If Turing-completeness were all we required, we wouldn't have any programming languages at all, not even assembly language. Computer programmers would all just write in machine code, since our CPUs are also Turing-complete. why is Bash used almost exclusively to write relatively simple scripts? Large, complex shell scripts — such as the configure scripts output by GNU Autoconf — are atypical for many reasons: Until relatively recently, you couldn't count on having a POSIX-compatible shell everywhere. Many systems, particularly older ones, do technically have a POSIX-compatible shell somewhere on the system, but it may not be in a predictable location like /bin/sh. If you're writing a shell script and it has to run on many different systems, how then do you write the shebang line? One option is to go ahead and use /bin/sh, but choose to restrict yourself to the pre-POSIX Bourne shell dialect in case it gets run on such a system. Pre-POSIX Bourne shells don't even have built-in arithmetic; you have to call out to expr or bc to get that done. Even with a POSIX shell, you're missing out on associative arrays and other features we've expected to find in Unix scripting languages since Perl first became popular in the early 1990s. That fact of history means there is a decades-long tradition of ignoring many of the powerful features in modern Bourne family shell script interpreters purely because you can't count on having them everywhere. This still continues to this day, in fact: Bash didn't get associative arrays until version 4, but you might be surprised how many systems still in use are based on Bash 3. Apple still ships Bash 3 with macOS in 2017 — apparently for licensing reasons — and Unix/Linux servers often run all but untouched in production for a very long time, so you might have a stable old system still running Bash 3, such as a CentOS 5 box. If you have such systems in your environment, you can't use associative arrays in shell scripts that have to run on them. If your answer to that problem is that you only write shell scripts for "modern" systems, you then have to cope with the fact that the last common reference point for most Unix shells is the POSIX shell standard, which is largely unchanged since it was introduced in 1989. There are many different shells based on that standard, but they've all diverged to varying degrees from that standard. To take associative arrays again, bash, zsh, and ksh93 all have that feature, but there are multiple implementation incompatibilities. Your choice, then, is to only use Bash, or only use Zsh, or only use ksh93. If your answer to that problem is, "so just install Bash 4," or ksh93, or whatever, then why not "just" install Perl or Python or Ruby instead? That is unacceptable in many cases; defaults matter. None of the Bourne family shell scripting languages support modules. The closest you can come to a module system in a shell script is the . command — a.k.a. source in more modern Bourne shell variants — which fails on multiple levels relative to a proper module system, the most basic of which is namespacing. Regardless of programming language, human understanding starts to flag when any single file in a larger overall program exceeds a few thousand lines.
The very reason we structure large programs into many files is so that we can abstract their contents to a sentence or two at most. File A is the command line parser, file B is the network I/O pump, file C is the shim between library Z and the rest of the program, etc. When your only method for assembling many files into a single program is textual inclusion, you put a limit on how large your programs can reasonably grow. For comparison, it would be like if the C programming language had no linker, only #include statements. Such a C-lite dialect would not need keywords such as extern or static. Those features exist to allow modularity. POSIX doesn't define a way to scope variables to a single shell script function, much less to a file. This effectively makes all variables global, which again hurts modularity and composability. There are solutions to this in post-POSIX shells — certainly in bash, ksh93 and zsh at least — but that just brings you back to point 1 above. You can see the effect of this in style guides on GNU Autoconf macro writing, where they recommend that you prefix variable names with the name of the macro itself, leading to very long variable names purely in order to reduce the chance of collision to acceptably near zero. Even C is better on this score, by a mile. Not only are most C programs written primarily with function-local variables, C also supports block scoping, allowing multiple blocks within a single function to reuse variable names without cross-contamination. Shell programming languages have no standard library. It is possible to argue that a shell scripting language's standard library is the contents of PATH, but that just says that to get anything of consequence done, a shell script has to call out to another whole program, probably one written in a more powerful language to begin with. Neither is there a widely-used archive of shell utility libraries as with Perl's CPAN. Without a large available library of third-party utility code, a programmer must write more code by hand, so she is less productive. Even ignoring the fact that most shell scripts rely on external programs typically written in C to get anything useful done, there's the overhead of all those pipe() → fork() → exec() call chains. That pattern is fairly efficient on Unix, compared to IPC and process launching on other OSes, but here it's effectively replacing what you'd do with a subroutine call in another scripting language, which is far more efficient still. That puts a serious cap on the upper limit of shell script execution speed. Shell scripts have little built-in ability to increase their performance via parallel execution. Bourne shells have &, wait and pipelines for this, but that's largely only useful for composing multiple programs, not for achieving CPU or I/O parallelism. You're not likely to be able to peg the cores or saturate a RAID array solely with shell scripting, and if you do, you could probably achieve much higher performance in other languages. Pipelines in particular are a weak way to increase performance via parallel execution: a pipeline only lets two programs run in parallel, and one of the two will likely be blocked on I/O to or from the other at any given point in time. There are latter-day ways around this, such as xargs -P and GNU parallel, but this just devolves to point 4 above.
With effectively no built-in ability to take full advantage of multi-processor systems, shell scripts are always going to be slower than a well-written program in a language that can use all the processors in the system. To take that GNU Autoconf configure script example again, doubling the number of cores in the system will do little to improve the speed at which it runs. Shell scripting languages don't have pointers or references. This prevents you from doing a bunch of things easily done in other programming languages. For one thing, the inability to refer indirectly to another data structure in the program's memory means you're limited to the built-in data structures. Your shell may have associative arrays, but how are they implemented? There are several possibilities, each with different tradeoffs: red-black trees, AVL trees, and hash tables are the most common, but there are others. If you need a different set of tradeoffs, you're stuck, because without references, you don't have a way to hand-roll many types of advanced data structures. You're stuck with what you were given. Or, it may be the case that you need a data structure that doesn't even have an adequate alternative built into your shell script interpreter, such as a directed acyclic graph, which you might need in order to model a dependency graph. I've been programming for decades, and the only way I can think of to do that in a shell script would be to abuse the file system, using symlinks as faux references. That's the sort of solution you get when you rely merely on Turing-completeness, which tells you nothing about whether the solution is elegant, fast, or easy to understand. Advanced data structures are merely one use for pointers and references. There are piles of other applications for them, which simply can't be done easily in a Bourne family shell scripting language. I could go on and on, but I think you're getting the point here. Simply put, there are many more powerful programming languages for Unix type systems. This is a huge advantage, that could compensate for the mediocrity of the language itself in some cases. Sure, and that's precisely why GNU Autoconf uses a purposely-restricted subset of the Bourne family of shell script languages for its configure script outputs: so that its configure scripts will run pretty much everywhere. You will probably not find a larger group of believers in the utility of writing in a highly-portable Bourne shell dialect than the developers of GNU Autoconf, yet their own creation is written primarily in Perl, plus some m4, and only a little bit of shell script; only Autoconf's output is a pure Bourne shell script. If that doesn't beg the question of how useful the "Bourne everywhere" concept is, I don't know what will. So, is there a limit to how complex such programs can get? Technically speaking, no, as your Turing-completeness observation suggests. But that is not the same thing as saying that arbitrarily-large shell scripts are pleasant to write, easy to debug, or fast to execute. Is it possible to write, say, a file compressor/decompressor in pure bash? "Pure" Bash, without any calls out to things in the PATH? The compressor is probably doable using echo and hex escape sequences, but it would be fairly painful to do. The decompressor may be impossible to write that way due to the inability to handle binary data in shell. You'd end up calling out to od and such to translate binary data to text format, shell's native way of handling data.
Once you start talking about using shell scripting the way it was intended, as glue to drive other programs in the PATH, the doors open up, because now you're limited only to what can be done in other programming languages, which is to say you don't have limits at all. A shell script that gets all of its power by calling out to other programs in the PATH doesn't run as fast as monolithic programs written in more powerful languages, but it does run. And that's the point. If you need a program to run fast, or if it needs to be powerful in its own right rather than borrowing power from others, you don't write it in shell. A simple video game? Here's Tetris in shell. Other such games are available, if you go looking. there are only very limited debugging tools I would put debugging tool support down about 20th place on the list of features necessary to support programming in the large. A whole lot of programmers rely much more heavily on printf() debugging than proper debuggers, regardless of language. In shell, you have echo and set -x, which together are sufficient to debug a great many problems.
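As a tiny illustration of that last point, running a script under bash -x prints each command after expansion, just before it executes; for a stand-in script named myscript.sh containing a=$((1+2)); echo "$a" you would see something like: $ bash -x myscript.sh + a=3 + echo 3 3 The exact trace obviously depends on the script, but that is often all the debugger you need.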
{ "source": [ "https://unix.stackexchange.com/questions/297792", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/180653/" ] }
297,969
Is the slash (/) really part of the name of the Linux root directory? Or is it just a symbol for it? What about /etc and so on? Update: Suppose /dev/sda2 is the block device of a Linux root directory. $ sudo debugfs /dev/sda2 debugfs 1.44.1 (24-Mar-2018) debugfs: pwd [pwd] INODE: 2 PATH: / [root] INODE: 2 PATH: / debugfs: stat / Inode: 2 Type: directory Mode: 0755 Flags: 0x80000 Generation: 0 Version: 0x00000000:00000077 User: 0 Group: 0 Project: 0 Size: 4096 File ACL: 0 Links: 25 Blockcount: 8 Fragment: Address: 0 Number: 0 Size: 0 ctime: 0x5b13c9f1:3f017990 -- Sun Jun 3 15:28:57 2018 atime: 0x5b13ca0f:3b3ee380 -- Sun Jun 3 15:29:27 2018 mtime: 0x5b13c9f1:3f017990 -- Sun Jun 3 15:28:57 2018 crtime: 0x5aad1843:00000000 -- Sat Mar 17 16:59:39 2018 Size of extra inode fields: 32 EXTENTS: (0):9249 So there is a directory in there, inode #2, but it has no name.
The POSIX.1-2008 standard says A pathname consisting of a single / shall resolve to the root directory of the process. A null pathname shall not be successfully resolved. The standard further makes a distinction between filenames and pathnames. / is the pathname of the root directory. The name of the directory is "the root directory", but in the filesystem it is nameless; it does not have a filename. If it had a filename, that name would be a directory entry in the directory above the root directory, and there is no such directory. The character / can never be part of a filename as it is the path separator. For clarity: / is not the name of the root directory, but the path to it, its pathname. /etc is another pathname. It is the absolute path to the etc directory. The name of the directory at that path is etc (its filename is etc ). /usr/local/bin/curl is the pathname of the curl executable file in the same way that /etc is the pathname of the etc directory.
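You can see this for yourself: on a typical ext2/3/4 root filesystem, $ ls -id / should print something like 2 / The root directory is reached via the pathname / , and inode 2 is all the filesystem itself records about it; no filename for it is stored anywhere.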
{ "source": [ "https://unix.stackexchange.com/questions/297969", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82589/" ] }
297,982
I typed help while I was in GDB but didn't find anything about step-into, step-over and step-out. I put a breakpoint in an assembly program at _start ( break _start ). Afterwards I typed next and it finished the debugging. I guess that was because it finished _start and didn't step into it as I wanted.
help running provides some hints: there are step and next instructions (and also nexti and stepi ). (gdb) help next Step program, proceeding through subroutine calls. Usage: next [N] Unlike "step", if the current source line calls a subroutine, this command does not enter the subroutine, but instead steps over the call, in effect treating it as a single source line. So we can see that step steps into subroutines, but next will step over subroutines. step and stepi (and likewise next and nexti ) are distinguished by "line" or "instruction" increments: step -- Step program until it reaches a different source line stepi -- Step one instruction exactly Related is finish : (gdb) help finish Execute until selected stack frame returns. Usage: finish Upon return, the value returned is printed and put in the value history. A lot more useful information is at https://sourceware.org/gdb/onlinedocs/gdb/Continuing-and-Stepping.html
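Since you are debugging an assembly program, the line-oriented step and next can behave surprisingly if GDB has no source line information for _start ; the instruction-level variants are usually what you want, e.g.: (gdb) break _start (gdb) run (gdb) stepi (gdb) nexti In other words, stepi is your step-into and nexti your step-over at the instruction level, while finish is the step-out.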
{ "source": [ "https://unix.stackexchange.com/questions/297982", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176824/" ] }
298,110
Whenever I try to use the command yum install <packagename> I get the error No package available . For example, [root@cpanel1 etc]# yum install autossh Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: centos.t-2.net * extras: centos.t-2.net * updates: centos.t-2.net No package autossh available. Error: Nothing to do [root@cpanel1 etc]# How do I make it work?
These steps might help you. First clean the yum caches: yum clean all && yum clean metadata Then check the files in /etc/yum.repos.d and make sure that they don't all have enabled = 0 for each repo (there may be more than one per file). Finally, you should be able to do yum update and search for the desired packages.
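To check quickly for disabled repos, something like this should do: grep -r 'enabled' /etc/yum.repos.d/ Any repo showing enabled=0 can be switched on in the file, or per command with yum --enablerepo=<reponame> install <packagename> . Also note that on CentOS a package like autossh typically lives in the EPEL repository, so yum install epel-release first may be exactly what's missing.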
{ "source": [ "https://unix.stackexchange.com/questions/298110", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176074/" ] }
298,281
My goal is to be able to develop for embedded Linux. I have experience on bare-metal embedded systems using ARM. I have some general questions about developing for different CPU targets. My questions are below: If I have an application compiled to run on an 'x86 target, Linux OS version x.y.z', can I just run the same compiled binary on another system, an 'ARM target, Linux OS version x.y.z'? If the above is not true, is the only way to get the application source code and rebuild/recompile it using the relevant toolchain (for example, arm-linux-gnueabi)? Similarly, if I have a loadable kernel module (device driver) that works on an 'x86 target, Linux OS version x.y.z', can I just load/use the same compiled .ko on another system, an 'ARM target, Linux OS version x.y.z'? If the above is not true, is the only way to get the driver source code and rebuild/recompile it using the relevant toolchain (for example, arm-linux-gnueabi)?
No. Binaries must be (re)compiled for the target architecture, and Linux offers nothing like fat binaries out of the box. The reason is that the code is compiled to machine code for a specific architecture, and machine code is very different between most processor families (ARM and x86, for instance, are very different). EDIT: it is worth noting that some architectures offer levels of backwards compatibility (and, even rarer, compatibility with other architectures); on 64-bit CPUs, it's common to have backwards compatibility with 32-bit editions (but remember: your dependent libraries must also be 32-bit, including your C standard library, unless you statically link). Also worth mentioning is Itanium, where it was possible to run x86 code (32-bit only), albeit very slowly; the poor execution speed of x86 code was at least part of the reason it wasn't very successful in the market. Bear in mind that you still cannot use binaries compiled with newer instructions on older CPUs, even in compatibility modes (for example, you cannot use AVX in a 32-bit binary on Nehalem x86 processors; the CPU just doesn't support it). Note that kernel modules must be compiled for the relevant architecture; in addition, 32-bit kernel modules will not work on 64-bit kernels or vice versa. For information on cross-compiling binaries (so you don't have to have a toolchain on the target ARM device), see grochmal's comprehensive answer below.
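As a minimal sketch of the recompile route (assuming a Debian-style build host with the cross toolchain available; hello.c stands in for your own source): sudo apt-get install gcc-arm-linux-gnueabi arm-linux-gnueabi-gcc -o hello hello.c file hello # should report an ARM executable You then copy the resulting binary to the ARM target and run it there.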
{ "source": [ "https://unix.stackexchange.com/questions/298281", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181479/" ] }
298,292
My user kranthi has been created and added to the sudoers group. I've made the following changes in /etc/ssh/sshd_config : PasswordAuthentication Yes AllowUsers kranthi In order to allow a user to SSH in, the user should be added to AllowUsers . Port 22 is set in the sshd_config file. service sshd restart was run. Why can't I SSH into the server using PuTTY? What's missing? I get "access denied" after entering the password. Following the recommendation in the comment section, when I type ssh -l kranthi 127.0.0.1 I get the output shown in the console screenshot (screenshot not reproduced here). Still I cannot SSH into the server as the user kranthi.
{ "source": [ "https://unix.stackexchange.com/questions/298292", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/178935/" ] }
298,382
So, I've been using Linux for a few years now, and I really should know the answer to this, but I'm having trouble finding it. Specifically I've been using Debian-based distros, mostly Ubuntu. If I have a server that has more than three users, how do I set a different set of permissions on a file for each user? For example: If I have a file with these permissions and ownership: rwx rw_ r__ user1:group1 file1.txt and I have 3 users with these desired permissions: user1 rwx user2 rw_ user3 r__ All I have to do is have user1 own the file, user2 be in group1, and user3 be neither -- correct? But what if I have a user4 and user5: user4 _wx user5 __x How would I set that up? I haven't had to do this before, but I was asked that question by a Windows admin, and I honestly couldn't answer.
Traditional unix permissions only allow user, group, and other permissions, as you've found. These can result in some awkward combinations of groups needing to be created... So a new mechanism, ACLs (Access Control Lists), was tacked on. This allows you to specify multiple users and multiple groups with different permissions. These are set with the setfacl command and read with getfacl : $ setfacl -m u:root:r-- file.txt $ setfacl -m u:bin:-wx file.txt $ setfacl -m u:lp:--x file.txt $ getfacl file.txt # file: file.txt # owner: sweh # group: sweh user::rw- user:root:r-- user:bin:-wx user:lp:--x group::r-- mask::rwx other::r-- You can easily tell if a file has an ACL by looking at the ls output: $ ls -l file.txt -rw-rwxr--+ 1 sweh sweh 0 Jul 26 10:33 file.txt The + at the end of the permissions indicates an ACL.
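To undo any of this later: setfacl -x u:bin file.txt removes a single entry, and setfacl -b file.txt strips all extended ACL entries, returning the file to plain user/group/other permissions (the + in the ls listing disappears as well).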
{ "source": [ "https://unix.stackexchange.com/questions/298382", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67988/" ] }
298,519
It seems that whenever I create a file with touch , the permissions are set to -rw-r--r-- . Is there some way I can configure the permissions with touch itself, or does this have to be done afterwards with a different command?
You can modify your umask to allow (for most implementations) more read/write privileges, but not executable, since generally the requested permissions are 0666 . If your umask is 022 , you'll see touch make a 0644 file. Interestingly, POSIX describes this behavior in terms of creat : If file does not exist: The creat() function is called with the following arguments: The file operand is used as the path argument. The value of the bitwise-inclusive OR of S_IRUSR , S_IWUSR , S_IRGRP , S_IWGRP , S_IROTH , and S_IWOTH is used as the mode argument. and it is only by following the links to creat , then to open , noticing the mention of umask and back-tracking to open (and creat ) to verify that umask is supposed to affect touch . For umask to affect only the touch command, use a subshell: (umask 066; touch private-file) (umask 0; touch world-writable-file) touch file-as-per-current-umask (note that in any case, if the file existed beforehand, touch will not change its permissions, just update its timestamps).
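If you want explicit permissions in a single command rather than juggling the umask, install can create an empty file with a given mode: install -m 640 /dev/null newfile (newfile being whatever name you want). Unlike touch , this sets the mode regardless of the umask, but be aware that it will truncate an existing file rather than just updating its timestamps.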
{ "source": [ "https://unix.stackexchange.com/questions/298519", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171477/" ] }
298,590
Using /bin/find /root -name '*.csv' returns: /root/small_devices.csv /root/locating/located_201606291341.csv /root/locating/located_201606301411.csv /root/locating/g_cache.csv /root/locating/located_201606291747.csv /root/locating/located_201607031511.csv /root/locating/located_201606291746.csv /root/locating/located_201607031510.csv /root/locating/located_201606301412.csv /root/locating/located_201606301415.csv /root/locating/located_201607031512.csv I don't actually want all the files under /root/locating/ , so the expected output is simply /root/small_devices.csv . Is there an efficient way of using find non-recursively? I'm using CentOS, if it matters.
You can do that with -maxdepth option: /bin/find /root -maxdepth 1 -name '*.csv' As mentioned in the comments, add -mindepth 1 to exclude starting points from the output. From man find : -maxdepth levels Descend at most levels (a non-negative integer) levels of directories below the starting-points. -maxdepth 0 means only apply the tests and actions to the starting-points themselves. -mindepth levels Do not apply any tests or actions at levels less than levels (a non-negative integer). -mindepth 1 means process all files except the starting-points.
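So, for the exact case in the question: /bin/find /root -mindepth 1 -maxdepth 1 -name '*.csv' lists only /root/small_devices.csv and never descends into /root/locating/ .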
{ "source": [ "https://unix.stackexchange.com/questions/298590", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156022/" ] }
298,644
I am using Debian stretch (systemd). I was running the rsyslog daemon in the foreground using /usr/sbin/rsyslogd -n and I did a Ctrl + Z to stop it. The state of the process changed to Tl (stopped, threaded). I issued multiple kill -15 <pid> commands to the process, and the state of the process stayed the same: Tl. Once I did an fg , it died. I have 3 questions: Why was the SIGSTOP-ed process not responding to SIGTERM? Why does the kernel keep it in the same state? Why did it get killed the moment it received the SIGCONT signal? If it was because of the previous SIGTERM signal, where was it kept until the process resumed?
SIGSTOP and SIGKILL are two signals that cannot be caught and handled by a process. SIGTSTP is like SIGSTOP except that it can be caught and handled. The SIGSTOP and SIGTSTP signals stop a process in its tracks, ready for SIGCONT. When you send that process a SIGTERM, the process isn't running and so it cannot run the code to exit. (There are also SIGTTIN and SIGTTOU, which are signals generated by the TTY layer when a backgrounded job tries to read or write to the terminal. They can be caught but will otherwise stop (suspend) the process, just like SIGTSTP. But I'm now going to ignore those two for the remainder of this answer.) Your Ctrl Z sends the process a SIGTSTP, which appears not to be handled specially in any way by rsyslogd, so it simply suspends the process pending SIGCONT or SIGKILL. The solution here is also to send SIGCONT after your SIGTERM so that the process can receive and handle the signal. Example: sleep 999 & # Assume we got PID 456 for this process kill -TSTP 456 # Suspend the process (nicely) kill -TERM 456 # Terminate the process (nicely). Nothing happens kill -CONT 456 # Continue the process so it can exit cleanly The documentation for the GNU C Library explains this quite well, I think (my highlighting): While a process is stopped, no more signals can be delivered to it until it is continued, except SIGKILL signals and (obviously) SIGCONT signals. The signals are marked as pending, but not delivered until the process is continued. The SIGKILL signal always causes termination of the process and can't be blocked, handled or ignored. You can ignore SIGCONT, but it always causes the process to be continued anyway if it is stopped. Sending a SIGCONT signal to a process causes any pending stop signals for that process to be discarded. Likewise, any pending SIGCONT signals for a process are discarded when it receives a stop signal.
{ "source": [ "https://unix.stackexchange.com/questions/298644", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/71917/" ] }
298,698
I have several remote systems, and one of them, a Linode running Debian, is very slow to ssh into - it takes approximately 20-25 seconds every time. This seems to have happened relatively recently. I have tried setting GSSAPIAuthentication to no or to yes as suggested in several answers to similar questions, and it doesn't make a difference. It also doesn't make any difference if I log in using the FQDN or the IP address. I have the same delay sshing from either my local Linux box or my local Macintosh. I have no such delay sshing from the Linode to the local Linux box. I have another remote system using the same version of Debian, and I can ssh into it in 2 seconds. The only difference between the /etc/ssh/sshd_config files on the two Debian boxes is that the fast one doesn't allow passwords and also specifies a list of allowed ciphers. If I log in using ssh -vvv root@linode , the delay happens at the part marked with >>>>>> debug2: key: /root/.ssh/id_ecdsa ((nil)) debug2: key: /root/.ssh/id_ed25519 ((nil)) debug3: send packet: type 5 debug3: receive packet: type 6 debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug3: send packet: type 50 >>>>>> debug3: receive packet: type 51 debug1: Authentications that can continue: publickey,password debug3: start over, passed a different list publickey,password debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /root/.ssh/id_rsa debug3: send_pubkey_test (This is only a partial log - full log available on request) I can't find anything about the login in /var/log/auth.log or /var/log/syslog during the delay time - afterwards I just get Jul 27 13:46:43 linode sshd[23049]: Accepted publickey for root from 199.241.27.237 port 51464 ssh2: RSA 89:08:ef:44:48:a4:84:b7:0a:de:14:65:1b:d9:86:f8 Jul 27 13:46:43 linode sshd[23049]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 27 13:46:43 linode systemd-logind[3235]: New session 10361 of user root.
If creating the connection is slow but it runs at normal speed once established, the most likely cause is that the server is doing a reverse DNS lookup for the client and that lookup, for some reason, fails. In general, when debugging this, you can also try to log in from two terminals: with the first login, watch the sshd log on the server while you are trying to log in from the second. That gives you more information about what the server is doing (or waiting for). You can look for proof that the cause is the reverse DNS lookup by setting one, or both, of the following in /etc/ssh/sshd_config : UseDNS no UsePAM no and seeing if that speeds up creating the connection. If it does, you can often leave things that way until the underlying issue is solved (if you care about that). If this is a reverse DNS lookup problem, it depends on the DNS server that the machine you log in to is using. According to Wikipedia, not all IP addresses have a reverse entry, as this is not an actual standards requirement. But more likely this is some configuration issue.
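To test the DNS theory directly, you can time a reverse lookup of the client's address from the server (199.241.27.237 is the client shown in your auth.log; substitute your own): dig -x 199.241.27.237 If that query hangs for roughly the same 20-25 seconds before failing, you have found the culprit; fix the resolver configuration in /etc/resolv.conf or keep UseDNS no .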
{ "source": [ "https://unix.stackexchange.com/questions/298698", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11163/" ] }
298,702
I am using a tmux environment, and often I have to run the same python script in 4 different panes (the top 4 in the image) with the same command line arguments. Is there a way I can execute the script in every shell by typing the command in just one? I am aware of this discussion, but they suggest using a different terminal environment; I am looking for something that can be done using tmux or shell scripting. The four different shells are ssh sessions to 4 different VMs.
No need for any tools. tmux can handle this: just open up the panes, ssh to the individual servers, and then Ctrl - B followed by :setw synchronize-panes and all input gets synchronized to all visible panes. Re-type this or add "off" to the command to leave.
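If you use this often, a key binding saves the typing; for example, in ~/.tmux.conf (the choice of y here is arbitrary): bind-key y setw synchronize-panes After reloading the config, Ctrl-B y toggles synchronization on and off, since setw synchronize-panes with no argument flips the option.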
{ "source": [ "https://unix.stackexchange.com/questions/298702", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/103422/" ] }
298,706
I'd like to write a function that I can call from a script with many different variables. For some reason I'm having a lot of trouble doing this. Examples I've read always just use a global variable, but that wouldn't make my code much more readable as far as I can see. Intended usage example: #!/bin/bash #myscript.sh var1=$1 var2=$2 var3=$3 var4=$4 add(){ result=$para1 + $para2 } add $var1 $var2 add $var3 $var4 # end of the script ./myscript.sh 1 2 3 4 I tried using $1 and such in the function, but then it just takes the global ones the whole script was called with. Basically what I'm looking for is something like $1 , $2 and so on, but in the local context of a function. Like, you know, how functions work in any proper language.
To call a function with arguments: function_name "$arg1" "$arg2" The function refers to passed arguments by their position (not by name), that is $1, $2, and so forth. $0 is the name of the script itself. Example: #!/bin/bash add() { result=$(($1 + $2)) echo "Result is: $result" } add 1 2 Output ./script.sh Result is: 3
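To get the "local context" you asked about, declare the parameters local inside the function so they don't leak into, or collide with, the globals: add() { local para1=$1 para2=$2 result=$((para1 + para2)) echo "Result is: $result" } This is about as close as bash comes to named function parameters.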
{ "source": [ "https://unix.stackexchange.com/questions/298706", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181822/" ] }
299,067
I'm encountering an issue where I am trying to get the size of a terminal from scripts. Normally I would use the command tput cols inside the console; however, I want to be able to accomplish this strictly from scripts. As of now I am able to detect the running console and get its file path; however, I'm struggling to use this information to get the console's width. I've attempted using the command tput , but I'm fairly new to Linux/scripting, so I don't really know what to do. The reason for doing this is that I want to be able to set up a cron entry that notifies the console of its width/columns every so often. This is my code so far: tty.sh #!/bin/bash #Get PID of terminal #terminal.txt holds most recent PID of console in use value=$(</home/test/Documents/terminal.txt) #Get tty using the PID from terminal.txt TERMINAL="$(ps h -p $value -o tty)" echo $TERMINAL #Use tty to get full filepath for terminal in use TERMINALPATH=/dev/$TERMINAL echo $TERMINALPATH COLUMNS=$(/home/test/Documents/get_columns.sh) echo $COLUMNS get_columns.sh #!/usr/bin/env bash echo $(/usr/bin/tput cols) The normal output of TERMINAL & TERMINALPATH is pts/terminalnumber and /dev/pts/terminalnumber, for example pts/0 & /dev/pts/0
The tput command is an excellent tool, but unfortunately it can't retrieve the actual settings for an arbitrarily selected terminal. The reason for this is that it reads stdout for the terminal characteristics, and this is also where it writes its answer. So the moment you try to capture the output of tput cols you have also removed the source of its information. Fortunately, stty reads stdin rather than stdout for its determination of the terminal characteristics, so this is how you can retrieve the size information you need: terminal=/dev/pts/1 columns=$(stty -a <"$terminal" | grep -Po '(?<=columns )\d+') rows=$(stty -a <"$terminal" | grep -Po '(?<=rows )\d+') By the way, it's unnecessarily cumbersome to write this as echo $(/usr/bin/tput cols) . For any construct echo $(some_command) you are running some_command and capturing its output, which you then pass to echo to output. In almost every situation you can imagine you might as well have just run some_command and let it deliver its output directly. It's more efficient and also easier to read.
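stty can also report both dimensions in one call, which may be easier than grepping stty -a twice: stty size <"$terminal" prints the rows and columns as two numbers, e.g. 24 80 .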
{ "source": [ "https://unix.stackexchange.com/questions/299067", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/182119/" ] }
299,106
I'm trying to keep the last 50 lines in my file, where I save the temperature every minute. I used this command: tail -n 50 /home/pi/Documents/test > /home/pi/Documents/test But the result is an empty test file. I thought it would list the last 50 lines of the test file and insert them into the test file. When I use this command: tail -n 50 /home/pi/Documents/test > /home/pi/Documents/test2 it works fine. There are 50 lines in the test2 file. Can anybody explain to me where the problem is?
The problem is that your shell is setting up the command pipeline before running the commands. It's not a matter of "input and output", it's that the file's content is already gone before tail even runs. It goes something like: The shell opens the > output file for writing, truncating it The shell sets up to have file-descriptor 1 (for stdout) be used for that output The shell executes tail . tail runs, opens /home/pi/Documents/test and finds nothing there There are various solutions, but the key is to understand the problem, what's actually going wrong and why. This will produce what you are looking for, echo "$(tail -n 50 /home/pi/Documents/test)" > /home/pi/Documents/test Explanation : $() is called command substitution which executes tail -n 50 /home/pi/Documents/test the quotation marks preserve line breaks in the output. > /home/pi/Documents/test redirects output of echo "$(tail -n 50 /home/pi/Documents/test)" to the same file.
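Another common approach, if you can install the moreutils package, is sponge , which soaks up all of its input before opening the output file: tail -n 50 /home/pi/Documents/test | sponge /home/pi/Documents/test Because sponge only writes after tail has finished reading, the truncation problem described above never arises.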
{ "source": [ "https://unix.stackexchange.com/questions/299106", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176802/" ] }
299,321
for k in {0..49}; do a=$(($((2*$k))+1)); echo $a; done Hi, I need a simplified expression for the assignment to a, maybe one that does not use command substitution.
Using arithmetic expansion: for (( k = 0; k < 50; ++k )); do a=$(( 2*k + 1 )) echo "$a" done Using the antiquated expr utility: for (( k = 0; k < 50; ++k )); do a=$( expr 2 '*' "$k" + 1 ) echo "$a" done Using bc -l ( -l not actually needed in this case as no math functions are used): for (( k = 0; k < 50; ++k )); do a=$( bc -l <<<"2*$k + 1" ) echo "$a" done Using bc -l as a co-process (it acts like a sort of computation service in the background¹): coproc bc -l for (( k = 0; k < 50; ++k )); do printf "2*%d + 1\n" "$k" >&${COPROC[1]} read -u "${COPROC[0]}" a echo "$a" done kill "$COPROC_PID" That last one looks (arguably) cleaner in ksh93: bc -l |& bc_pid="$!" for (( k = 0; k < 50; ++k )); do print -p "2*$k + 1" read -p a print "$a" done kill "$bc_pid" ¹ This solved an issue for me once where I needed to process a large amount of input in a loop. The processing required some floating point computations, but spawning bc a few times in the loop proved to be exceedingly slow. Yes, I could have solved it in many other ways, but I was bored...
{ "source": [ "https://unix.stackexchange.com/questions/299321", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/131380/" ] }
299,322
I have a C-Media USB soundcard installed on my Raspberry Pi: Bus 001 Device 004: ID 0d8c:0008 C-Media Electronics, Inc. It is a USB cable with an XLR end on the other side, to which I have an XLR microphone (a Sennheiser MD 427, if anyone is interested) connected. Connecting it to my Mac, I can turn up the recording volume (it says "settings for selected device" and "input volume" in German) and I get a fairly OK recording from it (it's actually a stereo recording, but the screenshot showed the volume level). Now, the same under Linux looks quite different. The device is recognized OK, snd_usb_audio is loaded, and alsamixer shows the new recording device and lets me turn up the "recording volume" all the way. Yet, the volume of what I can record using # AUDIODEV=hw:1 rec tmp.wav is abysmal at best. Now, is there a way to change the kernel module settings so that I can "crank the recording volume up" any more than what I am presented with? Or maybe any other settings I have forgotten about? I can "soft-up" the recording using # AUDIODEV=hw:1 rec tmp.wav gain 20, but that also increases the noise and it is still below what the Mac records. Before you ask: # arecord -L null Discard all samples (playback) or generate zero samples (capture) default:CARD=Device C-Media USB Audio Device, USB Audio Default Audio Device sysdefault:CARD=Device C-Media USB Audio Device, USB Audio Default Audio Device front:CARD=Device,DEV=0 C-Media USB Audio Device, USB Audio Front speakers surround21:CARD=Device,DEV=0 C-Media USB Audio Device, USB Audio 2.1 Surround output to Front and Subwoofer speakers surround40:CARD=Device,DEV=0 C-Media USB Audio Device, USB Audio 4.0 Surround output to Front and Rear speakers surround41:CARD=Device,DEV=0 C-Media USB Audio Device, USB Audio 4.1 Surround output to Front, Rear and Subwoofer speakers surround50:CARD=Device,DEV=0 C-Media USB Audio Device, USB Audio 5.0 Surround output to Front, Center and Rear speakers surround51:CARD=Device,DEV=0 C-Media USB Audio Device, USB Audio 5.1 Surround output to Front, Center, Rear and Subwoofer speakers surround71:CARD=Device,DEV=0 C-Media USB Audio Device, USB Audio 7.1 Surround output to Front, Center, Side, Rear and Woofer speakers iec958:CARD=Device,DEV=0 C-Media USB Audio Device, USB Audio IEC958 (S/PDIF) Digital Audio Output dmix:CARD=Device,DEV=0 C-Media USB Audio Device, USB Audio Direct sample mixing device dsnoop:CARD=Device,DEV=0 C-Media USB Audio Device, USB Audio Direct sample snooping device hw:CARD=Device,DEV=0 C-Media USB Audio Device, USB Audio Direct hardware device without any conversions plughw:CARD=Device,DEV=0 C-Media USB Audio Device, USB Audio Hardware device with all software conversions # # lsusb Bus 001 Device 005: ID 0d8c:0008 C-Media Electronics, Inc. Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. SMSC9512/9514 Fast Ethernet Adapter Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp.
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub # # arecord -l **** List of CAPTURE Hardware Devices **** card 1: Device [C-Media USB Audio Device], device 0: USB Audio [USB Audio] Subdevices: 1/1 Subdevice #0: subdevice #0 # # amixer -c 1 scontrols Simple mixer control 'PCM',0 Simple mixer control 'Mic',0 Simple mixer control 'Auto Gain Control',0 # # uname -ra Linux xxx 4.4.16+ #899 Thu Jul 28 12:36:19 BST 2016 armv6l GNU/Linux # # aplay -l -L null Discard all samples (playback) or generate zero samples (capture) default:CARD=ALSA bcm2835 ALSA, bcm2835 ALSA Default Audio Device sysdefault:CARD=ALSA bcm2835 ALSA, bcm2835 ALSA Default Audio Device dmix:CARD=ALSA,DEV=0 bcm2835 ALSA, bcm2835 ALSA Direct sample mixing device dmix:CARD=ALSA,DEV=1 bcm2835 ALSA, bcm2835 IEC958/HDMI Direct sample mixing device dsnoop:CARD=ALSA,DEV=0 bcm2835 ALSA, bcm2835 ALSA Direct sample snooping device dsnoop:CARD=ALSA,DEV=1 bcm2835 ALSA, bcm2835 IEC958/HDMI Direct sample snooping device hw:CARD=ALSA,DEV=0 bcm2835 ALSA, bcm2835 ALSA Direct hardware device without any conversions hw:CARD=ALSA,DEV=1 bcm2835 ALSA, bcm2835 IEC958/HDMI Direct hardware device without any conversions plughw:CARD=ALSA,DEV=0 bcm2835 ALSA, bcm2835 ALSA Hardware device with all software conversions plughw:CARD=ALSA,DEV=1 bcm2835 ALSA, bcm2835 IEC958/HDMI Hardware device with all software conversions default:CARD=Device C-Media USB Audio Device, USB Audio Default Audio Device sysdefault:CARD=Device C-Media USB Audio Device, USB Audio Default Audio Device front:CARD=Device,DEV=0 C-Media USB Audio Device, USB Audio Front speakers surround21:CARD=Device,DEV=0 C-Media USB Audio Device, USB Audio 2.1 Surround output to Front and Subwoofer speakers surround40:CARD=Device,DEV=0 C-Media USB Audio Device, USB Audio 4.0 Surround output to Front and Rear speakers surround41:CARD=Device,DEV=0 C-Media USB Audio Device, USB Audio 4.1 Surround output to Front, Rear and Subwoofer speakers surround50:CARD=Device,DEV=0 C-Media USB Audio Device, USB Audio 5.0 Surround output to Front, Center and Rear speakers surround51:CARD=Device,DEV=0 C-Media USB Audio Device, USB Audio 5.1 Surround output to Front, Center, Rear and Subwoofer speakers surround71:CARD=Device,DEV=0 C-Media USB Audio Device, USB Audio 7.1 Surround output to Front, Center, Side, Rear and Woofer speakers iec958:CARD=Device,DEV=0 C-Media USB Audio Device, USB Audio IEC958 (S/PDIF) Digital Audio Output dmix:CARD=Device,DEV=0 C-Media USB Audio Device, USB Audio Direct sample mixing device dsnoop:CARD=Device,DEV=0 C-Media USB Audio Device, USB Audio Direct sample snooping device hw:CARD=Device,DEV=0 C-Media USB Audio Device, USB Audio Direct hardware device without any conversions plughw:CARD=Device,DEV=0 C-Media USB Audio Device, USB Audio Hardware device with all software conversions **** List of PLAYBACK Hardware Devices **** card 0: ALSA [bcm2835 ALSA], device 0: bcm2835 ALSA [bcm2835 ALSA] Subdevices: 8/8 Subdevice #0: subdevice #0 Subdevice #1: subdevice #1 Subdevice #2: subdevice #2 Subdevice #3: subdevice #3 Subdevice #4: subdevice #4 Subdevice #5: subdevice #5 Subdevice #6: subdevice #6 Subdevice #7: subdevice #7 card 0: ALSA [bcm2835 ALSA], device 1: bcm2835 ALSA [bcm2835 IEC958/HDMI] Subdevices: 1/1 Subdevice #0: subdevice #0 card 1: Device [C-Media USB Audio Device], device 0: USB Audio [USB Audio] Subdevices: 1/1 Subdevice #0: subdevice #0 # # lsusb -v -d 0d8c:0008 Bus 001 Device 004: ID 0d8c:0008 C-Media Electronics, Inc. 
Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x0d8c C-Media Electronics, Inc. idProduct 0x0008 bcdDevice 1.00 iManufacturer 0 iProduct 1 C-Media USB Audio Device iSerial 0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 224 bNumInterfaces 4 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xa0 (Bus Powered) Remote Wakeup MaxPower 100mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 0 bInterfaceClass 1 Audio bInterfaceSubClass 1 Control Device bInterfaceProtocol 0 iInterface 0 AudioControl Interface Descriptor: bLength 10 bDescriptorType 36 bDescriptorSubtype 1 (HEADER) bcdADC 1.00 wTotalLength 71 bInCollection 2 baInterfaceNr( 0) 1 baInterfaceNr( 1) 2 AudioControl Interface Descriptor: bLength 12 bDescriptorType 36 bDescriptorSubtype 2 (INPUT_TERMINAL) bTerminalID 1 wTerminalType 0x0101 USB Streaming bAssocTerminal 0 bNrChannels 2 wChannelConfig 0x0003 Left Front (L) Right Front (R) iChannelNames 0 iTerminal 0 AudioControl Interface Descriptor: bLength 12 bDescriptorType 36 bDescriptorSubtype 2 (INPUT_TERMINAL) bTerminalID 2 wTerminalType 0x0201 Microphone bAssocTerminal 0 bNrChannels 1 wChannelConfig 0x0001 Left Front (L) iChannelNames 0 iTerminal 0 AudioControl Interface Descriptor: bLength 9 bDescriptorType 36 bDescriptorSubtype 3 (OUTPUT_TERMINAL) bTerminalID 6 wTerminalType 0x0301 Speaker bAssocTerminal 0 bSourceID 9 iTerminal 0 AudioControl Interface Descriptor: bLength 9 bDescriptorType 36 bDescriptorSubtype 3 (OUTPUT_TERMINAL) bTerminalID 7 wTerminalType 0x0101 USB Streaming bAssocTerminal 0 bSourceID 10 iTerminal 0 AudioControl Interface Descriptor: bLength 10 bDescriptorType 36 bDescriptorSubtype 6 (FEATURE_UNIT) bUnitID 9 bSourceID 1 bControlSize 1 bmaControls( 0) 0x01 Mute Control bmaControls( 1) 0x02 Volume Control bmaControls( 2) 0x02 Volume Control iFeature 0 AudioControl Interface Descriptor: bLength 9 bDescriptorType 36 bDescriptorSubtype 6 (FEATURE_UNIT) bUnitID 10 bSourceID 2 bControlSize 1 bmaControls( 0) 0x43 Mute Control Volume Control Automatic Gain Control bmaControls( 1) 0x00 iFeature 0 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 0 bNumEndpoints 0 bInterfaceClass 1 Audio bInterfaceSubClass 2 Streaming bInterfaceProtocol 0 iInterface 0 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 1 bNumEndpoints 1 bInterfaceClass 1 Audio bInterfaceSubClass 2 Streaming bInterfaceProtocol 0 iInterface 0 AudioStreaming Interface Descriptor: bLength 7 bDescriptorType 36 bDescriptorSubtype 1 (AS_GENERAL) bTerminalLink 1 bDelay 1 frames wFormatTag 1 PCM AudioStreaming Interface Descriptor: bLength 14 bDescriptorType 36 bDescriptorSubtype 2 (FORMAT_TYPE) bFormatType 1 (FORMAT_TYPE_I) bNrChannels 2 bSubframeSize 2 bBitResolution 16 bSamFreqType 2 Discrete tSamFreq[ 0] 48000 tSamFreq[ 1] 44100 Endpoint Descriptor: bLength 9 bDescriptorType 5 bEndpointAddress 0x01 EP 1 OUT bmAttributes 9 Transfer Type Isochronous Synch Type Adaptive Usage Type Data wMaxPacketSize 0x00c8 1x 200 bytes bInterval 1 bRefresh 0 bSynchAddress 0 AudioControl Endpoint Descriptor: bLength 7 bDescriptorType 37 bDescriptorSubtype 1 (EP_GENERAL) bmAttributes 0x01 Sampling Frequency bLockDelayUnits 1 Milliseconds wLockDelay 1 Milliseconds Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 2 
bAlternateSetting 0 bNumEndpoints 0 bInterfaceClass 1 Audio bInterfaceSubClass 2 Streaming bInterfaceProtocol 0 iInterface 0 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 2 bAlternateSetting 1 bNumEndpoints 1 bInterfaceClass 1 Audio bInterfaceSubClass 2 Streaming bInterfaceProtocol 0 iInterface 0 AudioStreaming Interface Descriptor: bLength 7 bDescriptorType 36 bDescriptorSubtype 1 (AS_GENERAL) bTerminalLink 7 bDelay 1 frames wFormatTag 1 PCM AudioStreaming Interface Descriptor: bLength 14 bDescriptorType 36 bDescriptorSubtype 2 (FORMAT_TYPE) bFormatType 1 (FORMAT_TYPE_I) bNrChannels 1 bSubframeSize 2 bBitResolution 16 bSamFreqType 2 Discrete tSamFreq[ 0] 48000 tSamFreq[ 1] 44100 Endpoint Descriptor: bLength 9 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 5 Transfer Type Isochronous Synch Type Asynchronous Usage Type Data wMaxPacketSize 0x0064 1x 100 bytes bInterval 1 bRefresh 0 bSynchAddress 0 AudioControl Endpoint Descriptor: bLength 7 bDescriptorType 37 bDescriptorSubtype 1 (EP_GENERAL) bmAttributes 0x01 Sampling Frequency bLockDelayUnits 0 Undefined wLockDelay 0 Undefined Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 3 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 3 Human Interface Device bInterfaceSubClass 0 No Subclass bInterfaceProtocol 0 None iInterface 0 HID Device Descriptor: bLength 9 bDescriptorType 33 bcdHID 1.00 bCountryCode 0 Not supported bNumDescriptors 1 bDescriptorType 34 Report wDescriptorLength 50 Report Descriptors: ** UNAVAILABLE ** Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x83 EP 3 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0004 1x 4 bytes bInterval 32 Device Status: 0x0000 (Bus Powered) #
Using arithmetic expansion: for (( k = 0; k < 50; ++k )); do a=$(( 2*k + 1 )) echo "$a" done Using the antiquated expr utility: for (( k = 0; k < 50; ++k )); do a=$( expr 2 '*' "$k" + 1 ) echo "$a" done Using bc -l ( -l not actually needed in this case as no math functions are used): for (( k = 0; k < 50; ++k )); do a=$( bc -l <<<"2*$k + 1" ) echo "$a" done Using bc -l as a co-process (it acts like a sort of computation service in the background¹): coproc bc -l for (( k = 0; k < 50; ++k )); do printf "2*%d + 1\n" "$k" >&${COPROC[1]} read -u "${COPROC[0]}" a echo "$a" done kill "$COPROC_PID" That last one looks (arguably) cleaner in ksh93 : bc -l |& bc_pid="$!" for (( k = 0; k < 50; ++k )); do print -p "2*$k + 1" read -p a print "$a" done kill "$bc_pid" ¹ This solved an issue for me once where I needed to process a large amount of input in a loop. The processing required some floating point computations, but spawning bc a few times in the loop proved to be exceedingly slow. Yes, I could have solved it in many other ways, but I was bored...
{ "source": [ "https://unix.stackexchange.com/questions/299322", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17838/" ] }
299,336
I'm trying to resize my home partition, but I can't figure out how to do so. Move/Resize doesn't let me move the bar, and there's no space to extend it to. I can't unmount it because it says it's busy. I saw talk of moving partitions around to be adjacent to extend into the free space, but how would that work? I can't move the unallocated space, and wouldn't moving my home partition before the unallocated space screw stuff up?
{ "source": [ "https://unix.stackexchange.com/questions/299336", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/182319/" ] }
300,095
A fair number of linux commands have a dry-run option that will show you what they're going to do without doing it. I see nothing in the xargs man page that does that and no obvious way to emulate it. (my specific use case is troubleshooting long pipelines, though I'm sure there are others) Am I missing something?
You may benefit from the -p or -t flags. xargs -p or xargs --interactive will print out the command to be executed and then prompt for input (y/n) to confirm before executing the command. % cat list one two three % ls list % cat list | xargs -p -I {} touch {} touch one ?...y touch two ?...n touch three ?...y % ls list one three xargs -t or xargs --verbose will print each command, then immediately execute it: % cat list | xargs -t -I {} touch {} touch one touch two touch three % ls list one three two
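Another common way to approximate a dry run is to prefix the command with echo, so xargs prints the command lines it would build instead of running them (fine for inspection, though the output is not quoted for safe re-use):

% cat list | xargs -I {} echo touch {}
touch one
touch two
touch three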
{ "source": [ "https://unix.stackexchange.com/questions/300095", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/121291/" ] }
301,256
If you open a file that you don't have permission to write to in vim, then decide you need to change it, you can write your changes without exiting vim by doing :w !sudo tee % I don't understand how this can work. Can you please dissect this? I understand the :w part, it writes the current buffer to disk, assuming there already is a file name associated with it, right? I also understand the ! which executes the sudo tee command and % represents the current buffer content right? But still don't understand how this works.
The structure :w !cmd means "write the current buffer piped through command". So you can do, for example :w !cat and it will pipe the buffer through cat . Now % is the filename associated with the buffer So :w !sudo tee % will pipe the contents of the buffer through sudo tee FILENAME . This effectively writes the contents of the buffer out to the file.
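A common refinement, if the echoed file contents bother you: tee writes both to the file and to standard output, so discarding its stdout keeps the write but drops the duplicate copy shown in the terminal:

:w !sudo tee % > /dev/null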
{ "source": [ "https://unix.stackexchange.com/questions/301256", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3850/" ] }
301,318
First, apologies if this has been asked before - I searched for a while through the existing posts, but could not find support. I am interested in a solution for Fedora to OCR a multipage non-searchable PDF and to turn this PDF into a new PDF file that contains the text layer on top of the image. On Mac OSX or Windows we could use Adobe Acrobat, but is there a solution on Linux, specifically on Fedora? This seems to describe a solution - but unfortunately I am already lost when retrieving exact-image.
ocrmypdf does a good job and can be used like this: ocrmypdf in.pdf out.pdf To install: pip install ocrmypdf or sudo apt install ocrmypdf # ubuntu sudo dnf -y install ocrmypdf # fedora
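A couple of commonly used options, in case the defaults don't fit (see ocrmypdf --help for the authoritative list on your version):

ocrmypdf -l deu in.pdf out.pdf      # pick the Tesseract language pack to use
ocrmypdf --deskew in.pdf out.pdf    # straighten scanned pages before OCR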
{ "source": [ "https://unix.stackexchange.com/questions/301318", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156074/" ] }
301,334
On Linux, what is the sixth character of the password hash stored in /etc/shadow ? On my puppy style linux box, if I try to generate 100 random passwords using shuf and /dev/urandom , then the sixth character is / about half the time. My question is not for production purpose, since I boot it up every time fresh from CD. Does this mean that my system is misconfigured or insecure in some way? I ran file on shuf to see if it was a busybox link. file /usr/bin/shuf shuf: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, stripped I don't think that shuf is a busybox link here. ls -l /usr/bin/shuf -rwxr-xr-x 1 root root 41568 Mar 7 2015 /usr/bin/shuf while ls -l /bin/wget lrwxrwxrwx 1 root root 14 Apr 29 03:49 wget -> ../bin/busybox Here is a rough idea of what I did: # ! / b i n / b a s h ## don't try this on any real computer ## this is not a production script, it is just psuedo code ## with pseudo results to illustrate a point ## for this run of 100 ?random? passwords, ## 46 of the 6th character of the hash stored in ## '/ect/shadow' were '/' function is_this_really_a_random_password () { PERHAPS_RANDOM='' for (( Z=0 ; Z<=8 ; Z++ )) do PERHAPS_RANDOM="$PERHAPS_RANDOM$( shuf --head-count=1 --random-source=/dev/urandom $FILE_OF_SAFE_CHARACTERS )" done echo "$USER_NAME:$PERHAPS_RANDOM" | chpasswd } rm sixth-character-often-forward-slash.txt for (( I=1; I<=100; I++ )) do is_this_really_a_random_password grep --regexp=root /etc/shadow | cut --characters=-40 >> sixth-character-often-forward-slash.txt done root:$5$56YsS//DE$HasM6O8y2mnXbtgeE64zK root:$5$ho8pk/4/A6e/m0eW$XmjA5Up.0Xig1e root:$5$jBQ4f.t1$vY/T/1kX8nzAEK8vQD3Bho root:$5$BJ44S/Hn$CsnG00z6FB5daFteS5QCYE root:$5$Jerqgx/96/HlV$9Wms5n1FEiM3K93A8 root:$5$qBbPLe4zYW$/zXRDqgjbllbsjkleCTB root:$5$37MrD/r0AlIC40n6$8hplf2c3DgtbM1 root:$5$.4Tt5S6F.3K7l7E$dAIZzFvvWmw2uyC root:$5$A4dX4ZlOoE$6axanr4GLPyhDstWsQ9B root:$5$HXAGhryJ/5$40tgmo7q30yW6OF7RUOE root:$5$EzNb9t5d$/nQEbEAQyug7Dk9X3YXCEv root:$5$HHS5yDeSP$LPtbJeTr0/5Z33vvw87bU root:$5$sDgxZwTX5Sm$6Pzcizq4NcKsWEKEL15 root:$5$FK1du/Paf/$hAy8Xe3UQv9HIpOAtLZ2 root:$5$xTkuy/BLUDh/N$/30sESA.5nVr1zFwI root:$5$PV4AX/OjZ$VU8vX651q4eUqjFWbE2b/ root:$5$iDuK0IUGijv4l$cdGh8BlHKJLYxPB8/ root:$5$0DEUp/jz$JBpqllXswNc0bMJA5IFgem root:$5$Wz3og/W3Jra/WKA.$6D7Wd4M1xxRDEp root:$5$ntHWB.mC3x$Kt4DNTjRZZzpbFvxpMxP root:$5$g/uEc/cq$Ptlgu8CXV.vrjrmuok9RRT root:$5$/XAHs/5x$Z9J4Zt4k6NxdjJ27PpLmTt root:$5$mgfbZeWD0h/$UDGz8YX.D85PzeXnd2K root:$5$f4Oh3/bF2Ox/eN$xt/Jkn0LxPnfKP8. 
root:$5$J0mZZXGJG7/v$e16VxghNvZZKRONown root:$5$SNza9XFl9i$Qq7r/N6Knt2j74no8H0x root:$5$aFCu//xiL$Ocn9mcT2izcnm3rUlBOJg root:$5$kMkyos/SLZ/Mm6$wNYxZ9QeuJ8c8T.o root:$5$ujXKC/Xnj0h/nQ$PUmePvJZr.UXmTGK root:$5$wtEhA/YKaTKH$6VCSXUiIdsfelkCYWV root:$5$I1taRlq59YZUGe$4OyIfByuvJeuwsjM root:$5$N54oH//j4nbiB$K4i6QOiS9iaaX.RiD root:$5$ps8bo/VjPGMP0y4$NTFkI6OeaMAQL7w root:$5$IRUXnXO8tSykA8$NatM5X/kKHHgtDLt root:$5$VaOgL/8V$m45M9glUYnlTKk8uCI7b5P root:$5$/lPDb/kUX73/F3$jJL.QLH5o9Ue9pVa root:$5$/sHNL/tVzuu//cr$QasvQxa02sXAHOl root:$5$hGI.SMi/7I$fYm0rZP0F5B2D1YezqtX root:$5$WsW2iENKA$4HhotPoLRc8ZbBVg4Z5QW root:$5$cN6mwqEl$q5S3U85cRuNHrlxS9Tl/PC root:$5$wwzLR/YMvk5/7ldQ$s3BJhq5LyrtZww root:$5$GUNvr/d15n8/K$CiNHwOkAtxuWJeNy1 root:$5$nGE75/8mEjM/A$pD/84iLunN/ZNI/JK root:$5$77Dn2dHLS$d5bUQhTz.OU4UA.67IGMB root:$5$EWrI//1u$uubkPk3YhAnwYXOYsvwbah root:$5$Hzfw1UCudP/N/U$Rjcdzdbov1YgozSJ root:$5$2y8CKTj.2eTq$7BEIgMWIzAJLl1SWBv root:$5$lcWsD/42g8zEEABA$r/vGxqqUZTkJ0V root:$5$LPJLc/Xz$tnfDgJh7BsAT1ikpn21l76 root:$5$ucvPeKw9eq8a$vTneH.4XasgBIeyGSA root:$5$Fwm2eUR7$ByjuLJRHoIFWnHtvayragS root:$5$yBl7BtMb$KlWGwBL6/WjgHVwXQh9fJS root:$5$1lnnh2kOG$rdTLjJsSpC3Iw4Y6nkPhq root:$5$WfvmP6cSfb066Z$1WvaC9iL11bPCAxa root:$5$qmf/hHvalWa4GE25$m3O2pdu25QBCwU root:$5$4P.oT/9HQ$Ygid4WXi0QCEObLVNsqFZ root:$5$FNr4Bkj56Y$38mG7mKV0mdb1PMCxrVd root:$5$hoNcyURtV$aTidBWHjngc1I0vUTi5bB root:$5$rzHmykYT$ATiXdUDUvUnB2fNMUQgwvE root:$5$o11Yb/ZQv2/k3wg9$5yShpVejDBk6HB root:$5$REPGN//y9H$awpPmUvCqvi6Bd/6bQxF root:$5$HbAEY/djXJx$y56GhMwavd7xTQ.jPg6 root:$5$3T1k5.LZUcy$Cup.LM5AnaBTIaJtBnF root:$5$wXaSC/P8bJ$y/0DoYJVjaP09O6GWiki root:$5$YuFfY8QPqm/dD$IIh0/tyn.18xEBl5Y root:$5$uTTBpjsKG//3Et8$9ibN9mVwSeVyOI4 root:$5$dASlMLzbVbFMnZ$N4uGBwGHhdg93z/V root:$5$03.FA/LnRBb.k7Zl$XOHU2ZlHkV9oz9 root:$5$2zL1p/VDCi$/QRT7Bo3cZ3Rxb8Y7ddo root:$5$0NpZqZs/qt/jIv.$8W/TTM3Gy2UMOWy root:$5$a4SXynoro7ucT$qFM2C79QJ15jQ0ZlL root:$5$RL0Eg/jroH8/ONP$EzceXz.pz74k104 root:$5$O3R5V/n1$U.mmCTbpID8xMXbvtzd4ch root:$5$0T2nVrv/P/xaRwUD$YVm17XF8kTsL0f root:$5$2bRwMNIXobZwn$Q228FJqg6/iRCe9GQ root:$5$PyYgL/axfgj/$uaL5y/kdzU4Kzi.JlB root:$5$A6QtfJdJ4Gwvx4$d4PA5AJ0806NzRnm root:$5$H8Mta5LDgGXp$QGdOJh.bFWgR3L719Z root:$5$H06URjv4BtOAbA$EJs1mZYhdKIVgCmn root:$5$OeB.O/GrmFB/az$SoE759KE9WIE17Uf root:$5$huiB9/sk$el3XMf7SGX81LnD3.SaF8J root:$5$fO7tfM.fjdSHA8G6$s.QIjfNniCzFdU root:$5$32at3SQJAD/xlw$HbXmBLVXTTyZfxQv root:$5$FHBFL/QdFl$FMipxpW0HlEFUIAr7IxF root:$5$sHvKf/M5OPdBuZZ$dz4qLOkTLGeCINX root:$5$hw4Vu/e34$/82lXu7ISrse.Ihk.qbqT root:$5$k1JOy/jRWZ$30YSk7kbhdKOjfDaiWVf root:$5$MnX.LUzqrB/B2$JuwqC.SmKFnMUWkEf root:$5$arRYf/PG$Xw6PpZNFO656p.Eb636iLt root:$5$5op/p8Hqs5$Nj2jA0Qxm80aG4fHW3oz root:$5$VHIT9/8yzZ$CpIK4ODps78GcqcsgiMT root:$5$.AlH7jBJoh/8$sjuVt.PcRH.vyvB3og root:$5$f7Ewinqm$nrJ2p/hKTuiEK//IfCTjth root:$5$N.dv/VCvrCADg$peSXfo35KN1dmbw/n root:$5$PSc4W./54l/SroH$CFFVOHRYK.Jj8Sp root:$5$8UBP3f4IcnAd/N1/$P.ud49qTStQ7Lw root:$5$qnXsZ/NlLZh/$nlaQVTS3FCJg1Jb2QG root:$5$xOpbbBqENR/7$boYJQzkCkZhRf7Uicf root:$5$V93tjZhzT$LrsIZWZmYo4ocRUvCixO6 root:$5$1MVz8/lf5oC/$rUKpnX23MhFx4.y2ZS Roughly half of the 6th hash characters are / : cat sixth-character-often-forward-slash.txt | cut --character=14 | sort / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / . . . . 2 5 6 8 8 B d D e e E f H I j j j J k k K l L M M n n N q r r r s S S t t T U U U U V w x X X X Z Z Z
Hash format and source The format of the password hash is $<type>$<salt>$<hash> , where <type> 5 is an SHA-256 based hash. The salt is usually at least 8 characters, (and is in the examples in the question) so the sixth character is part of the salt. Those hashes are likely generated by a version of the shadow tool suite (src package shadow in Debian, shadow-utils in CentOS) I tried to find out why, exactly, the code biases the slash. (thanks to @thrig for originally digging up the code.) TLDR: It's a bit interesting, but doesn't matter. The code generating the salt In libmisc/salt.c , we find the gensalt function that calls l64a in a loop: strcat (salt, l64a (random())); do { strcat (salt, l64a (random())); } while (strlen (salt) < salt_size); The loop takes a random number from random() , turns it into a piece of a string, and concatenates that to the string forming the salt. Repeat until enough characters are collected. What happens in l64a is more interesting though. The inner loop generates one character at a time from the input value (which came from random() ): for (i = 0; value != 0 && i < 6; i++) { digit = value & 0x3f; if (digit < 2) { *s = digit + '.'; } else if (digit < 12) { *s = digit + '0' - 2; } else if (digit < 38) { *s = digit + 'A' - 12; } else { *s = digit + 'a' - 38; } value >>= 6; s++; } The first line of the loop ( digit = value & 0x3f ) picks six bits from the input value, and the if clauses turn the value formed by those into a character. ( . for zero, / for a one, 0 for a two, etc.) l64a takes a long but the values output by random() are limited to RAND_MAX , which appears to be 2147483647 or 2^31 - 1 on glibc. So, the value that goes to l64a is a random number of 31 bits. By taking 6 bits at a time of a 31-bit value, we get five reasonably evenly distributed characters, plus a sixth that only comes from one bit! The last character generated by l64a cannot be a . , however, since the loop also has the condition value != 0 , and instead of a . as sixth character, l64a returns only five characters. Hence, half the time, the sixth character is a / , and half the time l64a returns five or fewer characters. In the latter case, a following l64a can also generate a slash in the first positions, so in a full salt, the sixth character should be a slash a bit more than half the time. The code also has a function to randomize the length of the salt, it's 8 to 16 bytes. The same bias for the slash character happens also with further calls to l64a which would cause the 11th and 12th character to also have a slash more often than anything else. The 100 salts presented in the question have 46 slashes in the sixth position, and 13 and 15 in the 11th and 12th position, respectively. (a bit less than half of the salts are shorter than 11 characters). On Debian On Debian, I couldn't reproduce this with a straight chpasswd as shown in the question. But chpasswd -c SHA256 shows the same behaviour. According to the manual, the default action, without -c , is to let PAM handle the hashing, so apparently PAM on Debian at least uses a different code to generate the salt. I didn't look at the PAM code on any distribution, however. (The previous version of this answer stated the effect didn't appear on Debian. That wasn't correct.) Significance, and requirements for salts Does it matter, though? As @RemcoGerlich commented, it's pretty much only a question of encoding.
It will effectively fix some bits of the salt to zero, but it's likely that this will have no significant effect in this case, since the origin of those bits is this call to srandom in seedRNG : srandom (tv.tv_sec ^ tv.tv_usec ^ getpid ()); This is a variant of ye olde custom of seeding an RNG with the current time. ( tv_sec and tv_usec are the seconds and microseconds of the current time, getpid() gives the process id of the running process.) As the time and PIDs are not very unpredictable, the amount of randomness here is likely not larger than what the encoding can hold. The time and PID is not something you'd like to create keys with, but might be unpredictable enough for salts. Salts must be distinct to prevent brute-force testing multiple password hashes with a single calculation, but should also be unpredictable, to prevent or slow down targeted precomputation, which could be used to shorten the time from getting the password hashes to getting the actual passwords. Even with the slight issues, as long as the algorithm doesn't generate the same salt for different passwords, it should be fine. And it doesn't seem to, even when generating a couple dozen in a loop, as the list in the question shows. Also, the code in question isn't used for anything but generating salts for passwords, so there are no implications about problems elsewhere. For salts, see also, e.g. this on Stack Overflow and this on security.SE . Conclusion In conclusion, there's nothing wrong with your system. Making sure your passwords are any good, and not used on unrelated systems is more useful to think about.
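To see the bias without touching any real system, the inner loop of l64a can be mimicked in plain bash (a sketch; it assumes GNU od and uses /dev/urandom to stand in for random()'s 31-bit output):

chars='./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'
slash=0 total=200
for (( i = 0; i < total; i++ )); do
    # take a 31-bit random value, like random() returns
    value=$(( $(od -An -N4 -tu4 /dev/urandom) & 0x7fffffff ))
    out=
    # same loop shape as l64a: up to six 6-bit digits, stopping when value hits 0
    for (( j = 0; j < 6 && value != 0; j++ )); do
        out+=${chars:$(( value & 0x3f )):1}
        (( value >>= 6 ))
    done
    [ "${out:5:1}" = / ] && (( ++slash ))
done
echo "$slash of $total sixth characters were '/'"

A run of 200 samples should report roughly 100, matching the analysis above: the sixth digit is either the single bit 1 (encoded as / ) or the loop stops after five characters.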
{ "source": [ "https://unix.stackexchange.com/questions/301334", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183124/" ] }
301,341
I'm using rsync to backup my data to an external USB3 disk (encrypted with dmcrypt/luks) and the problem is that the transfer hangs on a file, for an amount of time that can go from seconds to minutes, and then resumes with no errors or issues. This happens to several files (apparently random) during the same rsync "session" making it very slow, even though some file transfers can reach speeds of 100MB/s. I'm running Debian Jessie 8.5, rsync is at version 3.1.1, the source file system is formatted with btrfs (version 3.17) and the external disk was encrypted with crypsetup 1.6.6. The encrypted partition was formatted with btrfs, but after noticing this issue and finding this apparently unrelated ubuntu bug , I reformatted the partition to ext4 and, although it seemed to make the issue less frequent, the problem was still there. During these "hangs" no strange CPU or memory usage is detected but disk reads and writes drop to zero. This is an iotop output during a freeze: Total DISK READ : 0.00 B/s | Total DISK WRITE : 0.00 B/s Actual DISK READ: 0.00 B/s | Actual DISK WRITE: 0.00 B/s TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND 21879 be/4 root 0.00 B/s 0.00 B/s 0.00 % 99.00 % [kworker/6:1] 1085 be/4 root 0.00 B/s 0.00 B/s 0.00 % 99.00 % [kworker/3:2] 31994 be/4 root 0.00 B/s 0.00 B/s 0.00 % 99.00 % [kworker/4:3] 1 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % init 2 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kthreadd] 3 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [ksoftirqd/0] 5 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kworker/0:0H] 7 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_sched] The kworker processes are always changing but keep the 99% IO. This is an iostat output during one of the freezes (the external disk is sdg): avg-cpu: %user %nice %system %iowait %steal %idle 0.00 0.00 0.25 99.75 0.00 0.00 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sde 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdf 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 md0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 md1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdg 0.00 0.00 0.00 141.00 0.00 16920.00 240.00 135.94 868.20 0.00 868.20 7.09 100.00 dm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 20343.88 0.00 0.00 0.00 0.00 100.00 I also ran ps aux | awk '$8 ~ /D/ { print $0 }' with watch and during the freeze it's this: root 1080 0.1 0.0 0 0 ? D 16:23 0:00 [kworker/0:0] root 5851 0.0 0.0 0 0 ? D 01:41 0:02 [btrfs-transacti] root 17455 4.4 0.0 105028 5192 pts/3 D+ 14:10 6:11 rsync -avr --stats --progress --inplace --delete /data/ /media/BKP-DISK/ root 24219 0.1 0.0 0 0 ? D 15:16 0:08 [kworker/5:0] root 31892 0.2 0.0 0 0 ? D Aug02 2:08 [usb-storage] root 31956 0.1 0.0 0 0 ? D 15:41 0:04 [kworker/7:0] root 31994 0.0 0.0 0 0 ? D 15:42 0:01 [kworker/4:3] root 32100 0.1 0.0 0 0 ? D 15:52 0:03 [kworker/u16:2] When the transfer is running ok it's this: root 17453 4.4 0.1 105020 33304 pts/3 D+ 14:10 6:32 rsync -avr --stats --progress --inplace --delete /data/ /media/BKP-DISK/ I'm out of ideas and know-how so I need help to troubleshoot this further. 
Edit @derobert I tested in an USB2 port but the issue continues to appear (found a gap of 11 seconds in the strace log and then stopped the test). The last dmesg backtrace was when the external disk was still formatted with btrfs and here's the output (there were more but all the same): INFO: task kworker/u16:21:12881 blocked for more than 120 seconds. Not tainted 3.16.0-4-amd64 #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. kworker/u16:21 D ffff8807f72bfa48 0 12881 2 0x00000000 Workqueue: btrfs-endio-write btrfs_endio_write_helper [btrfs] ffff8807f72bf5f0 0000000000000046 0000000000012f00 ffff88022dcfbfd8 0000000000012f00 ffff8807f72bf5f0 ffff880103b92c00 ffff88026de241f0 ffff88026de241f0 0000000000000001 0000000000000000 ffff88010c4662d8 Call Trace: [<ffffffffa02843ef>] ? wait_current_trans.isra.20+0x9f/0xf0 [btrfs] [<ffffffff810a7e60>] ? prepare_to_wait_event+0xf0/0xf0 [<ffffffffa0285948>] ? start_transaction+0x298/0x570 [btrfs] [<ffffffffa028da90>] ? btrfs_finish_ordered_io+0x250/0x5c0 [btrfs] [<ffffffffa02b2f25>] ? normal_work_helper+0xb5/0x290 [btrfs] [<ffffffff810817c2>] ? process_one_work+0x172/0x420 [<ffffffff81081e53>] ? worker_thread+0x113/0x4f0 [<ffffffff81510d61>] ? __schedule+0x2b1/0x700 [<ffffffff81081d40>] ? rescuer_thread+0x2d0/0x2d0 [<ffffffff8108809d>] ? kthread+0xbd/0xe0 [<ffffffff81087fe0>] ? kthread_create_on_node+0x180/0x180 [<ffffffff81514958>] ? ret_from_fork+0x58/0x90 [<ffffffff81087fe0>] ? kthread_create_on_node+0x180/0x180 @roaima Since I run rsync with --progress I can see the current state of the transfer and that's how I first caught the issue. For example, for a file that is 1GB it could hang at 100MB and I would see all the transfer info (transfered bytes, speed, etc) stop updating (this is where iotop would show disk reads and write at 0), and when the info would start updating again iotop would show normal read and write values. @activesheetd Here is a section of the strace log (I added the timestamp option): 29253 03:47:18 <... select resumed> ) = 1 (in [0], left {59, 999999}) 29253 03:47:18 read(0, "\355\1H\347?~\0\255", 8) = 8 29251 03:47:18 select(6, [5], [4], [5], {60, 0} <unfinished ...> 29253 03:47:18 write(1, "\235\356\374|\f\230\310u\330{\7\24\3169<\255\213>\347m\335kX\350\234\253\1\226M\6#\341"..., 262144) = 262144 29253 03:47:31 select(1, [0], [], [0], {60, 0}) = 1 (in [0], left {59, 999997}) 29253 03:47:31 read(0, <unfinished ...> 29251 03:47:31 <... select resumed> ) = 1 (out [4], left {47, 597230}) Between the 4th and the 5th lines we can see a gap of 13 seconds, which corresponded with a hang, and then it resumed. @Fiximan The log option doesn't give me more info on this problem. Since the freeze is in the middle of a file transfer for the logs it's like nothing happened (even the strace logs show timestamp gaps).
{ "source": [ "https://unix.stackexchange.com/questions/301341", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183100/" ] }
301,426
I want to write the following bash function in a way that it can accept its input from either an argument or a pipe: b64decode() { echo "$1" | base64 --decode; echo } Desired usage: $ b64decode "QWxhZGRpbjpvcGVuIHNlc2FtZQo=" $ b64decode < file.txt $ b64decode <<< "QWxhZGRpbjpvcGVuIHNlc2FtZQo=" $ echo "QWxhZGRpbjpvcGVuIHNlc2FtZQo=" | b64decode
See Stéphane Chazelas's answer for a better solution. You can use /dev/stdin to read from standard input: b64decode() { if (( $# == 0 )) ; then base64 --decode < /dev/stdin echo else base64 --decode <<< "$1" echo fi } $# == 0 checks whether the number of command-line arguments is zero; base64 --decode <<< "$1" uses a here-string instead of echoing the argument and piping it to base64
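A quick sanity check with the sample string from the question (the classic basic-auth example, which decodes to Aladdin:open sesame plus a newline):

$ b64decode 'QWxhZGRpbjpvcGVuIHNlc2FtZQo='
Aladdin:open sesame

$ echo 'QWxhZGRpbjpvcGVuIHNlc2FtZQo=' | b64decode
Aladdin:open sesame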
{ "source": [ "https://unix.stackexchange.com/questions/301426", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183200/" ] }
301,483
I'm trying to copy a large directory from one drive to another. I mistakenly logged out before it was finished so only about 80% of the files copied over. Is there a way to copy the remaining files without starting from scratch?
I would try, rsync -a /from/file /dest/file you can use other options like --append , -P (--partial --progress) . See man rsync for more info. Or if you are using cp then use cp -u . from man cp : -u, --update copy only when the SOURCE file is newer than the destination file or when the destination file is missing.
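For a directory tree, note that trailing slashes change what rsync copies. A sketch with hypothetical paths that resumes the copy and shows progress: rsync -aP /mnt/source/dir/ /mnt/dest/dir/ The trailing slash on the source means "the contents of dir", so the destination keeps the same layout as the interrupted cp produced.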
{ "source": [ "https://unix.stackexchange.com/questions/301483", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183260/" ] }
301,717
On this question or on this one (for example) you will get solutions on how to look for symlinks pointing to a given directory (let's call it /dir1 ), while I am interested in symbolic links possibly pointing to any file/folder inside /dir1 . I want to delete that directory but I am not sure I can safely do so, as in another directory (let's call it /dir2 ) I may have symlinks pointing to inner parts of /dir1 . Further, I may have created these symlinks using absolute or relative paths. My only help is that I know the symlinks I want to check are on a mounted filesystem, on /dir2 .
You can find all the symbolic links using: find / -type l You might want to run this as root in order to get to every place on the disc. You can expand these using readlink -f to get the full, canonical path of each link's target, and then grep the output for the directory you are considering for deletion ( /dir1 in the question): find / -type l -exec readlink -f {} + | grep -F /dir1 Using find / -type l -printf '%l\n' doesn't work, as you get relative links like ../tmp/xyz which might be pointing to your target dir but are not matched because they are not fully expanded.
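Since the question notes that the links of interest live under /dir2, the search can be narrowed there, and it helps to print each matching link next to its resolved target. A sketch using only standard find/readlink, with /dir1 and /dir2 as in the question:

find /dir2 -type l -exec sh -c '
    for l in "$@"; do
        t=$(readlink -f "$l") || continue
        case $t in
            /dir1|/dir1/*) printf "%s -> %s\n" "$l" "$t" ;;
        esac
    done' sh {} +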
{ "source": [ "https://unix.stackexchange.com/questions/301717", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77329/" ] }
301,987
After searching through plenty of posts, Youtube videos, and "documentation" on the matter of systemd, I'm still at a loss. The link ( https://wiki.archlinux.org/index.php/systemd#Create_custom_target ) seemed promising, but was a bit vague (to me). Question How would one go about creating a custom systemd target (i.e. foo.target ) so that one may boot with select .service units? Example System boots default.target (symlink of "foo.target") "foo.target" only starts a barebones X server and GUI program, say "gvim". Reason I'm simply looking to create a custom target for quickly launching one X program. It'd be nice to exclude all the services I don't need. Thanks in advance!
Reading through man 5 systemd.unit and man 5 systemd.target tells us that unit files are used to define targets as well as everything else in systemd. There is no documentation specifically on how to create a target , so it's hard to determine how it should be done, but it is not too different from creating a service. When you create your target, you will need to make symlinks to the target.wants directory from the systemd services directory. Then you can set/boot your target. Here's how it might look given your example. /etc/systemd/system/foo.target This is the target's unit file. If graphical.target is taken as an example, we can create our own target using it as a base. [Unit] Description=Foobar boot target Requires=multi-user.target Wants=foobar.service Conflicts=rescue.service rescue.target After=multi-user.target rescue.service rescue.target AllowIsolate=yes To explain the options taken from the systemd manpages; Description -- Describes the target. You should understand Requires -- Hard dependencies of the target. You should let the basic system start before you start your own service(s) Wants -- Soft dependencies. The target does not require these to start. Conflicts -- If a unit has a Conflicts setting on another unit, starting the former will stop the latter and vice versa. After -- Boots after these services AllowIsolate -- Really up to you and your environment. Details are available in the manpage systemd.unit(5) /etc/systemd/system/foo.target.wants/ This is the directory where you will link the services you create/require for your target. It is equivalent to the Wants= option in the unit file. Create this directory and then create symlinks like so; ln -s /usr/lib/systemd/system/bar.service /etc/systemd/system/foo.target.wants/bar.service . This creates a symlink from bar.service in the system directory to your foo.target.wants directory. I think creating a unit file for a service is kind of out of the scope of this answer, and that question is definitely more documented so I'll leave that out for now. When you create your unit file, just symlink it into the target.wants directory or add it to the Wants= directive.
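Once the files are in place, the standard systemctl commands apply (foo.target here is just the example name from above):

systemctl isolate foo.target       # switch to the target right now, for testing
systemctl set-default foo.target   # make it the default boot target

set-default is the supported way to get the "default.target is a symlink of foo.target" effect asked about in the question, and isolate works here because the unit sets AllowIsolate=yes.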
{ "source": [ "https://unix.stackexchange.com/questions/301987", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183604/" ] }
302,261
Can someone clarify for me the difference between "enable" and "start" for a systemd unit? I have been told that if a unit has an [Install] section, then enable should be called, otherwise just start is enough. How is this handled in the startup process? Does systemd automagically make the right decision?
To start (activate) a service , run the command systemctl start my_service.service ; this will start the service immediately, in the current session. To enable a service at boot , run systemctl enable my_service.service . From the manual: Enable one or more units or unit instances. This will create a set of symlinks, as encoded in the "[Install]" sections of the indicated unit files. After the symlinks have been created, the system manager configuration is reloaded (in a way equivalent to daemon-reload), in order to ensure the changes are taken into account immediately. The /usr/lib/systemd/system/ directory contains the unit files shipped by packages; when you run systemctl enable for a service to start at boot, it is linked into /etc/systemd/system/ : #systemctl enable my_service.service ln -s '/usr/lib/systemd/system/my_service.service' '/etc/systemd/system/multi-user.target.wants/my_service.service'
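If you want both effects in one step, newer systemctl versions accept --now (check that your systemd is recent enough to support it):

systemctl enable --now my_service.service    # enable at boot and start immediately
systemctl disable --now my_service.service   # the reverse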
{ "source": [ "https://unix.stackexchange.com/questions/302261", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183272/" ] }
302,289
I have an input string like: arg1.arg2.arg3.arg4.arg5 The output I want is: arg5.arg4.arg3.arg2.arg1 It's not always 5 arg's, could be 2 to 10. How can I do this in a bash script?
Using a combination of tr + tac + paste : $ tr '.' $'\n' <<< 'arg1.arg2.arg3.arg4.arg5' | tac | paste -s -d '.' arg5.arg4.arg3.arg2.arg1 If you still prefer bash, you could do it this way: IFS=. read -ra line <<< 'arg1.arg2.arg3.arg4.arg5' out=${line[${#line[@]}-1]} for (( x = ${#line[@]} - 2; x >= 0; x-- )); do out+=".${line[x]}"; done echo "$out" Using perl , $ echo 'arg1.arg2.arg3.arg4.arg5' | perl -lne 'print join ".", reverse split/\./;' arg5.arg4.arg3.arg2.arg1
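For completeness, a single-process awk sketch does the same (any POSIX awk should do):

$ echo 'arg1.arg2.arg3.arg4.arg5' | awk -F. '{ s = $NF; for (i = NF-1; i >= 1; i--) s = s "." $i; print s }'
arg5.arg4.arg3.arg2.arg1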
{ "source": [ "https://unix.stackexchange.com/questions/302289", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/160653/" ] }
302,419
I don't know if this is normal, but the thing is, let's say I have a Solaris user called gloaiza and its password is password2getin. I'm logging into the server with PuTTY, I just put 192.168.224.100 and it prompts a window asking for a user, so I type gloaiza , then it asks for a password and let's say I type password2geti by mistake, and it worked! I'm IN the server! Is that normal? It also works if I put something like password2getin2 . I'm not a native English speaker, so, in case there's something you can't understand, please ask me. OS: Oracle Solaris 10 1/13
The operating system stores a hash of the password in /etc/shadow (or, historically, /etc/passwd ; or a different location on some other Unix variants). Historically, the first widespread password hash was a DES-based scheme which had the limitation that it only took into account the first 8 characters of the password. In addition, a password hashing algorithm needs to be slow; the DES-based scheme was somewhat slow when it was invented but is insufficient by today's standards. Since then, better algorithms have been devised. But Solaris 10 defaults to the historical DES-based scheme. Solaris 11 defaults to an algorithm based on iterated SHA-256 which is up to modern standards. Unless you need historical compatibility with ancient systems, switch to the iterated SHA-256 scheme. Edit the file /etc/security/policy.conf and change the CRYPT_DEFAULT setting to 5 which stands for crypt_sha256 . You may also want to set CRYPT_ALGORITHMS_ALLOW and CRYPT_ALGORITHMS_DEPRECATE . Once you've changed the configuration, run passwd to change your password. This will update the password hash with the currently configured scheme.
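A sketch of the change (the exact value names are documented in policy.conf(4); 5 is crypt_sha256 and 6, where available, crypt_sha512):

# /etc/security/policy.conf
CRYPT_ALGORITHMS_ALLOW=5,6
CRYPT_DEFAULT=5

Then run passwd for each account so the stored hashes are regenerated with the new algorithm; old DES hashes stay in place until each password is changed.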
{ "source": [ "https://unix.stackexchange.com/questions/302419", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/179831/" ] }
302,437
I have a 1Gb HDD image (created using bximage for Bochs), onto which I wish to install Grub 2. I understand that a Grub installation consists of 3 parts: The boot.img image, which occupies the first sector The core.img image, which occupies space following the first sector up until the start of the next track And the /boot/grub/ directory, in which the grub.cfg and other modules are located. First I use a boot.img image taken from my own Linux /boot/grub/ directory. Following this, I generate my core.img image using the following command: sudo grub-mkimage -v --format=i386-pc -o core.img -p\(hd0,msdos1\)/boot/grub ls ext2 part_msdos And to install them onto the final disk image, I use the following commands: sudo dd if=boot.img of=/dev/loop0 bs=446 count=1 the 446 blocksize is used so as to not overwrite the partition data that resides within the MBR sudo dd if=core.img of=/dev/loop0 bs=512 seek=1 and here, seek=1 is so as to not overwrite the MBR that was just written. The disk, starting from sector 2048 until the last, is formatted with an ext2 partition, and contains a boot/grub/ directory containing a grub.cfg (with a single bogus menuentry which doesn't load anything), and modules in the /boot/grub/i386-pc/ directory. Bochs successfully boots this installation of grub all the way to the grub> prompt. As this Ubuntu guide points out, this behaviour indicates that grub.cfg was not found. Upon invoking ls , I am faced with an interesting problem - I apparently have no devices connected at all! To further elaborate on the nature of the problem, I observed that when booting a grub-mkrescue image from a slave drive, invoking ls displayed its own rescue drive, and the previously 'non-existent' primary disk drive, along with the ext2 partition. I verified that /boot/grub.cfg could indeed be accessed. From this observation I would assume that my own core.img is missing some fundamental module or functionality. But which, and how would I amend this? I also conducted this exercise on a physical machine using a USB stick, and the exact same thing happened, so I can confirm that the problem is not with Bochs.
{ "source": [ "https://unix.stackexchange.com/questions/302437", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/175978/" ] }
302,439
I have a file that looks like: input: 112 1 2 01 1 000 0 0 22 0 122 2 2 22 0 I want to delete those columns in which there are fewer than 2 digits in each row. So the output should look like: 112 01 000 22 122 22 Any suggestions? Note that the real file is huge.
{ "source": [ "https://unix.stackexchange.com/questions/302439", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/133262/" ] }
302,548
This is how my bash prompt used to look. Then I did something which was probably not so smart: I ran cat /bin/bash . And now my bash prompt looks like this, with a pound symbol (£) instead of a hash symbol (#). It even affects hash symbols within files, see here: Any idea how to revert this? Edit: This question does not ask "How to change my bash prompt?", but "my bash prompt changed by itself, how can I restore it?" Complete .bashrc for those who are interested.
The terminal accepts and executes a bunch of different character sequences as control commands. For example, all cursor movement is done using those. Some of the codes make permanent changes, like setting colors, or telling the terminal to use an alternate character set. Executables and other binary files can well contain bytes that represent those commands, so dumping binary files to the terminal can have annoying side effects. See e.g. here for some of the control codes. The historical background to this is that originally, terminals were rather dumb devices with a screen and a keyboard , and they connected to the actual computer via a serial port. Before that, they were printers with keyboards. There wasn't much of a protocol to separate data bytes from command bytes, so commands were given to the terminal "inline". (Or rather, the escape codes and control characters were the protocol.) One might assume that if the system was devised today, there would be clearer separation between data and commands. Instead of just closing the terminal window or killing the emulator, you can use the reset command , which sends a similar command (or several) to reset the terminal back to sane defaults. I don't know what exactly would cause the hash to pound change. (But @Random832 does, see their answer .) I'm more familiar with the "alternate character set", which can change all characters into line-drawing glyphs. Even if that happens, input from the keyboard usually goes through unchanged, so writing reset Enter still works even if the characters display as garbage or not at all. (Compared to your prompt being turned into a bunch of lines, you only got a minor effect.)
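For this particular symptom, the likely culprit is a character-set escape: the VT100-style sequence ESC ( A designates the UK character set for G0, in which the hash glyph is drawn as £. If that is what the binary happened to contain, switching G0 back to US ASCII should fix it without a full reset (a guess based on the symptom, not on inspecting /bin/bash):

printf '\033(B'    # ESC ( B: designate US ASCII as the G0 character set

Otherwise, plain reset covers this and the other modes a stray binary can leave behind.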
{ "source": [ "https://unix.stackexchange.com/questions/302548", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/184025/" ] }
303,157
I was testing the speed of Bash and Python by running a loop 1 billion times. $ cat python.py #!/bin/python # python v3.5 i=0; while i<=1000000000: i=i+1; Bash code: $ cat bash2.sh #!/bin/bash # bash v4.3 i=0 while [[ $i -le 1000000000 ]] do let i++ done Using the time command I found out that the Python code takes just 48 seconds to finish while the Bash code took over 1 hour before I killed the script. Why is this so? I expected that Bash would be faster. Is there something wrong with my script or is Bash really much slower with this script?
Shell loops are slow and bash's are the slowest. Shells aren't meant to do heavy work in loops. Shells are meant to launch a few external, optimized processes on batches of data. Anyway, I was curious how shell loops compare so I made a little benchmark: #!/bin/bash export IT=$((10**6)) echo POSIX: for sh in dash bash ksh zsh; do TIMEFORMAT="%RR %UU %SS $sh" time $sh -c 'i=0; while [ "$IT" -gt "$i" ]; do i=$((i+1)); done' done echo C-LIKE: for sh in bash ksh zsh; do TIMEFORMAT="%RR %UU %SS $sh" time $sh -c 'for ((i=0;i<IT;i++)); do :; done' done G=$((10**9)) TIMEFORMAT="%RR %UU %SS 1000*C" echo 'int main(){ int i,sum; for(i=0;i<IT;i++) sum+=i; printf("%d\n", sum); return 0; }' | gcc -include stdio.h -O3 -x c -DIT=$G - time ./a.out ( Details: CPU: Intel(R) Core(TM) i5 CPU M 430 @ 2.27GHz ksh: version sh (AT&T Research) 93u+ 2012-08-01 bash: GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu) zsh: zsh 5.2 (x86_64-unknown-linux-gnu) dash: 0.5.7-4ubuntu1 ) The (abbreviated) results (time per iteration) are: POSIX: 5.8 µs dash 8.5 µs ksh 14.6 µs zsh 22.6 µs bash C-LIKE: 2.7 µs ksh 5.8 µs zsh 11.7 µs bash C: 0.4 ns C From the results: If you want a faster shell loop and your shell has the [[ syntax, you're in an advanced shell that also has the C-like for loop; use that, as it can be about 2 times as fast as a while [ loop in the same shell. ksh has the fastest for (( loop at about 2.7µs per iteration dash has the fastest while [ loop at about 5.8µs per iteration C for loops can be 3-4 decimal orders of magnitude faster. (I heard the Torvalds love C). The optimized C for loop is 56500 times faster than bash's while [ loop (the slowest shell loop) and 6750 times faster than ksh's for (( loop (the fastest shell loop). Again, the slowness of shells shouldn't matter much though, because the typical pattern with shells is to offload to a few processes of external, optimized programs. With this pattern, shells often make it much easier to write scripts with performance superior to python scripts (last time I checked, creating process pipelines in python was rather clumsy). Another thing to consider is startup time. time python3 -c ' ' takes 30 to 40 ms on my PC whereas shells take around 3ms. If you launch a lot of scripts, this quickly adds up and you can do very very much in the extra 27-37 ms that python takes just to start. Small scripts can be finished several times over in that time frame. (NodeJs is probably the worst scripting runtime in this department as it takes about 100ms just to start (even though once it has started, you'd be hard pressed to find a better performer among scripting languages)).
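To make the "offload to optimized processes" point concrete, the billion increments from the question collapse to one or two external processes (a sketch; timings will vary by machine):

time awk 'BEGIN { for (i = 0; i <= 10^9; i++); }'    # one awk process does the whole loop
time seq 1000000000 | wc -l                          # or stream the work through a pipeline

Both finish orders of magnitude sooner than the pure-bash loop.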
{ "source": [ "https://unix.stackexchange.com/questions/303157", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52733/" ] }
303,423
An answer to Linux: allowing an user to listen to a port below 1024 specified giving an executable additional permissions using setcap such that the program could bind to ports <1024: setcap 'cap_net_bind_service=+ep' /path/to/program What is the correct way to undo these permissions?
To remove capabilities from a file use the -r flag setcap -r /path/to/program This will result in the program having no capabilities.
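You can verify the state before and after with getcap (the exact output format varies slightly between libcap versions):

$ getcap /path/to/program
/path/to/program = cap_net_bind_service+ep
$ sudo setcap -r /path/to/program
$ getcap /path/to/program
$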
{ "source": [ "https://unix.stackexchange.com/questions/303423", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/167427/" ] }
303,605
The full portion of the Bash man page which is applicable only says: If the operating system on which bash is running supports job control, bash contains facilities to use it. Typing the suspend character (typically ^Z, Control-Z) while a process is running causes that process to be stopped and returns control to bash. Typing the delayed suspend character (typically ^Y, Control-Y) causes the process to be stopped when it attempts to read input from the terminal, and control to be returned to bash. The user may then manipulate the state of this job, using the bg command to continue it in the background, the fg command to continue it in the foreground, or the kill command to kill it. A ^Z takes effect immediately, and has the additional side effect of causing pending output and typeahead to be discarded. I have never used Ctrl - Y ; I only just learned about it. I have done fine with Ctrl - Z (suspend) only. I am trying to imagine what this option is for . When would it be useful? (Note that this feature doesn't exist on all Unix variants. It's present on Solaris and OpenBSD but not on Linux or FreeBSD. The corresponding setting is stty dsusp .) Perhaps less subjectively: Is there anything that can be accomplished with Ctrl - Y that cannot be accomplished just as easily with Ctrl - Z ?
From the 4BSD manual for csh : A ^Z takes effect immediately and is like an interrupt in that pending output and unread input are discarded when it is typed. There is another special key ^Y which does not generate a STOP signal until a program attempts to read(2) it. This can usefully be typed ahead when you have prepared some commands for a job which you wish to stop after it has read them. So, the purpose is to type multiple inputs while the first one is being processed, and have the job stop after they are done.
{ "source": [ "https://unix.stackexchange.com/questions/303605", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135943/" ] }