These sketches should help you answer all the questions: https://unix.stackexchange.com/a/118650/121504

But to answer your questions explicitly:

For remote forwarding: port is a connection endpoint in the SSH server process.
For local forwarding: port is a connection endpoint in the SSH client process.
For SOCKS proxy: port is a connection endpoint in the SSH client process.

The sketches linked above explain this much more visually, but to sum it up: the first port (for a SOCKS proxy, the only one) is always the free port you are going to connect to in the next step. The other port is the port where your existing service is running.

Edit: If I understand the question correctly, the easiest way to find out is to use lsof. In my examples the port is 12345.

For remote forwarding:

[local ] $ ssh -R 12345:localhost:22 remote
[remote] $ lsof -P | grep 12345
sshd 27772 root 7u IPv6 1304283702 0t0 TCP localhost:12345 (LISTEN)
sshd 27772 root 8u IPv4 1304283703 0t0 TCP localhost.localdomain:12345 (LISTEN)

For local forwarding:

[local] $ ssh -L 12345:localhost:22 remote
[local] $ lsof -p $(pidof ssh) -P | grep 12345
ssh 6779 jakuje 4u IPv6 146565 0t0 TCP ip6-localhost:12345 (LISTEN)
ssh 6779 jakuje 5u IPv4 146566 0t0 TCP localhost:12345 (LISTEN)

For dynamic port forwarding:

[local] $ ssh -D 12345 user@remote
[local] $ lsof -p $(pidof ssh) -P | grep 12345
ssh 11388 jakuje 4u IPv6 173537 0t0 TCP ip6-localhost:12345 (LISTEN)
ssh 11388 jakuje 5u IPv4 173538 0t0 TCP localhost:12345 (LISTEN)
(1) For remote forwarding:

-R [bind_address:]port:host:hostport

Specifies that the given port on the remote (server) host is to be forwarded to the given host and port on the local side. This works by allocating a socket to listen to port on the remote side, and whenever a connection is made to this port, the connection is forwarded over the secure channel, and a connection is made to host port hostport from the local machine. Port forwardings can also be specified in the configuration file. Privileged ports can be forwarded only when logging in as root on the remote machine. IPv6 addresses can be specified by enclosing the address in square brackets. By default, the listening socket on the server will be bound to the loopback interface only. This may be overridden by specifying a bind_address. An empty bind_address, or the address ‘*’, indicates that the remote socket should listen on all interfaces. Specifying a remote bind_address will only succeed if the server's GatewayPorts option is enabled (see sshd_config(5)). If the port argument is ‘0’, the listen port will be dynamically allocated on the server and reported to the client at run time. When used together with -O forward the allocated port will be printed to the standard output.

hostport specifies a connection endpoint for the destination process running on the destination host. Is port a connection endpoint in the SSH server process, or in a process which runs on the same source host as the SSH server and wants to use the SSH tunneling by attaching itself to port? (My guess is the latter)

(2) For local forwarding:

-L [bind_address:]port:host:hostport

Specifies that the given port on the local (client) host is to be forwarded to the given host and port on the remote side. This works by allocating a socket to listen to port on the local side, optionally bound to the specified bind_address. Whenever a connection is made to this port, the connection is forwarded over the secure channel, and a connection is made to host port hostport from the remote machine. Port forwardings can also be specified in the configuration file. IPv6 addresses can be specified by enclosing the address in square brackets. Only the superuser can forward privileged ports. By default, the local port is bound in accordance with the GatewayPorts setting. However, an explicit bind_address may be used to bind the connection to a specific address. The bind_address of “localhost” indicates that the listening port be bound for local use only, while an empty address or ‘*’ indicates that the port should be available from all interfaces.

hostport specifies a connection endpoint for the destination process running on the destination host. Is port a connection endpoint in the SSH client process or in a process which runs on the same source host as the SSH client and wants to use the SSH tunnel by attaching itself to port? (My guess is the latter)

(3) For SOCKS proxy:

-D [bind_address:]port

Specifies a local “dynamic” application-level port forwarding. This works by allocating a socket to listen to port on the local side, optionally bound to the specified bind_address. Whenever a connection is made to this port, the connection is forwarded over the secure channel, and the application protocol is then used to determine where to connect to from the remote machine. Currently the SOCKS4 and SOCKS5 protocols are supported, and ssh will act as a SOCKS server. Only root can forward privileged ports. Dynamic port forwardings can also be specified in the configuration file.
IPv6 addresses can be specified by enclosing the address in square brackets. Only the superuser can forward privileged ports. By default, the local port is bound in accordance with the GatewayPorts setting. However, an explicit bind_address may be used to bind the connection to a specific address. The bind_address of “localhost” indicates that the listening port be bound for local use only, while an empty address or ‘*’ indicates that the port should be available from all interfaces.

Is port a connection endpoint in the SSH client process, in the SSH SOCKS server, or in a process which runs on the same host as the SSH client and wants to use the SOCKS server by connecting to port? (My guess is the second. I guess it is not the first because the SSH client has its own default port(s). I am not sure about the third)
Which processes do the ports (as communication connection endpoints) belong to in SSH port forwarding?
The easiest way is to use the tool called cloc. Use it this way:

cloc .

That's it. :-)
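If you would rather not install anything, a rough plain-shell equivalent for the subdirectory case in the question (counting every line, comments and blanks included) might look like this:

find . -name '*.[ch]' -print0 | xargs -0 cat | wc -l

Unlike cloc, this makes no attempt to skip blank lines or comments; it simply extends the cat *.c *.h | wc -l idea to subdirectories.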
If I want to count the lines of code, the trivial thing is

cat *.c *.h | wc -l

But what if I have several subdirectories?
Counting lines of code?
RHEL/Fedora

Run rpm -qf /path:

$ rpm -qf /usr/bin/echo
coreutils-8.25-17.fc25.x86_64

Download the source package (use yum for RHEL):

$ dnf download coreutils --enablerepo="*source"

Extract the sources and patches from the SRPM package downloaded in the current directory, change to the directory where the files are extracted, and find your file:

$ rpmbuild -rp coreutils-8.25-17.fc25.src.rpm
$ cd ~/rpmbuild/BUILD/coreutils-8.25/
$ find src -iname '*echo*'
src/echo.c

You can rebuild the package using rpmbuild --rebuild coreutils-8.25-17.fc25.src.rpm, which will produce the RPMs that you can directly install on your system. If you need to make some modifications to Fedora packages, it is much easier to go the maintainer way: install fedpkg, clone the repository, do the modifications (using patches) and rebuild the package with the modifications:

$ sudo dnf install fedpkg
$ fedpkg clone coreutils
$ cd coreutils
$ # do the modifications
$ fedpkg local
I suddenly decided I'd like to look at the source code for 'echo':

$ which echo
/usr/bin/echo

so

$ ls -al /usr/bin/echo
-rwxr-xr-x. 1 root root 32536 Oct 31 2016 /usr/bin/echo

so

$ strings /usr/bin/echo

leads me to believe it's a compiled C program. Now I'm stuck. How do I:

1. Find out which package it's in
2. Get the source
3. Rebuild it
4. Test it
5. Install the new version system-wide

(I know that 5's not a good idea, I'm just curious...) I'm currently on Fedora, but I'd also be interested in the answers for Debian. A link to a relevant tutorial would be a good answer.

Edit:

$ type -a echo
echo is a shell builtin
echo is /usr/bin/echo

So I guess it's the one in /usr/bin/echo I'd like to see rather than trying to read the whole of bash.
How do I look at the source code for a command? [closed]
Have a look inside the .asc file; you will see it starts with:

-----BEGIN PGP SIGNATURE-----

So this is a PGP signature. It means it signs some content with some specific PGP key.

1) The content

Based on the name, the content is the .txt file: it is the list of files corresponding to the software to download, and each file has its corresponding hash.

2) The signature

If you launch gpg on both files, here is the result:

$ gpg --verify cmake-3.11.0-rc3-SHA-256.txt.asc cmake-3.11.0-rc3-SHA-256.txt
gpg: Signature made Fri Mar 9 10:29:10 2018 EST
gpg: using RSA key 2D2CEF1034921684
gpg: Can't check signature: No public key

So how does it all work? You are supposed to have the key 2D2CEF1034921684 in your local keychain. How do you get it, and more importantly, how do you make sure you get the proper one? (The ID by itself is not enough.) This is where the web of trust model of OpenPGP comes into play. It would be too long to detail here, but in short: ideally you get access to the key out of band and have means to authenticate it... or to authenticate some other key of someone you know that has itself authenticated the first key. And/or you find it online in one or multiple places, secured by HTTPS (with a certificate and CA you trust), and ideally with DNSSEC.

If you have the public key, and you trust it to be good and not forged (and that it corresponds to the software developers of the tool you are trying to download), the above command shows you that the .txt file has not been tampered with. And in turn this gives you the hash of all archive files, so you can download them, re-run the hash algorithm (SHA-256, as written in the filename) and compare the result with the value stored in the file.

If someone had compromised the server and changed some archives, they could, as you say yourself, also change the .txt file with the new hashes, so you would believe the files match. However, this third party would not be able to generate the proper .asc file with the PGP signature (upon gpg --verify you would get an error about an invalid signature), because to do so they would need access to the private key, which is supposedly properly secured and not stored on the webserver anyway. And if they generate it with another key, you will see it, because you will not trust this unknown key. But of course the whole model collapses if the key itself is compromised.
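A minimal verification session, using the key ID shown above (note that fetching a key blindly from a keyserver only tells you the file was signed by that key; you still need to check the fingerprint out of band):

$ gpg --recv-keys 2D2CEF1034921684        # fetch the public key from a keyserver
$ gpg --verify cmake-3.11.0-rc3-SHA-256.txt.asc cmake-3.11.0-rc3-SHA-256.txt
$ sha256sum -c --ignore-missing cmake-3.11.0-rc3-SHA-256.txt
cmake-3.11.0-rc3.tar.gz: OK

The --ignore-missing flag (GNU coreutils) makes sha256sum check only the archives you actually downloaded, rather than complaining about every file listed in the .txt.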
I'm installing cmake from the cmake.org website, and they provide two files that I believe are intended to verify the source code download cmake-3.11.0-rc3.tar.gz. On the same page, they have links to download a cmake-3.11.0-rc3-SHA-256.txt file and a cmake-3.11.0-rc3-SHA-256.txt.asc file. What I don't understand is:

How does an asc file from the same source (cmake.org) ensure the source code's integrity? If the source code offered by the site was compromised, couldn't the attacker also compromise the asc file?

Don't I need a public key to truly verify the source code download? I thought the asc file was supposed to be the public key. However, when I try to import the asc file with

gpg --import cmake-3.11.0-rc3-SHA-256.txt.asc

I get an error that says "no valid OpenPGP data found"
How does providing an asc file ensure I'm downloading the intended source code?
Taken literally, kernel maintainers are the people listed in the MAINTAINERS file. There are mainly two ways to get listed there: one is to add a subsystem to the kernel, and become its maintainer, the other is to take over maintainership for an existing kernel component. There was a recent example of the latter which followed an episode which generated some buzz, the potential removal of the floppy driver; Denis Efremov became the new maintainer. In general, becoming a maintainer is a result of becoming involved in the general curation of a given component. If you do a good job, gain a reputation as someone reliable and trustworthy, and the appropriate circumstances arise, you’ll eventually have the opportunity to become a maintainer. This applies for small components (e.g. the floppy driver mentioned previously: Denis had demonstrated his ability to take good care of it before offering to maintain it), and of course for larger components, all the way up the hierarchy. This also tends to apply generally, in free software / open source projects, but with different terminology: typically, “committer” status.
According to this link, when changes to the Linux source code are submitted, they are reviewed by a hierarchy of maintainers, eventually concluding with Linus himself. How does one become such a maintainer? (Context: I'm teaching a class about the basics of Linux and one of my students asked this, and I'm having trouble finding a satisfying answer online.)
How to become a Linux source code maintainer?
find + GNU sed solution:

find . -type f -name "*.[ch]" -exec sed -i '/^#include / s|\\|/|g' {} +

"*.[ch]" - wildcard to find files with extension .c or .h
-i - GNU sed extension to edit the files in-place without backup. FreeBSD/macOS sed have a similar extension where the syntax is -i '' instead.
/^#include / - on encountering/matching a line which starts with the pattern #include
s|\\|/|g - substitute all backslashes \ with forward slashes / (\ escaped with backslash \ for literal representation)
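To preview what would change before editing anything in place, one option is to drop -i and print only the lines where a substitution actually happens:

find . -type f -name "*.[ch]" -exec sed -n '/^#include / s|\\|/|gp' {} +

The p flag after the s command, combined with -n, prints only the rewritten #include lines, so you can sanity-check the result first.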
I have some C source code which was originally developed on Windows. Now I want to work on it in Linux. There are tons of include directives that should be changed to Linux format, e.g.:

#include "..\includes\common.h"

I am looking for a command line to go through all .h and .c files, find the include directives, and replace any backslash with a forward slash.
Replacing backslashes with forward slash within double quotes
Minification is not generally a reversible operation, as information can be lost in the process; consider human-readable variable names, comments, logical constructs which can be written in a multitude of different ways, etc. But there are various tools which can pretty-print or beautify your code, which should solve #1 for you. One example is: https://github.com/mvdan/sh

A shell parser, formatter and interpreter (POSIX/Bash/mksh)

Running your one-liner through it produces the following result:

% shfmt <<<"echo foo;echo bar;echo \"baz;bing\";echo 'buz;bong'"
echo foo
echo bar
echo "baz;bing"
echo 'buz;bong'
Various minifier scripts exist for shell code (e.g. bash-minifier), but how about the reverse? Are there any shell-centric utils or scripts to automatically turn a one-liner like this:

echo foo;echo bar;echo "baz;bing";echo 'buz;bong'

...into this:

echo foo
echo bar
echo "baz;bing"
echo 'buz;bong'

Or turn minimalist logic like this:

true && echo foo

...into this:

if true ; then
    echo foo
fi
Are there any unminify tools for shell scripting?
That's just the binary representation of the ASCII encoding of "Hello World", not an executable; there's no way to execute that.
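You can confirm this by decoding the bits back to text. A quick sketch with Perl (each 8-digit group is packed as one byte):

$ echo '01001000 01100101 01101100 01101100 01101111 00100000 01010111 01101111 01110010 01101100 01100100' | perl -lape '$_ = pack "(B8)*", @F'
Hello World

An actual Linux executable would instead start with an ELF header (the bytes 7f 45 4c 46), followed by program headers and machine code; a pile of ASCII bytes is simply data.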
I have a binary code and I want to run it:

01001000 01100101 01101100 01101100 01101111 00100000 01010111 01101111 01110010 01101100 01100100

How can I create a file "application/x-executable" and execute it on Debian?
Execute binary code
Git uses the libcurl library to push/fetch repositories via http:// and https://. This error occurs if you compile git without the library present. Install it (yum/dnf install libcurl-devel) and then reconfigure and recompile git. It should work.

Link: https://github.com/git/git/blob/b896f729e240d250cf56899e6a0073f6aa469f5d/INSTALL#L141-L149
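A sketch of the rebuild, assuming the source tree from the question was unpacked somewhere like ~/src/git-2.34.1 (the paths here are illustrative, not taken from the question):

$ sudo yum install libcurl-devel
$ cd ~/src/git-2.34.1
$ make distclean                        # drop the old curl-less build configuration
$ ./configure --prefix=/opt/git/2.34.1
$ make && sudo make install

configure should now report that it found curl, and the git-remote-https helper will be built and installed alongside the main git binary.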
I'm working on a CentOS 7.9 GNU/Linux system. I've built and installed a newer version of git (2.34.1 instead of the 1.8.3.1 that's bundled with the distribution) under /opt/git/2.34.1, with a symlink to that directory at /opt/git/current; and I've added that symlinked directory to (the beginning of) my $PATH variable. Unfortunately, when I try to check out a repository with an HTTPS URL, I get a few errors:

$ git clone https://github.com/eyalroz/cuda-api-wrappers.git
Cloning into 'cuda-api-wrappers'...
git: 'remote-https' is not a git command. See 'git --help'.

Cloning with the old version of git works. Why does this happen, and what can I do to resolve it?
git clone from https URL fails, says 'remote-https' is not a git command and that templates weren't found
If you just want to remove anything between /* and */, and ignore all the quirks of the C language, like C99-style //-comments, quoted strings and backslash-escapes of newlines, then a simple Perl regex should do:

perl -0777 -pe 's,/\*.*?\*/,,gs' inputfile

Here -0777 slurps the whole file at once so a comment can span multiple lines, the s modifier lets . match newlines, and .*? matches non-greedily so each comment ends at the first */.
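Applied to the struct from the question, assuming it is saved as test.c:

$ perl -0777 -pe 's,/\*.*?\*/,,gs' test.c
struct my_struct{
    field1;
    field2; 
    field3; 
} struct_name;

(Trailing whitespace where the comments used to be is left behind; add a second substitution such as s, +$,,gm if that bothers you.)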
I am trying to remove comments from a file; they may be in any part of a line and span multiple lines.

struct my_struct{
    field1;
    field2; /** comment 1 */
    field3; /* comment 2 */
} struct_name;

I need to get

struct my_struct{
    field1;
    field2;
    field3;
} struct_name;

I tried using grep -o '[^/*]*[^*/]' to remove any text between matching /* and */, but it is eliminating the comment symbols and not the text in between. What is the correct way? If there is another way using 'sed', it would be nice to know that too.
Remove comments in a C file [duplicate]
The place to find the open source components of macOS is https://opensource.apple.com/, and the package where arch is included is called system_cmds. Unfortunately, the links for Catalina (10.15.x) seem to be unavailable at the time of writing (this is not uncommon, because Apple usually publishes the source with some delay). The version you want is probably system_cmds-854.11.2 (the link is at https://opensource.apple.com/release/macos-1015.html but it is broken for the moment). However, if you are fine with the version for Mojave (10.14.x), then you can get the source here: https://opensource.apple.com/source/system_cmds/system_cmds-805.250.2/arch.tproj/

Hope this helps!

Note: If you really need the Catalina version, you'll need to wait until Apple publishes it on the website above.
I need to get the source code of the "arch" command located in /usr/bin/arch for macOS Catalina (see the output of the sw_vers command below):

ProductName:    Mac OS X
ProductVersion: 10.15.3
BuildVersion:   19D76

In case you need it, here are some architecture details: MacBook Pro 15-inch, 2019, Processor 2.3GHz * core Intel Core i9.

I found that macOS Catalina is one of many releases of Apple's OS Darwin, as explained here: Darwin OS. The same link also states that:

Darwin is an open-source Unix-like operating system first released by Apple Inc. in 2000. It is composed of code developed by Apple, as well as code derived from NeXTSTEP, BSD, Mach, and other free software projects.

So I thought that maybe I could find it here: Free BSD Source at GitHub, but I had no luck there either. Could someone please help? Thanks!!
Source code of arch command for macOS Catalina Version 10.15.3 (19D76)
The task state represented by “X” isn’t TASK_DEAD, it’s the EXIT_DEAD exit state. TASK_DEAD itself isn’t a reportable state, and while EXIT_DEAD is, it isn’t supposed to be visible in practice. EXIT_DEAD’s role is similar to what you describe for TASK_DEAD: a task’s exit state is set to EXIT_DEAD shortly before its task_struct is deleted by release_task; see for example de_thread, release_task itself, and exit_notify. I haven’t checked the locking in detail, and changes in process state can be seen by readers; however, it seems unlikely that a process would ever be seen in the EXIT_DEAD state by another process. Whether it can be seen or not, a process is in state “X” once it’s fully exited and its task_struct is about to be deleted.
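If you want to try to catch one anyway, you can poll ps in a tight loop; on a real system you are still very unlikely to ever see a hit, which matches the "should never be seen" note in ps(1):

while sleep 0.1; do ps -eo pid,stat,comm | awk '$2 ~ /^X/'; done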
According to fs/proc/array.c:130, the following array defines various process states:

/*
 * The task state array is a strange "bitmap" of
 * reasons to sleep. Thus "running" is zero, and
 * you can test for combinations of others with
 * simple bit tests.
 */
static const char * const task_state_array[] = {

        /* states in TASK_REPORT: */
        "R (running)",          /* 0x00 */
        "S (sleeping)",         /* 0x01 */
        "D (disk sleep)",       /* 0x02 */
        "T (stopped)",          /* 0x04 */
        "t (tracing stop)",     /* 0x08 */
        "X (dead)",             /* 0x10 */
        "Z (zombie)",           /* 0x20 */
        "P (parked)",           /* 0x40 */

        /* states beyond TASK_REPORT: */
        "I (idle)",             /* 0x80 */
};

According to proc(5), the X state was added in kernel 2.6.0:

X    Dead (from Linux 2.6.0 onward)
x    Dead (Linux 2.6.33 to 3.13 only)

And according to ps(1), the X shouldn't be seen:

X    dead (should never be seen)

Looking at the rest of the source code, it seems like it is used internally by the kernel. In the source file kernel/sched/core.c:4176, a comment briefly describes it:

/*
 * A task struct has one reference for the use as "current".
 * If a task dies, then it sets TASK_DEAD in tsk->state and calls
 * schedule one last time. The schedule call will never return, and
 * the scheduled task must drop that reference.
 *
 * We must observe prev->state before clearing prev->on_cpu (in
 * finish_task), otherwise a concurrent wakeup can get prev
 * running on another CPU and we could race with its RUNNING -> DEAD
 * transition, resulting in a double drop.
 */

It also appears to be required in some cases. In kernel/fork.c:424:

static void release_task_stack(struct task_struct *tsk)
{
        if (WARN_ON(tsk->state != TASK_DEAD))
                return;  /* Better to leak the stack than to free prematurely */

        account_kernel_stack(tsk, -1);
        free_thread_stack(tsk);
        tsk->stack = NULL;
#ifdef CONFIG_VMAP_STACK
        tsk->stack_vm_area = NULL;
#endif
}

It looks to me like TASK_DEAD is set for a process when it terminates but before the kernel finally destroys task_struct, so it should never appear as a process state unless there is a kernel bug that fails to clean up the process. There are also these lecture notes which reinforce this idea:

TASK_DEAD – the process is being cleaned up and the task is being deleted

So to my real question: in what circumstances will a process be reported by ps as being in state X?
In what circumstances will a process be in state X (dead)?
Raspbian itself contains 22,544 source packages in its main repository, with 67,417 files to download (as of 2016, Raspbian Jessie) if you want all the source code. Rebuilding all that isn't something I'd consider doing manually...

If you really want to download all the source code for Raspbian, you should start by downloading the source repository index, and process that to construct the download URLs. Something like the following script should get you started:

#!/usr/bin/awk -f

/^$/ {
        for (i = 0; i < nbfiles; i++) {
                print "http://archive.raspbian.org/raspbian/" directory "/" files[i]
        }
}

/^Files: *$/ {
        infiles = 1
        nbfiles = 0
        next
}

infiles == 1 && /^ / {
        files[nbfiles] = $3
        nbfiles++
}

infiles == 1 && /^[^ ]/ {
        infiles = 0
}

/^Directory: / {
        directory = $2
}
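To show how the script is meant to be driven (assuming it is saved as make-urls.awk and made executable; the index path is an assumption based on the standard Debian repository layout):

$ wget http://archive.raspbian.org/raspbian/dists/jessie/main/source/Sources.gz
$ zcat Sources.gz > Sources
$ ./make-urls.awk Sources > urls.txt
$ wget -i urls.txt      # fetches every .dsc/.orig/.debian file: tens of thousands

Each paragraph in the Sources index describes one source package; the script collects the file names from its Files: field and prints one full URL per file when it reaches the blank line ending the paragraph.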
I'm new to Linux, and my teacher asked me to learn how to build Raspbian from the source code. From what I read in other questions, I need to download the Raspbian source code first. In some questions, the links http://archive.raspbian.org/raspbian/pool/main/ and https://github.com/raspberrypi/linux appear to be the places where I can get the source code for the OS and kernel. The thing is, I'm not sure what to download. I need the source code of the Raspbian OS, and then to try to build it as-is, for academic reasons. And I'm quite sure I was asked to compile the Raspbian OS, not the Raspbian kernel. I finished compiling the Raspbian kernel yesterday, and today I was asked to build the Raspbian OS itself. After I manage to build the OS, I am required to create a module to make the Raspberry Pi work with a certain sensor device (currently undecided).

PS: I think this is building a Linux distro without added customisation. Is that right?
How to download a whole raspbian source code?
I am unsure if this is the right solution to my problem, but since it resolved the warning, I will add it here:

sudo apt-get install debian-keyring

As pointed out by Stephen Kitt, there is another possibility: disable verification with

dget -x -u ...

but the first approach is better from a security standpoint.
System: Linux Mint 19 Cinnamon, based on Ubuntu 18.04.

In this answer, I am being pointed at a different solution, other than installing directly from source. Since I haven't ever used dget, I must have it installed first with:

$ sudo apt-get install devscripts

Upon the first suggested line:

$ dget -x http://deb.debian.org/debian/pool/main/r/redshift/redshift_1.12-2.dsc

I originally got Validation FAILED!!, which I quickly overcame by creating the following file:

~/.devscripts

with contents:

DSCVERIFY_KEYRINGS="/etc/apt/trusted.gpg:~/.gnupg/pubring.kbx"

as pointed out (slightly changed by me) in this AskUbuntu answer, and simultaneously importing the public key with:

$ gpg --keyserver hkp://keys.gnupg.net --recv-keys 402543B2D98854007F627D36A63A58A3F2E17569

I still get a warning:

dpkg-source: warning: failed to verify signature on ./redshift_1.12-2.dsc

The whole command output follows (curl progress meters omitted):

dget: retrieving http://deb.debian.org/debian/pool/main/r/redshift/redshift_1.12-2.dsc
dget: retrieving http://deb.debian.org/debian/pool/main/r/redshift/redshift_1.12.orig.tar.xz
dget: retrieving http://deb.debian.org/debian/pool/main/r/redshift/redshift_1.12-2.debian.tar.xz
redshift_1.12-2.dsc: Good signature found
validating redshift_1.12.orig.tar.xz
validating redshift_1.12-2.debian.tar.xz
All files validated successfully.
gpgv: Signature made Tue 02 Oct 2018 12:40:08 PM CEST
gpgv: using RSA key 402543B2D98854007F627D36A63A58A3F2E17569
gpgv: Can't check signature: No public key
dpkg-source: warning: failed to verify signature on ./redshift_1.12-2.dsc
dpkg-source: info: extracting redshift in redshift-1.12
dpkg-source: info: unpacking redshift_1.12.orig.tar.xz
dpkg-source: info: unpacking redshift_1.12-2.debian.tar.xz

At this point, I am out of ideas: where is the warning coming from? And how do I fix it?
dpkg-source: warning: failed to verify signature
TL;DR: this is due to a limitation in Android/termux/proot, but is trivially worked around with a code change in plocate, which is in the 1.1.20 release.

4294967295 (i.e. 0xffffffff) is (uint32_t)-1. This is the value plocate's updatedb uses as a sentinel version number to detect an incompletely written database file. When plocate's updatedb is generating a database, it first writes out a header block using dummy values (including the bad version number above). After writing the rest of the database, it goes back to overwrite the header, this time using the correct values (including the correct version value, presently 1). Since the database output stream is writing to an unlinked file (opened with O_TMPFILE), the file must then be linked at its actual path. The code is roughly:

/* Open database */
fd = open(path.c_str(), O_WRONLY | O_TMPFILE, 0640);
outfp = fdopen(fd, "wb");

/* Write dummy header. */
...

/* Write database. */
...

/* Write real header */
fseek(outfp, 0, SEEK_SET);
fwrite(&hdr, sizeof(hdr), 1, outfp);

/* Link database path */
snprintf(procpath, sizeof(procpath), "/proc/self/fd/%d", fileno(outfp));
linkat(AT_FDCWD, procpath, AT_FDCWD, outfile.c_str(), AT_SYMLINK_FOLLOW);

fclose(outfp);

This is being executed on Android, which does not allow unprivileged hard link creation, so the linkat above would normally fail. However, it is executing under proot, which implements a handler for linkat(..., "/proc/X/fd/Y", ..., AT_SYMLINK_FOLLOW) when, as in this case, the FD is for an unlinked file. The linkat "works" because proot's handler copies the contents of the unlinked file at the time of the linkat call. This may be better than nothing, but any further writes to the original file descriptor do not make it into the file on the filesystem. In the case of updatedb, there are no further writes -- but neither is there an fflush between the final fwrite and the linkat call. The lack of fflush would normally be fine, as the subsequent fclose flushes the output buffers. However, with proot's linkat implementation this pattern leads to data loss: the real header bytes are still sitting in the stdio buffer when the file is copied, so the on-disk database keeps the dummy version number.

There doesn't seem to be a public bug tracker, but I've reported this to the plocate author. Adding an fflush would resolve the issue and be harmless otherwise.

Update: resolved in 994819b.
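A sketch of the one-line fix described above, flushing stdio's buffer so the real header reaches the kernel before proot snapshots the file:

/* Write real header */
fseek(outfp, 0, SEEK_SET);
fwrite(&hdr, sizeof(hdr), 1, outfp);
fflush(outfp);  /* ensure the real header is in the file before linkat() copies it */

/* Link database path */
snprintf(procpath, sizeof(procpath), "/proc/self/fd/%d", fileno(outfp));
linkat(AT_FDCWD, procpath, AT_FDCWD, outfile.c_str(), AT_SYMLINK_FOLLOW);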
No matter how many times I rebuild the plocate db I get:

/var/lib/plocate/plocate.db: has version 4294967295, expected 0 or 1; please rebuild it.

How in the world did I manage this??

/sbin/updatedb.plocate:
        linux-vdso.so.1 (0x0000007f1c7d4000)
        libzstd.so.1 => /lib/aarch64-linux-gnu/libzstd.so.1 (0x0000007f1c6c0000)
        libstdc++.so.6 => /lib/aarch64-linux-gnu/libstdc++.so.6 (0x0000007f1c490000)
        libm.so.6 => /lib/aarch64-linux-gnu/libm.so.6 (0x0000007f1c3f0000)
        libgcc_s.so.1 => /lib/aarch64-linux-gnu/libgcc_s.so.1 (0x0000007f1c3c0000)
        libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6 (0x0000007f1c210000)
        /lib/ld-linux-aarch64.so.1 (0x0000003000000000)

/bin/plocate:
        linux-vdso.so.1 (0x00000077e07be000)
        liburing.so.2 => /lib/aarch64-linux-gnu/liburing.so.2 (0x00000077e0760000)
        libzstd.so.1 => /lib/aarch64-linux-gnu/libzstd.so.1 (0x00000077e0690000)
        libstdc++.so.6 => /lib/aarch64-linux-gnu/libstdc++.so.6 (0x00000077e0460000)
        libgcc_s.so.1 => /lib/aarch64-linux-gnu/libgcc_s.so.1 (0x00000077e0430000)
        libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6 (0x00000077e0280000)
        /lib/ld-linux-aarch64.so.1 (0x0000003000000000)
        libm.so.6 => /lib/aarch64-linux-gnu/libm.so.6 (0x00000077e01e0000)

plocate 1.1.15
Copyright 2020 Steinar H. Gunderson
License GPLv2+: GNU GPL version 2 or later <https://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

updatedb (plocate) 1.1.15
Copyright (C) 2007 Red Hat, Inc. All rights reserved.
This software is distributed under the GPL v.2.
This program is provided with NO WARRANTY, to the extent permitted by law.
/var/lib/plocate/plocate.db: has version 4294967295, expected 0 or 1; please rebuild it
bash_5.0-6ubuntu1.1.dsc is the source package control file; it describes the source package (it’s a text file, you can view it using your favourite text viewer or editor).

bash_5.0.orig.tar.xz contains the upstream source code, i.e. the archive you’d get from the Bash project itself (with no packaging).

bash_5.0-6ubuntu1.1.debian.tar.xz contains the source package’s debian directory, i.e. everything that’s added to create the package (metadata, build information, patches...).

bash-5.0 contains the unpacked source package, i.e. the result of extracting both archives and applying any patches contained in the latter archive.
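That last step is what dpkg-source does under the hood; given only the three downloaded files, you can reproduce the bash-5.0 directory yourself:

$ dpkg-source -x bash_5.0-6ubuntu1.1.dsc

This checks the .dsc, unpacks the .orig and .debian tarballs and applies the packaging patches, leaving the combined tree in bash-5.0/.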
These four files/folders were downloaded after I ran apt-get source bash to get the source code of the package bash in Ubuntu:

- bash-5.0
- bash_5.0-6ubuntu1.1.debian.tar.xz
- bash_5.0-6ubuntu1.1.dsc
- bash_5.0.orig.tar.xz

What are these four files? Which of these is the source code of the package bash?
What is the difference between the three files downloaded on running 'apt-get source {package-name}' ? Which is the actual source code of the package?
Bash patches are cumulative: the source for 4.3 is effectively 4.3.0, the patches are separate, and all of them should be applied in order; each one will bump you up a patch level. Rarely, a complete patched source release is made available from the official site; the last one was 3.2.48. What you are observing is that the required patch (the "-030" suffix indicates a .30 patchlevel) is expecting the earlier patches to have been applied. (This will always be detected with bash patches: since each one patches patchlevel.h, any omission will result in a patch error.)

You can find my instructions for building from source here: https://unix.stackexchange.com/a/157714/31352

Building from source is straightforward, but not to be undertaken lightly. Once you patch your bash you're on your own with regard to vendor support, and it may complicate further administrative tasks (such as patches and upgrades). You are probably better off downloading the Red Hat RPM and transferring that to the server (or, if you really want to build it from source, the SRPM instead). bash has minimal dependencies (notably termcap); you should just need a single package, assuming none of them has been modified.

In any case, you should probably stick with bash-4.1; there are a number of changes which may impact scripts, see the COMPAT file in the source distribution for details. All released versions from 2.05b to 4.3 have patches for "shellshock" (CVE-2014-6271) and related issues.
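A sketch of applying the cumulative series in order (the patch file names follow the GNU FTP naming from the question):

$ cd bash-4.3
$ for p in ../bash43-0??; do patch -p0 < "$p" || break; done
$ ./configure && make

Each bash43-0NN file must succeed before the next is tried; the || break stops the loop at the first failure so you don't end up with a half-patched tree.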
The bash shell in my production box is vulnerable to the 'bashbug' vulnerability: https://securityblog.redhat.com/2014/09/24/bash-specially-crafted-environment-variables-code-injection-attack/

The version installed is:

$ bash --version
GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)

I am not able to use YUM to install the latest package because our server is not connected to the Internet, so I am trying to install bash from source code. I downloaded bash 4.3 and installed it from source code. Since this version is still vulnerable to the bash bug, I need to apply the latest patch for this version. For this I downloaded the latest patch for bash from the following site: http://ftp.gnu.org/gnu/bash/bash-4.3-patches/

I am applying the bash43-030 patch from the above link: http://ftp.gnu.org/gnu/bash/bash-4.3-patches/bash43-030

The issue I am facing is that applying the patch fails with the following error:

[bash-4.3]$ patch -p0 < bash-patch
patching file builtins/evalstring.c
Hunk #1 FAILED at 309.
Hunk #2 FAILED at 379.
2 out of 2 hunks FAILED -- saving rejects to file builtins/evalstring.c.rej
patching file parse.y
Hunk #1 succeeded at 2574 (offset 35 lines).
Hunk #2 FAILED at 4038.
1 out of 2 hunks FAILED -- saving rejects to file parse.y.rej
patching file shell.h
Hunk #1 succeeded at 181 with fuzz 2.
patching file y.tab.c
Hunk #1 FAILED at 169.
Hunk #2 FAILED at 498.
Hunk #3 FAILED at 2099.
Hunk #4 FAILED at 2113.
...
Hunk #98 FAILED at 6350.
97 out of 98 hunks FAILED -- saving rejects to file y.tab.c.rej
patching file patchlevel.h
Hunk #1 FAILED at 26.
1 out of 1 hunk FAILED -- saving rejects to file patchlevel.h.rej

Please suggest how to resolve the issue. Maybe my approach to applying the patch is wrong.
Applying patch for bash failing
All the GPL requires is that they make the source code available. (And not everything in Linux is GPL.) They don't have to make it available in any convenient format. I'm guessing their internal revision control servers are private for reasons of their business model.

Downloading SRPMs is likely your best bet. Try something like zypper si foo or dnf download --source foo to get the source packages for the component(s) you are interested in.
I'm having a critical support issue with SLES that doesn't make any measurable progress (for months now). So I wanted to have a look at the source code myself; maybe I can spot the issue. (It seems a fatal bug was added between SLES15 SP2 and SP3 in the Xen Hypervisor that causes frequent server crashes due to RAM corruption) As I see it you can download DVD images that should contain the source code, but those are as old as the media are. Meaning: You don't have the source for the current patches. Isn't there a public Git repository where I could inspect the changes being made from release to release or from patch to patch? I don't want to download ISO images, unpack them, download more RPM source packages and unpack them, etc. just to see the changes. I see that the business model is somewhat against that, but from a support perspective that is vital.
Where can I get the current source code for SUSE SLES products?
I'd recommend always preferring the native package manager for anything that needs to link against system libraries, or anything that other things on the target system might link against. Otherwise, you'll be going through the effort of making things work with distro quirks for every single platform you're targeting, and it will lead to people having problems if they try to use your version vs the version that a package they've installed requires.

As @ArtemS.Tashkinov says, you need to deal with each target individually. That really does not scale: the GNU Radio community has a tool called PyBOMBS, which tried to do this for but a handful of platforms. It's a maintenance nightmare. We should have gone with existing prefix managers like conda from the start¹ for those use cases where users actually wanted an isolated prefix, and put more effort into coordination with the official distro package maintainers to make the software we shipped installable via apt, yum, pacman, emerge, zypper,... and actually work flawlessly out of the box.

And specifically regarding your software: under no circumstances should you install a hand-compiled zsh to your system path. That's a recipe for nightmarish incompatibilities. Just install the zsh your distro brings. A Linux distribution is a software conflict avoidance mechanism that you should use wherever possible. Building things from source and installing them, when a user might reasonably install them via their package manager, leads to conflicts, and thus breaks your users' systems.

¹ This is my private opinion on it; PyBOMBS1 and PyBOMBS2 still enabled a great deal of great applications! It's definitely cool to have it. It's just been an immense effort, duplicating what larger communities are already doing.
I'd like to create a script that is able to install my desired set of packages in any Linux distro (initially create it to run in Ubuntu and later expand it to whichever distros I like) and basically function as an environment setup. Examples of such packages would be zsh, omz, fzf, autocomplete etc. My question is this: should I rely on the existence of a package manager (apt-get) in order to install the packages and just call that through the script, or manually wget the source files and proceed with the installation? Which do you think is best?
Rely on the existence of a package manager or install package via downloading the source?
First, you need to understand that PS1 is usually a shell variable, not an environment variable, which means it isn't inherited by children. So unless you explicitly ran export PS1=... and made PS1 an environment variable, every new bash process will get its PS1 (and other shell variables) from the rc files and not from the parent. So you first need to find out where exactly your PS1 is defined.

You can verify that by exporting your PS1:

export PS1

And then run your ./bash. You'll see that in that case, PS1 will be inherited by the new shell.

So why doesn't your compiled bash get the PS1 shell variable you expect from the rc files? Here's my guess: on many systems, PS1 is defined in /etc/bash.bashrc. But not all bash versions read this file; it depends on how bash was compiled. A good rule of thumb is checking your bash man pages. For instance, for Ubuntu you'll see:

When an interactive shell that is not a login shell is started, bash reads and executes commands from /etc/bash.bashrc and ~/.bashrc, if these files exist. This may be inhibited by using the --norc option. The --rcfile file option will force bash to read and execute commands from file instead of /etc/bash.bashrc and ~/.bashrc.

However, /etc/bash.bashrc is not even mentioned in the man page of the bash you built from that Git repository:

When an interactive shell that is not a login shell is started, bash reads and executes commands from ~/.bashrc, if that file exists. This may be inhibited by using the --norc option. The --rcfile file option will force bash to read and execute commands from file instead of ~/.bashrc.

Additionally, in the README file of bash from Debian, you'll see:

5. What is /etc/bash.bashrc? It doesn't seem to be documented. The Debian version of bash is compiled with a special option (-DSYS_BASHRC) that makes bash read /etc/bash.bashrc before ~/.bashrc

So in the git repository you've used, you'll see in config-top.h that the line that defines SYS_BASHRC is commented out by default:

/* System-wide .bashrc file for interactive shells. */
/* #define SYS_BASHRC "/etc/bash.bashrc" */

So your bash as it's built by default won't read /etc/bash.bashrc on startup, and if your PS1 is defined there, it won't get it. If you remove the comment from this line:

#define SYS_BASHRC "/etc/bash.bashrc"

and run make again, my guess is that your new bash will show the PS1 variable you expect.
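One quick way to check which system rc path (if any) a given bash binary was compiled with, without reading its source, is to look for the string embedded in the binary (this assumes SYS_BASHRC, when set, ends up as a literal path in the executable):

$ strings /bin/bash | grep bash.bashrc
/etc/bash.bashrc
$ strings ./bash | grep bash.bashrc
$                                       # no output: built without SYS_BASHRC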
I ran

git clone https://git.savannah.gnu.org/git/bash.git
cd bash/
./configure
make
./bash

I noticed that the newly launched Bash instance did not inherit the environment, specifically the PS1 variable that defines the shell prompt, from the parent shell. The inheritance works for /bin/bash. The list of sourced files is the same for /bin/bash and ./bash:

./bash -lixc exit 2>&1 | sed -n 's/^+* \(source\|\.\) //p'
/bin/bash -lixc exit 2>&1 | sed -n 's/^+* \(source\|\.\) //p'

Edit: As aviro mentioned, PS1 was defined without export, so when I tried exporting it, it got inherited; my initial question was wrong. On my machine PS1 is defined in two files.

/etc/bash/bashrc:

# If not running interactively, don't do anything
[[ $- != *i* ]] && return
[[ $DISPLAY ]] && shopt -s checkwinsize
PS1='[\u@\h \W]\$ '

And /etc/bash/bashrc.d/artix.bashrc:

if ${use_color} ; then
    if [[ ${EUID} == 0 ]] ; then
        PS1='\[\033[01;31m\][\h\[\033[01;36m\] \W\[\033[01;31m\]]\$\[\033[00m\] '
    else
        PS1='\[\033[01;36m\][\u@\h\[\033[01;37m\] \W\[\033[01;36m\]]\$\[\033[00m\] '
    fi
else
    if [[ ${EUID} == 0 ]] ; then
        # show root@ when we don't have colors
        PS1='\u@\h \W \$ '
    else
        PS1='\u@\h \w \$ '
    fi
fi

When I run ./bash the PS1 is \s-\v\$ and I have no idea why. The command listing all sourced files shows that both of these files should be sourced when run with ./bash, but for some reason they aren't, or the shell starts in a different type/mode. Why?
Why does Bash behave differently when compiled from source code?
/usr/local/src is the local equivalent of /usr/src, which the FHS describes as:

Source code may be placed in this subdirectory, only for reference purposes.

Neither /usr/src nor /usr/local/src is intended as a working directory, especially not for a specific user. All your data is supposed to live under your home directory, and the FHS doesn’t have much to say about what you do there. I think the appropriate solution in your case is to configure Timeshift so that it backs your source code up.
I'm a software developer and only in the last year or so have I been using Linux Mint over Windows as my working dev environment. For the last year I figured I'd do things my way and adapt as I learn the proper way to do things. So far so good. Relevant to this question, I've been putting my source code to my work and personal projects in a folder I created in my home directory: ~/src In the last week I updated to Mint 19 and Timeshift is a front and center feature in it. I've set up my backups but noticed that my encrypted home directory is marked as an exclusion from my backups. I do use remote git repos to store my source but I would also feel a lot better if my Timeshift backups had my code as well. I could add an inclusive filter to Timeshift but I think it's time I look into where my source SHOULD be stored. A little research produced the very helpful Filesystem Hierarchy Standard (pdf) and it brought /usr/local/src to my attention but doesn't go into detail about it. I know this question seems a little subjective but Linux has always seemed to be a developer friendly OS which makes me believe there's an idiomatic place to keep my personal source code separate from source I download/make/install. Where does Linux intend I place my source? Or should I just add an inclusive filter to Timeshift to backup ~/src?
Where is the idiomatic place to put source code for my projects? [closed]
To protect your data you have to have a key, of any sort, which you can guard and control physically, kept separately from your secret data. A password in your brain is an example of such a solution: it is physically guarded by its owner, by memorizing it. The "password" might also be some visual token for your robot, like a piece of paper with a barcode which decrypts your software and allows it to run. That may be worth considering, as the same piece of paper might be used to shut your creation down, with the help of opencv and some scripts of course.

Less practical, but easier to implement, is to have two SD cards: one contains the key and is used to boot the system, which waits until you swap the SD card and then continues booting. You should look into how an initramfs is used to boot the whole system: learn how to create a custom initramfs, and modify its init script to wait for, decrypt and mount your second card with the main system. The scheme is not perfect, but it is dirty and cheap. It also does not have to be an initramfs solution; you might use something like copying your executable code to a ramdisk from a USB drive, which you insert just to launch your creation and remove once it comes alive.

When you have the secret and the key on one medium (however smart the trick, hidden somewhere, etc.), it can be reverse-engineered. For me, the initramfs solution needs the least effort to make; there is plenty of information on how to build custom initramfs images.
I want to build a robot using a mini-computer like the Raspberry Pi, running a Linux OS. I think those boards (RPi, NanoPi, etc.) boot the OS from an external SD card, and my code will go there (on the SD card). So, how can I protect my code from someone who wants to copy it? By my code, I mean my programs written using CMUSphinx, OpenCV, etc. Is there any way to tie each SD card to one specific board? I mean, SD card no. 1 only runs on board no. 1. Or maybe put a hidden code on boards and SD cards that uses one algorithm and should match in order to run?
How can I protect my robot code?
Depends a lot on the source, control over the source, desired result etc. In its simplest form:

sed -n '/^int main(/,/^}/p' file.c

That would print everything between a line starting with int main( and the next line starting with }, inclusive.

If you need to expand macros you could use the C preprocessor cpp, then run the result through indent and finally extract with sed. indent could in any case be useful to make sure the code is well formatted etc. Example:

1. cpp expands macros. (Could be cumbersome on a big code base.)
2. indent formats the code for matching (indentation). The -bls option makes sure int and main( are on the same line.
3. sed extracts the main part.

cpp file.c | indent --linux-style --standard-output -bls | sed -n '/^int main(/,/^}/p'

Or to mainly make sure indentation is OK:

indent --linux-style --standard-output -bls file.c | sed -n '/^int main(/,/^}/p'

Optionally add -fc1 to make block comment content not start in the first column. (E.g. if a block comment has a line starting with int main(.)

Again: it all depends on how much control you have over the input, how much change you want, do not want, need etc.

Example:

#include <stdio.h>

#define ANSWER(q, s) (q |= (s))
#define WHAT for

void foo(int a)
{
	printf("%x\n", a | 2);
}

int
main(
	int argc,
	char *argv[])
{
	/* Some
	 * comment
	 * */
	int i, k = 40;
	ANSWER(k, 2);
	// Another comment
	WHAT (i = 0; i < 3; ++i) {
		foo(i);
	}
	return 0;
}

/*int main(void)
{
	return 0;
}*/

Result (using cpp, indent, sed):

int main(int argc, char *argv[])
{
	int i, k = 40;
	(k |= (2));
	for (i = 0; i < 3; ++i) {
		foo(i);
	}
	return 0;
}

If you need to find the line where main starts, or any function for that matter, one option is also ctags, i.e.:

$ ctags -x --c-kinds=f test.c | awk '$1 == "main"'

Have seen both c-kinds and c-types and both work, but kinds is the only one I have in the manual. Perhaps something to do with ctags vs exuberant ctags.
For example, cat foo.c would print the whole file, cat foo.c | grep main will print the line where the main function is defined. So how would I print the entire main function? (I am on Ubuntu)
How would I print only the main function from a C source file?
The link provided does indeed point to a package repository, but that package repository also includes all the source code for the packages. All the tarballs contain source code: in the example you show, the various .orig.tar.xz files contain the upstream source, and the .debian.tar.xz files contain the distribution-specific patches applied to them. PureOS seems to use the Calamares installer framework; this is where the “Boot loader location” selection lives. You’ll find the source code for the PureOS version in the relevant repository, and all the PureOS core source code just above.
I have been playing with PureOS installation options (the ultimate goal is to install it dual-boot with macOS on a MacBook Air, but this is out of the scope of this question). What I would like to see is how this "Boot loader location: Boot Partition (/boot)" option actually works behind the scenes (in particular, how it makes the Mac's bootloader know where the /boot partition is; by "blessing"?).

I thought that digging through the source code of that installer should shed some light on what it does. I therefore went to see what is under the "Source Code" link on their website, but it looks like a packages repo rather than source code.

So, the question is: where is the source code of the PureOS bootloader installer? Am I just missing something obvious, or is it not actually available?
Locating source code of the PureOS installer
I expect that this is merely a transient mirroring problem. Try sudo dnf --refresh upgrade kernel-devel(Or possibly just a general sudo dnf --refresh upgrade.)
I just stumbled across a problem where the installed kernel sources don't match the kernel that I'm actually running. I'm running 4.11.7-300.fc26.x86_64:

[root@localhost VirtualBoxGuestAdditions]# uname -r
4.11.7-300.fc26.x86_64

But the latest kernel sources don't seem to have the same version:

[root@localhost VirtualBoxGuestAdditions]# yum install kernel-devel
Last metadata expiration check: 1:30:50 ago on Wed 28 Jun 2017 04:11:01 PM CEST.
Package kernel-devel-4.11.6-301.fc26.x86_64 is already installed, skipping.
Dependencies resolved.
Nothing to do.
Complete!

And looking at /usr/src/kernels/, sure enough, I only have the old sources:

[root@localhost VirtualBoxGuestAdditions]# ls -la /usr/src/kernels/
total 12
drwxr-xr-x.  3 root root 4096 Jun 28 16:22 .
drwxr-xr-x.  4 root root 4096 Jun 28 16:50 ..
drwxr-xr-x. 23 root root 4096 Jun 28 16:22 4.11.6-301.fc26.x86_64

So I tried to specify the version manually, but without success:

[root@localhost VirtualBoxGuestAdditions]# yum install kernel-devel-4.11.7-300.fc26
Last metadata expiration check: 1:27:40 ago on Wed 28 Jun 2017 04:11:01 PM CEST.
No package kernel-devel-4.11.7-300.fc26 available.
Error: Unable to find a match

Is this normal? What am I supposed to do now?
Latest kernel sources not available for installation? (Fedora 26 Beta)
The best way that I've seen is by adding the -l flag to silence as follows: sox in.wav out6.wav silence -l 1 0.1 1% -1 2.0 1%I've copied this command from Example 6 of this very useful blog post called The Sox of Silence
Currently we're using this command within a shell script to remove silence from audio files: ffmpeg -i $INFILE -af silenceremove=0:0:0:-1:1:${NOISE_TOLERANCE}dB -ac 1 $SILENCED_FILE -yThis works fine except that it removes all the silence, causing the remaining audio to be squeezed together. How can this be done while leaving two or three seconds between each piece of audio? The solution needs to be very efficient as we'll be processing a lot of audio and should use a tool that can be fairly easily installed on both Linux and OSX, such as ffmpeg or sox.
Remove silence from audio files while leaving gaps
Remove the negative sign from your original command: rec /tmp/recording.flac rate 32k silence 1 0.1 3% 1 3.0 3%When the "below count" is negative, the silence command will trim all silences from the middle of the file. When it's positive, it trims silence from the end of the file.
I'm writing a script that uses sox to record me talking. Now I need sox to wait until it detects sound before it begins recording, and I do have that figured out. But I also need sox to exit once there has been silence for at least 3 seconds. As it is now, I have to manually kill sox once I finish talking, otherwise sox just waits again until I talk some more, appending to the output file (That's not what I want). Here is the command for recording I am using now: rec /tmp/recording.flac rate 32k silence 1 0.1 3% -1 3.0 3%Again, just to be clear, Sox should wait until I start talking, and then record until I stop talking, then the sox program should quit.
End sox recording once silence is detected
sox disturbence.wav -r 16000 -c 1 -b 16 disturbence_16000_mono_16bit.wav

gives, within one command:

Sample rate of 16 kHz (-r 16000), one channel (mono) (-c 1), 16-bit depth (-b 16).
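To confirm the output has the required properties, soxi (shipped with sox) prints the header info; abbreviated output shown, exact fields vary by sox version:

$ soxi disturbence_16000_mono_16bit.wav
Channels       : 1
Sample Rate    : 16000
Precision      : 16-bit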
I have a test.wav file. I need to use this file to process an application, with the following properties:

mono channel
16 kHz sample rate
16-bit

Now, I'm using the following commands to attain these properties:

sox disturbence.wav -r 16000 disturbence_16000.wav
sox disturbence_16000.wav -c 1 disturbence_1600_mono.wav
sox disturbence_1600_mono.wav -s -b 16 disturbence_1600_mono_16bit.wav

Here, to get a single file, three steps are involved and two temporary files are created. It is a time-consuming process. I thought of writing a script to do these steps, but I'm keeping that as a last option. Can I convert a .wav file to the required format in a single command?
Sox: Convert a .wav file with required properties in a single command
It sounds like you're running into a problem with spaces in the filename. If you have a file named "My Greatest Hits.mp3", your command will try to convert the three different files named "My", "Greatest", and "Hits.mp3". Instead of using the "$()" syntax, just use "*.mp3" in the for line, and make sure to quote the file names in the sox command.

In addition, the basename command doesn't remove the file extension, just any folder names. So this command will create a bunch of WAV files with a ".mp3" extension. Adding "-s .mp3" to the command tells basename to strip the extension, and then putting ".wav" on the end adds the correct extension.

Put it all together, and you have this:

for i in *.mp3
do
    sox "$i" "waves/$(basename -s .mp3 "$i").wav"
done
For a single .mp3, I can convert it to wav using

sox ./input/filename.mp3 ./output/filename.wav

I tried:

#!/bin/bash
for i in $(ls *mp3)
do
    sox -t wav $i waves/$(basename $i)
done

But it throws the following error:

sox FAIL formats: can't open input file `filename.mp3': WAVE: RIFF header not found

How would I run this sox conversion over all mp3 files in the input folder and save the generated wavs to the output folder?

PS: I don't know why it shows the file enclosed between a back quote ( ` ) and an apostrophe ( ' ): `filename.mp3'. I played all the mp3s and they work perfectly fine.
batch convert mp3 files to wav using sox
It can vary, but at least for me, text2wave produces 1-channel, 16-bit, signed integer PCM. These are fairly normal parameters, and it'll be very clear when you have them right (e.g., if you used unsigned integers by mistake, you'd get extremely distorted sound).

With play, that looks like:

play -r 16000 -b 16 -c 1 -e signed-integer /tmp/foo.raw
play -r 16000 -2 -s -c 1 /tmp/foo.raw    # obsolete way for older versions of SoX

These parameters are configured in Festival somewhere, I suspect. Some of them may be hardcoded as well. The only architecture-dependent thing you may encounter is big vs. little endian; on my little-endian machine, Festival is writing little-endian; if I moved that file to a big-endian machine, I'd likely need to add -L. If text2wave were run on a big-endian machine, I'm not sure if it'd write big- or little-endian data.
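Once the parameters are known, you can also bake them into a header so other tools stop asking; sox reads raw input when told the format explicitly:

sox -t raw -r 16000 -b 16 -c 1 -e signed-integer test.raw test.wav

After that, soxi test.wav (or play test.wav) works without any extra flags, since the WAV header now carries the rate and encoding.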
I have done this:

me@riverbrain:~/sgf$ echo "test" | text2wave -otype raw -F 16000 >> test.raw

which produced a headerless audio file. The wonderful thing about this file is that it can be concatenated (using cat, like text) with another raw audio file. Of course, I've got a problem. The problem is that I can't play it yet.

me@riverbrain:~/sgf$ play test.raw
play FAIL formats: bad input format for file `test.raw': sampling rate was not specified

and also, when specifying the sample rate:

me@riverbrain:~/sgf$ play -r 16000 test.raw
play FAIL formats: bad input format for file `test.raw': data encoding was not specified

When I looked up some information on 'encoding' I got the feeling that it had a lot to do with your processor architecture, but maybe I'm wrong. Anyway, I can't find any documentation about how to 'ask' the computer what the data encoding of a raw audio file is. I do know what the sample rate is, since I set it myself, but that's as far as I'm able to get.
What and how is the encoding of a raw (headerless) audio file?
You can preserve all the silences in the split parts with some small changes. Starting with your original command:

silence 1 0.5 0.1% 1 0.5 0.1%

The first triplet of values means: remove silence, if any, at the start, until there is 0.5 seconds of sound above 0.1%. The second triplet means: stop when there is at least 0.5 seconds of silence below 0.1%. The rest of your command, : newfile : restart, then starts a new output file and begins again to look for sound at the start. So the first file ends when the silence begins, and the second file will start when the silence ends. The simplest option available to improve this is silence -l. It will preserve the 0.5 seconds of silence that triggered the end of file. Unfortunately, any longer silence will still be removed, because it is treated as the start of the next file. An easy way to keep a longer gap is to combine -l with a longer detection time, e.g. 2 seconds:

silence -l 1 0.5 0.1% 1 2.0 0.1%

You will now only split if there is at least 2 seconds of silence, but you will preserve the first 2 seconds of the gap. To avoid losing any silence at all, simply remove the detection of silence at the start. You need to replace the first triplet by a single 0:

silence -l 0 1 2.0 0.1%

If you want to play with simple sound files to see how sox handles situations, you can easily create 2 sound files, one consisting of 1 second of a tone, and one consisting of 1 second of silence, then join them together as you wish before presenting the result as input to the silence effect. For example, create:

sox -n gap.wav trim 0 1
sox -n tone.wav synth 1.001t sine C5

then join gap-tone-gap-tone and create out.wav using your effect, and listen to the result:

sox gap.wav tone.wav gap.wav tone.wav out.wav silence 1 0.5 0.1%
play out.wav
I've got multiple audiobooks that are stored in large mp3s. And I'm trying to split these large mp3s into multiple smaller files. I've found a tool that can detect silence in audio files and split audio files based on this "delimiter". Here is an example:

sox -V3 audiobook.mp3 audiobook_part_.mp3 \
    silence 1 0.5 0.1% 1 0.5 0.1% : newfile : restart

This will basically split audiobook.mp3 into audiobook_part_001.mp3, audiobook_part_002.mp3, ... where silence >= 0.5 seconds. Now the problem is that this command not only splits the file but it also removes the silence. Therefore when you play the new files in a playlist the tracks/paragraphs sound squeezed together. So how do you tell sox to only split the file but to keep the silence (at the end of each track)?
sox: Split audio on silence but keep silence
I found quite a few programs recommended here:

Audacity
MPlayer
Rubberband
Play It Slowly
Ardour
LMMS
MuSE
Rosegarden
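SoX itself can also do this. A minimal sketch, assuming a standard SoX build (passage.mp3 is a placeholder file name): the tempo effect changes playback speed while preserving pitch, while speed changes both.

play passage.mp3 tempo 0.5    # half speed, pitch preserved
play passage.mp3 speed 1.5    # 1.5x speed, pitch rises accordingly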
How can you play a sound slower or faster? That would be useful for listening carefully to one audio passage, or for fast-forwarding to find a particular passage. Is there something with the play sox command that would do this? Alternative simple solutions are also welcome.
Play a sound file slower or faster
The -t option needs to come before the filename it applies to. Also, -t pulse means to read directly from (or write to) the PulseAudio daemon; it's not a file format as such. The type name for raw audio is raw. Try this:

parec ... | sox -t raw -b 16 -e signed -c 2 -r 44100 - hmm.ogg ...

(where ... means to keep the same arguments you had before). soxi can't identify the file type because all it does is look at the header; raw audio doesn't have a header for it to look at.
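Putting the asker's original arguments back into that skeleton, the full pipeline would presumably look like this (an untested sketch; the device name and silence parameters are taken verbatim from the question):

parec -d telephonControl.monitor | sox -t raw -b 16 -e signed -c 2 -r 44100 - hmm.ogg silence 1 0.50 0.1% 1 2.0 0.1% : newfile : restart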
sox is probably the one Linux program that continues to frustrate me. At the same time, I am awed by what it can do, and I'd like to get close to being fluent in it, if not mastering it. Today, I've spent about 2 hours trying to get sox to read bytes from parec via a pipe. The parec bytes are a pulseaudio "sink". In order to get them flowing through the pipe, I used this answer from askubuntu. This is the command I've been using:

$ parec -d telephonControl.monitor | sox -b 16 -e signed -c 2 -r 44100 - -t pulse hmm.ogg silence 1 0.50 0.1% 1 2.0 0.1% : newfile : restart

and this is the error I get:

sox FAIL formats: can't determine type of `-'
write() failed: Broken pipe

What's more, oggenc parses them just fine:

parec -d telephonControl.monitor | oggenc -b 192 -o telephonControl.ogg --raw -
Encoding standard input to "telephonControl.ogg" at approximate bitrate 192 kbps (VBR encoding enabled)

I have absolutely no idea how to make sox digest those bytes.

$ parec -d telephonControl.monitor >> somebytes
$ soxi somebytes
soxi FAIL formats: can't determine type of file `somebytes'

But I do know that they are raw audio, 16-bit signed little-endian, 2 channels, 44100 Hz:

$ pacmd
>>> list-sink-inputs
1 sink input(s) available.
    index: 17
    driver: <protocol-native.c>
    flags:
    state: RUNNING
    sink: 2 <telephonControl>
    volume: 0: 100% 1: 100%
            0: 0.00 dB 1: 0.00 dB
            balance 0.00
    muted: no
    current latency: 92.86 ms
    requested latency: 23.20 ms
    sample spec: s16le 2ch 44100Hz
    channel map: front-left,front-right
                 Stereo
    resample method: (null)
    module: 7
    client: 53 <ALSA plug-in>
    properties:
        media.name = "ALSA Playback"
        application.name = "ALSA plug-in"
        native-protocol.peer = "UNIX socket client"
        native-protocol.version = "26"
        application.process.id = "3609"
        application.process.user = "alec"
        application.process.host = "ROOROO"
        window.x11.display = ":0"
        application.language = "en_GB.UTF-8"
        application.process.machine_id = "eec7c6ae60f90bb3921ad16d0000302d"
        application.process.session_id = "eec7c6ae60f90bb3921ad16d0000302d-1345384044.64188-1149507345"
        module-stream-restore.id = "sink-input-by-application-name:ALSA plug-in"
pipe the output of parec to sox
You need to declare the type of the sox output by adding -t wav before the second -. When the output is a file name, sox peeks at the name and deduces the type from there, but when it's stdout, the type needs to be declared. You might also want to declare all the other settings as well (-b 16 -e signed -c 1) rather than assuming they are carried over from the input; all of these go before the final - that nominates the output.
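Applied to the pipeline from the question, that would look something like this (a sketch, untested):

speexdec toto.oga - | sox -V -t raw -b 16 -e signed -c 1 -r 44.1k - -t wav -r 16k - | cat - > toto.wav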
I'm trying to put the sox utility in a two-pipe command to resample a mono 44.1 kHz audio file to a 16 kHz audio file. It works fine with a single pipe:

$ speexdec toto.oga - | sox -V -t raw -b 16 -e signed -c 1 -r 44.1k - -r 16k toto.wav

But when I add another pipe, the sox utility complains:

$ speexdec toto.oga - | sox -V -t raw -b 16 -e signed -c 1 -r 44.1k - -r 16k - | cat - > toto.wav
sox FAIL formats: can't determine type of `-'

Any idea?
sox in between two pipes to resample a voice audio
SoX wants/needs input and output: by typing play xxxx in the console, you're running it normally, with stdin, stdout (and stderr) all connected to the terminal. When you background the job (with &), it starts, then is paused by the system as soon as it tries to read from stdin, since a background job is not allowed access to the terminal. The same thing occurs when you nohup a job: if it needs keyboard input, it'll block, and get paused by the system until it receives access to stdin. disown'ing a process effectively cuts it off from the stdin and stdout that were connected to the console which started the process. It's still "running", but is blocked (paused) by the system while it waits for access to stdin and stdout.
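A common workaround (a sketch, not part of the original answer) is to detach stdin explicitly so that play never tries to read from the terminal:

play mylist.m3u < /dev/null > /dev/null 2>&1 &

With nothing left to read from the terminal, the job should keep running in the background and can then be disowned as usual.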
I can run this command:

$ play mylist.m3u

And music plays. I can then press Ctrl-Z to suspend the job, and issue bg to have it run in the background. However, if I then run disown and exit, the music stops playing, even though the play command still shows up in ps. I would expect the music to keep playing. Also interesting: if I run the command

$ play mylist.m3u &

music does not play; the job shows as stopped. I can also run the command

$ nohup play mylist.m3u &

and no music plays - the job immediately stops. However,

$ nohup play mylist.m3u

does have music play, but I can't disown it, as before. It seems like all these are related. Most programs behave well when disowned or run through nohup, but not SoX. Does anyone know why?
Why do 'nohup' and 'disown' not work on SoX (invoked as 'play')
(Answer based on various comments, as this method seems to be acceptable, and comments are not guaranteed to stay.) Look at the first recording ("10 secs of silence") in an audio editor, e.g. audacity. You'll see a DC (very low frequency) component where the level goes from 1 at 0 s to -1 at 1 s to 0.5 at 1.5 s, and then falls down to near zero towards the end. Did you plug in the mic during that time? If yes, you need to wait ca. 10 seconds for the amplitude to settle, then measure. If not, you need to filter out the DC (direct current, that is, constant voltage offset) component somehow. sox has several filters you can try, and you can use them from a shell script without problems. Try e.g. highpass 100, which filters out most of it except for the initial jump. If filtering out DC components is too much effort, you can also ignore the initial part, and use the remaining part as it is.
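As a concrete sketch of that suggestion, the filter is simply inserted into the effect chain before stat (file name taken from the question):

sox /tmp/test-mic-silence.wav -n highpass 100 stat

The Maximum amplitude reported should then reflect the actual signal rather than the DC offset.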
I've found a solution that doesn't work for me: audio - Monitoring the microphone level with a command line tool in Linux - Super User https://superuser.com/questions/306701/monitoring-the-microphone-level-with-a-command-line-tool-in-linux The problem is that they are using Maximum amplitude to detect sound. However, its value is always the same in my case, no matter whether the recorded audio contains only silence or some sounds. For example, 10 sec of silence (can be downloaded here: http://denis-aristov.ucoz.com/en/test-mic-silence.wav ):

$ arecord -f S16_LE -D hw:2,0 -d 10 /tmp/test-mic-silence.wav
$ sox -t .wav /tmp/test-mic-silence.wav -n stat
Samples read:             80000
Length (seconds):     10.000000
Scaled by:         2147483647.0
Maximum amplitude:     0.999969
Minimum amplitude:    -1.000000
Midline amplitude:    -0.000015
Mean    norm:          0.202792
Mean    amplitude:     0.009146
RMS     amplitude:     0.349978
Maximum delta:         0.913849
Minimum delta:         0.000000
Mean    delta:         0.001061
RMS     delta:         0.005564
Rough   frequency:           20
Volume adjustment:        1.000

10 sec with some sounds (can be downloaded here: http://denis-aristov.ucoz.com/en/test-mic-sounds.wav ):

$ arecord -f S16_LE -D hw:2,0 -d 10 /tmp/test-mic-sounds.wav
$ sox -t .wav /tmp/test-mic-sounds.wav -n stat
Samples read:             80000
Length (seconds):     10.000000
Scaled by:         2147483647.0
Maximum amplitude:     0.999969
Minimum amplitude:    -1.000000
Midline amplitude:    -0.000015
Mean    norm:          0.185012
Mean    amplitude:     0.010225
RMS     amplitude:     0.334286
Maximum delta:         1.999969
Minimum delta:         0.000000
Mean    delta:         0.006213
RMS     delta:         0.057844
Rough   frequency:          220
Volume adjustment:        1.000

What is the difference? What values should I use for sound detection? Or do I have to set something up because something works wrong? I've just used another computer (a notebook with a built-in microphone). I recorded two WMA files (with and without sounds) using the Windows "Sound Recorder", converted them to WAV files using audacity, and got the following outputs. The maximum amplitudes differ this time. With sounds:

$ sox -t .wav /tmp/mic-sounds.wav -n stat
Samples read:            581632
Length (seconds):      6.594467
Scaled by:         2147483647.0
Maximum amplitude:     0.999969
Minimum amplitude:    -1.000000
Midline amplitude:    -0.000015
Mean    norm:          0.013987
Mean    amplitude:     0.000062
RMS     amplitude:     0.065573
Maximum delta:         1.999969
Minimum delta:         0.000000
Mean    delta:         0.011242
RMS     delta:         0.047009
Rough   frequency:         5031
Volume adjustment:        1.000

Without sounds:

$ sox -t .wav /tmp/mic-silence.wav -n stat
Samples read:            372736
Length (seconds):      4.226032
Scaled by:         2147483647.0
Maximum amplitude:     0.029022
Minimum amplitude:    -0.029114
Midline amplitude:    -0.000046
Mean    norm:          0.005082
Mean    amplitude:    -0.000053
RMS     amplitude:     0.006480
Maximum delta:         0.030487
Minimum delta:         0.000000
Mean    delta:         0.005815
RMS     delta:         0.007285
Rough   frequency:         7891
Volume adjustment:       34.348

May this be an indication that there are some problems with the microphone on the other computer?
How to monitor microphone volume level?
If the background noise has some repetitive structure to it, you can remove it with the noiseprof and noisered effects of sox; see e.g. this script. Relevant bits repeated for convenience:

# Create background noise profile from mp3
/usr/bin/sox noise.mp3 -n noiseprof noise.prof

# Remove noise from mp3 using profile
/usr/bin/sox input.mp3 output.mp3 noisered noise.prof 0.21

Audacity currently has experimental scripting support.
Is it possible to convert an mp4 file to mp3 or flac, clearing background noises in the process? Or is it possible to run audacity entirely through the shell, with no GUI?
How to clear background noises with sox
You can use sort -R to reorder the file list into "random" order. The command could be the following:

find ~/Music -type f | sort -R | xargs -I + play +

Here find ~/Music -type f produces a list of all files in the Music subtree, recursively. The resulting pathname list is then "sorted" into a random order by sort -R and passed to play. Note the use of + as the "replace string": with -I, xargs invokes a separate play for each music file. (Edit: as per Warren's comment below, I've now removed the useless but harmless single quotes for the second +.)
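If your system has GNU coreutils, shuf is an equivalent and arguably more explicit way to do the shuffling; a sketch:

find ~/Music -type f | shuf | xargs -I + play +

As with sort -R, this breaks if pathnames contain newlines, but it copes fine with spaces, because -I makes xargs split its input on newlines only.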
Is there any way to shuffle songs when using play on a folder with SoX?

play ~/Music/*/**
Shuffle Using SoX?
Answer: @derobert pointed out that the sox and play commands are part of the same package but do different things. The 3600 below is the duration in seconds.

sox -n note.mp3 synth 3600 sin 347

The above will generate an hour-long tone and save it to note.mp3 without playing it, which is exactly what was asked.

play -n synth 3600 sin 347

The above will play the tone for an hour without saving it (play is essentially sox with the output fixed to the sound device). Thanks to @derobert, I should have tried this first.
I know how to play a tone for a specific amount of time using SoX:

play -n synth 5 sin 347

I know how to save a tone using SoX:

sox -n note.mp3 synth 5 sin 347

The question is: how can I save a longer tone (hours) without the sound actually playing, and without having to wait hours for the file to generate?
Generating a long sound file with SOX without actually playing the tone
I think sox needs to seek its input if it is to determine the input format from the file's header, and that's incompatible with a pipe. I think ffmpeg can do all you want, though I'm not completely sure; I'm unfamiliar with it and the documentation is clear as mud.

ffmpeg -i "$input" -compression_level 9 -ac 2 -ar 44100 output.flac

(Note -ar sets the audio sample rate; -ab would set a bitrate instead.) Alternatively, mencoder should be able to do a similar job.

mencoder "$input" -oac lavc -lavcopts=acodec=flac:abitrate=44.1:o=compression_level=9 -af channels=2 output.flac
I'm processing a variety of audio files in a bunch of different formats and I'd like to unify their format and configuration using FFMPEG and SoX. There are two steps to my process:

1. Convert the file, whatever it may originally be, to a PCM 16-bit little-endian WAV file:

   ffmpeg -i input.wav -c:a pcm_s16le output.wav

2. Process the file in SoX to make it conform to the sample rate and channel count that we need:

   sox input.wav output.flac channels 2 rate 44.1k

I'd ideally like to pipe these two commands together so as to avoid creating an unnecessary file. I'm having a lot of trouble actually getting the format to work properly, though. SoX complains that it needs to explicitly know the format of the incoming audio, which is something that I don't even know at execution time. I know the format of the PCM audio, but I'm not sure of the channel count nor of the sample rate of the incoming audio. Is there a way to pipe these two commands together, or better, to only have to use one tool for the job? The reason I've used two tools rather than just trying to do it with one:

FFMPEG:
- Not sure if there's a way to safely convert a mono audio stream to a stereo audio stream by duplicating the channels. (SoX does this natively.)
- Not sure how to change the sample rate. (SoX does this natively.)
- Not sure how to output to FLAC using the best compression rate.

SoX:
- Not able to do audio format detection as well as FFMPEG does. If I have a file without an extension, SoX asks me to manually specify the format, which doesn't work at all for my application.
Piping Sox and FFMPEG together
A raw stream does not contain any meta-information about its format, so you have to tell sox about it:

parec ... | sox -t raw -r 16k -e signed -b 16 -c 1 ...
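For the exact command from the question, that would presumably become the following (an untested sketch; the rate/encoding flags match the parec sample spec s16le 1ch 16000Hz):

parec -d alsa_output.pci-0000_00_1b.0.analog-stereo.monitor --rate=16000 --channels=1 | sox -t raw -r 16k -e signed -b 16 -c 1 - output.wav silence 1 0.3 0.1% 1 0.3 0.1% : newfile : restart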
I'm trying to pipe audio to sox and I get the error "sox FAIL formats: bad input format for `-': sampling rate was not specified".

parec -d alsa_output.pci-0000_00_1b.0.analog-stereo.monitor --rate=16000 --channels=1 | sox -t raw - output.wav silence 1 0.3 0.1% 1 0.3 0.1% : newfile : restart

This is the output of the command:

sox FAIL formats: bad input format for `-': sampling rate was not specified
Opening a recording stream with sample specification 's16le 1ch 16000Hz' and channel map 'mono'.
Connection established.
Stream successfully created.
Buffer metrics: maxlength=4194304, fragsize=64000
Using sample spec 's16le 1ch 16000Hz', channel map 'mono'.
Connected to device alsa_output.pci-0000_00_1b.0.analog-stereo.monitor (index: 0, suspended: no).
write() failed: Broken pipe
Sox format for stream with sample spec 's16le 1ch 16000Hz', channel map 'mono'
I've just had a quick go at it, very little in the way of testing, so maybe it'll be of help. The below relies on ffmpeg-python, but it wouldn't be a challenge to write with subprocess anyway. At the moment the time input file is just treated as pairs of times, start and end, and then an output name. Missing names are replaced with linecount.wav.

import ffmpeg
from sys import argv

"""
split_wav `audio file` `time listing`

`audio file` is any file known by local FFmpeg
`time listing` is a file containing multiple lines of format:
    `start time` `end time` output name
times can be either MM:SS or S*
"""

_in_file = argv[1]


def make_time(elem):
    # allow user to enter times on CLI
    t = elem.split(':')
    try:
        # will fail if no ':' in time, otherwise add together for total seconds
        return int(t[0]) * 60 + float(t[1])
    except IndexError:
        return float(t[0])


def collect_from_file():
    """user can save times in a file, with start and end time on a line"""
    time_pairs = []
    with open(argv[2]) as in_times:
        for l, line in enumerate(in_times):
            tp = line.split()
            tp[0] = make_time(tp[0])
            tp[1] = make_time(tp[1]) - tp[0]
            # if no name given, append line count
            if len(tp) < 3:
                tp.append(str(l) + '.wav')
            time_pairs.append(tp)
    return time_pairs


def main():
    for i, tp in enumerate(collect_from_file()):
        # open a file, from `ss`, for duration `t`
        stream = ffmpeg.input(_in_file, ss=tp[0], t=tp[1])
        # output to named file
        stream = ffmpeg.output(stream, tp[2])
        # this was to make trial and error easier
        stream = ffmpeg.overwrite_output(stream)
        # and actually run
        ffmpeg.run(stream)


if __name__ == '__main__':
    main()
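Saved as, say, split_wav.py (a hypothetical file name), it would be invoked per the docstring as:

python split_wav.py abc.wav times.txt

where times.txt holds the start/end/name lines shown in the question.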
I looked at the following link: Trim audio file using start and stop times. But this doesn't completely answer my question. My problem is: I have an audio file such as abc.mp3 or abc.wav. I also have a text file containing start and end timestamps:

0.0 1.0 silence
1.0 5.0 music
6.0 8.0 speech

I want to split the audio into three parts using Python and sox/ffmpeg, thus resulting in three separate audio files. How do I achieve this using either sox or ffmpeg? Later, I want to compute the MFCCs corresponding to those portions using librosa. I have Python 2.7, ffmpeg, and sox on an Ubuntu Linux 16.04 installation.
Split audio into several pieces based on timestamps from a text file with sox or ffmpeg
First, gather the list into a Bash array. If the files are in the current directory, you can use

files=(prefix_????.mp3)

Alternatively, you can use find and sort:

IFS=$'\n' ; files=($(find . -name 'prefix_*.mp3' -printf '%p\n' | sort -d))

Setting IFS tells Bash to split only at newlines. If your file and directory names do not contain spaces, you can omit it. Alternatively, you can read the file names from a file, say filelist, one name per line and no empty lines:

IFS=$'\n' files=($(<filelist))

If you might have empty lines in there, use

IFS=$'\n' files=($(sed -e '/^$/ d' filelist))

Next, decide how many files you want in each slice, the name of the temporary accumulator file, as well as the final combined file name:

s=100
src="combined-in.mp3"
out="combined-out.mp3"

Then, we just need to slice the list, and process each sublist:

while (( ${#files[@]} > 0 )); do
    n=${#files[@]}

    # Slice files array into sub and left.
    if (( n <= s )); then
        sub=("${files[@]}")
        left=()
    else
        (( n -= s ))
        sub=("${files[@]:0:s}")
        left=("${files[@]:s:n}")
    fi

    # If there is no source file, but there is
    # a sum file, rename sum to source.
    if [ ! -e "$src" -a -e "$out" ]; then
        mv -f "$out" "$src"
    fi

    # If there is a source file, include it first.
    if [ -e "$src" ]; then
        sub=("$src" "${sub[@]}")
    fi

    # Run command.
    if ! sox "${sub[@]}" "$out" ; then
        rm -f "$out"
        echo "Failed!"
        break
    fi

    rm -f "$src"
    echo "Done up to ${sub[-1]}."
    files=("${left[@]}")
    # rm -f "${sub[@]}"
done

If sox reports a failure, the loop will break early. Otherwise, it will output the last name in the batch processed. We use an if around the sox command to detect failure, and remove the output file if a failure did occur. Because we also postpone modifying the files array until after a successful sox command, you can safely edit/fix individual files and then just rerun the while loop to continue where it stopped. If you are short on disk space, you can uncomment the second-to-last line, rm -f "${sub[@]}", to remove all files that have been successfully combined.

The above processes the initial parts over and over again. As I explained in a comment below, the results will be much better if you concatenate the files first using ffmpeg (without recoding using sox), possibly followed by a recoding pass using sox. (Or, you could recode each first, of course.) First, you create a pipe-separated list (string) of the file names,

files="$(ls -1 prefix_????.mp3 | tr '\n' '|')"

remove the final superfluous pipe,

files="${files%|}"

and feed them to ffmpeg, with no recoding:

ffmpeg -i "concat:$files" -codec copy output.mp3

Note that you may wish to run

ulimit -n hard

to raise the number of open files to the maximum allowed for the current process (the hard limit); you can query it using ulimit -n. (I don't recall whether ffmpeg concat: opens the sources sequentially or all at once.) If you do this more than once, I'd put it all into a simple script:

#!/bin/bash
export LANG=C LC_ALL=C

if [ $# -le 2 -o "$1" = "-h" -o "$1" = "--help" ]; then
    exec >&2
    printf '\n'
    printf 'Usage: %s [ -h | --help ]\n' "$0"
    printf '       %s OUTPUT INPUT1 .. INPUTn\n' "$0"
    printf '\n'
    printf 'Inputs may be audio mp3 or MPEG media files.\n'
    printf '\n'
    exit 1
fi

output="$1"
shift 1

ulimit -n hard

inputs="$(printf '%s|' "${@}")"
inputs="${inputs%|}"

ffmpeg -i "concat:$inputs" -codec copy "$output"
retval=$?

if [ $retval -ne 0 ]; then
    rm -f "$output"
    echo "Failed!"
    exit $retval
fi

# To remove all inputs now, uncomment the following line:
# rm -f "${@}"

echo "Success."
exit 0

Note that because I use -codec copy instead of -acodec copy, the above should work for all kinds of MPEG files, not just mp3 audio files.
I have a list of files with names prefix_0000.mp3 ... prefix_x.mp3, where max(x) = 9999. I have the bash script:

...
sox prefix_*.mp3 script_name_output.mp3   # this fails because the maximum number is 348
rm prefix_*.mp3
...

How can I best split the ordered list of mp3 files into sublists (retaining ordering), gradually sox them, and remove unneeded files in a bash script?
Splitting the ordered list into sublists
SoX comes with the play command, which takes the same arguments as sox. These include the trim effect, which can specify a starting position. Effects come after the filename, so you can do, for example:

play myfile.mp3 trim 10:00
I need a way to play an audio file in the terminal and tell it what time to start from. An example would be starting the audio file from 00:10:00. Is that possible?
Play audio file from a certain time step in terminal?
The effects function as a chain, so the stat effect feeds into trim. Swap them around and it will work, e.g.:

sox audio.wav -n trim 50 5 stat
I wanted to analyze 5 seconds of an audio file beginning from 50 seconds, so I ran the following command:

sox audio.wav -n stat trim 50 5

But the output contained:

...
Length (seconds): 55.296000
...

But I expected only 5 seconds, not 55. What did I do wrong? I thought that 50 was the start and 5 the duration.
How can I analyze a segment of an audio file with sox?
Have you checked the volume settings? The system defaults were chosen to be quiet or outright muted, because people were annoyed with getting blasted by full-volume sound output on new, unconfigured systems. Since /proc/asound/cards indicates that the name of your chipset-integrated soundcard is "PCH", try this (install alsamixer first if necessary):

alsamixer -c PCH

This should open up a text-based sound mixer with several sliders; use the arrow keys to manipulate them. The M key will toggle the "mute" setting on channels that have it. The slider labeled "PCM" needs to be at full to get normal sound output; the "Master" slider is the one to use to adjust the overall volume level. If you find channels whose name includes S/PDIF, you may need to toggle their mute status to get S/PDIF output. Once you've found good default settings, run alsactl store as root to save the settings as new system defaults. Your desktop environment may also store your audio settings from one session to another, but setting good system-wide default volumes never hurts.
Problem: I don't hear anything on my sound system when playing audio. Question: What is the minimal set of programs required to play something on my machine's audio jack or S/PDIF output? How did I get there? My system is an up-to-date Debian Stretch system which was created with debootstrap. The system is an Intel NUC5CPYH, which is said to have an Intel Braswell chipset. I ran

apt-get install --no-install-recommends sox libsox-fmt-all

to install the sox audio player. When I tried to play a file, I got

ALSA lib confmisc.c:767:(parse_card) cannot find card '0'
ALSA lib conf.c:4528:(_snd_config_evaluate) function snd_func_card_driver returned error: No such file or directory
ALSA lib confmisc.c:392:(snd_func_concat) error evaluating strings
ALSA lib conf.c:4528:(_snd_config_evaluate) function snd_func_concat returned error: No such file or directory
ALSA lib confmisc.c:1246:(snd_func_refer) error evaluating name
ALSA lib conf.c:4528:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:5007:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2495:(snd_pcm_open_noupdate) Unknown PCM default
play FAIL formats: can't open output file `default': snd_pcm_open error: No such file or directory

so, after looking at the dependencies of sox and libsox-fmt-all, I ran

apt-get install libsndio6.1 pulseaudio

Now sox seems to play a file fine when asked to do so, except that I don't get any sound output on my sound system. (The sound system itself is set up fine.)

Edit #1: The output of cat /proc/asound/cards is

0 [PCH            ]: HDA-Intel - HDA Intel PCH
                     HDA Intel PCH at 0x81414000 irq 313

Edit #2: The output of aplay -l is

aplay: device_list:270: no soundcards found...

Solution: I made it work. Now I can say that there were two problems:

My user did not have sufficient rights to access the sound device. (This answer to another question told me that.)
My sound device was muted. (I marked the answer which told me that as "correct".)

So, all in all, the steps I had to take were:

apt-get install --no-install-recommends sox libsox-fmt-all alsa-utils
Add myself to the audio group: usermod --append --groups audio <username>
Change the alsa volume: alsamixer
Play the song with sox: play <filename>
What is the minimal set of programs required to play something on my machine's audio jack or S/PDIF output?
I know this is a very old question, but I may be able to help someone (like me, a few years ago) who stumbles across this in a search... I found the above answer from ridgy when I was looking to record and stream audio from Motion, and it sent me off to develop a successful solution, for me at least. My use case is recording the sounds from a nestbox. I've got an MS LifeCam webcam on a pan/tilt mount connected to a Pi 3B+ running motion. The solution was indeed to run a script when motion was triggered to start recording a movie, but I also have a continuously-running script which streams the audio from the webcam. I then use VLC to view the network video stream and play the audio stream alongside it. Details are as follows:

I have three directories set up for handling the audio and video files: /home/pi/motion/videos, /home/pi/motion/videosB and /home/pi/videos/pre-mergevideos.

motion.conf is configured so .mkv videos are saved to /home/pi/motion/videos.
On movie start, motion runs a script called record_s.sh. This records a 60 sec audio file with the same name as the .mkv video and saves it in /home/pi/motion/videos.
On movie close, motion runs a script called merge_sv.sh. This merges the .mkv and .mp3 files in /home/pi/motion/videos and saves the merged file to /home/pi/motion/pre-mergevideos. The .mkv & .mp3 source files are then deleted, the permissions of the merged file changed to 777 and its owner changed from root to motion, and finally the merged file is moved to /home/pi/motion/videosB.
crontab runs rsync every 2 minutes to move the contents of 'videosB' to a NAS share (/home/pi/motion/media).

The latter is to prevent 'motion' crashing if the NAS share becomes unavailable for any reason - 'motion' writes its files to local directories, whilst crontab takes care of the output to the NAS share.

#!/bin/bash
# record_s.sh
# 15/4/18: Created
# 25/11/20: libav-tools no longer available in 'Buster', 'avconv' replaced by 'ffmpeg'
filename="$1"
echo "$1" &> /home/pi/motion/filename.txt
# filename.txt will contain /home/pi/motion/videos/xxxxxxxxxxxxxx.mkv
# remove ".mkv" file extension - "${filename%.*}" (parameter expansion)
# Use of 'sudo' is only possible if user motion is in the sudoers file and in the sudo group
sudo ffmpeg -y -f alsa -ac 1 -i default -acodec mp3 -b:a 64k -t 60 "${filename%.*}".mp3 &> /home/pi/motion/record_s.txt

#!/bin/bash
# merge_sv.sh
# 19/04/18: Created
# Call arguments: %f %Y%H%d%T where %f is filename with full path (eg) /home/pi/motion/videos/9403-20180511053500.mkv
# Allow record_s.sh to finish writing the .mp3 sound file (which will have the same name & path as above except .mp3)
sleep 10
filename="$1"
# The 'if' statement below will reject timelapse files '*.avi'
# See answer 5 in https://stackoverflow.com/questions/407184/how-to-check-the-extension-of-a-filename-in-a-bash-script
if [[ ${filename##*\.} != avi ]]
then
    # Change output file address from /home/pi/motion/videos to /home/pi/motion/pre-mergevideos
    # (/home/pi/motion/pre-mergevideos is a directory for temporarily holding the unmerged files)
    tempfolder="pre-mergevideos"
    # videosB is the folder containing the merged mkv & mp3 video and the jpg files from 'motion'.
    # This folder is rsync'd to network share 'media'.
    outputfolder="videosB"
    # Replaces 'videos' in $filename with the text in $tempfolder (ie) 'pre-mergevideos' so $temp_locn becomes (eg)
    # /home/pi/motion/pre-mergevideos/9403-20180511053500.mkv, whilst $filename remains (eg) /home/pi/motion/videos/9403-20180511053500.mkv
    temp_locn="${filename/videos/$tempfolder}"
    final_locn="${filename/videos/$outputfolder}"
    # Merge video and audio into one file. Note the mp3 file has to be converted to aac for an mkv container.
    # Error and stdout messages are redirected to merge_sv.txt. For 'sudo' to work, user 'motion' has to be
    # added to the sudoers file and to the sudo group.
    # The expression "${filename%.*}" removes the ".mkv" file extension (parameter expansion).
    # The -itsoffset option offsets the timestamps of all streams by (in this case) 0.8 seconds in the input file
    # following the option (without it the audio leads the video by about 0.8s).
    sudo ffmpeg -y -i $filename -itsoffset 0.80 -i "${filename%.*}".mp3 -c:v copy -c:a aac $temp_locn &> /home/pi/motion/merge_sv.txt
    # Delete the source files
    sudo rm $filename
    sudo rm "${filename%.*}".mp3
    # change the file permissions on the merged file (otherwise it's root 644 rw.r..r..)
    sudo chmod 777 $temp_locn
    # change the owner to 'motion'
    sudo chown motion:motion $temp_locn
    # move the recently merged file into the 'videosB' folder. Remember both $temp_locn and $final_locn contain path and filename
    sudo mv $temp_locn $final_locn
fi

tl;dr version: record_s.sh records the audio from the webcam whilst motion is recording the video. When the video recording has finished, merge_sv.sh merges the audio and video files together and deletes the source files. The script applies an offset to the audio (determined by trial and error) to get the files synched together. There's also some jiggery-pokery to stop motion crashing if the network share becomes unavailable.

Audio streaming: I run a simple script called sound_on.sh on bootup via crontab -e to generate an rtp stream:

#!/bin/bash
# sound_on.sh
# 13/1/18
#
# killall -9 avconv
# Pre-Buster version:
# avconv -f alsa -ac 1 -re -i default -acodec mp3 -ac 1 -f rtp rtp://234.5.5.5:5004 2> /tmp/mylog.log   # default was hw:1,0

# Buster version onwards:
ffmpeg -f alsa -channels 1 -i hw:1,0 -acodec mp3 -f rtp rtp://234.5.5.5:5004 2> /home/pi/motion/sound_on.log

This can be played using VLC with the following settings:

Media/Open Network Stream: network URL: http://192.168.1.122:8081 (the address of the Raspberry Pi running motion)
Show more options/Play another media synchronously: 'Extra media' rtp://234.5.5.5:5004
Then press 'Play'

Please note this only (at the time of writing) works with the Ubuntu version of VLC - the Windows 10 version crashes every time I've tried this.
I want to set up a daemon that starts recording video with sound when it detects noise or movement. There are tools that do each separately, but can they be done at the same time? Can I set up motion so that when it detects motion it executes a script? Can I do the same with SoX?
sound recording with motion?
With the following script it works (using mplayer, which is probably not present on many systems):

#!/bin/sh
grep -A 1000 --text -m 1 ^Ogg "$0" | mplayer -
exit
OggS^@^B^@^@^@^@^@^@^@^@^]f<8a>g^@^@^@^@lY߸^A^^^Avorbis^@^@^@^@^A"V^@^@^...

The last line is the beginning of the audio file binary. The grep command searches for the first occurrence of Ogg in the file $0 (which is the script file itself) and prints the 1000 lines after that line (which is enough for my small audio test file). The output of grep is then piped to mplayer, which reads /dev/stdin (the abbreviation for /dev/stdin is -). I created this file by concatenating the script file playmeBashScript.sh with the audio file sound.ogg:

cat playmeBashScript.sh sound.ogg > playme.sh

A more general and a bit shorter version with sed instead of grep (thanks to Elias):

#!/bin/sh
sed '1,/^exit$/d' "$0" | mplayer -
exit
OggS^@^B^@^@^@^@^@^@^@^@^]f<8a>g^@^@^@^@lY߸^A^^^Avorbis^@^@^@^@^A"V^@^@^...

In this case, sed selects all lines from number one up to the line where it finds the word exit, and deletes them. The rest is passed on and piped to mplayer. Of course, that only works if the word exit occurs only once in the script before the binary data.
How can I add an audio file (Ogg Vorbis) into a shell script, keeping the script as small as possible and at the same time being able to execute it on as many systems as possible? I want the audio file to be played. I don't want it to be uuencoded, because sharutils is not installed on many systems. base64 is nice, but makes the file/script bigger, which I don't want. Should I use mplayer or the play command from the sox package? Which is more common on Linux/BSD systems? Or is there another media player that is usually installed? Should I convert the audio file to another format/codec (if that doesn't make the file much bigger) to have more possible players to play the file back, or is Ogg Vorbis a good choice?
How to add an audio file to a shell script
What happens is that one of your SoX programs (sox, play, rec) has changed how stdin behaves, making it non-blocking. Typically, something is calling fcntl(0, F_SETFL, O_NONBLOCK). When a call to the read() system call is made on a non-blocking file descriptor, the call does not wait: either there is already something to read in the kernel buffer and it is returned, or read() fails and errno is set to EAGAIN. When Bash meets this EAGAIN error while reading from stdin with the read command, it displays the "Resource temporarily unavailable" message you have met. Try adding <&- at the end of each of your SoX commands; this will close stdin for them, and they won't be able to alter how it is working.
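Applied to the script in the question, the first command would become, for example (a sketch of the suggested fix):

rec --channels 1 /tmp/rec.sox trim 0.9 band 4k noiseprof /tmp/noiseprof <&- &&

and likewise for the sox and play invocations.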
Script:

#!/bin/bash --

# record from microphone
rec --channels 1 /tmp/rec.sox trim 0.9 band 4k noiseprof /tmp/noiseprof &&

# convert to mp3
sox /tmp/rec.sox --compression 0.01 /tmp/rec.mp3 trim 0 -0.1 &&

# play recording to test for noise
play /tmp/rec.mp3 &&

printf "\nRemove noise? "
read reply

# If there's noise, remove it
if [[ $reply == "y" ]]; then
    sox /tmp/rec.sox --compression 0.01 /tmp/rec.mp3 trim 0 -0.1 noisered /tmp/noiseprof 0.1
    play /tmp/rec.mp3
fi

It errors with:

read error: 0: Resource temporarily unavailable

But the script works if I use the -e flag on read to enable readline.
Why does `read` fail saying "read error: 0: Resource temporarily unavailable"?
By changing the configuration of the user pi to use hw devices, you have disabled all automatic sample format conversions. To set only the card number, use:

defaults.pcm.card 0
defaults.ctl.card 0

To change this for all users instead, put this into /etc/asound.conf.
I have a test.wav file that I want to play through the speaker using ALSA. I also have sox installed on the system. All sound cards are installed properly; aplay -L and arecord -L return the correct values. However, I wasn't able to play this test.wav:

aplay -c1 -r 48000 -f S16_LE test.wav
$ Playing WAVE 'test.wav': Signed 16 bit Little Endian, Rate 48000 Hz, Mono
aplay: set_params:1345: Channels count non available

But when I used sox, the system had no problem, and it shows that the file is indeed 1 channel at 16-bit:

$ Encoding: Signed PCM
  Channels: 1 @ 16-bit
  Samplerate: 48000Hz

I really have no clue what the reason could be. Any help is appreciated! Thank you guys in advance!

EDIT: My mistake when I hand-typed the message printed in the terminal; it is indeed 1 channel @ 16-bit when I used SoX instead.

.asoundrc:

pcm.!default {
    type hw
    card 0
}
ctl.!default {
    type hw
    card 0
}

When I use arecord, I need to specify the channel count: arecord -c 2 -r 48000 -f S16_LE test.wav. Otherwise it will return the same error as above. But then I have no problem playing test.wav if it was recorded using arecord. It is weird that I can't play a test.wav if it is imported from elsewhere.
ALSA aplay mono file but returns channel count non available
You have to use the pipe option (-p, --sox-pipe); otherwise the first command is not passing anything to stdout, and the second command only gets one file for mixing:

sox FAIL sox: Not enough input filenames specified

Using a pipe with the -p option does the job:

sox one-bar.flac -p repeat 4 | sox - -m four-bars.wav output.flac
I have two audio files and want to mix them with SoX using the -m, --combine mix option. Both files have the same bpm, but not the same length, meaning I need to loop one file, but not the other. Does anyone on here know how to do this (if it is possible)? I managed to create a looped file with sox by using the repeat option, but I cannot use this option on only one input file, somewhat like:

sox -m repeat 4 one-bar.flac four-bar.flac outfile.flac

Also, a pipe does not work:

sox one-bar.flac repeat 4 | sox - -m four-bars.wav output.flac

I get:

sox FAIL sox: Not enough input filenames specified
sox FAIL formats: can't determine type of `-'
SoX - mixing two audio tracks but looping/repeating only one
Figured it out. Similar to https://stackoverflow.com/questions/9667081/how-do-you-trim-the-audio-files-end-using-sox

Solution:

play nameOfMp3 reverse trim 0 N reverse

(where N is the number of seconds).
Is there any way to play the last N seconds of an mp3 file on the bash command line?
Play last few seconds of mp3
Split the files, inspect the resulting files (for i in *wav), and if a file's length is less than 10 seconds, pad it.

To get the wave file length:

sox --info -D file.wav

To pad the wave file: https://superuser.com/questions/579008/add-1-second-of-silence-to-audio-through-ffmpeg

Maybe do some calculations :-)
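A minimal sketch tying those steps together with sox's pad effect (file names are placeholders; assumes bc is available for the fractional arithmetic):

# split into 10-second chunks, as in the question
sox file.wav out-.wav trim 0 10 : newfile : restart

# pad any chunk shorter than 10 seconds with trailing silence
for f in out-*.wav; do
    len=$(sox --info -D "$f")
    pad=$(echo "10 - $len" | bc)
    if [ "$(echo "$pad > 0" | bc)" -eq 1 ]; then
        sox "$f" "padded-$f" pad 0 "$pad"
        mv "padded-$f" "$f"
    fi
done

pad 0 N appends N seconds of silence to the end of a file.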
I need to split wav files into multiple 10-second-long wav files, but each resulting wav file must be exactly 10 seconds in length, adding silence if needed; so if a wav file's duration in seconds isn't a multiple of 10, the last wav file should be padded with silence. I've seen some answers (1, 2, 3) which show how to use sox and ffmpeg to split a file into chunks of equal length:

$ ffmpeg -i file.wav -f segment -segment_time 10 -c copy out%03d.wav
$ sox file.wav output-.wav trim 0 10 : newfile : restart

but the last file produced by these commands is usually less than 10 seconds long. Is there a way to split a wav file, padding the last file if needed, in the same command?
split wav file into parts of equal duration, padding with silence if needed
If you want/need to use sox for this, you can use its trim effect:

for i in *.mp3
do
    sox "$i" sample-"$i" trim 0 10
done

You can also do the splitting with the command-line utility that is part of mp3splt. You explicitly set the output file with -o, so the originals are not touched; just remove them after you are done with them. This also allows you to incorporate tags defined in the file into the output name, which sox does not (though you don't seem to need that right now):

for i in *.mp3
do
    mp3splt "$i" 00.00.00 00.10.00 -o sample-"${i%.mp3}"
done

Note that -o normally works with @-based directives to include tag elements in the output name, and appends .mp3. Without ${i%.mp3} you would get .mp3.mp3 files. Times are dot-separated; don't try to use : instead, or you get a less-than-useful error message saying you don't have enough split points. I would not remove the input files until you have tested that the script works. Also note that if you stop it halfway, or add files later, you cannot re-run it without removing any sample- files first. You might want to specify a directory before sample-..., to keep things apart.
I have a folder with several MP3 files, and I need to extract the first 10-15 seconds of audio from each. I would also like the converted files to be named sample-(name).mp3. How can I do this via a shell script?
Create SOX batch script to extract first 15 seconds and rename multiple files in folder
I found a solution in the meanwhile. It is based on piping the output of rec to base64, so that the audio can be encoded to ASCII and stored in a bash variable. When it is time to analyze the segment's volume and frequency, I run base64 --decode on the variable contents. In the script below, only the volume is analyzed. If it exceeds the threshold (0.6), handleExcess is called and the segment is saved. I also increased the segment length to 5 seconds.

handleExcess() {
    echo "$1" | base64 --decode > /tmp/"$2".wav
}

VOLUME="";

while true; do
    AUDIO_DATA="$(AUDIODEV=hw:0,0 rec -c 1 -t wav - trim 0 5 2> /dev/null | base64)";
    declare $(echo "$AUDIO_DATA" | base64 --decode | sox - -n stat 2>&1 | awk 'BEGIN { ORS="" } /^Maximum amplitude/ { print "VOLUME="$3 }');
    if [ $(echo "$VOLUME > 0.6" | bc) == 1 ]; then
        AUDIO_DATA_TMP="$AUDIO_DATA";
        handleExcess "$AUDIO_DATA_TMP" "$VOLUME""_""$(date +%s)" &
    fi
done
I use the following script to monitor my microphone:

while true; do
    printf "$(AUDIODEV=hw:2,0 rec -n stat trim 0 1 2>&1 | awk 'BEGIN { ORS="" } /^Maximum amplitude/ { print "Max. amplitude: "$3} /^Rough\s+frequency/ { print " Frequency: "$3} /^Maximum\s+delta/ { print " Max. delta: "$3}')\r";
done

It records a segment which is 1 second long, extracts the values of Maximum amplitude and Rough frequency from the standard sox output, and prints them. Can I save a segment to a file if its volume or frequency is greater than a particular threshold? I know that I can save each segment and then analyze it, but there would be too many write operations, which I want to avoid.
Monitor microphone and save filtered segments
Have a look at this thread on the SoX mailing list: [SoX-users] What does Hd:0.0 mean?. The first answer is:

Rene Maurer <renemaur@...>: Can anyone tell me what "Hd:" means. Playing songs lets this value change from time to time (Hd:0.0, Hd:1.6, Hd:4.9, Hd:0.0 for example).

It's the headroom in dB (in case you speak German: Aussteuerungsreserve), i.e. how much the output signal could be amplified before clipping occurs. It is only shown if it is relatively low, so that there is a risk of hitting the ceiling. The progress indicator is described in more detail in connection with the -S global option in the manpage. Ulrich
When I use the play command provided by SoX, sometimes the playback information contains a number labeled Hd, which the manpage doesn't seem to mention. What does it mean?

$ play song.mp3
In:72.5% 00:04:43.38 [00:01:47.73] Out:12.5M [ -===|==== ] Hd:4.3 Clip:0
What is the Hd output of SoX’s play?
This assumes that you are root and that squashfs-tools is installed on your system:

Copy filesystem.squashfs to some empty dir, e.g.:

cp /path/to/filesystem.squashfs /path/to/workdir
cd /path/to/workdir

Unpack the file, then move it somewhere else (so you still have it as a backup):

unsquashfs filesystem.squashfs
mv filesystem.squashfs /path/to/backup/

Go into squashfs-root, add/modify as per your taste, then recreate1 filesystem.squashfs:

cd /path/to/workdir
mksquashfs squashfs-root filesystem.squashfs

Copy the newly created filesystem.squashfs over the existing one on your USB drive, e.g.:

cp filesystem.squashfs /mnt/clonezilla/live/

then reboot and use your live USB.

1: Consult the manual for additional options that you can pass, like -b 4M -comp lz4 or -comp xz -Xbcj x86 etc.
I have a Clonezilla installation on a USB stick and I'd like to make some modifications to the operating system. Specifically, I'd like to insert a runnable script into /usr/sbin that runs my own backup command, to make backups less painful. The main filesystem lives under /live/filesystem.squashfs on the USB FAT-32 partition. How can I mount this read/write on my Linux machine in order to be able to add/remove/change files? I'm running an Ubuntu 12.04 derivative.
Mounting a squashfs filesystem in read-write
There are a couple of important reasons for this, but the big two are space constraints and requirements from the filesystem itself. SquashFS is a highly optimized filesystem image format that provides, among other benefits:

High levels of data compression.
Built-in block-level deduplication (any given block is stored only once, and all files that contain a copy of that block just reference that one copy).
No practical limitations on file sizes (this usually does not matter, but is worth mentioning IMO).
Proper support for file ownership, file permissions, and extended attributes (needed, for example, for SELinux).
Very low runtime overhead despite the above benefits.
Reasonably good performance.

A burnable live system image needs to conform to the filesystem format required by the media type it's being used with: either ISO 9660 if it's an optical disk (because while it could use UDF, almost nobody actually does that), or FAT32 on most USB-connected storage devices. FAT32 notoriously supports none of the first four benefits listed above. ISO 9660 technically has support for POSIX-style file ownership and permissions (the Rock Ridge extensions), but it lacks practical support for compression and deduplication. However, Linux needs POSIX-style file ownership and permissions to work correctly, and in most cases it's extremely desirable for the final live system image to be as small as possible, so good compression matters. Because SquashFS does all of this better than any other option available for Linux right now, it's what gets used for the root filesystem of the live image (since that is generally the biggest part of the image by a significant margin, and is also the only part that the bootloader does not need to understand).
Why couldn't the Live ISOs just be a minimal Linux system with an installer? Is there any reason to use squashfs to hold the root of the filesystem? Is it just for better compression, or are there other reasons? I've seen some answers (and comments) say that it's for read-only reasons. What about persistence, like what Ubuntu or EndeavourOS has in their Live USBs?
Why is the base system of Live ISOs for Linux distros usually stored with squashfs?
Most major distributions use squashfs to hold their live CD. squashfs is intended for read-only filesystems, which is exactly what a live CD is. Decompressing filesystem.squashfs takes longer than any other step because filesystem.squashfs contains the entire system. For more information, see the Wikipedia page: https://en.wikipedia.org/wiki/SquashFS
When I use Unetbootin to put a Linux ISO on a USB drive, it proceeds quite quickly until it gets to filesystem.squashfs, which takes longer to process than absolutely everything else combined. Is this writing a new filesystem to the USB, or is it copying some huge filesystem-dependent file? If so, is there a way to only do it once in the event that I will be trying many distros and want to speed this step up?
What is filesystem.squashfs and why does it take so long to load on to bootable media?
Linux kernels before 2.6.29 don't accept SquashFS version 4 filesystems (read here). This is probably why your device does not boot with it. In order to build a SquashFS v3 image, you'll need an older version of the squashfs-tools package. The latest Ubuntu release to include one is the old Hardy 8.04 release, with the package available here. I think it's possible to just install this package on a more recent version of Ubuntu; try that before installing the ancient Hardy release. I'm quite astonished to see that Ubuntu has just upgraded in place with this non-forwards- and non-backwards-compatible change. I would have expected to see both version 3 and version 4 packages in the repositories.
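With the 3.x tools installed, repacking should look much like the command from the question, now producing a version-3 superblock. A sketch only: some 4.x flags such as -no-xattrs did not exist yet in 3.x, and since the original image is big-endian, you will need to check whether your mksquashfs build offers an endianness switch for your target.

mksquashfs squashfs-root main-fs.new -b 64K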
I'm trying to modify a firmware file by unsquashing it, editing my files, and squashing it again. But I'm having problems with the device, which does not accept the file because of different squashfs types (as I suppose). Here's the output on my dev box. Original file:

user@ubuntuVM:~$ unsquashfs -s main-fs.5_0
Reading a different endian SQUASHFS filesystem on main-fs.5_0
Found a valid big endian SQUASHFS 3:0 superblock on main-fs.5_0.
Creation or last append time Thu Aug 21 20:56:15 2008
Filesystem size 9653.75 Kbytes (9.43 Mbytes)
Block size 65536
Filesystem is not exportable via NFS
Inodes are compressed
Data is compressed
Fragments are compressed
Always_use_fragments option is not specified
Check data is not present in the filesystem
Duplicates are removed
Number of fragments 105
Number of inodes 1667
Number of uids 2
Number of gids 1

Modified file (created as root with mksquashfs squashfs-root main-fs.test -b 64K -no-exports -no-xattrs -no-sparse -force-gid 0 -force-uid 0):

user@ubuntuVM:~$ unsquashfs -s main-fs.mod
Found a valid SQUASHFS 4:0 superblock on main-fs.mod.
Creation or last append time Mon Dec 3 14:46:07 2012
Filesystem size 9654.48 Kbytes (9.43 Mbytes)
Compression gzip
Block size 65536
Filesystem is not exportable via NFS
Inodes are compressed
Data is compressed
Fragments are compressed
Always_use_fragments option is not specified
Xattrs are not stored
Duplicates are removed
Number of fragments 105
Number of inodes 1667
Number of ids 1

I think the problem is the superblock and/or SQUASHFS version. I found that it was possible to use mksquashfs -2.0 for some time, but this argument got removed, and it would not be so helpful anyway because I need version 3. So my question exactly is: how can I repack my modified files exactly as they were before? Additionally, my modified file states compression: gzip, but the original states nothing about its compression. Maybe there is a problem here too, but I don't know how to get more info than the above. :-(
SQUASHFS 3 vs 4
This isn't related to BusyBox. BusyBox is a set of unix utilities designed for low-resource environments such as routers. Your router's root filesystem is mounted read-only because it's stored on SquashFS, a compressed filesystem which cannot be written to. A SquashFS filesystem is compressed in one go when the filesystem is built and cannot be modified afterwards. Such routers generally run a variant or derivative of DD-WRT. Most variants have another filesystem on the side, usually JFFS, which is writable. It looks like yours is completely locked down. Check if there's an option somewhere to “unlock” an extra filesystem (it could be an option in NVRAM that you can set through the web interface, or with the nvram utility if you have one). If you don't find a way in, consider installing an alternate firmware, such as OpenWRT, DD-WRT, Tomato, … (check that your particular router model is supported before starting the installation).
I want to change some files on my router. I can change everything in /var, but I want to change /etc/fstab. When I try to change it, I get an error message saying the filesystem is read-only. The BusyBox inside the router has a limited set of commands, so I got a busybox binary for MIPS (http://www.busybox.net/downloads/binaries/1.19.0/busybox-mips) and uploaded it by tftp (tftp -g -r busybox-mips my.i.p.addr), so now I can use the full set of commands (/var/tmp/busybox-mips command). There is no ROM inside the router (SDRAM), nor is there another partition, so it must be related to busybox.

# /var/tmp/busybox-mips df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/root 1344 1344 0 100% /

# mount
rootfs on / type rootfs (rw)
/dev/root on / type squashfs (ro)
proc on /proc type proc (rw,nodiratime)
ramfs on /var type ramfs (rw)
#

Model: Airties 5021
Processor: BCM6332KFBG HS1037 P12 994981 N1
Memory: M12L64164A-7T (SDRAM) ANM1P02HL 1028
how to make read only filesystem writable on busybox?
You could mount them with fuse-zip or archivemount and then create the squashfs file from the mount point. For example, this would work for a zip file:

$ mkdir /tmp/zmnt
$ fuse-zip -r /path/to/file1.zip /tmp/zmnt
$ mksquashfs /tmp/zmnt /path/to/file1.squashfs
$ fusermount -u /tmp/zmnt
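For the tgz and rar archives, archivemount should slot into the same pattern (a sketch; the -o readonly option is optional but matches this read-only use case):

$ mkdir /tmp/amnt
$ archivemount -o readonly /path/to/file1.tgz /tmp/amnt
$ mksquashfs /tmp/amnt /path/to/file1.squashfs
$ fusermount -u /tmp/amnt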
I enjoy using squashfs for compression because of the simplicity of mounting the images as loop devices to access the files inside. I have a lot of rar, tgz and zip files that I would like to convert to squashfs. In this answer, I saw that it is possible to use a pseudo file when compressing a disk image to squashfs, to avoid having to use a temporary file the size of the whole disk:

mkdir empty-dir
mksquashfs empty-dir squash.img -p 'sda_backup.img f 444 root root dd if=/dev/sda bs=4M'

I would like to use pseudo files to convert from rar, tgz or zip to squashfs in the same way (on the fly), so I don't have to first extract the whole archive to disk and then compress to squashfs in a separate operation. Some of these archives contain thousands of individual files, some of which will have spaces or other special characters in their filenames. I looked at the README, and I think I would need to use the -pf <pseudo-file> option, but I'm not sure how to create the pseudo file on the fly (and also not have problems with filenames with spaces). I think I would need to use process substitution to create the list of files from the source archive. Ideally, I would like to have a command that is able to convert any rar, tgz or zip without having to individually create the pseudo file for each archive, but if anyone can tell me how to do it with one of those archive formats, then hopefully I can work it out for the others. Thanks everyone.
How to convert from rar or tgz to squashfs without having to extract to temporary folder?
Thanks to @msw, I figured out the package names for Ubuntu. Thank you! Here are the full steps for someone in the future. Get the source here: http://sourceforge.net/projects/squashfs/

# sudo apt-get install lzma-dev
# sudo apt-get install liblzma-dev
# tar -zxvf squashfs4.2.tar.gz
# cd squashfs4.2/squashfs-tools

Edit the Makefile and uncomment the line "LZMA_XZ_SUPPORT = 1", then:

# make
# sudo make install
# sudo unsquashfs <path/lzma_filename_to_unsquash>
I'm using Ubuntu 12.04. I want to unsquash an lzma image. I have done

sudo apt-get install squashfs-tools

Now, when I do

unsquashfs <squashed_image_filename>

I get

Filesystem uses lzma compression, this is unsupported by this version

I know my squashed image is lzma. How do I install support for lzma? I have downloaded the squashfs-tools from here: http://sourceforge.net/projects/squashfs/files/ It is my understanding that after extracting that tarball, I need to cd into squashfs4.2/squashfs-tools and edit the Makefile by uncommenting the line LZMA_XZ_SUPPORT = 1. Then I just need to run make. That does not work for me. I get the error:

gzip_wrapper.c:23:18: fatal error: zlib.h: No such file or directory

I think I need to install lzma-devel and xz-devel. I have tried this and been Googling for a couple of hours and haven't gotten anywhere or found any solid instructions that show how this should work. Can anyone who has done this help me out? I am new to desktop Linux, so if you could be fairly verbose in your instructions, that would be appreciated.
How to use 'unsquashfs' with lzma?
As I said in my other answer, you have to move the old filesystem.squashfs to another location (or rename it) before repacking your modified squashfs-root into a new filesystem.squashfs:
mv filesystem.squashfs /path/to/backup/
or
mv filesystem.squashfs filesystem.squashfs.old
then:
mksquashfs squashfs-root filesystem.squashfs -b 1024k -comp xz -Xbcj x86 -e boot
In this answer on a previous question, I found out how to modify files in a squashfs filesystem: # unsquash the filesystem to a local directory sudo cp /media/clonezilla/live/filesystem.squashfs ./ sudo unsquashfs filesystem.squashfs # now, insert my own script which I want as part of the distribution sudo cp ~/autobackup squashfs-root/usr/sbin/ # now, resquash the filesystem to be able to use it sudo mksquashfs squashfs-root filesystem.squashfs -b 1024k -comp xz -Xbcj x86 -e bootHowever, on that last line, I run into some problems making the filesystem: Source directory entry bin already used! - trying bin_1 Source directory entry dev already used! - trying dev_1 Source directory entry etc already used! - trying etc_1 Source directory entry home already used! - trying home_1 Source directory entry initrd.img already used! - trying initrd.img_1 Source directory entry lib already used! - trying lib_1 Source directory entry lib64 already used! - trying lib64_1 Source directory entry media already used! - trying media_1 Source directory entry mnt already used! - trying mnt_1 Source directory entry opt already used! - trying opt_1 Source directory entry proc already used! - trying proc_1 Source directory entry root already used! - trying root_1 Source directory entry run already used! - trying run_1 Source directory entry sbin already used! - trying sbin_1 Source directory entry selinux already used! - trying selinux_1 Source directory entry srv already used! - trying srv_1 Source directory entry sys already used! - trying sys_1 Source directory entry tmp already used! - trying tmp_1 Source directory entry usr already used! - trying usr_1 Source directory entry var already used! - trying var_1 Source directory entry vmlinuz already used! - trying vmlinuz_1Essentially, since it's overwriting an existing squashfs filesystem, instead of merging duplicate files, it creates new folders and files in the root of the filesystem named bin_1, etc_1, var_1, tmp_1, etc. Obviously, this is not desired. Is there a way that I can force it to merge the directories? I have attempted to run it with -noappend, but that breaks the Clonezilla install and I can't get into the Clonezilla wizard. Any ideas?
Merging preexisting source folders in mksquashfs
The line
/dev/sda7 on /media/Datos type fuseblk (rw)
from mount's output tells you that /media/Datos is an NTFS partition (type fuseblk). NTFS cannot store ownership and permissions in the same way Linux/Unix filesystems like ext{2..4} can. That's why you can set ownership/permissions but they do not persist. You'll need to switch to a "proper" filesystem (e.g. ext4) for that.
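If switching filesystems isn't an option, ntfs-3g at least lets you pick a fixed owner, group and permission mask for the whole volume at mount time (they still can't be changed per file afterwards). A sketch, assuming your user's uid and gid are 1000:
sudo umount /media/Datos
sudo mount -t ntfs-3g -o uid=1000,gid=1000,umask=022 /dev/sda7 /media/Datos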
I am using Ubuntu 12.04 LTS 64-bit to chroot into a SquashFS filesystem just extracted (with unsquashfs) to my hard drive from a Kali Linux v1.0.5 32-bit pendrive, for customizations:
luis@Fujur:$ sudo chroot /media/Datos/Temporal/squashfs/modificando
root@Fujur:/# ls
0 boot etc initrd.img media opt root sbin srv tmp var bin dev home lib mnt proc run selinux sys usr vmlinuz
I have been able to modify files (/etc/rc.local, add users with adduser and some other minor changes to the extracted filesystem), but the newly created users have this problem:
root@Fujur:/# ls /home -la
total 8
drwxrwx--- 1 root plugdev 0 mar 24 22:45 .
drwxrwx--- 1 root plugdev 4096 sep 5 2013 ..
drwxrwx--- 1 root plugdev 4096 mar 24 00:29 luis
drwxrwx--- 1 root plugdev 0 mar 24 22:45 potato
As you can see, the owner is "root" and the group is "plugdev", when both should be the name of the user account (luis/potato in this example). Knowing why this happens would be nice, but I think I could solve it if I could change file/directory permissions; however, I cannot do that either:
root@Fujur:/tmp# cd /home/
root@Fujur:/home# ls -la
total 8
drwxrwx--- 1 root plugdev 0 mar 24 22:45 .
drwxrwx--- 1 root plugdev 4096 sep 5 2013 ..
drwxrwx--- 1 root plugdev 4096 mar 24 00:29 luis
drwxrwx--- 1 root plugdev 0 mar 24 22:45 potato
root@Fujur:/home# chown potato potato
root@Fujur:/home# ls -la
total 8
drwxrwx--- 1 root plugdev 0 mar 24 22:45 .
drwxrwx--- 1 root plugdev 4096 sep 5 2013 ..
drwxrwx--- 1 root plugdev 4096 mar 24 00:29 luis
drwxrwx--- 1 root plugdev 0 mar 24 22:45 potato
And even in /tmp there is no luck:
root@Fujur:/# cd /tmp
root@Fujur:/tmp# ls -la
total 4
drwxrwx--- 1 root plugdev 0 mar 24 22:52 .
drwxrwx--- 1 root plugdev 4096 sep 5 2013 ..
root@Fujur:/tmp# mkdir test
root@Fujur:/tmp# ls -la
total 4
drwxrwx--- 1 root plugdev 0 mar 24 22:57 .
drwxrwx--- 1 root plugdev 4096 sep 5 2013 ..
drwxrwx--- 1 root plugdev 0 mar 24 22:57 test
root@Fujur:/tmp# chmod a+x test
root@Fujur:/tmp# ls -la
total 4
drwxrwx--- 1 root plugdev 0 mar 24 22:57 .
drwxrwx--- 1 root plugdev 4096 sep 5 2013 ..
drwxrwx--- 1 root plugdev 0 mar 24 22:57 test
root@Fujur:/tmp# chmod a-x test
root@Fujur:/tmp# ls -la
total 4
drwxrwx--- 1 root plugdev 0 mar 24 22:57 .
drwxrwx--- 1 root plugdev 4096 sep 5 2013 ..
drwxrwx--- 1 root plugdev 0 mar 24 22:57 test
It is such a strange thing: I can make directories and files, even edit files, but not change permissions. Maybe I have not chrooted correctly? When I chroot into a partition to restore GRUB I do:
$ sudo mount --bind /dev /mnt/dev
prior to chroot, but I think this is not the case. I believe I could be misusing the chroot command. Any ideas, please? ADDED: this is the result of mount.
Outside (before) chroot:
root@Fujur:/# mount
/dev/sda6 on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
/dev/sda7 on /media/Datos type fuseblk (rw)
gvfs-fuse-daemon on /var/lib/lightdm/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=lightdm)
Inside (after) chroot:
root@Fujur:/# mount
warning: failed to read mtab
EDIT: Just tested, and I have the same problem outside chroot, that is, without chrooting: I cannot change file/directory permissions. Still there are no errors, but the changes are not made.
Can not change permissions of files/directories in a chrooted filesystem
Mounting a squashfs file system doesn’t involve decompressing it into memory; decompression is done on the fly, as necessary. There is a small internal cache to avoid repeatedly decompressing the same data, but that’s all. squashfs file systems can store up to 2^64 bytes of data, so it wouldn’t be practical to decompress them fully on mount.
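For the use case in the question, a minimal sketch (assuming squashfs-tools 4.4 or newer for zstd support; duplicate-file detection is enabled by default in mksquashfs):
mksquashfs /path/to/collection collection.sqfs -comp zstd
sudo mount -o loop collection.sqfs /mnt/collection
Only the blocks you actually read get decompressed, so memory use stays modest regardless of the image size.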
Situation: I've got a larger (>10GB) read-only collection of small files with loads of duplicates that I need to have available on multiple machines, even on different file systems. We can assume Linux kernel > 5.3.0. One solution would be to put these into a squashfs image file, use deduplication and zstd compression when creating it, and mount that. Now, this can only work out for me if mounting doesn't mean that all files need to fit in RAM. Is mounting a compressed squashfs file system like that always a decompress-fully-to-RAM business?
Does mounting squashfs put the whole filesystem in RAM?
Your root filesystem is squashfs, which saves some flash space by compressing everything, but as a result is read-only. You can not mount it read-write. Instead, you reflash the device with a new squashfs image. If you need writable storage, you have to partition your flash and mount a second, writable filesystem, of which there are several intended for use on flash storage.
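If reflashing for every change is too heavy, a widely used pattern on embedded devices is to keep the squashfs read-only and stack a writable layer on top with overlayfs (OpenWrt does this, for example). A rough sketch with illustrative paths, assuming the kernel has overlayfs support, using a tmpfs as the writable layer:
mount -t tmpfs tmpfs /tmp/overlay
mkdir -p /tmp/overlay/upper /tmp/overlay/work /mnt/newroot
mount -t overlay overlay -o lowerdir=/,upperdir=/tmp/overlay/upper,workdir=/tmp/overlay/work /mnt/newroot
Writes land in the upper layer (a tmpfs here, so they vanish on reboot); point upperdir and workdir at a writable flash partition instead to make them persistent.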
I am working on an embedded device. The fstab shows the following info: <file system> <mount pt> <type> <options> <dump> <pass> /dev/root / ext2 rw,noauto 0 1 proc /proc proc defaults 0 0 devpts /dev/pts devpts defaults,gid=5,mode=620 0 0 tmpfs /tmp tmpfs defaults 0 0 ramfs /var ramfs defaults 0 0 sys /sys sysfs defaults 0 0Running the mount command I get this: rootfs on / type rootfs (rw) /dev/root on / type squashfs (ro,relatime) proc on /proc type proc (rw,relatime) sys on /sys type sysfs (rw,relatime) devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620) ramfs on /var type ramfs (rw,relatime)Which means that the root filesystem is read-only. How can I remount the read-only part as read-write?
Remount squashfs root filesystem read-write [duplicate]
BusyBox's built-in mount command doesn't recognize -B; you'll have to use -o bind: mount -o bind /mnt/flash/etc /mnt/root/etcAlso, I think the remounting is unnecessary if /mnt/flash is already writable. But try fixing the bind mounting first.
I've created a Gentoo live system which should be booted from a CF card. The whole filesystem is in a squashfs. I've created a custom initrd which first mounts the CF card and, from there, the squashed filesystem into what will become /. I'd like /etc to be writable, so I've copied it to the CF card and added a bind mount. This however does not seem to work: the system boots but /etc is not mounted. I'd like to know if my approach is right and what I can do to fix it, or, if not, what would be the right way to achieve this. Here's the init script from my initrd:
#!/bin/busybox sh
mount -t proc none /proc
mount -t sysfs none /sys
mount /dev/sda1 /mnt/flash
mount -o loop /mnt/flash/filesystem.squashfs /mnt/root
mount -B /mnt/flash/etc /mnt/root/etc
mount -o remount,rw /mnt/root/etc
umount /proc
umount /sys
exec switch_root /mnt/root /sbin/init
This is the shortened output of cat /proc/mounts:
rootfs / rootfs rw 0 0
/dev/sda1 /mnt/flash ext2 rw,relatime,errors=continue,user_xattr,acl 0 0
/dev/loop0 / squashfs ro,relatime 0 0
...
Mount /etc from disc into squashfs
There is more information in the README that may be part of the distributed package, or can be read online under section 3.8, "Pseudo file support". For example,
-p 'mychardev c 666 root root 100 1'
creates a character device with major/minor 100/1. Similarly, if you have a file mylist holding the lines
mydir d 777 0 0
mydir/thedate f 776 0 0 date +'year is %Y'
then -pf mylist will create a directory and a file within it holding the result of running the command date +'year is %Y' at the time mksquashfs was run. The -sort option is not described further, but accepts filenames within the resulting filesystem, followed by a number, for example
b/c 500
b/d 700
where b/c and b/d are found in your squashfs. My version does not recognise names provided through the -p options, and indeed it stops the above date example from working. If necessary you can split up the building of the filesystem into separate mksquashfs commands with different options, and each set of new files will be appended at the end.
From debian stretch man page: Filesystem filter options -p PSEUDO_DEFINITION Add pseudo file definition. -pf PSEUDO_FILE Add list of pseudo file definitions. -sort SORT_FILE sort files according to priorities in SORT_FILE. One file or dir with priority per line. Priority -32768 to 32767, default priority 0.But how to write PSEUDO_DEFINITION, PSEUDO_FILE, and SORT_FILE?
Details (examples) of "pseudo-definition", "pseudo-file", and "sort_file" for mksquashfs?
Squashfs needs a block device to run, thus you need the block emulation over UBI. First make sure it is enabled in your kernel. You can test this by using the ubiblock command on a running system. For example, running ubiblock -c /dev/ubi0_0 will create the devnode /dev/ubiblock0_0. Once you have the dependency, you can enable the UBI block on the cmdline like this: ubi.mtd=2 ubi.block=0,ubi_vol_rom root=/dev/ubiblock0_0 This will use the UBI volume named ubi_vol_rom and create an emulated block device. Then you can use it to mount your root.
I'm trying to use a compressed squashfs ubi volume as my root file system. The idea is to have two ubi volumes. Volume one contains a read-only squashfs file system. Volume two is re-sizable and uses the remaining flash space. It contains a writable ubifs file system. These two ubi volumes are to be overlayed using overlayfs after booting so that I have a writable file system with the ability to restore to factory state by formatting the second (ubifs) volume. I know squashfs works only on block devices, so I'm using gluebi driver to emulate them on top of ubi volumes (this creates mtdx and mtdblockx for each ubi volume): CONFIG_SQUASHFS=y CONFIG_SQUASHFS_LZO=y CONFIG_MTD=y CONFIG_MTD_BLOCK=y CONFIG_MTD_UBI=y CONFIG_MTD_UBI_GLUEBI=y CONFIG_UBIFS_FS=yHere's my ubinize.conf file to create the ubi image: [rom] mode=ubi image=rootfs.squashfs-lzo vol_id=0 vol_type=static vol_name=ubi_vol_rom [overlay] mode=ubi vol_id=1 vol_type=dynamic vol_name=ubi_vol_overlay vol_size=1KiB vol_flags=autoresizeI'm using these MTD partitions for testing: mtd18: 03a00000 00040000 "sys_back" mtd19: 058c0000 00040000 "system"I flashed the ubi image to mtd18 (sys_back), attached it to ubi, mounted the resulting mtdblock and everything worked as intended, so I presume my ubi volume and squashfs file system are correct. # ubiattach -m 18 # mount /dev/mtdblock23 /mnt/ # mount /dev/mtdblock23 on /mnt type squashfs (ro,relatime)So, I wanted to try the final configuration. I flashed the ubi image to mtd19 (system) and modified my kernel parameters to contain this: ubi.mtd=system root=mtd:ubi_vol_rom rootfstype=squashfsHowever, mounting the root file system failed: [ 3.334908] ubi0: attaching mtd19 [ 3.725841] ubi0: scanning is finished [ 3.751239] gluebi (pid 1): gluebi_resized: got update notification for unknown UBI device 0 volume 1 [ 3.759465] ubi0: volume 1 ("ubi_vol_overlay") re-sized from 1 to 203 LEBs [ 3.767111] ubi0: attached mtd19 (name "system", size 88 MiB) [ 3.772007] ubi0: PEB size: 262144 bytes (256 KiB), LEB size: 253952 bytes [ 3.778938] ubi0: min./max. I/O unit sizes: 4096/4096, sub-page size 4096 [ 3.785670] ubi0: VID header offset: 4096 (aligned 4096), data offset: 8192 [ 3.792583] ubi0: good PEBs: 355, bad PEBs: 0, corrupted PEBs: 0 [ 3.798604] ubi0: user volume: 2, internal volumes: 1, max. volumes count: 128 [ 3.805807] ubi0: max/mean erase counter: 3/1, WL threshold: 4096, image sequence number: 1328192 [ 3.814929] ubi0: available PEBs: 0, total reserved PEBs: 355, PEBs reserved for bad PEB handling: 40 [ 3.823843] ubi0: background thread "ubi_bgt0d" started, PID 148 [ 4.639909] UBIFS error (pid: 1): cannot open "mtd:ubi_vol_rom", error -22 List of all partitions: [ 4.647770] 1f00 2560 mtdblock0 (driver?) [ 4.652783] 1f01 2560 mtdblock1 (driver?) [ 4.657822] 1f02 22528 mtdblock2 (driver?) [ 4.662851] 1f03 5120 mtdblock3 (driver?) [ 4.667886] 1f04 3072 mtdblock4 (driver?) [ 4.672925] 1f05 1280 mtdblock5 (driver?) [ 4.677956] 1f06 1536 mtdblock6 (driver?) [ 4.682994] 1f07 1280 mtdblock7 (driver?) [ 4.688030] 1f08 9216 mtdblock8 (driver?) [ 4.693059] 1f09 9216 mtdblock9 (driver?) [ 4.698094] 1f0a 6400 mtdblock10 (driver?) [ 4.703214] 1f0b 14336 mtdblock11 (driver?) [ 4.708339] 1f0c 16896 mtdblock12 (driver?) [ 4.713458] 1f0d 61440 mtdblock13 (driver?) [ 4.718582] 1f0e 1280 mtdblock14 (driver?) [ 4.723701] 1f0f 30720 mtdblock15 (driver?) [ 4.728826] 1f10 57344 mtdblock16 (driver?) [ 4.733945] 1f11 127232 mtdblock17 (driver?) [ 4.739069] 1f12 59392 mtdblock18 (driver?) 
[ 4.744228] 1f13 90880 mtdblock19 (driver?) [ 4.749313] 1f14 26676 mtdblock20 (driver?) [ 4.754471] 1f15 50344 mtdblock21 (driver?) [ 4.759552] No filesystem could mount root, tried: ubifs [ 4.764942] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) [ 5.837944] Rebooting in 5 seconds..So from the log I can tell that ubi attached to mtd19 as expected, resized the second partition (ubi_vol_overlay), created two mtd partitions from the ubi volumes (mtd20 and mtd21), and created two block devices on top of these (mtdblock20 and mtdblock21), great. However, mounting the squashfs filesystem (mtdblock20) failed. The log says it tried mounting with as ubifs even though I explicitly said to use squashfs via the rootfstype argument. At first I thought that maybe the name of the device wasn't resolving correctly in the root= parameter, so I've tried using /dev/mtdblock20 but it resulted in the same. How can I force the kernel to mount it with squashfs instead of ubifs?
Using squashfs on top of ubi as root file system
Since file doesn't recognize it, the vendor probably used a custom SquashFS magic signature. I expect that unsquashfs is also giving you an error about not being able to find a valid superblock. Give sasquatch a try; it's a modified version of unsquashfs that attempts to support such vendor hacks.
Debian 7.0; I extracted the firmware.bin image using binwalk. The extracted content is a squashfs-root folder containing subdirectories, and a separate file.squashfs file. I tried to unsquashfs this file.squashfs file, but the operation fails:
unsquashfs -l file.squashfs
Can't find a SQUASHFS superblock on file.squashfs
What is the problem? EDIT: yes, sasquatch file.squashfs works:
sasquatch D1000.squashfs
SquashFS version [768.256] / inode count [-1073676288] suggests a SquashFS image of a different endianess
Non-standard SquashFS Magic: qshs
Reading a different endian SQUASHFS filesystem on D1000.squashfs
Parallel unsquashfs: Using 2 processors
Trying to decompress using default gzip decompressor...
Trying to decompress with lzma...
Detected lzma compression
413 inodes (430 blocks) to write
unsquashfs fails
You're basically looking for find /tmp/mnt -iname '*.squashfs' -exec unsquashfs {} \;{} is replaced by the path to the matched file. If you want to specify which directory to extract to, pass the -d option. find /tmp/mnt -iname '*.squashfs' -exec unsquashfs -d /tmp/unsquashedfs/files {} \;
I am looking for something that will find the squashed filesystem and then pass the output to unsquash the fs, and it would have to be an absolute path outside the squashfs. Example:
/tmp/mnt/live/filesystem.squashfs
Desired output:
/tmp/unsquashedfs/files
I am fiddling with lines of code like
find /tmp/mnt -iname '*.squashfs' -exec unsqaushfs '*.squashfs' {} \;
find /tmp/mnt -print0 -iname "*.squashfs" | unsquashfs "*.squashfs" -T -
but can't get them to work. Any help from somebody would be appreciated!
Pipe Find Results for '*.squashfs' to unsquashfs
After some research, it seems that SquashFS is a read-only filesystem and writing into it is not possible, so even though you could chroot into it by installing squashfs support, the only way to change the contents is something like these instructions, which can be summarized as:
Mount the SquashFS and extract the contents to a loop device or directory.
Edit whatever is needed (as chrooting is possible now) in that loop device or directory.
Recreate a new SquashFS from the loop device.
As of today, it seems there is nothing easier. EDIT: it seems these other instructions are more compact. EDIT2: no need for a loop device on step 1. You can just chroot into the directory of extraction.
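A minimal sketch of that cycle, with illustrative paths (the bind mounts are only needed if something inside the chroot wants /dev or /proc):
unsquashfs -d /tmp/edit /path/to/live/filesystem.squashfs
sudo mount --bind /dev /tmp/edit/dev
sudo mount --bind /proc /tmp/edit/proc
sudo chroot /tmp/edit /bin/bash
# make changes, then exit the chroot
sudo umount /tmp/edit/proc /tmp/edit/dev
sudo mksquashfs /tmp/edit filesystem.squashfs.new -comp xz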
I would like to chroot into a Live USB Linux distro, if such a thing is possible. I don't know if there is a generic method, so I will detail my specific test. I am testing Kali Linux v1.0.5 Live USB created from Windows using "Universal USB Installer". This is the root of the pendrive:
23/12/2013 01:12 am <DIR> uui
05/09/2013 09:50 am <DIR> .disk
05/09/2013 09:51 am 25 autorun.inf
05/09/2013 09:47 am <DIR> dists
05/09/2013 09:46 am <DIR> firmware
05/09/2013 09:51 am 159.629 g2ldr
05/09/2013 09:51 am 8.192 g2ldr.mbr
05/09/2013 09:50 am <DIR> install
05/09/2013 09:51 am <DIR> isolinux
05/09/2013 09:49 am <DIR> live
05/09/2013 09:52 am 42.803 md5sum.txt
05/09/2013 09:47 am <DIR> pool
05/09/2013 09:51 am 366.350 setup.exe
05/09/2013 09:50 am <DIR> tools
05/09/2013 09:51 am 223 win32-loader.ini
11/01/2013 05:55 pm 49.070 Uni-USB-Installer-Copying.txt
24/11/2013 10:22 pm 18.233 Uni-USB-Installer-Readme.txt
04/04/2012 08:42 pm 18.092 license.txt
01/01/2014 09:23 pm <DIR> Instalac
9 archivos 662.617 bytes
10 dirs 1.486.944.256 bytes libres
...and here is what I suppose is the important part:
Directorio de k:\live
23/12/2013 01:12 am <DIR> .
23/12/2013 01:12 am <DIR> ..
05/09/2013 09:46 am 60.319 filesystem.packages
05/09/2013 09:46 am 159 filesystem.packages-remove
05/09/2013 09:45 am 2.410.737.664 filesystem.squashfs
05/09/2013 09:46 am 17.296.271 initrd.img
05/09/2013 09:47 am 176.764 memtest
05/09/2013 09:46 am 2.250.960 vmlinuz
6 archivos 2.430.522.137 bytes
2 dirs 1.486.944.256 bytes libres
I think that the filesystem.squashfs file is the important part here, but I am not sure. Thanks for any help.
How can I chroot into a live filesystem.squashfs Linux distribution?
Cannot comment so I'm writing an answer:
1) I agree with Patrick that the most probable source of your problems is hardlink 'multiplication'. With a normal cp you create as many copies of a hardlinked file as its hardlink count. Use rsync or cp -a instead.
2) An even better solution would be to simply unsquashfs the image, so you can skip the loop-mount step.
3) Digging even deeper, you can play around with aufs or unionfs :)
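For point 1, a sketch of a copy step that keeps hardlinks as hardlinks (rsync needs -H explicitly, while GNU cp -a preserves them as part of --preserve=all):
rsync -aH /mnt/squash/ /tmp/squash_mod/
Note the trailing slash on the source, which copies the directory's contents rather than the directory itself.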
Here I'm trying to create a squashfs filesystem, but the resulting image is bigger than the original version, and not because I added a file or anything: I only modified some configuration files. What I'm trying to do is modify the existing squashfs filesystem on a live USB and delete some info to start the OS in a login shell. Since I fixed an amount of space for the ext4 partition, I need the modified squashfs to have the same size as the original. I can do the changes while the live system is running, but since I'm making a script to automate this process I need to do it before creating the live USB itself. The problem comes when recreating the image, as the resulting file is about 400 MB larger than the original, and when I use the -b 4096/1Mbyte option the image is about 800 MB larger, while the original file is about 2.2 GB. I did the same before to add my script to the filesystem and it worked great, but now I can't understand what happened this time. I searched my backup of .bash_history, but with no luck. How can I reduce the image size? What am I doing wrong? Edit:
# Create Directories
mkdir /mnt/kali-iso
mkdir /mnt/squash
mkdir /tmp/squash_mod
# Mount ISO And Squashfs Image
mount /root/kali.iso /mnt/kali-iso
mount /mnt/kali-iso/live/filesystem.squashfs /mnt/squash -o loop
# Copy All Files To A Temp Directory To Modify Them
cp -rf /mnt/squash/* /tmp/squash_mod
cp /root/foo.sh /tmp/squash_mod/root/Desktop/
# Create Squashfs
mksquashfs /tmp/squash_mod filesystem.squashfs
# or: mksquashfs /tmp/squash_mod filesystem.squashfs -b 4096 # or 1Mbyte
Results from the "du" command:
du -ch /mnt/squash | grep "total" = 6.6G total
du -ch /tmp/squash_mod | grep "total" = 7.2G total
There are some discrepancies between the same folders; their sizes are different:
"/tmp/squash_mod/sbin = 8.8mb" "/mnt/squash/sbin = 8.5mb"
"/tmp/squash_mod/var = 309mb" "/mnt/squash/var = 282mb"
"/tmp/squash_mod/bin = 7.0mb" "/mnt/squash/bin = 6.8mb"
Compression Level Mksquashfs
The initramfs OpenWRT/LEDE kernel builds include the rootfs image in an initramfs attached to the kernel, so during bootup the filesystem is placed in a ramdisk and used as /. You don't need such builds if the regular flash-based storage works for you, as they won't allow any persistent configuration by default. Such a configuration is useful during initial OpenWRT/LEDE porting efforts, when you don't yet have a flash driver configured for the flash chip on the device.
Do you know what initramfs-kernel means? I know squashfs-factory/squashfs-sysupgrade, but I just don't understand what initramfs-kernel means, when I would use it, or which is better. I have a Linksys 1900ACS v2 and a D-Link DIR-860L B1, but I only use squashfs-factory and squashfs-sysupgrade; I would even fear to install the initramfs one. So, taking lede-17.01.2-ramips-mt7621-dir-860l-b1-initramfs-kernel.bin as an example: what is it, can I use it, and if so, what is the difference from lede-17.01.2-ramips-mt7621-dir-860l-b1-squashfs-factory.bin (which I know what it does and how) or lede-17.01.2-ramips-mt7621-dir-860l-b1-squashfs-sysupgrade.bin (which I know what it does and how)?
wrt (openwrt / lede) initramfs
The correct command for this board is: setenv bootargs ${bootargs} single init=/bin/sh(there is no bash installed)
Embedded device, Linux version 2.6.26.5, U-Boot 2009.03 bootloader. ARM Linux Kernel Image on NAND flash, loading from NAND. How to access the filesystem as the root user, and to reset the root password? Is it possible to get this by supplying single boot argument (single-user mode) to Linux kernel via U-boot parameters? Or by adding init=/bin/bash argument to the end of the boot parameters. The output of bootargs and bootcmd environment variables: Kernel command line: console=ttyS1,115200n8 rootfstype=squashfs noalign half_image=0 verify=y Hw_Model=RZU017 Router_Mode=0
Access the filesystem as the root user
GNU tar can read xattrs and do filtering on them; see https://www.gnu.org/software/tar/manual/html_node/Extended-File-Attributes.html. A Squashfs filesystem can now be created from a tar archive using sqfstar (in squashfs-tools 4.5 and later), so something like
tar --xattrs --xattrs-exclude='user.*' -c your-directory | sqfstar img.sqfs
should work.
I have a directory containing a root filesystem that I SquashFS and then mount as r/o on other boxes. However, before SquashFS'ing, i want to clear all the user-namespace xattrs from the filesystem. This is trivial to do with a small getfattr and setfattr loop script, but I want to avoid introducing those as a dependency. The sticky part is that I only want to clear the user xattrs. I need to preserve the security xattrs as they contain my SELinux labels. So simply telling Squash to not collect the xattrs is not an option. One idea i had was to get the size of the filesystem, allocate a blank file of that size, format it as ext4, and mount it with -o nouser_xattrr, copy the files there, and then Squash them. This would work, but it's cumbersome, and i have to hope that I have enough space for the file. A tmpfs won't work because no xattr support, plus i may run out of memory. Any other ideas?
Ideas to clear user xattrs from files without get/setfattr
The lxc.hook.pre-mount hook gets executed before the rootfs gets mounted:
lxc.hook.pre-mount = /var/lib/lxc/container0/mount-squashfs.sh
lxc.rootfs.path = overlayfs:/var/lib/lxc/container0/rootfs:/var/lib/lxc/container0/delta0
And in the mount script:
#!/bin/bash
mount -nt squashfs -o ro /var/lib/lxc/container0/rootfs.sqsh /var/lib/lxc/container0/rootfs
We are using a Centos LXC container with the rootfs contained in a squashfs filesystem. I really like the fact that a user cannot edit the rootfs from the host. During the development, developers would infact like to make changes to the filesystem, and I'd like to move to an overlayfs. But I notice that although the upper layer can be used to make changes to the lower layer, it is also possible to make changes to lower layer rootfs by simply editing the files on the host. How can I prevent this?
LXC Container with Overlayfs/Squashfs
There wasn't a way to do that until OpenSSH 7.6. From the manual:
RemoteCommand
Specifies a command to execute on the remote machine after successfully connecting to the server. The command string extends to the end of the line, and is executed with the user's shell. Arguments to RemoteCommand accept the tokens described in the TOKENS section.
So now you can have
RemoteCommand cd /tmp && bash
It was introduced in this commit.
I'd like to set up ssh_config so that after just typing ssh my_hostname I end up in a specific folder, just like I would after typing cd /folder/another_one/much_much_deeper/. How can I achieve that? EDIT: It has been marked as a duplicate of "How to ssh into dir...", yet that is not my question. I know I can execute any commands by appending them to the ssh command. My question is about the ssh_config file, not the command.
Remote command in ssh config file
This doesn't directly answer the question; however, it has become the fix for the root problem for me, so I think it deserves a post. What I really wanted was to auto-enter tmux. Instead, I just started using byobu, which builds on top of tmux but is way more comfortable. For example, it supports this use case out of the box. After installing byobu, just run:
byobu-enable
And you're done. The next interactive session will enter byobu automatically, no matter the shell. Non-interactive connections will work as usual.
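(Should you later want a plain shell by default again, running byobu-disable reverts this.)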
Currently I use Fish as my main shell on local and remote hosts. I connect to remote hosts via ssh and sftp. I wanted to open or reuse a remote tmux whenever I connect, automatically, by default; so I added this to my ~/.ssh/config: Host example.com RemoteCommand tmux a; or tmux RequestTTY yesThe problem is that now I cannot connect through sftp, nor can I run a direct command from my local CLI: ➤ ssh example.com ping localhost Cannot execute command-line and remote command.➤ sftp example.com Cannot execute command-line and remote command. Connection closedSo, my question is: How can I define a default command to be executed when opening a new interactive SSH session, but make it overridable?
How to configure SSH with a RemoteCommand only for interactive sessions (i.e. without command, or not sftp)
So in ssh.c for OpenSSH 7.6p1 we find
case 'N':
    no_shell_flag = 1;
    options.request_tty = REQUEST_TTY_NO;
so -N does two things:
- the no_shell_flag only appears in ssh.c and is only enabled for the -W or -N options; otherwise it appears in some logic blocks related to ControlPersist and sanity checking involving background forks. I do not see a way an option could directly set it.
- according to readconf.c the request_tty corresponds to the RequestTTY option detailed in ssh_config(5).
This leaves (apart from monkey patching OpenSSH and recompiling, or asking for a ssh_config option to toggle no_shell_flag with...) something like:
Host devdb
    User someuser
    HostName the_hostname
    LocalForward 1234 127.0.0.1:1234
    RequestTTY no
    RemoteCommand cat
Which technically does start a shell, but that shell should immediately replace itself with the cat program, which should then block, allowing the port forward to be used meanwhile. cat is portable, but will consume input (if there is any) or could fail (if standard input is closed). Another option would be to run something that just blocks.
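On systems where the remote sleep is GNU coreutils, sleep infinity is one such blocker that never reads stdin (an assumption about the remote side; a portable fallback is a large number like sleep 2147483647):
Host devdb
    User someuser
    HostName the_hostname
    LocalForward 1234 127.0.0.1:1234
    RequestTTY no
    RemoteCommand sleep infinity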
I want to set up an alias in my config file that has the same result as this command: ssh -N devdb -L 1234:127.0.0.1:1234My .ssh/config entry for devdb: Host devdb User someuser HostName the_hostname LocalForward 1234 127.0.0.1:1234What do I put in the above config to not start a shell?
What is the .ssh/config corresponding option for ssh -N
Don't put RemoteCommand in the configuration file. Having RemoteCommand is occasionally useful to define an alias for a host such that ssh myalias runs a specific command. It isn't useful in a general-purpose entry. As you've noticed, it prevents doing anything other than running that specific command: you can't use rsync, sftp, sshfs, or even run ssh myhost specific-command interactively. If you want to default to running tmux when you connect to a host, there are two sensible solutions:
On the server, edit your .profile or similar login-time script (.zprofile, .config/fish/config.fish, .login, …) to run tmux when logging in interactively. See e.g. How can I run a script immediately after connecting via SSH?, Run tmux on ssh login.
On the client, define a wrapper around ssh that runs tmux. For example a shell function:
tsh () {
  ssh "$@" tmux new -ADs remote
}
A Match host directive wouldn't help since you aren't interested in doing something differently based on the host name. I don't think the ssh client has a way to do things differently depending on whether a command was passed to ssh. You can execute code with Match exec. I don't think there's a clean way to detect whether ssh was invoked with a command, but a dirty way might be good enough for you.
Host myhost
Match exec "ps -o args= $PPID | grep -v ' .* '"
    RemoteCommand if [ -t 0 ]; then exec tmux new -ADs remote; fi
If ssh was invoked with just a host name and no option, run tmux. If ssh was invoked with at least one option or with a command in addition to the host name, the RemoteCommand directive isn't applied. Also don't run tmux if the input doesn't come from a terminal (e.g. echo ls | ssh myhost). This should take care of most cases, erring on the side of not running tmux (e.g. ssh -L … myhost won't run tmux).
I have defined a ssh_config file with all the hosts to which I connect on a regular basis. I like to start/connect to a tmux session upon connection to the host, so I've added the line RemoteCommand tmux new -ADs remote in my config. The problem is that if at some point I want to use rsync over ssh (which I do every now and then), I have the following error: Cannot execute command-line and remote command.In order to solve it, I have to comment out the RemoteCommand line in my config file, and not forget to uncomment it afterwards, which is a bit annoying... Potential solutions (undesirable or not working):I have tried to use the flag -N which means "do not execute remote command" but the command hangs indefinitely. I would prefer not to create alias hosts in my config file, because that means that the number of entries would increase by a factor of 2 for only one minor change. I was not able to use Match Host to only execute a remote command when not using rsyncAnyone knows a configuration trick or a workaround that might help me in this case? Thank you very much!
How to bypass RemoteCommand option in ssh_config
I see a couple more options, from the answer here. Option 1:
-o "UserKnownHostsFile /dev/null"
Option 2: If you want this behavior because you're working with cloud servers (AWS EC2, Rackspace CloudServers etc.) or you're constantly provisioning new images in Vagrant, you may want to update your SSH config instead of adding bash aliases or more options on the command line. Consider adding something like:
Host *.mydomain.com
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
    User foo
    LogLevel QUIET
Use as strict a host pattern as possible, to be secure. Setting the LogLevel to QUIET will keep the warning which Guillaume mentioned from showing up.
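As a one-off on the command line, the same pieces combine like this:
ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o LogLevel=QUIET 192.168.1.77
Keep in mind that with the known-hosts file pointed at /dev/null, host authenticity is effectively never checked, so reserve this for networks you trust.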
I get the following output if I run the following command: -bash-3.2$ ssh -o "StrictHostKeyChecking no" 192.168.1.77 Warning: Permanently added '192.168.1.77' (RSA) to the list of known hosts. Last login: Fri Jul 4 10:49:11 2014 from chlorine.example.com Sun Microsystems Inc. SunOS 5.10 Generic January 2005 -bash-3.2$I would like to run this command, without 192.168.1.77 being added to the list of known hosts, but still permitting a successful login. Is there an SSH option that allows this? I have gone through the man page for ssh_config and I've tried all the likely options such as setting "CheckHostIP no" with no success. Both the local and remote systems are running Solaris 10. If necessary, I could back up my $HOME/.ssh/known_hosts file prior to making a connection, and restore it after making the connection, but if there is an SSH option that allows me to avoid doing this, then I would prefer to use that.
Using SSH to connect to a new server without storing the host keys in the $HOME/.ssh/known_hosts file
Match is rather on-par with Host. It doesn't exist as a subset of Host the way other options do. But you can specify multiple criteria on a match, and they appear to operate as a short-circuit AND. So this should be possible and useful for you: Match host target_host exec not_inside_network ProxyCommand ssh -W %h:%p proxy_serverThis rule will be checked on every ssh. But for hosts not matching "target_host", the match immediately fails and moves to the next Match or Host keyword (if any). Only if the host is "target_host" will the exec occur. Then the truth of that statement will determine whether or not the ProxyCommand is invoked. To see the logic occur, run with -vvv. You should see some match checks at debug3.
I have a host which I ssh into. Sometimes I'm inside the same network, and can ssh directly into it, other times I'm outside it and I need to use a proxy. Because ssh via the proxy server is much slower than direct, I'd like to have my ssh config set up such that I try to connect directly, falling back to the proxy if that fails. Currently the config looks like: Host proxy_server User user Port port Hostname some_domain Host target_host User user Port port Hostname ip_addr_of_host Match exec not_inside_network ProxyCommand ssh -W %h:%p proxy_serverThe target_host entry is the last entry in my config file, yet not_inside_network gets called by any ssh connection to unrelated servers in the config file. How can I make Match only apply to this one server?
Only apply Match keyword to single Host in ssh config
You should be able to do this with the Match directive e.g. Host host2 HostName host2.some.dom.ain Match user user1 IdentityFile ~/.ssh/id_user1 Match user user2 Identityfile ~/.ssh/id_user2
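You can verify which key each destination will pick, without actually connecting, by dumping the client's resolved configuration (ssh -G is available in OpenSSH 6.8 and later):
ssh -G user1@host2 | grep -i identityfile
ssh -G user2@host2 | grep -i identityfile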
I would like to specify a specific identity file based on the user I am ssh'ing as to a server. For example when ssh as user1 from host 1 to host 2 as user1 [user1@host1 ~]$ ssh user1@host2I would like to use a certain identity file. However when I ssh as user1 from host1 to host2 as user2, I would like to use a different identity file [user1@host1 ~]$ ssh user2@host2Now, I can do this by specifying the identity file in the command, [user1@host1 ~]$ ssh -i ~/.ssh/id_user1 user1@host2[user1@host1 ~]$ ssh -i ~/.ssh/id_user2 user2@host2but I would love to do it in my ~/.ssh/config file. I tried the following, but it does not seem to work Host user2@* IdentityFile ~/.ssh/id_user2Host user1@* IdentityFile ~/.ssh/id_user1Any and all help is appreciated. If this has to be configured somewhere else, that is fine as well. I would just like to avoid specifying it on the command line. Would really love to figure this out as it would be a cool solution to my problem!
Specify Specific Identity file when ssh'ing as certain user in ~/.ssh/config
Note that ~/.ssh/config is the client configuration, so copying sshd_config there won't reset anything. The sshd configuration file lives at /private/etc/ssh/sshd_config. To reset it, restore that file from a backup; in the stock file most options are commented out showing their default values, so you can also just remove or re-comment the lines you changed. Then restart sshd:
sudo launchctl stop com.openssh.sshd
sudo launchctl start com.openssh.sshd
I have changed some stuff within the sshd_config file and want to reset the file to its default settings. How would I go about doing this?
How to reset the sshd_config file to its default settings
One should use the Shell link... submenu instead of the SFTP link... one. Type something like this in the address field: sh://myhostalias/~ or simply myhostalias (see the ssh_config example above). Concerning SFTP link..., I didn't manage to use it that way from the mc GUI. Anyway, using aliases with sftp from the CLI is straightforward.
Problem description: I try to connect to a remote server in one of the two panels of Midnight Commander using the SFTP link... submenu. Unfortunately, mc does not pass my ~/.ssh/config file to sftp, therefore typing sftp://myhostalias results in the error message
Cannot chdir to "/sftp://myhostalias"
Here is the content of ~/.ssh/config:
Host myhostalias
    HostName server.url.domain
    User myusername
    IdentityFile ~/.ssh/id_rsa-myhostalias
Please note the following:
sftp://myusername@server.url.domain works fine, followed by inputting my password. I guess this should not work if password authentication is disabled.
sftp myhostalias from the terminal also works fine.
Question: What way should I connect to the remote server from the mc SFTP link... menu using the aliases from my current ssh config?
Midnight Commander: sftp connection using aliases from ssh config
The certificate model of authentication used by SSH is a variation of the public key authentication method. With certificates, each user's (or host's) public key is signed by another key, known as the certificate authority (CA). The same CA can be used to sign multiple user or host keys. The user or the host can then trust a single CA instead of having to trust each individual user/host key. Because this is a change in the authentication model, implementing certificates requires changes on both the client and the server side. Also, do note that the certificates used by SSL (the ones generated by openssl) are different from the ones used by SSH. This topic is explained by these QAs at the Security SE: What is the difference between SSL & SSH?, Converting keys between OpenSSL and OpenSSH. Now, since the question is about how a client could connect to a server using an SSH certificate, let's look at that approach. The manual page for ssh-keygen has some relevant information:ssh-keygen supports signing of keys to produce certificates that may be used for user or host authentication. Certificates consist of a public key, some identity information, zero or more principal (user or host) names and a set of options that are signed by a Certification Authority (CA) key. Clients or servers may then trust only the CA key and verify its signature on a certificate rather than trusting many user/host keys. Note that OpenSSH certificates are a different, and much simpler, format to the X.509 certificates used in ssl(8). ssh-keygen supports two types of certificates: user and host. User certificates authenticate users to servers, whereas host certificates authenticate server hosts to users. To generate a user certificate: $ ssh-keygen -s /path/to/ca_key -I key_id /path/to/user_key.pub The resultant certificate will be placed in /path/to/user_key-cert.pub. A host certificate requires the -h option: $ ssh-keygen -s /path/to/ca_key -I key_id -h /path/to/host_key.pub The host certificate will be output to /path/to/host_key-cert.pub.The first thing we'll need here is a CA key. A CA key is a regular private-public key pair, so let's generate one as usual: ssh-keygen -t rsa -f caThe -f ca option simply specifies the output filename as 'ca'. This results in the two files being generated - ca (private key) and ca.pub (public key). Next, we'll sign our user key with the CA's private key (following the example from the manual): ssh-keygen -s path/to/ca -I myuser@myhost -n myuser ~/.ssh/id_rsa.pubThis will generate a new file named ~/.ssh/id_rsa-cert.pub which contains the SSH certificate. The -s option specifies the path to the CA private key, the -I option specifies an identifier that is logged at the server-side, and the -n option specifies the principal (username). The contents of the certificate can be verified by running ssh-keygen -L -f ~/.ssh/id_rsa-cert.pub. At this point, you're free to edit your configuration file (~/.ssh/config) and include the CertificateFile directive to point to the newly generated certificate. As the manual indicates, the IdentityFile directive must also be specified along with it to identify the corresponding private key. The last thing to do is to tell the server to trust your CA certificate. You'll need to copy over the public key of the CA certificate to the target server. This is done by editing the /etc/ssh/sshd_config file and specifying the TrustedUserCAKeys directive: TrustedUserCAKeys /path/to/ca.pubOnce that is done, restart the SSH daemon on the server. 
On my CentOS system, this is done by running systemctl restart sshd. After that, you will be able to log in to the system using your certificate. Tracing your ssh connection using the verbose flag (-v) will show the certificate being offered to the server and the server accepting it. One last thing to note here is that any user key signed with the same CA key will now be trusted by the target server. Access to the CA keys must be controlled in any practical scenario. There are also directives such as AuthorizedPrincipalsFile that can be used to limit the access from the server side. See the manual for sshd_config for more details. On the client side, the certificates can also be created with tighter specifications. See the manual for ssh-keygen for those details.
I have a configuration as below in my ~/.ssh/config file: Host xxx HostName 127.0.0.1 Port 2222 User gigi ServerAliveInterval 30 IdentityFile ~/blablabla # CertificateFile ~/blablabla-cert.pubwhich works fine but I'm curious about how would one generate the CertificateFile if really wants to use it? Consider one already has the private and public RSA keys generated with e.g. openssl req -newkey rsa:2048 -x509 [...].
How to generate a certificate file which to be used with ssh config?
Use a different order of the Host blocks. Host * matches everything, and ssh_config does not overwrite options that are already set:
Host hostname0
    Hostname foo
Host hostname1
    Hostname bar
    Port 0
    ProxyCommand ssh -W %h:%p hostname0
Host hostname2
    User username2
    Hostname bat
    Port 1
    ProxyCommand ssh -W %h:%p hostname1
Host *
    User username0
Moving the Host * to the end will make it work for you again.
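This is the documented behavior: per ssh_config(5), for each parameter the first obtained value is used, so more specific Host sections must come before the generic Host * section.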
I want to log into a linux server using two sequential bastion hosts. My .ssh/config file looks something like this: Host * User username0 Host hostname0 Hostname foo Host hostname1 Hostname bar Port 0 ProxyCommand ssh -W %h:%p hostname0 Host hostname2 User username2 Hostname bat Port 1 ProxyCommand ssh -W %h:%p hostname1My username on hostname0 and hostname1 is username0 but my username on hostname2 is username2. The entries for hostname0 and hostname1 work as expected. But the entry for hostname2 appears to ignore the User option. ssh hostname2 causes this to be displayed: username0@hostname2's password:If I change the ProxyCommand for hostname2 to ssh -l username2 -W %h:%p hostname1 then it asks me for the password for username2@hostname1. username2@hostname1's password which makes sense as I am asking it to log into hostname1 as username2, but it obviously doesn't work as my username is actually username0. How can I configure ssh to use the correct username in each situation?
SSH with a bastion host and different usernames
It is not possible to have the Port option in the configuration file run a script to pull in a variable port number. But you can write a bash function that fetches the port for you and puts it in the right place. For example, place the following in ~/.bashrc:
function ssh-dynamic() {
    PORT=$(sh get_port_for_somehost.sh)
    ssh -p "$PORT" somehost "$@"
}
while the other configuration can stay in ~/.ssh/config.
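A hedged alternative worth noting: while Port itself cannot be scripted, the ProxyCommand string is run through your shell, so command substitution does work there (with a POSIX shell). Assuming the tunnel endpoint is on the local machine and a netcat is available, something like this keeps everything inside ~/.ssh/config (a sketch only; get_port_for_somehost.sh is the script from the question):
Host somehost
    User bryan
    ProxyCommand nc 127.0.0.1 $(sh get_port_for_somehost.sh)
The substitution is evaluated each time you connect, so it picks up the tunnel's current port.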
In the environment I work in, we use tunnels to SSH to various servers. For example, I'll ssh -p XXXXX username@localhost to reach the server. If the port was always the same, I could do this, and I'd be done:
Host somehost
    User bryan
    Hostname localhost
    Port 12345
    ProxyCommand ssh -p 2218 username@first.server.com -W %h:%p
However, the port used can and will change if the tunnel goes down and comes back up. This isn't something I have the ability to change; it's built into the infrastructure. So, I wrote a program to find the current port. But I don't know how to either: a) run that program and use the output for the %p variable; or b) run a cron job on first.server.com to write out a text file with the port in it, or set an environment variable, or something. In effect, I want to do this. Is it possible?
Host somehost
    User bryan
    Hostname localhost
    Port `sh get_port_for_somehost.sh`
    ProxyCommand ssh -p 2218 username@first.server.com -W %h:%p
The only thing I can think of right now is to run a program on my laptop which rewrites my .ssh/config after going and querying what the ports currently are, but I'd prefer not to do that.
.ssh/config ProxyCommand with a variable port
You should set the SHELL environment variable to the full path of your shell, not simply to bash or zsh. Try: SHELL=/bin/bash ssh user@host
Testing out the SSH Match Exec feature. I have this minimal ~/.ssh/config: Match Exec echo ServerAliveInterval 60and I am running ssh localhostI get Unable to execute 'echo': No such file or directoryThis is true regardless of whether I use a full path or not, or using quotes whether double or single. I tried putting a fake echo script in my .ssh folder as well. I have tried multiple commands (test, nc, connect). It seems the Exec feature cannot see my path at all. I am running WSL Debian with OpenSSH. My final goal is to test if $http_proxy is reachable in the match clause in order to automate proxy usage, but getting the above to work would be enough.
Match Exec failing to execute anything
ssh bla@bla "ls ; bash"This doesn't disconnect after running ls, but I don't get a full terminal interface, just some bare bones command line that doesn't show the me@machine:~$ thing.If you specify a command (e.g. ls ; bash above) the SSH server will not provide a pseudo-terminal. You have observed exactly this. On the other hand sole ssh bla@bla allocates a pseudo-terminal by default. So this is what you want. Force pseudo-terminal allocation, use -t or RequestTTY.Not even the RequestTTY option helped.It won't help you if the remote command is ls only. You need ls; bash or similar. It seems when you requested a tty, you did not request bash. And when you requested bash after ls, you did not request a tty. You need both: ssh -t bla@bla "ls; bash"Note ls will also use the pseudo-terminal. Please read the "broader picture" section of this answer of mine to learn the difference (especially where it reads "there's a quirk").
I want to run a command automatically after I connect to another machine via ssh, without the ssh session being closed automatically. After searching the internet I found some solutions but none of them work the way I need. ssh bla@bla "ls"This runs the ls command on the remote machine, shows me the output and closes the connection. I also tried editing the ssh config file with Host bla HostName bla User bla IdentityFile ~/.ssh/key RemoteCommand ls RequestTTY force Same issue, this connects via ssh, runs the ls command, shows me the output and then exits. Not even the RequestTTY option helped. ssh bla@bla "ls ; bash"This doesn't disconnect after running ls, but I don't get a full terminal interface, just some bare bones command line that doesn't show the me@machine:~$ thing. What I actually want: Either some bash alias, or .ssh/config entry that will allow me to type in a simple command which will connect to the remote machine and then run a command there and leave the terminal open. Basically as if I did this by hand: type ssh blah@blahand then after it connects I would type ls
How do I run a command after a ssh connection, without disconnecting?
I don't think your configuration is the same as that one-liner; it looks more like this:
ssh -o ProxyCommand='sshpass -p mypassword ssh -o ProxyCommand="ssh gateway -W %h:%p" h_act' myusername@myip
i.e. you have sshpass running inside a ProxyCommand. But I don't think that will work: sshpass wraps the ssh client in a pseudo-terminal, hiding the fact that the password actually comes from a file or something other than user input from a terminal. To do this, it needs to run before the ssh client runs. If your first one-liner works, but you just don't feel like typing the sshpass each time, you can wrap it in a shell script:
#!/bin/sh
sshpass -f passwordfile ssh -o ProxyCommand="ssh gateway -W %h:%p" "$@"
Then run with something like sshscript myusername@myip. As an aside, don't use sshpass -p: it will make the password visible in ps output for as long as the ssh client (and sshpass) runs. Better to use sshpass -e to pass the password through the environment, or sshpass -f file to read it from a file.
I want to use .ssh/config to connect to a host through a gateway. I don't want to set up an RSA key and have to use password. I have previously done this kind of hopping without password. Now trying to do it with password. A direct command that works for me is: sshpass -p mypassword ssh -o ProxyCommand="ssh gateway -W %h:%p" myusername@myip Where I have already set up key auth in gateway and it's details are present in my .ssh/config. To set this up in .ssh/config, I tried the following: Host h_act <username, hostname, port etc.> ProxyCommand ssh gateway -W %h:%p Host h ProxyCommand sshpass -p mypassword ssh h_actHowever, when I try ssh h, I get Pseudo-terminal will not be allocated as stdin is not a terminal. I tried -vtt with ssh to get weird messages, but no terminal. I know that a chain of ProxyCommand works when there is a netcat/nc or just ssh -W with it. But here, it is not working. Even when I try the last command without sshpass, I get the same error. I am guessing it has to do with certain expectations that ProxyCommand has with the command that follows and I am not able to fulfill them. Any ideas?
Multihop with sshpass
The option you're looking for is RequestTTY. From the ssh_config man page:RequestTTY Specifies whether to request a pseudo-tty for the session. The argument may be one of: `no' (never request a TTY), `yes' (always request a TTY when standard input is a TTY), `force' (always request a TTY) or `auto' (request a TTY when opening a login session). This option mirrors the -t and -T flags for ssh(1).force is equivalent to -tt, and yes is equivalent to -t Host interactive HostName example.com User user RequestTTY yes
To run interactive programs remotely one should use ssh -t <host>. But this -t option also has drawbacks so it's not good to use it on non-interactive programs. My problem is: I have several machines. Some of them are for interactive programs and others for non-interactive ones. So I must remember exactly which ones need -t. Is it possible to add this in ~/.ssh/config so I don't have to remember it? Basically what I want is this: Host interactive HostName example.com User user Option "-t"
Force PTY allocation in ssh_config
Use a wrapper script as the ForceCommand. Something like this script (say, saved at /usr/local/bin/forceshell):
#! /bin/bash
if [[ -n $SSH_ORIGINAL_COMMAND ]]  # command given, so run it
then
    exec /bin/bash -c "$SSH_ORIGINAL_COMMAND"
else  # no command, so interactive login shell
    exec bash -il
fi
In action:
% grep ForceCommand -B1 /etc/ssh/sshd_config
Match user muru
    ForceCommand /usr/local/bin/forceshell
% ssh muru@localhost
$ ls
Desktop Documents Downloads Music Pictures Public Templates Videos
$ logout
Connection to localhost closed.
% ssh muru@localhost echo foo
foo
% ssh muru@localhost echo '*'
Desktop Documents Downloads Music Pictures Public Templates Videos
How can I run/tweak this command while using ForceCommand to give this user their shell? Client command:
(cat ./sudoPassword ./someCommandInput) | ssh user@ip "sudo -Sp '' someCommand"
Server sshd_config:
ForceCommand /bin/bash
The behind-the-scenes restriction is that ForceCommand needs to be the mechanism that gives this user a shell; in addition to the command above, a typical ssh user@ip needs to work too. I have tried various configurations such as
ForceCommand /bin/bash -ic $SSH_ORIGINAL_COMMAND ls
ForceCommand /bin/bash -s < $SSH_ORIGINAL_COMMAND
ForceCommand /bin/bash -c $SSH_ORIGINAL_COMMAND
I've also tried messing around with the client command, giving ssh options like -tt, but I can't seem to find the right configuration.
SSH ForceCommand for shell while keeping regular login and remote command execution possible
Normally the configuration is parsed in a single pass. First all sections are checked against your input and all settings are gathered, and only after that's done, the HostName setting is actually applied. To achieve what you want, instead of a Host section you'll need a Match section: Match final host *.micro.ws User boomerangThis enables two-pass configuration loading. See the ssh_config(5) manual page for canonical and final keywords.
How can I have rules for a whole domain and also create aliases with rules for each of the subdomains without duplicating all the ruleset? In other words, why is it that in the following example boomerang is not used as the default user when I try to ssh into mega.micro.ws by invoking ssh mega? And is there a correct and parsimonious way to achieve this using ssh config and/or the rest of the available ssh toolset? Host mega HostName mega.micro.ws RemoteForward 52698 localhost:52698 Host *.micro.ws User boomerang
Nested settings in ssh config for domains and aliased subdomains
The Host directive can take multiple hosts, for example: Host *.domain.tld specific-host.tld 10.*.*.* User foo Port 2222This would set user and port for all hosts matching the star pattern, the explicit host specific-host.tld, or, assuming you type IP numbers, any host whose first IPv4 byte is 10. Then you can add Host / HostName pairs to give nicknames to specific hosts, for example.
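For example, nicknames can be combined with a shared pattern section (remember that patterns match what you type on the command line, not the resolved HostName, so the shared pattern has to cover the nicknames themselves):
Host web1
    HostName 10.0.0.5
Host web2
    HostName 10.0.0.6
Host web*
    User foo
    Port 2222
With this, ssh web1 connects to 10.0.0.5 as foo on port 2222.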
Our accounts in our lab are all mounted over NFS and are accessible on all the systems in a subnet. So, effectively, we can ssh into any of the machines in the subnet and continue our work. The problem is that the machines come up or go down randomly because of people accidentally turning them off, etc. To find running machines I scan the subnet using nmap and choose a machine. Because of the above problem, I can't put a fixed Hostname entry in my ssh config. So, how can I have an ssh config entry with all the other parameters except Hostname, such that the config entry and the hostname can somehow be given together when running ssh?
Use a dynamically obtained hostname with an ssh config entry
Verbose is not needed. The INFO log level is enough, as this is already fixed in the upstream repository. The commit message explains it pretty well:

    the LogLevel is set to 'None' we'll not get the Permission Denied we're looking for.

This is not a problem in the default configuration (since the default value is INFO, as per the manual page). The problem occurs only if you set LogLevel=QUIET in one of your configuration files, which is pretty much never what you want, unless you are sure that the connection will succeed or you don't care whether it does.
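You can check which LogLevel your configuration actually resolves to for a given destination (root@ip is the question's placeholder); ssh -G prints the effective options:

% ssh -G root@ip | grep -i '^loglevel'
loglevel QUIET

If this shows QUIET, remove that setting or override it just for the copy, e.g. ssh-copy-id -o 'LogLevel INFO' root@ip, so that ssh-copy-id can see the "Permission denied" replies it uses to decide which keys still need to be installed.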
I'm trying to set up public key authentication on a CentOS 7.3 guest, using WSL. When trying to copy the public key using ssh-copy-id, it's rejected with a message saying it already exists on the VM. This is not the case, since it's a fresh install, and there isn't even an .ssh directory in /root. After searching around, wrong file permissions seemed to be a possible cause, so I ran these commands:

Guest:

chmod go-w ~

Host:

chmod go-w ~
chmod 0700 ~/.ssh
chmod 0600 ~/.ssh/config
chmod 0600 ~/.ssh/id_rsa
chmod 0644 ~/.ssh/id_rsa.pub
chmod 0600 ~/.ssh/known_hosts

On the server, public key authentication is enabled in sshd_config. The result of ssh-copy-id was the same. However, when I ran

ssh-copy-id -o "LogLevel VERBOSE" root@ip

it prompted me for the password and then successfully copied over the key. After that I could use ssh root@ip and successfully authenticate using my key. ssh using password authentication worked the whole time. Why didn't it work with plain ssh-copy-id, but it did with ssh-copy-id -o "LogLevel VERBOSE"? What did I miss?
Why does ssh-copy-id need verbose LogLevel to work in this case?
Running ssh in debug mode usually uncovers various problems, most often permissions. In this case

Bad owner or permissions on /home/rperez/.ssh/config

means that the configuration file must not be writable by anyone other than its owner, and therefore

chmod go-w /home/rperez/.ssh/config

should fix the problem for you.
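To see the offending mode bits before and after (a sketch; the output shown assumes the file was group-writable):

% stat -c '%A %U' ~/.ssh/config
-rw-rw-r-- rperez    # group-writable: ssh refuses to use the file
% chmod go-w ~/.ssh/config
% stat -c '%A %U' ~/.ssh/config
-rw-r--r-- rperez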
I am logged in on my local PC (Fedora 24) as rperez. From this PC I need to connect to a remote server through sshfs, so I generated a private/public key pair by running ssh-keygen. Using the following command I am able to connect to the server without any problem:

sshfs rperez@server_ip:/home/rperez -p 2051 ~/dev -o auto_cache,reconnect

Now I have two Github accounts: one to be used from work, and one to be used from home for personal projects. I would like to connect to both using SSH, so I set up the first one using the generated key for rperez, and again that works fine. I am trying to set up the second one (the personal one) on the same PC, so I ran this command:

ssh-keygen -t rsa -C "[emailprotected]"

I have created the file ~/.ssh/config with the following content:

# rperez account
Host github.com-rperez
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_rsa

# reypm account
Host github.com-reypm
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_rsa_reynierpm

# Server
Host <server_ip>
    IdentityFile ~/.ssh/id_dsa

And this is where my problem started. Now running either of the following commands:

sshfs rperez@server_ip:/home/rperez -p 2051 ~/dev -o auto_cache,reconnect
sshfs rperez@server_ip:/home/rperez -p 2051 ~/dev -o auto_cache,reconnect,IdentityFile=~/.ssh/id_rsa

returns this error:

read: Connection reset by peer

I should add, regardless of the current problem, that I am not able to connect to any Github repository either. What is wrong with this configuration? I have taken some ideas from here, but none are working for me. I also started from this guide for setting up the Github accounts.

Update: verbose output

ssh -vvv -p 2051 rperez@server_ip
OpenSSH_7.2p2, OpenSSL 1.0.2h-fips 3 May 2016
Bad owner or permissions on /home/rperez/.ssh/config
Cannot connect through sshfs because of a wrong configuration in the ~/.ssh/config file
The \\ in your SSH command really represents a single \ (try echo portoalegre\\[emailprotected] and see what that displays). So the first thing to try is to use a single backslash in your SSH config:

Host university
    Hostname university.server.br
    User portoalegre\15280433
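You can confirm how ssh parses the entry without connecting; ssh -G dumps the options after config expansion (the university alias is the one from the question):

% ssh -G university | grep '^user '
user portoalegre\15280433

A single backslash in the output means the user name will be sent to the server exactly as the university supplied it.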
So my university supplied access to a server with backslashes, like this:

ssh portoalegre\\[emailprotected]

and I decided to copy my public key there to be more secure (and to avoid typing the password every time). This works fine! However... I then decided to set up an entry in ~/.ssh/config so I could just log in using

ssh university

But it didn't work. It keeps asking me for my password. Here is the entry I set up in the config file:

Host university
    Hostname university.server.br
    User portoalegre\\15280433

What am I doing wrong? How should I escape/handle those two backslashes in the config file? I'm using an Ubuntu Desktop machine to connect to the server via its default terminal.
How to use a backslash in the user name in ~/.ssh/config