I sometimes have quite a big range of tabs open in Firefox, and I prefer to save them to a file rather than using the built-in bookmarks. Therefore I (manually) copy the URLs from the about:preferences page, save them to a file, and process the file with tr '|' '\n' in a little bash script. Later, when I want to reopen the tabs from the text file, I run this little loop:

#!/bin/bash
# usage: $ bash Open-tabs.sh file-with-bookmarks.txt
while read -r line; do
    firefox -new-tab "$line" 2>/dev/null &
    sleep 2
done < "$1"

and it opens all tabs with a delay of 2 seconds. I would like to know if there is a way I can read out the URLs of the opened tabs from the command line, so I could include it in my script.
Source (changed file path): Get all the open tabs

This snippet gets the current Firefox tab URLs. It uses the recovery.js[onlz4] file in your profile folder. That file is updated almost instantly; however, it will not always be the correct URL. Get all the open tabs:

python -c 'import io, json, pathlib as p
fpath = next(iter(p.Path("~/.mozilla/firefox").expanduser().glob("*.default/sessionstore-backups/recovery.js*")))
with io.open(fpath, "rb") as fd:
    if fpath.suffix == ".jsonlz4":
        import lz4.block as lz4
        fd.read(8)  # skip the b"mozLz40\0" magic header
        jdata = json.loads(lz4.decompress(fd.read()).decode("utf-8"))
    else:
        jdata = json.load(fd)
    for win in jdata.get("windows"):
        for tab in win.get("tabs"):
            i = tab["index"] - 1
            print(tab["entries"][i]["url"])'
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/385023", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/240990/" ] }
385,030
The trap builtin in bash has the following syntax:

trap [-lp] [arg] [sigspec ...]

... Each sigspec is either a signal name or a signal number. Signal names are case insensitive and the SIG prefix is optional. ...

The bash manual points out that sigspec can be EXIT, DEBUG, RETURN, and ERR. Are they names of signals? Why do I not find them in the list of all the signal names given below, even adding a SIG prefix to them? Are they only related to bash shells and not to the Linux OS? Are they bash shell signals but not Linux OS signals?

$ trap -l
 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL       5) SIGTRAP
 6) SIGABRT      7) SIGBUS       8) SIGFPE       9) SIGKILL     10) SIGUSR1
11) SIGSEGV     12) SIGUSR2     13) SIGPIPE     14) SIGALRM     15) SIGTERM
16) SIGSTKFLT   17) SIGCHLD     18) SIGCONT     19) SIGSTOP     20) SIGTSTP
21) SIGTTIN     22) SIGTTOU     23) SIGURG      24) SIGXCPU     25) SIGXFSZ
26) SIGVTALRM   27) SIGPROF     28) SIGWINCH    29) SIGIO       30) SIGPWR
31) SIGSYS      34) SIGRTMIN    35) SIGRTMIN+1  36) SIGRTMIN+2  37) SIGRTMIN+3
38) SIGRTMIN+4  39) SIGRTMIN+5  40) SIGRTMIN+6  41) SIGRTMIN+7  42) SIGRTMIN+8
43) SIGRTMIN+9  44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+13
48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-12
53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9  56) SIGRTMAX-8  57) SIGRTMAX-7
58) SIGRTMAX-6  59) SIGRTMAX-5  60) SIGRTMAX-4  61) SIGRTMAX-3  62) SIGRTMAX-2
63) SIGRTMAX-1  64) SIGRTMAX

Thanks.
Those names have special meaning in bash; their usage is explained in the manual:

If a sigspec is 0 or EXIT, arg is executed when the shell exits. If a sigspec is DEBUG, the command arg is executed before every simple command, for command, case command, select command, every arithmetic for command, and before the first command executes in a shell function ... If a sigspec is ERR, the command arg is executed whenever a pipeline (which may consist of a single simple command), a list, or a compound command returns a non-zero exit status, subject to the following conditions ...

So they are not operating-system signals; they are pseudo-signals that bash itself implements as hooks around shell events.
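To make the distinction concrete, here is a minimal sketch (the messages and the script itself are illustrative, not from the manual):

#!/bin/bash
trap 'echo "exiting"' EXIT        # pseudo-signal: runs when the shell exits
trap 'echo "command failed"' ERR  # pseudo-signal: runs after a non-zero exit status
false   # triggers the ERR trap
true    # triggers nothing

Running it prints "command failed" and then "exiting", even though kill -l lists no such signals: these names exist only inside bash.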
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/385030", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
385,109
Is it at all possible to execute the iptables --list ... command without being root? Running it as non-root prints this:

$ iptables --list
iptables v1.4.21: can't initialize iptables table `filter': Permission denied (you must be root)
Perhaps iptables or your kernel needs to be upgraded.

If you must be root to list iptables, what is the reasoning behind that? Is there a security concern with viewing the rules? Is there a resource or service used by iptables --list that requires root access? Obviously, modifying iptables firewall rules requires a privileged user. I am asking about viewing them.

Instead of being root, is there a capability that could permit listing the rules? Does iptables use netlink to interface with the kernel? Because the netlink documentation mentions that:

Only processes with an effective UID of 0 or the CAP_NET_ADMIN capability may send or listen to a netlink multicast group.

Maybe that does not apply to iptables. I am not sure whether this is the right way of doing it, but adding a capability to iptables does not let me list the rules either:

bash-4.1$ echo $UID
2000
bash-4.1$ getcap /sbin/iptables-multi-1.4.7
/sbin/iptables-multi-1.4.7 = cap_net_admin+ep
bash-4.1$ /sbin/iptables-multi-1.4.7 main --list
FATAL: Could not load /lib/modules/3.10.0-514.21.1.el7.x86_64/modules.dep: No such file or directory
iptables v1.4.7: can't initialize iptables table `filter': Permission denied (you must be root)
Perhaps iptables or your kernel needs to be upgraded.

Here are some relevant questions: q1 and q2. Both provide workarounds, in my opinion, and do not discuss the fundamental reason behind it.
It appears iptables needs both CAP_NET_RAW and CAP_NET_ADMIN to be able to read the tables. I tried:

$ cp /usr/sbin/iptables ~/iptables   # note, it may be a symbolic link
$ sudo setcap CAP_NET_RAW,CAP_NET_ADMIN+ep ~/iptables
$ ~/iptables -nvL

and it was OK.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/385109", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/212862/" ] }
385,198
I am required to write a one-time utility that does some operation on all the files in a given directory except those in a predefined list. Since the list is predefined, I am going to hard-code it as an array. Given that, how do I get the names of all files that are not in the array? This can be in any standard Unix scripting language (bash, awk, perl).
With bash, you could do:

all=(*)
except=(file1 file2 notme.txt)
only=()
IFS=/
for file in "${all[@]}"; do
  case "/${except[*]}/" in
    (*"/$file/"*) ;;        # do nothing (exclude)
    (*) only+=("$file")     # add to the array
  esac
done
ls -ld -- "${only[@]}"

(That works here for the files in the current directory, but not reliably for globs like all=(*/*) except=(foo/bar), as we use / to join the elements of the array for the look-up.)

It's based on the fact that "${array[*]}" joins the elements of the array with the first character of $IFS (here chosen to be / as it can't otherwise occur in a file name; NUL is a character that can't occur in a file path, but unfortunately bash (contrary to zsh) can't have such a character in its variables). So for each file in $all (here with $file being foo as an example), we do a case "/file1/file2/notme.txt/" in (*"/foo/"*) to check if $file is to be excluded.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/385198", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73658/" ] }
385,200
I want to make use of the updated version of ssh-keygen, because it includes a hashing feature for fingerprints, like so:

ssh-keygen -E md5 -lf ~/.ssh/id_rsa.pub

But my current version doesn't recognize the -E parameter (used together with the -l parameter). (Yes, I already have a workaround to obtain my fingerprint hashes, but I'd like to upgrade this tool regardless.) How do I upgrade? I'm running Debian 8.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/385200", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/105137/" ] }
385,202
Running MariaDB 10.1.23-MariaDB-9+deb9u1 on Debian 9.1. Fresh OS installation; I installed MariaDB with apt-get install mariadb-server mariadb-client. Apparently MariaDB doesn't ask for a root password on install, so I'm going to set it after the fact:

# mysql -uroot
> select user from mysql.user;
+------+
| user |
+------+
| root |
+------+

OK, so root exists. Now to change its password:

> set password for 'root'@'localhost' = PASSWORD('P@ssw0rd');
> flush privileges;
> exit

Did it work?

# mysql -uroot -pblabla
MariaDB [(none)]>

Setting the password went OK, but why is MariaDB accepting any random password, and even an empty one? This installation doesn't accept the ALTER USER statement.
The answer: https://www.percona.com/blog/2016/03/16/change-user-password-in-mysql-5-7-with-plugin-auth_socket/

Apparently the mysql-server installation on 16.04 (or any 5.7 installation?) allows root access not through a password, but through the auth_socket plugin. Running sudo mysql -u root (n.b. without a password) will give you a mysql console, whereas running the command as non-root prompts you for a password. It would seem that changing the password doesn't make much of a difference, since the auth backend doesn't even check for a password.

To disable this auth_socket plugin, at the mysql prompt do:

update mysql.user set plugin=null where user='root';
flush privileges;

This makes MariaDB also ask a password for the [Linux] root user. Thanks jesse-b and derobert for the in-depth discussion and your answers.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/385202", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40305/" ] }
385,214
Is there a way to make the displayed content disappear just after the user gives input? For example, take this file below:

#!/bin/bash
read -n 1 -p 'how are you ? ' var
if [ "$var" == "y" ]
then
    echo 'Have fun'
else
    echo 'Go to Doctor'
fi

If you run this, the output is:

how are you ? yHave fun

I am looking for something that lets me make "how are you ? " disappear as soon as the user presses a key, and then, after it disappears, print "Have fun". So I want the final output of the above program to be only:

Have fun

Note: anything printed on the shell screen above this script should not be erased. I am using bash.
You can use:

tput cr

(or printf '\r') to move the cursor to the beginning of the line, followed by:

tput el

to delete everything up to the end of the line.

(tcsh and zsh also have the echotc builtin, which you can use with the termcap equivalent of that terminfo el: echotc ce (also echoti el in zsh).)
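Applied to the script from the question, a minimal sketch could look like this (only the two tput calls are new):

#!/bin/bash
read -n 1 -p 'how are you ? ' var
tput cr   # move the cursor back to the start of the prompt line
tput el   # erase the prompt and the echoed keypress
if [ "$var" == "y" ]
then
    echo 'Have fun'
else
    echo 'Go to Doctor'
fi

Only the prompt line is cleared, so anything printed earlier on the screen stays intact.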
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/385214", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/224025/" ] }
385,216
There are 2 files, say File1 and File2. File1 has only headers, like:

Field2 Field1 Field3

and File2 has both headers and data, like:

Field3 Field2 Field1
ABC    DEF    GHI
JKL    MNO    PQRS

I have to synchronize the header fields in the two files, like:

File1.txt

Field1 Field2 Field3

File2.txt

Field1 Field2 Field3
GHI    DEF    ABC
PQRS   MNO    JKL
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/385216", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/245898/" ] }
385,279
I want to navigate through all the files in a folder and find the missing files for a specific date. Files are partitioned by hour, and file names have yyyy-mm-dd-hh formatting. So between 2017-07-01 and 2017-07-02 there will be 24 files, from 2017-07-01-00 through 2017-07-01-23. How can I find a missing hourly file if I pass the above dates as start and end date? Appreciate any input!
# presuming that the files are e.g. template-2017-07-01-16:

# To test a given date
for file in template-2017-07-01-{00..23}; do
    if ! [[ -f "$file" ]]; then
        echo "$file is missing"
    fi
done

# To test a given year
year=2017
for month in $(seq -w 1 12); do
    # number of days in the month, taken from the last line of cal's output
    dim=$(cal $(date -d "$year-$month-01" "+%m %Y") | awk 'NF {days=$NF} END {print days}')
    for day in $(seq -w 1 "$dim"); do
        for file in template-${year}-${month}-${day}-{00..23}; do
            if ! [[ -f "$file" ]]; then
                echo "$file is missing"
            fi
        done
    done
done
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/385279", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/245934/" ] }
385,281
I disabled predictable network interface names by changing the GRUB_CMDLINE_LINUX line in /etc/default/grub from:

GRUB_CMDLINE_LINUX="pci=nomsi"

to:

GRUB_CMDLINE_LINUX="pci=nomsi net.ifnames=0"

on a fresh installation of a Debian GNU/Linux testing system with the proprietary NVIDIA drivers installed. I did it because my external USB Wi-Fi card didn't work with systemd interface names. After disabling predictable network interface names, I'm getting the following message at boot:

A start job is running for raise network interfaces (2 minutes of 5 mins 1 sec)

and the system takes a long time to boot. My /etc/network/interfaces file:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

Why am I getting this message at boot? How can I avoid the long boot time?
Solved by changing the file /etc/network/interfaces.d/setup from:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

to:

auto lo
iface lo inet loopback

allow-hotplug eth0
iface eth0 inet dhcp

With allow-hotplug, the interface is brought up when the kernel detects it, so boot no longer blocks on an "auto" interface that isn't ready.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/385281", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
385,339
I noticed some time ago that usernames and passwords given to curl as command-line arguments don't appear in ps output (although of course they may appear in your bash history). They likewise don't appear in /proc/PID/cmdline. (The length of the combined username/password argument can be derived, though.) Demonstration below:

[root@localhost ~]# nc -l 80 &
[1] 3342
[root@localhost ~]# curl -u iamsam:samiam localhost &
[2] 3343
[root@localhost ~]# GET / HTTP/1.1
Authorization: Basic aWFtc2FtOnNhbWlhbQ==
User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.15.3 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
Host: localhost
Accept: */*

[1]+  Stopped                 nc -l 80
[root@localhost ~]# jobs
[1]+  Stopped                 nc -l 80
[2]-  Running                 curl -u iamsam:samiam localhost &
[root@localhost ~]# ps -ef | grep curl
root      3343  3258  0 22:37 pts/1    00:00:00 curl -u                localhost
root      3347  3258  0 22:38 pts/1    00:00:00 grep curl
[root@localhost ~]# od -xa /proc/3343/cmdline
0000000 7563 6c72 2d00 0075 2020 2020 2020 2020
          c   u   r   l nul   -   u nul  sp  sp  sp  sp  sp  sp  sp  sp
0000020 2020 2020 0020 6f6c 6163 686c 736f 0074
         sp  sp  sp  sp  sp nul   l   o   c   a   l   h   o   s   t nul
0000040
[root@localhost ~]#

How is this effect achieved? Is it somewhere in the source code of curl? (I assume it is a curl feature, not a ps feature? Or is it a kernel feature of some sort?)

Also: can this be achieved from outside the source code of a binary executable? E.g. by using shell commands, probably combined with root permissions? In other words, could I somehow mask an argument from appearing in /proc or in ps output (same thing, I think) that I passed to some arbitrary shell command? (I would guess the answer to this is "no", but it seems worth including this extra half-a-question.)
When the kernel executes a process, it copies the command line arguments to read-write memory belonging to the process (on the stack, at least on Linux). The process can write to that memory like any other memory. When ps displays the argument, it reads back whatever is stored at that particular address in the process's memory. Most programs keep the original arguments, but it's possible to change them. The POSIX description of ps states that:

It is unspecified whether the string represented is a version of the argument list as it was passed to the command when it started, or is a version of the arguments as they may have been modified by the application. Applications cannot depend on being able to modify their argument list and having that modification be reflected in the output of ps.

The reason this is mentioned is that most unix variants do reflect the change, but POSIX implementations on other types of operating systems may not.

This feature is of limited use because the process can't make arbitrary changes. At the very least, the total length of the arguments cannot be increased, because the program can't change the location where ps will fetch the arguments and can't extend the area beyond its original size. The length can effectively be decreased by putting null bytes at the end, because arguments are C-style null-terminated strings (this is indistinguishable from having a bunch of empty arguments at the end).

If you really want to dig, you can look at the source of an open-source implementation. On Linux, the source of ps isn't interesting; all you'll see there is that it reads the command line arguments from the proc filesystem, in /proc/PID/cmdline. The code that generates the content of this file is in the kernel, in proc_pid_cmdline_read in fs/proc/base.c. The part of the process's memory (accessed with access_remote_vm) goes from the address mm->arg_start to mm->arg_end; these addresses are recorded in the kernel when the process starts and can't be changed afterwards.

Some daemons use this ability to reflect their status, e.g. they change their argv[1] to a string like starting or available or exiting. Many unix variants have a setproctitle function to do this. Some programs use this ability to hide confidential data. Note that this is of limited use, since the command line arguments are visible while the process starts. Most high-level languages copy the arguments to string objects and don't give a way to modify the original storage.

Here's a C program that demonstrates this ability by changing argv elements directly:

#include <stdlib.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    int i;
    system("ps -p $PPID -o args=");
    for (i = 0; i < argc; i++) {
        memset(argv[i], '0' + (i % 10), strlen(argv[i]));
    }
    system("ps -p $PPID -o args=");
    return 0;
}

Sample output:

./a.out hello world
0000000 11111 22222

You can see argv modification in the curl source code. Curl defines a function cleanarg in src/tool_paramhlp.c which is used to change an argument to all spaces using memset. In src/tool_getparam.c this function is used a few times, e.g. to redact the user password. Since the function is called from the parameter parsing, it happens early in a curl invocation, but dumping the command line before this happens will still show any passwords.

Since the arguments are stored in the process's own memory, they cannot be changed from the outside except by using a debugger.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/385339", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135943/" ] }
385,357
I have files named file.88_0.pdb, file.88_1.pdb, ..., file.88_100.pdb. I want to cat them so that file.88_1.pdb gets pasted after file.88_0.pdb, file.88_2.pdb after file.88_1.pdb, and so on. If I do cat file.88_*.pdb > all.pdb, the files are put together in the following order: 0 1 10 11 12 13 14 15 16 17 18 19 2 20..., etc. How do I put them together so that the order is 0 1 2 3 4 5 6...?
Use brace expansion:

cat file.88_{0..100}.pdb >> bigfile.pdb

To ignore the error messages for non-existent files, use:

cat file.88_{0..100}.pdb >> bigfile.pdb 2>/dev/null

In the zsh shell you also have the (n) globbing qualifier to request numerical sorting (as opposed to the default alphabetical sorting) for globs:

cat file.88_*.pdb(n) >> bigfile.pdb 2>/dev/null
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/385357", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90334/" ] }
385,380
In the Linux kernel, is open(), mmap(), or neither the more fundamental way to access a file? By "fundamental", I mean "does either ultimately call the other or a simple helper function of the other?". There are a lot of questions on the Stack network asking about the performance of these two functions. The hope of this question is to get at what is going on inside the Linux kernel, a priori. Does open() call mmap() or some helper function that essentially implements mmap()? Alternatively, does mmap() call open() or some helper function that essentially implements open()? The gist of the question is whether these two system calls are fundamentally different or whether one is a "convenience function" of the other.
Notice that mmap(2) often wants a file descriptor, usually provided by open(2); in that sense, open is more fundamental. Notice also that the virtual address space of a process is modified not only by mmap, munmap, and mprotect(2), but also by other system calls (including execve(2); see also shm_overview(7)).

BTW, the Linux kernel doesn't use mmap or open but provides and implements them (for application-level user-space programs). However, the Linux kernel manages the page cache, which is more fundamental and related to both system calls. See also LinuxAteMyRam, and consider perhaps using madvise(2), posix_fadvise(2), or readahead(2) to give hints to the page cache subsystem.

whether these two system calls are fundamentally different

All system calls (listed in syscalls(2)...) are different. Read also Advanced Linux Programming and Operating Systems: Three Easy Pieces (both are freely downloadable).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/385380", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67136/" ] }
385,382
I upgraded my system from Fedora 25 to 26 and now Mendeley Desktop fails to open. I was using version 1.16 (with the latest updates). Even the latest version (1.17.10) does not work. When I run it from the terminal, this is the message I'm getting:

QSslSocket: cannot resolve CRYPTO_num_locks
QSslSocket: cannot resolve CRYPTO_set_id_callback
QSslSocket: cannot resolve CRYPTO_set_locking_callback
QSslSocket: cannot resolve ERR_free_strings
QSslSocket: cannot resolve sk_new_null
QSslSocket: cannot resolve sk_push
QSslSocket: cannot resolve sk_free
QSslSocket: cannot resolve sk_num
QSslSocket: cannot resolve sk_pop_free
QSslSocket: cannot resolve sk_value
QSslSocket: cannot resolve SSL_library_init
QSslSocket: cannot resolve SSL_load_error_strings
QSslSocket: cannot resolve SSL_get_ex_new_index
QSslSocket: cannot resolve SSLv2_client_method
QSslSocket: cannot resolve SSLv23_client_method
QSslSocket: cannot resolve SSLv2_server_method
QSslSocket: cannot resolve SSLv23_server_method
QSslSocket: cannot resolve X509_STORE_CTX_get_chain
QSslSocket: cannot resolve OPENSSL_add_all_algorithms_noconf
QSslSocket: cannot resolve OPENSSL_add_all_algorithms_conf
QSslSocket: cannot resolve SSLeay
QSslSocket: cannot resolve SSLeay_version
QSslSocket: cannot call unresolved function SSLeay
QSslSocket: cannot call unresolved function CRYPTO_num_locks
QSslSocket: cannot call unresolved function CRYPTO_set_id_callback
QSslSocket: cannot call unresolved function CRYPTO_set_locking_callback
QSslSocket: cannot call unresolved function SSL_library_init
QSslSocket: cannot call unresolved function SSLv23_client_method
QSslSocket: cannot call unresolved function sk_num

I have openssl, openssl-devel (version 1:1.1.0f-7), qt and qtwebkit installed. I cannot figure out the problem. I know I should ask this question in Mendeley support, and I did, but there is no response. Besides, the support site has some problems.
This works for me on Fedora 26:

sudo dnf install compat-openssl10-devel

(The QSslSocket messages are Qt failing to resolve OpenSSL 1.0 symbols that were removed in OpenSSL 1.1; the compat-openssl10 packages provide them.)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/385382", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139833/" ] }
385,399
I have a small python script:

#!/usr/bin/env python3
import some_python3_module

def main():
    # do stuff

if __name__ == '__main__':
    main()

and cannot run this script with Python 3, since ROS sets the PYTHONPATH variable to some 2.7-related locations, meaning Python 3 cannot find any modules in its dist-packages directory. I would like to override this behaviour without having to modify the outside environment. That is, I wish to unset PYTHONPATH, but only for this script, and preferably from within it, so that the shebang will still work. Is this possible? Not sure if this is better suited to superuser.com.
There is a command-line option that suits your needs:

#!/usr/bin/env python3 -E

-E

Ignore all PYTHON* environment variables, e.g. PYTHONPATH and PYTHONHOME, that might be set.
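Note that on Linux the kernel passes everything after the interpreter in a shebang as a single argument, so #!/usr/bin/env python3 -E may fail; newer GNU coreutils accept #!/usr/bin/env -S python3 -E to split the arguments. As a hedged alternative that keeps a plain shebang, the script can scrub PYTHONPATH itself and re-exec the interpreter once (a minimal sketch; some_python3_module is the placeholder name from the question):

#!/usr/bin/env python3
import os
import sys

# Re-exec the interpreter with PYTHONPATH removed, before any imports
# that depend on a clean module search path.
if os.environ.pop("PYTHONPATH", None) is not None:
    os.execvpe(sys.executable, [sys.executable] + sys.argv, os.environ)

import some_python3_module

def main():
    pass  # do stuff

if __name__ == '__main__':
    main()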
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/385399", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/208945/" ] }
385,405
I have the following folder structure:

folder
|
|-- foo.txt
|
|-- sub_folder
    |
    |-- bar.txt

I want to zip the content (files and sub-folders) of folder without including the root folder in the zip. I have tried the command:

zip -r package.zip folder

But this includes the root folder. I also tried the following form:

zip -j -r package.zip folder

But this flattens all the directory structure and just includes the files. How do I preserve the internal directory structure but ignore the parent folder?
zip stores paths relative to the current directory (when it is invoked), so you need to change that:

(cd folder && zip -r "$OLDPWD/package.zip" .)

The && ensures that zip only runs if the directory change succeeded, and the parentheses run everything in a subshell, so the current directory is restored at the end. Using OLDPWD avoids having to calculate the relative path to package.zip.
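A quick way to check the result, assuming the example tree from the question (the listing below is abridged and illustrative, sizes and dates included):

$ (cd folder && zip -r "$OLDPWD/package.zip" .)
$ unzip -l package.zip
        0  2017-08-12 00:00   foo.txt
        0  2017-08-12 00:00   sub_folder/
        0  2017-08-12 00:00   sub_folder/bar.txt

The entries are stored without the leading folder/ prefix, which is what was asked for.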
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/385405", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/188434/" ] }
385,466
Example:

% touch -- safe-name -name-with-dash-prefix "name with space" \
  'name-with-double-quote"' "name-with-single-quote'" \
  'name-with-backslash\'

xargs can't seem to handle double quotes:

% ls | xargs ls -l
xargs: unmatched double quote; by default quotes are special to xargs unless you use the -0 option
ls: invalid option -- 'e'
Try 'ls --help' for more information.

If we use the -0 option, it has trouble with the name that has a dash prefix:

% ls -- * | xargs -0 -- ls -l --
ls: invalid option -- 'e'
Try 'ls --help' for more information.

This is before using other potentially problematic characters like newlines, control characters, etc.
The POSIX specification does give you an example for that:

ls | sed -e 's/"/"\\""/g' -e 's/.*/"&"/' | xargs -E '' printf '<%s>\n'

(With filenames being arbitrary sequences of bytes (other than / and NUL) and sed/xargs expecting text, you'd also need to fix the locale to C (where all non-NUL bytes would make valid characters) to make that reliable (except for xargs implementations that have a very low limit on the maximum length of an argument).)

The -E '' is needed for some xargs implementations which, without it, would understand a _ argument to signify the end of input (where echo a _ b | xargs outputs only a, for instance).

With GNU xargs, you can use:

ls | xargs -rd '\n' printf '<%s>\n'

(also adding -r (also a GNU extension) for the command not to be run if the input is empty).

GNU xargs also has a -0 that has been copied by a few other implementations, so:

ls | tr '\n' '\0' | xargs -0 printf '<%s>\n'

is slightly more portable.

All of those assume the file names don't contain newline characters. If there may be filenames with newline characters, the output of ls is simply not post-processable. If you get:

a
b

that can be either both a and b files, or one file called a<newline>b; there's no way to tell.

GNU ls has a --quoting-style=shell-always which makes its output unambiguous and could be post-processable, but the quoting is not compatible with the quoting expected by xargs. xargs recognises the "...", \x and '...' forms of quoting. But both "..." and '...' are strong quotes and can't contain newline characters (only \ can escape newline characters for xargs), so that's not compatible with sh quoting, where only '...' are strong quotes (and can contain newline characters) but \<newline> is a line continuation (is removed) instead of an escaped newline.

You can use the shell to parse that output and then output it in a format expected by xargs:

eval "files=($(ls --quoting-style=shell-always))"
[ "${#files[@]}" -eq 0 ] || printf '%s\0' "${files[@]}" | xargs -0 printf '<%s>\n'

Or you can have the shell get the list of files and pass it NUL-delimited to xargs. For instance:

with zsh:

print -rNC1 -- *(N) | xargs -r0 printf '<%s>\n'

with ksh93:

(set -- ~(N)*; (($# == 0)) || printf '%s\0' "$@") | xargs -r0 printf '<%s>\n'

with fish:

begin
  set -l files *
  string join0 -- $files
end | xargs -r0 printf '<%s>\n'

with bash:

(
  shopt -s nullglob
  set -- *
  (($# == 0)) || printf '%s\0' "$@"
) | xargs -r0 printf '<%s>\n'
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/385466", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48914/" ] }
385,471
Hi, I recently ran an application and now I want to know its CPU status info, so I need the PID of the application I ran. But there are so many PIDs in the /proc directory; how can I find the PID of a particular application, for example the "my-example" application binary I recently executed?
There are some command-line tools for process management you can check:

- You can use pidof <name>, e.g. pidof bash, to get the PID of a process given the program name.
- You can use ps -aux to get a listing of currently running programs with their starting time and PID. You may look for your program in the listing.
- You can use ps -eafx to get a listing of running programs showing all the command-line options. Maybe you can find your program by looking for some command-line option or parameter.
- You can use pgrep [options] <patterns> to look for a process using multiple criteria. You can run pgrep --help to review all the options.
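For the concrete case in the question, a hedged sketch (my-example is the binary name from the question):

$ pgrep -a my-example                                     # PID(s) plus command line
$ ps -o pid,%cpu,%mem,etime -p "$(pgrep -n my-example)"   # CPU/memory of the newest match

pgrep -n restricts the match to the most recently started process, which fits "the application I ran recently".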
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/385471", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/234608/" ] }
385,490
I've created a systemd job using systemd-run --on-calendar .... Now I've replaced it with proper .timer and .service files, but I'm not able to remove the old one. I can stop it and disable it, but when I call systemctl list-timers it still appears with its arbitrary name run-r0d0dc22.... I also looked for its .timer file, but I couldn't find it.
The transient files end up in /run/user/ and do not seem to ever be removed until the user logs out (for systemd-run --user) or until a reboot, when /run is recreated.

For example, if you create a command to run once only at a given time:

systemd-run --user --on-calendar '2017-08-12 14:46' /bin/bash -c 'echo done >/tmp/done'

you will get files owned by you in /run:

/run/user/1000/systemd/user/run-28810.service
/run/user/1000/systemd/user/run-28810.service.d/50-Description.conf
/run/user/1000/systemd/user/run-28810.service.d/50-ExecStart.conf
/run/user/1000/systemd/user/run-28810.timer
/run/user/1000/systemd/user/run-28810.timer.d/50-Description.conf
/run/user/1000/systemd/user/run-28810.timer.d/50-OnCalendar.conf

For non---user units, the files are in /run/systemd/system/.

You can remove the files, do a systemctl [--user] daemon-reload, and then list-timers will show only the unit name, with its last history if it has already run. This information is probably held within systemd's internal status or journal files.
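Put together, a hedged cleanup sequence for a stale transient timer might look like this (run-XXXX stands in for the generated unit name, which is truncated in the question):

systemctl --user stop run-XXXX.timer run-XXXX.service
rm -rf /run/user/$(id -u)/systemd/user/run-XXXX.timer* \
       /run/user/$(id -u)/systemd/user/run-XXXX.service*
systemctl --user daemon-reload

After the reload, systemctl --user list-timers should no longer list the unit.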
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/385490", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/167547/" ] }
385,497
I'm working towards setting up U-Boot to boot only a verified Linux kernel from a kernel+FDT FIT image. (Everything is built under Yocto.) The U-Boot binary has a basic device tree appended to it which it boots up using, but the FIT image has the full tree for the kernel. I have everything pretty much working, except that when the kernel is booted, U-Boot is ignoring the device tree in the FIT image and instead passing its own one, based on the value of fdtaddr (== 0x11000000):

Hit any key to stop autoboot:  0
reading uImage
3346230 bytes read in 100 ms (31.9 MiB/s)
## Loading kernel from FIT Image at 18000000 ...
No configuration specified, trying default...
Found default configuration: 'conf@1'
   Using 'conf@1' configuration
   Verifying Hash Integrity ... sha1,rsa2048:dev+ OK
   Trying 'kernel@1' kernel subimage
     Description:  Linux kernel
     Type:         Kernel Image
     Compression:  uncompressed
     Data Start:   0x180000e8
     Data Size:    3304016 Bytes = 3.2 MiB
     Architecture: ARM
     OS:           Linux
     Load Address: 0x10008000
     Entry Point:  0x10008000
     Hash node:    'hash@1'
     Hash algo:    sha1
     Hash value:   ff0333f01a894f81d716605f7c7995d651ff8111
     Hash len:     20
   Verifying Hash Integrity ... sha1+ OK
* fdt: cmdline image address = 0x11000000
## Checking for 'FDT'/'FDT Image' at 11000000
Wrong FIT format: no description
* fdt: raw FDT blob
## Flattened Device Tree blob at 11000000
   Booting using the fdt blob at 0x11000000
   of_flat_tree at 0x11000000 size 0x0000505a
   Loading Kernel Image ... OK
## device tree at 11000000 ... 11005059 (len=32858 [0x805A])
   Loading Device Tree to 2f72e000, end 2f736059 ... OK

[NB my U-Boot has some board-specific modules, from the board manufacturer, which might be altering U-Boot's standard behaviour.]

I can get correct operation if, after the image is loaded, I "setenv fdtaddr ${loadaddr}" (== 0x18000000); then U-Boot does find the device tree in the FIT image and passes that instead:

Hit any key to stop autoboot:  0
reading uImage
3346230 bytes read in 101 ms (31.6 MiB/s)
## Loading kernel from FIT Image at 18000000 ...
No configuration specified, trying default...
Found default configuration: 'conf@1'
   Using 'conf@1' configuration
   Verifying Hash Integrity ... sha1,rsa2048:dev+ OK
   Trying 'kernel@1' kernel subimage
     Description:  Linux kernel
     Type:         Kernel Image
     Compression:  uncompressed
     Data Start:   0x180000e8
     Data Size:    3304016 Bytes = 3.2 MiB
     Architecture: ARM
     OS:           Linux
     Load Address: 0x10008000
     Entry Point:  0x10008000
     Hash node:    'hash@1'
     Hash algo:    sha1
     Hash value:   ff0333f01a894f81d716605f7c7995d651ff8111
     Hash len:     20
   Verifying Hash Integrity ... sha1+ OK
* fdt: cmdline image address = 0x18000000
## Checking for 'FDT'/'FDT Image' at 18000000
## Loading fdt from FIT Image at 18000000 ...
No configuration specified, trying default...
Found default configuration: 'conf@1'
   Using 'conf@1' configuration
   Trying 'fdt@1' fdt subimage
     Description:  Flattened Device Tree blob
     Type:         Flat Device Tree
     Compression:  uncompressed
     Data Start:   0x18326c2c
     Data Size:    38269 Bytes = 37.4 KiB
     Architecture: ARM
     Hash node:    'hash@1'
     Hash algo:    sha1
     Hash value:   79d5eeb892ef059566c04d98cdc6b30e92a665a2
     Hash len:     20
   Verifying Hash Integrity ... sha1+ OK
Can't get 'load' property from FIT 0x18000000, node: offset 3304372, name fdt@1 (FDT_ERR_NOTFOUND)
   Booting using the fdt blob at 0x18326c2c
   of_flat_tree at 0x18326c2c size 0x0000957d
   Loading Kernel Image ... OK
## device tree at 18326c2c ... 183301a8 (len=50557 [0xC57D])
   Loading Device Tree to 2f72a000, end 2f73657c ... OK

This is fine (I can add the above command to 'default_bootargs'), but I wondered if I'm missing some 'proper' trick to get the same behaviour. I kind of assumed that if you loaded a FIT image, then U-Boot would naturally load not only the kernel from it but also the device tree. (I've not yet been able to grasp the options on the bootm command...) Thanks.

[Edit:]

/dts-v1/;

/ {
    description = "U-Boot fitImage for MyBoard/4.4/tx6";
    #address-cells = <1>;

    images {
        kernel@1 {
            description = "Linux kernel";
            data = /incbin/("linux.bin");
            type = "kernel";
            arch = "arm";
            os = "linux";
            compression = "none";
            load = <0x10008000>;
            entry = <0x10008000>;
            hash@1 {
                algo = "sha1";
            };
        };
        fdt@1 {
            description = "Flattened Device Tree blob";
            data = /incbin/("arch/arm/boot/dts/tx6.dtb");
            type = "flat_dt";
            arch = "arm";
            compression = "none";
            hash@1 {
                algo = "sha1";
            };
        };
    };

    configurations {
        default = "conf@1";
        conf@1 {
            description = "Linux kernel, FDT blob";
            kernel = "kernel@1";
            fdt = "fdt@1";
            hash@1 {
                algo = "sha1";
            };
            signature@1 {
                algo = "sha1,rsa2048";
                key-name-hint = "dev-example";
                sign-images = "kernel", "fdt";
            };
        };
    };
};

[Edit:]

autoload=no
autostart=no
baseboard=stk5-v3
baudrate=115200
boot_mode=mmc
bootargs_jffs2=run default_bootargs;setenv bootargs ${bootargs} root=/dev/mtdblock3 rootfstype=jffs2
bootargs_mmc=run default_bootargs;setenv bootargs ${bootargs} root=PARTUUID=${rootpart_uuid} rootwait
bootargs_nfs=run default_bootargs;setenv bootargs ${bootargs} root=/dev/nfs nfsroot=${nfs_server}:${nfsroot},nolock ip=dhcp
bootargs_sdcard=run default_bootargs;setenv bootargs ${bootargs} root=/dev/mmcblk0p2 rootwait
bootargs_ubifs=run default_bootargs;setenv bootargs ${bootargs} ubi.mtd=rootfs root=ubi0:rootfs rootfstype=ubifs
bootcmd=run bootcmd_${boot_mode} bootm_cmd
bootcmd_jffs2=setenv autostart no;run bootargs_jffs2;nboot linux
bootcmd_mmc=setenv autostart no;run bootargs_mmc;fatload mmc 0 ${loadaddr} ${bootfile}
bootcmd_net=setenv autoload y;setenv autostart n;run bootargs_nfs;dhcp
bootcmd_sdcard=setenv autostart no;run bootargs_sdcard;fatload mmc 1:1 ${loadaddr} ${bootfile}
bootdelay=1
bootfile=uImage
bootm_cmd=bootm ${loadaddr} - ${fdtaddr}
cpu_clk=792
default_bootargs=setenv bootargs init=/sbin/init console=ttymxc0,115200 ro debug panic=1 ${append_bootargs}; setenv fdtaddr ${loadaddr}
emmc_boot_ack=1
emmc_boot_part=1
ethact=FEC
ethaddr=00:01:02:7f:e5:50
fdtaddr=11000000
fdtsave=mmc partconf 0 ${emmc_boot_ack} ${emmc_boot_part} ${emmc_boot_part};mmc write ${fdtaddr} 0x680 80;mmc partconf 0 ${emmc_boot_ack} ${emmc_boot_part} 0
fdtsize=505a
loadaddr=18000000
nfsroot=/tftpboot/rootfs
otg_mode=device
rootpart_uuid=0cc66cc0-02
splashimage=18000000
stderr=serial
stdin=serial
stdout=serial
touchpanel=edt-ft5x06
ver=U-Boot 2015.10-rc2 (Aug 11 2017 - 18:57:06 +0100)
video_mode=VGA
The answer to this exact problem comes from understanding that U-Boot tries to be extremely flexible, and this can lead to some confusion at times. Looking at the provided environment, we can see that the bootcmd (which is executed when the boot delay runs out) boils down to:

bootm ${loadaddr} - ${fdtaddr}

This means that we look at ${loadaddr} for our image, nowhere for a ramdisk, and ${fdtaddr} for the device tree to use. In the case of a legacy-style uImage this makes sense, as the ramdisk and device tree are not (likely) to be contained within the file. A FIT image, however, has all of this included, and offers lots of extra useful features (which the poster wishes to use). What happens is that after picking out the device tree included in the FIT image, U-Boot then parses the rest of the arguments and looks at ${fdtaddr} for the device tree to use. If bootm_cmd were set to simply:

bootm ${loadaddr}

instead, it would work as expected.
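A hedged sketch of making that change persistent from the U-Boot prompt (standard setenv/saveenv commands; adapt to the board's environment storage):

setenv bootm_cmd 'bootm ${loadaddr}'
saveenv

Note the single quotes, which keep ${loadaddr} from being expanded at setenv time rather than when the command runs.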
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/385497", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/240551/" ] }
385,559
I am trying to find all files in my directory which contain the string "<3". Doing this should be simple:

grep "<3" *

However, running this prints:

grep: <3: No such file or directory

and then proceeds to grep all files for something else... (I'm not sure what exactly, but lines show up containing no 3's at all...)

CAUSE: apparently there was a file -f in my directory, and when it's passed into grep with the *, grep treats it as a flag, causing this behaviour. Trying to delete this file normally also doesn't work, since rm treats it as a flag as well. Thanks to a suggestion from Nick, this file can be removed with rm ./-f
grep "<3" -- * With -- you can determine the end of the options and the beginning of the positional arguments for many GNU programs. Thus a file -l does not cause any harm. An alternative is grep "<3" ./*
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/385559", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/145784/" ] }
385,565
I'm seeing a lot of posts referencing lar_disable, like this one for instance. I'm wondering what it does. modinfo iwlwifi just says:

parm:           lar_disable:disable LAR functionality (default: N) (bool)

What is "LAR functionality"?
LAR means Location Aware Regulatory.

I searched for LAR in the source code of the Linux wireless drivers; only Intel uses the LAR term in their code. Their code comments [1, 2, 3] mention the full form of LAR.
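For completeness, the usual way to set such a module parameter persistently is a modprobe configuration file; a hedged example (the option name comes from the modinfo output in the question):

# /etc/modprobe.d/iwlwifi.conf
options iwlwifi lar_disable=1

followed by reloading the iwlwifi module or rebooting.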
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/385565", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3285/" ] }
385,697
I need to set focus to VirtualBox in KDE. I've written a KWin script for the purpose, but I cannot figure out how to run it from the console.

What I have tried: this KWin script works in the Desktop Shell Scripting Console (to open it, press Alt+F2 and type "wm console"). The script:

// Main reference: http://techbase.kde.org/Development/Tutorials/KWin/Scripting
// API: https://techbase.kde.org/Development/Tutorials/KWin/Scripting/API_4.9
// Sets focus to VirtualBox
var clients = workspace.clientList();
for (var i = 0; i < clients.length; i++) {
    print(clients[i].caption);
    var cap = clients[i].caption;
    if (cap.indexOf("- Oracle VM VirtualBox") != -1) {
        workspace.activeClient = clients[i];
    }
}

But when I try to run it in Bash (according to this method), scripting does not seem to be set up, as I get these errors:

Error org.freedesktop.DBus.Error.ServiceUnknown: The name org.kde.kwin.Scripting was not provided by any .service files
Error org.freedesktop.DBus.Error.ServiceUnknown: The name org.kde.kwin.Scripting was not provided by any .service files

I don't know how dbus works internally, so from here on I just try things. I tried to fix these problems caused by things changing in newer versions of KDE:

QDBusViewer

So I run qdbusviewer to have a look. It should be KWin instead of kwin.Scripting. I find org.kde.KWin in the left-hand list and Scripting to the right; under org.kde.kwin.Scripting I find the methods loadScript and start. I'm able to use these methods manually by double-clicking on them, loading my script file, and it works: my script gets run and VirtualBox receives focus. So I try to modify the loading commands accordingly:

dbus-send --print-reply --dest=org.kde.KWin /Scripting org.kde.kwin.Scripting.loadScript string:"/home/jk/msexcel_setfocus.kwinscript"
dbus-send --print-reply --dest=org.kde.KWin /Scripting org.kde.kwin.Scripting.start

These commands do not give an error, but they do not work either.

Is dbus working at all?

I try something else just to see if dbus is working at all, and this works (enabling/disabling the FPS effect):

dbus-send --print-reply --session --dest=org.kde.KWin /Effects org.kde.kwin.Effects.loadEffect string:"showfps"
dbus-send --print-reply --session --dest=org.kde.KWin /Effects org.kde.kwin.Effects.unloadEffect string:"showfps"

Numbered entries

So there is this business in the script linked to above with a numbered path of some kind. I find that in QDBusViewer there are sometimes numbered entries in the right pane (they come and go), and there actually is a Scripting item and a run method in there when a number exists. So I try this (the first command does give a number that corresponds to the number appearing in QDBusViewer):

num=$(dbus-send --print-reply --dest=org.kde.KWin /Scripting org.kde.kwin.Scripting.loadScript string:"/home/jk/msexcel_setfocus.kwinscript" | awk 'END {print $2}')
echo $num
dbus-send --print-reply --dest=org.kde.KWin /$num org.kde.kwin.Scripting.run

But the last command does not work. Neither does running the start method (as above) before the run method; then it complains that the number is gone:

Error org.freedesktop.DBus.Error.UnknownObject: No such object path '/1'
After far more trial and error than I would have liked, it seems it is possible to run a string containing a script directly by communicating with plasmashell, as in the following example (which happens to be what I was trying to do, as part of moving the panel when I rotate the screen):

qdbus org.kde.plasmashell /PlasmaShell evaluateScript \
    "panelById(panelIds[0]).location='right'"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/385697", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54601/" ] }
385,734
I know that with "ls -l" I can see the permissions of a file or directory, but it shows them with letters. How do I show the permissions in numeric form, for example:

755 /var/www/mywebpage
You could use find:

find . -maxdepth 1 -printf "%m %f\n"

or stat:

stat -c "%a %n" -- *
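Example run against the path from the question (output illustrative):

$ stat -c "%a %n" -- /var/www/mywebpage
755 /var/www/mywebpage

%a prints the octal permission bits and %n the file name; find's %m/%f pair is the equivalent for each file found.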
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/385734", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/216688/" ] }
385,754
I have the latest version of Empire: Total War in Steam on Arch Linux x86_64. I have followed this Reddit guide to install DME. I have gone through each and every step (except the optional ones) and the game failed to start on launch. Here are my specs:

$ inxi -SPARM -GCDN -v1 -xGCRS
System:    Host: archlinux Kernel: 4.12.4-1-ARCH x86_64 (64 bit gcc: 7.1.1)
           Desktop: Gnome 3.24.3 (Gtk 3.22.18) Distro: Arch Linux
Machine:   Device: desktop Mobo: ASUSTeK model: P5Q PRO TURBO v: Rev 1.xx BIOS: American Megatrends v: 0701 date: 10/08/2012
CPU:       Quad core Intel Core2 Quad Q6600 (Core 2 rev.11) (-MCP-) cache: 4096 KB
           flags: (lm nx sse sse2 sse3 ssse3 vmx) bmips: 19207
           clock speeds: max: 2403 MHz 1: 2403 MHz 2: 1603 MHz 3: 2136 MHz 4: 1603 MHz
Graphics:  Card: Advanced Micro Devices [AMD/ATI] Juniper XT [Radeon HD 5770] bus-ID: 01:00.0
           Display Server: N/A driver: radeon tty size: 131x87
Audio:     Card-1 Advanced Micro Devices [AMD/ATI] Juniper HDMI Audio [Radeon HD 5700 Series] driver: snd_hda_intel bus-ID: 01:00.1
           Card-2 Intel 82801JI (ICH10 Family) HD Audio Controller driver: snd_hda_intel bus-ID: 00:1b.0
           Sound: Advanced Linux Sound Architecture v: k4.12.4-1-ARCH
Network:   Card: Qualcomm Atheros AR8121/AR8113/AR8114 Gigabit or Fast Ethernet driver: ATL1E port: cc00 bus-ID: 02:00.0
Drives:    HDD Total Size: 1500.3GB (4.3% used)
           ID-1: /dev/sda model: WDC_WD5000AAKS size: 500.1GB
           ID-2: /dev/sdb model: ST1000LM024_HN size: 1000.2GB
Partition: ID-1: / size: 457G used: 60G (14%) fs: ext4 dev: /dev/sda3
           ID-2: /boot size: 202M used: 58M (31%) fs: ext4 dev: /dev/sda1
           ID-3: swap-1 size: 0.54GB used: 0.06GB (11%) fs: swap dev: /dev/sda4
RAID:      No RAID data: /proc/mdstat missing-is md_mod kernel module loaded?
Info:      Processes: 247 Uptime: 1 day Memory: 2934.5/7987.4MB Init: systemd Gcc sys: 7.1.1
           Client: Shell (fish) inxi: 2.3.27

I ran the game from a terminal and I got:

$ ./.steam/steam/steamapps/common/Empire\ Total\ War/Empire.sh
~/.local/share/Steam/steamapps/common/Empire Total War/bin/game.i386: error while loading shared libraries: libvorbis.so.0: cannot open shared object file: No such file or directory

Apparently, I was missing some 32-bit libraries; some pacman magic and symbolic linking gave the game the libraries it needed. However, when I ran the game it returned:

$ "./.steam/steam/steamapps/common/Empire Total War/bin/game.i386"
Setting breakpad minidump AppID = 10500
Steam_SetMinidumpSteamID: Caching Steam ID: 76561198044159024 [API loaded no]
Dumped crashlog to /home/pradana/.local/share/feral-interactive/Empire/crashes//772c6081-0a79-298b-2c7a8124-23190ade.dmp
fish: “./game.i386” terminated by signal SIGSEGV (Address boundary error)

I have tried to read the .dmp file (core dump) using:

$ gdb ./game.i386 ~/.local/share/feral-interactive/Empire/crashes/4ab1b7fb-8cb4-b5b2-58c0ddd9-6767d769.dmp

However, it returns the error:

"~/.local/share/feral-interactive/Empire/crashes/4ab1b7fb-8cb4-b5b2-58c0ddd9-6767d769.dmp" is not a core dump: File format not recognized

I tried to figure out the encoding of the file using:

$ file --mime 4ab1b7fb-8cb4-b5b2-58c0ddd9-6767d769.dmp
4ab1b7fb-8cb4-b5b2-58c0ddd9-6767d769.dmp: application/x-dmp; charset=binary

and:

$ chardetect-py2 4ab1b7fb-8cb4-b5b2-58c0ddd9-6767d769.dmp
4ab1b7fb-8cb4-b5b2-58c0ddd9-6767d769.dmp: Windows-1254 with confidence 0.299704567453

I have also used:

$ iconv -c -f WINDOWS-1254 -t utf-8 4ab1b7fb-8cb4-b5b2-58c0ddd9-6767d769.dmp > dmp.txt

to try to read the log, but I made no progress there. I can't seem to progress without finding out what lies in the core dump file. In any case, I'm trying to find out why the Steam game crashes at this point.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/385754", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/230119/" ] }
385,771
As far as I understand, if I type the following:

python -i

the Python interpreter will now read from stdin, behaving (obviously) like this:

>>> print "Hello"
Hello

I would expect it to do the same thing if I do this:

echo 'print "Hello"' > /proc/$(pidof python)/fd/0

But this is the output (<emptyline> being an actual empty line):

>>> print "Hello"
<emptyline>

To me it looks like it just took the print "Hello"\n and wrote it to stdout, yet did not interpret it. Why is that not working, and what would I have to do to make it work?
Sending input to shells/interpreters in this way is very problem-prone and very difficult to get working in any reliable way. The proper way is to use sockets; this is why they were invented. You can do this on the command line using ncat, nc or socat to bind a python process to a simple socket, or write a simple python application that binds to a port and listens for commands to interpret on the socket. Sockets can be local and not exposed to any web interface.

The problem is that if you start python from the command line, it is typically attached to your shell, which is attached to a terminal; in fact we can see:

$ ls -al /proc/PID/fd
lrwxrwxrwx 1 USER GROUP 0 Aug  1 00:00 0 -> /dev/pty1

so when you write to stdin of python, you are actually writing to the pty pseudo-terminal, which is a kernel device, not a simple file. It uses ioctl, not read and write, so you will see output on your screen, but it will not be sent to the spawned process (python).

One way to replicate what you are trying is with a fifo or named pipe:

# make pipe
$ mkfifo python_i.pipe
# start python interactive with pipe input
# (will print to pty output unless redirected)
$ python -i < python_i.pipe &
# keep pipe open
$ sleep infinity > python_i.pipe &
# interact with the interpreter
$ echo "print \"hello\"" >> python_i.pipe

You can also use screen for input only:

# start screen
$ screen -dmS python python
# send command to its input (stuff types the string into the session)
$ screen -S python -X stuff $'print "hello"\n'
# view output
$ screen -S python -x
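A hedged sketch of the socket route mentioned above, using socat (port number chosen arbitrarily; adjust to taste):

# serve a python REPL on local TCP port 5555, one child per connection
$ socat TCP-LISTEN:5555,bind=127.0.0.1,reuseaddr,fork EXEC:'python -i',pty,stderr &
# now feed it commands from anywhere on the machine
$ echo 'print "hello"' | socat - TCP:127.0.0.1:5555

Because the connection is bound to 127.0.0.1, the interpreter stays local to the machine.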
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/385771", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/202204/" ] }
385,781
I am running the i3 window manager with Debian 9 Stretch on a laptop with a trackpad. I have run into the problem that whenever I type, the mouse is disabled. Is this normal behavior or a bug? Non-free repos have been enabled and linux-firmware-nonfree has been installed. The bug does not show up on other distributions. This does not happen with a USB mouse.

xinput output:

Virtual core pointer                  id=2    [master pointer  (3)]
Virtual core XTEST pointer            id=4    [slave  pointer  (2)]
ETPS/2 Elantech Touchpad              id=11   [slave  pointer  (2)]
Virtual core keyboard                 id=3    [master keyboard (2)]
Virtual core XTEST keyboard           id=5    [slave  keyboard (3)]
Video Bus                             id=7    [slave  keyboard (3)]
Power Button                          id=8    [slave  keyboard (3)]
HP TrueVision HD                      id=9    [slave  keyboard (3)]
AT Translated Set 2 keyboard          id=10   [slave  keyboard (3)]
HP Wireless hotkeys                   id=12   [slave  keyboard (3)]
HP WMI hotkeys                        id=13   [slave  keyboard (3)]
Power Button                          id=6    [slave  keyboard (3)]

Touchpad properties:

Device 'ETPS/2 Elantech Touchpad':
    Device Enabled (142):   1
    Coordinate Transformation Matrix (144): 1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000
    libinput Tapping Enabled (277): 0
    libinput Tapping Enabled Default (278): 0
    libinput Tapping Drag Enabled (279): 1
    libinput Tapping Drag Enabled Default (280): 1
    libinput Tapping Drag Lock Enabled (281): 0
    libinput Tapping Drag Lock Enabled Default (282): 0
    libinput Tapping Button Mapping Enabled (283): 1, 0
    libinput Tapping Button Mapping Default (284): 1, 0
    libinput Accel Speed (285): 0.000000
    libinput Accel Speed Default (286): 0.000000
    libinput Natural Scrolling Enabled (287): 0
    libinput Natural Scrolling Enabled Default (288): 0
    libinput Send Events Modes Available (262): 1, 1
    libinput Send Events Mode Enabled (263): 0, 0
    libinput Send Events Mode Enabled Default (264): 0, 0
    libinput Left Handed Enabled (289): 0
    libinput Left Handed Enabled Default (290): 0
    libinput Scroll Methods Available (291): 1, 1, 0
    libinput Scroll Method Enabled (292): 1, 0, 0
    libinput Scroll Method Enabled Default (293): 1, 0, 0
    libinput Disable While Typing Enabled (294): 1
    libinput Disable While Typing Enabled Default (295): 1
    Device Node (265): "/dev/input/event1"
    Device Product ID (266): 2, 14
    libinput Drag Lock Buttons (296): <no items>
    libinput Horizontal Scroll Enabled (297): 1
The problem I was having involved the Disable While Typing Enabled feature of my trackpad. These are the steps I used to solve it:

1. Make sure xinput is installed.
2. Type xinput to find the name of the trackpad device. Mine was ETPS/2 Elantech Touchpad.
3. Run xinput --list-props "DEVICE" to list the properties of the device.
4. Go through the list until you find something like Disable While Typing.
5. Use xinput --set-prop "DEVICE" ID_OF_PROPERTY 0

For me, this was:

xinput --set-prop "ETPS/2 Elantech Touchpad" 294 0
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/385781", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/223417/" ] }
385,860
I'm trying to update to Node 7.x through the terminal on my Raspberry Pi and I keep encountering this error. The command I'm using is:

sudo curl -sL https://deb.nodesource.com/setup_7.x | bash -

Running this command as root doesn't work either, so I tried to see if apt-get was being used by any other processes:

ps aux | grep apt
pi        1295  0.0  0.1   4272  1848 pts/0    S+   06:24   0:00 grep --color=auto apt

This is all I get. Ultimately (although it was initially advised not to do so), I tried removing the lock files and running the command again:

sudo rm /var/lib/apt/lists/lock && sudo rm /var/lib/dpkg/lock

Neither of these files exists any longer, and I still receive the same error when trying to use curl. I also tried to kill that one process and I still get the error.
The problem is that you sudo the curl call but not the bash call, which is what actually runs apt. Just run the whole thing as root, for example:

sudo su
curl -sL https://deb.nodesource.com/setup_7.x | bash -

or you can do something like:

wget https://deb.nodesource.com/setup_7.x
chmod +x setup_7.x
sudo ./setup_7.x
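An equivalent one-liner that keeps the pipeline intact, and which matches the form NodeSource's own instructions use to the best of my knowledge, is to put the privilege elevation on the bash side instead:

curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -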
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/385860", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/246343/" ] }
385,863
Do you know what initramfs-kernel means? I know squashfs-factory/squashfs-sysupgrade: how to install them and what they do. I just don't understand what initramfs-kernel means. I have a Linksys 1900ACS v2 and a D-Link DIR-860L B1, but I only use squashfs-factory and squashfs-sysupgrade images. What does initramfs-kernel mean, and when would I use it? I would even be afraid to install it. So, continuing with lede-17.01.2-ramips-mt7621-dir-860l-b1-initramfs-kernel.bin as an example: what is it, can I use it, and if so, what is the difference from lede-17.01.2-ramips-mt7621-dir-860l-b1-squashfs-factory.bin (which I know what it does and how) or lede-17.01.2-ramips-mt7621-dir-860l-b1-squashfs-sysupgrade.bin (which I know what it does and how)?
The initramfs OpenWRT/LEDE kernel builds include the rootfs image in the initramfs attached to the kernel, so the kernel will put the filesystem in a ramdisk during bootup and use it as / . You don't need such builds if the regular flash-based storage works for you, since an initramfs build won't allow any persistent configuration by default. Such a configuration is useful during initial OpenWRT/LEDE porting efforts, when you don't yet have a flash driver configured for the flash chip on the device.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/385863", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/218066/" ] }
385,964
Task : Start Chromium on startup on openSUSE machine Problem : I think the problem is that I want to start a GUI program So far : Mon Aug 14; 06:45:00; marton;/etc/systemd ; $ Mon Aug 14; 06:45:00; marton;/etc/systemd ; $ ls -ltotal 24-rw-r--r-- 1 root root 529 Mar 15 07:20 bootchart.conf-rw-rw-r-- 1 root root 138 Aug 14 06:34 chorm_start.service-rw-r--r-- 1 root root 768 Mar 15 07:20 journald.conf-rw-r--r-- 1 root root 709 Mar 15 07:20 logind.confdrwxr-xr-x 1 root root 772 Aug 14 05:16 system-rw-r--r-- 1 root root 1196 Mar 15 07:20 system.confdrwxr-xr-x 1 root root 0 Mar 15 07:20 user-rw-r--r-- 1 root root 992 Mar 15 07:20 user.confMon Aug 14; 06:45:00; marton;/etc/systemd ; $ sudo chmod 664 chorm_start.service root's password:Mon Aug 14; 06:45:19; marton;/etc/systemd ; $ ls -ltotal 24-rw-r--r-- 1 root root 529 Mar 15 07:20 bootchart.conf-rw-rw-r-- 1 root root 138 Aug 14 06:34 chorm_start.service-rw-r--r-- 1 root root 768 Mar 15 07:20 journald.conf-rw-r--r-- 1 root root 709 Mar 15 07:20 logind.confdrwxr-xr-x 1 root root 772 Aug 14 05:16 system-rw-r--r-- 1 root root 1196 Mar 15 07:20 system.confdrwxr-xr-x 1 root root 0 Mar 15 07:20 user-rw-r--r-- 1 root root 992 Mar 15 07:20 user.confMon Aug 14; 06:45:20; marton;/etc/systemd ; $ cat chorm_start.service [Unit] Description="Starting chromium on startup" [Service] ExecStart=/usr/lib64/chromium/chromium [Install] WantedBy=multi-user.target Mon Aug 14; 06:45:38; marton;/etc/systemd ; $ sudo systemctl status chorm_startchorm_start.service - "Starting chromium on startup" Loaded: loaded (/etc/systemd/chorm_start.service; enabled) Active: failed (Result: exit-code) since Mon 2017-08-14 06:38:44 EEST; 7min ago Process: 853 ExecStart=/usr/lib64/chromium/chromium (code=exited, status=1/FAILURE) Main PID: 853 (code=exited, status=1/FAILURE)Aug 14 06:38:47 date chromium[853]: Unable to init server: Could not connect: Connection refusedAug 14 06:38:47 date chromium[853]: [853:853:0814/063844.727638:ERROR:browser_main_loop.cc(279)] Gtk: cannot open display:Mon Aug 14; 06:46:35; marton;/etc/systemd ; $ Question : What am I doing wrong and how to solve the problem
And now, the systemd answer. Since you did ask how to do it with systemd. ☺ This is how the systemd people have been telling people to do this. You are putting the service unit file in entirely the wrong directory. It should not go in /etc/systemd . It should not even go in /etc/systemd/system . It should go in ~marton/.config/systemd/user . This is because a graphical program that you want to run under the aegis of your own account is a per-user service, not a system service. (You are currently invoking a WWW browser as the superuser. That is a very bad idea. Stop that now!) You could configure it for all users in the /etc/systemd/user directory, but it is probable that not all users on your machine need to start Chromium as a service. So configure it just for your user account, specifically. As it is a per-user service, you should manipulate it with the --user option to systemctl , sans sudo . For example: systemctl --user status chrome.service That goes for enabling and disabling it, too. As a per-user service unit, it should be WantedBy=default.target , because there is no multi-user.target for per-user services. (Although I suspect it should actually be WantedBy= your-desktop -session.target , which will be something like gnome-session.target depending on what desktop you are using. What the systemd people have been saying is not wondrously clear on this point.) And one part of the systemd people's bodge to make per-user services look like per-login-session services is the whole graphical-session mechanism, which your service unit must incorporate with the setting:
[Unit]
PartOf=graphical-session.target
What else you have to do depends on how far OpenSuSE has got with the whole graphical-session bodge, which systemd people started pushing in 2016. Ubuntu and Debian provide a whole mess of behind-the-scenes shell scripting in GUI login session startup and shutdown that bodges both the starting/stopping of graphical-session.target and injecting the DISPLAY environment variable. If your OpenSuSE does not yet have this, you might have to fill in that part.
Further reading
Lennart Poettering et al. (2016). systemd.special . systemd manual pages. Freedesktop.org.
Martin Pitt (2016-07-25). units: add graphical-session.target user unit . systemd bug #3678.
Martin Pitt (2016-09-29). graphical-session.target . systemd.conf. Youtube.
Ian Lane (2017-07-30). systemd in GNOME user sessions . GUADEC 2017. Youtube.
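Pulling those pieces together, a minimal per-user unit might look like this — a sketch only: the ExecStart path comes from the question, and on some setups you may still need to get the session's DISPLAY into the user manager's environment:
# ~/.config/systemd/user/chromium.service
[Unit]
Description=Chromium browser
PartOf=graphical-session.target
[Service]
ExecStart=/usr/lib64/chromium/chromium
[Install]
WantedBy=default.target
Then enable and start it without sudo: systemctl --user enable chromium.service and systemctl --user start chromium.service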
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/385964", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/246391/" ] }
386,000
When I do a tar command, like:
tar -xzvf xxxxxxxx
tar -zcvf xxxxxxxx
does the -v parameter reduce the performance of the operation, or is this negligible? I find it useful to track progress on heavy operations, but I would like to know if it is OK to keep it.
In most cases the performance impact will be negligible, as you are only writing a few bytes to stdout for (unless you have tiny files) many more bytes read from the tar archive to store in files, or vice versa. If you compress during creation, as in your second example, that reduces the amount of bytes written, but also influences the processing time. The only time I would worry about the -v is when you are working remotely and your connection is not that fast. Actual display of the file (directory, link, etc.) names processed can then slow down the actual processing.
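If the point of -v is just to see that something is happening and the terminal is the bottleneck, one possible alternative (assuming the pv utility is installed) is a byte counter instead of a name per file:
tar -zcf - somedir | pv > backup.tgz
This shows throughput and total bytes without writing a line per archive member.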
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/386000", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/246422/" ] }
386,055
How to validate the following file content? It should include a single integer/float number; check it with a bash regular expression, or any other approach with awk/sed. Example:
cat /var/VERSION/Version_F35_project_usa
2.8
Use grep , if matched means that's valid: grep -P '^[0-9]+(\.[0-9]+)?$' infile.txt The above regex can be used in sed or awk or any command. sed -n -Ee '/^[0-9]+(\.[0-9]+)?$/p' awk '/^[0-9]+(\.[0-9]+)?$/' Here is also checking if file match with this regex or not. awk '/^[0-9]+(\.[0-9]+)?$/{print "matched";exit} {print "not-matched";exit}' file
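Since the question mentions bash regular expressions: the same check can be done in the shell itself, without an external tool. A sketch (note that $(<file) strips the trailing newline, but any other stray whitespace will make the match fail):
[[ $(< "$file") =~ ^[0-9]+(\.[0-9]+)?$ ]] && echo valid || echo invalid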
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/386055", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/246468/" ] }
386,077
A friend and I are administering a number of virtual machines, each one dedicated to some task in the overall system. When we need to administer them, we typically log in as root, since the only time we log in to the machines is when we need to perform administrative tasks on them. We usually do not use sudo, since every command we run would typically be a sudo command. We would prefer to keep our accounts separate on these machines, to give us separate .bash_history files, see when the other one was last logged in, etc. To do that, we will need two accounts with full root permissions. One method is to change our normal user accounts to have UID=0 GID=0, i.e. the same as root. The question is: Are there any gotchas (i.e. unexpected or non-obvious effects) in having several user accounts with the same UID and GID (in particular the same as root)? P.S. I asked this over at Superuser, but that only earned me a Tumbleweed badge, so I am trying here instead.
Obviously you are able to do (most of) your tasks as "root", since what counts is having the correct uid and gid; however, having both new accounts as root won't work as you wish if you want to distinguish between your and your friend's actions/logins. Often the system or other software will get confused, and will show one user having the name of the other, a totally different user, or even the root name (i.e. the system will get slightly confused, for your purposes, by having three users with the same uid/gid); you might even log in and be left wondering why the system is saying you have another login name. I would advise you both to use regular accounts that belong to the sudo group, and to have some discipline using sudo, avoiding abusing sudo su to get in as regular root; the advantage of running all root commands as sudo commands, ignoring the slight inconvenience, is that everything is logged according to the right user doing it. From the security point of view, the root user often has several kinds of limitations that the system might not impose on other users with the same uid. One good example might be sshd by default not allowing remote root logins. There will be others. Up to the point: many things in Unix have historically been built assuming only root has uid/gid 0. Trying to get around that in non-conventional ways is asking for some surprises along the way. see How to add self to sudoers list? and How to run a specific program as root without a password prompt?
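As a sketch of that setup on a Debian-like system (the group is wheel on RHEL-type systems, and the user name is of course an example):
adduser alice
usermod -aG sudo alice
Then alice runs administrative commands as sudo <command>, and every action is logged under her own name.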
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/386077", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/114317/" ] }
386,082
Hi, when I am submitting a shell script using a cron job I am facing the below issue. I have attached the backup help command.
* * * * * ankush /home/ankush/test.sh
backup: Unrecognized operation 'codebak'; type 'backup help' for list
backup help list
backup: Commands are:
adddump add dump schedule
addhost add host to config
addvolentry add a new volume entry
addvolset create a new volume set
apropos search by help text
dbverify check ubik database integrity
deldump delete dump schedule
deletedump delete dumps from the database
delhost delete host to config
delvolentry delete a volume set sub-entry
delvolset delete a volume set
diskrestore restore partition
dump start dump
dumpinfo provide information about a dump in the database
help get help on commands
interactive enter interactive mode
jobs list running jobs
kill kill running job
labeltape label a tape
listdumps list dump schedules
listhosts list config hosts
listvolsets list volume sets
quit leave the program
readlabel read the label on tape
restoredb restore backup database
savedb save backup database
scantape dump information recovery from tape
setexp set/clear dump expiration dates
status get tape coordinator status
version show version
volinfo query the backup database
volrestore restore volume
volsetrestore restore a set of volumes
Please find the flow of commands from my console.
ankush@hn0-ank-d:~$ more test_script.sh
echo "test"
ankush@hn0-ank-d:~$ * * * * * ankush /home/ankush/test_script.sh
backup: Unrecognized operation 'codebak'; type 'backup help' for list
ankush@hn0-ank-d:~$
When I first ran the code, it asked me to install openafs-client (sudo apt install openafs-client). I went ahead and installed it. What could be the reason?
It looks as if you are trying to enter a crontab job specification directly on the command line. That won't work. To add a crontab job, use $ crontab -e to edit your crontab. Add the job specification there, save and exit the editor. The job specification that you have, * * * * * ankush /home/ankush/test_script.sh looks like a system crontab job. That is, it has an extra sixth field which is the username (see your crontab manual, man 5 crontab ). Your own private crontab should not have this. I believe this is what you should have in your crontab: * * * * * /home/ankush/test_script.sh This will invoke the script /home/ankush/test_script.sh once a minute. Any output or error from this job ought to get emailed to you. The cryptic error message that you get comes from trying to execute the command * * * (etc.) in the shell. It is totally unrelated to cron and your script. The shell just expands the * to all the files in the current directory and tries to run that as a command. Apparently, the first * expands to backup codebak and it just happens that backup is the name of some command that does not understand what codebak means.
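You can reproduce that expansion harmlessly with echo . In a directory containing, say, the two files backup and codebak (hypothetical contents, for illustration):
echo * * * * * ankush /home/ankush/test_script.sh
prints backup codebak five times followed by the remaining words — exactly the word list the shell built here, whose first word backup then got executed as a command with codebak as its first argument.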
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/386082", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/246483/" ] }
386,105
When starting OpenVPN as a service, it does not use my /etc/openvpn/server.conf . When looking in /var/log/syslog , I only see Started OpenVPN service. without any additional logging from OpenVPN. When I start OpenVPN manually, openvpn --config /etc/openvpn/server.conf , I get a bunch of OpenVPN logging and clients can connect with it. How do I make sure that when starting it as a service, it uses the config file? Debian GNU/Linux 9, OpenVPN 2.4.0 x86_64-pc-linux-gnu
If you're using a systemd based OS like Ubuntu 16.04 or Debian 9, you'll need to use the systemctl command instead of service : To enable at boot time: systemctl enable [email protected] To start and stop manually: systemctl start [email protected] systemctl stop [email protected] You can enable, disable, start, and stop any OpenVPN configuration this way by replacing server with the name of the .conf file in /etc/openvpn .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/386105", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/246499/" ] }
386,127
When I use actual values in the curl command in the next script, it works and gives me a result back, but when I use variables it does not work properly. I think this is an issue with how I define them in the command:
amz_t=$(cat amazon-token.txt )
flx_id=$(cat flex-id.txt )
ses_t=$(cat session-token.txt )
curl -s -H 'Host: flex-capacity-na.amazon.com' \
 -H 'Cookie: session-token='$ses_t'' \
 -H 'x-amz-access-token: '$amz_t'' \
 -H 'x-flex-instance-id: '$flx_id'' \
 -H 'Accept: */*' \
 -H 'User-Agent: iOS/10.2.2 (iPhone Darwin) Model/iPhone Platform/iPhone6,1 RabbitiOS/2.0.141' \
 -H 'Accept-Language: en-us' \
 --compressed 'https://flex-capacity-na.amazon.com/GetOffersForProvider?serviceAreaIds=122' >> output.txt
This is the command I try to run in the script; the above-mentioned txt files only contain the expected values, no garbage values.
Try something like this: amz_t=$(cat amazon-token.txt)flx_id=$(cat flex-id.txt)ses_t=$(cat session-token.txt)UA='iOS/10.2.2 (iPhone Darwin) Model/iPhone Platform/iPhone6,1 RabbitiOS/2.0.141'URL='https://flex-capacity-na.amazon.com/GetOffersForProvider?serviceAreaIds=122'curl -s -H 'Host: flex-capacity-na.amazon.com' \ -H "Cookie: session-token=$ses_t" \ -H "x-amz-access-token: $amz_t" \ -H "x-flex-instance-id: $flx_id" \ -H 'Accept: */*' \ -H "User-Agent: $UA" \ -H 'Accept-Language: en-us' \ --compressed "$URL" >> output.txt Use single-quotes for fixed strings (i.e. without any variables in them) and double-quotes for strings that need variable interpolation to take place.
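The general rule is easy to see with a toy example:
$ x=42
$ echo 'value: $x'
value: $x
$ echo "value: $x"
value: 42
Single quotes pass the text through literally; double quotes let the shell substitute the variable before the command (here curl ) ever sees it.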
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/386127", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/211135/" ] }
386,220
I was trying to move a set of 7 files to my computer, via mv g* dir . The command line moved 6 of them, and for the last file gave the following error: mv: g.tex: Argument list too long Since the other files, both those before and after it, are already moved, I tried mv g.tex dir . Same error. Moving other files works fine. (Note: g.tex is a file, not a directory.) Update: Renaming the file via mv also works fine; moving it to another directory on the USB drive also works fine. However, even when I rename it, or move it to another directory on the USB drive, I still cannot move it to my computer. I tried to cat this file, to copy its contents to the desktop: cat: g.tex: Argument list too long What else might be causing this problem? Update: after comparing output of dtruss with a file which successfully moved, here are the lines of the log which differ: read(0x3, "\0", 0x20000) = -1 Err#7write_nocancel(0x2, "mv: \0", 0x4) = 4 0getrlimit(0x1008, 0x7FFF5A00BC78, 0x4) = 0 0write_nocancel(0x2, "g.tex\0", 0x5) = 5 0write_nocancel(0x2, ": \0", 0x2) = 2 0write_nocancel(0x2, "Argument list too long\n\0", 0x17) = 23 0unlink("/Users/username/Desktop/Tex/g.tex\0", 0x7FFF5A00B8A0, 0x17) = 0 0close(0x3) = 0 0 From the list of Unix error codes for read : #define E2BIG 7 /* Argument list too long */ On a successful move, it displays instead: read(0x3, "Beginning of file contents...", 0x20000) = 0 0fstat64_extended(0x3, 0x7FF1F5C02568, 0x7FF1F5C02660) = 0 0fstat64(0x4, 0x7FFF5A653EF0, 0x7FF1F5C02660) = 0 0fchmod(0x4, 0x180, 0x7FF1F5C02660) = 0 0__mac_syscall(0x7FFF8E670D02, 0x52, 0x7FFF5A653E70) = -1 Err#93flistxattr(0x4, 0x0, 0x0) = 0 0flistxattr(0x3, 0x0, 0x0) = 23 0flistxattr(0x3, 0x7FF1F5C02490, 0x17) = 23 0fgetxattr(0x3, 0x7FF1F5C02490, 0x0) = 11 0fgetxattr(0x3, 0x7FF1F5C02490, 0x7FF1F6001000) = 11 0fsetxattr(0x4, 0x7FF1F5C02490, 0x7FF1F6001000) = 0 0fstat64_extended(0x4, 0x7FFF5A653628, 0x7FF1F5C02660) = 0 0fchmod_extended(0x4, 0xFFFFFF9B, 0xFFFFFF9B) = 0 0fchmod(0x4, 0x0, 0xFFFFFF9B) = 0 0close(0x3) = 0 0fchown(0x4, 0x6300000063, 0x63) = 0 0fchmod(0x4, 0x81FF, 0x63) = 0 0fchflags(0x4, 0x0, 0x63) = 0 0utimes("/Users/aleksander/Desktop/Tex/new_filename\0", 0x7FFF5A654860, 0x63) = 0 0 Just in case this helps, the remainder of the lines, which match for a successful mv command and for the failed one, right before the differing text quoted above: open("/dev/dtracehelper\0", 0x2, 0x7FFF53E619B0) = 3 0ioctl(0x3, 0x80086804, 0x7FFF53E61938) = 0 0close(0x3) = 0 0thread_selfid(0x3, 0x80086804, 0x7FFF53E61938) = 167920154 0bsdthread_register(0x7FFF8E8710F4, 0x7FFF8E8710E4, 0x2000) = 1073741919 0ulock_wake(0x1, 0x7FFF53E6116C, 0x0) = -1 Err#2issetugid(0x1, 0x7FFF53E6116C, 0x0) = 0 0mprotect(0x10BDA5000, 0x88, 0x1) = 0 0mprotect(0x10BDA7000, 0x1000, 0x0) = 0 0mprotect(0x10BDBD000, 0x1000, 0x0) = 0 0mprotect(0x10BDBE000, 0x1000, 0x0) = 0 0mprotect(0x10BDD4000, 0x1000, 0x0) = 0 0mprotect(0x10BDD5000, 0x1000, 0x1) = 0 0mprotect(0x10BDA5000, 0x88, 0x3) = 0 0mprotect(0x10BDA5000, 0x88, 0x1) = 0 0getpid(0x10BDA5000, 0x88, 0x1) = 28838 0stat64("/AppleInternal/XBS/.isChrooted\0", 0x7FFF53E61028, 0x1) = -1 Err#2stat64("/AppleInternal\0", 0x7FFF53E610C0, 0x1) = -1 Err#2csops(0x70A6, 0x7, 0x7FFF53E60B50) = 0 0sysctl([CTL_KERN, 14, 1, 28838, 0, 0] (4), 0x7FFF53E60CA8, 0x7FFF53E60CA0, 0x0, 0x0) = 0 0ulock_wake(0x1, 0x7FFF53E610D0, 0x0) = -1 Err#2csops(0x70A6, 0x7, 0x7FFF53E60430) = 0 0stat64("/Users/aleksander/Desktop/Tex\0", 0x7FFF53E62B88, 0x7FFF53E60430) = 0 0lstat64("g.tex\0", 0x7FFF53E62AF8, 0x7FFF53E60430) = 0 
0lstat64("/Users/aleksander/Desktop/Tex\0", 0x7FFF53E62A68, 0x7FFF53E60430) = 0 0stat64("g.tex\0", 0x7FFF53E62AF8, 0x7FFF53E60430) = 0 0stat64("/Users/aleksander/Desktop/Tex/g.tex\0", 0x7FFF53E62A68, 0x7FFF53E60430) = -1 Err#2access("/Users/aleksander/Desktop/Tex/g.tex\0", 0x0, 0x7FFF53E60430) = -1 Err#2rename("g.tex\0", "/Users/aleksander/Desktop/Tex/g.tex\0") = -1 Err#18stat64("/\0", 0x7FFF53E5FB60, 0x7FFF53E60430) = 0 0open_nocancel(".\0", 0x0, 0x1) = 3 0fstat64(0x3, 0x7FFF53E5F900, 0x1) = 0 0fcntl_nocancel(0x3, 0x32, 0x7FFF53E61980) = 0 0close_nocancel(0x3) = 0 0stat64("/Volumes/NO NAME\0", 0x7FFF5A00A870, 0x7FFF5A00C980) = 0 0stat64("/Volumes/NO NAME\0", 0x7FFF5A00AB60, 0x7FFF5A00C980) = 0 0getattrlist("/Volumes/NO NAME/g.tex\0", 0x7FFF8E715B04, 0x7FFF5A00C470) = 0 0statfs64(0x7FFF5A00C980, 0x7FFF5A00CD88, 0x7FFF5A00C470) = 0 0lstat64("g.tex\0", 0x7FFF5A00C8F0, 0x7FFF5A00C470) = 0 0open("g.tex\0", 0x0, 0x0) = 3 0open("/Users/aleksander/Desktop/Tex/g.tex\0", 0xE01, 0x0) = 4 0fstatfs64(0x4, 0x7FFF5A00BFF8, 0x0) = 0 0 xattr -l g.tex doesn't give any output. ls -l g.tex yields: -rwxrwxrwx 1 username staff 159939 Aug 15 11:54 g.tex mount yields: /dev/disk5s1 on /Volumes/NO NAME (msdos, local, nodev, nosuid, noowners)
E2BIG is not one of the errors that read(2) may return. It looks like a bug in the kernel. Pure speculation, but it could be down to some corruption on the file system and the macOS driver for the FAT filesystem returning that error upon encountering that corruption which eventually makes it through to the return of read . In any case, it looks like you've taken the investigation as far as it gets. Going further would require dissecting the file system and the kernel driver code. You could have a look at the kernel logs to see if there's more information there. You could try mounting the FS on a different OS. Or use the GNU mtools to access that FAT filesystem. You could also report the problem to Apple as at least a documentation issue (to include E2BIG as one of the possible error codes, and the conditions upon which it may be returned).
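As a sketch of the mtools route (device path taken from the question; unmount the volume first, and reading the raw device may require root):
mcopy -i /dev/disk5s1 ::/g.tex ./g.tex
mtools parses the FAT structures itself in user space, so it sidesteps whatever the kernel driver is doing when it returns that bogus error.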
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/386220", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/145784/" ] }
386,244
I'm following this guide to install Gentoo, in my case in a virtual machine. During partitioning with parted there is 1 MiB missing at the beginning of all space allocation and 1 MiB missing at the end. There is an example of this in the guide, when (parted) print is invoked. In my case, I allocated a disk with exactly 200 GiB for this VM, which translates into 204800 MiB. I was expecting the first partition to begin at 0 MiB and the last partition to end at 204800 MiB. But the space allocated begins at 1 MiB and ends at 204799 MiB, as parted's print output showed. The last partition was allocated with (parted) mkpart primary 5121 -1 . Why is space missing: 1 MiB before the first partition and 1 MiB after the last partition?
The space reserved before is known as partition alignment; 1MiB is reserved by default by parted. It is usually reserved for performance reasons, either on physical media or in VMs. See Partition Alignment: Partition alignment is understood to mean the proper alignment of partitions to the reasonable boundaries of a data storage device (such as a hard disk, solid-state drive (SSD) or RAID volume). Proper partition alignment ensures ideal performance during data access. Incorrect partition alignment will cause reduced performance, especially with regard to SSDs (with an internal page size of 4,096 or 8,192 bytes, for example), hard disks with four-kilobyte (4,096 byte) sectors and RAID volumes. See also Guest OS Partition Alignment: An unaligned partition results in the I/O crossing a track boundary and causes an additional I/O. This incurs a penalty on latency and throughput. The additional I/O (especially if small) can impact system resources significantly on some host types. An aligned partition ensures that the single I/O is serviced by a single device, eliminating the additional I/O and resulting in overall performance improvement.
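You can also ask parted itself whether a given partition meets the device's alignment, for example (device name is an example):
parted /dev/sda align-check optimal 1
which reports whether partition 1 is optimally aligned.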
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/386244", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/86552/" ] }
386,319
If I start ksh or mksh , my upwards arrow does nothing: $ ksh$ ^[[A^[[A^[[A^[[A^[[A But it works with bash if I start bash and press the upwards arrow. $ bashdeveloper@1604:~$ ssh [email protected] -p 2223 I have no history if I start ksh or mksh. I even set the $HISTFILE variable and still no history if I start a new shell. What can I do about it? Is it true that the Korn shell can't remember history between sessions while the bash shell can? If I like the Korn shell and I want a better and more extensive history, is it possible to use that functionality with ksh?
No, this is not true. If $HISTFILE is a filename, then the session history will be stored in that file. This is explained in the manual. The number of commands remembered in the shell history is limited by the value of $HISTSIZE . I believe that the history is flushed to the file after the execution of each command, as opposed to bash that flushes the history to file when the shell session ends. This may depend on which implementation of ksh you are using. Set HISTFILE to a filename in your ~/.profile file (which is read by login shells), or in the file pointed to by $ENV (which is read by interactive shells and has the default value of $HOME/.kshrc in ksh93 ). $HISTSIZE is by default 500 or 512 or something thereabouts depending on the implementation of ksh you are using. Neither of these variables need to be exported. The history file does not need to exist before doing this. In comments you mention that some Emacs movement and command line editing keys do not work. This is because the shell is not in Emacs editing mode. Either set the variable EDITOR (or VISUAL ) to emacs or use set -o emacs to enable Emacs command line editing mode. This is also explained in the manual. These variable also do not need to be exported unless you want other programs than the shell to use them. Summary: In your $HOME/.profile file: export ENV="$HOME/.kshrc" In your $HOME/.kshrc file: HISTFILE="$HOME/.ksh_history"HISTSIZE=5000export VISUAL="emacs"export EDITOR="$VISUAL"set -o emacs This has been thoroughly tested on OpenBSD with both ksh93 and pdksh (which is ksh on OpenBSD). I don't use mksh , but since it's a pdksh derivative, I believe this would work with that shell too. Note that pdksh and ksh93 (and bash ) can not share history file as they have different history formats. This is usually not a problem if you have separated initialization files for bash and ksh , e.g. .bash_profile and .bashrc for bash and .profile and .kshrc for ksh (with export ENV="$HOME/.kshrc" in .profile ). You may further distinguish various ksh implementations by looking at $KSH_VERSION (usually).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/386319", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9115/" ] }
386,346
I have a file that looks like this:
Heading1,Heading2
value1,value2
And another one that looks like this:
Row1
Row2
How can I combine the two to become:
Row1,Heading1,Heading2
Row2,value1,value2
Effectively appending a column in the place of the first column?
Job for paste :
paste -d, f2.txt f1.txt
-d, sets the delimiter as , (instead of tab). With awk :
awk 'BEGIN {FS=OFS=","} NR==FNR {a[NR]=$0; next} {print a[FNR], $0}' f2.txt f1.txt
BEGIN {FS=OFS=","} sets the input and output field separators as ,
NR==FNR {a[NR]=$0; next} : for the first file ( f2.txt ), we are saving the record number as key to an associative array ( a ) with values being the corresponding record
{print a[FNR], $0} : for the second file, we are just printing the record with the value of the record-numbered key from a prepended
Example:
% cat f1.txt
Heading1,Heading2
value1,value2
% cat f2.txt
Row1
Row2
% paste -d, f2.txt f1.txt
Row1,Heading1,Heading2
Row2,value1,value2
% awk 'BEGIN {FS=OFS=","} NR==FNR {a[NR]=$0; next} {print a[FNR], $0}' f2.txt f1.txt
Row1,Heading1,Heading2
Row2,value1,value2
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/386346", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/133349/" ] }
386,351
What is the interpretation of the whitespace in this command foo= bar ? Why are foo=bar and foo= bar interpreted differently Example (Ubuntu bash) developer@1604:~$ foo=bardeveloper@1604:~$ foo= barThe program 'bar' is currently not installed. You can install it by typing:sudo apt install bar
This is syntax: Bash variables get initialized with the value that follows immediately after the assignment operator = . That's simply the way it is... When you do foo= bar you are running the command bar with the variable foo set to the empty string in its environment; such a leading assignment applies only to that one command and does not persist in the shell afterwards.
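A quick way to see the scope of such a leading assignment, using the external printenv command:
$ foo=bar printenv foo
bar
$ echo "${foo-unset}"
unset
The assignment was visible in the environment of printenv , but the shell's own foo is still unset afterwards.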
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/386351", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9115/" ] }
386,395
I am using convert to create a PDF file from about 2,000 images: convert 0001.miff 0002.miff ... 2000.miff -compress jpeg -quality 80 out.pdf The process terminates reproducibly when the output file has reached 2^31-1 bytes (2 GB −1) with the message convert: unknown `out.pdf'. The PDF file specification allows for ≈10 GB . I tried to pull more information from -debug all , but I didn’t see anything helpful in the logging output. The file system is ext3, which allows for files at least up to 16 GiB (maybe more). As to ulimit , file size is unlimited . /etc/security/limits.conf only contains commented-out lines. What else can cause this and how can I increase the limit? ImageMagick version: 6.4.3 2016-08-05 Q16 OpenMP. Distribution: SLES 11.4 (i586)
Your limitation indeed does not stem from the filesystem, nor, I think, from package versions. Your 2GB limit comes from your using a 32-bit version of your OS. The way to raise the file size limit would be installing a 64-bit version, if the hardware supports it. See Large file support: Traditionally, many operating systems and their underlying file system implementations used 32-bit integers to represent file sizes and positions. Consequently, no file could be larger than 2^32 − 1 bytes (4 GB − 1). In many implementations, the problem was exacerbated by treating the sizes as signed numbers, which further lowered the limit to 2^31 − 1 bytes (2 GB − 1).
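You can confirm the bitness of the installed system with, for example:
uname -m
getconf LONG_BIT
getconf prints 32 on a 32-bit userland and 64 on a 64-bit one.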
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/386395", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28856/" ] }
386,412
Things grow tedious quickly when playing with a massive tool like mount . With different filesystem types and different settings for each filesystem, mount stood the test of time. I wonder how mount knows which default settings to use when mounting a filesystem. Aside from the fact that the udisksd daemon automatically mounts filesystems, how does mount determine the appropriate settings when mounting a filesystem without options, like the following: # mount /dev/sdc /media/usb_drive What we're particularly interested in is the long options of mount such as ( ro , rw , noexec , exec , nodev ,...). As seen above, the command doesn't list any long options:
$ mount | grep /dev/sdc
/dev/sdc on /media/usb_drive type ext4 (rw,relatime,data=ordered)
You can see some options were used by default for the ext4 filesystem when mounting /dev/sdc: (rw,relatime,data=ordered) . Though, there's no entry for /dev/sdc in fstab. Notice that the filesystem lives on the whole usb drive, not on a partition. The above command looks as if we ran this command: # mount /dev/sdc /media/usb_drive -o rw,relatime,data=ordered What is the mechanism that mount uses to determine the appropriate mounting options?
On Linux at least, any defaults are either hard coded into: The mount command itself The filesystem specific mount helper ( mount.ext4 in this case). The generic VFS layer mount function in the kernel The filesystem specific mount function in the kernel relatime falls under case 3, and is actually a common location for people to locally patch in custom kernels (usually it gets patched to default to noatime ). rw is also case 3, but it can be overridden by the FS specific mount function in the kernel. data=ordered is from 4, is ext* specific, and can be changed at build time to data=writeback if you're building your own kernel (and may be different on some distros). The exact list you get for default options will vary by filesystem type (BTRFS has a different set other than rw,relatime than ext4 for example), by specifics of the filesystem (you can embed some default options in the superblock for ext4), and sometimes even by hardware (BTRFS tries to guess if you have an SSD and will add the FS specific ssd mount option if it thinks you do). The situation is pretty similar on most other systems as well, although on some older UNIX systems mount ends up just being a multiplexer for FS specific mount commands.
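For the superblock-embedded defaults mentioned above for ext4, you can inspect and change them with tune2fs (device name taken from the question; the options shown are just examples):
tune2fs -l /dev/sdc | grep 'Default mount options'
tune2fs -o acl,user_xattr /dev/sdc
Options stored this way are applied on every mount without ever appearing on the mount command line.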
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/386412", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/233788/" ] }
386,444
I am trying to better understand the way Docker wires up the network and came across this question. Note: I don't believe this has anything to do with Docker per se, that was only the vehicle under which it came up. Please feel free to correct if this is a misperception on my part! With Docker up and running in Swarm mode, the following iptables command is executed: > iptables -t filter -LChain INPUT (policy ACCEPT)target prot opt source destinationACCEPT tcp -- anywhere anywhere tcp dpt:domainACCEPT udp -- anywhere anywhere udp dpt:domainACCEPT tcp -- anywhere anywhere tcp dpt:bootpsACCEPT udp -- anywhere anywhere udp dpt:bootpsChain FORWARD (policy ACCEPT)target prot opt source destinationDOCKER-ISOLATION all -- anywhere anywhereDOCKER all -- anywhere (1) anywhereACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHEDACCEPT all -- anywhere (2) anywhereACCEPT all -- anywhere (3) anywhereDOCKER all -- anywhere (4) anywhereACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHEDACCEPT all -- anywhere anywhereACCEPT all -- anywhere anywhere I added the 1,2,3,4 numbers in the output. Numbers 1 and 4 seem to be duplicates. Likewise, 2 and 3 seem like exact copies of one another. What is the purpose of these? Are they really duplicates? If not, how do I see the next level of information which would then discern them? Separately, in the first section, if anyone can explain dpt:domain vs dpt:bootps that would be cool too!
The rule "pairs" 1-4 and 2-3 you noted are most likely not duplicates, but you can't see the differences in the output of the command you used. if you use iptables -L -v you will get additional output that may reveal the differences - this usually occurs (in my experience) when the rules are operating on different interfaces. The dpt:domain and dpt:bootps are different destination port specifications. dpt:domain is destination port 53 (domain, or DNS), while dpt:bootps is destination port 67 (DHCP). Edit: you are correct, this situation has nothing to do with Docker directly. It's a relatively common situation that was exposed by Docker in your environment, but occurs outside of a Docker environment just as often.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/386444", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/145710/" ] }
386,470
I'm looking for a relatively simple one-line command for bash that will disable the password prompt on future ssh logins (or preferably only the next ssh login), as well as a way to reverse it (the reversal doesn't have to be a one-liner). Is this possible?
Simply ssh-copy-id [email protected] for one-stop shopping for key-pair authentication. If you don't already have a keypair to use, generate one: ssh-keygen && ssh-copy-id [email protected] .
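To reverse it later, delete the matching public-key line from ~/.ssh/authorized_keys on the server — for example (the pattern is whatever comment appears at the end of your public key; adjust it):
ssh user@host "sed -i '/mykey-comment/d' ~/.ssh/authorized_keys"
After that the server falls back to prompting for a password again.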
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/386470", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
386,476
When I want to search a whole tree for some content, I use find . -type f -print0 | xargs -0 grep <search_string> Is there a better way to do this in terms of performance or brevity?
Check if your grep supports -r option (for recurse ): grep -r <search_string> .
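Two commonly useful refinements of the recursive form (standard GNU grep options):
grep -rn 'search_string' .
grep -rl --include='*.conf' 'search_string' .
-n adds line numbers, -l prints only the names of matching files, and --include restricts the search to matching file names.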
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/386476", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43524/" ] }
386,490
Goal: Need lmc or "LAN Messenger" to work on 2 lans separated by a Linux gateway using iptables. Information: Must be this program "LAN Messenger". Lmc uses multicast address 239.255.100.100:50000 to see users, then creates a tcp connection for chat. lan1 = olan1 = 192.168.2.0/24: gateway is a smart switch "Linksys Etherfast router" with filter multicast disabled. lan2 = slan1 = 10.10.10.0/24: gateway is the linux box gateway pc = Ubuntu 14 server. iptables to forward some traffic between lans. iptable rules: filter table: -P INPUT ACCEPT-P FORWARD ACCEPT-P OUTPUT ACCEPT-A INPUT -i lo -j ACCEPT-A FORWARD -i slan1 -o olan1 -j ACCEPT-A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT-A FORWARD -m iprange --src-range 192.168.2.100-192.168.2.254 -j ACCEPT-A FORWARD -i olan1 -o slan1 -p tcp -m tcp --dport 22 -m conntrack --ctstate NEW -j ACCEPT-A FORWARD -i olan1 -o slan1 -p tcp -m tcp --dport 9696 -m conntrack --ctstate NEW -j ACCEPT-A FORWARD -i olan1 -o slan1 -p tcp -m tcp --dport 50000 -m conntrack --ctstate NEW -j ACCEPT-A FORWARD -i olan1 -o slan1 -p udp -m udp --dport 50000 -m conntrack --ctstate NEW -j ACCEPT-A FORWARD -s 224.0.0.0/4 -d 224.0.0.0/4 -j ACCEPT-A FORWARD -p icmp -j ACCEPT-A FORWARD -p igmp -j ACCEPT-A FORWARD -i olan1 -o slan1 -j DROP nat table: -P PREROUTING ACCEPT-P INPUT ACCEPT-P OUTPUT ACCEPT-P POSTROUTING ACCEPT-A POSTROUTING -j MASQUERAD Rules that I thought should forward multicast traffic: -A FORWARD -i olan1 -o slan1 -p tcp -m tcp --dport 50000 -m conntrack --ctstate NEW -j ACCEPT-A FORWARD -i olan1 -o slan1 -p udp -m udp --dport 50000 -m conntrack --ctstate NEW -j ACCEPT-A FORWARD -s 224.0.0.0/4 -d 224.0.0.0/4 -j ACCEPT-A FORWARD -p igmp -j ACCEPT Monitored the traffic of the gateway using tcpdump, I never saw multicast traffic go through as I changed iptable rules. Will iptables forward multicast traff ic? Do I need to use a multicast routing daemon or proxy like pimd or smcroute ?
I just tested smcroute with two network namespaces and two veth pairs. Setup: ns1 <-- main namespace --> ns210.0.0.1 -- 10.0.0.254 10.0.1.254 -- 10.0.1.1veth0b veth0a veth1a veth1b The Debian smcroute package is version 2.0.0, and doesn't seem to support virtual eth, so I installed version 2.3.1 from the smcroute homepage . The multicast route howto of smcroute is also very helpful. I used the ssmping package to test multicasts. I ran ssmpingd in ns2, while pinging with ssmping -4 -I veth0b 10.0.1.1 from ns1. These are source-specific multicasts (SSM) using group 232.43.211.234 , you can also test any-source multicasts (ASM) with asmping . I don't know what LAN messenger uses. I enabled forwarding in the main namespace to allow the unicast ping requests to get through, then did smcroutectl add veth1a 10.0.1.1 232.43.211.234 veth0a and everything worked fine. I would expect it also to work, adjusted to your setup, though you also may have to smcroutectl join to tell your switches they should forward multicasts properly. Multiple tcpdump terminal windows on all relevant interfaces greatly help with debugging. I found the following pieces of information interesting: To be able to setup multicast routes a program must connect to the multicast routing socket in the kernel, when that socket is closed, which is done automatically when a UNIX program ends, the kernel cleans up all routes. This means if you intend to use the multicast routing feature of the kernel, you must use a demon, not a commandline tool. For static vs. dynamic routing, it says: The intended purpose of smcroute is to aid in situations where dynamic multicast routing does not work properly. However, a dynamic multicast routing protocol is in nearly all cases the preferred solution. The reason for this is their ability to translate Layer-3 signalling to Layer-2 and vice versa (IGMP or MLD). Finally, pay close attention to the TTL that is produced by your LAN messenger, see multicast FAQ at the end.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/386490", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/130246/" ] }
386,499
This will notify us if the file is empty:
[[ ! -s $file ]] && echo "hello there I am empty file !!!"
But how to check whether a file contains only blank characters (spaces or tabs)? An "empty" file can still include spaces / TABs.
Just grep for a character other than space: grep -q '[^[:space:]]' < "$file" && printf '%s\n' "$file contains something else than whitespace characters"
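Wrapped into the same shape as the test in the question, that becomes:
if ! grep -q '[^[:space:]]' < "$file"; then echo "hello there I am an empty (or blank-only) file !!!"; fi
i.e. the file counts as empty when no non-whitespace character is found in it.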
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/386499", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
386,510
I want to monitor 2 different log files (at the time events appear in the logs):
tail -f /var/log/file1 -f /var/log/file2
For each file, I want to grep some patterns:
tail -f /var/log/file1 | grep '\(pattern1\|pattern2\)'
tail -f /var/log/file2 | grep '\(pattern3\|pattern4\|pattern5\)'
I do not know how to make this all work together. Furthermore, I would like to print file1's log output in red and file2's in blue. Again, I can do it for one file (snippet I grabbed on this forum):
RED='\033[0;31m'
BLUE='\033[0;34m'
tail -fn0 /var/log/file1 | while read line; do
 if echo $line | grep -q '\(pattern1\|pattern2\)'; then
 echo -e "{$RED}$line"
 fi
done
But I absolutely do not know how to do this for multiple files. Any ideas?
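One way to do it (a sketch, assuming GNU tools) is to run one pipeline per file, let awk do both the filtering and the colouring, and put the pipelines in the background; fflush() keeps the output from stalling in the pipe buffers:
#!/bin/bash
tail -Fn0 /var/log/file1 | awk '/pattern1|pattern2/ {print "\033[0;31m" $0 "\033[0m"; fflush()}' &
tail -Fn0 /var/log/file2 | awk '/pattern3|pattern4|pattern5/ {print "\033[0;34m" $0 "\033[0m"; fflush()}' &
wait
Both pipelines write to the same terminal, one coloured red and the other blue; adjust the patterns and colour codes to taste. This replaces the separate grep and the while read loop from the question.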
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/386510", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/244315/" ] }
386,513
I know it can be done (LUKS) using an extra hard-drive, which involves significant manual work. In Windows it's just a single click for encrypting/decrypting your hard-drives. Ubuntu offers a way to encrypt during installation, but can anyone explain why there are no viable encryption mechanisms/tools in Linux, as compared to Windows/Mac, after the OS is installed? Is there an inherent bottleneck in the OS architecture, or is it just that no one has developed one yet?
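As far as I can tell, it is largely a practical rather than an architectural limit: encrypting a disk after the fact means rewriting every sector in place while keeping the filesystem consistent, which is easy to offer safely in an installer (the data isn't there yet) and risky on a mounted root filesystem — hence the install-time option. Tools do exist, they are just not one-click: for example cryptsetup-reencrypt can LUKS-encrypt an existing partition in place (run from a live/rescue system, after making room for the LUKS header, and with a full backup first), and for new space a LUKS container is only a few commands (device name is a placeholder):
cryptsetup luksFormat /dev/sdXY
cryptsetup open /dev/sdXY secret
mkfs.ext4 /dev/mapper/secret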
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/386513", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/246664/" ] }
386,521
Here's the problem: I'm trying to SSH into a system that is accessible from at least 3 different networks—sometimes directly, sometimes via a proxy—at different times. Connecting directly is far faster and more reliable than connecting via an intermediate host, which is again far faster and more reliable than connecting over the general internet, so I would like SSH to attempt to connect in 3 different ways in a prioritized fashion, picking the first that succeeds. They're all the same machine, obviously, so I don't want to keep having to manually choose between 3 different aliases depending on where I'm connecting from. However, I can't find any mechanism for solving this. Is it possible to do this at all, or no? If not, what do people generally do in such a situation?
Do not use aliases for ssh connections! Use a proper ssh_config in ~/.ssh/config . It has some truly powerful features. Let's say you can identify which network you are in — for example using your IP, which can be pulled using hostname -I . So let's write some configuration:
# in network1 I am getting ip from "10.168.*.*" and I need to connect through proxy
Match Host myalias Exec "hostname -I | grep 10\.168\."
 Hostname real-host-IP
 ProxyCommand ssh -W %h:%p proxy-server
# in network2 I am getting IP from "192.168.*.*" and I do not need a proxy
Match Host myalias Exec "hostname -I | grep 192\.168\."
 Hostname real-host-IP
# in network3 I am getting something else
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/386521", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6252/" ] }
386,522
I just read here:
up to 128TiB virtual address space per process (instead of 2GiB)
64TiB physical memory support instead of 4GiB (or 64GiB with the PAE extension)
Why is that? I mean, is the physical memory support limited by the kernel or by the current hardware? And why would you need twice as much virtual address space as the physical memory you can actually address?
Those limits don't come from Debian or from Linux, they come from the hardware. Different architectures (processor and memory bus) have different limitations. On current x86-64 PC processors, the MMU allows 48 bits of virtual address space . That means that the address space is limited to 256TB. With one bit to distinguish kernel addresses from userland addresses, that leaves 128TB for a process's address space. On current x86-64 processors, physical addresses can use up to 48 bits , which means you can have up to 256TB. The limit has progressively risen since the amd64 architecture was introduced (from 40 bits if I recall correctly). Each bit of address space costs some wiring and decoding logic (which makes the processor more expensive, slower and hotter), so hardware manufacturers have an incentive to keep the size down. Linux only allows physical addresses to go up to 2^46 (so you can only have up to 64TB) because it allows the physical memory to be entirely mapped in kernel space. Remember that there are 48 bits of address space; one bit for kernel/user leaves 47 bits for the kernel address space. Half of that at most addresses physical memory directly, and the other half allows the kernel to map whatever it needs. (Linux can cope with physical memory that can't be mapped in full at the same time, but that introduces additional complexity, so it's only done on platforms where it's required, such as x86-32 with PAE and armv7 with LPAE.) It's useful for virtual memory to be larger than physical memory for several reasons: It lets the kernel map the whole physical memory, and have space left for additional virtual mappings. In addition to mappings of physical memory, there are mappings of swap, of files and of device drivers. It's useful to have unmapped memory in places: guard pages to catch buffer overflows , large unmapped zones due to ASLR , etc.
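On Linux you can see the width limits your particular CPU advertises with:
grep -m1 'address sizes' /proc/cpuinfo
which prints something like address sizes : 39 bits physical, 48 bits virtual (the exact numbers depend on the processor model).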
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/386522", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
386,541
Hey, I am using the conduit curl method to create tasks from a POST. It works fine when I run it from the terminal with hardcoded values, but when I try to execute it with variables it throws an error.
Script:
#!/bin/bash
echo "$1"
echo "$2"
echo "$3"
echo "$4"
echo "$5"
echo '{ "transactions": [ { "type": "title", "value": "$1" }, { "type": "description", "value": "$2" }, { "type": "status", "value": "$3" }, { "type": "priority", "value": "$4" }, { "type": "owner", "value": "$5" } ]}' | arc call-conduit --conduit-uri https://mydomain.phacility.com/ --conduit-token mytoken maniphest.edit
Execution:
./test.sh "test003 ticket from api post" "for testing" "open" "high" "ahsan"
Output:
test003 ticket from api post
for testing
open
high
ahsan
{"error":"ERR-CONDUIT-CORE","errorMessage":"ERR-CONDUIT-CORE: Validation errors:\n - User \"$5\" is not a valid user.\n - Task priority \"$4\" is not a valid task priority. Use a priority keyword to choose a task priority: unbreak, very, high, kinda, triage, normal, low, wish.","response":null}
As you can see in the error, it's reading $4 and $5 as literal values, not variables. And I am failing to understand how to use $variables as input in these arguments.
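The JSON here is inside single quotes, and the shell never expands $1 … $5 inside single quotes, so the literal strings "$4" and "$5" are what conduit receives. One fix (a sketch that keeps your argument order and the same arc invocation) is to build the JSON with an unquoted here-document, inside which variables do expand:
#!/bin/bash
arc call-conduit --conduit-uri https://mydomain.phacility.com/ --conduit-token mytoken maniphest.edit <<EOF
{ "transactions": [
 { "type": "title", "value": "$1" },
 { "type": "description", "value": "$2" },
 { "type": "status", "value": "$3" },
 { "type": "priority", "value": "$4" },
 { "type": "owner", "value": "$5" }
] }
EOF
Note this assumes the arguments contain no double quotes or backslashes; for arbitrary input, generate the JSON with a JSON-aware tool such as jq instead.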
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/386541", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/246842/" ] }
386,564
I've got a file containing:
Georgia
Unix
Google
The desired output is:
Georg
Un
Goog
sed 's/..$//' < input > output
The pattern ..$ matches the last two characters on each line, and replacing them with nothing deletes them.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/386564", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/244055/" ] }
386,572
I'm trying to build the "nvidiaBeta" driver, but it fails to build nvidia-settings with the error gtk+-2.x/ctkgridlicense.c:38:23: fatal error: dbus/dbus.h: No such file or directory I have tried installing all kinds of dbus packages but the closest thing I get in my nix store is a "dbus-c++/dbus.h" from the dbus_cplusplus derivation.While searching I've read that apparently what I need is supposed to be contained in "dbus-libs" but it doesn't seem to be available in channel 17.03. I cannot seem to figure out which derivation is supposed to pull in this library. Can I somehow work around the issue and get it to use the one I have from the dbus_cplusplus derivation?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/386572", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210021/" ] }
386,574
I installed Linux Mint on one HDD and Windows 10 on another (Windows first), then I set the BIOS to boot the Linux disk first. It boots just fine, but it skips GRUB completely and just boots into Mint. I can boot into Windows by changing the BIOS back, but I'd much prefer to have grub handle that. I have already tried updating grub (with update-grub) and get the output: Found linux image: {some file}Found initrd image: {some file}Found memtest86+ image: {some elf file}Found memtest86+ image: {some bin file} I think grub can't find Windows, but I'm open to any other ideas. Update: It shows the menu now but Windows 10 isn't on it.
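Two separate things are usually going on here, so treat this as the usual checklist rather than a guaranteed fix. The menu being skipped is typically Mint's hidden-menu default: in /etc/default/grub set
GRUB_TIMEOUT=10
comment out any GRUB_HIDDEN_TIMEOUT line (or GRUB_TIMEOUT_STYLE=hidden , if present), then run sudo update-grub . Windows missing from the menu means os-prober found nothing: make sure it is installed ( sudo apt install os-prober ) and run sudo os-prober by hand to see what it detects. If it prints nothing, the most common cause with two disks is mixed boot modes — Windows installed in UEFI mode and Mint in legacy BIOS mode, or vice versa; GRUB can only chainload systems installed in the same mode, so one of the two would need reinstalling in the matching mode.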
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/386574", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126625/" ] }
386,579
So, I have ssh set up on a Linux machine to start screen when the user (root) logs in over ssh (on a basically single-user machine). I have dataplicity installed too, so I can access it from anywhere. When I remote to it, it runs as user dataplicity, so I su to the root user to do anything, which doesn't reconnect to any available screen sessions. Is there a way to add things that happen after 'su', only for a specific user?
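su - root (with the dash) starts a login shell, so root's own startup files are read; a hedged way to get the same behaviour after su is to put the reattach logic at the end of root's ~/.bash_profile (or ~/.profile ), guarded so it only runs in interactive shells and not inside an existing screen:
if [[ $- == *i* ]] && [[ -z $STY ]]; then
 screen -dR
fi
Alternatively, run it explicitly without touching startup files: su - root -c 'screen -dR' . The $STY check is what stops screen from nesting inside itself.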
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/386579", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/211659/" ] }
386,619
For example, I want to check if a directory exists on the phone.
R=$(adb shell 'ls /mnt/; echo $?' | tail -1);
$ echo $R
0
$ if [ "$R" -ne 0 ]; then echo "Path doesn't exist"; else echo "Path exists"; fi
: integer expression expected
Path exists
Ok, try it with another variable which is definitely 0.
$ x=0
$ if [ "$x" -ne 0 ]; then echo "Path doesn't exist"; else echo "Path exists"; fi
Path exists
$ echo "|$x|"
|0|
$ echo "|$R|"
|0
The second pipe isn't printed. Is there a character after 0? Try to trim:
$ R=$(adb shell 'ls /mnt/; echo $?' | tail -1 | xargs)
$ echo "|$R|"
|0
I'm out of ideas.
adb is adding a carriage-return (aka 0x0d , Ctrl-M , \r , etc) before the line-feed. Probably for ease of use with Windows software that expects lines to end with CR-LF rather than just LF. You can see this yourself with hexdump aka hd , e.g.: $ printf "$R" | hd00000000 30 0d |0.|00000002 Because you only need to return a single value (the exit code). you could use printf instead of echo and redirect all of ls 's output to /dev/null on the Android device to avoid printing any newlines (then adb doesn't add a CR): R="$(adb shell 'ls /mnt/ > /dev/null 2>&1 ; printf $?')" If your android device doesn't have printf , or if you need to return one or more lines of output from a the android shell, you can use tr -d '\r' or dos2unix or sed 's/\r$//' or similar to strip the CR. dos2unix and sed are better choices than tr here because they will only strip CRs that are immediately followed by LF, leaving alone any CRs that might be in elsewhere in a line: $ R="$(adb shell 'ls /mnt/ > /dev/null 2>&1 ; echo $?' | dos2unix)"$ printf "$R" | hd00000000 30 |0|00000001
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/386619", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/163956/" ] }
386,632
I have spent considerable time on my CentOS 7 with sudo. I added the local user test to /etc/sudoers via visudo as follows:

## Next comes the main part: which users can run what software on
## which machines (the sudoers file can be shared between multiple
## systems).
## Syntax:
##
##      user    MACHINE=COMMANDS
##
## The COMMANDS section may have other options added to it.
##
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
test    ALL=(ALL)       ALL

Also added test to the wheel group:

[root@ark-centos-smb4 ~]# groups test
test : bin wheel arkgrp

Then I su to test, and try to run a command as root, but I get an error saying that the user is not in the sudoers file.

[root@ark-centos-smb4 ~]# su - test
Last login: Tue Aug  8 01:03:48 PDT 2017 on pts/0
[test@ark-centos-smb4 ~]$ sudo ls /root/
[sudo] password for test:
test is not in the sudoers file.  This incident will be reported.

Interestingly, the root user is also refused when running sudo:

[root@ark-centos-smb4 ~]# sudo ls
root is not allowed to run sudo on ark-centos-smb4.  This incident will be reported.

visudo result:

[root@ark-centos-smb4 ~]# visudo -c
/etc/sudoers: parsed OK
/etc/sudoers.d/arkgrp-users: parsed OK

sudo -V result:

[root@ark-centos-smb4 ~]# sudo -V
Sudo version 1.8.6p7
Configure options: --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu --program-prefix= --disable-dependency-tracking --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/var/lib --mandir=/usr/share/man --infodir=/usr/share/info --prefix=/usr --sbindir=/usr/sbin --libdir=/usr/lib64 --docdir=/usr/share/doc/sudo-1.8.6p7 --with-logging=syslog --with-logfac=authpriv --with-pam --with-pam-login --with-editor=/bin/vi --with-env-editor --with-ignore-dot --with-tty-tickets --with-ldap --with-ldap-conf-file=/etc/sudo-ldap.conf --with-selinux --with-passprompt=[sudo] password for %p: --with-linux-audit --with-sssd --with-gcrypt
Sudoers policy plugin version 1.8.6p7
Sudoers file grammar version 42
Sudoers path: /etc/sudoers
nsswitch path: /etc/nsswitch.conf
ldap.conf path: /etc/sudo-ldap.conf
ldap.secret path: /etc/ldap.secret
Authentication methods: 'pam'
Syslog facility if syslog is being used for logging: authpriv
Syslog priority to use when user authenticates successfully: notice
Syslog priority to use when user authenticates unsuccessfully: alert
Ignore '.' in $PATH
Send mail if the user is not in sudoers
Use a separate timestamp for each user/tty combo
Lecture user the first time they run sudo
Require users to authenticate by default
Root may run sudo
Allow some information gathering to give useful error messages
Visudo will honor the EDITOR environment variable
Set the LOGNAME and USER environment variables
Length at which to wrap log file lines (0 for no wrap): 80
Authentication timestamp timeout: 5.0 minutes
Password prompt timeout: 5.0 minutes
Number of tries to enter a password: 3
Umask to use or 0777 to use user's: 022
Path to mail program: /usr/sbin/sendmail
Flags for mail program: -t
Address to send mail to: root
Subject line for mail messages: *** SECURITY information for %h ***
Incorrect password message: Sorry, try again.
Path to authentication timestamp dir: /var/db/sudo
Default password prompt: [sudo] password for %p:
Default user to run commands as: root
Path to the editor for use by visudo: /bin/vi
When to require a password for 'list' pseudocommand: any
When to require a password for 'verify' pseudocommand: all
File descriptors >= 3 will be closed before executing a command
Reset the environment to a default set of variables
Environment variables to check for sanity: TZ TERM LINGUAS LC_* LANGUAGE LANG COLORTERM
Environment variables to remove: RUBYOPT RUBYLIB PYTHONUSERBASE PYTHONINSPECT PYTHONPATH PYTHONHOME TMPPREFIX ZDOTDIR READNULLCMD NULLCMD FPATH PERL5DB PERL5OPT PERL5LIB PERLLIB PERLIO_DEBUG JAVA_TOOL_OPTIONS SHELLOPTS GLOBIGNORE PS4 BASH_ENV ENV TERMCAP TERMPATH TERMINFO_DIRS TERMINFO _RLD* LD_* PATH_LOCALE NLSPATH HOSTALIASES RES_OPTIONS LOCALDOMAIN CDPATH IFS
Environment variables to preserve: XAUTHORIZATION XAUTHORITY PS2 PS1 PATH LS_COLORS KRB5CCNAME HOSTNAME DISPLAY COLORS
Locale to use while parsing sudoers: C
Compress I/O logs using zlib
Directory in which to store input/output logs: /var/log/sudo-io
File in which to store the input/output log: %{seq}
Add an entry to the utmp/utmpx file when allocating a pty
Don't pre-resolve all group names
PAM service name to use
PAM service name to use for login shells
Local IP address and netmask pairs: 192.168.32.26/255.255.252.0 2001:21:21:32:250:56ff:feb4:720d/ffff:ffff:ffff:ffff:: fe80::250:56ff:feb4:720d/ffff:ffff:ffff:ffff::
Sudoers I/O plugin version 1.8.6p7

/etc/sudoers non-comment content:

Defaults !visiblepw
Defaults always_set_home
Defaults env_reset
Defaults env_keep = "COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS"
Defaults env_keep += "MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE"
Defaults env_keep += "LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES"
Defaults env_keep += "LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE"
Defaults env_keep += "LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY"
Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin
root ALL=(ALL:ALL) ALL
test ALL=(ALL:ALL) ALL
usera ALL=(ALL:ALL) ALL
%wheel ALL=(ALL) ALL
## Read drop-in files from /etc/sudoers.d (the # here does not mean a comment)
#includedir /etc/sudoers.d

/etc/sudoers.d/arkgrp-users content:

%arkgrp ALL=(ALL) ALL

I joined CentOS to our Windows domain with realm join QA.ARKIVIO.COM:

[root@ark-centos-smb4 ~]# realm list
qa.arkivio.com
  type: kerberos
  realm-name: QA.ARKIVIO.COM
  domain-name: qa.arkivio.com
  configured: kerberos-member
  server-software: active-directory
  client-software: winbind
  required-package: oddjob-mkhomedir
  required-package: oddjob
  required-package: samba-winbind-clients
  required-package: samba-winbind
  required-package: samba-common-tools
  login-formats: QA\%U
  login-policy: allow-any-login
QA.ARKIVIO.COM
  type: kerberos
  realm-name: QA.ARKIVIO.COM
  domain-name: qa.arkivio.com
  configured: kerberos-member
  server-software: active-directory
  client-software: sssd
  required-package: oddjob
  required-package: oddjob-mkhomedir
  required-package: sssd
  required-package: adcli
  required-package: samba-common-tools
  login-formats: %[email protected]
  login-policy: allow-realm-logins

/etc/sssd/sssd.conf content:

[sssd]
config_file_version = 2
#services = nss, pam, pac, ssh, ifp
services = nss, pam, pac, ssh, ifp, sudo
#domains = QA
domains = QA.ARKIVIO.COM
#debug_level = 0 - Set this to troubleshoot; 0-10 are valid values
#debug_level = 0
debug_level = 9
#ldap_sasl_authid = host/[email protected]

[nss]
#filter_users = root,ldap,named,avahi,haldaemon,dbus,radiusd,news,nscd
filter_groups = root
filter_users = root
reconnection_retries = 3

[pam]
reconnection_retries = 3

[domain/QA.ARKIVIO.COM]
ad_domain = QA.ARKIVIO.COM
krb5_realm = QA.ARKIVIO.COM
realmd_tags = manages-system joined-with-samba
cache_credentials = True
id_provider = ad
krb5_store_password_if_offline = True
default_shell = /bin/bash
ldap_id_mapping = True
ldap_schema = ad
#ldap_access_order = expire
#ldap_account_expire_policy = ad
use_fully_qualified_names = True
fallback_homedir = /home/%u@%d
access_provider = ad
auth_provider = ad

sudo item in /etc/nsswitch.conf:

[root@ark-centos-smb4 /]# grep sudo /etc/nsswitch.conf
sudoers: ldap

Please give some advice.
The problem here is that when you joined your CentOS system to the Active Directory domain, the realm command also modified /etc/nsswitch.conf to take over the configuration of sudo:

grep sudo /etc/nsswitch.conf
sudoers: ldap

If you want to retain local configuration of sudo, you need to revert this to its original setting:

sudoers: files

Interestingly, on my (Debian and Raspbian) systems that have been joined to AD I have a merged configuration:

sudoers: files sss

Distribution aside, I'm curious to understand why yours isn't also a merged configuration, and why yours is configured directly via LDAP whereas mine goes through sssd. (I'd be pleased if someone were able to explain that. But perhaps it's just a distribution difference.)
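If you prefer to script the revert rather than edit by hand, something along these lines should do it (a sketch only: it keeps a .bak backup, and you may want files sss instead of plain files to match a merged setup):

sudo sed -i.bak 's/^sudoers:.*/sudoers: files/' /etc/nsswitch.conf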
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/386632", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/246908/" ] }
386,641
I need to do multiple sums; my input file is:

DATE|NATION|CITY|FILES|REVENUE|FREQUENCY|INVESTMENT
20170807|USA|VIRGINIA|TIMES|1919150|1779|282075
20170807|USA|NYC|ROADS|92877|41|1599
20170808|USA|PENS|ROADS|133001|7|1
20170808|USA|NYC|TIMES|361625|1592|0

Sum $5 in every uniq of $1 (date)
sum $5 in every uniq where $4=="TIMES"
sum $5 in every uniq where $4=="ROADS"
sum $5 in every uniq where $4=="ROADS" and $3=="NYC"
arrange based on column $1

my expected output

DATE|REV|TIMES|ROADS|ROADS&NYC
20170807|2012027|1919150|92877|92877
20170808|494626|361625|133001|0

I only know how to sum based on 1 column

awk -F"|" '{FS=OFS="|"}{col[$1]+=$5} END {for (i in col) print i, col[i]}'
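A sketch of one way to do all four sums in a single awk pass, assuming GNU awk (asorti is a gawk extension, used here only to order the dates):

awk -F'|' -v OFS='|' '
NR > 1 {
    rev[$1] += $5                                  # total revenue per date
    if ($4 == "TIMES")                times[$1] += $5
    if ($4 == "ROADS")                roads[$1] += $5
    if ($4 == "ROADS" && $3 == "NYC") nyc[$1]   += $5
}
END {
    print "DATE", "REV", "TIMES", "ROADS", "ROADS&NYC"
    n = asorti(rev, dates)                         # sort the dates
    for (i = 1; i <= n; i++) {
        d = dates[i]
        print d, rev[d], times[d]+0, roads[d]+0, nyc[d]+0
    }
}' infile

On the sample input this should produce the expected output above; the +0 forces an explicit 0 where a date has no matching rows.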
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/386641", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/246910/" ] }
386,747
I am looking for a tool on a Unix/Linux platform which can achieve the following: I have the source files and I compiled the application myself (the source code is in C, although I don't really think it matters here). I want to run this application while every function call is printed/logged to stdout or a file.

For example:

#include <stdio.h>

int square(int x) { return x*x; }

int main(void) {
    square(2);
}

And when I run this program it will print out:

main
square

I understand that gdb can do this to some extent, or valgrind, but they all do not do exactly what I want. I am just wondering if such a tool exists? Thanks.
Using gcov:

$ gcc -O0 --coverage square.c
$ ./a.out
$ gcov -i square.c
$ awk -F '[,:]' '$1 == "function" && $3 > 0 {print $3, $4}' square.c.gcov
1 square
1 main

(where the number is the number of times the function was called (we skip the ones that are never called with $3 > 0 in the awk part)). That's typically used for code coverage (how much of the code is being tested).

You could also use the gprof code profiling tool (typically used to figure out how much time is spent in various areas of the code):

$ gcc -O0 -pg square.c
$ ./a.out
$ gprof -b -P

Call graph

granularity: each sample hit covers 2 byte(s) no time propagated

index % time    self  children    called     name
                0.00    0.00       1/1         main [7]
[1]      0.0    0.00    0.00       1       square [1]
-----------------------------------------------

Index by function name

   [1] square
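If you want the calls reported at run time rather than counted afterwards, GCC's -finstrument-functions is another route. A sketch, not the method above: it prints raw addresses, which addr2line can map back to names (and you may need to link without PIE for the addresses to match).

/* trace.c -- compile alongside the program:
 *     gcc -finstrument-functions square.c trace.c
 * GCC calls these hooks on every instrumented function entry/exit. */
#include <stdio.h>

__attribute__((no_instrument_function))   /* don't instrument the hooks themselves */
void __cyg_profile_func_enter(void *fn, void *caller)
{
    (void)caller;
    fprintf(stderr, "enter %p\n", fn);    /* resolve with: addr2line -f -e ./a.out <addr> */
}

__attribute__((no_instrument_function))
void __cyg_profile_func_exit(void *fn, void *caller)
{
    (void)caller;
    fprintf(stderr, "exit  %p\n", fn);
}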
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/386747", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33044/" ] }
386,751
When I use: msgattrib --untranslated pl.po to see untranslated strings from po file I've got strings in color, but not when I use: msgattrib --untranslated pl.po | less
msgattrib displays colors only if it is executed from a real terminal. You can use the unbuffer command, which is part of expect, to make msgattrib think that it's executed from a real terminal, and then use the -r option to handle ANSI escapes in less:

unbuffer msgattrib --untranslated pl.po | less -r

You can do that with any command that produces colors (ANSI escape codes) based on the existence of a tty.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/386751", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1806/" ] }
386,841
I am trying to generate an xorg.conf file for my current configuration. I am using X -config xorg.conf, but it doesn't produce a file. Here is the output:

# X -config xorg.conf.new

X.Org X Server 1.17.2
Release Date: 2015-06-16
X Protocol Version 11, Revision 0
Build Operating System: 2.6.32-573.18.1.el6.x86_64
Current Operating System: Linux mapcrunch.localdomain 3.10.0-514.26.2.el7.x86_64 #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64
Kernel command line: BOOT_IMAGE=/vmlinuz-3.10.0-514.26.2.el7.x86_64 root=/dev/mapper/cl-root ro crashkernel=auto rd.lvm.lv=cl/root rd.lvm.lv=cl/swap rhgb quiet LANG=en_US.UTF-8
Build Date: 06 November 2016 12:43:39AM
Build ID: xorg-x11-server 1.17.2-22.el7
Current version of pixman: 0.34.0
	Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.
Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
(==) Log file: "/var/log/Xorg.0.log", Time: Thu Aug 17 19:21:55 2017
(==) Using config directory: "/etc/X11/xorg.conf.d"
(==) Using system config directory "/usr/share/X11/xorg.conf.d"
pci id for fd 12: 102b:0534, driver (null)
EGL_MESA_drm_image required.

It then hangs and never returns to the prompt.

[edit] Last comment by original poster: I did end up realizing that I was using the wrong command and then ran into a bound kernel driver error. I then had to blacklist the driver for some reason in order to get the xorg.conf file.
Switch to console mode, then stop your display manager. To generate the xorg.conf.new file you should use the Xorg -configure command; the file will be located under /root/xorg.conf.new. Move /root/xorg.conf.new to /etc/X11/xorg.conf. Finally, start your display manager again.

Debian Wiki: What if I do not have an xorg config file?
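Put together, the sequence looks roughly like this (a sketch: the display-manager unit name differs per distro, e.g. gdm3, lightdm, sddm, so treat it as a placeholder):

# systemctl stop display-manager
# Xorg -configure
# mv /root/xorg.conf.new /etc/X11/xorg.conf
# systemctl start display-manager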
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/386841", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/247051/" ] }
386,882
I have again and again had this problem: I have a glob that matches exactly the correct files, but causes Command line too long . Every time I have converted it to some combination of find and grep that works for the particular situation, but which is not 100% equivalent. For example:

./foo*bar/quux[A-Z]{.bak,}/pic[0-9][0-9][0-9][0-9]?.jpg

Is there a tool for converting globs into find expressions that I am not aware of? Or is there an option for find to match the glob without matching the same glob in a subdir (e.g. foo/*.jpg is not allowed to match bar/foo/*.jpg)?
If the problem is that you get an argument-list-is-too-long error, use a loop, or a shell built-in. While command glob-that-matches-too-much can error out, for f in glob-that-matches-too-much does not, so you can just do:

for f in foo*bar/quux[A-Z]{.bak,}/pic[0-9][0-9][0-9][0-9]?.jpg
do
    something "$f"
done

The loop might be excruciatingly slow, but it should work. Or:

printf "%s\0" foo*bar/quux[A-Z]{.bak,}/pic[0-9][0-9][0-9][0-9]?.jpg | xargs -r0 something

(printf being builtin in most shells, the above works around the limitation of the execve() system call)

$ cat /usr/share/**/* > /dev/null
zsh: argument list too long: cat
$ printf "%s\n" /usr/share/**/* | wc -l
165606

Also works with bash. I'm not sure exactly where this is documented though.

Both Vim's glob2regpat() and Python's fnmatch.translate() can convert globs to regexes, but both also use .* for * , matching across / .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/386882", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2972/" ] }
386,925
I installed Debian 9 stretch (GNOME desktop) 64-bit on my PC. My USB wireless adapter (TP-LINK TL-WN722N) was detected automatically after installing atheros firmware: apt-get install firmware-atheros But I can't connect to any wireless framework, whether they are protected with password or unprotected. I plugged my USB. It was detected, sent auth, got authenticated, but immediately aborted authentication. Disabling IPV6 did not solve my problem..Here is my dmesg report: [ 59.880805] usb 1-1.4: new high-speed USB device number 4 using ehci-pci[ 60.005727] usb 1-1.4: New USB device found, idVendor=0cf3, idProduct=9271[ 60.005729] usb 1-1.4: New USB device strings: Mfr=16, Product=32, SerialNumber=48[ 60.005731] usb 1-1.4: Product: USB2.0 WLAN[ 60.005732] usb 1-1.4: Manufacturer: ATHEROS[ 60.005734] usb 1-1.4: SerialNumber: 12345[ 60.324981] usb 1-1.4: ath9k_htc: Firmware ath9k_htc/htc_9271-1.4.0.fw requested[ 60.325069] usbcore: registered new interface driver ath9k_htc[ 60.348095] usb 1-1.4: firmware: direct-loading firmware ath9k_htc/htc_9271-1.4.0.fw[ 60.629962] usb 1-1.4: ath9k_htc: Transferred FW: ath9k_htc/htc_9271-1.4.0.fw, size: 51008[ 60.880826] ath9k_htc 1-1.4:1.0: ath9k_htc: HTC initialized with 33 credits[ 61.111895] ath9k_htc 1-1.4:1.0: ath9k_htc: FW Version: 1.4[ 61.111897] ath9k_htc 1-1.4:1.0: FW RMW support: On[ 61.111899] ath: EEPROM regdomain: 0x809c[ 61.111900] ath: EEPROM indicates we should expect a country code[ 61.111901] ath: doing EEPROM country->regdmn map search[ 61.111911] ath: country maps to regdmn code: 0x52[ 61.111912] ath: Country alpha2 being used: CN[ 61.111912] ath: Regpair used: 0x52[ 61.122477] ieee80211 phy0: Atheros AR9271 Rev:1[ 61.185069] ath9k_htc 1-1.4:1.0 wlx18a6f7160a49: renamed from wlan0[ 61.224640] IPv6: ADDRCONF(NETDEV_UP): wlx18a6f7160a49: link is not ready[ 61.361032] IPv6: ADDRCONF(NETDEV_UP): wlx18a6f7160a49: link is not ready[ 61.535923] IPv6: ADDRCONF(NETDEV_UP): wlx18a6f7160a49: link is not ready[ 61.743450] IPv6: ADDRCONF(NETDEV_UP): wlx18a6f7160a49: link is not ready[ 69.190250] IPv6: ADDRCONF(NETDEV_UP): wlx18a6f7160a49: link is not ready[ 70.360621] wlx18a6f7160a49: authenticate with 74:23:44:dc:0f:d7[ 70.551637] wlx18a6f7160a49: send auth to 74:23:44:dc:0f:d7 (try 1/3)[ 70.556012] wlx18a6f7160a49: authenticated[ 75.555233] wlx18a6f7160a49: aborting authentication with 74:23:44:dc:0f:d7 by local choice (Reason: 3=DEAUTH_LEAVING)[ 76.872114] wlx18a6f7160a49: authenticate with 74:23:44:dc:0f:d7[ 77.061146] wlx18a6f7160a49: send auth to 74:23:44:dc:0f:d7 (try 1/3)[ 77.065158] wlx18a6f7160a49: authenticated[ 82.061225] wlx18a6f7160a49: aborting authentication with 74:23:44:dc:0f:d7 by local choice (Reason: 3=DEAUTH_LEAVING)[ 83.775718] wlx18a6f7160a49: authenticate with 74:23:44:dc:0f:d7[ 83.965040] wlx18a6f7160a49: send auth to 74:23:44:dc:0f:d7 (try 1/3)[ 83.969807] wlx18a6f7160a49: authenticated[ 88.969792] wlx18a6f7160a49: aborting authentication with 74:23:44:dc:0f:d7 by local choice (Reason: 3=DEAUTH_LEAVING)[ 91.207178] wlx18a6f7160a49: authenticate with 74:23:44:dc:0f:d7[ 91.395860] wlx18a6f7160a49: send auth to 74:23:44:dc:0f:d7 (try 1/3)[ 91.400263] wlx18a6f7160a49: authenticated[ 93.996839] wlx18a6f7160a49: aborting authentication with 74:23:44:dc:0f:d7 by local choice (Reason: 3=DEAUTH_LEAVING)[ 94.061841] IPv6: ADDRCONF(NETDEV_UP): wlx18a6f7160a49: link is not ready[ 94.233433] IPv6: ADDRCONF(NETDEV_UP): wlx18a6f7160a49: link is not ready I have no idea why this happened, nor why it was aborted multiple times in one try. 
Edit: iwconfig report:

enp3s0    no wireless extensions.

wlx18a6f7160a49  IEEE 802.11  ESSID:off/any
          Mode:Managed  Access Point: Not-Associated   Tx-Power=20 dBm
          Retry short limit:7   RTS thr:off   Fragment thr:off
          Encryption key:off
          Power Management:off

lo        no wireless extensions.
Somehow, my firmware had trouble with the long interface name. So I ran this command to disable systemd's predictable interface naming:

ln -s /dev/null /etc/systemd/network/99-default.link

and it worked.
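One caveat worth noting (an assumption based on how systemd reads .link files, not something confirmed in the answer): on some setups the default naming policy is also baked into the initramfs, so you may need to regenerate it before rebooting for the old wlan0-style name to come back:

ln -s /dev/null /etc/systemd/network/99-default.link
update-initramfs -u    # Debian/Ubuntu; rebuilds the initramfs so it sees the mask
reboot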
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/386925", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/175183/" ] }
386,958
I have a directory of files with filenames of the form <num1>v<num2>.txt . I'd like to find all files for which <num1> is a duplicate. When duplicates are found, we should delete the ones with smaller <num2> . Is this possible? I could easily write a python script to handle this, but thought it might be a nice application of built-in zsh features.

Example

In the following list of files, the first three have duplicate <num1> parts. As well, the fourth and fifth are duplicates.

012345v1.txt
012345v2.txt
012345v3.txt
3333v4.txt
3333v7.txt
11111v11.txt

I would like to end up with a directory containing

012345v3.txt
3333v7.txt
11111v11.txt
You could do something like:

files=(<->v<->.txt(n))
typeset -A h
for f ($files) h[${f%%v*}]=$f
keep=($h)
echo rm ${files:|keep}

(remove echo if happy)

<-> : any sequence of digits ( <x-y> glob operator with no bound specified)
(n) : numeric sort
${f%%v*} : standard/ksh greedy pattern stripping from the end.
${files:|keep} : array subtraction.
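On the sample names from the question, that should print something like the following (the order follows the numeric sort of $files; drop echo to actually delete):

rm 3333v4.txt 012345v1.txt 012345v2.txt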
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/386958", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/247135/" ] }
386,979
I'm trying to understand what cp --preserve=links does when used by itself. From my tests it seems that it copies a normal file normally and dereferences symlinks, but it seems like it just has the same effect as cp -L when used on a single file. Is that true or is there something I'm missing?
The --preserve=links option does not refer to symbolic links, but to hard links. It asks cp to preserve any existing hard link between two or more files that are being copied.

$ date > file1
$ ln file1 file2
$ ls -1i file1 file2
6034008 file1
6034008 file2

You can see that the two original files are hard-linked and their inode number is 6034008.

$ mkdir dir1
$ cp file1 file2 dir1
$ ls -1i dir1
total 8
6035093 file1
6038175 file2

You can see now that without --preserve=links their copies have two different inode numbers: there is no longer a hard link between the two.

$ mkdir dir2
$ cp --preserve=links file1 file2 dir2
$ ls -1i dir2
total 8
6089617 file1
6089617 file2

You can see now that with --preserve=links , the two copies are still hard-linked, but their inode number is 6089617, which is not the same as the inode number of the original files (contrary to what cp --link would have done).
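If you want to verify the result without comparing inode numbers by eye, GNU stat can print the inode and hard-link count together (a quick sketch):

$ stat -c '%i %h %n' dir2/file1 dir2/file2
6089617 2 dir2/file1
6089617 2 dir2/file2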
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/386979", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/247143/" ] }
387,010
I have written a script that notifies me when a value is not within a given range. All values "out of range" are logged in a set of per-day files. Every line is timestamped in a proprietary reverse way: yyyymmddHHMMSS

Now, I would like to refine the script, and receive notifications only when at least 60 minutes have passed since the last notification for the given out-of-range value. I already solved the issue of printing the logs in reverse order with:

for i in $(ls -t /var/log/logfolder/*); do zcat $i|tac|grep \!\!\!|grep --color KEYFORVALUE; done

that results in:

...
20170817041001 - WARNING: KEYFORVALUE=252.36 is not between 225 and 245 (!!!)
20170817040001 - WARNING: KEYFORVALUE=254.35 is not between 225 and 245 (!!!)
20170817035001 - WARNING: KEYFORVALUE=254.55 is not between 225 and 245 (!!!)
20170817034001 - WARNING: KEYFORVALUE=254.58 is not between 225 and 245 (!!!)
20170817033001 - WARNING: KEYFORVALUE=255.32 is not between 225 and 245 (!!!)
20170817032001 - WARNING: KEYFORVALUE=254.99 is not between 225 and 245 (!!!)
20170817031001 - WARNING: KEYFORVALUE=255.95 is not between 225 and 245 (!!!)
20170817030001 - WARNING: KEYFORVALUE=255.43 is not between 225 and 245 (!!!)
20170817025001 - WARNING: KEYFORVALUE=255.26 is not between 225 and 245 (!!!)
20170817024001 - WARNING: KEYFORVALUE=255.42 is not between 225 and 245 (!!!)
20170817012001 - WARNING: KEYFORVALUE=252.04 is not between 225 and 245 (!!!)
...

Anyway, I'm stuck at calculating the number of seconds between two of those timestamps, for instance:

20170817040001
20160312000101

What should I do in order to calculate the time elapsed between two timestamps?
This will give you the date in seconds (since the UNIX epoch)

date --date '2017-08-17 04:00:01' +%s    # "1502938801"

And this will give you the date as a readable string from a number of seconds

date --date '@1502938801'                # "17 Aug 2017 04:00:01"

So all that's needed is to convert your date/timestamp into a format that GNU date can understand, use maths to determine the difference, and output the result

datetime1=20170817040001
datetime2=20160312000101

# bash string manipulation
datestamp1="${datetime1:0:4}-${datetime1:4:2}-${datetime1:6:2} ${datetime1:8:2}:${datetime1:10:2}:${datetime1:12:2}"
datestamp2="${datetime2:0:4}-${datetime2:4:2}-${datetime2:6:2} ${datetime2:8:2}:${datetime2:10:2}:${datetime2:12:2}"

# otherwise use sed
# datestamp1=$(echo "$datetime1" | sed -nr 's/(....)(..)(..)(..)(..)(..)/\1-\2-\3 \4:\5:\6/p')
# datestamp2=$(echo "$datetime2" | sed -nr 's/(....)(..)(..)(..)(..)(..)/\1-\2-\3 \4:\5:\6/p')

seconds1=$(date --date "$datestamp1" +%s)
seconds2=$(date --date "$datestamp2" +%s)

delta=$((seconds1 - seconds2))
echo "$delta seconds"    # "45197940 seconds"

We've not provided timezone information here so it assumes local timezone. Your values for the seconds from the datetime will probably be different to mine. (If your values are UTC then you can use date --utc .)
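From there it's plain shell arithmetic to express the difference in friendlier units, e.g. (continuing with the delta computed above):

echo "$((delta / 86400)) days"   # "523 days"
printf '%d days %02d:%02d:%02d\n' \
    "$((delta / 86400))" "$((delta % 86400 / 3600))" \
    "$((delta % 3600 / 60))" "$((delta % 60))"
# "523 days 02:59:00"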
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/387010", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/105218/" ] }
387,045
How to exit fullscreen mode in vinagre VNC client? I have not found any key combination to do this.
You also have to enable keyboard shortcuts in the menu (check View -> Keyboard shortcuts). Then F11 is your key of choice (for me at least in version 3.22.0; you can check the shortcut key once it's enabled in View -> Fullscreen).

This is where I found the suggestion: https://bugs.launchpad.net/ubuntu/+source/vinagre/+bug/1547770
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/387045", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28089/" ] }
387,062
I am currently using Linux Mint 18.2, Cinnamon 3.4.6, and Nemo 3.4.6 as well. I seem to be having an issue with Nemo. My places (those bookmarks like Music, Pictures, Documents, etc.) are now in my bookmarks section and without icons. All they show is a folder icon and the symbolic link icon next to them (since I have my Music, Pictures, Documents and such all located on an external hdd). This is making it difficult to determine which is which, since I usually identify a folder by its icon first. It doesn't help that the places bookmarks are not in their original positions. This has happened twice already. I can't remember exactly how I was able to fix the issue before, but I remember there was a bookmarks config file with bookmarks and places separated. I don't remember where it is located, but it fixed the problem. So, that is what I am asking: where is the bookmarks config file for nemo? I am sure it will fix this same issue a second time. If anybody has an alternative way to fix this issue, I am all ears. Maybe possibly resetting all the settings of nemo? I don't know.
Bookmarks are loaded from /home/<username>/.config/gtk-3.0/bookmarks. (I used strace to see what files are accessed. Also, adding a bookmark adds to this file, and adding lines to this file adds bookmarks.) This file looks like this:

file:///home/<username>/Documents Documents
file:///home/<username>/Music Music
file:///home/<username>/Pictures Pictures
file:///home/<username>/Videos Videos
file:///home/<username>/Downloads Downloads

Images used are located in /usr/share/icons/Mint-X/places/16. (The path will vary depending on your theme.)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/387062", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/198063/" ] }
387,073
I am working with GRASS and I have a problem with a one-dimensional vector of numbers. I want to print that vector, but my output looks like this:

(1,2,3,4,5,6,7,8,9,)

when it should look like this:

(1,2,3,4,5,6,7,8,9)

i.e., I don't need the last separator. Is there a way to do that? I want POSIX-compliant solutions; no bashisms. My code looks like this:

for i in $CATS
do
    step=$step"$i,"
    echo $step
    g.region --overwrite vector="region_uspo_$i," save=regions_uspo
    regions_uspos="region_$i,"
    echo $regions_uspos
done
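A sketch of the usual POSIX-sh idiom, keeping the variable names from the question: ${var:+word} expands to word only when var is set and non-empty, so the separator is added only between items, never at the end.

step=
for i in $CATS
do
    step="${step:+$step,}$i"   # prepend a comma only from the second item on
done
echo "$step"

Alternatively, keep appending "$i," as before and strip the final comma afterwards with another POSIX parameter expansion:

step=${step%,}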
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/387073", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/247187/" ] }
387,076
I'd like to be able to use xargs to execute multiple parameters in different parts of a command. For example, the following:

echo {1..8} | xargs -n2 | xargs -I v1 -I v2 echo the number v1 comes before v2

I would hope that it would return

the number 1 comes before 2
the number 3 comes before 4
... etc

Is this achievable? I suspect that my multiple use of -I is incorrect.
I believe that you can't use -I that way. But you can get the effect/behavior you want by saying:

echo {1..8} | xargs -n2 sh -c 'echo "the number $1 comes before $2"' sh

This, essentially, creates an ad hoc one-line shell script, which xargs executes via sh -c. The two values that xargs parses out of the input are passed to this "script". The shell then assigns those values to $1 and $2, which you can then reference in the "script".
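The trailing sh fills the $0 slot of the inline script, so the values handed over by xargs land in $1 and $2 as intended. Running it on the example input gives:

$ echo {1..8} | xargs -n2 sh -c 'echo "the number $1 comes before $2"' sh
the number 1 comes before 2
the number 3 comes before 4
the number 5 comes before 6
the number 7 comes before 8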
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/387076", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20889/" ] }
387,096
I cannot kill irq/${nnn}-nvidia with kill -9 or pkill -f -9. Does anyone know how to kill or stop those processes? (I am using Ubuntu 16.04, if that is relevant.)
As @hobbs explained, it is a kernel thread. A broader perspective is the following:

IRQ handling is problematic in any OS because interrupts can arrive at any time. Interrupts can arrive even while the kernel is in the middle of working on a complex task and resources are inconsistent (pointers are pointing to invalid addresses and so on).

This problem can be solved with locks, i.e. don't allow the interrupt handlers to run until the kernel is in an interruptible, consistent state. The disadvantage of using locks is that too many locks make the system slow and inefficient. Thus, the optimal solution for the problem is this:

1. The kernel interrupt handlers are as short as possible. Their only job is to move all relevant interrupt data into a temporary buffer.
2. Some "background" thread works continuously on this buffer and does the real work on behalf of the interrupt handlers.

These "background" threads are the interrupt handler kernel threads. You see them in top as normal processes. However, they are displayed as if they use zero memory. And yes, this is true, because no real user space memory belongs to them. They are essentially kernel threads running in the background.

You can't kill kernel threads: they are managed entirely by the kernel. If you could kill it, the irq/142 handler in your nvidia driver wouldn't exist any more: if your video card sends an interrupt, nothing would handle it. The result would likely be a freeze, but your video surely wouldn't work any more.

The problem in your system is that this interrupt handler gets a lot of CPU resource. There are many potential reasons:

- For some reason, the hardware (your video card) sends so many interrupts that your CPU can't handle all of them.
- The hardware is buggy.
- The driver is buggy.

Knowing the quality of the Nvidia drivers, unfortunately a buggy driver is the most likely. The solution is to somehow reset this driver. Some ideas, ordered by ascending brutality:

1. Is it running some 3D accelerated process in the background? Google Earth, for example? If yes, stop or kill it.
2. From X, switch back to the character console (alt/ctrl/f1) and then back (alt/ctrl/f7). Then most of the video will re-initialize.
3. Restart X (exit ordinarily, or type alt/ctrl/backspace to kill the X server).
4. Kill X (killall -9 Xorg). It is better if you do this from the character console.
5. If you kill X and you still see this kernel thread, you may try to remove the Nvidia kernel module (you can see it in the list given by lsmod, then you can remove it with rmmod). Restarting X will insmod it automatically, resetting the hardware.
6. If none of these work, you need to reboot. If an ordinary reboot doesn't work, you can do it with additional brutality: use alt/printscreen/s followed by alt/printscreen/b.

Extension: as a temporary workaround, you could try to give a very low priority to that thread (renice +20 -p 1135). Then it will still run, but it will have less impact on your system performance.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/387096", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/247202/" ] }
387,167
I want to extract all the lines in a file containing these patterns: "#1:" and "tree length for".

Input:

#1: nexus0002_Pseudomonas_10M

 branch        t      N      S  dN/dS     dN     dS  N*dN  S*dS
   6..5    0.000  390.0  195.0 0.0668 0.0000 0.0000   0.0   0.0
   6..7    0.013  390.0  195.0 0.0668 0.0008 0.0114   0.3   2.2
   7..1    0.000  390.0  195.0 0.0668 0.0000 0.0000   0.0   0.0
   7..4    0.000  390.0  195.0 0.0668 0.0000 0.0000   0.0   0.0
   6..8    0.000  390.0  195.0 0.0668 0.0000 0.0000   0.0   0.0
   8..2    0.013  390.0  195.0 0.0668 0.0008 0.0114   0.3   2.2
   8..3    0.013  390.0  195.0 0.0668 0.0008 0.0114   0.3   2.2

tree length for dN: 0.0023
tree length for dS: 0.0341

#1: nexus0003_Pseudomonas_10M

 branch        t      N      S  dN/dS     dN     dS  N*dN  S*dS
   6..5    0.000  390.0  195.0 0.0668 0.0000 0.0000   0.0   0.0
   6..7    0.013  390.0  195.0 0.0668 0.0008 0.0114   0.3   2.2
   7..1    0.000  390.0  195.0 0.0668 0.0000 0.0000   0.0   0.0
   7..4    0.000  390.0  195.0 0.0668 0.0000 0.0000   0.0   0.0
   6..8    0.000  390.0  195.0 0.0668 0.0000 0.0000   0.0   0.0
   8..2    0.013  390.0  195.0 0.0668 0.0008 0.0114   0.3   2.2
   8..3    0.013  390.0  195.0 0.0668 0.0008 0.0114   0.3   2.2

tree length for dN: 0.0111
tree length for dS: 0.0444

Output:

#1: nexus0002_Pseudomonas_10M
tree length for dN: 0.0023
tree length for dS: 0.0341
#1: nexus0003_Pseudomonas_10M
tree length for dN: 0.0111
tree length for dS: 0.0444

Is there any simple sed solution?
Use grep:

grep -E "^#1:|tree length for" infile.txt

or sed:

sed -n '/^#1:/p;/^tree length for/p' infile.txt
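For completeness, the same selection as an awk one-liner (just the two anchored patterns OR'ed together):

awk '/^#1:/ || /^tree length for/' infile.txt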
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/387167", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/238538/" ] }
387,186
If I run set -x and then run a script in a shell, the option set by set -x doesn't work inside the script. I was wondering if all the shell options are not inherited by scripts? I didn't find this mentioned in the bash manual. The only relevant thing that I found is "Unless otherwise noted, the values are inherited from the shell." So I guess shell options are inherited. Did I miss it in the manual? I have read a related question which asked how to let scripts inherit shell options. I was wondering whether and why instead of how. Thanks.

When a simple command other than a builtin or shell function is to be executed, it is invoked in a separate execution environment that consists of the following. Unless otherwise noted, the values are inherited from the shell.

• the shell's open files, plus any modifications and additions specified by redirections to the command
• the current working directory
• the file creation mode mask
• shell variables and functions marked for export, along with variables exported for the command, passed in the environment (see Section 3.7.4 [Environment], page 37)
• traps caught by the shell are reset to the values inherited from the shell's parent, and traps ignored by the shell are ignored

A command invoked in this separate environment cannot affect the shell's execution environment.
In the case of bash, that depends on whether $SHELLOPTS is in the environment or not.

bash-4.4$ export SHELLOPTS
bash-4.4$ set -x
bash-4.4$ bash -c 'echo x'
+ bash -c 'echo x'
+ echo x
x

See how the bash -c 'echo x' inherited the xtrace option. For the options set by shopt, it's the same but with the $BASHOPTS variable.

It comes in handy especially for the xtrace option for debugging, when you want to run a bash script (or any command running a bash script) and all the other bash scripts it may invoke, recursively, with xtrace (provided nothing does a set +x in those scripts). If your sh is bash, that will also affect sh scripts, and so also the system("command line") calls made in other languages:

env SHELLOPTS=xtrace some-command
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/387186", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
387,206
I write a lot of bash scripts for various needs. Recently I started to feel the urge to implement various indicators on top of them. During an automated unattended installation script it would be cool to know which operation is being carried out. Also, it would be nice to have a status bar displaying a percentage of the actual progress. Are there in Linux (preferably Debian) some libs and commands like my mockup ones below for manipulating the terminal output? (The following commands are fake mockups, just to make the reader understand.)

txtoverlay -k head -c azure "MyString on top of all the commands"

or

txtoverlay -k tail -c green -a right "[ Completion: 57 % ]"

or

txtoverlay -k canvas -c azure -b darkblue -l 2 -t 5 -w 68 -h 50

To generate something like the following graphical mockups? Or even some more complex overlays. Basically the concept could be the same as in HTML with some DIVs over the main webpage with position: fixed:

<div id="MyDiv1" style="position:fixed; color: #00ffff; top: 0px; left: 0px; padding: 10px"></div>
<div id="MyDiv2" style="position:fixed; color: #00ff00; bottom: 0px; right: 0px; padding: 10px; text-align: right"></div>

and from time to time during the script various commands like:

document.getElementById("MyDiv1").innerHTML = "Step 5: Installing NET-TOOLS package in progress..<br>-------------------------"
document.getElementById("MyDiv2").innerHTML = "[ Completion: 57 % ]"
Yes, you can do those things. Focusing just on the question of how one places colored text in specific positions, one direct though somewhat low-level route is to use the tput utility. tput has numerous commands that, with the help of the terminfo database, manipulate the terminal screen. For example:

tput cup 23 4

will move the cursor to row 23, column 4 of your terminal. A few other examples:

tput ed       # clear to end of screen
tput setaf 2  # set foreground color to bright green
tput cub1     # move cursor left one space
tput rev      # turn on reverse video mode
tput sc       # save the cursor position
tput rc       # restore the cursor position

You may also find use for the stty utility. For example, if you want to determine the dimensions of the current screen, you can do stty size.

I have previously built a rough 'GUI' for some utility of mine that split the screen into two sections. The top section was a header of fixed height. The bottom section contained (scrolling) command output. I did this using Bash scripting and tput + stty only. I figured out a lot of it just by trial-and-error, but there are some nice resources online such as http://linuxcommand.org/lc3_adv_tput.php

See man tput and man 5 terminfo. For the latter you'll want to scroll down to the Predefined Capabilities section in particular. There may be higher level abstractions for terminfo-based screen manipulation, but if you have relatively simple requirements tput is a good option. (I believe tput is part of the ncurses package mentioned in another answer here.)

Edit: I should add, since it sounds like you want some of these features on all your screens, that you can accomplish this by writing a shell script that utilizes tput as described above and pointing the environment variable PROMPT_COMMAND at that script so it gets invoked each time your prompt is refreshed. If you want more frequent refreshing then you'd have to get some process to run in the background while still being attached to your screen. That's more than I'll try to bite off in this answer.
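To make that concrete, here is a minimal sketch (all strings and the 57 % figure are placeholders taken from the question's mockups) that paints a reverse-video header on the top row and a status line on the bottom row, then puts the cursor back where it was:

#!/bin/sh
rows=$(stty size | cut -d' ' -f1)   # terminal height in rows
tput sc                             # save the cursor position
tput cup 0 0; tput rev              # jump to top-left, reverse video on
printf ' Step 5: Installing NET-TOOLS package in progress... '
tput sgr0                           # reset attributes
tput cup "$((rows - 1))" 0          # jump to the bottom row
printf '[ Completion: 57 %% ]'
tput rc                             # restore the saved cursor position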
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/387206", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143935/" ] }
387,237
I am trying to figure out how to write a standalone awk script file. I thought it would be similar to a standalone bash script file:

#! /usr/bin/awk -f

BEGIN {
    for (i = 0; i < ARGC; i++)
        printf "%s ", ARGV[i]
    printf "\n"
}

{print $0}

I was trying to figure out how the command line arguments are specified in shell, and passed into the script:

$ myscript.awk arg1 arg2 arg3
awk arg1 arg2 arg3
awk: /home/tim/myscript.awk:5: fatal: cannot open file `arg1' for reading (No such file or directory)

What does an awk script expect its command line arguments to be? Why does it expect arg1 to be the input file? Command line arguments are passed into an awk script, and stored in array ARGV. See my update. So I suppose the command line arguments are interpreted up to the script, not to awk.

If I remove -f in the shebang, i.e. #! /usr/bin/awk

$ myscript.awk arg1 arg2 arg3
awk: cmd. line:1: /home/tim/myscript.awk
awk: cmd. line:1:                       ^ syntax error

Why is -f necessary? Thanks.
What does an awk script expect its command line arguments to be? Why does it expect arg1 to be the input file?

awk's pattern based rules need input. When processing of this part of your program starts, awk starts to consume the arguments as filenames (or stdin if no filenames are given). Before this step you can do whatever you want with the given arguments in the BEGIN block. I think these small examples will get you started:

$ cat a.awk
#!/usr/bin/awk -f
BEGIN {
    i=1
    while( i in ARGV )
        print ARGV[i++]
}

a.awk only has a BEGIN block and no pattern based rules. awk does not need files and so does not use the given arguments as filenames:

$ ./a.awk poit --zort -troz narf
poit
--zort
-troz
narf

It is your decision what to do with these. If you want to have pattern based rules processing files given as arguments too, you need to delete all arguments you have used in your BEGIN block:

$ cat b.awk
#!/usr/bin/awk -f
BEGIN {
    if( ARGV[1] == "--tolower" ) { cmd = "tr A-Z a-z" ; delete ARGV[1] }
    else if( ARGV[1] == "--toupper" ) { cmd = "tr a-z A-Z" ; delete ARGV[1] }
    else cmd = "cat"
}
{
    print | cmd
}

Example run without an option:

$ ./b.awk a.awk
#!/usr/bin/awk -f
BEGIN {
    i=1
    while( i in ARGV )
        print ARGV[i++]
}

Example run with the --toupper option:

$ ./b.awk --toupper a.awk
#!/USR/BIN/AWK -F
BEGIN {
    I=1
    WHILE( I IN ARGV )
        PRINT ARGV[I++]
}
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/387237", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
387,246
I'm trying to get the file of the current date with the following command in HP-UX Unix: $ ls -lrt ABC.LOG* |grep "`date +"%b %d"`" But, it's giving me the error: ksh: : cannot executegrep: can't open %d Any suggestions?
The error stems from the quoting of the arguments of grep and the fact that backticks don't do nesting very well:

grep "`date +"%b %d"`"

This is better written as

grep "`date +'%b %d'`"

... or even better,

grep "$(date +'%b %d')"

In fact, with $(...) instead of backticks, you should be able to keep the inner double quotes:

grep "$(date +"%b %d")"

An alternative to grepping the output of ls would be to do

find . -type f -name "ABC.LOG*" -ctime -1

This would find all regular files ( -type f ) in the current directory whose names match the given pattern and whose ctime is less than 24 hours before the current time. A file's ctime is the time when the last modification of the file's data or metadata was made. This is not exactly equivalent to what you're trying to achieve, though. This also recurses into subdirectories.
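Equivalently, you can capture the date string in a variable first, which sidesteps the nested-quoting problem entirely (a small sketch):

today=$(date +'%b %d')
ls -lrt ABC.LOG* | grep "$today"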
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/387246", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/247325/" ] }
387,292
How can I flush the DNS cache in Debian 9.1 with KDE?
If using systemd-resolved as your DNS resolver (i.e. the hosts line of your /etc/nsswitch.conf file includes the word resolve and/or /etc/resolv.conf contains the line nameserver 127.0.0.53 ), then this command will flush its cache: $ sudo systemd-resolve --flush-caches A newer version of this command seems to be: $ sudo resolvectl flush-caches
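To see the effect, you can inspect the resolver's cache statistics before and after the flush (on newer systemd versions the resolvectl spelling applies here too):

$ systemd-resolve --statistics
$ resolvectl statistics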
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/387292", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/233262/" ] }
387,304
From bash manual about shebang Most versions of Unix make this a part of the operating system’s command execution mechanism. If the first line of a script begins with the two characters‘ #!’, the remainder of the line specifies an interpreter for the program. Thus, you can specify Bash, awk, Perl, or some other interpreter and write the rest of the script file in that language. The arguments to the interpreter consist of a single optional argument following the interpreter name on the first line of the script file, followed by the name of the script file , followed by the rest of the arguments . Bash will perform this action on operating systems that do not handle it themselves. Note that some older versions of Unix limit the interpreter name and argument to a maximum of 32 characters. Does "a optional argument" mean an argument to an option, or an argument which may be there or might not be? Why "a single optional argument"? does it not allow multiple "optional arguments"? If a script looks like #! /path/to/interpreter --opt1 opt1-arg --opt2 opt2-arg --opt3 nonopt-arg1 nonopt-arg2... when I run the script in bash as $ myscript arg1 arg2 arg3 what is the actual command being executed in bash? Is it $ /path/to/interpreter --opt1 opt1-arg --opt2 opt2-arg --opt3 nonopt-arg1 nonopt-arg2 myscript arg1 arg2 arg3 Thanks.
The arguments to the interpreter in this case are the arguments constructed after interpretation of the shebang line, combining the shebang line with the script name and its command-line arguments. Thus, an AWK script starting with #! /usr/bin/awk -f named myscript and called as ./myscript file1 file2 results in the actual arguments /usr/bin/awk -f ./myscript file1 file2 The single optional argument is -f in this case. Not all interpreters need one (see /bin/sh for example), and many systems only allow at most one (so your shebang line won’t work as you expect it to). The argument can contain spaces though; the whole content of the shebang line after the interpreter is passed as a single argument. To experiment with shebang lines, you can use #! /bin/echo (although that doesn’t help distinguish arguments when there are spaces involved). See How programs get run for a more detailed explanation of how shebang lines are processed (on Linux).
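You can watch that argument assembly directly with the /bin/echo trick mentioned above; a quick sketch (the file name is made up):

$ printf '#! /bin/echo\n' > show-args
$ chmod +x show-args
$ ./show-args one two
./show-args one two

Everything echo prints is exactly the argument list the kernel built: the script path, followed by the original arguments.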
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/387304", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
387,319
A directory inode isn't substantially different from a regular file's inode. What I comprehend from Ext4 Disk Layout is this:

Directory Entries: Therefore, it is more accurate to say that a directory is a series of data blocks and that each block contains a linear array of directory entries.

The directory entry stores the filename together with a pointer to its inode. Hence, if the documentation says each block contains directory entries, why does debugfs report something different, namely that the filenames are stored in the directory's inode? This is a debugging session on an ext4-formatted flash drive:

debugfs: cat /sub
� . ..� spam�spam2�spam3��spam4

I don't think inode.i_block can store those filenames; I've created files with really long filenames, more than 60 bytes in size. Running cat on the inode from debugfs displayed the filenames too, so the long filenames were in the inode again!

The Contents of inode.i_block: Depending on the type of file an inode describes, the 60 bytes of storage in inode.i_block can be used in different ways. In general, regular files and directories will use it for file block indexing information, and special files will use it for special purposes.

Also, there's no reference to the inode storing the filenames in the Hash Tree Directories section, which is the newer implementation. I feel I missed something in that document. The main question is: if a directory's inode contains filenames, what do its data blocks store then?
Directory entries are stored both in inode.i_block and the data blocks. See "Inline Data" and "Inline Directories" in the document you linked to.
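A quick way to check whether this can even apply on a given filesystem is to look for the inline_data feature flag (a sketch; the device name is a placeholder, and the feature only exists if it was enabled at mkfs time):

# dumpe2fs -h /dev/sdb1 | grep -i features

If inline_data appears in the Filesystem features line, small directories like /sub can live entirely inside inode.i_block.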
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/387319", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/233788/" ] }
387,377
I have two text files. The first "file1.txt" has content:

Apple
Orange
Banana

while the second file "file2.txt" has content:

monday
tuesday
wednesday

I want to combine them into one file and its output is:

Apple File1.txt
Orange File1.txt
Banana File1.txt
monday File2.txt
tuesday File2.txt
wednesday File2.txt
That's quite trivial with awk:

$ awk '{print $0,FILENAME}' File*.txt
Apple File1.txt
Orange File1.txt
Banana File1.txt
monday File2.txt
tuesday File2.txt
wednesday File2.txt

If you want a tab rather than a space between the input line and the filename, add -v OFS='\t' to the command-line to set the Output Field Separator (OFS):

awk -v OFS='\t' '{print $0,FILENAME}' File*.txt

or use:

awk '{print $0 "\t" FILENAME}' File*.txt

That's assuming file names don't contain = characters. If you can't guarantee that the filenames won't contain = characters, you can change that to:

awk '{print $0 "\t" substr(FILENAME, 3)}' ./File*.txt

Though with GNU awk at least, you'd then get warnings if the name of the file contained bytes not forming valid characters (which you could work around by fixing the locale to C (with LC_ALL=C awk... ), though that would also have the side effect of potentially changing the language of other error messages, if any).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/387377", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/247444/" ] }
387,408
How do I match strings that contain the $ character? For example, with the following, grep doesn't return the match:

param="ambari_parameter$"
echo "ambari_parameter$" | grep $param
echo "ambari_parameter$" | grep "$param"
In this case, the string in $param will be interpreted as a regular expression. This expression contains a $ which anchors the ambari_parameter in the pattern to the end of the match. Since ambari_parameter$ in the input data does not contain ambari_parameter at the very end (due to the $ at the end of the string), it won't match.

You could just escape the $ as \$ in the pattern ( \\$ in a double quoted string), or place the $ in a bracketed group as [$] , but since you seem to want to do a string match rather than a regular expression match, it may be more appropriate to use -F :

echo 'ambari_parameter$' | grep -F -e "$param"

Using grep -F makes grep treat the given pattern as a string rather than as a regular expression. Each character in the pattern will therefore be matched literally, even the $ at the end.

I also used -e here to force grep to recognize the following argument as a pattern. This is necessary if $param starts with a dash ( - ). It's generally good to use -e whenever the pattern you match with is given in a variable.

To further require that the whole input line matches the complete string, add -x :

echo 'ambari_parameter$' | grep -xF -e "$param"

If $param is ambari_parameter$ , then this will not match the string _ambari_parameter$ or ambari_parameter$100 (but will match if -x is left out).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/387408", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
387,433
I have a text file separated by spaces as follows. I need to re-arrange the columns so that the first column is at the end of each line. I have an idea of how this could be done using cut -d' ' -f1, but I'm wondering if there is an easy way with awk or sed.

Text file:

™️ trade mark
ℹ️ information
↔️..↙️ left-right arrow..down-left arrow
↩️..↪️ right arrow curving left..left arrow curving right
⌚..⌛ watch..hourglass done
⌨️ keyboard
⏏️ eject button
⏩..⏳ fast-forward button..hourglass not done
⏸️..⏺️ pause button..record button
Ⓜ️ circled M
▪️..▫️ black small square..white small square
▶️ play button
◀️ reverse button

I want the symbol list to follow the description instead.
Use sed:

sed 's/\([^ ]*\) *\(.*\)/\2 \1/' infile

The \([^ ]*\) will match everything up to the first space character. The parentheses \(...\) create a capture group, whose index is \1. The \(.*\) matches everything after the first group, and its index is \2. The * between the two groups, being outside the captures, matches the spaces between group 1 and group 2 and is dropped from the output; you could use \s* (with GNU sed) or [[:space:]]* (standardly) instead to match any spacing characters. At the end, we print the matched group 2, then group 1, with a space between.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/387433", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16792/" ] }
387,437
I want to remove all text after a certain character, but only on the odd lines. This works for every line in the file with sed:

sed 's/;.*//'

How do I edit this sed command so it applies only to the odd lines?
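With GNU sed the first~step address form does this directly; a portable alternative uses the n command to pass every second line through untouched. Both are sketches built on the command from the question:

sed '1~2s/;.*//' file    # GNU sed: apply the substitution on lines 1, 3, 5, ...
sed 's/;.*//;n' file     # POSIX sed: edit the current line, then n prints it and
                         # loads the next line, which reaches the end of the
                         # script without being edited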
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/387437", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/247485/" ] }
387,502
I'm trying to disable bluetooth at boot, without blacklisting the kernel module. I commented the following two lines in /etc/init/bluetooth.conf:

start on started dbus
stop on stopping dbus

Then I added:

stop on runlevel [0123456]

In the file /etc/init.d/bluetooth, right before the exit 0, I added the line:

rfkill block bluetooth

Neither of those attempts succeeded. I saw on the Internet to add the last command to the /etc/rc.local file. But instead of this file, I've got rc0.d to rc6.d and rcS.d folders, full of symbolic links to scripts.

I'm running under Ubuntu-Mate 17.04, with the 4.10.0 kernel.
Just in case someone else needs the answer ;) If the user is running systemd (the default in many distros), the service can be disabled with:

systemctl disable bluetooth.service
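On newer systemd versions you can disable and stop the service in one step (and systemctl enable --now undoes it):

systemctl disable --now bluetooth.service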
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/387502", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/163665/" ] }
387,517
I'm using Debian 9.1 with KDE and would like to have a button to show all open windows. However I don't know what I should put as "Action" to get that it working. So how can I implement this? Is there a command for this?
The default shortcut to show present windows is Ctrl+F9. This will zoom out and show all open windows.

Alternatively, if you go to System Settings - Desktop Behavior - Screen Edges, you can set "Present windows" (all desktops / current desktop / current application) on one of the 8 screen edge actions. That way you just push your mouse cursor to whichever edge you created the action for, and it will accomplish the same thing.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/387517", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/233262/" ] }
387,544
I wanted to use cut with a 2-character delimiter to process a file with many lines like this:

1F3C6..1F3CA
1F3CF..1F3D3
1F3E0..1F3F0

But cut only allows a single character. Instead of cut -d'..' I'm trying awk -F'..' "{echo $1}" but it's not working.

My script:

wget -O output.txt http://www.unicode.org/Public/emoji/6.0/emoji-data.txt
sed -i '/^#/ d' output.txt # Remove comments
cat output.txt | cut -d' ' -f1 | while read line ; do
    echo $line | awk -F'..' "{echo $1}"
done
awk 's field separator is treated as a regexp as long as it's more than one character. .. as a regexp means any 2 characters. You'd need to escape that . either with [.] or with \. :

awk -F'[.][.]' ...
awk -F'\\.\\.' ...

(the backslash itself also needs to be escaped (with some awks like gawk at least) for the \n / \b expansion that the argument to -F undergoes). In your case:

awk -F' +|[.][.]' '/^[^#]/{print $1}' < output.txt

In any case, avoid shell loops to process text, note that read is not meant to be used like that, that echo should not be used for arbitrary data, and remember to quote your variables.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/387544", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16792/" ] }
387,553
I have a long file with values like this:

0030..0039
2194..2199
21A9..21AA
231A..231B
23E9..23F3
23F8..23FA
25AA..25AB

I want to convert the hexadecimals to decimal format. I'm reading that you can use $(($HEX)), but applying that to each of the numbers above I get back:

$((0039))
bash: 0039: value too great for base (error token is "0039")

What exactly is going wrong here? The message makes me think it knows what I want to do, as opposed to $(39) which gives a different error. But it says the number is too big. Seems like a strange error message; can someone explain?
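What's happening: in bash arithmetic, a constant with a leading 0 is parsed as octal, so $((0039)) is read in base 8, and the digit 9 is out of range for that base. That is exactly what "value too great for base" is complaining about. To force hexadecimal you have to say so explicitly, with either a 0x prefix or the base#number form:

$ echo $((16#0039))
57
$ echo $((0x23E9))
9193

A sketch for the whole file, assuming every line has the start..end form shown above (with IFS set to '.', the empty middle field is soaked up by the throwaway variable):

while IFS=. read -r start _ end; do
    printf '%d..%d\n' "0x$start" "0x$end"
done < file

On the first line, that prints 48..57.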
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/387553", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16792/" ] }
387,568
I have gnome-tweak-tool running on an install of Debian 8 with gdm3 installed. How do I allow this application to run with full permissions? When I run "sudo gnome-tweak-tool" the program runs as expected, but when I try to run some programs from the applications menu, or just using the app name without sudo from the terminal, there are problems. With gnome-tweak-tool it runs, but it's not possible to change anything, and it opens with a theme different from the system theme. I don't have this problem on Ubuntu GNOME. Could someone please explain why that would be, what am I missing, and how do I fix it without adding sudo or gksu to the exec line of every .desktop file?

This is what I expect to happen:

Here's what's actually happening without running with sudo from terminal:
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/387568", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/221542/" ] }
387,572
How can I execute a bash script in a docker container from the host, such that the container does not exit after the script finishes?
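A sketch, with hypothetical container and script names: docker exec starts an extra process inside an already-running container, and the container keeps running after that process exits (a container only stops when its main process, PID 1, ends):

docker cp myscript.sh mycontainer:/tmp/myscript.sh   # copy the script from the host
docker exec mycontainer bash /tmp/myscript.sh        # run it; the container stays up

If the script exists only on the host, you can also feed it in over stdin:

docker exec -i mycontainer bash < myscript.sh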
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/387572", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/247582/" ] }
387,586
When I do tail -f filename, how can I quit the mode without using Ctrl+C to kill the process? What I want is a normal way to quit, like q in top. I am just curious about the question, because I feel that killing the process is not a good way to quit something.
As said in the comments, Ctrl-C does not kill the tail process, which is done by sending either a SIGTERM or SIGKILL signal (the infamous -9 ...); it merely sends a SIGINT, which tells tail to end the forward mode and exit.

FYI, there's a better tool:

less +F filename

In less, you can press Ctrl-C to end forward mode and scroll through the file, then press F to go back to forward mode again.

Note that less +F is advocated by many as a better alternative to tail -f. For differences and caveats between the two tools, read this answer: Is `tail -f` more efficient than `less +F`?
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/387586", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/235470/" ] }
387,600
dmesg shows lots of messages from serial8250:

$ dmesg | grep -i serial
[     0.884481] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled
[     6.584431] systemd[1]: Created slice system-serial\x2dgetty.slice.
[633232.317222] serial8250: too much work for irq4
[633232.453355] serial8250: too much work for irq4
[633248.378343] serial8250: too much work for irq4
...

I have not seen this message before. What does it generally mean? Should I be worried? (From my research, it is not distribution specific, but in case it is relevant, I see the messages on an EC2 instance running Ubuntu 16.04.)
There is nothing wrong with your kernel or device drivers. The problem is with your machine hardware. The problem is that it is impossible hardware. This is an error in several virtualization platforms (including at least Xen, QEMU, and VirtualBox) that has been plaguing people for at least a decade. The problem is that the UART hardware emulated by various brands of virtual machine behaves impossibly, sending characters at an impossibly fast line speed. To the kernel, this is indistinguishable from faulty real UART hardware that is continually raising an interrupt for an empty output buffer/full input buffer. (Such faulty real hardware exists, and you will find embedded Linux people also discussing this problem here and there.) The kernel pushes the data out/pulls the data in, and the UART immediately raises an interrupt saying that it is ready for more. H. Peter Anvin provided a patch to fix QEMU in 2008. You'll need to ask Amazon when EC2 is going to catch up.

Further reading:

- Alan Cox (2008-01-12). Re: [PATCH] serial: remove "too much work for irq" printk. Linux Kernel Mailing List.
- H. Peter Anvin (2008-02-07). Re: 2.6.24 says "serial8250: too much work for irq4" a lot. Linux Kernel Mailing List.
- Casey Dahlin (2009-05-15). 'serial8250: too much work for irq4' message when viewing serial console on SMP full-virtualized xen domU. 501026. Red Hat Bugzilla.
- Sibiao Luo (2013-07-21). guest kernel will print many "serial8250: too much work for irq3" when using kvm with isa-serial. 986761. Red Hat Bugzilla.
- schinkelm (2008-12-16). serial port in linux guest gives "serial8250: too much work for irq4". 2752. VirtualBox bugs.
- Marc PF (2015-09-05). EC2 instance becomes unresponsive. AWS Developer Forums.
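If you want to confirm that you are running on one of the affected virtualization platforms, a quick check from the shell (assuming a systemd-based system such as Ubuntu 16.04):

systemd-detect-virt    # prints e.g. "xen" or "kvm" on EC2 instances
# fallback without systemd:
grep -qw hypervisor /proc/cpuinfo && echo "running under a hypervisor"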
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/387600", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29509/" ] }
387,640
It is written in the Linux kernel Makefile that

clean     - Remove most generated files but keep the config and
            enough build support to build external modules
mrproper  - Remove all generated files + config + various backup files

and it is stated in the Arch docs that

To finalise the preparation, ensure that the kernel tree is absolutely clean;
$ make clean && make mrproper

So if make mrproper does a more thorough removal, why is make clean used?
Cleaning is done on three levels, as described in a comment in the Linux kernel Makefile :

###
# Cleaning is done on three levels.
# make clean     Delete most generated files
#                Leave enough to build external modules
# make mrproper  Delete the current configuration, and all generated files
# make distclean Remove editor backup files, patch leftover files and the like

According to the Makefile, the mrproper target depends on the clean target (see line 1421 ). Additionally, the distclean target depends on mrproper . Executing make mrproper will therefore be enough as it would also remove the same things as what the clean target would do (and more). The mrproper target was added in 1993 (Linux 0.97.7) and has always depended on the clean target. This means that it was never necessary to use both targets as in make clean && make mrproper . Historic reference: https://archive.org/details/git-history-of-linux
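You can verify the dependency in your own tree without deleting anything; the exact rule varies between kernel versions, but a grep along these lines should show it:

grep -n '^mrproper:' Makefile
# expected output is something like: mrproper: clean $(mrproper-dirs)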
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/387640", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100597/" ] }
387,656
For example, we want to count all quote ( " ) characters; we just want to be warned if files have more quotes than they should. For example:

cluster-env,"manage_dirs_on_root","true"
cluster-env,"one_dir_per_partition","false"
cluster-env,"override_uid","true"
cluster-env,"recovery_enabled","false"

Expected result: 16
You can combine tr (translate or delete characters) with wc (count words, lines, characters):

tr -cd '"' < yourfile.cfg | wc -c

tr -cd '"' deletes ( -d ) all characters in the complement ( -c ) of " , and wc -c then counts the characters (bytes). Some versions of wc may support the -m or --chars flag, which is better suited to counting non-ASCII characters.
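For example, fed the four sample lines from the question (via a here-document instead of a file), it prints the expected count:

tr -cd '"' <<'EOF' | wc -c
cluster-env,"manage_dirs_on_root","true"
cluster-env,"one_dir_per_partition","false"
cluster-env,"override_uid","true"
cluster-env,"recovery_enabled","false"
EOF

16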
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/387656", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
387,675
I have been struggling for the past couple of days attempting to hook up my 1920x1080 external monitor to my 3200x1800 laptop. When I run xrandr , it outputs:

Screen 0: minimum 320 x 200, current 5120 x 1800, maximum 8192 x 8192
eDP-1 connected 3200x1800+1920+0 (normal left inverted right x axis y axis) 294mm x 165mm
   3200x1800     59.98*+  47.99
   2048x1536     60.00
   1920x1440     60.00
   1856x1392     60.01
   1792x1344     60.01
   1920x1200     59.95
   1920x1080     59.93
   1600x1200     60.00
   1680x1050     59.95    59.88
   1600x1024     60.17
   1400x1050     59.98
   1280x1024     60.02
   1440x900      59.89
   1280x960      60.00
   1360x768      59.80    59.96
   1152x864      60.00
   1024x768      60.04    60.00
   960x720       60.00
   928x696       60.05
   896x672       60.01
   960x600       60.00
   960x540       59.99
   800x600       60.00    60.32    56.25
   840x525       60.01    59.88
   800x512       60.17
   700x525       59.98
   640x512       60.02
   720x450       59.89
   640x480       60.00    59.94
   680x384       59.80    59.96
   576x432       60.06
   512x384       60.00
   400x300       60.32    56.34
   320x240       60.05
DP-1 connected primary 1920x1080+0+720 (normal left inverted right x axis y axis) 527mm x 296mm
   1920x1080     60.00 +  50.00    59.94
   1920x1080i    60.00*   50.00    59.94
   1600x1200     60.00
   1600x900      60.00
   1280x1024     75.02    60.02
   1152x864      75.00
   1280x720      60.00    50.00    59.94
   1024x768      75.03    60.00
   800x600       75.00    60.32
   720x576       50.00
   720x480       60.00    59.94
   640x480       75.00    60.00    59.94
   720x400       70.08
HDMI-1 disconnected (normal left inverted right x axis y axis)
DP-2 disconnected (normal left inverted right x axis y axis)
HDMI-2 disconnected (normal left inverted right x axis y axis)

So I figured that if I ran xrandr --output DP-1 --mode 1920x1080 , the display would show on the external monitor. I was wrong: the monitor claimed to have no signal. I followed this comment, which allowed the monitor to detect the HDMI signal, but I could only use a resolution lower than 1024x768 . I played around a bit more, and the monitor detected 1920x1080i as well, but the borders around the screen were cut off. I did some research, figured out about something called overscan, and used xrandr --output DP-1 --set underscan on , but that caused the following output:

X Error of failed request:  BadName (named color or font does not exist)
  Major opcode of failed request:  140 (RANDR)
  Minor opcode of failed request:  11 (RRQueryOutputProperty)
  Serial number of failed request:  38
  Current serial number in output stream:  38

I also tried to add a new mode via xrandr and cvt , and also tried changing the display settings via the settings panel in Ubuntu. There does not seem to be a problem with the monitor, because it works fine when I boot Windows 10. Is there anything else I could try?

Machine: Dell XPS 13 9350 (no hardware changes)
OS: Ubuntu 16.04 LTS
External Monitor: Dell S2415H
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/387675", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/247647/" ] }
387,690
I want to find the location of all files named index.php that contain the string "hello". Thanks.
Using grep with find :

find /top-dir -type f -name index.php -exec grep -l 'hello' {} +

where /top-dir is the path to the top-most directory that you want to search. With -type f , we only look at regular files with find , and with -name index.php we restrict the search to files called index.php . -exec grep -l 'hello' {} + will run grep on the found files, and it will output the paths of all the files that match the pattern ( 'hello' ). It's the -l flag to grep that causes the output of the paths. With + at the end, find will give as many files as possible to each invocation of grep . Changing this to ';' or \; would result in grep being invoked with one file at a time, which may be slow if there are many files.
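Sample output (the paths here are made up for illustration):

$ find /var/www -type f -name index.php -exec grep -l 'hello' {} +
/var/www/site1/index.php
/var/www/blog/index.php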
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/387690", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/229085/" ] }
387,698
We have an example of a partial CSV file (with only 3 fields). Our target is to remove all " characters that surround the fields, but not the ones inside the quoted values:

ssl-server,"ssl.server.truststore.type","jks"
tez-env,"enable_heap_"\n"dump","false"
tez-env,"heap_dump_location"\n"port","/tmp"
tez-env,"tez_user","tez"

Expected output:

ssl-server,ssl.server.truststore.type,jks
tez-env,enable_heap_"\n"dump,false
tez-env,heap_dump_location"\n"port,/tmp
tez-env,tez_user,tez
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/387698", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
387,754
Problem 1: I want to get the array items as user input at runtime, print the items, and print the array length. This is what I have:

read -a myarray
echo "array items" ${myarray[@]}
echo "array length" ${#myarray[@]}

At runtime, I gave the following as input:

$ ("apple fruit" "orange" "grapes")

The output was:

array items "apple fruit" "orange" "grapes"
array length 4

which is not correct. If I don't ask for user input and instead use an array declared and initialised as part of the code, as

myarray=("apple fruit" "orange" "grapes")

the array length is echoed as 3. So it seems like my usage of the read command is not right.

Problem 2: If I add a prompt to the read command as follows,

read -p "enter array items: " myarray

the first item "apple fruit" gets printed as fruit" and the length is also wrong. If I remove the prompt and add -a, everything is good. If I combine both and give it as read -ap, the prompt doesn't pop up at all; it just waits for values without any message. Why is that? Can someone explain to me what is wrong?
Problem 1: In your example, read does not get its input from a command line argument, but from stdin. As such, the input it receives does not go through bash 's string parser. Instead, it is treated as a literal string, delimited by spaces. So with your input, your array values become:

[0] -> ("apple
[1] -> fruit"
[2] -> "orange"
[3] -> "grapes")

To do what you want, you need to escape any spaces you have, to keep the delimiter from kicking in. Namely, you must enter the following input after invoking read :

apple\ fruit orange grapes

Problem 2: In order for read to store the input it receives as an array, you must pass the -a switch followed by the array name. So you need:

read -a myarray -p "Enter your items"
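A quick interactive check of both points together (the line after the prompt is typed input):

$ read -a myarray -p "Enter your items: "
Enter your items: apple\ fruit orange grapes
$ echo "items: ${myarray[@]}"
items: apple fruit orange grapes
$ echo "length: ${#myarray[@]}"
length: 3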
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/387754", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/247398/" ] }
387,761
I'm having trouble installing the GRUB bootloader manually. I was attempting to install Kali Linux to dual boot with my already existing Windows 10 system. During the installation, it said it couldn't install GRUB, so I tried to manually install it from a Kali live USB. However, whenever I run these commands in a terminal:

mount /dev/sda5 /mnt
mount --bind /dev /mnt/dev
mount --bind /dev/pts /mnt/dev/pts
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt
grub-install /dev/sda

it says bash: grub-install: command not found . grub2-install also doesn't work, and trying update-grub says the same thing. GRUB was never installed, so how do I install it?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/387761", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/247734/" ] }