source_id | question | response | metadata
---|---|---|---|
96,510 | Coming from a FreeBSD world, I wish to make the Linux terminal behave like the FreeBSD one (especially the 9.1 version): when you type cd in the terminal and push the "up" arrow, you can browse all the commands in the history starting with cd, which saves a lot of time. I don't know how to enable this feature in Debian or CentOS, which forces me to type the whole command; could someone please help? | Add the following to ~/.inputrc : # Press up-arrow for previous matching command
"\e[A":history-search-backward
# Press down-arrow for next matching command
"\e[B":history-search-forward Explanation ~/.inputrc is the configuration file for GNU readline . Many programs, including bash, use readline for command line editing (tcsh has its own line editor with equivalent bindkey commands). The two lines above will tell readline to invoke its history search functionality when the escape sequences for the up-arrow key ( \e[A ) and down-arrow key ( \e[B ) are encountered. | {
"source": [
"https://unix.stackexchange.com/questions/96510",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38462/"
]
} |
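A small follow-up to the answer for 96,510: the bindings above take effect in new shells, and in an already-running bash session they can be loaded without logging out. This assumes bash's readline; other readline-based programs pick the file up the next time they start.

```
# Reload ~/.inputrc in the current bash session:
bind -f ~/.inputrc
```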
96,548 | CentOS 6.0 I'm studying iptables and am getting confused on the difference between FORWARD and OUTPUT chains. In my training documentation, it states: If you're appending to (-A) or deleting from (-D) a chain, you'll want
to apply it to network data traveling in one of three directions: INPUT - All incoming packets are checked against the rules in this chain. OUTPUT - All outgoing packets are checked against the rules in this chain. FORWARD - All packets being sent to another computer are checked against the rules in this chain. This confuses me because, in my mind, packets leaving for a host WOULD be outgoing. So are there scenarios where a packet would be going to another computer but NOT be "outgoing"? How would iptables distinguish between the two? | OUTPUT is for packets that are emitted by the host. Their destination is usually another host, but can be the same host via the loopback interface, so not all packets that go through OUTPUT are in fact outgoing. FORWARD is for packets that are neither emitted by the host nor directed to the host. They are the packets that the host is merely routing. When you start digging into packet mangling and NAT, the full story is rather more complex . | {
"source": [
"https://unix.stackexchange.com/questions/96548",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1822/"
]
} |
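A sketch illustrating the distinction drawn in the answer to 96,548. The addresses and interface names are placeholders (203.0.113.0/24 is a documentation range); the point is only which chain each kind of traffic traverses.

```
# Packets this host emits (even to itself over loopback) traverse OUTPUT:
iptables -A OUTPUT  -d 203.0.113.5 -p tcp --dport 443 -j ACCEPT
# Packets addressed to this host traverse INPUT:
iptables -A INPUT   -s 203.0.113.5 -p tcp --dport 22  -j ACCEPT
# Packets the host merely routes between other machines traverse FORWARD:
iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
```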
96,625 | Is it possible to allow some particular users (e.g. members of a group) to mount any filesystem without superuser privileges on Linux? Another question might have been "in what ways a user can harm a system by mounting filesystems?" | There are a couple of approaches, some of them mostly secure, others not at all. The insecure way Let any user run mount , e.g., through sudo. You might as well give them root; it's the same thing. The user could mount a filesystem with a suid root copy of bash —running that instantly gives root (likely without any logging, beyond the fact that mount was run). Alternatively, a user could mount his own filesystem on top of /etc , containing his/her own copy of /etc/shadow or /etc/sudoers , then obtain root with either su or sudo . Or possibly bind-mount ( mount --bind ) over one of those two files. Or a new file into /etc/sudoers.d . Similar attacks could be pulled off over /etc/pam.d and many other places. Remember that filesystems need not even be on a device, -o loop will mount a file which is owned (and thus modifiable) by the user. The mostly secure way: udisks or similar The various desktop environments have actually already built solutions to this, to allow users to mount removable media. They work by mounting in a subdirectory of /media only and by turning off set-user/group-id support via kernel options. Options here include udisks , udisks2 , pmount , usbmount . If you must, you could write your own script to do something similar, and invoke it through sudo—but you have to be really careful writing this script to not leave root exploits. If you don't want your users to have to remember sudo, you can do something like this in a script: #!/bin/bash
if [ $UID -ne 0 ]; then # or `id -u`
exec sudo -- "$0" "$@"
fi
# rest of script goes here The will-be-secure someday way: user namespaces Linux namespaces are a very lightweight form of virtualization (containers, to be more specific). In particular, with user namespaces, any user on the system can create their own environment in which they are root. This would allow them to mount filesystems, except that has been explicitly blocked except for a few virtual filesystems. Eventually, FUSE filesystems will probably be allowed, but the most recent patches I could find don't cover block devices, only things like sshfs. Further, many distro kernels have (for security reasons) defaulted to not allowing unprivileged users to use user namespaces; for example Debian has a kernel.unprivileged_userns_clone that defaults to 0. Other distros have similar settings, though often with slightly different names. The best documentation I know of about user namespaces is an LWN article Namespaces in operation, part 5: User namespaces . For now, I'd go with udisks2. | {
"source": [
"https://unix.stackexchange.com/questions/96625",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
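A brief sketch of the "mostly secure" route the answer to 96,625 recommends, using udisks2's command-line client. It assumes udisks2 is installed and the polkit policy permits the user to mount removable media; the device node is a placeholder.

```
udisksctl mount -b /dev/sdb1     # typically mounted under /run/media/$USER (or /media), nosuid/nodev
udisksctl unmount -b /dev/sdb1
```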
96,693 | I'm using Mint 15 w/ Cinnamon. I bought a set of bluetooth speakers and I'm trying to connect to them via terminal. Via the GUI I can see them normally and I am connected to them. I want to make a small script so every time they are visible I would connect to them automatically. I am trying to scan them with: hcitool scan But I get Scanning... and after a few seconds the process dies. The same thing with hidd --search . If I run hciconfig scan I get: hci0: Type: BR/EDR Bus: USB
BD Address: 40:2C:F4:78:E8:69 ACL MTU: 1021:8 SCO MTU: 64:1
UP RUNNING PSCAN ISCAN
RX bytes:130700 acl:22 sco:0 events:18527 errors:0
TX bytes:31875398 acl:36784 sco:0 commands:75 errors:0 I suppose that is just saying my bluetooth address and that it is turned on. As I said already, via the normal User Interface, I can see the speakers and I am connected to them, but through terminal I get nothing. Actually it is quite funny that hcitool scan isn't finding anything since my speakers are connected and every time I run the command the sound from the speakers breaks for a couple of seconds. | I managed to do so via bluez-tools : sudo apt-get install bluez-tools List of devices to get the MAC address of my device: bt-device -l and successfully connect to it: bt-device -c 01:02:03:04:05:06 | {
"source": [
"https://unix.stackexchange.com/questions/96693",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47168/"
]
} |
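Since the question in 96,693 asked for a script that reconnects automatically, here is a rough sketch built on the same bt-device commands. The MAC address is a placeholder, and the "Connected" field name reported by bt-device -i is an assumption worth verifying against your bluez-tools version.

```
#!/bin/bash
MAC="01:02:03:04:05:06"   # placeholder: your speakers' address from `bt-device -l`
while true; do
    # reconnect only if the device is not already connected
    if ! bt-device -i "$MAC" 2>/dev/null | grep -q 'Connected: 1'; then
        bt-device -c "$MAC"
    fi
    sleep 30
done
```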
96,798 | How can I start applications on specific workspaces in i3 when it starts? Why is this not working in my config file? : workspace 1; exec firefox; workspace 2; exec chromium; workspace 1 | According to the Arch Wiki i3 page , to autostart an application on a specific workspace, you use i3-msg : exec --no-startup-id i3-msg 'workspace 1:Web; exec /usr/bin/firefox' | {
"source": [
"https://unix.stackexchange.com/questions/96798",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36146/"
]
} |
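For completeness, a sketch of the relevant lines in the i3 config file (~/.config/i3/config or ~/.i3/config), following the i3-msg approach quoted in the answer; the workspace numbers and browsers follow the question.

```
# Start each browser on its own workspace at i3 startup:
exec --no-startup-id i3-msg 'workspace 1; exec firefox'
exec --no-startup-id i3-msg 'workspace 2; exec chromium'
```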
96,847 | If I do watch cat /proc/sys/kernel/random/entropy_avail I see that my systems entropy slowly increases over time, until it reaches the 180-190 range at which point it drops down to around 120-130. The drops in entropy seem to occur about every twenty seconds. I observe this even when lsof says that no process has /dev/random or /dev/urandom open. What is draining away the entropy? Does the kernel need entropy as well, or maybe it is reprocessing the larger pool into a smaller, better quality pool? This is on a bare-metal machine, with no SSL/SSH/WPA connections. | Entropy is not only lost via /dev/{,u}random , the kernel also takes some. For example, new processes have randomized addresses (ASLR) and network packets need random sequence numbers. Even the filesystem module may remove some entropy. See the comments in drivers/char/random.c . Also note that entropy_avail refers to the input pool , not the output pools (basically the non-blocking /dev/urandom and the blocking /dev/random ). If you need to watch the entropy pool, do not use watch cat , that will consume entropy at every invocation of cat . In the past I also wanted to watch this pool as GPG was very slow at generating keys, therefore I wrote a C program with the sole purpose to watch the entropy pool: https://git.lekensteyn.nl/c-files/tree/entropy-watcher.c . Note that there may be background processes which also consume entropy. Using tracepoints on an appropriate kernel you can see the processes that modify the entropy pool. Example usage that records all tracepoints related to the random subsystem including the callchain ( -g ) on all CPUs ( -a ) starting measuring after 1 second to ignore its own process ( -D 1000 ) and including timestamps ( -T ): sudo perf record -e random:\* -g -a -D 1000 -T sleep 60 Read it with either of these commands (change owner of perf.data as needed): perf report # opens an interactive overview
perf script # outputs events after each other with traces The perf script output gives an interesting insight and shows when about 8 bytes (64 bits) of entropy is periodically drained on my machine: kworker/0:2 193 [000] 3292.235908: random:extract_entropy: ffffffff8173e956 pool: nbytes 8 entropy_count 921 caller _xfer_secondary_pool
5eb857 extract_entropy (/lib/modules/4.6.2-1-ARCH/build/vmlinux)
5eb984 _xfer_secondary_pool (/lib/modules/4.6.2-1-ARCH/build/vmlinux)
5ebae6 push_to_pool (/lib/modules/4.6.2-1-ARCH/build/vmlinux)
293a05 process_one_work (/lib/modules/4.6.2-1-ARCH/build/vmlinux)
293ce8 worker_thread (/lib/modules/4.6.2-1-ARCH/build/vmlinux)
299998 kthread (/lib/modules/4.6.2-1-ARCH/build/vmlinux)
7c7482 ret_from_fork (/lib/modules/4.6.2-1-ARCH/build/vmlinux)
kworker/0:2 193 [000] 3292.235911: random:debit_entropy: ffffffff8173e956: debit_bits 64 5eb3e8 account.part.12 (/lib/modules/4.6.2-1-ARCH/build/vmlinux)
5eb770 extract_entropy (/lib/modules/4.6.2-1-ARCH/build/vmlinux)
5eb984 _xfer_secondary_pool (/lib/modules/4.6.2-1-ARCH/build/vmlinux)
5ebae6 push_to_pool (/lib/modules/4.6.2-1-ARCH/build/vmlinux)
293a05 process_one_work (/lib/modules/4.6.2-1-ARCH/build/vmlinux)
293ce8 worker_thread (/lib/modules/4.6.2-1-ARCH/build/vmlinux)
299998 kthread (/lib/modules/4.6.2-1-ARCH/build/vmlinux)
7c7482 ret_from_fork (/lib/modules/4.6.2-1-ARCH/build/vmlinux)
...
swapper 0 [002] 3292.507720: random:credit_entropy_bits: ffffffff8173e956 pool: bits 2 entropy_count 859 entropy_total 2 caller add_interrupt_randomness
5eaab6 credit_entropy_bits (/lib/modules/4.6.2-1-ARCH/build/vmlinux)
5ec644 add_interrupt_randomness (/lib/modules/4.6.2-1-ARCH/build/vmlinux)
2d5729 handle_irq_event_percpu (/lib/modules/4.6.2-1-ARCH/build/vmlinux)
2d58b9 handle_irq_event (/lib/modules/4.6.2-1-ARCH/build/vmlinux)
2d8d1b handle_edge_irq (/lib/modules/4.6.2-1-ARCH/build/vmlinux)
230e6a handle_irq (/lib/modules/4.6.2-1-ARCH/build/vmlinux)
7c9abb do_IRQ (/lib/modules/4.6.2-1-ARCH/build/vmlinux)
7c7bc2 ret_from_intr (/lib/modules/4.6.2-1-ARCH/build/vmlinux)
6756c7 cpuidle_enter (/lib/modules/4.6.2-1-ARCH/build/vmlinux)
2bd9fa call_cpuidle (/lib/modules/4.6.2-1-ARCH/build/vmlinux)
2bde18 cpu_startup_entry (/lib/modules/4.6.2-1-ARCH/build/vmlinux)
2510e5 start_secondary (/lib/modules/4.6.2-1-ARCH/build/vmlinux) Apparently this happens to prevent waste of entropy by transferring entropy from the input pool to the output pools: /*
* Credit (or debit) the entropy store with n bits of entropy.
* Use credit_entropy_bits_safe() if the value comes from userspace
* or otherwise should be checked for extreme values.
*/
static void credit_entropy_bits(struct entropy_store *r, int nbits)
{
...
/* If the input pool is getting full, send some
* entropy to the two output pools, flipping back and
* forth between them, until the output pools are 75%
* full.
*/
...
schedule_work(&last->push_work);
}
/*
* Used as a workqueue function so that when the input pool is getting
* full, we can "spill over" some entropy to the output pools. That
* way the output pools can store some of the excess entropy instead
* of letting it go to waste.
*/
static void push_to_pool(struct work_struct *work)
{
...
} | {
"source": [
"https://unix.stackexchange.com/questions/96847",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15795/"
]
} |
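A small sketch related to the `watch cat` caveat in the answer to 96,847: polling entropy_avail with the shell's built-in read instead of spawning cat each time (the sleep is still an external command unless your shell provides it as a builtin). Assumes bash 4.2+ for the printf %(...)T timestamp.

```
#!/bin/bash
while true; do
    read -r avail < /proc/sys/kernel/random/entropy_avail
    printf '%(%T)T  entropy_avail=%s\n' -1 "$avail"
    sleep 1
done
```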
96,892 | An install document I'm following instructs me to add a user like so: sudo adduser --disabled-login --gecos 'GitLab' git The --disabled-login flag is absent from most man pages I have searched. I've made two users, one with --disabled-login ( foo ), and one without ( git ). As far as I can tell the --disabled-login flag does nothing. I can still su to both users, and both use /bin/bash as their login shell. The only difference I can see is that getent passwd has extra commas before the home folder on the user that has login disabled. There is no documentation that I can find to indicate what this would mean. root@gitlab:~# getent passwd git
git:x:998:998:GitLab:/home/git:/bin/bash
root@gitlab:~# getent passwd foo
foo:x:1001:1002:GitLab,,,:/home/foo:/bin/bash UPDATE #1 I've found another difference, one user has a * as their password, the other has ! : root@gitlab:~# getent shadow git
git:*:15998::::::
root@gitlab:~# getent shadow foo
foo:!:15998:0:99999:7::: What exactly does --disabled-login do on Ubuntu? | The explanation is not well documented. --disabled-login sets the password to ! Password values NP or null = The account has no password
* = The account is deactivated & locked
! = The login is deactivated, user will be unable to login
!! = The password has expired Examples root@gitlab:~# getent shadow vagrant
vagrant:$6$abcdefghijklmnopqrstuvwxyz/:15805:0:99999:7:::
root@gitlab:~# getent shadow foo
foo:!:15998:0:99999:7:::
root@gitlab:~# getent shadow git
git:*:15998:::::: wikipedia briefly covers this. It appears that * and ! effectively do the same thing; prevent the user from logging in (but not from su'ing from a different user) | {
"source": [
"https://unix.stackexchange.com/questions/96892",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39263/"
]
} |
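A quick way to see the lock state described in the answer to 96,892 without reading /etc/shadow directly, using passwd's status flag (a Debian/Ubuntu shadow-utils userland is assumed):

```
sudo adduser --disabled-login --gecos 'GitLab' git
sudo passwd -S git    # second field: L = locked/disabled password, P = usable, NP = none
sudo passwd -l git    # lock the password (prefixes the hash with '!')
sudo passwd -u git    # unlock it again
```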
96,907 | I can check, if a file exists and is a symbolic link with -L for file in *; do
if [[ -L "$file" ]]; then echo "$file is a symlink"; else echo "$file is not a symlink"; fi
done and if it is a directory with -d: for file in *; do
if [[ -d "$file" ]]; then echo "$file is a directory"; else echo "$file is a regular file"; fi
done But how can I test for only links to directories? I simulated all cases in a test folder: /tmp/test# ls
a b c/ d@ e@ f@
/tmp/test# file *
a: ASCII text
b: ASCII text
c: directory
d: symbolic link to `c'
e: symbolic link to `a'
f: broken symbolic link to `nofile' | Just combine the two tests with && : if [[ -L "$file" && -d "$file" ]]
then
echo "$file is a symlink to a directory"
fi Or, for POSIX compliant-syntax, use: if [ -L "$file" ] && [ -d "$file" ]
... Note: the first syntax using [[ expr1 && expr2 ]] is valid, but only works in certain shells such as ksh (where it comes from), bash or zsh. The second syntax using [ expr1 ] && [ expr2 ] is POSIX-compliant and even Bourne-compatible, meaning it will work in all modern sh and sh -like shells | {
"source": [
"https://unix.stackexchange.com/questions/96907",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20661/"
]
} |
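Applied to the asker's test folder, a minimal POSIX-sh sketch combining the two tests exactly as the answer suggests; it should report only d, the one symlink that points at a directory.

```
for file in *; do
    if [ -L "$file" ] && [ -d "$file" ]; then
        printf '%s is a symlink to a directory\n' "$file"
    fi
done
```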
96,962 | If I launch xterm with its default bitmap fonts and then select the 'Large' font from the 'VT Fonts' menu (via ctrl+right mouse ), I get a very usable bitmap font with apparently good Japanese character support. I'd like to know what this font is so that I can use it elsewhere. Unfortunately, I've found no information on what default settings XTerm uses (i.e. when none are explicitly specified). Lots of sites show how to use X resources to specify new settings (e.g. particular fonts), but none I've seen say what defaults are used if I do nothing. I've tried eyeballing the font, and it looks similar to and is the same width as 9x15 , but it uses more vertical space. It appears not to be 9x15 with different line spacing, though, as specifying this font directly fails to display some Japanese characters that 'Large' can handle just fine. Although I'll be happy to know what this specific font is, I really want to know where to find what defaults XTerm uses for its resources more generally. If it makes any difference, I'm running Ubuntu 12.04 LTS, 64-bit. [I have seen this question on the subject already, which is why I'm specifically asking about defaults rather than trying to get live values from a running XTerm.] | The appres utility lists the resources used by an application, both user and default. appres XTerm xterm The first argument is the class name ( xterm -class Xxx ). The second argument, which is optional, is the instance name ( xterm -name xxx ). The “Large” font is .VT100.font5 or .VT100.utf8Fonts.font5 . See the manual for whether .utf8Fonts is used, it's a bit complex. If you have more than one among *.VT100.font5 and ?.VT100.font5 and XTerm.VT100.font5 and xterm.VT100.font5 , the last one in this list applies; see the X documentation for the gory details of resource name precedence. appres XTerm | grep font5 | {
"source": [
"https://unix.stackexchange.com/questions/96962",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5602/"
]
} |
97,003 | I have to sftp to a server on a specific port, but the username contains the @ symbol. Suppose the user is "[email protected]"
and the ftp server is just "example.com" . I will end up with sftp -oPort:8777 [email protected]@example.com which will obviously not work, as I have tested. What kind of amateur creates a username with @? Ok, my boss did. How do I solve that? | Pass the user name through the -o User option, or through the equivalent User directive in the client configuration file ( ~/.ssh/config ). sftp -o Port=8777 -o [email protected] example.com This applies to ssh , scp and sshfs as well. Using the configuration file instead of -o options has the advantage of also working with tools that call ssh and don't let you easily pass command line options, if at all.
"source": [
"https://unix.stackexchange.com/questions/97003",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45335/"
]
} |
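A sketch of the ~/.ssh/config route mentioned at the end of the answer to 97,003; the host alias, port and user name below are placeholders for the asker's real values.

```
cat >> ~/.ssh/config <<'EOF'
Host work-sftp
    HostName example.com
    Port 8777
    User someuser@company.example
EOF
sftp work-sftp    # ssh and scp pick up the same alias
```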
97,059 | I see the same behaviour for the loop below as for the loop with while [ 1 ] . Why is that so? while [ 0 ]; do
echo "hello"
done | Single square brackets in the shell are a synonym for test (either the separate command or the shell built-in), so [ 0 ] means the same thing as test 0 . test is for doing comparisons and testing the attributes of files, as you can read about in its manpage. When it isn't given an expression that looks like a comparison, file test, or one of the other operations it can do, it will instead test whether the argument is present and a non-empty string. Neither 0 nor 1 is really an appropriate input for test, and as non-empty strings both simply succeed, so your while loop loops forever. You may want to try instead while false; do
echo "hello"
done possibly replacing false with true . Or maybe what you want is to use (( )) : while (( 0 )); do
echo "hello"
done Which will behave like most languages, where 0 means failure/false and 1 means success/true. | {
"source": [
"https://unix.stackexchange.com/questions/97059",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23301/"
]
} |
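A few one-liners that make the answer's point concrete: for test, any non-empty string argument is "true"; only arithmetic evaluation treats 0 as false (bash/ksh syntax for the (( )) part).

```
[ 0 ]    && echo '[ 0 ] is true: "0" is just a non-empty string'
[ "" ]   || echo '[ "" ] is false: the empty string fails the test'
(( 0 ))  || echo '(( 0 )) is false: arithmetic evaluation treats 0 as false'
```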
97,143 | I have a drive (SD card) with a few ext4 partitions but also some unallocated space. The fstrim utility can only work within a filesystem. Before I reinvent the wheel and write one, is there another utility that can TRIM the unallocated space (or that can TRIM an explicitly specified range)? I can verify that the majority of the unallocated space on the device is not currently known to be free by the controller, as I've observed that, on this particular card, reads to trimmed space return 0's, but a scan of the device shows plenty of garbage data left over. Edit: I am having an issue using hdparm . The example below discards the first sector, but I am seeing the same results regardless of the range I specify. fstrim has no issues on the device: root@ubuntu:~# hdparm --please-destroy-my-drive --trim-sector-ranges 0:1 --verbose /dev/mmcblk0
/dev/mmcblk0:
trimming 1 sectors from 1 ranges
outgoing cdb: 85 0d 06 00 01 00 01 00 00 00 00 00 00 40 06 00
outgoing_data:
00 00 00 00 00 00 01 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
ioctl(fd,SG_IO): Invalid argument
FAILED: Invalid argument I am investigating further but does anybody have any insight? | If you have a recent enough version of util-linux , it contains the tool blkdiscard which is able to TRIM entire devices, or ranges within a device using --offset and --length options. Please note: blkdiscard is dangerous, if you let it TRIM the wrong regions, your data is gone! So you can figure out the unpartitioned (free) regions of your partition table and then TRIM them using this tool. For msdos and gpt partitions, parted provides the free regions like so: # parted -m /dev/sda unit b print free | grep ':free;'
1:17408B:1048575B:1031168B:free;
1:64022904832B:64023240191B:335360B:free; Add a loop to it... while IFS=: read -ra FREE
do
echo blkdiscard --offset ${FREE[1]%%B} --length ${FREE[3]%%B} /dev/sda
done < <(parted -m /dev/sda unit b print free | grep ':free;') which prints blkdiscard --offset 17408 --length 1031168 /dev/sda
blkdiscard --offset 64022904832 --length 335360 /dev/sda Verify that this output is correct for you, add additional options if you like (verbose?), and finally remove the echo so it will be actually executed, and you should be set. The second command of that example actually fails because the length is too small - it may be worth checking inside the loop, ignore regions smaller than 1MB as they're unlikely to be successfully trimmed. If you are using LVM instead of partitions, you can create a LV for the unoccupied space and trim that: lvcreate -l100%FREE -n blkdiscard SSD-VG
blkdiscard -v /dev/SSD-VG/blkdiscard
lvremove SSD-VG/blkdiscard If you set issue_discards = 1 in your lvm.conf , you can skip the blkdiscard call as LVM will issue the TRIM on lvremove by itself. | {
"source": [
"https://unix.stackexchange.com/questions/97143",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49422/"
]
} |
97,244 | My git client claims error: Peer's Certificate issuer is not recognized. That means it can not find the corresponding ssl server key in the global system keyring. I want to check this by looking at the list of all system wide available ssl keys on a gentoo linux system. How can I get this list? | It's not SSL keys you want, it's certificate authorities, and more precisely their certificates. You could try: awk -v cmd='openssl x509 -noout -subject' '
/BEGIN/{close(cmd)};{print | cmd}' < /etc/ssl/certs/ca-certificates.crt To get the "subject" of every CA certificate in /etc/ssl/certs/ca-certificates.crt (this works because openssl exits after reading an individual cert block, but awk relaunches openssl on the next print | cmd call). Beware that sometimes, you get that error when SSL servers forget to provide the intermediate certificates. Use openssl s_client -showcerts -connect the-git-server:443 to get the list of certificates being sent. Note that the pathname of the certificates bundle may differ depending on operating system. The directory holding the certs sub-directory is given by the command openssl version -d . The actual certificates file in that directory may additionally have a different name. | {
"source": [
"https://unix.stackexchange.com/questions/97244",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26440/"
]
} |
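Following up on the last paragraph of the answer to 97,244, a sketch of checking a server's chain against the local CA bundle. The bundle path varies by distribution, as the answer notes, and the host name is a placeholder.

```
openssl s_client -connect the-git-server:443 \
        -CAfile /etc/ssl/certs/ca-certificates.crt </dev/null 2>/dev/null \
    | grep 'Verify return code'
# "Verify return code: 0 (ok)" means the presented chain validates against the bundle
```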
97,261 | This question is motivated by my shock when I discovered that Mac OS X kernel uses 750MB of RAM . I have been using Linux for 20 years, and I always "knew" that the kernel RAM usage is dwarfed by X (is it true? has it ever been true?). So, after some googling, I tried slabtop which told me: Active / Total Size (% used) : 68112.73K / 72009.73K (94.6%) Does this mean that my kernel is using ~72MB of RAM now? (Given that top reports Xorg 's RSS as 17M, the kernel now dwarfs X, not the other way around). What is the "normal" kernel RAM usage (range) for a laptop? Why does MacOS use an order of magnitude more RAM than Linux? PS. No answer here addressed the last question, so please see related questions: Is it a problem if kernel_task is routinely above 130MB on mid 2007 white MacBook? kernel_task using way too much memory What is included under kernel_task in Activity Monitor? | Kernel is a bit of a misnomer. The Linux kernel is comprised of several proceses/threads + the modules ( lsmod ) so to get a complete picture you'd need to look at the whole ball and not just a single component. Incidentally mine shows slabtop : Active / Total Size (% used) : 173428.30K / 204497.61K (84.8%) The man page for slabtop also had this to say: The slabtop statistic header is tracking how many bytes of slabs are being used and it not a measure of physical memory. The 'Slab' field in the /proc/meminfo file is tracking information about used slab physical memory. Dropping caches Dropping my caches as @derobert suggested in the comments under your question does the following for me: $ sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
$
Active / Total Size (% used) : 61858.78K / 90524.77K (68.3%) Sending a 3 does the following: free pagecache, dentries and inodes. I discuss this more in this U&L Q&A titled: Are there any ways or tools to dump the memory cache and buffer? ". So 110MB of my space was being used by just maintaining the info regarding pagecache, dentries and inodes. Additional Information If you're interested I found this blog post that discusses slabtop in a bit more details. It's titled: Linux command of the day: slabtop . The Slab Cache is discussed in more detail here on Wikipedia, titled: Slab allocation . So how much RAM is my Kernel using? This picture is a bit foggier to me, but here are the things that I "think" we know. Slab We can get a snapshot of the Slab usage using this technique. Essentially we can pull this information out of /proc/meminfo . $ grep Slab /proc/meminfo
Slab: 100728 kB Modules Also we can get a size value for Kernel modules (unclear whether it's their size from on disk or when in RAM) by pulling these values from /proc/modules : $ awk '{print $1 " " $2 }' /proc/modules | head -5
cpufreq_powersave 1154
tcp_lp 2111
aesni_intel 12131
cryptd 7111
aes_x86_64 7758 Slabinfo Much of the details about the SLAB are accessible in this proc structure, /proc/slabinfo : $ less /proc/slabinfo | head -5
slabinfo - version: 2.1
# name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
nf_conntrack_ffff8801f2b30000 0 0 320 25 2 : tunables 0 0 0 : slabdata 0 0 0
fuse_request 100 125 632 25 4 : tunables 0 0 0 : slabdata 5 5 0
fuse_inode 21 21 768 21 4 : tunables 0 0 0 : slabdata 1 1 0 Dmesg When your system boots there is a line that reports memory usage of the Linux kernel just after it's loaded. $ dmesg |grep Memory:
[ 0.000000] Memory: 7970012k/9371648k available (4557k kernel code, 1192276k absent, 209360k reserved, 7251k data, 948k init) References Where is the memory going? Memory usage in the 2.6 kernel | {
"source": [
"https://unix.stackexchange.com/questions/97261",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31443/"
]
} |
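A rough sketch combining two of the sources the answer to 97,261 lists into quick numbers: the Slab figure from /proc/meminfo (in kB) and the sum of loaded-module sizes from /proc/modules (in bytes). Both are approximations of kernel memory use, not an exact total.

```
grep Slab /proc/meminfo
awk '{ total += $2 } END { printf "Loaded modules: %.1f MiB\n", total / 1024 / 1024 }' /proc/modules
```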
97,313 | Currently I'm doing it by SSHing into a server, and executing Vim on the server. This has the benefit of not having to deal with cumbersome syntax of opening files from a remote server over SCP, and, more importantly, being able to really quickly navigate the server's filesystem. On the other hand, it has lag, which make editing kind of hard. What's the canonical way of editing lots of remote files? | You can use SSHFS to mount a remote home in a local folder. Has the advantage of using the current infrastructure and low latency of local vim. | {
"source": [
"https://unix.stackexchange.com/questions/97313",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17008/"
]
} |
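A minimal usage sketch of the SSHFS suggestion in 97,313; the host, user and paths are placeholders, and it assumes the sshfs/FUSE packages are installed and the user is allowed to use FUSE.

```
mkdir -p ~/remote
sshfs user@server:/home/user ~/remote     # mount the remote home locally
vim ~/remote/project/main.c               # edit with a local, lag-free vim
fusermount -u ~/remote                    # unmount when finished
```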
97,428 | If I am doing several substitutions which need to be consecutive, e.g. sed -i '/^[[:space:]]*browser.*\.should/s/browser/expect(browser/' t1_spec.rb
sed -i '/expect(browser.*\.should/s/\.should/).should/' t1_spec.rb
sed -i 's/\.should/\.to/' t1_spec.rb
sed -i 's/==/eq/' t1_spec.rb Is there a better way to do this that will only go through the t1_spec.rb file once and do the 4 substitutions for each line rather than going through the file 4 times? | In GNU sed (e.g. on my Ubuntu machine), simply using multiple lines is supported and is inferred to mean multiple substitutions. It works well and looks good (imho) as it avoids super long lines, e.g. sed -i '/^[[:space:]]*browser.*\.should/s/browser/expect(browser/
/expect(browser.*\.should/s/\.should/).should/
s/\.should/\.to/
s/==/eq/' t1_spec.rb | {
"source": [
"https://unix.stackexchange.com/questions/97428",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
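Equivalently, if a one-liner is preferred over the multi-line script in the answer to 97,428, each substitution can be given as its own -e expression; sed still makes a single pass and applies them in order to every line.

```
sed -i -e '/^[[:space:]]*browser.*\.should/s/browser/expect(browser/' \
       -e '/expect(browser.*\.should/s/\.should/).should/' \
       -e 's/\.should/\.to/' \
       -e 's/==/eq/' t1_spec.rb
```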
97,560 | Can I safely omit quotes on the right side of a local assignment? function foo {
local myvar=${bar}
stuff()
} I'm mainly interested in bash , but any info on corner cases in other shells are welcome. | Quotes are needed in export foo="$var" or local foo="$var" (or readonly , typeset , declare and other variable declaring commands ) in: dash versions 0.3.8-15 to 0.5.10.2 (see change ). the sh of NetBSD (also based on the Almquist shell). The sh of FreeBSD 9.2 or older (see the change in 9.3 ) yash zsh with versions prior to 5.1 in ksh or sh emulation (or for export var="$(cmd)" where zsh would perform word splitting otherwise (not globbing)). As otherwise the variable expansion would be subject to word splitting and/or filename generation like in any argument to any other command. And are not needed in: bash ksh (all implementations) the sh of FreeBSD 9.3 or newer busybox' ash-based sh (since 2005) zsh dash 0.5.11 or newer. In zsh , split+glob is never done upon parameter expansion, unless in sh or ksh emulation, but split (not glob) is done upon command substitution. Since version 5.1, export / local and other declaration commands have become dual keyword / builtin commands like in the other shells above, which means quoting is not necessary, even in sh / ksh emulation and even for command substitution. There are special cases where quoting is needed even in those shells though like: a="b=some value"
export "$a" Or more generally, if anything left of the = (including the = ) is quoted or the result of some expansion (like export 'foo'="$var" , export foo\="$var" or export foo$((n+=1))="$var" (that $((...)) should also be quoted actually)...). Or in other words when the argument to export wouldn't be a valid variable assignment if written without the export . If the export / local command name itself is quoted (even in part like "export" a="$b" , 'ex'port a="$b" , \export a="$b" , or even ""export a="$b" ), the quotes around $b are needed except in AT&T ksh , mksh and recent versions of dash . If export / local or some part of it is the result of some expansion (like in cmd=export; "$cmd" a="$b" or even export$(:) a="$b" ) or in things like dryrun=; $dryrun export a="$b" ), then the quotes are needed except in recent versions of dash . In the case of > /dev/null export a="$b" , the quotes are needed in pdksh and some of its derivatives. For command export a="$b" , the quotes are needed in every shell but mksh , ksh93 , recent dash , and bash -o posix (with the same caveats about command and export not being the result of some expansion in shells other than dash ). They are not needed in any shell when written: foo=$var export foo (that syntax being also compatible with the Bourne shell but in recent versions of zsh , only working when in sh / ksh emulation). (note that var=value local var shouldn't be used as the behaviour varies across shells). Also note that using export with an assignment also means that the exit status of cmd in export var="$(cmd)" is lost. Doing it as export var; var=$(cmd) doesn't have that problem. Also beware of this special case with bash : $ bash -c 'IFS=; export a="$*"; echo "$a"' bash a b
ab
$ bash -c 'IFS=; export a=$*; echo "$a"' bash a b
a b My advice would be to always quote. | {
"source": [
"https://unix.stackexchange.com/questions/97560",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4098/"
]
} |
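A tiny bash demonstration of the distinction the answer to 97,560 draws: plain assignments and bash's local/export do not word-split the right-hand side, while ordinary unquoted command arguments do.

```
bar='a   b'
foo=$bar                  # no splitting: foo holds 'a   b'
printf '<%s>\n' $bar      # unquoted argument: split into <a> and <b>
f() { local v=$bar; printf '<%s>\n' "$v"; }
f                         # prints <a   b> in bash (quotes not required here)
```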
97,657 | I first listed the groups using : groups I added groups using groupadd -g 300 oinstall
groupadd -g 500 dba and then when I do groups
root bin daemon sys adm disk wheel sfcb I am unable to find the groups I added. How do I list groups with their group IDs?
Also, if I try to add them again it says the group is already present. | The groups command lists groups that the user is currently a member of, not all the groups available on the system. You can look up a group by name or gid using the getent command. getent group oinstall
getent group 500 To show all the groups, just leave your search query off of the command: getent group | {
"source": [
"https://unix.stackexchange.com/questions/97657",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47291/"
]
} |
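Building on the getent suggestion, a one-line sketch that lists every group as "GID name", which directly answers the "list groups with group id" part of question 97,657.

```
getent group | awk -F: '{ print $3, $1 }' | sort -n
```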
97,676 | On Linux, given: a device, for example /dev/sda , and its major and minor numbers, for example 8, 0 , how can I know which module / driver is "driving" it? Can I dig into /sys or /proc to discover that? | To get this information from sysfs for a device file, first determine the major/minor number by looking at the output of ls -l , eg $ ls -l /dev/sda
brw-rw---- 1 root disk 8, 0 Apr 17 12:26 /dev/sda The 8, 0 tells us that major number is 8 and the minor is 0 . The b at the start of the listing also tells us that it is a block device. Other devices may have a c for character device at the start. If you then look under /sys/dev , you will see there are two directories. One called block and one called char . The no-brainer here is that these are for block and character devices respectively. Each device is then accessible by its major/minor number is this directory. If there is a driver available for the device, it can be found by reading the target of the driver link in this or the device sub-directory. Eg, for my /dev/sda I can simply do: $ readlink /sys/dev/block/8\:0/device/driver
../../../../../../../bus/scsi/drivers/sd This shows that the sd driver is used for the device. If you are unsure if the device is a block or character device, in the shell you could simply replace this part with a * . This works just as well: $ readlink /sys/dev/*/8\:0/device/driver
../../../../../../../bus/scsi/drivers/sd Block devices can also be accessed directly through their name via either /sys/block or /sys/class/block . Eg: $ readlink /sys/block/sda/device/driver
../../../../../../../bus/scsi/drivers/sd Note that the existence of various directories in /sys may change depending on the kernel configuration. Also not all devices have a device subfolder. For example, this is the case for partition device files like /dev/sda1 . Here you have to access the device for the whole disk (unfortunately there are no sys links for this). A final thing which can be useful to do is to list the drivers for all devices for which they are available. For this you can use globs to select all the directories in which the driver links are present. Eg: $ ls -l /sys/dev/*/*/device/driver && ls -l /sys/dev/*/*/driver
lrwxrwxrwx 1 root root 0 Apr 17 12:27 /sys/dev/block/11:0/device/driver -> ../../../../../../../bus/scsi/drivers/sr
lrwxrwxrwx 1 root root 0 Apr 17 12:26 /sys/dev/block/8:0/device/driver -> ../../../../../../../bus/scsi/drivers/sd
lrwxrwxrwx 1 root root 0 Apr 17 12:26 /sys/dev/block/8:16/device/driver -> ../../../../../../../bus/scsi/drivers/sd
lrwxrwxrwx 1 root root 0 Apr 17 12:26 /sys/dev/block/8:32/device/driver -> ../../../../../../../../../bus/scsi/drivers/sd
lrwxrwxrwx 1 root root 0 Apr 17 20:38 /sys/dev/char/189:0/driver -> ../../../../bus/usb/drivers/usb
lrwxrwxrwx 1 root root 0 Apr 17 20:38 /sys/dev/char/189:1024/driver -> ../../../../bus/usb/drivers/usb
lrwxrwxrwx 1 root root 0 Apr 17 20:38 /sys/dev/char/189:128/driver -> ../../../../bus/usb/drivers/usb
lrwxrwxrwx 1 root root 0 Apr 17 20:38 /sys/dev/char/189:256/driver -> ../../../../bus/usb/drivers/usb
lrwxrwxrwx 1 root root 0 Apr 17 20:38 /sys/dev/char/189:384/driver -> ../../../../bus/usb/drivers/usb
lrwxrwxrwx 1 root root 0 Apr 17 12:26 /sys/dev/char/189:512/driver -> ../../../../bus/usb/drivers/usb
lrwxrwxrwx 1 root root 0 Apr 17 12:26 /sys/dev/char/189:513/driver -> ../../../../../bus/usb/drivers/usb
lrwxrwxrwx 1 root root 0 Apr 17 12:26 /sys/dev/char/189:514/driver -> ../../../../../bus/usb/drivers/usb
lrwxrwxrwx 1 root root 0 Apr 17 12:26 /sys/dev/char/189:640/driver -> ../../../../bus/usb/drivers/usb
lrwxrwxrwx 1 root root 0 Apr 17 12:26 /sys/dev/char/189:643/driver -> ../../../../../bus/usb/drivers/usb
lrwxrwxrwx 1 root root 0 Apr 17 20:38 /sys/dev/char/189:768/driver -> ../../../../bus/usb/drivers/usb
lrwxrwxrwx 1 root root 0 Apr 17 20:38 /sys/dev/char/189:896/driver -> ../../../../bus/usb/drivers/usb
lrwxrwxrwx 1 root root 0 Apr 17 12:26 /sys/dev/char/21:0/device/driver -> ../../../../../../../bus/scsi/drivers/sd
lrwxrwxrwx 1 root root 0 Apr 17 12:26 /sys/dev/char/21:1/device/driver -> ../../../../../../../bus/scsi/drivers/sd
lrwxrwxrwx 1 root root 0 Apr 17 12:27 /sys/dev/char/21:2/device/driver -> ../../../../../../../bus/scsi/drivers/sr
lrwxrwxrwx 1 root root 0 Apr 17 12:26 /sys/dev/char/21:3/device/driver -> ../../../../../../../../../bus/scsi/drivers/sd
lrwxrwxrwx 1 root root 0 Apr 17 12:26 /sys/dev/char/250:0/device/driver -> ../../../../../../../bus/hid/drivers/hid-generic
lrwxrwxrwx 1 root root 0 Apr 17 12:26 /sys/dev/char/250:1/device/driver -> ../../../../../../../bus/hid/drivers/hid-generic
lrwxrwxrwx 1 root root 0 Apr 17 12:26 /sys/dev/char/250:2/device/driver -> ../../../../../../../bus/hid/drivers/hid-generic
lrwxrwxrwx 1 root root 0 Apr 17 12:26 /sys/dev/char/252:0/device/driver -> ../../../../../../../bus/scsi/drivers/sd
lrwxrwxrwx 1 root root 0 Apr 17 12:26 /sys/dev/char/252:1/device/driver -> ../../../../../../../bus/scsi/drivers/sd
lrwxrwxrwx 1 root root 0 Apr 17 12:27 /sys/dev/char/252:2/device/driver -> ../../../../../../../bus/scsi/drivers/sr
lrwxrwxrwx 1 root root 0 Apr 17 12:26 /sys/dev/char/252:3/device/driver -> ../../../../../../../../../bus/scsi/drivers/sd
lrwxrwxrwx 1 root root 0 Apr 17 19:53 /sys/dev/char/254:0/device/driver -> ../../../bus/pnp/drivers/rtc_cmos
lrwxrwxrwx 1 root root 0 Apr 17 19:53 /sys/dev/char/29:0/device/driver -> ../../../bus/platform/drivers/simple-framebuffer
lrwxrwxrwx 1 root root 0 Apr 17 19:53 /sys/dev/char/4:64/device/driver -> ../../../bus/pnp/drivers/serial
lrwxrwxrwx 1 root root 0 Apr 17 19:53 /sys/dev/char/4:65/device/driver -> ../../../bus/platform/drivers/serial8250
lrwxrwxrwx 1 root root 0 Apr 17 19:53 /sys/dev/char/4:66/device/driver -> ../../../bus/platform/drivers/serial8250
lrwxrwxrwx 1 root root 0 Apr 17 19:53 /sys/dev/char/4:67/device/driver -> ../../../bus/platform/drivers/serial8250
lrwxrwxrwx 1 root root 0 Apr 17 12:26 /sys/dev/char/6:0/device/driver -> ../../../bus/pnp/drivers/parport_pc
lrwxrwxrwx 1 root root 0 Apr 17 12:26 /sys/dev/char/99:0/device/driver -> ../../../bus/pnp/drivers/parport_pc Finally, to diverge from the question a bit, I will add another /sys glob trick to get a much broader perspective on which drivers are being used by which devices (though not necessarily those with a device file): find /sys/bus/*/drivers/* -maxdepth 1 -lname '*devices*' -ls Update Looking more closely at the output of udevadm , it appears to work by finding the canonical /sys directory (as you would get if you dereferenced the major/minor directories above), then working its way up the directory tree, printing out any information that it finds. This way you get information about parent devices and any drivers they use as well. To experiment with this I wrote the script below to walk up the directory tree and display information at each relevant level. udev seems to look for readable files at each level, with their names and contents being incorporated in ATTRS . Instead of doing this I display the contents of the uevent files at each level (seemingly the presence of this defines a distinct level rather than just a subdirectory). I also show the basename of any subsystem links I find and this showing how the device fits in this hierarchy. udevadm does not display the same information, so this is a nice complementary tool. The parent device information (eg PCI information) is also useful if you want to match the output of other tools like lshw to higher level devices. #!/bin/bash
dev=$(readlink -m $1)
# test for block/character device
if [ -b "$dev" ]; then
mode=block
elif [ -c "$dev" ]; then
mode=char
else
echo "$dev is not a device file" >&2
exit 1
fi
# stat outputs major/minor in hex, convert to decimal
data=( $(stat -c '%t %T' $dev) ) || exit 2
major=$(( 0x${data[0]} ))
minor=$(( 0x${data[1]} ))
echo -e "Given device: $1"
echo -e "Canonical device: $dev"
echo -e "Major: $major"
echo -e "Minor: $minor\n"
# sometimes nodes have been created for devices that are not present
dir=$(readlink -f /sys/dev/$mode/$major\:$minor)
if ! [ -e "$dir" ]; then
echo "No /sys entry for $dev" >&2
exit 3
fi
# walk up the /sys hierarchy one directory at a time
# stop when there are three levels left
while [[ $dir == /*/*/* ]]; do
# it seems the directory is only of interest if there is a 'uevent' file
if [ -e "$dir/uevent" ]; then
echo "$dir:"
echo " Uevent:"
sed 's/^/ /' "$dir/uevent"
# check for subsystem link
if [ -d "$dir/subsystem" ]; then
subsystem=$(readlink -f "$dir/subsystem")
echo -e "\n Subsystem:\n ${subsystem##*/}"
fi
echo
fi
# strip a subdirectory
dir=${dir%/*}
done | {
"source": [
"https://unix.stackexchange.com/questions/97676",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30196/"
]
} |
97,705 | I have an application which will produce a large amount of data which I do not wish to store onto the disk. The application mostly outputs data which I do not wish to use, but a set of useful information that must be split into separate files. For example, given the following output: JUNK
JUNK
JUNK
JUNK
A 1
JUNK
B 5
C 1
JUNK I could run the application three times like so: ./app | grep A > A.out
./app | grep B > B.out
./app | grep C > C.out This would get me what I want, but it would take too long. I also don't want to dump all the outputs to a single file and parse through that. Is there any way to combine the three operations shown above in such a way that I only need to run the application once and still get three separate output files? | If you have tee ./app | tee >(grep A > A.out) >(grep B > B.out) >(grep C > C.out) > /dev/null (from here ) ( about process substitution ) | {
"source": [
"https://unix.stackexchange.com/questions/97705",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11944/"
]
} |
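A single-pass alternative to the tee/process-substitution line in 97,705, useful in shells without process substitution: awk can route each matching line to its own file as the stream is read (patterns and file names follow the example in the question).

```
./app | awk '/A/ { print > "A.out" } /B/ { print > "B.out" } /C/ { print > "C.out" }'
```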
97,721 | I'm trying to play a game (Deus Ex) for which I have to modify the brightness, since it is very dark for my surroundings. The game has a "Brightness" setting, but lately it doesn't work. I tried to figure out how to change it and found out that xgamma has a similar effect with xgamma -gamma 5 . But whenever I change it, the settings revert back after almost a second (so yeah, my screen lights up and then dims again). How can I make the xgamma settings permanent (or persistent), or do I have to use another tool? My system is a desktop. Seemingly xrandr --output DVI-0 --brightness 2 does the same, but still reverts back to 0 whenever I apply the settings. Each time I try to change it, the following output fills the Xorg.0.log file: [ 14768.313] (II) RADEON(0): EDID vendor "HWP", prod id 9798
[ 14768.313] (II) RADEON(0): Using hsync ranges from config file
[ 14768.313] (II) RADEON(0): Using vrefresh ranges from config file
[ 14768.313] (II) RADEON(0): Printing DDC gathered Modelines:
[ 14768.313] (II) RADEON(0): Modeline "1024x768"x0.0 65.00 1024 1048 1184 1344 768 771 777 806 -hsync -vsync (48.4 kHz eP)
[ 14768.313] (II) RADEON(0): Modeline "800x600"x0.0 40.00 800 840 968 1056 600 601 605 628 +hsync +vsync (37.9 kHz e)
[ 14768.313] (II) RADEON(0): Modeline "640x480"x0.0 31.50 640 656 720 840 480 481 484 500 -hsync -vsync (37.5 kHz e)
[ 14768.313] (II) RADEON(0): Modeline "640x480"x0.0 31.50 640 664 704 832 480 489 492 520 -hsync -vsync (37.9 kHz e)
[ 14768.313] (II) RADEON(0): Modeline "640x480"x0.0 25.18 640 656 752 800 480 490 492 525 -hsync -vsync (31.5 kHz e)
[ 14768.313] (II) RADEON(0): Modeline "720x400"x0.0 28.32 720 738 846 900 400 412 414 449 -hsync +vsync (31.5 kHz e)
[ 14768.313] (II) RADEON(0): Modeline "1024x768"x0.0 78.75 1024 1040 1136 1312 768 769 772 800 +hsync +vsync (60.0 kHz e)
[ 14768.313] (II) RADEON(0): Modeline "1024x768"x0.0 75.00 1024 1048 1184 1328 768 771 777 806 -hsync -vsync (56.5 kHz e)
[ 14768.313] (II) RADEON(0): Modeline "832x624"x0.0 57.28 832 864 928 1152 624 625 628 667 -hsync -vsync (49.7 kHz e)
[ 14768.313] (II) RADEON(0): Modeline "800x600"x0.0 49.50 800 816 896 1056 600 601 604 625 +hsync +vsync (46.9 kHz e)
[ 14768.313] (II) RADEON(0): Modeline "800x600"x0.0 50.00 800 856 976 1040 600 637 643 666 +hsync +vsync (48.1 kHz e) So, apparently my monitor gets redetected each time. | Silly me! I have xflux with fluxgui activated, each time I would like to modify the settings xflux will be in my way. All commands worked, just that xflux would revert it back. Those who want to change their gamma/brightness: Use xrandr to list your outputs: $ xrandr
Screen 0: minimum 320 x 200, current 1024 x 768, maximum 8192 x 8192
DVI-0 connected 1024x768+0+0 (normal left inverted right x axis y axis) 304mm x 228mm As you can see my output is DVI-0 to change the brightness: xrandr --output DVI-0 --brightness 2 To change the gamma: xrandr --output DVI-0 --gamma 2:2:1 | {
"source": [
"https://unix.stackexchange.com/questions/97721",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41104/"
]
} |
97,736 | I am adding an env variable to /etc/environment but because the variable value contains # sign, string is stripped. MYSQL_PWD="something#no" Now if I do env above code yields MYSQL_PWD=something . How can I escape hash? I've already tried \ character. | This doesn't appear to be possible with /etc/environment . It's meant as a common location for variables that's shell independent. Given this it doesn't look like it supports strings with hash marks ( # ) in them and there doesn't appear to be a way to escape them. I found this SF Q&A titled: How does one properly escape a leading “#” character in linux etc/environment? . None of these methods worked: control="hello" test0="#hello" test1="h\#ello" test2="h#ello" test3="h//#ello" test4="h/#ello" test5=h#ello test6=h\#ello test7=h#ello test8=h//#ello test9=h/#ello test10='h#ello' test11='h\#ello' test12='h#ello' test13='h//#ello' test14='h/#ello' The accepted answer to that question and what would also be my advice: Well it is tricky stuff you want to do /etc/environment is not shell syntax, it looks like one , but it is not, which does frustrates people. The idea behind /etc/environment was wonderful. A shell-independent way to set up variables! Yay! But the practical limitations make it useless. You can't pass variables in there. Try for example put MAIL=$HOME/Maildir/ into it and see what happens. Best just to stop trying to use it for any purpose whatsoever, alas. So you can't do things with it that you would expect to be able to do if it were processed by a shell. Use /etc/profile or /etc/bashrc . Yet still another Q&A gave this rational as to why this is the case: There is no way in /etc/environment to escape the #(as it treated as a comment) as it is being parsed by he PAM module "pam_env" and it treats it as a simple list of KEY=VAL pairs and sets up the environment accordingly. It is not bash/shell, the parser has no language for doing variable expansion or characters escaping. References Environment variable in /etc/environment with pound (hash) sign in the value | {
"source": [
"https://unix.stackexchange.com/questions/97736",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49630/"
]
} |
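A sketch of the workaround the answer to 97,736 recommends: set the variable from a shell startup file instead, where normal quoting applies and # is not special inside quotes. The profile.d path is one common choice on Debian-like systems, not the only option.

```
# /etc/profile.d/mysql_pwd.sh  (or ~/.profile, /etc/profile, ...)
export MYSQL_PWD='something#no'
```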
97,752 | I have a process which listen to 2 ports : 45136/tcp and 37208/udp (actually I assume it is the same process). But netstat doesn't return any pid : netstat -antlp | grep 45136
tcp 0 0 0.0.0.0:45136 0.0.0.0:* LISTEN - Same result with "grep 37208". I tried lsof too : lsof -i TCP:45136 But it doesn't return anything.
It's a new installation of squeeze and I really don't know what this process could be. Any idea? ANSWER Thanks to your comments I found out what it was. I deinstalled nfs-server nfs-common (after a dpkg --get-selections | grep nfs search) and the unknown process disappeared.
Strange though that kernel processes aren't marked in any way. Thanks again to both of you. ;) | netstat There's a process there, your userid just isn't privy to seeing what it is. This is a layer of protection provided by lsof that's keeping you from seeing this. Simply re-run the command but prefix it using the sudo command instead. $ sudo netstat -antlp | grep 45136 There's even a warning about this in the output of lsof at the top. (Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.) Example $ netstat -antlp | grep 0:111
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN -
$ sudo netstat -antlp | grep 0:111
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1248/rpcbind ss If you're not having any luck with netstat perhaps ss will do. You'll still need to use sudo , and the output can be a little bit more cryptic. Example $ ss -apn|grep :111
LISTEN 0 128 :::111 :::*
LISTEN 0 128 *:111 *:*
$ sudo ss -apn|grep :111
LISTEN 0 128 :::111 :::* users:(("rpcbind",1248,11))
LISTEN 0 128 *:111 *:* users:(("rpcbind",1248,8)) Process ID still not there? There are instances where there simply isn't a PID associated to the TCP port in use. You can read about NFS, in @derobert's answer , which is one of them. There are others. I have instances where I'm using ssh tunnels to connect back to services such as IMAP. These are showing up without a process ID too. In any case you can use a more verbose form of netstat which might shed additional light on what process is ultimately using a TCP port. $ netstat --program --numeric-hosts --numeric-ports --extend Example $ netstat --program --numeric-hosts --numeric-ports --extend |grep -- '-' | head -10
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 192.168.1.103:936 192.168.1.3:60526 ESTABLISHED root 160024310 -
tcp 0 0 192.168.1.1:2049 192.168.1.3:841 ESTABLISHED sam 159941218 -
tcp 0 0 127.0.0.1:143 127.0.0.1:57443 ESTABLISHED dovecot 152567794 13093/imap-login
tcp 0 0 192.168.1.103:739 192.168.1.3:2049 ESTABLISHED root 160023970 -
tcp 0 0 192.168.1.103:34013 192.168.1.3:111 TIME_WAIT root 0 -
tcp 0 0 127.0.0.1:46110 127.0.0.1:783 TIME_WAIT root 0 -
tcp 0 0 192.168.1.102:54891 107.14.166.17:110 TIME_WAIT root 0 -
tcp 0 0 127.0.0.1:25 127.0.0.1:36565 TIME_WAIT root 0 -
tcp 0 0 192.168.1.1:2049 192.168.1.6:798 ESTABLISHED tammy 152555007 - If you notice the output includes INODES so we could back track into the process using this info. $ find -inum 152555007 Which will show you a file which might lead you to a process. References Port to PID | {
"source": [
"https://unix.stackexchange.com/questions/97752",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50019/"
]
} |
97,843 | In zsh, I know that I can search history with Ctrl + r . However, oftentimes I start to type a command directly at the prompt, but then realize I should be searching history. When I hit Ctrl + r , it brings up a blank history search prompt like this: Notice how there is text at my prompt but not at the history search prompt. How do I start the history search with the text already in the prompt, so it looks like this: | You can use zle's history-search functionality: bindkey "^[[A" history-beginning-search-backward
bindkey "^[[B" history-beginning-search-forward This binds Up and Down (adjust for your own escape sequences) to a history search, backwards and forwards, based upon what has already been entered at the prompt. So, if you were to enter "vim" and hit Up , zsh will traverse backwards through your history for only those commands commencing with "vim". You can additionally have the cursor placed at the end of the line once you have selected your desired command from zsh's history by using the history-search-end function (typically located in /usr/share/zsh/functions/Zle/ ) and appending -end to the end of each line, like so: autoload -U history-search-end
zle -N history-beginning-search-backward-end history-search-end
zle -N history-beginning-search-forward-end history-search-end
bindkey "^[[A" history-beginning-search-backward-end
bindkey "^[[B" history-beginning-search-forward-end | {
"source": [
"https://unix.stackexchange.com/questions/97843",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32691/"
]
} |
97,882 | I was studying code in which the at command is used. I looked around and found that it is used to execute batch jobs. It is used to schedule jobs. It is given, as its input, a command, and a time, relative or absolute. So, my first question is: why is the at command used? Under which circumstances does one need to use at ? I have encountered it when there was some bash script code trying to uninstall software and when some background services were to be restarted. My second question: What is the difference between having any command executed as a batch job and having a command executed in calling command directly (or in subshell)? | Bernhard's reply is correct: in multi-user systems, the ability to execute heavy programs at some ungodly hours of the night is especially convenient, for both the person submitting the job, and his coworkers. It is part of "playing nice". I did most of my Ph.D. computations this way, combining the script with the nice command which demoted the priority of my work whenever other people were keeping the machine busy, while leaving intact its ability to hog all the system resources at night. I used the very same command to check whether my program was running, and to restart it if necessary. Also, you should keep in mind that at was written way before screen, tmux , and so on, so that it was a simple way to have a detached shell, i.e., one that would not die once you logged off the system. Lastly, you should also notice that it is different from cron, which also has been around for a long time. The difference lies in the fact that at is occasional, while cron, being so repetitive, is more suited for system jobs which really need to be executed forever at fixed intervals: in fact, at gives you your own environment, with your own settings (and choices) of environment variable, while cron uses a minimal set of environment variables (just check the difference in PATH , as an example). | {
"source": [
"https://unix.stackexchange.com/questions/97882",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50083/"
]
} |
97,920 | I want to automatically cd to the directory created by the clone command after I git clone d something. Important: I don't want to alter the syntax for the command (e.g. use an alias/function) because it would break the zsh-completions I get automatically from the Pretzo project. EDIT : The reason I didn't pick any answer as correct, is because no answer was given that did comply with the condition above. I use ZSH, but an answer in any other shell is acceptable as well. | Create a function: gclonecd() {
git clone "$1" && cd "$(basename "$1" .git)"
} (Works for links both with and without ".git") | {
"source": [
"https://unix.stackexchange.com/questions/97920",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50112/"
]
} |
98,007 | For example, I want to give my colleagues write access to certain directory. Let's assume that subdirectories in it had access rights 775, files 664, and also there were some executable files in the dir - 775. Now I want to add write permissions. With chmod, I could try something like chmod o+w -R mydir/ But that's not cool, since I don't want to make the dir world-writable - I want give access only to certain users, so I want to use ACL. But is there an easy way to set those permissions? As I see it, I need to tackle at least three cases (dirs, files, executable files) separately: find -type d -exec setfacl -m u:colleague:rwx {} \;
find -type f -executable -exec setfacl -m u:colleague:rwx {} \;
find -type f \! -executable -exec setfacl -m u:colleague:rw {} \; It seems quite a lot of code lines for such a simple task. Is there a better way? | setfacl has a recursive option ( -R ) just like chmod : -R, --recursive
Apply operations to all files and directories recursively. This
option cannot be mixed with `--restore'. It also allows for the use of the capital-X X permission, which means: execute only if the file is a directory or already has
execute permission for some user (X) so doing the following should work: setfacl -R -m u:colleague:rwX . (all quotes are from man setfacl for acl-2.2.52 as shipped with Debian) | {
"source": [
"https://unix.stackexchange.com/questions/98007",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15487/"
]
} |
98,164 | I'm looking for sort of an 'app-store' or Google Play store type functionality for apt-get packages. What I'd really like to do is select a category, like 'Music' or 'Internet' and see the list of available packages in that category with their summaries. It'd be even better if the packages had ratings or reviews. Does anything like this exist? | Such a thing already exists for Ubuntu: https://apps.ubuntu.com/ You can browse by category and search for packages using the web interface. Each application also displays its rating and any reviews it has received - just like the Software Center in Ubuntu. | {
"source": [
"https://unix.stackexchange.com/questions/98164",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32762/"
]
} |
98,253 | How do I install htop for macOS (OS X)? (The easiest and laziest path) | Here is the laziest way (or homebrew way) First install Homebrew if you haven't Second brew install htop Third, done | {
"source": [
"https://unix.stackexchange.com/questions/98253",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50300/"
]
} |
98,339 | On my 240 GB SSD I had at first two partitions, one containing the Logical Volume with Linux Mint and the other had contained a NTFS partition to share with Windows. Now I removed the NTFS partition and want to extend my logical volume group to use the released disk space. How do I extend the volume group , my logical volume containing /home and the filesystem (ext4) on /home? Is this possible to do online? PS: Yes, I know that I have to backup my data :) /dev/sdb/ (240GB)
linuxvg (160GB) should use 100% of the disk space
swap
root
home (ext4, 128GB) should be extended to use the remaining space output of sudo vgdisplay : --- Volume group ---
VG Name linuxvg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 1
Act PV 1
VG Size 160,00 GiB
PE Size 4,00 MiB
Total PE 40959
Alloc PE / Size 40959 / 160,00 GiB
Free PE / Size 0 / 0
VG UUID ...
--- Logical volume ---
LV Path /dev/linuxvg/swap
LV Name swap
VG Name linuxvg
LV UUID ...
LV Write Access read/write
LV Creation host, time mint, 2013-08-06 22:48:32 +0200
LV Status available
# open 2
LV Size 8,00 GiB
Current LE 2048
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:0
--- Logical volume ---
LV Path /dev/linuxvg/root
LV Name root
VG Name linuxvg
LV UUID ...
LV Write Access read/write
LV Creation host, time mint, 2013-08-06 22:48:43 +0200
LV Status available
# open 1
LV Size 24,00 GiB
Current LE 6144
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:1
--- Logical volume ---
LV Path /dev/linuxvg/home
LV Name home
VG Name linuxvg
LV UUID ...
LV Write Access read/write
LV Creation host, time mint, 2013-08-06 22:48:57 +0200
LV Status available
# open 1
LV Size 128,00 GiB
Current LE 32767
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:2
--- Physical volumes ---
PV Name /dev/sdb1
PV UUID ...
PV Status allocatable
Total PE / Free PE 40959 / 0 output of sudo fdisk -l : Disk /dev/sdb: 240.1 GB, 240057409536 bytes
255 heads, 63 sectors/track, 29185 cylinders, total 468862128 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdb1 1 468862127 234431063+ ee GPT
Disk /dev/mapper/linuxvg-swap: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/linuxvg-root: 25.8 GB, 25769803776 bytes
255 heads, 63 sectors/track, 3133 cylinders, total 50331648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/linuxvg-home: 137.4 GB, 137434759168 bytes
255 heads, 63 sectors/track, 16708 cylinders, total 268427264 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000 | You can do this fairly simply. Kinda surprised there wasn't an answer for this here already. You can do this entire process while running on the filesystem you want to resize (yes, it's safe and fully supported). There is no need for rescue CDs or alternate operating systems. Resize the partition (again, you can do this with the system running). GParted is easy to use and supports resizing. You can also use a lower level tool such as fdisk . But you'll have to delete the partition and recreate it. Just make sure when doing so that the new partition starts at the exact same location. Reboot. Since the partition table was modified on the running system, it won't take effect until a reboot. Run pvresize /dev/sdXY to have LVM pick up the new space. Resize the logical volume with lvextend . If you want to use the whole thing, lvextend -r -l +100%FREE /dev/VGNAME/LVNAME . The -r will resize the filesystem as well. Though I always recommend against using the entire volume group. You never know what you'll need in the future. You can always expand later, you can't shrink. | {
"source": [
"https://unix.stackexchange.com/questions/98339",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40541/"
]
} |
98,531 | What is the difference between sudo -i and sudo su ? | Based on the descriptions from the man pages for su and sudo I would assume the following things. Since sudo -iu <user> means a login shell this would be equivalent to an su - <user> or su -l <user> . An su without any arguments changes your effective user ID but you're still using your original <user> environment and a who am i will report you're still <user> . excerpt sudo man page -i [command]
The -i (simulate initial login) option runs the shell specified in
the passwd(5) entry of the target user as a login shell. This means
that login-specific resource files such as .profile or .login will
be read by the shell. If a command is specified, it is passed to
the shell for execution. Otherwise, an interactive shell is
executed. sudo attempts to change to that user's home directory
before running the shell. It also initializes the environment,
leaving DISPLAY and TERM unchanged, setting HOME, MAIL, SHELL,
USER, LOGNAME, and PATH, as well as the contents of
/etc/environment on Linux and AIX systems. All other environment
variables are removed. Example I have a user account, saml with a UID of 500. $ egrep "Uid|Gid" /proc/$$/task/$$/status
Uid: 500 500 500 500
Gid: 501 501 501 501 In the above output, the 1st column is my real UID (uid) and the 2nd is my effective UID (euid). Becoming root via (su) $ su Now I'm root, but I still maintain my environment and my real UID is still 500 . Notice that my euid is now 0 (root). $ egrep "Uid|Gid" /proc/$(pgrep su -n)/task/$(pgrep su -n)/status
Uid: 500 0 0 0
Gid: 501 501 501 501 However my environment is still saml 's. Here's one of the environment variables, $LOGNAME . $ env | grep LOGNAME
LOGNAME=saml Becoming root via (su -) or (sudo -i) $ su - With an su - or sudo -i not only do I change my effective UID to a new user, but I also source their files as if it was a login, and my environment now becomes identical as if I were them directly logging in. $ egrep "Uid|Gid" /proc/$(pgrep su -n)/task/$(pgrep su -n)/status
Uid: 500 0 0 0
Gid: 501 501 501 501 However my environment is now root 's. Same variable, $LOGNAME , now it's set with root . $ env | grep LOGNAME
LOGNAME=root So then what's the difference? Well let's try the above with sudo -i and find out. $ sudo -i Now let's look at the same info: $ egrep "Uid|Gid" /proc/$(pgrep su -n)/task/$(pgrep su -n)/status
Uid: 0 0 0 0
Gid: 501 501 501 501 Well one major thing is my effective ID and real ID are both 0 ( root ) with this approach. The environment variable $LOGNAME is as if we logged in as root . $ env | grep LOGNAME
LOGNAME=root Comparing environments If we count the number of lines in say the 3 methods, perhaps there is some additional info to be had. $ env > /tmp/<method used to become root> We are left with these 3 files: -rw-r--r-- 1 root root 1999 Nov 2 06:43 sudo_root.txt -rw-r--r-- 1 root root 1970 Nov 2 06:44 sudash_root.txt -rw-r--r-- 1 root root 4859 Nov 2 06:44 su_root.txt Already we can see that something is up with just a plain su . The env. is over 2x the size of the others. Number of lines in each: $ wc -l su*
28 sudash_root.txt
32 sudo_root.txt
92 su_root.txt There's really no need to look further at the su_root.txt file. This file contains a much of user's environment that ran the su command. So let's look at the other 2 files. They're virtually identical except for a few cosmetic variables, such as $LANG being slightly different. The one smoking gun in the list is the $PATH . sudo PATH=/usr/lib64/ccache:/usr/local/sbin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/brlcad/bin:/root/bin su - PATH=/usr/lib64/qt-3.3/bin:/usr/lib64/ccache:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/brlcad/bin:/root/bin As you can see sudo -i gives us some additional protection by stripping out suspicious paths, but it also keeps our $DISPLAY and $TERM intact in case we were displaying a GUI to different location. Take aways? So the big take away is that the method used to become root sudo -i is advantages over the others because you use your own password to do so, protecting root's password from needing to be given out. There is logging when you became root , vs. mysteriously some one becoming root via su or su - . sudo -i gives you a better user experience over either su 's because it protects your $DISPLAY and $TERM . sudo -i provides some protection to the system when user's become root , by limiting the environment which they are given. What about sudo su , you didn't even discuss it? I intentionally avoided bringing that into the discussion even though the OP asked about it because doing so would only have confused the issue, IMO. When you run sudo su the sudo command masks the effects of the su and so much of the environment that you'd get from a regular su is lost. Sudo is doing its job and providing a limited and protected environment regardless of whether it's sudo su or sudo -i . Example Here's the result of the sudo su environment being dumped: ls -l /tmp/sudosu_root.txt
-rw-r--r-- 1 root root 1933 Nov 2 14:48 /tmp/sudosu_root.txt And the number of lines: $ wc -l /tmp/sudosu_root.txt
31 /tmp/sudosu_root.txt These are the only variables that differ between a sudo su - and a sudo -i : $ sdiff /tmp/sudosu_root.txt /tmp/sudo_root.txt | grep ' |'
USERNAME=saml | USERNAME=root
PATH=/usr/lib64/ccache:/sbin:/bin:/usr/sbin:/usr/bin:/usr/brl | PATH=/usr/lib64/ccache:/usr/local/sbin:/sbin:/bin:/usr/sbin:/
MAIL=/var/spool/mail/saml | MAIL=/var/spool/mail/root
PWD=/home/saml/tst | PWD=/root
SUDO_COMMAND=/bin/su | SUDO_COMMAND=/bin/bash
XAUTHORITY=/root/.xauthYFtlL3 | XAUTHORITY=/var/run/gdm/auth-for-saml-iZePuv/datab So as you can see there really isn't much of a difference between them. Slightly different $PATH , the $SUDO_COMMAND , and the $MAIL and $USERNAME are the only differences. References Real and Effective IDs | {
"source": [
"https://unix.stackexchange.com/questions/98531",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8484/"
]
} |
98,892 | I've noticed that basically no system I've ever worked with has /bin/sh as a real executable. It's always a symlink to dash , bash in POSIX mode, or something similar. Why? What are the disadvantages of using the true, original /bin/sh ? (Speed? Licensing?) | I'd guess lack of features - no command history, no fancy redirection, no command line editing. BSD introduced csh the C shell for those reasons. Another factor is that the Genuine Bourne Shell was only recently available in open source form . Unless you licensed it, you couldn't distribute it. That put it out of reach for free-of-cost distros, and made it ideologically unpalatable for other distros, and *BSDs. But the code is available now. You can take a look, compile it, give it a spin. | {
"source": [
"https://unix.stackexchange.com/questions/98892",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29146/"
]
} |
98,948 | Which is a good tool to convert ASCII to binary, and binary to ASCII? I was hoping for something like: $ echo --binary "This is a binary message"
01010100 01101000 01101001 01110011 00100000 01101001 01110011 00100000 01100001 00100000 01100010 01101001 01101110 01100001 01110010 01111001 00100000 01101101 01100101 01110011 01110011 01100001 01100111 01100101 Or, more realistic: $ echo "This is a binary message" | ascii2bin
01010100 01101000 01101001 01110011 00100000 01101001 01110011 00100000 01100001 00100000 01100010 01101001 01101110 01100001 01110010 01111001 00100000 01101101 01100101 01110011 01110011 01100001 01100111 01100101 And also the reverse: $ echo "01010100 01101000 01101001 01110011 00100000 01101001 01110011 00100000 01100001 00100000 01100010 01101001 01101110 01100001 01110010 01111001 00100000 01101101 01100101 01110011 01110011 01100001 01100111 01100101" | bin2ascii
This is a binary message PS: I'm using bash PS2: I hope I didn't get the wrong binary | $ echo AB | perl -lpe '$_=unpack"B*"'
0100000101000010
$ echo 0100000101000010 | perl -lpe '$_=pack"B*",$_'
AB -e expression evaluate the given expression as perl code -p : sed mode. The expression is evaluated for each line of input, with the content of the line stored in the $_ variable and printed after the evaluation of the expression . -l : even more like sed : instead of the full line, only the content of the line (that is, without the line delimiter) is in $_ (and a newline is added back on output). So perl -lpe code works like sed code except that it's perl code as opposed to sed code. unpack "B*" works on the $_ variable by default and extracts its content as a bit string walking from the highest bit of the first byte to the lowest bit of the last byte. pack does the reverse of unpack . See perldoc -f pack for details. With spaces: $ echo AB | perl -lpe '$_=join " ", unpack"(B8)*"'
01000001 01000010
$ echo 01000001 01000010 | perl -lape '$_=pack"(B8)*",@F'
AB (it assumes the input is in blocks of 8 bits (0-padded)). With unpack "(B8)*" , we extract 8 bits at a time, and we join the resulting strings with spaces with join " " . | {
"source": [
"https://unix.stackexchange.com/questions/98948",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30320/"
]
} |
98,983 | aaaaaaaa 09
bbbbbbbb 90
ccccccccccccccc 89
ddddd 09 Using sed/awk/replace, in the above text I want to remove anything that comes after the first space in each line. For example the output will be: aaaaaaaa
bbbbbbbb
ccccccccccccccc
ddddd any help will be appreciated. | Sed sed 's/\s.*$//' Grep grep -o '^\S*' Awk awk '{print $1}' As pointed out in the comments, -o isn't POSIX; however both GNU and BSD have it, so it should work for most people. Also, \s / \S may not be on all systems, if yours doesn't recognize it you can use a literal space, or if you want space and tab, those in a bracket expression ( [...] ), or the [[:blank:]] character class (note that strictly speaking \s is equivalent to [[:space:]] and includes vertical spacing characters as well like CR, LF or VT which you probably don't care about). The awk one assumes the lines don't start with a blank character. | {
"source": [
"https://unix.stackexchange.com/questions/98983",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50681/"
]
} |
98,993 | Bash scripts start with the following line #!/bin/bash
# Rest of script below
... In bash the # character is the start of a comment, but #!/bin/bash is definitely not a comment, therefore it isn't bash but the kernel that interprets that statement. So what exactly is that first line? Is it a specific language, or a special one-off case in the Linux kernel? Are there other commands or statements in this "language" that can be used when scripting? | Sed sed 's/\s.*$//' Grep grep -o '^\S*' Awk awk '{print $1}' As pointed out in the comments, -o isn't POSIX; however both GNU and BSD have it, so it should work for most people. Also, \s / \S may not be on all systems, if yours doesn't recognize it you can use a literal space, or if you want space and tab, those in a bracket expression ( [...] ), or the [[:blank:]] character class (note that strictly speaking \s is equivalent to [[:space:]] and includes vertical spacing characters as well like CR, LF or VT which you probably don't care about). The awk one assumes the lines don't start with a blank character. | {
"source": [
"https://unix.stackexchange.com/questions/98993",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4143/"
]
} |
99,074 | A specific file on our production servers is being modified at apparently random times which do not appear to correlate with any log activity. We can't figure out what program is doing it, and there are many suspects. How can I find the culprit? It is always the same file, at the same path, but on different servers and at different times. The boxes are managed by puppet , but the puppet logs show no activity at the time the file is modified. What kernel hook, tool, or technique could help us find what process is modifying this file? lsof is unsuitible for this, because the file is being opened, modified and closed very quickly. Any solution that relies upon polling (such as running lsof often) is no good. OS: Debian testing Kernels: Linux, 2.6.32 through 3.9, both 32 and 64-bit. | You can use auditd and add a rule for that file to be watched: auditctl -w /path/to/that/file -p wa Then watch for entries to be written to /var/log/audit/audit.log . | {
"source": [
"https://unix.stackexchange.com/questions/99074",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8324/"
]
} |
99,112 | When a process is killed with a handle-able signal like SIGINT or SIGTERM but it does not handle the signal, what will be the exit code of the process? What about for unhandle-able signals like SIGKILL ? From what I can tell, killing a process with SIGINT likely results in exit code 130 , but would that vary by kernel or shell implementation? $ cat myScript
#!/bin/bash
sleep 5
$ ./myScript
<ctrl-c here>
$ echo $?
130 I'm not sure how I would test the other signals... $ ./myScript &
$ killall myScript
$ echo $?
0 # duh, that's the exit code of killall
$ killall -9 myScript
$ echo $?
0 # same problem | Processes can call the _exit() system call (on Linux, see also exit_group() ) with an integer argument to report an exit code to their parent. Though it's an integer, only the 8 least significant bits are available to the parent (exception to that is when using waitid() or handler on SIGCHLD in the parent to retrieve that code , though not on Linux). The parent will typically do a wait() or waitpid() to get the status of their child as an integer (though waitid() with somewhat different semantics can be used as well). On Linux and most Unices, if the process terminated normally, bits 8 to 15 of that status number will contain the exit code as passed to exit() . If not, then the 7 least significant bits (0 to 6) will contain the signal number and bit 7 will be set if a core was dumped. perl 's $? for instance contains that number as set by waitpid() : $ perl -e 'system q(kill $$); printf "%04x\n", $?'
000f # killed by signal 15
$ perl -e 'system q(kill -ILL $$); printf "%04x\n", $?'
0084 # killed by signal 4 and core dumped
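# (added note) 0x84 = 132 = 128 (bit 7, core was dumped) + 4 (SIGILL)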
$ perl -e 'system q(exit $((0xabc))); printf "%04x\n", $?'
bc00 # terminated normally, 0xbc the lowest 8 bits of the status Bourne-like shells also make the exit status of the last run command in their own $? variable. However, it does not contain directly the number returned by waitpid() , but a transformation on it, and it's different between shells. What's common between all shells is that $? contains the lowest 8 bits of the exit code (the number passed to exit() ) if the process terminated normally. Where it differs is when the process is terminated by a signal. In all cases, and that's required by POSIX, the number will be greater than 128. POSIX doesn't specify what the value may be. In practice though, in all Bourne-like shells that I know, the lowest 7 bits of $? will contain the signal number. But, where n is the signal number, in ash, zsh, pdksh, bash, the Bourne shell, $? is 128 + n . What that means is that in those shells, if you get a $? of 129 , you don't know whether it's because the process exited with exit(129) or whether it was killed by the signal 1 ( HUP on most systems). But the rationale is that shells, when they do exit themselves, by default return the exit status of the last exited command. By making sure $? is never greater than 255, that allows to have a consistent exit status: $ bash -c 'sh -c "kill \$\$"; printf "%x\n" "$?"'
bash: line 1: 16720 Terminated sh -c "kill \$\$"
8f # 128 + 15
$ bash -c 'sh -c "kill \$\$"; exit'; printf '%x\n' "$?"
bash: line 1: 16726 Terminated sh -c "kill \$\$"
8f # here that 0x8f is from a exit(143) done by bash. Though it's
# not from a killed process, that does tell us that probably
# something was killed by a SIGTERM ksh93 , $? is 256 + n . That means that from a value of $? you can differentiate between a killed and non-killed process. Newer versions of ksh , upon exit, if $? was greater than 255, kills itself with the same signal in order to be able to report the same exit status to its parent. While that sounds like a good idea, that means that ksh will generate an extra core dump (potentially overwriting the other one) if the process was killed by a core generating signal: $ ksh -c 'sh -c "kill \$\$"; printf "%x\n" "$?"'
ksh: 16828: Terminated
10f # 256 + 15
$ ksh -c 'sh -c "kill -ILL \$\$"; exit'; printf '%x\n' "$?"
ksh: 16816: Illegal instruction(coredump)
Illegal instruction(coredump)
104 # 256 + 4, ksh did indeed kill itself so as to report the same
# exit status as sh. Older versions of `ksh93` would have returned
# 4 instead. Where you could even say there's a bug is that ksh93 kills itself even if $? comes from a return 257 done by a function: $ ksh -c 'f() { return "$1"; }; f 257; exit'
zsh: hangup ksh -c 'f() { return "$1"; }; f 257; exit'
# ksh kills itself with a SIGHUP so as to report a 257 exit status
# to its parent yash . yash offers a compromise. It returns 256 + 128 + n . That means we can also differentiate between a killed process and one that terminated properly. And upon exiting, it will report 128 + n without having to suicide itself and the side effects it can have. $ yash -c 'sh -c "kill \$\$"; printf "%x\n" "$?"'
18f # 256 + 128 + 15
$ yash -c 'sh -c "kill \$\$"; exit'; printf '%x\n' "$?"
8f # that's from a exit(143), yash was not killed To get the signal from the value of $? , the portable way is to use kill -l : $ /bin/kill 0
Terminated
$ kill -l "$?"
TERM (for portability, you should never use signal numbers, only signal names) On the non-Bourne fronts: csh / tcsh and fish same as the Bourne shell except that the status is in $status instead of $? (note that zsh also sets $status for compatibility with csh (in addition to $? )). rc : the exit status is in $status as well, but when killed by a signal, that variable contains the name of the signal (like sigterm or sigill+core if a core was generated) instead of a number, which is yet another proof of the good design of that shell. es . the exit status is not a variable. If you care for it, you run the command as: status = <={cmd} which will return a number or sigterm or sigsegv+core like in rc . Maybe for completeness, we should mention zsh 's $pipestatus and bash 's $PIPESTATUS arrays that contain the exit status of the components of the last pipeline. And also for completeness, when it comes to shell functions and sourced files, by default functions return with the exit status of the last command run, but can also set a return status explicitly with the return builtin. And we see some differences here: bash and mksh (since R41, a regression^Wchange apparently introduced intentionally ) will truncate the number (positive or negative) to 8 bits. So for instance return 1234 will set $? to 210 , return -- -1 will set $? to 255. zsh and pdksh (and derivatives other than mksh ) allow any signed 32 bit decimal integer (-2 31 to 2 31 -1) (and truncate the number to 32bits). ash and yash allow any positive integer from 0 to 2 31 -1 and return an error for any number out of that. ksh93 for return 0 to return 320 set $? as is, but for anything else, truncate to 8 bits. Beware as already mentioned that returning a number between 256 and 320 could cause ksh to kill itself upon exit. rc and es allow returning anything even lists. Also note that some shells also use special values of $? / $status to report some error conditions that are not the exit status of a process, like 127 or 126 for command not found or not executable (or syntax error in a sourced file)... | {
"source": [
"https://unix.stackexchange.com/questions/99112",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4143/"
]
} |
99,154 | We are installing SAP HANA in a RAID machine. As part of the installation step, it is mentioned that, To disable the usage of transparent hugepages set the kernel settings
at runtime with echo never > /sys/kernel/mm/transparent_hugepage/enabled So instead of runtime, if I wanted to make this a permanent change, should I add the above line inside /proc/vmstat file? | To make options such as this permanent you'll typically add them to the file /etc/sysctl.conf . You can see a full list of the options available using this command: $ sysctl -a Example $ sudo sysctl -a | head -5
kernel.sched_child_runs_first = 0
kernel.sched_min_granularity_ns = 6000000
kernel.sched_latency_ns = 18000000
kernel.sched_wakeup_granularity_ns = 3000000
kernel.sched_shares_ratelimit = 750000 You can look for hugepage in the output like so: $ sudo sysctl -a | grep hugepage
vm.nr_hugepages = 0
vm.nr_hugepages_mempolicy = 0
vm.hugepages_treat_as_movable = 0
vm.nr_overcommit_hugepages = 0 It's not there? However looking through the output I did not see transparent_hugepage . Googling a bit more I did come across this Oracle page which discusses this very topic. The page is titled: Configuring HugePages for Oracle on Linux (x86-64) . Specifically on that page they mention how to disable the hugepage feature . excerpt The preferred method to disable Transparent HugePages is to add "transparent_hugepage=never" to the kernel boot line in the "/etc/grub.conf" file. title Oracle Linux Server (2.6.39-400.24.1.el6uek.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.39-400.24.1.el6uek.x86_64 ro root=/dev/mapper/vg_ol6112-lv_root rd_NO_LUKS KEYBOARDTYPE=pc KEYTABLE=uk
LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 rd_NO_DM rd_LVM_LV=vg_ol6112/lv_swap rd_LVM_LV=vg_ol6112/lv_root rhgb quiet numa=off
transparent_hugepage=never
initrd /initramfs-2.6.39-400.24.1.el6uek.x86_64.img The server must be rebooted for this to take effect. Alternatively you can add the command to your /etc/rc.local file. if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
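# (added comment) the same guard is repeated below for the defrag tunable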
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi I think I would go with the 2nd option, since the first will be at risk of getting unset when you upgrade from one kernel to the next. You can confirm that it worked with the following command after rebooting: $ cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never] | {
"source": [
"https://unix.stackexchange.com/questions/99154",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47538/"
]
} |
99,185 | As far as I know, square brackets are used to enclose an expression usually in if else statements. But I found square brackets being used without the "if" as follows: [ -r /etc/profile.d/java.sh ] && . /etc/profile.d/java.sh in the following script. #!/bin/bash### BEGIN INIT INFO
# Provides: jbossas7
# Required-Start: $local_fs $remote_fs $network $syslog
# Required-Stop: $local_fs $remote_fs $network $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Start/Stop JBoss AS 7
### END INIT INFO
# chkconfig: 35 92 1
## Include some script files in order to set and export environmental variables
## as well as add the appropriate executables to $PATH.
[ -r /etc/profile.d/java.sh ] && . /etc/profile.d/java.sh
[ -r /etc/profile.d/jboss.sh ] && . /etc/profile.d/jboss.sh
JBOSS_HOME=/sw/AS7
AS7_OPTS="$AS7_OPTS -Dorg.apache.tomcat.util.http.ServerCookie.ALLOW_HTTP_SEPARATORS_IN_V0=true" ## See AS7-1625
AS7_OPTS="$AS7_OPTS -Djboss.bind.address.management=0.0.0.0"
AS7_OPTS="$AS7_OPTS -Djboss.bind.address=0.0.0.0"
case "$1" in
start)
echo "Starting JBoss AS 7..."
#sudo -u jboss sh ${JBOSS_HOME}/bin/standalone.sh $AS7_OPTS ## If running as user "jboss"
#start-stop-daemon --start --quiet --background --chuid jboss --exec ${JBOSS_HOME}/bin/standalone.sh $AS7_OPTS ## Ubuntu
${JBOSS_HOME}/bin/standalone.sh $AS7_OPTS &
;;
stop)
echo "Stopping JBoss AS 7..."
#sudo -u jboss sh ${JBOSS_HOME}/bin/jboss-admin.sh --connect command=:shutdown ## If running as user "jboss"
#start-stop-daemon --start --quiet --background --chuid jboss --exec ${JBOSS_HOME}/bin/jboss-admin.sh -- --connect command=:shutdown ## Ubuntu
${JBOSS_HOME}/bin/jboss-cli.sh --connect command=:shutdown
;;
*)
echo "Usage: /etc/init.d/jbossas7 {start|stop}"; exit 1;
;;
esac
exit 0 What do square brackets do without the "if"? I mean, exactly, what do they mean when used in that context? This isn't a duplicate of that in which the OP used "if" which I don't have a problem with. In this question, brackets were used in a counter intuitive way. That question and this question may have the same answer but they are two different questions. | Square brackets are a shorthand notation for performing a conditional test. The brackets [ , as well as [[ are actual commands within Unix, believe it or not. Think: $ [ -f /etc/rc.local ] && echo "real file"
real file
-and-
$ test -f /etc/rc.local && echo "real file"
real file In Bash the [ is a builtin command as well as an executable. [[ is just a keyword to Bash. Example You can confirm this using type : $ type -a [
[ is a shell builtin
[ is /usr/bin/[
$ type -a [[
[[ is a shell keyword You can see the physical executable here: $ ls -l /usr/bin/[
-rwxr-xr-x 1 root root 37000 Nov 3 2010 /usr/bin/[ builtins vs. keywords If you take a look at the Bash man page, man bash , you'll find the following definitions for the 2: keywords - Reserved words are words that have a special meaning to the shell. The following words are recognized as reserved when unquoted and either the first word of a simple command (see SHELL GRAMMAR below) or the third word of a case or for command: ! case do done elif else esac fi for function if in select then until while { } time [[ ]] builtins - If the command name contains no slashes, the shell attempts to locate it. If there exists a shell function by that name, that function is invoked as described above in FUNCTIONS. If the name does not match a function, the shell searches for it in the list of shell builtins. If a match is found, that builtin is invoked. If the name is neither a shell function nor a builtin, and contains no slashes, bash searches each element of the PATH for a directory containing an executable file by that name. Bash uses a hash table to remember the full pathnames of executable files (see hash under SHELL BUILTIN COMMANDS below). A full search of the directories in PATH is performed only if the command is not found in the hash table. If the search is unsuccessful, the shell searches for a defined shell function named command_not_found_handle. If that function exists, it is invoked with the original command and the original command's arguments as its arguments, and the function's exit status becomes the exit status of the shell. If that function is not defined, the shell prints an error message and returns an exit status of 127. man page If you look through the Bash man page you'll find the details on it. test expr
[ expr ]
Return a status of 0 or 1 depending on the evaluation of the
conditional expression expr. Each operator and operand must be
a separate argument. Expressions are composed of the primaries
described above under CONDITIONAL EXPRESSIONS. test does not
accept any options, nor does it accept and ignore an argument of
-- as signifying the end of options. Lastly from the man page: test and [ evaluate conditional expressions using a set of rules
based on the number of arguments. EDIT #1 Follow-up question from the OP. Ok, so why is there a need for an "if" then? I mean, why "if" even exists if "[" would suffice. The if is part of a conditional. The test command or [ ... ] command simply evaluate the conditional, and return a 0 or a 1. The 0 or 1 is then acted on by the if statement. The 2 are working together when you use them. Example if [ ... ]; then
... do this ...
else
... do that ...
fi | {
"source": [
"https://unix.stackexchange.com/questions/99185",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33965/"
]
} |
99,307 | I have some doubts about certain ssh server configurations on /etc/ssh/sshd_config . I want the next behavior: Public key authentication is the only way to authenticate as root (no password authentication or other) Normal users can use both (password and public key authentication) If I set PasswordAuthentication no my first point is satisfied but not the second. There is a way to set PasswordAuthentication no only for root? | You can do this using the PermitRootLogin directive. From the sshd_config manpage: Specifies whether root can log in using ssh(1). The argument must be
“yes”, “without-password”, “forced-commands-only”, or “no”. The
default is “yes”. If this option is set to “without-password”, password authentication
is disabled for root. The following will accomplish what you want: PasswordAuthentication yes
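# (added note, not from the original answer) an alternative sketch: a "Match User root" block containing "PasswordAuthentication no" achieves the same scoping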
PermitRootLogin prohibit-password From OpenSSH 7.0 changelog PermitRootLogin now accepts an argument of 'prohibit-password' as a less-ambiguous synonym of 'without-password'. Then reload your ssh server: systemctl reload sshd As usual, don't close your active terminal until you verified, from another terminal, that everything works and that you are not locked out by a mistake. | {
"source": [
"https://unix.stackexchange.com/questions/99307",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46091/"
]
} |
99,334 | I want to do some low-resources testing and for that I need to have 90% of the free memory full. How can I do this on a *nix system? | stress-ng is a workload generator that simulates cpu/mem/io/hdd stress on POSIX systems. This call should do the trick on Linux < 3.14: stress-ng --vm-bytes $(awk '/MemFree/{printf "%d\n", $2 * 0.9;}' < /proc/meminfo)k --vm-keep -m 1 For Linux >= 3.14, you may use MemAvailable instead to estimate available memory for new processes without swapping: stress-ng --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 * 0.9;}' < /proc/meminfo)k --vm-keep -m 1 Adapt the /proc/meminfo call with free(1) / vm_stat(1) /etc. if you need it portable. See also the reference wiki for stress-ng for further usage examples. | {
"source": [
"https://unix.stackexchange.com/questions/99334",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22558/"
]
} |
99,350 | I've been looking around sed command to add text into a file in a specific line.
This works adding text after line 1: sed '1 a\ But I want to add it before line 1. It would be: sed '0 a\ but I get this error: invalid usage of line address 0 . Any suggestion? | Use sed 's insert ( i ) option which will insert the text in the preceding line. sed '1 i\ Question author's update: To make it edit the file in place - with GNU sed - I had to add the -i option: sed -i '1 i\anything' file Also syntax sed -i '1i text' filename For non-GNU sed You need to hit the return key immediately after the backslash 1i\ and after first_line_text : sed -i '1i\
first_line_text
' Also note that some non-GNU sed implementations (for example the one on macOS) require an argument for the -i flag (use -i '' to get the same effect as with GNU sed ). For sed implementations that does not support -i at all, run without this option but redirect the output to a new file. Then replace the old file with the newly created file. | {
"source": [
"https://unix.stackexchange.com/questions/99350",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49315/"
]
} |
99,460 | Is there any command to send messages through the Linux shell to other people on the same network? I'm using write user and then write the message itself. But there's any command that doesn't show my username or that I'm trying to message them The command I'm using will show this to the user I'm trying to contact (code taken from the web): Message from [email protected] on pts/1 at 17:11 ... | The only straightforward way I know of doing this is to use the wall command. This can be used to omit the sender's identification, via the -n switch. Example $ sudo wall -n hi
Remote broadcast message (Fri Nov 8 13:49:18 2013):
hi using echo This alternative method is more of a hack, since it isn't done through an explicit tool but you can echo text out to a user's terminal assuming you know which one they're on. Example $ w
13:54:26 up 2 days, 36 min, 4 users, load average: 4.09, 4.20, 3.73
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
saml tty1 :0 Wed13 2days 3:55m 0.04s pam: gdm-password
saml pts/0 :0.0 Wed13 24:16m 0.35s 0.35s bash
saml pts/1 :0.0 Wed20 0.00s 3.71s 0.00s w
saml pts/4 :0.0 01:20 12:33m 0.36s 0.05s man rsync Assuming you know user saml is in fact on one of the pseudo terminals you can echo text to that device directly like so. From terminal pts/1 : $ sudo echo "Let's go have lunch... ok?" > /dev/pts/4
$ Result on pts/4 : $ man rsync
$ Let's go have lunch... ok? | {
"source": [
"https://unix.stackexchange.com/questions/99460",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50960/"
]
} |
100,588 | I have a Debian system working as a wireless router with eth0 and wlan0 . Now I added an additional network manually on eth1 with ifconfig : alix:~# ifconfig eth1 192.168.0.2 netmask 255.255.255.0
alix:~# netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 192.168.2.1 0.0.0.0 UG 0 0 0 eth0
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
192.168.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
192.168.3.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0
alix:~# ping 192.168.0.254
PING 192.168.0.254 (192.168.0.254) 56(84) bytes of data.
64 bytes from 192.168.0.254: icmp_req=1 ttl=64 time=0.537 ms
64 bytes from 192.168.0.254: icmp_req=2 ttl=64 time=0.199 ms
64 bytes from 192.168.0.254: icmp_req=3 ttl=64 time=0.188 ms
^C
--- 192.168.0.254 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2005ms
rtt min/avg/max/mdev = 0.188/0.308/0.537/0.161 ms Everything works fine as you can see. Now I would like to make the configuration permanent. Therefor I added the following section to /etc/network/interfaces : alix:~# sed -n '/iface eth1/,/^$/p' /etc/network/interfaces
iface eth1 inet static
address 192.168.0.2
netmask 255.255.255.0 But when I try to start the network I get the following error: alix:~# ifconfig eth1 down
alix:~# ifup -v eth1
Configuring interface eth1=eth1 (inet)
run-parts --verbose /etc/network/if-pre-up.d
run-parts: executing /etc/network/if-pre-up.d/hostapd
ip addr add 192.168.0.2/255.255.255.0 broadcast 192.168.0.255 dev eth1 label eth1
RTNETLINK answers: File exists
Failed to bring up eth1. When I run the ip command manually I get the same error: alix:~# ip addr add 192.168.0.2/255.255.255.0 broadcast 192.168.0.255 dev eth1 label eth1
RTNETLINK answers: File exists What is wrong with the command? And how can I tell Debian to do the right thing? | I got it that I had to flush the device before bringing it up: # ip addr flush dev eth1 Clearing manually set interface configuration information like this is mentioned in the Ubuntu Server Guide . | {
"source": [
"https://unix.stackexchange.com/questions/100588",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7167/"
]
} |
100,647 | Wondering what use the yes command might be, I stumbled upon this comment , and tried to execute yes $(yes yes) From what I understand, this should simply print out an infinite sequence of yes , but instead it outputs nothing and crashes my graphical terminal after a few seconds. (If I execute it on tty1, I see the login prompt after some time.) What is happening here? | It should already be enough to run echo $(yes yes) The $(...) runs the inner command until it is finished and captures all its output. - Now as yes runs a long time and generates a lot of output, bash will eventually run out of memory and crash. | {
"source": [
"https://unix.stackexchange.com/questions/100647",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16833/"
]
} |
100,652 | I have a script which connects to a remote server and check if some package is installed: ssh root@server 'bash -s' < myscript.sh myscript.sh: OUT=`rpm -qa | grep ntpdate`
if [ "$OUT" != "" ] ; then
echo "ntpdate already installed"
else
yum install $1
fi This example could be simplified. Here is myscript2.sh which has same problem: read -p "Package is not installed. Do you want to install it (y/n)?" choise My problem is that bash can not read my answers interactively. Is there a way to execute local script remotely without losing ability to prompt user? | Try something like this: $ ssh -t yourserver "$(<your_script)" The -t forces a tty allocation, $(<your_script) reads the whole file and in this cases passes the content as one argument to ssh , which will be executed by the remote user's shell. If the script needs parameters, pass them after the script: $ ssh -t yourserver "$(<your_script)" arg1 arg2 ... Works for me, not sure if it's universal though. | {
"source": [
"https://unix.stackexchange.com/questions/100652",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16253/"
]
} |
100,660 | I want to copy my c directory with all subdirectories excluding ./git subdirectory. I do it using rsync : echo "copy c and sh files "
rsync -a --include='*.c' --include='*.sh' --include='*/' --exclude='*' ~/c/ ~/Dropbox/Public/c
# remove .git directory = do not send it to dropbox. Thx to Tomasz Sowa
rm -rf ~/Dropbox/Public/c/.git Can I do it better? | Just add an explicit exclude for .git: rsync -a --exclude='.git/' --include='*.c' --include='*.sh' --include='*/' --exclude='*' ~/c/ ~/Dropbox/Public/c Another option is to create ~/.cvsignore containing the following line along with any other directories you'd like to exclude: .git/ | {
"source": [
"https://unix.stackexchange.com/questions/100660",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52061/"
]
} |
100,704 | What's the difference between executing multiple commands with && and ; ? Examples: echo "Hi\!" && echo "How are you?" and echo "Hi\!"; echo "How are you?" | In the shell, && and ; are similar in that they both can be used to terminate commands. The difference is && is also a conditional operator. With ; the following command is always executed, but with && the later command is only executed if the first succeeds. false; echo "yes" # prints "yes"
true; echo "yes" # prints "yes"
false && echo "yes" # does not echo
true && echo "yes" # prints "yes" Newlines are interchangeable with ; when terminating commands. | {
"source": [
"https://unix.stackexchange.com/questions/100704",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52095/"
]
} |
100,707 | I've been using rsync for Android to backup my phone to a remote NTFS filesystem on a Linux system for a while. Recently, the HDD containing the NTFS filesystem has started to fail (or throw "I/O Errors") so I took the opportunity to copy all the files onto a new HDD and new NTFS filesystem. In this instance I used the "FastCopy v2.11" tool for Windows. My problem is that when I do an rsync "dry run" I can see that it wants to recopy files which already exist on the remote rsync folder. For example, when I run with "-iv" I get this kind of output: Which, as I understand it means that rsync wants to copy this file to the remote rsync because of a timestamp difference. The strange thing is that if I use "Astro" for Android to look at the local file properties, I can see that the file's size, modified time, and MD5 checksum are exactly the same as that of the remote file (using ls -l to check the modified time). Given that I recently copied the remote rsync files from an old NTFS filesystem, the remote file's ctime is different (using ls -lc ). Does rsync look at the remote ctime, and if so is there any way I can use rsync , or ntfs-3g to get around this problem? | In the shell, && and ; are similar in that they both can be used to terminate commands. The difference is && is also a conditional operator. With ; the following command is always executed, but with && the later command is only executed if the first succeeds. false; echo "yes" # prints "yes"
true; echo "yes" # prints "yes"
false && echo "yes" # does not echo
true && echo "yes" # prints "yes" Newlines are interchangeable with ; when terminating commands. | {
"source": [
"https://unix.stackexchange.com/questions/100707",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52086/"
]
} |
100,722 | On my Ubuntu-Desktop and on my debian-server I have a script which needs to be executed each minute (a script that calls the minute-tic of my space online browsergame ). The problem is that on debian derivates cron is logging to /var/log/syslog each time it executes. I end up seeing repeated the message it was executed over and over in /var/log/syslog : Nov 11 16:50:01 eclabs /USR/SBIN/CRON[31636]: (root) CMD (/usr/bin/w3m -no-cookie http://www.spacetrace.org/secret_script.php > /dev/null 2>&1) I know that in order to suppress the output of a program I can redirect it to /dev/null , for example to hide all error and warning messages from a program I can create a line in crontab like this * * * * * root /usr/local/sbin/mycommand.sh > /dev/null But I would like to run a cronjob and be sure that all generated output or errors are piped to NULL, so it doesn't generate any messages in syslog and doesn't generate any emails EDIT: there is a solution to redirect the cron-logs into a separate log like proposed here by changing /etc/syslog.conf But the drawback is, that then ALL output of all cronjobs is redirected. Can I somehow only redirect a single cronjob to a separate log file? Preferably configurable inside the cron.hourly file itself. | Make the line this: * * * * * root /usr/local/sbin/mycommand.sh > /dev/null 2>&1 This will capture both STDOUT (1) and STDERR (2) and send them to /dev/null . MAILTO You can also disable the email by setting and then resetting the MAILTO="" which will disable the sending of any emails. Example MAILTO=""
* * * * * root /usr/local/sbin/mycommand.sh > /dev/null 2>&1
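# (added comment) the empty MAILTO above silences mail for the jobs that follow it; resetting MAILTO below re-enables mail for later jobs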
MAILTO="[email protected]"
* * * * * root /usr/local/sbin/myothercommand.sh Additional messaging Often times you'll get the following types of messages in /var/log/syslog : Nov 11 08:17:01 manny CRON[28381]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) These are simply notifications via cron that a directory of cronjobs was executed. This message has nothing to do directly with these jobs, instead it's coming from the crond daemon directly. There isn't really anything you can do about these, and I would encourage you to not disable these, since they're likely the only window you have into the goings on of crond via the logs. If they're very annoying to you, you can always direct them to an alternative log file to get them out of your /var/log/syslog file, through the /etc/syslog.conf configuration file for syslog . | {
"source": [
"https://unix.stackexchange.com/questions/100722",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20661/"
]
} |
100,739 | As I understand emulators (in a simple way), they do translate or substitute the function calls of a program using functions of system X into functions used by system Y in which the program is being run onto. Wine project claims that Wine Is Not an Emulator, because: Instead of simulating internal Windows logic like a virtual machine or
emulator, Wine translates Windows API calls into POSIX calls
on-the-fly, eliminating the performance and memory penalties of other
methods and allowing you to cleanly integrate Windows applications
into your desktop. Well, how emulators and virtual machines simulate internal Windows logic on host non-Windows systems? Isn't that by translating Windows system calls into the host's own respective calls? Is the difference between emulators and non-emulators (like Wine) is that emulators emulate a whole operating system then the application uses that system APIs without knowing that it is talking to an emulator, while non-emulators directly translates application's calls into the host's (and the application also may not know it)? Is the extra level of indirection is the only different between emulators and Wine? | Well, how emulators and virtual machines simulate internal Windows logic on host non-Windows systems? Isn't that by translating Windows system calls into the host's own respective calls? No, or at least not in the sense that WINE does -- by literally translating system calls one to one in user space. An emulator does this abstractly via a more circuitous route; it does not translate system calls directly. A true emulator creates a virtual machine (e.g. x86-64), not a virtual operating system . You can then in theory run any operating system targeting that style of machine. Commonly an "emulator" includes the operating system, but that's not really what it is emulating; the OS it includes is the same as one that would run on a real machine. Emulators are sometimes used to simulate hardware different from the host machine, but also hardware that is exactly the same for the purpose of running one OS inside another. WINE is different from this in that it is not actually windows. You could run an
x86-64 emulator with a real copy of windows inside it, but that is not what WINE is. Their claim that it is actually more efficient than an emulator makes sense -- the overhead for just translating system calls is probably lower than that of running a VM. The disadvantage is that WINE can only be windows; you cannot use it with some other OS as you could a normal VM . | {
"source": [
"https://unix.stackexchange.com/questions/100739",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9632/"
]
} |
100,801 | Most shells provide functions like && and ; to chain the execution of commands in certain ways. But what if a command is already running, can I still somehow add another command to be executed depending on the result of the first one? Say I ran $ /bin/myprog
some output... but I really wanted /bin/myprog && /usr/bin/mycleanup . I can't kill myprog and restart everything because too much time would be lost. I can Ctrl + Z it and fg / bg if necessary. Does this allow me to chain in another command? I'm mostly interested in bash, but answers for all common shells are welcome! | You should be able to do this in the same shell you're in with the wait command: $ sleep 30 &
[1] 17440
$ wait 17440 && echo hi
...30 seconds later...
[1]+ Done sleep 30
hi excerpt from Bash man page wait [n ...]
Wait for each specified process and return its termination status. Each n
may be a process ID or a job specification; if a job spec is given, all
processes in that job's pipeline are waited for. If n is not given, all
currently active child processes are waited for, and the return status is
zero. If n specifies a non-existent process or job, the return status is
127. Otherwise, the return status is the exit status of the last process
or job waited for. | {
"source": [
"https://unix.stackexchange.com/questions/100801",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34398/"
]
} |
100,859 | I have to set up a tunnel between two hosts. For this I use ssh in this way: ssh -L MY_LOCAL_PORT:FOREIGN_ADDRESS:FOREIGN_PORT MYUSER@SSH_SERVER after that, I log in to my SSH_SERVER. How can I avoid this feature?!
I have only to set up a tunnel. I don't have to login into my SSH_SERVER... I've tried the -N option, but it kept my shell busy. | As said in other posts, if you don't want a prompt on the remote host, you must use the -N option of SSH. But this just keeps SSH running without having a prompt, and the shell busy. You just need to put the SSH'ing as a background task with the & sign : ssh -N -L 8080:ww.xx.yy.zz:80 user@server & This will launch the ssh tunnelling in the background.
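A variant worth noting (added here, not part of the original answer): ssh can put itself in the background after authentication with its -f flag, so the trailing & is not needed: ssh -f -N -L 8080:ww.xx.yy.zz:80 user@server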
But some messages may appear, especially when you try to connect to a non-listening port (if your apache server is not launched). To avoid these messages spawning in your shell while doing other stuff, you may redirect STDOUT/STDERR to the big void: ssh -N -L 8080:ww.xx.yy.zz:80 user@server >/dev/null 2>&1 & Have fun with SSH. | {
"source": [
"https://unix.stackexchange.com/questions/100859",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52173/"
]
} |
100,871 | Stupidly, I had been using a condition like this as part of a script: if [ $(ls FOO* 2> /dev/null) ] # if files named "FOO*" were downloaded
then
echo "Files found"
# ... process and email results
else
echo "Not found"
# ... email warning that no files were found (against expectations)
fi That works for zero and one files named FOO* , but fails if there are more than one . From logs I found several different error messages stemming from this: [: FOO_20131107_082920: unary operator expected
[: FOO_20131108_070203: binary operator expected
[: too many arguments My question is: what is the correct way to check, in a Bash if condition, whether one or more files whose name begins with FOO exist? GNU bash, version 4.2.25(1)-release (x86_64-pc-linux-gnu) | This happens because your command substitution for ls outputs whitespace, and it ultimately undergoes word splitting before being passed to [ . A less breakable way would be to put the files in an array, and then check that the array has at least one member. shopt -s nullglob
files=( FOO* )
if (( ${#files[@]} )); then
# there were files
fi This works because (( by default returns true if the value does not equal 0, and ${#files[@]} gets the number of items in the array (which will be >0 if there are files matching the glob). You could also do something like this, as long as nullglob is not set: if ls FOO* >/dev/null 2>&1; then
# there were files
fi This just checks the exit code of ls , which will be 1 if you passed a filename that doesn't exist (the literal FOO* , if nothing is matched (unless, of course, you are evil and there is a file named FOO* , in which case it will return 0 :-) )). Note that both of these also match directories. If you really only want to match regular files, you need to test that: for file in FOO*; do
if [[ -f $file ]]; then
# file found, do some stuff and break
break
fi
done | {
"source": [
"https://unix.stackexchange.com/questions/100871",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1506/"
]
} |
100,883 | Is there any hack/tip/trick to make this specific Broadcom Wireless work with OpenBSD? After digging some FreeBSD-wireless threads and OpenBSD-tech/OpenBSD-misc, I noticed that adding the PCI vendor to any specific driver will not work since this specific device have differences on it´s hardware construction compared with Broadcom 4312 or Broadcom 4318. Implementing this Broadcom Wireless driver will need a huge effort to get done, and many of the users are using wifi dongles or converting ndis (Windows XP version) drivers to get wireless conectivity. Are there any patches floating through the internet that would enable ndis on OpenBSD, so I could "convert" this driver as a workaround like the one used on FreeBSD? EDIT1 - The intent here is not to "stick with FreeBSD" or question the OpenBSD binary policy, and that is why i´m looking for guidance. A 3rd part port of ndis to OpenBSD could be a solution... This thread , shows that adding the PCI Vendor id will just probe the hardware, but will not work. This other thread , gives some insight about the different construction of the bcm4313 card. | This happens because your command substitution for ls outputs whitespace, and it ultimately undergoes word splitting before being passed to [ . A less breakable way would be to put the files in an array, and then check that the array has at least one member. shopt -s nullglob
files=( FOO* )
if (( ${#files[@]} )); then
# there were files
fi This works because (( by default returns true if the value does not equal 0, and ${#files[@]} gets the number of items in the array (which will be >0 if there are files matching the glob). You could also do something like this, as long as nullglob is not set: if ls FOO* >/dev/null 2>&1; then
# there were files
fi This just checks the exit code of ls , which will be 1 if you passed a filename that doesn't exist (the literal FOO* , if nothing is matched (unless, of course, you are evil and there is a file named FOO* , in which case it will return 0 :-) )). Note that both of these also match directories. If you really only want to match regular files, you need to test that: for file in FOO*; do
if [[ -f $file ]]; then
# file found, do some stuff and break
break
fi
done | {
"source": [
"https://unix.stackexchange.com/questions/100883",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
100,888 | I'm running Linux Mint Debian Edition with Update Pack 7. I'm trying to connect to a WPA enterprise network using TTLS and PAP, with no luck. The problem seems to be in authentication. Visually, NetworkManager keeps asking my password time after time. The password is correct and works both on Android, Ubuntu and ArchLinux Manjaro. I have seen it work on a LMDE UP6 before in which now also doesn't work (UP7). Here is the log I'm getting (with time removed for readability) NetworkManager[2641]: get_secret_flags: assertion `is_secret_prop (setting, secret_name, error)' failed
NetworkManager[2641]: <info> Activation (wlan0) Stage 1 of 5 (Device Prepare) scheduled...
NetworkManager[2641]: <info> Activation (wlan0) Stage 1 of 5 (Device Prepare) started...
NetworkManager[2641]: <info> (wlan0): device state change: need-auth -> prepare (reason 'none') [60 40 0]
NetworkManager[2641]: <info> Activation (wlan0) Stage 2 of 5 (Device Configure) scheduled...
NetworkManager[2641]: <info> Activation (wlan0) Stage 1 of 5 (Device Prepare) complete.
NetworkManager[2641]: <info> Activation (wlan0) Stage 2 of 5 (Device Configure) starting...
NetworkManager[2641]: <info> (wlan0): device state change: prepare -> config (reason 'none') [40 50 0]
NetworkManager[2641]: <info> Activation (wlan0/wireless): connection 'EduRoam CACert' has security, and secrets exist. No new secrets needed.
NetworkManager[2641]: <info> Config: added 'ssid' value 'eduroam'
NetworkManager[2641]: <info> Config: added 'scan_ssid' value '1'
NetworkManager[2641]: <info> Config: added 'key_mgmt' value 'WPA-EAP'
NetworkManager[2641]: <info> Config: added 'password' value '<omitted>'
NetworkManager[2641]: <info> Config: added 'eap' value 'TTLS'
NetworkManager[2641]: <info> Config: added 'fragment_size' value '1300'
NetworkManager[2641]: <info> Config: added 'phase2' value 'auth=PAP'
NetworkManager[2641]: <info> Config: added 'ca_path' value '/etc/ssl/certs'
NetworkManager[2641]: <info> Config: added 'ca_path2' value '/etc/ssl/certs'
NetworkManager[2641]: <info> Config: added 'ca_cert' value '/home/darkhogg/.eduroam/ca.pem'
NetworkManager[2641]: <info> Config: added 'identity' value '[email protected]'
NetworkManager[2641]: <info> Config: added 'anonymous_identity' value '[email protected]'
NetworkManager[2641]: <info> Config: added 'bgscan' value 'simple:30:-45:300'
NetworkManager[2641]: <info> Config: added 'proactive_key_caching' value '1'
NetworkManager[2641]: <info> Activation (wlan0) Stage 2 of 5 (Device Configure) complete.
NetworkManager[2641]: <info> Config: set interface ap_scan to 1
NetworkManager[2641]: <info> (wlan0): supplicant interface state: disconnected -> scanning
NetworkManager[2641]: <info> (wlan0): supplicant interface state: scanning -> authenticating
NetworkManager[2641]: <info> (wlan0): supplicant interface state: authenticating -> associated
NetworkManager[2641]: <info> (wlan0): supplicant interface state: associated -> disconnected
NetworkManager[2641]: <info> (wlan0): supplicant interface state: disconnected -> scanning
NetworkManager[2641]: <info> (wlan0): supplicant interface state: scanning -> authenticating
NetworkManager[2641]: <info> (wlan0): supplicant interface state: authenticating -> associating
NetworkManager[2641]: <info> (wlan0): supplicant interface state: associating -> associated
NetworkManager[2641]: <info> (wlan0): supplicant interface state: associated -> disconnected
NetworkManager[2641]: <info> (wlan0): supplicant interface state: disconnected -> scanning
NetworkManager[2641]: <info> (wlan0): supplicant interface state: scanning -> authenticating
NetworkManager[2641]: <info> (wlan0): supplicant interface state: authenticating -> associating
NetworkManager[2641]: <info> (wlan0): supplicant interface state: associating -> associated
NetworkManager[2641]: <info> (wlan0): supplicant interface state: associated -> disconnected
NetworkManager[2641]: <info> (wlan0): supplicant interface state: disconnected -> scanning
NetworkManager[2641]: <info> (wlan0): supplicant interface state: scanning -> authenticating
NetworkManager[2641]: <info> (wlan0): supplicant interface state: authenticating -> associating
NetworkManager[2641]: <info> (wlan0): supplicant interface state: associating -> associated
NetworkManager[2641]: <info> (wlan0): supplicant interface state: associated -> disconnected
NetworkManager[2641]: <info> (wlan0): supplicant interface state: disconnected -> scanning
NetworkManager[2641]: <warn> Activation (wlan0/wireless): association took too long.
NetworkManager[2641]: <info> (wlan0): device state change: config -> need-auth (reason 'none') [50 60 0]
NetworkManager[2641]: <warn> Activation (wlan0/wireless): asking for new secrets
NetworkManager[2641]: <info> (wlan0): supplicant interface state: scanning -> authenticating
NetworkManager[2641]: <info> (wlan0): supplicant interface state: authenticating -> disconnected
NetworkManager[2641]: <warn> Couldn't disconnect supplicant interface: This interface is not connected. The network is eduroam , used by my university to provide WiFi access. More information can be found here . In particular, I'm from Spain, in Univerdad Complutense de Madrid. This may be relevant as I understand every university implements it more or less as they want. I have unsuccessfully followed multiple tutorials involving wpa_supplicant scripts and configuration, and the result is always the same: Authentication fails and it asks my password again on a loop. | This happens because your command substitution for ls outputs whitespace, and it ultimately undergoes word splitting before being passed to [ . A less breakable way would be to put the files in an array, and then check that the array has at least one member. shopt -s nullglob
files=( FOO* )
if (( ${#files[@]} )); then
# there were files
fi This works because (( by default returns true if the value does not equal 0, and ${#files[@]} gets the number of items in the array (which will be >0 if there are files matching the glob). You could also do something like this, as long as nullglob is not set: if ls FOO* >/dev/null 2>&1; then
# there were files
fi This just checks the exit code of ls , which will be 1 if you passed a filename that doesn't exist (the literal FOO* , if nothing is matched (unless, of course, you are evil and there is a file named FOO* , in which case it will return 0 :-) )). Note that both of these also match directories. If you really only want to match regular files, you need to test that: for file in FOO*; do
if [[ -f $file ]]; then
# file found, do some stuff and break
break
fi
done | {
"source": [
"https://unix.stackexchange.com/questions/100888",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34006/"
]
} |
100,959 | I can print my current working dir like this myPrompt$ pwd
/Users/me/myDir I want my shell to look like this /Users/me/myDir$ pwd
/Users/me/myDir Is that possible? How can I do it? | You can use escape sequences in prompt variables . Put this in your ~/.bashrc : PS1='\w\$ ' | {
"source": [
"https://unix.stackexchange.com/questions/100959",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18366/"
]
} |
101,073 | I created a folder on the command line as the root user. Now I want to edit it and its contents in GUI mode. How do I change the permissions on it to allow me to do this? | If I understand you correctly, fire up a terminal, navigate to one level above that directory, change to root and issue the command: chown -R user:group directory/ This changes the ownership of directory/ (and everything else within it) to the user user and the group group . Many systems add a group named after each user automatically, so you may want: chown -R user:user directory/ After this, you can edit the tree under directory/ and even change the permissions of directory/ and any file/directory under it, from the GUI. If you truly want any user to have full permissions on all files under directory/ (which may be OK if this is your personal computer, but is definitely not recommended for multi-user environments), you can issue this: chmod -R a+rwX directory/ as root. | {
"source": [
"https://unix.stackexchange.com/questions/101073",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46431/"
]
} |
101,143 | I have a huge gzipped file and I want a program (4s-import in this case) to read it. It takes a lot of time to first unzip the file and then call the program with the path to the file as an argument. Would it be possible to do something like: zcat huge.gz | 4s-import <SOME MAGIC> where SOME-MAGIC is like a path to an abstract file that contains stdin? The much slower and more disk space consuming alternative that I have to do otherwise is: zcat huge.gz > huger
4s-import huger | You can use the process substitution operator <() of bash (or zsh ): 4s-import <(zcat huge.gz) This operator will create a temporary fifo /dev/fd/NN and replace <(.) with the string /dev/fd/NN . 4s-import now can open /dev/fd/NN and read from that fifo, while bash will run zcat huge.gz , which sends its output to /dev/fd/NN . | {
"source": [
"https://unix.stackexchange.com/questions/101143",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50823/"
]
} |
101,146 | As far as I understand, the /etc/apt/apt.conf file of Squeeze has been broken into separate files within the /etc/apt/apt.conf.d/ directory in Wheezy. The debian wiki has not been updated yet and it seems to contain info only about /etc/apt/apt.conf.d/70debconf . In any case, I am not entirely familiar with configuring apt, besides merely editing /etc/apt/sources.list and I am kinda lost here. The directory structure on my machine is: /etc/apt/apt.conf.d/
00aptitude
00CDMountPoint
00trustcdrom
01autoremove
20listchanges
20packagekit
70debconf and my questions are: What do these files do? What do the numbers mean? Can I add new files to this dir so that they are loaded as well? If so, is there a convention for doing so? | You can use the process substitution operator <() of bash (or zsh ): 4s-import <(zcat huge.gz) This operator will create a temporary fifo /dev/fd/NN and replace <(.) with the string /dev/fd/NN . 4s-import now can open /dev/fd/NN and read from that fifo, while bash will run zcat huge.gz , which sends its output to /dev/fd/NN . | {
"source": [
"https://unix.stackexchange.com/questions/101146",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33054/"
]
} |
101,237 | By mistake I ran rm * on the current directory where I created many c program files. I had been working on these since morning. Now I can't take out again the time that I spent since morning on creating the files. Please say how to recover. They aren't in recycle bin also! | If a running program still has the deleted file open, you can recover the file through the open file descriptor in /proc/[pid]/fd/[num] . To determine if this is the case, you can attempt the following: $ lsof | grep "/path/to/file" If the above gives output of the form: progname 5383 user 22r REG 8,1 16791251 265368 /path/to/file take note of the PID in the second column, and the file descriptor number in the fourth column. Using this information you can recover the file by issuing the command: $ cp /proc/5383/fd/22 /path/to/restored/file If you're not able to find the file with lsof , you should immediately remount the file system which housed the file read-only: $ mount -o remount,ro /dev/[partition] or unmount the file system altogether: $ umount /dev/[partition] The reason for this is that as soon as the file has been unlinked, and there are no remaining hard links to the file in question, the underlying file system may free the blocks previously allocated for the deleted file, at which point the blocks may be allocated to another file and their contents overwritten. Ceasing any further writes to the file system is therefore time critical if any recovery is to be possible. If the file system is the root file system or cannot be made read-only or unmounted for some other reason, it might be necessary to shutdown the system (if possible) and continue the recovery from a live environment where you can leave the target file system read-only. After writes to the file system have been prevented, there is no immediate hurry to attempt the actual recovery. To play it safe, you might want to make a backup of the file system to perform the actual recovery on: $ dd bs=4M if=/dev/[partition] of=/path/to/backup The next steps now depend on the file system type. Assuming a typical Ubuntu installation, you most likely have a ext3 or ext4 file system. In this case, you may attempt recovery using extundelete . Recovery may be attempted safely on either the backup, or the raw device, as long as it is not mounted (or it is mounted read-only). DO NOT ATTEMPT RECOVERY FROM A LIVE FILE SYSTEM . This will most likely bring the file system to an inconsistent state. extundelete will attempt restore any files it finds to a subdirectory of the current directory named RECOVERED_FILES . Typical usage to restore all deleted files from a backup would be: With older versions: $ extundelete /path/to/backup --restore-all With newer versions (e.g. 0.2.4), don't mount the device you're trying to recover from (thanks to Ryan Lue) : $ extundelete /dev/<device-file> --restore-all Instead of --restore-all , you can try options like --restore-file <path> or --restore-directory <path> | {
"source": [
"https://unix.stackexchange.com/questions/101237",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46723/"
]
} |
101,263 | On Unix, a long time back, I learned about chmod :
the traditional way to set permissions on Unix
(and to allow programs to gain privileges, using setuid and setgid). I have recently discovered some newer commands on GNU/Linux: setfacl extends the traditional ugo:rwx bits and the t bit of chmod . setcap gives more fine-grained control over privileges
than the ug:s bits of chmod . chattr allows some other controls (a bit of a mix) over the file. Are there any others? | chmod : change file mode bits Usage (octal mode): chmod octal-mode files... Usage (symbolic mode): chmod [ references ][[ operator ][ modes ]] files... references is a combination of the letters ugoa ,
which specify which user's access to the files will be modified: u the user who owns it g other users in the file's group o other users not in the file's group a all users If omitted, it defaults to all users,
but only permissions allowed by the umask are modified. operator is one of the characters +-= : + add the specified file mode bits
to the existing file mode bits of each file - removes the specified file mode bits
from the existing file mode bits of each file = adds the specified bits and removes unspecified bits, except the setuid and setgid bits set for directories, unless explicitly specified. mode consists of a combination of the letters rwxXst , which specify which permission bits are to be modified: r read w write x (lower case X ) execute (or search for directories) X (capital) execute/traverse only if the file is a directory
or already has an execute bit set for some user category s setuid or setgid (depending on the specified references ) t restricted deletion flag or sticky bit Alternatively, the mode can consist of one of the letters ugo ,
in which case case the mode corresponds to the permissions
currently granted to the owner ( u ), members of the file's group ( g )
or users in neither of the preceding categories ( o ). The various bits of chmod explained: Access control (see also setfacl ) rwx — read ( r ), write ( w ), and execute/traverse ( x ) permissions Read (r) affects if a file can be read, or if a directory can be listed. Write (w) affects if a file can be written to,
or if a directory can be modified (files added, deleted, renamed). Execute (x) affects if a file can be run,
use for scripts and other executable files. Traverse (x), also known as "search",
affects whether a directory can be traversed;
i.e., whether a process can access (or try to access) file system objects
through entries in this directory. s and t — sticky bit ( t ), and setgid ( s ) on directories The sticky bit only affects directories. Will prevent anyone except file owner, and root, from deleting files in the directory. The setgid bit on directories will cause new files and directories
to have the group set to the same group,
and new directories to have their setgid bit set
(see also defaults in setfacl ). s — setuid, setgid, on executable files This can affect security in a bad way, if you don't know what you are doing. When an executable is run, if one of these bits is set,
then the user/group of the executable
will become the effective user/group of the process.
Thus the program runs as that user.
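As a quick illustration (these example commands are added for clarity and are not from the original answer; foo and shared are placeholder names): chmod u+s foo sets the setuid bit on an executable (octal: chmod 4755 foo ), chmod g+s shared sets the setgid bit on a directory (octal: chmod 2775 shared ), and chmod +t shared sets the sticky/restricted-deletion bit (octal: chmod 1777 shared , as on /tmp ).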
See setcap for a more modern way to do this. chown chgrp : chattr : change file attributes Usage: chattr operator [ attribute ] files... operator is one of the characters +-= : + adds the selected attributes to the existing attributes of the files - removes the selected attributes = overwrites the current set of attributes the files have with the specified attributes . attribute is a combination of the letters acdeijmstuxACDFPST ,
which correspond to the attributes: a append only c compressed d no dump e extent format i immutable j data journaling m don't compress s secure deletion t no tail-merging u undeletable x direct access for files A no atime updates C no copy on write D synchronous directory updates F case-insensitive directory lookups P project hierarchy S synchronous updates T top of directory hierarchy There are restrictions on the use of many of these attributes.
For example, many of them can be set or cleared only
by the superuser (i.e., root) or an otherwise privileged process. setfattr : change extended file attributes Usage (set attribute): setfattr -n name -v value files... Usage (remove): setfattr -x name files... name is the name of the extended attribute to set or remove value is the new value of the extended attribute setfacl : change file access control lists Usage: setfacl option [default:][ target :][ param ][: perms ] files... option must include one of the following: --set set the ACL of a file or a directory, replacing the previous ACL -m | --modify modify the ACL of a file or directory -x | --remove remove ACL entries of a file or directory target is one of the letters ugmo (or the longer forms shown below): u , users permission of a named user identified by param , defaults to file owner UID if omitted g , group permission of a named group identified by param , default to owning group GID if omitted m , mask effective rights mask o , other permissions of others perms is a combination of the letters rwxX , which correspond to the permissions: r read w write x execute X execute only if the file is a directory or already has execute permission for some user Alternatively, perms may be an octal digit ( 0 - 7 ) indicating the set of permissions. setcap : change file capabilities Usage: setcap capability-clause file A capability-clause consists of a comma-separated list of capability names followed by a list of operator-flag pairs. The available operators are = , + and - . The available flags are e , i and p which correspond to the Effective , Inheritable and Permitted capability sets. The = operator will raise the specified capability sets and reset the others. If no flags are given in conjunction with the = operator all the capability sets will be reset. The + and - operators will raise or lower the one or more specified capability sets respectively. chcon : change file SELinux security context Usage: chcon [-u user ] [-r role ] [-t type ] files... user is the SELinux user, such as user_u , system_u or root . role is the SELinux role (always object_r for files) type is the SELinux subject type chsmack : change SMACK extended attributes SMACK is Simplified Mandatory Access Control Kernel. Usage: chsmack -a value file value is the SMACK label to be set for the SMACK64 extended file attribute setrichacl : change rich access control list richacl s are a feature that will add more advanced ACLs. Currently a work in progress, so I can not tell you much about them. I have not used them. See also this question Are there more advanced filesystem ACLs beyond traditional 'rwx' and POSIX ACL? and man page | {
"source": [
"https://unix.stackexchange.com/questions/101263",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4778/"
]
} |
101,271 | How do I open a text file in a terminal with instant auto-refresh every time it is changed? I've looked at vim with :set autoread , but it requires some elementary input (such as a keypress inside vim ) to trigger the refresh. I want the auto-refresh to be hands-free. Is there some hack to do this? I'm using Crunchbang 11, but I'm quite comfortable with the terminal. | This should show you the file once per second: watch -n 1 cat file | {
"source": [
"https://unix.stackexchange.com/questions/101271",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1243/"
]
} |
101,272 | Fun fact: If you use Archive Manager and extract a .tar.gz so that you have "Keep directory structure" unticked, you will get a tarbomb . tar -ztf lists all the files and directories in a tar file.
Is there a way to list all the files in a tar file, without the directory structure? | I don't see a way to do it from the man page, but you can always filter the results. The following assumes no newlines in your file names: tar tzf your_archive | awk -F/ '{ if($NF != "") print $NF }' How it works By setting the field separator to / , the last field awk knows about ( $NF ) is either the file name if it's processing a file name or empty if it's processing a directory name ( tar adds a trailing slash to directory names). So, we're basically telling awk to print the last field if it's not empty. | {
"source": [
"https://unix.stackexchange.com/questions/101272",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40678/"
]
} |
101,295 | Is there a way to search man pages case-insensitively? Using the '/' search feature matches exact case. | When no other pager is specified, man uses less to display man pages. The other answers that involve changing the pager command line are correct, but you can also type -i while less is running. From the less man page: - Followed by one of the command line option letters (see OPTIONS
below), this will change the setting of that option and print a
message describing the new setting. So typing -i while in less changes the setting in the same way that specifying it on the command line would. I got the hint that this would work from How do you do a case insensitive search using a pattern modifier using less , then found the explanation in the man page. | {
"source": [
"https://unix.stackexchange.com/questions/101295",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52244/"
]
} |
101,308 | I'm using a PC-BSD workstation, and I would like to know if there is a way to monitor which app / process is using the network. I use a Mac OS X (Mavericks) laptop, and the "Network" tab in the "Activity Monitor" allows to see which process is sending / receiving data to / from the network. But I don't see (or haven't found) anything like that in FreeBSD. Since Mac OS X is similar to FreeBSD under the hood, is there any graphical app (similar to System Monitor) or command-line utility (similar to top ) to monitor network activity for each process? | When no other pager is specified, man uses less to display man pages. The other answers that involve changing the pager command line are correct, but you can also type -i while less is running. From the less man page: - Followed by one of the command line option letters (see OPTIONS
below), this will change the setting of that option and print a
message describing the new setting. So typing -i while in less changes the setting in the same way that specifying it on the command line would. I got the hint that this would work from How do you do a case insensitive search using a pattern modifier using less , then found the explanation in the man page. | {
"source": [
"https://unix.stackexchange.com/questions/101308",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19370/"
]
} |
101,332 | I'd like to generate a file with the name example.file . I could use touch example.file but I want the file to be exactly 24MB in size. I already checked the manpage of touch, but there is no parameter like this. Is there an easy way to generate files of a certain size? | You can use dd: dd if=/dev/zero of=output.dat bs=24M count=1 or dd if=/dev/zero of=output.dat bs=1M count=24 or, on Mac, dd if=/dev/zero of=output.dat bs=1m count=24 | {
"source": [
"https://unix.stackexchange.com/questions/101332",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52447/"
]
} |
101,440 | I want to remove all empty lines from a file. Even if the line contains spaces or tabs it should also be removed. | Just grep for non-blanks: grep '[^[:blank:]]' < file.in > file.out [:blank:] , inside character ranges ( [...] ), is called a POSIX character class. There are a few like [:alpha:] , [:digit:] ... [:blank:] matches horizontal white space (in the POSIX locale, that's space and tab, but in other locales there could be more, like all the Unicode horizontal spacing characters in UTF8 locales) while [[:space:]] matches horizontal and vertical white space characters (same as [:blank:] plus things like vertical tab, form feed...). grep '[:blank:]' Would return the lines that contain any of the characters, : , b , l , a , n or k . Character classes are only recognised within [...] , and ^ within [...] negates the set. So [^[:blank:]] means any character but the blank ones. | {
"source": [
"https://unix.stackexchange.com/questions/101440",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52496/"
]
} |
101,515 | Say I just create directory newDirectory and then I do ls -ld command. I see that the number of hard links is 2. What exactly makes the hard link 2 from the start? Also is the number of subdirectories in the current directory equal to the number of hard links - 2? | Historically , the first Unix filesystem created two entries in every directory: . pointing to the directory itself, and .. pointing to its parent. This provided an easy way to traverse the filesystem, both for applications and for the OS itself. Thus each directory has a link count of 2+n where n is the number of subdirectories. The links are the entry for that directory in its parent, the directory's own . entry, and the .. entry in each subdirectory. For example, suppose this is the content of the subtree rooted at /parent , all directories: /parent
/parent/dir
/parent/dir/sub1
/parent/dir/sub2
/parent/dir/sub3 Then dir has a link count of 5: the dir entry in /parent , the . entry in /parent/dir , and the three .. entries in each of /parent/dir/sub1 , /parent/dir/sub2 and /parent/dir/sub3 . Since /parent/dir/sub1 has no subdirectory, its link count is 2 (the sub1 entry in /parent/dir and the . entry in /parent/dir/sub1 ). To minimize the amount of special-casing for the root directory, which doesn't have a “proper” parent, the root directory contains a .. entry pointing to itself. This way it, too, has a link count of 2 plus the number of subdirectories, the 2 being /. and /.. . Later filesystems have tended to keep track of parent directories in memory and usually don't need . and .. to exist as actual entries; typical modern unix systems treat . and .. as special values as part of the filesystem-type-independent filesystem code. Some filesystems still include . and .. entries, or pretend to even though nothing appears on the disk. Most filesystems still report a link count of 2+n for directories regardless of whether . and .. entries exist, but there are exceptions, for example btrfs doesn't do this and changing this has been marked as a rejected idea in the btrfs wiki . | {
"source": [
"https://unix.stackexchange.com/questions/101515",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49088/"
]
} |
101,561 | I've always used GNU tar . However, all GNU/Linux distributions that I've seen ship bsdtar in their repositories. I've even seen it installed by default in some, IIRC. I know for sure that Arch GNU/Linux requires it as a part of basedevel (maybe base , but I'm not sure), as I've seen it in PKGBUILDs. Why would you want to use bsdtar instead of GNU tar ? What are the advantages? Note that I am the person who asked What are the main differences between BSD and GNU/Linux userland? . | The Ubuntu bsdtar is actually the tar implementation bundled with libarchive ; and that should be differentiated from classical bsdtar . Some BSD variants do use libarchive for their tar implementation, eg FreeBSD. GNUtar does support the other tar variants and automatic compression detection. As visualication pasted the blurb from Ubuntu, there are a few things in there that are specific to libarchive : libarchive is by definition a library, and different from both classical bsdtar and GNUtar in that way. libarchive cannot read some older obscure GNU tar variations, most notable was encoding of some headers in base64, so that the tar file would be 7-bit clean ASCII (this was the case for 1.13.6-1.13.11 and changed in 1.13.12, that code was only officially in tar for 2 weeks) libarchive 's bsdtar will read non-tar files (eg zip, iso9660, cpio), but classical bsdtar will not. Now that we've gotten libarchive out of the way, it mostly comes down to what is supported in classical bsdtar . You can see the manpages yourself here: GNU tar(1) FreeBSD tar(1) - libarchive-based NetBSD tar(1) OpenBSD tar(1) Standard/Schily tar(1) - the oldest free tar implementation, no heritage to any other busybox (1) - Mini tar implementation for BusyBox, common in embedded systems In your original question, you asked what are the advantages to the classical bsdtar , and I'm not sure there are really any. The only time it really matters is if you're trying to writing shell scripts that need to work on all systems; you need to make sure what you pass to tar is actually valid in all variants. GNUtar , libarchive 's bsdtar , classical bsdtar , star and BusyBox 's tar are certainly the tar implementations that you'll run into most of the time, but I'm certain there are others out there (early QNX for example). libarchive / GNUtar / star are the most feature-packed, but in many ways they have long deviated from the original standards (possibly for the better). | {
"source": [
"https://unix.stackexchange.com/questions/101561",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29146/"
]
} |
101,576 | I'm running a server, and I need to give read/write access to a particular directory to a single user. I've tried the following: sudo adduser abcd
sudo groupadd abcdefg
chown -R .abcdefg /var/www/allowfolder
chmod -R g+rw /var/www/allowfolder
usermod -a -G comments abcd The above seems to work, however it gives the user read-only access to the rest of the server. How can I set up permissions so that the user can only read and write to a particular folder? The user should also be able to run programs like mysql . | Negative ACLs You can prevent a user from accessing certain parts of the filesystem by setting access control lists . For example, to ensure that the user abcd cannot access any file under /home : setfacl -m user:abcd:0 /home This approach is simple, but you must remember to block access to everything that you don't want abcd to be able to access. Chroot To get positive control over what abcd can see, set up a chroot , i.e. restrict the user to a subtree of the filesystem. You need to make all the files that the user needs (e.g. mysql and all its dependencies, if you want the user to be able to run mysql ) under the chroot. Say the path to the chroot is /home/restricted/abcd ; the mysql program needs to be available under /home/restricted/abcd . A symbolic link pointing outside the chroot is no good because symbolic link lookup is affected by the chroot jail. Under Linux, you can make good use of bind mounts: mount --rbind /bin /home/restricted/abcd/bin
mount --rbind /dev /home/restricted/abcd/dev
mount --rbind /etc /home/restricted/abcd/etc
mount --rbind /lib /home/restricted/abcd/lib
mount --rbind /proc /home/restricted/abcd/proc
mount --rbind /sbin /home/restricted/abcd/sbin
mount --rbind /sys /home/restricted/abcd/sys
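# note added for clarity (not in the original answer): each target directory under /home/restricted/abcd must already exist, e.g. created beforehand with mkdir -p, or the corresponding bind mount will fail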
mount --rbind /usr /home/restricted/abcd/usr You can also copy files (but then you'll need to take care that they're up to date). To restrict the user to the chroot, add a ChrootDirectory directive to /etc/sshd_config . Match User abcd
ChrootDirectory /home/restricted/abcd You can test it with: chroot --userspec=abcd /home/restricted/abcd/ /bin/bash Security framework You can also use security frameworks such as SELinux or AppArmor. In both cases, you need to write a fairly delicate configuration, to make sure you aren't leaving any holes. | {
"source": [
"https://unix.stackexchange.com/questions/101576",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17002/"
]
} |
101,580 | Lets say when I do ls command the output is: file1 file2 file3 file4 Is it possible to display only a certain column of output, in this case file2? I have tried the following with no success: echo ls | $2 Basically all I want to do is echo only the second column, in this case, I want to echo: file2 | The following command will format the ls output into one column: ls -1 /directory | {
"source": [
"https://unix.stackexchange.com/questions/101580",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49088/"
]
} |
101,599 | I came across a command in a Bash script in which I found: find /var/log/abcd -type f The above command was in the context of cleaning the log files.
I know what find does. After having seen -type f , I looked at the manual page for it.
I got to see it in man page of BASH_BUILTINS(1) The description of -f flag under type command is :- The -f option suppresses shell function lookup, as with the command builtin. Following are my questions: What is the use of type ? What is the significance of -f flag? What is the use of using type with find command? [EDIT]:- After having read all the comments and answers till now, I would like to mention the cause for my misinterpretation of -type option in command find Vs type command .
This all happened because I was assuming and till date have seen only the short options(Tests in case of the find command) with a single minus sign '-' , example, ls -l . Most of the times I have seen long options with double minus sign '--' , example, ls --version . | In this case type has nothing to do with the bash built-in type , but more on that later on. A little about "type" The BASH built-in type command gives you information about commands. Thus: $ type type
type is a shell builtin The syntax is: type [-tap] [name ...] -t : print only type, if found -a : print all occurrences of the command, both built-in and other. -p : print the disk file that would be executed on call to command, or nothing. If we look at time , kill and cat as an example: $ type time kill cat
time is a shell keyword
kill is a shell builtin
cat is /bin/cat
$ type -t time kill cat
keyword
builtin
file
$ type -a time kill cat
time is a shell keyword
time is /usr/bin/time
kill is a shell builtin
kill is /bin/kill
cat is /bin/cat
$ type -ta time kill cat
keyword
file
builtin
file
file Now, this specifies that if you are in a Bash shell and type time some_cmd , the bash builtin time is used. To use the system time you can do /usr/bin/time some_cmd . One way often used to ensure that the system, and not built-in, command is used is by using which . tt=$(which time) and then use $tt to call system time . The command in question In this case the -type is an option to the command find . The option takes one argument, which specifies the type of entity. Example find . -type f # File
find . -type d # Directory There are more, check man find for the rest. To search for the specific option you can do (whilst in man): /^\s*-type Enter Then use n for next until you find it. A little about shell command This is a bit of a personal interpretation. Some of the things worth mentioning, in this specific case, are commands, options, arguments and pipes. This is somewhat loosely used, but in my vocabulary we have in short: command: a program or built-in . parameter: an entity after the command word. option: an optional parameter. argument: a required parameter. In a command specification square brackets are used to specify options and, optionally less/greater then, used to specify arguments. Thus: foo [-abs] [-t <bar>] <file> ...
foo [-abs] [-t bar] file ... Gives -a -b and -s as optional parameters, and file a required one. -t is optional, but if specified takes the required argument bar . Dots represent that it can take several files. This is no exact specification, and often man or help is required to be sure. Positioning of arguments options and input can often be mixed up, but it is generally best to keep to a position based approach as some systems does not handle mixed positioning of arguments. As an example: chmod -R nick 722 foo
chmod nick 722 foo -R Both work on some systems, whilst the latter does not on other. In your exact command all parameters belongs to find – thus if you wonder about a property man find is the correct place to look. In cases where you need to look at man pages for the shell etc. could be in e.g.: find . $(some command)
find . `some command`
find . $some_var
find . -type f -exec some_command {} \;
find . -type f | some_command
... The -exec is a special one where -exec some_command {} \; are all parameters given to find , but the some_command {} \; part is expanded, within find to some_command string_of_found_entity . Further on quoting expansion command substitution and so much more You might find this useful . | {
"source": [
"https://unix.stackexchange.com/questions/101599",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50083/"
]
} |
101,766 | Not much to put here in the body. | Processes need to have a parent (PPID). The kernel, despite not being a real process, is nevertheless handcrafting some real processes like at least init, and is giving itself the process ID 0. Depending on the OS it might or might not be displayed as a process in ps output but is always displayed as a PPID: eg on Linux: $ ps -ef|head
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 09:09 ? 00:00:00 /sbin/init
root 2 0 0 09:09 ? 00:00:00 [kthreadd]
root 3 2 0 09:09 ? 00:00:00 [ksoftirqd/0]
... on Solaris: $ ps -ef|head
UID PID PPID C STIME TTY TIME CMD
root 0 0 0 Oct 19 ? 0:01 sched
root 5 0 0 Oct 19 ? 11:20 zpool-rpool1
root 1 0 0 Oct 19 ? 0:13 /sbin/init
root 2 0 0 Oct 19 ? 0:07 pageout
root 3 0 1 Oct 19 ? 117:10 fsflush
root 341 1 0 Oct 19 ? 0:15 /usr/lib/hal/hald --daemon=yes
root 9 1 0 Oct 19 ? 0:59 /lib/svc/bin/svc.startd
... Note also that pid 0 (and -1 and other negative values for that matter) have different meanings depending on which function uses them, like kill , fork and waitpid . Finally, while the init process is traditionally given pid #1 , this is no longer the case when OS level virtualization is used like Solaris zones, as there can be more than one init running: $ ps -ef|head
UID PID PPID C STIME TTY TIME CMD
root 4733 3949 0 11:07:25 ? 0:26 /lib/svc/bin/svc.configd
root 4731 3949 0 11:07:24 ? 0:06 /lib/svc/bin/svc.startd
root 3949 3949 0 11:07:14 ? 0:00 zsched
daemon 4856 3949 0 11:07:46 ? 0:00 /lib/crypto/kcfd
root 4573 3949 0 11:07:23 ? 0:00 /usr/sbin/init
netcfg 4790 3949 0 11:07:34 ? 0:00 /lib/inet/netcfgd
root 4868 3949 0 11:07:48 ? 0:00 /usr/lib/pfexecd
root 4897 3949 0 11:07:51 ? 0:00 /usr/lib/utmpd
netadm 4980 3949 0 11:07:54 ? 0:01 /lib/inet/nwamd | {
"source": [
"https://unix.stackexchange.com/questions/101766",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52648/"
]
} |
101,771 | Question: I have the below data field zzzzz: 4
afsdf: 5
sdfsd: 3 how do I change the places of two columns so I get e.g 4: zzzzz using the awk or sed command? If possible please show multiple ways so I can explore further | Processes need to have a parent (PPID). The kernel, despite not being a real process, is nevertheless handcrafting some real processes like at least init, and is giving itself the process ID 0. Depending on the OS it might or might not be displayed as a process in ps output but is always displayed as a PPID: eg on Linux: $ ps -ef|head
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 09:09 ? 00:00:00 /sbin/init
root 2 0 0 09:09 ? 00:00:00 [kthreadd]
root 3 2 0 09:09 ? 00:00:00 [ksoftirqd/0]
... on Solaris: $ ps -ef|head
UID PID PPID C STIME TTY TIME CMD
root 0 0 0 Oct 19 ? 0:01 sched
root 5 0 0 Oct 19 ? 11:20 zpool-rpool1
root 1 0 0 Oct 19 ? 0:13 /sbin/init
root 2 0 0 Oct 19 ? 0:07 pageout
root 3 0 1 Oct 19 ? 117:10 fsflush
root 341 1 0 Oct 19 ? 0:15 /usr/lib/hal/hald --daemon=yes
root 9 1 0 Oct 19 ? 0:59 /lib/svc/bin/svc.startd
... Note also that pid 0 (and -1 and other negative values for that matter) have different meanings depending on which function uses them, like kill , fork and waitpid . Finally, while the init process is traditionally given pid #1 , this is no longer the case when OS level virtualization is used like Solaris zones, as there can be more than one init running: $ ps -ef|head
UID PID PPID C STIME TTY TIME CMD
root 4733 3949 0 11:07:25 ? 0:26 /lib/svc/bin/svc.configd
root 4731 3949 0 11:07:24 ? 0:06 /lib/svc/bin/svc.startd
root 3949 3949 0 11:07:14 ? 0:00 zsched
daemon 4856 3949 0 11:07:46 ? 0:00 /lib/crypto/kcfd
root 4573 3949 0 11:07:23 ? 0:00 /usr/sbin/init
netcfg 4790 3949 0 11:07:34 ? 0:00 /lib/inet/netcfgd
root 4868 3949 0 11:07:48 ? 0:00 /usr/lib/pfexecd
root 4897 3949 0 11:07:51 ? 0:00 /usr/lib/utmpd
netadm 4980 3949 0 11:07:54 ? 0:01 /lib/inet/nwamd | {
"source": [
"https://unix.stackexchange.com/questions/101771",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52418/"
]
} |
101,806 | I'm trying out XFCE on Arch Linux, and for some reason the Lock Screen option in the session menu doesn't do anything. Neither does running xflock4 at the command line (it exits 0 with no output). xfce4-session is running. Do I need to install a screensaver package or something? | Do I need to install a screensaver package or something? Yes, according to the wiki , you need to choose and install a locker. xflock4 will then activate it. | {
"source": [
"https://unix.stackexchange.com/questions/101806",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2421/"
]
} |
101,847 | Can someone explain to me the following? $ ls -ld /temp/sit/build/
dr-xr-s--- 3 asdf qwer 4096 Jan 31 2012 /temp/sit/build/
$ ls -ld /temp/sit/build/*
ls: /temp/sit/build/*: Permission denied So apparently, I can't use the asterisk here. I've tried it with a sudo command and I get a "no such file" error rather than "permission denied"... sudo ls -l /temp/sit/build/*
ls: /temp/sit/build/batch*: No such file or directory but it finally works if I don't use the * sudo ls -l /temp/sit/build/
total 4
dr-xr-s--- 11 asdf qwer 4096 Oct 3 23:31 file | The shell that's doing the expansion of the * wildcard is the shell where you type it. If the shell has the permission to read the list of files in the directory, then it expands /temp/sit/build/* to /temp/sit/build/file , and runs sudo with the arguments ls , -l and /temp/sit/build/file . If the shell is unable to find any match for /temp/sit/build/* (whether it's because there are no matches, or because the shell has no permission to see the matches), then it leaves the pattern alone, and sudo is called with the arguments ls , -l and /temp/sit/build/* . Since there is no file called /temp/sit/build/* , the ls command complains if you pass it that name. Recall that ls doesn't expand wildcards, that's the shell's job. If you want wildcard expansion to happen in a directory where you don't have read permission, then the expansion must happen in a shell that's started by sudo instead of in the shell that calls sudo . sudo doesn't automatically start a shell, you need to do that explicitly. sudo sh -c 'ls -l /temp/sit/build/*' Here, of course, you can do sudo ls -l /temp/sit/build/ instead, but that doesn't generalize to other patterns. | {
"source": [
"https://unix.stackexchange.com/questions/101847",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32249/"
]
} |
101,867 | In Windows, I'm used to clicking the center button and it offering a "fast scroll" option up or down. How can I get this behavior on Linux? It currently seems to use the back button upon center click instead. I use Gnome under CentOS. | This Windows feature has never really made its way into the Unix world. In the Unix world, the primary purpose of the middle mouse button is to paste the clipboard content (or more precisely, text selected with the mouse, which is auto-copied). A couple of cross-platform applications such as Firefox and Chrome that support Linux-style middle mouse button under Windows and vice versa, but other than that most applications don't support this kind of fine-grained scrolling. Nonetheless, you can get fairly close at the system level. It is possible to set up a mouse button such that when it is pressed, mouse movements are transformed into wheel events. This is the same feature that you're used to, but you're likely to find the motion choppy, because applications receive wheel events, which are typically interpreted as scrolling by one whole line or column. To play with this configuration, use the xinput program (I don't know if there's a GUI frontend for it). First, run the following command to see the name of your pointing device: $ xinput --list
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ Generic USB Mouse id=8 [slave pointer (2)]
⎜ ↳ Macintosh mouse button emulation id=12 [slave pointer (2)]
⎣ Virtual core keyboard id=3 [master keyboard (2)]
↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
↳ Power Button id=7 [slave keyboard (3)]
↳ USB Keyboard id=9 [slave keyboard (3)] For example, in the output above, the pointer device is Generic USB mouse . You can run the following command to list the properties that can be tuned: xinput --list-props 'Generic USB Mouse' The set of properties you're looking for are the “Evdev Wheel Emulation” ones. With the following settings, when the middle mouse button (button 2) is pressed, moving the mouse sends wheel events (4=up, 5=down, 6=left, 7=right). xinput --set-prop 'Generic USB Mouse' 'Evdev Wheel Emulation' 1
xinput --set-prop 'Generic USB Mouse' 'Evdev Wheel Emulation Button' 2
xinput --set-prop 'Generic USB Mouse' 'Evdev Wheel Emulation Axes' 6 7 4 5 You may want to tweak other parameters (inertia, timeout). You can put these commands in a script. Add #!/bin/sh as the very first line, and make the script file executable (e.g. chmod +x ~/bin/activate-wheel-emulation.sh ). Then add that script to the list of commands to run when your session starts ( gnome-session-properties lets you configure that). If you have root access and you want to make the change for all users (acceptable on a home machine), it's simpler to do it via the X.org server configuration file . As root, create a file called /etc/X11/xorg.conf.d/wheel-emulation.conf containing settings for the mouse driver . The settings are the same but they're organized a bit differently. Section "InputClass"
Identifier "Wheel Emulation"
MatchProduct "Generic USB Mouse"
Option "EmulateWheel" "on"
Option "EmulateWheelButton" "2"
Option "XAxisMapping" "6 7"
Option "YAxisMapping" "4 5"
EndSection | {
"source": [
"https://unix.stackexchange.com/questions/101867",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43951/"
]
} |
101,873 | I need to write a bash shell script that will check a range of IP
addresses to see if a host is “alive” at that address.I also need to list the IP numbers of those that are “alive”, and provide a summary showing the number of “alive” and “not alive” IP addresses. The range of addresses will all be in a “Class C” network. I'm having problems figuring out how to pass through each arg and get it pinging each individual address within the range. Below is a sample run of the script I intend to create. $ programName 192.168.42 18 22
Checking: 192.168.42.18 19 20 21 22
Live hosts:
192.168.42.21
192.168.42.22
There were:
2 alive hosts
3 not alive hosts
found through the use of 'ping'. | This Windows feature has never really made its way into the Unix world. In the Unix world, the primary purpose of the middle mouse button is to paste the clipboard content (or more precisely, text selected with the mouse, which is auto-copied). A couple of cross-platform applications such as Firefox and Chrome that support Linux-style middle mouse button under Windows and vice versa, but other than that most applications don't support this kind of fine-grained scrolling. Nonetheless, you can get fairly close at the system level. It is possible to set up a mouse button such that when it is pressed, mouse movements are transformed into wheel events. This is the same feature that you're used to, but you're likely to find the motion choppy, because applications receive wheel events, which are typically interpreted as scrolling by one whole line or column. To play with this configuration, use the xinput program (I don't know if there's a GUI frontend for it). First, run the following command to see the name of your pointing device: $ xinput --list
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ Generic USB Mouse id=8 [slave pointer (2)]
⎜ ↳ Macintosh mouse button emulation id=12 [slave pointer (2)]
⎣ Virtual core keyboard id=3 [master keyboard (2)]
↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
↳ Power Button id=7 [slave keyboard (3)]
↳ USB Keyboard id=9 [slave keyboard (3)] For example, in the output above, the pointer device is Generic USB mouse . You can run the following command to list the properties that can be tuned: xinput --list-props 'Generic USB Mouse' The set of properties you're looking for are the “Evdev Wheel Emulation” ones. With the following settings, when the middle mouse button (button 2) is pressed, moving the mouse sends wheel events (4=up, 5=down, 6=left, 7=right). xinput --set-prop 'Generic USB Mouse' 'Evdev Wheel Emulation' 1
xinput --set-prop 'Generic USB Mouse' 'Evdev Wheel Emulation Button' 2
xinput --set-prop 'Generic USB Mouse' 'Evdev Wheel Emulation Axes' 6 7 4 5 You may want to tweak other parameters (inertia, timeout). You can put these commands in a script. Add #!/bin/sh as the very first line, and make the script file executable (e.g. chmod +x ~/bin/activate-wheel-emulation.sh ). Then add that script to the list of commands to run when your session starts ( gnome-session-properties lets you configure that). If you have root access and you want to make the change for all users (acceptable on a home machine), it's simpler to do it via the X.org server configuration file . As root, create a file called /etc/X11/xorg.conf.d/wheel-emulation.conf containing settings for the mouse driver . The settings are the same but they're organized a bit differently. Section "InputClass"
Identifier "Wheel Emulation"
MatchProduct "Generic USB Mouse"
Option "EmulateWheel" "on"
Option "EmulateWheelButton" "2"
Option "XAxisMapping" "6 7"
Option "YAxisMapping" "4 5"
EndSection | {
"source": [
"https://unix.stackexchange.com/questions/101873",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52711/"
]
} |
101,916 | I'd like to copy the contents of directory 1 to directory 2.
However,
I'd like to only copy files (and not directories) from my directory 1.
How can I do that? cp dir1/* dir2/* then I still have the directories issue. Also, all my files don't have any extension, so *.* won't do the trick. | cp dir1/* dir2 cp will not copy directories unless explicitly told to do so (with --recursive for example, see man cp ). Note 1: cp will most likely exit with a non-zero status, but the files will have been copied anyway. This may be an issue when chaining commands based on exit codes: && , || , if cp -r dir1/* dir2; then ... , etc. (Thanks to contrebis for their comment on that issue ) Note 2 : cp expects the last parameter to be a single file name or directory. There really should be no wildcard * after the name of the target directory. dir2\* will be expanded by the shell just like dir1\* . Unexpected things will happen: If dir2 is empty and depending on your shell and settings: you may just get an error message, which is the best case scenario. dir2/* will be taken literally (looking for a file/directory named * ), which will probably lead to an error, too, unless * actually exists. dir2/* it will just be removed from the command entirely, leaving cp dir1/* . Which, depending on the expansion of dir1/* , may even destroy data: If dir1/* matches only one file or directory, you will get an error from cp . If dir1/* matches exactly two files, one will be overwritten by the other ( Bad ). If dir/* matches multiple files and the last match is a, you will get an error message. If the last match of dir/* is a directory all other matches will be moved into it. If dir2 is not empty, it again depends: If the last match of dir2/* is a directory, dir1/* and the other matches of dir2/* will be moved into. If the last match of dir2/* is a file, you probably will get an error message, unless dir1/* matches only one file. | {
"source": [
"https://unix.stackexchange.com/questions/101916",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52350/"
]
} |
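A hedged alternative to the cp answer above, for the case where dir1 also contains subdirectories or hidden files: find can select only regular files explicitly. This is just a sketch using common find/cp options ( -maxdepth is a GNU/BSD find extension, not POSIX): find dir1/ -maxdepth 1 -type f -exec cp -- {} dir2/ \; Unlike the glob, this also picks up dot-files in dir1, and it never passes a directory to cp, so the exit-status caveat from Note 1 above does not apply.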
101,921 | Quoting from the FreeBSD Handbook , in section 4.2.1. “Virtual Consoles”: While the system console can be used to interact with the system, a user working from the command line at the keyboard of a FreeBSD system will typically instead log into a virtual console. This is because system messages are configured by default to display on the system console. These messages will appear over the command or file that the user is working on, making it difficult to concentrate on the work at hand. What does it mean: log into a VC as system messages will display on the system console and will appear over the command making it difficult? | cp dir1/* dir2 cp will not copy directories unless explicitly told to do so (with --recursive for example, see man cp ). Note 1: cp will most likely exit with a non-zero status, but the files will have been copied anyway. This may be an issue when chaining commands based on exit codes: && , || , if cp -r dir1/* dir2; then ... , etc. (Thanks to contrebis for their comment on that issue ) Note 2 : cp expects the last parameter to be a single file name or directory. There really should be no wildcard * after the name of the target directory. dir2\* will be expanded by the shell just like dir1\* . Unexpected things will happen: If dir2 is empty and depending on your shell and settings: you may just get an error message, which is the best case scenario. dir2/* will be taken literally (looking for a file/directory named * ), which will probably lead to an error, too, unless * actually exists. dir2/* it will just be removed from the command entirely, leaving cp dir1/* . Which, depending on the expansion of dir1/* , may even destroy data: If dir1/* matches only one file or directory, you will get an error from cp . If dir1/* matches exactly two files, one will be overwritten by the other ( Bad ). If dir/* matches multiple files and the last match is a, you will get an error message. If the last match of dir/* is a directory all other matches will be moved into it. If dir2 is not empty, it again depends: If the last match of dir2/* is a directory, dir1/* and the other matches of dir2/* will be moved into. If the last match of dir2/* is a file, you probably will get an error message, unless dir1/* matches only one file. | {
"source": [
"https://unix.stackexchange.com/questions/101921",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48160/"
]
} |
101,949 | Can anyone help me set up this configuration? If I create a new pane, the new pane should start out in the same working directory as the pane I was just in. If I create a new window, the new window should start out in the home directory (or any other global default path). Is this possible with tmux 1.8? | Add -c "#{pane_current_path}" to the new-window / split-window commands. Example configuration using the default key bindings: bind c new-window -c "#{pane_current_path}"
bind % split-window -h -c "#{pane_current_path}"
bind '"' split-window -v -c "#{pane_current_path}" I found the pane_current_path trick here . It's also documented in upstream CHANGES . | {
"source": [
"https://unix.stackexchange.com/questions/101949",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30935/"
]
} |
102,008 | I would like to remove all leading and trailing spaces and tabs from each line in an output. Is there a simple tool like trim I could pipe my output into? Example file: test space at back
test space at front
TAB at end
TAB at front
sequence of some space in the middle
some empty lines with differing TABS and spaces:
test space at both ends | awk '{$1=$1;print}' or shorter: awk '{$1=$1};1' Would trim leading and trailing space or tab characters 1 and also squeeze sequences of tabs and spaces into a single space. That works because when you assign something to one of the fields , awk rebuilds the whole record (as printed by print ) by joining all fields ( $1 , ..., $NF ) with OFS (space by default). To also remove blank lines, change it to awk '{$1=$1};NF' (where NF tells awk to only print the records for which the N umber of F ields is non-zero). Do not do awk '$1=$1' as sometimes suggested as that would also remove lines whose first field is any representation of 0 supported by awk ( 0 , 00 , -0e+12 ...) 1 (and possibly other blank characters depending on the locale and the awk implementation) | {
"source": [
"https://unix.stackexchange.com/questions/102008",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20661/"
]
} |
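If you only want to strip the leading and trailing whitespace, without squeezing the runs of spaces inside each line the way the awk answer above does, a sed sketch like the following should work with GNU sed; [[:space:]] also matches tabs: sed 's/^[[:space:]]*//; s/[[:space:]]*$//' To additionally drop lines that end up empty, a variant is: sed 's/^[[:space:]]*//; s/[[:space:]]*$//; /./!d'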
102,089 | I'm using grep -e Peugeot -e PeuGeot carlist.txt to search through carlist.txt and pull out some items and I presumed that grep -e Peugeot -e PeuGeot carlist.txt | vi would pipe it through for me but this is what I get: Vim: Warning: Input is not from a terminal
Vim: Error reading input, exiting...
Vim: preserving files...
Vim: Finished. | Running vi or vim with '-' as an argument makes it read the file to edit from standard input. Hence: grep -e Peugeot -e PeuGeot carlist.txt | vi - will do what you need. | {
"source": [
"https://unix.stackexchange.com/questions/102089",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52808/"
]
} |
102,172 | I know how to execute multiple commands at same time but now what I need is to run the same command multiple times with some time delay between the command execution. Requirements I don't want to use a script for this A single one line command to accomplish this Needs to run on Linux Need to control the number of times the command will run | You can use this one liner to do what you're asking: $ cmd="..some command..."; for i in $(seq 5); do $cmd; sleep 1; done Example $ date
Fri Nov 22 01:37:43 EST 2013
$ cmd="echo"; for i in $(seq 5); do $cmd "count: $i"; sleep 1;done
count: 1
count: 2
count: 3
count: 4
count: 5
$ date
Fri Nov 22 01:37:51 EST 2013 You can adjust the sleep ... to what ever delay you'd like between commands, and change cmd=... to whatever command you want. Brace expansions vs. seq cmd You can also use brace expansions instead of the seq command to generate ranges of values. This is a bit more performant since the brace expansions will run in the same shell as the for loop. Using the subshell ( $(seq ..) ) is a little less performant, since it's spawning a subshell within the confines of the shell that the for loop is running. Example $ cmd="echo"; for i in {1..5}; do $cmd "count: $i"; sleep 1;done
count: 1
count: 2
count: 3
count: 4
count: 5 | {
"source": [
"https://unix.stackexchange.com/questions/102172",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38659/"
]
} |
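Building on the one-liner in the answer above, the loop can be wrapped in a small shell function so the count, delay and command stay adjustable. This is only a sketch; the function name repeat is chosen arbitrarily and the definition still fits on one line: repeat() { count=$1; delay=$2; shift 2; for i in $(seq "$count"); do "$@"; sleep "$delay"; done; }
repeat 5 1 echo "hello"    # runs `echo hello` 5 times, 1 second apart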
102,191 | When I'm using find . -type f -name "*.htm*" -o -name "*.js*" -o -name "*.txt" it finds all the types of file. But when I add -exec at the end: find . -type f -name "*.htm*" -o -name "*.js*" -o -name "*.txt" -exec sh -c 'echo "$0"' {} \; it seems it only prints .txt files. What am I doing wrong? Note: using MINGW (Git Bash) | find . -type f -name "*.htm*" -o -name "*.js*" -o -name "*.txt" is short for: find . '(' '(' -type f -a -name "*.htm*" ')' -o \
'(' -name "*.js*" ')' -o \
'(' -name "*.txt" ')' \
')' -a -print That is, because no action predicate is specified (only conditions ), a -print action is implicitly added for the files that match the conditions. (and, by the way, that would print non-regular .js files (the -type f only applies to .htm files)). While: find . -type f -name "*.htm*" -o -name "*.js*" -o -name "*.txt" \
-exec sh -c 'echo "$0"' {} \; is short for: find . '(' -type f -a -name "*.htm*" ')' -o \
'(' -name "*.js*" ')' -o \
'(' -name "*.txt" -a -exec sh -c 'echo "$0"' {} \; ')' For find (like in many languages), AND ( -a ; implicit when omitted) has precedence over OR ( -o ), and adding an explicit action predicate (here -exec ) cancels the -print implicit action seen above. Here, you want: find . -type f '(' -name "*.htm*" -o -name "*.js*" -o -name "*.txt" ')' \
-exec sh -c 'echo "$0"' {} \; Or: find . -type f '(' -name "*.htm*" -o -name "*.js*" -o -name "*.txt" ')' -exec sh -c '
for i do
echo "$i"
done' sh {} + To avoid running one sh per file. | {
"source": [
"https://unix.stackexchange.com/questions/102191",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10745/"
]
} |
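A quick way to see the precedence rule from the answer above in action — a throwaway sketch using temporary files (output order may vary): $ cd "$(mktemp -d)" && touch a.htm b.js c.txt
$ find . -name "*.htm*" -o -name "*.txt" -print
./c.txt
$ find . \( -name "*.htm*" -o -name "*.txt" \) -print
./a.htm
./c.txt In the first command the explicit -print binds only to the *.txt branch and suppresses the implicit -print everywhere else, which is exactly why a.htm disappears; the parentheses in the second command restore the expected behaviour.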
102,211 | I want to know how to use rsync to sync two folders recursively, but
I only need to update new or modified files (only the content, not the owner, group or timestamp) and I want to delete the files that do not exist in the source. | I think you can use the -no- options to rsync to NOT copy the ownership or permissions of the files you're sync'ing. Excerpt From rsync Man Page --no-OPTION
You may turn off one or more implied options by prefixing the option
name with "no-". Not all options may be pre‐fixed with a "no-":
only options that are implied by other options (e.g. --no-D,
--no-perms) or have different defaults in various circumstances (e.g.
--no-whole-file, --no-blocking-io, --no-dirs). You may specify
either the short or the long option name after the "no-" prefix (e.g.
--no-R is the same as --no-relative).
For example: if you want to use -a (--archive) but don’t want -o
(--owner), instead of converting -a into -rlptgD, you could specify
-a --no-o (or -a --no-owner).
The order of the options is important: if you specify --no-r -a, the
-r option would end up being turned on, the opposite of -a
--no-r. Note also that the side-effects of the --files-from
option are NOT positional, as it affects the default state of several
options and slightly changes the meaning of -a (see the --files-from
option for more details). Ownership & Permissions Looking through the man page I believe you'd want to use something like this: $ rsync -avz --no-perms --no-owner --no-group ... To delete files that don't exist you can use the --delete switch: $ rsync -avz --no-perms --no-owner --no-group --delete .... Timestamps As for the timestamp I don't see a way to keep this without altering how you'd do the comparison of SOURCE vs. DEST files. You might want to tell rsync to ignore timestamps using this switch: -I, --ignore-times
Normally rsync will skip any files that are already the same size
and have the same modification timestamp. This option turns off this
"quick check" behavior, causing all files to be updated. Update For timestamps, --no-times might do what you're looking for. | {
"source": [
"https://unix.stackexchange.com/questions/102211",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52884/"
]
} |
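Putting the pieces of the answer above together, one possible invocation — presented as a sketch, not a tested recipe — could look like this. Note that once timestamps are no longer preserved, adding --checksum (or -I) keeps later runs from relying on mtime for the quick check: rsync -av --no-perms --no-owner --no-group --no-times --checksum --delete src/ dest/ The trailing slash on src/ makes rsync copy the contents of the directory rather than the directory itself.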
102,427 | I've been trying to find a command to remove all files in a folder except those of a certain filetype, but I don't seem to be having any luck. What I've tried so far: set extended_glob
rm !(*.dmg)
# this returns zsh:number expected
rm ./^*.dmg
# this returns no matches found The version of zsh I'm using is zsh 5.0.2 (x86_64-apple-darwin13.0.1) . | The extended_glob option gives you zsh's own extended glob syntax . setopt extended_glob
rm -- ^*.dmg
rm -- ^*.(dmg|txt) This removes files without an extension (e.g. README ). If you want to keep those files, you can use the ~ operator to limit the matches: setopt extended_glob
rm -- *.*~*.dmg If you want to delete files in subdirectories as well, you can use ** for recursive globbing. Pass the . glob qualifier to restrict the matching to regular files, or use ^/ instead of . to match all non-directories (including e.g. symbolic links). (Note that rm -r wouldn't help you since it would either delete a directory and all its contents, or not descend into a directory at all.) rm -- **/^*.(dmg|txt)(.) You can set the ksh_glob option to get ksh globs . Beware that in the common case where the negative pattern is the last thing in the word, zsh may parse the parentheses as glob qualifiers (it doesn't do this in ksh emulation mode). setopt ksh_glob
rm -- !(*.dmg|*.txt)
setopt no_bare_glob_qual
rm -- !(*.dmg) | {
"source": [
"https://unix.stackexchange.com/questions/102427",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2406/"
]
} |
102,430 | I have a directory with multiple img files and some of them are identical but they all have different names. I need to remove duplicates but with no external tools only with a bash script. I'm a beginner in Linux. I tried nested for loop to compare md5 sums and depending on the result remove but something is wrong with the syntax and it doesn't work. any help? what I've tried is... for i in directory_path; do
sum1='find $i -type f -iname "*.jpg" -exec md5sum '{}' \;'
for j in directory_path; do
sum2='find $j -type f -iname "*.jpg" -exec md5sum '{}' \;'
if test $sum1=$sum2 ; then rm $j ; fi
done
done I get: test: too many arguments | There are quite a few problems in your script. First, in order to assign the result of a command to a variable you need to enclose it either in backtics ( `command` ) or, preferably, $(command) . You have it in single quotes ( 'command' ) which instead of assigning the result of your command to your variable, assigns the command itself as a string. Therefore, your test is actually: $ echo "test $sum1=$sum2"
test find $i -type f -iname "*.jpg" -exec md5sum {} \;=find $j -type f -iname "*.jpg" -exec md5sum {} \; The next issue is that the command md5sum returns more than just the hash: $ md5sum /etc/fstab
46f065563c9e88143fa6fb4d3e42a252 /etc/fstab You only want to compare the first field, so you should parse the md5sum output by passing it through a command that only prints the first field: find $i -type f -iname "*.png" -exec md5sum '{}' \; | cut -f 1 -d ' ' or find $i -type f -iname "*.png" -exec md5sum '{}' \; | awk '{print $1}' Also, the find command will return many matches, not just one and each of those matches will be duplicated by the second find . This means that at some point you will be comparing the same file to itself, the md5sum will be identical and you will end up deleting all your files (I ran this on a test dir containing a.jpg and b.jpg ): for i in $(find . -iname "*.jpg"); do
for j in $(find . -iname "*.jpg"); do
echo "i is: $i and j is: $j"
done
done
i is: ./a.jpg and j is: ./a.jpg ## BAD, will delete a.jpg
i is: ./a.jpg and j is: ./b.jpg
i is: ./b.jpg and j is: ./a.jpg
i is: ./b.jpg and j is: ./b.jpg ## BAD will delete b.jpg You don't want to run for i in directory_path unless you are passing an array of directories. If all these files are in the same directory, you want to run for i in $(find directory_path -iname "*.jpg" ) to go through all the files. It is a bad idea to use for loops with the output of find. You should use while loops or globbing : find . -iname "*.jpg" | while read i; do [...] ; done or, if all your files re in the same directory: for i in *jpg; do [...]; done Depending on your shell and the options you have set, you can use globbing even for files in subdirectories but let's not get into that here. Finally, you should also quote your variables else directory paths with spaces will break your script. File names can contain spaces, new lines, backslashes and other weird characters, to deal with those correctly in a while loop you'll need to add some more options. What you want to write is something like: find dir_path -type f -iname "*.jpg" -print0 | while IFS= read -r -d '' i; do
find dir_path -type f -iname "*.jpg" -print0 | while IFS= read -r -d '' j; do
if [ "$i" != "$j" ]
then
sum1=$(md5sum "$i" | cut -f 1 -d ' ' )
sum2=$(md5sum "$j" | cut -f 1 -d ' ' )
[ "$sum1" = "$sum2" ] && rm "$j"
fi
done
done An even simpler way would be: find directory_path -name "*.jpg" -exec md5sum '{}' + |
perl -ane '$k{$F[0]}++; system("rm $F[1]") if $k{$F[0]}>1' A better version that can deal with spaces in file names: find directory_path -name "*.jpg" -exec md5sum '{}' + |
perl -ane '$k{$F[0]}++; system("rm \"@F[1 .. $#F]\"") if $k{$F[0]}>1' This little Perl script will run through the results of the find command (i.e. the md5sum and file name). The -a option for perl splits input lines at whitespace and saves them in the F array, so $F[0] will be the md5sum and $F[1] the file name. The md5sum is saved in the hash k and the script checks if the hash has already been seen ( if $k{$F[0]}>1 ) and deletes the file if it has ( system("rm $F[1]") ). While that will work, it will be very slow for large image collections and you cannot choose which files to keep. There are many programs that handle this in a more elegant way including: fdupes fslint Various other options listed here . | {
"source": [
"https://unix.stackexchange.com/questions/102430",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53012/"
]
} |
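The answer above mentions fdupes as a more robust tool; for reference, a typical invocation looks roughly like the following (flags shown from memory, so double-check fdupes --help before deleting anything): fdupes -r dir/      # list sets of duplicate files under dir/
fdupes -rdN dir/    # delete duplicates, keeping the first file of each set, without prompting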
102,441 | To the best of my ability, I have looked everywhere in order to find a .deb file however even on its own website ( bastille-linux.org ) it is recommended to use aptitude which no longer seem to contain Bastille in its repository. How can I download Bastille's installer? | There are quite a few problems in your script. First, in order to assign the result of a command to a variable you need to enclose it either in backtics ( `command` ) or, preferably, $(command) . You have it in single quotes ( 'command' ) which instead of assigning the result of your command to your variable, assigns the command itself as a string. Therefore, your test is actually: $ echo "test $sum1=$sum2"
test find $i -type f -iname "*.jpg" -exec md5sum {} \;=find $j -type f -iname "*.jpg" -exec md5sum {} \; The next issue is that the command md5sum returns more than just the hash: $ md5sum /etc/fstab
46f065563c9e88143fa6fb4d3e42a252 /etc/fstab You only want to compare the first field, so you should parse the md5sum output by passing it through a command that only prints the first field: find $i -type f -iname "*.png" -exec md5sum '{}' \; | cut -f 1 -d ' ' or find $i -type f -iname "*.png" -exec md5sum '{}' \; | awk '{print $1}' Also, the find command will return many matches, not just one and each of those matches will be duplicated by the second find . This means that at some point you will be comparing the same file to itself, the md5sum will be identical and you will end up deleting all your files (I ran this on a test dir containing a.jpg and b.jpg ): for i in $(find . -iname "*.jpg"); do
for j in $(find . -iname "*.jpg"); do
echo "i is: $i and j is: $j"
done
done
i is: ./a.jpg and j is: ./a.jpg ## BAD, will delete a.jpg
i is: ./a.jpg and j is: ./b.jpg
i is: ./b.jpg and j is: ./a.jpg
i is: ./b.jpg and j is: ./b.jpg ## BAD will delete b.jpg You don't want to run for i in directory_path unless you are passing an array of directories. If all these files are in the same directory, you want to run for i in $(find directory_path -iname "*.jpg" ) to go through all the files. It is a bad idea to use for loops with the output of find. You should use while loops or globbing : find . -iname "*.jpg" | while read i; do [...] ; done or, if all your files re in the same directory: for i in *jpg; do [...]; done Depending on your shell and the options you have set, you can use globbing even for files in subdirectories but let's not get into that here. Finally, you should also quote your variables else directory paths with spaces will break your script. File names can contain spaces, new lines, backslashes and other weird characters, to deal with those correctly in a while loop you'll need to add some more options. What you want to write is something like: find dir_path -type f -iname "*.jpg" -print0 | while IFS= read -r -d '' i; do
find dir_path -type f -iname "*.jpg" -print0 | while IFS= read -r -d '' j; do
if [ "$i" != "$j" ]
then
sum1=$(md5sum "$i" | cut -f 1 -d ' ' )
sum2=$(md5sum "$j" | cut -f 1 -d ' ' )
[ "$sum1" = "$sum2" ] && rm "$j"
fi
done
done An even simpler way would be: find directory_path -name "*.jpg" -exec md5sum '{}' + |
perl -ane '$k{$F[0]}++; system("rm $F[1]") if $k{$F[0]}>1' A better version that can deal with spaces in file names: find directory_path -name "*.jpg" -exec md5sum '{}' + |
perl -ane '$k{$F[0]}++; system("rm \"@F[1 .. $#F]\"") if $k{$F[0]}>1' This little Perl script will run through the results of the find command (i.e. the md5sum and file name). The -a option for perl splits input lines at whitespace and saves them in the F array, so $F[0] will be the md5sum and $F[1] the file name. The md5sum is saved in the hash k and the script checks if the hash has already been seen ( if $k{$F[0]}>1 ) and deletes the file if it has ( system("rm $F[1]") ). While that will work, it will be very slow for large image collections and you cannot choose which files to keep. There are many programs that handle this in a more elegant way including: fdupes fslint Various other options listed here . | {
"source": [
"https://unix.stackexchange.com/questions/102441",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49621/"
]
} |
102,484 | This question concerns the yes command found in UNIX and Linux machines: Basically, what is the point (if any) and history of this tool? Are there practical applications for it? Can an example be shown where it is useful in a script or chained (via pipe or redirect) with another tool? The manpage is below: YES(1) BSD General Commands Manual YES(1)
NAME
yes -- be repetitively affirmative
SYNOPSIS
yes [expletive]
DESCRIPTION
yes outputs expletive, or, by default, ``y'', forever.
HISTORY
The yes command appeared in 4.0BSD.
4th Berkeley Distribution June 6, 1993 4th Berkeley Distribution Sample output: $ yes why
why
why
why
why
^Cwhy | It's usually used as a quick and dirty way to provide answers to an interactive script: yes | rm -r large_directory will not prompt you about any file being removed. Of course in the case of rm , you can always supply -f to make it steamroll the directory removal, but not all tools are so forgiving. Update A more relevant example of this that I recently came across is when you are fsck ing a filesystem and you don't want to bother answering y when prompted before fixing each error: yes | fsck /dev/foo | {
"source": [
"https://unix.stackexchange.com/questions/102484",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53056/"
]
} |
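Two more small illustrations of the yes command from the answer above — besides feeding prompts, it is handy for generating repetitive test data or a quick single-core load: yes "some test line" | head -n 1000 > testdata.txt   # 1000 identical lines; yes exits when head closes the pipe
yes > /dev/null &                                    # pegs one CPU core; stop it with `kill %1`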
102,552 | I would like to check the current volume level from the CLI on my Mac. I know I can set it like this: osascript -e 'set volume <N>' But that doesn't seem to work when trying to get the current volume level. $ osascript -e 'get volume'
4:10: execution error: The variable volume is not defined. (-2753) | You should find that get volume settings will return an object containing among other things the output volume and the alert volume. So for example you could do this to retrieve the entire object: osascript -e 'get volume settings' or rather maybe this to grab just the output volume (e.g. rather than the alert volume): osascript -e 'set ovol to output volume of (get volume settings)' ... but note that not all audio devices will have direct software control over volume settings. For example your display audio should have control; however, a firewire or USB i/o board probably would not have those settings under software control (since they might be physical knobs). If the particular setting is not under the control of software then it will show up in the object returned from get volume settings as "missing value" or something like that. | {
"source": [
"https://unix.stackexchange.com/questions/102552",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4143/"
]
} |
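Combining the get and set forms from the answer above, a small sketch that bumps the output volume by 10 points; the 0–100 scale for 'output volume' is assumed here, and values past 100 should simply be clamped by the system: vol=$(osascript -e 'output volume of (get volume settings)')
osascript -e "set volume output volume $((vol + 10))"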
102,611 | I'd like to execute the opposite of: find . -name "*2013*" Find all the files in the current directory that don't contain the string "2013" in their names. How can I do that? | Simply: find . ! -name '*2013*' Add a ! -type d to also exclude the files of type directory (like . itself), or -type f to include only regular files, excluding all other types of files (directories, fifos, symlinks, devices, sockets...). Beware however that * matches a sequence of 0 or more characters . So it could report file names that contain 2013 if that 2013 was preceded or followed by something that cannot be fully decoded as valid characters in the current locale. That can happen if you're in a locale where the characters can be encoded on more than one byte (like in UTF-8) for file names that are encoded in a different encoding. For instance, in a UTF-8 locale, it would report a Stéphane2013 file if that é had been encoded in the iso8859-15 character set (as the 0xe9 byte). Best would be to make sure the file names are encoded in the locale's character set, but if you can't guarantee it, a work around is to run find in the C locale: LC_ALL=C find . ! -name '*2013*' | {
"source": [
"https://unix.stackexchange.com/questions/102611",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52350/"
]
} |
102,613 | I know how to create an empty file: touch /var/tmp/nullbytes but how can I create a 1MB file that only contains nullbytes on the commandline with bash? | With GNU truncate : truncate -s 1M nullbytes (assuming nullbytes didn't exist beforehand) would create a 1 mebibyte sparse file. That is a file that appears filled with zeros but that doesn't take any space on disk. Without truncate , you can use dd instead: dd bs=1048576 seek=1 of=nullbytes count=0 (with some dd implementations, you can replace 1048576 with 1M ) If you'd rather the disk space be allocated , on Linux and some filesystems, you could do: fallocate -l 1M nullbytes That allocates the space without actually writing data to the disk (the space is reserved but marked as uninitialised). dd < /dev/zero bs=1048576 count=1 > nullbytes Will actually write the zeros to disk. That is the least efficient, but if you need your drives to spin when accessing that file, that's the one you'll want to go for. Or @mikeserv's way to trick dd into generating the NUL bytes: dd bs=1048576 count=1 conv=sync,noerror 0> /dev/null > nullbytes An alternative with GNU head that doesn't involve having to specify a block size (1M is OK, but 10G for instance wouldn't): head -c 1M < /dev/zero > nullbytes Or to get a progress bar: pv -Ss 1M < /dev/zero > nullbytes | {
"source": [
"https://unix.stackexchange.com/questions/102613",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20661/"
]
} |
102,624 | I performed an ls -la on directory on my CentOS 6.4 server here and the permissions for a given file came out as: -rwxr-xr-x. I understand what -rwxr-xr-x means, what I don't understand is the . after the last attribute. Can someone explain it to me? Is it harmful in any way? Can it be removed? | GNU ls uses a . character to indicate a file with an SELinux
security context, but no other alternate access method. -- From ls man page ( info coreutils 'ls invocation' ). | {
"source": [
"https://unix.stackexchange.com/questions/102624",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17169/"
]
} |
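To actually see the SELinux security context that the trailing dot in the answer above refers to, ls has a -Z flag on such systems: ls -Z /path/to/file The extra column shows the user:role:type:level context. The dot itself is purely informational; there is normally no reason to remove the context, and tools such as restorecon exist to reset it to the policy default if it is wrong.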
102,647 | I have the below list of files aro_tty-mIF-45875564pmo_opt
aro_tty-mIF-45875664pmo_opt
aro_tty-mIF-45875964pmo_opt
aro_tty-mIF-45875514pmo_opt
aro_tty-mIF-45875524pmo_opt that I need to rename to aro_tty-mImpFRA-45875564pmo_opt
aro_tty-mImpFRA-45875664pmo_opt
aro_tty-mImpFRA-45875964pmo_opt
aro_tty-mImpFRA-45875514pmo_opt
aro_tty-mImpFRA-45875524pmo_opt | Most standard shells provide a way to do simple text substitution within shell variables. http://tldp.org/LDP/abs/html/parameter-substitution.html explains as follows: ${var/Pattern/Replacement}
First match of Pattern, within var replaced with Replacement. So use this script to loop through all the appropriate files and rename each of them: for file in aro_tty-mIF-*_opt
do
mv -i "${file}" "${file/-mIF-/-mImpFRA-}"
done I have added a -i option so you have the chance to confirm each renaming operation. As always, you should make a backup of all your files before doing any large amount of renaming or deleting. | {
"source": [
"https://unix.stackexchange.com/questions/102647",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53186/"
]
} |
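If the Perl rename utility (sometimes installed as prename or perl-rename) happens to be available, the bulk rename from the answer above can also be written as a one-liner. This is only a sketch, since the rename shipped on some distributions is the incompatible util-linux version: rename -n 's/-mIF-/-mImpFRA-/' aro_tty-mIF-*pmo_opt The -n flag performs a dry run that prints what would be renamed; drop it to actually rename the files.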
102,648 | I have a Samsung laptop (Chronos s7) with one SATA hard disk on bus ata:1 , which is detected as /dev/sda , an 8G SSD on ata:2 , /dev/sdb , and various other devices on the rest of SATA interface. The problem is that the SSD disk is soldered to the main board (unmovable) busted (it just gives I/O errors for any operation) it does not appear in the bios (probably because it is broken) Now this disk: delays the boot three to five minutes trying to probe the failing disk, which is annoying; but the most annoying thing is that the system fails to suspend due to /dev/sdb failing. Notice that I can live with the delay at boot --- what worries me is the resume/suspend thing. So the question is: can I tell the kernel to avoid even probing the device on ata:2? In older kernel (<3.0), when I was still able to dig a bit into the source, there was a command-line parameter of the style hdb=ignore that would have done the trick. I have tried all the tricks proposed below with udev and libata:force kernel parameters, to no avail. Specifically, the following does not work: Adding to one of the following /etc/udev/rules.d/ a file (in early execution like 00-ignoredisk.rules or in late as 99-ignoredisk.rules or in both places) SUBSYSTEMS=="scsi", DRIVERS=="sd", ATTRS{rev}=="SSD ", ATTRS{model}=="SanDisk iSSD P4 ", ENV{UDISKS_IGNORE}="1" nor KERNEL=="sdb", ENV{UDISKS_IGNORE}="1" nor a lot of intermediate solutions --- this makes the disk not accessible after boot, but it is probed at boot, and still checked when suspending --- causing the suspend to fail. Editing the system files /lib/udev/rules.d/60-persistent-storage.rules (and udisks , udisks2 ) changing KERNEL=="ram*|loop*|fd*|nbd*|gnbd*|dm-|md", GOTO="persistent_storage_end" to KERNEL=="ram*|loop*|fd*|nbd*|gnbd*|dm-|md|sdb*", GOTO="persistent_storage_end" again, this has some effect, masking the disk from userspace, but the disk is still visible to the kernel. Booting with all the possible combinations (well, a lot of them) of the libata:force parameters (found for example here ) in order to disable DMA, lower speed or whatever about the failing disk --- does not work. The parameter is used, but the disk is still probed and fails. Full udevadm info -a -n /dev/sdb pasted to http://paste.ubuntu.com/6186145/ smartctl -i /dev/sdb -T permissive gives: root@samsung-romano:/home/romano# smartctl -i /dev/sdb -T permissive
smartctl 5.43 2012-06-30 r3573 [x86_64-linux-3.8.0-31-generic] (local build)
Copyright (C) 2002-12 by Bruce Allen, http://smartmontools.sourceforge.net
Vendor: /1:0:0:0
Product:
User Capacity: 600,332,565,813,390,450 bytes [600 PB]
Logical block size: 774843950 bytes
>> Terminate command early due to bad response to IEC mode page which is clearly wrong. Nevertheless: root@samsung-romano:/home/romano# fdisk -b 512 -C 970 -H 256 -S 63 /dev/sdb
fdisk: unable to read /dev/sdb: Input/output error (SSD data from http://ubuntuforums.org/showthread.php?t=1935699&p=11739579#post11739579 ). | libata does not have a noprobe option at all; that was a legacy IDE option... But I went and wrote a kernel patch for you that implements it. It should apply to many kernels very easily (the line above it was added 2013-05-21/v3.10-rc1*, but can be safely applied manually without that line). Update The patch is now upstream (at least in 3.12.7 stable kernel). It is in the standard kernel distributed with Ubuntu 14.04 (which is based on 3.13-stable). Once the patch is installed, adding libata.force=2.00:disable to the kernel boot parameters will hide the disk from the Linux kernel. Double check that the number is correct; searching for the device name can help (obviously, you have to check the kernel messages before adding the boot parameters): (0)samsung-romano:~% dmesg | grep iSSD
[ 1.493279] ata2.00: ATA-8: SanDisk iSSD P4 8GB, SSD 9.14, max UDMA/133
[ 1.494236] scsi 1:0:0:0: Direct-Access ATA SanDisk iSSD P4 SSD PQ: 0 ANSI: 5 The important number is the ata2.00 in the first line above. | {
"source": [
"https://unix.stackexchange.com/questions/102648",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52205/"
]
} |
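To make the libata.force=2.00:disable parameter from the answer above persistent on a GRUB 2 based system (Debian/Ubuntu layout assumed; keep whatever options are already present and just append the new one): # /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash libata.force=2.00:disable" Then run sudo update-grub and reboot for the change to take effect.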
102,660 | For a while I have been formatting my hosts file like this. Notice the same ip on two lines: e.f.g.h foo.mydevsite.com
e.f.g.h foo.myOtherDevSite.com I read recently that aliases are supposed to be consolidated on one line: e.f.g.h foo.mydevsite.com foo.myOtherDevSite.com However, I don't like this method because you can't easily comment out certain aliases or add comments to particular aliases, like this: a.b.c.d foo.mydevsite.com # myDevSite on box 1
# a.b.c.d foo.myOtherSite.com # myOtherSite on box 1
a.b.c.d ubuntuBox
e.f.g.h foo.myOtherSite.com # myOtherSite testing environment So far this has been working fine; is there a problem with this? | I found this thread that discusses doing something along these lines. The thread is pretty adamant about not having multiple lines like this in the /etc/hosts file. excerpt - Re: /etc/hosts: Two lines with the same IP address? No, it will not. The resolvers stop at the first resolution. Having
something like: 127.0.0.1 localhost.localdomain localhost
127.0.0.1 somenode.somedom.com somenode Will not do what you are talking about. BUT having: 127.0.0.1 somenode.somedom.com somenode
127.0.0.1 localhost.localdomain localhost Will cause all kinds of havoc. Including forwarding. I would generally not do what you're attempting. If you need more evidence the man page even says not to do this: excerpt man hosts This manual page describes the format of the /etc/hosts file. This file is a simple text file that associates IP addresses with hostnames, one line per IP address. For each host a single line should be present with the following information: IP_address canonical_hostname [aliases...] All this being said, if your hostnames are FQDN and they don't overlap then you're probably safe to do what you're doing. Just keep in mind that if there is any overlap such as what was mentioned in the thread above, then you may run into resolving issues. | {
"source": [
"https://unix.stackexchange.com/questions/102660",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30551/"
]
} |
102,678 | I intend to use Ruby when programming my Raspberry Pi which is running the Debian based Occidentals. Via SSH, I executed: curl -L https://get.rvm.io | bash -s stable --ruby which downloaded the ruby source and compiled it. It tool about 2 hours to complete. I would like to use ruby via AdaFruit's WebIDE - http://learn.adafruit.com/webide/ . However, the ruby installation I performed via SSH created a folder called .rvm in the pi user's directory, whereas the WebIDE uses the webide user account. What is the best way to allow the webide user account access to ruby? I tried moving the .rvm folder from /home/pi to /etc/share , but this didn't work - when trying to use ruby at a terminal I got the error "ERROR: Missing RVM environment file: '/home/pi/.rvm/environments/ruby-2.0.0-p353'" so I must've broken some link. I'm holding back running another 2hr install for the webide user as I'm sure there's a better way! | Don't dismiss RVM's value You can use the repository version of Ruby but I would recommend going another way and using RVM to manage Ruby. I realize it might seem like it's slowing you down, but the version of Ruby that's deployed via the repositories though usable will often lead to problems down the road. It's generally best to create dedicated versions of interpreters and any required libraries (Gems) that can be dedicated to a particular application and/or use case. RVM provides the ability to install for single user (which is what you did) as well as do a multi-user installation. $ curl -L https://get.rvm.io | sudo bash -s stable Running the installation this way will automatically trigger RVM to do a multi-user installation which will install the software under /usr/local/rvm . From here the software can be accessed by anyone that's in the Unix group rvm . $ sudo usermod -a -G rvm <user> Where <user> would be the user webide . Installing a Ruby Now add the following to each user's $HOME/.bashrc . I generally put this at the end of the file: [[ -s /usr/local/rvm/scripts/rvm ]] && source /usr/local/rvm/scripts/rvm With that, you'll want to logout and log back in. NOTE1: It isn't enough to start another tab in gnome-terminal, it needs to be a newly logged in session. This is so that the group you just added this user to, get's picked up. NOTE2: You'll probably not have to add the above to your $HOME/.bashrc if you find you have the following file installed here already, this does the above plus more for all users that are in the group rvm on the system. $ ls -l /etc/profile.d/rvm.sh
-rwxr-xr-x 1 root root 1698 Nov 27 21:14 /etc/profile.d/rvm.sh Once logged in you'll need to install a Ruby. You can do this using the following steps, as user webide . What versions available to install? $ rvm list known | less
...
# MRI Rubies
[ruby-]1.8.6[-p420]
[ruby-]1.8.7[-p374]
[ruby-]1.9.1[-p431]
[ruby-]1.9.2[-p320]
[ruby-]1.9.3[-p484]
[ruby-]2.0.0-p195
[ruby-]2.0.0[-p353]
[ruby-]2.1.0-preview2
[ruby-]2.1.0-head
ruby-head
... NOTE: The 1st time you install a Ruby you should do this with a user that has sudo rights so that dependencies can be installed. For example on Ubuntu, you'll see this type of activity. After these are installed other users, such as webide , should be able to install additional Rubies too, into the directory /usr/local/rvm . Installing requirements for ubuntu.
Updating system..............................................................................................................
Installing required packages: libreadline6-dev, zlib1g-dev, libssl-dev, libyaml-dev, libsqlite3-dev, sqlite3, autoconf, libgdbm-dev, libncurses5-dev, automake, libtool, bison, libffi-dev...............................................................................................
Requirements installation successful. Viewing installed versions $ rvm list
rvm rubies
* ruby-1.9.3-p484 [ x86_64 ]
# => - current
# =* - current && default
# * - default Installing a 2nd Ruby $ whoami
webide
$ rvm install 2.0.0-p195
...
ruby-2.0.0-p195 - #validate binary
ruby-2.0.0-p195 - #setup
Saving wrappers to '/usr/local/rvm/wrappers/ruby-2.0.0-p195'........
ruby-2.0.0-p195 - #importing default gemsets, this may take time.................. Now when we list what's installed: $ rvm list
rvm rubies
* ruby-1.9.3-p484 [ x86_64 ]
ruby-2.0.0-p195 [ x86_64 ]
# => - current
# =* - current && default
# * - default From the above we can see that user webide was able to install a Ruby. Setting a default for all rvm users $ rvm use ruby-2.0.0-p195 --default
Using /usr/local/rvm/gems/ruby-2.0.0-p195
$ rvm list
rvm rubies
ruby-1.9.3-p484 [ x86_64 ]
=* ruby-2.0.0-p195 [ x86_64 ]
# => - current
# =* - current && default
# * - default Logging in as another user that's in the group rvm we can see the effects of making ruby-2.0.0-p195 the default. $ rvm list
rvm rubies
=> ruby-1.9.3-p484 [ x86_64 ]
* ruby-2.0.0-p195 [ x86_64 ]
# => - current
# =* - current && default
# * - default So this user is using, ruby-1.9.3-p484 , and he's now configured to use ruby-2.0.0-p195 as the default too. Slow downloads/installs If you're experiencing a slow download you might want to make use of the offline installation method instead. This will allow you to do a re-install later on. Or perhaps the download via this system is problematic, and you could download the RVM installer on one system, and then use scp to copy the installer to this system afterwards. $ curl -L https://github.com/wayneeseguin/rvm/tarball/stable -o rvm-stable.tar.gz See here, RVM in offline mode for full details. References RVM ArchLinux Wiki Installing RVM - Quick (guided) Install | {
"source": [
"https://unix.stackexchange.com/questions/102678",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53213/"
]
} |
102,691 | How can I get the age of a given file in, at least, days? I'm well aware of ls -lh and similar commands. I want something that will work sort of like this: getfage <FILE> # prints out '12d' (12 days) Also, this needs to be somewhat cross-platform since I'd also like to use this under Mac OS X, but the primary use-case is on my Linux-box. NOTE Since Linux doesn't track creation time, I'm looking for two-fold solution: one for mtime (linux)--that is the last time said file was modified --and one for Mac OS X, which can either deal with mtime or creation time. | Unix doesn't keep track of a creation date. The only information that's available is typically the last times the files was: Accessed Modified Changed Access - the last time the file was read Modify - the last time the file was modified (content has been modified) Change - the last time meta data of the file was changed (e.g. permissions) ( From this answer ) You can get dates related to a particular file using the stat command. Example $ stat ffmpeg
File: `ffmpeg'
Size: 19579304 Blocks: 38248 IO Block: 4096 regular file
Device: fd02h/64770d Inode: 10356770 Links: 1
Access: (0755/-rwxr-xr-x) Uid: ( 500/ saml) Gid: ( 501/ saml)
Access: 2013-11-26 10:49:09.908261694 -0500
Modify: 2013-11-02 17:05:13.357573854 -0400
Change: 2013-11-02 17:05:13.357573854 -0400 OSX and HFS If you're using OSX the filesystem that's used under that Unix is HFS . This is one of the few (that I'm aware of) that keeps the creation date within the filesystem, along with modification time etc. similar to other Unixes. excerpt A File Record stores a variety of metadata about the file including its CNID, the size of the file, three timestamps (when the file was created, last modified, last backed up), the first file extents of the data and resource forks and pointers to the file's first data and resource extent records in the Extent Overflow File. The File Record also stores two 16 byte fields that are used by the Finder to store attributes about the file including things like its creator code, type code, the window the file should appear in and its location within the window. Timestamps Time stamps are always maintained in the filesystem, so you're limited by whatever time tracking is offered through them (EXT3, EXT4, XFS, etc.). Filesystems If you're ever curious take a look at this Wikipedia topic titled: Comparison of file systems . It has the most extensive list of filesytems I'm aware of along with a nice table of the various features and the status of whether it's supported or not within a given filesystem. References How to find creation date of file? How do I do a ls and then sort the results by date created? List files created on Sundays Get file created/creation time? Why does Unix time start at 1970-01-01? | {
"source": [
"https://unix.stackexchange.com/questions/102691",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43029/"
]
} |
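To get something close to the getfage behaviour asked about above — age in days based on mtime — a small sketch; stat's flags differ between GNU coreutils (Linux) and BSD/macOS, so both variants are shown, with $FILE as a placeholder for the file name: # Linux (GNU stat): %Y is the mtime in seconds since the epoch
echo "$(( ( $(date +%s) - $(stat -c %Y "$FILE") ) / 86400 ))d"
# macOS / BSD stat: %m is the mtime; %B gives the birth (creation) time where the filesystem records it
echo "$(( ( $(date +%s) - $(stat -f %m "$FILE") ) / 86400 ))d"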