source_id | question | response | metadata
---|---|---|---|
192,048 | I want to mount my ASUS Android device in Linux Mint 17.1 (MTP). My PC didn't see my device, so I tried following the answer in this page: http://forum.xda-developers.com/showthread.php?t=1077377 But when I click on "connect" I get the following error: Listing raw device(s) ; mtpfs: symbol lookup error: mtpfs: undefined symbol: LIBMTP_Detect_Raw_Devices | Linux Mint 17.1 Cinnamon 64-bit or a recent Ubuntu/Debian distribution. sudo apt-get update ; sudo apt-get install mtp-tools Unplug the USB cable and reconnect your Android device. The Android device should now be recognised as expected. If you are still having problems, please try the following. Check if you have other MTP software installed; run the following in a terminal window: dpkg --get-selections | grep -v deinstall | grep -i mtp This will list any packages relating to MTP. You should only need mtp-tools installed for normal, everyday use. Ignore any lines starting with "libmtp" and uninstall any other packages listed except mtp-tools, for example with the following command, replacing 'mtp-server' with what you have listed from the previous instruction. sudo apt-get remove mtp-server Then reinstall the mtp-tools package with the following command: sudo apt-get install --reinstall mtp-tools You should now be able to access your Android device using MTP as expected from a Removable Storage device. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/192048",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107743/"
]
} |
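A minimal sketch of the cleanup sequence described above, assuming a Debian/Ubuntu-based system where mtp-tools is the only MTP package you want to keep; 'mtp-server' is a placeholder for whatever conflicting package the listing actually reports:

```bash
#!/bin/sh
# List installed packages related to MTP (ignore the libmtp* libraries).
dpkg --get-selections | grep -v deinstall | grep -i mtp

# Remove a conflicting package found above (placeholder name),
# then reinstall mtp-tools and reconnect the device.
sudo apt-get remove mtp-server
sudo apt-get install --reinstall mtp-tools
```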
192,066 | I am trying to debug DHCP on my laptop (I am using dhcping and dhcdump to see what the DHCP server sends back). Following is my /etc/dhcp/dhclient.conf : option rfc3442-classless-static-routes code 121 = array of unsigned integer 8; send host-name = gethostname(); request subnet-mask, broadcast-address, time-offset, routers, domain-name-servers, interface-mtu, rfc3442-classless-static-routes; I think I have an idea what all these options mean, except for rfc3442-classless-static-routes . Also, I don't see anything pertaining to rfc3442-classless-static-routes in the DHCP replies. What is the meaning of rfc3442-classless-static-routes and in what situation would I make use of it? (the documentation makes no sense whatsoever) | The original DHCP specification (RFC 2131 and 2132) defines an option (33) that allows the administrator of the DHCP service to issue static routes to the client if needed. Unfortunately, that original design is flawed these days as it assumes classful network addresses , which are rarely used. The rfc3442-classless-static-routes option allows you to use classless network addresses (or CIDR) instead. CIDR requires a subnet mask to be explicitly stated, but the original DHCP option 33 doesn't have space for this. Therefore, this option (as defined in RFC 3442) simply enables a newer replacement DHCP option (option 121) which defines static routes using CIDR notation. Basically, if you need to issue static routes to your devices using DHCP and these static routes use CIDR, then you need to enable this option. Static routes can be used if you have split a network into multiple smaller networks and need to inform each router about how traffic gets from one to another without using one of the many dynamic routing protocols available. You basically set up each router with a statement to the effect of "to get to network a.b.c.d, send traffic through f.g.h.i" . If the routes you set up in the router are classful, then you do not need to enable this option. However, if the routes are CIDR then you will need to enable this option. Fortunately, many home/cafe networks use the 192.168.0.0 network with a subnet of 255.255.255.0 (or /24 ), which is a true Class-C network, therefore you can avoid this option. On the other hand, some home/cafe networks run on the 10.0.0.0 network. This is a Class-A network by default. If you are breaking this into many 10.0.x.0 sub-nets for example, then these will all be CIDR networks, which means you will need to enable this option. The above is only true if you also need to issue this routing information to your hosts via DHCP. Whether you need to issue this static routing information to your hosts is defined by the design of your network. I'd hazard a guess that a basic home/cafe network doesn't need it as static routes are usually defined at the routers. The configuration you have above simply defines a new option (there are many predefined options that dhclient already understands) as option 121 which consists of an array of 8-bit unsigned integers. It then configures the client to request this option if it is set on the DHCP server. If the DHCP server returns a value for this option, a dhclient exit hook script ( /etc/dhclient/dhclient-exit-hooks.d/rfc3442-classless-routes ) reads the value and configures the routing table accordingly. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/192066",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105621/"
]
} |
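To make the encoding concrete: option 121 packs each route as a mask-width byte, the significant octets of the destination, and then the router address. A hedged server-side illustration for ISC dhcpd, where the route 10.0.5.0/24 via 10.0.0.1 is made up for the example; the declaration line is the same one shown in the question:

```
# dhcpd.conf (server side), hypothetical route 10.0.5.0/24 via 10.0.0.1
option rfc3442-classless-static-routes code 121 = array of unsigned integer 8;
# width=24 -> 3 significant destination octets (10,0,5), then the router:
option rfc3442-classless-static-routes 24, 10, 0, 5, 10, 0, 0, 1;
```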
192,116 | I'm running a laptop with a single OS: UbuntuMATE 15.04 Beta1 64bit on a Toshiba laptop, Core i3. After burning "Elementary OS" to a live USB drive using UNetbootin, what happened is: after reboot, the laptop directly shows the UbuntuMATE boot screen and doesn't show the Toshiba logo at the beginning as usual, so there is no access to the BIOS or boot menu anymore. It boots directly to UbuntuMATE and runs normally. I installed Boot-Repair and ran (Recommended Repair); it gets aborted, showing me this message: "Please use this software in a live-session (live-CD or live-USB). This will enable this feature"... but I can't boot from a live CD or USB as I lost access to the boot menu. Boot info summary gave me this link http://paste.ubuntu.com/10664795/ so I can ask for help by providing the information in it. I looked into several posts but couldn't find what matches my case. What exactly am I supposed to do? I'm a little new to Linux (3 months) and I'm still learning, so I do not know much. | It sounds like you enabled the "fast boot" option in your BIOS setup which disables the F2 setup and F12 boot menu prompts. Power-off your laptop and hold down the F2 key, then power it on for the BIOS setup utility. Disable "fast boot", save and reboot. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/192116",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107784/"
]
} |
192,118 | I have been working on several projects, and they require different environment variables (e.g., PATH for different versions of clang executables, PYTHONPATH for several external modules). Whenever I work on one project, I have to modify these environment variables myself (e.g., change .zshrc / .bashrc and source it); and I sometimes forget and make mistakes. Is there a way/project that helps do this automatically, similar to what virtualenv does in Python? | There are mature tools designed to set environment variables for a specific directory. Compared with other tools designed for this, direnv is the best of them. One of the main benefits is that it supports unloading the environment variables when you exit from that directory. direnv is an environment switcher for the shell. It knows how to hook into bash, zsh, tcsh, fish shell and elvish to load or unload environment variables depending on the current directory. This allows project-specific environment variables without cluttering the ~/.profile file. What makes direnv distinct among other similar tools: direnv is written in Go, so it is faster compared with its counterparts written in Python; direnv supports unloading environment variables when you quit from the specific dir; direnv covers many shells. Similar projects: Environment Modules - one of the oldest (in a good way) environment-loading systems; autoenv - lightweight; doesn't support unloading; slower, written in Python; zsh-autoenv - a feature-rich mixture of autoenv and smartcd : enter/leave events, nesting, stashing (Zsh-only); ~~asdf~~ - asdf is a plugin manager to switch between different versions of the same executable, NOT an env switcher at all. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/192118",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8776/"
]
} |
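A small usage sketch of direnv, assuming it is installed and hooked into bash; the project path and variable values are examples only:

```bash
# One-time: hook direnv into the shell (add to ~/.bashrc).
eval "$(direnv hook bash)"

# Per-project: drop an .envrc into the project root...
cd ~/projects/myproject
cat > .envrc <<'EOF'
export PATH=$HOME/opt/clang/bin:$PATH
export PYTHONPATH=$PWD/modules
EOF

# ...and approve it once. direnv then loads the variables when you
# cd in and unloads them when you cd out.
direnv allow
```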
192,147 | I want to wget a file and tar it in one command. I guess it is simple, but I can't get it done. I tried several things: wget <url> | tar -cvz file.gz.tar - ; tar -cvzf file.tar `wget <url>` ; wget -qO <url> | tar -cvf file.tar ; wget <url> -O - | tar Any help? | Do you really want to tar the file, or are you looking to download a file into a compressed form? Tarring a file is just bundling (uncompressed) files into an (uncompressed) archive. If you want to download a file into a compressed file you can use: wget -qO - <url> | gzip -c > file.gz | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/192147",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40491/"
]
} |
192,148 | How can I use grep to find a string in files, but only search in the first line of these files? | Two more alternatives: With awk: awk '{if ($0~"pattern") print $0; nextfile;}' mydir/* or, if your awk version doesn't support nextfile (thanks to Stéphane Chazelas for the suggestion): awk 'FNR==1{if ($0~"pattern") print $0;}' mydir/* This will read only the first line before switching to the next file, and print it only if it matches "pattern" . Advantages are that one can fine-tune both the field in which to search for the pattern (using e.g. $2 to search on the second field only) and the output (e.g. $3 to print the third field, or FILENAME , or even mix). Note that with the FNR ("current input record number", i.e. line number) version you can further fine-tune the line(s) on which you want to grep: FNR==3 for the third line, FNR<10 for the first 10 lines, etc. (I guess in this case, if you are dealing with very large files and your awk version supports it, you may want to mix FNR with nextfile to improve performance.) With head , keeping filenames: head -n1 -v mydir/files* | grep -B1 pattern The -v option of head will print filenames, and the -B1 option of grep will print the line before each matching line, that is, the filenames. If you only need the filenames you can pipe it further to grep: head -n1 -v mydir/* | grep -B1 pattern | grep ==> As noticed by don_crissti in comments, beware of filenames matching the pattern themselves, though… | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/192148",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107823/"
]
} |
192,177 | Does the Linux virtual bridge (configured with, for example, ip or brctl ) support VLANs? For example, configure access ports in different VLANs and trunk ports with only certain VLANs enabled. The only option in my kernel ( 3.2.0-4-686-pae ) configuration file regarding VLANs and bridging is CONFIG_BRIDGE_EBT_VLAN , but as I understand it, this enables filtering of 802.1q VLAN fields for ebtables . | Not a problem, it's the way most openWRT systems connect the wlan and switch ports into the same LAN. Here's an example of the config on my openWRT system which has two wifi networks, one for private use and one for guests: # brctl show ; bridge name / bridge id / STP enabled / interfaces ; br-vlan2 / 7fff.a0f3c15eb708 / no / eth0.2 wlan0 wlan1 ; br-vlan3 / 7fff.a0f3c15eb708 / no / eth0.3 wlan0-1 wlan1-1 Some extra explanation: The typical openwrt hardware (above is on a TP-Link WDR4300) has a switch that handles all the physical ports; sometimes the physical WAN port is a separate eth interface on the SoC CPU. The switch is connected to the CPU with a trunk (packets on this connection are tagged with a VLAN tag). So eth0.2 is VLAN2 that is simply connected to 4 of the physical switch ports, stripped of the VLAN tag. So you should see br-vlan2 simply as the "LAN network"; the VLANs are used out of necessity as there is just one connection from the CPU to the switch. An ethernet bridge in Linux can have VLANs and physical interfaces as members. That's according to my expectations, as a VLAN interface behaves just like a physical interface in Linux, having its own routing, firewalling etc. just like any physical interface. I expect you could also add different VLANs to the bridge, if you don't mind the insanity that follows :) I haven't tried bridging a physical interface such as eth0 that is also carrying VLAN-tagged traffic though... I don't know whether those tagged packets will also be bridged. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/192177",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33060/"
]
} |
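A hedged sketch of building a similar setup by hand with iproute2, assuming a physical eth0 carrying traffic tagged with VLAN 2; all interface names are examples:

```bash
# Create a VLAN subinterface for tag 2 on eth0, then bridge it with a
# wireless interface so both ends sit on the same LAN, as above.
ip link add link eth0 name eth0.2 type vlan id 2
ip link add name br-vlan2 type bridge
ip link set eth0.2 master br-vlan2
ip link set wlan0 master br-vlan2
ip link set eth0 up
ip link set eth0.2 up
ip link set br-vlan2 up
```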
192,201 | I need to change the md5 of my files, sometimes once, sometimes multiple times. A friend gave me this script to add a string to the end of the file, but I think I may be adding it multiple times: when I had a look in a hex editor I saw it twice. I'm also not sure if this is actually editing anything else. Just wanted to get it checked out before I started using it. #!/bin/sh ; echo "md5change`date +%s`" ; cat $1 | sed --in-place '/md5change/s/.*//' $1 ; sed -i '/^[ \t]*$/d' $1 ; echo "md5change`date +%s`" >> $1 ; echo "done" Thanks | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/192201",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/55632/"
]
} |
192,206 | I am not sure if it is the only possible way, but I read that in order to put a single pixel onto the screen at a location of your choice, one has to write something into a place called the framebuffer. So I became curious whether it is possible to get at this place and write something into it in order to display a single pixel somewhere on the screen. | Yes: outside the X-server, in a tty, try the command: cat /dev/urandom >/dev/fb0 If colourful pixels fill the screen, then your setup is OK, and you can try playing with this small script (the colour bytes are blue-green-red-alpha; \x00\x00\xFF\x00 is red): #!/usr/bin/env bash ; fbdev=/dev/fb0 ; width=1280 ; bpp=4 ; color="\x00\x00\xFF\x00" ; function pixel(){ xx=$1 ; yy=$2 ; printf "$color" | dd bs=$bpp seek=$(($yy * $width + $xx)) of=$fbdev &>/dev/null ; } ; x=0 ; y=0 ; clear ; for i in {1..500}; do pixel $((x++)) $((y++)) ; done where the function 'pixel' should be the answer: it writes a pixel to the screen by changing the byte values (blue-green-red-alpha) at the x-y offset of the device /dev/fbX , which is the frame buffer for the video card. Or try a one-liner pixel draw (yellow at x:y=200:100, if width is 1024): printf "\x00\xFF\xFF\x00" | dd bs=4 seek=$((100 * 1024 + 200)) >/dev/fb0 UPDATE: this code works even inside the X-server, if we just configure X to use the frame buffer, by specifying fb0 inside /usr/share/X11/xorg.conf.d/99-fbdev.conf | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/192206",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102788/"
]
} |
192,212 | Does anybody know how to set up vim (the text editor) in such a way as to make it possible to trace back all the changes done to a file within a day? I need it for the cases when I have accidentally modified a subroutine and would like to return it to its initial state, as it was at the start of the day. I usually use svn to keep the most recent version of my source code in a global repository. But sometimes I need to return to the state the code was in in between two svn commits. Update: To enable persistent undo create the undo directory, e.g. ~/.vim/undodir , and place the following settings into the .vimrc file: set undodir=~/.vim/undodir ; set undofile ; set undolevels=1000 ; set undoreload=10000 | With a large enough value of 'undolevels' , Vim should be able to undo the whole day's changes. If you quit Vim in between, you also need to enable persistent undo by setting the 'undofile' option. Vim captures not just a sequential list of commands for undo, but actually a tree of all changes. It also has several commands around undo (cp. :help undo-branches ); to go back to the state at the beginning of the day, :earlier 12h is a good candidate. There are also plugins like Gundo and undotree.vim , which visualize the undo tree and allow you to navigate it. If navigating the undo tree sounds too cumbersome (e.g. because you make a lot of changes throughout the day), you can also add a lightweight version control system (in addition to Subversion), like my writebackup plugin , and explicitly create a backup at certain times. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/192212",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52830/"
]
} |
192,241 | This code excerpt is from Chapter 8.6 of the GNU make manual. What does @$@.in as the file function arg in a makefile mean? And why are shell commands like rm prefixed by the '@' symbol? program: $(OBJECTS) ; $(file >$@.in,$^) ; $(CMD) $(CMDFLAGS) @$@.in ; @rm $@.in File function syntax is $(file op filename[,text]) | There are three unrelated uses of @ here. In $@ , the character @ is the name of an automatic variable that can be used in a rule. The value of that variable is the target that the rule is building. When @ is used at the very beginning of a recipe (command) line, just after the tab character, it causes the command not to be printed when it's about to be executed. The character @ elsewhere isn't special. Thus, in your example, to build program : The file function is invoked. It writes the dependencies of the target ( $^ automatic variable) to the file program.in . Whatever command is stored in the variable CMD is executed, with the parameters stored in the variable CMDFLAGS , plus the extra parameter @program.in . What this does depends on what CMD is. The command rm program.in is executed, without printing it first. A few commands treat a parameter starting with @ as indicating a file from which to read more parameters. This is a DOS convention which came about because DOS had a stringent limit on the command line length and no way to interpolate the output of a command into a command line. It is uncommon in the Unix world since Unix doesn't have these limitations. The effect of the recipe is thus likely the same as $(CMD) $(CMDFLAGS) $(OBJECTS) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/192241",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
192,263 | I occasionally search through files in vim or less using / or ? but as far as I can tell, the search patterns are case sensitive. So for example, /foo won't find the same things that /FOO will. Is there any way to make it less strict? How can I search in vim or less for a pattern that is NOT case sensitive? | In vi or vim you can ignore case by :set ic , and all subsequent searches will consider the setting until you reset it by :set noic . In less there are options -i and -I to ignore case. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/192263",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1822/"
]
} |
192,273 | How can we check the mount points for partitions listed in /dev/sd* ? For example, I would like to know if the partition for my home is /dev/sda4 . | Another approach is with findmnt : findmnt /dev/sda4 ...to get the mountpoint from the dev. Or vice-versa: findmnt /home | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/192273",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
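findmnt's output can also be trimmed down for scripting; a small sketch using standard util-linux flags:

```bash
# Print only the mountpoint of /dev/sda4, without the header line:
findmnt -n -o TARGET /dev/sda4

# Print only the source device backing /home:
findmnt -n -o SOURCE /home
```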
192,284 | I have a plug computer that I want to use as a scanner server using sane. It already worked with a different plug, so I know for sure that the scanner and sane are workable together. If I issue scanimage -L as root I get this output: device `hpaio:/usb/Deskjet_F300_series?serial=CN73CGJ05504KH' is a Hewlett-Packard Deskjet_F300_series all-in-one which is what I am expecting, but when I call the command as saned I get this output: No scanners were identified. [...] If I call sane-find-scanner as saned it brings up: found USB scanner (vendor=0x03f0, product=0x5511) at libusb:001:015 Now the interesting part is that the vendor and product names are not detected here, but when I do the same as root this is the result: found USB scanner (vendor=0x03f0 [HP], product=0x5511 [Deskjet F300 series]) at libusb:001:015 So, somehow the root user has access to the list of vendors (and thus is able to detect the scanner) while saned is not. I don't want to run the saned server as root so I need to figure this out. All settings I did in saned.conf are for the network interaction, but my problem is on the local host, so I skip the config file (but of course can provide it if necessary). saned groups: saned scanner I assume that I need to change the privileges of the file where vendor and product are mapped ( /etc/sane.d/hp.conf ), but that is already readable by sane. -rw-r--r-- 1 saned scanner 396 Dec 12 2010 hp3900.conf ; -rw-r--r-- 1 saned scanner 76 Dec 12 2010 hp4200.conf ; -rw-r--r-- 1 saned scanner 238 Dec 12 2010 hp5400.conf ; -rw-r--r-- 1 saned scanner 497 Dec 12 2010 hp.conf ; -rw-r--r-- 1 saned scanner 22 Dec 12 2010 hpsj5s.conf Same for /etc/sane.d/dll.d/ : -rw-r--r-- 1 saned scanner 38 Dec 10 2013 hplip Interestingly, neither of these files contains the Deskjet_F300_series information, so maybe there is another file? Also, while the scanner does have a printing option, I'm not interested in this. I did read this post, but I would prefer not to do what is described there, because somewhere the information is already present and I would like to access that place from the saned user. | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/192284",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54677/"
]
} |
192,313 | Both: sudo ip -s -s neigh flush all And: sudo arp -d 192.168.0.102 Instead of clearing the arp cache they seem to just invalidate entries (they will appear as incomplete ). Even after some minutes, the ARP cache looks like: $ arp -n ; Address HWtype HWaddress Flags Mask Iface ; 192.168.0.103 (incomplete) eth0 ; 192.168.0.1 ether DE:AD:BE:EF:DE:AD C eth0 (The MAC of the gateway has been refreshed - that is ok) How can I really clear the ARP cache, like in "delete all entries from the table"? I do not want to keep incomplete entries, I want them removed. Is this possible? EDIT This is my system: » arp --version ; net-tools 1.60 ; arp 1.88 (2001-04-04)+I18N ; AF: (inet) +UNIX +INET +INET6 +IPX +AX25 +NETROM +X25 +ATALK +ECONET +ROSE ; HW: (ether) +ETHER +ARC +SLIP +PPP +TUNNEL -TR +AX25 +NETROM +X25 +FR +ROSE +ASH +SIT +FDDI +HIPPI +HDLC/LAPB +EUI64 ; » lsb_release -a ; No LSB modules are available. ; Distributor ID: Ubuntu ; Description: Ubuntu 14.04.2 LTS ; Release: 14.04 ; Codename: trusty ; » uname -a ; Linux polyphemus.xxx-net 3.13.0-46-generic #77-Ubuntu SMP Mon Mar 2 18:23:39 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux | Original oneliner: ip link set arp off dev eth0 ; ip link set arp on dev eth0 Be sure to do it all at once, so you don't break network connectivity before you're able to turn ARP back on. Interface-discovering copy-paste command: interfaces=$( arp -n | awk ' NR == 1 {next} {interfaces[$5]+=1} END {for (interface in interfaces){print(interface)}} '); for interface in $interfaces; do echo "Clearing ARP cache for $interface"; sudo ip link set arp off dev $interface; sudo ip link set arp on dev $interface; done Note: The semicolons allow you to condense this command into a oneliner, but it looks terrible in a code block on SO. Example output on Raspbian: pi@raspberrypi:~ $ arp -n ; Address HWtype HWaddress Flags Mask Iface ; 10.0.0.1 ether 58:19:f8:0d:57:aa C wlan0 ; 10.0.0.159 ether 88:e9:fe:84:82:c8 C wlan0 ; pi@raspberrypi:~ $ interfaces=$( arp -n | awk ' NR == 1 {next} {interfaces[$5]+=1} END {for (interface in interfaces){print(interface)}} '); for interface in $interfaces; do echo "Clearing ARP cache for $interface"; sudo ip link set arp off dev $interface; sudo ip link set arp on dev $interface; done ; Clearing ARP cache for wlan0 ; pi@raspberrypi:~ $ arp -n ; Address HWtype HWaddress Flags Mask Iface ; 10.0.0.159 ether 88:e9:fe:84:82:c8 C wlan0 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/192313",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39807/"
]
} |
192,325 | I want to display text above the user's screen (as an upper layer). I know that there are solutions like xmessage that could display the text in a box, but I need it to be displayed without a box, on the entire screen if possible. I am running Raspbian. Is there any solution/software that could do this? | xosd , which is available in Raspbian, can display text on top of the current X screen. It takes its input from a file or from the standard input: echo Hello | osd_cat -p middle -A center It's an old-style X11 application so its configuration can be verbose; changing the font in particular looks like echo Hello | osd_cat -p middle -A center -f '-*-lucidatypewriter-bold-*-*-*-*-240' or even strictly speaking echo Hello | osd_cat -p middle -A center -f '-*-lucidatypewriter-bold-*-*-*-*-240-*-*-*-*-*-*' You can customise the colour, add a shadow and/or outline, change the delay, even add a progress bar. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/192325",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63384/"
]
} |
192,429 | I have successfully installed three OSs: Kali Linux, Ubuntu and Windows 8. What are the chances I break my system if I use my PC this way for years? Is it possible to break my system if, let's say, I boot Kali Linux and then accidentally view (just view, not open) Windows restricted files or some such? What rate of brick/system failure should I expect? | No, you won't. You can expect 0 rate of system failure unless you do something silly like delete files that are essential to one of the other operating systems. You can have as many OSs installed as you like, they do not communicate and they won't affect each other. Why should they? By the way, there's no such thing as "viewing" a file without "opening" it. I suppose you meant viewing and not editing . In any case, as long as all you do is read files and not change them, that won't affect your system, no. Each installed OS should have its own partition(s) and won't be affected by the others unless you try really hard to break it. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/192429",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/101790/"
]
} |
192,437 | I've used OpenVPN Access Server in the past, although it seems overkill for a second test server I am setting up. I am not very familiar with OpenVPN (including Access Server, which I had help getting set up). I wanted to get some advice on setting up OpenVPN on this test server (not OpenVPN Access Server). OpenVPN is intended to be installed on the test server to allow remote connection to the database running on the server (PostgreSQL) as well as ssh connection to run terminal commands. I would like to be able to set up the OpenVPN server and generate the client.ovpn config files that I can supply to the client machines (all running the OpenVPN client) to allow connection via config files. Is it a process I can run from the server's terminal (CentOS 6.5) with a yum install, or is Access Server my only option to allow remote client machines access to my server? Any advice would be greatly appreciated, as would warnings from people that have set up their own OpenVPN servers. | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/192437",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89568/"
]
} |
192,465 | I have a file ( file.php ) like this: ...Match user foo ChrootDirectory /NAS/foo.info/ ForceCommand internal-sftp AllowTcpForwarding no GatewayPorts no X11Forwarding no ; Match user bar ChrootDirectory /NAS/bar.co.uk/ ForceCommand internal-sftp AllowTcpForwarding no GatewayPorts no X11Forwarding no ; Match user baz ChrootDirectory /NAS/baz.com/ ForceCommand internal-sftp AllowTcpForwarding no GatewayPorts no X11Forwarding no I am trying to write a bash script to delete one of the paragraphs. So say I wanted to delete the user foo from file.php . After running the script, it would then look like this: ...Match user bar ChrootDirectory /NAS/bar.co.uk/ ForceCommand internal-sftp AllowTcpForwarding no GatewayPorts no X11Forwarding no ; Match user baz ChrootDirectory /NAS/baz.com/ ForceCommand internal-sftp AllowTcpForwarding no GatewayPorts no X11Forwarding no How could I go about doing this? I have thought about using sed but that only seems to be appropriate for one-liners: sed -i 's/foo//g' file.php And I couldn't do it for each individual line as most of the lines within the paragraph are not unique! Any ideas? | Actually, sed can also take ranges. This command will delete all lines between Match user foo and the first empty line (inclusive): $ sed '/Match user foo/,/^\s*$/{d}' file ; Match user bar ChrootDirectory /NAS/bar.co.uk/ ForceCommand internal-sftp AllowTcpForwarding no GatewayPorts no X11Forwarding no ; Match user baz ChrootDirectory /NAS/baz.com/ ForceCommand internal-sftp AllowTcpForwarding no GatewayPorts no X11Forwarding no Personally, however, I would do this using perl's paragraph mode ( -00 ), which has the benefit of removing the leading blank lines: $ perl -00ne 'print unless /Match user foo/' file ; Match user bar ChrootDirectory /NAS/bar.co.uk/ ForceCommand internal-sftp AllowTcpForwarding no GatewayPorts no X11Forwarding no ; Match user baz ChrootDirectory /NAS/baz.com/ ForceCommand internal-sftp AllowTcpForwarding no GatewayPorts no X11Forwarding no In both cases, you can use -i to edit the file in place (these will create a backup of the original called file.bak ): sed -i.bak '/Match user foo/,/^\s*$/{d}' file or perl -i.bak -00ne 'print unless /Match user foo/' file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/192465",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/101882/"
]
} |
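awk's paragraph mode (an empty record separator) offers one more alternative in the same spirit as the perl version; a hedged sketch against the same file:

```bash
# Treat blank-line-separated blocks as records and print every block
# except the one mentioning "Match user foo"; ORS restores the blank
# separators between the surviving blocks.
awk -v RS= -v ORS='\n\n' '!/Match user foo/' file
```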
192,473 | When I run yum install <X> where <X> has already been installed, yum exits with a return status of 1 and prints "Error: Nothing to do". Aside from checking for this string in the output (which seems like a very shaky thing to base my script on), is there some way I can test whether the package already exists? Clearly, yum knows whether or not it already exists, since it's throwing that error, but how can I access that knowledge? To add to this, some of the packages are downloaded by way of URLs, not package names, so checking yum list installed doesn't work. | In your script use rpm -q packagename : if rpm -q vim-enhanced ; then echo "Already installed vim-enhanced" ; else echo "Install vim-enhanced" ; fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/192473",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67823/"
]
} |
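For scripted installs the same check collapses to a one-liner; the package name is just an example:

```bash
# rpm -q exits non-zero when the package is missing, so install
# vim-enhanced only if it is not already present.
rpm -q vim-enhanced >/dev/null 2>&1 || sudo yum -y install vim-enhanced
```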
192,476 | Suppose the owner/user doesn't have the write permission on a directory but he has it on a file under it. Can the file here be edited or not? If yes, is there any situation where the file cannot be edited? | Yes, the file can be edited. As far as the directory is concerned, the file can not be edited if you remove the execute permission on the directory for the target (owner/group/others). EDIT: If you want the owner to not be able to edit the file by changing the permission of the directory (assuming the same user owns the directory and file), then you can simply remove the execute permission on the directory for the owner. For example you can make the permission for the owner as rw- i.e. 6 . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/192476",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108011/"
]
} |
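A quick demonstration of the execute-bit rule described above, assuming a scratch directory you can create and delete freely:

```bash
mkdir scratch && echo hello > scratch/file
chmod u-x scratch           # drop execute (search) on the directory
cat scratch/file            # fails: Permission denied
chmod u+x,u-w scratch       # restore search, drop write instead
echo edit >> scratch/file   # works: the file's own write bit governs edits
```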
192,485 | According to man perlrun : -0[octal/hexadecimal] specifies the input record separator ($/) as an octal or hexadecimal number. If there are no digits, the null character is the separator. and The special value 00 will cause Perl to slurp files in paragraph mode. Any value 0400 or above will cause Perl to slurp files whole, but by convention the value 0777 is the one normally used for this purpose. However, given this input file: This is paragraph one. ; This is paragraph two. I get some unexpected results: $ perl -0ne 'print; exit' file ## \0 is used, so everything is printed ; This is paragraph one. ; This is paragraph two. $ perl -00ne 'print; exit' file ## Paragraph mode, as expected ; This is paragraph one. So far, so good. Now, why do these two seem to also work in paragraph mode? $ perl -000ne 'print; exit' file ; This is paragraph one. $ perl -0000ne 'print; exit' file ; This is paragraph one. And why is this one apparently slurping the entire file again? $ perl -00000ne 'print; exit' file ; This is paragraph one. ; This is paragraph two. Further testing shows that these all seem to work in paragraph mode: perl -000 ; perl -0000 ; perl -000000 ; perl -0000000 ; perl -00000000 While these seem to slurp the file whole: perl -00000 ; perl -000000000 I guess my problem is that I don't understand octal well enough (at all, really); I am a biologist, not a programmer. Do the latter two slurp the file whole because both 0000 and 00000000 are >= 0400 ? Or is there something completely different going on? | Octal is just like decimal in that 0 == 0, and 0000 == 0, 0 == 000000, etc. The fact that the switch here is -0 may make things a little confusing -- I would presume the point about "the special value 00" means one 0 for the switch and one for the value; adding more zeros is not going to change the latter, so you get the same thing... Up to a point. The behavior of 000000 etc. is kind of bug-like, but keep in mind that this is supposed to refer to a single 8-bit value . The range of 8 bits in decimal is 0-255, in octal, 0-377. So you can't possibly use more than 3 digits here meaningfully (the special values are all outside that range, but still 3 digits + the switch). You are perhaps meant to just infer this from: You can also specify the separator character using hexadecimal notation: -0xHHH..., where the H are valid hexadecimal digits. Unlike the octal form, this one may be used to specify any Unicode character, even those beyond 0xFF .
0xFF hex == 255 decimal == 377 octal == max for 8 bits, the size of one byte and a character in the (extended) ASCII set. | {
"source": [
"https://unix.stackexchange.com/questions/192485",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22222/"
]
} |
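The 0400 boundary is easy to verify empirically; a hedged demo against the same two-paragraph file used in the question:

```bash
perl -0777 -ne 'print; exit' file   # slurps the whole file (convention)
perl -0400 -ne 'print; exit' file   # also whole-file: value >= 0400
perl -00   -ne 'print; exit' file   # paragraph mode: first block only
```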
192,486 | I would like to run remote ssh commands and have them always load server-side startup files by default. I am looking for a solution that does not require: configuration by root; adding extra boiler plate to the command each time; duplicating environment variables across multiple files. I am not looking to transport local environment variables to the remote shell. I want to run a remote command using the vanilla ssh <host> <command> syntax and have it run using the same environment a remote login session would get . Background The below assumes bash is being used for simplicity. By default, remote ssh commands start a non-interactive, non-login shell. Quoting the man page: If command is specified, it is executed on the remote host instead of a login shell. You can see this more explicitly on the shell by running: # the lack of 'i' indicates non-interactive ; $ ssh localhost 'echo $-' ; hBc ; $ ssh localhost 'shopt login_shell' ; login_shell off But what if your remote command needs certain environment variables set? Being a non-interactive, non-login shell means neither .bash_profile nor .bashrc will be sourced. This is problematic, for example, if you're using tools like perlbrew or virtualenv and want to use your custom interpreter in the remote command: $ which perl ; /home/calid/perl5/perlbrew/perls/perl-5.20.1/bin/perl ; $ ssh localhost 'which perl' ; /usr/bin/perl Solutions that do not satisfy the requirements above: Explicitly invoke a login shell in the remote command $ ssh localhost 'bash --login -c "which perl"' Requires extra boiler plate each time. Explicitly source your profile before the command $ ssh localhost 'source ~/.bash_profile && which perl' Requires extra boiler plate each time. Set environment variables in ~/.ssh/environment Requires root to enable this functionality on the server; duplicates environment variables already set in server startup files. Set ENV and BASH_ENV Does not work: .bash_profile and .bashrc aren't sourced. Setting in ~/.ssh/environment works, but requires root access to enable. Preface the command with the environment variables you need Requires extra boiler plate each time (potentially a lot); duplicates environment variables already set in server startup files. Related posts https://stackoverflow.com/questions/415403/whats-the-difference-between-bashrc-bash-profile-and-environment https://stackoverflow.com/questions/216202/why-does-an-ssh-remote-command-get-fewer-environment-variables-then-when-run-man#216204 dot file not sourced when running a command via ssh | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/192486",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/56601/"
]
} |
192,493 | On a high DPI screen, lines in Xfig are very thin. Is it possible to scale the lines, or the entire Xfig UI? Note that I don't want to scale the entire desktop. Other application UIs are legible on high DPI screens. Perhaps a Wayland compositor can scale an individual window? | Method that I used in the end: Run Xfig in its own desktop under a TigerVNC server. Connect to the VNC server with a VNC client that allows scaling. To simplify the process, I created a tool to run arbitrary applications scaled up, Vncdesk . A simple command such as vncdesk 2 starts the server and connects a viewer. Closing the application or the viewer shuts down the VNC server. An alternative could be running each application window in its own VNC viewer, which is the goal of the experimental x11vnc -appscale . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/192493",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73542/"
]
} |
192,621 | I have 2 questions. The first one is for the -sf options and the second one is for the more specific usage of the -f option. By googling, I figured out the description of the command ln and the options -s and -f (copy from http://linux.about.com/od/commands/l/blcmdl1_ln.htm ): -s, --symbolic : make symbolic links instead of hard links ; -f, --force : remove existing destination files I understand these options individually. But how could one use the -s and -f options simultaneously? -s is used for creating a link file and -f is used for removing a link file. Why use this merged option? To know more about the ln command, I made some examples: $ touch foo # create sample file ; $ ln -s foo bar # make link to file ; $ vim bar # check how link file works: foo file opened ; $ ln -f bar # remove link file Everything works fine before the next command: $ ln -s foo foobar ; $ ln -f foo # remove original file By the description of the -f option, this last command should not work, but it does! foo is removed. Why is this happening? | First of all, to find what a command's options do, you can use man command . So, if you run man ln , you will see: -f, --force remove existing destination files -s, --symbolic make symbolic links instead of hard links Now, the -s , as you said, is to make the link symbolic as opposed to hard. The -f , however, is not to remove the link. It is to overwrite the destination file if one exists. To illustrate: $ ls -l ; total 0 ; -rw-r--r-- 1 terdon terdon 0 Mar 26 13:18 bar ; -rw-r--r-- 1 terdon terdon 0 Mar 26 13:18 foo ; $ ln -s foo bar ## fails because the target exists ; ln: failed to create symbolic link ‘bar’: File exists ; $ ln -sf foo bar ## Works because bar is removed and replaced with the link ; $ ls -l ; total 0 ; lrwxrwxrwx 1 terdon terdon 3 Mar 26 13:19 bar -> foo ; -rw-r--r-- 1 terdon terdon 0 Mar 26 13:18 foo | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/192621",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108094/"
]
} |
192,623 | When I log in to a Windows computer with xfreerdp -v computer -u user --workarea -f , the full-screen window always appears on the first of my two monitors. Is it possible to tell freerdp to start on the second monitor, or maybe to move the window? The standard KDE window moving with Alt+Click does not work with the freerdp window. Searching on the internet, I only found examples regarding multi monitoring with multiple remote screens. But I just want to select the local screen displaying the remote session. I am using freerdp 1.2.0 under Gentoo Linux with KDE 4.14.3. Addition: I am not using different X displays. I have a multi monitor setup with randr; xrandr outputs the following: Screen 0: minimum 8 x 8, current 3840 x 1200, maximum 16384 x 16384 ; DVI-I-0 disconnected (normal left inverted right x axis y axis) ; DVI-I-1 connected 1920x1200+1920+0 (normal left inverted right x axis y axis) 518mm x 324mm ; 1920x1200 59.95*+ ... ; DP-0 disconnected (normal left inverted right x axis y axis) ; DP-1 connected primary 1920x1200+0+0 (normal left inverted right x axis y axis) 518mm x 324mm ; 1920x1200 59.95*+ ... ; DP-2 disconnected (normal left inverted right x axis y axis) ; DP-3 disconnected (normal left inverted right x axis y axis) | Get the monitor number (or numbers) you wish to full screen rdp: xfreerdp /monitor-list Start full screen on monitor: xfreerdp /monitors:2 /multimon /v:<host> Or full screen multiple monitors: xfreerdp /monitors:1,2 /multimon /v:<host> | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/192623",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81831/"
]
} |
192,638 | How can I find out what the newline character is in my file when using awk ? I know the default is \n , but this is just what the manual says; I want to see it with my own eyes. I have just started learning awk and how to change the output record separator, but first I want to see what the actual ORS is. The first 7 chapters of the awk manual didn't address this. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/192638",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83681/"
]
} |
192,640 | I'm working with the TS-4900, an embedded 'Computer on Module' plugged into a baseboard, running Yocto Linux. It uses U-Boot to start, and supposedly based on the model of the baseboard it chooses the right dtb file to start, and possibly if it fails to locate the right one it falls back to a 'generic' one for my module. But how/where does it determine the right one? How can I tell which .dtb was used, or set which one should be used? Below are the boot messages of U-Boot. U-Boot 2014.10-g3ac6ec3 (Jan 29 2015 - 17:20:15) ; CPU: Freescale i.MX6SOLO rev1.1 at 792 MHz ; Reset cause: POR ; Board: TS-4900 ; Revision: C ; Watchdog enabled ; I2C: ready ; DRAM: 1 GiB ; MMC: FSL_SDHC: 0, FSL_SDHC: 1 ; SF: Detected N25Q64 with page size 256 Bytes, erase size 4 KiB, total 8 MiB ; *** Warning - bad CRC, using default environment ; In: serial ; Out: serial ; Err: serial ; Net: using phy at 7 ; FEC [PRIME] ; Press Ctrl+C to abort autoboot in 1 second(s) ; (Re)start USB... ; USB0: Port not available. ; USB1: USB EHCI 1.00 ; scanning bus 1 for devices... 2 USB Device(s) found ; scanning usb for storage devices... 0 Storage Device(s) found ; No storage devices, perhaps not 'usb start'ed..? ; Booting from the eMMC ... ; ** File not found /boot/boot.ub ** ; ** File not found /boot/imx6dl-ts4900-13.dtb ** ; Booting default device tree ; 42507 bytes read in 196 ms (210.9 KiB/s) ; 118642 bytes read in 172 ms (672.9 KiB/s) ; ICE40 FPGA reloaded successfully ; 4609784 bytes read in 337 ms (13 MiB/s) ; ## Booting kernel from Legacy Image at 12000000 ... ; Image Name: Linux-3.10.17-1.0.0-technologic+ ; Image Type: ARM Linux Kernel Image (uncompressed) ; Data Size: 4609720 Bytes = 4.4 MiB ; Load Address: 10008000 ; Entry Point: 10008000 ; Verifying Checksum ... OK ; ## Flattened Device Tree blob at 18000000 ; Booting using the fdt blob at 0x18000000 ; EHCI failed to shut down host controller. ; Loading Kernel Image ... OK ; Using Device Tree in place at 18000000, end 1800d60a ; Starting kernel ... ; [ 0.000000] Booting Linux on physical CPU 0x0 ; (Kernel startup commences...) | When U-Boot executes the boot command, it provides a memory address for the kernel and a memory address for the device tree blob. Therefore, prior to this command, it must load these files into memory. Based on the messages you provided we see that two files failed to be loaded from the eMMC/SD card: /boot/boot.ub ; /boot/imx6dl-ts4900-13.dtb It's possible that either these files simply weren't present, their path is wrong, or the incorrect device:partition was given to the U-Boot load command. In any case, the command fails. At this point, it appears that the bootloader tries to load a "default" device tree - possibly stored on the same medium as the bootloader itself. To find out exactly what is happening, you'll want to halt the boot process at the bootloader and access the U-Boot command prompt. From here, you may enter: printenv This will print out the U-Boot environment variables. Many of these variables reference other variables. Some of these variables are often executed like scripts, so you may see boot scripts, kernel & fdt load scripts, etc. To figure out the boot sequence, look for a variable called bootcmd (or something similar). This is usually what is ultimately run at boot time. You'll need to trace the boot sequence out from this point through multiple variables, but you should see where load commands are used to load the FDT into memory. If you'd like to post the output of printenv , we can identify the exact logic used here. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/192640",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30534/"
]
} |
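A hedged sketch of poking at this from the U-Boot prompt; printenv, setenv and saveenv are standard U-Boot commands, but variable names like fdtfile vary by board and are assumptions here:

```
=> printenv bootcmd                     # see what actually runs at boot
=> printenv fdtfile                     # many boards select the .dtb via a variable
=> setenv fdtfile imx6dl-ts4900-13.dtb  # assumed variable name
=> saveenv                              # persist the change
```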
192,642 | How to run wkhtmltopdf headless?! Installation on Debian Wheezy: apt-get install wkhtmltopdf Command: wkhtmltopdf --title "$SUBJECT" -q $SOURCEFILE $OUTPUTFILE Error: QXcbConnection: Could not connect to display | This is a bug , and the fix hasn't been brought to the Debian repositories. Quoting ashkulz (who closed the bug report): You're using the version of wkhtmltopdf in the debian repositories, which does not support running headless. So you can either download wkhtmltopdf from source and compile it (see the instructions in the INSTALL.md file ; you may remove the --recursive option from their git clone line, if you already have Qt 4.8 installed), or run it inside xvfb , as suggested by masterkorp in the bug report . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/192642",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83275/"
]
} |
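If you take the xvfb route, the usual wrapper is xvfb-run; a minimal sketch, with the variables taken from the question:

```bash
sudo apt-get install xvfb
# Run wkhtmltopdf inside a throwaway virtual X display; -a picks a
# free display number automatically.
xvfb-run -a wkhtmltopdf --title "$SUBJECT" -q "$SOURCEFILE" "$OUTPUTFILE"
```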
192,656 | Someone on our team wanted to recursively change the user permissions on all hidden directories in a user's home directory. To do so he executed the following command: cd /home/username ; chown -R username:groupname .* We were pretty surprised when we realized that he had actually recursively changed the permissions of all user directories in /home, because .* matches .. as well. Would you have expected this behavior in Linux though? | I always get burned when I try using .* for anything and long ago switched to using character classes: chown -R username.groupname .[A-Za-z]* is how I would have done this. Edit: someone pointed out that this doesn't get, for example, dot files such as ._Library . The catch-all character class to use would be chown -R username.groupname .[A-Za-z0-9_-]* | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/192656",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108113/"
]
} |
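find sidesteps the .* glob pitfall entirely, since it never returns . or .. ; a hedged equivalent of the intended command:

```bash
# Recursively chown every dot entry directly under the home directory,
# without ever matching '.' or '..'.
find /home/username -maxdepth 1 -name '.*' \
    -exec chown -R username:groupname {} +
```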
192,671 | According to Debian Network setup document allow-hotplug <interface_name> stanza in /etc/network/interfaces file starts an interface when the kernel detects a hotplug event from the interface. What is this hotplug event? | allow-hotplug <interface> , is used the same way auto is by most people. However, the hotplug event is something that involves kernel/udev detection against the hardware, that could be a cable being connected to the port, or a USB-to-Ethernet dongle that will be up and running whenever you plug on USB, or either a PCMCIA wireless card being connected to the slot. My personal opinion: I also think that allow-hotplug could have more documented examples to make this thing easier to understand. As pointed out by other U&L members and Debian lists, those two options create the "chicken and egg problem" when there are no cables connected or when an event is created: Re: network reference v2: questions about allow-hotplug Re: Netcfg and allow-hotplug vs auto References: Good detailed explanation of /etc/network/interfaces syntax? ; Re: Netcfg and allow-hotplug vs auto ; Howto Set Up Multiple Network Schemes on a Linux Laptop PCMCIA, Cardbus, USB ; Debian networking. Basic sintax of /etc/networ/interfaces ; | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/192671",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33060/"
]
} |
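For reference, a typical /etc/network/interfaces stanza using it; the interface name is an example:

```
# /etc/network/interfaces
# Bring eth0 up when the kernel/udev reports the device (e.g. a dongle
# being plugged in), rather than unconditionally at boot like 'auto eth0'.
allow-hotplug eth0
iface eth0 inet dhcp
```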
192,673 | I'm looking for any information regarding how secure an encrypted Linux file system is when contained in a VirtualBox virtual drive on a Windows host? Specifically I'm looking for answers to the following questions: Does the fact it is hosted as a guest system expose the encrypted data to any new attack vectors? Aside from the threat of key loggers on the Host OS, malware etc., when the virtual machine is turned on is there the threat of a rogue host process accessing the virtual machine's file system on the fly ? When both the Host and Guest OSes are turned off and the data is at rest on a storage device, is it any easier/harder to retrieve the encrypted file system? | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/192673",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106977/"
]
} |
192,698 | When I type "grep doc" in the terminal, it just doesn't do anything, stopping the terminal from doing anything else before I escape using Ctrl + C or Z . I know this isn't how I'm supposed to use grep, but am just curious why this is happening. | grep by default searches standard input if no files are given: grep searches the named input FILEs (or standard input if no files are named, or if a single hyphen-minus (-) is given as file name) for lines containing a match to the given PATTERN. By default, grep prints the matching lines. If you just do grep doc then grep expects standard input to come and searches inside it (don't enter the parts between < and > into the terminal, these are comments): $ grep doc ; a b c <PRESS ENTER HERE> ; doc <NO MATCH WAS FOUND IN PREVIOUS LINE, TYPE doc AND PRESS ENTER AGAIN> ; doc <MATCH WAS FOUND> | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/192698",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108136/"
]
} |
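The same behaviour is easy to see non-interactively; a small sketch (notes.txt and todo.txt are placeholder file names):

```bash
# No file argument: grep reads the pipe (standard input)...
printf 'one\ndoc two\nthree\n' | grep doc    # prints: doc two

# ...whereas with file arguments it searches those files instead.
grep doc notes.txt todo.txt
```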
192,701 | I have a folder with duplicate (by md5sum ( md5 on a Mac)) files, and I want to have a cron job scheduled to remove any found. However, I'm stuck on how to do this. What I have so far: md5 -r * | sort Which outputs something like this: 04c5d52b7acdfbecd5f3bdd8a39bf8fb gordondam_en-au11915031300_1366x768.jpg ; 1e88c6899920d2c192897c886e764fc2 fortbourtange_zh-cn9788197909_1366x768.jpg ; 266ea304b15bf4a5650f95cf385b16de nebraskasupercell_fr-fr11286079811_1366x768.jpg ; 324735b755c40d332213899fa545c463 grossescheidegg_en-us10868142387_1366x768.jpg ; 3993028fcea692328e097de50b26f540 Soyuz Spacecraft Rolled Out For Launch of One Year Crew.png ; 677bcd6006a305f4601bfb27699403b0 lechaustria_zh-cn7190263094_1366x768.jpg ; 80d03451b88ec29bff7d48f292a25ce6 ontariosunrise_en-ca10284703762_1366x768.jpg ; b6d9d24531bc62d2a26244d24624c4b1 manateeday_row10617199289_1366x768.jpg ; ca1486dbdb31ef6af83e5a40809ec561 Grueling Coursework.jpg ; cdf26393577ac2a61b6ce85d22daed24 Star trails over Mauna Kea.jpg ; dc3ad6658d8f8155c74054991910f39c smoocave_en-au10358472670_1366x768.jpg ; dc3ad6658d8f8155c74054991910f39c smoocave_en-au10358472670_1366x7682.jpg How can I process based on the MD5 of the file to remove duplicates? I don't really care which "original" I keep - but I only want to keep one. Should I be approaching this in a different manner? | I'm working on Linux, which means there is the command md5sum , which outputs: > md5sum * ; d41d8cd98f00b204e9800998ecf8427e file_1 ; d41d8cd98f00b204e9800998ecf8427e file_10 ; d41d8cd98f00b204e9800998ecf8427e file_2 ; d41d8cd98f00b204e9800998ecf8427e file_3 ; d41d8cd98f00b204e9800998ecf8427e file_4 ; d41d8cd98f00b204e9800998ecf8427e file_5 ; d41d8cd98f00b204e9800998ecf8427e file_6 ; d41d8cd98f00b204e9800998ecf8427e file_7 ; d41d8cd98f00b204e9800998ecf8427e file_8 ; d41d8cd98f00b204e9800998ecf8427e file_9 ; b026324c6904b2a9cb4b88d6d61c81d1 other_file_1 ; 31d30eea8d0968d6458e0ad0027c9f80 other_file_10 ; 26ab0db90d72e28ad0ba1e22ee510510 other_file_2 ; 6d7fce9fee471194aa8b5b6e47267f03 other_file_3 ; 48a24b70a0b376535542b996af517398 other_file_4 ; 1dcca23355272056f04fe8bf20edfce0 other_file_5 ; 9ae0ea9e3c9c6e1b9b6252c8395efdc1 other_file_6 ; 84bc3da1b3e33a18e8d5e1bdd7a18d7a other_file_7 ; c30f7472766d25af1dc80b3ffc9a58c7 other_file_8 ; 7c5aba41f53293b712fd86d08ed5b36e other_file_9 Now using awk and xargs the command would be: md5sum * | sort | awk 'BEGIN{lasthash = ""} $1 == lasthash {print $2} {lasthash = $1}' | xargs rm The awk part initializes lasthash with the empty string, which will not match any hash, and then checks for each line if the hash in lasthash is the same as the hash (first column) of the current file (second column). If it is, it prints it out. At the end of every step it will set lasthash to the hash of the current file (you could limit this to only be set if the hashes are different, but that should be a minor thing especially if you do not have many matching files). The filenames awk spits out are fed to rm with xargs , which basically calls rm with what the awk part gives us. You probably need to filter directories before md5sum * . Edit: Using Marcin's method you could also use this one: comm -2 -3 <(ls) <(md5sum * | sort -k1 | uniq -w 32 | awk '{print $2}' | sort) | xargs rm This subtracts from the file list obtained by ls the first filename of each unique hash obtained by md5sum * | sort -k1 | uniq -w 32 | awk '{print $2}' . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/192701",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6388/"
]
} |
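For the Mac md5 -r output shown in the question, a hedged adaptation of the same idea (a sketch; it assumes the hash-then-filename format above and file names without leading blanks or embedded newlines):
md5 -r * | sort | while read -r hash name; do
    [ "$hash" = "$prev" ] && rm -- "$name"    # later entries with a repeated hash are duplicates
    prev=$hash
done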
192,705 | I have files named lect1.txt , lect2.doc , and lect3.doc . I want to get the file which is a .txt file and has lect in its filename. I tried find *.txt | grep lect* and it returned nothing, but when I did find *.txt | grep "lect*" it returned lect1.txt . What's the difference between these two expressions? | grep searches for its first argument (the pattern) in the files passed on the command line, or in stdin if no files are passed. Without the quotes, your shell expands lect* to all the files in the directory that begin with lect . Your command then becomes:
grep lect1.txt lect2.doc lect3.doc
which means: search for the text lect1.txt in both .doc files. Unless one of the .doc files contains the phrase lect1.txt , it will return nothing. (To be more precise, it looks for lect1 followed by any single character, since the . is a regex metacharacter, followed by txt , so it would also find lect1-txt , lect1xtxt , etc.) In your second example you quoted "lect*" , so the shell doesn't expand it and it is passed as-is to grep . With only a pattern passed as an argument, grep searches the filenames arriving on stdin for the pattern, which is what you are after, I believe. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/192705",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108136/"
]
} |
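For the stated goal of finding .txt files whose names contain lect, letting find do the matching itself avoids the pipeline entirely; a sketch:
find . -name 'lect*.txt'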
192,706 | With sysvinit , a sudoers entry like this would suffice:
%webteam cms051=/sbin/service httpd *
This would allow for commands such as:
sudo service httpd status
sudo service httpd restart
Now, with systemd , the service name is the final argument, i.e. a service restart is done with:
systemctl restart httpd.service
Naturally, I thought defining the command as systemctl * httpd.service would work, but that would allow something like systemctl restart puppet.service httpd.service , which is not the desired effect. With that in mind, what would be the best way to allow non-root users to control a systemd service? This doesn't need to be sudoers ; perhaps a file permission change may be sufficient? | Just add all needed commands to sudoers separately:
%webteam cms051=/usr/bin/systemctl restart httpd.service
%webteam cms051=/usr/bin/systemctl stop httpd.service
%webteam cms051=/usr/bin/systemctl start httpd.service
%webteam cms051=/usr/bin/systemctl status httpd.service | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/192706",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2372/"
]
} |
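As more actions are allowed, grouping them with a command alias keeps the sudoers file manageable; a sketch assuming the same systemctl path (the alias name is arbitrary):
Cmnd_Alias HTTPD_CMDS = /usr/bin/systemctl start httpd.service, \
                        /usr/bin/systemctl stop httpd.service, \
                        /usr/bin/systemctl restart httpd.service, \
                        /usr/bin/systemctl status httpd.service
%webteam cms051=HTTPD_CMDS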
192,716 | I am on CentOS 6, trying to enable core dumps for an application I am developing. I have put:
ulimit -H -c unlimited >/dev/null
ulimit -S -c unlimited >/dev/null
into my bash profile, but a core dump still was not generated (in a new terminal). I have also changed my /etc/security/limits.conf so that the soft limit is zero for all users. How do I set the location the core files are written to? I want to specify the location and have the time the dump was generated appended to the file name. | To set the location of core dumps on CentOS 6 you can edit /etc/sysctl.conf . For example, if you want core dumps in /var/crash :
kernel.core_pattern=/var/crash/core-%e-%s-%u-%g-%p-%t
where the variables are:
%e is the filename
%g is the gid the process was running under
%p is the pid of the process
%s is the signal that caused the dump
%t is the time the dump occurred
%u is the uid the process was running under
You also have to add to /etc/sysconfig/init :
DAEMON_COREFILE_LIMIT='unlimited'
Now apply the new changes:
$ sysctl -p
But there is a caveat with this approach: the kernel parameter kernel.core_pattern is reset and overwritten at every reboot to the following configuration, even when a value is manually specified in /etc/sysctl.conf :
|/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t e
In short, when abrtd.service starts, kernel.core_pattern is automatically overwritten by the system-installed abrt-addon-ccpp . There are two ways to resolve this:
1. Set the DumpLocation option in the /etc/abrt/abrt.conf configuration file. The destination directory can be specified by setting DumpLocation = /var/crash in /etc/abrt/abrt.conf ; the value displayed by sysctl kernel.core_pattern stays the same, but core files will actually be created under /var/crash . If you have SELinux enabled you also have to run:
$ semanage fcontext -a -t public_content_rw_t "/var/crash(/.*)?"
$ setsebool -P abrt_anon_write 1
and finally restart the abrtd service:
$ service abrtd restart
2. Stop the abrtd service; kernel.core_pattern will then not be overwritten. (I've never tested this.) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/192716",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50597/"
]
} |
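A quick way to check that the pattern takes effect is to crash a throwaway process on purpose (a sketch; the exact core file name will differ):
$ ulimit -c unlimited
$ sleep 100 &
$ kill -s SIGSEGV $!
$ ls /var/crash
core-sleep-11-0-0-2345-1427700000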
192,732 | Running Mint 17; I just ran apt-get upgrade for the first time in a while, with a 350 MB download. It stops halfway to tell me /etc/issue is not the package maintainer's version. Ditto for issue.net and lsb-release , where the diff looks like:
-DISTRIB_ID=LinuxMint
-DISTRIB_RELEASE=17
-DISTRIB_CODENAME=qiana
-DISTRIB_DESCRIPTION="Linux Mint 17 Qiana"
+DISTRIB_ID=Ubuntu
+DISTRIB_RELEASE=14.04
+DISTRIB_CODENAME=trusty
+DISTRIB_DESCRIPTION="Ubuntu 14.04.2 LTS"
OK, I've said "no" to each of those three file updates (i.e. keep them as Mint). Now I'm just wondering if this is a symptom of a more serious problem. Could apt-get be corrupted? Is there some simple check I can do to tell myself everything is OK? Google, so far, tells me no-one else has this problem, which seems strange if it is a mess-up in Mint packaging. Sorry, that is a bit of a wishy-washy question. I guess it boils down to: is it fine to shrug and think nothing of those three files? UPDATE: Here is the output of apt-cache policy base-files :
base-files:
  Installed: 7.2ubuntu5.2
  Candidate: 7.2ubuntu5.2
  Version table:
 *** 7.2ubuntu5.2 0
        500 http://archive.ubuntu.com/ubuntu/ trusty-updates/main amd64 Packages
        100 /var/lib/dpkg/status
     7.2ubuntu5 0
        500 http://us.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages
        500 http://archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages
However, there are still some Mint packages; here is apt-cache policy | grep -i mint :
 700 http://extra.linuxmint.com/ qiana/main i386 Packages
     release v=17,o=linuxmint,a=qiana,n=qiana,l=linuxmint,c=main
     origin extra.linuxmint.com
 700 http://extra.linuxmint.com/ qiana/main amd64 Packages
     release v=17,o=linuxmint,a=qiana,n=qiana,l=linuxmint,c=main
     origin extra.linuxmint.com
 700 http://packages.linuxmint.com/ qiana/import i386 Packages
     release v=17,o=linuxmint,a=qiana,n=qiana,l=linuxmint,c=import
     origin packages.linuxmint.com
 700 http://packages.linuxmint.com/ qiana/upstream i386 Packages
     release v=17,o=linuxmint,a=qiana,n=qiana,l=linuxmint,c=upstream
     origin packages.linuxmint.com
 700 http://packages.linuxmint.com/ qiana/main i386 Packages
     release v=17,o=linuxmint,a=qiana,n=qiana,l=linuxmint,c=main
     origin packages.linuxmint.com
 700 http://packages.linuxmint.com/ qiana/import amd64 Packages
     release v=17,o=linuxmint,a=qiana,n=qiana,l=linuxmint,c=import
     origin packages.linuxmint.com
 700 http://packages.linuxmint.com/ qiana/upstream amd64 Packages
     release v=17,o=linuxmint,a=qiana,n=qiana,l=linuxmint,c=upstream
     origin packages.linuxmint.com
 700 http://packages.linuxmint.com/ qiana/main amd64 Packages
     release v=17,o=linuxmint,a=qiana,n=qiana,l=linuxmint,c=main
     origin packages.linuxmint.com | From what I've gathered from Mint's repositories, Mint 17 (Qiana) is based on Ubuntu 14.04 (Trusty Tahr), and instead of hosting everything themselves, Mint relies on the Ubuntu repositories to provide all the packages that haven't been modified by Mint. That includes base-files , which contains /etc/issue etc.; it seems that Mint installs the Ubuntu version of the package, then overwrites the affected files with its own versions without using a package. Now Ubuntu have updated base-files for 14.04.2, and because Mint uses the Ubuntu repositories, that update gets picked up by Mint installations. And since /etc/issue and so on were modified without going through the packaging system, dpkg reckons that the user changed something and asks before overwriting the files.
So to answer your question, as Anthon says it's safe enough, if a bit unfortunate (Mint really should have its own version of base-files ). You can either keep the Mint versions or use the Ubuntu versions; the only consequence in the latter case is that software which needs to determine what distribution it's running on will find Ubuntu rather than Mint, but Mint is similar enough to Ubuntu for that to have no real impact. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/192732",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52393/"
]
} |
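To confirm which package owns a conflicting file, and to pull the maintainer (Ubuntu) versions back in later if desired, a sketch (note that dpkg normally preserves locally modified conffiles, so a reinstall may prompt again):
$ dpkg -S /etc/issue
base-files: /etc/issue
$ sudo apt-get install --reinstall base-files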
192,760 | Suppose I make a listing and sort the files by their time attribute:
ls -ltr
-rwxrwxrwx 1 bla bla 4096 Feb 01 20:10 foo1
-rwxrwxrwx 1 bla bla 4096 Feb 01 20:12 foo2
...
-rwxrwxrwx 1 bla bla 4096 Mar 05 13:25 foo1000
What should I add behind the ls -ltr in a pipe chain in order to obtain only the last line of the listing? I know there are sed and awk, but I do not know how to use them; I only know what they can do. | Since you asked about sed specifically:
ls -ltr | sed '$!d' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/192760",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102788/"
]
} |
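Two equivalent pipelines for comparison; tail is arguably the most direct tool for taking the last line:
ls -ltr | tail -n 1
ls -ltr | awk 'END { print }'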
192,766 | I learned, thanks to you, how to display only the last line of a listing, e.g.
ls -ltr | sed '$!d'
Now I thought about how to combine it with the find command, in order to repeat this for each existing directory, but I only got as far as a wrong solution:
find / -type d | xargs ls -ltr | sed '$!d'
If I read it right, this does not display the last line for each directory, but only the last line of the combined listing of all directories. How do I do it right? | Since you asked about sed specifically:
ls -ltr | sed '$!d' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/192766",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102788/"
]
} |
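A sketch that produces the last listing line per directory, as the question intends (tail is used instead of sed to sidestep quoting '$!d' inside sh -c):
find / -type d -exec sh -c 'for d; do ls -ltr -- "$d" | tail -n 1; done' sh {} +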
192,786 | In order to understand another answer (by glenn jackman):
find / -type d -print0 | while read -r -d '' dir; do ls -ltr "$dir" | sed '$!d'; done
the first step is to understand the usage of the -r option of the read command. First, I thought it would be sufficient to simply execute man read to look up the meaning of the -r option, but I realized the man page does not contain any explanation of options at all, so I Googled for it. I got some read -t and read -p examples, but no read -r . | There is no stand-alone read command: instead, it is a shell built-in, and as such is documented in the man page for bash :
read [-ers] [-a aname] [-d delim] [-i text] [-n nchars] [-N nchars] [-p prompt] [-t timeout] [-u fd] [name ...]
︙
-r    Backslash does not act as an escape character. The backslash is considered to be part of the line. In particular, a backslash-newline pair may not be used as a line continuation.
So, to summarize, read normally allows long lines to be broken using a trailing backslash character, and normally reconstructs such lines. This slightly surprising behavior can be deactivated using -r . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/192786",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102788/"
]
} |
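A tiny demonstration of what -r changes (input illustrative):
$ printf '%s\n' 'one\two' | { read v; echo "$v"; }
onetwo
$ printf '%s\n' 'one\two' | { read -r v; echo "$v"; }
one\two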
192,809 | When I try to switch to root using sudo -i I get the error:
/var/tmp/sclDvf3Vx: line 8: -i: command not found
However, su - works, and I will continue to use that. I'm by no means a Linux system administrator, so the environment is still pretty foggy to me. I guess my questions are: Why is the error being thrown? What's the difference between the two commands? Why would you use one over the other? Update: I'm using CentOS release 6.6 (Final). Here's the output from some commands I was asked to run in the comments below.
type sudo :
sudo is /opt/centos/devtoolset-1.1/root/usr/bin/sudo
sudo -V :
/var/tmp/sclIU7gkA: line 8: -V: command not found
grep '^root:' /etc/passwd :
root:x:0:0:root:/root:/bin/bash
Update: The following was added to my non-root user's ~/.bashrc a while back because I needed C++11 support. When I comment it out and re-ssh in, I can run sudo -i just fine without any errors.
if [ "$(gcc -dumpversion)" != "4.7.2" ]; then
    scl enable devtoolset-1.1 bash
fi | From the comments and your further investigations it looks like your devtoolset is modifying the PATH . Unfortunately that includes what appears to be an old or broken sudo command. It would be worth trying to modify the devtoolset include in your .bashrc like this, and then logging back in again:
if [ "$(gcc -dumpversion)" != "4.7.2" ]; then
    scl enable devtoolset-1.1 bash
    PATH=/usr/bin:$PATH   # We need a working sudo
fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/192809",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95779/"
]
} |
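To see every sudo on the current search path (the first entry wins), which makes this kind of PATH shadowing obvious; output illustrative:
$ type -a sudo
sudo is /opt/centos/devtoolset-1.1/root/usr/bin/sudo
sudo is /usr/bin/sudo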
192,843 | The command rm -rf ./ does not do anything in a directory full of subdirectories and files. Why? Isn't -r supposed to recurse? To add more confusion, it even prints an error message suggesting that it is traversing the directory:
rm: refusing to remove ‘.’ or ‘..’ directory: skipping ‘./’ | The rm command refuses to delete the directory by the '.' name. If you instead use the full path name, it will delete the directory recursively. It is even possible to delete the directory this way while it is the current directory:
[testuser@testhost] /tmp$ mkdir ff
[testuser@testhost] /tmp$ cd ff
[testuser@testhost] /tmp/ff$ touch a b c
[testuser@testhost] /tmp/ff$ rm -rf ./
rm: cannot remove directory: ‘./’
[testuser@testhost] /tmp/ff$ ls
a b c
[testuser@testhost] /tmp/ff$ rm -rf /tmp/ff
[testuser@testhost] /tmp/ff$ ls
[testuser@testhost] /tmp/ff$ ls ../ff
ls: cannot access ../ff: No such file or directory
[testuser@testhost] /tmp/ff$ cd ..
[testuser@testhost] /tmp$ ls ff
ls: cannot access ff: No such file or directory
From info rm : Any attempt to remove a file whose last file name component is ‘.’ or ‘..’ is rejected without any prompting. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/192843",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22534/"
]
} |
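If the goal is to empty the current directory rather than remove it, two sketches (note that the glob does not match dot files by default):
rm -rf ./*         # removes visible contents, keeps the directory
rm -rf "$PWD"      # removes the current directory itself, via its full path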
192,887 | I just wanted to ask whether there is any command which would work in common shells (bash, dash, Korn shell). It is supposed to check if the line variable contains any part of the path:
if [[ $line =~ "$PWD"$ ]]; then | In any POSIX-compatible shell you can do:
case $line in
(*"$PWD"*)
    # whatever your then block had
    ;;
esac
This works in bash , dash , and just about any other shell you can name. It can also be used to handle multiple possibilities easily. For example:
case $line in
(*"$PWD"*)    echo \$PWD match\!;;
(*"$OLDPWD"*) echo \$OLDPWD match\!;;
(*)           ! echo no match\!;;
esac
You can also use alternation:
case $line in
(*"$PWD"*|*"$OLDPWD"*) echo '$OLDPWD|$PWD match!';;
esac
Note the use of quoting above. case $line ... : the object of a case statement is not split on $IFS and is not used as a pattern for filename generation. This is similar to the way the left argument in a [[ test is treated. (*"$PWD"*) : here, too, a shell expansion is subjected to neither $IFS splitting nor filename generation; an unquoted expansion will neither split nor glob. But an unquoted expansion here may be construed as a pattern rather than a literal string, and so an expansion can mean more than one thing depending on whether or not it is quoted. It is important to quote any variable used in a pattern that should be interpreted literally, in the same way you would quote pattern characters which you want interpreted literally. For example, if $PWD contained a * and was not quoted, it would be construed as a pattern object and not as a literal * to be searched for. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/192887",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106676/"
]
} |
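A runnable check of the construct (the value of line is illustrative):
line="editing files in $PWD today"
case $line in
(*"$PWD"*) echo match;;
(*)        echo no match;;
esac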
192,897 | I am facing a problem with one of the above: only ssh user@ip works, not the other way round. And I am getting the following error when I run ssh ip root :
sh: root: not found
Because of this, I GUESS, one of the applications which uses the problematic syntax is not able to log in. | ssh user@ip is the correct syntax. The other form logs in to the host ip as you and tries to run a command called root . Please update your question with an explanation of what you are actually trying to achieve. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/192897",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107566/"
]
} |
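If the user name must be given as a separate word, the -l option does exactly that:
ssh root@ip        # user@host form
ssh -l root ip     # equivalent, with -l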
192,905 | I noted it is possible to set a number of hotkeys in Linux Mint 17.1 using the keyboard application, but there's no option to set a hotkey for "Show All Windows"; you can only assign a hot corner using Preferences -> Hot Corners. You can set customized hotkeys too, but I was not able to make that work. Browsing the internet, I did not find anything (tutorial, wiki, ...) explaining how to do this. Can someone suggest what I have to do to set that up? | For Linux Mint 18 Sarah, Ctrl + Alt + Down worked to display all the open windows in the current desktop. To view all the windows in all the desktops, use Ctrl + Alt + Up . This should work for Ubuntu too. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/192905",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/98693/"
]
} |
192,944 | I'm trying to curl an HTTPS website in the following way:
$ curl -v https://thepiratebay.se/
However, it fails with the error:
* About to connect() to thepiratebay.se port 443 (#0)
*   Trying 173.245.61.146...
* connected
* Connected to thepiratebay.se (173.245.61.146) port 443 (#0)
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS alert, Server hello (2):
* error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure
* Closing connection #0
curl: (35) error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure
Using -k / --insecure , or adding insecure to my ~/.curlrc , doesn't make any difference. How do I ignore or force the certificate from the curl command line? Using wget , it seems to work fine. It also works when testing with openssl , as below:
$ openssl s_client -connect thepiratebay.se:443
CONNECTED(00000003)
SSL handshake has read 2651 bytes and written 456 bytes
New, TLSv1/SSLv3, Cipher is AES128-SHA
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : TLSv1
    Cipher    : AES128-SHA
I have:
$ curl --version
curl 7.28.1 (x86_64-apple-darwin10.8.0) libcurl/7.28.1 OpenSSL/0.9.8| zlib/1.2.5 libidn/1.17
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp smtp smtps telnet tftp
Features: IDN IPv6 Largefile NTLM NTLM_WB SSL libz | Some sites disable support for SSL 3.0 (possibly because of its many exploits/vulnerabilities), so it's possible to force a specific SSL version with either -2 / --sslv2 or -3 / --sslv3 . Also, -L is worth a try if the requested page has moved to a different location. In my case it was a curl bug ( found in OpenSSL ), so curl needed to be upgraded to the latest version (>7.40), after which it worked fine. See also:
3 Common Causes of Unknown SSL Protocol Errors with cURL
Error when Installing Meteor at SO
[Bug 861137] Re: Openssl TLS errors while connecting to SSLv3 sites | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/192944",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21471/"
]
} |
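Because servers that reject the SSLv3 hello usually still speak TLS, forcing a TLS handshake is often the quickest test (assuming a curl/OpenSSL build with TLS support):
curl -v --tlsv1 https://thepiratebay.se/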
192,945 | I have just started to use Scientific Linux (7.0) (although I assume this question might be distribution neutral). I switched to the root account and from there created a new user account test-account using the command adduser test-account . It didn't prompt me for a password, nor did I use the option to provide one. So I guess it's a "without password" account. I can log into this account from the root account, which I suppose I'd be able to do without providing a password even if the test account had one. However, when I try to log into this (test-account) from a third account, it prompts me for a password, and just pressing Enter doesn't work. Is it possible to log into this account from a non-root account? Is there a way (without switching to root or using sudo )? | By default on enterprise GNU/Linux and its derivatives, the adduser command creates a user which is disabled until you explicitly specify a password for that user. Here is an example on CentOS 6.5, which should behave the same as Scientific Linux:
$ sudo adduser test
$ sudo grep test /etc/shadow
test:!!:123456:0:99999:7:::
That's because in the /etc/shadow file the password field is !! , as you can see in the example. Once you run passwd for this account, it will set the user's password and allow the user to log in. So, to have a user without a password, simply create the account and then delete the password:
$ sudo adduser test
$ sudo passwd -d test
Removing password for user test.
passwd: Success
$ su test
$ whoami
test
Now any user should be able to use su and log in as the user test in my example; you will not have to use sudo to log in to the account. Although this is possible and you can have an account without a password, it is not advised. If you simply set a password for the user, you should be allowed to log in:
$ sudo passwd test
[sudo] password for <YOURACCOUNT>:
Changing password for user test.
New password:
Retype new password:
passwd: all authentication tokens updated successfully. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/192945",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39843/"
]
} |
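To inspect an account's password status at any point, passwd -S is handy (output format illustrative; NP means "no password"):
$ sudo passwd -S test
test NP 2015-03-30 0 99999 7 -1 (Empty password.)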
192,995 | I have to finish a write-up of a few coreutils commands for a course at the moment, and I can't think of a starting point for a small practical code example that demonstrates the potential uses of stdbuf . Has anyone used it to fix the interaction of a couple of specific Unix commands? I know what it does. It's just that the first commands that came to mind have their own buffering controls, and normal terminal output is line-buffered anyway. It must be popular for appending to logs, yet I can't find a good command to demonstrate there. In the case of nohup , are there any commands that are commonly run with it to prevent interruption? As I mentioned, I am working on this for a course assignment at the moment. This, however, doesn't violate any of its rules. I'm just trying to find a good starting point for these examples. I don't have one for stdbuf , and I dislike the rudimentary one I was using for nohup . | stdbuf examples at http://www.pixelbeat.org/programming/stdio_buffering/
nohup is used for any long-running command that you want left running across logins. You can also do this with screen(1) , or retroactively with screen + https://github.com/nelhage/reptyr | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/192995",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108306/"
]
} |
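A classic pair of demonstrations, with assumed paths and script names: a filter in the middle of a pipeline block-buffers its output, so matches arrive in bursts unless the buffering is changed, and nohup keeps a job alive across logout:
tail -f /var/log/syslog | stdbuf -oL grep ssh | cut -c1-80    # -oL line-buffers grep's stdout
nohup ./long-running-build.sh > build.log 2>&1 &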
192,997 | I am working on a Linux server where a parent directory has hundreds of partially numbered files, e.g.
GGG12345_abb.txt
GGG12346_abc.txt
GGG12347_cbx.txt
..
GGG19045_jha.txt
each of which corresponds to one of the categories, such as:
myname1
myname2
..
myname12
In addition, there is an index text file which has two columns, name and id, holding the partial myname* and the GGG index (without _xxx.txt ):
12_12_myname1_abc GGG12345
12_15_myname1_abc GGG12346
..
11_15_myname2_abc GGG12353
I have created subdirectories MYNAME1 , MYNAME2 , etc. How can I take the file IDs from the index file and move the corresponding text files to the related subdirectories? E.g., GGG12345_abb.txt of category myname1 should go to directory MYNAME1 . | stdbuf examples at http://www.pixelbeat.org/programming/stdio_buffering/
nohup is used for any long-running command that you want left running across logins. You can also do this with screen(1) , or retroactively with screen + https://github.com/nelhage/reptyr | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/192997",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106396/"
]
} |
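For the task as stated, a hedged sketch (it assumes the index file is named index.txt and that names follow exactly the patterns shown above):
while read -r name id; do
    category=${name#*_*_}       # 12_12_myname1_abc -> myname1_abc
    category=${category%%_*}    # myname1_abc -> myname1
    dest=$(printf '%s' "$category" | tr '[:lower:]' '[:upper:]')
    mv -- "${id}"_*.txt "$dest"/
done < index.txt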
193,039 | I'm new to bash and can't find a good tutorial to answer my question.
array=( item1 item2 item3 )
for name in ${array[@]}; do
    echo current/total
    ... some other codes
done
I want to calculate the current and total value, the expected output of this being:
1/3
2/3
3/3
Thanks for any kind of tips. | You can access the array indices using ${!array[@]} and the length of the array using ${#array[@]} , e.g.:
#!/bin/bash
array=( item1 item2 item3 )
for index in ${!array[@]}; do
    echo $index/${#array[@]}
done
Note that since bash arrays are zero indexed , you will actually get:
0/3
1/3
2/3
If you want the count to run from 1, you can replace $index by $((index+1)) . If you want the values as well as the indices you can use "${array[index]}" , i.e.
#!/bin/bash
array=( item1 item2 item3 )
for index in ${!array[@]}; do
    echo $((index+1))/${#array[@]} = "${array[index]}"
done
giving
1/3 = item1
2/3 = item2
3/3 = item3 | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/193039",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45317/"
]
} |
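An equivalent sketch that keeps a 1-based counter and quotes the expansion (safer if elements ever contain spaces):
total=${#array[@]} i=0
for name in "${array[@]}"; do
    i=$((i + 1))
    echo "$i/$total $name"
done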
193,064 | I need to accept an email ID and full name in a script, and in case the full name is not provided, I need to generate the name from the email ID. Something like this (domains here are placeholders):
Case 1:
EmailID: user.name@example.com
Full Name: User Name
Case 2:
EmailID: user.name2@example.com
Full Name: User Name2
Case 3:
EmailID: this.is.a.very.long.email.id@example.com
Full Name: This Is A Very Long Email Id
This I have been able to achieve using the following steps:
EMAIL_ADDRESS="$1"
ID=(`echo $EMAIL_ADDRESS | cut -d'@' -f1 | tr '.' ' '`)
NEW_ID=()
NUM=0
for IN_VAL in ${ID[@]}
do
    NEW_ID[$NUM]="`echo ${IN_VAL^}`"
    NUM=$((++NUM))
done
echo "${NEW_ID[@]}"
I am on bash version 4.1.2. I think there has to be a better way to achieve this. Also, I already see a problem with it, as I am assuming the field separator is going to be the dot (.) character and not anything else like an underscore (_) or a hyphen (-). If there is a better way to do this, please let me know. Thanks. | You can actually do this entirely in bash, without any external commands, using word splitting and parameter expansion . It's even fairly short:
EMAIL_ADDRESS=this.is.a.very.long.email.id@example.com
USER=${EMAIL_ADDRESS%@*}
WORDS=( $(IFS=._- ; printf '%s ' $USER) )
echo "${WORDS[@]^}"
I'll take this line by line:
USER=${EMAIL_ADDRESS%@*}
This sets the USER variable to the part of EMAIL_ADDRESS that doesn't match @* at the end, that is, removing the domain name and leaving only the user part of the address.
WORDS=( $(IFS=._- ; printf '%s ' $USER) )
This creates and initialises an array WORDS . The initial values are given by the results of the command substitution $(...) . Command substitutions run in a subshell , so we can change the values of variables safely without affecting their values in our main shell. That includes IFS , which is used during word splitting as the group of characters that cause a new word to begin. Each one of . , _ , and - will form a word boundary, and you can add new characters there if you wish. After changing IFS we use printf to print out the words $USER has been split into, which is a little safer than echo .
echo "${WORDS[@]^}"
Finally, we print out the result. The [@] is array expansion, as you know, and then ^ performs upper-casing of the first character in each word (strictly, the first match of the default pattern ? ). The final result of running this script is the output:
This Is A Very Long Email Id
as expected. If any email addresses contain shell metacharacters * , ? , etc., they will be expanded as wildcards. You can wrap the WORDS= line in set -f / set +f to avoid that, but there's another option (courtesy of Glenn Jackman in the comments):
IFS=._- read -r -a WORDS <<<"${EMAIL_ADDRESS%@*}"
echo "${WORDS[@]^}"
This uses read -a to populate an array with the results of word splitting, and the rest (condensed) works as before. I find this less clear to read than the explicit array initialisation, but it's an option. It's also worth noting that email addresses can strictly have a wide variety of forms , including ones with spaces, quotes, and bracket characters in them, and this doesn't deal with those addresses at all (nor is it really possible to do so given your problem specification). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/193064",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81841/"
]
} |
193,066 | ssh won't let me log in, because the account is locked. I want to unlock the user on my server for public-key authentication over ssh, but not enable password login. I've tried:
# passwd -u username
passwd: unlocking the password would result in a passwordless account.
You should set a password with usermod -p to unlock the password of this account.
Auth log entries:
Mar 28 00:00:00 vm11111 sshd[11111]: User username not allowed because account is locked
Mar 28 00:00:00 vm11111 sshd[11111]: input_userauth_request: invalid user username [preauth] | Unlock the account and give the user a complex password, as @Skaperen suggests. Edit /etc/ssh/sshd_config and ensure you have:
PasswordAuthentication no
Check that the line isn't commented ( # at the start) and save the file. Finally, restart the sshd service. Before you do this, ensure that your public-key authentication is working first. If you need to do this for only one (or a small number) of users, leave PasswordAuthentication enabled and instead use Match User :
Match User miro, alice, bob
    PasswordAuthentication no
Place this at the bottom of the file, as it is valid until the next Match command or EOF. You can also use Match Group <group name> or a negation: Match User !bloggs . As you mention in the comments, you can also reverse it, so that password authentication is disabled in the main part of the config, and use Match statements to enable it for a few users:
PasswordAuthentication no
...
Match User <lame user>
    PasswordAuthentication yes | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/193066",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13428/"
]
} |
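A common way to clear the lock without ever creating a usable password is to set the hash field to '*', which can never match any typed password, so password login stays impossible while key login works; a sketch:
sudo usermod -p '*' username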
193,095 | I have picked up -- probably on Usenet in the mid-1990s (!) -- that the construct export var=value is a Bashism, and that the portable expression is
var=value
export var
I have been advocating this for years, but recently somebody challenged me about it, and I really cannot find any documentation to back up what used to be a solid belief of mine. Googling for "export: command not found" does not seem to bring up any cases where somebody actually had this problem, so even if it's genuine, I guess it's not very common. (The hits I get seem to be from newbies who copy/pasted punctuation and ended up with 'export: command not found' or some such, or who tried to use export with sudo , or from newbie csh users trying to use Bourne shell syntax.) I can certainly tell that it works on OS X and on various Linux distros, including the ones where sh is dash .
sh$ export var=value
sh$ echo "$var"
value
sh$ sh -c 'echo "$var"'   # see that it really is exported
value
In today's world, is it safe to say that export var=value is safe to use? I'd like to understand what the consequences are. If it's not portable to v7 "Bourne classic", that's hardly more than trivia. If there are production systems where the shell really cannot cope with this syntax, that would be useful to know. | export foo=bar was not supported by the Bourne shell (an old shell from the 70s from which modern sh implementations like ash/bash/ksh/yash/zsh derive). It was introduced by ksh . In the Bourne shell, you'd do:
foo=bar
export foo
or:
foo=bar; export foo
or with set -k :
export foo foo=bar
Now, the behaviour of export foo=bar varies from shell to shell. The problem is that assignments and simple command arguments are parsed and interpreted differently. The foo=bar above is interpreted by some shells as a command argument and by others as an assignment (sometimes). For instance,
a='b c'
export d=$a
is interpreted as:
'export' 'd=b' 'c'
by some shells ( ash , older versions of zsh (in sh emulation), yash ) and as:
'export' 'd=b c'
by the others ( bash , ksh ). While
export \d=$a
or
var=d
export $var=$a
would be interpreted the same in all shells (as 'export' 'd=b' 'c' ) because that backslash or dollar sign stops those shells that support it from considering those arguments as assignments. If export itself is quoted or is the result of some expansion (even in part), depending on the shell, it will also stop receiving the special treatment. See Are quotes needed for local variable assignment? for more details on that. The Bourne syntax, though:
d=$a; export d
is interpreted the same by all shells without ambiguity ( d=$a export d would also work in the Bourne shell and POSIX-compliant shells, but not in recent versions of zsh unless in sh emulation). It can get a lot worse than that. See for instance that recent discussion about bash when arrays are involved. (IMO, it was a mistake to introduce that feature.) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/193095",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19240/"
]
} |
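A quick probe of how whichever of these shells are installed parse the construct (the varying output is exactly the point):
for sh in sh bash dash ksh zsh; do
    printf '%s: ' "$sh"
    "$sh" -c 'a="b c"; export d=$a; echo "$d"' 2>&1
done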
193,101 | I have an old Win XP NEC laptop, and I tried to boot a live USB with Lubuntu 14.10 in order to install Lubuntu, but after about a minute the boot process hangs at a line that says:
[Firmware Bug] ACPI: No _BQC method, cannot determine initial brightness.
I left it there for about 15 minutes and it was still stuck. I tried rebooting, unplugging everything and booting again, but nothing worked. I can only boot to Windows XP; I cannot even boot to a Linux terminal. I've looked at many different Stack Exchange articles and I've tried Google. Please help! -Keith | export foo=bar was not supported by the Bourne shell (an old shell from the 70s from which modern sh implementations like ash/bash/ksh/yash/zsh derive). It was introduced by ksh . In the Bourne shell, you'd do:
foo=bar
export foo
or:
foo=bar; export foo
or with set -k :
export foo foo=bar
Now, the behaviour of export foo=bar varies from shell to shell. The problem is that assignments and simple command arguments are parsed and interpreted differently. The foo=bar above is interpreted by some shells as a command argument and by others as an assignment (sometimes). For instance,
a='b c'
export d=$a
is interpreted as:
'export' 'd=b' 'c'
by some shells ( ash , older versions of zsh (in sh emulation), yash ) and as:
'export' 'd=b c'
by the others ( bash , ksh ). While
export \d=$a
or
var=d
export $var=$a
would be interpreted the same in all shells (as 'export' 'd=b' 'c' ) because that backslash or dollar sign stops those shells that support it from considering those arguments as assignments. If export itself is quoted or is the result of some expansion (even in part), depending on the shell, it will also stop receiving the special treatment. See Are quotes needed for local variable assignment? for more details on that. The Bourne syntax, though:
d=$a; export d
is interpreted the same by all shells without ambiguity ( d=$a export d would also work in the Bourne shell and POSIX-compliant shells, but not in recent versions of zsh unless in sh emulation). It can get a lot worse than that. See for instance that recent discussion about bash when arrays are involved. (IMO, it was a mistake to introduce that feature.) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/193101",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108372/"
]
} |
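The _BQC line is often merely the last message printed before an unrelated hang; a commonly tried (but by no means guaranteed) workaround is to append kernel parameters at the live USB boot menu, e.g.:
nomodeset acpi_backlight=vendor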
193,132 | I am running Linux Mint 17.1 64-bit (based on Ubuntu 14.04). Ever since upgrading from Linux Mint 14/Ubuntu 12.10, the Python script I use to sync music to my Walkman has stopped working. Previously, when I mounted my Walkman, it would automatically show up as the path /run/user/1000/gvfs/WALKMAN/Storage Media and would work like any other file system: I could copy tracks to it, delete tracks from it, etc., all through Python. However, I can't remember if I had to make any changes to get this to happen. Since upgrading to Linux Mint 17 (and now 17.1), when I mount the Walkman, it shows up as the path /run/user/1000/gvfs/mtp:host=%5Busb%3A002%2C007%5D/Storage Media . Furthermore, when I try to run the same file operations, they now fail. I have discovered that this happens not just through Python, but on the command line as well. For example:
david@MILTON:~$ cp '/data/Music/10SecsWhiteNoise.mp3' '/run/user/1000/gvfs/mtp:host=%5Busb%3A002%2C006%5D/Storage Media/MUSIC'
cp: cannot create regular file ‘/run/user/1000/gvfs/mtp:host=%5Busb%3A002%2C006%5D/Storage Media/MUSIC/10SecsWhiteNoise.mp3’: Operation not supported
I have done some research on this problem, but the most common explanation seems to be that it was formerly solved by this PPA: https://launchpad.net/~langdalepl/+archive/ubuntu/gvfs-mtp But now, Ubuntu versions since 13.10 contain all these changes, so it should no longer be necessary. So why am I still having these errors? I am still able to do file operations on my Walkman through a graphical file manager (Caja, on Linux Mint), just not via the command line. | A guess: you are now actually using MTP for accessing your Walkman, and MTP sucks.
Details
The Operation not supported error could indicate that your Walkman uses an MTP implementation that doesn't support "direct" access. According to http://intr.overt.org/blog/?p=174 this kind of direct access is an Android-specific extension, so it's probably not supported by your Walkman. As a result, you can only use a few selected ways to access files on your Walkman using MTP: I guess everything that reads or writes files in one single operation is supported, while access to selected parts of a file is not supported by these MTP implementations. And it appears that cp and Python always use the latter access method and hence fail.
Possible Workaround
However, you might be able to just replace cp with gvfs-copy . In my tests with a Samsung Android phone (which has a crippled MTP implementation as well), gvfs-copy was able to copy files to the phone where cp failed.
Background
I couldn't find much info about these device-dependent MTP limitations; here are some snippets where the situation is explained somewhat:
https://askubuntu.com/a/284831
https://bugs.launchpad.net/ubuntu/+source/gvfs/+bug/1389001/comments/2
https://bugs.launchpad.net/ubuntu/+source/gvfs/+bug/1157583/comments/1
Why did it work before?
As to why your Walkman was accessible with cp in Mint 14 but not in Mint 17, this might be caused by an internal switch from PTP to MTP as the access system. At least that's what I noticed for the Samsung device when switching from Ubuntu 12.04 to 14.04. The phone supports both PTP and MTP, but Ubuntu 12.04 apparently only supports PTP, so that's what was used. Since the new Ubuntu version has built-in support for MTP, this is now used instead. Actually, it might even be the case that your Walkman was previously accessed as a USB Mass Storage Device , which is what USB hard disks and flash drives use.
Maybe for some reason Linux (or your Walkman) decided that MTP was preferable over Mass Storage access. You can see the access method used by looking at the URL for the Walkman (in Nautilus, go to the Walkman folder, press Ctrl+L and look at the address bar): for MTP the device is found under e.g. mtp://[usb:001,004]/ while for PTP it's something like gphoto2://[usb:001,004]/store_00010001 . For Mass Storage access the URL is just a normal path like /media/WALKMAN . I don't know if MTP has any actual advantages over PTP or Mass Storage, or whether it's possible to switch back to PTP or Mass Storage. Under Linux, both MTP and PTP implementations have their own set of bugs, so it might depend on your use case which one is better. AFAIK Mass Storage is the most desirable option for the user but device support in phones is waning. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/193132",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108391/"
]
} |
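A sketch of the suggested gvfs-copy workaround, using the MTP URL visible in the error message (the usb bus/device numbers change on every connection):
gvfs-copy /data/Music/10SecsWhiteNoise.mp3 'mtp://[usb:002,006]/Storage Media/MUSIC/'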
193,223 | I would like to grep certain parts of some shell command output in a shell script:
$ uname -r
3.14.37-1-lts
where I just need the 3.14.37 . Also, for the shell script variable VERSION that has the value "-jwl35", I would like to take only the value "jwl35". How can I use regular expressions to do this in a shell script? Thanks in advance! | Many, many ways. Here are a few:
GNU Grep
$ echo 3.14.37-1-lts | grep -oP '^[^-]*'
3.14.37
sed
$ echo 3.14.37-1-lts | sed 's/^\([^-]*\).*/\1/'
3.14.37
Perl
$ echo 3.14.37-1-lts | perl -lne '/^(.*?)-/ && print $1'
3.14.37
or
$ echo 3.14.37-1-lts | perl -lpe 's/^(.*?)-.*/$1/'
3.14.37
or
$ echo 3.14.37-1-lts | perl -F- -lane 'print $F[0]'
3.14.37
awk
$ echo 3.14.37-1-lts | awk -F- '{print $1}'
3.14.37
cut
$ echo 3.14.37-1-lts | cut -d- -f1
3.14.37
Shell, even!
$ echo 3.14.37-1-lts | while IFS=- read a b; do echo "$a"; done
3.14.37 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/193223",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108427/"
]
} |
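For the second part of the question, stripping the leading hyphen from VERSION needs no regex at all; parameter expansion does it:
VERSION="-jwl35"
echo "${VERSION#-}"    # jwl35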
193,226 | I want to start the w command periodically; according to man watch the smallest possible time interval is 0.1. I tried:
watch -n1 w      (works)
watch -n1.5 w    (does not work)
watch -n0.1 w    (does not work)
When I try to start the watch command with the n-option as a non-integer, I get the error message:
watch: failed to parse argument: '0.1' | This is a locale problem. watch uses strtod(3) , which is locale-dependent, to convert the argument to -n to a double . To fix the problem, you need to either specify the argument to -n with a different separator:
watch -n 0,1 w
or change your locale to a setting where the period character is used for the decimal point:
export LC_NUMERIC=en_US.UTF-8
watch -n 0.1 w
A couple of references. A relevant portion of the Linux man page for strtod : "A decimal number consists of a nonempty sequence of decimal digits possibly containing a radix character (decimal point, locale-dependent, usually '.')". You can review your current settings by running locale in your terminal:
locale
LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
...
The source code in question can be reviewed at gitlab:
https://gitlab.com/procps-ng/procps/blob/85fff468fa263cdd2ff1c0144579527c32333695/watch.c#L625
https://gitlab.com/procps-ng/procps/blob/85fff468fa263cdd2ff1c0144579527c32333695/lib/strutils.c#L49
(edit 2017-09-07): updated gitlab links | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/193226",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102788/"
]
} |
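For a one-off run, the locale can also be overridden for just that command:
LC_NUMERIC=C watch -n 0.1 w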
193,233 | I've just bought a new VPS with CentOS 6.6 installed. I'm attempting to install Asterisk 11 on this VPS remotely, via the command line. I've followed the directions here; however, I get this error:
you do not appear to have the sources for the 2.6.32-042stab102.9 kernel installed
when running:
cd /usr/src/dahdi-linux-complete*
make && make install && make config
How can I install this kernel's sources and continue my install? | This is a locale problem. watch uses strtod(3) , which is locale-dependent, to convert the argument to -n to a double . To fix the problem, you need to either specify the argument to -n with a different separator:
watch -n 0,1 w
or change your locale to a setting where the period character is used for the decimal point:
export LC_NUMERIC=en_US.UTF-8
watch -n 0.1 w
A couple of references. A relevant portion of the Linux man page for strtod : "A decimal number consists of a nonempty sequence of decimal digits possibly containing a radix character (decimal point, locale-dependent, usually '.')". You can review your current settings by running locale in your terminal:
locale
LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
...
The source code in question can be reviewed at gitlab:
https://gitlab.com/procps-ng/procps/blob/85fff468fa263cdd2ff1c0144579527c32333695/watch.c#L625
https://gitlab.com/procps-ng/procps/blob/85fff468fa263cdd2ff1c0144579527c32333695/lib/strutils.c#L49
(edit 2017-09-07): updated gitlab links | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/193233",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108467/"
]
} |
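The stab suffix in 2.6.32-042stab102.9 marks an OpenVZ kernel, so the matching development package typically comes from the OpenVZ repository rather than stock CentOS; a sketch (the package name is an assumption, so check what your provider's repository actually offers):
yum install vzkernel-devel-$(uname -r)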
193,240 | When source code is released on the internet and neither a license nor a copyright notice is indicated, what is the code licensed under? Can I make a fork of the code? | If you live in a country that has ratified the Berne Convention (you probably do), then anything that can be copyrighted is copyrighted, whether or not it is mentioned explicitly. An explicit mention of copyright can help settle disputes and can lead to higher damages in case of violations, but it is not a requirement to claim the protection of copyright. If you find something on the Internet, there may be a presumption that you're allowed to download it, use it, and possibly even modify it for your private use (that last one depends on the jurisdiction). This is a presumption , not an evident right. If you find code on the author's web page with a mention that you can download it and use it, it's a safe presumption. If you find it on a file-sharing website next to other content that is clearly not redistributed legally, then this is not a safe presumption. In no case can you redistribute the code or a modified version of it without an explicit authorization from the author or rights holder. If you know who the author is, contact them and ask them to put an explicit license on the work. That's what free software distributions do: if they can't find the author, they don't distribute the code. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/193240",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/57376/"
]
} |
193,253 | I am trying to print a regex pattern for the following piece of mail log. In particular, I am trying to get the ID between the square brackets (see the second line for reference).
Mar 29 03:48:13 mx-150 clamsmtpd: 14114F: accepted connection from: 127.0.0.1
Mar 29 03:48:13 mx-150 postfix/smtpd[7445]: connect from unknown[127.0.0.1]
Mar 29 03:48:13 mx-150 spamd[15674]: prefork: child states: II
I am using the following command:
awk '/\[\d+\]/ { print }' maillog
According to https://regex101.com/r/pL7kN2/1 I am getting 1 match; however, awk is not returning anything. Why is that? | Try standard regexps (instead of perl regexps). This will print matching lines:
awk '/\[[[:digit:]]+\]/ { print }' maillog
To extract and print the matching value inside the brackets:
awk 'match($0,/\[[[:digit:]]+\]/) { print substr($0,RSTART+1,RLENGTH-2)}' maillog | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/193253",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64661/"
]
} |
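The underlying reason for the empty result: awk uses POSIX extended regular expressions, which have no \d shorthand (that is PCRE syntax, which the regex101 test assumed), so a digit class must be spelled out:
$ echo 'postfix/smtpd[7445]: connect' | awk '/\[[0-9]+\]/ { print "matched" }'
matched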
193,275 | I wanted to execute a script that picks out a random directory path:
find / -type d | shuf -n1
Unfortunately, I get error messages about find being forbidden from entering certain directories. How can I exclude a directory from the search with find ? | To exclude specific paths, on Linux:
find / -path /sys -prune -o -path /proc -prune -o -type d -print
(without the final -print , the pruned directories themselves would be listed as well). Another approach is to tell find not to recurse into different filesystems:
find / -xdev -type d
You could also use locate to query a database of file names (usually updated nightly; you can also update it manually using updatedb ) instead of the live system:
locate '*' | shuf -n 1 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/193275",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102788/"
]
} |
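If the only goal is to silence the permission errors rather than skip the directories, redirecting stderr is enough:
find / -type d 2>/dev/null | shuf -n 1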
193,279 | I am running an Ubuntu Linux machine. When I run applications written by different vendors, like Chrome and Firefox, I notice that they all run with my uid. But if that's the case, any file they create on the file system will also carry the same uid. So how, in Linux, can two mutually untrusted apps keep their files secure from each other?
- An ACL policy set by app A may still allow app B to read A's files, through the user part of (user, group, other).
- Do apps need to use encryption to protect their data from each other? | The literal answer is that there is no such thing as an untrusted application running under your account. If you want to run an untrusted application, run it under a different account or in a virtual machine. Typical desktop operating systems such as Unix and Windows and typical mobile operating systems such as Android and iOS have different security models. Unix is a multiuser operating system, with mutually untrusted users. Applications are considered trusted: all the applications of a user run in the same security context. Services , on the other hand, are somewhat less trusted: they are typically executed under a dedicated account, to reduce the impact in case of a security vulnerability. There are two major reasons why the Unix security model works this way:
A negative reason is history: when Unix was designed, applications came from a small set of programmers, and were backed by the reputation of the vendor or provided as source code, or both. Backdoors were rarely feared in applications. Furthermore, few applications communicated over the network, so there were relatively few opportunities to trigger and exploit vulnerabilities. Therefore there was no strong incentive to isolate applications from each other.
A positive reason is functionality: isolating applications makes a lot of things impossible. If each application has its own data area, that makes sharing data between applications difficult. On a typical Unix system, it is very common for the same data to be handled by multiple applications. This is especially true since Unix has no clear separation between "applications" and "the operating system". A web browser is an application. Not being able to download a file into the directory of your choice, because the browser is confined to its own directory, is annoying. The program that displays menus and icons when you log in is also an application on the same footing. So are file managers, which by definition need access to all your files. So are the shells and other interpreters that execute scripts all over the place. When you print a document from a word processor, this might involve an application to convert the document to a printable format, and another application to send the data to the printer.
Although there are a lot more application authors now than 40 years ago, applications are still typically distributed through trusted channels, which carry a reputation indication. (This is markedly more true for Linux than for Windows, which is part of the reason why viruses are more common under Windows.) An application found to have a backdoor would be promptly pulled from Linux software repositories. Mobile operating systems were designed with different threats in mind. They were designed for single-user systems, but with applications coming from wholly untrusted sources. Application isolation is starting to make its way onto desktop Unix systems.
Some distributions run certain programs under security frameworks such as AppArmor or SELinux which restrict what the application can do. The cost of these security restrictions is that they sometimes make desirable uses impossible, for example preventing a restricted application from opening files in certain directories. Encryption would be completely useless. Encryption only protects data in transit (over the network) or at rest (stored on a disk); it doesn't protect data on a live system: if subsystem A decrypts its data, then it's up to the OS to prevent subsystem B from accessing the decrypted data, and thus it doesn't matter whether the data was decrypted by A or stored unencrypted. The operating system might encrypt data, but only to protect it in case the storage medium is stolen. If you want to run code that you don't trust, the best thing to do is to run it in a virtual machine. Give the virtual machine access to only the files that the application needs (e.g. don't share your home directory). See also Why do mobile apps have fine-grained permissions while desktop apps don't? and Why are apps for mobile devices more restrictive than for desktop? | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/193279",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63934/"
]
} |
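A sketch of the "different account" advice (it assumes creating a throwaway unprivileged user, and someapp stands in for the untrusted program):
sudo useradd -m untrusted                 # one-time setup
sudo -u untrusted -H /path/to/someapp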
193,345 | I have a directory: /var/lib/mysql/test_db/ which contains numerous files that make up the test_db database. I have now created a new directory: /var/lib/mysql/data/ . I am trying to move the test_db directory and its contents into the data directory. I've tried various commands revolving around
sudo mv /var/lib/mysql/test_db/ /var/lib/mysql/data/test_db/
but I keep getting the error:
mv: cannot move /var/lib/mysql/test_db/ to /var/lib/msyql/data/test_db/: No such file or directory
But if I run ls -lah I get:
drwxrwxrwx 2 root  root   32K Mar 27 15:58 test_db
drwxrwxrwx 3 mysql mysql 4.0K Mar 30 10:51 data
which, from what I can tell, means they are both directories, and therefore both exist. As you can see, I have changed permissions on them both ( chmod 777 test_db ), but that didn't work. What am I missing? | Remove the target database directory and move the test_db directory itself. (This will implicitly move its contents, too.)
sudo rmdir /var/lib/mysql/data/test_db
sudo mv /var/lib/mysql/test_db /var/lib/mysql/data
Generally you don't need to provide a trailing slash on directory names. Reading your comments: if you find that you're still getting a "no such file or directory" error, it may be that your source directory test_db has already been moved into the target test_db directory (giving you /var/lib/mysql/data/test_db/test_db/... ). If this is the case then the rmdir above will also fail with a "no such file or directory" error. Fix it with this command, and then re-run the two at the top of this answer:
sudo mv /var/lib/mysql/data/test_db/test_db /var/lib/mysql | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/193345",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102428/"
]
} |
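If you would rather keep the empty target directory in place and rename onto it in one step, GNU mv can do that while the target is still empty; a sketch:
sudo mv -T /var/lib/mysql/test_db /var/lib/mysql/data/test_db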
193,352 | I'm just jumping into Unix from a different world, and wanted to know if
while true
do
    /someperlscript.pl
done
is a good idea. The Perl script itself internally has a folder/file watcher that executes when files are changed in the target location. Is this ( while true ) a good idea? If not, what is a preferred, robust approach? TIA. EDIT: Since this seems to have generated a fair bit of interest, here is the complete scenario. The Perl script watches a directory using a file watcher. Upon receiving new files (they arrive via rsync), it picks up the new one and processes it. Now, the incoming files may be corrupt (don't ask; they come from a Raspberry Pi), and sometimes the process may not be able to deal with one. I don't know exactly why, because we aren't aware of all the scenarios yet. BUT, if the process does fail for some reason, we want it to be up and running again to deal with the next file, because the next file is completely unrelated to the one that might have caused the error. Usually I would have used some sort of catch-all and wrapped the entire code in it so that it NEVER crashes, but I was not sure how to do that in Perl. From what I've understood, using something like supervisord is a good approach for this. | That depends on how fast the perl script returns. If it returns quickly, you might want to insert a small pause between executions to avoid CPU load, e.g.:
while true
do
    /someperlscript.pl
    sleep 1
done
This will also prevent a CPU hog if the script is not found or crashes immediately. The loop might also better be implemented in the perl script itself to avoid these issues. Edit: As you wrote that the loop's only purpose is to restart the perl script should it crash, a better approach would be to implement it as a monitored service, but the precise way to do it is OS dependent, e.g. Solaris SMF, Linux systemd, or a cron-based restarter. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/193352",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108541/"
]
} |
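A minimal sketch of the "monitored service" route mentioned in the answer above, assuming a hypothetical unit name (watcher.service) and the script path from the question; dropped under /etc/systemd/system/, systemd itself restarts the watcher whenever it dies, replacing the while-true wrapper:
[Unit]
Description=Perl directory watcher (hypothetical example)

[Service]
ExecStart=/someperlscript.pl
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
Enable and start it with systemctl enable watcher.service && systemctl start watcher.service ; Restart=always would also cover clean exits.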
193,365 | I use screen (Screen visual consoles). To detach a screen I need to press Ctrl + A followed by D , but sometimes a session is closed without detaching it. It appears as (Attached) on screen -list :
eduard@eduard-X:~$ screen -list
There are screens on:
 4561.pts-46.eduard-X (30.03.2015 14:48:51) (Attached)
 4547.pts-46.eduard-X (30.03.2015 14:48:33) (Detached)
 4329.pts-41.eduard-X (30.03.2015 14:46:28) (Attached)
 3995.pts-30.eduard-X (30.03.2015 14:30:01) (Detached)
If I try to restore it, screen responds that there is no screen to resume:
eduard@eduard-X:~$ screen -r 4329
There is a screen on: 4329.pts-41.eduard-X (30.03.2015 14:46:28) (Attached)
There is no screen to be resumed matching 4329.
Can I still resume a screen that I did not detach properly? | Sure, with screen -d -r You can choose which screen to detach and reattach as usual by finding the pid (or complete name) with screen -list . screen -d -r 12345 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/193365",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22558/"
]
} |
193,368 | I want to use scp to upload files but sometimes the target directory may not exist. Is it possible to create the folder automatically? If so, how? If not, what alternative way can I try? | This is one of the many things that rsync can do. If you're using a version of rsync released in the past several years,¹ its basic command syntax is similar to scp :² $ rsync -r local-dir remote-machine:path That will copy local-dir and its contents to $HOME/path/local-dir on the remote machine, creating whatever directories are required.³ rsync does have some restrictions here that can affect whether this will work in your particular situation. It won't create multiple levels of missing remote directories, for example; it will only create up to one missing level on the remote. You can easily get around this by preceding the rsync command with something like this: $ ssh remote-host 'mkdir -p foo/bar/qux' That will create the $HOME/foo/bar/qux tree if it doesn't exist. It won't complain or do anything else bad if it does already exist. rsync sometimes has other surprising behaviors. Basically, you're asking it to figure out what you meant to copy, and its guesses may not match your assumptions. Try it and see. If it doesn't behave as you expect and you can't see why, post more details about your local and remote directory setups, and give the command you tried. Footnotes : Before rsync 2.6.0 (1 Jan 2004), it required the -e ssh flag to make it behave like scp because it defaulted to the obsolete RSH protocol . scp and rsync share some flags, but there is only a bit of overlap. When using SSH as the transfer protocol, rsync uses the same defaults. So, just like scp , it will assume there is a user with the same name as your local user on the remote machine by default. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/193368",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45317/"
]
} |
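If you would rather stay with scp , the two-step workaround from the answer collapses into one line (hypothetical names; mkdir -p is harmless if the tree already exists):
ssh remote-host 'mkdir -p foo/bar/qux' && scp local-file remote-host:foo/bar/qux/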
193,422 | locate gtags would find all the files named gtags . What if I only need executables, is there any way to do this? | Not easily. You can use locate bash | while IFS= read -r line; do [[ -x "$line" ]] && echo "$line"; done to find all executables where the name contains bash . This is faster than using find across the whole filesystem because only a few files need to be checked. locate bash does what it always does (lists all matches) | (pipe) takes the output from the first command ( locate ) and sends it to the second one (the rest of the line) the while ...; do ... done loop iterates over every line it receives from the pipe (from locate ) read -r line reads one line of input and stores it in a variable called line (in our case, a path/file name) [[ -x "$line" ]] tests whether the file in $line is executable if it is, the && echo "$line" part prints it on your screen | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/193422",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2239/"
]
} |
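For comparison, the whole-filesystem route the answer mentions as slower can be done with GNU find alone, suppressing the permission-error noise on stderr:
find / -type f -name gtags -executable 2>/dev/null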
193,441 | I know of two ways commands can be connected with each other: by using a Pipe (putting std-output into std-input of the next command). by using a Tee (splice the output into many outputs). I do not know if that is all that is possible, so I drew a hypothetical connection type: How could it be possible to implement a circular flow of data among commands like for instance in this pseudo code, where I use variables instead of commands:
pseudo-code:
a = 1 # start condition
repeat {
b = tripple(a)
c = sin(b)
a = c + 1
} | Circular I/O Loop Implemented with tail -f This implements a circular I/O loop:
$ echo 1 >file
$ tail -f file | while read n; do echo $((n+1)); sleep 1; done | tee -a file
2
3
4
5
6
7
[..snip...]
This implements the circular input/output loop using the sine algorithm that you mentioned:
$ echo 1 >file
$ tail -f file | while read n; do echo "1+s(3*$n)" | bc -l; sleep 1; done | tee -a file
1.14112000805986722210
.72194624281527439351
1.82812473159858353270
.28347272185896349481
1.75155632167982146959
[..snip...]
Here, bc does the floating point math and s(...) is bc's notation for the sine function. Implementation of the Same Algorithm Using a Variable Instead For this particular math example, the circular I/O approach is not needed. One could simply update a variable:
$ n=1; while true; do n=$(echo "1+s(3*$n)" | bc -l); echo $n; sleep 1; done
1.14112000805986722210
.72194624281527439351
1.82812473159858353270
.28347272185896349481
[..snip...] | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/193441",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102788/"
]
} |
193,465 | What file mode indicates that a file is a symbolic link (symlink)? My use case is to detect symbolic links inside a git repository (and its history). I was under the impression that a symlink is a symlink because of its file mode, and that file mode is what the tool chmod sets. | File modes cover two different notions: file types and file permissions. A file's mode is represented by the value of st_mode in the result of stat(2) calls, and ls -l presents them all together; see Understanding UNIX permissions and file types for details. Once a file is created its type can't be changed. In addition, on Linux systems you can't specify a symlink's permissions; all that matters is the target's permission (and effectively the full mode since that determines the symlink's behaviour too). See How do file permissions apply to symlinks? for details. On Mac OS X symlinks can have their own permissions. Finally, git uses a simplified model, with a limited number of recognised modes:
040000 for a directory
100644 for a normal file
100755 for an executable file
120000 for a symbolic link
You can see these values using commands such as git cat-file -p 'master^{tree}' ; see Pro Git for details. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/193465",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61349/"
]
} |
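For the asker's actual goal — detecting symlinks inside a git repository — a short sketch built on the 120000 mode listed above:
git ls-tree -r HEAD | awk '$1 == "120000"'
This prints every entry recorded with symlink mode in the tree of HEAD; substitute any commit hash for HEAD to scan history.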
193,472 | I have a Quantum SuperLoader 3 plugged in via SAS to a CentOS 7 system. It shows in dmesg and lsscsi and is handled by the ch driver .
$ lsscsi
[0:2:0:0] disk LSI MR9271-8i 3.24 /dev/sda
[1:0:0:0] tape IBM ULTRIUM-HH6 E4J1 /dev/st0
[1:0:0:1] mediumx QUANTUM UHDL 0091 /dev/sch0
Here's the kernel initialization:
$ dmesg
[ 13.443589] scsi 1:0:0:0: Attached scsi generic sg2 type 1
[ 13.444091] scsi 1:0:0:1: Attached scsi generic sg3 type 8
[ 13.463023] SCSI Media Changer driver v0.25
[ 13.463121] st: Version 20101219, fixed bufsize 32768, s/g segs 256
[ 13.572514] ch0: type #1 (mt): 0x0+1 [medium transport]
[ 13.572516] ch0: type #2 (st): 0x100+16 [storage]
[ 13.572517] ch0: type #3 (ie): 0x0+0 [import/export]
[ 13.572518] ch0: type #4 (dt): 0x20+1 [data transfer]
[ 13.697117] ch0: dt 0x20: ch0: ID/LUN unknown
[ 13.697119] ch0: INITIALIZE ELEMENT STATUS, may take some time ...
[ 67.097903] ch0: ... finished
[ 67.097910] ch 1:0:0:1: Attached scsi changer ch0
[ 67.098792] st 1:0:0:0: Attached scsi tape st0
[ 67.098796] st 1:0:0:0: st0: try direct i/o: yes (alignment 4 B)
The tape drive operates normally using the mt-st package. I have also installed mtx for use with Bacula or Amanda, but mtx seems to expect a different driver than ch . It appears there are certain tools for the ch driver, such as scsi-changer , but they do not appear to be commonly used and so I imagine there must be a way to get mtx to work with the ch driver directly. When invoked the obvious way:
$ sudo mtx -f /dev/sch0 status
/dev/sch0 is not an sg device, or old sg driver
/dev/sch0 is:
$ ls -lastZ /dev/sch0
crw-rw----. root cdrom system_u:object_r:device_t:s0 /dev/sch0
I'm going to try using the kraxel.org SCSI changer, but given the lack of support within Amanda, any suggestions to solve the mtx issue would be a boon. | Figured it out! mtx functions only on "generic" SCSI devices. The /dev/sch0 device provided by the ch kernel driver is something of a red herring. It turns out that SCSI devices are given "generic" device files, in addition to whatever driver-backed specific devices are created. You can find those using lsscsi :
$ lsscsi --generic
[0:0:19:0] enclosu CISCO UCS 240 0809 - /dev/sg0
[0:2:0:0] disk LSI MR9271-8i 3.24 /dev/sda /dev/sg1
[1:0:0:0] tape IBM ULTRIUM-HH6 E4J1 /dev/st0 /dev/sg2
[1:0:0:1] mediumx QUANTUM UHDL 0091 /dev/sch0 /dev/sg3
These were actually alluded to in the dmesg output above. Using the generic device, mtx works fine with the SuperLoader 3 on CentOS 7:
$ sudo mtx -f /dev/sg3 status
 Storage Changer /dev/sg3:1 Drives, 16 Slots ( 0 Import/Export )
Data Transfer Element 0:Empty
 Storage Element 1:Empty
 Storage Element 2:Empty
 Storage Element 3:Empty
 Storage Element 4:Empty
 Storage Element 5:Empty
 Storage Element 6:Empty
 Storage Element 7:Empty
 Storage Element 8:Empty
 Storage Element 9:Empty
 Storage Element 10:Empty
 Storage Element 11:Empty
 Storage Element 12:Empty
 Storage Element 13:Empty
 Storage Element 14:Empty
 Storage Element 15:Empty
 Storage Element 16:Empty
All that's left to do is to symlink /dev/changer to /dev/sg3 for convenience. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/193472",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108622/"
]
} |
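One way to create that convenience symlink (note it will not survive a reboot, and sg numbering can change across boots; a udev rule is the durable route):
sudo ln -s /dev/sg3 /dev/changer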
193,482 | Is there a simple command line to extract the last part of a string separated by hyphens? E.g., I want to extract 123 from foo-bar-123 . | You can use Bash's parameter expansion :
string="foo-bar-123" && printf "%s\n" "${string##*-}"
123
If you want to use another process, with Awk: echo "foo-bar-123" | awk -F- '{print $NF}' Or, if you prefer Sed: echo "foo-bar-123" | sed 's/.*-//' A lighter external process, as Glenn Jackman suggests, is cut : cut -d- -f3 <<< "$string" | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/193482",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3120/"
]
} |
193,499 | I can't switch to user jenkins on an OpenVZ container. There is still a jenkins process running, which was started by this user. I tried # su jenkins ; it does not switch to the jenkins user. There is no error message. /etc/group shows there is a jenkins group: jenkins:x:498: . I tried id -g jenkins and got this: jenkins(uid=497) . There is a /etc/passwd entry: jenkins:x:497:498:Jenkins Continuous Build server:/var/lib/jenkins:/bin/false Nothing happened to this container. I revisited it some time after it was stopped, started it, and found it in this situation. So there is a jenkins user. Why can't I switch to it? | The reason su jenkins appears to fail is because the user's shell is /bin/false . You can specify a shell with su that will be used instead of the default login shell: su -s /bin/bash jenkins | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/193499",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107452/"
]
} |
193,575 | I can find the PIDs of my program instances by issuing a command like this: pidof avconv which gives me the results 16616 16283 16279 16198 16175 16035 15073 14049 4922 But how can I output the same result line by line, like:
16616
16283
16279
16198
16175
16035
15073
14049
4922 | You could parse the output with sed , as suggested by @Sobrique, or with tr : pidof avconv | tr ' ' '\n' Another approach would be to use pgrep instead:
$ pgrep avconv
16616
16283
16279
16198
16175
16035
15073
14049 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/193575",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/101660/"
]
} |
193,638 | Let's say we have a string and its delimiter is ? : Leslie Cheung April 1 ? Elvis August 16 ? Leonard Nimoy February 27 I know how to grep the first substring between delimiters:
echo $above_string | grep -oP "^[^?]*"
Leslie Cheung April 1
How should I change the Regex in order to grep the second or third substring? | How about using cut? If you'd like to print the 2nd field: echo "$above_string" | cut -f2 -d "?" Second field onward: echo "$above_string" | cut -f2- -d "?" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/193638",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102788/"
]
} |
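To stay with grep as the question asked, a sketch using PCRE's \K, which discards everything matched before it (the spaces surrounding the field are kept):
echo "$above_string" | grep -oP '^[^?]*\?\K[^?]*'
For the third field, repeat the skip group: grep -oP '^(?:[^?]*\?){2}\K[^?]*'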
193,714 | I am aware of the following thread and supposedly an answer to it . Except an answer is not an answer in a generic sense. It tells what the problem was in one particular case, but not in general. My question is: is there a way to debug ordering cycles in a generic way? E.g.: is there a command which will describe the cycle and what links one unit to another? For example, I have the following in journalctl -b (please disregard the date, my system has no RTC to sync time with):
Jan 01 00:00:07 host0 systemd[1]: Found ordering cycle on sysinit.target/start
Jan 01 00:00:07 host0 systemd[1]: Found dependency on local-fs.target/start
Jan 01 00:00:07 host0 systemd[1]: Found dependency on cvol.service/start
Jan 01 00:00:07 host0 systemd[1]: Found dependency on basic.target/start
Jan 01 00:00:07 host0 systemd[1]: Found dependency on sockets.target/start
Jan 01 00:00:07 host0 systemd[1]: Found dependency on dbus.socket/start
Jan 01 00:00:07 host0 systemd[1]: Found dependency on sysinit.target/start
Jan 01 00:00:07 host0 systemd[1]: Breaking ordering cycle by deleting job local-fs.target/start
Jan 01 00:00:07 host0 systemd[1]: Job local-fs.target/start deleted to break ordering cycle starting with sysinit.target/start
where cvol.service (the one that got introduced, and which breaks the cycle) is:
[Unit]
Description=Mount Crypto Volume
After=boot.mount
Before=local-fs.target

[Service]
Type=oneshot
RemainAfterExit=no
ExecStart=/usr/bin/cryptsetup open /dev/*** cvol --key-file /boot/***

[Install]
WantedBy=home.mount
WantedBy=root.mount
WantedBy=usr-local.mount
According to journalctl, cvol.service wants basic.service, except that it doesn't, at least not obviously. Is there a command which would demonstrate where this link is derived from? And in general, is there a command, which would find the cycles and show where each link in the cycle originates? | You can visualise the cycle with the commands systemd-analyze verify , systemd-analyze dot and the GraphViz dot tool:
systemd-analyze verify default.target |&
perl -lne 'print $1 if m{Found.*?on\s+([^/]+)}' |
xargs --no-run-if-empty systemd-analyze dot |
dot -Tsvg >cycle.svg
You should see something like this: Here you can see the cycle: c.service->b.service->a.service->c.service Color legend:
black = Requires
dark blue = Requisite
dark grey = Wants
red = Conflicts
green = After
Links: systemd-analyze(1) dot(1) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/193714",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14554/"
]
} |
193,724 | For example: mkdir ~/{1,2,3} Creates directories ~/1 , ~/2 , and ~/3 . It equates to:
mkdir ~/1
mkdir ~/2
mkdir ~/3
But, using the same syntax in the case of CMD < argument > : brew {install, update, doctor} ...amounts to nonsense as interpreted by the shell. It doesn’t mean:
brew install
brew update
brew doctor
It’s easy to make a quick script, but there must be a simpler way using expansion or substitution within bash. What am I missing? Running Bash 3.2.57(1)-release on OS X 10.10.2 | xargs seems to be what you want: echo install update doctor | xargs -n1 brew | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/193724",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67197/"
]
} |
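An equivalent without the extra xargs process, using a plain shell loop:
for sub in install update doctor; do brew "$sub"; done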
193,753 | I want to remove all files whose text includes the string foo . I can list them all with ack-grep foo , but I couldn't find a way to remove them the way find 's -exec rm {} option does. How can I delete all files that contain a particular string? | With GNU xargs : ack -l --print0 foo | xargs -r0 rm -- ack's --print0 and xargs' -0 cause ack and xargs to write and read using NUL as the delimiter, which guarantees proper filename handling. Without it, xargs will accept a far wider range of characters as a delimiter. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/193753",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44001/"
]
} |
193,757 | I'm trying to do this as an April Fool's prank: make a linux machine display a message in the shell every few seconds. My thought is to achieve this by starting an infinite loop that runs as a background job (in .bashrc). For example, this does what I want: while true ; do echo Evil Message; sleep 10; done In order to run it in background I tried:
cmd="while true ; do echo Evil Message; sleep 10;"
$cmd &
but this fails with the error: while: command not found Why do I get the error? Is there a way to make this script work? | while is not a command, it's a shell keyword. Keywords are recognised before variable expansion happens, so after the expansion, it's too late. You have several options: Don't use a variable at all. while true ; do echo Evil Message; sleep 10; done & Use eval to run the shell over the expanded value of the variable eval "$cmd" & Invoke a shell to run the loop bash -c "$cmd" & Use a function (that's what is typically used to store code):
cmd() { while true ; do echo Evil Message; sleep 10; done; }
cmd & | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/193757",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54415/"
]
} |
193,776 | I need to get all the hard disks connected to my PC like, /dev/sda /dev/sdb /dev/sdc Later I have to use them in a script and monitor them using iostat . I already found the fdisk command, but it seems to be hard for me to extract all the devices from the output of fdisk using grep . Is there any simple way ? | Easier than fdisk for your purpose is lsblk:
$ lsblk --nodeps
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223.6G 0 disk
sdb 8:16 0 298.1G 0 disk
sr0 11:0 1 12M 0 rom
or if you just want the drives:
$ lsblk --nodeps -n -o name
sda
sdb
sr0 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/193776",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108802/"
]
} |
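Feeding that into iostat as the question intends — a sketch that keeps whole disks only (the "disk" type filters out partitions and ROM drives):
iostat -d $(lsblk -dn -o name,type | awk '$2 == "disk" {print $1}')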
193,815 | While researching another problem, I came across a command, locate something | xargs -I {} bash -c "if [ -d "{}" ]; then echo {}; fi" that I wanted to learn more about. So I ran man xargs and got the following output:
XARGS(1) General Commands Manual XARGS(1)
NAME xargs - build and execute command lines from standard input
SYNOPSIS xargs [-0prtx] [-E eof-str] [-e[eof-str]] [--eof[=eof-str]] [--null] [-d delimiter] [--delimiter delimiter] [-I replace-str] [-i[replace-str]] [--replace[=replace-str]] [-l[max-lines]] [-L max-lines] [--max-lines[=max-lines]] [-n max-args] [--max-args=max-args] [-s max-chars] [--max-chars=max-chars] [-P max-procs] [--max-procs=max-procs] [--interactive] [--verbose] [--exit] [--no-run-if-empty] [--arg-file=file] [--show-limits] [--version] [--help] [command [initial-arguments]]
DESCRIPTION This manual page documents the GNU version of xargs...
I am trying to get better at using documentation to learn about Linux programs, but that "Synopsis" section is intimidating to new users. It literally looks like gibberish compared to man locate or man free . So far, I understand that square brackets mean optional and nested brackets mean options in optional. But how am I supposed to deduce a valid command with that? I am not asking for help with xargs here. I am looking for help interpreting a man page to understand complicated commands. I want to stop making Google-indexed web blogs and personal help from others my first approach to learning Linux commands. | Well, this is my very personal way to read manpages: The manpager When you open a manpage using the man command, the output will be displayed/rendered by the less or more commands, or any other command that will be set as your pager (manpager). If you are using Linux you are probably served with your man infrastructure already configured to use /usr/bin/less -is (unless you installed some minimal distro), as man(1) explains in its Options section:
-P pager
Specify which pager to use. This option overrides the MANPAGER environment variable, which in turn overrides the PAGER variable. By default, man uses /usr/bin/less -is.
On FreeBSD and OpenBSD it is just a matter of editing the MANPAGER environment variable since they will mostly use more , and some features like search and text highlight could be missing. There is a good answer to the question of what differences more , less and most have here (never used most ). The ability to scroll backwards and scroll forward by page with Space or both ways by line with ↓ or ↑ (also, using vi bindings j and k ) is essential while browsing manpages. Press h while using less to see the summary of commands available. And that's why I suggest you use less as your man pager. less has some essential features that will be used during this answer. How is a command formatted? Utility Conventions : The Open Group Base Specifications Issue 7 - IEEE Std 1003.1, 2013 Edition. You should visit that link before trying to understand a manpage. This online reference describes the argument syntax of the standard utilities and introduces terminology used throughout POSIX.1-2017 for describing the arguments processed by the utilities. This will also indirectly get you updated about the real meaning of words like parameters, arguments, argument option... The head of any manpage will look less cryptic to you after understanding the notation of the utility conventions: utility_name[-a][-b][-c option_argument] [-d|-e][-f[option_argument]][operand...] Have in mind what you want to do. 
When doing your research about xargs you did it for a purpose, right? You had a specific need that was reading standard output and executing commands based on that output. But what if I don't know which command I want? Use man -k or apropos (they are equivalent). If I don't know how to find a file: man -k file | grep search . Read the descriptions and find one that will better fit your needs. Example:
apropos -r '^report'
bashbug (1) - report a bug in bash
df (1) - report file system disk space usage
e2freefrag (8) - report free space fragmentation information
filefrag (8) - report on file fragmentation
iwgetid (8) - Report ESSID, NWID or AP/Cell Address of wireless network
kbd_mode (1) - report or set the keyboard mode
lastlog (8) - reports the most recent login of all users or of a given user
pmap (1) - report memory map of a process
ps (1) - report a snapshot of the current processes.
pwdx (1) - report current working directory of a process
uniq (1) - report or omit repeated lines
vmstat (8) - Report virtual memory statistics
Apropos works with regular expressions by default ( man apropos , read the description and find out what -r does), and in this example I'm looking for every manpage where the description starts with "report". To look for information related to reading standard input/output processing and reaching xargs as a possible option:
man -k command | grep input
xargs (1) - build and execute command lines from standard input
Always read the DESCRIPTION before starting Take time and read the description. By just reading the description of the xargs command we will learn that: xargs reads from STDIN and executes the command needed. This also means that you will need to have some knowledge of how standard input works, and how to manipulate it through pipes to chain commands. The default behavior is to act like /bin/echo . This gives you a little tip that if you need to chain more than one xargs , you don't need to use echo to print. We have also learned that unix filenames can contain blanks and newlines, that this could be a problem, and that the argument -0 is a way to prevent things from exploding by using null character separators. The description warns you that the command being used as input needs to support this feature too, and that GNU find supports it. Great. We use a lot of find with xargs . xargs will stop if exit status 255 is reached. Some descriptions are very short and that is generally because the software works in a very simple way. Don't even think of skipping this part of the manpage ;) Other things to pay attention... You know that you can search for files using find . There is a ton of options and if you only look at the SYNOPSIS , you will get overwhelmed by those. It's just the tip of the iceberg. Excluding NAME , SYNOPSIS , and DESCRIPTION , you will have the following sections:
AUTHORS : the people who created or assisted in the creation of the command.
BUGS : lists any known defects. Could be only implementation limitations.
ENVIRONMENT : Aspects of your shell that could be affected by the command, or variables that will be used.
EXAMPLES or NOTES : Self explanatory.
REPORTING BUGS : Who you will have to contact if you find bugs in this tool or in its documentation.
COPYRIGHT : The people who created it and disclaimers about the software, all related to the license of the software itself.
SEE ALSO : Other commands, tools or working aspects that are related to this command, and could not fit in any of the other sections. 
You will most probably find interesting info about the aspects you want of a tool in the examples/notes section. Example: In the following steps I'll take find as an example, since its concepts are simpler than xargs to explain (one command finds files; the other deals with stdin and pipelined execution of another command's output). Let's just pretend that we know nothing (or very little) about this command. I have a specific problem that is: I have to look for every file with the .jpg extension, and with 500KiB (KiB = 1024 bytes, commonly called a kibibyte) or more in size, inside an ftp server folder. First, open the manual: man find . The SYNOPSIS is slim. Let's search for things inside the manual: Type / plus the word you want ( size ). It will turn up a lot of -size entries that deal with specific sizes. Got stuck. Don't know how to search with "more than" or "less than" a given size, and the man does not show that to me. Let's give it a try, and search for the next entry found by hitting n . OK. Found something interesting: find \( -size +100M -fprintf /root/big.txt %-10s %p\n \) . Maybe this example is showing us that with -size +100M it will find files with 100MB or more. How could I confirm? Going to the head of the manpage and searching for other words. Again, let's try the word greater . Pressing g will lead us to the head of the manpage. / greater , and the first entry is: Numeric arguments can be specified as +n for **greater** than n, -n for less than n, n for exactly n. Sounds great. It seems that this block of the manual confirmed what we suspected. However, this will not only apply to file sizes. It will apply to any n that can be found on this manpage (as the phrase said: "Numeric arguments can be specified as"). Good. Let us find a way to filter by name: g / insensitive . Why? Insensitive? Wtf? We have a hypothetical ftp server, where "that other OS" people could give a file name with extensions like .jpg , .JPG , .JpG . This will lead us to: -ilname pattern Like -lname, but the match is case insensitive. If the -L option or the -follow option is in effect, this test returns false unless the symbolic link is broken. However, after you search for lname you will see that this will only search for symbolic links. We want real files. The next entry: -iname pattern Like -name, but the match is case insensitive. For example, the patterns `fo*' and `F??' match the file names `Foo', `FOO', `foo', `fOo', etc. In these patterns, unlike filename expansion by the shell, an initial '.' can be matched by `*'. That is, find -name *bar will match the file `.foobar'. Please note that you should quote patterns as a matter of course, otherwise the shell will expand any wildcard characters in them. Great. I don't even need to read about -name to see that -iname is the case insensitive version of this argument. Let's assemble the command: Command: find /ftp/dir/ -size +500k -iname "*.jpg" What is implicit here: The knowledge that the wildcard ? represents "any character at a single position" and * represents "zero or more of any character". The -name parameter will give you a summary of this knowledge. Tips that apply to all commands Some options, mnemonics and "syntax style" travel through all commands, saving you some time by not having to open the manpage at all. Those are learned by practice and the most common are:
Generally, -v means verbose. -vvv is a variation "very very verbose" on some software.
Following the POSIX standard, generally one-dash arguments can be stacked. 
Example: tar -xzvf , cp -Rv .
Generally -R and/or -r means recursive.
Almost all commands have a brief help with the --help option.
--version shows the version of a software.
-p , on copy or move utilities means "preserve permissions".
-y means YES, or "proceed without confirmation" in most cases.
Note that the above are not always true though. For example, the -r switch can mean very different things for different software. It is always a good idea to check and make sure when a command could be dangerous, but these are common defaults. Default values of commands. At the pager chunk of this answer, we saw that less -is is the pager of man . The default behavior of commands is not always shown in a separate section of the manpage, or in the topmost section. You will have to read the options to find out defaults, or if you are lucky, typing / pager will lead you to that info. This also requires you to know the concept of the pager (software that scrolls the manpage), and this is a thing you will only acquire after reading lots of manpages. Why is that important? This will open up your perception if you find differences in scroll and color behavior while reading man(1) on Linux ( less -is pager) or FreeBSD man(1) for example. And what about the SYNOPSIS syntax? After getting all the information needed to execute the command, you can combine options, option-arguments and operands inline to get your job done. Overview of concepts: Options are the switches that dictate a command's behavior. " Do this ", " don't do this " or " act this way ". Often called switches. Option-arguments are used in most cases when an option isn't binary (on/off), like -t on mount, that specifies the type of a filesystem ( -t iso9660 , -t ext2 ). " Do this with closed eyes " or " feed the animals, but only the lions ". Also called arguments. Operands are things you want that command to act upon. If you use cat file.txt , the operand is a file inside your current directory, and its contents will be shown on STDOUT . ls is a command where an operand is optional. The three dots after the operand implicitly tell you that cat can act on multiple operands (files) at the same time. You may notice that some commands define what type of operand they will use. Example: cat [OPTION] [FILE]... Related synopsis stuff: Understand synopsis in manpage When will this method not work? Manpages that have no examples Manpages where options have a short explanation When you use generic keywords like and , to , for inside the manpages Manpages that are not installed. It seems to be obvious but, if you don't have lftp (and its manpages) installed, you can't know that it is a suitable option as a more sophisticated ftp client by running man -k ftp In some cases the examples will be pretty simple, and you will have to run your command a few times to test, or in a worst case scenario, Google it. Other: Programming languages and their modules: If you are programming or just creating scripts, keep in mind that some languages have their own manpage systems, like perl ( perldocs ), python ( pydocs ), etc, holding specific information about methods/functions, variables, behavior, and other important information about the module you are trying to use and learn. This was useful to me when I was creating a script to download unread IMAP emails using the perl Mail::IMAPClient module. You will have to figure out those specific manpages by using man -k or searching online. 
Examples:
[root@host ~]# man -k doc | grep perl
perldoc (1) - Look up Perl documentation in Pod format
[root@host ~]# perldoc Mail::IMAPClient
IMAPCLIENT(1) User Contributed Perl Documentation IMAPCLIENT(1)
NAME Mail::IMAPClient - An IMAP Client API
SYNOPSIS use Mail::IMAPClient; my $imap = Mail::IMAPClient->new( Server => ’localhost’, User => ’username’, Password => ’password’, Ssl => 1, Uid => 1, );
...tons of other stuff here, with sections like a regular manpage...
With python:
[root@host ~]# pydoc sys
Help on built-in module sys:
NAME sys
FILE (built-in)
MODULE DOCS http://www.python.org/doc/current/lib/module-sys.html
DESCRIPTION This module provides access to some objects used or maintained by the interpreter and to functions that interact strongly with the interpreter.
...again, another full-featured manpage with interesting info...
Or, the help() function inside the python shell if you want to read more details of some object:
nwildner@host:~$ python3.6
Python 3.6.7 (default, Oct 21 2018, 08:08:16)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> help(round)
Help on built-in function round in module builtins:
round(...) round(number[, ndigits]) -> number Round a number to a given precision in decimal digits (default 0 digits). This returns an int when called with one argument, otherwise the same type as the number. ndigits may be negative.
Bonus: The wtf command can help you with acronyms, and it works like whatis if no acronym in its database is found but what you are searching for is part of the man database. On Debian this command is part of the bsdgames package. Examples:
nwildner@host:~$ wtf rtfm
RTFM: read the fine/fucking manual
nwildner@host:~$ wtf afaik
AFAIK: as far as I know
nwildner@host:~$ wtf afak
Gee... I don't know what afak means...
nwildner@host:~$ wtf tcp
tcp: tcp (7) - TCP protocol.
nwildner@host:~$ wtf systemd
systemd: systemd (1) - systemd system and service manager | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/193815",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99989/"
]
} |
193,827 | What is DISPLAY=:0 and what does it mean? It isn't a command, is it? ( gnome-panel is a command.) DISPLAY=:0 gnome-panel | DISPLAY=:0 gnome-panel is a shell command that runs the external command gnome-panel with the environment variable DISPLAY set to :0 . The shell syntax VARIABLE=VALUE COMMAND sets the environment variable VARIABLE for the duration of the specified command only. It is roughly equivalent to (export VARIABLE=VALUE; exec COMMAND) . The environment variable DISPLAY tells GUI programs how to communicate with the GUI. A Unix system can run multiple X servers , i.e. multiple displays. These displays can be physical displays (one or more monitors), or remote displays (forwarded over the network, e.g. over SSH), or virtual displays such as Xvfb , etc. The basic syntax to specify displays is HOST:NUMBER ; if you omit the HOST part, the display is a local one. Displays are numbered from 0, so :0 is the first local display that was started. On typical setups, this is what is displayed on the computer's monitor(s). Like all environment variables, DISPLAY is inherited from parent process to child process. For example, when you log into a GUI session, the login manager or session starter sets DISPLAY appropriately, and the variable is inherited by all the programs in the session. When you open an SSH connection with X forwarding, SSH sets the DISPLAY environment variable to the forwarded connection, so that the programs that you run on the remote machine are displayed on the local machine. If there is no forwarded X connection (either because SSH is configured not to do it, or because there is no local X server), SSH doesn't set DISPLAY . Setting DISPLAY explicitly causes the program to be displayed in a place where it normally wouldn't be. For example, running DISPLAY=:0 gnome-panel in an SSH connection starts a Gnome panel on the remote machine's local display (assuming that there is one and that the user is authorized to access it). Explicitly setting DISPLAY=:0 is usually a way to access a machine's local display from outside the local session, such as over remote access or from a cron job. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/193827",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
193,832 | I have a source RPM from which I would like to extract the source as closely as possible to "apt-get source PACKAGE". What are the available methods to do this? | The tool to do this on RH-esque systems is rpm2cpio . If you can find that tool for Debian systems, or build it for Debian systems, that's pretty much all you need. The command is: $ rpm2cpio <RPMfile>.rpm | cpio -idmv | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/193832",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14377/"
]
} |
193,863 | I have to take a list (loads) of IP addresses in this format:
134.27.128.0
111.245.48.0
109.21.244.0
and turn them into this format with a pipe in-between (IPs made up): 134.27.128.0 | 111.245.48.0 | 109.21.244.0 | 103.22.200.0/22 I think it is a find and replace command like sed but I can't get it to work. | Using sed, based on Famous Sed One-Liners Explained, Part I: 39. Append a line to the next if it ends with a backslash "\" (except here we ignore the part about the backslash, and replace the \n newlines with the required | separator):
sed -e :a -e '$!N; s/\n/ | /; ta' mydoc > mydoc2
should produce in mydoc2
134.27.128.0 | 111.245.48.0 | 109.21.244.0 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/193863",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108863/"
]
} |
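A shorter route with paste — it joins the lines with a bare pipe, and the sed step puts the surrounding spaces back:
paste -sd '|' mydoc | sed 's/|/ | /g'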
193,982 | I want to have the FQDN as bash prefix instead of just using the hostname. So I can change root@web: ~$ to root@web.example.com: ~$ I already know that that is possible by using: PS1="\[\u@$(hostname -f): \w\]\$ " But that is not persistent - it is always the default hostname when I re-login. So is there a way to make this persistent? | Thanks to @dawud's and @EsaJokinen's comments I found a solution. Replacing PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ ' with PS1="\[\u@$(hostname -f): \w\]\$ " in /etc/bash.bashrc does the job on Debian 7. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/193982",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109125/"
]
} |
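bash also has a built-in prompt escape for this, which avoids hard-coding the output of hostname -f : \H expands to the full hostname while \h stops at the first dot (whether \H includes the domain depends on how the system hostname is configured). A per-user sketch for ~/.bashrc:
PS1='\u@\H: \w\$ '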
194,028 | I am reading a csv file in a bash script as follows:
resource='/data/bscs/'
while IFS='|' read seqid fname fpath
do
 echo "FILENO: $seqid"
 echo "FILENAME: $fname"
 echo "FILE_PATH: $fpath"
done < "report.csv"
My csv file has the following values: 3|sample1.txt|$resource/operation/ I want the $resource to expand inside $fpath. Instead, I am getting this output:
FILENO: 3
FILENAME: sample1.txt
SOURCE PATH: $resource/operation
I have tried the following: "${fpath}" $(echo $fpath) How can I achieve this? | You can use eval , but the next maintainer will curse your name. Have you tried just appending the variable when using the parsed data, or expanding the variable when creating the data? If the file is created that way, can't you just use some [redacted] techniques to convince the originator to change their wicked ways? If change is literally not possible, then you must have control over which variables are possible. Otherwise your script is vulnerable to all sorts of injection attacks, such as inputs like 3|sample1.txt|$(rm --arr --eff /)/operation/ . Since you obviously have that under control, you can do some literal replacements of variables with their values on a case by case basis:
IFS='/' read -a pathnames <<< "$fpath"
for pathname in "${pathnames[@]}"
do
 if [ "${pathname::1}" = '$' ]
 then
  variable_name="${pathname:1}"
  printf '%s' "${!variable_name}"
 else
  printf '%s' "$pathname"
 fi
done
With some additional boilerplate to add slashes between pathnames. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/194028",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/101479/"
]
} |
194,047 | If I do: IFS="/" read -ra PARTS And type in a path manually, it creates the array "PARTS" as hoped, however: IFS="/" read -ra PARTS <<< $(pwd) creates an array with a single element, with the slashes converted to spaces. How can I split the current working directory into an array? | It works if you quote the command.
IFS="/" read -ra PARTS <<< "$(pwd)"
for i in "${PARTS[@]}"
do
 printf '%s\n' "$i"
done
home
user1 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/194047",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108987/"
]
} |
194,050 | I have a text file with the following data and each row ends with |END| .
T|somthing|something|END|
T|something2|something2|END|
I am trying to replace |END| with a \n new line with sed. sed 's/\|END\|/\n/g' test.txt But it's producing wrong output like below: T | s o m e ... But what I want is this:
T|somthing|something
T|something2|something2
I also tried with tr . It didn't work either. | Use this: sed 's/|END|/\n/g' test.txt What you attempted doesn't work because sed uses basic regular expressions , and your sed implementation has a \| operator meaning “or” (a common extension to BRE), so what you wrote replaces (empty string or END or empty string) by a newline. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/194050",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72369/"
]
} |
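The reason tr could not do this: tr maps single characters, so it cannot replace a multi-character string like |END|. If each row sits on its own line as shown, an awk sketch that simply strips the trailing marker gives the desired output directly:
awk '{sub(/\|END\|$/, "")} 1' test.txt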
194,061 | # ldd /usr/bin/ffmpeg
 linux-vdso.so.1 => (0x00007ffffc1fe000)
 libavfilter.so.0 => not found
 libpostproc.so.51 => not found
 libswscale.so.0 => not found
 libavdevice.so.52 => not found
 libavformat.so.52 => not found
 libavcodec.so.52 => not found
 libavutil.so.49 => not found
 libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fdd18259000)
 libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fdd1803a000)
 libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fdd17c75000)
 /lib64/ld-linux-x86-64.so.2 (0x00007fdd18583000)
I am trying to grep only the names to the left of the "=>" symbol. It works with echo easily:
echo linux-vdso.so.1 | grep -oP "^[a-z0-9.]*"
linux-vdso.so.1
But when I perform the same RegEx on the output of ldd it does not display anything: ldd /usr/bin/ffmpeg | grep -oP "^[a-z0-9.]*" So I thought, maybe I have to include some whitespace: ldd /usr/bin/ffmpeg | grep -oP "^([a-z0-9.]|\w)*" But this did not work either, and so I do not know how to go further... | The best solution for this is to use awk :
$ ldd /usr/bin/ppdhtml | awk '/ => / { print $1 }' | head -n1
libcupsppdc.so.1
To do this using grep , you will need to use the lookahead and lookbehind features of PCRE:
$ ldd /usr/bin/ppdhtml | grep -Po '(?<=\t).+(?= => )' | head -n1
libcupsppdc.so.1
The lookahead and lookbehind features affect the match, but are not included in the match. Also note that this would not work if ldd used a variable number of spaces instead of tabs at the start of the line. Lookbehinds can not have a variable length. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/194061",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102788/"
]
} |
194,068 | I have a script which will be running in each server and copies certain files into it. Script knows where I am running and what files I need to copy. Script will copy files from local datacenter local_dc but if it is down or not responding, then it will copy same files from remote datacenter remote_dc_1 and if that is also down, then it will copy same files from another remote datacenter remote_dc_2 as shown below -
do_Copy() {
 el=$1
 PRIMSEC=$2
 scp david@"$local_dc":"$dir3"/new_weekly_2014_"$el"_200003_5.data "$PRIMSEC"/. || scp david@"$remote_dc_1":"$dir3"/new_weekly_2014_"$el"_200003_5.data "$PRIMSEC"/. || scp david@"$remote_dc_2":"$dir3"/new_weekly_2014_"$el"_200003_5.data "$PRIMSEC"/.
}
export -f do_Copy
# copying 5 files in parallel simultaneously in primary and secondary folder
parallel --retries 10 -j 5 do_Copy {} $PRIMARY ::: $primary_partition &
parallel --retries 10 -j 5 do_Copy {} $SECONDARY ::: $secondary_partition &
wait
echo "All files copied successfully."
Now my question is : Let's say if local_dc machine is down, then will it definitely copy from remote_dc_1 machine for sure? And if remote_dc_1 machine is also down, then will it also copy from remote_dc_2 ? Now if all three machines are down then what will happen? In this case, I want to print the message saying, all three machines are down so exiting out of the shell script. I cannot test this properly without making the machine down. | In a chain like a || b || c , each command after a || runs only if the command before it exited with a non-zero status. So yes: the scp from remote_dc_1 is attempted only when the copy from local_dc fails, and remote_dc_2 is tried only when both earlier copies fail. If all three fail, the exit status of the whole chain is the (non-zero) status of the last scp , so handling the all-down case is just one more || at the end of the chain inside do_Copy : ... || { echo "All three datacenters are down, exiting." >&2; exit 1; } Two caveats: scp returns non-zero for any failure (unreachable host, authentication problem, missing source file), so "down" here really means "this copy failed for whatever reason"; and against a host that is unreachable rather than actively refusing connections, scp can hang for a long time unless you shorten the timeout, e.g. scp -o ConnectTimeout=10 ... . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/194068",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22434/"
]
} |
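To exercise the fallback without taking real machines down, you can point the variables at unroutable TEST-NET addresses (192.0.2.0/24 is reserved for documentation) — a hypothetical sketch; add -o ConnectTimeout=5 to each scp so unreachable hosts fail fast instead of hanging:
local_dc=192.0.2.1; remote_dc_1=192.0.2.2; remote_dc_2=192.0.2.3   # unroutable test addresses
do_Copy some_el "$PRIMARY" || echo "all three datacenters are down" >&2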
194,087 | I just ran rkhunter --check and all was good except this: Checking if SSH root access is allowed [ Warning] What does this warning mean? SSH root access is not allowed on this system. EDIT #1 Here is how my /etc/ssh/sshd_config is set: PermitRootLogin no and rkhunter.conf
root ~ # cat /etc/rkhunter.conf | grep ALLOW_SSH_ROOT_USER
#ALLOW_SSH_ROOT_USER=no
ALLOW_SSH_ROOT_USER=unset | The following values need to match: In rkhunter configuration:
cat /etc/rkhunter.conf | grep ALLOW_SSH_ROOT_USER
ALLOW_SSH_ROOT_USER=no
In sshd configuration:
cat /etc/ssh/sshd_config | grep PermitRootLogin
PermitRootLogin no
Once they do match, you should not be warned by rkhunter any longer. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/194087",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36440/"
]
} |
194,088 | My question originates from my problem in getting ffmpeg started.I have installed ffmpeg and it is displayed as installed: whereis ffmpeg ffmpeg: /usr/bin/ffmpeg /usr/bin/X11/ffmpeg /usr/share/ffmpeg /usr/share/man/man1/ffmpeg.1.gz Later, I figured out, that some programs depend on libraries that do not come with the installation itself, so I checked with ldd command what is missing: # ldd /usr/bin/ffmpeg linux-vdso.so.1 => (0x00007fff71fe9000) libavfilter.so.0 => not found libpostproc.so.51 => not found libswscale.so.0 => not found libavdevice.so.52 => not found libavformat.so.52 => not found libavcodec.so.52 => not found libavutil.so.49 => not found libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f5f20bdf000) libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f5f209c0000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f5f205fb000) /lib64/ld-linux-x86-64.so.2 (0x00007f5f20f09000) As it turns out my ffmpeg is cut off from 7 libraries too work. I first thought that each of those libraries have to be installed, but than I figured out, that some or all might be installed, but their location unknown to ffmpeg. I read that /etc/ld.so.conf and /etc/ld.so.cache contain the paths to the libraries, but I was confused, because, there was only one line in /etc/ld.so.conf cat /etc/ld.so.confinclude /etc/ld.so.conf.d/*.conf but a very long /etc/ld.so.cache . I am now at a point where I feel lost how to investigate further, It might be a helpful next step to figure out, how I can determine if a given library is indeed installed even if its location unknown to ffmpeg. ---------Output---of----apt-cache-policy-----request---------apt-cache policyPackage files: 100 /var/lib/dpkg/status release a=now 500 http://archive.canonical.com/ubuntu/ trusty/partner Translation-en 500 http://archive.canonical.com/ubuntu/ trusty/partner i386 Packages release v=14.04,o=Canonical,a=trusty,n=trusty,l=Partner archive,c=partner origin archive.canonical.com 500 http://archive.canonical.com/ubuntu/ trusty/partner amd64 Packages release v=14.04,o=Canonical,a=trusty,n=trusty,l=Partner archive,c=partner origin archive.canonical.com 500 http://security.ubuntu.com/ubuntu/ trusty-security/universe Translation-en 500 http://security.ubuntu.com/ubuntu/ trusty-security/restricted Translation-en 500 http://security.ubuntu.com/ubuntu/ trusty-security/multiverse Translation-en 500 http://security.ubuntu.com/ubuntu/ trusty-security/main Translation-en 500 http://security.ubuntu.com/ubuntu/ trusty-security/multiverse i386 Packages release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=multiverse origin security.ubuntu.com 500 http://security.ubuntu.com/ubuntu/ trusty-security/universe i386 Packages release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=universe origin security.ubuntu.com 500 http://security.ubuntu.com/ubuntu/ trusty-security/restricted i386 Packages release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=restricted origin security.ubuntu.com 500 http://security.ubuntu.com/ubuntu/ trusty-security/main i386 Packages release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=main origin security.ubuntu.com 500 http://security.ubuntu.com/ubuntu/ trusty-security/multiverse amd64 Packages release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=multiverse origin security.ubuntu.com 500 http://security.ubuntu.com/ubuntu/ trusty-security/universe amd64 Packages release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=universe origin security.ubuntu.com 500 
http://security.ubuntu.com/ubuntu/ trusty-security/restricted amd64 Packages release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=restricted origin security.ubuntu.com 500 http://security.ubuntu.com/ubuntu/ trusty-security/main amd64 Packages release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=main origin security.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/universe Translation-en 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/restricted Translation-en 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/multiverse Translation-en 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/main Translation-en 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/multiverse i386 Packages release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=multiverse origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/universe i386 Packages release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=universe origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/restricted i386 Packages release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=restricted origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/main i386 Packages release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=main origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/multiverse amd64 Packages release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=multiverse origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/universe amd64 Packages release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=universe origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/restricted amd64 Packages release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=restricted origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/main amd64 Packages release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=main origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty/universe Translation-en 500 http://archive.ubuntu.com/ubuntu/ trusty/restricted Translation-en 500 http://archive.ubuntu.com/ubuntu/ trusty/multiverse Translation-en 500 http://archive.ubuntu.com/ubuntu/ trusty/main Translation-en 500 http://archive.ubuntu.com/ubuntu/ trusty/multiverse i386 Packages release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=multiverse origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty/universe i386 Packages release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=universe origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty/restricted i386 Packages release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=restricted origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty/main i386 Packages release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=main origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty/multiverse amd64 Packages release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=multiverse origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty/universe amd64 Packages release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=universe origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty/restricted amd64 Packages release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=restricted origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=main 
origin archive.ubuntu.com 700 http://extra.linuxmint.com/ rebecca/main i386 Packages release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=main origin extra.linuxmint.com 700 http://extra.linuxmint.com/ rebecca/main amd64 Packages release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=main origin extra.linuxmint.com 700 http://packages.linuxmint.com/ rebecca/import i386 Packages release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=import origin packages.linuxmint.com 700 http://packages.linuxmint.com/ rebecca/upstream i386 Packages release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=upstream origin packages.linuxmint.com 700 http://packages.linuxmint.com/ rebecca/main i386 Packages release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=main origin packages.linuxmint.com 700 http://packages.linuxmint.com/ rebecca/import amd64 Packages release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=import origin packages.linuxmint.com 700 http://packages.linuxmint.com/ rebecca/upstream amd64 Packages release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=upstream origin packages.linuxmint.com 700 http://packages.linuxmint.com/ rebecca/main amd64 Packages release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=main origin packages.linuxmint.com
Pinned packages: | You can use: ldconfig -p | grep libavfilter
If there is no output, the library is not installed. I am not sure if this is 100% reliable. At least the ldconfig man page says for option -p: Print the lists of directories and candidate libraries stored in the current cache. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/194088",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102788/"
]
} |
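To check the whole list of missing names from ldd in one pass, a small loop where grep -q 's exit status drives the verdict:
for lib in libavfilter.so.0 libpostproc.so.51 libswscale.so.0 libavdevice.so.52; do
 ldconfig -p | grep -q "$lib" && echo "$lib: installed" || echo "$lib: MISSING"
done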
194,106 | I need two very specific cron statements: A cron entry that would run on the 2nd Monday of each month every 4 hours beginning at 02:00 and execute the file /opt/bin/cleanup.sh A cron entry that would run at 18:15 on the 3rd day of every month ending in an "r" that executes /opt/bin/verrrrrrrry.sh I've already tried various cron testers: cron checker , cron tester , and cron translator however none of them seem to be able to handle advanced cron expressions (or I do not know how to format the correct expression) as stated on the cron trigger tutorial and wikipedia How can I check my cron statements? I obviously cannot wait for the actual event to pass so that the daemon may execute them. Is there a good cron tester which supports advanced expressions? Or how to make the cron daemon parse the expression, or how to code these expressions? What I have so far for these statements is:
0 2 * * 0#2 /opt/bin/cleanup.sh
15 18 3 * * /opt/bin/verrrrrrrry.sh
But of course these are not correct. For #1, I do not know how to specify the '2nd Monday', nor 'every 4 hours', while still beginning at 02:00. For #2, I have no idea how to only specify months ending in an 'r' except by manually coding them in. Nor do I know how to specify the 3rd day. | To have something execute only on the second Monday of a month the day of week value needs to be 1 and the day of month value has to be 8-14, the hour has to be 2,6,10,14,18,22 and the minute 0. However as dhag correctly commented and provided a solution for, when you specify both the day of week and the day of month (i.e. not as * ), the program is executed when either matches. Therefore you have to test explicitly for either one, and the day of week is easier: 0 2,6,10,14,18,22 8-14 * * test $(date +\%u) -eq 1 && /opt/bin/cleanup.sh The final 1 determines the Monday and the range of day of month (8-14) picks it only when it is the second Monday. The third day of every month ending in an "r" at 18:15: 15 18 3 september,october,november,december * /opt/bin/verrrrrrrry.sh (at least on Vixie cron you can use the names of the months. If yours does not support that you can replace that with 9-12 ) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/194106",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46587/"
]
} |
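The day-of-week guard in the answer above can be sanity-checked without waiting for cron. A sketch assuming GNU date (its -d option is not portable); the dates are arbitrary Mondays from April 2015:

    #!/bin/sh
    for d in 2015-04-06 2015-04-13 2015-04-20; do
        dow=$(date -d "$d" +%u)    # 1 = Monday
        dom=$(date -d "$d" +%-d)   # day of month without leading zero
        if [ "$dow" -eq 1 ] && [ "$dom" -ge 8 ] && [ "$dom" -le 14 ]; then
            echo "$d: second Monday - cleanup.sh would run"
        else
            echo "$d: skipped"
        fi
    done

Only 2015-04-13 satisfies both conditions, which is exactly the second Monday of that month.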
194,134 | Concise question: Is it possible to somehow bind partition encryption to hardware so it is impossible (or very hard) to copy the system to another PC and run it there? Full story: We have a small embedded PC with Linux running inside a device we develop. As the device turns on, the embedded PC just runs, showing data to users until power-off. The software on this PC is our commercial competitive advantage, so we would like to prevent access to it as much as possible (see P.S.). So the idea is to encrypt the system flash, or at least a part of it. But then it is possible just to copy the entire flash. Hence the next idea: bind the encryption to the hardware. But how? P.S. I know that everything is a subject for reverse engineering, but that is no reason to make reverse engineering of your product flat and unchallenging. P.P.S. I'm not paranoid about copying. Unfortunately, we know competitors by name who will try to steal the technology =) | You can bind encrypted data to a specific device using a Trusted Platform Module (TPM). These have become quite common in x86 laptops over the last few years, and can be installed in many servers too. Using a TPM, you can generate an encryption key which only exists in the module, and encrypt data using it; if you generate a key which can't be backed up, then you can be sure that the data can only be decrypted with the corresponding TPM. Copying the raw storage device is useless then. On Linux you could use TrouSerS to manage this (an outline follows this record). On ARM platforms you could do something similar with a key stored using TrustZone, but I don't know the details. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/194134",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109031/"
]
} |
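As a rough outline of the sealing idea from the answer above - the commands come from the tpm-tools package that accompanies TrouSerS, exact flags may differ between versions, and the device path is hypothetical, so treat this as a sketch rather than a recipe:

    # Generate a disk key and seal it to this machine's TPM
    # (-z uses the well-known SRK secret):
    dd if=/dev/urandom of=/tmp/disk.key bs=32 count=1
    tpm_sealdata -z -i /tmp/disk.key -o /boot/disk.key.sealed
    shred -u /tmp/disk.key
    # At boot, unseal and hand the key to dm-crypt; this fails on any other TPM:
    tpm_unsealdata -z -i /boot/disk.key.sealed -o /tmp/disk.key
    cryptsetup luksOpen /dev/mmcblk0p2 root --key-file /tmp/disk.key
    shred -u /tmp/disk.key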
194,157 | I'm using bash on Linux. I am getting a success from the following if statement, but shouldn't this return a fail code? if [[ ■ = [⅕⅖⅗] ]] ; then echo yes ; fi The square does NOT equal any of the characters, so I don't see why I get a success code. It is important for me to keep the double brackets in my case. Is there any other way to do a range in this scenario, or are there any other suggestions? | That's a consequence of those characters having the same sorting order. You'll also notice that feeding the four characters ■ ⅕ ⅖ ⅗ to sort -u , one per line (sort -u << EOF ... EOF), returns only one line. Or that: expr ■ = ⅕ returns true (as required by POSIX). Most locales shipped with GNU systems have a number of characters (and even sequences of characters (collating sequences)) that have the same sorting order. In the case of those ■⅕⅖⅗ ones, it's because their order is not defined, and characters whose order is not defined end up having the same sorting order on GNU systems. There are characters that are explicitly defined as having the same sorting order, like Ș and Ş (though there's no apparent (to me anyway) real logic or consistency in how it is done). That is the source of quite surprising and bogus behaviours. I have raised the issue very recently on the Austin Group (the body behind POSIX and the Single UNIX Specification) mailing list and the discussion is still ongoing as of 2015-04-03. In this case, whether [y] should match x where x and y sort the same is unclear to me, but since a bracket expression is meant to match a collating element, that suggests the bash behaviour is expected. In any case, I suppose [⅕-⅕] or at least [⅕-⅖] should match ■ . You'll notice that different tools behave differently: ksh93 behaves like bash; GNU grep or sed don't. Some other shells have different behaviours, some, like yash, even buggier ones. To get a consistent behaviour, you need a locale where all characters sort differently. The C locale is the typical one. However, the character set in the C locale on most systems is ASCII. On GNU systems, you generally have access to a C.UTF-8 locale that can be used instead to work on UTF-8 characters. So: (export LC_ALL=C.UTF-8; [[ ■ = [⅕⅖⅗] ]]) or the standard equivalent: (export LC_ALL=C.UTF-8; case ■ in ([⅕⅖⅗]) true;; (*) false; esac) should return false. Another alternative would be to set only LC_COLLATE to C, which would work on GNU systems but not necessarily on others, where it could fail to specify the sorting order of multi-byte characters. One lesson of that is that equality is not as clear a notion as one would expect when it comes to comparing strings. Equality might mean, from strictest to least strict: (1) same number of bytes and all byte constituents have the same value; (2) same number of characters and all characters are the same (for instance, refer to the same codepoint in the current charset); (3) the two strings have the same sorting order as per the locale's collation algorithm (that is, neither a < b nor b > a is true). Now, for 2 or 3, that assumes both strings contain valid characters. In UTF-8 and some other encodings, some sequences of bytes don't form valid characters. 1 and 2 are not necessarily equivalent because of that, or because some characters may have more than one possible encoding. That's typically the case of stateful encodings like ISO-2022-JP where A can be expressed as 41 or 1b 28 42 41 ( 1b 28 42 being the sequence to switch to ASCII; you can insert as many of those as you want, and it won't make a difference), though I wouldn't expect those types of encoding still to be in use, and GNU tools at least generally don't work properly with them. Also beware that most non-GNU utilities can't deal with the 0 byte value (the NUL character in ASCII). Which of those definitions is used depends on the utility and utility implementation or version. POSIX is not 100% clear on that. In the C locale, all 3 are equivalent. Outside of that, YMMV. (A reproduction sketch follows this record.) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/194157",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109055/"
]
} |
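The behaviour described above is easy to reproduce; a sketch assuming bash on a GNU system that ships a C.UTF-8 locale:

    # Under a typical UTF-8 locale the bracket expression matches, because
    # the characters collate equally:
    [[ ■ = [⅕⅖⅗] ]] && echo "matches under $LANG"
    printf '%s\n' ■ ⅕ ⅖ ⅗ | sort -u | wc -l    # often prints 1
    # With every character sorting distinctly, the match goes away:
    ( export LC_ALL=C.UTF-8
      [[ ■ = [⅕⅖⅗] ]] && echo match || echo "no match" )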
194,171 | I was playing a game on Steam and all of a sudden I got a kernel panic. I manually shut down the computer and booted back into Linux Mint 17.1 (Cinnamon) 64-bit, and went to check through my log files in /var/log/ , but I couldn't find any references or any kind of messages relating to the kernel panic that happened. It's strange that it never dumped the core or even made any note of it in the log files. How can I make sure that a core is always dumped in case a kernel panic happens again? It makes no sense that nothing was logged when the kernel panic happened. Looking around on Google, people suggest reading through /var/log/dmesg , /var/log/syslog , /var/log/kern.log , /var/log/Xorg.log etc… but nothing. Not even in the .Xsession-errors file. Here are some photographs of the screen: I could always take a photo of the screen when and if it happens again, but I just want to make sure that I can get it to dump the core and create a log file on a kernel panic. | To be sure that your machine generates a "core" file when a kernel failure occurs, you should confirm the sysctl settings of your machine. In my opinion, the following are the minimal settings needed in /etc/sysctl.conf : kernel.core_pattern = /var/crash/core.%t.%p kernel.panic = 10 kernel.unknown_nmi_panic = 1 Execute sysctl -p after making changes to the /etc/sysctl.conf file. You should probably also mkdir /var/crash if it doesn't already exist. (Note that kernel.core_pattern governs userspace core dumps; capturing a full kernel crash dump additionally requires setting up kdump/kexec-tools.) You can test the above by generating a manual dump using the SysRq key (the key combination to dump core is Alt + SysRq + C ); a scripted equivalent follows this record. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/194171",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
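A scripted version of the setup above - the drop-in file name is arbitrary, and the last two lines deliberately panic the kernel, so only run them on a test machine:

    sudo mkdir -p /var/crash
    printf '%s\n' 'kernel.core_pattern = /var/crash/core.%t.%p' \
        'kernel.panic = 10' 'kernel.unknown_nmi_panic = 1' |
        sudo tee /etc/sysctl.d/60-crash.conf
    sudo sysctl -p /etc/sysctl.d/60-crash.conf
    # Shell equivalent of Alt+SysRq+C (crashes the machine immediately):
    echo 1 | sudo tee /proc/sys/kernel/sysrq
    echo c | sudo tee /proc/sysrq-trigger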
194,175 | For instance, I have a service named mysshd.service under the /usr/lib/systemd/system/ directory. Can I create a symbolic link such as: ln -s /usr/lib/systemd/system/mysshd.service /usr/lib/systemd/system/fool.service so that whatever operation I do with fool.service will be reflected on mysshd.service ( systemctl enable/disable/start/stop fool.service )? My purpose is to override the native sshd service with a symbolic link to my own sshd service. | As far as I know, systemd won't deal with this particularly well. As I understand it, you want to override the behavior of sshd.service, right? Luckily for you, systemd is designed for this kind of thing. Simply put your service definition in /etc/systemd/system/ssh.service , execute systemctl daemon-reload to reload unit files, and systemd will automatically use that configuration instead of the system ssh.service . Want to have systemctl enable mysshd.service work, too? No problem. In the [Install] section of your unit file, add a line that says Alias=mysshd.service . Then execute systemctl reenable ssh.service to have systemd fix the unit symlinks, and you're golden. Now, you haven't given details on what mysshd.service is supposed to do. If it's completely different from the normal ssh.service , great! Use the method above. However, if you just want to change one small thing, then you're using the wrong approach. systemd allows you to create "snippets" of unit files that will be applied on top of the normal unit files. This lets you add or override individual directives while allowing the rest of the unit file to receive updates from the package manager. To do this, simply create /etc/systemd/system/ssh.service.d/my-custom-config.conf (you can change my-custom-config.conf to be whatever you want, and you can also have multiple override files; a sketch follows this record). In that file, place whatever directives you want to change or add to the usual ssh.service . You can even add Alias= directives, so that systemctl start mysshd.service works! Just remember to execute systemctl daemon-reload after you're done (and, if you used Alias= , systemctl reenable ssh.service ). As an aside, never, ever change systemd unit files in /usr/lib/systemd . Ever! The Filesystem Hierarchy Standard requires that /usr be treated as read-only. In practice, this means that the package manager handles /usr (except for /usr/local ), and you don't touch what the package manager handles - especially because whatever you change will probably eventually be overwritten. Instead, put your stuff somewhere like /etc . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/194175",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33964/"
]
} |
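A sketch of the drop-in approach from the answer above, using a hypothetical locally built daemon; the empty ExecStart= line clears the packaged command before redefining it:

    # /etc/systemd/system/ssh.service.d/my-custom-config.conf
    [Service]
    ExecStart=
    ExecStart=/usr/local/sbin/my-sshd -D

    # then apply it:
    sudo systemctl daemon-reload
    sudo systemctl restart ssh.service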
194,180 | I have a file that contains the following text: // // test_file.h // test project // // Created by Test User // #ifndef test_file_constants_h #define test_file_constants_h // Generic includes #include <stdlib.h> #include <string.h> #include <stdbool.h> #define PF_MASTER_VERSION 3.0.0 #define PF_BUILD_VERSION 1 #endif I need to write a script that can increment the PF_BUILD_VERSION by one each time it runs. I've tried looking at sed and AWK and failed! | One straightforward approach is awk, which (unlike sed) can do arithmetic: match the line whose second field is PF_BUILD_VERSION, add one to the third field, and print every line back out: awk '$1 == "#define" && $2 == "PF_BUILD_VERSION" { $3 = $3 + 1 } { print }' test_file.h > test_file.h.tmp && mv test_file.h.tmp test_file.h Plain awk cannot edit a file in place, hence the temporary file; with GNU awk 4.1 or later you can instead run the same program via gawk -i inplace . A runnable sketch follows this record. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/194180",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109068/"
]
} |
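A runnable version of the awk bump above; the filename comes from the question, and the temp-file dance keeps it portable to awks without in-place editing:

    #!/bin/sh
    f=test_file.h
    awk '$1 == "#define" && $2 == "PF_BUILD_VERSION" { $3 = $3 + 1 } { print }' \
        "$f" > "$f.tmp" && mv "$f.tmp" "$f"
    grep PF_BUILD_VERSION "$f"    # e.g. "#define PF_BUILD_VERSION 2"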
194,184 | I'd like to know how to emulate the ICANON behavior of ^D: namely, trigger an immediate, even zero-byte, read in the program on the other end of a FIFO or PTY or socket or somesuch. In particular, I have a program whose specification is that it reads a script on stdin up until it gets a zero-byte read, then reads input to feed the script, and I'd like to automatically test this functionality. Simply writing to a FIFO doesn't result in the right thing happening, of course, since there's no zero-byte read. Help? Thanks! | A FIFO or socket cannot give you this: on a pipe, a zero-byte read means every writer has closed it, and that end-of-file condition is persistent, so your program could not go on to read the script's input afterwards. A pseudo-terminal can: with ICANON set on the slave, writing the EOF character (VEOF, ^D / 0x04 by default) to the master makes a pending read() on the slave return immediately with whatever is in the line buffer - zero bytes if the buffer is empty - without closing anything, so subsequent reads still succeed. So run your program with its stdin on a pty slave (expect, script, socat's pty address, or a few lines against the openpty()/posix_openpt() APIs all work), write the script text to the master, write a single 0x04 byte to trigger the zero-byte read, then write the input the script consumes. An untested socat sketch follows this record. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/194184",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109074/"
]
} |
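An untested sketch of the pty approach with socat - the address options are taken from its man page, myprog/script.txt/input.txt are placeholders, and you may need an stty icanon adjustment if your socat leaves the slave side raw:

    # Feed the script, then a literal VEOF byte (0x04), then the input:
    { cat script.txt; printf '\004'; cat input.txt; } |
        socat - 'EXEC:./myprog,pty,setsid,ctty'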
194,232 | How can I automatically source a particular shell script when I open a terminal window by right-clicking somewhere and choosing "open in terminal"? For example, every time I open a terminal I need to run the following command line: source myscript How can I make this happen automatically so that I don't have to type it? My script is written in tcsh: #!/bin/tcsh setenv DISPLAY 127.0.0.1:10.0 cd /ast/dcm/data I'm using CentOS 7. | I'm not entirely sure how this works with your file manager but, presumably, "open in terminal" is something you use on directories and it just opens a terminal window at that location. If so, it should be enough to source your script from the initialization file for interactive, non-login shells. If you are using bash, that is ~/.bashrc , and you need to edit that file and add this line to it: . ~/myscript That assumes that myscript is in your home directory. Now, each time a new shell is started, including when a new terminal is opened, that file will be sourced. Note, however, that the script you show is not a bash script. There is no setenv command in bash; that's a C-shell thing. The bash equivalent would be: #!/bin/bash export DISPLAY=127.0.0.1:10.0 cd /ast/dcm/data (A combined sketch follows this record.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/194232",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
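Putting the answer above together - the case guard is an addition beyond it that keeps non-interactive shells (scp, cron jobs) from running the snippet:

    # ~/myscript - bash version of the tcsh original:
    export DISPLAY=127.0.0.1:10.0
    cd /ast/dcm/data

    # appended to ~/.bashrc:
    case $- in
        *i*) . ~/myscript ;;
    esac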