161,682
I'm using Xubuntu 14.04 and typing on a keyboard without "Windows" and "context menu" keys (Unicomp Model M). Currently, I get the context menu (mouse right-click) with Shift + F10, but when touch-typing, I often miss the F10 key. I wanted to create a keyboard shortcut ( Alt + F1 ) for the context menu, so when I opened the keyboard settings in Xfce I expected to find the definition of that shortcut (like I did for Whisker Menu, which I remapped to Alt + ` ), but it wasn't there.
There are various alternatives to udev out there. Seemingly Gentoo can use something called mdev. Another option would be to attempt to use udev's predecessor devfsd. Finally, you can always create all the device files you need with mknod. Note that with the latter there is no need to create everything at boot time, since the nodes can be created on disk and not in a temporary file system as with the other options. Of course, you lose the flexibility of having dynamically created device files when new hardware is plugged in (eg a USB stick). I believe the standard approach in that era was to have every device file you could reasonably need already created under /dev (ie a lot of device files). Of course the difficulty in getting any of these approaches to work in a modern distro is probably quite high. The Gentoo wiki mentions difficulties in getting mdev to work with a desktop environment (let alone outside of Gentoo). The last devfsd release was in 2002; I have no idea if it will even work at all with modern kernels. Creating the nodes manually is probably the most viable approach, but even disabling udev could be a challenge, particularly in distros using systemd (udev is now part of systemd, which suggests a strong dependency). My advice is to stick with udev ;)
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/161682", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87563/" ] }
161,698
I have a fresh VPS I just bought, for playing only, so there is no risk of any kind involved. I noticed that due to my slow connection I have to wait seconds to finish writing commands and opening/closing files. So, I would like to know if I can connect to my VPS without ssh? PuTTY gives me an option to connect as raw, but when I choose it I cannot log in to my VPS.
There are various alternatives to udev out there. Seemingly Gentoo can use something called mdev. Another option would be to attempt to use udev's predecessor devfsd. Finally, you can always create all the device files you need with mknod. Note that with the latter there is no need to create everything at boot time, since the nodes can be created on disk and not in a temporary file system as with the other options. Of course, you lose the flexibility of having dynamically created device files when new hardware is plugged in (eg a USB stick). I believe the standard approach in that era was to have every device file you could reasonably need already created under /dev (ie a lot of device files). Of course the difficulty in getting any of these approaches to work in a modern distro is probably quite high. The Gentoo wiki mentions difficulties in getting mdev to work with a desktop environment (let alone outside of Gentoo). The last devfsd release was in 2002; I have no idea if it will even work at all with modern kernels. Creating the nodes manually is probably the most viable approach, but even disabling udev could be a challenge, particularly in distros using systemd (udev is now part of systemd, which suggests a strong dependency). My advice is to stick with udev ;)
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/161698", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77324/" ] }
161,715
Having a set of PDF files, let's say: in-01.pdf, in-02.pdf, in-03.pdf, ... I would like to combine all of them into a single one, forming an M x N matrix. The command montage allows for doing so (M and N should be integers): montage -mode concatenate -tile NxM in-*.pdf out.pdf The problem is the size of the resulting PDF is huge, while I would expect it to be (maybe just) a little bigger than the sum of all the input PDF sizes. I think montage is first converting the input PDFs to images and then creating the output PDF out of those images (so for example, the text in the original PDFs is not shown as text in the output PDF, but as an image with lower quality and bigger size). I guess there should be a way to do it (LaTeX, for example, allows inserting a PDF image in another PDF without the need to convert it to an image first). I am looking for a command line alternative using free software tools under GNU/Linux systems. NOTE: we can assume those PDF files all have the same exact dimensions (width and height). They are auto-generated PDF images normally consisting of a plot/graph (simple shapes like lines and rectangles) and a little text (title, labels...).
You could use the utility program pdfnup from the pdfjam suite. pdfnup in.pdf --nup 3x3 should output the file in-nup.pdf with the pages of in.pdf arranged in a series of pages with a 3x3 matrix from the original pdf. You should first merge all of your pdf files into a single one; you may also want to specify a paper size for the output file, see the pdfjam docs for the details.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/161715", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66295/" ] }
161,717
I have set Cache-Control in apache for 1 week for my JS Files but when i check in the browser Cache-Control shows no-cache. Where i am missing the configuration ? Below is my configuration in apache <ifModule mod_headers.c> <filesMatch "\.(html|htm|png|js|css)$"> Header set Cache-Control "max-age=604800, public" </filesMatch></ifModule> Request Header in Browser Request URL:http://test.com/Script.js?buildInfo=1.1.200 Request Method:GET Status Code:200 OK Request Headersview source Accept:*/* Accept-Encoding:gzip,deflate,sdch Accept-Language:en-US,en;q=0.8 **Cache-Control:no-cache** Connection:keep-alive Host:test.com Pragma:no-cache Referer:http://test.com/home.jsp User-Agent:Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.120 Safari/537.36 Query String Parametersview sourceview URL encoded buildInfo:1.1.200 Response Headersview source Cache-Control:max-age=2592000 Connection:keep-alive Content-Encoding:gzip Content-Type:text/javascript Date:Sun, 12 Oct 2014 16:17:46 GMT Expires:Tue, 11 Nov 2014 16:17:46 GMT Last-Modified:Tue, 07 Oct 2014 13:28:08 GMT Server:Apache Transfer-Encoding:chunked Vary:Accept-Encoding
You could use the utility program pdfnup from the pdfjam suite. pdfnup in.pdf --nup 3x3 should output the file in-nup.pdf with the pages of in.pdf arranged in a series of pages with a 3x3 matrix from the original pdf. You should first merge all of your pdf files into a single one; you may also want to specify a paper size for the output file, see the pdfjam docs for the details.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/161717", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64217/" ] }
161,719
This question arose from my previous question on emacs beta. In short, I want to bind C-; to an Emacs function in a terminal, but it seems that something captures this key before it reaches Emacs: Emacs thinks I pressed ;. The obvious suspect is the terminal emulator, but I've checked many of them (xterm, gnome-terminal, terminator, terminology) and none of them works. Most probably I can exclude the window manager, because in the GUI version of Emacs, the key C-; works just fine. I tried also two different shells: bash and zsh, but again without success. What else can I try?
Perhaps your confusion arises from not having used an actual terminal. Back when serious computers were the size of several upright refrigerators, a terminal communicated with a central computer over a serial cable using characters and characters only. The characters were part of some standardized character set, e.g. ASCII or EBCDIC, but typically ASCII. ASCII has 33 control characters and the terminal operator sent them by pressing a special key (such as DEL) or by holding down the CTRL key and pressing another key. The central computer only saw the resulting control character; it did not know what keys were pressed to produce the character. A terminal emulation program such as xterm mimics that behavior. The terminal emulator provides a way to send all 33 ASCII control characters and Emacs will receive those characters if they are sent. But Emacs is like the central computer in the above description --- it has no way of knowing what keys were actually pressed when you run it under a terminal emulator. So if you press CTRL and semicolon, unless the terminal emulation program has mapped those keypresses to some ASCII character, Emacs will not know that anything has been typed. Terminal emulators typically use the following mappings to generate control characters †:

keypress      ASCII
--------------------
ESCAPE        27
DELETE        127
BACKSPACE     8
CTRL+SPACE    0
CTRL+@        0
CTRL+A        1
CTRL+B        2
CTRL+C        3
etc...
CTRL+X        24
CTRL+Y        25
CTRL+Z        26
CTRL+[        27
CTRL+\        28
CTRL+]        29
CTRL+^        30
CTRL+_        31

Note that CTRL+; does not appear in that list. Terminals will usually just send the printable character assigned to key if CTRL+key isn't mapped to a control character. So what your terminal emulator is telling you by sending ; alone is that it doesn't know what to do when you press CTRL+;. All this applies only if you're using a terminal or a terminal emulation program. If you're running Emacs as a native application under some window system, then Emacs has full access to the keystroke events and not just characters. So Emacs can see that you pressed CTRL and semicolon together and allow you to assign an action to that keystroke pair. † Terminals often have function keys and arrow keys that also generate sequences of characters that include control characters. These sequences typically begin with ASCII code 27 (ESCAPE).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/161719", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81059/" ] }
161,727
I need to write a script that will add a line to a text file if Enter is pressed. But, if Ctrl + D is pressed, I need to exit that loop in bash.

touch texttest.txt
LINE="0"
while true; do
  read LINE
  if [ "$LINE" == "^D" ]; then
    break
  else
    echo "$LINE" >> texttest.txt
  fi
done

I currently have something like this but cannot figure out how to exit the while loop when Ctrl + D is pressed instead of Enter.
You're overthinking it. All you need is this: cat > texttest.txt Cat will read from STDIN if you've not told it different. Since it's reading from STDIN, it will react to the control character Ctrl + D without your having to specify it. And since Ctrl + D is the only thing that will finish the cat subprocess, you don't even need to wrap it in a loop.
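If you do want to keep the read loop from the question (for example to process each line as it arrives), a minimal sketch is below; it relies on the fact that read returns a non-zero status at end of input, which is exactly what Ctrl + D produces, so no comparison against "^D" is needed:

#!/bin/bash
# append each entered line; the loop ends when read hits EOF (Ctrl+D)
while IFS= read -r LINE; do
    printf '%s\n' "$LINE" >> texttest.txt
done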
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/161727", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87599/" ] }
161,742
I'm trying to install coreutils on NetBSD 6.1.5 using the pkgsrc system. This is on the default install of 6.1.5. The only change made has been to install zsh and set it as my default shell for root and any local users. As is the pkgsrc way, I change to the directory in the pkgsrc hierarchy containing the package I want to install. In this case it is /usr/pkgsrc/sysutils/coreutils. When I enter this directory as root I type make and then get an error:

configure: error: you should not run configure as root (set FORCE_UNSAFE_CONFIGURE=1 in environment to bypass this check)
See `config.log' for more details
*** Error code 1

This is not typical when using pkgsrc as root, and seems to be specific to GNU packages, as I have not experienced it with any other package in pkgsrc. When I run make as a normal user in the same directory I don't have permission to write to any directory under /usr/pkgsrc, and make fails due to a bunch of permission denied errors. For example: sh: Cannot create configure.override: permission denied. Copying the package directory to somewhere a local user has write permission and compiling would not seem to be in line with using pkgsrc. Does the user have to be part of a special group to use pkgsrc?
Try the command indicated in the error message: export FORCE_UNSAFE_CONFIGURE=1 && make This being said, it is true the "unsafe configure" requirement seems a bit strange. Double-check the log (config.log) and see if there is something more explicit in there.
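If you prefer not to leave the variable exported in your shell, you can (assuming a POSIX-style shell) set it for that single invocation only:

FORCE_UNSAFE_CONFIGURE=1 make

The setting then applies only to that one make run instead of to everything you build afterwards.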
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/161742", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/49008/" ] }
161,770
Is there some order of operations to rm? I performed rm on a large directory and am curious where I should look to see what might have been deleted. Does rm work on files first, then directories? Or is it based on some information in the inode table?

Specs:
rm from GNU coreutils 8.22
System: Arch Linux running on a BeagleBone Black
Filesystem operated on was an external Seagate HDD (ext4) using USB 2.0.

Backstory: I was performing some directory cleanup and performed cp -r A/ B/ C/ Dest/ Unwittingly, I followed that up with rm -r A/ B/ C/ Dest/ when I meant to simply perform rm -r A/ B/ C/ I caught this and hit Ctrl + C before too long had passed. Specifically, it was < 3 seconds as I was using the time command in conjunction with rm & cp. I went in and examined Dest/ expecting it to be non-existent, but lo and behold it was whole and appeared to not be affected. This is a bit surprising as A/ B/ C/ were quite small. Maybe 100–200 MB total. Dest/ however, is just shy of 1TB. Performing an ls on Dest/ showed that there were both files and directories at both ends of the alphabet (e.g. AFile.txt .... .... Zoo.txt). Did I get lucky and cancel the rm before it wrought havoc on my Dest/ directory? Is rm really that slow (thankfully!)? If not, how does rm go about recursively removing things such that I can guess what might have been lost? I'm not really expecting to recover what I might have lost, just curious what potentially was blown away.
rm -r works on each of its arguments in turn. If an argument is a directory, it lists the directory (with the opendir and readdir functions or some equivalent method), and operates on each entry in turn. If an entry is a directory, it explores that entry recursively. This is exactly the same method that other applications use to traverse directories recursively — find , ls -Rf , etc. The order of traversal is unpredictable. On most filesystems, the order is reproducible as long as no file is added, removed or renamed in the directory (the order could in theory be completely random and change every time, but I can't think of a filesystem where that happens). On a few filesystems, the order can in general be deduced from the file names or from the order in which the files were created or a combination of both, but you need to know the fine details of the filesystem, and it could vary depending on the driver version. The order of traversal isn't something you can rely on. Note that ls or echo * do sort files in lexicographic order of their names. find and ls -f do not sort. The one thing you can rely on is that the arguments are handled in order. So if C/ was still partially there, it would mean that Dest/ was untouched. If C/ is gone, you can get an idea of where files have been removed in Dest/ by checking the directory modification times and comparing them with the time C/ was deleted or the time the copy ended. The first file to be deleted could be a file directly in Dest/ or somewhere deep in the hierarchy depending on whether the first entry in Dest/ that rm happened to traverse was a directory or not. The speed of rm is mostly a matter of how many files there are to delete. It takes a very large file to have a noticeable impact on the deletion time. The bulk of the work is deleting each directory entry in turn. The file's data isn't erased, erasing a file's content only requires to mark the blocks that it was using as free, which is relatively fast.
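If C/ is indeed gone and you want to estimate how far rm got into Dest/, one hedged way to follow the advice above (GNU findutils assumed; the timestamp below is only a placeholder for "shortly before the rm") is to look for directories whose contents changed around that time, since deleting an entry updates the parent directory's modification time:

find Dest/ -type d -newermt '2014-10-12 14:30' -print

Directories listed by this were modified after that moment and are the first places to check for missing entries.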
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/161770", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46806/" ] }
161,794
I have a folder with chmod 000 permissions on it with a lot of different stuff in it; the way I get in is to start bash with sudo by running sudo bash. Why can't I use &&? I want to cd into the directory with one command like this: sudo bash && cd desktop When I run this, I am still in ~, which is the default directory. I have to run this instead:

sudo bash
cd desktop

Also, the target is not desktop itself, it's a subfolder of desktop, but it doesn't matter. It's the same thing anyway.
The part after && is executed in the current shell, it is not some argument handed over to the bash you run with sudo . You might be tempted to try sudo bash -c 'cd desktop' but that doesn't work because that bash exits after cd desktop . You can try: sudo sh -c 'cd desktop && exec bash' which "works" (i.e. places you in the directory desktop in a Bash shell with uid=0). I'd rather issue the two separate commands than that one liner.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/161794", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79979/" ] }
161,795
After installing Midnight Commander in CentOS 7, I looked for the configuration file in /home/$USER/.mc.ini but didn't find it. I want to change my default skin to another color scheme, but can't find the config file.
It's in the following file: ~/.config/mc/ini .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/161795", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64125/" ] }
161,821
How can I delete all lines in a file using vi? At the moment I do that using something like this to remove all lines in a file: echo > test.txt How can I delete all lines using vi? Note: Using dd is not a good option. There can be many lines.
In vi do :1,$d to delete all lines. The : introduces a command (and moves the cursor to the bottom). The 1,$ is an indication of which lines the following command ( d ) should work on. In this case the range from line one to the last line (indicated by $ , so you don't need to know the number of lines in the document). The final d stands for delete the indicated lines. There is a shorter form ( :%d ) but I find myself never using it. The :1,$d can be more easily "adapted" to e.g. :4,$-2d leaving only the first 3 and last 2 lines, deleting the rest.
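If the goal is to empty the file from a script rather than interactively, a small sketch of the same :1,$d idea driven non-interactively through ex (the line-editor mode of vi) would be:

printf '%s\n' '1,$d' 'wq' | ex -s test.txt

Here -s runs ex silently in batch mode; the two commands delete every line and write the file back.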
{ "score": 10, "source": [ "https://unix.stackexchange.com/questions/161821", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87669/" ] }
161,845
I'm working with a large number of files, which I keep in a directory. Every time I go into that directory and accidentally press Tab twice, it takes too long (maybe over a minute) to show the files that match the pattern, and I'm quite annoyed with this behavior. For example, my directory structure is:

my-project/
├── docs/
├── data/      <---- contains around 200k files
└── analyser/

Since I still love completion, is there any way to disable this feature only in the data/ directory? Like setting a timeout of 5 seconds, or a script that automatically switches off completion when inside a specific directory?
This isn't perfect, but then again bash completion is quite a tricky thing... The very simplest way is on a per-command basis, it's slightly more flexible than FIGNORE , you can do: complete -f -X "/myproject/data/*" vi This instructs autocomplete that completion for vi is for files, and to remove patterns matching the -X filter. The downside is that the pattern isn't normalised, so ../data and variations won't match. The next best thing might be a custom PROMPT_COMMAND function: # associative arrays of non-autocomplete directoriesdeclare -A noacdirs=([/myproject/data]=1 )function _myprompt { [[ -n "${noacdirs[$PWD]}" ]] && { echo autocomplete off bind 'set disable-completion on' } || { echo autocomplete on bind 'set disable-completion off' }} PROMPT_COMMAND=_myprompt This disables completion (completely) when you are in the directory, but it disables it for every path not just files in that directory. It would be more generally useful to selectively disable this for defined paths, but I believe the only way is to use a default completion function (bash-4.1 and later with complete -D ) and a lot of messing about. This should work for you, but it may have unintended side effects (i.e. changes to the expected completion in some cases): declare -A noacdirs=([/myproject/data]=1 )_xcomplete() { local cur=${COMP_WORDS[COMP_CWORD]} # the current token name=$(readlink -f "${cur:-./}") # poor man's path canonify dirname=$(dirname "$name/.") [[ -n "${noacdirs[$dirname]}" ]] && { COMPREPLY=( "" ) # dummy to prevent completion return } # let default kick in COMPREPLY=()}complete -o bashdefault -o default -F _xcomplete vi This works for completion of vi , other commands can be added as needed. It should stop completion for files in the named directories regardless of path or working directory. I believe the general approach with complete -D is to dynamically add completion functions for each command as it is encountered. One might also need to add complete -E (completion of command name when input buffer is empty). Update Here's a hybrid version of the PROMPT_COMMAND and completion function solutions, it's a little easier to understand and hack I think: declare -A noacdirs=([/myproject/data]=1 [/project2/bigdata]=1)_xcomplete() { local cmd=${COMP_WORDS[0]} local cur=${COMP_WORDS[COMP_CWORD]} # the current token [[ -z "$cur" && -n "$nocomplete" ]] && { printf "\n(restricted completion for $cmd in $nocomplete)\n" printf "$PS2 $COMP_LINE" COMPREPLY=( "" ) # dummy to prevent completion return } COMPREPLY=() # let default kick in}function _myprompt { nocomplete= # uncomment next line for hard-coded list of directories [[ -n "${noacdirs[$PWD]}" ]] && nocomplete=$PWD # uncomment next line for per-directory ".noautocomplete" # [[ -f ./.noautocomplete ]] && nocomplete=$PWD # uncomment next line for size-based guessing of large directories # [[ $(stat -c %s .) -gt 512*1024 ]] && nocomplete=$PWD} PROMPT_COMMAND=_mypromptcomplete -o bashdefault -o default -F _xcomplete vi cp scp diff This prompt function sets the nocomplete variable when you enter one of the configured directories. The modified completion behaviour only kicks in when that variable is non-blank and only when you try completing from an empty string, thus allowing completion of partial names (remove the -z "$cur" condition to prevent completion altogether). Comment out the two printf lines for silent operation. 
Other options include a per-directory .noautocomplete flag file that you can touch in a directory as needed; and guessing of directory size using GNU stat . You can use any or all of those three options. (The stat method is only a guess , the reported directory size grows with its contents, it's a "high water mark" that won't usually shrink when files are deleted without some administrative intervention. It's cheaper than determining the real contents of a potentially large directory. Precise behaviour and increment per file depends on underlying filesystem. I find it a reliable indicator on linux ext2/3/4 systems at least.) bash adds an extra space even when an empty completion is returned (this only occurs when completing at the end of a line). You can add -o nospace to the complete command to prevent this. One remaining niggle is that if you back up the cursor to the start of a token and hit tab, the default completion will kick in again. Consider it a feature ;-) (Or you could futz around with ${COMP_LINE:$COMP_POINT-1:1} if you like over-engineering, but I find bash itself fails to set the completion variables reliably when you back up and attempt completion in the middle of a command.)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/161845", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74929/" ] }
161,858
I am a heavy VMware Workstation 10.0.3 user, and as such I have 32GB RAM on my system. My only operating system is Arch Linux, using Unity for the desktop. Usually when I have two virtual machines running with about 3GB RAM assigned to each, really often and at random intervals the whole system becomes unresponsive for a few seconds. Running "top" in a terminal, the culprit seems to be the command khugepaged, which runs at 100% CPU while the system is unresponsive and then disappears. Is there any way to avoid this? I have googled about khugepaged, but I only seem to find ancient posts from 2011 or unanswered questions. These are my full system specs:

CPU: Intel i5 [email protected]
32GB Corsair Vengeance RAM @ 2400MHz
M/B: ASRock Z87 Pro4
I have a similar problem on Ubuntu. The workaround I use is:

echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo 0 > /sys/kernel/mm/transparent_hugepage/khugepaged/defrag

The source of the workaround is in a Fedora bug report “khugepaged eating 100%CPU”. The bug was never fixed. This is less drastic than disabling transparent_hugepage support entirely. The detailed explanation of what the command does can be found in the documentation of transparent hugepage support.
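These settings do not survive a reboot. A minimal sketch of one way to reapply them at boot (the script name and location are only examples; hook it into whatever your distribution runs at startup, e.g. a systemd unit or rc.local):

#!/bin/sh
# reapply the transparent hugepage defrag workaround at boot
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo 0 > /sys/kernel/mm/transparent_hugepage/khugepaged/defrag

The exact sysfs paths can differ between kernel versions, so check that they exist on your kernel first.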
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/161858", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87706/" ] }
161,859
I've used dd to clone disks like this: dd if=/dev/sdb of=/dev/sda bs=4096 conv=notrunc,noerror,sync And it's always worked fine. Any and all docs on 'dd' take pains to remind you that the target disk must be the same size or bigger than the source. Does that absolutely have to be true? Now, I quite understand that if I clone to a smaller disk I can't expect any partitions that are even partially 'out of bounds' on the target to be intact. However, knowing full well that I'd need to edit my partitions on the target later, deleting the 'out of bounds' ones, could I still use 'dd' to make a brute force copy of the source up to the limits of the physical size of the target? Or would 'dd' reduce the target to a smoking pile of wreckage when it reached the limit of its size ;-) BTW, researching this, I've seen recommended values for bs= of everything from bs=1024 up to bs=32M , what really is best?
As others have mentioned here, using just dd won't work due to the copy of the GPT table placed at the end of the disk. I have managed to perform a migration to a smaller drive using the following method. First, boot into a liveCD distro of your choice. Resize the source drive partitions to fit within the smaller drive's constraints (using gparted for example). Then, assuming sda is the source disk, using sgdisk, first create a backup of the GPT table from the source drive to be on the safe side:

sgdisk -b=gpt.bak.bin /dev/sda

Assuming sdb is the target, replicate the table from the source drive to the target:

sgdisk -R=/dev/sdb /dev/sda

sgdisk will now complain that it tried placing the header copy out of the bounds of the target disk, but then will fall back and place the header correctly at the upper bound of the target disk. Verify that a correct clone of the partition table has been created on the target drive using the tool of your choice (gparted for example). Using dd, copy each partition from the source drive to the target:

dd if=/dev/sda1 of=/dev/sdb1 bs=1M
dd if=/dev/sda2 of=/dev/sdb2 bs=1M
dd if=/dev/sda3 of=/dev/sdb3 bs=1M
etc...

Obviously, if you mix up the names of the drives when replicating the GPT partition table without a backup or when dd'ing the content, you can kiss your content goodbye :)
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/161859", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56145/" ] }
161,866
I have deleted some files around /var/lib/dpkg/, namely:

/var/lib/dpkg/status
/var/lib/dpkg/available
/var/lib/dpkg/info/*

I understand Debian uses these files to keep some information about installed packages. Now when I do apt-get update, I get the following error:

Reading package lists... Error!
E: Could not open file /var/lib/dpkg/status - open (2: No such file or directory)
E: The package lists or status file could not be parsed or opened.

As I understand the FHS, files located in /var are not supposed to be system-critical. Rather, these should be temporary files, logs, caches, and similar. Is there therefore a way to recreate the deleted files?
If you look at the purpose of /var as given in the Filesystem Hierarchy Standard , it says: /var contains variable data files. This includes spool directories and files, administrative and logging data, and transient and temporary files. Note that "transient and temporary" files are just one of the things it contains. It also contains "spool directories and files" and "administrative and logging data". You deleted critical "administrative data". It goes on to explain why /var exists: /var is specified here in order to make it possible to mount /usr read-only. Everything that once went into /usr that is written to during system operation (as opposed to installation and software maintenance) must be in /var . That's the key thing about /var : the data in it changes, unlike /usr (which only changes when you add/remove/update software). Further sections explain the various subdirectories of /var ; for example, /var/lib (where the files you deleted used to live) holds "state information pertaining to an application or the system", defined as "data that programs modify while they run, and that pertains to one specific host." You really shouldn't delete files without knowing what the specific file is for. With the files you deleted, unless you have a backup of these files, I think the only thing left to do is take a backup of /home , /etc etc. and reinstall. Until you do so, you'll be unable to use dpkg (and APT, etc.). Other than that, the system should continue to function.
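Before resorting to a reinstall it may be worth checking whether the system itself still holds a backup of the status database; Debian-based systems normally keep copies (the exact file names can vary by release). A hedged sketch of what to look for and how a recovery attempt could go:

# backups dpkg and cron may have left behind
ls -l /var/lib/dpkg/status-old /var/backups/dpkg.status.*
# if one exists, restore the most recent copy
cp /var/backups/dpkg.status.0 /var/lib/dpkg/status
# the "available" file can be rebuilt from APT's package lists
dpkg --clear-avail && apt-cache dumpavail | dpkg --merge-avail

The per-package scripts and file lists under /var/lib/dpkg/info/ cannot be rebuilt this way; reinstalling the affected packages is still the usual fix for those.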
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/161866", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43007/" ] }
161,891
I know commands for setting specific time and/or date, but can't find ones which move the time relative to the current time. What are the commands to move the time forward/backward x seconds/minutes/hours? (And possibly also days/months/years?)
The command to set the system time is date . You need to be root to set the system time. date sets the time to the given time, not to a relative amount from the current time, because that latter behavior would be pretty pointless. You can build a command that modifies the current time by a relative amount by making a calculation on the output of date and feeding it back to date , e.g. (on non-embedded Linux) date $(date +%m%d%H%M%Y.%S -d '1 hour ago') Beware that if you're running a timekeeping system such as NTP, changing the clock like this will confuse it. Stop it first. Running date sets the system time, not the hardware clock. Under Linux, run hwclock --systohc to copy the system time to the hardware clock; this is done automatically on a clean shutdown. If you wanted to see the time in a different timezone, forget all this and set the desired timezone instead. Under Linux, run tzselect to set the system timezone. To run a program in a different timezone, set the TZ environment variable, e.g. export TZ=Asia/Tokyoemacs If you want to run a program and make it believe that the time is different from what it really is, run it under the program faketime . faketime '1 hour ago' date
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/161891", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87511/" ] }
161,905
I'm trying to add unzipped files to an existing already zipped folder say new folder.zip . Is it possible to use zip -r new folder.zip after adding some unzipped files to them? Will this command compress the folder? Is there any alternative to do this?
Use the update flag: -u

Example: zip -ur existing.zip myFolder

This command will compress and add myFolder (and its contents) to existing.zip.

Advanced Usage: The update flag actually compares the incoming files against the existing ones and will either add new files, or update existing ones. Therefore, if you want to add/update a specific subdirectory within the zip file, just update the source as desired, and then re-zip the entire source with the -u flag. Only the changed files will be zipped. If you don't have access to the source files, you can unzip the zip file, then update the desired files, and then re-zip with the -u flag. Again, only the changed files will be zipped.

Example:

Original Source Structure
ParentDir
├── file1.txt
├── file2.txt
├── ChildDir
│   ├── file3.txt
│   ├── Logs
│   │   ├── logs1.txt
│   │   ├── logs2.txt
│   │   ├── logs3.txt

Updated Source Structure
ParentDir
├── file1.txt
├── file2.txt
├── ChildDir
│   ├── file3.txt
│   ├── Logs
│   │   ├── logs1.txt
│   │   ├── logs2.txt
│   │   ├── logs3.txt
│   │   ├── logs4.txt   <-- NEW FILE

Usage
$ zip -ur existing.zip ParentDir
> updating: ParentDir/ChildDir/Logs (stored 0%)
> adding: ParentDir/ChildDir/Logs/logs4.txt (stored 96%)
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/161905", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87738/" ] }
161,922
Is there any Linux program which offers the same (or some of the) functionality of Sysinternals DiskView, especially being able to view to physical location of a file on a hard disk? DiskView URL: http://technet.microsoft.com/en-gb/sysinternals/bb896650
For some file systems like ext4 or btrfs on Linux, you can use filefrag to get the offsets of the data segments for the file on the block device the file system is on.

$ seq 1000 > a
$ filefrag -v a
Filesystem type is: ef53
File size of a is 3893 (1 block of 4096 bytes)
 ext: logical_offset: physical_offset: length: expected: flags:
   0:              0..       0:   82784147..  82784147:      1:             eof
a: 1 extent found
$ sudo dd bs=4k skip=82784147 count=1 if=/dev/storage/home 2>&- | head
1
2
3
4
5
6
7
8
9
10

Here the block device is a LVM volume. That volume may have physical volumes on disks, on partitions, on RAID arrays, on files, on RAM, on network block devices... Going back to an actual disk or set of disks may prove difficult. In my case, it's relatively easy, as it's just a logical volume on top of one GPT partition as one linear stretch.

$ sudo dmsetup table /dev/storage/home
0 1953120256 linear 8:98 384

So /dev/storage/home is 384 sectors within device 8:98, which happens to be /dev/sdg2 for me.

$ cat /sys/block/sdg/sdg2/start
489060352

So sdg2 is 489060352 sectors within /dev/sdg (the 7th disk on this system). So I can obtain the offset within the single disk that file is on with:

$ sudo dd if=/dev/sdg skip="$((489060352+384+82784147*8))" count=1 2> /dev/null | head
1
2
3
4
5
6
7
8
9
10
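If the file lives on a partition of a plain (non-LVM, non-RAID) disk, a possibly simpler route, assuming a reasonably recent hdparm, is to let it print the file's sector ranges relative to the whole device:

sudo hdparm --fibmap /path/to/file

Treat the output as a hint rather than gospel on stacked storage (LVM, md, dm-crypt), where the mapping to a physical disk is indirect.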
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/161922", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/85900/" ] }
161,932
I'm trying to find the sed command so that I can put filename into filename1. These are my two separate files.

INPUT filename has:
Cindy 11 22 54
Chester 48 12 84

INPUT filename1 has:
Name Class1 Class2 Class3
Lee 92 94 88
Chancy 91 85 95
Dora 99 77 96
Jefferry 84 98 90

This is the result that I need (output):
Name Class1 Class2 Class3
Lee 92 94 88
Chancy 91 85 95
Dora 99 77 96
Cindy 11 22 54
Chester 48 12 84
Jefferry 84 98 90

If I need to clarify anything let me know. Basically Cindy and Chester have to be right in between Dora and Jefferry.
To add all lines from filename to filename1 using the sed command you can do the following:

sed r filename1 filename

Please note however that the result will be slightly different from the output in your question, namely:

Name Class1 Class2 Class3
Lee 92 94 88
Chancy 91 85 95
Dora 99 77 96
Jefferry 84 98 90
Cindy 11 22 54
Chester 48 12 84

Edit
Some additional sed information useful for this question:

To add filename after the 4th line of filename1:
sed '4 r filename' filename1

To add filename after the line which starts with "Dora" in filename1:
sed '/^Dora/ r filename' filename1

To add filename after the 4th line and remove any blank lines from filename1:
sed '/^$/d;4 r filename' filename1
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/161932", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/86930/" ] }
161,958
I am trying to figure out a solution for this question. My approach to this problem so far is as below. Append all the characters together to make one long string. After the above step, remove all the white spaces or tab spaces so that we will just have one big string. I was able to accomplish the above steps with the command below.

column -s '\t' inputfile | tr -d '[:space:]'

So for an input file like this,

1 0 0 0 0 0
0 1 1 1 0 0

after applying the above command I have the values as,

100000011100

Now in this big string I am trying to apply an approach as below. Extract every 6th character (as the original OP wants), and append it to an array element till the end of the string. So basically, with the above step, I am trying to create the array elements as: 10 (1st and 7th characters), 01 (2nd and 8th characters), 01 (3rd and 9th characters), 01 (4th and 10th characters), 00 (5th and 11th characters), 00 (6th and 12th characters). So my question is, how could I extract every nth character so that I could add them to an array to proceed further? (n=6, in this case).
Two lines Here is a pure- bash solution that produces a bash array: s="100000011100"array=($( for ((i=0; i<${#s}-6; i++)) do echo "${s:$i:1}${s:$((i+6)):1}" done ))echo "${array[@]}" This produces the same output as shown in the question: 10 01 01 01 00 00 The key element here is the use of bash's substring expansion . Bash allows the extraction substrings from a variable, say parameter , via ${parameter:offset:length} . In our case, the offset is determined by the loop variable i and the length is always 1 . General Solution For Any Number of Lines Suppose, for example, that our original string has 18 characters and we want to extract the i-th, the i+6-th, and the i+12-th characters for i from 0 to 5. Then: s="100000011100234567"array=($( for ((i=0; i<6; i++)) do new=${s:$i:1} for ((j=i+6; j<${#s}; j=j+6)) do new="$new${s:$j:1}" done echo "$new" done ))echo "${array[@]}" This produces the output: 102 013 014 015 006 007 This same code extends to an arbitrary number of 6-character lines. For example, if s has three lines (18 characters): s="100000011100234567abcdef" Then, the output becomes: 102a 013b 014c 015d 006e 007f
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/161958", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47538/" ] }
161,973
I installed oh-my-zsh to make terminal use a bit easier. One thing that bugs me though is the prolific aliases added by it, like "ga", "gap", "gcmsg", "_", which are harder to remember than the original commands, and pollute the command hash table. So is there a way to disable aliases altogether? Or a way to clear all aliases, so that I can put it in my .zshrc?
If you don't want any of oh-my-zsh's aliases, but you want to keep other aliases, you can save the aliases before loading oh-my-zsh save_aliases=$(alias -L) and restore them afterwards. eval $save_aliases; unset save_aliases If you want to remove all aliases at some point, you can use unalias -m '*' (remove all aliases matching * , i.e. all of them). If you absolutely hate aliases and don't want to ever see one, you can make the alias builtin inoperative: unalias -m '*'; alias () { : } . Or you can simply turn off alias expansion with setopt no_aliases .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/161973", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38233/" ] }
162,092
How can I use zypper to list the locations of installed files for a given package?
In openSUSE, there are a few ways of listing installed packages and their files. With Zypper, I would try:

zypper search -i

Or:

zypper search --installed-only

With RPM, you can try:

rpm -ql packageName | less
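To answer the question as asked (the file locations for one given package), the rpm query is the part that does it; the package name below is only an example. You can also go the other direction and ask which package owns a particular file:

rpm -ql coreutils
rpm -qf /usr/bin/ls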
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/162092", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56755/" ] }
162,131
I would like to create a patch from a specific gcc branch comparing it to the official releases; so when I unpack the tarball from the stable release, I can apply the patch and get the equivalent of what was in that specific branch . It's the first time I need to create a patch, so it's my very first time doing this and my main concern is to get the options and the parsing right since we are talking about an extremely important piece of software diff -crB GccStable GccGit > /tmp/fromStabletoBranch.patch Is this enough and the best way of doing it ?
Yes, this is a good way to create a patch. In short: To create a patch for a single file, your command may look like

diff -Naru file_original file_updated > file.patch

where
-N : treat absent files as empty
-a : treat all files as text
-r : recursively compare any subdirectories found
-u : output NUM (default 3) lines of unified context

To create a patch for a whole directory:

diff -crB dir_original dir_updated > dfile.patch

where
-c : output NUM (default 3) lines of copied context
-r : recursively compare any subdirectories
-B : ignore changes whose lines are all blank

After all that, to apply this patch one can run

patch -p1 --dry-run < dfile.patch

where the switch p instructs patch to strip the path prefix so that files will be identified correctly. In most cases it should be 1. Remove --dry-run if you are happy with the result printed on the screen.
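Since the branch in question presumably lives in a git checkout of gcc, git can produce the same kind of unified diff directly; the ref names here are only placeholders for the release tag and branch you actually have:

git diff <release-tag>..<branch-name> > /tmp/fromStabletoBranch.patch

A diff produced this way applies with patch -p1 from the top of the unpacked release tarball, the same as one produced with diff -u.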
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/162131", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41194/" ] }
162,133
I want to run a bash script in a detached screen. The script calls a program a few times, each of which takes too long to wait for. My first thought was to simply open a screen and then call the script, but it appears that I can't detach (with ctrl-a d) while the script is running. So I did some research and found this instruction to replace the shebang with the following:

#!/usr/bin/screen -d -m -S screenName /bin/bash

But that doesn't work, either (the options are not recognized). Any suggestions? PS It occurs to me just now that screen -dmS name ./script.sh would probably work for my purposes, but I'm still curious about how to incorporate this into the script. Thank you.
The shebang line you've seen may work on some unix variants, but not on Linux. Linux's shebang lines are limited: you can only have one option. The whole string -d -m -S screenName /bin/bash is passed as a single option to screen, instead of being passed as different words. If you want to run a script inside screen and not mess around with multiple files or quoting, you can make the script a shell script which invokes screen if not already inside screen.

#!/bin/sh
if [ -z "$STY" ]; then exec screen -dm -S screenName /bin/bash "$0"; fi
do_stuff
more_stuff
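Once that is in place, the usual screen commands let you find and reattach to the detached session:

screen -ls            # list sessions; the one above appears under "screenName"
screen -r screenName  # reattach to it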
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/162133", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87866/" ] }
162,134
It has been bugging me for a very long time and I'm really sick of it. For example, there's a script called testscript that compares two directories. In class the prof can just type testscript dir1 dir2 to get the output, but I have to add ./ before testscript and hit enter, then enter dir1 and dir2 on the next line. How did the prof do that? Is it something to do with the bashrc thing? I never get how it works. If it's related, please explain in plain simple language since I'm new to Linux. Thank you!
You need to add the directory with your script to the PATH variable:

export PATH="$PATH:/path/to/dir"

or you can even add the current directory to the PATH:

export PATH="$PATH:."

The latter has some security drawbacks though.
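Both of these only last for the current shell session. To make the change permanent for future interactive shells, one common approach (the directory here is just an example) is to append the export to your ~/.bashrc:

echo 'export PATH="$PATH:$HOME/scripts"' >> ~/.bashrc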
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/162134", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/85812/" ] }
162,180
I'm trying to make a lightweight VM based on Xubuntu. I want to start with a stock Xubuntu installation and then remove any packages that I don't need. My concept is to build a .deb package that removes the unneeded packages. I've been reasonably successful specifying that my package conflicts with the ones I want to remove. For example: Conflicts: gnumeric, thunderbird, blueman, mousepad, xchat, etc... This causes the named packages to be removed when my package is installed. The problem comes later: if I want to re-install, say, Thunderbird, doing so will cause my package to be uninstalled. What's a good way to clean up the system in an automated manner? Can the postinst script be used to remove packages?
How about splitting your package into two: one part that contains the real functionality, and another part that conflicts with the packages you want to remove. Make the first package recommend the second package so that it gets installed by default, but if you later want to install something that your package conflicts with, then you can choose to remove your second package. That shouldn't be a problem as the cleanup has already taken place.
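As a rough illustration of that split, a debian/control for the pair could look something like the stanzas below; the package names are placeholders (not from the question), while the Conflicts list reuses the example packages you mentioned:

Package: myvm-base
Architecture: all
Depends: ${misc:Depends}
Recommends: myvm-trim
Description: base configuration for the lightweight VM

Package: myvm-trim
Architecture: all
Depends: ${misc:Depends}
Conflicts: gnumeric, thunderbird, blueman, mousepad, xchat
Description: removes packages not wanted in the lightweight VM

Installing myvm-base pulls in myvm-trim by default (APT installs Recommends unless told otherwise), and later installing Thunderbird only forces the removal of myvm-trim, leaving myvm-base in place.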
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/162180", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10004/" ] }
162,211
I'm used to bash's builtin read function in while loops, e.g.:

echo "0 1 1 1 1 2 2 3" |\
while read A B; do
  echo $A + $B | bc;
done

I've been working on some make project, and it became prudent to split files and store intermediary results. As a consequence I often end up shredding single lines into variables. While the following example works pretty well,

head -n1 somefile | while read A B C D E FOO; do [... use vars here ...]; done

it's sort of stupid, because the while loop will never run more than once. But without the while,

head -n1 somefile | read A B C D E FOO; [... use vars here ...]

the read variables are always empty when I use them. I never noticed this behaviour of read, because usually I'd use while loops to process many similar lines. How can I use bash's read builtin without a while loop? Or is there another (or even better) way to read a single line into multiple (!) variables?

Conclusion
The answers teach us it's a problem of scoping. The statement

cmd0; cmd1; cmd2 | cmd3; cmd4

is interpreted such that the commands cmd0, cmd1, and cmd4 are executed in the same scope, while the commands cmd2 and cmd3 are each given their own subshell, and consequently different scopes. The original shell is the parent of both subshells.
It's because the part where you use the vars is a new set of commands. Use this instead: head somefile | { read A B C D E FOO; echo $A $B $C $D $E $FOO; } Note that, in this syntax, there must be a space after the { and a ; (semicolon) before the } . Also -n1 is not necessary; read only reads the first line. For better understanding, this may help you; it does the same as above: read A B C D E FOO < <(head somefile); echo $A $B $C $D $E $FOO Edit: It's often said that the next two statements do the same: head somefile | read A B C D E FOOread A B C D E FOO < <(head somefile) Well, not exactly. The first one is a pipe from head to bash 's read builtin. One process's stdout to another process's stdin. The second statement is redirection and process substitution. It is handled by bash itself. It creates a FIFO (named pipe, <(...) ) that head 's output is connected to, and redirects ( < ) it to the read process. So far these seem equivalent. But when working with variables it can matter. In the first one the variables are not set after executing. In the second one they are available in the current environment. Every shell has another behavior in this situation. See that link for which they are. In bash you can work around that behavior with command grouping {} , process substitution ( < <() ) or Here strings ( <<< ).
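A related bash-only option, if you are on bash 4.2 or newer, is the lastpipe shell option, which runs the last command of a pipeline in the current shell instead of a subshell, so the variables survive. It only takes effect when job control is off, which is the default in scripts (interactively you would also need set +m):

shopt -s lastpipe
head -n1 somefile | read A B C D E FOO
echo "$A $B $C $D $E $FOO"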
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/162211", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/19575/" ] }
162,221
There are many ways to replace characters in a variable. The shortest way I have found so far is tr:

OUTPUT=a\'b\"c\`d_123and_a_lot_more
OUTPUT=$(echo "$OUTPUT"|tr -d "'\`\"")
echo $OUTPUT

Is there a faster way? And is this quoting-safe for quotes like ', " and ` itself?
Let's see. The shortest I can come up with is a tweak of your tr solution: OUTPUT="$(tr -d "\"\`'" <<<$OUTPUT)" Other alternatives include the already mentioned variable substitution which can be shorter than shown so far: OUTPUT="${OUTPUT//[\'\"\`]}" And sed of course though this is longer in terms of characters: OUTPUT="$(sed s/[\'\"\`]//g <<<$OUTPUT)" I'm not sure if you mean shortest in length or in terms of time taken. In terms of length these two are as short as it gets (or as I can get it anyway) when it comes to removing those specific characters. So, which is fastest? I tested by setting the OUTPUT variable to what you had in your example but repeated several dozen times: $ echo ${#OUTPUT} 4900$ time tr -d "\"\`'" <<<$OUTPUTreal 0m0.002suser 0m0.004ssys 0m0.000s$ time sed s/[\'\"\`]//g <<<$OUTPUTreal 0m0.005suser 0m0.000ssys 0m0.000s$ time echo ${OUTPUT//[\'\"\`]}real 0m0.027suser 0m0.028ssys 0m0.000s As you can see, the tr is clearly the fastest, followed closely by sed . Also, it seems like using echo is actually slightly faster than using <<< : $ for i in {1..10}; do ( time echo $OUTPUT | tr -d "\"\`'" > /dev/null ) 2>&1done | grep -oP 'real.*m\K[\d.]+' | awk '{k+=$1;} END{print k/NR}'; 0.0025$ for i in {1..10}; do ( time tr -d "\"\`'" <<<$OUTPUT > /dev/null ) 2>&1 done | grep -oP 'real.*m\K[\d.]+' | awk '{k+=$1;} END{print k/NR}'; 0.0029 Since the difference is tiny, I ran the above tests 10 times for each of the twoand it turns out that the fastest is indeed the one you had to begin with: echo $OUTPUT | tr -d "\"\`'" However, this changes when you take into account the overhead of assigning to a variable, here, using tr is slightly slower than the simple replacement: $ for i in {1..10}; do ( time OUTPUT=${OUTPUT//[\'\"\`]} ) 2>&1 done | grep -oP 'real.*m\K[\d.]+' | awk '{k+=$1;} END{print k/NR}'; 0.0032$ for i in {1..10}; do ( time OUTPUT=$(echo $OUTPUT | tr -d "\"\`'")) 2>&1 done | grep -oP 'real.*m\K[\d.]+' | awk '{k+=$1;} END{print k/NR}'; 0.0044 So, in conclusion, when you simply want to view the results, use tr but if you want to reassign to a variable, using the shell's string manipulation features is faster since they avoid the overhead of running a separate subshell.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/162221", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20661/" ] }
162,260
Currently I have netcat piping output to tee, which is writing to output.txt with

nc -l -k -p 9100 | tee output.txt

I want to monitor this output, so I'm watching it with tail -f | egrep -i 'regex' via PuTTY so that I only see relevant bits. Every now and then I want to clear the output file. The issue arises that if I do > output.txt and then again try to tail -f | egrep ... I get no output. If I grep through the file, I get no matches, despite knowing that there should be matches (as cat output.txt spits out the file properly):

mitch@quartz:~$ grep output.txt -e 'regex'
Binary file output.txt matches

while the same command on output.txt before emptying it works fine. Basically: > makes grep think my file is a binary file and it won't properly search. Is there a better way to clear the file?
If the only problem is that grep treats it as binary, tell grep to search it regardless:

$ head /bin/bash > out
$ echo "test" >> out
$ grep test out
Binary file out matches
$ grep -a test out
test

From man grep:

-a, --text
    Process a binary file as if it were text; this is equivalent to the --binary-files=text option.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/162260", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73386/" ] }
162,271
How can I read a certain number of lines after finding some text? E.g.: read the next 2 lines after finding "Unix" in:

Test 1
Test 2
Test 3
Test 4
UNIX
Test 5
Test 6
Test 7
Test 8
Test 9

The result should be:

Test 5
Test 6

Note: The "Unix" in the last example is an argument, and so it can be any other text. What I have: I'm still out of ideas, I just need a hint. I'm thinking of creating another script to do that.
A grep solution: grep -A2 -P '^UNIX$' file Explanation: -A means: print the next two lines after the match Or awk: awk '$0=="UNIX"{getline; print; getline; print}' file Explanation: Search for UNIX in the line ( $0=="UNIX" ). If found, get the next line into the buffer ( getline ) and print the buffer ( print ). This is done twice. Or use sed : sed -n '/^UNIX$/{n;p;n;p}' file Explanation: Search for UNIX ( /^UNIX$/ ). If found, execute the part in the {...} . n means next, p means print. This is done twice as well.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/162271", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87669/" ] }
162,286
Answering this question I discovered a very funny (and subtle) difference between behavior in bash and zsh.

In bash:

romano@RRyS:~$ pwd
/home/romano
romano@RRyS:~$ alias x="cd /bin && ./echo A >/dev/null &"
romano@RRyS:~$ x
[1] 16611
romano@RRyS:~$ pwd
/home/romano

As you can see, the execution of alias x is carried out in a subshell and so the current directory does not change. Not in zsh:

[romano:~] % pwd
/home/romano
[romano:~] % alias x="cd /bin && ./echo A >/dev/null &"
[romano:~] % x
[1] 16744
[1]  + 16744 done       ./echo A >/dev/null 1&
[romano:/bin] % pwd
/bin
[romano:/bin] %

here the directory is changed. It seems that the & in bash has a different priority than in zsh --- I mean, the command seems to be read as (cd /tmp && echo A) & in bash and as cd /tmp && (echo A &) in zsh. Is this correct, or is the cause of the different behavior something else?
Different, documented behavior in zshmisc A list is a sequence of zero or more sublists, in which each sublist is terminated by ; , & , &| , &! , or a newline. This terminator may optionally be omitted from the last sublist in the list when the list appears as a complex command inside (...) or {...} . When a sublist is terminated by ; or newline, the shell waits for it to finish before executing the next sublist. If a sub‐list is terminated by a & , &| , or &! , the shell executes the last pipeline in it in the background, and does not wait for it to finish (note the difference from other shells which execute the whole sublist in the background). A backgrounded pipeline returns a status of zero.
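A quick illustration of the consequence, assuming you want the bash-like behaviour in zsh: background the whole sublist explicitly by wrapping it in braces, so the cd runs in the backgrounded subshell instead of your interactive shell:

# zsh: only the last pipeline is backgrounded, so the cd takes effect here
cd /bin && ./echo A >/dev/null &
# zsh: the whole group is backgrounded, current directory stays unchanged
{ cd /bin && ./echo A >/dev/null; } &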
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/162286", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52205/" ] }
162,305
How do I find the appropriate font for rendering unicode codepoints? gnome-terminal finds that characters like «⼼» can be rendered with fonts like Symbola rather than my terminal font or the codepoint-in-square fallback (). How?
Using fontconfig,

> fc-list ':charset=<hex_code1> <hex_code2>'

e.g.

> fc-list ':charset=2713 2717'

will display any font filenames containing ✓ and ✗. To get the codepoint corresponding to the character use (for example)

> printf "%x" \'✓
2713>

This uses a somewhat obscure feature of the POSIX printf utility: If the leading character is a single-quote or double-quote, the value shall be the numeric value in the underlying codeset of the character following the single-quote or double-quote. Taken together,

> printf '%x' \'✓ | xargs -I{} fc-list ":charset={}"

This uses the xargs -I flag to replace {} with names from stdin. So this effectively boils down to:

> fc-list ":charset=2713"
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/162305", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53351/" ] }
162,319
I am trying to extract the application name from the ps command like this:

ps -af -u sas | grep "java" | grep -v "grep"

and it produces the following:

sas 24431 1 0 Oct10 ? 00:51:08 /usr/lib/jvm/java-1.7.0-oracle-1.7.0.25.x86_64/jre/bin/java -Denv=DEV -Dapp.name=myApp -Xms512m -Xmx1g -Dlog.dir=/apps/java/logs

I tried using egrep but it doesn't seem to be working:

ps -af -u sas | grep "java" | grep -v "grep" | egrep -o "-Dapp.name"

How can I get only the -Dapp.name value "myApp"?
With awk : ps -af -u sas | awk 'BEGIN {RS=" "}; /-Dapp.name/'
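If only the value after the = is wanted (myApp rather than the whole -Dapp.name=myApp token), the matched record can be trimmed in the same awk; a small sketch building on the answer above:

    ps -af -u sas | awk 'BEGIN {RS=" "} /^-Dapp.name=/ {sub(/^-Dapp.name=/, ""); print}'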
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/162319", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72369/" ] }
162,348
I have a grep that works for some of the dates but having trouble getting my brain to make it fully functional. grep 19[6-9][5-6]$ filename it catches a few correctly but I'm looking to grab all years between 1965-1996. Here is the current solution but looking for a one line really, but here's what I've gotten so far: grep 196[5-9]$ filenamegrep 197[0-9]$ filenamegrep 198[0-9]$ filenamegrep 199[0-6]$ filename Looking for better and shorter if possible?
Date ranges & regex aren't really that good a match. If I interpret the $ in your grep correctly the date is the last field on a line. Try this: awk '$NF >= 1965 && $NF <= 1996' filename If you must use grep it becomes more convoluted (the grouping parentheses are needed so the $ anchor applies to every alternative, not just the last one): grep -E '(196[5-9]|19[78][0-9]|199[0-6])$' filename
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/162348", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82325/" ] }
162,363
Here is the weak attempt at a paste command trying to include a newline: paste -d -s tmp1 tmp2 \n tmp3 \n tmp4 tmp5 tmp6 > tmp7 Basically I have several lines in each tmp and I want the output to read First(tmp1) Last(tmp2)Address(tmp3)City(tmp4) State(tmp5) Zip(tmp6) Am I way off base with using a newline in the paste command? Here is my finished product: THANK YOU FOR THE HELP! cp phbook phbookh2p5 sed 's/\t/,/g' phbookh2p5 > tmp sort -k2 -t ',' -d tmp > tmp0 cut -d',' -f1,2 tmp0 > tmp1 cut -d',' -f3 tmp0 > tmp2 cut -d',' -f4,5,6 tmp0 > tmp3 echo "" > tmp4 paste -d '\n' tmp1 tmp2 tmp3 tmp4 > tmp7 sed 's/\t/ /g' tmp7 > phbookh2p5 cat phbookh2p5 rm tmp*; rm phbookh2p5
Try this solution with two extra temporary files: paste tmp1 tmp2 > tmp12; paste tmp4 tmp5 tmp6 > tmp456; paste -d "\n" tmp12 tmp3 tmp456 > tmp7 This solution was based on the assumption that the -d option selects the delimiter globally for all input files, so it could either be a blank or a newline. In a way this is true since later occurrences of -d overwrite previous ones. However, as @DigitalTrauma pointed out, we can supply more than one delimiter and they will be used sequentially. So @DigitalTrauma's solution is more elegant than mine since it completely avoids additional temporary files. One niche application for my solution would be the case in which one or more delimiters of more than one character each have to be used; that is not possible with the -d option alone.
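For reference, the single-step multi-delimiter variant alluded to above works because -d takes a list of delimiters that paste cycles through between consecutive input files; a sketch, assuming the six files and the layout from the question (space, newline, newline, space, space):

    paste -d ' \n\n  ' tmp1 tmp2 tmp3 tmp4 tmp5 tmp6 > tmp7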
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/162363", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82325/" ] }
162,377
I have a very large csv file. How would you remove the very last , with sed (or similar) ? ...[11911,0,"BUILDER","2014-10-15","BUILDER",0,0],[11912,0,"BUILDER","2014-10-15","BUILDER",0,0],[11913,0,"BUILDER","2014-10-15","BUILDER",0,0],] Desired output ...[11911,0,"BUILDER","2014-10-15","BUILDER",0,0],[11912,0,"BUILDER","2014-10-15","BUILDER",0,0],[11913,0,"BUILDER","2014-10-15","BUILDER",0,0]] The following sed command will delete the last occurrence per line, but I want per file. sed -e 's/,$//' foo.csv Nor does this work sed '$s/,//' foo.csv
Using awk If the comma is always at the end of the second to last line: $ awk 'NR>2{print a;} {a=b; b=$0} END{sub(/,$/, "", a); print a;print b;}' input[11911,0,"BUILDER","2014-10-15","BUILDER",0,0],[11912,0,"BUILDER","2014-10-15","BUILDER",0,0],[11913,0,"BUILDER","2014-10-15","BUILDER",0,0]] Using awk and bash $ awk -v "line=$(($(wc -l <input)-1))" 'NR==line{sub(/,$/, "")} 1' input[11911,0,"BUILDER","2014-10-15","BUILDER",0,0],[11912,0,"BUILDER","2014-10-15","BUILDER",0,0],[11913,0,"BUILDER","2014-10-15","BUILDER",0,0]] Using sed $ sed 'x;${s/,$//;p;x;};1d' input[11911,0,"BUILDER","2014-10-15","BUILDER",0,0],[11912,0,"BUILDER","2014-10-15","BUILDER",0,0],[11913,0,"BUILDER","2014-10-15","BUILDER",0,0]] For OSX and other BSD platforms, try: sed -e x -e '$ {s/,$//;p;x;}' -e 1d input Using bash while IFS= read -r line; do [ "$a" ] && printf "%s\n" "$a"; a=$b; b=$line; done <input; printf "%s\n" "${a%,}"; printf "%s\n" "$b"
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/162377", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39263/" ] }
162,393
I've heard that changing the hostname in new versions of fedora is done with the hostnamectl command. In addition, I recently (and successfully) changed my hostname on Arch Linux with this method. However, when running: [root@localhost ~]# hostnamectl set-hostname --static paragon.localdomain[root@localhost ~]# hostnamectl set-hostname --transient paragon.localdomain[root@localhost ~]# hostnamectl set-hostname --pretty paragon.localdomain The changes are not preserved after a reboot (contrary to many people's claims that it does). What is wrong? I really don't want to edit /etc/hostname manually. I should also note that this is a completely stock fedora. I haven't even gotten around to installing my core apps yet.
The command to set the hostname is definitely, hostnamectl . root ~ # hostnamectl set-hostname --static "YOUR-HOSTNAME-HERE" Here's an additional source that describes this functionality a bit more, titled: Correctly setting the hostname - Fedora 20 on Amazon EC2 . Additionally the man page for hostnamectl : HOSTNAMECTL(1) hostnamectl HOSTNAMECTL(1)NAME hostnamectl - Control the system hostnameSYNOPSIS hostnamectl [OPTIONS...] {COMMAND}DESCRIPTION hostnamectl may be used to query and change the system hostname and related settings. This tool distinguishes three different hostnames: the high-level "pretty" hostname which might include all kinds of special characters (e.g. "Lennart's Laptop"), the static hostname which is used to initialize the kernel hostname at boot (e.g. "lennarts-laptop"), and the transient hostname which is a default received from network configuration. If a static hostname is set, and is valid (something other than localhost), then the transient hostname is not used. Note that the pretty hostname has little restrictions on the characters used, while the static and transient hostnames are limited to the usually accepted characters of Internet domain names. The static hostname is stored in /etc/hostname, see hostname(5) for more information. The pretty hostname, chassis type, and icon name are stored in /etc/machine-info, see machine-info(5). Use systemd-firstboot(1) to initialize the system host name for mounted (but not booted) system images. There is a bug in Fedora 21 where SELinux prevents hostnamectl access, found here, titled: Bug 1133368 - SELinux is preventing systemd-hostnam from 'unlink' accesses on the file hostname . This bug seems to be related. There's an issue with the SELinux contexts not being applied properly to the file /etc/hostname upon installation. This manifests in the tool hostnamectl not being able to manipulate the file /etc/hostname . That same thread offered this workaround: $sudo restorecon -v /etc/hostname NOTE: That patches were applied to Anaconda (the installation tool) so that this issue should go away in the future for new users.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/162393", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78754/" ] }
162,402
I'd like to get a list of fail logs in the current directory for use in an outside script, the logs are determined with the pattern FAIL in the filename, so I've been using a FAIL* to feed my script files to open and process. However, for each FAIL file there are two types, a compressed file and an uncompressed one. I just want to open the uncompressed file. Is it possible to chain find FAIL* but not if *.gz/bz2/whatever exists?
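One way is to let find filter out the compressed variants itself; a minimal sketch, assuming the archives end in .gz or .bz2 (add more ! -name tests for other extensions):

    find . -maxdepth 1 -type f -name 'FAIL*' ! -name '*.gz' ! -name '*.bz2'

The -maxdepth 1 restricts the search to the current directory; drop it to descend into subdirectories. With bash's extglob option the same filtering works as a plain glob: after shopt -s extglob, the pattern FAIL!(*.gz|*.bz2) expands to the uncompressed FAIL files only.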
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/162402", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88039/" ] }
162,411
I am trying to understand how to use find -maxdepth 0 option. I have the below directory structure. --> file1--> parent --> child1 --> file1 --> file2 --> child2 --> file1 --> file2 --> file1 Now, I execute my find command as below. find ./parent -maxdepth 0 -name "file1"find ./ -maxdepth 0 -name "file1"find . -maxdepth 0 -name "file1" With none of the above find commands, file1 gets returned. From man page of find , I see the below information. -maxdepth 0 means only apply the tests and actions to the command line arguments. I searched for some examples with -maxdepth 0 option and couldn't find any proper example. My find version is, find --versionfind (GNU findutils) 4.4.2 Can someone please provide me some pointers on which cases -maxdepth 0 option would be useful? EDIT When I execute the below command, I get the file1 getting listed twice. Is this intended to work this way? find . file1 -maxdepth 1 -name "file1"./file1file1
Let us suppose that we have file1 in the current directory.  Then: $ find . -maxdepth 0 -name "file1"$ find . file1 -maxdepth 0 -name "file1"file1 Now, let's look at what the documentation states: -maxdepth 0 means only apply the tests and actions to the command line arguments. In my first example above, only the directory . is listed on the command line.  Since . does not have the name file1 , nothing is listed in the output.  In my second example above, both . and file1 are listed on the command line, and, because file1 matches -name "file1" , it was returned in the output.  In other words, -maxdepth 0 means do not search directories or subdirectories. Instead, only look for a matching file among those explicitly listed on the command line. In your examples, only directories were listed on the command line and none of them were named file1 . Hence, no output. In general, many files and directories can be named on the command line. For example, here we try a find command with nine files and directories on the command line: $ lsd1 file1 file10 file2 file3 file4 file5 file6 file7$ find d1 file1 file10 file2 file3 file4 file5 file6 file7 -maxdepth 0 -name "file1"file1 Overlapping paths Consider: $ find . file1 -maxdepth 0 -iname file1file1$ find . file1 file1 -maxdepth 0 -iname file1file1file1$ find . file1 file1 -maxdepth 1 -iname file1./file1file1file1 find will follow each path specified on the command line and look for matches even if the paths lead to the same file, as in . file , or even if the paths are exact duplicates, as in file1 file1 .
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/162411", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47538/" ] }
162,425
I have a Kali Linux installed recently, but due to its inflexible mirror selection, all deb packages are downloaded from a mirror that I don't trust. Is it possible to tell apt / dselect to re-download and reinstall all packages, assuming sources.list has been updated to use new mirror?
As Alex pointed out: sudo apt-get clean; sudo apt-get install --reinstall $(dpkg --get-selections | grep -w 'install$' | cut -f 1) The first command makes sure apt's cache is empty, so apt-get will need to download the packages from the repositories again. For the second, you first need a list of all installed packages, which is what the $(...) part produces, and then apt-get reinstalls them. There's another way with aptitude: sudo aptitude --reinstall install '~i' The ~i pattern searches for installed packages. With both methods you will end up with no packages marked as automatically installed, so run apt-mark showauto > packages before doing this, and sudo apt-mark auto $(cat packages) afterwards to restore those marks.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/162425", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78953/" ] }
162,476
For example, $PATH and $HOME When I type echo $PATH it returns my $PATH , but I want to echo the word $PATH and not what the actual variable stands for; echo "$PATH" doesn't work either.
You just need to escape the dollar sign ($): echo \$PATH$PATH Or surround it in single quotes: echo '$PATH'$PATH This ensures the word is not expanded by the shell.
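If the string has to stay inside double quotes (for example because other parts of it should still be expanded), the backslash works there too: echo "\$PATH is set to $PATH" prints the literal word $PATH followed by the variable's value.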
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/162476", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79979/" ] }
162,478
Everybody seems to be talking about the POODLE vulnerability today. And everybody recommends disabling SSLv3 in Apache using the following configuration directive: SSLProtocol All -SSLv2 -SSLv3 instead of the default SSLProtocol All -SSLv2 I've done that, and no joy – after testing repeatedly with various tools ( here's a fast one ), I find that SSLv3 is happily accepted by my server. Yes, I did restart Apache. Yes, I did a recursive grep on all configuration files, and I don't have any override anywhere. And no, I'm not using some ancient version of Apache: [root@server ~]# apachectl -vServer version: Apache/2.2.15 (Unix)Server built: Jul 23 2014 14:17:29 So, what gives? How does one really disable SSLv3 in Apache?
I had the same problem...You have to include SSLProtocol all -SSLv2 -SSLv3 within every VirtualHost stanza in httpd.conf The VirtualHost stanzas are generally towards the end of the httpd.conf file. So for example: ......<VirtualHost your.website.example.com:443> DocumentRoot /var/www/directory ServerName your.website.example.com ... SSLEngine on ... SSLProtocol all -SSLv2 -SSLv3 ...</VirtualHost> Also check ssl.conf or httpd-ssl.conf or similar because they may be set there, not necessarily in httpd.conf
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/162478", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88086/" ] }
162,506
I have two files with some names in them, when I ran cat file1 file2 | sort , the terminal shows the names sorted alphabetically, but when I run cat file1 file2 > file3 | sort I don't see the sorted output, why?
You have already redirected the output of file1 and file2 to the new file file3 . With the command cat file1 file2 > file3 | sort , the redirection sends everything to file3, so the sort after the pipe receives no input. This can be verified as below, where file1 contains the lines A, Z, B and file2 contains F, G, C. Now when I run the command cat file1 file2 > file3 | sort we can see that the contents of file1 and file2 are written to file3 but they are not sorted. I believe what you are trying to achieve can be fairly easily accomplished as, cat file1 file2 | sort > file3 However, it doesn't show the output in the console window. If you need the sorted output of the two files to be written to a new file and at the same time have the sorted output available in the console, you could do something like below. cat file1 file2 | sort > file3; cat file3 However, it is better to make use of tee in this case. tee could be effectively used as, cat file1 file2 | sort | tee file3 The above command concatenates the 2 files, sorts them, displays the output in the console and at the same time writes the output of the pipe to the new file specified using the tee command. As user casey points out, if you have the zsh shell available on your system, you could use the below command as well. sort <file1 <file2 | tee file3
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/162506", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59882/" ] }
162,525
I understand that sudo cd /directory will return: sudo: cd: command not found because cd is a shell builtin and not a binary. But then, why does sudo echo 'this is a test' works fine? What is really going on here? How does sudo find the command echo if it's not a shell?
The reason is simple, cd is a shell builtin (and shell function in some shells), while echo is both a binary and a shell builtin: $ type -a cd cd is a shell builtin$ type -a echo echo is a shell builtinecho is /bin/echo sudo can't handle shell builtins, but can handle binaries in the $PATH . When you use sudo echo , /bin/echo is found in the $PATH , so it uses that, meanwhile sudo cd can't find cd in the $PATH hence it fails.
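When the goal really is to run something from inside a directory that only root can enter, a common workaround (a sketch; the path is only an example) is to let a root shell do the cd:

    sudo sh -c 'cd /root/private && ls -l'

Or start an interactive root shell first with sudo -i (or sudo -s) and cd from there.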
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/162525", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
162,528
A rather annoying feature of vim is that if you are in insert mode and do an autocomplete (Ctrl-N), arrow key down to the desired item, and press the Enter key, then it inserts the item but also inserts a newline which you then have to delete. Is there a way to select an item out of the autocomplete list without getting an additional unwanted newline?
It depends on which popup menu state you are in (see :help popupmenu-completion ). I understand from your question that you're in state 2 (since you've pressed arrow keys to find a completion). However, the default behavior for Enter in state 2 is to insert the completion without newline; what you describe is normally the behavior of state 1 (which is when you use Ctrl + N / Ctrl + P .) A way that works consistently in all states is to use Ctrl + Y . I like to remember the Y as standing for "yes, accept that word." It's also possible to just start typing the text that should come after the completed word, unless you've remapped things as in geedoubleya's answer. In the same context, you can press Ctrl + E to cancel the menu and leave your text as it was before you invoked it. If you're used to the pairings of Ctrl + E and Ctrl + Y in other contexts (e.g. to scroll up or down in normal mode, or to insert the character below or above the cursor in insert mode), that's one way to remember it here. I guess you could also think of it as "exiting" the menu or similar. See :help popupmenu-keys for more.
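If you would still prefer Enter to accept the highlighted completion, a commonly used insert-mode mapping for the vimrc (shown as a sketch) makes Enter act like Ctrl+Y only while the popup menu is visible and behave normally otherwise:

    inoremap <expr> <CR> pumvisible() ? "\<C-y>" : "\<CR>"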
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/162528", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47542/" ] }
162,531
I'm confused about following script ( hello.go ). //usr/bin/env go run $0 $@ ; exitpackage mainimport "fmt"func main() { fmt.Printf("hello, world\n")} It can execute. (on MacOS X 10.9.5) $ chmod +x hello.go$ ./hello.gohello, world I haven't heard about shebang starting with // . And it still working when I insert a blank line at the top of the script. Why does this script work?
It isn't a shebang, it is just a script that gets run by the default shell. The shell executes the first line //usr/bin/env go run $0 $@ ; exit which causes go to be invoked with the name of this file, so the result is that this file is run as a go script and then the shell exits without looking at the rest of the file. But why start with // instead of just / or a proper shebang #! ? This is because the file needs to be a valid go script, or go will complain. In go, the characters // denote a comment, so go sees the first line as a comment and does not attempt to interpret it. The character # , however, does not denote a comment, so a normal shebang would result in an error when go interprets the file. The reason for the syntax is to build a file that is both a shell script and a go script without one stepping on the other.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/162531", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88111/" ] }
162,545
In ksh on an old Solaris box I used: export PS1="$PWD $" to set the prompt to the current directory. It works great until you cd elsewhere, then you see that it's not evaluating PWD each time. I notice that it's fixed by instead setting PS1 like this: export PS1="\$PWD $" Just curious what it's called, how it works, other applications etc. (It's one of those things that's quite resistant to Googling.)
A few pieces of documentation will help to explain this. From the POSIX standards document for the shell : The following variables shall affect the execution of the shell: PS1: Each time an interactive shell is ready to read a command, the value of this variable shall be subjected to parameter expansion and written to standard error. ... Single-Quotes Enclosing characters in single-quotes shall preserve the literal value of each character within the single-quotes. ... Double-Quotes Enclosing characters in double-quotes shall preserve the literal value of all characters within the double-quotes, with the exception of $: The dollar sign shall retain its special meaning introducing parameter expansion ... Escape Character (Backslash) A backslash that is not quoted shall preserve the literal value of the following character. So the value of PS1 is subject to parameter expansion, and this is what you want, so that $PWD will be evaluated every time you get a prompt. This means there needs to be an actual $PWD string in the value of PS1.But, export PS1="$PWD $ " will put the value of PWD at the time the export statement is run into PS1. PS1 will be something like /home/poldie $ , and it'll never change after that. You don't want that. export PS1="\$PWD $ " The backslash will quote the $ , so that PS1 contains the literal string $PWD $ . You want this. export PS1='$PWD $ ' will do the same thing. Parameters are not expanded when surrounded by single quotes.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/162545", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
162,607
I'm writing a bash script that should run on OSX and Ubuntu. I'm not sure if this particular problem is due to an OS mismatch; more likely it's a difference in the date command on the two shells, even though it's bash on both? Let's see. On OSX's bash shell, in order to print a date from two days ago we're doing this: date -v-2d +"%Y"."%m"."%d" That -v flag is invalid on the Ubuntu bash shell however. Instead, there we're using: date --date="2 days ago" +"%Y"."%m"."%d" Unfortunately, the --date flag is unrecognized on our OSX bash shell. I'd love one command with flags that works in both cases; would anyone know what I could try?
date is not a bash builtin. It is a system utility and that is something on which OSX and Linux differ. OSX uses BSD tools while Linux uses GNU tools. They are similar but not the same. As you have found, on OSX the -d flag to date sets the kernel's daylight saving time value, whereas on Linux it specifies the date to display. On OSX, -v adjusts the displayed date but, on Linux, the -v flag is an invalid option. For the most part, both BSD and GNU strive for compatibility with the POSIX standard. If you check the POSIX standard for date , though, you will see that it is no help in this case: it does not support any syntax for adjusting the date. If you want your code to work on both platforms, try: [ "$(uname)" = Linux ] && date --date="2 days ago" +"%Y"."%m"."%d" || date -v-2d +"%Y"."%m"."%d" Or (requires bash): [ "$OSTYPE" = linux-gnu ] && date --date="2 days ago" +"%Y"."%m"."%d" || date -v-2d +"%Y"."%m"."%d"
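A slightly more defensive variant (a sketch) uses an explicit if/else, since in cmd1 && cmd2 || cmd3 the fallback cmd3 would also run if cmd2 itself failed:

    if [ "$(uname)" = Linux ]; then
        date --date="2 days ago" +"%Y.%m.%d"
    else
        date -v-2d +"%Y.%m.%d"
    fi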
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/162607", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88163/" ] }
162,608
I installed OpenBSD using the default/automatic partition configuration which only allocated ~4MB to /usr . While compiling a large port it gave an error that the disk was full. I have looked for ways to extend to the partition but can find nothing about how to do this. I have plenty of space on other partitions (such as /home ), is there a way to redirect where output files go when I run make so I can point them to somewhere that has space?
Create a file /etc/mk.conf with something like the following: WRKOBJDIR=/home/foo/build/portsDISTDIR=/home/foo/build/distfilesPACKAGE_REPOSITORY=/home/foo/packages The path can be to anywhere you want, so obviously replace /home/foo with the directory you want. You don't need to create the directories; they will be created automatically when you run make . This is covered in the FAQ: 15.3.3 - Configuration of the ports system , with the suggested use-case of using this to create a "read only" ports tree, so /usr/ports can be on a read-only disk/filesystem and you can still build from it, but obviously still works as a workaround for any situation where you cannot write to /usr/ports.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/162608", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/534016/" ] }
162,624
There is a getopt command in bash command line. getopt can used with short options (such as getopt -o axby "$@" ), and can be used with both short and long options (such as getopt -o axby -l long-key -- "$@" ), but now I need only long options (i.e. short options don't exist at all), however the command getopt -l long-key -- "$@" doesn't parse --long-key option correctly. So how can I use getopt command with only long options? Or is it impossible or is it just a bug of the getopt command?
getopt is perfectly fine with having no short options. But you need to tell it that you have no short options. It's a quirk in the syntax — from the manual: If no -o or --options option is found in the first part, the first parameter of the second part is used as the short options string. That's what's happening in your test: getopt -l long-key -- --long-key foo treats --long-key as the list of options -egklnoy and foo as the sole argument. Use getopt -o '' -l long-key -- "$@" e.g. $ getopt -l long-key -o '' -- --long-key foo --long-key -- 'foo'$ getopt -l long-key -o '' -- --long-key --not-recognized -n foogetopt: unrecognized option '--not-recognized'getopt: invalid option -- 'n' --long-key -- 'foo'
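For completeness, a typical way to consume the result is the usual eval set -- loop; a minimal sketch with a single long option that takes a value (the option and variable names are only examples):

    opts=$(getopt -o '' -l long-key: -- "$@") || exit 1
    eval set -- "$opts"
    while true; do
        case $1 in
            --long-key) value=$2; shift 2 ;;
            --) shift; break ;;
        esac
    done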
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/162624", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/60999/" ] }
162,638
I know chmod 777 allows read , write , and execute for user , group , and others but what if I just do chmod 7 ? Is that only rwx for the user ?
It's the other way around. It will give rwx permission for others . touch samplefile; ls -l samplefile -rw-rw-r-- 1 ramesh ramesh 0 Oct 16 22:29 samplefile Now after I execute the command, I get the output as, chmod 7 samplefile; ls -l samplefile -------rwx 1 ramesh ramesh 0 Oct 16 22:29 samplefile From the man page of chmod , a numeric mode is from one to four octal digits (0-7), derived by adding up the bits with values 4, 2, and 1. Omitted digits are assumed to be leading zeros. Now, we can verify the same by executing the command as, chmod 47 samplefile; ls -l samplefile ----r--rwx 1 ramesh ramesh 0 Oct 16 22:29 samplefile As we can see, chmod 47 on a file gives read permission for group and read , write and execute permission for others .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/162638", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88180/" ] }
162,677
I'm having difficulty locating a comprehensive up-to-date list of error codes from Bash. e.g.: $ udevadm info /dev/sdx; echo Exit code $?Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.Exit code 4 How is one supposed to look up such exit codes?
tl;dr Exit codes are application specific. There are some loose conventions. false and anything successful prefixed with ! (like ! true ) in POSIX shells return exit code 1, but a developer can use any exit code between 0 and 255 for whatever they want. Ultimately you have to look at its documentation (in the best case) or the code (in the worst case) to know what it means. For programs with man pages exit codes will often be listed in a section named EXIT STATUS (GNU tools like find ). Some popular meanings are listed in /usr/include/sysexits.h - I try to use them whenever possible. As @AnsgarEsztermann points out , these are not a Bash reference, or even an application reference except for those who choose to use it (C/C++ developers primarily according to the ABS ).
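In practice, beyond reading each tool's documentation, scripts usually just branch on $? and on the values the shell itself reserves (bash conventions: 126 means found but not executable, 127 means command not found, 128+N means killed by signal N). A small sketch:

    udevadm info /dev/sdx
    status=$?
    case $status in
        0)   echo "ok" ;;
        127) echo "command not found" ;;
        *)   echo "udevadm returned $status; consult its documentation" ;;
    esac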
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/162677", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88196/" ] }
162,694
When I move files and paste in another location, sometimes I make a mistake typing y y instead d d . Then I can't find any means to cancel the copy operation.
Try Ctrl - C . It worked for me. I got it from this discussion: https://bbs.archlinux.org/viewtopic.php?pid=1211500
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/162694", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45840/" ] }
162,708
I need to create a folder "2015" which re-routes to ./ but I don't know what the command is. Not sure if the term is called in this case an alias (or symbolic link). So I need to make the same "re-route" of the 2015 folder like the 2013 and 2014 folders got the "-> ./" ... www-data 2 Jun 19 2013 2013 -> ./... www-data 2 Jul 24 2013 2014 -> ./... www-data 4096 Oct 17 13:30 2015 For web sub-domains purpose...
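The 2013 and 2014 entries in that listing are symbolic links pointing to ./ , so the usual way to reproduce this for 2015 is ln -s (after moving the existing 2015 directory out of the way first, since a real directory of that name is already there):

    ln -s ./ 2015

ls -l will then show 2015 -> ./ just like the other two entries.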
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/162708", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88225/" ] }
162,742
By default, the Firefox (33.0) on my FreeBSD 11.0-CURRENT has the default application for opening PDF files set to Inkscape. Firefox does remember my previous choice, evince , in the “What should Firefox do with this file?” dialog, so until recently I was just confused where this configuration came from, but mostly ignored it because it did barely concern me. I have however recently started using Zotero as my literature database. Zotero runs as a firefox plugin, and ignores the choices made in my Firefox preferences to open PDF files with evince or ask me what to do with them, and just opens them using Inkscape. This made me try to follow up on this particular configuration item of Firefox, but I could not find where that default is set. I could not find the string Inkscape (or inkscape ) in any Firefox-related file in my home directory.
A link to a “similar question” ( xdg-open default applications behavior – not obviously related, but some experimentation showed that the behaviour is indeed equivalent to the one of xdg-open ) led me deeper down the rabbit hole. While Firefox does not rely on, or inherit rules from, xdg-open , it uses the MIME specification files just as xdg-open does. On a user basis, the MIME opening behaviour is configured by the specification file ~/.local/share/applications/mimeapps.list . For me, this file contains just a few reasonable protocols and HTML (and similar) files connected to userapp-Firefox-??????.desktop , but you could easily add a line like application/pdf=evince.desktop to solve that problem on a per-user basis. If the file does not exist yet, make sure to add a section header, such as [Default Applications]application/pdf=evince.desktop Deeper down, the mime types are defined in /usr/local/share/applications/mimeinfo.cache (this may be /usr/share/… if you are not on a FreeBSD system), which does list application/pdf=inkscape.desktop;evince.desktop; . Both evince.desktop and inkscape.desktop in that folder contain MimeType=[…]application/pdf;[…] . The mimeinfo.cache is automatically generated from the mime types listed in the .desktop files without any well-defined order, so you will have to either remove the PDF mime type from Inkscape and regenerate the cache using update-mime-database , or generate a mimeapps.list (either globally in /usr/local/share/applications/ , or for your user in ~/.local/share/applications/mimeapps.list ).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/162742", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88243/" ] }
162,749
This question is old, and I am still not clear about why. Original question in 2014: In a Gnome Terminal tab, I ran $ nohup chromium-browser & But when I close the terminal tab, chromium-browser also exits. Isn't nohup supposed to prevent that? Gilles said: nohup and disown both can be said to suppress SIGHUP, but in different ways. nohup makes the program ignore the signal initially (the program may change this). nohup also tries to arrange for the program not to have a controlling terminal, so that it won't be sent SIGHUP by the kernel when the terminal is closed. disown is purely internal to the shell; it causes the shell not to send SIGHUP when it terminates. So doesn't nohup make chromium-browser ignore SIGHUP? I am seeing this on other executables, too, such as and Emacs (GUI mode). But not on xeyes. This is happening on Ubuntu 12.04, 32-bit when the question was posted. Update in 2015, Now I am running Ubuntu 14.04, with google-chrome instead of chromium-browser installed. Same thing that happened to chromium-browser before also happens to google-chrome now. nohup google-chrome 2>/dev/null & doesn't save it from being closed when the terminal tab is closed. /usr/bin/google-chrome is a link to a bash script /opt/google/chrome/google-chrome . Why does nohup applied to the bash script not work? How can we make it work on bash scripts? What about Python scripts?
When you close a GNOME Terminal window, a SIGHUP is sent to the shell that it was running. The shell will typically send a SIGHUP to every process group that it knows it created - even ones started with nohup - and then exit. If the shell is bash , it will skip sending a SIGHUP to any process group that the user marked with disown . Running a command with nohup makes it ignore SIGHUP, but the process can change that. When the disposition of SIGHUP for a process is the default, then if it receives a SIGHUP, the process will be terminated. Linux provides some tools to examine a running process's signal settings. The chromium-browser shell script does an exec of the compiled app, so its process ID remains the same. So to see its signal settings, I ran nohup chromium-browser & and then looked at /proc/$!/status to see the signal disposition. SigBlk: 0000000000000000SigIgn: 0000000000001000SigCgt: 0000000180014003 Those are hex numbers. This shows that SIGHUP is not caught and is not ignored. Only SIGPIPE (the 13th bit in SigIgn) is ignored. I traced this to the following code : // Setup signal-handling state: resanitize most signals, ignore SIGPIPE.void SetupSignalHandlers() { // Sanitise our signal handling state. Signals that were ignored by our // parent will also be ignored by us. We also inherit our parent's sigmask. sigset_t empty_signal_set; CHECK(0 == sigemptyset(&empty_signal_set)); CHECK(0 == sigprocmask(SIG_SETMASK, &empty_signal_set, NULL)); struct sigaction sigact; memset(&sigact, 0, sizeof(sigact)); sigact.sa_handler = SIG_DFL; static const int signals_to_reset[] = {SIGHUP, SIGINT, SIGQUIT, SIGILL, SIGABRT, SIGFPE, SIGSEGV, SIGALRM, SIGTERM, SIGCHLD, SIGBUS, SIGTRAP}; // SIGPIPE is set below. for (unsigned i = 0; i < arraysize(signals_to_reset); i++) { CHECK(0 == sigaction(signals_to_reset[i], &sigact, NULL)); } // Always ignore SIGPIPE. We check the return value of write(). CHECK(signal(SIGPIPE, SIG_IGN) != SIG_ERR);} Despite the comment, signals ignored by the parent are not being ignored. A SIGHUP will kill chromium. The workaround is to do what @xx4h points out: use the disown command in your bash so that, if bash has to exit, it does not send SIGHUP to the chromium-browser process group. You can write a function to do this: mychromium () { /usr/bin/chromium-browser & disown $!; }
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/162749", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
162,769
Is xeyes purely for fun? What is the point of having it installed by default in many Linux distributions (in X)?
xeyes is not for fun, at least not only. The purpose of this program is to let you follow the mouse pointer which is sometimes hard to see. It is very useful on multi-headed computers, where monitors are separated by some distance, and if someone (say teacher at school) wants to present something on the screen, the others on their monitors can easily follow the mouse with xeyes .
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/162769", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
162,779
Short story I'm looking for the command to enter first found foo-something directory like: cd foo-* but without using wildcard (or other special shell characters). Long story As part of the remote drush build script, I'm trying to find the way of entering folder which folder name could change, but it has common prefix. Same example: drush -y dl ads or drush -y dl ads --dev downloads either ads-7.x-1.0-alpha1 or ads-7.x-1.x-dev ). To make the things more tricky, the command can't consist either wildcard or escaped semicolon , because drush is heavily escaping shell aliases. So ls * is escaped into ls '\''*'\''' and ending up with Command ls '*' failed. error. I've tried also using find , but I can't use -exec primary, because semicolon needs to be escaped , and drush is double escaping it into ( '\''\;'\'' ). Therefore I'm looking to enter foo-* folder without using wildcard (or any other special characters, parameter expansion, command substitution, arithmetic expansion, etc.) if possible. I believe the logic of shell escaping is here and it is intended to work the same way that escapeshellarg() does on Linux. What it does, it's escaping each parameter.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/162779", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21471/" ] }
162,783
I am developing a small daemon program which needs to run some instructions when a user logs onto the system (all kinds of logins included). In order to do so, I want my program to be woken up whenever this login event occurs. However, I don't want it to check periodically whether a new user arrived, which means it must not: Read log files such as /var/log/auth.log periodically. Besides the fact that I would have to actually parse the file, I would also probably do it far too often (since there are very few logins on my system). Check the output of another command such as ps , who or w and keep track of users internally. Using this method, the program could miss some logins, in case someone logs in and out before my program runs its checks on the output. Since I don't want my program to waste time, I thought about using I/O events, however... I don't quite see where to hook . I have tried watching over /var/run/utmp (using inotify ) but it doesn't seem to react correctly: my program receives a lot of events when terminals are opened/closed, but very few when someone actually logs in (if any at all). Additionally, these events are hardly recognisable, and change from a login attempt to another. For the record, here is a little set of what I was able to catch when running su user : When a terminal opens: IN_OPEN (file was opened), IN_CLOSE_NOWRITE (unwrittable file closed), sometimes IN_ACCESS (file was accessed, when using su -l ). When su is started (password prompt): a few events with no identifier ( event.mask = 0 ). After a successful login attempt (shell started as another user) : nothing. When closing the terminal: another set of unnamed events. Is there another way to hook a program onto " user logins "? Is there a file reflecting user logins on which I could use an inotify watch (just like I could use one on /proc to detect process creations) ? I had another look at /proc contents but nothing seems to be quite what I need. Side note : I thought about posting this on Stack Overflow since it is programming-related, but beyond implementation, I am more interested by the "visible" reactions a Linux system has when a user logs in (by "visible", I mean reactions we could observe/detect/watch out for, programmatically, without wasting time).
Does your system use Pluggable Authentication Modules (PAM)? Most modern Linux or BSD use PAM. PAM allows you to hook into logins. There are a variety of PAM modules available which might meet your needs, or you can write your own in C. There is even a pam-python * binding which allows you to hook in Python code. Given that you want the daemon to be running continuously, I would opt for a simple PAM module which logs to a file and signals the daemon. * The package is named libpam-python under Debian and Ubuntu.
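If writing a PAM module in C is more than is needed, one lightweight option is the stock pam_exec module, which runs an arbitrary program when a session opens or closes and exports variables such as PAM_USER and PAM_TYPE to it. A sketch (the script name is just a placeholder, and which file under /etc/pam.d/ to edit varies by distribution):

    session optional pam_exec.so /usr/local/bin/notify-login.sh

The script can then wake the daemon however is convenient, for example by writing a line to a FIFO that the daemon blocks on.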
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/162783", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41892/" ] }
162,809
I came across a if/then statement like this: if ! foo ; then echo "blah" exit 1fi What specifically does if ! mean? "If the result of foo is not true?" "If the exit code of foo is not 0"?
! inverts the meaning of the exit status of the command -- it's part of POSIX shell syntax, it's not part of if . From the POSIX spec : If the reserved word ! does not precede the pipeline, the exit status shall be the exit status of the last command specified in the pipeline. Otherwise, the exit status shall be the logical NOT of the exit status of the last command. That is, if the last command returns zero, the exit status shall be 1; if the last command returns greater than zero, the exit status shall be zero.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/162809", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1822/" ] }
162,839
I have a Bash script I was trying to make to help me run a rather complex command with small changes that it would ask me about through echo and read. I have found solutions to force it to run a terminal to execute the command, but I'm not interested in that. What I would like it to do is, if I space out and just hit Enter on it in Nautilus (making it run with Run Software), it'll just gently pop up a notification saying "Please run this from a terminal." I can get the popup to happen -- as in I know the command -- but I can't get the Bash script to tell if it's being run inside a terminal or not, it seems to always think so. Is it even possible?
From man bash under CONDITIONAL EXPRESSIONS : -t fd True if file descriptor fd is open and refers to a terminal. Assuming fd 1 is standard out, if [ -t 1 ]; then should work for you. The Advanced Shell Scripting Guide claims that -t used this way will fail over ssh , and that the test (using stdin, not stdout) should therefore be: if [[ -t 0 || -p /dev/stdin ]] -p tests if a file exists and is a named pipe. However , I'd note experientially this is not true for me: -p /dev/stdin fails for both normal terminals and ssh sessions whereas if [ -t 0 ] (or -t 1 ) works in both cases (see also Gilles comments below about issues in that section of the Advanced Shell Scripting Guide ). If the primary issue is a specialized context from which you wish to call the script to behave in a way appropriate to that context, you can sidestep all these technicalities and save yourself some fuss by using a wrapper and a custom variable:

    #!/bin/bash
    export SPECIAL_CONTEXT=1
    /path/to/real/script.sh

Call this live_script.sh or whatever and double click that instead. You could of course accomplish the same thing with command line arguments, but a wrapper would still be needed to make point and click in a GUI file browser work.
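To get the gentle popup the question asks for, the terminal test can be combined with a desktop notification tool; a sketch assuming notify-send (from libnotify) is installed, zenity --info would work similarly:

    #!/bin/bash
    if [ ! -t 1 ]; then
        notify-send "Please run this from a terminal."
        exit 1
    fi
    # the interactive part of the script goes here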
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/162839", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88324/" ] }
162,845
I use dd a lot. I live in constant fear of making a mistake one day, for example writing on sda (computer disk) instead of sdb (USB disk) and then erasing everything I have on my computer. I know dd is supposed to be a power user tool but still, it doesn't make sense to me that you can basically screw your whole computer by hitting the wrong key. Why isn't there a security measure that prevents dd from writing on the disk it gets the command from? Not sure how anyone would do this on purpose. Please note that I haven't tried this myself, I've only read about it, so I could be wrong about all that.
I know dd is supposed to be a power user tool but still, it doesn't make sense to me that you can basically screw your whole computer by hitting the wrong key. Consider the kinds of power tools used in civil construction and what you can screw up by doing one little thing wrong. Could those things be made more preventable? Probably, but the counter balance is to what extent making accidents more preventable makes the tool less useful and/or more awkward. Driving automobiles is a similar analogy with potentially much more dire consequences, and yet human beings manage to do this all the time (much too much, in fact). Of course it would be safer if they did it slower, but collectively we have decide what risks are worth taking. Similarly, the computer would be safer if dd did not exist, but since its usefulness is considered to outweigh its risks, it does. Why ins't there a security measure that prevent dd from writing on the disk it gets the command from ? In fact there is, since by default device files (such as /dev/sda1 ) need superuser privileges to write to. So unless you are working as root or via sudo , you actually cannot screw your entire computer with one button using dd . Which brings us to why there are all the caveats about running commands with superuser privileges. These warnings are very prevalent and I think it would be hard to end up operating a *nix system without having seen them, sort of like getting into a construction zone without noticing the HARD HAT AREA signs. If you don't have a reason to be in a construction zone, leave. If you do, take appropriate safety precautions. The world can be a dangerous place and some places more dangerous than others. Don't act without thinking. A degree of safety which ensures nothing bad can happen -- so you don't have to bother thinking -- implies you can't do much either. Sometimes that's desirable, sometimes it is not.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/162845", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54995/" ] }
162,891
I need to append a directory to PKG_CONFIG_PATH . Normally, I would use the standard export PKG_CONFIG_PATH=${PKG_CONFIG_PATH}:$(pyenv prefix)/lib/pkgconfig but PKG_CONFIG_PATH has not been previously set on my system. Therefore, the variable begins with a : character, which tells it to look in the current directory first. I do not want that. I settled on the following, export PKG_CONFIG_PATH=${PKG_CONFIG_PATH}${PKG_CONFIG_PATH:+:}$(pyenv prefix)/lib/pkgconfig but that just seems so ugly. Is there a better way? What is the appropriate way to conditionally append the colon if and only if the variable has already been set?
You are on the right track with the ${:+} expansion operator, you just need to modify it slightly: V=${V:+${V}:}new_V The first braces expand to $V and the colon iff V is set already otherwise to nothing - which is exactly what you need (and probably also one of the reasons for the existence of the operator). Thus in your case: export "PKG_CONFIG_PATH=${PKG_CONFIG_PATH:+${PKG_CONFIG_PATH}:}$(pyenv prefix)/lib/pkgconfig"
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/162891", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37941/" ] }
162,892
I run top on busybox and it shows all processes and their virtual memory size. How do I determine how much RAM is being used by each process?
ps -o pid,user,vsz,rss,comm,args The 4th column (rss) is the resident set size, the non-swapped physical memory used by a task, in kiloBytes.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/162892", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/86206/" ] }
162,898
The delete-by-pattern is the D command. But how can I specify a range, say from message number 1234 through 2345? Using D with something like 1234-2345 obviously fails because it treats it as a pattern, not a range of message numbers.
This was very helpful, but still wasn't quite obvious to a complete newbie like me. In gory detail... Type "D" mutt responds with "Delete messages matching:" Type a response following the pattern: "~m 1234-2345" (no quotes)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/162898", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7107/" ] }
162,899
I am trying to create a script that act as a ps command with it's own properties that I want to display. Say this is how processes would look like in ps command: sas 24431 1 0 Oct10 ? 00:51:08 /usr/lib/jvm/java-1.7.0-oracle-1.7.0.25.x86_64/jre/bin/java -Denv=DEV -Dapp.name=myApp -Xms512m -Xmx1g -Dlog.dir=/apps/java/logs I want to display as below: UID PID APPNAMEsas 24431 -Dapp.name=myAppsas 24432 -Dapp.name=myApp2sas 24433 -Dapp.name=myApp3 Note: the app.name property is a command argument that is extracted from the ps command This is my script: echo -e "PID\tUSERID\t\tAPPNAME"ps -u $USER -f |grep "java"|grep -v "grep"|while read LINE do #Get pid from the line PID=$(cut -d" " -f2 <<< $LINE); #Get parameter value called "-Dapp.name or -DprojectName" #from the ps command for the process APPNAME=$(ps -f $PID | awk 'BEGIN {RS=" "}; /-Dapp.name|-DprojectName/'); USERID=$(cut -d" " -f1 <<< $LINE); echo -e $PID"\t"$USERID"\t"$APPNAME;done; Right now it works the way I want it. But sometimes the alignment getting screwed. Also can this script be optimized into one line command? Any help would be appreciated.
For general purpose tabular alignment, you want the column utility. For example: ( printf 'PID\tUSER\tAPPNAME\n' printf '%s\t%s\t%s\n' "1" "john" "foo bar" printf '%s\t%s\t%s\n' "12345678" "someone_with_a_long_name" "pop tart") | column -t -s $'\t' Results in: PID USER APPNAME1 john foo bar12345678 someone_with_a_long_name pop tart
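Applied to the loop in the question, the idea is to print tab-separated fields and pipe the whole thing, header included, through column; a sketch:

    { printf 'PID\tUSERID\tAPPNAME\n'
      ps -u "$USER" -f | grep java | grep -v grep | while read -r LINE; do
          # extract PID, USERID and APPNAME as in the original script
          printf '%s\t%s\t%s\n' "$PID" "$USERID" "$APPNAME"
      done
    } | column -t -s $'\t'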
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/162899", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72369/" ] }
162,900
What is this folder: /run/user/1000 on my Fedora system and what does it do? ~ $ df -hFilesystem Size Used Avail Use% Mounted ontmpfs 1.2G 20K 1.2G 1% /run/user/1000 EDIT: 7 june 2019. My two answers don't agree on what directory or where the files stored in this place were: Patrick : Prior to systemd , these applications typically stored their files in /tmp . And again here: /tmp was the only location specified by the FHS which is local, and writable by all users. Braiam : The purposes of this directory were once served by /var/run . In general, programs may continue to use /var/run to fulfill the requirements set out for /run for the purposes of backwards compatibility. And again here: Programs which have migrated to use /run should cease their usage of /var/run , except as noted in the section on /var/run . So which one is it that is the father of /run/user/1000 , why is there no mention in either answer of what the other says about the directory used before /run/user .
/run/user/$uid is created by pam_systemd and used for storing files used by running processes for that user. These might be things such as your keyring daemon, pulseaudio, etc. Prior to systemd , these applications typically stored their files in /tmp . They couldn't use a location in /home/$user as home directories are often mounted over network filesystems, and these files should not be shared among hosts. /tmp was the only location specified by the FHS which is local, and writable by all users. However storing all these files in /tmp is problematic as /tmp is writable by everyone, and while you can change the ownership & mode on the files being created, it's more difficult to work with. So systemd came along and created /run/user/$uid . This directory is local to the system and only accessible by the target user. So applications looking to store their files locally no longer have to worry about access control. It also keeps things nice and organized. When a user logs out, and no active sessions remain, pam_systemd will wipe the /run/user/$uid directory out. With various files scattered around /tmp , you couldn't do this.
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/162900", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36440/" ] }
162,908
A non-root user X cannot message a user Y. This is despite both users having successfully run mesg y . I've tried following advice for similar problems on Ubuntu described in this question . No luck. A root user can message anybody. I have a rough feeling that appropriate configuration of /etc/login.defs or PAM configuration files would solve the problem, but don't know enough to troubleshoot further. Any suggestions? I am locally logged in as user picrin on tty1 and as user iva on tty2. User iva is also sshed into the box. EDIT #1 For the sake of completeness here's more info. This is returned by who : picrin tty1 2014-10-18 22:10iva pts/1 2014-10-19 10:09 (hostXXX-XXX-XX-X.rangeXXX-XXX.btcentralplus.com)iva tty2 2014-10-19 10:13 This is returned when user picrin executes write iva tty2 : write: iva has messages disabled on tty2 This is returned when user picrin executes write iva pts/1 : write: iva has messages disabled on pts/1 This is returned when user iva runs mesg : is y I'm running Fedora 20.
/run/user/$uid is created by pam_systemd and used for storing files used by running processes for that user. These might be things such as your keyring daemon, pulseaudio, etc. Prior to systemd , these applications typically stored their files in /tmp . They couldn't use a location in /home/$user as home directories are often mounted over network filesystems, and these files should not be shared among hosts. /tmp was the only location specified by the FHS which is local, and writable by all users. However storing all these files in /tmp is problematic as /tmp is writable by everyone, and while you can change the ownership & mode on the files being created, it's more difficult to work with. So systemd came along and created /run/user/$uid . This directory is local to the system and only accessible by the target user. So applications looking to store their files locally no longer have to worry about access control. It also keeps things nice and organized. When a user logs out, and no active sessions remain, pam_systemd will wipe the /run/user/$uid directory out. With various files scattered around /tmp , you couldn't do this.
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/162908", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88361/" ] }
162,914
I am about to install a patch for wireless drivers named Compat Wireless in order to solve a problem with my WiFi channel (it locks on the non-existent -1 channel) on my Ubuntu Linux v12.04 and Kali Linux v1.0.9 . But first I would like to know whether this patch is already installed (why install something I already have?). I have done some research, and I cannot find a way to tell whether my patch is already there, nor a generic method to list installed patches. I don't even know whether it is possible to obtain such info from a running Linux system. Any ideas, please?
Has a patch already been applied to my Linux kernel? If one is comfortable enough with Linux to be applying a patch, as this questioner appears to be, then checking if the patch is already in the default kernel is relatively simple: Just check the source code. Use the Source, Luke!¹ The following should work for any distribution derived from Debian GNU/Linux, which includes what the questioner asked for, Ubuntu: apt source linux That will download the Linux source and patch it to be exactly the same state as the distribution used when they compiled it. In the linux-x.y.z directory are the file(s) mentioned in the patch. Just look at the line numbers and make sure that the lines with minus signs aren't there and the ones with plus signs are. That's it. More detailed answer Patches are text files that look like this: Signed-off-by: Alexey Brodkin <[email protected]>--- drivers/usb/core/urb.c | 5 ----- 1 file changed, 5 deletions(-)--- a/drivers/usb/core/urb.c+++ b/drivers/usb/core/urb.c@@ -321,9 +321,6 @@ EXPORT_SYMBOL_GPL(usb_unanchor_urb); */ int usb_submit_urb(struct urb *urb, gfp_t mem_flags) {- static int pipetypes[4] = {- PIPE_CONTROL, PIPE_ISOCHRONOUS, PIPE_BULK, PIPE_INTERRUPT- }; int xfertype, max; struct usb_device *dev; struct usb_host_endpoint *ep;@@ -441,11 +438,6 @@ int usb_submit_urb(struct urb *urb, gfp_ * cause problems in HCDs if they get it wrong. */- /* Check that the pipe's type matches the endpoint's type */- if (usb_pipetype(urb->pipe) != pipetypes[xfertype])- dev_WARN(&dev->dev, "BOGUS urb xfer, pipe %x != type %x\n",- usb_pipetype(urb->pipe), pipetypes[xfertype]);- /* Check against a simple/standard policy */ allowed = (URB_NO_TRANSFER_DMA_MAP | URB_NO_INTERRUPT | URB_DIR_MASK | URB_FREE_BUFFER); Usually, one uses the patch program to apply patchfiles to the source code before compiling, but for this question, it might not be worth the bother. One can quickly just eyeball a patchfile and see the filename and the line numbers at the top and then open a text editor there (in this example, emacs +321 drivers/usb/core/urb.c ). A quick glance is all it takes to know. (Minus signs mark lines that should be deleted and plus signs, added). But, maybe it's a good idea to run patch On the other hand, if one plans on recompiling Linux, it's not a bad idea to start from the distribution's default kernel. In that case, there's no reason not to try running patch . The program is smart enough to recognize if the patch was already applied and ask the operator if maybe they want to reverse it. Example of running patch (previously unpatched) Here is an example of what it looks like on a patch that was not in the default kernel: $ patch -p1 < foo.patchpatching file drivers/usb/core/urb.cHunk #1 succeeded at 323 (offset 2 lines).Hunk #2 succeeded at 440 (offset 2 lines). If it succeeds, the distribution's default wasn't patched and needs to be recompiled. Fortunately, the Linux source code is now conveniently right there, with the patch applied. Example of running patch (previously patched) And here is what it would look like if the distribution's default kernel had already been patched: $ patch -p1 < foo.patchpatching file drivers/usb/core/urb.cReversed (or previously applied) patch detected! Assume -R? [n] Apply anyway? [n] Skipping patch.2 out of 2 hunks ignored -- saving rejects to file drivers/usb/core/urb.c.rej "Reversed or previously applied" means nothing more needs to be done as the default kernel was already patched. 
Footnote ¹ Sorry for the bad pun, but don't blame me for it; that was the encouragement I was taught long, long ago by UNIX gurus wise in the ways of the Source. They said that when the Source is with you, you become more powerful than any proprietary vendor could ever imagine.
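If you just want a yes/no answer without editing anything, patch can also do a trial run. This assumes you are inside the unpacked linux-x.y.z directory from apt source and that foo.patch is the patch file from the examples above:

$ patch -p1 --dry-run < foo.patch             # reports what would happen, changes nothing
$ patch -p1 --dry-run --reverse < foo.patch   # applies cleanly only if the patch is already in the source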
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/162914", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57439/" ] }
162,922
In GIMP, I can import a PDF, and use the GUI to flatten it (if it was made with many layers) by selecting Flatten Image in the Image dropdown menu. I can then export the PDF with a new filename. I would like to automate this. Is there some way to do it via the terminal?
I found these 2 methods via Google, in this thread titled: Re: Flattening PDF Files at the UNIX Command Line . Method #1 - using ImageMagick's convert: $ convert -density 300 orig.pdf flattened.pdf NOTE: The quality is reported to be so-so with this approach. Method #2 - using pdf2ps -> ps2pdf: $ pdf2ps orig.pdf - | ps2pdf - flattened.pdf NOTE: This method is reported to retain the image quality.
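If there are many PDFs to flatten, the second method can be wrapped in a small shell loop; the flattened- output prefix below is only an arbitrary naming choice for this sketch:

for f in *.pdf; do
    pdf2ps "$f" - | ps2pdf - "flattened-$f"   # flatten each PDF into a new file
done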
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/162922", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36103/" ] }
162,960
I would like to download some files from my server to my laptop, and I want this communication to be as stealthy and secure as it can be. So far I came up with using a VPN; that way I redirect the whole internet traffic of my laptop via my server. Additionally, I tried sending a file using FTP while observing Wireshark at the same time. The communication seems to be encrypted; however, I would also like to encrypt the file itself (as a second layer of security or something like that). My server is a RasPi running Raspbian. My laptop is a MacBook Air. I want to first encrypt a file on my Ras Pi and then download it. How can I do that?
You can use openssl to encrypt and decrypt using key-based symmetric ciphers. For example: openssl enc -in foo.bar -aes-256-cbc -pass stdin > foo.bar.enc This encrypts foo.bar to foo.bar.enc (you can use the -out switch to specify the output file, instead of redirecting stdout as above) using a 256-bit AES cipher in CBC mode. There are various other ciphers available (see man enc ). The command will then wait for you to enter a password and use that to generate an appropriate key. You can see the key with -p or use your own in place of a password with -K (actually it is slightly more complicated than that since an initialization vector or source is needed, see man enc again). If you use a password, you can use the same password to decrypt, you do not need to look at or keep the generated key. To decrypt this: openssl enc -in foo.bar.enc -d -aes-256-cbc -pass stdin > foo.bar Notice the -d . See also man openssl .
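Putting this together for the Raspberry Pi scenario in the question, one possible workflow looks like the sketch below; the hostname raspi, the user pi and the filename backup.tar are only placeholders:

# on the Raspberry Pi
openssl enc -aes-256-cbc -in backup.tar -out backup.tar.enc    # prompts for a passphrase

# on the laptop
scp pi@raspi:backup.tar.enc .
openssl enc -d -aes-256-cbc -in backup.tar.enc -out backup.tar # enter the same passphrase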
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/162960", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72520/" ] }
162,973
I've configured dnsmasq as a caching-only DNS server on a Debian server, and it's working well (I'm seeing improved DNS response times via dig). However, I'd like to understand what dnsmasq is caching at any one time, so that I can start to think about the efficiency (i.e. hit rate) that I'm achieving. I've had a look around the man pages, and web, and can't find how I see what dnsmasq is caching at any point (unlike you can do for the leases for example, which are kept in a dnsmasq.lease file). Is the dnsmasq DNS cache held in memory only? Or do I have to do some log file munging?
I do not have access to dnsmasq but according to this thread titled: dnsmasq is it caching? you can send the signal USR1 to the dnsmasq process, causing it to dump statistics to the system log. $ sudo pkill -USR1 dnsmasq Then consult the system logs: $ sudo tail /var/log/syslogJan 21 13:37:57 dnsmasq[29469]: time 1232566677Jan 21 13:37:57 dnsmasq[29469]: cache size 150, 0/475 cache insertions re-used unexpired cache entries.Jan 21 13:37:57 dnsmasq[29469]: queries forwarded 392, queries answered locally 16Jan 21 13:37:57 dnsmasq[29469]: server 208.67.222.222#53: queries sent 206, retried or failed 12Jan 21 13:37:57 dnsmasq[29469]: server 208.67.220.220#53: queries sent 210, retried or failed 6 NOTE: I believe that dnsmasq retains its cache in RAM. So if you want to dump the cache you'll need to enable the -q switch when dnsmasq is invoked. This is mentioned in the dnsmasq man page: -d, --no-daemon Debug mode: don't fork to the background, don't write a pid file, don't change user id, generate a complete cache dump on receipt on SIGUSR1, log to stderr as well as syslog, don't fork new processes to handle TCP queries. Note that this option is for use in debugging only, to stop dnsmasq daemonising in production, use -k. -q, --log-queries Log the results of DNS queries handled by dnsmasq. Enable a full cache dump on receipt of SIGUSR1.
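If you want a rough hit-rate number from those statistics, you can pull the last such line out of the log. The field positions below are taken from the sample output above, the log path may differ on your system (journald setups need journalctl instead), and "answered locally" also counts answers from /etc/hosts and DHCP leases, not only cache hits, so treat this as an approximation:

$ sudo pkill -USR1 dnsmasq
$ awk '/dnsmasq.*queries forwarded/ {fwd=$(NF-4)+0; loc=$NF+0}
       END {printf "forwarded %d, answered locally %d, local ratio %.1f%%\n", fwd, loc, 100*loc/(fwd+loc)}' /var/log/syslog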
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/162973", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88401/" ] }
162,979
I'm having an annoying problem. When I'm logged in to a specific host via SSH, the message X11 connection rejected because of wrong authentication. appears three times, seemingly at random, about once a minute. I have no idea where it comes from. Actually, there is no problem at all with X11 forwarding; it works like a charm. But this message keeps appearing and it's driving me crazy. Does anyone have an idea how to get rid of it? I'm facing the problem no matter where I'm coming from; it happens from my GNOME desktop and also from a Windows system using PuTTY, MobaXterm, Cygwin, whatever. After twiddling some more I found the cause to be a monitoring agent (check_mk). This checks some runtime parameters of running tasks, and the message appeared every time this agent was triggered from the monitoring system, exactly when the PostgreSQL status is checked. It seems this process tries to open an X11 connection but fails. The message then spills over into my terminal session because it tried to use my forwarded X11 session. Is there a way to disable this message at all?
Make sure you are not running out of disk space Run df and make sure you have sufficient disk space; if you are low on disk space, remove unnecessary files from your system: $ df -h If there are quotas imposed on the file systems, check that you did not exceed your quota: $ quota -s Make sure ~/.Xauthority is owned by you Run the following command to check ownership: $ ls -l ~/.Xauthority Run chown and chmod to fix permission problems [replace user:group with your actual username and group name]: $ chown user:group ~/.Xauthority $ chmod 0600 ~/.Xauthority Make sure X11 forwarding is enabled in sshd Make sure the following line exists in the sshd_config file: $ grep X11Forwarding /etc/ssh/sshd_config Sample output: X11Forwarding yes If X11 is disabled, add the following line to sshd_config and restart the SSH server: X11Forwarding yes Make sure X11 client forwarding is enabled Make sure your local ssh_config has the following lines: Host * ForwardX11 yes Finally, log in to the remote server and run X11 as follows from your Mac OS X or Linux desktop system: ssh -X [email protected] Credit for the information belongs here: http://www.cyberciti.biz/faq/x11-connection-rejected-because-of-wrong-authentication/ Hope that helps.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/162979", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/84460/" ] }
162,989
I'm using Kali Linux on VMware in Windows 7. I've been trying to crack my WPA password and noticed that when I leave the system running (in the process of cracking the password) and leave the laptop on and go away for about 10-15 minutes, Kali Linux goes to sleep and I am not sure whether the cracking process with reaver is still running or not. When I click onto the page, a box comes up prompting me to type in my username and password. When I type that in it logs me back on, but the screens that were left open cracking the password are no longer there and everything starts freezing up a lot. The mouse is freezing and if I try to click on anything there is a massive delay before anything happens, or nothing happens at all. Also, there was no option to keep the screen from going into an inactive state indefinitely (lock and brightness - maximum time 1 hour). I've since had to switch back to BackTrack to do my cracking; it has been running perfectly and does not go to sleep when left for long periods. Now what I'd like to know is how I can prevent Kali Linux from going to sleep and closing my work that's in progress? Any help on this issue would be appreciated.
With the gui, you do this by changing three settings.To access the settings, click any top right icon (a panel opens), then click the "settings" icon at the bottom left of the opened panel. Once the "All Settings" appears: Power > Power Saving > Blank screen: never Power > Suspend & Power Button > Automatic suspend: off Privacy > Screen Lock: off
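For an unattended box it can be handy to make the same changes from a terminal. The gsettings keys below are the stock GNOME ones and are untested on Kali, so treat this as a sketch and verify the key names with gsettings list-recursively first:

gsettings set org.gnome.desktop.session idle-delay 0                                # never blank the screen
gsettings set org.gnome.desktop.screensaver lock-enabled false                      # no lock screen
gsettings set org.gnome.settings-daemon.plugins.power sleep-inactive-ac-type 'nothing'  # no automatic suspend on AC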
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/162989", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/85343/" ] }
162,996
I have a MATLAB R2014a UNIX ISO file , but I'm using Windows 8.1. How can I install this version of MATLAB on Windows instead of UNIX?
With the gui, you do this by changing three settings.To access the settings, click any top right icon (a panel opens), then click the "settings" icon at the bottom left of the opened panel. Once the "All Settings" appears: Power > Power Saving > Blank screen: never Power > Suspend & Power Button > Automatic suspend: off Privacy > Screen Lock: off
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/162996", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87190/" ] }
163,077
I just installed an RPM using the following command: yum localinstall ./FoxitReader-1.1-0.fc9.i386.rpm Now, this did the trick and I could launch the app using: FoxitReader & What if FoxitReader had failed to launch the app because the name was something else? How could I find out the name of the launcher file that just got installed?
I usually list out the contents of the RPM and filter it using /bin/ . The files in that directory are executable. $ rpm -ql ImageMagick | grep /bin//usr/bin/animate/usr/bin/compare/usr/bin/composite/usr/bin/conjure/usr/bin/convert/usr/bin/display/usr/bin/identify/usr/bin/import/usr/bin/mogrify/usr/bin/montage/usr/bin/stream
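Two related queries that may help here: rpm can list the payload of a .rpm file before it is installed, and it can tell you which installed package owns a given command (the Foxit filename below is simply the one from the question):

$ rpm -qlp ./FoxitReader-1.1-0.fc9.i386.rpm | grep /bin/   # executables inside a not-yet-installed package
$ rpm -qf "$(command -v FoxitReader)"                      # which installed package owns this command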
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/163077", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87762/" ] }
163,120
In my shell script I cannot invoke the ant , mv or cp commands, but the same commands work when run in a terminal. Below is my script: sample.sh file #! /bin/sh cp filename.so filename_org.so android update project -p . ant clean ant release PATH is set in the .bashrc file: export PATH=$PATH:/usr/bin/ cp , mv and ant work only in a terminal, not via the script.
As your script is a shell script ( /bin/sh ), then your PATH entries in .bashrc will not be read as that is for the bash ( /bin/bash ) interactive shell. To make your PATH entries available to /bin/sh scripts run by a specific user, add the PATH entry to the .profile file in that users home directory. Additionally you could add the full path for each of your commands within the script: /bin/cp filename.so filename_org.so Or set the PATH variable including all the required $PATHS at the beginning of your script. PATH=$PATH:/bin:/usr/bin:xxxexport PATH
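Putting both suggestions together, the script from the question might end up looking like the sketch below; the /opt/android-sdk/tools path is only a placeholder for wherever the android and ant tools actually live on your system:

#!/bin/sh
# set PATH explicitly so the script does not depend on the caller's environment
PATH=/bin:/usr/bin:/opt/android-sdk/tools
export PATH

cp filename.so filename_org.so
android update project -p .
ant clean
ant release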
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/163120", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88502/" ] }
163,124
Running a server machine with CentOS 7, I've noticed that the avahi service is running by default. I am kind of wondering what the purpose of it is. One thing it seems to do (in my environment) is randomly disabling IPv6 connectivity, which looks like this in the logs: Oct 20 12:23:29 example.org avahi-daemon[779]: Withdrawing address record for fd00::1:2:3:4 on eno1Oct 20 12:23:30 example.org Withdrawing address record for 2001:1:2:3:4:5:6:7Oct 20 12:23:30 example.org Registering new address record for fe80::1:2:3:4 on eno1.*. (the suffixes 1:2:3... are made up) And indeed, after that the public 2001:1:2:3:4:5:6:7 IPv6 address is not accessible anymore. Because of that I've disabled the avahi service via: # systemctl disable avahi-daemon.socket avahi-daemon.service# systemctl mask avahi-daemon.socket avahi-daemon.service# systemctl stop avahi-daemon.socket avahi-daemon.service So far I haven't noticed any limitations. Thus, my question about the use-case(s) of avahi on a server system.
Avahi is the opensource implementation of Bonjour/Zeroconf. excerpt - http://avahi.org/ Avahi is a system which facilitates service discovery on a local network via the mDNS/DNS-SD protocol suite. This enables you to plug your laptop or computer into a network and instantly be able to view other people who you can chat with, find printers to print to or find files being shared. Compatible technology is found in Apple MacOS X (branded Bonjour and sometimes Zeroconf). A more detailed description is here along with the Wikipedia article . The ArchLinux article is more useful, specifying the types of services that can benefit from Avahi. In the past I'd generally disable it on servers, since every server I've managed in the past was explicitly told about the various resources that it needed to access. The two big benefits of Avahi are name resolution & finding printers, but on a server, in a managed environment, it's of little value.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/163124", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1131/" ] }
163,145
How can I get the command arguments or the whole command line from a running process using its process name? For example this process: # psPID USER TIME COMMAND 1452 root 0:00 /sbin/udhcpc -b -T 1 -A 12 -i eth0 -p /var/run/udhcpc.eth0.pid And what I want is /sbin/udhcpc -b -T 1 -A 12 -i eth0 -p /var/run/udhcpc.eth0.pid or the arguments. I know the process name and want its arguments. I'm using Busybox on SliTaz.
You could use the -o switch to specify your output format: $ ps -eo args From the man page : Command with all its arguments as a string. Modifications to the arguments may be shown. [...] You may also use the -p switch to select a specific PID: $ ps -p [PID] -o args pidof may also be used to switch from process name to PID, hence allowing the use of -p with a name: $ ps -p $(pidof dhcpcd) -o args Of course, you may also use grep for this (in which case, you must add the -e switch): $ ps -eo args | grep dhcpcd | head -n -1 GNU ps will also allow you to remove the headers (of course, this is unnecessary when using grep ): $ ps -p $(pidof dhcpcd) -o args --no-headers On other systems, you may pipe to AWK or sed: $ ps -p $(pidof dhcpcd) -o args | awk 'NR > 1'$ ps -p $(pidof dhcpcd) -o args | sed 1d Edit: if you want to catch this line into a variable, just use $(...) as usual: $ CMDLINE=$(ps -p $(pidof dhcpcd) -o args --no-headers) or, with grep : $ CMDLINE=$(ps -eo args | grep dhcpcd | head -n -1)
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/163145", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81319/" ] }
163,148
I'd like to know whether it is possible to configure postfix to redirect to many email addresses (including the original recipient) instead of only one. Here is my scenario: When an e-mail is: Sent from: [email protected] Addressed to: [email protected] Result: redirect the e-mail to [email protected] and deliver it to the original recipient The question is partly answered here: https://serverfault.com/questions/284702/redirect-specific-e-mail-address-sent-to-a-user-to-another-user
You could use the -o switch to specify your output format: $ ps -eo args From the man page : Command with all its arguments as a string. Modifications to the arguments may be shown. [...] You may also use the -p switch to select a specific PID: $ ps -p [PID] -o args pidof may also be used to switch from process name to PID, hence allowing the use of -p with a name: $ ps -p $(pidof dhcpcd) -o args Of course, you may also use grep for this (in which case, you must add the -e switch): $ ps -eo args | grep dhcpcd | head -n -1 GNU ps will also allow you to remove the headers (of course, this is unnecessary when using grep ): $ ps -p $(pidof dhcpcd) -o args --no-headers On other systems, you may pipe to AWK or sed: $ ps -p $(pidof dhcpcd) -o args | awk 'NR > 1'$ ps -p $(pidof dhcpcd) -o args | sed 1d Edit: if you want to catch this line into a variable, just use $(...) as usual: $ CMDLINE=$(ps -p $(pidof dhcpcd) -o args --no-headers) or, with grep : $ CMDLINE=$(ps -eo args | grep dhcpcd | head -n -1)
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/163148", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20867/" ] }
163,241
I need to generate MIPS specific code on my machine when I run my C program. When I simply run, gcc -O2 -S -c hello.c On my system, I get the hello.s which seems to generate some assembly code but it doesn't seem to be MIPS specific code. The contents of hello.s file is as below. .file "hello.c" .section .rodata.str1.1,"aMS",@progbits,1.LC0: .string "Hello world" .text .p2align 4,,15.globl main .type main, @functionmain:.LFB11: .cfi_startproc movl $.LC0, %edi xorl %eax, %eax jmp printf .cfi_endproc.LFE11: .size main, .-main .ident "GCC: (GNU) 4.4.7 20120313 (Red Hat 4.4.7-4)" .section .note.GNU-stack,"",@progbits How can I generate the MIPS specific code on my machine? My machine details are as below. archx86_64
Understanding the Basics From the wiki entry of MIPS architecture, it is described as, MIPS (originally an acronym for Microprocessor without Interlocked Pipeline Stages) is a reduced instruction set computer (RISC) instruction set (ISA) developed by MIPS Technologies (formerly MIPS Computer Systems, Inc.). From the wiki entry of the x86-64, it is described as, x86-64 (also known as x64, x86_64 and AMD64) is the 64-bit version of the x86 instruction set. So as per the arch output in the question, it is evident that I have a x86_64 machine and I try to produce the MIPS architecture specific code after running gcc compiler. This is similar to trying and running a diesel car on a petrol engine . No matter how hard we try, without tweaking the gas engine, we could not run the diesel car on a petrol engine. To describe it in a technical manner , gcc can produce assembly code for a large number of architectures, include MIPS . But what architecture a given gcc instance targets is decided when gcc itself is compiled. The precompiled binary you will find in an Ubuntu system knows about x86 (possibly both 32-bit and 64-bit modes) but not MIPS . How to compile a C program to MIPS assembly code Again quoting from the same answer , compiling gcc with a target architecture distinct from the architecture on which gcc itself will be running is known as preparing a cross-compilation toolchain. Or in layman's terms, this cross compilation toolchain is similar to tweaking the petrol engine to run the diesel car . However, setting up a cross-compilation toolchain is quite a bit of work, so rather than describe how to set that up, I will describe how to install a native MIPS compiler into a MIPS virtual machine. This involves the additional steps of setting up an emulator for the VM and installing an OS into that environment, but will allow you to use a pre-built native compiler rather than compiling a cross compiler. We will be first installing qemu to make our system run some virtualized operating systems. Again there are several approaches like installing some cross compiled tool chain as discussed here and using a buildroot as suggested in the answer that I earlier linked. Download the tar ball of qemu from here. After downloading the tar ball, run the following commands. bzip2 -d qe* tar -xvf qe* ./configure make make install Now, after installing qemu on the machine, I tried several methodsof netboot for the debian OS as suggested over here and here . Butunfortunately I was not able to perform the debian OS installationusing the netboot because the correct mirrors were not available. I got an image for debian which targets MIPS architecture from here and I downloaded the kernel and qemu image and from the above link and performed the below steps. I started the qemu as below. qemu-system-mips -M malta -kernel vmlinux-2.6.32-5-4kc-malta -hda debian_squeeze_mips_standard.qcow2 -append "root=/dev/sda1 console=tty0" After the debian system came up, I installed the gcc compiler asbelow. apt-get update && apt-get upgradeapt-get install build-essential Now, I have a perfectly working native gcc compiler inside the MIPS debian virtual machine on qemu, which compiles my C program to MIPS specific assembly code. Testing Inside my debian machine, I just put in a sample C hello world program and saved it as hello.c as below. 
#include<stdio.h>int main(){ printf("Hello World");} To generate MIPS architecture code for my hello.c program, I ran the C program using the gcc compiler as, gcc -O2 -S -c hello.c The above command generated a hello.s file which generated my MIPS architecture code. .file 1 "hello.c" .section .mdebug.abi32 .previous .gnu_attribute 4, 1 .abicalls .section .rodata.str1.4,"aMS",@progbits,1 .align 2$LC0: .ascii "Hello World\000" .text .align 2 .globl main .set nomips16 .ent main .type main, @functionmain: .frame $sp,0,$31 # vars= 0, regs= 0/0, args= 0, gp= 0 .mask 0x00000000,0 .fmask 0x00000000,0 .set noreorder .set nomacro lui $28,%hi(__gnu_local_gp) addiu $28,$28,%lo(__gnu_local_gp) lui $4,%hi($LC0) lw $25,%call16(printf)($28) nop jr $25 addiu $4,$4,%lo($LC0) .set macro .set reorder .end main .size main, .-main .ident "GCC: (Debian 4.4.5-8) 4.4.5" But how will I know if the above generated code is MIPS assembly code? The arch command's output will tell the machine's architecture. In my debian machine, it produces the output as mips and I also do not have any binutils or cross-compiler tool chains installed in the machine. So, the generated assembly code is MIPS specific.
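As an aside, current Debian and Ubuntu releases also ship prebuilt cross-compiler packages, which avoids both the VM and building a toolchain by hand. Whether the package below is available for your particular release needs checking, so treat this as a possible shortcut rather than part of the method above:

$ sudo apt-get install gcc-mips-linux-gnu      # package name on recent Debian/Ubuntu; may be absent on older releases
$ mips-linux-gnu-gcc -O2 -S -o hello.s hello.c # cross-compile straight from the x86_64 host
$ head -n 5 hello.s                            # should show MIPS-specific directives like .abicalls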
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/163241", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47538/" ] }
163,271
So I was told to do last > lastloggedin which creates a file that shows the class's last logins since the last system reboot, and now I am asked to write an awk script named myawk that counts/determines how many lines of lastloggedin contain the string CFS264 . So far I've done grep -c CFS264 lastloggedin
To get you started you can use awk to search for lines in a file that contain a string like so: $ awk '/CFS264/ { .... }' lastloggedin The bits in the { .... } will be the commands required to tally up the number of lines with that string. To confirm that the above is working you could use a print $0 in there to simply print those lines that contain the search string. $ awk '/CFS264/ { print $0 }' lastloggedin As to the counting, if you search for "awk counter" you'll stumble upon this SO Q&A titled: using awk to count no of records . The method shown there would suffice for what you describe: $ awk '/CFS264/ {count++} END{print count}' lastloggedin Example $ last > lastloggedin$ awk '/slm/ {count++} END {print count}' lastloggedin 758$ grep slm lastloggedin | wc -l758$ grep -c slm lastloggedin758 NOTE: You don't say which field CFS264 pertains to in the last output. Assuming it's a username then you could further restrict the awk command to search only that field like so: $ awk '$1=="CFS264" { print $0 }' lastloggedin
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/163271", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/86930/" ] }
163,346
I'm using debian live-build to work on a bootable system. By the end of the process i get the typical files used to boot a live system: a squashfs file, some GRUB modules and config files, and an initrd.img file. I can boot just fine using those files, passing the initrd to the kernel via initrd=/path/to/my/initrd.img on the bootloader command line. But when I try to examine the contents of my initrd image, like so: $file initrd.imginitrd.img: ASCII cpio archive (SVR4 with no CRC)$mkdir initTree && cd initTree$cpio -idv < ../initrd.img the file tree i get looks like this: $tree --charset=ASCII.`-- kernel `-- x86 `-- microcode `-- GenuineIntel.bin Where is the actual filesystem tree, with the typical /bin , /etc, /sbin ... containing the actual files used during boot?
The cpio block skip method given doesn't work reliably. That's because the initrd images I was getting myself didn't have both archives concatenated on a 512 byte boundary. Instead, do this: apt-get install binwalklegolas [mc]# binwalk initrd.img DECIMAL HEXADECIMAL DESCRIPTION--------------------------------------------------------------------------------0 0x0 ASCII cpio archive (SVR4 with no CRC), file name: "kernel", file name length: "0x00000007", file size: "0x00000000"120 0x78 ASCII cpio archive (SVR4 with no CRC), file name: "kernel/x86", file name length: "0x0000000B", file size: "0x00000000"244 0xF4 ASCII cpio archive (SVR4 with no CRC), file name: "kernel/x86/microcode", file name length: "0x00000015", file size: "0x00000000"376 0x178 ASCII cpio archive (SVR4 with no CRC), file name: "kernel/x86/microcode/GenuineIntel.bin", file name length: "0x00000026", file size: "0x00005000"21004 0x520C ASCII cpio archive (SVR4 with no CRC), file name: "TRAILER!!!", file name length: "0x0000000B", file size: "0x00000000"21136 0x5290 gzip compressed data, from Unix, last modified: Sat Feb 28 09:46:24 2015 Use the last number (21136) which is not on a 512 byte boundary for me: legolas [mc]# dd if=initrd.img bs=21136 skip=1 | gunzip | cpio -tdv | headdrwxr-xr-x 1 root root 0 Feb 28 09:46 .drwxr-xr-x 1 root root 0 Feb 28 09:46 bin-rwxr-xr-x 1 root root 554424 Dec 17 2011 bin/busyboxlrwxrwxrwx 1 root root 7 Feb 28 09:46 bin/sh -> busybox-rwxr-xr-x 1 root root 111288 Sep 23 2011 bin/loadkeys-rwxr-xr-x 1 root root 2800 Aug 19 2013 bin/cat-rwxr-xr-x 1 root root 856 Aug 19 2013 bin/chroot-rwxr-xr-x 1 root root 5224 Aug 19 2013 bin/cpio-rwxr-xr-x 1 root root 3936 Aug 19 2013 bin/dd-rwxr-xr-x 1 root root 984 Aug 19 2013 bin/dmesg
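On reasonably recent Debian and Ubuntu systems there is also a helper shipped with initramfs-tools that understands these multi-segment images, which avoids the manual offset arithmetic; availability depends on your release, so this is offered only as a possible shortcut:

$ mkdir extracted
$ unmkinitramfs initrd.img extracted   # unpacks every embedded cpio archive
$ ls extracted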
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/163346", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88640/" ] }
163,351
What is wrong with this for loop? I am trying to find which process has the maximum number of file descriptors. The first command in the for loop ps aux | awk '{print $2}' prints out only the process IDs. I know the first error lsof: illegal process ID: PID is there because the 1st line of the output is PID , but shouldn't the loop work fine for the rest of the lines? [root@serv5 ~]# for i in `ps aux | awk '{print $2}'` ; do `lsof -p $i | wc -l` ; donelsof: illegal process ID: PIDlsof 4.82 latest revision: ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/ latest FAQ: ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/FAQ latest man page: ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/lsof_man usage: [-?abhlnNoOPRtUvVX] [+|-c c] [+|-d s] [+D D] [+|-f[gG]] [+|-e s] [-F [f]] [-g [s]] [-i [i]] [+|-L [l]] [+m [m]] [+|-M] [-o [o]] [-p s][+|-r [t]] [-s [p:s]] [-S [t]] [-T [t]] [-u s] [+|-w] [-x [fl]] [--] [names]Use the ``-h'' option to get more help information.-bash: 0: command not found-bash: 22: command not found-bash: 4: command not found-bash: 4: command not found-bash: 4: command not found-bash: 4: command not found^C[root@serv5 ~]# Why is it executing the output of wc -l instead of going back to the loop? Or is there another way I can find the process with maximum file descriptors?
The issue is the backticks in your do ... done section. When writing shell script, you do not need to encapsulate blocks ( if; then ... fi , while; do ... done , etc) in backticks. Doing so results in the shell evaluating the contents of the backticks, and then executing that content. So the backticks are returning a number (the number of open files), and then trying to run that number, resulting in a command not found . Thus you want: for i in `ps aux | awk '{print $2}'` ; do lsof -p $i | wc -l ; done
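To actually answer the "which process has the most file descriptors" part, you can sort the per-process counts. This is just one way to do it; note that lsof also lists memory-mapped files and the like, so counting entries in /proc/<pid>/fd is a stricter alternative:

for pid in $(ps -eo pid=); do
    printf '%7d %s\n' "$(lsof -p "$pid" 2>/dev/null | wc -l)" "$pid"   # count, then PID
done | sort -rn | head -5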
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/163351", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68296/" ] }
163,352
I am reading an article about crontab. There is something about disabling the automatic sending of emails. Disable Email By default a cron job sends an email to the user account executing the cron job. If this is not needed, put the following command at the end of the cron job line. >/dev/null 2>&1 What is the detailed meaning of the 2 , > , & and 1 ? Why does putting this at the end of a crontab entry turn off the email sending?
> is for redirect /dev/null is a black hole where any data sent, will be discarded 2 is the file descriptor for Standard Error > is for redirect & is the symbol for file descriptor (without it, the following 1 would be considered a filename) 1 is the file descriptor for Standard Out Therefore >/dev/null 2>&1 redirects the output of your program to /dev/null . Include both the Standard Error and Standard Out . Much more information is available at The Linux Documentation Project's I/O Redirection page. cron will only email you if there is some output from you job. With everything redirected to null , there is no output and hence cron will not email you.
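For example, a crontab entry that runs a (hypothetical) backup script every night at 3:00 and never emails its output would look like this:

0 3 * * * /home/user/backup.sh >/dev/null 2>&1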
{ "score": 9, "source": [ "https://unix.stackexchange.com/questions/163352", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45317/" ] }
163,370
Or is shutdown -h now the fastest it can get? I am looking for some syscall or similar that will allow me to skip a lot of the work done prior to shutdown (in particular caring about the running processes). At best I would like a kernel-related solution that is ignorant of the init middleware (like systemd or upstart ). The risks involved in, e.g., directly killing all services like cups/apache/pulseaudio etc., I would not care about. Remark: the solution should be software-wise; pressing buttons on the device is not what I am looking for.
It doesn't get much faster than using the System Request (SysRq) functionality and then triggering an immediate reboot . This is a key combination understood by the kernel. Enable SysRq: echo 1 > /proc/sys/kernel/sysrq Now, send it into reboot. echo b > /proc/sysrq-trigger b - Immediately reboot the system, without unmounting or syncing filesystems. Note: Although this is a reboot it will behave like the power has been cut off, which is not recommended. If you want to sync and umount the filesystems before hand then use: echo s > /proc/sysrq-triggerecho u > /proc/sysrq-trigger or if you just want to power off the system then: echo o > /proc/sysrq-trigger Magic key combinations There are also key combinations to use that are interpreted by the kernel: Alt + SysRq / Print Screen + Command Key Command Keys: R - Take control of keyboard back from X. E - Send SIGTERM to all processes, allowing them to terminate gracefully. I - Send SIGKILL to all processes, forcing them to terminate immediately. S - Flush data to disk. U - Remount all filesystems read-only. B - Reboot. Quoting from the Magic SysRq Key Wiki : A common use of the magic SysRq key is to perform a safe reboot of a Linux computer which has otherwise locked up. Hold down the Alt and SysRq (Print Screen) keys. While holding those down, type the following keys in order, several seconds apart: REISUB . Computer should reboot. A way to remember these are: " R eboot E ven I f S ystem U tterly B roken" or simply the word " BUSIER " read backwards. References Magic SysRq Key Wiki Fedora SysRq
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/163370", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24394/" ] }
163,371
How can I modify the contents of my bash_history file? What values or variables control how long the history lasts? Are there any other things I can change to provide finer control of my BASH history??
There are two variables that control the history size: HISTFILESIZE The maximum number of lines contained in the history file. When this variable is assigned a value, the history file is truncated, if necessary, to contain no more than that number of lines by removing the oldest entries. The history file is also truncated to this size after writing it when a shell exits. If the value is 0, the history file is truncated to zero size. Non-numeric values and numeric values less than zero inhibit truncation. The shell sets the default value to the value of HISTSIZE after reading any startup files. and HISTSIZE The number of commands to remember in the command history (see HISTORY below). If the value is 0, commands are not saved in the history list. Numeric values less than zero result in every command being saved on the history list (there is no limit). The shell sets the default value to 500 after reading any startup files. These two variables allow you to control the behavior of your history. Basically, HISTSIZE is the number of commands saved during your current session and HISTFILESIZE is the number of commands that will be remembered across sessions. So, for example: $ echo $HISTSIZE 10$ echo $HISTFILESIZE 5$ history | wc 10 29 173 In the example above, because HISTSIZE is set to 10, history returns a list of 10 commands. However, if you log out and then log back in, history will only return 5 commands because HISTFILESIZE is set to 5. This is because, once you exit your session, your HISTFILESIZE lines of your history are saved to your history file ( ~/.bash_history by default but controlled by HISTFILE ). In other words, commands are added to HISTFILE until that reaches $HISTFILESIZE lines at which point, each subsequent line added means the first command of the file will be removed. You can set the values of these variables in your ~/.profile (or ~/.bash_profile if that file exists). Do not set them in your ~/.bashrc first because they have no business being set there and secondly because that would cause you to have different behavior in login vs non-login shells which can lead to other problems . Other useful variables that allow you to fine tune the behavior of your history are: HISTIGNORE : This allows you to ignore certain common commands that are rarely of interest. For example, you could set: export HISTIGNORE="pwd:df:du" That would cause any command starting with pwd , df , or du to be ignored and not saved in your history. HISTCONTROL : This one lets you choose how the history works. Personally, I set it to HISTCONTROL=ignoredups which causes it to save duplicated commands only once. Other options are ignorespace to ignore commands starting with whitespace, and erasedups which causes all previous lines matching the current line to be removed from the history list before that line is saved. ignoreboth is shorthand for ignorespace and ignoredups. HISTTIMEFORMAT : This allows you to set the time format of the history file. See Pandya's answer or read man bash for details. For further fine tuning, you have: The histappend bash option. This can be set by running shopt -s histappend or adding that command to your ~/.bashrc . If this option is set the history list is appended to the file named by the value of the HISTFILE variable when the shell exits, rather than overwriting the file. This is very useful as it allows you to combine the histories of different sessions (think different terminals for example). 
The history command has two useful options: history -a : causes the last command to be written to the history file automatically history -r : imports the history file into the current session. You could, for example, add these two commands to your PROMPT_COMMAND (which is executed each time your shell shows the prompt, so whenever you start a new shell and after each command you run in it): export PROMPT_COMMAND='history -a;history -r;' Combined, they ensure that any new terminal you open will immediately import the history of any other shell sessions. The result is a common history across all terminals/shell sessions.
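Pulling the pieces above together, a typical setup might look like the following sketch (the sizes are only examples); the first block goes in ~/.profile or ~/.bash_profile, the second in ~/.bashrc:

# ~/.profile
export HISTSIZE=10000
export HISTFILESIZE=20000
export HISTCONTROL=ignoreboth
export HISTIGNORE="pwd:df:du"

# ~/.bashrc
shopt -s histappend
export PROMPT_COMMAND='history -a; history -r;'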
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/163371", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
163,399
I have a big file and need to split it into two files. Suppose the first 1000 lines should be selected, put into another file, and then deleted from the first file. I tried using split but it creates multiple chunks.
The easiest way is probably to use head and tail : $ head -n 1000 input-file > output1$ tail -n +1001 input-file > output2 That will put the first 1000 lines from input-file into output1 , and all lines from 1001 till the end in output2
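If you also want to remove those 1000 lines from the original file, as the question asks, one option with GNU sed (which edits the file in place) is:

$ head -n 1000 input-file > output1   # copy the first 1000 lines out
$ sed -i '1,1000d' input-file         # then delete them from the original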
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/163399", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/69047/" ] }
163,481
I know that the cut command can print the first n characters of a string but how to select the last n characters? If I have a string with a variable number of characters, how can I print only the last three characters of the string. eg. "unlimited" output needed is "ted""987654" output needed is "654""123456789" output needed is "789"
Why has nobody given the obvious answer? sed 's/.*\(...\)/\1/' … or the slightly less obvious grep -o '...$' Admittedly, the second one has the drawbackthat lines with fewer than three characters vanish;but the question didn’t explicitly define the behavior for this case.
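If the string is already in a shell variable, bash can do this without spawning another process at all; note the space before the minus sign, which keeps it from being read as the :- default-value operator:

$ str=unlimited
$ echo "${str: -3}"
ted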
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/163481", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88709/" ] }
163,503
As you know, in Windows when we plug in the network cable, the network symbol will change to another status. How can I know whether the cable is plugged in or not via the command prompt in Linux?
The 2 methods I've seen used predominately are to use ethtool or to manually parse the contents of /sys . ethtool For example if your interface is eth0 you can query it using ethtool and then parse for the line, "Link detected". Example $ sudo ethtool eth0Settings for eth0: Supported ports: [ TP ] Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full Supports auto-negotiation: Yes Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full Advertised auto-negotiation: Yes Speed: 100Mb/s Duplex: Full Port: Twisted Pair PHYAD: 1 Transceiver: internal Auto-negotiation: on Supports Wake-on: pumbag Wake-on: g Current message level: 0x00000001 (1) Link detected: yes Specifically this command: $ ethtool eth0 | grep "Link"Link detected: yes If it were down it would say no. Using /sys Again assuming we're interested in eth0 , you can manually parse the contents of /sys/class/net/ and then eth0 for your device. There are 2 files under this directory that will tell you the status of whether the link is active or not, carrier and operstate : When the wire is connected these 2 files will present as follows: $ cat /sys/class/net/eth0/{carrier,operstate}1up When the wire is disconnected these 2 files will present as follows: $ cat /sys/class/net/eth0/{carrier,operstate}0down References How to detect the physical connected state of a network cable/connector?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/163503", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88727/" ] }
163,561
I have a live CD that boots into Linux and runs a small Bash script. The script searches for and runs a second program (which is usually a compiled C++ binary). You're supposed to be able to abort the second program by pressing Ctrl + C . What should happen is that the second program halts, and the Bash script continues running cleanup. What actually happens is that both the main application and the Bash script terminate. Which is a problem. So I used the trap builtin to tell Bash to ignore SIGINT. And now Ctrl + C terminates the C++ application, but Bash continues its run. Great. Oh yeah... Sometimes the "second application" is another Bash script. And in that case, Ctrl + C now does nothing whatsoever . Clearly my understanding of how this stuff works is wrong... How do I control which process gets SIGINT when the user presses Ctrl + C ? I want to direct this signal to just one specific process .
After many, many hours of searching the face of the Internet, I have found the answer. Linux has the notion of a process group . The TTY driver has a notion of Foreground Process Group. When you press Ctrl + C , the TTY sends SIGINT to every process in the Foreground Process Group. (See also this blog entry .) This is why both the compiled binary and the script that launches it both get clobbered. In fact I only want the main application to receive this signal, not the startup scripts. The solution is now obvious: We need to put the application in a new process group, and make it the Foreground Process Group for this TTY. Apparently the command to do that is setsid -c <applcation> And that is all. Now when the user presses Ctrl + C , SIGINT will be sent to the application (and any children it may have) and nobody else. Which is what I wanted. setsid by itself puts the application in a new process group (indeed, an entire new "session", which is apparently a group of process groups). Adding the -c flag makes this new process group become the "foreground" process group for the current TTY. (I.e., it gets SIGINT when you press Ctrl + C ) I've seen a lot of conflicting information about when Bash does or does not run processes in a new process group. (In particular, it appears to be different for "interactive" and "non-interactive" shells.) I've seen suggestions that you can maybe get this to work with clever pipe trickery ... I don't know. But the approach above seems to work for me.
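As a concrete sketch of what the startup script might look like: ./main-app and cleanup are placeholders, and the options come from util-linux's setsid, so check your version's man page for -c/--ctty and -w/--wait:

#!/bin/bash
cleanup() {
    echo "running cleanup..."
}
setsid -w -c ./main-app   # new session; the app's process group becomes the TTY's foreground group, -w waits for it
cleanup                   # still runs after the app is interrupted with Ctrl+C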
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/163561", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26776/" ] }
163,570
I have a server, and I want to be able to SSH in with two different users. I have set up public key authentication for the first user, and it works just fine; however, I can't log in as the second user. The difference between the two authorized_keys files is that the second user has two keys (both of them fail when authenticating). Both the .ssh directory and the authorized_keys file have 755 permissions. The ssh client sends the key that I want to authenticate with. What could be the problem?
First, the .ssh directory should have 700 permissions and the authorized_keys file should have 600. chmod 700 .ssh ; chmod 600 .ssh/authorized_keys In case you created the files with, say, root for userB then also do: chown -R userb:userb .ssh If the problem still persists, then post the output from your ssh log file in your question and I'll update my answer. For Debian: less /var/log/auth.log For Red Hat: less /var/log/secure
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/163570", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88765/" ] }
163,603
I have a Java process running on a RedHat Linux instance. The problem is that it keeps reappearing after I kill it. I am not sure where to look. I've already checked crontab, but no luck. I've looked at the PPID, but it points to init (1). Any idea how I can find out the source?
There are a number of possibilities (some mentioned in other answers): A system or user cronjob executing often, In SysV init, an /etc/inittab entry for the service with the respawn directive, In systemd, a unit file with the Restart option set to a value other than no , In Upstart, a service configuration file with the respawn directive, A process monitoring tool such as monit , or An ad-hoc watchdog process for that particular service. An interesting new (linux-only) tool that could provide more insight into where the process is being started is sysdig . Sysdig uses the Linux Kernel's tracepoint features to provide what amounts to a fast, system wide strace . For example, if I wanted to see every process starting ls , I can issue: sudo sysdig evt.type=execve and evt.arg.exe=ls When ls is run somewhere, I will get a message like this: 245490 16:53:54.090856066 3 ls (10053) < execve res=0 exe=ls args=--color=auto. tid=10053(ls) pid=10053(ls) ptid=9204(bash) cwd=/home/steved fdlimit=1024 pgft_maj=0 pgft_min=37 vm_size=412 vm_rss=4 vm_swap=0 env=... I truncated the environment information returned, but as you can see, in the ptid I can see the name and pid of the program calling execve. execve is the system call used in Linux used to execute new commands (all other exec calls are just frontends to execve).
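On a systemd-based system (RHEL 7 and later) there is also a quick first check: systemd can tell you which unit, if any, a PID belongs to, which immediately points at the service or scope responsible for restarting it. The 1234 below is a placeholder for the Java PID:

$ systemctl status 1234     # shows the owning unit/cgroup, if any
$ cat /proc/1234/cgroup     # the same information without systemd tooling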
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/163603", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88790/" ] }
163,613
I have a custom $PROMPT_COMMAND in bash that logs the last executed command and its return code. I was using $? for the latter happily until I realized that for piped commands this was insufficient. I thought I'd log ${PIPESTATUS[@]} instead. Unfortunately $PIPESTATUS seems to be set after the invocation of the $PROMPT_COMMAND . Is there any trickery that I can use to access this information during the execution of $PROMPT_COMMAND ?
Commands within your prompt command function alter PIPESTATUS , bash saves and restores PIPESTATUS (and $? ) after your prompt command, see the description of the intended behaviour here . The trick is to save $PIPESTATUS[] (and/or $? ) in the very first statement of your function, after that the original values are overwritten. function myprompt() { _pipestatus=( "${PIPESTATUS[@]}" ) echo "current: ${PIPESTATUS[@]}" echo "cached : ${_pipestatus[@]}"}PROMPT_COMMAND=myprompt then: $ true | false | truecurrent: 0cached : 0 1 0 I do something similar to what you have described, but within a trap handler function for ERR rather than a prompt command.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/163613", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4974/" ] }
163,639
We all know that Linus Torvalds created Git because of issues with Bitkeeper. What is not known (at least to me) is, how were issues/tickets/bugs tracked up until then? I tried but didn't get anything interesting. The only discussion I was able to get on the subject was this one where Linus shared concerns with about using Bugzilla . Speculation: - The easiest way for people to track bugs in the initial phase would have been to put tickets in a branch of its own but am sure that pretty quickly that wouldn't have scaled with the noise over-taking the good bugs. I've seen and used Bugzilla and unless you know the right 'keywords' at times you would be stumped. NOTE: I'm specifically interested in the early years (1991-1995) as to how they used to track issues. I did look at two threads, " Kernel SCM saga ", and " Trivia: When did git self-host? " but none of these made mention about bug-tracking of the kernel in the early days. I searched around and wasn't able to get of any FOSS bug-tracking software which was there in 1991-1992. Bugzilla, Request-tracker, and others came much later, so they appear to be out. Key questions How did then Linus, the subsystem-maintainers, and users report and track bugs in those days? Did they use some bug-tracking software, made a branch of bugs and manually committed questions and discussions on the bug therein (would be expensive and painful to do that) or just use e-mail. Much later, Bugzilla came along (first release 1998) and that seems to be the primary way to report bugs afterwards . Looking forward to have a clearer picture of how things were done in the older days.
In the beginning, if you had something to contribute (a patch or a bug report), you mailed it to Linus. This evolved into mailing it to the list (which was [email protected] before kernel.org was created). There was no version control. From time to time, Linus put a tarball on the FTP server. This was the equivalent of a "tag". The available tools at the beginning were RCS and CVS, and Linus hates those, so everybody just mailed patches. (There is an explanation from Linus about why he didn't want to use CVS.) There were other pre-Bitkeeper proprietary version control systems, but the decentralized, volunteer-based development of Linux made it impossible to use them. A random person who just found a bug will never send a patch if it has to go through a proprietary version control system with licenses starting in the thousands of dollars. Bitkeeper got around both of those problems: it wasn't centralized like CVS, and while it was not Free Software, kernel contributors were allowed to use it without paying. That made it good enough for a while. Even with today's git-based development, the mailing lists are still where the action is. When you want to contribute something, you'll prepare it with git of course, but you'll have to discuss it on the relevant mailing list before it gets merged. Bugzilla is there to look "professional" and soak up half-baked bug reports from people who don't really want to get involved. To see some of the old bug-reporting instructions, get the historical Linux repository . (This is a git repository containing all the versions from before git existed; mostly it contains one commit per release since it was reconstructed from the tarballs). Files of interest include README , MAINTAINERS , and REPORTING-BUGS . One of the interesting things you can find there is this from the Linux-0.99.12 README: - if you have problems that seem to be due to kernel bugs, please mail them to me ([email protected]), and possibly to any other relevant mailing-list or to the newsgroup. The mailing-lists are useful especially for SCSI and NETworking problems, as I can't test either of those personally anyway.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/163639", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50490/" ] }
163,640
When I have run fsck in the past I have always run it on what I thought was a filesystem partition, ie. fsck -yf /dev/sdaN , where N is the partition number on disk A. A friend of mine recently ran fsck on his brand new disk and it reported a bunch of errors using fsck -yf /dev/sda . I am curious, what is the difference between running fsck on the whole disk ( /dev/sda ), verses running it on just a single partition ( /dev/sda1 )?
When fsck runs, it should first try to locate the superblock of a filesystem to begin traversing the filesystem's structure in order to validate it. Since the /dev/sda device corresponds to the whole drive, the first portion of the disk will likely contain the partition table or Master Boot Record and fsck will not be able to locate the superblock for a supported filesystem (unless you get something that somehow matches a magic number for a known filesystem). I would expect it to exit in error or provide inaccurate output (as your friend experienced). However, I have not yet performed this experiment myself.
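You can see the difference without running fsck at all by asking file to look at the raw devices; the exact output varies from system to system, but it illustrates the point:

$ sudo file -s /dev/sda    # typically reports a boot sector / partition table
$ sudo file -s /dev/sda1   # typically reports the filesystem via its superblock, e.g. ext4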
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/163640", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26100/" ] }
163,647
I'm trying to install Arch Linux over virtual box guest machine in a UEFI mode.I've followed beginner's guide to install base system, generate fstab and etc and my system now boots into grub command prompt. I had used GPT partition table to create two partitions. /dev/sda1 - 500Mb fat32 UEFI system partition; /dev/sda2 - 7.5Gb ext4 mounted as / ; /etc/fstab generated with command genfstab -U -p /mnt >> /mnt/etc/fstab and contains: # /dev/sda2UUID=ce8f33a9-4bb8-42b8-b082-c2ada96cc2bb / ext4 rw,relatime,data-ordered 0 1# /dev/sda1UUID=3D70-B6C5 /boot vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,error=remount-ro 0 2 grub installed with commands: \# grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=arch_grub --recheck\# mkdir /boot/EFI/boot\# cp /boot/EFI/arch_grub/grubx64.efi /boot/EFI/boot/bootx64.efi (without mkdir and cp it won't boot at all) grub config generated with grub-mkconfig -o /boot/grub/grub.cfg and its contents are quite hard to get and post here; if it's necessary, I'll try. And after reboot system boots into grub> command prompt and nothing helps. Unlike this question: UEFI install (14.04) boots to GRUB command prompt, no GUI in my case command configfile (hd1,1)/boot/grub/grub.cfg does not make anything except clears the screen. I can "boot" to installed system via chroot from installing cd environment, but no way other that that. How can I fix it?
I found the cause when I tried to use gummiboot instead of grub: gummiboot reported an error that it couldn't find the kernel images. It looks like I mounted /boot and configured fstab after I installed the base system with pacstrap -i, so the kernel images that had been placed in the /boot directory of the root filesystem were hidden once the ESP was mounted over it, and thus the system could not boot. I wonder what happened to them. Were they still on the hard drive, just shadowed by the mounted partition? Anyway, I just reinstalled everything, this time carefully following the instructions on the Arch wiki, and it works now.
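For what it's worth, here is a rough sketch of how this could be checked and recovered without a full reinstall, assuming you are inside an arch-chroot with the ESP now mounted on /boot (the /mnt/rootonly mount point is just an example name, not something from the original answer):

# The old files are usually still on the root filesystem, only hidden by the
# mount on top of /boot. A non-recursive bind mount of / exposes them:
mkdir -p /mnt/rootonly
mount --bind / /mnt/rootonly
ls /mnt/rootonly/boot    # e.g. vmlinuz-linux, initramfs-linux.img

# With the ESP mounted on /boot, reinstalling the kernel package places fresh
# images there via its install hooks; then regenerate the grub config:
pacman -S linux
grub-mkconfig -o /boot/grub/grub.cfg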
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/163647", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88809/" ] }
163,657
I have a packet rate limit (max. 10 per second) which is set by my internet provider. This is a problem if I want to use the AceStream player, because if I exceed the limit I get disconnected. How can I restrict the internet access of this program? I tried the suggested command:

iptables -A OUTPUT -m limit --limit 10/s -j ACCEPT

but I get a fatal error message:

FATAL: Error inserting ip_tables (/lib/modules/3.2.0-67-generic/kernel/net/ipv4/netfilter/ip_tables.ko): Operation not permitted
iptables v1.4.12: can't initialize iptables table `filter': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.

With administrator rights:

sudo iptables -A OUTPUT -m limit --limit 10/s -j ACCEPT

there is no error message anymore, but it is still not working; I get disconnected. Is there an error in the command line, or do I have to use other arguments of iptables? Below is the actual message that I get when I exceed the limits of the provider. Up to now I have tried different approaches, but none of them worked:

sudo iptables -A INPUT -p tcp --syn --dport 8621 -m connlimit --connlimit-above 10 --connlimit-mask 32 -j REJECT --reject-with tcp-reset
sudo iptables -A INPUT -m state --state RELATED,ESTABLISHED -m limit --limit 9/second --limit-burst 10 -j ACCEPT
sudo iptables -A INPUT -p tcp --destination-port 8621 --syn -m state --state NEW -m limit --limit 9/s --limit-burst 10 -j ACCEPT

None of these approaches lets me keep using the application, so I posted another question: set connection limit via iptables.
The solution you found was correct:

iptables -A OUTPUT -m limit --limit 10/s -j ACCEPT

But it assumes a default policy of DROP or REJECT, which is not usual for OUTPUT. You need to add:

iptables -A OUTPUT -j REJECT

Be sure to add this rule after the ACCEPT one: either execute them in this order, or use -I instead of -A for the ACCEPT. Also, depending on the application, this might kill the connection; in that case try DROP instead of REJECT, or try a different --reject-with (the default is icmp-port-unreachable). I just tested with telnet against a DVR server and it didn't kill the connection. Of course, since a new connection is an output packet, trying to reconnect right after hitting the limit will fail right away if you use REJECT. I gather from the comments that your ISP also expects you to limit your INPUT packets... you cannot do this. By the time you are able to stop them they've already reached your NIC, which means they were already accounted for by your ISP. The INPUT packet count will also increase considerably when you limit your OUTPUT, because most of the ACKs won't make it out, causing lots of retransmissions. 10 packets per second is insane.
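To make the ordering concrete, here is a minimal sketch (run as root); the --limit-burst value and the reject type are illustrative choices, not something prescribed by the answer above:

# Accept up to 10 packets per second on OUTPUT, then reject everything else.
# The ACCEPT rule must come before the catch-all REJECT so it matches first.
iptables -A OUTPUT -m limit --limit 10/s --limit-burst 10 -j ACCEPT
iptables -A OUTPUT -j REJECT --reject-with icmp-port-unreachable

# If a catch-all rule already exists, insert the ACCEPT above it instead:
# iptables -I OUTPUT 1 -m limit --limit 10/s --limit-burst 10 -j ACCEPT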
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/163657", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88819/" ] }
163,667
I have several Linux devices connected to the same router (which I do not administer). How can I find out the IP addresses of all the other devices by executing some commands on one of them?
I believe you could use nmap to get such information. The command below lists all the machines/devices connected to my network; it is a home network, so it shows every machine in my home.

nmap -sP 192.168.1.0/24

You will need to adjust the IP range and subnet mask to match the network you are on.
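A short sketch of how you might work out the right range first (the interface output and the 192.168.0.0/24 range are only examples):

# Find your own address and prefix, e.g. "inet 192.168.0.42/24 ... dev eth0"
ip -4 addr show

# Ping-scan that range; -sn is the newer spelling of -sP (host discovery, no port scan)
nmap -sn 192.168.0.0/24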
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/163667", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/31584/" ] }
163,681
From the question here, the OP wants to repeatedly poll the pid of a process using pidof in a shell script. Of course this is inefficient, as a new process must be started for the pidof program multiple times per second (I don't know that this is the cause of the CPU spikes in the question, but it seems likely). Usually the way around this kind of thing in a shell script is to work with a single program that outputs the data you need on stdout and then do some text processing if necessary. While this involves more programs running concurrently, it is likely to be less CPU intensive, since new processes are not being continually created for polling purposes. So for the above question, one solution might be to have some program which outputs the names and pids of processes as they are created. Then you could do something like:

pids-names | grep some_program | cut -f 2 | while read pid; do
    process-pid "$pid"
done

The problem with this is that it raises a more fundamental question: how can pids and process names be printed as they are created? I have found a program called ps-watcher, though the problem with it is that it is just a Perl script which repeatedly runs ps, so it doesn't really solve the problem. Another option is to use auditd, which could probably work if the log were processed directly via tail -f. An ideal solution would be simpler and more portable than this, though I will accept an auditd solution if it is the best option.
Linux-specific answer: perf-tools contains an execsnoop that does exactly this. It uses various Linux-specific features such as ftrace. On Debian, it's in the perf-tools-unstable package. Example of me running man cat in another terminal:

root@Zia:~# execsnoop
TIME     PID   PPID  ARGS
17:24:26 14189 12878 man cat
17:24:26 14196 14189 tbl
17:24:26 14195 14189 preconv -e UTF-8
17:24:26 14199 14189 /bin/sh /usr/bin/nroff -mandoc -Tutf8
17:24:26 14200 14189 less
17:24:26 14201 14199 locale charmap
17:24:26 14202 14199 groff -mtty-char -Tutf8 -mandoc
17:24:26 14203 14202 troff -mtty-char -mandoc -Tutf8
17:24:26 14204 14202 grotty

I doubt there is a portable way to do this.
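To tie this back to the loop sketched in the question, something along these lines should work. This is an untested sketch: the column positions assume the TIME/PID/PPID/ARGS layout shown above, and some_program / process-pid are the question's placeholder names, not real tools.

# Print the PID (column 2) of every newly exec'd "some_program" and hand it to
# the hypothetical process-pid handler; fflush() keeps awk from buffering the pipe.
execsnoop | awk '$4 == "some_program" { print $2; fflush() }' | while read -r pid; do
    process-pid "$pid"
done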
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/163681", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48083/" ] }