Dataset columns:

source_id : int64 (1 to 74.7M)
question  : string (length 0 to 40.2k)
response  : string (length 0 to 111k)
metadata  : dict
510,757
I've installed Manjaro Linux. My graphics card is an Nvidia GTX 1050Ti. When I boot my computer and try to watch videos, I notice harsh screen tearing. This problem is resolved when I go to the Nvidia X Server Settings and enable the option "Force Full Composition Pipeline". Is there any way to permanently set this option so that I don't have to manually enable it every time I reboot my computer?
My laptop is equipped with a Quadro K2100M, running Ubuntu Bionic with KDE Plasma 5.17 (from the neon repo) and using the nvidia-driver-430. Since I have a different set of configurations for the monitors at home and at work, I needed something dynamic, and here is what is working for me. I wrote the following script to handle the dynamic configuration of the screens (~/bin/force-composition-pipeline.sh):

#!/bin/bash
s="$(nvidia-settings -q CurrentMetaMode -t)"
if [[ "${s}" != "" ]]; then
    s="${s#*" :: "}"
    nvidia-settings -a CurrentMetaMode="${s//\}/, ForceCompositionPipeline=On\}}"
fi

I added that script to the autostart:

chmod +x ~/bin/force-composition-pipeline.sh
ln -s ~/bin/force-composition-pipeline.sh ~/.config/autostart-scripts/

In KDE Plasma settings, in Display and Monitor -> Compositor, I set the Tearing prevention ("vsync") to Never. Note that I found the non-full ForceCompositionPipeline to be enough for me.
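If the monitor layout is fixed, the setting can also be made persistent at the X-server level instead of per-login; this is not from the answer above, and the metamodes string below is a placeholder you would need to adapt to your own resolution and offsets:

sudo nvidia-xconfig    # generates /etc/X11/xorg.conf if it does not exist
# then, in the "Screen" section of /etc/X11/xorg.conf, add something like:
#   Option "metamodes" "nvidia-auto-select +0+0 {ForceFullCompositionPipeline=On}"

You can read the current metamode string to copy from with nvidia-settings -q CurrentMetaMode -t. The driver then applies the option before the desktop starts, so nothing needs to run at login.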
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/510757", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/305314/" ] }
510,778
I am a bit confused about how Linux hard drive / storage device block files are named. My questions are: How are IDE devices and partitions named? How are EIDE devices and partitions named? How are PATA devices and partitions named? How are SATA devices and partitions named? How are SCSI devices and partitions named? Lastly, I have been reading articles on this subject, and I have seen mentions of 'master drives' and 'slave drives'. What are these, what are they used for, and how are they named?
Introduction

First of all, all the devices populate the /dev folder. Also, it is important to note that the (E)IDE and PATA terms usually refer to the same thing: the PATA interface standard. IDE and PATA are interchangeable terms in this context. There was a major change in naming conventions for block devices in Linux around the release of kernel version 2.6. The kernel supports all ATA devices through libATA, which started with SATA device support in 2003 and was later extended to cover PATA. Therefore, be aware that, depending on your distribution and kernel version, the drive naming convention can differ. For a while now, PATA devices on "modern" distributions have been named the way SATA drives are, since both now use libATA. For your distribution, you can find this in /lib/udev/rules.d/60-persistent-storage.rules. On my system running Debian 9, it is also the case. For example:

$ cat /lib/udev/rules.d/60-persistent-storage.rules | grep "ATA"
# ATA
KERNEL=="sd*[!0-9]|sr*", ENV{ID_SERIAL}!="?*", SUBSYSTEMS=="scsi", ATTRS{vendor}=="ATA", IMPORT{program}="ata_id --export $devnode"

By browsing this file, you will know how your distribution will name every block device you could connect to your machine.

Block device naming conventions

IDE drives

IDE drives (using the old PATA driver) are prefixed with "hd": the first device on the IDE controller (master) is hda, the second device (slave) is hdb. Since there can only be two drives on one IDE controller/cable, the master is the first one and the slave is the second one. Since most motherboards are fitted with two IDE controllers, it goes on the same way with the second controller: hdc is the master drive on the second controller and hdd the slave drive. Be aware that, since Linux kernel 2.6.19, the support for IDE drives has been merged with SATA/SCSI drives and they are, therefore, named like them.

SATA and SCSI drives

This naming convention started with SCSI drives and was extended to SATA drives with libATA. It applies to SCSI, SATA and PATA drives, as well as others out of the scope of the question (USB mass storage, FireWire, etc.). Nowadays, practically all devices using a serial bus use the same denomination (except for NVMe drives, but that would be a story for PCI devices). SATA/SCSI drives start with "sd": the first one is sda, the second one is sdb, and so on.

Partition naming conventions

Regarding partitions, each of them is denoted by a number appended to the disk name described previously, starting from 1. Except for some other devices not mentioned in the question, this is always the case. For instance, the partitions on a SATA drive would be listed as sda1, sda2, and so on, for primary partitions. Logical partitions start at index 5, while the extended partition takes index 4. Note that this is obviously only true for drives using MBR and not GPT. Below is the output of lsblk giving an example for a disk called sdd, with 3 primary partitions (sdd1, sdd2, sdd3), 1 extended partition (sdd4) and 2 logical partitions (sdd5, sdd6):

$ lsblk
sdd      8:48   1  1.9G  0 disk
├─sdd1   8:49   1  153M  0 part
├─sdd2   8:50   1  229M  0 part
├─sdd3   8:51   1  138M  0 part
├─sdd4   8:52   1    1K  0 part
├─sdd5   8:53   1  289M  0 part
└─sdd6   8:54   1  1.1G  0 part

Master and slave devices

A single IDE interface can support two devices. Usually, motherboards come with dual IDE interfaces (primary and secondary), for up to four IDE devices on a system.
To allow two drives to operate on the same parallel cable, IDE uses a special configuration called master and slave. This configuration allows one drive's controller to tell the other drive when it can transfer data to or from the computer. The name comes from the fact that the slave drive asks the master whether it is communicating with the motherboard; if the master is, it tells the slave to wait until the operation is finished, and if not, it tells the slave to go ahead. The master/slave role can be chosen thanks to the "Cable Select" feature: a jumper on each drive supporting the feature selects either "Master", "Slave" or "Auto" (the last option meaning that the drive at the end of the IDE cable becomes the master and the other one the slave).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/510778", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/337255/" ] }
510,811
I installed the development packages for X11 today and now want to remove them. I do not remember the exact name of the package that I installed. I installed it by running apt-get install ... and now want to remove the development package using apt-get purge --auto-remove <name of package>. Any suggestions?
If you installed them today, they'll all be listed in /var/log/apt/history.log. Look through that, identify the packages you don't want, and remove them.
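A hedged example of narrowing the log down, assuming the stock log format of recent apt versions (the package name at the end is just a placeholder, not taken from the question):

grep 'Commandline: apt-get install' /var/log/apt/history.log
# then, once identified:
sudo apt-get purge --auto-remove libx11-dev   # placeholder package name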
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/510811", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/179308/" ] }
510,838
I want to use tr to do some rot13 transformation. I can beautifully understand this command: tr A-Za-z N-ZA-Mn-za-m <<< "URYC ZR CYRNFR", whose output is HELP ME PLEASE, but I can't figure out how this other command can produce the same rot13 transformation: tr .............A-Z A-ZA-Z <<< "URYC ZR CYRNFR". So I have two questions: What's the magic behind the second tr command? How to make the second command work for both lower and upper case, just like the first command?
It works as follows:

SET1 -> .............ABCDEFGHIJKLMNOPQRSTUVWXYZ
SET2 -> ABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLM

So tr will translate SET1 to SET2. This is equivalent to the first command because it is also shifting by 13 units, as there are 13 dots. To include the lower-case letters, you'll have to arrange them in SET1 with a similar offset, i.e.:

SET1 -> .............ABCDEFGHIJKLMNOPQRSTUVWXYZ..........................abcdefghijklmnopqrstuvwxyz
SET2 -> ABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzabcdefghijklm

That's 26 dots between Z and a, spanning half the upper-case and half the lower-case alphabet. So the tr command itself will be:

tr .............A-Z..........................a-z A-ZA-Za-za-z
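A quick round trip of the final command, assuming GNU tr and a shell with <<< here-strings:

$ tr .............A-Z..........................a-z A-ZA-Za-za-z <<< "URYC ZR CYRNFR uryc zr cyrnfr"
HELP ME PLEASE help me please
$ tr .............A-Z..........................a-z A-ZA-Za-za-z <<< "HELP ME PLEASE help me please"
URYC ZR CYRNFR uryc zr cyrnfr

ROT13 is its own inverse, so running the same command twice gets the original text back.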
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/510838", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/345799/" ] }
510,855
I made some scripts containing some functions which by design need sudo permission. I have added their paths in .bashrc for Linux and .bash_profile for macOS so that they can be called from anywhere. But I do not want the user to type sudo each time they want to call those script functions. Is there any way I can imply sudo so that, whenever these functions are called, the terminal would assume it's being called by the root user? I think I should just add sudo -i at the beginning of the script, or maybe of each function? Or is there any other alternative way of implying sudo? Also, it would be great to know if you think it would be terrible or dangerous to imply sudo, and whether it is not recommended. An example of a dangerous-function script that contains some functions which I am trying to accomplish without specifying sudo:

#!/bin/bash
start-one(){
    ## do dangerous stuff with sudo
    systemctl start dangerous.service
}
start-two(){
    systemctl start dangerous1.service
}
start-launchwizard(){
    systemctl start dangerous2.service
}
## Calling functions one by one...
"$@"

I don't want to call them with sudo dangerous-function start-one. I just want to call them with dangerous-function start-one but still get the same result as the previous one.
The "$@" will expand to the list of command line arguments, individually quoted. This means that if you call your script with ./script.sh start-one it will run start-one at that point (which is your function). It also means that invoking it as ./script.sh ls would run ls. Allowing a user to invoke the script using sudo (or using sudo inside the script) would allow them to run any command as root, if they had sudo access. You do not want this. Instead, you would need to carefully validate the command line arguments. Maybe something like:

foo_func () {
    # stuff
    printf 'foo:\t%s\n' "$@"
}

bar_func () {
    # stuff
    printf 'bar:\t%s\n' "$@"
}

arg=$1
shift

case $arg in
    foo) foo_func "$@" ;;
    bar) bar_func "$@" ;;
    *)
        printf 'invalid sub-command: %s\n' "$arg" >&2
        exit 1
esac

Testing:

$ sh script.sh bar 1 2 3
bar:    1
bar:    2
bar:    3

$ sh script.sh baz
invalid sub-command: baz

This would be safer to use with sudo, but you would still not want to execute anything that the user gives you within your various functions directly without sanitising the input. The script above does this by restricting the user to a particular set of sub-commands, and each function that handles a sub-command does not execute, eval, or source its arguments. Let me say that again with other words: The script does not, and should not, try to execute the user input as code in any way. It should not try to figure out whether an argument corresponds to a function in the current environment that it can execute (functions may have been put there by the calling environment) and it should not execute scripts whose pathnames were given on the command line etc.

If a script is performing administrative tasks, I would expect to have to run it with sudo, and I would not want the script itself to ask me for my password, especially not if it's a script that I may want to run non-interactively (e.g. from a cron job). That is, a script performing administrative tasks requiring root privileges should (IMHO) be able to assume it's running with the correct privileges from the start. If you want to test this in the script, you could do so with:

if [ "$( id -u )" -ne 0 ]; then
    echo 'please run this script as root' >&2
    exit 1
fi

It then moves the decision of how to run the script with root privileges to the user of the script.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/510855", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/318405/" ] }
510,940
I've not found much information in the manual. I've tried to manually modify the file /etc/resolv.conf, however this seems to be overwritten by something. How can I achieve this?
networking.nameservers = [ "1.1.1.1" "9.9.9.9" ]
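For context: this is the NixOS option for static name servers (inferred from the answer's syntax; the question itself does not name the distribution). A minimal sketch of where it would go in /etc/nixos/configuration.nix:

{ config, pkgs, ... }:
{
  networking.nameservers = [ "1.1.1.1" "9.9.9.9" ];
}

Apply it with sudo nixos-rebuild switch. NixOS regenerates /etc/resolv.conf from the declared configuration, which is why editing that file by hand appears to be "overwritten by something".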
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/510940", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124109/" ] }
510,951
How do -f and -o interact in ps? They shouldn't work together, according to "ps: output modifiers vs output format control" and https://unix.stackexchange.com/a/446198/674, since -f implicitly specifies the fields, while -o allows the user to specify the fields. man ps says:

-f     Do full-format listing. This option can be combined with many other UNIX-style options to add additional columns. It also causes the command arguments to be printed. When used with -L, the NLWP (number of threads) and LWP (thread ID) columns will be added. See the c option, the format keyword args, and the format keyword comm.

f      ASCII art process hierarchy (forest).

They seem to be unrelated options/arguments. But why does ps -f -o cmd work just like ps f, showing the parent-child relation? Why does ps -f -o ... select the same number of processes as ps f?

$ ps f | wc -l
224
$ ps -f -o pid | wc -l
224

ps -f selects different processes with and without -o?

$ ps -f | wc -l
5

-e seems not to work here?

$ ps -e -f -o pid,ppid,comm | wc -l
224
$ ps -e -f | wc -l
414
$ ps -e -o pid,ppid,comm | wc -l
414

Thanks.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/510951", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
510,964
I have a server running sshd. I have a secure machine with an ssh key. I want to allow direct public key login to the server with the secure machine's key. I also have a laptop with a different ssh key, which may get compromised if I lose it. I want to require a password on top of public key authentication, in case the key has been compromised. Is this configuration possible to achieve by modifying sshd_config? Please note that this question is not about setting both public key and password for login. Instead I'm looking for a way to choose different combinations depending on the public key.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/510964", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/199173/" ] }
510,990
The openssl passwd command computes the hash of a password typed at run-time or the hash of each password in a list. The password list is taken from the named file for option -in file, from stdin for option -stdin, and from the command line otherwise. The UNIX standard algorithm crypt and the MD5-based BSD password algorithm 1 and its Apache variant apr1 are available.

I understand the term "hash" to mean "turn an input into an output from which it is difficult/impossible to derive the original input." More specifically, the input:output relationship after hashing is N:M, where M<=N (i.e. hash collision is possible). Why is the output of "openssl passwd" different when run successively with the same input?

> openssl passwd
Password:
Verifying - Password:
ZTGgaZkFnC6Pg
> openssl passwd
Password:
Verifying - Password:
wCfi4i2Bnj3FU
> openssl passwd -1 "a"
$1$OKgLCmVl$d02jECa4DXn/oXX0R.MoQ/
> openssl passwd -1 "a"
$1$JhSBpnWc$oiu2qHyr5p.ir0NrseQes1

I must not understand the purpose of this function, because it looks like running the same hash algorithm on the same input produces multiple unique outputs. I guess I'm confused by this seeming N:M input:output relationship where M>N.
> openssl passwd -1 "a"
$1$OKgLCmVl$d02jECa4DXn/oXX0R.MoQ/

This is the extended Unix-style crypt(3) password hash syntax, specifically the MD5 version of it. The first $1$ identifies the hash type, the next part OKgLCmVl is the salt used in encrypting the password, and then, after the separator $ character, the rest of the line is the actual password hash. So, if you take the salt part from the first encryption and use it with the subsequent ones, you should always get the same result:

> openssl passwd -1 -salt "OKgLCmVl" "a"
$1$OKgLCmVl$d02jECa4DXn/oXX0R.MoQ/
> openssl passwd -1 -salt "OKgLCmVl" "a"
$1$OKgLCmVl$d02jECa4DXn/oXX0R.MoQ/

When you're changing a password, you should always switch to a new salt. This prevents anyone finding out after the fact whether the new password was actually the same as the old one. (If you want to prevent the re-use of old passwords, you can of course hash the new password candidate twice: once with the old salt and then, if the result is different from the old password and thus acceptable, again with a new salt.)

If you use openssl passwd with no options, you get the original crypt(3)-compatible hash, as described by dave_thompson_085. With it, the salt is the first two letters of the hash:

> openssl passwd "a"
imM.Fa8z1RS.k
> openssl passwd -salt "im" "a"
imM.Fa8z1RS.k

You should not use this old hash style in any new implementation, as it restricts the effective password length to 8 characters, and has too little salt to adequately protect against modern methods. (I once calculated the amount of data required to store a full set of rainbow tables for every classic crypt(3) hash. I don't remember the exact result, but assuming my calculations were correct, it was on the order of "a modest stack of multi-terabyte disks". In my opinion, that places it within the "organized criminals could do it" range.)
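For new systems, a stronger crypt variant is preferable; OpenSSL 1.1.1 and later (an assumption about the installed version) support the SHA-crypt schemes directly:

> openssl passwd -6 "a"                    # SHA-512 crypt, fresh random salt each run
$6$<random-salt>$<hash>
> openssl passwd -6 -salt "somesalt" "a"   # fixed salt, reproducible output

The $6$ prefix marks SHA-512 crypt the same way $1$ marks the MD5 scheme (and $5$ marks SHA-256).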
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/510990", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/214773/" ] }
511,029
I often use the /tmp directory on my Linux machine for storing temporary files (e.g. PDFs from a site that wants me to download it first etc.) and I often create a directory with my username. But at every startup it (including all files) gets deleted. Now I know I can put it in /var/tmp, but I want all its contents to be deleted while the directory itself is kept. So:

tmp
|- me           # this should stay
|  |- foo1      # this should be deleted...
|  |- bar1      # ...and this as well
|- other stuff...

Is there any way to do this? Maybe with permissions or with a special configuration?
I use pam-tmpdir for this: it creates a user-private temporary directory at login. To set it up, add

session optional pam_tmpdir.so

to the appropriate PAM services; on a Debian-based system, installing the libpam-tmpdir package will offer to do this for you, or you can add the line to /etc/pam.d/common-session. The next time you log in, you'll find a directory under /tmp/user with your user id, and TMP and TMPDIR set appropriately.
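An alternative that matches the question's layout more literally (a directory named after the user, kept across boots) is a systemd tmpfiles.d entry; this is my own suggestion, not part of the answer above, and assumes a systemd-based distribution and a user called me:

# /etc/tmpfiles.d/me-tmp.conf
# Type Path     Mode UID GID Age
d      /tmp/me  0700 me  me  -

systemd-tmpfiles recreates /tmp/me on every boot, and the "-" age field means its contents are not subject to periodic cleaning while the system is running.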
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/511029", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/345457/" ] }
511,134
As far as I know, Wayland is not using OpenGL but OpenGL ES for 3D rendering, usually used on embedded systems (except for Intel IGPs). In the long term, I read that OpenGL support would be implemented but was not a priority for now. I guess it is because OpenGL ES is somewhat simpler, but that does not seem like a strong point for making such a choice. I was wondering what the reasons for this decision were, and what were (and would be, for the future of Linux) the consequences of this choice.

Update: The Wayland FAQ was my first stop before even thinking about asking here. Feel free to correct me if I am wrong, but the last part seems, at least, not very clear, IMHO:

EGL is the only GL binding API that lets us avoid dependencies on existing window systems, in particular X.

As far as I understand, it's not that simple. EGL is an interface between GLs such as OpenGL and OpenGL ES. OpenGL ES calls are possible directly through Wayland/Weston while OpenGL support needs XWayland.

GLX obviously pulls in X dependencies and only lets us set up GL on X drawables. The alternative is to write a Wayland-specific GL binding API, say, WaylandGL.

So, this part refers to what I was saying above and, as far as I know, the Wayland development team does not want to take that alternative route. So, for now, people willing to port their applications which do not make direct use of Wayland/Weston are forced to translate their OpenGL API calls to OpenGL ES ones.

A more subtle point is that libGL.so includes the GLX symbols, so linking to that library will pull in all the X dependencies. This means that we can't link to full GL without pulling in the client side of X, so Weston uses OpenGL ES to render.

This seems understandable, on a short-term basis, at least. Still, in the long run, the Wayland development team wants to add OpenGL APIs as well, so it seems more like a workaround for now to me, until things get serious. This is one of the sentences which triggered my question here in the first place.

As detailed above, clients are however free to use whichever rendering API they like.

If I am not mistaken, this means going for XWayland for OpenGL applications and Weston for OpenGL ES, which seems to be a bigger deal than what the sentence implies, especially when it comes to 3D rendering, not to mention the fact that Wayland/Weston aim to replace Xorg. For the record:

XWayland is a series of patches over the X.Org server codebase that implement an X server running upon the Wayland protocol. The patches are developed and maintained by the Wayland developers for compatibility with X11 applications during the transition to Wayland,[28] and were mainlined in version 1.16 of the X.Org Server in 2014. When a user runs an X application from within Weston, it calls upon XWayland to service the request.

N.B.: I am trying to learn more about Wayland/Weston, especially when it comes to (3D) rendering, but exact information on this subject is difficult to find, especially because it seems that the only people really X11-savvy are the Wayland developers. As far as I can tell so far, for OpenGL: if OpenGL function calls are made through the GLX interface, it falls back to XWayland, so the programme is (really) not using Wayland.

Addendum: It might seem that the discussion is out of the scope of the original question, but it is actually linked to the underlying OpenGL interfaces/libraries and it is difficult to separate all of this from the original question.
As it seems to be a complicated and confusing subject, here are some various links and quotes which lead me to think that OpenGL (not ES) is not really supported by Wayland per se, but falls back to X11, through XWayland:

What does EGL do in the Wayland stack

The Wayland server in the diagram is Weston with the DRM backend. The server does its rendering using GL ES 2, which it initialises by calling EGL.

Hacker News comments

Wayland is actually pretty stable. Nvidia has problems with OpenGL in Xwayland (i.e. 3D accel for X11 apps), otherwise it should work. There are warts though, when using Wayland. When using scaling (doesn't have to be fractional, either), X11 apps are being upscaled, not downscaled, resulting in blurriness. Unfortunately, neither Firefox nor Chrome supports Wayland natively, and who wants to use their most used app on their computer in blurry mode?

How come GLX-based applications can be run on Wayland on Ubuntu?

So based on the link @genpfault provided: XWayland is a part of XOrg that's providing an X server on top of Wayland. Any application that's linked against X11 libs and running under Wayland will automatically use XWayland as its backend. So the GLX part of XWayland is the mechanism that allows GLX-based OpenGL applications to run on Wayland. Not being able to use MSAA in GLX-based applications seems to be a known bug of XWayland, at least for Intel and AMD GPUs (cf. https://bugs.freedesktop.org/show_bug.cgi?id=98272). But I couldn't find any additional information on the matter.
The premise of your question is wrong. Wayland does not use OpenGL ES or OpenGL at all. Let's get things in order to achieve a proper understanding of the software stack:

Wayland is an IPC protocol that allows the clients and the compositor to talk to each other. While technically libwayland is just a single implementation of that protocol and should not be solely identified with it, for now it remains the only implementation and is generally called 'wayland' as well. It is not a full compositor that runs your hardware.

Wayland Compositor is an application that uses the wayland protocol to receive buffers from clients and compose them into a single image shown on the display. The wayland protocol makes relatively few assumptions about the inner workings of the compositor itself. In particular, the choice of rendering technology is left completely open. The default buffer type defined by the core protocol is a simple shared memory buffer that is not accelerated by the GPU in any way, and is meant mainly for simple applications that render their UI using the CPU only. This buffer type is not interesting in our case, and will be conveniently forgotten in the rest of the answer.

Weston is a reference implementation of a wayland compositor. While it is developed by the people involved in the development of libwayland itself, it is not an essential part of the wayland ecosystem - it is just a single compositor implementation. If you are running any of the Linux distributions that include wayland desktop environments, you are almost certainly not using Weston, but rather some other compositor implementation. Weston uses OpenGL ES for rendering - this is mainly dictated by the fact that the current libGL implementations still link to some X-related libraries, and Weston's creators wanted to keep it pure wayland - this is a reference implementation after all. Additionally, it makes it compatible with embedded devices, which may not support the full OpenGL.

EGL - libEGL is a library that contains glue code that allows initializing multiple rendering contexts of a huge variety (OpenGL, OpenGL ES or OpenVG in different versions). It also allows sharing of data between such contexts - i.e. it allows passing a framebuffer rendered with OpenGL to OpenVG for further processing. Sharing of these resources can occur across process boundaries - the receiver of a resource may be a different application than the process that created it. A reference to a shared resource (buffer) can be passed between processes in a variety of ways, e.g. over a compatible wayland IPC connection. A buffer (EGL Image) passed in such a way does not retain any reference to the rendering API used to obtain it. While it is claimed that the EGL layer is also responsible for binding the framebuffers to the underlying OS elements like windows or displays, in practice that means sharing buffers with some system process that can use them to e.g. paint them in a window or on a particular display. Therefore, it is just a variation of the above functionality rather than a separate feature. libEGL is heavily extensible, and there is a huge list of extensions available, so your libEGL implementation may also be responsible for other tasks that do not fit the above description.

GLX - an older and more limited variant of EGL. It allows sharing of buffers of various kinds, but only between an X11 client and X11 server.
It is inherently tied to the X11 protocol - if the client application uses the X11 protocol, it can use GLX as well. If it uses the wayland protocol, it cannot. EGL was developed as its replacement, to allow sharing of such data more generally. Modern X11 servers allow clients to use EGL instead of GLX as well.

So the wayland technology does not require you to use OpenGL ES, nor does it even vaguely point in its direction. The reference compositor Weston uses it internally, but that has no influence on the client rendering API. The only requirement is that whatever you render can be transformed into an EGL Image. Since this is the job of libEGL, the choice of the rendering API on the client side is dictated only by the limitations of your libEGL implementation. This is also true for other compositors, which may or may not be using OpenGL ES to render the final desktop image. libEGL is a part of the GPU driver software (just like e.g. libGL), so whether it allows converting an OpenGL buffer into an EGL Image (and subsequently an EGL Image into an OpenGL ES texture on the compositor side) depends on your hardware, but in practice virtually every hardware allows that as long as it supports the full OpenGL at all.

This is why you have difficulty finding definitive proof that wayland supports the full OpenGL - wayland does not care about the rendering technology at all. Just as the FAQ says:

What is the drawing API? "Whatever you want it to be, honey" [...]

Therefore, the question whether OpenGL is supported is out of scope for wayland. It is actually determined solely by the capabilities of libEGL and the hardware.

The client application must use a particular API in order to initialize its windows and the GL(ES) contexts. If the client application uses the X11 API to create its windows, then it will connect to the XWayland compatibility shim which pretends to be a full X11 server to the client. Then the client will be able to use either GLX or EGL-on-X11 to initialize its contexts and share rendered buffers with the X11 server. If the client uses the wayland client API to create its windows, it will be able to use EGL-on-wayland to initialize its contexts and share rendered buffers with the wayland compositor. This choice in most cases lies entirely on the client side.

A lot of older software that is not Wayland-aware uses just the X11 API and GLX - simply because the wayland and EGL APIs did not exist (or were not mature enough) during development. Even more modern software often uses just the X11 API for compatibility reasons - there are still quite a lot of non-wayland systems out there. Modern UI toolkits like GTK or Qt actually support multiple "backends", which means that they can detect the session type on initialization and use the most appropriate API to create windows and drawing contexts. Since games generally don't use such toolkits, the burden of such detection falls entirely on their developers. Not many projects like that bother to actually implement it, and they often rely on the X11 and GLX protocols on both X11 and wayland sessions (through XWayland).

So if a game uses GLX to initialize OpenGL, that means it has opted to use the X11 API. Whether this is because the game does not support wayland or EGL at all, or whether the game tried to use EGL to initialize OpenGL and failed for some reason, I cannot judge without a ton of additional information. In any case, it is not in any way dependent on the wayland protocol or the compositor used.
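If you want to check what your own stack can bind, the eglinfo utility (shipped in packages such as mesa-utils-extra or egl-utils; the exact package name is distribution-dependent) prints the client APIs of the local libEGL:

$ eglinfo | grep -i 'client apis'
EGL client APIs: OpenGL OpenGL_ES

Seeing OpenGL in that list means full OpenGL contexts can be created through EGL on this machine, independently of whether the session is X11 or wayland.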
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/511134", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/159254/" ] }
511,232
It seems that the purpose of cat is to concatenate several files. However, many people still use cat instead of less (or a similar program like more) to display a file. See, for example, the GNU m4 manual and the answer "How can I display the contents of a text file on the command line?". Man page for less:

-F or --quit-if-one-screen
    Causes less to automatically exit if the entire file can be displayed on the first screen.
-X or --no-init
    Disables sending the termcap initialization and deinitialization strings to the terminal. This is sometimes desirable if the deinitialization string does something unnecessary, like clearing the screen.

Nowadays, is it a good practice to use cat to display or view a file? Why use cat to view a file? This makes me think of the Useless Use Of Cat. Note: This question is not about the differences between less and more. Moreover, it concerns the visualization of a file created earlier. According to the answers and comments, it seems that cat is used beyond its purpose because it is easier to use than a pager (e.g. more, less ...). Some people think this is an irrelevant (or useless) fact, but experience shows that various subtleties pertaining to the shell may have practical consequences: using a shell loop to process a text file, using unquoted variables ... Negative consequences vary in intensity. For example, cat foo bar | less is valid because the user concatenates two files, but cat foo | less is not. In the same spirit, cat seems to be required in "a pipeline", although it seems that a pager like less works in a pipeline too (note: less is not suited to all display cases, e.g. Reading a named pipe: tail or cat?). See also: How to cat a file with "or" options
I'm going to assume that the "many people" in the question refers to people writing tutorials, manuals, or answers on web-sites such as this one. When writing terminal commands in a text document, the cat command is commonly used to show the contents of a file. An example of this:

$ cat script.sh
#!/bin/sh
echo 'hello'

$ chmod +x script.sh

$ ./script.sh
hello

Here, I show that I have a file called script.sh, what its contents are, that I'm making it executable, and that I'm running it and what the result of that is. Using cat in this example is just a way of "showing all one's cards", i.e. to explicitly display all the prerequisites for an example (and doing it as part of a textual representation of a terminal session). less and other screen-based pagers, depending on how they are used, would not necessarily give that output in the terminal. So if I wrote

$ less script.sh
#!/bin/sh
echo 'hello'

and a user tried it by themselves, they may wonder why the text of the script appears different in their terminal and then disappears from the terminal once they close the less pager (if that's the way they've configured the pager), or whether their less is different from the less used in the answer (or tutorial or whatever it may be), or if they're doing something else wrong. Allowing for the possibility of this train of thought is counterproductive and disruptive for the user. Using cat when showing an example in the terminal as text is good, as it gives a fairly easy way of reproducing the exact same results as in the given text. For larger files, it may be better to show the file separately, and then concentrate on how that file is used when writing the terminal command as text. If you prefer to use less, more, most, view, sublime, or some other pager or program to view files, that's totally fine. Go ahead and do that. But if you want to provide a reproducible text describing some workflow in the terminal, you would have to also give the user a warning that the output may differ between what they read and what they see in their own terminal, depending on what pager is used and how it's configured.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/511232", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
511,233
I'm trying to add two doubles:

y1=0.17580197E-01
y2=0.11979236E-02
sum=`echo $y1+$y2 | bc -l`

The above script gives me sum = -2.704405652. How do I resolve this issue?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/511233", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/346108/" ] }
511,240
I have a range of .html files, all containing spaces in their names. What I need is to locate the file using find in conjunction with grep, and if a match is found, I basically just want xargs to open it in view mode using less. Nothing fancy. This is what I tried:

pietro@scum:~/Downloads$ find | grep 'Register\sfor\srehousing.html' | xargs -trE less
echo ./Register for rehousing.html ./Register for rehousing.html
pietro@scum:~/Downloads$ find | grep 'Register\sfor\srehousing.html' | xargs -0 less
./Register for rehousing.html: No such file or directory

I have gone through the xargs man page but I just can't figure out why xargs doesn't pick up the filename + path to the file and execute the less command. The file does exist and here is how it looks:

pietro@scum:~/Downloads$ ls -l | grep 'Register\sfor\srehousing.html'
-rw-rw-r-- 1 pietro pietro 764611 Mar 14 14:44 Register for rehousing.html
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/511240", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/95581/" ] }
511,284
I have a file with ~3 million rows; here are the first few lines of my file:

head out.txt
NA
NA
NA
NA
NA
gene85752,gene85753
gene85752,gene85753
gene85752,gene85753
gene85752,gene85753
gene85752,gene85753
gene85752,gene85753
gene85752,gene85753,gene85754
gene85752,gene85753,gene85754
gene85752,gene85753,gene85754
gene85752,gene85753,gene85754
gene85752,gene85753
gene85752,gene85753
gene85752,gene85753
gene85752,gene85753
gene85752,gene85753
gene85752,gene85753
gene85752,gene85753
gene85752,gene85753
gene85752,gene85753
gene85752
gene85752

For those rows that are separated by ",", I want to keep everything after the first comma and before the second comma. This is my desired output:

outgood.txt
NA
NA
NA
NA
NA
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85753
gene85752
gene85752
Since cut prints non-delimited lines by default, the following works:

cut -f2 -d, file
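A quick check against the sample data from the question (out.txt):

$ cut -f2 -d, out.txt | head -n 7
NA
NA
NA
NA
NA
gene85753
gene85753

Lines without a comma (the NA and bare gene85752 rows) pass through unchanged, because cut only splits lines that actually contain the delimiter; add -s if you would rather suppress them.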
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/511284", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/216256/" ] }
511,289
I am trying to install Ubuntu Server but it always gets stuck at the last stage, while updating GRUB. I cancelled the process and rebooted my system; it took me to the GRUB command prompt grub>. I tried a manual boot from the prompt:

root=(hd1,gpt5)    # Ubuntu root partition
linux /boot/vmlinuz-something- root=/dev/sda5
initrd /boot/initramfs-something-
boot <enter>

After some boot messages scrolled by, it dropped me into the BusyBox v1.27.2 (Ubuntu 1:1.27.2-2ubuntu3) built-in shell (ash) with the initramfs> prompt. From here I did an exit, and it showed me a kernel panic! with the following two hints:

mount: mounting /sys on /root/sys failed: No such file or directory
mount: mounting /proc on /root/proc failed: No such file or directory
Not a solution, but a couple of workarounds. Apparently, that's a bug in os-prober. I personally tried the second one and it works! To quote from the link:

Workaround 1 (proaction): When you reach the "Install the GRUB boot loader to the master boot record?" prompt (in my case, no such prompt appeared, but I figured out the timing of the grub-install), switch to a console (Ctrl+Alt+[F2-F6]) and remove this file:

rm /target/etc/grub.d/30_os-prober

This will prevent update-grub from running os-prober, which should avoid running into this issue. Of course, other operating systems won't be listed, but at least that should prevent the installation process from getting entirely stuck. I've tested this successfully in a VM with guided (unencrypted) LVM, and standard plus ssh tasks (which is how I initially reproduced your issue).

Workaround 2 (reaction): Otherwise, once the process is stuck, locate the process identifier (PID) in the first column of the ps output:

ps | grep 'dmsetup create'

then kill this dmsetup process. With your output above, that'd be:

kill 19676

(Tested successfully in a VM with the same setup/choices as above.) KiBi
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/511289", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/332496/" ] }
511,305
I'm trying to make this quoting work, but no success:

export perl_script='$| = 1;s/\n/\r/g if $_ =~ /^AV:/;s/Saving state/\nSaving state/'
mpv="command mpv"
mpvOptions='--geometry 0%:100%'
args=("$@")

$ sh -c "$mpv $mpvOptions ${args[*]} 2>&1 | perl -p -e $perl_script | tee ~/mpv_all.log"
syntax error at -e line 1, at EOF
Execution of -e aborted due to compilation errors.
sh: 1: =: not found
sh: 1: s/n/r/g: not found
sh: 1: s/Saving: not found

So I tried this:

$ sh -c "$mpv $mpvOptions ${args[*]} 2>&1 | perl -p -e \"perl_script\" | tee ~/mpv_all.log"
Unknown regexp modifier "/h" at -e line 1, at end of line
Execution of -e aborted due to compilation errors.

Quoting is such a pain in the neck.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/511305", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135038/" ] }
511,395
I wanted to take a look at the man page of pthread_mutex_trylock. By typing man pthread_mutex_trylock, I got No manual entry for pthread_mutex_trylock. Then I saw a post suggesting sudo apt-get install manpages-posix manpages-posix-dev. After that I see a description like:

PTHREAD_MUTEX_LOCK(3POSIX)   POSIX Programmer's Manual   PTHREAD_MUTEX_LOCK(3POSIX)

PROLOG
    This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.

What's the difference between this POSIX Programmer's Manual and the Linux Programmer's Manual that I usually see? What does it mean by saying:

The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.

So where can I find the man page for the Linux implementation of pthread_mutex_trylock? Can I use pthread_mutex_trylock on my system? I am using Ubuntu.
It says that because there's no guarantee that the POSIX manuals (for anything) correspond to the actual implementation of the corresponding thing on your particular system. To get the manual for pthread_mutex_trylock(), install the manual for the library that implements the interface. On Ubuntu systems, the required manual seems to be part of the glibc-doc package (found by searching for the function name on the Ubuntu package search pages). The POSIX manuals are definitely not useless. The local Linux interface should be compatible with the interface described in the POSIX manual, but the implementation-specific manual may also mention caveats, Linux-specific implementation details and extensions, and similar non-POSIX functions. The POSIX manuals become extra important if you are concerned about the portability of your code to other Unix systems, in which case you would want to avoid relying on Linux-specific extensions to the POSIX specification.
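For example, on Ubuntu (the package names come from the answer and the question; the exact man sections are my assumption):

sudo apt-get install glibc-doc
man 3 pthread_mutex_trylock        # Linux/glibc manual page
man 3posix pthread_mutex_trylock   # POSIX page from manpages-posix-dev

Both pages can coexist; man simply picks the section you ask for.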
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/511395", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/208590/" ] }
511,415
I've got a problem: I need to replace " with \" and \ with \\, except for the " characters that are part of the JSON. test.txt input file:

"a" "b" {"1":"female","2":"197312","3":"359","4":"201109","5":"mail"}\uff08\u524d\u5bfe\u5fdc

I want the output to be like:

\"a\" \"b\" {"1":"female","2":"197312","3":"359","4":"201109","5":"mail"}\\uff08\\u524d\\u5bfe\\u5fdc
To be more robust, you could do a full JSON parse:

perl -0777 -pe '
  s@(".*?"|\\)|(\{(?:"(?:\\.|[^"])*+"|(?2)|[^"{}]++)*+\})|[^{}\\"]+@
    $1 ? $1 =~ s/["\\]/\\$&/gr : $&@gse'

Which on an input like

"a" "b" "c{d"
{"1":"female","2":"197312","3":"359","4":"201109","5":"mail"}
{
  "1": {"x": "y"}
  "2": "}}}"
  "3": ["{\"x", "}"]
}
\uff08\u524d\u5bfe\u5fdc

gives

\"a\" \"b\" \"c{d\"
{"1":"female","2":"197312","3":"359","4":"201109","5":"mail"}
{
  "1": {"x": "y"}
  "2": "}}}"
  "3": ["{\"x", "}"]
}
\\uff08\\u524d\\u5bfe\\u5fdc

You may want to clarify what you want to do if the input contains "foo\"bar" or "foo\nbar" outside of JSON objects.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/511415", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/346264/" ] }
511,443
I'm running the sleep command in a terminal using screen in detached mode. Once screen immediately returns, I run the ps command to verify that sleep is running:

$ screen -d -m 'sleep 2m'
[raj@localhost ~]$ ps
  PID TTY          TIME CMD
22795 pts/0    00:00:00 bash
22869 pts/0    00:00:00 ps

But the output doesn't show sleep. What am I doing wrong here?
This was confusing to me initially as well. I then re-read the local screen man page for the SYNOPSIS (the online man page does not give a synopsis) and noticed that it said:

screen [ -options ] [ cmd [ args ] ] ...

which led me to believe that it wanted to see the cmd and args as independent arguments. Since you gave that first argument as a quoted value - 'sleep 2m' - it tried to execute a command named (exactly) 'sleep 2m', as opposed to what you really wanted, which was sleep with its own argument of 2m. The screen command exited successfully (in my testing), but it did not successfully execute your command. Use, instead:

screen -d -m sleep 2m

Instead of ps, which will only show processes associated with the current terminal (of which the SCREEN and related processes are not), use:

ps x

which will show it:

$ ps x
  PID TTY      STAT   TIME COMMAND
  # ...
 7514 pts/1    Ss     0:00 -bash
 7761 ?        Ss     0:00 SCREEN -d -m sleep 2m
 7762 pts/2    Ss+    0:00 sleep 2m
 7880 pts/1    R+     0:00 ps x
  # ...
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/511443", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/253851/" ] }
511,464
$ file1.txt
12345,865221,APPLE,ZZ,QQ,DD,GG,APPLE-FRUIT
12346,865222,MANGO,ZZ,QQ,DD,GG,MANGO-FRUIT
12347,865222,GRAPE,ZZ,QQ,DD,GG,GRAPE-FRUIT

$ file2.txt
APPLE-FRUIT,10KG
MANGO-FRUIT,12KG

I have two files as mentioned above. I need to create a new file, as given below.

$ Output
12345,865221,APPLE,ZZ,QQ,DD,GG,APPLE-FRUIT,10KG
12346,865222,MANGO,ZZ,QQ,DD,GG,MANGO-FRUIT,12KG
12347,865222,GRAPE,ZZ,QQ,DD,GG,GRAPE-FRUIT

One method I worked out was using a while loop: I read each line of file2 and compare its first column with the 8th column of file1. This way I am able to get the desired output. I am looking for a simple awk command to achieve the same.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/511464", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/230432/" ] }
511,467
I am executing this command:

tail -f rest.log | while read LOGLINE
do
    [[ "${LOGLINE}" == *"Finished building"* ]] && pkill -P $$ tail
done

It will read a log file until the string 'Finished building' appears in the file. If so, it will kill the tail command. Sometimes the string will never appear. For this case I want to quit the loop after a certain time, kind of a timeout. Let's say it should stop searching for the string after 5 minutes. How can I achieve this? I tried to use timeout in front of the first tail command, which did not work for me.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/511467", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/346305/" ] }
511,510
Using Ubuntu 16.04. I tried to set Python 3.6 as the default for the python3 command. I found what seemed to be the answer and quickly copy-pasted the following lines without carefully reading:

$ sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.6 1
update-alternatives: using /usr/bin/python3.6 to provide /usr/bin/python (python) in auto mode
$ sudo update-alternatives --set python /usr/bin/python3.6

This is the result:

$ python3
Python 3.5.2 (default, Nov 12 2018, 13:43:14)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
$ python
Python 3.6.8 (default, Dec 24 2018, 19:24:27)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.

A friend of mine tried to set it back like this:

$ sudo update-alternatives --install /usr/bin/python3.6 python /usr/bin/python 1
update-alternatives: renaming python link from /usr/bin/python to /usr/bin/python3.6

And this was the result:

$ python
zsh: command not found: python

And now anything linked with Python 3.6 gets the error "Too many levels of symbolic links", as in this example:

$ sudo update-alternatives --config python
update-alternatives: warning: alternative /usr/bin/python (part of link group python) doesn't exist; removing from list of alternatives
update-alternatives: error: cannot stat file '/usr/bin/python3.6': Too many levels of symbolic links

The BIG problem is that if you close the terminal like my friend did, then the terminal app stops working altogether. He now has to reinstall Ubuntu. And I am in the same situation, just that I still DID NOT CLOSE my terminal and (for now) everything works fine. How can I reverse the symbolic links?
The Python packages don't use alternatives; to restore a working setup:

sudo update-alternatives --remove-all python
cd /usr/bin
sudo ln -sf python2.7 python
sudo ln -sf python3.5 python3

You'll probably have to re-install your Python 3.6 package since it appears you've overwritten the python3.6 binary.
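A sketch of the re-install step, assuming Python 3.6 on Ubuntu 16.04 came from a PPA such as deadsnakes (the question does not say where it was installed from):

sudo rm -f /usr/bin/python3.6              # drop the looping symlink first
sudo apt-get install --reinstall python3.6 # restores the real binary

After that, python, python3 and python3.6 should each resolve to a real interpreter again.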
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/511510", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/346344/" ] }
511,517
What is the use of the -o option for the useradd command? What is a good use case of this option?
useradd's -o option, along with its -u option, allows you to create a user with a non-unique user id. One use case for that is to create users with identical privileges (since they share the same user id) but different passwords and, if appropriate, home directories and shells. This can be useful for service accounts (although typically you'd achieve the same result using sudo nowadays); it can also be useful for rescue purposes with a root-equivalent account using a statically-linked shell such as sash.
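A hypothetical example of the rescue-account use case (the account name and shell path are assumptions; sash comes from the package of the same name):

sudo useradd -o -u 0 -g 0 -d /root -s /bin/sash rescue
sudo passwd rescue

id rescue then reports uid=0, i.e. the same identity as root, so the two names share privileges while keeping separate passwords and shells.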
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/511517", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/196137/" ] }
511,524
Can anyone suggest what would be the best way to trigger a certain key combo with another one? Say, I'd like to send CTRL+= while pressing CTRL+[. Basically I want to work around zoom for Google Chrome.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/511524", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/276380/" ] }
511,636
I am trying to process a large file-set, appending specific lines into the "test_result.txt" file - I achieved it -not very elegantly- with the following code.

for i in *merged; do
    while read -r lo; do
        if [[ $lo == *"ID"* ]]; then
            echo $lo >> test_result.txt
        fi
        if [[ $lo == *"Instance"* ]]; then
            echo $lo >> test_result.txt
        fi
        if [[ $lo == *"NOT"* ]]; then
            echo $lo >> test_result.txt
        fi
        if [[ $lo == *"AI"* ]]; then
            echo $lo >> test_result.txt
        fi
        if [[ $lo == *"Sitting"* ]]; then
            echo $lo >> test_result.txt
        fi
    done < $i
done

However, I am trying to size-it-down using an array - which resulted in quite an unsuccessful attempt.

KEYWORDS=("ID" "Instance" "NOT" "AI" "Sitting")
KEY_COUNT=0
for i in *merged; do
    while read -r lo; do
        if [[$lo == ${KEYWORDS[@]} ]]; then
            echo $lo >> ~/Desktop/test_result.txt && KEY_COUNT="`expr $KEY_COUNT + 1`"
        fi
    done < $i
done
It looks like you want to get all the lines that contain at least one out of a set of words, from a set of files. Assuming that you don't have many thousands of files, you could do that with a single grep command:

grep -wE '(ID|Instance|NOT|AI|Sitting)' ./*merged >outputfile

This would extract the lines matching any of the words listed in the pattern from the files whose names match *merged. The -w with grep ensures that the given strings are not matched as substrings (i.e. NOT will not be matched in NOTICE). The -E option enables the alternation with | in the pattern. Add the -h option to the command if you don't want the names of the files containing matching lines in the output.

If you do have many thousands of files, the above command may fail due to expanding to a too long command line. In that case, you may want to do something like

for file in ./*merged; do
    grep -wE '(ID|Instance|NOT|AI|Sitting)' "$file"
done >outputfile

which would run the grep command once on each file, or,

find . -maxdepth 1 -type f -name '*merged' \
    -exec grep -wE '(ID|Instance|NOT|AI|Sitting)' {} + >outputfile

which would do as few invocations of grep as possible with as many files as possible at once.

Related: Why is using a shell loop to process text considered bad practice?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/511636", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/346469/" ] }
511,695
I have a very long text file (from here ) which should contain 6 hexadecimal characters then a 'break' (which appears as one character and doesn't seem to show up properly in the code markdown below) followed by a few words:

00107B Cisco Systems, Inc
00906D Cisco Systems, Inc
0090BF Cisco Systems, Inc
5080 Cisco Systems, Inc
0E+00 ASUSTek COMPUTER INC.
000C6E ASUSTek COMPUTER INC.
001BFC ASUSTek COMPUTER INC.
001E8C ASUSTek COMPUTER INC.
0015F2 ASUSTek COMPUTER INC.
2354 ASUSTek COMPUTER INC.
001FC6 ASUSTek COMPUTER INC.
60182E ShenZhen Protruly Electronic Ltd co.
F4CFE2 Cisco Systems, Inc
501CBF Cisco Systems, Inc

I've done some looking around and can't see something which would work in this situation. My question is, how can I use grep / sed / awk / perl to delete all lines of this text file which do not start with exactly 6 hexadecimal characters and then a 'break'?

P.S. For bonus points, what's the best way of sorting the file alphabetically and numerically according to the hex characters (i.e. 000000 -> FFFFFF)? Should I just use sort?
$ awk '$1 ~ /^[[:xdigit:]]{6}$/' file
00107B Cisco Systems, Inc
00906D Cisco Systems, Inc
0090BF Cisco Systems, Inc
000C6E ASUSTek COMPUTER INC.
001BFC ASUSTek COMPUTER INC.
001E8C ASUSTek COMPUTER INC.
0015F2 ASUSTek COMPUTER INC.
001FC6 ASUSTek COMPUTER INC.
60182E ShenZhen Protruly Electronic Ltd co.
F4CFE2 Cisco Systems, Inc
501CBF Cisco Systems, Inc

This uses awk to extract the lines that contain exactly six hexadecimal digits in the first field. The [[:xdigit:]] pattern matches a hexadecimal digit, and {6} requires six of them. Together with the anchoring to the start and end of the field with ^ and $ respectively, this will only match on the wanted lines. Redirect to some file to save it under a new name. Note that this seems to work with GNU awk (commonly found on Linux), but not with awk on e.g. OpenBSD, or mawk.

A similar approach with sed:

$ sed -n '/^[[:xdigit:]]\{6\}\>/p' file
00107B Cisco Systems, Inc
00906D Cisco Systems, Inc
0090BF Cisco Systems, Inc
000C6E ASUSTek COMPUTER INC.
001BFC ASUSTek COMPUTER INC.
001E8C ASUSTek COMPUTER INC.
0015F2 ASUSTek COMPUTER INC.
001FC6 ASUSTek COMPUTER INC.
60182E ShenZhen Protruly Electronic Ltd co.
F4CFE2 Cisco Systems, Inc
501CBF Cisco Systems, Inc

In this expression, \> is used to match the end of the hexadecimal number. This ensures that longer numbers are not matched. The \> pattern matches a word boundary, i.e. the zero-width space between a word character and a non-word character.

For sorting the resulting data, just pipe the result through sort, or sort -f if your hexadecimal numbers use both upper and lower case letters.
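Combining the two steps, the bonus sort could be done in one pipeline, for example:

awk '$1 ~ /^[[:xdigit:]]{6}$/' file | sort -f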
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/511695", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/324768/" ] }
511,740
Recently I started using tmux inside my terminal on my Mac. However now whenever I'm in a tmux session and I scroll up or down using my mouse, it scrolls through my command history instead of scrolling through my terminal pane. How do I disable this feature and make mouse scrolling go back to the default behavior?
Run this command: $ tput rmcup What happened most likely is that you were, either locally or remotely, running a command (like vim , or top , or many programs that use libraries similar to ncurses ) that uses the terminal's "alternate screen" mode. When this is active, many terminal programs helpfully remap the scrolling action on the mouse to arrow keys, because generally scrolling the local display is less than helpful. If this application terminated ungracefully, your terminal may still think it's in that mode. This command resets this, and should re-enable your ability to scroll. I'm guessing you're using iTerm?
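If this bites you regularly, a small helper alias (the name fixterm is made up) can bundle the usual terminal-state resets:

# leave the alternate screen, restore the cursor, reset tty settings
alias fixterm='tput rmcup; tput cnorm; stty sane'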
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/511740", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/345760/" ] }
511,751
This cron task doesn't work:

[main_usr@localhost ~]$ sudo crontab -l -u root
0 * * * * /home/main_usr/cron_test1.sh > /home/main_usr/cron_test1_out.sh.out 2>&1
[main_usr@localhost ~]$

And

$ ls -al cron_test1.sh
-rwxr-xr-x 1 main_usr main_usr 293 Apr 8 05:12 cron_test1.sh

As you can see, there's a new line in the cron tasks. And the file exists and is executable. The task was created a day ago. It should've run once an hour. Nonetheless, 'cron_test1_out.sh.out' hasn't been created. Why?
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/511751", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/345112/" ] }
511,797
I have the following unit file:

[Unit]
Description=Panel for Systemd Services
After=network.target

[Service]
User=pysd
Group=pysd
PermissionsStartOnly=true
WorkingDirectory=/opt/pysd
ExecStartPre=/bin/mkdir /run/pysd
ExecStartPre=/bin/chown -R pysd:pysd /run/pysd
ExecStart=/usr/local/bin/gunicorn app:app -b 127.0.0.1:8100 --pid /run/pysd/pysd.pid --workers=2
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID
ExecStopPost=/bin/rm -rf /run/pysd
PIDFile=/run/pysd/pysd.pid
PrivateTmp=true

[Install]
WantedBy=multi-user.target
Alias=pysd.service

I would like to create an environment variable with ExecStartPre and then incorporate this variable into ExecStart. To be more specific, I want to create an environment variable GUNICORN_SERVER before running the ExecStart, and then use this environment variable for the option -b at ExecStart. I tried something like ExecStartPre=/bin/bash -c 'export GUNICORN_SERVER=127.0.0.1:8100', but no environment variable was created. How do I achieve this scenario?
You cannot use ExecStartPre to directly set the environment for other ExecStartPre or ExecStart commands - those are all separate processes. (Indirectly, by saving to a file and reading it or something, sure.) Systemd has two ways to set the environment: Environment= and EnvironmentFile=. There are examples of both in man 5 systemd.exec. These affect all processes started by the service, including those for ExecStartPre. If these variables don't have to be set dynamically, those are a good option:

Environment=GUNICORN_SERVER=127.0.0.1:8080

However, if you need to dynamically set the variables, the manpage says this about EnvironmentFile:

The files listed with this directive will be read shortly before the process is executed (more specifically, after all processes from a previous unit state terminated. This means you can generate these files in one unit state, and read it with this option in the next).

So, one option would be to write it to a file in ExecStartPre, and have systemd read that file as part of EnvironmentFile:

EnvironmentFile=/some/env/file
ExecStartPre=/bin/bash -c 'echo foo=bar > /some/env/file'
ExecStart=/some/command  # sees bar as value of $foo

Another option would be to use a shell in ExecStart:

ExecStart=/bin/sh -c 'export GUNICORN_SERVER=127.0.0.1:8080; exec /usr/local/bin/gunicorn ...'
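Applied to the unit in the question, a minimal sketch of the file-based variant might look like this (the env file path is arbitrary; the leading - on EnvironmentFile tells systemd to ignore the file while it doesn't exist yet, and systemd itself expands ${GUNICORN_SERVER} in ExecStart, so no shell is needed there — the echo line must come after the existing mkdir/chown ExecStartPre lines):

EnvironmentFile=-/run/pysd/env
ExecStartPre=/bin/sh -c 'echo GUNICORN_SERVER=127.0.0.1:8100 > /run/pysd/env'
ExecStart=/usr/local/bin/gunicorn app:app -b ${GUNICORN_SERVER} --pid /run/pysd/pysd.pid --workers=2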
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/511797", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/164969/" ] }
511,827
Many people use oneliners and scripts containing code along the lines cat "$MYFILE" | command1 | command2 > "$OUTPUT" The first cat is often called "useless use of cat" because technically it requires starting a new process (often /usr/bin/cat ) where this could be avoided if the command had been < "$MYFILE" command1 | command2 > "$OUTPUT" because then shell only needs to start command1 and simply point its stdin to the given file. Why doesn't the shell do this conversion automatically? I feel that the "useless use of cat" syntax is easier to read and shell should have enough information to get rid of useless cat automatically. The cat is defined in POSIX standard so shell should be allowed to implement it internally instead of using a binary in path. The shell could even contain implementation only for exactly one argument version and fallback to binary in path.
"Useless use of cat " is more about how you write your code than about what actually runs when you execute the script. It's a sort of design anti-pattern , a way of going about something that could probably be done in a more efficient manner. It's a failure in understanding of how to best combine the given tools to create a new tool. I'd argue that stringing several sed and/or awk commands together in a pipeline also sometimes could be said to be a symptom of this same anti-pattern. Fixing instances of "useless use of cat " in a script is a primarily matter of fixing the source code of the script manually. A tool such as ShellCheck can help with this by pointing out the obvious cases: $ cat script.sh#!/bin/shcat file | cat $ shellcheck script.shIn script.sh line 2:cat file | cat ^-- SC2002: Useless cat. Consider 'cmd < file | ..' or 'cmd file | ..' instead. Getting the shell to do this automatically would be difficult due to the nature of shell scripts. The way a script executes depends on the environment inherited from its parent process, and on the specific implementation of the available external commands. The shell does not necessarily know what cat is. It could potentially be any command from anywhere in your $PATH , or a function. If it was a built-in command (which it may be in some shells), it would have the ability to reorganise the pipeline as it would know of the semantics of its built-in cat command. Before doing that, it would additionally have to make assumptions about the next command in the pipeline, after the original cat . Note that reading from standard input behaves slightly differently when it's connected to a pipe and when it's connected to a file. A pipe is not seekable, so depending on what the next command in the pipeline does, it may or may not behave differently if the pipeline was rearranged (it may detect whether the input is seekable and decide to do things differently if it is or if it isn't, in any case it would then behave differently). This question is similar (in a very general sense) to " Are there any compilers that attempt to fix syntax errors on their own? " (at the Software Engineering StackExchange site), although that question is obviously about syntax errors, not useless design patterns. The idea about automatically changing the code based on intent is largely the same though.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/511827", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20336/" ] }
511,837
I need to extract an ID from the output of another command. Currently my extracting command looks like:

someID=$(command | grep -oP '(?:^Successfully\sbuilt\s)([\da-z]{12}$)' | grep -oP '([a-z\d]{12})')

Example command output:

 ---> Using cache
 ---> 9b4624927fa6
Successfully built 9b4624927fa6

Expected result: 9b4624927fa6 — the ID extracted from the line Successfully built 9b4624927fa6. How could I merge those two grep statements into a single one?
A slight modification of your first grep works for me:

$ grep -oP '^Successfully\sbuilt\s\K[\da-z]{12}$' example-output
9b4624927fa6

\K in PCRE resets the match start:

The escape sequence \K causes any previously matched characters not to be included in the final matched sequence.

It's similar to a zero-width positive look-behind assertion (?<=Successfully... ).
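If your grep lacks -P (it's a PCRE feature, not available everywhere), a sed equivalent of the same extraction — a sketch using a basic regular expression instead of lookarounds — could be:

someID=$(command | sed -n 's/^Successfully built \([0-9a-z]\{12\}\)$/\1/p')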
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/511837", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/281348/" ] }
511,849
I'm trying to implement a TCP listener that accepts connections and then simply drops all of its input (it's for a test harness). Right now, I'm using socat - tcp-listen:2003,fork,reuseaddr , but that prints the input to stdout. I don't want that. I can't redirect the output to /dev/null , because I'm doing this in the alpine/socat docker container , and it's not actually using a shell, so redirection doesn't work. If I try to use socat /dev/null tcp-listen:2003,fork,reuseaddr , then any connection is dropped immediately, presumably because socat can't read from /dev/null . What's the best way to implement a TCP listener that simply drops everything on the floor?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/511849", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46851/" ] }
511,858
I need to compare the contents of a file located in dir A with the actual files in a different directory. E.g., directory A has a file test.txt; items mentioned in test.txt and not present in directory B should be highlighted. I'm doing something like this but it's not working; it only searches for the last word from the file test.txt:

#!/bin/sh
IFS=$'\n'
dirA=$1
dirB=$2
for x in $(cat < "$1"); do
    base_name="${x##/}"
    set -- "$dirB"/"$base_name"*
    if [ -e "$1" ]; then
        for y; do
            echo "$base_name found in B as ${y##*/}"
        done
    else
        echo "$x not found in B"
    fi
done
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/511858", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/346529/" ] }
511,983
I'm in the middle of reading the whole Linux 5.0.7 source code , and I've noticed something strange. I'll refer to linux as the parent directory here, correct me if the community uses some other naming convention in the literature. In the file linux/include/asm-generic/param.h , the value CONFIG_HZ is used. The value is not defined in the previous lines, and the only included file is uapi/asm-generic/param.h . I believe this refers to linux/include/uapi/asm-generic/param.h , again, correct me if I'm wrong. In that file, no such value as CONFIG_HZ is ever defined. Now, in your average C program, this would cause a bug. We have 3 options here: I misunderstood something and linux/include/asm-generic/param.h actually includes another file where the value IS defined. This is a bug, and I am a genius for discovering it (least likely option). There is some "magic" going on, like some macros that Linux defines before, or some files the kernel includes before including linux/include/asm-generic/param.h where the value is defined, so that when linux/include/asm-generic/param.h is called the value is already defined. In this case, please point me to what this file is. If none of these is true, what is the reason why this is a correct C program?
Like other CONFIG_ values, CONFIG_HZ is a configuration setting; you’ll find it in kernel/Kconfig.hz, along with various arch-specific overrides in other Kconfig files. Its value is determined during the build and stored in a generated configuration file, include/generated/autoconf.h. The latter is included by the kernel’s build command. To see this in action, pick a file which includes asm/param.h, and build its post-processed equivalent, verbosely; for example

make drivers/atm/suni.i V=1

At some point in the build you’ll see

gcc -E -Wp,-MD,drivers/atm/.suni.i.d -nostdinc \
    -isystem /usr/lib/gcc/x86_64-redhat-linux/8/include \
    -I./arch/x86/include -I./arch/x86/include/generated \
    -I./include -I./arch/x86/include/uapi \
    -I./arch/x86/include/generated/uapi -I./include/uapi \
    -I./include/generated/uapi \
    -include ./include/linux/kconfig.h \
    -include ./include/linux/compiler_types.h ... \
    -DMODULE -DKBUILD_BASENAME='"suni"' -DKBUILD_MODNAME='"suni"' \
    -o drivers/atm/suni.i drivers/atm/suni.c

and you can see the result in drivers/atm/suni.i, with the expansion of HZ and CONFIG_HZ. The -include ./include/linux/kconfig.h directive ensures that the kernel configuration is always included. include/linux/kconfig.h includes generated/autoconf.h.
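If you only want the configured value rather than the whole preprocessing run, it can be read directly from the build configuration (run from a configured kernel source/build tree):

grep 'CONFIG_HZ' .config
grep 'CONFIG_HZ' include/generated/autoconf.h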
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/511983", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/346765/" ] }
511,990
From https://dzone.com/articles/java-8-how-to-create-executable-fatjar-without-ide

tar xf ExecutableOne.jar

but why do I get

$ tar xf ExecutableOne.jar
tar: This does not look like a tar archive
tar: Skipping to next header
tar: Exiting with failure status due to previous errors

Thanks.
A jar file is a Java ARchive, not a tarball that can be extracted. You can install jar or java and extract the files (or use 7-Zip), but you can't extract them with tar.
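Since jar files use the zip format, either of these should unpack it (assuming unzip or a JDK is available):

unzip ExecutableOne.jar -d ExecutableOne/
# or, with a JDK installed:
jar xf ExecutableOne.jar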
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/511990", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
512,071
I stumbled across a weird behaviour today. After adding a user to a new group like so:

# gpasswd -a test myuser

then connecting to a new bash session, here are the results for groups and groups myuser:

myuser@mycomputer$ groups
wheel myuser
myuser@mycomputer$ groups myuser
wheel myuser test

Only if I reboot will the output of groups be the same as groups myuser, though not necessarily showing the groups in the same order. So my question is simple: why?
Because changes to group membership only take effect after starting a new login shell. Starting a new non-login interactive shell session (which is what you get when you open a new terminal) is irrelevant. So, when you run groups, that prints the groups your user is currently in. However, those were set up when your user first logged in and cannot be changed until you log in again. Therefore, groups doesn't include your new group. On the other hand, when you run groups myuser, the system doesn't look for the groups the current user belongs to at the moment, it looks up the groups that the user myuser belongs to, which it gets by reading the settings file (/etc/group, presumably). Since your user is set up to belong to the new group in /etc/group, this command shows that as well, even though you're not currently in that group since you haven't logged in again.
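If you want the new group without logging out, you can start a fresh shell by hand; for example:

newgrp test          # new shell with "test" as the current group
# or re-run a full login shell (will prompt for your password):
exec su -l "$USER"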
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/512071", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/346850/" ] }
512,075
Question: Is it possible to "add" the contents of a 2nd folder to the current folder's file contents? (In essence I was asking from a full Unix perspective, but maybe nginx would work too, as the goal is a web server.) Example: I have /pub where our server software lives. It has index.php. But I also have /static/files where test.php lives. I do not want to place test.php in /pub. What I want is that if I visit the nginx webserver with root /pub, it sees both test.php and index.php in the root. Why? Because nowadays I have added a lot of validation files to our server that need to exist in the root. Files like bing_dfsfsfsdfs.html and google_aaddasdjsad.html. It is a multiserver setup so the files have amounted to quite a few. I was thinking of creating a folder /static/files where I can store all these single files ... and keep the /pub folder clean with the software only.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/512075", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/170217/" ] }
512,139
I want to create a symlink ~/.pm2/logs -> /opt/myapp/log When I run ln -sFf /opt/myapp/log ~/.pm2/logs I get a symlink ~/.pm2/logs/log -> /opt/myapp/log which is not what I want. I'd prefer a POSIX-compatible solution if possible.
You already have a directory at ~/.pm2/logs. Since that directory exists, the symbolic link is put inside it. If you want ~/.pm2/logs to be a symbolic link rather than a directory, you will have to remove or rename the existing directory first.
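Concretely, assuming the existing logs directory is empty or disposable:

rmdir ~/.pm2/logs            # or: mv ~/.pm2/logs ~/.pm2/logs.bak
ln -s /opt/myapp/log ~/.pm2/logs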
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/512139", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/163741/" ] }
512,172
How can I prevent a bash script from closing, i.e. make it block itself from exiting? Is there a command that executes a command if there is no action?
From your other (now deleted) question, it seems you want to run xterm -e your-script And the terminal emulator window not to go away after the script finishes. For that, you could add a command that sleeps forever at the end of your script, or in an EXIT trap. See How to do nothing forever in an elegant way? or Is there a Linux command that does nothing, but never exits? for some options. trap 'sleep infinity' EXIT Would cause the shell to run sleep infinity upon exit, and so never exit. With those sleep implementations that don't support infinity , replace with a large number, like sleep 2147483647 (the largest 32 bit signed integer which should be safe on most systems and is about 68 years). With xterm , you can also use its -hold option which is designed for that.
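For the terminal-emulator case that prompted this, the xterm option is the simplest; for example (your-script standing in for whatever you launch):

xterm -hold -e your-script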
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/512172", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/346939/" ] }
512,173
Issue: The passphrase is repeatedly being requested when I pass show <name of credential>. Doing some Googling, I found you can do that with gpg-preset-passphrase, however I'm not sure where to get it or if it is included in the gpg distribution. This is the current version of gpg I'm running. Does anyone know how to get gpg-preset-passphrase installed?

gpg --version
gpg (GnuPG) 2.0.22
libgcrypt 1.5.3
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Home: ~/.gnupg
Supported algorithms:
Pubkey: RSA, ?, ?, ELG, DSA
Cipher: IDEA, 3DES, CAST5, BLOWFISH, AES, AES192, AES256, TWOFISH, CAMELLIA128, CAMELLIA192, CAMELLIA256
Hash: MD5, SHA1, RIPEMD160, SHA256, SHA384, SHA512, SHA224
Compression: Uncompressed, ZIP, ZLIB, BZIP2
Looks like it was installed but just not directly accessible through the gpg-preset-passphrase command, but rather /usr/libexec/gpg-preset-passphrase:

/usr/libexec/gpg-preset-passphrase --version
gpg-preset-passphrase (GnuPG) 2.0.22
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
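For reference, a rough usage sketch — hedged, as details vary by GnuPG version: the keygrip below is a placeholder you would look up for your own key, and the agent must be configured to allow presetting:

echo "allow-preset-passphrase" >> ~/.gnupg/gpg-agent.conf
gpg-connect-agent reloadagent /bye
/usr/libexec/gpg-preset-passphrase --preset <KEYGRIP> <<< 'your-passphrase'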
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/512173", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90475/" ] }
512,191
I tried to implement the following line in my .bashrc:

alias ./my_exec='printf "foo"'

However, the alias doesn't work, and the following line appears:

bash: alias: `./my_exec': invalid alias name

I know that the zsh shell can make this work, but I wouldn't switch to it for this one thing. Is there a way I can make this alias work?
In bash-4.x

$ BASH_ALIASES[./my_exec]='echo yes'
$ ./my_exec
yes

According to the bash manpage, you cannot use / in an alias name:

The characters /, $, `, and = and any of the shell metacharacters or quoting characters listed above may not appear in an alias name.

When defining an alias via the alias name=val syntax, bash will refuse any alias name that contains any character defined by the regex:

[ \t\n&();<>|\\"'`$/]

See the legal_alias_name() function in its source code. Notice that the lack of = above is not an omission; the impossibility of using it in an alias name is simply an artifact of the syntax. But you can use some of those characters in an alias, by defining it indirectly via the BASH_ALIASES array:

$ BASH_ALIASES['/a=$']='echo yes'; /a=$
yes

This was "fixed" in bash-5.0 and you're no longer able to use / in alias names. But, for consolation, you can still use =:

bash-5.0-18-g36f2c406$ BASH_ALIASES[ef=g]='echo yess'
bash-5.0-18-g36f2c406$ ef=g
yess
bash-5.0-18-g36f2c406$ echo $ef
<nothing>

Alias names in the susv4 standard

3.10 Alias Name
In the shell command language, a word consisting solely of underscores, digits, and alphabetics from the portable character set and any of the following characters: !, %, ,, @.
Implementations may allow other characters within alias names as an extension.

So both bash and zsh (which allows / to be used in alias names directly) are within the standard.

Slashes and other funny chars in function names

In bash and zsh, a / can be used directly in a function name:

$ /bin/sh(){ echo no /bin/sh today; }
$ /bin/sh -c ls
no /bin/sh today

This is a non-portable extension; a standard shell is only required to support function names which contain ascii letters, digits and underscores, and don't start with a digit. In bash, a function name can be made up of any characters except $, with the condition that it doesn't contain only digits and within the constraints imposed by the function definition syntax. You can look at execute_intern_function() and check_identifier() for all the details. In zsh the all-digits constraint doesn't apply, and a function name can also be quoted/escaped in the definition:

zsh$ 666() echo "$0"; \$() echo "$0"
zsh$ 666; $
666
$
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/512191", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/346975/" ] }
512,204
I'm using Arch Linux, and have both Gnome and the i3 window manager installed. When running i3, I'm trying to initiate the composite manager Compton. But trying to do so results in the following error: $ compton[ 04/11/2019 22:32:36.443 register_cm FATAL ERROR ] Another composite manager is already running I think this means that Compton is already running, or Mutter is running, but I'm not sure which. Is there a command I can use to determine which composite manager is currently running?
You can use inxi.

inxi -Gxx | grep compositor

The output looks like this

alternate: ati,fbdev compositor: compton resolution: <xdpyinfo missing>

and you can see that Compton is currently being used as a compositor. With no compositor, there is no grep match. Switches:

-G Show graphics info (card(s), driver, display protocol (if available), display server, resolution, renderer, OpenGL version).
-xx Show extra, extra data. (With -G, show chip vendor:product ID for each video card; OpenGL compatibility version; compositor (experimental); alternate Xorg drivers.)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/512204", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/294686/" ] }
512,250
I have a log file. For every line with a specific number, I want to sum the last number of those lines. To grep and cut is no problem but I don't know how to sum the numbers. I tried some solutions from StackExchange but didn't get them to work in my case. This is what I have so far:

grep "30201" logfile.txt | cut -f6 -d "|"

30201 are the lines I'm looking for. I want to sum the last numbers 650, 1389 and 945. The logfile.txt:

Jan 09 2016|09:15:17|30201|1|SL02|650
Jan 09 2016|09:15:18|43097|1|SL01|945
Jan 09 2016|09:15:19|28774|2|SB03|1389
Jan 09 2016|09:16:21|00788|1|SL02|650
Jan 09 2016|09:17:25|03361|3|SL01|945
Jan 09 2016|09:17:33|08385|1|SL02|650
Jan 09 2016|09:18:43|10234|1|SL01|945
Jan 09 2016|09:21:55|00788|1|SL02|650
Jan 09 2016|09:24:43|03361|3|SB03|1389
Jan 09 2016|09:26:01|30201|1|SB03|1389
Jan 09 2016|09:26:21|28774|2|SL02|650
Jan 09 2016|09:26:25|00788|1|SL02|650
Jan 09 2016|09:27:21|28774|2|SL02|650
Jan 09 2016|09:29:32|30201|1|SL01|945
Jan 09 2016|09:30:12|34032|1|SB03|1389
Jan 09 2016|09:30:15|08767|3|SL02|650
You can take help from paste to serialize the numbers in a format suitable for bc to do the addition:

% grep "30201" logfile.txt | cut -f6 -d "|"
650
1389
945
% grep "30201" logfile.txt | cut -f6 -d "|" | paste -sd+
650+1389+945
% grep "30201" logfile.txt | cut -f6 -d "|" | paste -sd+ | bc
2984

If you have grep with PCRE, you can do it with grep alone using positive lookbehind:

% grep -Po '\|30201\|.*\|\K\d+' logfile.txt | paste -sd+ | bc
2984

With awk alone:

% awk -F'|' '$3 == 30201 {sum+=$NF}; END{print sum}' logfile.txt
2984

-F'|' sets the field separator as |
$3 == 30201 {sum+=$NF} adds up the last field's values if the third field is 30201
END{print sum} prints the sum at the END
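If GNU datamash happens to be installed, it is yet another option for the summing step:

% grep "30201" logfile.txt | cut -f6 -d "|" | datamash sum 1
2984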
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/512250", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/305545/" ] }
512,268
Background: I got these messages when updating:

Info: org.gnome.Platform is end-of-life, with reason: GNOME 3.24 runtime is no longer supported as of 11th January 2019. Please ask your application developer to migrate to a supported platform.
Info: org.gnome.Platform.Locale is end-of-life, with reason: GNOME 3.24 runtime is no longer supported as of 11th January 2019. Please ask your application developer to migrate to a supported platform.

As this is a runtime, I now want to find out which app(s) is/are actually using this outdated runtime, so I can report it as a bug there. Basically, I just want to do what the message told me…

Question: So, given a name of a runtime (org.gnome.Platform) and a version of a runtime (3.24), how can I list all apps that use this runtime in this specific version? Also, please answer the simpler case without a specific version, so how can I list all apps that use a specific runtime (org.gnome.Platform)?

Tries so far: flatpak info --show-runtime <appid> shows the runtime of a specific app… But well… I can hardly do this manually for each app. flatpak list --app shows all apps, but no runtime information. Even flatpak list --app --columns=all does not show something specific. flatpak list --runtime shows all runtimes including the version (nice), but not which apps actually make use of it. I can use flatpak info org.gnome.Platform//3.24 to show information about the runtime, but I have still no idea what app uses it.
You can use flatpak list --app with the --app-runtime option: flatpak list --app --app-runtime org.gnome.Platform//3.30 If you uninstall those apps to clean-up some space, remember to also: flatpak uninstall --unused
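On flatpak versions without --app-runtime, a small loop over flatpak info (which the question already uses) gives the same mapping — a sketch:

for app in $(flatpak list --app --columns=application); do
    printf '%s\t%s\n' "$app" "$(flatpak info --show-runtime "$app")"
done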
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/512268", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/146739/" ] }
512,270
I have a bash script that sets a variable:

tmux setw @tmux_man_pane $pane

When the bash function that uses this variable is called for the first time, I get:

unknown option: @tmux_man_pane

I put this in .tmux.conf:

setw -g tmux_man_pane 0
setw -g tmux_cheat_pane 0

But still getting the error. Code for context:

tmux_man_page() {
    if [[ "$TERM" =~ 'screen' ]] && [[ -n "$TMUX" ]]; then
        pane=$(tmux showw -v @tmux_man_pane)
        output=$(tmux list-panes -t ${pane} 2>&1)
        if [[ $pane ]] && ! [[ -z "$pane" ]] && ! [[ $output =~ 'find pane' ]]; then
            tmux -q respawn-pane -k -t $pane man $1
        else
            tmux split-window -vf man $1
            pane=$(tmux display-message -p "#{pane_id}")
            tmux setw @tmux_man_pane $pane
            tmux select-pane -t {last}
        fi
    fi
}
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/512270", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166716/" ] }
512,331
PROBLEM: I have a shell program that I have been writing but I can't find out how to make sure that trap is trapping for cleanup at the end or because of an error in some command; it should clean up either way. Here is the code:

################################### Successful exit then this cleanup ###################################
successfulExit(){
    IFS=$IFS_OLD
    cd "$HOME" || { echo "cd $HOME failed"; exit 155; }
    rm -rf /tmp/svaka || { echo "Failed to remove the install directory!!!!!!!!"; exit 155; }
}
##########################################################################################################
####### Catch the program on successful exit and cleanup
trap successfulExit EXIT

QUESTION: How can I make trap only trap EXIT on program finish?

Here is the full script: debianConfigAwsome.5.3.sh
On entry to the EXIT trap, $? contains the exit status. That's the same value you'd find as $? after calling this script in another shell: either the argument passed to exit (truncated to the range 0–255) or the return status of the preceding command. In the case of an exit due to set -e, it's the return status of the command that triggered the implicit exit. Usually you should save $? and exit again with the same status.

cleanup () {
  if [ -n "$1" ]; then
    echo "Aborted by $1"
  elif [ $status -ne 0 ]; then
    echo "Failure (status $status)"
  else
    echo "Success"
  fi
}
trap 'status=$?; cleanup; exit $status' EXIT
trap 'trap - HUP; cleanup SIGHUP; kill -HUP $$' HUP
trap 'trap - INT; cleanup SIGINT; kill -INT $$' INT
trap 'trap - TERM; cleanup SIGTERM; kill -TERM $$' TERM
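A tiny usage sketch to see it fire (assuming the cleanup function and the four trap lines above are pasted in first):

#!/bin/bash
# ... cleanup() and the trap lines from above go here ...
echo "doing work"
exit 1    # the EXIT trap runs, prints "Failure (status 1)", and re-exits with 1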
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/512331", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36440/" ] }
512,362
I have a directory with over 400 GiB of data in it. I wanted to check that all the files can be read without errors, so a simple way I thought of was to tar it into /dev/null . But instead I see the following behavior: $ time tar cf /dev/null .real 0m4.387suser 0m3.462ssys 0m0.185s$ time tar cf - . > /dev/nullreal 0m3.130suser 0m3.091ssys 0m0.035s$ time tar cf - . | cat > /dev/null^Creal 10m32.985suser 0m1.942ssys 0m33.764s The third command above was forcibly stopped by Ctrl + C after having run for quite long already. Moreover, while the first two commands were working, activity indicator of the storage device containing . was nearly always idle. With the third command the indicator is constantly lit up, meaning extreme busyness. So it seems that, when tar is able to find out that its output file is /dev/null , i.e. when /dev/null is directly opened to have the file handle which tar writes to, file body appears skipped. (Adding v option to tar does print all the files in the directory being tar 'red.) So I wonder, why is this so? Is it some kind of optimization? If yes, then why would tar even want to do such a dubious optimization for such a special case? I'm using GNU tar 1.26 with glibc 2.27 on Linux 4.14.105 amd64.
It is a documented optimization : When the archive is being created to /dev/null , GNU tar tries to minimize input and output operations. The Amanda backup system, when used with GNU tar, has an initial sizing pass which uses this feature.
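If the underlying goal is just to verify that every file is readable, bypassing tar sidesteps the optimization entirely; for example:

find . -type f -exec cat {} + > /dev/null

Any read errors will then show up on stderr.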
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/512362", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27672/" ] }
512,448
I'm trying to find the most common word, sorted alphabetically. For example:

2 went
2 wonder
2 wont
3 began
3 little
3 moment
3 rabbit
3 thing
3 till
4 alice
4 bottle
4 came
4 sure
4 window

The output should be alice (it has the highest value and is the first word alphabetically). I'm doing sort -nr and can't figure out what to do next.
To sort by two fields, you need to tell sort what they are, and how to sort them; for example:

sort -k1,1nr -k2b < input

sorts by field 1 (-k1,1) numerically in reverse (descending) order; for lines where field 1 is equal, secondarily sort by the rest of the line (-k2) normally (lexically), not including the leading blanks (the spacing between the first and second field) in the sort key (b). The output on your sample input is:

4 alice
4 bottle
4 came
4 sure
4 window
3 began
3 little
3 moment
3 rabbit
3 thing
3 till
2 went
2 wonder
2 wont
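To reduce that to just the single winning word, as the question asked:

sort -k1,1nr -k2b < input | head -n 1 | awk '{print $2}'

which prints alice.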
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/512448", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/347197/" ] }
512,467
How does one go about using diff to compare the output of two commands? I know how to use it to compare the contents of a file filename with the output of a command cmd:

cmd | diff filename -

How do I make it so that I can have another command, say cmd1, in place of filename? I'm using dash, which doesn't support process substitution.
Based on How to emulate Process Substitution in Dash? (thanks αғsнιη !), adjusted for dash : ( cmd1 | ( cmd2 | ( diff /dev/fd/3 /dev/fd/4 ) 4<&0 ) 3<&0 )
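A usage example, with two arbitrary commands standing in for cmd1 and cmd2:

( ls /bin | ( ls /usr/bin | ( diff /dev/fd/3 /dev/fd/4 ) 4<&0 ) 3<&0 )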
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/512467", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/347221/" ] }
512,484
sample input is

<bre rt="1600" et="1550794901464" st="1550794899864" tid="8390500116294391399" mh="N" cn="" lc="" ts="N/A" cidc="" IDC="" eidc="BRE-S-TRA-0085418501"/>
  <r1>
    <gr1>
      <a="1" b="smaple data with spaces" c="Created TrasctionInfo" d="1550794901228"/>
      <e="INITIAL" f="2" g="INITIAL_LEGACY" h="1550794901228" i="LegacyToggle is off. Follow Legacy flow"/>
      <lx ets="2019-02-22T00:21:41.228Z" trxn="smaple data with spaces 2 record" rn="Derive data" abc="COT def" def="Season occur" trxn="smaple data with spaces 3rd record" den="andys and others" trxn="smaple data with spaces 4th record" kit="Theater - Span day" rns="Span day" trxn="smaple data with spaces 5th record" off="|"/>
      <cwl wc="2.0766" tot="16" act="116.28960000000001" CSE="CHE-CSFL" wg1.0" high="1" </cwl>
    </gr1>
  </r1>
</bre>
<bre rt="1234" et="1234794901464" st="1234794899864" tid="2345500116294391399" mh="Y" cn="At123" lc="" ts="NA" cidc="" IDC="some text value" eidc="abc-def-gh-2385418501"/>
  <r1>
    <gr1>
      <a="1" trxn="other data with spaces" c="Created Info" d="3434794545228"/>
      <e="begin" f="2" g="INITIAL_LEGACY" h="1234709901228" i="Toggle hig. Follow toggle flow"/>
      <lx ets="2017-02-22T00:21:41.228Z" trxn="another record data" rn="Derive data" abc="COT def" trxn="smaple data with spaces record" def="Season occur" den="andys and others" trxn="smaple data with spaces 4th record" kit="Theater - Span day" rns="Span day" trxn="data with spaces" off="|"/>
      <cwl wc="2.0766" tot="16" act="116.28960000000001" CSE="CHE-CSFL" wg1.0" high="1" </cwl>
    </gr1>
  </r1>
</bre>
<bre rt="1234" et="1234794901464" st="1234794899864" tid="2345500116294391399" mh="Y" cn="At123" lc="" ts="NA" cidc="" IDC="some text value" eidc="abc-def-gh-2385418501"/>
  <r1>
    <gr1>
      <a="1" c="Created transaction" b="3434794545228"/>
      <e="begin" f="2" g="INITIAL_LEGACY" h="1234709901228" i="Toggle hig. Follow toggle flow"/>
      <lx ets="2017-02-22T00:21:41.228Z" rn="Derive data" abc="COT def" def="Season occur" den="andys and others" kit="Theater - Span day" rns="Span day" off="|"/>
      <cwl wc="2.0766" tot="16" act="116.28960000000001" CSE="CHE-CSFL" wg1.0" high="1" </cwl>
    </gr1>
  </r1>
</bre>

output should be

tid="8390500116294391399"
ts="N/A"
ets="2019-02-22T00:21:41.228Z"
trxn="smaple data with spaces 2 record"
trxn="smaple data with spaces 3rd record"
trxn="smaple data with spaces 5th record"
tid="2345500116294391399"
ts="NA"
ets="2017-02-22T00:21:41.228Z"
trxn="other data with spaces"
trxn="another record data"
trxn="smaple data with spaces record"
trxn="data with spaces"
tid="2345500116294391399"
ts="NA"
ets="2017-02-22T00:21:41.228Z"

I tried the following:

sed -e 's/trxn=/\ntrxn=/g' -e 's/tid=/\ntid=/g' -e 's/ts=/\nts=/g'

while IFS= read -r var
do
    if grep -Fxq "$trxn" temp2.txt
    then
        awk -F"=" '/tid/{print VAL=$i} /ts/{print VAL=$i} /ets/{print VAL=$i} /trxn/{print VAL=$i} /tid/{print VAL=$i;next}' temp2.txt >> out.txt
    else
        awk -F"=" '/tid/{print VAL=$i} /ts/{print VAL=$i} /ets/{print VAL=$i} /tid/{print VAL=$i;next}' temp2.txt >> out.txt
    fi
done < "$input"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/512484", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/346823/" ] }
512,503
I enabled ftrace event tracing for the sys_enter_openat syscall. The respective output format given at events/syscalls/sys_enter_openat/format is

print fmt: "dfd: 0x%08lx, filename: 0x%08lx, flags: 0x%08lx, mode: 0x%08lx", ((unsigned long)(REC->dfd)), ((unsigned long)(REC->filename)), ((unsigned long)(REC->flags)), ((unsigned long)(REC->mode))

As expected, a sample output line to ftrace is something like

msm_irqbalance-1338 [000] ...1 211710.033931: sys_openat(dfd: ffffff9c, filename: 5af693f224, flags: 2, mode: 0)

Is there a way to change the output format such that filename: 5af693f224 can be shown as filename: <string> instead of hex(5af693f224)? So basically, is there a way to change the output format while tracing a particular event (e.g. sys_enter_openat above) to ftrace. I guess this would have been possible using systemtap or kprobe but my setup does not allow its use as of now.
Unfortunately, there is currently not a way to do this. But perhaps in the future I may add it, if I can figure out a sane interface and implementation to do such a thing. Maybe I will add a trigger that will make the output show differently. Although I may be new to StackExchange, I am the author of ftrace (real name Steven Rostedt - look up the git history). The "real answer" will happen when I write the code!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/512503", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153335/" ] }
512,681
I'm trying to compile the Paraview graphical visualization software for my ARM-based laptop; however, I am getting a few configuration warnings that seem to relate to cmake 'policies'. The warning text and the cmake man page suggest that I should be able to run the command cmake_policy() to set a particular policy; however, I can't figure out how or where to run it. How can I set a particular cmake policy?
The CMake command cmake_policy() is documented in the CMake documentation . It is usually added to the CMakeLists.txt file of the project to change the behaviour of CMake itself, usually to be able to handle older CMakeLists.txt features with newer versions of CMake. You may use it to set an individual policy using cmake_policy(SET CMP<NNNN> OLD) where <NNNN> is a CMake policy number and where OLD indicates that you want the "old behaviour" of this policy (the word OLD could also be NEW ). Or, you may use the command to set policies for compatibility with a particular version of CMake using cmake_policy(VERSION x.xx) where x.xx must be at least 2.4 . In either case, the CMakeLists.txt file of the project is modified, and cmake will have to be re-run. See also the documentation for cmake_minimum_required() .
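If you'd rather not touch ParaView's CMakeLists.txt at all, CMake also lets you force a policy default from the command line; the policy number below is only a placeholder — substitute the one from your configuration warning:

cmake -DCMAKE_POLICY_DEFAULT_CMP0053=OLD /path/to/paraview/source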
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/512681", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/257802/" ] }
512,702
This command, when run alone, produces the expected result (the last line of the crontab): tail -n 1 /etc/crontab However, when I run it as part of an echo command to send the result to a file, it adds a summary of all the files in the working directory, plus the expected result: sudo bash -c 'echo $(tail -n 1 /etc/crontab) > /path/to/file' Why did this command produce the extra data?
Your crontab line has one or more asterisks * in it, indicating "any time". When that line is substituted in from the command substitution, the result is something like echo * * * * * cmd > /path/to/file While most further expansions are not applied to the output of command substitution, pathname expansion is (as is field splitting) : The results of command substitution shall not be processed for further tilde expansion, parameter expansion, command substitution, or arithmetic expansion. If a command substitution occurs inside double-quotes, field splitting and pathname expansion shall not be performed on the results of the substitution. Pathname expansion is what turns *.txt into a list of matching filenames (globbing), where * matches everything. The end result is that you get every (non-hidden) filename in the working directory listed for every * in your crontab line. You could fix this by quoting the expansion, if the code you posted was a representative of a more complex command: sudo bash -c 'echo "$(tail -n 1 /etc/crontab)" > /path/to/file' but more straightforwardly just lose the echo entirely: sudo bash -c 'tail -n 1 /etc/crontab > /path/to/file' This should do what you want and it's simpler as well (the only other material difference is that this version will omit field splitting that would otherwise have occurred, so runs of spaces won't be collapsed).
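A quick demonstration of the effect in a scratch directory (file names here are arbitrary):

$ cd "$(mktemp -d)"; touch a b c
$ line='* * * * * cmd'
$ echo $line
a b c a b c a b c a b c a b c cmd
$ echo "$line"
* * * * * cmd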
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/512702", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36311/" ] }
512,717
I need to build an RPM for a Java software package on MacOS. I'm using rpmbuild from homebrew, version 4.14.2.1. The toolchain is set up correctly, and I get a valid RPM at the end. There is just one snag: the RPM has a target OS string of "darwin", since it was built there, and attempting to install it on a normal Linux (think CentOS) fails with the message

Transaction check error: package myrpm.noarch is intended for a different operating system

and indeed, querying the RPM confirms the reason:

# rpm -qp --qf '%{os}\n' myrpm.noarch.rpm
darwin

In order not to change my source tree, I'd like to put the necessary properties in a local .rpmrc file. How would I need to set it up so that I get a truly platform-independent RPM in the end? To clarify this: the rpm contains software and paths that work on any system with a Java 8 JRE and a POSIX-like file system. It should at least be installable on darwin/MacOS and the Redhat / CentOS / SuSE universe.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/512717", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7932/" ] }
512,759
First some specs: my computer is an HP EliteBook 8460p. It comes with an integrated Chicony HP HD webcam. My issue is that a lot of applications (well, at least Skype and guvcview) are displaying multiple lines for the same webcam; indeed, if I do ls -l /dev | grep video, I get the following:

crw-rw---- 1 root video 29, 0 Apr 16 08:13 fb0
crw-rw---- 1 root video 243, 0 Apr 16 08:13 media0
crw-rw----+ 1 root video 81, 0 Apr 16 08:13 video0
crw-rw----+ 1 root video 81, 1 Apr 16 08:13 video1

I have 2 /dev/video[n] with only one (integrated) webcam; Skype will work properly with /dev/video0, but not with /dev/video1. Same for guvcview. If I plug another USB webcam, for example a logitech one, I get the following with dmesg:

[21222.638802] usb 2-2: new high-speed USB device number 20 using xhci_hcd
[21222.970684] usb 2-2: New USB device found, idVendor=046d, idProduct=08c2, bcdDevice= 0.05
[21222.970755] usb 2-2: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[21222.972518] uvcvideo: Found UVC 1.00 device <unnamed> (046d:08c2)
[21226.044535] uvcvideo 2-2:1.0: Entity type for entity Extension 4 was not initialized!
[21226.044538] uvcvideo 2-2:1.0: Entity type for entity Extension 8 was not initialized!
[21226.044540] uvcvideo 2-2:1.0: Entity type for entity Extension 10 was not initialized!
[21226.044541] uvcvideo 2-2:1.0: Entity type for entity Extension 9 was not initialized!
[21226.044543] uvcvideo 2-2:1.0: Entity type for entity Extension 3 was not initialized!
[21226.044545] uvcvideo 2-2:1.0: Entity type for entity Processing 2 was not initialized!
[21226.044547] uvcvideo 2-2:1.0: Entity type for entity Camera 1 was not initialized!
[21226.044746] input: UVC Camera (046d:08c2) as /devices/pci0000:00/0000:00:1c.7/0000:25:00.0/usb2/2-2/2-2:1.0/input/input35
[21226.137559] usb 2-2: Warning! Unlikely big volume range (=3072), cval->res is probably wrong.
[21226.137569] usb 2-2: [5] FU [Mic Capture Volume] ch = 1, val = 4608/7680/1

And the following with ls -l /dev/ | grep video:

crw-rw---- 1 root video 29, 0 Apr 16 08:13 fb0
crw-rw---- 1 root video 243, 0 Apr 16 08:13 media0
crw-rw---- 1 root video 243, 1 Apr 16 14:06 media1
crw-rw----+ 1 root video 81, 0 Apr 16 08:13 video0
crw-rw----+ 1 root video 81, 1 Apr 16 08:13 video1
crw-rw----+ 1 root video 81, 2 Apr 16 14:06 video2
crw-rw----+ 1 root video 81, 3 Apr 16 14:06 video3

3 new entries: /dev/media1, /dev/video2 and /dev/video3. I even found a Sony webcam (CEVCECM) that adds up to 4 new devices.
The dmesg logs:

[21927.665747] usb 2-2: new high-speed USB device number 23 using xhci_hcd
[21927.817330] usb 2-2: New USB device found, idVendor=05e3, idProduct=0608, bcdDevice= 9.01
[21927.817339] usb 2-2: New USB device strings: Mfr=0, Product=1, SerialNumber=0
[21927.817343] usb 2-2: Product: USB2.0 Hub
[21927.824119] hub 2-2:1.0: USB hub found
[21927.824814] hub 2-2:1.0: 4 ports detected
[21928.113733] usb 2-2.4: new high-speed USB device number 24 using xhci_hcd
[21928.223184] usb 2-2.4: New USB device found, idVendor=054c, idProduct=097b, bcdDevice=21.12
[21928.223192] usb 2-2.4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[21928.223197] usb 2-2.4: Product: CEVCECM
[21928.223201] usb 2-2.4: Manufacturer: Sony
[21928.223206] usb 2-2.4: SerialNumber: DHZD10412EUHK1
[21928.227506] uvcvideo: Found UVC 1.00 device CEVCECM (054c:097b)
[21928.242592] uvcvideo: Unable to create debugfs 2-24 directory.
[21928.242780] uvcvideo 2-2.4:1.0: Entity type for entity Extension 7 was not initialized!
[21928.242783] uvcvideo 2-2.4:1.0: Entity type for entity Extension 3 was not initialized!
[21928.242785] uvcvideo 2-2.4:1.0: Entity type for entity Processing 2 was not initialized!
[21928.242787] uvcvideo 2-2.4:1.0: Entity type for entity Camera 1 was not initialized!
[21928.242877] input: CEVCECM: CEVCECM as /devices/pci0000:00/0000:00:1c.7/0000:25:00.0/usb2/2-2/2-2.4/2-2.4:1.0/input/input38

And the resulting device files with ls -l /dev | grep video:

crw-rw---- 1 root video 29, 0 Apr 16 08:13 fb0
crw-rw---- 1 root video 243, 0 Apr 16 08:13 media0
crw-rw---- 1 root video 243, 1 Apr 16 14:18 media1
crw-rw----+ 1 root video 81, 0 Apr 16 08:13 video0
crw-rw----+ 1 root video 81, 1 Apr 16 08:13 video1
crw-rw----+ 1 root video 81, 2 Apr 16 14:18 video2
crw-rw----+ 1 root video 81, 3 Apr 16 14:18 video3
crw-rw----+ 1 root video 81, 4 Apr 16 14:18 video4
crw-rw----+ 1 root video 81, 5 Apr 16 14:18 video5

5 new entries: /dev/media1 and /dev/video2 to /dev/video5. I feel like the correct files to use are the /dev/media[n] ones, but Skype and guvcview somehow fail to do so and fall back to the /dev/video[n]. I don't have this issue with Webcamoid for example. If anyone has an idea, I'll take it. In the meantime I will continue the investigation...

--- Edited the 2019-05-14 ---

Got some interesting information using v4l2-ctl --device=/dev/video* --all. For the Chicony HP HD webcam, its 2 device files have different device capabilities:

# Devices capabilities for /dev/video0
Video Capture
Streaming
Extended Pix Format

# Devices capabilities for /dev/video1
Metadata Capture
Streaming
Extended Pix Format

I get similar results for the USB webcams. So after all, maybe what Skype and guvcview fail to do is to only list video devices that support the Video Capture device capability.
The second device provides metadata about the video data from the first device. The new devices were introduced by this patch: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=088ead25524583e2200aa99111bea2f66a86545a More information on the V4L metadata interface can be found here: https://linuxtv.org/downloads/v4l-dvb-apis/uapi/v4l/dev-meta.html For run-of-the-mill USB Video Class devices, this mostly just provides more accurate timestamp information. Cameras like Intel's RealSense line provide a wider range of data about how the image was captured. Presumably this data was split out into a separate device node because it couldn't easily be delivered on the primary device node in a compatible way. It's a bit of a pain though, since (a) applications that don't care about this metadata now need to filter out the extra devices, and (b) applications that do care about the metadata need a way to tie the two devices together.
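Based on that, a minimal shell sketch of the filtering an application (or you, at the command line) would have to do, assuming v4l2-ctl from the v4l-utils package is installed:

# Print only the video nodes that advertise the "Video Capture" capability;
# the metadata-only nodes report "Metadata Capture" instead and are skipped
for dev in /dev/video*; do
    if v4l2-ctl --device="$dev" --all 2>/dev/null | grep -q 'Video Capture'; then
        printf '%s\n' "$dev"
    fi
done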
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/512759", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/247149/" ] }
512,799
I have the following install script for ubuntu : #!/bin/bashsudo apt updatesudo apt full-upgrade -ysudo apt install jqsudo apt autoclean -ysudo apt autoremove will the following work under fedora, red hat, mageia or other rpm-based distros ...or does the syntax have to change more? #!/bin/bashsudo rpm updatesudo rpm full-upgrade -ysudo rpm install jqsudo rpm autoclean -ysudo rpm autoremove also can I do something to the effect of the following? : #!/bin/bashif [ $(command -v yum) ]then sudo yum update sudo yum full-upgrade -y sudo yum install jq sudo yum autoclean -y sudo yum autoremoveelse sudo rpm update sudo rpm full-upgrade -y sudo rpm install jq sudo rpm autoclean -y sudo rpm autoremovefi
rpm is mostly equivalent to dpkg , not apt ; the apt equivalent is yum (on RHEL and CentOS up to release 7), or dnf (on Fedora, and RHEL and CentOS starting with release 8), or zypper (on SuSE) . For your specific commands: sudo dnf distro-syncsudo dnf install jqsudo dnf clean allsudo dnf autoremove or sudo yum upgradesudo yum install jqsudo yum clean all (This works because jq is packaged under the same name in both cases. This isn’t always true; a given piece of software can be packaged under different names in different distributions or even different releases of a given distribution.) See the Pacman Rosetta and the Ubuntu RHEL migration guide for details. You might want to look into configuration management tools instead, they will help you abstract the differences away (or at least, deal with them more robustly). Your if [ $(command -v yum) ] test is flawed because yum can be installed on Debian derivatives (including Ubuntu); its presence doesn’t mean it’s the package manager. You should probably detect the running operating system and base your choice on that; see How can I reliably get the operating system's name? for details.
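As a sketch of that detection approach (the ID values shown are the common ones from /etc/os-release, but verify them on your actual targets):

#!/bin/sh
# Pick the package manager based on the distribution ID from os-release
. /etc/os-release
case "$ID" in
    debian|ubuntu)
        sudo apt update
        sudo apt full-upgrade -y
        sudo apt install -y jq
        ;;
    fedora|rhel|centos)
        sudo dnf upgrade -y
        sudo dnf install -y jq
        ;;
    *)
        echo "Unsupported distribution: $ID" >&2
        exit 1
        ;;
esac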
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/512799", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/228658/" ] }
512,808
I'd like to know why this recursive function in shell works properly: exp ( ) { local result #local op1="$1" #echo $2 if [[ $2 -eq 0 ]]; then echo 1 return fi tmp=$(( $2 - 1 )) local result1=$(exp $1 $tmp ) result=$(( $result1 * $1 )) echo $result } exp 3 4 But when touching $2 in any way, for example like this: exp ( ) { local result echo $2 if [[ $2 -eq 0 ]]; then echo 1 return fi tmp=$(( $2 - 1 )) local result1=$(exp $1 $tmp ) result=$(( $result1 * $1 )) echo $result } exp 3 4 It fails with: 4foo.sh: line 15: 01 * 3 : syntax error in expression (error token is "1 * 3 ")foo.sh: line 15: 23 * 3 : syntax error in expression (error token is "3 * 3 ")9
The problem is the debugging echo $2 at the top of the second version. Every recursive call happens inside a command substitution, local result1=$(exp $1 $tmp ) , and a command substitution captures everything the inner invocation writes to standard output, including that echo $2 . The innermost call therefore returns something like 0 followed by 1 on the next line rather than just 1 , and the arithmetic expansion then sees 01 * 3 , which is exactly the syntax error reported. In the first version nothing extra is printed before the final result, so only the intended value is captured. If you want the debugging output, write it to standard error, which command substitution does not capture: echo "$2" >&2
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/512808", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/276996/" ] }
512,849
I came upon this question : What's the use of having a kernel part in the virtual memory space of Linux processes? and based on the answer and the comments on the answer : the kernel memory map includes a direct mapping of all physical memory, so everything in memory appears there; it also includes separate mappings for the kernel, modules etc., so the physical addresses containing the kernel appear in at least two different mappings. Is this true? I couldn't find any source or reference for this, and why would it include a map of the entire physical memory and then again have a separate mapping of kernel modules? Isn't that redundant? Can someone explain in a simple manner what is inside the kernel part of virtual memory of processes in 64-bit Linux? Please provide a source for the answer, because I couldn't find anything related to this in any book or paper.
The kernel’s memory map on x86-64 is documented in the kernel itself . The kernel maps user-space (for the current process) PTI data structures all the physical memory the kernel’s data structures, in various blocks, with holes for ASLR the kernel itself its modules Having a full mapping of physical memory is convenient, but its relevance is debated compared to the security risks it creates, and its address-space burden (since physical memory is effectively limited to half the address space as a result; this prompted the recent expansion to five-level page tables with 56-bit addresses).
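You can catch a glimpse of the kernel half of the address space from user space via the kernel's symbol table, run as root since the kptr_restrict sysctl usually hides the addresses from ordinary users (the exact addresses vary from boot to boot because of KASLR):

# Global kernel text symbols; their addresses sit in the upper half of
# the 64-bit address space (ffffffff... on x86-64 with 4-level paging)
sudo grep ' T ' /proc/kallsyms | head -n 3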
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/512849", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/347510/" ] }
512,940
Can I install Debian packages from Stretch DVD 2 and 3 after installation, using apt? When installing on a VM, it didn't detect the second and third DVDs.
Run apt-cdrom add as root (or using sudo ), and follow the prompts – it will ask you to insert a disk, then scan it and add the relevant information to /etc/apt/sources.list . You will then be able to install packages from it as usual.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/512940", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/347586/" ] }
512,947
There is an unknown number of lines in a file. How can I delete the nth line (counted from the bottom) with a one-liner command (you may use more than one if necessary) on a Unix platform?
To remove for example the 4th line from the bottom using sed : tac input | sed '4d' | tac To overwrite the input file: tmpfile=$(mktemp)tac input | sed '4d' | tac > "$tmpfile" && mv "$tmpfile" input
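If reversing the file twice with tac feels wasteful for large files, an equivalent approach is to count the lines first and delete by absolute position (still deleting the 4th line from the bottom here):

n=$(wc -l < input)            # total number of lines
sed "$((n - 4 + 1))d" input   # 4th from the bottom is line n-3

For overwriting the input file, the same mktemp pattern shown above applies.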
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/512947", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/347577/" ] }
512,953
File1: 123234345456 File2: 123234343758 Expected output:File3: TRUETRUEFALSEFALSE so the code should compare two files and print 'TRUE' if it matches otherwise it should print 'FALSE' in the new file. Could anyone please provide the solution for this?
Use the diff command as follows, in bash or any other shell that supports <(...) process substitutions, or you can emulate it as shown here : diff --new-line-format='FALSE'$'\n' \ --old-line-format='' \ --unchanged-line-format='TRUE'$'\n' \<(nl file1) <(nl file2) Output would be: TRUETRUEFALSEFALSE --new-line-format='FALSE'$'\n' prints FALSE when the lines differ, and with --old-line-format='' we disable the output of differing lines from file1 , which diff treats as the "old" file (we could swap these as well; one of them should print FALSE and the other should be disabled). --unchanged-line-format='TRUE'$'\n' prints TRUE when the lines are the same. The $'\n' C-style escaping syntax is used to print a newline after each line of output.
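An alternative without process substitution, comparing the two files line by line in a single awk pass (a sketch; it assumes both files have the same number of lines):

# Load file1 into an array, then compare file2 against it line by line
awk 'NR == FNR { a[FNR] = $0; next }
     { print ($0 == a[FNR] ? "TRUE" : "FALSE") }' file1 file2 > file3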
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/512953", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/347601/" ] }
512,957
My work has set up an Ubuntu server for my team to host a project on. They set up a sudo user for me with my name as the username, and a default password. I was able to SSH into the server using these original username and password. Since my whole team would be accessing the server, I wanted to change the username and password so that it wasn't my name, but instead the project name. After doing so, I think I was still able to SSH in using the new username and password. However, now when I try to SSH in I get the error ssh: connect to host xxx.xx.xx.xx port 22: Connection refused . The server was set up so that it can only be accessed from any of our office's networks. Is this sudden refused connection due to me changing the username and password (maybe something to do with the RSA keys, I don't know), or else is it more likely to be an issue with firewalls or my office's network? Edit: Here is a detailed description of the process I took to change the username from 'abc' to 'xyz': While logged in as user 'abc', I tried to run sudo usermod -l xyz abc , but I couldn't as it said process abc is already running . I then created a new user named 'temp' with sudo access. I SSHd in as 'temp', ran sudo usermod -l xyz abc without any errors. I then SSHd in with 'xyz' successfully, deleted user 'temp' and ran passwd to change xyz's password. I'm pretty sure I exited from SSH, then successfully SSHd in again with the new username and password, but I may be wrong here - I can't remember.
"Connection refused" on port 22 is a network-level error: nothing accepted the TCP connection on that port. It happens before any username or password is checked, so renaming the user and changing its password cannot, by itself, cause it; a wrong username or password would produce an authentication failure after the connection succeeded, not a refused connection. The likely causes are therefore on the server or network side: the sshd service has stopped or failed to restart, sshd is now listening on a different port or address, the machine rebooted or changed IP address, or a firewall rule on the office network changed. If you (or someone with console access) can get to the machine, check that sshd is running (e.g. systemctl status ssh on Ubuntu) and that the firewall still allows port 22 from your office networks.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/512957", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171446/" ] }
513,009
Systemd allows you to create template units, as documented in systemd.unit . There are a number of variables you can use in your template unit. I'm interested in "%j" : This is the string between the last "-" and the end of the prefix name. The "prefix name" is also defined: For instantiated units, this refers to the string before the first "@" character of the unit name. I'm clear what they are, not clear why they exist. I'm guessing perhaps they are running multiple versions of the same service. What's a real-world example of how this is used?
Units can have additional settings in .d/ directories alongside the unit. For example, foo.service can be extended via foo.service.d/*.conf . Template units will use two directories – instance and template, so getty@ttyS1.service will be extended from both getty@ttyS1.service.d/*.conf and getty@.service.d/*.conf . This way you can extend all instances of the unit at once. In both cases, the unit and its extension configs may use %i to get the "ttyS1" bit. However, some units cannot use templates, e.g. slices (representing cgroups) are named user-<UID>.slice and not user@<UID>.slice because these units' name represents a filesystem path (the dash is mapped to a slash, and therefore "user-123.slice" is a child of "user.slice"). Because it is desired to be able to configure all individual user slices (e.g. give each slice x% of memory), a similar mechanism was added for units which use path-like names: similarly to the getty example above, the unit user-1000.slice can be extended from both user-1000.slice.d/ and user-.slice.d/ , with files in the latter generic directory being able to use %j to get the "1000" bit. This last example is sort of used in practice by the default systemd installation: $ systemctl cat user-1000.slice# /usr/lib/systemd/system/user-.slice.d/10-defaults.conf[Unit]Description=User Slice of UID %j
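For example, a sketch of giving every user slice the same memory cap (the 4G value is arbitrary):

# Drop-in applied to all user-<UID>.slice units at once
sudo mkdir -p /etc/systemd/system/user-.slice.d
printf '[Slice]\nMemoryMax=4G\n' |
    sudo tee /etc/systemd/system/user-.slice.d/50-memory.conf
sudo systemctl daemon-reload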
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/513009", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20239/" ] }
513,042
I have a large folder, 2TB, with 1000000 files on a Linux machine. I want to build a package with tar. I do not care about the size of the tar file, so I do not need to compress the data. How can I speed tar up? It takes me an hour to build a package with tar -cf xxx.tar xxx/ . I have a powerful CPU with 28 cores, and 500GB of memory; is there a way to make tar run multithreaded? Or, alternatively, is there any good way to transfer a large number of small files between different folders and between different servers? My filesystem is ext4.
As @Kusalananda says in the comments, tar is disk-bound. One of the best things you can do is put the output on a separate disk so the writing doesn't slow down the reading. If your next step is to move the file across the network, I'd suggest that you create the tar file over the network in the first place: $ tar -cf - xxx/ | ssh otherhost 'cat > xxx.tar' This way the local host only has to read the files, and doesn't have to also accommodate the write bandwidth consumed by tar. The disk output from tar is absorbed by the network connection and the disk system on otherhost .
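And if the archive itself is only a means of transport, you can skip the intermediate file on both ends and unpack directly on the remote side (assuming the destination directory already exists there):

# Stream the archive across the network and extract it remotely
tar -cf - xxx/ | ssh otherhost 'tar -xf - -C /path/to/destination'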
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/513042", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/347678/" ] }
513,057
In the given data is it possible to uniq sort and print only the top hits against each region? Given Data aza1 18bcn1 16sat2 12lcy2 12fra1 12aza1 12bcn1 10sat2 8lcy2 9fra1 13aza1 21bcn1 2sat2 10lcy2 0fra1 1 Required Output aza1 21bcn1 16sat2 12lcy2 12fra1 13
Solution if order matters, using only sort and uniq <INPUT_FILE sort -k 1,1 -k 2nr,2 | uniq -w4 OUTPUT: aza1 21bcn1 16fra1 13lcy2 12sat2 12 Sort parameters: -k: sort by key (in this case column, pairs with -t) -n: sort as a number -r: reverse order (optional) -t: in case you want to change the key separator (default: space) Uniq parameter: -w: choose the first N characters Explanation: In your problem, we need to first sort the first column and then the second one. So there is a -k 1,1 followed by -k 2,2 . But, the second key (ONLY) must be sorted as a number and in the reverse order. Thus, it should be -k 2nr,2 . Note that if the -n or -r sort parameters are outside the -k parameter, they are applied to the whole input instead of specific keys. Lastly, we must find the unique lines, but matching only the first 4 chars. Thus, uniq -w 4
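Note that uniq -w4 relies on every key being exactly four characters wide. A sketch that works for keys of any length is to sort the same way and keep only the first line seen for each key:

# After the sort, the first line per key carries the largest value
sort -k 1,1 -k 2nr,2 INPUT_FILE | awk '!seen[$1]++'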
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/513057", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/304472/" ] }
513,078
I used to work on Linux in VirtualBox and now I'm using a computer with Ubuntu 16.04. I have some ps files I need to convert to pdf and I used to run the command ps2pdf file.ps file.pdf on my previous computer, but now it doesn't work, I get the following error: /usr/bin/gs: symbol lookup error: /usr/lib/libgs.so.9: undefined symbol: FT_Property_Set I tried using convert file.ps file.pdf and it doesn't work either, I get the error: convert.im6: not authorized `sc1.ps' @ error/constitute.c/ReadImage/454.convert.im6: no images defined `sc.pdf' @ error/convert.c/ConvertImageCommand/3044.
The undefined symbol FT_Property_Set means that at run time the Ghostscript library ( /usr/lib/libgs.so.9 ) is being linked against a FreeType library that is too old to provide that symbol (it was added in the FreeType 2.4.x series). The usual cause is some other libfreetype appearing earlier in the dynamic loader's search path (a third-party environment such as Anaconda, pulled in through LD_LIBRARY_PATH , is a common culprit) shadowing the system library. You can check which FreeType is actually picked up with: ldd /usr/lib/libgs.so.9 | grep freetype If it resolves to something other than the system library, clean up LD_LIBRARY_PATH (or test with env -u LD_LIBRARY_PATH ps2pdf file.ps file.pdf ) and try again. The convert failure is a separate problem: ImageMagick delegates PS/PDF handling to that very same Ghostscript, and on Ubuntu 16.04 its security policy in /etc/ImageMagick-6/policy.xml may additionally forbid reading PS files ("not authorized"). Fixing the Ghostscript library problem first is the way to go; ps2pdf will then work without involving ImageMagick at all.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/513078", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/347710/" ] }
513,108
The man page for grep reads -i , --ignore-case Ignore case distinctions in both the PATTERN and the input files.  ( -i is specified by POSIX.) However, if I change case on a filename, it won't work. $ touch WHATEVER$ grep -i pattern whatevergrep: whatever: No such file or directory Am I missing something?
That confusing snippet was changed in newer versions of GNU grep to: -i , --ignore-case Ignore case distinctions, so that characters that differ only in case match each other. See this commit: http://git.savannah.gnu.org/cgit/grep.git/commit/?id=e1ca01be48cb64e5eaa6b5b29910e7eea1719f91 .BR \-i ", " \-\^\-ignore\-case-Ignore case distinctions in both the-.I PATTERN-and the input files.+Ignore case distinctions, so that characters that differ only in case+match each other. As to where the old formulation may originate, some programs like less(1) have a (mis)feature[1] where using an uppercase letter in a pattern will turn off case insensitivity for a particular search (override the -i flag). The author of that doc snippet probably assumed that many people expected that behavior, and instead of some direct caveat, preferred that non-committal sentence. FWIW, such a feature was never a part of ed(1) , grep(1) , vi(1) , perl(1) etc. or of the regex(3) or pcre(3) APIs. [1] that seems to have its origins in emacs , where it's the default; there you can turn it off by setting the (customizable) search-upper-case variable to nil .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/513108", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/347741/" ] }
513,203
I have a file with 1 million lines. I want to extract lines from line 10001 to 500000. How can I do this?
sed is your friend: sed -n '10001,500000p;500001q' Note that 500001q is needed to stop further file processing. Otherwise it will still read the file till the very end. Thanks to @Freddy for the hint on this.
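An equivalent awk one-liner, which likewise stops reading once it is past the range:

awk 'NR > 500000 { exit } NR >= 10001' file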
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/513203", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/299440/" ] }
513,208
Let's say I have multiple files with .ext extension containing multiple lines. I need to print filenames containing all 3 keywords: kwd1 , kwd2 , and kwd3 . How do I do it?
To find all filenames ending in .ext and containing the three keywords kwd1 , kwd2 and kwd3 , anywhere in or below the current directory: find . -name '*.ext' -name '*kwd1*' -name '*kwd2*' -name '*kwd3*' Or, setting the keywords in a more dynamic way, set -- "kwd1" "kwd2" "kwd3"for word do set -- "$@" -name "*$word*" shiftdonefind . -name "*.ext" "$@" -print Or, if you want to search for the keywords inside the files: set -- "kwd1" "kwd2" "kwd3"for word do set -- "$@" -exec grep -q -wF -e "$word" {} ';' shiftdonefind . -name "*.ext" "$@" -print I'm using -wF with grep here to only do a string comparison ( -F ) of whole words ( -w ) in the files. In a shell supporting named arrays, that last bit of code might look like keywords=( "kwd1" "kwd2" "kwd3" )and_expr=()for word in "${keywords[@]}"; do and_expr+=( -exec grep -q -wF -e "$word" {} ';' )donefind . -name "*.ext" "${and_expr[@]}" -print
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/513208", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/290191/" ] }
513,237
I installed the Windows 10 preview releases awhile back because I wanted to try the Sets feature that was being worked on. Sadly, this was removed from the beta releases, and has not returned. Is there a Linux window manager that has this capability? (Using tabs of multiple different programs in one window.)
This table of Window Managers shows Linux Window Managers with tabbed windows include: xmonad , wmii , Window Maker , WMFS, PekWM, Ion , i3 , FVWM , Fluxbox , and Compiz . Some Desktop Environments are locked in to a specific Window Manager (e.g., Cinnamon), but GNOME and KDE are not.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/513237", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/302619/" ] }
513,246
I'm running an arch system with KDE4/Plasma, wpa_supplicant, networkmanager, systemd ... # cat /proc/version Linux version 5.0.0-arch1-1-ARCH (builduser@heftig-18825) (gcc version 8.2.1 20181127 (GCC)) #1 SMP PREEMPT Mon Mar 4 14:11:43 UTC 2019 The content of my /etc/hostname reads localhost . After boot, the shell command hostname now outputs localhost . More precisely: # hostnamectl Static hostname: localhostTransient hostname: localhost.localdomain Icon name: computer-laptop Chassis: laptop Machine ID: 7e0a101cd2f0406497a6e4354fc9b3b7 Boot ID: a1424a0995da4e84b1e55b7f79df957e Operating System: Arch Linux Kernel: Linux 5.0.0-arch1-1-ARCH Architecture: x86-64 When I turn on WiFi, networkmanager connects to a WiFi network and then the hostname changes. For instance: # hostnamectl Static hostname: localhostTransient hostname: localhost.localdomain Icon name: computer-laptop Chassis: laptop Machine ID: 7e0a101cd2f0406497a6e4354fc9b3b7 Boot ID: a1424a0995da4e84b1e55b7f79df957e Operating System: Arch Linux Kernel: Linux 5.0.0-arch1-1-ARCH Architecture: x86-64 The shell command hostname now outputs localhost.localdomain instead of localhost . As a consequence, the KDE lock-screen cannot be unlocked and I cannot start any X applications from the terminal in KDE (or any other desktop). A typical error message is this: $ gvimInvalid MIT-MAGIC-COOKIE-1 keyE233: cannot open display When I issue hostnamectl set-hostname localhost as root, the behavior resumes to normal. In some other WiFis, the hostname after connect is not localhost.localdomain but something even more random (it seems to be a hostname determined by the WiFi provider, mostly in big corporate networks). Why does a WiFi provider have the power to set my hostname? Can this be changed somehow?
Ivanivan's answer (tuning dhcpcd.conf ), though plausible, didn't work in my case. So I suspect it is not about DHCP. I stumbled upon this post which told me that the problem is not about DHCP but about NetworkManager. Adding the following to /etc/NetworkManager/NetworkManager.conf solved the problem for me: [main]plugins=keyfile hostname-mode=none See man 5 NetworkManager.conf for details on the hostname-mode option. Setting it to none prevents NetworkManager from setting a transient hostname, which is what happened in my case.
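After editing the file, restart NetworkManager and check that the transient hostname stays put when you reconnect to the WiFi:

sudo systemctl restart NetworkManager
hostnamectl     # the transient hostname should no longer change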
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/513246", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/347887/" ] }
513,265
I had a problem while trying to install glibc 2.14 , I got this error /home/myname/glibc_install/glibc-2.14/build/elf/ldconfig: Can't open configuration file /opt/glibc-2.14/etc/ld.so.conf: No such file or directory The fix suggested this :/opt/glibc-2.14/etc$ sudo sh -c "echo '/opt/lib' >> ld.so.conf" AFAIK sudo sh -c "echo '/opt/lib' >> ld.so.conf" means: open the sh program (the shell) and give it this command "echo '/opt/lib' >> ld.so.conf" to execute, which creates a file named ld.so.conf in the current directory and saves /opt/lib in it. Is that right? What does the entire line mean, i.e. what is the shell going to do step by step?
Almost; the key point is which process performs the redirection. sudo sh -c "echo '/opt/lib' >> ld.so.conf" runs sh as root and hands it the string echo '/opt/lib' >> ld.so.conf to execute ( -c means "run this command string"). Step by step: sudo starts sh with root privileges; that shell parses the string, opens ld.so.conf in the current directory for appending ( >> appends, creating the file first if it doesn't exist), and then runs echo '/opt/lib' , whose output ( /opt/lib followed by a newline) goes into the file. The sh -c wrapper is what makes this work: a plain sudo echo '/opt/lib' >> ld.so.conf would fail with "permission denied", because there the redirection would be performed by your own unprivileged shell before sudo ever runs; with sh -c , the redirection is done by the root shell, so writing into the root-owned /opt/glibc-2.14/etc directory succeeds.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/513265", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/254900/" ] }
513,275
Omit.txt 0010060080016 Filetogrepfrom.txt 00100600700800160054600800310000210016 I want to do cat filetogrepfrom.txt | grep -a 00 | grep -v {lines from omit.txt}
grep can read a list of patterns from a file with the -f option, so you don't need to splice the lines into the command yourself. Since Omit.txt contains literal strings that should match whole lines, add -F (fixed strings, no regex interpretation) and -x (the pattern must match the entire line): grep -a 00 Filetogrepfrom.txt | grep -vxFf Omit.txt This prints every line of Filetogrepfrom.txt that contains 00 , except the lines listed in Omit.txt . (The cat is unnecessary; the first grep can read the file directly.)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/513275", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/296757/" ] }
513,315
What is the default root password for Raspbian Jessie or Debian 9? I have the Raspbian Jessie/Stretch iso "Raspberry Pi Desktop" (Pixel) in VirtualBox and I need to install keys with root "su". What is the default password, since neither raspberry nor pi is working?
From the offical documentation: Linux users User management in Raspbian is done on the command line. The default user is pi , and the password is raspberry . Root user/sudo You won't normally log into the computer as root, but you can use the sudo command to provide access as the superuser. If you log into your Raspberry Pi as the pi user, then you're logging in as a normal user. You can run commands as the root user by using the sudo command before the program you want to run. You can also run a superuser shell by using sudo su . When running commands as a superuser there's nothing to protect against mistakes that could damage the system. It's recommended that you only run commands as the superuser when required, and to exit a superuser shell when it's no longer needed.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/513315", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/347943/" ] }
513,374
I am trying to pass a variable into jq like this '.Linux.date.$var' so far I have tried quoting them by name which is working fine. But I want to use variable to call them. I have this, which is working fine exectime=$(date -d now); cp $check_exec_history $check_exec_history.tmp jq --arg key1 true --arg key2 "$exectime" --arg name "$name" '.Linux.script_executed.first = $key1 | .Linux.date_executed.first = $key2' $check_exec_history.tmp > $check_exec_history; rm $check_exec_history.tmp; I want to get to this, but not working: name=first;exectime=$(date -d now);cp $check_exec_history $check_exec_history.tmpjq --arg key1 true --arg key2 "$exectime" --arg name "$name" ".Linux.script_executed.$name = $key1 | .Linux.date_executed.$name = $key2" $check_exec_history.tmp > $check_exec_history; rm $check_exec_history.tmp; I came this far: using this answer https://stackoverflow.com/q/40027395/9496100 But I am not sure where I am doing mistake. name=first;exectime=$(date -d now); cp $check_exec_history $check_exec_history.tmp jq --arg key1 true --arg key2 "$exectime" --arg name "$name" '.Linux.script_executed.name==$name = $key1 | .Linux.date_executed.name==$name = $key2' $check_exec_history.tmp > $check_exec_history; rm $check_exec_history.tmp;
You can use square bracket indexing on all objects in jq, so [$name] works for what you're trying: jq --arg key1 true --arg name "$name" '.Linux.script_executed[$name] = $key1 ...' This use of square brackets is not very well documented in the manual , which makes it look like you can only use .[xyz] , but ["x"] works anywhere that .x would have as long as it's not right at the start of an expression (that is, .a.x and .a["x"] are the same, but ["x"] is an array construction). Note the use of single quotes above - that is so Bash won't try to interpret $name and $key1 as shell variables. You should keep the double quotes for --arg name "$name" , because that really is a shell variable, and it should be quoted to make it safe to use.
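A self-contained demonstration of the bracket indexing, using jq -n so no input file is needed (path assignment on null input builds the object structure):

name=first
jq -n --arg name "$name" --arg key1 true \
    '.Linux.script_executed[$name] = $key1'
# -> {"Linux": {"script_executed": {"first": "true"}}}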
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/513374", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/318405/" ] }
513,377
I have downloaded audio with youtube-dl , and then I wanted to change name with mv : mv "Powerwolf - Resurrection By Erection-Hiu1hPdJk-Y.mp3" "Powerwolf - Resurrection by Errection.mp3>" But, when I want to do something with renamed file, bash prints: root@bananapi:~/Music# mv "Powerwolf - Resurrection by Errection.mp3 " Allmv: cannot stat 'Powerwolf - Resurrection by Errection.mp3 ': No such file or directory When I type ls -l , bash prints: root@bananapi:/home/music/Music# root@bananapi:/home/music/Music# ls -ltotal 3860drwxr-xr-x 2 music music 4096 Apr 19 11:49 All-rw-r--r-- 1 music music 360 Apr 19 12:34 download.pydrwxr-xr-x 2 music music 4096 Apr 19 11:48 Elevendrwxr-xr-x 2 music music 4096 Apr 18 20:49 KlemenSlakonjadrwxr-xr-x 2 music music 4096 Apr 19 11:49 LittleBigdrwxr-xr-x 2 music music 4096 Apr 18 20:28 Powerwolf-rw-r--r-- 1 root root 3924591 Oct 24 15:03 Powerwolf - Resurrection by Errection.mp3 ? Now, I want to delete this file, but I can't.
Your initial mv renamed the file to a name containing a newline at the end. You forgot to close the quoted string of the new name and pressed Enter . After pressing Enter (inserting a newline), you closed the double quote. This inserted a newline into the filename. To rename the file, use mv $'Powerwolf - Resurrection by Errection.mp3 \n' 'Powerwolf - Resurrection by Errection.mp3' Note the space before the \n . It looks like this should be there according to the ls output. You could also use a * to match the end of the name with a newline: mv "Powerwolf - Resurrection by Errection.mp3"* "Powerwolf - Resurrection by Errection.mp3"
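To spot such names in the future, GNU ls can print nongraphic characters as C-style escapes, so a trailing newline shows up as \n in the listing:

# Spaces are escaped as "\ " and the stray newline as "\n"
ls -b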
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/513377", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/348008/" ] }
513,438
I was previously using Ubuntu. Now I've moved to Debian. Where did signal-desktop for Linux store private messages? I checked for a ~/.signal and the like, the app was installed to /opt .
Apparently it stores the user data in /home/$USER/.config/Signal . If you migrate that directory, signal-desktop will seamlessly start up as before.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/513438", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3285/" ] }
513,466
When I use 'trap' combined with a select loop, namely when I try to hit CTRL+C to break out while the options are displayed, it will just print ^C in the terminal. If I remove 'trap' from the script it will exit normally, that is, it will accept CTRL+C. I've tested this on two different versions of bash (one shipped with CentOS and one shipped with Fedora), and I have an issue with the one from Fedora (4.4.23(1)-release). Bash version 4.2.46(2)-release that is shipped with CentOS seems to work fine. I've also tested this on a local terminal and remotely (via ssh), and the problem is always on the Fedora side. I will post code to show what I'm talking about. This one doesn't work: #!/bin/bashtrap exit SIGINTselect opt in One Two Three; do breakdone If I were to remove the entire 'trap exit SIGINT' line, it would work fine and accept CTRL+C without issues. Any ideas how to fix or bypass this?
Any ideas how to fix or bypass this ? You can bypass it by turning on the posix mode, either with the --posix option, or temporarily with set -o posix : set -o posixselect opt in foo bar baz; do echo "opt=$opt"doneset +o posix For an explanation for this behavior, you can look at the zread() function, which is used by the read builtin (which is also called internally by bash in select ): while ((r = read (fd, buf, len)) < 0 && errno == EINTR) /* XXX - bash-5.0 */ /* We check executing_builtin and run traps here for backwards compatibility */ if (executing_builtin) check_signals_and_traps (); /* XXX - should it be check_signals()? */ else check_signals (); For some special reason, the executing_builtin is only set when the read builtin is called explicitly, not when it's called by select . This very much looks like a bug, not something deliberate. When running in posix mode, a signal will cancel the read builtin. In that case, zreadintr() is called, which unlike zread() , is not re-calling the interrupted read(2) syscall after running the traps. See builtins/read.def : if (unbuffered_read == 2) retval = posixly_correct ? zreadintr (fd, &c, 1) : zreadn (fd, &c, nchars - nr); else if (unbuffered_read) retval = posixly_correct ? zreadintr (fd, &c, 1) : zread (fd, &c, 1); else retval = posixly_correct ? zreadcintr (fd, &c) : zreadc (fd, &c); More details about bash's "restarting" read builtin here .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/513466", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/308537/" ] }
513,489
I am using a fresh system with PulseAudio. This system previously did not have this issue when using GNOME, but with this reinstall I am using i3wm without a DE. The issue is that after my analog audio output idles for a few seconds (between 5 and 10), it begins to buzz. As soon as something opens the audio device (including pavucontrol ), the buzz goes away. My suspicion is the device is being disabled after idling and some interference from the AC power source is causing the noise. I don't want to have to make something hold ownership of the device, because sometimes I use other software that doesn't play nice with PulseAudio and I need it to access this device. What can I do to remedy this behavior?
I found in /etc/pulse/default.pa and /etc/pulse/system.pa a line: load-module module-suspend-on-idle After commenting these out, the problem is solved.
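If you'd rather not touch the system-wide configuration, the module can also be unloaded from the running daemon (recent pactl versions accept the module name; on older ones, find the numeric index with pactl list short modules and pass that instead):

pactl unload-module module-suspend-on-idle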
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/513489", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53301/" ] }
513,578
I am looking for a clean, "modern" way to configure, start, and stop the dummy0 network interface (from the dummy kernel module). My /etc/network/interfaces used to work on an older system but now fails silently on ifup dummy0 : iface dummy0 inet static address 10.10.0.1 netmask 255.255.255.0 # post-up ip link set dummy0 multicast on Uncommenting the post-up line produces this error (showing that it runs but that the interface is never created): dummy0: post-up cmd 'ip link set dummy0 multicast on'failed: returned 1 (Cannot find device "dummy0") This shell script works perfectly but isn't a nice clean config file: #!/bin/shsudo ip link add dummy0 type dummysudo ip link set dummy0 multicast onsudo ip addr add 10.10.0.1/24 dev dummy0sudo ip link set dummy0 up My intention is to use it both manually and with a systemd service : [Service]Type=oneshotRemainAfterExit=yesExecStart=/sbin/ifup dummy0ExecStop=/sbin/ifdown dummy0StandardOutput=syslog+console Environment: Kubuntu 18.04.2 LTS NetworkManager 1.10.6 iproute2 4.15.0 ifupdown2 1.0 systemd 237 +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid Questions: How can I convert the shell script into a working /etc/network/interfaces configuration? Are there any another cleaner or recommended ways to do this?
The interface wasn't "created" previously; ifupdown relied on it magically appearing as soon as the 'dummy' kernel module was loaded. This is old compatibility behavior, and (IIRC) it also interfered with explicit creation of the same interface name, so it was disabled through a module parameter. Now dummy0 has to be created the same way dummy1 or dummyfoobar are created. You should be able to create the interface in a "pre-up" command: iface dummy0 inet static address 10.10.0.1/24 pre-up ip link add dummy0 type dummy If you also use NetworkManager on this system, recent NM versions support dummy interfaces. nmcli con add type dummy ifname dummy0 ipv4.addresses 10.10.0.1/24 [...] If the interface should be created on boot and remain forever, that can be done using systemd-networkd (one .netdev configuration to create the device, one .network config to set up IP addresses), as sketched below. However, 'networkctl' still does not have manual "up" or "down" subcommands.
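A sketch of that systemd-networkd variant (the file names are arbitrary; the numeric prefix only affects ordering):

# /etc/systemd/network/10-dummy0.netdev -- creates the device on boot
printf '[NetDev]\nName=dummy0\nKind=dummy\n' |
    sudo tee /etc/systemd/network/10-dummy0.netdev

# /etc/systemd/network/10-dummy0.network -- assigns the address
printf '[Match]\nName=dummy0\n\n[Network]\nAddress=10.10.0.1/24\n' |
    sudo tee /etc/systemd/network/10-dummy0.network

sudo systemctl enable --now systemd-networkd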
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/513578", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37818/" ] }
513,648
I have 4 programs (more will be added in the future); these programs have to connect to the same ip:port to send and receive messages at the same time. So far I have the socket opened; I would also like to keep the connection alive between the programs and the server. #!/bin/shnc -lvk 88.109.110.161 100 > port100.txt 2>&1
nc does not handle multiple connected clients in parallel and is the wrong tool for this job. There are quite a few right tools for this job, including: Bernstein tcpserver (original or djbwares) or Hoffman tcpserver : tcpserver -v -R -H -l 0 88.109.110.161 100 sh -c 'exec cat 1>&2' 2>&1 |cyclog port100/ my tcpserver shim: tcpserver -v 88.109.110.161 100 sh -c 'exec cat 1>&2' 2>&1 |cyclog port100/ my UCSPI-TCP tools: tcp-socket-listen 88.109.110.161 100 tcp-socket-accept --verbose sh -c 'exec cat 1>&2' 2>&1 |cyclog port100/ Bercot s6-tcpserver4 : s6-tcpserver4 -v 2 88.109.110.161 100 sh -c 'exec cat 1>&2' 2>&1 |cyclog port100/ Bercot s6-networking tools: s6-tcpserver4-socketbinder 88.109.110.161 100 s6-tcpserver4d -v 2 sh -c 'exec cat 1>&2' 2>&1 |cyclog port100/ Pape tcpsvd : tcpsvd -v 88.109.110.161 100 sh -c 'exec cat 1>&2' 2>&1 |cyclog port100/ Sampson onenetd : onenetd -v 88.109.110.161 100 sh -c 'exec cat 1>&2' 2>&1 |cyclog port100/ And one can substitute multilog , s6-log , svlogd , or tinylog for cyclog . Further reading Protocol: Jonathan de Boyne Pollard (2016). The gen on the UNIX Client-Server Program Interface . Frequently Given Answers. Daniel J. Bernstein (1996). UNIX Client-Server Program Interface . cr.yp.to. toolsets: Daniel J. Bernstein. ucspi-tcp . cr.yp.to. Erwin Hoffmann. ucspi-tcp6 . fehcom.de. s6-networking . Laurent Bercot. skarnet.org. Jonathan de Boyne Pollard (2019). nosh . Softwares. Jonathan de Boyne Pollard (2019). djbwares . Softwares. ipsvd . Gerrit Pape. smarden.org. onenetd . Adam Sampson. offog.org. reference manuals: Daniel J. Bernstein. The tcpserver program . ucspi-tcp. Erwin Hoffmann. tcpserver . ucspi-tcp6 . fehcom.de. s6-tcpserver4 . Laurent Bercot. s6-networking . skarnet.org. tcpsvd . ipsvd . Gerrit Pape. smarden.org. Jonathan de Boyne Pollard (2019). tcpserver . djbwares . Softwares. Jonathan de Boyne Pollard (2019). tcp-socket-listen . nosh Guide . Softwares. Jonathan de Boyne Pollard (2019). tcp-socket-accept . nosh Guide . Softwares. Jonathan de Boyne Pollard (2019). tcpserver . nosh Guide . Softwares. Logging: https://unix.stackexchange.com/a/340631/5132 https://unix.stackexchange.com/a/505854/5132
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/513648", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/288877/" ] }
513,657
I have two simple programs: A and B . A would run first, then B gets the “stdout” of A and uses it as its “stdin”. Assume I am using a GNU/Linux operating system and the simplest possible way to do this would be: ./A | ./B If I had to describe this command, I would say that it is a command that takes input (i.e., reads) from a producer ( A ) and writes to a consumer ( B ). Is that a correct description? Am I missing anything?
The only thing about your question that stands out as wrong is that you say A would run first, then B gets the stdout of A In fact, both programs would be started at pretty much the same time. If there's no input for B when it tries to read, it will block until there is input to read. Likewise, if there's nobody reading the output from A , its writes will block until its output is read (some of it will be buffered by the pipe). The only thing synchronising the processes that take part in a pipeline is the I/O, i.e. the reading and writing across the pipe. If no writing or reading happens, then the two processes will run totally independent of each other. If one ignores the reading or writing of the other, the ignored process will block and eventually be killed by a SIGPIPE signal (if writing) or get an end-of-file condition on its standard input stream (if reading) when the other process terminates. The idiomatic way to describe A | B is that it's a pipeline containing two programs. The output produced on standard output from the first program is available to be read on the standard input by the second ("[the output of] A is piped into [the input of] B "). The shell does the required plumbing to allow this to happen. If you want to use the words "consumer" and "producer", I suppose that's ok too. The fact that these are programs written in C is not relevant. The fact that this is Linux, macOS, OpenBSD or AIX is not relevant.
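The simultaneous start is easy to observe for yourself; in this sketch both halves of the pipeline announce themselves on stderr (which is not part of the pipe) before any pipe I/O takes place:

sh -c 'echo "A started" >&2; sleep 2; echo data' |
    sh -c 'echo "B started" >&2; cat'
# "A started" and "B started" print immediately;
# "data" only arrives two seconds later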
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/513657", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/348269/" ] }
513,693
Over the past couple of days I've been trying to dig through directories of files to report on files containing key words. Through the help of other users showing me the correct syntax for grep and sed , I’ve come up with the following solution which has worked well. Unfortunately, it’s a bit repetitive and I'd like to refactor it. echo "<HR><BR><B><h2>Search for Keyword: KeyWord1<BR></B></h2><ol>" >> temp.txtgrep -lr -Fiw 'KeyWord1' * | sed -e 's|.*|<li><a href="http://&">&</a></li>|' >> temp.txtecho "</ol>" >> temp.txt echo "<HR><BR><B><h2>Search for Keyword: KeyWord2<BR></B></h2><ol>" >> temp.txtgrep -lr -Fiw 'KeyWord2' * | sed -e 's|.*|<li><a href="http://&">&</a></li>|' >> temp.txtecho "</ol>" >> temp.txt echo "<HR><BR><B><h2>Search for Keyword: KeyWord3<BR></B></h2><ol>" >> temp.txtgrep -lr -Fiw 'KeyWord3' * | sed -e 's|.*|<li><a href="http://&">&</a></li>|' >> temp.txtecho "</ol>" >> temp.txt I would like to produce the same results as the output generated by the above, but eliminate the clutter. Please advise on how to properly achieve something like the following: Var myList = "KeyWord1, KeyWord2, KeyWord3"while myList; Do echo "<HR><BR><B><h2>Search for Keyword: $myList<BR></B></h2><ol>" >> temp.txt grep -lr -Fiw '$myList' * | sed -e 's|.*|<li><a href="http://&">&</a></li>|' >> temp.txt echo "</ol>" >> temp.txt ; done
Put the keywords in a list and loop over them with for , redirecting the whole loop's output once instead of appending line by line:

#!/bin/bash
for keyword in KeyWord1 KeyWord2 KeyWord3; do
    echo "<HR><BR><B><h2>Search for Keyword: $keyword<BR></B></h2><ol>"
    grep -lr -Fiw "$keyword" * | sed -e 's|.*|<li><a href="http://&">&</a></li>|'
    echo "</ol>"
done >> temp.txt

Two details from your draft are worth pointing out: the keyword must be expanded with double quotes, since grep '$myList' would search for the literal string $myList (single quotes prevent variable expansion); and redirecting at the loop's done writes everything to temp.txt in one go, so the per-command >> redirections can be dropped.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/513693", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271874/" ] }
514,078
What is /etc/mtab in Linux? Why is it needed, and what are the advantages of having it?
% file /etc/mtab/etc/mtab: symbolic link to ../proc/self/mounts% file /proc/mounts/proc/mounts: symbolic link to self/mounts% /etc/mtab is a compatibility mechanism. Decades ago, Unix did not have a system call for reading the existing mount information. Instead, programs that mounted filesystems were expected to coöperatively and voluntarily maintain a table in /etc/mtab of what was mounted where. For obvious reasons, this was not an ideal mechanism. Linux gained the notion of a "procfs", and one of the things that it gained was a kernel-maintained version of this table, in the form of a mounts pseudo-regular file. The "system call" to read the mount information out of the kernel became an open-read-close sequence against that file, followed by parsing the result from human-readable to machine-readable form (something that has some subtle catches, as you can see from the bug reports from just over a fortnight ago). /etc/mtab thus has popularly become a symbolic link to /proc/mounts , allowing programs that had hardwired that name to keep reading a mount table from that file, which the programs that mounted and unmounted filesystems no longer have to explicitly do anything themselves to keep up to date. (Some of them still will, though, if /etc/mtab turns out to be a writable regular file. And there are a few corner cases where the normalized information in mounts that lacks all non-kernel stuff is not quite what is needed; although they do not outweigh the general problems with /etc/mtab .) Each process can nowadays have its own individual view of what is mounted, and there are as a consequence now individual mounts files for each process in the procfs, each process's own table being accessible to it via the self symbolic link as self/mounts , and /proc/mounts is also now a compatibility mechanism. (Interestingly, neither per-process mounts nor the format of mounts are documented in the current Linux doco, although the similar mountinfo pseudo-regular file is.) SunOS/Solaris has a similar mechanism. The /etc/mnttab file is actually a single-file filesystem, and in addition to reading the table, via an open file descriptor to that file, with the read() system call, one can watch for mount point changes with poll() and obtain various further pieces of information with ioctl() . In HP-UX, /etc/mnttab is likewise the name of the file, but as of version 11 it was still a regular file whose contents were coöperatively maintained by the system utility programs. AIX does not export a human-readable text table that programs have to parse, and there is no equivalent file. The BSDs, similarly, have fully-fledged system calls, getfsstat() on FreeBSD and OpenBSD, for programs to obtain the mount table from the kernel in machine-readable form without marshalling it through a human-readable intermediate form. Further reading Zygmunt Krynicki (2019-03-16). \r in path confuses mount units . #12018. systemd issues. Zbigniew Jędrzejewski-Szmek (2019-04-04). [df] incorrect parsing of /proc/self/mountinfo with \r in mount path . #35137. GNU coreutils bugs. /proc/mounts . Documentation/filesystems/proc.txt . Linux 5.1. Jonathan de Boyne Pollard (2019-02-28). Re: what is the purpose of fstab-decode . Bug #567071. Debian bugs. getfsstat() . FreeBSD System Calls Manual . 2016-12-27.
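Returning to the per-process mount tables mentioned above: they are easy to demonstrate with a private mount namespace (a sketch; needs root and util-linux's unshare):

# The mount is only visible inside the private namespace...
sudo unshare -m sh -c 'mount -t tmpfs none /mnt; grep /mnt /proc/self/mounts'
# ...while the parent shell's own table is unchanged:
grep /mnt /proc/self/mounts || echo "not mounted here"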
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/514078", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61379/" ] }
514,136
There are different methods of storing a certificate file: DER, PEM, PKCS (PKCS7 & PKCS12). Are all of these formats accepted as valid in /usr/local/share/ca-certificates ?
Certificates are added to the CA certificate database using the update-ca-certificates command. This is a shell script that scans the source certificate directories and adds any certificates found to the certificate bundle ( /etc/ssl/certs/ca-certificates.crt ) as well as creating a symlink in /etc/ssl/certs to the certificate. The ca-certificates.crt file is a concatenation of certificates, each in PEM format. The script doesn't convert any certificate formats, therefore it assumes that all certificates in the source folders are in PEM format with a .crt file extension.
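So if you have a certificate in another encoding, convert it to PEM with a .crt extension before installing it, for example for DER (assuming openssl is available):

openssl x509 -inform der -in mycert.der -out mycert.crt   # DER -> PEM
sudo cp mycert.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates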
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/514136", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3285/" ] }
515,249
Consider a file named 'file.txt'. It contains the following. dove is a birdtiger is an animalcricket is a game. Expected output: is a birdis an animalis a game.
To do it using cut cut -f 2- -d ' ' file.txt > new_file.txt "Give me the second and any other field beyond, using space as a delimiter, from the file.txt file and direct the output to new_file.txt"
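The same effect with sed, deleting everything up to and including the first space on each line:

sed 's/^[^ ]* //' file.txt > new_file.txt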
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/515249", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/347601/" ] }
515,251
I would like to have 5 partitions: Boot Swap 3 separate (one is the 'main' system partition) LUKS encrypted partitions I have the boot partition as primary and bootable, and swap as primary Linux swap. At this point I'm stuck, because I can't create the next 3 primary partitions, and I'm not sure how to do it with extended partitions. I'm using the cfdisk program.
The limit you're hitting is the MBR partition table itself: it only has room for four primary partitions. To get five or more, one of the four slots has to be an extended partition, which acts as a container for any number of logical partitions. So in cfdisk a workable layout is: create the boot and swap partitions as primary partitions; create one extended partition spanning the rest of the disk; then create the three LUKS partitions as logical partitions inside it (when creating a partition in the free space, cfdisk lets you choose primary or logical). Logical partitions are numbered from 5 onwards ( sda5 , sda6 , sda7 ), regardless of how many primary partitions exist. Alternatively, if the machine can boot from it, you can use a GPT partition table instead (cfdisk asks for the label type when the disk is empty, or use gdisk ), which supports many partitions with no primary/extended distinction at all.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/515251", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/345828/" ] }
515,268
I'm trying to find the size of a remote http/https directory (non recursively) using a given regexp ( \.mp4 ) from the command line, is there a tool to do so or do I need to parse the index.html ? EDIT0 : Here is the output I want to parse : $ URL=https://cdn.sermons.love/mp4/Joseph%20Prince/$ curl -s $URL | html2text -width $COLUMNS****** Index of /mp4/Joseph Prince ******[Icon ] Name Last_modified SizeDescription=======================================================================================================================[[PARENTDIR]] Parent_Directory -[[ ]] Joseph_Prince_-_Activate_the_Grace_Covenant_Through_Tongues.mp4 2019-04-16 22:32 428M[[ ]] Joseph_Prince_-_Align_Yourself_With_His_Purpose_and_Prosper.mp4 2019-04-16 22:36 452M[[ ]] Joseph_Prince_-_Amazing_Things_Happen_When_You_Flow_with_The_Spirit.mp4 2019-04-16 21:48 391M[[ ]] Joseph_Prince_-_Are_You_Frustrating_The_Favor_of_God.mp4 2019-04-16 21:27 524M[[ ]] Joseph_Prince_-_As_Jesus_Is,_So_Are_You.mp4 2019-04-16 22:28 894M[[ ]] Joseph_Prince_-_Blessings_Flow_Through_Grace.mp4 2019-04-30 16:00 761M[[ ]] Joseph_Prince_-_Break_Every_Bad_Habit_With_Christ.mp4 2019-04-16 22:48 462M[[ ]] Joseph_Prince_-_Break_Free_from_Addiction_and_Shame.mp4 2019-04-16 22:12 388M[[ ]] Joseph_Prince_-_Come_As_You_Are_and_Receive_Your_Miracle.mp4 2019-04-16 22:16 444M[[ ]] Joseph_Prince_-_Discerning_the_Lord's_Body_for_Greater_Health.mp4 2019-04-16 22:38 705M[[ ]] Joseph_Prince_-_Discover_God's_Way_to_Bless_You.mp4 2019-04-16 21:28 524M[[ ]] Joseph_Prince_-_Don't_Fight_-_Feed!.mp4 2019-04-16 22:18 453M[[ ]] Joseph_Prince_-_Draw_The_Blood_Line_Of_Protection.mp4 2019-04-25 13:21 804M[[ ]] Joseph_Prince_-_Easter_from_New_Creation_Church.mp4 2019-04-16 21:23 843M[[ ]] Joseph_Prince_-_Enjoy_Jesus'_Supply_And_Delight_His_Heart.mp4 2019-04-16 21:38 609M[[ ]] Joseph_Prince_-_Experience_God's_Restoration_For_Every_Regret.mp4 2019-04-16 22:37 421M[[ ]] Joseph_Prince_-_Experience_The_Grace_Revolution.mp4 2019-04-16 21:41 189M[[ ]] Joseph_Prince_-_Find_Freedom_from_Every_Bondage_and_Addiction.mp4 2019-04-16 22:13 523M[[ ]] Joseph_Prince_-_Find_True_Fulfillment_In_Life.mp4 2019-04-16 22:20 209M[[ ]] Joseph_Prince_-_Five_Words_To_Live_By__The_Battle_Is_The_Lord's.mp4 2019-04-16 21:57 226M[[ ]] Joseph_Prince_-_Fresh_Grace_For_Every_Trial.mp4 2019-04-16 22:14 609M[[ ]] Joseph_Prince_-_Give_Jesus_Your_Cares_And_Live_Stress-Free.mp4 2019-04-16 21:00 428M[[ ]] Joseph_Prince_-_God_Is_A_Gracious_Rewarder.mp4 2019-04-16 22:42 224M[[ ]] Joseph_Prince_-_God's_Blueprint_For_Leadership.mp4 2019-04-16 21:30 776M[[ ]] Joseph_Prince_-_God's_Perfect_Timing_In_The_Christmas_Story.mp4 2019-04-16 21:29 441M[[ ]] Joseph_Prince_-_God's_Plan_To_Bless_You.mp4 2019-04-16 21:56 200M[[ ]] Joseph_Prince_-_Godly_Discipline_And_Correction_Brings_Promotion.mp4 2019-04-16 21:56 776M[[ ]] Joseph_Prince_-_Got_A_Weakness__God_Can_Use_You!.mp4 2019-04-16 21:25 524M[[ ]] Joseph_Prince_-_Grace_Leadership_In_Action__How_To_Represent_God's_Heart.mp4 2019-04-16 22:30 732M[[ ]] Joseph_Prince_-_He_Is_Risen.mp4 2019-04-16 21:47 442M[[ ]] Joseph_Prince_-_Healing_Flows_When_Grace_Is_Exalted.mp4 2019-04-16 21:51 225M[[ ]] Joseph_Prince_-_Hear_Jesus_Only_And_Be_Uplifted.mp4 2019-04-16 21:25 436M[[ ]] Joseph_Prince_-_Hear_the_Preached_Word_and_See_Breakthroughs.mp4 2019-04-16 21:31 459M[[ ]] Joseph_Prince_-_Hesed_Wisdom_to_Live_Skillfully.mp4 2019-04-16 21:32 389M[[ ]] Joseph_Prince_-_His_Healing_Is_For_The_Undeserving.mp4 2019-04-16 22:20 217M[[ ]] Joseph_Prince_-_His_Promises_Are_Yours_To_Own.mp4 
2019-04-16 22:23 524M[[ ]] Joseph_Prince_-_His_Radiance_Upon_You_Brings_Favor.mp4 2019-04-16 22:00 689M[[ ]] Joseph_Prince_-_His_Resurrection,_Proof_Of_Your_Righteousness.mp4 2019-04-19 11:03 809M[[ ]] Joseph_Prince_-_How_To_Be_Blessed_God's_Way.mp4 2019-04-16 21:08 776M[[ ]] Joseph_Prince_-_How_To_Live_Free_From_The_Curse.mp4 2019-04-16 22:20 614M[[ ]] Joseph_Prince_-_How_To_Make_Spirit-Led_Decisions.mp4 2019-04-16 21:14 871M[[ ]] Joseph_Prince_-_How_To_Pray_When_You_Have_No_Prayer.mp4 2019-04-16 22:31 372M[[ ]] Joseph_Prince_-_How_You_See_Jesus_Is_How_You_Will_Receive.mp4 2019-04-16 21:50 657M[[ ]] Joseph_Prince_-_Immanuel__What_It_Means_To_Have_The_Lord_With_You.mp4 2019-04-16 21:30 210M[[ ]] Joseph_Prince_-_Inherit_God's_Promises_By_Faith,_Not_by_Works.mp4 2019-04-16 21:49 521M[[ ]] Joseph_Prince_-_Jesus_Draws_Near_When_You_Are_Discouraged_(Live_In_Israel).mp4 2019-04-16 21:18 561M[[ ]] Joseph_Prince_-_Jesus_Our_Jubilee.mp4 2019-04-16 21:53 694M[[ ]] Joseph_Prince_-_Jesus__Your_Reason_For_A_Fear-Free_Life.mp4 2019-04-16 21:05 210M[[ ]] Joseph_Prince_-_Keys_To_Healing_In_The_Hebrew_Language.mp4 2019-04-16 21:00 250M[[ ]] Joseph_Prince_-_Last_To_First_When_You_Trust_His_Goodness.mp4 2019-04-16 22:50 781M[[ ]] Joseph_Prince_-_Learn_to_See_What_God_Sees.mp4 2019-04-16 21:46 419M[[ ]] Joseph_Prince_-_Let_Go_And_Let_His_Supply_Flow.mp4 2019-04-16 22:33 461M[[ ]] Joseph_Prince_-_Let_Go_and_Flow_in_the_Vine_Life.mp4 2019-04-16 22:26 458M[[ ]] Joseph_Prince_-_Live_Bold_Without_Guilt_and_Fear.mp4 2019-04-16 22:04 427M[[ ]] Joseph_Prince_-_Live_Confident.mp4 2019-04-16 21:03 426M[[ ]] Joseph_Prince_-_Live_Life_Loved_By_The_Shepherd.mp4 2019-04-16 21:39 428M[[ ]] Joseph_Prince_-_Live_Long,_Live_Strong.mp4 2019-04-16 22:35 621M[[ ]] Joseph_Prince_-_Live_Strong_In_The_Father's_Love.mp4 2019-04-16 21:11 465M[[ ]] Joseph_Prince_-_Live_Undefeated_In_Christ.mp4 2019-04-16 21:57 438M[[ ]] Joseph_Prince_-_Make_Grace_Your_Way_of_Life.mp4 2019-04-16 22:15 520M[[ ]] Joseph_Prince_-_Move_From_Predicament_To_Promotion.mp4 2019-04-16 22:46 813M[[ ]] Joseph_Prince_-_No_Condemnation_Leads_to_Divine_Health.mp4 2019-04-16 21:59 685M[[ ]] Joseph_Prince_-_Not_Ashamed_Of_The_Gospel.mp4 2019-04-16 22:22 235M[[ ]] Joseph_Prince_-_Practical_Leadership_Keys_To_Living_Holy.mp4 2019-04-16 21:36 829M[[ ]] Joseph_Prince_-_Receive_God's_Supply_for_All_of_Life's_Demands.mp4 2019-04-16 22:44 525M[[ ]] Joseph_Prince_-_Redemption_Truths_That_Bless_Your_Relationships.mp4 2019-04-16 21:09 217M[[ ]] Joseph_Prince_-_Rest_And_Receive_At_Jesus'_Feet.mp4 2019-04-16 21:54 449M[[ ]] Joseph_Prince_-_Rest_In_Jesus'_Faith_For_Miracles.mp4 2019-04-16 21:04 519M[[ ]] Joseph_Prince_-_Rest_Till_Your_Enemies_Become_Your_Footstool.mp4 2019-04-16 20:59 472M[[ ]] Joseph_Prince_-_Rest!_God_Is_Working_Behind_The_Scenes.mp4 2019-04-16 22:43 523M[[ ]] Joseph_Prince_-_Say_Amen_To_God's_Promises.mp4 2019-04-16 21:45 521M[[ ]] Joseph_Prince_-_Set_Apart_To_Be_Kings_And_Priests.mp4 2019-04-16 21:34 802M[[ ]] Joseph_Prince_-_Set_Free_to_Reign_in_Life.mp4 2019-04-16 22:06 528M[[ ]] Joseph_Prince_-_Speak_Out_and_Find_Strength.mp4 2019-04-16 22:11 1.1G[[ ]] Joseph_Prince_-_Speak_Out_by_Faith_and_Win.mp4 2019-04-16 21:20 412M[[ ]] Joseph_Prince_-_Stay_on_Grace_Ground_and_Experience_True_Life.mp4 2019-04-16 21:41 1.0G[[ ]] Joseph_Prince_-_Stronger_Through_Every_Trial_And_Battle.mp4 2019-04-16 22:47 789M[[ ]] Joseph_Prince_-_The_Four_Gospels_Unlocked_for_Your_Blessings.mp4 2019-04-16 22:17 922M[[ ]] Joseph_Prince_-_The_Friend_You_Can_Always_Depend_On.mp4 2019-04-16 
22:40 1.1G[[ ]] Joseph_Prince_-_The_God_Who_Goes_Before.mp4 2019-04-16 22:41 442M[[ ]] Joseph_Prince_-_The_Health-Giving_Power_Of_A_Relaxed_Heart.mp4 2019-04-16 21:10 613M[[ ]] Joseph_Prince_-_The_Heart_of_the_Father_Revealed.mp4 2019-04-16 21:44 468M[[ ]] Joseph_Prince_-_The_Lord_Our_Righteousness.mp4 2019-04-16 21:02 423M[[ ]] Joseph_Prince_-_The_Secret_of_Hearing_That_Brings_Untold_Blessings.mp4 2019-04-16 22:25 687M[[ ]] Joseph_Prince_-_The_Spirit's_Rivers_Of_Provision_And_Healing.mp4 2019-04-16 21:52 404M[[ ]] Joseph_Prince_-_Turn_Your_Frustrations_Into_Breakthroughs.mp4 2019-04-16 21:12 461M[[ ]] Joseph_Prince_-_Unlocking_Redemption's_Blessings_In_Your_Life.mp4 2019-04-16 21:43 686M[[ ]] Joseph_Prince_-_Victory_in_Your_Day_of_Trouble.mp4 2019-04-16 22:05 456M[[ ]] Joseph_Prince_-_Walk_In_Constant_Victory_Over_Fear.mp4 2019-04-16 22:09 1.1G[[ ]] Joseph_Prince_-_What_Is_Earnest_Prayer_To_God.mp4 2019-04-16 22:31 215M[[ ]] Joseph_Prince_-_What_Makes_No_Weapon_Prosper_Against_You.mp4 2019-04-30 15:31 253M[[ ]] Joseph_Prince_-_Where_Is_God_In_The_Midst_Of_Your_Trouble.mp4 2019-04-16 21:17 833M[[ ]] Joseph_Prince_-_Win_Over_Discouragement,_Depression_and_Burnout.mp4 2019-04-16 22:24 396M[[ ]] Joseph_Prince_-_Win_Over_Fear_and_Pride.mp4 2019-04-16 21:19 459M[[ ]] Joseph_Prince_-_Win_Over_Guilt_and_Condemnation.mp4 2019-04-16 21:15 223M[[ ]] Joseph_Prince_-_Wisdom_For_Holy_Living.mp4 2019-04-16 21:07 791M[[ ]] Joseph_Prince_-_You_Are_Forgiven.mp4 2019-04-16 22:04 440M[[ ]] Joseph_Prince_-_You_Can_Live_Healed.mp4 2019-04-16 22:33 217M[[ ]] Joseph_Prince_-_Your_Blessed_Hope_In_Dark_Times.mp4 2019-04-16 22:21 464M[[ ]] Joseph_Prince_-_Your_Only_Battle_is_Fight_to_Remain_at_Rest.mp4 2019-04-16 21:01 616M[[ ]] Joseph_Prince_-_Your_Past_Does_Not_Determine_Your_Future.mp4 2019-04-16 21:22 685M[[ ]] Joseph_Prince_-_Your_Reason_For_A_Fear_Free_Life.mp4 2019-04-16 21:13 468M[[ ]] Joseph_Prince_-_Your_Security_in_Time_of_Shaking.mp4 2019-04-16 22:03 681M======================================================================================================================= EDIT1 : I tried this bit it does not work because the file sizes in Gigs are not added : $ curl -s $URL | html2text -width $COLUMNS | sed "s/M$//;s/G$/*1024/" | awk '/\.mp4/{size=$6;print"=> 6th field = "$6" size = "size >"/dev/stderr";total+=size}END{print "=> total = "total}'=> 6th field = 428 size = 428=> 6th field = 452 size = 452=> 6th field = 391 size = 391=> 6th field = 524 size = 524=> 6th field = 894 size = 894=> 6th field = 462 size = 462=> 6th field = 388 size = 388=> 6th field = 444 size = 444=> 6th field = 705 size = 705=> 6th field = 524 size = 524=> 6th field = 453 size = 453=> 6th field = 843 size = 843=> 6th field = 609 size = 609=> 6th field = 421 size = 421=> 6th field = 189 size = 189=> 6th field = 523 size = 523=> 6th field = 209 size = 209=> 6th field = 226 size = 226=> 6th field = 609 size = 609=> 6th field = 428 size = 428=> 6th field = 224 size = 224=> 6th field = 776 size = 776=> 6th field = 441 size = 441=> 6th field = 200 size = 200=> 6th field = 776 size = 776=> 6th field = 524 size = 524=> 6th field = 732 size = 732=> 6th field = 442 size = 442=> 6th field = 225 size = 225=> 6th field = 436 size = 436=> 6th field = 459 size = 459=> 6th field = 389 size = 389=> 6th field = 217 size = 217=> 6th field = 524 size = 524=> 6th field = 689 size = 689=> 6th field = 809 size = 809=> 6th field = 776 size = 776=> 6th field = 614 size = 614=> 6th field = 871 size = 871=> 6th field = 372 size = 372=> 6th field = 657 
size = 657=> 6th field = 210 size = 210=> 6th field = 521 size = 521=> 6th field = 561 size = 561=> 6th field = 694 size = 694=> 6th field = 210 size = 210=> 6th field = 250 size = 250=> 6th field = 781 size = 781=> 6th field = 419 size = 419=> 6th field = 461 size = 461=> 6th field = 458 size = 458=> 6th field = 427 size = 427=> 6th field = 426 size = 426=> 6th field = 428 size = 428=> 6th field = 621 size = 621=> 6th field = 465 size = 465=> 6th field = 438 size = 438=> 6th field = 520 size = 520=> 6th field = 813 size = 813=> 6th field = 685 size = 685=> 6th field = 235 size = 235=> 6th field = 829 size = 829=> 6th field = 525 size = 525=> 6th field = 217 size = 217=> 6th field = 449 size = 449=> 6th field = 519 size = 519=> 6th field = 472 size = 472=> 6th field = 523 size = 523=> 6th field = 521 size = 521=> 6th field = 802 size = 802=> 6th field = 528 size = 528=> 6th field = 1.1*1024 size = 1.1*1024=> 6th field = 412 size = 412=> 6th field = 1.0*1024 size = 1.0*1024=> 6th field = 789 size = 789=> 6th field = 922 size = 922=> 6th field = 1.1*1024 size = 1.1*1024=> 6th field = 442 size = 442=> 6th field = 613 size = 613=> 6th field = 468 size = 468=> 6th field = 423 size = 423=> 6th field = 687 size = 687=> 6th field = 404 size = 404=> 6th field = 461 size = 461=> 6th field = 686 size = 686=> 6th field = 456 size = 456=> 6th field = 1.1*1024 size = 1.1*1024=> 6th field = 215 size = 215=> 6th field = 450 size = 450=> 6th field = 833 size = 833=> 6th field = 396 size = 396=> 6th field = 459 size = 459=> 6th field = 223 size = 223=> 6th field = 791 size = 791=> 6th field = 440 size = 440=> 6th field = 217 size = 217=> 6th field = 464 size = 464=> 6th field = 616 size = 616=> 6th field = 685 size = 685=> 6th field = 468 size = 468=> 6th field = 681 size = 681=> total = 49588.3 EDIT2 : I finally did that in Perl : $ curl -s $URL | html2text -width $COLUMNS | perl -n -ale 'if(/mp4 /){$F[-1] =~ s/M$//;$F[-1] =~ s/G$/*1024/;$total+=eval $F[-1]}END{print "=> total = ",$total}'=> total = 53987.2 EDIT3 : I also tried a combination of pup and jq but it seems date and size not in array, therefore it is not easy to parse : $ curl -s $URL | pup 'json{}' | jq '.[] | .children[0].children[1].children[1].text' "- 2019-04-16 22:32 428M 2019-04-16 22:36 452M 2019-04-16 21:48 391M 2019-04-16 21:27 524M 2019-04-16 22:28 894M 2019-04-30 16:00 761M 2019-04-16 22:48 462M 2019-04-16 22:12 388M 2019-04-16 22:16 444M 2019-04-16 22:38 705M 2019-04-16 21:28 524M 2019-04-16 22:18 453M 2019-04-25 13:21 804M 2019-04-16 21:23 843M 2019-04-16 21:38 609M 2019-04-16 22:37 421M 2019-04-16 21:41 189M 2019-04-16 22:13 523M 2019-04-16 22:20 209M 2019-04-16 21:57 226M 2019-04-16 22:14 609M 2019-04-16 21:00 428M 2019-04-16 22:42 224M 2019-04-16 21:30 776M 2019-04-16 21:29 441M 2019-04-16 21:56 200M 2019-04-16 21:56 776M 2019-04-16 21:25 524M 2019-04-16 22:30 732M 2019-04-16 21:47 442M 2019-04-16 21:51 225M 2019-04-16 21:25 436M 2019-04-16 21:31 459M 2019-04-16 21:32 389M 2019-04-16 22:20 217M 2019-04-16 22:23 524M 2019-04-16 22:00 689M 2019-04-19 11:03 809M 2019-04-16 21:08 776M 2019-04-16 22:20 614M 2019-04-16 21:14 871M 2019-04-16 22:31 372M 2019-04-16 21:50 657M 2019-04-16 21:30 210M 2019-04-16 21:49 521M 2019-04-16 21:18 561M 2019-04-16 21:53 694M 2019-04-16 21:05 210M 2019-04-16 21:00 250M 2019-04-16 22:50 781M 2019-04-16 21:46 419M 2019-04-16 22:33 461M 2019-04-16 22:26 458M 2019-04-16 22:04 427M 2019-04-16 21:03 426M 2019-04-16 21:39 428M 2019-04-16 22:35 621M 2019-04-16 21:11 465M 2019-04-16 21:57 438M 2019-04-16 22:15 
520M 2019-04-16 22:46 813M 2019-04-16 21:59 685M 2019-04-16 22:22 235M 2019-04-16 21:36 829M 2019-04-16 22:44 525M 2019-04-16 21:09 217M 2019-04-16 21:54 449M 2019-04-16 21:04 519M 2019-04-16 20:59 472M 2019-04-16 22:43 523M 2019-04-16 21:45 521M 2019-04-16 21:34 802M 2019-04-16 22:06 528M 2019-04-16 22:11 1.1G 2019-04-16 21:20 412M 2019-04-16 21:41 1.0G 2019-04-16 22:47 789M 2019-04-16 22:17 922M 2019-04-16 22:40 1.1G 2019-04-16 22:41 442M 2019-04-16 21:10 613M 2019-04-16 21:44 468M 2019-04-16 21:02 423M 2019-04-16 22:25 687M 2019-04-16 21:52 404M 2019-04-16 21:12 461M 2019-04-16 21:43 686M 2019-04-16 22:05 456M 2019-04-16 22:09 1.1G 2019-04-16 22:31 215M 2019-04-30 15:31 253M 2019-04-16 21:17 833M 2019-04-16 22:24 396M 2019-04-16 21:19 459M 2019-04-16 21:15 223M 2019-04-16 21:07 791M 2019-04-16 22:04 440M 2019-04-16 22:33 217M 2019-04-16 22:21 464M 2019-04-16 21:01 616M 2019-04-16 21:22 685M 2019-04-16 21:13 468M 2019-04-16 22:03 681M" Isn't there an easier or "cleaner" way ?
To do it using cut:

cut -f 2- -d ' ' file.txt > new_file.txt

"Give me the second field and any other field beyond, using space as a delimiter, from the file.txt file, and direct the output to new_file.txt"
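For example, on a made-up input line (the text here is just for illustration):

$ echo 'one two three four' | cut -f 2- -d ' '
two three four

Everything from the second space-separated field onwards is kept, so the first field is effectively stripped.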
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/515268", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135038/" ] }
515,303
When trying to install Python 3.7 on Ubuntu 18.04 I get error messages like:

zipimport.ZipImportError: can't decompress data; zlib not available

or

ModuleNotFoundError: No module named '_ctypes'

or

~/.pyenv/plugins/python-build/bin/python-build: line 775: make: command not found

or

configure: error: no acceptable C compiler found in $PATH
From https://bugs.python.org/issue31652#msg321260 :

sudo apt-get install build-essential libsqlite3-dev sqlite3 bzip2 libbz2-dev zlib1g-dev libssl-dev openssl libgdbm-dev libgdbm-compat-dev liblzma-dev libreadline-dev libncursesw5-dev libffi-dev uuid-dev
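Those packages cover the missing compiler (build-essential pulls in gcc and make) and the development headers behind the other errors (zlib1g-dev for the zlib failure, libffi-dev for _ctypes). Since the error paths in the question point at ~/.pyenv, the build was presumably driven by pyenv; once the packages are in place, re-running the install should get past those failures — a sketch, with the exact version number being an assumption:

pyenv install 3.7.3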
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/515303", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/200682/" ] }
515,595
Is there any way in Linux/Unix to determine the client application a user is using to connect to the OS? In our environment, thousands of users connect to our system via client applications like WinSCP, DbVisualizer, PuTTY etc. We need to check which client application each user is using to connect to the server.
Simple answer: No, there is no sure way. Terminal applications often will report available features when queried with the right escape sequences but this is sometimes not enough to detect which application is used, and it can very easily be faked. In general, the actual terminal application used should not matter, so maybe you should rephrase your question to state the actual problem you try to solve instead.
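For interactive terminal sessions you can at least ask the terminal to identify itself. The sketch below sends the Secondary Device Attributes query (ESC [ > c), which many terminal emulators (PuTTY included) answer with a model/version string. Treat it strictly as a hint under these assumptions: it must run on the user's interactive tty, non-terminal clients such as WinSCP will simply never answer, and the reply can be spoofed:

#!/bin/sh
# Query DA2 and capture the reply; assumes we are attached to an interactive tty
old=$(stty -g < /dev/tty)               # save current terminal settings
stty raw -echo min 0 time 5 < /dev/tty  # non-canonical read, ~0.5s timeout
printf '\033[>c' > /dev/tty             # send the DA2 query
reply=$(dd bs=64 count=1 2>/dev/null < /dev/tty)
stty "$old" < /dev/tty                  # restore the terminal
printf '%s\n' "$reply" | cat -v         # e.g. ^[[>41;330;0c on xterm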
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/515595", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/227193/" ] }
515,602
How do I list modified or newly created files or directories in Linux, so that I can trigger another command or shell script for another task? E.g. the files a.txt and test.txt are modified, and I want to find the latest changed files using a Linux command and then trigger a restart.sh script (say) to pick up the changes.
Simple answer: No, there is no sure way. Terminal applications often will report available features when queried with the right escape sequences but this is sometimes not enough to detect which application is used, and it can very easily be faked. In general, the actual terminal application used should not matter, so maybe you should rephrase your question to state the actual problem you try to solve instead.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/515602", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/347024/" ] }
515,614
Presently I am using this for ControlPath:

ControlPath /home/user/.ssh/sockets/ssh_mux_%h_%p_%r

If I connect to the hostname 'redishost', it creates a socket named after redishost. If I connect to the same host 'redishost' by its IP address, it creates a socket named after the IP address. Is it possible to use the IP address for all SSH connections, instead of the hostname %h, in ControlPath?
Simple answer: No, there is no sure way. Terminal applications often will report available features when queried with the right escape sequences but this is sometimes not enough to detect which application is used, and it can very easily be faked. In general, the actual terminal application used should not matter, so maybe you should rephrase your question to state the actual problem you try to solve instead.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/515614", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/205721/" ] }
515,745
I came across this article saying that yaourt is deprecated. I wanted to look up more information about that on the Arch Wiki, but found that yaourt is not listed on the list of AUR helpers, which is bewildering. Is yaourt really deprecated? If so, what are the reasons? Is there an official announcement on the Arch Wiki? And what are some widely recognized alternatives to yaourt among those listed on the page just cited (if not all of them)?
Yes, yaourt has been removed from the AUR. If you really want an AUR helper, please choose one from the list of AUR helpers (I would personally recommend yay).
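If you go with yay, the usual way to bootstrap it is to build it from the AUR by hand once — a sketch, assuming git and the base-devel group are acceptable on your system:

sudo pacman -S --needed git base-devel   # build prerequisites
git clone https://aur.archlinux.org/yay.git
cd yay
makepkg -si                              # build the package and install it with pacman

After that, yay itself can manage AUR installs and upgrades.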
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/515745", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/227333/" ] }
515,862
I tried the following command after watching this video on pipe shenanigans:

man -k . | dmenu -l 20 | awk '{print $1}' | xargs -r man -Tpdf | zathura -

It basically prints a list of manpages to dmenu for the user to select one of them, then it uses xargs to run man -Tpdf % (which prints to stdout a PDF of the manpage taken from xargs' input) and passes the PDF to a PDF reader (zathura). The problem is that (as you can see in the video) the PDF reader starts even before I select a manpage in dmenu. And if I press Esc and select none, the PDF reader is still open, showing no document at all. How can I make the PDF reader (and any other command in a pipe chain) run only when its input reaches end-of-file, or when it receives any input at all? Or, alternatively, how can I make a pipe chain stop after one of the chained commands returns a non-zero exit status (so that if dmenu returns an error for not selecting an option, the following commands are not run)?
How can I make the PDF reader (and any other command in a pipe chain) run only when its input reaches end-of-file, or when it receives any input at all?

There is ifne (in Debian it's in the moreutils package):

ifne runs the following command if and only if the standard input is not empty.

In your case:

… | ifne zathura -
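Applied to the pipeline from the question, only the last stage changes. If Esc is pressed in dmenu, awk passes nothing on, xargs -r runs no man, so nothing reaches ifne's stdin and zathura is never started:

man -k . | dmenu -l 20 | awk '{print $1}' | xargs -r man -Tpdf | ifne zathura -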
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/515862", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/233964/" ] }
515,869
Call me a dreamer, but imagine a world where "every" CLI tool we use had an option to produce stable output, say in JSON. Programmatic use of CLI tools like ls, free, df, fdisk would be a breeze. The way GNU standardized argument syntax conventions, can it standardize the output along the lines of "--json produces a tool-specific report formatted according to the JSON spec"? Has this been attempted and rejected, perhaps? If not, how do we push for something like this?
You would advocate for this on the mailing lists dedicated to the specific tools you are interested in. The available GNU mailing lists are available here: https://lists.gnu.org/mailman/listinfo/ If one or other of the tools you are interested in is not represented by any GNU mailing list, then you would have to investigate who's maintaining it and whether there's an associated mailing list that they maintain. Note that feature requests to open source projects have a much higher chance of getting accepted if you can provide a patch of the source code that implements the feature and that works.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/515869", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/339722/" ] }
515,874
I'm trying to create a new user for my Rails application. I already ran psql -p 5432 -h localhost -U postgres and created a new user. Then I added the user with sudo adduser user_name and afterwards changed to the user with sudo su user_name. This works, but when I try to create a new app with rails new app -d postgresql, I'm getting the error "command rails not found". When I try to install the rails command with apt install ruby-railties, I'm getting the following error:

E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?
You would advocate for this on the mailing lists dedicated to the specific tools you are interested in. The available GNU mailing lists are available here: https://lists.gnu.org/mailman/listinfo/ If one or other of the tools you are interested in is not represented by any GNU mailing list, then you would have to investigate who's maintaining it and whether there's an associated mailing list that they maintain. Note that feature requests to open source projects have a much higher chance of getting accepted if you can provide a patch of the source code that implements the feature and that works.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/515874", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/350299/" ] }
515,881
root@macine:~# getcap ./some_bin
./some_bin =ep

What does "ep" mean? What are the capabilities of this binary?
# getcap ./some_bin
./some_bin =ep

That binary has ALL the capabilities permitted (p) and effective (e) from the start. In the textual representation of capabilities, a leading = is equivalent to all=. From the cap_to_text(3) manpage:

In the case that the leading operator is =, and no list of capabilities is provided, the action-list is assumed to refer to all capabilities. For example, the following three clauses are equivalent to each other (and indicate a completely empty capability set): all=; =; cap_chown,<every-other-capability>=.

Such a binary can do whatever it pleases, limited only by the capability bounding set, which on a typical desktop system includes everything (otherwise setuid binaries like su wouldn't work as expected).

Notice that this is just a "gotcha" of the textual representation used by libcap: in the security.capability extended attribute of the file for which getcap will print /file/path =ep, all the meaningful bits are effectively on; for an empty security.capability, /file/path = (with the = not followed by anything) will be printed instead.

If someone is still not convinced, here is a small experiment:

# cp /bin/ping /tmp/ping    # will wipe setuid bits and extended attributes
# su user -c '/tmp/ping localhost'
ping: socket: Operation not permitted
# setcap =ep /tmp/ping
# su user -c '/tmp/ping localhost'    # will work because of cap_net_raw
PING localhost(localhost (::1)) 56 data bytes
64 bytes from localhost (::1): icmp_seq=1 ttl=64 time=0.073 ms
^C
# setcap = /tmp/ping
# su user -c '/tmp/ping localhost'
ping: socket: Operation not permitted

Notice that an empty file capability is also different from a removed capability (setcap -r /file/path): an empty file capability will block the Ambient set from being inherited when the file executes. A subtlety of the =ep file capability is that if the bounding set is not a full one, then the kernel will prevent a program with =ep on it from executing (as described in the "Safety checking for capability-dumb binaries" section of the capabilities(7) manpage).
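A quick way to see the three states (full, empty, absent) side by side — the file names are hypothetical, setcap needs root (or CAP_SETFCAP), and the output format shown matches the libcap version in the question (newer getcap releases print it slightly differently):

# touch /tmp/f1 /tmp/f2
# setcap =ep /tmp/f1       # all capabilities, permitted + effective
# setcap = /tmp/f2         # empty, but present, security.capability attribute
# getcap /tmp/f1 /tmp/f2
/tmp/f1 =ep
/tmp/f2 =
# setcap -r /tmp/f1        # removes the attribute entirely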
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/515881", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/350305/" ] }
515,891
I generated the keypair on Computer1. I then moved the public key to Computer2 (the server) and put it in authorized_keys, and moved the private key to Computer3 (the client) and used ssh-add to add it. Why can I directly log in to the server without offering a public key? What's the real workflow of SSH key authorization?
# getcap ./some_bin./some_bin =ep That binary has ALL the capabilites permitted ( p ) and effective ( e ) from the start. In the textual representation of capabilities, a leading = is equivalent to all= . From the cap_to_text(3) manpage: In the case that the leading operator is = , and no list of capabilities is provided, the action-list is assumed to refer to all capabilities. For example, the following three clauses are equivalent to eachother (and indicate a completely empty capability set): all= ; = ; cap_chown,<every-other-capability>= . Such a binary can do whatever it pleases, limited only by the capability bounding set, which on a typical desktop system includes everything (otherwise setuid binaries like su wouldn't work as expected). Notice that this is just a "gotcha" of the textual representation used by libcap : in the security.capability extended attribute of the file for which getcap will print /file/path =ep , all the meaningful bits are effectively on ; for an empty security.capability , /file/path = (with the = not followed by anything) will be printed instead. If someone is still not convinced, here is a small experiment: # cp /bin/ping /tmp/ping # will wipe setuid bits and extented attributes# su user -c '/tmp/ping localhost'ping: socket: Operation not permitted# setcap =ep /tmp/ping# su user -c '/tmp/ping localhost' # will work because of cap_net_rawPING localhost(localhost (::1)) 56 data bytes64 bytes from localhost (::1): icmp_seq=1 ttl=64 time=0.073 ms^C# setcap = /tmp/ping# su user -c '/tmp/ping localhost'ping: socket: Operation not permitted Notice that an empty file capability is also different from a removed capability ( capset -r /file/path ), an empty file capability will block the Ambient set from being inherited when the file executes. A subtlety of the =ep file capability is that if the bounding set is not a full one, then the kernel will prevent a program with =ep on it from executing (as described in the "Safety checking for capability-dumb binaries" section of the capabilities(7) manpage).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/515891", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/350311/" ] }
515,895
What is the difference between a[bc]d and a{b,c}d? Why do people use a{b,c}d when there is already a[bc]d?
The two are quite different.

a[bc]d is a filename pattern (in shells other than fish). It will expand to the two filenames abd and acd if those are names of existing files in the current directory. The [...] part is a bracketed expression that matches a single character out of the ones listed (or collating elements when ranges are included). To match the pattern a[bc]d, the character between the strings a and d in a filename must be either a b or a c.

If abd exists, but acd does not, then it would only expand to abd, and vice versa. If neither abd nor acd exists, then depending on the shell and the options, it would trigger an error (original Unix sh, (t)csh, zsh, fish, bash -O failglob) and possibly exit the shell, or leave the pattern unexpanded¹ (Bourne-like and rc-like shells), or expand to nothing (bash/zsh/yash -o nullglob, some older versions of fish, original Unix sh and (t)csh if there are other matching globs in the same command).

a{b,c}d is a brace expansion (in shells that support these). It will expand to the two strings abd and acd. The {...} part is a comma-delimited set of strings (in this example; in some shells, it may also be a range such as a..k or 20..25, or more advanced ones like 00..20..2 or 0..20..2%02d), and the expansion is computed by combining each of these strings with the flanking strings a and d. These strings could be longer than a single character and could also be brace expansions themselves. The expansion happens regardless of whether these strings correspond to existing filenames or not.

If you are constructing strings, use a brace expansion. If you are matching filenames, use a filename pattern.

¹ In this particular case, a[bc]d could happen to be the name of an existing file, which is why it's potentially dangerous to use things like rm -f ./*.[ch] in those shells; rm -f ./*.{c,h} is less of a problem.
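A quick demonstration of the difference, run in a fresh directory (the filenames here are just for illustration; shown with bash's default globbing behavior):

$ cd "$(mktemp -d)"
$ touch abd            # create only one of the two candidate names
$ echo a{b,c}d         # brace expansion: both strings, whether files exist or not
abd acd
$ echo a[bc]d          # filename pattern: only the names that actually exist
abd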
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/515895", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/259023/" ] }