source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
356,710 | My file shows as attached. Is it corrupted? | n8te commented that the files are in the subdirectory Recordings of your home directory . My answer covers how to find the files if the application doesn't give you a clue. While an application has the file open, you can use lsof to locate it. Note that this only works while the file is open at the operating system level, which may not always be the case while the application displays the file. For example a text or image editor typically opens the file to read or save it, but closes it immediately after each load or save operation. But I would expect a sound recorder to write progressively to the output file, and for that it would keep the file open as long as it's recording. To find what files an application has open, first install lsof . It's available as a package on most distributions. Open a terminal; all my instructions use the command line. You'll need to determine the process ID of the application. You can run the command ps xf (that's on Linux; other Unix variants have different options for the ps command; as a last resort you can use ps -e to list everything). Try pgrep sound or ps x | grep -i sound to locate all the running programs whose name contains “sound”. Alternatively, run xprop | grep _NET_WM_PID and click on the program window. Once you've determined the process ID, for example 1234, run lsof -p1234 Another approach is to look for recently modified files. You can use the find command for that. For example, to look for files modified in the last 5 minutes: find ~ -type f -mmin -5 ~ means your home directory. A saved file would normally be in your home directory because that's the only location where an application is guaranteed to be able to write, except for temporary files that can be wiped out as soon as the application exits. -type f restricts to regular files (we don't need to see directories here) and -mmin -5 means “less than 5 minutes ago”. There's also -mtime which counts in days instead of minutes. If you're looking for a file that's been moved rather than created or modified, use -cmin instead of -mmin ; the ctime is the time at which anything was last done on the file except for reading it (but including changing permissions, moving, etc.). You can also look for files by name, e.g. find ~ -name '*blendervid*' -type f looks for files whose name contains blendervid (and you can add something like -mmin -5 to further restrict matches to recent files). If you know part of the name of a file and the file was created a while ago, you can use the locate command. locate blendervid locate is a lot faster than find because it uses a pre-built index. But it can only find files that existed when the index was built. Most distributions arrange for the index to be rebuilt every night, or soon after boot (via anacron ) if the system isn't always on. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/356710",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/224965/"
]
} |
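The answer above describes three approaches; as a minimal sketch, the core commands are collected below. The process name fragment "sound" and the PID 1234 are placeholders taken from the answer's own examples, not values from the asker's system.

pgrep -l sound            # list PIDs of processes whose name contains "sound"
lsof -p 1234              # list the files a given PID currently holds open
find ~ -type f -mmin -5   # fallback: regular files under $HOME modified in the last 5 minutes
locate blendervid         # fastest, but only finds files already present in the prebuilt index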
356,732 | When I open a file into ranger with a GUI application not listed in the rifle.conf file (i.e. using the open_with command), the ranger terminal window gets "suspended" until I close the GUI app. For this reason, I'd like to have a way to open files with a specific application, but still get the ability to navigate the files in the ranger terminal. This is the default behaviour when you open the same file with one of the application listed in the rifle.conf file. Is there any way to achieve the goal? | Try open_with with the f or t flag: open_with [application] [flags] [mode] Open the selected files with the given application, unless it is omitted, in which case the default application is used. flags change the way the application is executed and are described in their own section in this man page. The mode is a number that specifies which application to use. The list of applications is generated by the external file opener "rifle" and can be displayed when pressing "r" in ranger. Note that if you specify an application, the mode is ignored. Flags give you a way to modify the behavior of the spawned process. They are used in the commands :open_with (key "r") and :shell (key "!"). f Fork the process. (Run in background) c Run the current file only, instead of the selection r Run application with root privilege (requires sudo) t Run application in a new terminal window | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/356732",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85082/"
]
} |
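As a usage illustration of the flags quoted above, the following could be typed at ranger's command prompt (opened with ":"); gimp and vim are assumed example applications, not programs mentioned in the original question.

:open_with gimp f    # fork gimp into the background so ranger stays usable
:open_with vim t     # run vim in a new terminal window instead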
356,738 | I just recently bought a new PC, and I am unable to install debian system proprerly. Now I will provide you in most detailed way my configuration and status: PC: CPU: i7 7700K (Kaby Lake with Intel HD graphics 630) MB: MSi Z270 SLI PLUS RAM: Kingston HYPER 2x8GB RAM @ 2400MHz (12CL) SSD: intel 600 256GB DISTRO: Debian GNU/Linux 8.7 Jessie After the instalation of the system, OS started properly however xserver runned only in low resolution (1024x768). I assumed that it is a driver problem, so I have installed some drivers from this site with no succes, then I tried some new kernels: 3.16.43 X 3.18.20 X 4.1.39 X 4.4.59 X 4.9.20 M 4.10.8 M 4.11-rc5 M With kernel with sign X (see above) the situation was the same as with original kernel 3.16.0-4 , however with ones with M sign, was different: It looked that it started with a proper resolution, however xserver crashed in /var/log/Xorg.0.log there was message: Screens found,but none have a usable configuration and then Fatal server error: no screens found I tried to change some xorg configurations or some settings in i915 module, but with no success any help would be appreciated Thank you! EDIT: After removing all manually installed kernels, installing kernel 4.9 from jessie-backports and removing the xserver-xorg-video-intel driver, the command: grep EE /var/log/Xorg.0.log will return (WW) warning, (EE) error, (NI) not implemented, (??) unknown.[ 2.670] (EE) Failed to load module "intel" (module does not exist, 0)[ 2.671] (EE) open /dev/dri/card0: No such file or directory[ 2.671] (EE) open /dev/dri/card0: No such file or directory[ 2.672] (EE) open /dev/fb0: No such file or directory[ 2.672] (EE) open /dev/fb0: No such file or directory[ 2.672] (EE) Screen 0 deleted because of no matching config section.[ 2.672] (EE) Screen 0 deleted because of no matching config section.[ 2.672] (EE) Screen 0 deleted because of no matching config section.[ 2.672] (EE) Device(s) detected, but none match those in the config file.[ 2.672] (EE) [ 2.672] (EE) no screens found(EE) [ 2.672] (EE) [ 2.672] (EE) Please also check the log file at "/var/log/Xorg.0.log" for additional information.[ 2.672] (EE) [ 2.672] (EE) Server terminated with error (1). Closing log file. EDIT 2: the whole /var/log/Xorg.0.log : [ 2.630] X Protocol Version 11, Revision 0[ 2.630] Build Operating System: Linux 3.16.0-4-amd64 x86_64 Debian[ 2.630] Current Operating System: Linux Bobor 4.9.0-0.bpo.2-amd64 #1 SMP Debian 4.9.13-1~bpo8+1 (2017-02-27) x86_64[ 2.630] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-4.9.0-0.bpo.2-amd64 root=UUID=97e2dfda-29d2-44b4-ac08-80ea49496bb6 ro quiet[ 2.630] Build Date: 11 February 2015 12:32:02AM[ 2.630] xorg-server 2:1.16.4-1 (http://www.debian.org/support) [ 2.630] Current version of pixman: 0.32.6[ 2.630] Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.[ 2.630] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.[ 2.630] (==) Log file: "/var/log/Xorg.0.log", Time: Sun Apr 9 19:23:09 2017[ 2.631] (==) Using system config directory "/usr/share/X11/xorg.conf.d"[ 2.632] (==) No Layout section. Using the first Screen section.[ 2.632] (==) No screen section available. Using defaults.[ 2.632] (**) |-->Screen "Default Screen Section" (0)[ 2.632] (**) | |-->Monitor "<default monitor>"[ 2.632] (==) No monitor specified for screen "Default Screen Section". 
Using a default monitor configuration.[ 2.632] (==) Automatically adding devices[ 2.632] (==) Automatically enabling devices[ 2.632] (==) Automatically adding GPU devices[ 2.634] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist.[ 2.634] Entry deleted from font path.[ 2.636] (==) FontPath set to: /usr/share/fonts/X11/misc, /usr/share/fonts/X11/100dpi/:unscaled, /usr/share/fonts/X11/75dpi/:unscaled, /usr/share/fonts/X11/Type1, /usr/share/fonts/X11/100dpi, /usr/share/fonts/X11/75dpi, built-ins[ 2.636] (==) ModulePath set to "/usr/lib/xorg/modules"[ 2.636] (II) The server relies on udev to provide the list of input devices. If no devices become available, reconfigure udev or disable AutoAddDevices.[ 2.636] (II) Loader magic: 0x559d23f1ed80[ 2.636] (II) Module ABI versions:[ 2.636] X.Org ANSI C Emulation: 0.4[ 2.636] X.Org Video Driver: 18.0[ 2.636] X.Org XInput driver : 21.0[ 2.636] X.Org Server Extension : 8.0[ 2.637] (--) PCI:*(0:0:2:0) 8086:5912:1462:7a59 rev 4, Mem @ 0xde000000/16777216, 0xc0000000/268435456, I/O @ 0x0000f000/64, BIOS @ 0x????????/131072[ 2.637] (II) LoadModule: "glx"[ 2.638] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so[ 2.644] (II) Module glx: vendor="X.Org Foundation"[ 2.644] compiled for 1.16.4, module version = 1.0.0[ 2.644] ABI class: X.Org Server Extension, version 8.0[ 2.644] (==) AIGLX enabled[ 2.644] (==) Matched intel as autoconfigured driver 0[ 2.644] (==) Matched modesetting as autoconfigured driver 1[ 2.644] (==) Matched fbdev as autoconfigured driver 2[ 2.644] (==) Matched vesa as autoconfigured driver 3[ 2.644] (==) Assigned the driver to the xf86ConfigLayout[ 2.644] (II) LoadModule: "intel"[ 2.645] (WW) Warning, couldn't open module intel[ 2.645] (II) UnloadModule: "intel"[ 2.645] (II) Unloading intel[ 2.645] (EE) Failed to load module "intel" (module does not exist, 0)[ 2.645] (II) LoadModule: "modesetting"[ 2.645] (II) Loading /usr/lib/xorg/modules/drivers/modesetting_drv.so[ 2.646] (II) Module modesetting: vendor="X.Org Foundation"[ 2.646] compiled for 1.16.4, module version = 0.9.0[ 2.646] Module class: X.Org Video Driver[ 2.646] ABI class: X.Org Video Driver, version 18.0[ 2.646] (II) LoadModule: "fbdev"[ 2.646] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so[ 2.646] (II) Module fbdev: vendor="X.Org Foundation"[ 2.646] compiled for 1.15.99.904, module version = 0.4.4[ 2.646] Module class: X.Org Video Driver[ 2.646] ABI class: X.Org Video Driver, version 18.0[ 2.646] (II) LoadModule: "vesa"[ 2.646] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so[ 2.646] (II) Module vesa: vendor="X.Org Foundation"[ 2.646] compiled for 1.15.99.904, module version = 2.3.3[ 2.646] Module class: X.Org Video Driver[ 2.646] ABI class: X.Org Video Driver, version 18.0[ 2.646] (II) modesetting: Driver for Modesetting Kernel Drivers: kms[ 2.646] (II) FBDEV: driver for framebuffer: fbdev[ 2.646] (II) VESA: driver for VESA chipsets: vesa[ 2.646] (++) using VT number 7[ 2.647] (EE) open /dev/dri/card0: No such file or directory[ 2.647] (WW) Falling back to old probe method for modesetting[ 2.647] (EE) open /dev/dri/card0: No such file or directory[ 2.647] (II) Loading sub module "fbdevhw"[ 2.647] (II) LoadModule: "fbdevhw"[ 2.647] (II) Loading /usr/lib/xorg/modules/libfbdevhw.so[ 2.647] (II) Module fbdevhw: vendor="X.Org Foundation"[ 2.647] compiled for 1.16.4, module version = 0.0.2[ 2.647] ABI class: X.Org Video Driver, version 18.0[ 2.647] (EE) open /dev/fb0: No such file or directory[ 2.647] (WW) Falling back to old probe method for 
fbdev[ 2.647] (II) Loading sub module "fbdevhw"[ 2.647] (II) LoadModule: "fbdevhw"[ 2.647] (II) Loading /usr/lib/xorg/modules/libfbdevhw.so[ 2.647] (II) Module fbdevhw: vendor="X.Org Foundation"[ 2.647] compiled for 1.16.4, module version = 0.0.2[ 2.647] ABI class: X.Org Video Driver, version 18.0[ 2.647] (EE) open /dev/fb0: No such file or directory[ 2.647] vesa: Ignoring device with a bound kernel driver[ 2.647] (WW) Falling back to old probe method for vesa[ 2.647] (EE) Screen 0 deleted because of no matching config section.[ 2.647] (II) UnloadModule: "modesetting"[ 2.647] (EE) Screen 0 deleted because of no matching config section.[ 2.647] (II) UnloadModule: "fbdev"[ 2.647] (II) UnloadSubModule: "fbdevhw"[ 2.647] (EE) Screen 0 deleted because of no matching config section.[ 2.647] (II) UnloadModule: "vesa"[ 2.647] (EE) Device(s) detected, but none match those in the config file.[ 2.647] (EE) Fatal server error:[ 2.647] (EE) no screens found(EE) [ 2.647] (EE) Please consult the The X.Org Foundation support at http://wiki.x.org for help. [ 2.647] (EE) Please also check the log file at "/var/log/Xorg.0.log" for additional information.[ 2.647] (EE) [ 2.647] (EE) Server terminated with error (1). Closing log file. | For Kaby Lake (and any Intel graphics since Broadwell), you need to install a new kernel and firmware from Jessie backports; as root: echo deb http://http.debian.net/debian jessie-backports main contrib non-free > /etc/apt/sources.list.d/jessie-backports.listapt-get updateapt-get -t jessie-backports install linux-image-amd64 firmware-misc-nonfree You also need to remove (paradoxically) the X.org Intel video driver (as indicated in the package description : the X server can use the kernel’s mode-setting features without a separate video driver): apt-get remove xserver-xorg-video-intel When you run this, if apt-get tells you it’s going to remove other packages, don’t let it do so; you might need to install xserver-xorg-video-dummy to satisfy dependencies. You should also remove the kernels you installed manually. Once all that’s done, reboot and you should find your system working much better. If that fails though, you can try installing the backported Intel driver instead (along with the new kernel and firmware): apt-get -t jessie-backports install xserver-xorg-video-intel | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/356738",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104915/"
]
} |
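The shell commands in the answer above have had their line breaks collapsed in this dump; restated one per line (run as root), they are:

echo deb http://http.debian.net/debian jessie-backports main contrib non-free > /etc/apt/sources.list.d/jessie-backports.list
apt-get update
apt-get -t jessie-backports install linux-image-amd64 firmware-misc-nonfree
apt-get remove xserver-xorg-video-intel   # then reboot; note the answer's caveat about dependent packages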
356,753 | After a recent update bash seems to always output to less , which is resulting in pagination for many commands. Does anyone know how to turn less off? Example output for systemctl status | The man page for systemctl ( man systemctl ) explains this behaviour clearly, and even offers options to change it: $SYSTEMD_PAGER Pager to use when --no-pager is not given; overrides $PAGER . If neither $SYSTEMD_PAGER nor $PAGER are set, a set of well-known pager implementations are tried in turn, including less (1) and more (1), until one is found. If no pager implementation is discovered no pager is invoked. Setting this environment variable to an empty string or the value " cat " is equivalent to passing --no-pager . So in your case the solution is to set the environment variable when you log in: export SYSTEMD_PAGER=cat | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/356753",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/180820/"
]
} |
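To make the fix from the answer above survive new shells, the export can be placed in a shell startup file; the per-invocation alternative is the --no-pager flag named in the quoted man page. ~/.bashrc is assumed here as the startup file.

echo 'export SYSTEMD_PAGER=cat' >> ~/.bashrc   # persist for future interactive shells
systemctl --no-pager status                    # or disable the pager for a single invocation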
357,814 | In the 1970’s we had hardware terminal with CUI (character user interface) interface for input/output. Now, we have terminal emulators ( Ctrl + Alt + Fn ) in Unix/Linux world. In Ubuntu OS, I see seven terminal emulators, where GUI is occupying 7th terminal emulator ( Ctrl + Alt + F7 ). For example, this is my first terminal ( Ctrl + Alt + F1 ) emulator $ tty/dev/tty0 Why does Unix/Linux provide multiple terminal emulators? | Why does UNIX/Linux provide multiple terminal emulators [on the console]? For the same reason your GUI terminal emulator likely supports tabs (e.g. GNOME Terminal), and if not (e.g. rxvt ), then for the same reason launching a second GUI terminal app instance doesn't just pull the first one to the foreground and exit, forcing you to use the first instance. I routinely use at least 3 terminal windows in my work, and often more: Text editor for the server side of the system I'm working on Text editor for the client side of the same system Command window for running the server I rarely need a fourth terminal for running the client program, since it usually runs elsewhere (web app, native GUI app, mobile app, etc.), but if I were developing a CLI client for my server app, I'd have a separate terminal open for it, too. In the past, before sudo became popular, I kept a root terminal open all the time. I rarely use Unix/Linux boxes interactively at the console without a GUI these days, but I do often run them headless and access them over SSH. My SSH terminal client of choice supports tabs, configured as above. One of my current hobby projects has me using a real old glass terminal occasionally, which means I no longer have multiple terminal windows, so I'm finally learning a bit about GNU screen , a program I never had much use for before, since I had either multiple console terminals or multiple GUI terminals. And what does screen do? Among other things, you can configure it to give you multiple virtual terminals on a single screen, just like Linux does with Ctrl - Alt - F x . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/357814",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/62659/"
]
} |
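For readers following the GNU screen pointer at the end of the answer above, a minimal session looks like this; the key bindings listed are screen's stock defaults.

screen              # start a session with one window
# Ctrl-a c          create another window
# Ctrl-a n / p      switch to the next / previous window
# Ctrl-a d          detach from the session
screen -r           # reattach to the detached session later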
357,893 | Is there a difference between pwd and cd (no arguments)? They both print the current directory's path, but is there a subtle difference that I'm missing, and if so, when should I use which? | Yes, they are completely different commands that do different things. pwd prints the directory you are currently in. It does nothing else. pwd does not take any arguments. cd without arguments changes your working directory to your home directory. It does not print anything by default. cd with an argument will change your working directory to whatever directory you supplied as an argument. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/357893",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/226063/"
]
} |
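A short transcript illustrating the distinction drawn in the answer above; the directory names are arbitrary examples and /home/user stands in for whatever the home directory actually is.

$ cd /tmp
$ pwd
/tmp
$ cd            # no argument: change to the home directory, print nothing
$ pwd
/home/user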
357,920 | I am studying RHEL 7 and have some questions. Using hostnamectl command, I am getting information of RHEL 7 hostname and other. If i want to change hostname there are some options using hostnamectl : [root@linux7 ~]# hostnamectl set-set-chassis set-deployment set-hostname set-icon-name To change hostname, static hostname set-hostname is used. So what about set-icon-name and --pretty and which particular file it get changed? Below example is given: [root@linux7 ~]# hostnamectl set-hostname Linuxindia[root@linux7 ~]# hostnamectl set-icon-name mumbailinux[root@linux7 ~]# systemctl restart systemd-hostnamed.service [root@linuxindia ~]# hostnamectl set-set-chassis set-deployment set-hostname set-icon-name [root@linuxindia ~]# hostnamectl set-hostname "hellolinux" --pretty[root@linuxindia ~]# hostnamectl status Static hostname: linuxindia Pretty hostname: hellolinux Icon name: mumbailinux Chassis: vm Machine ID: f3ffdd0447604e20a0a4278c56f4275b Boot ID: 70c3c85ec1fa4dceb5a7f52789eed524 Virtualization: kvm Operating System: Red Hat Enterprise Linux Server 7.3 Beta (Maipo) CPE OS Name: cpe:/o:redhat:enterprise_linux:7.3:beta:server Kernel: Linux 3.10.0-493.el7.x86_64 Architecture: x86-64[root@linuxindia ~]# Requesting to get some information on Transient hostname also. | Icon name is the machine identifying name according to XDG Icon Naming Specification . When --pretty is used, the machine pretty hostname was set. This name is human readable name, is present to the user, not the machine. It does not have the limitation of internet domain name, you can use any valid UTF-8 name for it: $ hostnamectl --pretty set-hostname "$(perl -CO -le 'print "\x{1f389}"')"$ hostnamectl --pretty status Note that while DNS allows domain name up to 255 characters, the hostname in Linux is limited to 64 characters only: $ hostnamectl set-hostname "$(perl -le 'print "A" x 65')"$ awk '{print length}' /etc/hostname64 The hostname was stored in /etc/hostname , pretty name and icon name are stored in /etc/machine-info . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/357920",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37046/"
]
} |
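Pulling the pieces of the answer above together, the three names can be set and then inspected as follows; the example names are placeholders echoing the ones used in the question.

hostnamectl set-hostname linuxindia                # static hostname, stored in /etc/hostname
hostnamectl --pretty set-hostname "Hello Linux"    # pretty hostname, stored in /etc/machine-info
hostnamectl set-icon-name mumbailinux              # icon name, also stored in /etc/machine-info
hostnamectl status                                 # show static, pretty and icon names together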
357,928 | AFAIK, the NIC receives all packets from the wire in a Local Area Network but rejects those packets which their destination address is not equal to its ip. I want to develop an application that monitors the internet usage of users. Each user has a fixed IP address. I and some other people are connected to a DES-108 8-Port Fast Ethernet Unmanaged Desktop Switch As said earlier I want to capture all the traffics from all users not only those packets that are belong to me. How should I force my NIC or other components to receive all of packets? | AFAIK, the NIC receives all packets from the wire in a Local Area Network but rejects those packets which their destination address is not equal to its ip. Correction: it rejects those packets which their destination MAC address is not equal to its MAC address (or multicast or any additional addresses in its filter. Packet capture utilities can trivially put the network device into promiscuous mode, which is to say that the above check is bypassed and the device accepts everything it receives. In fact, this is usually the default: with tcpdump , you have to specify the -p option in order to not do it. The more important issue is whether the packets you are interested are even being carried down the wire to your sniffing port at all. Since you are using an unmanaged ethernet switch, they almost certainly are not. The switch is deciding to prune packets that don't belong to you from your port before your network device can hope to see them. You need to connect to a specially configured mirroring or monitoring port on a managed ethernet switch in order to do this. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/357928",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/172829/"
]
} |
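A concrete illustration of the promiscuous-mode point made above, using tcpdump as mentioned in the answer; eth0 is an assumed interface name and capturing normally requires root.

sudo tcpdump -i eth0 -p    # -p: do NOT put the interface into promiscuous mode
sudo tcpdump -i eth0       # default: promiscuous mode, so frames addressed to other MACs are kept

As the answer stresses, on an unmanaged switch this still only shows traffic the switch actually forwards to that port; a mirroring/monitoring port on a managed switch is needed to see other hosts' traffic.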
357,948 | If I create any new file/directory/link, sham@mohet01-ubuntu:~$ ls -ltotal 48drwxr-xr-x 3 sham sham 4096 Apr 5 19:03 Desktopdrwxrwxr-x 2 sham sham 4096 Apr 7 11:19 docsdrwxr-xr-x 3 sham sham 4096 Apr 5 18:28 Documentsdrwxr-xr-x 2 sham sham 4096 Apr 5 18:56 Downloads-rw-r--r-- 1 sham sham 8980 Apr 5 10:43 examples.desktopdrwxr-xr-x 2 sham sham 4096 Apr 5 03:46 Musicdrwxr-xr-x 2 sham sham 4096 Apr 5 18:46 Picturesdrwxr-xr-x 2 sham sham 4096 Apr 5 03:46 Publicdrwxr-xr-x 2 sham sham 4096 Apr 5 03:46 Templatesdrwxr-xr-x 2 sham sham 4096 Apr 5 03:46 Videos I see the group name as sham . user sham is the owner of these files. Question: How can a group name be same as owner name? What does it imply for a group name to e same as owner name? | User names and group names exist in two independent namespaces, so same name does not need to imply anything. It is simply group which happens to have this name (numeric group id will be likely different than numeric user id for example). Nevertheless, lot of Linux distributions create new group together with creating new user's account and this group becomes default group for this user (containing, by default, only this one user id). So same group and user names usually (!) implies that the file belongs to group with only this one user in it. (But there is nothing preventing admin to add more users into this group, or even create group of this name which is not related to user of same name in any way.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/357948",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/62659/"
]
} |
358,050 | Entries in my /proc/iomem are all 00000000-00000000 The same with /proc/ioports. They're all 0000-0000 Like: 00000000-00000000 : reserved00000000-00000000 : System RAM00000000-00000000 : reserved I'm running 4.10.3-1-ARCH x86_64 Any advice on how to find out the reason by myself is also welcome, thanks. | Try using sudo in front of your command, like sudo less /proc/io{mem,ports} | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/358050",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/191005/"
]
} |
358,079 | My company resells an application whose brand name is mixed case, for example "ApplicationName". The application's installer creates all paths and file names in this standard. E.g. The main directory is /opt/ApplicationName , the init file is called ApplicationName so I have to run service ApplicationName status and so on. To me, this breaks all sensible conventions and I feel the files and directories should all be lower case (there is precedent in other applications such as MySQL, whose files and dirs are all called mysql , even applications like Apache and Tomcat do away with the preceding upper case letter). If I raise this as a bug report, I'd like to put up a stronger argument than just "I think it's wrong". So is it dictated in something like the POSIX standard that system files like this should be lower case? | The POSIX standard has a section with guidelines for conforming utilities (i.e., "such as those written specific to a local system or that are components of a larger application") that says Utility names should be between two and nine characters, inclusive. Utility names should include lowercase letters (the lower character classification) and digits only from the portable character set. [ref: 12.2 Utility Syntax Guidelines ] It's unclear to me whether the use of the words "should include" really means "should only include". (The consensus in the comments below is that it means "should only include"). An application on a Unix system that does not claim to be a POSIX conformant utility may otherwise use whatever name it wants. If it does claim to be a POSIX conformant utility that is part of the POSIX shell utilities , the text after the guidelines in section 12.2 says that "should" changes meaning to "shall". There are no similar guideline regarding directory names as far as I know. macOS (which is a certified UNIX 03 product when running on an Intel-based Mac computer) uses /Users as the prefix for user's home directories, for example, as well as a number of other mixed-case directory names. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/358079",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/226207/"
]
} |
358,089 | I'm trying to set up SSL on my apache2 webserver, but it seems that it does not work at all. I have followed a tutorial to create cert files with openssl and configured the /etc/apache2/sites-available/default-ssl.conf properly. Every time I try to open my website with https, my browser refuse to connect due to security issues. It says that I haven't configured my website correctly. In my /var/log/apache2/error.log I'm getting warnings, which say that my server certificate does not include an ID which matches the server name. [Mon Apr 10 11:03:24.041813 2017] [mpm_prefork:notice] [pid 1222] AH00169: caught SIGTERM, shutting down[Mon Apr 10 11:03:30.566578 2017] [ssl:warn] [pid 661] AH01909: 127.0.0.1:443:0 server certificate does NOT include an ID which matches the server name[Mon Apr 10 11:03:31.579088 2017] [ssl:warn] [pid 1194] AH01909: 127.0.0.1:443:0 server certificate does NOT include an ID which matches the server name[Mon Apr 10 11:03:31.592958 2017] [mpm_prefork:notice] [pid 1194] AH00163: Apache/2.4.25 (Raspbian) OpenSSL/1.0.2k configured -- resuming normal operations[Mon Apr 10 11:03:31.593136 2017] [core:notice] [pid 1194] AH00094: Command line: '/usr/sbin/apache2' Do you have any ideas on how to solve this? Thanks in regard! | Okay, I noticed that this post is viewed quite often recently and so it seems that a lot of people are facing the same issue that I did. If so then this might help you. I have followed a simple step-by-step tutorial to create a SSL-certification for my webserver. Like so many tutorials out there the outcome of the tutorial I followed was a self-signed certificate using OpenSSL. Yep self-signed , that was the problem. The browser could not trust the server due to it's certificate which is signed by itself. Well I wouldn't do either... A certificate has to be signed by an external trustworthy certificate authority (CA). So I stumbled upon Let's Encrypt which does all the work for you and is even easier to set up and the best is: it is absolutely free. Installation 1) Delete your old ssl cert files which you have created by using OpenSSL 2) Open backports to get certbot client on Debian. You should know that this will open a hole for unfinished software! Install only the packages when you are aware about what you are doing. echo 'deb http://ftp.debian.org/debian jessie-backports main' | sudo tee /etc/apt/sources.list.d/backports.list 3) Update your linux system sudo apt-get update 4) Install certbot sudo apt-get install python-certbot-apache -t jessie-backports 5) Set up apache ServerName and ServerAlias sudo nano /etc/apache2/sites-available/000-default.conf 6) Edit apache config file <VirtualHost *:80> . . . ServerName example.com ServerAlias www.example.com . . .</VirtualHost> 7) Check for a correct syntax sudo apache2ctl configtest 8) If the config file looks fine, restart apache server sudo systemctl restart apache2 9) Set up a certificate using certbot and follow the instruction on screen. sudo certbot --apache Renewal All certificates by Let's Encrypt are valid through 3 months. To renew the you can manually run sudo certbot renew Or automate this service as a cron job sudo crontab -e and enter the following row to invoke a renewal every Monday at 2:30 am. . . .30 2 * * 1 /usr/bin/certbot renew >> /var/log/le-renew.log You can follow a more detailled tutorial here: https://www.digitalocean.com/community/tutorials/how-to-secure-apache-with-let-s-encrypt-on-debian-8 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/358089",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/223023/"
]
} |
358,096 | I have seen advice in several places to use the following shebang line #!/usr/bin/env bash instead of #!/usr/bin/bash My knee-jerk reaction is, "what if somebody substitutes this executable for their own in say ~/.local/bin ?" That directory is often set up in the user's path before the system-wide paths. I see this raised as a security issue often as a side note rather than anything to take seriously, but I wanted to test the theory. To try this out I did something like this: echo -e "#!/usr/bin/python\nprint 'Hacked!'" > $HOME/.local/bin/bashchmod 755 $HOME/.local/bin/bashPATH=$HOME/.local/bin env bash This yields /usr/bin/env: ‘bash’: No such file or directory To check whether it was picking up anything at all I also did echo -e "#!/usr/bin/python\nprint 'Hacked!'" > $HOME/.local/bin/perlchmod 755 $HOME/.local/bin/perlPATH=$HOME/.local/bin env perl which prints, as I expected, Hacked! Can someone explain to me why the substitute bash is not found, but the substitute perl is? Is this some sort of "security" measure that (from my point of view) misses the point? EDIT: Because I have been prompted: I am not asking how /usr/bin/env bash is different from using /bin/bash . I am asking the question as stated above. EDIT2: It must have been something I was doing wrong. Tried again today (using explicit path to env instead of implicit), and no such "not found" behaviour. | "what if somebody substitutes this executable for their own in say ~/.local/bin ? Then the script doesn't work for them. But that doesn't matter, since they could conceivably break the script for themselves in other ways, or run another program directly without messing with PATH or env . Unless your users have other users' directories in their PATH , or can edit the PATH of other users, there's really no possibility of one user messing another one. However, if it wasn't a shell script, but something that grants additional privilege, such as a setuid wrapper for some program, then things would be different. In that case, it would be necessary to use an absolute path to run the program, place it in a directory the unprivileged users cannot modify, and clean up the environment when starting the program. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/358096",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/226221/"
]
} |
358,106 | If I start an asynchronous ("backgrounded") process, some info, including the new process's PID, gets printed to the terminal before the process runs; for example $ sleep 3 &[1] 8217$ [1] + done sleep 3$ Is there a way to have such info (especially the PID) printed at the start of every process, not just those that get started asynchronously? Background The reason for wanting this is that, due to the peculiarities of my everyday working set up, often enough it happens that a synchronous long-running process fails to respond to Ctrl-C . (Invariably, what makes these processes "long-running" is that they produce a lot more output than I had anticipated.) The surest way to stop such a process is to kill -9 it from a different window, and it would be nice to have its PID readily on hand for this. UPDATE: In my original post I neglected to mention that Ctrl-Z is not an option. (I'm working on a shell running under Emacs, so Ctrl-Z just suspends Emacs.) | "what if somebody substitutes this executable for their own in say ~/.local/bin ? Then the script doesn't work for them. But that doesn't matter, since they could conceivably break the script for themselves in other ways, or run another program directly without messing with PATH or env . Unless your users have other users' directories in their PATH , or can edit the PATH of other users, there's really no possibility of one user messing another one. However, if it wasn't a shell script, but something that grants additional privilege, such as a setuid wrapper for some program, then things would be different. In that case, it would be necessary to use an absolute path to run the program, place it in a directory the unprivileged users cannot modify, and clean up the environment when starting the program. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/358106",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10618/"
]
} |
358,113 | I need a command that deletes all files, folders and sub-folders that were not updated longer than 31 days.I tried this one find . -mindepth 1 -mtime +31 -exec rm -rf "{}" \; But if I have hierarchy like this .├── old_sub_folder1└── old_sub_folder2 ├── old_file └── old_sub_folder3 └── new_file where old_* are old folders\files and new_file is a new file. This command will delete all contents. Because old_sub_folder2 date was not updated after new_file was created. I need a command that would not delete old_sub_folder2/old_sub_folder3/new_file | The problem is that you added the -r option to your rm command. This will delete the folders even if they are not empty. You need to do this in two steps: Delete only the old files : find . -type f -mtime +31 -delete To delete any old folders, if they are empty, we can take a peek here , and tweak it a bit: find . -type d -empty -mtime +31 -delete | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/358113",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/226235/"
]
} |
358,224 | I believe (not sure) that the owner of a file/directory and the root user are the only users that are allowed to change the permissions of a file/directory. Am I correct or are there other users that are also allowed to change the permissions? | Only the owner and root (super user) are allowed to change the permissions of a file or directory. This means that the owner and the super user can set the read ( r ), write ( w ) and execute ( x ) permissions. But changing the ownership (user/group) of files and directories with the commands chown / chgrp is only allowed to root . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/358224",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/226341/"
]
} |
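For concreteness, the distinction made in the answer above maps onto these commands; the file, user and group names are arbitrary examples.

chmod u+x,go-w notes.txt            # the owner (or root) may change the permission bits
sudo chown alice:staff notes.txt    # changing user/group ownership requires root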
358,229 | can't mount my hard disk with ntfs file format in linux mint | Only the owner and root (super user) are allowed to the change the permission of a file or directory. This means that the owner and the super user can set the read ( r ), write ( w ) and execute ( x ) permissions. But changing the ownership (user/group) of files and directories with the commands chown / chgrp is only allowed to root . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/358229",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/226347/"
]
} |
358,270 | I have a find command that displays files in my project: find . -type f -not -path './node_modules*' -a -not -path '*.git*' \ -a -not -path './coverage*' -a -not -path './bower_components*' \ -a -not -name '*~' How can I filter the files so it doesn't show the ones that are in .gitignore? I thought that I'd use: while read file; do grep $file .gitignore > /dev/null && echo $file;done but the .gitignore file can have glob patterns (also it will not work with paths if the file is in .gitignore). How can I filter files based on patterns that may have globs? | git provides git-check-ignore to check whether a file is excluded by .gitignore . So you could use: find . -type f -not -path './node_modules*' \ -a -not -path '*.git*' \ -a -not -path './coverage*' \ -a -not -path './bower_components*' \ -a -not -name '*~' \ -exec sh -c ' for f do git check-ignore -q "$f" || printf '%s\n' "$f" done ' find-sh {} + Note that you pay a big cost for this because the check is performed for each file. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/358270",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1806/"
]
} |
358,272 | The variable BUILDNUMBER is set to value 230. I expect 230_ to be printed for the command echo $BUILDNUMBER_ but the output is empty as shown below. # echo $BUILDNUMBER_# echo $BUILDNUMBER230 | The command echo $BUILDNUMBER_ prints the value of the variable $BUILDNUMBER_ , which is not set (underscore is a valid character for a variable name, as explicitly noted by Jeff Schaller). You just need to apply braces (curly brackets) around the variable name or use the more robust printf tool: echo "${BUILDNUMBER}_"printf '%s_\n' "$BUILDNUMBER" PS: Always quote your variables. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/358272",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29049/"
]
} |
358,319 | I need to find out what's contributing to the disk usage on a specific filesystem ( /dev/sda2 ): $ df -h /Filesystem Size Used Avail Use% Mounted on/dev/sda2 96G 82G 9.9G 90% / I can't just do du -csh / because I have many other filesystems mounted underneath / , some of which are huge and slow: $ df -hFilesystem Size Used Avail Use% Mounted on/dev/sda2 96G 82G 9.9G 90% //dev/sdb1 5.2T 3.7T 1.3T 76% /disk3/dev/sda1 99M 18M 76M 20% /boottmpfs 16G 4.0K 16G 1% /dev/shmnfshome.XXX.net:/home/userA 5.3T 1.6T 3.5T 32% /home/userAnfshome.XXX.net:/home/userB 5.3T 1.6T 3.5T 32% /home/userB How can I retrieve disk usage only on /dev/sda2 ? None of these work: Attempt 1: $ du -csh /dev/sda20 /dev/sda20 total Attempt 2: $ cd /dev/sda2/cd: not a directory: /dev/sda2/ | Use the -x (single file system) option: du -cshx / This instructs du to only consider directories of / which are on the same file system. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/358319",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4531/"
]
} |
358,410 | I have files (say, infile.tex ) of the form AAAABBBB AAAACCCC BBBB AAAA%%## Just some text\begin{example}[foobar]\begin{Sinput}> set.seed(271)> U <- runif(10)> plot(U, 1-U)\end{Sinput}AAAA BBBB CCCC\begin{Sinput}> plot(qnorm(cbind(U, 1-U)))\end{Sinput}\end{example} and I would like to extract all lines starting with %%## and all lines between \begin{Sinput} and \end{Sinput} , so %%## Just some text\begin{Sinput}> set.seed(271)> U <- runif(10)> plot(U, 1-U)\end{Sinput}\begin{Sinput}> plot(qnorm(cbind(U, 1-U)))\end{Sinput} I tried to work with sed : sed -n '/%%##\|\\begin{Sinput}/,/\\end{Sinput}/p' infile.tex # but also contains \begin{example}[foobar] sed -n '/^%%##\|\\begin{Sinput}/,/\\end{Sinput}/p' infile.tex # but does not contain lines starting with %%## Note: The above is somewhat derived from this here . Also, a 'two-step' solution (first extracting all lines starting with... and then all chunks) might be possible, too (I just didn't see how and it seems that sed allows to choose several 'patterns' so that seems more elegant). | awk with its range operator (,) works pretty well for this. Tag an extra filter on the end (;) and hey presto. awk '/^\\begin\{Sinput\}/,/^\\end\{Sinput\}/;/^%%##/' infile.tex%%## Just some text\begin{Sinput}> set.seed(271)> U <- runif(10)> plot(U, 1-U)\end{Sinput}\begin{Sinput}> plot(qnorm(cbind(U, 1-U)))\end{Sinput} | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/358410",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37937/"
]
} |
358,411 | I have 1000s of remote machines. I want to run some specific commands on all of them, but in parallel.I use these commands which run sequentially: for f in `cat host.lst`do ./runScript.sh $fdone Let's suppose host.lst contains 100 hosts. I want to run runScript.sh on 100 hosts in parallel. Also, logs should be maintained. I can not install any utility on my machine such as PSSH . I have done a lot of research and found these links but they did not help. I do not understand how they work: Automatically run commands over SSH on many servers Execute command on multiple files matching a pattern in parallel Can any one explain the logic? | logdir=`mktemp -d`bunch=200IFS=$'\n'for hosts in $(< hosts.lst xargs -r -L "$bunch"); do IFS=" "; for host in $hosts; do ssh -n -o BatchMode=yes "$host" './runScript.sh' 1>"$logdir/$host.log" 2>&1 & done waitdone Assuming the 1000s (thousands) of hosts are listed one/line in the hosts.lst file and then from these a bunch are selected in one time (200), and on each of these 200 hosts are spawned your runScript.sh using ssh in batch mode and at the same time preserving the stdout+stderr spewing forth from each of these backgrounded job into a file with the name of host in the directory $logdir , which may be examined as and when required. Finally we wait for one bunch to get over before we launch the next bunch, by means of the wait command at the end of the inner for loop. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/358411",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/82208/"
]
} |
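The script at the start of the answer above has lost its line breaks in this dump; restated with line breaks and indentation restored (content otherwise unchanged) it reads:

logdir=`mktemp -d`
bunch=200
IFS=$'\n'
for hosts in $(< hosts.lst xargs -r -L "$bunch"); do
    IFS=" "
    for host in $hosts; do
        ssh -n -o BatchMode=yes "$host" './runScript.sh' 1>"$logdir/$host.log" 2>&1 &
    done
    wait
done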
358,435 | What would be the best shell command "one liner" that I could use to list all of the files in a directory, only showing those that I own? | A short one-liner would be: find . -maxdepth 1 -user $USER If you're looking in the current directory, you can omit the . . If you don't know whether $USER is available, you can replace it with $LOGNAME or $(whoami) . Add -ls to show file details, e.g.: find / -maxdepth 1 -user root -ls If you want to supply custom flags to ls you can use it via -exec : find / -maxdepth 1 -user root -exec ls -ld {} + (In that case the -d flag to ls is required to list directories as themselves and not their content.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/358435",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/226515/"
]
} |
358,523 | Suppose I have a file with many words, and I want to find only the first word with the pattern "xyz". How do I do it if there are multiple words with this pattern on the same line? -m returns all the words in the first line in which it matches. I only need a grep command. | By default grep prints the lines matching a pattern, so if the pattern appears one or more times in a line, grep will print that whole line. Adding the flag -m 7 will tell grep to print only the first 7 lines where the pattern appears. So this should do what you want (I haven't tested it): grep -o -m 1 xyz myfile | head -1 Edit: as pointed out by @Kusalananda, you don't strictly need the -m flag but using it means grep won't need to parse the whole file, and will output the result faster, especially if myfile is a large file. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/358523",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/226580/"
]
} |
358,528 | I have accidentally :set rl thinking it's for relative lines. I have activated right-to-left mode . The doc for rightleft doesn't say how to reverse rl mode without exiting vim. How does one go left to right in vim? | :set norl or :set norightleft Each boolean option in Vim has a corresponding no -option that turns it off. The option you were originally looking for might have been relativenumber ( rnu ), which acts like number ( nu ) but adds line numbers that are relative to the current line rather than to the start of the editing buffer. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/358528",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50112/"
]
} |
358,541 | I have an example below where I need to replace the column 9 value if it is less than 8 else exit or ignore using sed or awk function: ) in datadbs extent size 16 next size 4 lock mode row; If I use the below awk function it only prints the value I need in column 9, but I still want to maintain the sentence structure. echo ") in datadbs extent size 16 next size 4 lock mode row;" | awk '{if ($9 < 8 ) print 8;}' OUTPUT: 8 What I want is the below: ) in datadbs extent size 16 next size 8 lock mode row; | Without knowing any awk I'd suggest to change the parameter and print everything: echo ") in datadbs extent size 16 next size 4 lock mode row;" | awk '{if ($9 < 8 ) $9 = 8; print;}' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/358541",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/186444/"
]
} |
358,544 | I use this command to find all the file in a directory that contains a specific string: grep -lir "string" path_to directory Example grep -lir "users" /var/www/mysite This command displays all files in the specified directory that contain the string 'users'. However, I want to sort them by descending modification date; newest to the oldest. Any help? | First we use the Z option then at the other end xargs with -0 option will catch the file names and stat them, sort and remove the timing info to reveal a sorted newest first list. grep -Zlir users /var/www/mysite | xargs -0 -r stat --format='%Y+%n' | sort -t+ -k 1,1nr | cut -d+ -f2- | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/358544",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/216688/"
]
} |
358,561 | I want to rename file with weird name to something reasonable, however I'm not able to :/ $ mv *_000c.jpg 000c.jpgmv: cannot move '?j?Z?R?C1_000c.jpg' to '000c.jpg': No such file or directory I've tried using inode number as was recommend in few places on the internet: $ ls -il *000c.jpgls: '?j?Z?R?C1_000c.jpg': No such file or directory213915 -rw-r--r-- 1 wolf wolf 794655 Jul 21 2012 '?j?Z?R?C1_000c.jpg'$ find . -inum 213915 -print0 | xargs -0 -I '{}' mv '{}' 000c.jpgmv: cannot move './?j?Z?R?C1_000c.jpg' to '000c.jpg': No such file or directory What should I do? | Summary of relevant comments: Unix file systems allows any character in a file name apart from \0 (nul) and / (forward slash). The fact that ls shows question marks is only because it can't display some of the characters of the filename in the current locale (which is one of the reasons why you should avoid parsing the output of ls ). However, with a Samba share, you apparently have more strict requirements on filenames than on a standard Unix filesystem. Since the file had a name that was "illegal" on your intermediate Samba share, the file was inaccessible by its correct name on the machine mounting the share. The mv didn't work since the name returned from the Samba share for the expansion of the globbing pattern was not the actual name of the file on the hosting filesystem, only Samba's own mangled version of the name. Your solution was to log into the server that actually hosted the file (on a filesystem with less restrictive naming rules than Samba) and change the name of the file there. This was the correct course of action. See also the Unix&Linux chat about this question . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/358561",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112194/"
]
} |
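A side note to the summary above: on a plain local filesystem (i.e. when Samba's name mangling is not in the way), renaming by inode number, as the asker attempted, does work; a sketch using the inode number from the question:

ls -il                                                    # the first column is the inode number
find . -maxdepth 1 -inum 213915 -exec mv {} 000c.jpg \;   # rename the file that has that inode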
358,587 | I have a job on a batch system that runs extremely long and produces tons of output. So much actually that I have to pipe the standard output through gzip to keep the batch node from filling its work area and subsequently crashing. longscript | gzip -9 > log.gz Now, I would like to investigate the output of the job while it is still running.So I do this: gunzip log.gz This runs very long, as it is huge file (several GB). I can see the output file being created while it is running and can look at it while it is being built. tail log> some-line-of-the-log-filetail log> some-other-line-of-the-log-file However, ultimately, gzip encounters the end of the gzipped file. Since the job is still running and gzip is still writing the file, there is no proper footer yet, so this happens: gzip: log.gz: unexpected end of file After this, the extracted log file is deleted, as gzip thinks that the corrupted extracted data is of no use to me. I, however, disagree - even if the last couple of lines are scrambled, the output is still highly interesting to me. How can I convince gzip to let me keep the "corrupted" file? | Apart from the very end of the file, you will be able to see the uncompressed data with zcat (or gzip -dc , or gunzip -c ): zcat log.gz | tail or zcat log.gz | less or zless log.gz gzip will do buffering for obvious reasons (it needs to compress the data in chunks), so even though the program may have outputted some data, that data may not yet be in the log.gz file. You may also store the uncompressed log with zcat log.gz > log ... but that would be silly since there's obviously a reason why you compress the output in the first place. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/358587",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88115/"
]
} |
358,656 | I removed the title bar from Openbox by modifying the /openbox/rc.xml . I know that I can use keybindings to minimize, maximize, close, etc. But, how can I drag the windows with the mouse like I did before? | According to this Ubuntu forums post, add the following to your rc.xml file: Re: window dragging in openbox I think you would want to change (or add) a mousebind entry to the Mouse section of rc.xml with a different binding. Mine currently says this: <mousebind button="A-Left" action="Drag"><action name="Move"/></mousebind> Then you can move windows by dragging them while pressing Alt . To use the Super -drag instead use button="W-Left" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/358656",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
358,673 | I search a way to automatically convert an sfd file (The work format of Fontforge) to the main font format (at least otf, ttf, woof, svg). But I need to do it in command line, I don’t need to do it from GUI. Unfortunately it seams that the Fontforge application don’t support it (I read the manpage and there is no mention of this usage). Anyway, I need to do it from command line but it’s not necessary to do it from Fontforge. Any other application who can convert the Fontforge work format “SFD” to the main font format. So, how can I get a commands like this: sfd2ttf input.sfd output.ttfsfd2otf input.sfd output.otfsfd2woff input.sfd output.woffsfd2svg input.sfd output.svg | You can do it with Fontforge, see here : -c script-string If FontForge's first (or second, if the first is -lang) argument is "-c" then the argument that follows will be treated as a string containing scripting commands, and those commands will be executed. All remaining arguments will be passed to the script. $ fontforge -c 'Open($1); Generate($2)' foo.sfd foo.ttf Will read a font from "foo.sfd" and then generate a truetype font from it called "foo.ttf" In your case you can create a script, say convertsfd , like this #!/bin/bashfontforge -lang=ff -c 'Open($1); Generate($2)' "$1" "$2" make it executable, and call it like this: $ ./convertsfd foo.sfd foo.ttf Change the second argument to foo.otf or to other formats as needed, I only tested with ttf and otf . To call the script from anywhere, just place it in your ~/.local/bin , or some other directory in your PATH . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/358673",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/56505/"
]
} |
358,740 | I created special user in /etc/passwd with: secure:x:2000:2000:secure:/bin:/usr/sbin/nologin I don't want to allow login of this user (via console, ssh, ftp, any way). He is just for running one script via: sudo su secure -c '/home/someuser/secure.script' But it gives me This user is currently not available. . How to set it up to be able to run script this way but prevent any login (console, ssh, ftp,...) of this user to system? I have noticed thatwhen I type /usr/sbin/nologin on the command-line, the computer responds with This account is currently not available. . | This is a typical use case for sudo . You're mixing sudo which allows running commands as another user and is highly configurable (you can selectively specify which user can run which command as which user) and su which switches to another user if you know the password (or are root). su always runs the shell written in /etc/passwd , even if su -c is used. Because of this su isn't compatible with /usr/sbin/nologin . You should use sudo -u secure /home/someuser/secure.script As sudo is configurable you can control who can use this command and if he/she needs to enter a password to run it. You need to edit /etc/sudoers using visudo to do this. (Be careful when editing /etc/sudoers and always use visudo to do it. The syntax isn't trivial and one error can lock you out from your root account.) This line in sudoers allows anyone in group somegroup to run the command as secure : %somegroup ALL=(secure) /home/someuser/secure.script This allows anyone in group somegroup to run the command as secure without entering a password: %somegroup ALL=(secure) NOPASSWD: /home/someuser/secure.script This allows user1 to run the command as secure without entering a password: user1 ALL=(secure) /home/someuser/secure.script | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/358740",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79705/"
]
} |
358,741 | The internal microphone is listed by the Sound tool of Linux Mint Cinnamon but I cannot use it in any way. Also in pavucontrol : Chances that the mic itself is faulty are negligible, it's a brand new Asus, and similar issues have been reported lately but without a solution -like here , also here . I hope I have more luck here. ~ $ cat /proc/asound/cards 0 [PCH ]: HDA-Intel - HDA Intel PCH HDA Intel PCH at 0x81410000 irq 315 EDIT in response to dirkt 's comment: alsamixer shows this: There is a MM to the right but not under PCM (isn't that the mic?). amixer -c0 contents gives this (pastebin) I have tried to use with Skype and then tested with the 'Sound Recorder' tool, which only creates empty files. - Also I have tried aplay /tmp/test-mic.wav & aplay /tmp/test-mic.wav (like here ) to no effect. Edit in response to dirkt 's second comment: the output of cat /proc/asound/card*/codec\#* - here .The laptop only has one audio jack (entry), for headphones, I do not have an external mic to test if that could work for mic too. | (This might be an Asus-X540S-specific issue.) I have solved it according to this answer: https://askubuntu.com/a/824806/47206 sudo apt-get install alsa-tools-gui Then launch hdajackretask Then: Check 'Show unconnected pins' Check override pin 0x12 to internal mic. Apply and test. Be sure that the mic level is high enough in sound settings (pavucontrol, etc) If it worked, select 'Install boot override'. UPDATE In the Ubuntu 18.04-based Linux Mint 19, I had to check override pin 0x13 to internal mic instead of 0x12. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/358741",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
358,778 | I have read many controversial statements about ZFS on low-memory systems on the internet, but most of the use cases were for performant data storage. I want to use ZFS not for performance reasons, but because it supports transparent compression and deduplication (the latter may be optional) and still seems to be more mature than BTRFS. I don't want to use any RAID configuration. I want to use it on a laptop computer, for the root and home file systems, and storage space and data safety (recoverability after power loss or other random inconsistencies, very low risk of corruption due to low RAM, etc.) is more important than disk performance. I want safety comparable to what ext2/3/4 give. I would like to use ext4 on top of a ZVOL. So, the questions are: Can ZFS be configured to work reliably with "low RAM" if IO performance/caching is not of concern, and no RAID functionality is wanted? How does the RAM needed change if I do not use ZFS as a filesystem itself, but just use ZVOLs where I put another filesystem on top? How does the RAM needed change with deduplication turned on? If deduplication is turned on and RAM starts to get low, is it still safe -- can ZFS just suspend deduplication and use less RAM? Is it possible to deactivate automatic deduplication, but run it from time to time manually? Can ext4 on top of a ZVOL reliably store my data even in low RAM situations, and if inconsistencies happen, are the chances of a successful repair high (as with ext2/3/4)? Does ext4 on top of a ZVOL increase robustness because it adds ext4's robustness, or is the data only as robust as the underlying ZVOL? System specs: Linux, 8 GiB RAM (shared with the graphics card), but most (at least 7 GiB) of it should be available for user space software; about 700 GiB SSD storage to use for ZFS; maybe on another system 128 GiB of eMMC to use for ZFS. Current disk usage ( du -sh of the bigger directories at / ) (/ is ext4, /var mounted on top is reiserfs) (want to move that to a storage with transparent compression): 74M /etc, 342G /home, 5.0G /opt, 1.5G /root, 261M /tmp, 35G /usr, 30G /var OR, just use BTRFS (I have read that severe/hard-to-recover data loss can occur due to "bugs", but that is all controversial ...)? | Short answer: Yes, it's possible to use low RAM (~ 1 GB) with ZFS successfully. You should not use dedup, but RAID and compression are usually ok. Once you have deduplication enabled, it works for all newly written data and you cannot easily get rid of it. You cannot enable dedup retroactively, because it works on online data only. Your idea is needlessly complex for no good reason, so I would recommend to just use ZFS and call it a day. Long answer: Can ZFS be configured to work reliably with "low RAM" if IO performance/caching is not of concern, and no RAID functionality is wanted? Yes, even with RAID features enabled. You need much less than people claim on the net, for example look at this guy who runs a speedy file server with FreeBSD, 2 cores and 768 MB virtualized. Or have a look at the SolarisInternals Guide (currently only available through archive.org), where 512 MB is mentioned as the bare minimum, 1 GB as the minimum recommendation and 2 GB as a full recommendation. I would stay away from dedup, though. Not because it is slow due to paging memory, but because you cannot go back to non-dedup if your system grinds to a halt. Also, it's a trade-off between RAM and disks, and on a budget system you have neither, so you will not gain much.
How does the RAM needed change if I do not use ZFS as a filesystem itself, but just use ZVOLs where I put another filesystem on top? You would need additional memory for the second filesystem and for the layer above ZFS, depending on how you plan to access it (virtualization like KVM, FUSE, iSCSI etc.) How does the RAM needed change with deduplication turned on? If deduplication is turned on and RAM starts to get low, is it still safe -- can ZFS just suspend deduplication and use less RAM? You cannot suspend deduplication, but your data is still safe. There will be a lot of memory swapping and waiting, so it might not be very usable. Deduplication is online, so to disable it, you would need to turn dedup off and write all data again (which is essentially copying all data to a new filesystem and destroying the old one). Is it possible to deactivate automatic deduplication, but run it from time to time manually? No, because it does not affect data at rest. If you have dedup on and want to write a block, ZFS looks if it is present in the dedup table. If yes, then the write is discarded and a reference is added to the dedup table. If no, it is written and the first reference is added. This means that your old data is not affected by dedup, and turning it on without writing any new block does nothing regarding the used size of the old data. Can ext4 on top of a ZVOL reliably store my data even in low RAM situations, and if inconsistencies happen, are the chances of a successful repair high (as with ext2/3/4)? Does ext4 on top of a ZVOL increase robustness because it adds ext4's robustness, or is the data only as robust as the underlying ZVOL? In my eyes this is needless complexity, as you would get no new features (like in the reverse case with ext4 below and ZFS on top, e.g. snapshots), and additionally get some new responsibilities like fsck and more fdisk formatting exercises. The only use case where I would do something like that is if I had a special application that demands a specific file system's low-level features or has hard-coded assumptions (fortunately, that behavior seems to have died in recent times). | {
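To make the "compression yes, dedup no" advice concrete, here is a hedged sketch of the knobs involved on ZFS on Linux; the pool name tank and the 1 GiB ARC cap are made-up example values, not recommendations from the answer:

    # compression is cheap and usually worth it; dedup simply stays at its default (off)
    sudo zfs set compression=lz4 tank
    zfs get compression,dedup tank

    # optionally cap the ARC so ZFS caching cannot grab most of the 8 GiB (value in bytes)
    echo "options zfs zfs_arc_max=1073741824" | sudo tee /etc/modprobe.d/zfs.conf

    # estimate what dedup *would* cost before ever enabling it
    sudo zdb -S tank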
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/358778",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133286/"
]
} |
358,792 | Below code is working fine: #!/bin/bashstr='fail'var1='pass'var2='ok'var3='fail'var4='pass'case $str in $var1|$var2|$var3|$var4) echo yes ;; *) echo no ;;esac When I execute this, as expected I get output yes . In above code, value of variables are not hard-coded, these are coming from previous run, so it keep changing. Here the problem is, sometime it comes like: var3='partial|fail' Any variable value can change like this. So in this case it gives no . What should I do change in my code so it handle this situation and match fail word and show the result yes ? | Short answer: Yes, its possible to use low RAM (~ 1 GB) with ZFS successfully. You should not use dedup, but RAID and compression is usually ok. Once you have duplication enabled, it works for all newly written data and you cannot easily get rid of it. You cannot enable dedup retroactive, because it works on online data only. Your idea is needlessly complex for no good reason, so I would recommend to just use ZFS and call it a day. Long answer: Can ZFS be configured to work reliably with "low RAM" if IO performance/ caching is not of concern, and no RAID funtionality is wanted? Yes, even with RAID features enabled. You need much less than people claim on the net, for example look at this guy who runs a speedy file server with FreeBSD, 2 cores and 768 MB virtualized. Or have a look at the SolarisInternals Guide (currently only available through archive.org), where 512 MB is mentioned as the bare minimum, 1 GB as minimum recommendation and 2 GB as a full recommendation. I would stay away from dedup, though. Not because it is slow because of paging memory, but because you cannot go back to non-dedup if your system grinds to a halt. Also, its a trade between RAM and disks, and on a budget system you have neither, so you will gain not much. How does the RAM needed change if I do not use ZFS as a filesystem itself, but just use ZVOLs where I put another filesystem ontop? You would need additional memory for the second filesystem and for the layer above ZFS, depending on how you plan to access it (virtualization like KVM, FUSE, iSCSI etc.) How does RAM needed change with deduplication turned on? If deduplication is turned on and RAM starts to get low, is it still safe -- can ZFS just suspend deduplication and use less RAM? You cannot suspend deduplication, but your data is still safe. There will be a lot of memory swapping and waiting, so it might not be very usable. Deduplication is online, so to disable it, you would need to turn dedup off and write all data again (which is essentially copying all data to a new filesystem and destroying the old one). Is it possible to deactivate automatic deduplication, but run it from time to time manually? No, because it does not affect data at rest. If you have dedup on and want to write a block, ZFS looks if it is present in the dedup table. If yes, then the write is discarded and a reference is added to the dedup table. If no, it is written and the first reference is added. This means that your old data is not affected by dedup, and turning it on without writing any new block does nothing reagarding the used size of the old data. Can ext4 ontop of a ZVOL reliably store my data even on low RAM situations, and if inconsistencies happen, success chances for repairs are high (as it is with ext2/3/4)? Does ext4 ontop of a ZVOL increase rubustness because it adds ext4's robustness, or is data as robust as the underlying ZVOL is? 
In my eyes this is needless complexity, as you would get no new features (like in the reverse case with ext4 below and ZFS on top, e. g. snapshots), and additionally get some new responsibilities like fsck and more fdisk formatting exercises. The only use case where I would do something like that is if had a special application that demands a specific file system's low-level features or has hard-coded assumptions (fortunately, that behavior seems to have died in recent times). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/358792",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102866/"
]
} |
358,805 | I'm trying to setup nginx and cgit on FreeBSD but nginx can't access /var/run/fcgiwrap/fcgiwrap.sock . In my /etc/rc.conf I already set fcgiwrap_user="www" , and www is also the user nginx runs as. When I make fcgiwrap.sock owned by www by performing chown www /var/run/fcgiwrap/fcgiwrap.sock , everything works the way I want. However this is of course not the proper way to do this, and it will only last until reboot. I was under the assumption that setting fcgiwrap_user="www" would also determine this. Am I missing something? Update: I noticed that when I use service fcgiwrap start or restart , the message Starting fcgiwrap is followed by chmod: /var/run/fcgiwrap/fcgiwrap.sock: No such file or directory . However /var/run/fcgiwrap/fcgiwrap.sock does exist afterwards. | The RC script is located at /usr/local/etc/rc.d/fcgiwrap . Looking at the code, fcgiwrap_user sets the owner of the process running the daemon (default root ). You need to set fcgiwrap_socket_owner="www" to set the owner of the socket. | {
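For reference, a minimal /etc/rc.conf fragment matching the answer could look like this (the www user comes from the question, fcgiwrap_socket_owner from the answer, and fcgiwrap_enable is the usual rc.conf switch):

    fcgiwrap_enable="YES"
    fcgiwrap_user="www"
    fcgiwrap_socket_owner="www"

After service fcgiwrap restart the socket at /var/run/fcgiwrap/fcgiwrap.sock should then be created owned by www, so the manual chown workaround is no longer needed after a reboot.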
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/358805",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/226796/"
]
} |
358,806 | I'm running Linux Mint Maya. When I enter http://localhost/ in my browser, I get an "unable to connect" message. I've tried uninstalling and reinstalling apache but that didn't work. When I enter sudo service apache2 status in the terminal, I get /etc/init.d/apache2: 51: .: Can't open /etc/apache2/envvars Here is the contents of /etc/apt/sources.list deb http://packages.linuxmint.com/ maya main upstream importdeb http://archive.ubuntu.com/ubuntu/ precise main restricted universe multiversedeb http://archive.ubuntu.com/ubuntu/ precise-updates main restricted universe multiversedeb http://security.ubuntu.com/ubuntu/ precise-security main restricted universe multiversedeb http://archive.canonical.com/ubuntu/ precise partner# deb http://archive.getdeb.net/ubuntu precise-getdeb apps# deb http://archive.getdeb.net/ubuntu precise-getdeb games When I run the Update Manager I've been getting an error but I didn't think it mattered so I've been ignoring it. Here's the error: There is no public key available for the following key IDs:1397BC53640DB551Failed to fetch http://ppa.launchpad.net/heyarje/libav- 11/ubuntu/dists/precise/main/source/Sources 404 Not FoundFailed to fetch http://ppa.launchpad.net/heyarje/libav-11/ubuntu/dists/precise/main/binary-i386/Packages 404 Not FoundSome index files failed to download. They have been ignored, or old ones used instead. What am I doing wrong? | The RC script is located at /usr/local/etc/rc.d/fcgiwrap . Looking at the code, fcgiwrap_user sets the owner of the process running the daemon (default root ). You need to set fcgiwrap_socket_owner="www" to set the owner of the socket. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/358806",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154653/"
]
} |
358,850 | There are 2 main ways that I know of so far: Explicitly : wrapping parentheses around a list of commands Implicitly : every command in a pipeline Are there more ways, either explicitly or implicitly, in which one creates subshells in bash? | From man bash : If a command is terminated by the control operator & , the shell executes the command in the background in a subshell. The shell does not wait for the command to finish, and the return status is 0. A coprocess is a shell command preceded by the coproc reserved word. A coprocess is executed asynchronously in a subshell, as if the command had been terminated with the & control operator Shell builtin complete command: when called with the -C command option, command is executed in a subshell environment, and its output is used as the possible completions. Command substitution, commands grouped with parentheses, and asynchronous commands are invoked in a subshell environment that is a duplicate of the shell environment | {
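One hedged way to see these subshells for yourself is to print $BASHPID (the PID of the current bash process) inside each construct; the actual numbers will differ on every run, but every line except the first should show a different PID:

    echo "main shell:        $BASHPID"
    ( echo "( ) group:         $BASHPID" )            # explicit subshell
    echo "cmd substitution:  $(echo $BASHPID)"        # $( ) runs in a subshell
    echo "pipeline element:  $BASHPID" | cat          # each part of a pipeline forks
    { echo "background job:   $BASHPID"; } & wait     # & runs in a subshell
    coproc MYCO { echo "$BASHPID"; sleep 1; }         # a coprocess runs in a subshell too
    read -r pid <&"${MYCO[0]}"; echo "coprocess:        $pid"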
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/358850",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/196106/"
]
} |
358,853 | I have a huge text file which look like this: 36,53,90478,0.58699759849,0.33616,4.83449759849,0.0695335954050315,336,53,90478,0.58699759849,0.33616,4.83449759849,0.0695335954050315,836,53,90478,0.58699759849,0.33616,4.83449759849,0.0695335954050315,1436,53,15596,0.58454577855,0.26119,2.24878677855,0.116147072052964,12 The desired output is this: 36,53,90478,0.58699759849,0.33616,4.83449759849,0.0695335954050315,MI-0336,53,90478,0.58699759849,0.33616,4.83449759849,0.0695335954050315,MI-0836,53,90478,0.58699759849,0.33616,4.83449759849,0.0695335954050315,MI-1436,53,15596,0.58454577855,0.26119,2.24878677855,0.116147072052964,MI-12 I have tried other relevant posts here and on other communities but could not exactly get what I want. UPDATE This is the cross-question (I wanted both Unix/perl answers and batch/powershell solutions for this.) that has interesting answers. | awk approach with sprintf function(to add leading zeros): awk -F, -v OFS=',' '$8=sprintf("MI-%02d",$8);' file The output: 36,53,90478,0.58699759849,0.33616,4.83449759849,0.0695335954050315,MI-0336,53,90478,0.58699759849,0.33616,4.83449759849,0.0695335954050315,MI-0836,53,90478,0.58699759849,0.33616,4.83449759849,0.0695335954050315,MI-1436,53,15596,0.58454577855,0.26119,2.24878677855,0.116147072052964,MI-12 -F, - set comma , as field separator $8 - points to the eighth field %02d - format which treats function argument as 2 -digit number Note , the last field in a record can be presented by $NF . NF is a predefined variable whose value is the number of fields in the current record So, $NF is the same as $8 (for your input) awk -F, -v OFS=',' '$(NF)=sprintf("MI-%02d", $(NF))' file | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/358853",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/222183/"
]
} |
358,914 | Coming from Gentoo, I'm still used to partitions, and Logical Volume Management. Having just installed and updated FreeBSD-11-RELEASE, using an entire 500GB disk, like so: % sudo zpool list Password: NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT zroot 460G 10.7G 449G - 1% 2% 1.00x ONLINE - I'm trying to get my head around the jails concept . While I understand that a jail is akin to the chroot command, I'm missing the meaning in the following commands: zfs create -o mountpoint=/usr/local/jails zroot/jails zfs create zroot/jails/fulljail1 while reading through FreeBSD Jails the hard way . Can I create a zfs "partition" in an already active pool for an entire disk, or do I need to create the pool sizes manually in the BSD Installer partitioning screen? | These commands aren't specific to BSD Jails and there are no nested pools here, just a single pool. Under ZFS, you can create as many datasets as you like in a pool. These datasets can be either volumes or file systems. Here two extra file systems are created. They are laid out in a hierarchical manner, so here are the three file systems present in the pool: zroot zroot/jails zroot/jails/fulljail1 and their mount points are: / /usr/local/jails /usr/local/jails/fulljail1 Under ZFS, creating a file system is a much lighter operation than with traditional file systems as there is no need to have a dedicated volume for it. All file systems share the same disk space. Creating a file system is nearly as lightweight as creating a new directory but has many advantages compared to mkdir . For example you can create snapshots, clone, send, receive, set properties like compression or case sensitivity, or mount a ZFS file system elsewhere. | {
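To tie this back to the commands from the question, here is a small sketch of what you can then do with those datasets; the extra dataset names and the property value below are only examples:

    zfs list -r zroot                          # show the whole dataset hierarchy and mountpoints
    zfs create zroot/jails/fulljail2           # another "partition" -- instant, no sizing required
    zfs set compression=lz4 zroot/jails        # children such as fulljail1 inherit the property
    zfs snapshot zroot/jails/fulljail1@clean   # cheap point-in-time copy
    zfs clone zroot/jails/fulljail1@clean zroot/jails/jail-from-template   # new jail root from the snapshot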
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/358914",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64157/"
]
} |
358,981 | I have found this script to change my current terminal tab title: #!/usr/bin/env bashT=$1ORIG=$PS1TITLE="\e]2;$T\a"PS1=${ORIG}${TITLE}echo "Terminal tab title changed to $T" It works well if I type them directly in the terminal (with TITLE="\e]2;HELLO\a" for example) But inside a shell script (rename.sh) $PS1 is empty and the script does not work. rename.sh HELLO outputs "Terminal tab title changed to HELLO" but the terminal title is not changed.Inside the script $PS1 is empty. Someone can help me understand this ? | The script works by setting the shell's interactive prompt to a string which includes control codes to manipulate the xterm window title. Each time the shell's prompt is displayed, the control codes to change the window title are output. But of course, inside a script, no interactive prompt is ever displayed, so these commands have no observable effect (though if you started another interactive shell from within the script, you could see the window title change). And because no script can change the environment of its parent process, the change is lost once your script terminates. Anyway, from your script, you could of course print out the control codes directly. printf '\033]2;Hello\a' This changes the window's title once, but if any other program later changes it again, your old title will be lost. The trick to change your prompt is widespread because some popular programs in the past would often change your window title soon after you changed it to your liking (though I don't think this is a common problem any longer). The drawback is that if something has a genuine reason to change your window title, that will now be superseded as soon as your shell displays its prompt again. If you want code to change your current shell's prompt, you can't put those in a regular script; but you can source the script file, or put the commands in a shell function instead (commonly done in your Bash profile in order to make it persistent). Incidentally, the Bash prompt should include additional control codes to tell Bash when a part of the prompt is effectively zero width, as far as calculating the display width of the prompt is concerned. You will find that line wrapping is erratic if you type a long command and then need to backspace, for example; Bash will attempt to redraw the prompt, but does it in the wrong place, because it thinks the screen control codes contribute to the prompt's width. You'll want to add these \[ and \] Bash control codes around them. PS1="$ORIG\[$TITLE\]" (The curly braces aren't really contributing anything, and hamper legibility, so I took them out.) | {
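If the goal is simply a reusable command for your interactive shell, one hedged way is a small function in ~/.bashrc (the function name settitle and the helper variable _ORIG_PS1 are made up); it keeps your existing prompt and appends the title sequence, wrapped in the \[ \] markers discussed above:

    # in ~/.bashrc
    settitle () {
        : "${_ORIG_PS1:=$PS1}"              # remember the original prompt once
        PS1="${_ORIG_PS1}\[\e]2;$1\a\]"
    }

    # used from the interactive shell itself, not from a script:
    settitle HELLO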
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/358981",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/226932/"
]
} |
358,982 | I am trying to hunt down information for why a network interface name would have an at sign, but there's too much noise in the results I am so far getting (I lack the correct terminology to search on) I have a LXC container on a Ubuntu host. Inside the container I run and get: # ip a1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 00:16:3e:37:a0:7a brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 10.0.3.195/24 brd 10.0.3.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::216:3eff:fe37:a07a/64 scope link valid_lft forever preferred_lft forever Note that eth0@if10 What is this @ portion called / referring to? On the host there is no such if10 , another container I have has an eth0@if8 - I must assume this is part of LXC's/containers' handling of network translations somehow, but I had not noticed this existing previously, and wonder if it's a complement to bridging, that might exist in other scenarios ? | eth0@if10 means: your network interface is named still simply eth0 and network tools and apps can only refer to this name, without the @ appendix. (As a sidenote, this is most probably a veth peer, but the name does not need to reflect this.) @if10 means: eth0 is connected to some (other) network interface with index 10 (decimal). Since there is also a link-netnsid 0 shown, this other network interface is in another network namespace (kind of virtual IP stack), presumably the root (a.k.a. host) network namespace. If you use ip link show in your host, and not in your container, then one of the network interfaces listed there should have an @9 appendix; the interface name will probably start with veth... . This interface is the peer to the eth0@10 interface you asked about. Veth interfaces come in pairs connected to each other, like a virtual cable. So, the @... is an appendix created by the ip tool, and it is not part of Linux' network interface names. The number after the @ refers to another network interface with the index number that is shown after the @. The index numbers are printed before the network interface names, such as in 9: eth0@if10 . The peer network interface can be in a different network namespace. Unfortunately, finding the correct network namespace for the link-netnsid .. is rather involved, see how to find the network namespace of a veth peer ifindex . | {
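If you want to find the peer interface yourself, one common trick is to ask the veth driver for the peer's index from inside the container and then look that index up on the host; this is only a sketch and assumes ethtool is available in the container:

    # inside the container
    ethtool -S eth0 | grep peer_ifindex     # e.g. "peer_ifindex: 10"

    # on the host
    ip -o link | grep '^10:'                # shows the matching vethXXXX@if9 interface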
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/358982",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/226221/"
]
} |
358,992 | Unhappy with the unreasonably large text on my 1920x1080 external screen connected to a laptop with a 3200x1800 display in Fedora 24, I tried rescaling the external screen by using: xrandr --output HDMI-2 --scale 2x2 --mode 1920x1080 --fb 7040x2160 --pos 0x0xrandr --output eDP-1 --scale 1x1 --pos 3840x0 This has the desired effect, but it causes the cursor to flicker when I move the mouse on the laptop screen. The cursor does not flicker when it is on the external screen. Flickering stops when I revert to 1x1 scaling on the external screen: xrandr --output HDMI-2 --scale 1x1 --mode 1920x1080 --fb 4120x1800 --pos 0x0xrandr --output eDP-1 --scale 1x1 --pos 1920x0 How can I stop this flickering? | This workaround helped me . What I do now is after performing a xrandr scale, I run an extra command which stops the mouse flicker. xrandr --output eDP-1 --auto --output HDMI-2 --auto --scale 2x2 --right-of eDP-1 # Simpler oneliner scalingxrandr --output eDP-1 --scale 0.9999x0.9999 # Stop flicker | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/358992",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16573/"
]
} |
358,994 | I understand* the primary admin user is given a user ID of 501 and subsequent users get incremental numbers ( 502 , 503 , …). But why 501 ? What’s special about 50x , what’s the historical/technical reason for this choice? * I started looking into this when I got curious as to why my external hard drive had all its trashed files inside .Trashes/501 . My search led me to the conclusion 501 is the user ID for the primary admin in *nix systems (I am on macOS), but not why . | Many Unix systems start handing out UIDs to users at some particular number. Solaris will give the first general purpose user UID 100, on OpenBSD it's 1000, and on macOS it appears it's UID 501 that will be the UID for the first created interactive user, which is also likely a macOS admin user (which is not the same as the root user). The accounts with lower numbers are system user accounts for daemons etc. This makes it easier to distinguish interactive "human" accounts from system services accounts. This may also make user management, authentication etc. easier in various software. YP/NIS , a slightly outdated system for keeping user accounts (and other information) on a central server without having to create local users on multiple client machines, for example, has a MINUID and MAXUID setting for the range of user accounts that it should handle. On some Unices, a range of the system service accounts may be allocated to third-party software, such as UIDs 50 to 999 on FreeBSD or 500 to 999 on OpenBSD. All of these ranges are chosen by the makers and maintainers of the individual Unices according to the expected needs of their operating system. The POSIX standard does not say anything about these things. The lowest and highest allocatable UID (and GID) is often configurable by a local admin (see your adduser manual). Most Unices reserve UID 0 for root , the super-user, and assigns the highest possible UID (or at least some high value) to the user nobody (Solaris uses UID 60001, OpenBSD uses 32768, but UIDs may be much larger than that). (See comments about UID 0 always being root (or not), which is a slight digression from this topic) Update: The OpenBSD project recently rejected the idea of randomizing UID/GID allocation. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/358994",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49593/"
]
} |
358,998 | I am migrating from upstart to systemd . I am having a bit of trouble making the transition with the EnvironmentFile directive. I cannot get this EnvironmentFile to work: ######################################################### Catalina SettingsCLUSTER_BASE=/d01/tomcat/prod/xyzCATALINA_BASE=$CLUSTER_BASE/1CATALINA_TMPDIR=$CATALINA_BASE/tempCATALINA_HOME=/usr/share/tomcat7CATALINA_PID=/run/tomcat/tc-prod-xyz-1.pid######################################################### Java SettingsJAVA_HOME=/usr/lib/jvm/default-java/jreJAVA_OPTS=-Djava.awt.headless=trueJAVA_OPTS=$JAVA_OPTS -serverJAVA_OPTS=$JAVA_OPTS -Xms2048mJAVA_OPTS=$JAVA_OPTS -Xmx2048mJAVA_OPTS=$JAVA_OPTS -XX:MaxPermSize=2048mJAVA_OPTS=$JAVA_OPTS -XX:+UseParallelGCJAVA_OPTS=$JAVA_OPTS -XX:+AggressiveHeapJAVA_OPTS=$JAVA_OPTS -javaagent:$CLUSTER_BASE/newrelic/newrelic.jar It would appear that this type of statement where I re-use a variable: JAVA_OPTS=$JAVA_OPTS -XX:+UseParallelGC is not supported in systemd like it was in upstart . Does systemd support something like this or do I need to make one long hard to read statement? | Unfortunately that file you have is actually a shell script. In the past, most init systems/scripts have interpreted files which provide environment variables by using the shell, so you could get away with doing shell things in them. Systemd however does not do this. The environment file is truly an environment file, not a script. This is documented in the systemd.exec man page : Variable expansion is not performed inside the strings, however, specifier expansion is possible. The $ character has no special meaning. Therefore you have 2 options. Expand out all your variables manually. Meaning use CATALINA_BASE=/d01/tomcat/prod/xyz/1 . Evaluate the file with the shell: ExecStart=/bin/bash -ac '. /path/to/env_file; exec /path/to/program' | {
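For option 1, a trimmed, hypothetical sketch of what that looks like in practice; the env-file path and the ExecStart line are assumptions for illustration, with values borrowed from the question:

    # /etc/tomcat/prod-xyz-1.env -- plain KEY=VALUE lines, no $expansion
    CATALINA_BASE=/d01/tomcat/prod/xyz/1
    CATALINA_PID=/run/tomcat/tc-prod-xyz-1.pid
    JAVA_OPTS=-Djava.awt.headless=true -server -Xms2048m -Xmx2048m

    # in the [Service] section of the unit file
    EnvironmentFile=/etc/tomcat/prod-xyz-1.env
    ExecStart=/usr/share/tomcat7/bin/catalina.sh run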
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/358998",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42908/"
]
} |
359,038 | I am studying the history of computers to better understand why Linux terminals work the way they do. I have read that in the mid 1970's to the mid 1980's, most people used real terminals (as opposed to terminal emulators) to communicate with large computers, this is an example of a real terminal: But I am unable to find information about these large computers that the real terminals were connected to. Can anybody provide a name/picture of such large computer? | That terminal would typically be connected to a PDP-11 , or a VAX-11 (it can be used with many, many different types of computers though!). The PDP-11, like many mini-computers, was often housed in a rack: You can see detailed photos of a Data General Nova rack (along with a terminal) on our sister Retrocomputing site . Some variants were housed in cabinets; this was also typically the case for Vaxen: (Both photos taken from the Wikipedia articles linked above.) Terminals were used with computers of all sizes, from room-sized mainframes such as the PDP-10 to tower PC-sized VAXServers (thanks to hobbs for the link to that photo — the server shown there is smaller than many PC servers of the time!)or even pizza-box workstations in the mid-nineties. You can still connect many of these terminals to a modern PC running Linux or various other operating systems, as long as the PC has serial ports, or USB-to-RS-232 adapters (as pointed out by Michael Kjörling ), and you use null-modem cables to connect them (as pointed out by Mark Plotnick ). Check out Dinosaur’s Pen for many, many more photos of such systems in actual use. Some applications still in production use software dating back to these kinds of systems, although commonly the hardware is emulated; an example was given recently at Systems we love . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/359038",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/226968/"
]
} |
359,088 | I want to force the windows in my tmux session to be a particular size, regardless of the the size of my terminal. How can I do this? Context I am trying to record a tmux in asciinema as described here https://github.com/asciinema/asciinema/wiki/Recording-tmux-session (run asciinema on a tmux attach command). However the display is too big, I want to force the size of the tmux window . Things that I have tried I have a successful work around where I use a second view of the tmux session in mate-terminal -e 70x20 to force the window size... but this seems like a hack. Trying to force the session size with -x tmux new-session -x $X -y $Y -d These options seem to be ignored (I've tried fiddling with the aggressive resize setting) | You probably need to have at least 3 panes open to occupy the unwanted areas. Try something like tmux new-session \; split-window -h \; split-window -v \; resize-pane -x 70 -y 20 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/359088",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36185/"
]
} |
359,219 | Having some trouble using apt on my Mac. If I run sudo apt search or sudo apt-get I get this error in the terminal: Unable to locate an executable at "/Library/Java/JavaVirtualMachines/jdk1.8.0_121.jdk/Contents/Home/bin/apt" (-1) I am running Sierra 10.12.4 and am trying to use the md5sum command on Mac. Any thoughts on this? | apt , the package manager, is a Linux tool, from Debian GNU/Linux. macOS does not have it. The apt program that happens to be in your search path is Java's annotation processing tool , and will not do what you want. There are projects like Homebrew, MacPorts and Fink that provides packaged third-party software for macOS. Homebrew: https://brew.sh/ MacPorts: https://www.macports.org/ Fink: http://www.finkproject.org/ NetBSD's pkgsrc also works nicely on macOS: http://www.pkgsrc.org/ As for md5sum : On the BSD Unices, of which macOS is one, there is often a utility called md5 available that performs the same service (but with slightly different format of output). If you install GNU coreutils using the tools provided by one of the above projects, md5sum will be installed. The executable is sometimes called gmd5sum (note the added g prefix, which also gets added to all other GNU coreutils executables). | {
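Concretely, on a stock macOS install something along these lines should cover the md5sum use case (the Homebrew step is only needed if you specifically want the GNU tool, and the file name is just an example):

    md5 somefile.iso          # built-in BSD tool, different output format
    md5 -r somefile.iso       # -r prints "hash filename", closer to md5sum's layout
    brew install coreutils    # via Homebrew, if you want the GNU utilities
    gmd5sum somefile.iso      # GNU md5sum, installed with the "g" prefix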
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/359219",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227113/"
]
} |
359,224 | Consider this script: function alfa(bravo, charlie) { if (charlie) return "charlie good" else { return "charlie bad" }}BEGIN { print alfa(1, 1) print alfa(1, 0) print alfa(1, "") print alfa(1)} Result: charlie goodcharlie badcharlie badcharlie bad Does Awk have a way to tell when an argument has not been provided? | apt , the package manager, is a Linux tool, from Debian GNU/Linux. macOS does not have it. The apt program that happens to be in your search path is Java's annotation processing tool , and will not do what you want. There are projects like Homebrew, MacPorts and Fink that provides packaged third-party software for macOS. Homebrew: https://brew.sh/ MacPorts: https://www.macports.org/ Fink: http://www.finkproject.org/ NetBSD's pkgsrc also works nicely on macOS: http://www.pkgsrc.org/ As for md5sum : On the BSD Unices, of which macOS is one, there is often a utility called md5 available that performs the same service (but with slightly different format of output). If you install GNU coreutils using the tools provided by one of the above projects, md5sum will be installed. The executable is sometimes called gmd5sum (note the added g prefix, which also gets added to all other GNU coreutils executables). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/359224",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17307/"
]
} |
359,225 | I would like to create self-signed certificates on the fly with arbitrary start- and end-dates, including end-dates in the past . I would prefer to use standard tools, e.g., OpenSSL, but anything that gets the job done would be great. The Stack Overflow question How to generate openssl certificate with expiry less than one day? asks a similar question, but I want my certificate to be self-signed. In case you were wondering, the certificates are needed for automated testing. | You have two ways of creating certificates in the past. Either faking the time (1)(2), or defining the time interval when signing the certificate (3). 1) Firstly, about faking the time: to make one program think it is in a different date from the system, have a look at libfaketime and faketime To install it in Debian: sudo apt-get install faketime You would then use faketime before the openssl command. For examples of use: $faketime 'last friday 5 pm' /bin/dateFri Apr 14 17:00:00 WEST 2017$faketime '2008-12-24 08:15:42' /bin/dateWed Dec 24 08:15:42 WET 2008 From man faketime : The given command will be tricked into believing that the current system time is the one specified in the timestamp. The wall clock will continue to run from this date and time unless specified otherwise (see advanced options). Actually, faketime is a simple wrapper for libfaketime, which uses the LD_PRELOAD mechanism to load a small library which intercepts system calls to functions such as time(2) and fstat(2). So for instance, in your case, you can very well define a date of 2008, and create then a certificate with the validity of 2 years up to 2010. faketime '2008-12-24 08:15:42' openssl ... As a side note, this utility can be used in several Unix versions, including MacOS, as an wrapper to any kind of programs (not exclusive to the command line). As a clarification, only the binaries loaded with this method (and their children) have their time changed, and the fake time does not affect the current time of the rest of the system. 2) As @Wyzard states, you also have the datefudge package which is very similar in use to faketime . As differences, datefudge does not influence fstat (i.e. does not change file time creation). It also has it´s own library, datefudge.so, that it loads using LD_PRELOAD. It also has a -s static time where the time referenced is always returned despite how many extra seconds have passed. $ datefudge --static "2007-04-01 10:23" sh -c "sleep 3; date -R"Sun, 01 Apr 2007 10:23:00 +0100 3) Besides faking the time, and even more simply, you can also define the starting point and ending point of validity of the certificate when signing the certificate in OpenSSL. The misconception of the question you link to in your question, is that certificate validity is not defined at request time (at the CSR request), but when signing it. When using openssl ca to create the self-signed certificate, add the options -startdate and -enddate . The date format in those two options, according to openssl sources at openssl/crypto/x509/x509_vfy.c , is ASN1_TIME aka ASN1UTCTime: the format must be either YYMMDDHHMMSSZ or YYYYMMDDHHMMSSZ. Quoting openssl/crypto/x509/x509_vfy.c : int X509_cmp_time(const ASN1_TIME *ctm, time_t *cmp_time){ static const size_t utctime_length = sizeof("YYMMDDHHMMSSZ") - 1; static const size_t generalizedtime_length = sizeof("YYYYMMDDHHMMSSZ") - 1; ASN1_TIME *asn1_cmp_time = NULL; int i, day, sec, ret = 0; /* * Note that ASN.1 allows much more slack in the time format than RFC5280. 
* In RFC5280, the representation is fixed: * UTCTime: YYMMDDHHMMSSZ * GeneralizedTime: YYYYMMDDHHMMSSZ * * We do NOT currently enforce the following RFC 5280 requirement: * "CAs conforming to this profile MUST always encode certificate * validity dates through the year 2049 as UTCTime; certificate validity * dates in 2050 or later MUST be encoded as GeneralizedTime." */ And from the CHANGE log (2038 bug?) - This change log is just as an additional footnote, as it only concerns those using directly the API. Changes between 1.1.0e and 1.1.1 [xx XXX xxxx] *) Add the ASN.1 types INT32, UINT32, INT64, UINT64 and variants prefixed with Z. These are meant to replace LONG and ZLONG and to be size safe. The use of LONG and ZLONG is discouraged and scheduled for deprecation in OpenSSL 1.2.0. So, creating a certificate from the 1st of January 2008 to the 1st of January of 2010, can be done as: openssl ca -config /path/to/myca.conf -in req.csr -out ourdomain.pem \-startdate 200801010000Z -enddate 201001010000Z or openssl ca -config /path/to/myca.conf -in req.csr -out ourdomain.pem \-startdate 0801010000Z -enddate 1001010000Z -startdate and -enddate do appear in the openssl sources and CHANGE log; as @guntbert noted, while they do not appear in the main man openssl page, they also appear in man ca : -startdate date this allows the start date to be explicitly set. The format of the date is YYMMDDHHMMSSZ (the same as an ASN1 UTCTime structure). -enddate date this allows the expiry date to be explicitly set. The format of the date is YYMMDDHHMMSSZ (the same as an ASN1 UTCTime structure). Quoting openssl/CHANGE : Changes between 0.9.3a and 0.9.4 [09 Aug 1999] *) Fix -startdate and -enddate (which was missing) arguments to 'ca' program. P.S. As for the chosen answer of the question you reference from StackExchange: it is generally a bad idea to change the system time, especially in production systems; and with the methods in this answer you do not need root privileges when using them. | {
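Combining method (1) with a one-shot self-signed certificate, a hedged example that produces an already-expired certificate could look like this; key size, subject and file names are arbitrary:

    faketime '2008-12-24 08:15:42' \
        openssl req -x509 -newkey rsa:2048 -nodes \
            -keyout test-key.pem -out test-cert.pem \
            -days 730 -subj '/CN=expired.example.org'

    openssl x509 -noout -dates -in test-cert.pem   # notBefore/notAfter should both lie in 2008-2010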
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/359225",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14097/"
]
} |
359,236 | For the last month or so, I've been attempting to get jetbrains-toolbox to work. It used to work (and is how I installed IntelliJ IDEA and Gogland.) When I went to update the IDEA I'm currently using Arch. Here are the things I have tried. Loading jetbrains-toolbox from within Sway. Reinstalling jetbrains-toolbox from the aur. Reinstalling jetbrains-toolbox from the Jetbrains website. Launching it with --disable-gpu Clearing ~/.local/share/JetBrains/Toolbox Googling all messages that I get. Loading jetbrains-toolbox in different DEs. I tried GNOME, KDE, and i3. The settings file (~/local/share/JetBrains/Toolbox/.settings.json), even after being cleared by action number 5, is able to regenerate, so I assume that there is something, somewhere on my filesystem that it isn't going away. This is what I think might be causing the problems. I have verified that the settings file was deleted by looking at Thunar's trash folder. However, doing a search for my email address (contained in the settings file) from ripgrep did not turn up anything relevant. These are the commands I ran: cd ~/sudo rg --hidden "MY_EMAIL_HERE" >> ~/Desktop/home_search.txt cd /usr/ sudo rg --hidden "MY_EMAIL_HERE" >> ~/Desktop/home_search.txt The only relevant results of this were: .local/share/JetBrains/Toolbox/.settings.json: "email": "MY_EMAIL_HERE",.local/share/Trash/files/Toolbox/.settings.json: "email": "MY_EMAIL_HERE", I'm not exactly proficient with Linux, but I've been asking around for help with this for a while. If you have any advice, please have patience with me. I might be a bit stupid. When I run it from the terminal, this is the message that shows up: john@john ~/D/jetbrains-toolbox-1.2.2314> ./jetbrains-toolbox [0415/155414:WARNING:resource_bundle.cc(311)] locale_file_path.empty() for locale This is a message that will show up occasionally through a system tray notification (it does not use my notification daemon): failed to find application to url: share/jetbrains-toolbox/jetbrains-toolbox Maybe I need some folder in /usr/share or ~/.local/share named jetbrains-toolbox? I do not have that folder in either location. These are two log files. One is from executing ToolBox and leaving it open for a bit. Another is from uninstalling ToolBox from the aur and deleting ~/.local/share/JetBrains/Toolbox and leaving it open for a bit. They have been labeled appropriately. https://gist.github.com/gonzalezjo/4cf09eb4b7ad849df5557fd297a7061c When I open ToolBox, I'm greeted with a black screen. After about 15 seconds, it becomes white. Here's an imgur gallery showcasing this. http://imgur.com/a/JS08D (Note: I don't have enough reputation to include these as separate images while still including a link to the logs. Sorry about that :\) From the moment the black screen shows to the moment it becomes white, I've timed it down to an average of 13.7 seconds using a stopwatch app on my phone and three trials. From the moment I type ./jetbrains-toolbox to the moment it becomes white, it's an average of about 16.1 seconds. Again, three trials. My CPU is a Haswell i7 (i7-4790k) and my GPU is Pascal (Nvidia's GTX 1050). I think it's possible that this could be graphics driver or X related (or both? I am clueless here.) based off of a scary experience upgrading drivers prevented me from entering a DE. That experience was resolved after xorg (or something like that?) and the nvidia package were reinstalled. 
According to nvidia-smi, my driver version is: NVIDIA-SMI 378.13 Driver Version: 378.13 I've tried to provide all the information I can, but if anything else is needed, I'm happy to provide. | Just launch with --disable-seccomp-filter-sandbox and it should work. I found it in https://bbs.archlinux.org/viewtopic.php?id=229859 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/359236",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227119/"
]
} |
359,253 | On Linux Mint, when I view the /etc/resolv.conf file, the first comment states that the /etc/resolv.conf file is generated by resolvconf(8) . ~ $ cat /etc/resolv.conf# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8) To paraphrase the resolvconf(8) man page: the resolvconf program is run by DHCP clients such as dhclient I run dhclient wlan0 . ~ $ dhclient wlan0 Dhclient should cause the resolvconf program to update /etc/resolv.conf . The /var/lib/dhcp/dhclient.leases file verifies that I am able to lease the IP address of the nameserver (192.168.0.6). ~ $ cat /var/lib/dhcp/dhclient.leases lease { interface "wlan0"; . . . option domain-name-servers 192.168.0.6; . . .} However, the /etc/resolv.conf file is not updated. The /etc/resolv.conf file has nameserver 127.0.1.1. ~ $ cat /etc/resolv.conf# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTENnameserver 127.0.1.1search software.eng.apl There are no nameservers listed in /etc/network/interfaces . ~ $ cat /etc/network/interfaces# interfaces(5) file used by ifup(8) and ifdown(8)auto loiface lo inet loopback I am not sure what I am missing here to get the /etc/resolv.conf file to update using the nameserver being leased from the DHCP server. The DHCP server is a Linux CentOS machine using DHCPD. | Mint and other modern distros ship with mdns by default, which wraps the regular public DNS with a local "decentralized" wrapper which enables zeroconf support for your local network. Basically, a local DNS server resolves names in the local network it has discovered, then falls back to the (now proxied) public DNS for public Internet resolution, i.e. for names outside of your local network. In so many words, your resolv.conf is correct and appropriate for this scenario, and if mdns has problems accessing your ISP's nameserver, you should look inside its configuration - though of course, if you don't care about zeroconf support, disabling mdns (and then probably also Avahi) lets you manage resolv.conf in the traditional fashion. See also e.g. https://help.ubuntu.com/community/HowToZeroconf | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/359253",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149002/"
]
} |
359,303 | How can I check the validity of a gz file, I do not have the hash of the file, I'm using gzip -t but it is not returning any output. | The gzip -t command only returns an exit code to the shell saying whether the file passed the integrity test or not. Example (in a script): if gzip -t file.gz; then echo 'file is ok'else echo 'file is corrupt'fi Adding -v will make it actually report the result with a message. Example: $ gzip -v -t file.gzfile.gz: OK So the file is ok. Let's corrupt the file (by writing the character 0 at byte 40 in the file) and try again. $ dd seek=40 bs=1 count=1 of=file.gz <<<"0"1+0 records in1+0 records out1 bytes transferred in 0.000 secs (2028 bytes/sec) $ gzip -v -t file.gzfile.gz: gzip: file.gz: Inappropriate file type or format The integrity of a file with respect to its compression does not guarantee that the file contents is what you believe it is. If you have an MD5 checksum (or some similar checksum) of the file from whomever provided it, then you would be able to get an additional confirmation that the file not only is a valid gzip archive, but also that its contents is what you expect it to be. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/359303",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17678/"
]
} |
359,312 | I have a big .gz file, which is 2.6 GB in itself. I cannot uncompress it due to size limitation. The file is a single large text file. I am not being able to decompress it completely due to size limitation. I want to split it into say 10 individual parts and decompress each one individually so that I can use each individual files: My questions are: Is that possible ? Also, as part of the answer, if the commands can also be provided as I am not very well versed in these commands Thanks | The gzip compression format supports decompressing a file that has been concatenated from several smaller compressed files (the decompressed file will then contain the concatenated decompressed data), but it doesn't support decompressing a cut up compressed file. Assuming you would want to end up with a "slice" of the decompressed data, you may work around this by feeding the decompressed data into dd several times, each time selecting a different slice of the decompressed data to save to a file and discarding the rest. Here I'm using a tiny example text file. I'm repeatedly decompressing it (which will take a bit of time for large files), and each time I pick a 8 byte slice out of the decompressed data. You would do the same, but use a much larger value for bs ("block size"). $ cat filehelloworld123ABC$ gzip -f file # using -f to force compression here, since the example is so small$ gunzip -c file.gz | dd skip=0 bs=8 count=1 of=fragment1+0 records in1+0 records out8 bytes transferred in 0.007 secs (1063 bytes/sec)$ cat fragmenthellowo$ gunzip -c file.gz | dd skip=1 bs=8 count=1 of=fragment1+0 records in1+0 records out8 bytes transferred in 0.000 secs (19560 bytes/sec)$ cat fragmentrld12 (etc.) Use a bs setting that is about a tenth of the uncompressed file size, and in each iteration increase skip from 0 by one. UPDATE: The user wanted to count the number of lines in the uncompressed data (see comments attached to the question). This is easily accomplished without having to store any part of the uncompressed data to disk: $ gunzip -c file.gz | wc -l gunzip -c will decompress the file and write the uncompressed data to standard output. The wc utility with the -l flag will read from this stream and count the number of lines read. | {
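If you do want all ten slices saved as files, the command above can simply be run in a loop; a sketch, where the 300M block size is only an example (pick roughly a tenth of the uncompressed size) and iflag=fullblock is a GNU dd flag that avoids short reads from the pipe:

    for i in 0 1 2 3 4 5 6 7 8 9; do
        gunzip -c file.gz | dd skip="$i" bs=300M count=1 iflag=fullblock of="part$i" 2>/dev/null
    done

As noted above, this decompresses the archive once per slice, trading CPU time for never having to hold the whole uncompressed data in a single file.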
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/359312",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17678/"
]
} |
359,314 | I would like to run BASIC code, like you used to do on older computers, in Linux. (I am looking for a BASIC interpreter that can run OS functions natively.) What options do I have? (Preferably for Debian-based and Arch-based) | If you want to run BASIC natively in Linux, you have several packages in Debian to choose from: brandy - a BBC Micro compatible BASIC - works in an X11 graphical interface, apparently supports sound and graphics; Brandy is an interpreter for BBC BASIC V, the dialect of BASIC that Acorn Computers supplied with their ranges of desktop computers that use the ARM processor, such as the Archimedes and Risc PC, and is still in use on these and compatibles. BASIC V is a much extended version of BBC BASIC. This was the BASIC used on the 6502-based BBC Micro that Acorn made during the 1980s. bwbasic - Bywater BASIC - text mode, claims to be ANSI compatible, has shell-aware extensions, and claims to emulate, or have good compatibility with, several types of "old" BASIC dialects - including IBM BASICA, Microsoft BASIC and gwBASIC. bwBASIC can be configured to emulate features, commands, and functions available on different types of BASIC interpreters; bwBASIC implements one feature not available in previous BASIC interpreters: a shell command can be entered interactively at the bwBASIC prompt, and the interpreter will execute it under a command shell. For instance, the command "dir *.bas" can be entered in bwBASIC (under DOS, or "ls -l *.bas" under UNIX) and it will be executed as from the operating system command line. Shell commands can also be given on numbered lines in a bwBASIC program, so that bwBASIC can be used as a shell programming language. bwBASIC's implementation of the RMDIR, CHDIR, MKDIR, NAME, KILL, ENVIRON, and ENVIRON$() commands and functions offers further shell-processing capabilities. To install them: sudo apt-get install brandy bwbasic As for my personal experience, I prefer bwbasic as it allows you to have the power of BASIC in a text command line or shell script. As an alternative, you also have several emulation packages for computers of old, which, besides the BASIC syntax, obviously implement the whole environment of whichever old computer you may want to relive. Interestingly enough, bwbasic in theory could allow automating operations in Unix, e.g. building scripts using the BASIC language. Never tried it though. | {
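Based on the shell-integration description above, a tiny bwBASIC program mixing BASIC and a shell command might look like the sketch below; the exact pass-through syntax is not something I have verified, so treat it as an illustration only:

    10 PRINT "BASIC says hello, now asking the shell:"
    20 ls -l *.bas
    30 END

saved as hello.bas and run with bwbasic hello.bas .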
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/359314",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227183/"
]
} |
359,403 | I'm trying to check if a machine is a ThinkPad or not using something like this: sudo dmidecode | grep ThinkPad I want the end result return true or false (or 1/0). I'm thinking the solution might be something like this: sudo dmidecode | grep -c ThinkPad | test xargs -gt 0 But I'm not sure how to properly use xargs here. | Just tack the exit status check after grep , it will always get the exit status from the last command of the pipeline by default: sudo dmidecode | grep -q ThinkPad; echo $? Use -q to suppress any output from grep as we are interested in exit status only. You can use command grouping if you fancy, but this is somewhat redundant here: sudo dmidecode | { grep -q ThinkPad; echo $? ;} | {
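Since the exit status is already a true/false value, the usual idiom is to test it directly rather than capture it; for example:

    if sudo dmidecode | grep -q ThinkPad; then
        echo "This is a ThinkPad"
    else
        echo "Not a ThinkPad"
    fi

    # or as a one-liner
    sudo dmidecode | grep -q ThinkPad && echo "ThinkPad"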
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/359403",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
359,428 | I installed Open VPN and generated .crt and .key files but I could not able to generate ta.key file which gives me options error : --tls-auth fails with ta.key : No such file or directory. How could I create this file. I couldn't find ta.key in any directory of Open VPN. | To generate the tls-auth key: openvpn --genkey --secret /etc/openvpn/ta.key | {
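Once the key exists, remember that both endpoints have to reference it with complementary key directions, e.g.:

    # server.conf
    tls-auth /etc/openvpn/ta.key 0

    # client config (the same ta.key, copied to the client over a secure channel)
    tls-auth ta.key 1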
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/359428",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227271/"
]
} |
359,470 | Is there a way to re-write the command structure A && B || C | D so that either B or C is piped into D? With the current command either only B or both C and D are run. For example: | Yes, in bash you can use parentheses: (A && B || C) | D This way the output of A && B || C will be piped into D . | {
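A quick toy example makes the grouping visible, with tr standing in for D:

    ( true  && echo "from B" || echo "from C" ) | tr a-z A-Z    # prints FROM B
    ( false && echo "from B" || echo "from C" ) | tr a-z A-Z    # prints FROM C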
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/359470",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
359,531 | I now use a PC (of the lab where I work now) on which I have successfully installed Arch Linux not long ago. I want to connect to the HP printer there, already connected to the Ethernet of the lab. The printer is a "HP Color LaserJet M552". I have installed hplip (refer to: CUPS/Printer-specific problems ); when installing, I recall there were a lot of error messages. When I tried to print some document, no printer was found. I ran sudo hp-setup (as advised here: Configure your printer using hp-setup ). A dialog box appeared, which asked me for "PPD" file, and I don't know where it is and what it is for. When I was finding material to solve this, unfortunately I find HP seems not to support Arch Linux. There are the console error messages when I invoke hp-setup : HP Linux Imaging and Printing System (ver. 3.16.11)Printer/Fax Setup Utility ver. 9.0Copyright (c) 2001-15 HP Development Company, LPThis software comes with ABSOLUTELY NO WARRANTY.This is free software, and you are welcome to distribute itunder certain conditions. See COPYING file for more details.Searching... (bus=net, timeout=5, ttl=4, search=(None) desc=0, method=slp)error: No PPD found for model color_laserjet_m552 using old algorithm.error: No appropriate print PPD file found for model hp_color_laserjet_m552kf5.kio.core: KLocalSocket(0x129ca60) Jumbo packet of 33404 byteskf5.kio.core: KLocalSocket(0x129ca60) Jumbo packet of 33834 byteskf5.kio.core: KLocalSocket(0x129ca60) Jumbo packet of 33922 byteskf5.kio.core: KLocalSocket(0x129ca60) Jumbo packet of 33582 bytes kf5.kio.core: KLocalSocket(0x129ca60) Jumbo packet of 33940 bytes kf5.kio.core: KLocalSocket(0x129ca60) Jumbo packet of 33514 bytes kf5.kio.core: KLocalSocket(0x129ca60) Jumbo packet of 33928 bytes Meanwhile, I was prompted to choose a PPD file. The default folder for me to choose is /usr/share/ppd/hp , but when I choose anything, the box is still empty, saying I should choose a file. The printer name is shown, so I think connection is fine. P.S.: I know this sort of thing is difficult to debug without playing around with the computer in person. If there is any information missing, just ask. | With system-config-printer Following these steps, I can now print documents using Evince on Arch Linux 4.16.9 with an HP LaserJet P1102 connected via USB: Install CUPS : sudo pacman -S cups Start and enable (make it start after boot) the CUPS printing service : sudo systemctl enable --now cups (the name of the service unit used to be org.cups.cupsd ) Install HP Linux Imaging and Printing : sudo pacman -S hplip Install a driver plug-in via sudo hp-setup -i . Root privileges are important here, otherwise it says "error: No device selected/specified or that supports this functionality." when selecting a connection method. During installation of the plug-in, I selected the default option each time. Install system-config-printer , a GUI tool to configure printers. Start system-config-printer and click the button to add a printer. Select your printer and choose HPLIP as the connection method (see screenshot). system-config-printer should now allow you to print a test page. In order for a GTK application like Evince to show your printer in the printing dialog, you need to install gtk3-print-backends as well. With CUPS web interface Instead of system-config-printer described above, you can use CUPS' web interface, reachable at localhost:631 . 
Before administrating printers, you have to add your user to the group sys , otherwise you'll run into errors in the web interface like "Unable to modify printer: Forbidden". gpasswd -a theUser sys Alternatively, use vigr to edit /etc/group . The web interface will prompt for this user's name and their password. /etc/cups/cups-files.conf defines that members of groups sys (and root ) can administrate printers: SystemGroup sys root After taking care of group membership, you can add printers and perform other administrative tasks: After selecting a printer in localhost:631/printers , you can also print a test page via the web interface: Troubleshooting Keep lib in sync with driver The library hplip from pacman and the driver plug-in installed via hp-setup -i have to have the same version, otherwise you'll be unable to print and see this error message in your systemd journal (inspect it with journalctl -e ): validate_plugin_version() Plugin version[3.17.7] mismatch with HPLIP version[3.18.4] To fix this, you can run hp-setup -i again which will download and install the current driver. I added the following to ~/.bash_aliases to prevent the driver and the library getting out of sync: alias upgrade-ignore-hp="(set -x; sudo pacman -Syu --ignore hplip)" Serial number changed Recently, my printer would refuse to print; system-config-printer as well as the CUPS web interface would show it as paused and lpc status , yielded that the printer has "printing disabled". cupsenable Hewlett-Packard-HP-LaserJet-Professional-P1102 didn't help. I solved this by changing the printer's connection. Using the CUPS web interface mentioned before, I selected my printer and clicked "Modify Printer" in the drop-down list. Here, I changed the connection from hp:/usb/HP_LaserJet_Professional_P1102?serial=000000000Q80X0EGPR1a to HP LaserJet Professional P1102 USB 000000000Q80X0EGSI1c HPLIP (HP LaserJet Professional P1102) Note that those two serial numbers differ. I don't know where this serial number belongs to and why it changed since I didn't get a new printer; it's not the one on the label on the printer's back. This serial number does show up in the output of hp-info , though. "error: No device selected/specified or that supports this functionality." This error persisted when calling sudo hp-setup -i and I'm not sure the printer is supported anymore by HP for Arch Linux.I've since ditched the HP LaserJet P1102 and got a Brother DCP-L3550CDW whose monochrome printing feature worked out of the box on Arch Linux 5.3.12: In system-config-printer , I selected "LPD/LPR queue 'BINARY_P1" as the connection and "PCL Laser" as the driver. I used this driver to enable color printing. To get the device's built-in scanner working, I followed these instructions . A second Brother printer/scanner that I got working on Arch Linux is the DCP-1610W . Here are some notes to make it print and scan using Wi-Fi. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/359531",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227333/"
]
} |
359,539 | Should it be 'star nix' or 'nix' or 'unix-like' or something totally different? | The canonical name is "Unix-like". "UN*X" and similar are just fancy/legal ways to write it. Interesting readings: https://en.wikipedia.org/wiki/Unix-like http://catb.org/jargon/html/U/UN-asterisk-X.html | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/359539",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63603/"
]
} |
359,598 | I suppose an executable file with SetUID bit set should be running as its owner but I cannot really reproduce it. I tried the following. $ cat prepare.shcp /bin/bash .chown root.root bashchmod 4770 bash # Verified$ sudo sh prepare.sh$ ./bash$ id -u1000$ exit$ $ cat test.c#include<stdio.h>#include<unistd.h>int main(){ printf("%d,%d\n", getuid(), geteuid()); return 0;}$ gcc -o test test.c$ chmod 4770 test # Verified$ sudo chown root.root test$ ./test1000,1000$ # Why??? However $ su# ./bash# id -u0# ./test0,0# exit# exit$ Note: The mount point has no nosuid nor noexec set. Can anyone explain why it's failing to work on Ubuntu 16.04 LTS? | For the compiled executable, from man 2 chown : When the owner or group of an executable file are changed by anunprivileged user the S_ISUID and S_ISGID mode bits are cleared. POSIXdoes not specify whether this also should happen when root does thechown(); the Linux behavior depends on the kernel version. Reversing the chown and chmod order works for me: $ sudo chmod 4770 foo$ sudo chown root:root foo$ stat foo File: 'foo' Size: 8712 Blocks: 24 IO Block: 4096 regular fileDevice: 801h/2049d Inode: 967977 Links: 1Access: (0770/-rwxrwx---) Uid: ( 0/ root) Gid: ( 0/ root)Access: 2017-04-18 15:15:15.074425000 +0900Modify: 2017-04-18 15:15:15.074425000 +0900Change: 2017-04-18 15:15:33.683725000 +0900 Birth: -$ sudo chmod 4777 foo$ ./foo1000,0 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/359598",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/211239/"
]
} |
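A quick way to reproduce the behaviour discussed in the answer above; a minimal sketch assuming GNU coreutils and sudo access, and, as the quoted man page says, whether root's chown clears the bit depends on the kernel version:

cp /bin/true ./suid-demo
sudo chmod 4755 ./suid-demo
stat -c '%A' ./suid-demo            # -rwsr-xr-x : setuid bit present
sudo chown root:root ./suid-demo    # the ownership change may clear S_ISUID
stat -c '%A' ./suid-demo            # often -rwxr-xr-x : setuid bit gone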
359,604 | I'm working on a Raspberry Pi 3 system and building my own rootfs using Buildroot. At first, I was using BusyBox as the init system and when configuring the on-board wifi card using wpa_supplicant, the router always assigned the same IP address to the board, even if I rebuilt/reflashed the SD card with a new rootfs. I then switched to using systemd. Now, whenever I rebuild/reflash the SD card, the router seems to think that the device is different and assigns it an different IP address every time I reflash the rootfs, even though the MAC address has stayed the same. What could be causing this issue? | For the compiled executable, from man 2 chown : When the owner or group of an executable file are changed by anunprivileged user the S_ISUID and S_ISGID mode bits are cleared. POSIXdoes not specify whether this also should happen when root does thechown(); the Linux behavior depends on the kernel version. Reversing the chown and chmod order works for me: $ sudo chmod 4770 foo$ sudo chown root:root foo$ stat foo File: 'foo' Size: 8712 Blocks: 24 IO Block: 4096 regular fileDevice: 801h/2049d Inode: 967977 Links: 1Access: (0770/-rwxrwx---) Uid: ( 0/ root) Gid: ( 0/ root)Access: 2017-04-18 15:15:15.074425000 +0900Modify: 2017-04-18 15:15:15.074425000 +0900Change: 2017-04-18 15:15:33.683725000 +0900 Birth: -$ sudo chmod 4777 foo$ ./foo1000,0 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/359604",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23546/"
]
} |
359,611 | The output would include the directory name, file name and file size. One (largest file) for each directory from where the command is run. If possible the average size of the files in that directory as well. The purpose is to can the directories looking for files that are much larger than the others in the directory so they can be replaced | With GNU find , sort and sed (4.2.2 or above), sort once on the file sizes and again on directory paths: find /some/dir -type f -printf '%s %f%h\0' | sort -zrn | sort -zut/ -k2 | sed -zre 's: ([^/]*)(/.*): \2/\1:' Explanation: The file size, name and path are printed (the first separated by a space and the next two separated by / ), and each entry is terminated by the ASCII NUL character. Then we sort numerically using the size, assuming NUL-delimited output (and in reverse order, so largest files first). Then we use sort to print only the first unique entries using everything from the second / -separated field, which would be the path to the directory containing the file. Then we use sed to swap the directory and filenames, so that we get a normal path. For readable output, replace the ASCII NUL with newlines: find /some/dir -type f -printf '%s %f%h\0' | sort -zrn | sort -zut/ -k2 | sed -zre 's: ([^/]*)(/.*): \2/\1:' | tr '\0' '\n' Example output: $ find /var/log -type f -printf '%s %f%h\0' | sort -zrn | sort -zt/ -uk2 | sed -zre 's: ([^/]*)(/.*): \2/\1:' | tr '\0' '\n'3090885 /var/log/syslog.139789 /var/log/apt/term.log3968 /var/log/cups/access_log.131 /var/log/fsck/checkroot467020 /var/log/installer/initial-status.gz44636 /var/log/lightdm/seat0-greeter.log15149 /var/log/lxd/lxd.log4932 /var/log/snort/snort.log3232 /var/log/unattended-upgrades/unattended-upgrades-dpkg.log | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/359611",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227402/"
]
} |
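The question above also asks for the average file size per directory, which the pipeline in the answer does not print. A rough sketch of that part, assuming GNU find and directory names without embedded newlines:

find /some/dir -type f -printf '%s %h\n' |
awk '{
    size = $1
    dir = $0; sub(/^[^ ]+ /, "", dir)        # keep spaces in directory names
    sum[dir] += size; n[dir]++
}
END {
    for (d in n)
        printf "%s: %d files, average %.0f bytes\n", d, n[d], sum[d] / n[d]
}'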
359,684 | This is an OpenSuse Leap 42.1 and I don't know why or how this happens: $ dateTue 18 Apr 10:49:34 -03 2017 The timezone appears as -03 (or -02) instead of a meaningful name (In my case, BRT/BRST). Tried to search that but this seems to be so obscure that the results are always in the form "how to change timezone" EDIT:Replies to comments: $ date +%Z-03$ timedatectl Local time: Tue 2017-04-18 11:38:26 -03 Universal time: Tue 2017-04-18 14:38:26 UTC RTC time: Tue 2017-04-18 14:38:26 Timezone: America/Sao_Paulo (-03, -0300) NTP enabled: yesNTP synchronized: yes RTC in local TZ: no | That's how it's now defined at the IANA official standard timezone database See ftp://ftp.iana.org/tz/tzdb-2017b/southamerica The name for the timezone in Winter time is -03 . That corresponds to the UTC offset. It's more useful than things like CET that are ambiguous (mean different things to different people). That apparently changed recently. Compare ftp://ftp.iana.org/tz/tzdb-2017a/southamerica (2017-02-28) with ftp://ftp.iana.org/tz/tzdb-2016j/southamerica (2016-11-23) that had BRT instead. The NEWS file for the 2017a release states: [...] Switch to numeric time zone abbreviations for South America, as part of the ongoing project of removing invented abbreviations. This avoids the need to invent an abbreviation for the new Chilean new zone. Similarly, switch from invented to numeric time zone abbreviations for Afghanistan, American Samoa, the Azores, Bangladesh, Bhutan, the British Indian Ocean Territory, Brunei, Cape Verde, Chatham Is, Christmas I, Cocos (Keeling) Is, Cook Is, Dubai, East Timor, Eucla, Fiji, French Polynesia, Greenland, Indochina, Iran, Iraq, Kiribati, Lord Howe, Macquarie, Malaysia, the Maldives, Marshall Is, Mauritius, Micronesia, Mongolia, Myanmar, Nauru, Nepal, New Caledonia, Niue, Norfolk I, Palau, Papua New Guinea, the Philippines, Pitcairn, Qatar, Réunion, St Pierre & Miquelon, Samoa, Saudi Arabia, Seychelles, Singapore, Solomon Is, Tokelau, Tuvalu, Wake, Vanuatu, Wallis & Futuna, and Xinjiang; for 20-minute daylight saving time in Ghana before 1943; for half-hour daylight saving time in Belize before 1944 and in the Dominican Republic before 1975; and for Canary Islands before 1946, for Guinea-Bissau before 1975, for Iceland before 1969, for Indian Summer Time before 1942, for Indonesia before around 1964, for Kenya before 1960, for Liberia before 1973, for Madeira before 1967, for Namibia before 1943, for the Netherlands in 1937-9, for Pakistan before 1971, for Western Sahara before 1977, and for Zaporozhye in 1880-1924. [...] Usually, you'd be able to specify the names for Winter and Summer time and the rules for when to change from one to the other by hand in the TZ variable, but it looks like for Brazil, it's not really possible as according to that timezone database: http://www.planalto.gov.br/ccivil_03/_Ato2007-2010/2008/Decreto/D6558.htm [t]he DST period in Brazil now on will be from the 3rd Oct Sunday to the 3rd Feb Sunday. There is an exception on the return date when this is the Carnival Sunday then the return date will be the next Sunday... There's no way to specify this kind of exception in the simple TZ rule specification. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/359684",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37366/"
]
} |
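To see the change described above on a given machine, the abbreviation can be queried directly; a small illustration (the output depends on the installed tzdata version):

TZ=America/Sao_Paulo date +'%Z %z'       # e.g. "-03 -0300" (or "-02 -0200" during DST)
zdump -v America/Sao_Paulo | tail -n 4   # abbreviations used at the last listed transitions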
359,697 | I have a text file like so foo bar baz1 a alpha2 b beta3 c gamma I can use awk to print certain columns, like 1 and 3, with {print $1, $3} , but I want to specify the columns to print by specifying the header of the column instead, something like {print $foo, $baz} . This is useful so I don't have to open the file and count the columns manually to see which column is which, and I don't have to update the script if the column number or order changes. Can I do this with awk (or another shell tool)? | awk 'NR==1 { for (i=1; i<=NF; i++) { f[$i] = i }}{ print $(f["foo"]), $(f["baz"]) }' filefoo baz1 alpha2 beta3 gamma That is an immensely useful idiom. I have a lot of data in spreadsheets and different spreadsheets might have a common subset of columns I'm interested in but not necessarily in the same order across all spreadsheets or with the same numbers of other columns before/between them so being able to export them as CSV or similar and then simply run an awk script using the column names instead of column numbers is absolutely invaluable. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/359697",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48907/"
]
} |
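Building on the idiom in the answer above, the header lookup can be wrapped in a small shell function so the column names become arguments. This is a sketch of my own (the function name cols is made up); it assumes whitespace-separated input with a header line and, like the original, it also prints the header row:

cols() {
    file=$1; shift
    awk -v names="$*" '
        NR == 1 { for (i = 1; i <= NF; i++) f[$i] = i; n = split(names, want, " ") }
        { for (j = 1; j <= n; j++) printf "%s%s", $(f[want[j]]), (j < n ? OFS : ORS) }
    ' "$file"
}
# usage: cols file foo baz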
359,740 | When attempting to establish an ssh tunnel, I noticed that even if the connection fails, the process stays alive. For example, if I try to run this command while hostname is down: /usr/bin/ssh -f -i /home/user/.ssh/id_rsa -N -R 3000:localhost:22 user@hostname Occasionally I get the response: Warning: remote port forwarding failed for listen port 3000 I only get this error message when the original process (running on the local machine) dies but the remote server does not realize yet. The process tries to restart but the server thinks it still has a connection on 3000 and won't except a new connection, resulting in the warning above. But if I do a pgrep -x ssh I can see that the process is still alive. I would like to run this ssh command as part of a bash script in a cronjob which first checks to see if the tunnel is established and if not reestablishes it, but the way I have the script setup it either a) sees that the tunnel is down and attempts to create a new one (which secretly fails), or b) sees that the failed process is alive and does nothing. The result is that the tunnel never gets reestablished so long as that failed process still exists. Is there a way to just kill the process if the connection fails instead of getting a warning? | For anyone else who may find the answer, I found the option that I wanted in this answer : adding -o ExitOnForwardFailure=True to the command forces ssh to exit if the port forwarding failed, instead of creating a zombie process: /usr/bin/ssh -f -i /home/user/.ssh/id_rsa -N -R 3000:localhost:22 user@hostname -o ExitOnForwardFailure=True | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/359740",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227462/"
]
} |
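For the cron-based watchdog mentioned in the question above, a minimal sketch could look like this (host, port and key path are the question's placeholders; autossh is usually the nicer tool for the job if it can be installed):

#!/bin/bash
# Re-create the reverse tunnel only if no ssh process is already forwarding port 3000.
if ! pgrep -f 'ssh .*-R 3000:localhost:22' >/dev/null; then
    /usr/bin/ssh -f -N -i /home/user/.ssh/id_rsa \
        -o ExitOnForwardFailure=yes \
        -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
        -R 3000:localhost:22 user@hostname
fi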
359,773 | I'm a bash user, starting a new job at a place where people use fish shell. I'm looking at the history command which I often use in bash. When I use it in fish I get a long list of my history which I can scroll up and down on with the arrow keys. There are no numbers like in bash and pressing enter is the same as the down key. How can I run a past command with fish shell's history ? | The history command in the fish shell isn't bash-compatible, it's just displaying it in a pager (e.g. less ). To select an old command, you'll probably want to enter the part you remember right into the commandline , press up-arrow until you have found what you want and then press enter to execute. E.g. on my system I enter mes , press up and rm -I meson.build appears (with the "mes" part highlighted). I then press enter and it executes. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/359773",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
359,782 | I need to search the current directory and its sub-directories for regular files containing the words: "hello" Why does this not work for me: find . -type f | grep "hello" | The history command in the fish shell isn't bash-compatible, it's just displaying it in a pager (e.g. less ). To select an old command, you'll probably want to enter the part you remember right into the commandline , press up-arrow until you have found what you want and then press enter to execute. E.g. on my system I enter mes , press up and rm -I meson.build appears (with the "mes" part highlighted). I then press enter and it executes. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/359782",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227505/"
]
} |
359,794 | long time Windows user here. Last week I got fed up with Windows 10 and so I used YUMI to create a bootable Ubuntu 16.04 64 Bit USB stick. So far, I love what I see!I installed some software and tried a few things... Now my question is, can I buy myself a new SSD and install/transfer my YUMI USB Ubuntu on it...So that I do not need to install all the software again and setup all the things I have done so far on the USB Ubuntu... Any advise / help would be welcome! Thank you! | The history command in the fish shell isn't bash-compatible, it's just displaying it in a pager (e.g. less ). To select an old command, you'll probably want to enter the part you remember right into the commandline , press up-arrow until you have found what you want and then press enter to execute. E.g. on my system I enter mes , press up and rm -I meson.build appears (with the "mes" part highlighted). I then press enter and it executes. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/359794",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227510/"
]
} |
359,810 | I've just encounter the following question in Unix Programming Environment , the Kernighan and Pike's classic book on Unix (I found the below text on p. 79 of year 1984 edition, ISBN:0-13-937699-2): Exercise 3-6. (Trick question) How do you get a / into a filename (i.e., a / that doesn't separate components of the path? I'be been working with Linux for years, both as end-user and programmer, but I cannot answer this question. There is no way to put slashes in filenames, it's absolutely forbidden by the kernel. You can patch your filesystem via block device access, or use similarly-looking characters from the Unicode, but those aren't solutions. I understand that Linux ≠ Unix, but the same principle should apply, since the system has to be able to unambiguously extract directory hierarchy from paths. Does somebody know, what exactly Kernighan and Pike thought about when asking this questions? What was the supposed answer? What exactly is the 'trick'? Or maybe original Unix system simply allowed to escape this slash somehow? UPD: I contacted Brian Kernighan about the question and that's what he replied: The answer is (or was) “You can't.” Hence, Timothy Martin was right and gets the green tick. | Perhaps the answer is the same as part of the answer in this trick question: How do you get down off an elephant? You don't. You get it from a goose. From "The Practice of Programming" by Brian W. Kernighan and Rob Pike, Ch. 6, pg. 158: When Steve Bourne was writing his Unix shell (which came to be known as the Bourne shell), he made a directory of 254 files with one-character names, one for each byte value except '\0' and slash, the two characters that cannot appear in Unix file names. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/359810",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/157435/"
]
} |
359,832 | I have a number of large CSV files and would like them in TSV (tab separated format). The complication is that there are commas in the fields of the CSV file, eg: A,,C,"D,E,F","G",I,"K,L,M",Z Expected output: A C D,E,F G I K,L,M Z (where whitespace in between are 'hard' tabs) I have Perl, Python, and coreutils installed on this server. | Python Add to file named csv2tab , and make it executable touch csv2tab && chmod u+x csv2tab Add to it #!/usr/bin/env pythonimport csv, syscsv.writer(sys.stdout, dialect='excel-tab').writerows(csv.reader(sys.stdin)) Test runs $ echo 'A,,C,"D,E,F","G",I,"K,L,M",Z' | ./csv2tab A C D,E,F G I K,L,M Z $ ./csv2tab < data.csv > data.tsv && head data.tsv 1A C D,E,F G I K,L,M Z2A C D,E,F G I K,L,M Z3A C D,E,F G I K,L,M Z | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/359832",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
359,902 | I have CentOS 5.6 on my laptop. When I type yum update , I get the below error: Loaded plugins: fastestmirrorLoading mirror speeds from cached hostfileYumRepo Error: All mirror URLs are not using ftp, http[s] or file.Eg. Invalid release/removing mirrorlist with no valid mirrors: /var/cache/yum/base/mirrorlist.txtError: Cannot find a valid baseurl for repo: base Below is my /etc/yum.repos.d/CentOS-Base.repo file (I didn't change anything in it): [base]name=CentOS-$releasever - Basemirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/gpgcheck=1gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5#released updates[updates]name=CentOS-$releasever - Updatesmirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates#baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/gpgcheck=1gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5#additional packages that may be useful[extras]name=CentOS-$releasever - Extrasmirrorlist=http://mirrorlist.centos.org/? release=$releasever&arch=$basearch&repo=extras#baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/gpgcheck=1gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5#additional packages that extend functionality of existing packages[centosplus]name=CentOS-$releasever - Plusmirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus#baseurl=http://mirror.centos.org/centos/$releasever/centosplus/$basearch/gpgcheck=1enabled=0gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5#contrib - packages by Centos Users[contrib]name=CentOS-$releasever - Contribmirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=contrib#baseurl=http://mirror.centos.org/centos/$releasever/contrib/$basearch/gpgcheck=1enabled=0gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5 Below is my /etc/yum.conf file (I didn't change anything in it): [main]cachedir=/var/cache/yumkeepcache=0debuglevel=2logfile=/var/log/yum.logdistroverpkg=redhat-releasetolerant=1exactarch=1obsoletes=1gpgcheck=1plugins=1bugtracker_url=http://bugs.centos.org/set_project.php?project_id=16&ref=http://bugs.centos.org/bug_report_page.php?category=yum Why I can't update my CentoOS to 5.11? Previously I was able to update CentOS to 5.11 without any problems. Can someone please help me? | CentOS-5 reached end-of-life on March 31, 2017. This means that no new updates will be released by Red Hat. The current 5.11 tree you seek has been moved to vault.centos.org . To obtain access to the 5.11 branch, edit /etc/yum.repos.d/CentOS-Base.repo and comment out the mirrorlist directives. Furthermore, in each enabled section add baseurl=http://vault.centos.org/5.11/os/$basearch or baseurl=http://vault.centos.org/5.11/updates/$basearch , appropriately. For example, for a base repo that looks like: [base]name=CentOS-$releasever - Basemirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=osgpgcheck=1gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5 ...change to: [base]name=CentOS-$releasever - Base# mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=osbaseurl=http://vault.centos.org/5.11/os/$basearchgpgcheck=1gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/359902",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111388/"
]
} |
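The edit described in the answer above can also be scripted; a rough sketch, to be run as root, which backs the file up first (the 5.11 vault path is specific to this release):

cp /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
# comment out every mirrorlist= line
sed -i 's/^mirrorlist=/#mirrorlist=/' /etc/yum.repos.d/CentOS-Base.repo
# point the commented-out baseurl lines at the vault and re-enable them
sed -i 's|^#baseurl=http://mirror.centos.org/centos/$releasever|baseurl=http://vault.centos.org/5.11|' /etc/yum.repos.d/CentOS-Base.repo
yum clean all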
359,907 | The file paths coming from this find command find . -printf "%p \n" do not escape unusual (whitespace, backslash, double quote...) characters. The -ls option does print the escaped paths, but it just prepends the output of ls -dils to the output of printf . I need a highly efficient command, so running an extra ls does not help, and neither does printing out all the extra characters. Is there any other (elegant) way to ouput escaped paths with find ? | Usually you'd want to use find -exec to run a command for all file names, or find -print0 to pipe the names to some command that can read entries separated by nul bytes (like xargs -0 ). If you really want to have quoted strings, Bash has a couple of options to do that: $ find -exec bash -c 'printf "%s\n" "${@@Q}"' sh {} +'./single'\''quote''./space 1'$'./new\nline''./double"quote'$ find -exec bash -c 'printf "%q\n" "$@"' sh {} +./single\'quote./space\ 1$'./new\nline'./double\"quote This does require an extra invocation of the shell, but handles multiple file names with one exec. Regarding saving the permission bits (not ACL's though), you could do something like this (in GNU find): find -printf "%#m:%p\0" > files-and-modes That would output entries with the permissions, a colon, the filename, and a nul byte, like: 0644:name with spaces \0 . It will not escape anything, but instead will print the file names as-is (unless the output goes to a terminal, in which case at least newlines will be mangled.) You can read the result with a Perl script: perl -0 -ne '($m, $f) = split/:/, $_, 2; chmod oct($m), $f; ' < files-and-modes Or barely in Bash, see comments: while IFS=: read -r -d '' mode file ; do # do something useful printf "<%s> <%s>\n" "$mode" "$file" chmod "$mode" "$file"done < files-and-modes As far as I tested, that works with newlines, quotes, spaces , and colons . Note that we need to use something other than whitespace as the separator, as setting IFS=" " would remove trailing spaces if any names contain them. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/359907",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227582/"
]
} |
360,063 | Docker used to work, but now it gives an error whenever running a container: docker: Error response from daemon: failed to create endpoint - failed to add host Example: docker run -it debian Resulting Error: docker: Error response from daemon: failed to create endpoint dazzling_ptolemy on network bridge: failed to add the host (veth1e8eb9b) <=> sandbox (veth73c911f) pair interfaces: operation not supported I have restarted Docker using systemctl restart docker I also did a network prune docker network prune Nothing seemed to work. What can be the cause? | I haven't taken the time to figure out why, but you should just need to reboot your machine; it worked for me. A search for the error on github came up with this , which links to this github issue from a while ago: https://github.com/moby/moby/issues/15341#issuecomment-218930712 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/360063",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227716/"
]
} |
360,069 | I am writing a script to create a Makefile. I used a for-loop to iterate through all my arguments to 'echo ... >> Makefile' into the command section of each target. The expected output goes something like this: $ makemake.sh a.out -Hello -World$ cat Makefile> a.out : appointment.o calendar.o day.o dayofweek.o time.o year.o > g++ -ansi -Wall -g -o a.out -Hello -World However, using the technique above: echo -n "g++ -ansi -Wall -g -o " >> Makefile for arg in $@; do echo -n "$@ " >> Makefile done Yields the following: a.out : appointment.o calendar.o day.o dayofweek.o time.o year.o g++ -ansi -Wall -g -o a.out -Hello -World a.out -Hello -World a.out -Hello -World My professor recommended I use shift, but this would make it more difficult to recall arguments for other targets. Why is this happening and what can I do? Though I still seek an answer, I am very interested in the logic behind this reaction. | I haven't taken the time to figure out why, but you should just need to reboot your machine; it worked for me. A search for the error on github came up with this , which links to this github issue from a while ago: https://github.com/moby/moby/issues/15341#issuecomment-218930712 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/360069",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227505/"
]
} |
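Regarding the loop in the question above: it echoes "$@" (all positional parameters) on every iteration instead of the loop variable $arg, which is why every argument appears once per argument. A corrected sketch of that fragment, keeping the question's approach:

{
    printf 'g++ -ansi -Wall -g -o '
    for arg in "$@"; do
        printf '%s ' "$arg"     # use the loop variable, not "$@"
    done
    printf '\n'
} >> Makefile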
360,134 | I recently switched from terminal prompt login (getty?) to GNOME Display Manager. It seems that GDM always reads .profile , regardless of user's setting of login shell (Zsh in my case). Why is that? I assume it's hardcoded in their source , but I can't find. Why did they do that? Does the software depends on some functionality of Bourne shell? This is not very good if I want to use both GDM and getty (as fallback), because I then need to keep my .profile and .zprofile in sync. I'm not so confident about sourcing .profile in .zprofile (I met some compatibility issues before, when I tried to source .bashrc in .zshrc ). I think Bash called as /bin/sh behaves in POSIX mode, but I'm not sure whether it avoids all the pitfalls. In case it matters, I'm on latest Arch Linux, running GNOME with Wayland (so there should not be any Xsession script involved). | Your problems with .bashrc are unrelated. .profile needs to be compatible with all sh -compatible shells, whereas of course .bashrc is specific to Bash and should generally not be sourced by other shells. Generally, put the stuff you want to share between shells in .profile , and make sure you do source it from the startup files of your other shells (unless of course they already do that by default). Obviously, you need to make sure you avoid code which behaves differently in different shells (lack of quoting is okay in Zsh but a problem in properly Bourne-compatible shells, for example). As for the "why" part of your question, this is so that settings in your .profile are available to programs you run from your GUI session, not just by the ones you run from within a shell (or maybe we should say "traditional" shell, and regard your GUI session as a "non-traditional" shell). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/360134",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118138/"
]
} |
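If you do decide to keep a single .profile and source it from zsh despite the compatibility caveats above, a commonly used sketch for ~/.zprofile runs it under sh emulation so sh semantics apply only while it executes:

# ~/.zprofile
[[ -e ~/.profile ]] && emulate sh -c '. ~/.profile'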
360,160 | I'm performing a nested grep like this: grep -ir "Some string" . |grep "Another string I want to find in the other grep's results" This works perfectly as intended (I get the results from the first grep filtered by the second grep as well), but as soon as I add an "-l" option so I only get the list of files from the second grep, I don't get anything. grep -ir "Some string" . |grep -l "Another string I want to find in the other grep's results" This results in the following output: (standard input) I guess piping doesn't work when I just want the list of files. Any alternatives? | The -l option to grep will make the utility print only the name of the file containing the specified pattern. The manual on my system says the following about this option: Only the names of files containing selected lines are written tostandard output. grep will only search a file until a match hasbeen found, making searches potentially less expensive.Pathnames are listed once per file searched. If the standardinput is searched, the string "(standard input)" is written. Since the second grep in your pipeline is reading from standard input, not from a file, it is unaware of where the data is coming from other than that it's arriving on its standard input stream. This is why it's returning the text string (standard input) . This is as close as it can get to where the match was located. To combine the two patterns in the first grep (which does know what files it's looking in), see How to run grep with multiple AND patterns? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/360160",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111899/"
]
} |
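Putting the pieces above together, one way to get the names of files matching both patterns is to let the first grep emit file names and have the second grep search those files; a sketch (the NUL-separated variant is safer with unusual file names and assumes GNU grep and xargs):

grep -irl "Some string" . | xargs grep -l "Another string"
# safer with spaces/newlines in file names:
grep -irlZ "Some string" . | xargs -0 grep -l "Another string"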
360,162 | I have 3 embedded CPU system running Linux 2.6.37, which are connected via Ethernet. Each CPU has its own NAND flash memory. One of them is "main" CPU number 0, while 2 others are his companions.I want all three to run from the same root file system residing on the CPU0 to avoid tripling of possible updates/changes in RootFS files. For this I wanted to export the '/' (root file system) via NFS on CPU0, while CPU1 & CPU2 will boot up from CPU0 via NFS (nfsroot). But this seems to not work - any attempt to export the '/' fails with the message :exportfs: / does not support NFS export Are there any principle limitations on exporting '/'?If yes, any ideas what can be done to reach the goal?Thanks a lot ahead. Addition/Update: Each CPU knows its number, the boot loader (u-boot) will put correct parameters into Linux command line to boot from NAND(CPU0) or from NFS(CPU1-2). The same way CPU0 will start the NFS server, while CPU1-2 will not. There is no need in "private" files, as in any case the root file system is mounted read-only also today. Just each CPUx has its own private NAND, while I want to eliminate this. This is not the same as "diskless" case, because in diskless case some SUBDIRECTORY is exported as root FS, while in my case all the root FS must be exported. I should note that exporting of any subdirectory from NAND works fine (I at least tried several). Just exporting '/' fails. | The -l option to grep will make the utility print only the name of the file containing the specified pattern. The manual on my system says the following about this option: Only the names of files containing selected lines are written tostandard output. grep will only search a file until a match hasbeen found, making searches potentially less expensive.Pathnames are listed once per file searched. If the standardinput is searched, the string "(standard input)" is written. Since the second grep in your pipeline is reading from standard input, not from a file, it is unaware of where the data is coming from other than that it's arriving on its standard input stream. This is why it's returning the text string (standard input) . This is as close as it can get to where the match was located. To combine the two patterns in the first grep (which does know what files it's looking in), see How to run grep with multiple AND patterns? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/360162",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227801/"
]
} |
360,175 | I have a file that contains these lines: war { baseName = 'myApp' version = '1.0.2'} And a variable like: variable=b123 I want to edit the file appending $variable value to the version number so result will be: war { baseName = 'myApp' version = '1.0.2_b123'} How achieve this goal with a bash script? | Would a simple sed do? $ var=_b123$ sed -Ee "/version/s/'(.*)'/'\1$var'/" file war { baseName = 'myApp' version = '1.0.2_b123'} ( /version/ checks if the line contains that string, if it does we s ubstitute a string inside single quotes with the same string ( (...) captures, \1 restores) plus the text in the variable. The quoting is not an issue here since everything we need is safe within double-quotes.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/360175",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/156245/"
]
} |
360,188 | I have already configured login to ssh with keys and it works fine. The problem is that when I'm connecting to the server with key and included password but I don't see any failed login attempts when I type a wrong password. There are no failed login attempts using key in: /var/log/audit/audit.log or /var/log/secure Other words i can type password to key til i die without any action.Do you have any ideas how to log to file failed login attemps to ssh using key with password ? OS is : Red Hat Enterprise Linux Server release 7.3 (Maipo) Thank you in advance. This is log from the server when i have typed many times wrong password: Connection from my_ip port 51115 on server_ip port 22sshd[3639]: Found matching RSA key: 00:12:23 ...sshd[3639]: Postponed publickey for some_user from ip_address port 51115 ssh2 [preauth] | It sounds like you've configured your client key to require a password to open the key before connecting to the server. It won't be logged by your server because that occurs on the client machine. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/360188",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227817/"
]
} |
360,194 | I want to connect to multiple AWS EC2 instances at once using PuTTY. I have the same .ppk file for all instances, but connecting to them one by one takes too much time, so I would like to connect to all the instance IPs at once using PuTTY. | It sounds like you've configured your client key to require a password to open the key before connecting to the server. It won't be logged by your server because that occurs on the client machine. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/360194",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227809/"
]
} |
360,212 | I have a question and I'm not sure it's a pertinent one (maybe I miss something). saying that on linux everything is a file means that : 1 - communication with modules and processes is writing to files : is that correct 2 - if it's correct, those files are stored on hard drive ? 3 - if it's correct, doest it take time to R/W hard drive ? | Everything may be a file, but not everything is real . Consider the contents of /proc . On my Linux system, there is a file /proc/uptime , whose current contents are: 831801.89 1241295.64 If I were to cat the file again, it would contain different numbers. My hard drive is mounted read-only, so it can't possibly be the case that something is writing these numbers to disk every fraction of a second.In fact, nothing under /proc is on disk. Each interaction with a file in that directory simply runs kernel code, due to the nature of procfs . Then there are temporary files. Chances are, your /tmp is mounted tmpfs , meaning its contents are stored in RAM instead of on disk. Another interesting place is /dev/tcp , for communicating with the network. On some systems, this only even exists under bash but not other shells, so it can't possibly be on-disk in those systems. These examples all show that the filesystem and the hard drive are separate, and the "Everything is a file" philosophy does not impact performance on account of I/O speed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/360212",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65878/"
]
} |
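A couple of quick checks along the lines of the answer above; a small illustration assuming GNU coreutils and a typical Linux setup where /tmp is tmpfs:

cat /proc/uptime; sleep 1; cat /proc/uptime   # values change, yet nothing is written to storage
stat -f -c %T /proc /tmp                      # typically prints "proc" and "tmpfs"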
360,332 | I'm on FreeBSD 11. I have a user "user123" belonging to the group wheel, and wheel has %wheel ALL=(ALL) NOPASSWD: ALL in /usr/local/etc/sudoers . I don't know the password of root; however, I'm able to run "sudo" without one. I've installed PostgreSQL and run it via "service start". Now I want to log in as the postgresql user and create a database or other stuff:

$ su postgres
Password:
# or
$ su - postgres
Password:

But I don't know the password. Is this the password of the user root or the user postgres? I don't know either of them. Is there a standard workaround for this? | If you want to log in as postgres, and you have sudo access without password requirements, do:

sudo -iu postgres

The -i starts a login shell. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/360332",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227643/"
]
} |
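To then do the actual work from the question (creating a database), the PostgreSQL client tools can be run the same way without an interactive shell; for example, assuming the FreeBSD package puts createdb and psql in postgres's PATH:

sudo -iu postgres createdb mydb     # create a database
sudo -iu postgres psql -c '\l'      # list databases to confirm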
360,375 | Is it good to delete the variables used in a shell-script at the end of a script. rm -rf $abcrm -rf $def or unset $abcunset $def and so on. Is this actually a good practice? | This is a very bad practice. rm deletes files . It is nothing to do with variables. In any case, the variables themselves will be disposed of when the script ends and the operating system reclaims the shell's memory. In the simple case, a file with the same name as the value of one of the variables will be deleted by this construction: abc=filenamerm -f $abc # Deletes "filename" in the current directory It gets worse. If abc and def contain the names of files as individual words separated by a space (or any other character of IFS ), you will delete those files, and wildcards like * will be expanded too if they appear in any of the words. abc='hello world'rm -f $abc # Deletes file "hello" and "world" (leaves "hello world" alone)abc='5 * 3'rm -f $abc # Deletes all files, because * is expanded (!)def='-r /'rm -f $def # Really deletes *all* files this user can access Shell parameter expansion with $var is subject to word splitting , where every character of the IFS variable divides the variable into different arguments. Each word is then subject to filename expansion , which uses * , ? , and [abc...] patterns to create filenames. This could get very bad, depending on what your variables have in them. Do not do this. There is no need to blank or unset variables at the end of a shell script in any way. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/360375",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152598/"
]
} |
360,418 | I am trying to add 0 to the beginning, IF there is a "." at the 2nd character of that line. I couldn't combine these two; awk '{ print substr( $0, 2, 1 ) }' file.txt showing the second character sed -ie "s/.\{0\}/0/" file.txt adding a zero to the beginning. There should be an "if the second character is a dot". sample file: 1.02.2017 23:40:0010.02.2017 23:40:00 final: 01.02.2017 23:40:0010.02.2017 23:40:00 | We may use either of sed or awk to completely solve the problem. With sed : $ sed 's/^.\./0&/' file.txt When & occurs in the replacement part of the substitution command ( s ), it will be expanded to the part of the input line that matches the pattern part of the command. The regular expression ^.\. means " match all lines that starts with ( ^ ) an arbitrary character ( . ) followed by a literal dot ( \. ) ". If the line is 1.02.2017 23:40:00 , the pattern will match, and 1. would be replaced by 01. at the start of the line. With awk : Building on the partial awk code in the question... This will, as stated, print the second character of each line of input: $ awk '{ print substr($0, 2, 1) }' file.txt We can use the fact that substr($0, 2, 1) returns the second character and use that as the condition: $ awk 'substr($0, 2, 1) == "." { ... }' file.txt What goes into { ... } is code that prepends $0 , which is the contents of the current line, with a zero if the preceding condition is true: $ awk 'substr($0, 2, 1) == "." { $0 = "0" $0 }' file.txt Then we just need to make sure that all lines are printed: $ awk 'substr($0, 2, 1) == "." { $0 = "0" $0 } { print }' file.txt The condition substr($0, 2, 1) == "." may of course be changed into a regular expression too (we use exactly the same expression as we used in the sed solution): $ awk '/^.\./ { $0 = "0" $0 } { print }' file.txt Some people who thinks "shorter is always better" would write that as $ awk '/^.\./ { $0 = "0" $0 } 1' file.txt (and probably also remove most spaces: awk '/^.\./{$0="0"$0}1' file.txt ) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/360418",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227998/"
]
} |
360,434 | I'm trying to install software and the autoreconf fails because libtoolize is not installed on my system. I don't have root access on this system, but I need to install libtoolize . How can I install libtoolize ? What source code package provides this? | The software package is libtool : https://www.gnu.org/software/libtool/ | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/360434",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/220362/"
]
} |
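Since there is no root access, the usual route is to build libtool (which provides libtoolize) into a prefix under $HOME; a rough sketch, where the version number and paths are only illustrative:

wget https://ftp.gnu.org/gnu/libtool/libtool-2.4.6.tar.gz
tar xf libtool-2.4.6.tar.gz
cd libtool-2.4.6
./configure --prefix="$HOME/.local"
make && make install
export PATH="$HOME/.local/bin:$PATH"   # add to ~/.profile so autoreconf finds libtoolize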
360,443 | I have many files that look similar to this: 56.mp3?referredby=rss What I want to do is remove the ?referredby=rss so they'll be like this: 56.mp3 How would I do this? | If you have Perl rename , it’s as easy as rename 's/\?referredby=rss//' ./*referredby=rss With util-linux rename : rename '?referredby=rss' '' ./*referredby=rss | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/360443",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117923/"
]
} |
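If neither rename variant is available, a plain shell loop does the same job; a sketch (the -- guards against names starting with a dash):

for f in ./*'?referredby=rss'; do
    [ -e "$f" ] || continue                # nothing matched
    mv -- "$f" "${f%\?referredby=rss}"
done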
360,537 | I am using Manjaro 17 with i3wm (if any relevance). I want to run a single command on start-up to fix the my touchpad tap click setting. I wrote the script that enables the option in /usr/bin/ and change its mode as executable. /usr/bin/touchpad-enable-tap-click: #!/bin/bashexec xinput set-prop 11 290 1 The script can be smootly executed in terminal without causing any problem. Based on my research, I prepared a simple service file in /etc/systemd/system/ . /etc/systemd/system/touchpad-enable-tap-click.service: [Unit]Description=Allow touchpad tap click[Service]Type=oneshotExecStart=/usr/bin/touchpad-enable-tap-click[Install]WantedBy=multi-user.target than executed following command before reboot: [sercan@compaq ~]$ sudo systemctl enable touchpad-enable-tap-click.serviceCreated symlink /etc/systemd/system/multi-user.target.wants/touchpad-enable-tap-click.service → /etc/systemd/system/touchpad-enable-tap-click.service. I also tried full path. The service is not working, as a result: systemctl status [sercan@compaq ~]$ systemctl status touchpad-enable-tap-click.service● touchpad-enable-tap-click.service - Allow touchpad tap click Loaded: loaded (/etc/systemd/system/touchpad-enable-tap-click.service; enabled; vendor preset: disabled) Active: failed (Result: exit-code) since Sat 2017-04-22 01:51:17 +03; 14min ago Main PID: 32429 (code=exited, status=1/FAILURE)Nis 22 01:51:17 compaq systemd[1]: Starting Allow touchpad tap click...Nis 22 01:51:17 compaq bash[32429]: Unable to connect to X serverNis 22 01:51:17 compaq systemd[1]: touchpad-enable-tap-click.service: Main process exited, code=exited, status=1/FAILURENis 22 01:51:17 compaq systemd[1]: Failed to start Allow touchpad tap click.Nis 22 01:51:17 compaq systemd[1]: touchpad-enable-tap-click.service: Unit entered failed state.Nis 22 01:51:17 compaq systemd[1]: touchpad-enable-tap-click.service: Failed with result 'exit-code'. journal -xe after attempting restart service: Nis 22 02:09:52 compaq sudo[21550]: sercan : TTY=pts/0 ; PWD=/home/sercan ; USER=root ; COMMAND=/usr/bin/systemctl restart touchpad-enable-tap-click.serviceNis 22 02:09:52 compaq sudo[21550]: pam_unix(sudo:session): session opened for user root by (uid=0)Nis 22 02:09:52 compaq systemd[1]: Starting Allow touchpad tap click...-- Subject: Unit touchpad-enable-tap-click.service has begun start-up-- Defined-By: systemd-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel-- -- Unit touchpad-enable-tap-click.service has begun starting up.Nis 22 02:09:52 compaq bash[21553]: Unable to connect to X serverNis 22 02:09:52 compaq systemd[1]: touchpad-enable-tap-click.service: Main process exited, code=exited, status=1/FAILURENis 22 02:09:52 compaq systemd[1]: Failed to start Allow touchpad tap click.-- Subject: Unit touchpad-enable-tap-click.service has failed-- Defined-By: systemd-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel-- -- Unit touchpad-enable-tap-click.service has failed.-- -- The result is failed.Nis 22 02:09:52 compaq systemd[1]: touchpad-enable-tap-click.service: Unit entered failed state.Nis 22 02:09:52 compaq systemd[1]: touchpad-enable-tap-click.service: Failed with result 'exit-code'.Nis 22 02:09:52 compaq sudo[21550]: pam_unix(sudo:session): session closed for user root I hope you can help me, I appreciate. | The GUI is a distinct part of the operating system, and a machine can have multiple GUI environments. Your attempts with systemd aren't working because the services are executed outside of a GUI context. 
In fact, they're executed before the GUI starts. To run xinput , you need to have a GUI, which is provided by an X server . Applications know what the GUI context is (i.e. which X server to communicate with) through the DISPLAY environment variable. This is a way to check whether a GUI is available: if that variable is not set, you're outside of a GUI context. (Setting the variable won't create a GUI context. It could let you connect to an existing GUI context from outside but that's not relevant here.) If your login prompt is in graphical mode, then you're using a display manager . You can configure the display manager to run xinput , and then the settings will be applied as soon as the login prompt is displayed. How to do that depends on which display manager you're using; see How can I run a script that starts before my login screen? for more details. No matter how you log in, you can apply the settings as part of your login scripts. If you're using .xinitrc or .xsession to start your GUI session, add the command there. If you're using a desktop environment which has a concept of startup applications, add the xinput command, or a script that runs it, to your startup applications. If you're using a window manager directly, check its documentation for how to run a command at startup (almost any window manager can do this). Since you're using i3, you can run a command at GUI login time by putting an exec command in your ~/.i3/config : exec xinput set-prop 11 290 1 Although systemd starts the display manager as a service, I don't think it provides a way to run a command in the resulting GUI context. It may provide a way to run a command when you log in however; see the Arch Wiki for examples. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/360537",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/228077/"
]
} |
360,540 | I have a simple bash function dividing two numbers: echo "750/12.5" | bc I'd like to take the output from bc and append /24 and pipe said result to another instance of bc . Something like: echo "750/12.5" | bc | echo $1 + "/24" | bc Where $1 is the piped result. P.S. I realize I could just do echo "750/12.5/24" | bc my question is more in regards to the appending of text to a pipe result. | In the simplest of the options, this does append to the pipe stream: $ echo "750/12.5" | { bc; echo "/24"; }60/24 However that has an unexpected newline, to avoid that you need to either use tr: $ echo "750/12.5" | { bc | tr -d '\n' ; echo "/24"; }60/24 Or, given the fact that a command expansion removes trailing newlines: $ printf '%s' $( echo "750/12.5" | bc ); echo "/24"60/24 But probably, the correct way should be similar to: $ echo "$(echo "750/12.5" | bc )/24"60/24 Which, to be used in bc, could be written as this: $ bc <<<"$(bc <<<"750/12.5")/24"2 Which, to get a reasonable floating number precision should be something like: $ bc <<<"scale=10;$(bc <<<"scale=5;750/12.5")/24"2.5000000000 Note the need of two scale, as there are two instances of bc. Of course, one instance of bc needs only one scale: $ bc <<<"scale=5;750/12.5/24" In fact, what you should be thinking about is in terms of an string: $ a=$(echo "750/12.5") # capture first string.$ echo "$a/24" | bc # extend the string2 The comment about scale from above is still valid here. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/360540",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
360,545 | I recently decided to change my PS1 variable to accommodate some pretty Solarized colors for my terminal viewing pleasure. When not in a tmux session, everything is great! Rainbows, ponies, unicorns and a distinguishable prompt! Cool! The problem is within tmux, however. I've verified that the value of PS1 is what I expect it to be and the same as it is when tmux isn't running, namely \[\033]0;\w\007\]\[\[\]\]\u\[\]@\[\[\]\]\h\[\]:\[\]\W\[\]$ \[\] . All of my aliases, etc. in my .bash_profile are also functioning as expected. tmux is also displaying colors without incident, as echo -ne "\033[1;33m hi" behaves as expected as does gls --color . The current relevant line in my .bash_profile is export PS1="\[\033]0;\w\007\]\[\[\]\]\u\[\]@\[\[\]\]\h\[\]:\[\]\W\[\]$ \[\]" , although originally I was sourcing a script located in a .bash_prompt file to handle some conditionals, etc. I tried reverting to the simpler version. Executing bash will cause the prompt to colorize, but must be done in each pane. export PS1=[that long string I've already posted] will not. My .tmux.conf is as follows: set-option -g default-command "reattach-to-user-namespace -l /usr/local/bin/bash"set -g default-terminal "xterm-256color"set-window-option -g automatic-rename onbind '"' split-window -c "#{pane_current_path}"bind % split-window -h -c "#{pane_current_path}"bind c new-window -c "#{pane_current_path}" Relevant portions of .bash_profile: export TERM="xterm-256color"if which tmux >/dev/null 2>&1; then test -z "$TMUX" && (tmux attach || tmux new-session)fi I'm using macOS Sierra, iTerm 2, I've tried both the current homebrew version of bash and the system bash (it's currently using the homebrew), tmux 2.4. I also placed touch testing_touch_from_bash_profile in my .bash_profile while in a tmux session with two panes, killed one pane, opened a pane and verified that the file was in fact created. echo $TERM returns xterm-256color . I've ensured that when exiting tmux to test settings changes that I've exited tmux and that no tmux process is currently running on the system via ps -ax | grep tmux . Oddly, sourcing the .bash_prompt script also changes the color so long as I do it within each tmux pane. I've looked at https://stackoverflow.com/questions/21005966/tmux-prompt-not-following-normal-bash-prompt-ps1-w and tried adding the --login flag after the bash call in the first line of my .tmux.conf. Launching tmux with tmux new bash will cause the first pane to colorize, but subsequent panes will not. The $PS1 variable is being honored for seemingly all aspects except colorizing any of the fields. Anyone have any ideas? | On my machine the solution is to add set -g default-terminal "xterm-256color" to ~/.tmux.conf . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/360545",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/228084/"
]
} |
360,547 | In my LAN I am using a pfSense server with one DHCP server on it. I need to block a second DHCP server from showing up in my LAN. I think I can use the pfSense firewall to refuse the other DHCP server's IP address. What should I do? | On my machine the solution is to add set -g default-terminal "xterm-256color" to ~/.tmux.conf . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/360547",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227734/"
]
} |
360,559 | I am trying to install Debian but I don't know how to partition it. I have a 1 TB hard disk. I want to give 60 GB to the Debian system files, 2 GB to swap, and use the rest for media files. What are /, /home and /usr/local? | On my machine the solution is to add set -g default-terminal "xterm-256color" to ~/.tmux.conf . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/360559",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227559/"
]
} |
360,582 | How to get a list of all disks, like this?

/dev/sda
/dev/sdb | ls (shows individual partitions though):

# ls /dev/sd*
/dev/sda  /dev/sda1

ls (just disks, ignore partitions):

# ls /dev/sd*[a-z]
/dev/sda

fdisk:

# fdisk -l 2>/dev/null | awk '/^Disk \//{print substr($2,0,length($2)-1)}'
/dev/xvda | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/360582",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83275/"
]
} |
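On reasonably recent systems, lsblk gives the same list without parsing fdisk output; a sketch (it may also show loop or optical devices, which can be filtered on the TYPE column):

lsblk -dpno NAME                                         # whole devices only, full paths, no header
lsblk -pno NAME,TYPE | awk '$2 == "disk" { print $1 }'   # restrict to TYPE "disk"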
360,631 | I'm trying to do the following: cat file1.txt | xargs -I{} "cat file2.txt | grep {}" I'm expecting each line from file1 to be the value for grep at the end of the third pipe. It's not working as expected. Is this because -I{} stops looking for things to replace once it hits the pipe? Is there a way around this? | It's because you need a shell to create a pipe or perform redirection. Note that cat is the command to concatenate, it makes little sense to use it just for one file. cat file1.txt | xargs -I{} sh -c 'cat file2.txt | grep -e "$1"' sh {} Do not do: cat file1.txt | xargs -I{} sh -c 'cat file2.txt | grep -e {}' as that would amount to a command injection vulnerability. The {} would be expanded in the code argument to sh so interpreted as shell code. For instance, if one the line of file1.txt was $(reboot) that would call reboot . The -e (or you could also use -- ) is also important. Without it, you'd have problems with regexps starting with - . You can simplify the above using redirections instead of cat : < file1.txt xargs -I{} sh -c '< file2.txt grep -e "$1"' sh {} Or simply pass the file names as argument to grep instead of using redirections in which case you can even drop the sh : < file1.txt xargs -I{} grep -e {} file2.txt You could also tell grep to look for all the regexps at once in a single invocation: grep -f file1.txt file2.txt Note however, that in that case, that's just one regexp for each line of file1.txt , there's none of the special quote processing done by xargs . xargs by default considers its input as a list of blank (with some implementations only space and tab, on others any in the [:blank:] character class of the current locale) or newline separated words for which backslash and single and double quotes can be used to escape the separators (newline can only be escaped by backslash though) or each other. For instance, on an input like: 'a "b'\" "bar baz" x\y xargs without -I{} would pass a "b" , bar baz and x<newline>y to the command. With -I{} , xargs gets one word per line but still does some extra processing. It ignores leading (but not trailing) blanks. Blanks are no longer considered as separators, but quote processing is still being done. On the input above xargs -I{} would pass one a "b" foo bar x<newline>y argument to the command. Also note that one many systems, as required by POSIX, that won't work if words are more than 255 characters long. All in all, xargs -I{} is pretty useless. If you want each line to be passed verbatim as argument to the command you could use GNU xargs -d '\n' extension: < file1.txt xargs -d '\n' -n 1 grep file2.txt -e (here relying on another extension of GNU grep that allows passing options after arguments (provided POSIXly correct is not in the environment) or portably: sed "s/'/'\\\\\\''/g;s/.*/'&'/" file1.txt | xargs -n1 sh -c ' for line do grep -e "$line" file2.txt done' sh If you wanted each word in file1.txt (quotes still recognised) as opposed to each line to be looked for (which would also work around your trailing space issue if you have one word per line anyway), you can use xargs -n1 alone instead of using -I : < file1.txt xargs -n1 sh -c ' for word do grep -e "$word" file2.txt done' sh To strip leading and trailing blanks (but without the quote processing that xargs does), you could also do: unset IFS # restore word splitting to its defaultwhile read -r regexp; do grep -e "$regexp" file2.txtdone < file1.txt | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/360631",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
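To convince yourself that the single-invocation grep -f form does what the per-line loops do, you can try it on throwaway files; this is a minimal sketch and the file contents below are invented purely for illustration:
printf '%s\n' foo bar > file1.txt
printf '%s\n' 'a foo line' 'a baz line' 'a bar line' > file2.txt
grep -f file1.txt file2.txt    # prints "a foo line" and "a bar line"
Remember that with -f each line of file1.txt is taken as a basic regular expression, so characters such as . or * in the patterns keep their regexp meaning.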
360,740 | I have installed Midnight Commander from FreeBSD 12.0-current with: pkg install mc When calling mc as root, it runs as expected; however, when running as a normal user, it aborts with the error: common.c: unimplemented subshell type 1 read (subshell_pty...): No error: 0 (0) What to do? | According to this thread , there is a bug/problem with mc, depending also on how it is compiled. The options are to recompile it with SUBSHELL off, or to run it as: mc -u So, the easiest option is to create an alias for mc as mc -u . As in: alias mc='mc -u' From man mc : -u, --nosubshell Disable use of the concurrent shell (only makes sense if Midnight Commander has been built with concurrent shell support). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/360740",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/138261/"
]
} |
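If the workaround is acceptable, you can make the alias permanent by adding it to your shell's startup file. This is only a sketch: ~/.shrc is the default rc file for FreeBSD's /bin/sh, so adjust the file name to whichever shell you actually use.
echo "alias mc='mc -u'" >> ~/.shrc
. ~/.shrc    # re-read the file so the alias takes effect in the current session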
360,745 | Pardon if this question has an exact duplicate somewhere else, but so far all of the answers I have found on SE or other sites in general do not answer this question specifically. I am taking an operating systems course in my college and hence I am pretty new to file systems in general. I understand that in most file systems, there is a root directory which contains file directory entries. These entries contain a mapping from filename to inode number, and are variable size in length. According to this answer , I guess these entries are stored in a linear fashion, like below: I can fully understand what inodes are and how they map to a file's data block numbers on the physical disk, using their Table of Contents (TOC) entries. However, my question is: How and where are subdirectory file directory entries stored? I would believe that they are either stored in the same location as the root directory, at some offset. However, I cannot envision how this offset can be retrieved from the inode. Hence, I have a feeling that the directory entries of subdirectories are actually stored in the data region of the disk, instead of with the root directory's entries. Hence, if this is the case, traversing from one directory to another requires the disk to read from seemingly arbitrary locations, which seems a little inefficient to me. Nevertheless, I would like to simply clear up my misconceptions on the location of the file directory entries of a subdirectory. Much help is appreciated. | Directories are usually implemented as files. They have an inode, and a data area, but of course are usually accessed (at least written to) by special system calls. Some systems allow for reading directories with the usual read(2) system call (Linux doesn't, FreeBSD did when I last checked). The data area of the directory-file then contains the directory entries. On ext4 , the root directory also has an inode, it's fixed to inode number 2 (try ls -lid / ). Having the directory act like a file makes it easy to allocate space for the directory entries, etc, as the functions to allocate blocks for files must always be there. Also, since they use the same data blocks as needed, there's no need to allocate space between file data and directory listings beforehand. The internals of how directory entries are stored varies between file systems, and has for example evolved between ext2 and ext4 . Modern systems use trees instead of linear lists for faster lookups. See here . Even the venerable FAT filesystem stores directories as files, but at least in older FATs, the root directory is special. (The structure of the directory entries in FAT is of course different from Unix filesystems.) Hence, if this is the case, traversing from one directory to another requires the disk to read from seemingly arbitrary locations, which seems a little inefficient to me. Yep. But often-accessed directory entries (or the underlying data blocks) are likely to be cached in modern operating systems. Saving the contents of all directories centrally would require pre-allocating a large area, and would still require disk seeks within the directory data area. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/360745",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117500/"
]
} |
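You can poke at this from a shell yourself; the commands below just inspect existing directories, and the inode numbers they print will of course differ between systems:
ls -lid /            # the root directory's own inode (fixed to 2 on ext4)
ls -ia /etc | head   # directory entries are name -> inode number pairs
stat /etc            # a directory has a size and allocated blocks, just like a file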
360,774 | There is a directory on my Linux system in which, because of some software malfunction, some directories with junk names (as you can see below) have been created, and I have trouble deleting them: $ lltotal 1532drwxr-xr-x 2 sensage sensage 4096 Apr 19 16:56 -?$??drwxrwxr-x 248 sensage sensage 4096 Apr 23 11:37 .drwxrwxr-x 99 sensage sensage 4096 Apr 16 14:23 ..drwxr-xr-x 2 sensage sensage 4096 Apr 6 14:54 }???;?drwxr-xr-x 2 sensage sensage 4096 Apr 19 03:01 }??=?|-rw-r--r-- 1 sensage sensage 88 Apr 22 13:37 $drwxr-xr-x 2 sensage sensage 4096 Apr 2 12:43 ?drwxr-xr-x 2 sensage sensage 4096 Mar 20 02:51 ?=??&?drwxr-xr-x 2 sensage sensage 4096 Apr 11 08:40 ?;%??;drwxr-xr-x 2 sensage sensage 4096 Apr 14 09:38 ?:????drwxr-xr-x 2 sensage sensage 4096 Mar 22 17:21 ?(?>~?drwxr-xr-x 2 sensage sensage 4096 Apr 1 13:45 ?[???%drwxr-xr-x 2 sensage sensage 4096 Apr 3 14:03 ?@????drwxr-xr-x 2 sensage sensage 4096 Apr 12 16:18 ??drwxr-xr-x 2 sensage sensage 4096 Apr 17 16:38 ??&???drwxr-xr-x 2 sensage sensage 4096 Mar 25 02:43 ??+???drwxr-xr-x 2 sensage sensage 4096 Apr 19 00:46 Ü¡?,??drwxr-xr-x 2 sensage sensage 4096 Mar 28 18:54 ÚŸ??"?drwxr-xr-x 2 sensage sensage 4096 Mar 27 01:04 ???(?drwxr-xr-x 2 sensage sensage 4096 Apr 19 22:41 ??ͨ?`drwxr-xr-x 2 sensage sensage 4096 Apr 15 11:44 ?????- As you can see, the directory names (displayed in blue) are garbage. When I try to delete them I get the error below: $ ls -1 | grep -v 20 | xargs rm -rf xargs: unmatched double quote; by default quotes are special to xargs unless you use the -0 option rm: invalid option -- ¼ Try `rm ./'-¼$Þ¸Í'' to remove the file `-\274$\336\270\315'. Try `rm --help' for more information. What should I do with them? | ls will print non-ASCII characters (or rather, characters not supported in the current locale) as ? . This is one of the reasons why parsing the output of ls is a bad thing to do. The output from ls is meant to be looked at . In some cases, like this, those are not the actual names that exist in the filesystem. Try instead something like (these will delete all files and directories, including /path/to/dir ) rm -rf /path/to/dir or find /path/to/dir -delete or find /path/to/dir -exec rm -rf {} + or find /path/to/dir -print0 | xargs -0 rm -rf Modify to fit your needs. To only delete files, add -type f after the path in the find examples, for example. Doing just rm -rf * inside that directory (that's important , the current working directory must be the directory whose files and directories you want to delete) may also be enough. See also Why not parse ls ? | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/360774",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78188/"
]
} |
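If you only want to remove a single entry whose name you cannot type, another option is to address it by inode number instead of by name. This is a sketch: replace /path/to/dir with the real directory, replace 123456 with the inode number that ls prints, and note that -maxdepth, -inum and -exec ... + work like this with GNU and BSD find (they are not guaranteed by POSIX).
ls -lib /path/to/dir                                           # -i shows inode numbers, -b escapes odd characters
find /path/to/dir -maxdepth 1 -inum 123456 -exec rm -rf {} +   # remove just that one entry, even if it is a directory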
360,800 | I'd like to know what the minus (-) and the EOC in the command below mean. I know some languages like Perl allow you to choose any combination of characters (not bound to EOF), but is that the case here? And the minus is a complete mystery for me. Thanks in advance! ftp -v -n $SERVER >> $LOG_FILE <<-EOC user $USERNAME $PWD binary cd $DIR1 mkdir $dir_lock get $FILE bye EOC | That's a here-document redirection. command <<-word here-document contents word The word used to delimit the here-document is arbitrary. It's common, but not necessary, to use an upper-case word. The - in <<-word has the effect that tabs will be stripped from the beginning of each line in the contents of the here-document. cat <<-SERVICE_ANNOUNCEMENT hello world SERVICE_ANNOUNCEMENT If the above here-document was written with literal tabs at the start of each line, it would result in the unindented output hello world rather than the tab-indented hello world Tabs before the end delimiter are also stripped out with <<- (but not without the - ): cat <<-SERVICE_ANNOUNCEMENT hello world SERVICE_ANNOUNCEMENT (same output) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/360800",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37103/"
]
} |
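One more detail the ftp script relies on: $USERNAME, $PWD and the other variables are expanded inside the here-document, and that only happens because the delimiter is unquoted. A minimal illustration (the variable name here is made up):
name=world
cat <<-EOC
hello $name
EOC
# prints: hello world
cat <<-'EOC'
hello $name
EOC
# prints: hello $name  -- quoting any part of the delimiter disables expansion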
360,817 | How can I replace all of the spaces at the beginning of each line with a tab? I would prefer to use sed for this. | Portably: TAB=$(printf '\t'); sed "s/^ */$TAB/" < file.in > file.out Some shells ( ksh93 , zsh , bash , mksh and FreeBSD sh at least) also support a special form of quotes ( $'...' ) where things like \t are expanded. sed $'s/^ */\t/' < file.in > file.out The fish shell expands those outside of quotes: sed 's/^ */'\t/ < file.in > file.out Some sed implementations like GNU sed also recognise \t as meaning TAB by themselves. So with those, this would also work: sed 's/^ */\t/' < file.in > file.out Portably, awk does expand \t inside its double quotes. It also uses extended regular expressions, so one can use x+ in place of xx* : awk '{sub(/^ +/, "\t"); print}' < file.in > file.out | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/360817",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/221298/"
]
} |
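To check the result you can make the tab visible. A quick test with a throwaway file (the file name and contents are invented; cat -A is a GNU coreutils option that displays a tab as ^I):
printf '    indented\n' > file.in
TAB=$(printf '\t')
sed "s/^ */$TAB/" < file.in | cat -A
# ^Iindented$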
360,888 | I'm debugging an app for a client and I found some information in the DB which could be the solution. I asked the client to extract it, but unfortunately the client sent me the raw data in hexadecimal... I have asked the client to resend the plain text from the DB tools, but while awaiting their response I'm looking for a bash solution. I know the encoded data is a UTF-8 encoded string: is there a way to decode it with Unix tools? | With xxd (usually shipped with vim ) $ echo 5374c3a97068616e650a | xxd -p -r Stéphane If your locale's charset (see output of locale charmap ) is not UTF-8 (but can represent all the characters in the encoded string), add a | iconv -f UTF-8 . If it cannot represent all the characters, you could try | iconv -f UTF-8 -t //TRANSLIT to get an approximation. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/360888",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/91490/"
]
} |
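If xxd happens not to be installed, perl can do the same hex-to-bytes conversion; a sketch using the sample string from the answer above:
echo 5374c3a97068616e650a | perl -ne 'chomp; print pack "H*", $_'
# Stéphane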