You need to install the relevant C library. Since Parrot OS is based on Debian, and provides arm64 binaries, the following should work:

Enable the arm64 architecture (this matches aarch64): sudo dpkg --add-architecture arm64
Update the local repository caches: sudo apt update
Install the arm64 C library: sudo apt install libc6:arm64

This will fail if your system isn't up-to-date, so you may need to run sudo apt upgrade first. If a.out needs other libraries, you'll need to install the corresponding :arm64 packages too.
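Put together as a single copy-and-paste sequence (a sketch of the steps above; the extra package name at the end is a hypothetical example for any further dependency your binary might report):

sudo dpkg --add-architecture arm64    # let apt see aarch64 packages
sudo apt update                       # refresh package lists for the new architecture
sudo apt install libc6:arm64          # provides the aarch64 loader the error message complains about
qemu-aarch64 ./a.out                  # try again; any further missing .so will be reported the same way
# sudo apt install libfoo1:arm64      # hypothetical: install whatever :arm64 library is still missing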
I'm sure this is a dumb question, but I'm new to QEMU so please bear with me.

└──╼ $ qemu-aarch64 ./a.out
qemu-aarch64: Could not open '/lib/ld-linux-aarch64.so.1': No such file or directory

I'm assuming I just failed to install something, but I can't seem to figure it out and SO would probably shoot this question down so here I am. Thanks in advance.

OS: Linux ParrotOS
Arch: x86-64
qemu-aarch64: Could not open '/lib/ld-linux-aarch64.so.1': No such file or directory
Thanks to the folks over at https://github.com/zfsonlinux/ I managed to resolve the issue. It looks like the kernel-devel package contained scripts that were compiled for x86 instead of arm64:

# file /usr/src/kernels/4.19.86-v8+/basic/fixdep
basic/fixdep: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), BuildID[sha1]=f96cf37e4ab3abdfa90880655b262c4ae72c937a, for GNU/Linux 3.2.0, not stripped

The result for the other files in the scripts directory was very similar. Running make scripts in the kernel source directory resolved the issue:

# cd /usr/src/kernels/4.19.86-v8+/
# make scripts
...
I'm trying to compile ZFS on my Raspberry Pi 4, running CentOS 7.7 (64-bit). I followed the instructions provided here: https://github.com/zfsonlinux/zfs/wiki/Building-ZFS but can't get past the following error:

checking kernel source version... 4.19.86-v8+
checking kernel file name for module symbols... Module.symvers
checking whether modules can be built... no
configure: error: *** Unable to build an empty module.

Here is the link to the entire config.log - I couldn't really make sense of it: https://pastebin.com/Whmg6hFw And here's some background information on my system:

# uname -a
Linux localhost 4.19.86-v8+ #1 SMP PREEMPT Sat Dec 7 12:40:40 UTC 2019 aarch64 aarch64 aarch64 GNU/Linux

# rpm -qa | grep kernel
kernel-4.19.86_v8+-1.aarch64
kernel-devel-4.19.86_v8+-1.aarch64
kernel-headers-4.19.86_v8+-1.aarch64

# cat /etc/system-release
CentOS Linux release 7.7.1908 (AltArch)

I compiled the kernel and its RPMs myself from the official GitHub repo (https://github.com/raspberrypi/linux); no issues whatsoever with those (just in case that's part of the error - although I don't think so). Any help appreciated - thanks in advance 😉!
Compiling ZFS on aarch64 CentOS
The answer to your literal question is yes: all systems use the GPU to display startup messages and splash screens. That's because going through the GPU is the only way to display something on the monitor. However, the answer to the question you meant to ask is no: the ways the GPU is used during startup and after the system has fully booted are different. During startup, the operating system uses the GPU in text mode or as a simple framebuffer. These involve very little work from the GPU, so they are unlikely to trigger GPU bugs or to make it overheat. Text mode is limited in that it can only show text in a single monospace font. Framebuffer mode can show arbitrary images, but it's slow. Both modes may use a resolution that's less than the maximum that the GPU and monitor can do. Once the system has fully booted, it likely starts using the GPU in a different way, using its computational capabilities. This involves a complex driver in the operating system, and may involve some nontrivial computation on the GPU. Under Linux, this mode is part of the X window system (or a replacement for it such as Wayland). You may be able to get a GUI on Linux with the X.org fbdev driver (which uses the GPU as a simple framebuffer) or with the X.org VESA driver (which is a very old standard that does little more than a framebuffer and has limited resolution). It won't be fast, and it might not be pretty, but it's better than nothing. You may need to work in text mode first to prevent X from starting in a mode that doesn't work. The way to do this depends on the distribution. The Arch Wiki may be useful even if you don't use Arch. Once you've logged in as root, create or edit /etc/X11/xorg.conf to choose the video driver. For example, for fbdev, you need something like this (untested):

Section "Device"
    Identifier "fbdev"
    Driver "fbdev"
    Option "fbdev" "/dev/fb0"
EndSection

You also need to install the appropriate driver (if it isn't already present), which again is distribution-dependent. For example, on Debian/Ubuntu, that's apt-get install xserver-xorg-video-fbdev.
I was wondering if Unix systems use the GPU for the startup splash/loading screen because I've been having some trouble with an overheating Mac with graphics issues. Unix-type systems (such as MacOS 10.6, 10.10 and different versions of Ubuntu) show the splash screen, but never actually boot into the GUI (typically just a plain black/blue/white screen after the startup splash). Windows, however, starts up (I assume this is what's happening as I can hear hard drive activity) and only shows a black screen (no splash or loading screen). This just made me curious as I have a cursed 2008 ATI iMac. I plan later to try reapplying thermal paste to see if that does any good, and then try a reflow (I know this'll only be a very temporary solution but I just want to see if anything will work), but if all else fails, it'll probably go into the bin.
Do Unix systems and other similar systems use the GPU for the startup splash/loading screen (when there is one)?
You should check out Monica - it's a handy GUI frontend that lets you calibrate your monitor. Monica depends on fltk-devel AFAIK. Read the wiki article on color management under Linux. A little bonus (and one of my favourite monitor tools) is Redshift, which is the Linux equivalent of F.lux. It sets the colour temperature of your monitor according to the position of the sun, so that your eyes will be more comfortable staring at your screen at late hours.
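On the command-line side, xgamma and Redshift can both be driven directly; the values below are only illustrative, not calibrated recommendations:

xgamma -gamma 0.9                              # one gamma correction for the whole display
xgamma -rgamma 1.0 -ggamma 0.95 -bgamma 0.9    # or adjust the red/green/blue channels individually
redshift -l 48.1:11.6 -t 6500:3400             # latitude:longitude and day:night colour temperatures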
I installed Crunchbang Linux. I successfully installed the ATI drivers, and all works well, except that the colors are a bit weird. It's hard to say what exactly is wrong; I just have the sense that on another laptop (or Windows on the same laptop) everything looks better. It's probably some combination of brightness/contrast/gamma settings. Is there any comfortable way to adjust the display settings under Crunchbang, or Linux in general? Some nice calibration tool? I don't need a photography-quality display. I used aticonfig to adjust brightness and contrast, and xgamma for gamma, but they are not very handy.
Calibrate LCD display in laptop?
I think I finally fixed it! I needed to create an xorg.conf file and set the correct driver to radeon; it was fbdev before.

Section "Device"
    ### Available Driver options are:-
    ### Values: <i>: integer, <f>: float, <bool>: "True"/"False",
    ### <string>: "String", <freq>: "<f> Hz/kHz/MHz",
    ### <percent>: "<f>%"
    ### [arg]: arg optional
    #Option "ShadowFB"  # [<bool>]
    #Option "Rotate"    # <str>
    #Option "fbdev"     # <str>
    #Option "debug"     # [<bool>]
    Identifier "Card0"
    Driver "radeon"
    BusID "PCI:1:0:0"
EndSection

Now my glxinfo doesn't show llvmpipe anymore:

OpenGL renderer string: Gallium 0.4 on AMD REDWOOD
GL_MESA_texture_signed_rgba, GL_NV_conditional_render, GL_NV_depth_clamp, GL_MESA_window_pos, GL_NV_blend_square, GL_NV_conditional_render,

Gala is using 0% CPU now. @mikeserv, your comment pointed me in the right direction. I knew my X was being software rendered somehow, but didn't know exactly how. Thank you.
I am using ElementaryOS with the default Drivers and my Gala process is constantly using tons of CPU (sometimes more than 200%). I have looked everywhere but couldnt find a solution. I tried to install the proprietary ATI drivers but then I can't login in the system (black screen). My graphic card is an ATI Mobility Radeon 5730. Here is the result of glxinfo command: name of display: :0 display: :0 screen: 0 direct rendering: Yes server glx vendor string: SGI server glx version string: 1.4 server glx extensions: GLX_ARB_multisample, GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_EXT_import_context, GLX_EXT_texture_from_pixmap, GLX_OML_swap_method, GLX_SGI_make_current_read, GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_MESA_copy_sub_buffer, GLX_INTEL_swap_event client glx vendor string: Mesa Project and SGI client glx version string: 1.4 client glx extensions: GLX_ARB_create_context, GLX_ARB_create_context_profile, GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context, GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_EXT_framebuffer_sRGB, GLX_EXT_create_context_es2_profile, GLX_MESA_copy_sub_buffer, GLX_MESA_multithread_makecurrent, GLX_MESA_swap_control, GLX_OML_swap_method, GLX_OML_sync_control, GLX_SGI_make_current_read, GLX_SGI_swap_control, GLX_SGI_video_sync, GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGIX_visual_select_group, GLX_EXT_texture_from_pixmap, GLX_INTEL_swap_event GLX version: 1.4 GLX extensions: GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context, GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_MESA_multithread_makecurrent, GLX_OML_swap_method, GLX_SGI_make_current_read, GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_EXT_texture_from_pixmap OpenGL vendor string: VMware, Inc. 
OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 0x300) OpenGL version string: 2.1 Mesa 8.0.4 OpenGL shading language version string: 1.20 OpenGL extensions: GL_ARB_multisample, GL_EXT_abgr, GL_EXT_bgra, GL_EXT_blend_color, GL_EXT_blend_minmax, GL_EXT_blend_subtract, GL_EXT_copy_texture, GL_EXT_polygon_offset, GL_EXT_subtexture, GL_EXT_texture_object, GL_EXT_vertex_array, GL_EXT_compiled_vertex_array, GL_EXT_texture, GL_EXT_texture3D, GL_IBM_rasterpos_clip, GL_ARB_point_parameters, GL_EXT_draw_range_elements, GL_EXT_packed_pixels, GL_EXT_point_parameters, GL_EXT_rescale_normal, GL_EXT_separate_specular_color, GL_EXT_texture_edge_clamp, GL_SGIS_generate_mipmap, GL_SGIS_texture_border_clamp, GL_SGIS_texture_edge_clamp, GL_SGIS_texture_lod, GL_ARB_multitexture, GL_IBM_multimode_draw_arrays, GL_IBM_texture_mirrored_repeat, GL_ARB_texture_cube_map, GL_ARB_texture_env_add, GL_ARB_transpose_matrix, GL_EXT_blend_func_separate, GL_EXT_fog_coord, GL_EXT_multi_draw_arrays, GL_EXT_secondary_color, GL_EXT_texture_env_add, GL_EXT_texture_lod_bias, GL_INGR_blend_func_separate, GL_NV_blend_square, GL_NV_light_max_exponent, GL_NV_texgen_reflection, GL_NV_texture_env_combine4, GL_SUN_multi_draw_arrays, GL_ARB_texture_border_clamp, GL_ARB_texture_compression, GL_EXT_framebuffer_object, GL_EXT_texture_env_combine, GL_EXT_texture_env_dot3, GL_MESA_window_pos, GL_NV_packed_depth_stencil, GL_NV_texture_rectangle, GL_ARB_depth_texture, GL_ARB_occlusion_query, GL_ARB_shadow, GL_ARB_texture_env_combine, GL_ARB_texture_env_crossbar, GL_ARB_texture_env_dot3, GL_ARB_texture_mirrored_repeat, GL_ARB_window_pos, GL_EXT_stencil_two_side, GL_EXT_texture_cube_map, GL_NV_fog_distance, GL_APPLE_packed_pixels, GL_APPLE_vertex_array_object, GL_ARB_draw_buffers, GL_ARB_fragment_program, GL_ARB_fragment_shader, GL_ARB_shader_objects, GL_ARB_vertex_program, GL_ARB_vertex_shader, GL_ATI_draw_buffers, GL_ATI_texture_env_combine3, GL_ATI_texture_float, GL_EXT_shadow_funcs, GL_EXT_stencil_wrap, GL_MESA_pack_invert, GL_MESA_ycbcr_texture, GL_NV_primitive_restart, GL_ARB_fragment_program_shadow, GL_ARB_half_float_pixel, GL_ARB_occlusion_query2, GL_ARB_point_sprite, GL_ARB_shading_language_100, GL_ARB_sync, GL_ARB_texture_non_power_of_two, GL_ARB_vertex_buffer_object, GL_ATI_blend_equation_separate, GL_EXT_blend_equation_separate, GL_OES_read_format, GL_ARB_pixel_buffer_object, GL_ARB_texture_compression_rgtc, GL_ARB_texture_float, GL_ARB_texture_rectangle, GL_ATI_texture_compression_3dc, GL_EXT_packed_float, GL_EXT_pixel_buffer_object, GL_EXT_texture_compression_rgtc, GL_EXT_texture_mirror_clamp, GL_EXT_texture_rectangle, GL_EXT_texture_sRGB, GL_EXT_texture_shared_exponent, GL_ARB_framebuffer_object, GL_EXT_framebuffer_blit, GL_EXT_framebuffer_multisample, GL_EXT_packed_depth_stencil, GL_ARB_vertex_array_object, GL_ATI_separate_stencil, GL_ATI_texture_mirror_once, GL_EXT_draw_buffers2, GL_EXT_draw_instanced, GL_EXT_gpu_program_parameters, GL_EXT_texture_compression_latc, GL_EXT_texture_sRGB_decode, GL_OES_EGL_image, GL_ARB_copy_buffer, GL_ARB_draw_instanced, GL_ARB_half_float_vertex, GL_ARB_instanced_arrays, GL_ARB_map_buffer_range, GL_ARB_texture_rg, GL_ARB_texture_swizzle, GL_ARB_vertex_array_bgra, GL_EXT_separate_shader_objects, GL_EXT_texture_swizzle, GL_EXT_vertex_array_bgra, GL_NV_conditional_render, GL_AMD_draw_buffers_blend, GL_ARB_ES2_compatibility, GL_ARB_draw_buffers_blend, GL_ARB_draw_elements_base_vertex, GL_ARB_explicit_attrib_location, GL_ARB_fragment_coord_conventions, GL_ARB_provoking_vertex, 
GL_ARB_sampler_objects, GL_ARB_shader_texture_lod, GL_ARB_vertex_type_2_10_10_10_rev, GL_EXT_provoking_vertex, GL_EXT_texture_snorm, GL_MESA_texture_signed_rgba, GL_ARB_robustness, GL_ARB_texture_storagexorg.log file -> http://pastebin.com/acbCeMB5
ElementaryOS Gala using more than 100% CPU constantly
Partial answer: The Radeon can have either an internal (part of the GPU) or an external (extra chip) TV encoder. They usually have registers where you can set which signal gets output on which DAC (digital/analog converter). As the encoders were meant for analog TVs, a usual setting is "composite" (both luminance and chrominance) on one channel, and "luminance" (Y) and "chrominance" (C) on two other channels. In this way, you can attach S-Video (Y/C) and composite cables. Most external encoders also have an option to output RGB instead (for which you'd need a SCART connector in Europe). Looking at the source code, the internal ("legacy") encoder is set up for Y on Red, C on Green, and Composite on Blue:

WREG32(RADEON_TV_PRE_DAC_MUX_CNTL, (RADEON_Y_RED_EN | RADEON_C_GRN_EN | RADEON_CMP_BLU_EN | RADEON_DAC_DITHER_EN));

There's also some kind of autodetection which may assign signals differently. So that's what you'll get from the DIN plug. The order may be wrong for your particular card, however, and it may be different for external encoder chips. Assuming you have connected it via S-Video (and not via SCART/RGB), if it "has color" in grub, it means the BIOS has correctly assigned C and Y to the right DACs. The driver may assign those differently for a variety of reasons, so you may end up with no chrominance at all (and hence no color) once the driver is active. I couldn't find any way to override this assignment in the code. If the load detection is a bitfield and not just a boolean, this may mean your S-Video cable's chrominance termination resistor is not correctly detected (but that's a guess). So the choice is to either (1) change the driver code to allow manual override of DAC assignment/enable, or (2) tinker with the cable, so you can get at the chrominance signal if it's on the wrong DAC, or so it makes the detection work correctly. Neither of which is easy.
Update: see comment below, Switching Radeon 3450 tv out mode between component and svideo I have this card that works in Windows, and in Grub it has color and looks fine, but in xorg I just get black and white from the component (RGB) cables. I have tried changing tv format with xrandr, but no change. I can change the res though. I suspect maybe its in composite mode instead of component mode. The card has a DVI and a DIN plug, that adapts to svideo or component (rgb) These are the outputs of some commands: lspci: 01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] RV620 LE [Radeon HD 3450] (prog-if 00 [VGA controller]) Subsystem: Dell OptiPlex 980 Flags: bus master, fast devsel, latency 0, IRQ 31 Memory at e0000000 (64-bit, prefetchable) [size=256M] Memory at f7d20000 (64-bit, non-prefetchable) [size=64K] I/O ports at e000 [size=256] Expansion ROM at 000c0000 [disabled] [size=128K] Capabilities: [50] Power Management version 3 Capabilities: [58] Express Legacy Endpoint, MSI 00 Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+ Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?> Kernel driver in use: radeon Kernel modules: radeonxrandr --verbose: Screen 0: minimum 320 x 200, current 1024 x 768, maximum 8192 x 8192 DIN connected primary 1024x768+0+0 (0x55) normal (normal left inverted right x axis y axis) 0mm x 0mm Identifier: 0x51 Timestamp: 29475 Subpixel: no subpixels Gamma: 1.0:1.0:1.0 Brightness: 1.0 Clones: CRTC: 0 CRTCs: 0 1 Transform: 1.000000 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000 0.000000 1.000000 filter: _MUTTER_PRESENTATION_OUTPUT: 0 tv standard: ntsc supported: ntsc, pal, pal-m, pal-60, ntsc-j, scart-pal, pal-cn, secam load detection: 1 range: (0, 1) 1024x768 (0x55) 63.500MHz -HSync +VSync *current h: width 1024 start 1072 end 1176 total 1328 skew 0 clock 47.82KHz v: height 768 start 771 end 775 total 798 clock 59.92Hz 800x600 (0x56) 38.250MHz -HSync +VSync h: width 800 start 832 end 912 total 1024 skew 0 clock 37.35KHz v: height 600 start 603 end 607 total 624 clock 59.86Hz 848x480 (0x57) 31.500MHz -HSync +VSync h: width 848 start 872 end 952 total 1056 skew 0 clock 29.83KHz v: height 480 start 483 end 493 total 500 clock 59.66Hz 720x480 (0x58) 26.750MHz -HSync +VSync h: width 720 start 744 end 808 total 896 skew 0 clock 29.85KHz v: height 480 start 483 end 493 total 500 clock 59.71Hz 640x480 (0x59) 23.750MHz -HSync +VSync h: width 640 start 664 end 720 total 800 skew 0 clock 29.69KHz v: height 480 start 483 end 487 total 500 clock 59.38Hz
Switching Radeon 3450 tv out mode between component and svideo
It looks like you should burn your laptop: your card is not supported by any driver, closed or open source. You need to use what is working for you at the moment and wait for AMD to start supporting your card via their drivers. After screwing around with an HP dv6 I vowed never to get ATI ever again (and mine even works).
I've been researching problems with the ATI Catalyst drivers and there is no fix as far as I can see. I have an ATI Radeon 7470M (HP Pavilion dm4), and I haven't managed to get a properly working distro. Is there any Linux distro that behaves well with this card and allows me to have GNOME 3 without burning my laptop? If there is no solution yet, this question should at least help new users get a quick update :)
ATI friendly distro?
I think I'm starting to get what happened. One of the answers on the page you linked to tells you to run this:

cd /usr ; sudo ln -svT lib /usr/lib64

That will i) move you into the /usr directory and ii) create a link called lib64 (which will be /usr/lib64 if you run the ln command in /usr) pointing to /usr/lib. The command needs root privileges (that's why it has sudo) and it should certainly not be run from your admin user's $HOME directory. Please re-read the instructions and follow them exactly. Also read this note (included just under the ln command):

(Note: The second command shouldn't be necessary if there is already such a symbolic link named lib64 pointing to folder lib there. And if there is already a real folder by that name (determined with ls -l /usr/lib64), you should ensure that its contents are safely moved into folder /usr/lib and then delete --the now empty-- folder /usr/lib64 before executing this command).

So, make sure there is no real /usr/lib64 directory; if there is, move its contents to /usr/lib before running the ln command. NOTE: The actual ln command in my answer is ln -svT lib /usr/lib64; if that is really what you ran, the link will be created at /usr/lib64 irrespective of where you run it from.
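A quick way to check what state you are in before (re)creating the link - a sketch, assuming nothing else has been moved around:

cd /usr
ls -ld lib64        # a correct setup shows: lib64 -> lib (a symlink, not a real directory)
# if lib64 turns out to be a real directory, move its contents into /usr/lib first,
# remove the then-empty directory, and only then create the link:
sudo ln -svT lib /usr/lib64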
I've installed AMD Catalyst 13.8 BETA2 by following these directions. It works fine for the administrative user it was set up with, but a non-admin user gets a black screen on login. The non-admin user works in software rendering mode but not hardware.

cat /var/log/Xorg.0.log
[ 42815.421] (EE) AIGLX error: failed to open /usr/X11R6/lib64/modules/dri/fglrx_dri.so, error[/usr/X11R6/lib64/modules/dri/fglrx_dri.so: cannot open shared object file: No such file or directory]
[ 42815.421] (EE) AIGLX error: failed to open /usr/lib64/dri/fglrx_dri.so, error[/usr/lib64/dri/fglrx_dri.so: cannot open shared object file: No such file or directory]
[ 42815.421] (EE) AIGLX error: failed to open /usr/X11R6/lib/modules/dri/fglrx_dri.so, error[/usr/X11R6/lib/modules/dri/fglrx_dri.so: cannot open shared object file: No such file or directory]

From the installation, there was this command: sudo ln -s lib /usr/lib64

ls -l /lib shows that the root user and root group own the directory and subdirectories. How can non-admin users safely get the necessary access to these files?

Note: the symlink command has been corrected in the linked instructions.
How to safely give non-root access to lib so that Catalyst hardware acceleration can function?
The ES1000 is built into your motherboard; the NVS 300 is an optional extra. Which is why you are getting an error message saying NVRM: No NVIDIA graphics adapter found! The text you quoted says that if you want higher resolution than what the ATI ES1000 supports, then you can install an Nvidia NVS 300, which is a completely different and separate GPU card. The NVS 300 is also a fairly old card. You could probably install any other recent AMD or Nvidia card that would physically fit into the slot (would need a PCIe x16 slot) and into the case (you might need a small fanless card). E.g. an Nvidia GTX 750 (around $110 USD) completely wipes the floor with an NVS 300; it's so much faster that it's beyond comparison - and the 750 isn't even close to a top-of-the-range modern GPU. Even much cheaper cards like the ~$40 USD GT 610 are significantly faster than the NVS 300. According to http://www.fujitsu.com/tw/Images/ds-py-tx100-s3-en.pdf your system has 1 PCIe 3.0 slot that is physically x16 (so it can take a full-size x16 GPU card) but only x8 electrically, so the card would run fine but with slightly reduced bandwidth (GPUs don't use anywhere near the full bandwidth of PCIe 3.0 @ x16 anyway). Finally, if you just want the ES1000 built-in GPU to work, it should Just Work with a reasonably modern Linux kernel and X. Don't expect high resolution or fast graphics, though.
On my system I'm unable to install the recommended graphics driver, so something must be wrong with my installation. The GPU chipset is ATI ES1000, but the recommended driver is NVIDIA NVS300 downloaded from the server vendor's site.The maximum graphics resolution of the onboard graphics controller ATI ES1000 with the native driver of Microsoft Windows 2012 is 1280 x 1024. ATI has not planned to support ATI ES1000 graphics chip with Windows 2012. So there"s no OEM driver available which could be installed on PRIMERGY TX100 S3 or TX100 S3p with Microsoft Windows 2012. For higher graphics resolutions on PRIMERGY TX100 S3 or TX100 S3p, the PCIe graphics controller NVIDIA® Quadro® NVS 300 can be used.Before installation I switched to runlevel 3 (init 3) and blacklisted nouveau driver (echo blacklist nouveau > /etc/modprobe.d/nvidia.conf). None of the conflicting drivers is present: # lsmod | grep -e nouveau -e rivafb -e nvidiafb (empty)These are all steps that should be needed, what else can be wrong on my Oracle Linux (based on Red Hat Enterprise Linux 6.7, Kernel Linux 3.8.13-118.2.1.el6uek.x86_64, GNOME 2.28.2), I was thinking incompatible kernel or some GPU driver conflict? List of OS supported by the driver: Red Hat Enterprise Linux 6.6 (x86_64) Red Hat Enterprise Linux 6.7 (x86_64) Red Hat Enterprise Linux 7 GA (x86_64) Red Hat Enterprise Linux 7.1 (x86_64) SUSE Linux Enterprise Server 11 SP3 (x86_64) SUSE Linux Enterprise Server 11 SP4 (x86_64)The main error:ERROR: Unable to load the kernel module 'nvidia.ko'. This happens most frequently when this kernel module was built against the wrong or improperly configured kernel sources, with a version of gcc that differs from the one used to build the target kernel, or if a driver such as rivafb, nvidiafb, or nouveau is present and prevents the NVIDIA kernel module from obtaining ownership of the NVIDIA graphics device(s), or no NVIDIA GPU installed in this system is supported by this NVIDIA Linux graphics driver release.Output from /var/log/nvidia-installer.log: -> Kernel module compilation complete. -> Unable to determine if Secure Boot is enabled: No such file or directory ERROR: Unable to load the kernel module 'nvidia.ko'. This happens most frequently when this kernel module was built against the wrong or improperly configured kernel sources, with a version of gcc that differs from the one used to build the target kernel, or if a driver such as rivafb, nvidiafb, or nouveau is present and prevents the NVIDIA kernel module from obtaining ownership of the NVIDIA graphics device(s), or no NVIDIA GPU installed in this system is supported by this NVIDIA Linux graphics driver release.Please see the log entries 'Kernel module load error' and 'Kernel messages' at the end of the file '/var/log/nvidia-installer.log' for more information. -> Kernel module load error: insmod: error inserting './kernel/nvidia.ko': -1 No such device -> Kernel messages: survey done event(5c) band:0 for wlan0 ==>rtw_ps_processor .fw_state(8) ==>ips_enter cnts:5 ===> rtw_ips_pwr_down................... ====> rtw_ips_dev_unload... usb_read_port_cancel usb_read_port_complete()-1284: RX Warning! bDriverStopped(0) OR bSurpriseRemoved(0) bReadPortCancel(1) usb_read_port_complete()-1284: RX Warning! bDriverStopped(0) OR bSurpriseRemoved(0) bReadPortCancel(1) usb_read_port_complete()-1284: RX Warning! bDriverStopped(0) OR bSurpriseRemoved(0) bReadPortCancel(1) usb_read_port_complete()-1284: RX Warning! 
bDriverStopped(0) OR bSurpriseRemoved(0) bReadPortCancel(1) usb_write_port_cancel ==> rtl8192cu_hal_deinit bkeepfwalive(0) card disble without HWSM........... <=== rtw_ips_pwr_down..................... in 29ms usb 2-1.2: USB disconnect, device number 7 usb 2-1.2: new low-speed USB device number 8 using ehci-pci usb 2-1.2: New USB device found, idVendor=093a, idProduct=2510 usb 2-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=0 usb 2-1.2: Product: USB Optical Mouse usb 2-1.2: Manufacturer: PixArt input: PixArt USB Optical Mouse as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.0/input/input7 hid-generic 0003:093A:2510.0005: input,hidraw1: USB HID v1.11 Mouse [PixArt USB Optical Mouse] on usb-0000:00:1d.0-1.2/input0 NVRM: No NVIDIA graphics adapter found! NVRM: NVIDIA init module failed! ERROR: Installation has failed. Please see the file '/var/log/nvidia-installer.log' for details. You may find suggestions on fixing installation problems in the README available on the Linux driver download page at www.nvidia.com.
Unable to load the kernel module 'nvidia.ko'
As of right now, FGLRX graphics switching will not work on the MacBook Pro 8,x models. The only way to use fglrx on your model is through BIOS emulation mode, and in that mode switching will not work. You can, however, get graphics switching to work on your model by setting up EFI boot. Doing so is not easy and requires a series of grub2/kernel patches. Instead of using FGLRX, you will be using the open source radeon drivers. Following the directions at this link http://dentifrice.poivron.org/laptops/macbookpro8,2/ should help. I must warn you that the open source radeon drivers are a bit lacking in the 3D department (performance-wise), but the open source Intel drivers are very good.
I have an 8,3 MacBook Pro with an onboard Radeon 6700M GPU alongside the integrated Intel Sandy Bridge GPU. I recently upgraded to FGLRX 8.930/12.1, and Catalyst Control Center seemed to look the same; there weren't any different options compared to 8.920/11.12. However, here it mentions that someone tested things out with FGLRX on the same card as mine and was able to determine that GPU switching was, in fact, working. Unfortunately, the blog doesn't allow comments, so I can't ask this question there. Is there a way for me to determine whether GPU switching is working between the ATI and Intel cards? How can I know if it's working or not? Again, there doesn't seem to be anything in CCC that tells me whether it's enabled or not. There's "PowerPlay," but that seems to be more of a setting for battery saving in general and doesn't make any mention of GPU switching. How can I test if GPU switching/hybrid graphics is enabled and working on my Ubuntu 11.10 machine?
Determining if GPU switching works?
A generally good experience. I did have KDE problems (some minor crashes, really rarely) with my Radeon 7870, but this never happened on Ubuntu with Unity. Installing the driver is pretty straightforward. I used the AMD installer to generate the .deb files and installed them by hand. Then I generated the config file with aticonfig --initial, and everything worked. Games run, Wine works, videos play, Flash runs, Chrome works. I can't recommend you a card (as it's not allowed), but keep in mind that Nvidia >> ATI when it comes to drivers (especially on Linux). However, an ATI card will give you better performance (on Windows at least) for a lower price. If I were to use Linux all day, I would buy an Nvidia card, no doubt. But I took the leap and bought this Radeon. I'm satisfied on Windows, but it's got some problems under Linux.
The time has come for me to upgrade my aging GPU (9800 GT). The AMD 7950 has caught my attention because of the attractive price with pleasing benchmarks. But it is common knowledge that AMD GPUs have poor support on Linux. What sort of performance can I expect with, say, the latest version of Ubuntu? Will I have issues starting X? Will I have issues with a dual monitor setup? Will basic games such as Minecraft or Neverball play as expected without problems? Will there be any issues with video playback or Flash? I tried doing some research on how this card performs on Linux, but Google lacks any usable resources. If the 7950 is completely useless when paired with a Linux system, can someone suggest a comparable Nvidia GPU in the same price range with similar performance?
How well will the AMD Radeon HD 7950 gpu perform on Linux?
I have fixed this by using nouveau, then setting the Radeon to be the output.
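For anyone hitting the same wall, the usual way to wire one GPU's outputs to another GPU doing the rendering is through RandR providers - a sketch, where the provider indices are assumptions you'd check against your own xrandr --listproviders output:

xrandr --listproviders                  # note which provider is nouveau/NVIDIA and which is the Radeon
xrandr --setprovideroutputsource 1 0    # assumed: provider 1 = the Radeon (output sink), 0 = the renderer
xrandr --auto                           # enable the newly available outputs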
I have an ATI Radeon 2400 XT and an Nvidia GTX 580 in my Debian computer. The 580 has 3 ports, but only 2 of them can be used at the same time. I bought the refurbished Radeon so that I could use another screen, but it was being ignored. I reconfigured my BIOS so that the Radeon was the primary display, and the ttys now use that display. After more fiddling, I managed to get my computer to show the cursor on the third screen when I moved my mouse into it, but in GNOME windows do not move with it. I have also added the xorg-edgers PPA. I stopped gdm and tried with xinit, xterm and openbox; I had the same problem. I looked at:

http://web.archive.org/web/20120906222652/http://en.gentoo-wiki.com/wiki/X.Org/Dual_Monitors
https://bbs.archlinux.org/viewtopic.php?id=141041

I could not find anything for Debian except for how to set up each individual graphics card. I am using the free xserver-xorg-video-radeon driver and the proprietary nvidia-driver. xrandr does not detect the Radeon GPU, but lspci and X do. The GNOME cursor passes between them. EDIT: After looking at https://askubuntu.com/questions/593938, it almost works. Interaction with windows still works, and so does the mouse. However, the graphics do not transfer, and I am left with a glitched screen.
Nvidia and ATI gpu system for three monitors
The firmware for your graphics card is missing. You have to explicitly install firmware-linux-nonfree from the non-free repository.

Add the non-free repository to /etc/apt/sources.list (or /etc/apt/sources.list.d/).
Run apt-get update as root.
Install firmware-linux-nonfree with apt-get install firmware-linux-nonfree.
You probably have to reboot after this step, or reload your device driver.

Just some additional background information: most current devices require some kind of firmware blob to run. Debian decided to move these kinds of blobs into a non-free package (you can't alter them, you don't know what they are doing, and sometimes they are not even distributable).
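Concretely, on Wheezy that comes down to something like the following (the mirror URL is only an example; reuse whatever mirror your sources.list already points at):

# /etc/apt/sources.list - add "contrib non-free" to the existing wheezy line
deb http://ftp.debian.org/debian wheezy main contrib non-free

# then, as root:
apt-get update
apt-get install firmware-linux-nonfree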
Installed a fresh Debian Wheezy to enjoy Gnome 3 but it starts in fallback mode. I suppose that's because the loaded drivers do not support 3D acceleration. Installed packages I know are relevant:xserver-xorg-video-ati libgl1-mesa-driThe Gnome 3 was working fine with Ubuntu 12.04, and I belive it was using the FOSS drivers. Interestingly there is no /etc/X11/xorg.conf and when I try to generate it with Xorg -configure I get: X.Org X Server 1.12.1 Release Date: 2012-04-13 X Protocol Version 11, Revision 0 Build Operating System: Linux 3.2.0-2-amd64 x86_64 Debian Current Operating System: Linux blackwhisper 3.2.0-2-amd64 #1 SMP Mon Apr 30 05:20:23 UTC 2012 x86_64 Kernel command line: BOOT_IMAGE=/vmlinuz-3.2.0-2-amd64 root=UUID=e6f57a36-19aa-4dfc-9b61-32d5e08abcc6 ro quiet Build Date: 07 May 2012 12:15:23AM xorg-server 2:1.12.1-2 (Cyril Brulebois <[emailprotected]>) Current version of pixman: 0.24.4 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.0.log", Time: Sat May 19 20:15:31 2012 List of video drivers: mga ...MANYMORE radeon ...MANYMORE ati ...MANYMORE vesa (++) Using config file: "/root/xorg.conf.new" (==) Using system config directory "/usr/share/X11/xorg.conf.d" (II) [KMS] No DRICreatePCIBusID symbol, no kernel modesetting. Number of created screens does not match number of detected devices. Configuration failed. Server terminated with error (2). Closing log file.ADDITION I found now at the message boot: [ 8.121829] [drm] Loading RS780 Microcode [ 8.156063] r600_cp: Failed to load firmware "radeon/RS780_pfp.bin" [ 8.156092] [drm:r600_startup] *ERROR* Failed to load firmware!
How to configure FOSS ATI drivers on Debian Wheezy and ATI RS880 [Radeon HD 4250]?
I've solved it. I looked at the log: "Backup framebuffer data" means that it changes the framebuffer. I thought: "Maybe the framebuffer doesn't work?" So I tried changing the framebuffer using this: https://wiki.archlinux.org/index.php/Uvesafb and now it works. I think this is also the only way, with the ATI proprietary drivers, to really change the TTY resolution.
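For reference, the uvesafb setup described on that wiki page boils down to roughly the following; the resolution is an example, the file path is the one Arch uses (other distributions may differ), and you also need the v86d helper installed plus an initramfs rebuild for the option to take effect at boot:

# /usr/lib/modprobe.d/uvesafb.conf
options uvesafb mode_option=1366x768-32 mtrr=3 scroll=ywrap
# then rebuild the initramfs (e.g. mkinitcpio -p linux on Arch) and reboot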
I have the ATI proprietary drivers. When I power on the computer and log in, all works well, but once I run Xorg I can't change tty or exit from Xorg, because if I try I see only a black screen (the monitor backlight stays on). If I change tty (Ctrl+Alt+F2) I get the black screen; if I then return to Xorg (Ctrl+Alt+F1) it works. If I close or kill Xorg I get the black screen and I must reset the computer. This is the Xorg log when I go to tty2 and during the black screen:

[ 312.470] (**) Option "fd" "24"
[ 312.470] (**) Option "fd" "17"
[ 312.470] (**) Option "fd" "23"
[ 312.470] (**) Option "fd" "33"
[ 312.470] (**) Option "fd" "20"
[ 312.471] (**) Option "fd" "22"
[ 312.471] (**) Option "fd" "21"
[ 312.471] (II) AIGLX: Suspending AIGLX clients for VT switch
[ 312.471] (II) fglrx(0): Backup framebuffer data.
[ 312.560] (II) fglrx(0): Backup complete.
[ 312.596] (II) systemd-logind: got pause for 13:68
[ 312.596] (II) systemd-logind: got pause for 13:67
[ 312.596] (II) systemd-logind: got pause for 13:69
[ 312.596] (II) systemd-logind: got pause for 13:65
[ 312.596] (II) systemd-logind: got pause for 13:64
[ 312.596] (II) systemd-logind: got pause for 13:66
[ 312.596] (II) systemd-logind: got pause for 13:70

What can I do?
Black screen when I move from X session to tty session
passwd -l user

is what you want. That will lock the user account. But you'll still be able to

su - user

though you'll have to run su - user as root. Alternatively, you can accomplish the same thing by prepending a ! to the user's password in /etc/shadow (this is all passwd -l does behind the scenes). And passwd -u will undo this.
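As a minimal sketch (user name and hash are placeholders), locking and inspecting the account looks like this:

sudo passwd -l bogus                # lock: prepends '!' to the stored hash
sudo grep '^bogus:' /etc/shadow     # now shows something like  bogus:!$6$salt$hash...:18000:0:99999:7:::
sudo su - bogus                     # still works, since su run by root never asks for the password
sudo passwd -u bogus                # unlock again if ever needed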
Let's say I create a user named "bogus" using the adduser command. How can I make sure this user will NOT be a viable login option, without disabling the account. In short, I want the account to be accessible via su - bogus, but I do not want it to be accessible via a regular login prompt. Searching around, it seems I need to disable that user's password, but doing passwd -d bogus didn't help. In fact, it made things worse, because I could now login to bogus without even typing a password. Is there a way to disable regular logins for a given a account? Note: Just to be clear, I know how to remove a user from the menu options of graphical login screens such as gdm, but these methods simply hide the account without actually disabling login. I'm looking for a way to disable regular login completely, text-mode included.
Disable a user's login without disabling the account
There's no generic way to do exactly that. If the filesystem doesn't have a notion of file ownership, it probably has a mount option (uid) to decide which user the files will belong to. If the filesystem does have a notion of file ownership, mount it read-write, and users will be able to write every file they have permission to. If you only want a specific user to access the filesystem, and there is a FUSE driver for it, then arrange for the user to have read-write access to the device and mount it through FUSE as that user. Another way to only let a specific user (or a specific group, or better fine-tuning through an ACL) access it is to place the mount point underneath a restricted-access directory:

mkdir -p /media/restricted/joe/somedisk
chown joe /media/restricted/joe/somedisk
chmod 700 /media/restricted/joe/somedisk
mount /dev/sdz42 /media/restricted/joe/somedisk

If you want some users to have read-write access and others to have read-only access regardless of file permissions, mount the filesystem read-write under a restricted-access directory and use bindfs to make a read-only view of that filesystem:

bindfs -o perms=a-w /media/private/somedisk /media/public-read-only/somedisk

You can also make a bindfs view read-write for some users and read-only for others; see the -m and -M options in the bindfs man page. Remember to put the primary mount point under a directory that only root can access.
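For the no-ownership case (FAT and friends), a minimal sketch of the uid/gid mount option mentioned above - the device, mount point and numeric IDs are placeholders:

# give every file on the volume to uid/gid 1000 (check the user's IDs with `id`);
# umask=077 keeps everyone else out
mount -t vfat -o uid=1000,gid=1000,umask=077 /dev/sdz42 /media/joe/usbdisk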
How can I mount some device with read-write access for a given user?
Mount device with r/w access to specific user
D-Bus isn't using the magic cookie file here; it's passing credentials over the UNIX domain socket (SCM_CREDENTIALS). The magic cookie file is only one of several D-Bus authentication mechanisms. D-Bus implements a SASL-compliant interface (see RFC4422) to support a wide range of authentication mechanisms. One of these mechanisms is called "EXTERNAL" auth, and it means that the transport channel itself should be used to guarantee authentication. At least in the case of D-Bus over UNIX sockets, this appears to be the first authentication mechanism that is tried. From the D-Bus spec:Special credentials-passing nul byte Immediately after connecting to the server, the client must send a single nul byte. This byte may be accompanied by credentials information on some operating systems that use sendmsg() with SCM_CREDS or SCM_CREDENTIALS to pass credentials over UNIX domain sockets. However, the nul byte must be sent even on other kinds of socket, and even on operating systems that do not require a byte to be sent in order to transmit credentials. The text protocol described in this document begins after the single nul byte. If the first byte received from the client is not a nul byte, the server may disconnect that client. A nul byte in any context other than the initial byte is an error; the protocol is ASCII-only. The credentials sent along with the nul byte may be used with the SASL mechanism EXTERNAL.If you strace an instance of dbus-daemon, you can see that when you connect to it, it checks the credentials of the connecting user: $ strace dbus-daemon --session --nofork ... accept4(4, {sa_family=AF_LOCAL, NULL}, [2], SOCK_CLOEXEC) = 8 ... recvmsg(8, {msg_name(0)=NULL, msg_iov(1)=[{"\0", 1}], msg_controllen=0, msg_flags=0}, 0) = 1 getsockopt(8, SOL_SOCKET, SO_PEERCRED, {pid=6694, uid=1000, gid=1000}, [12]) = 0So to answer your questions:The D-Bus daemon is using your kernel-verified user ID to verify your identity. By using socat to proxy connections, you are letting anybody connect to the D-Bus daemon using your UID.If you try to connect directly to the socket from another UID, the daemon recognizes that the connecting UID is not a UID that is supposed to be allowed to connect. I believe the default is that only the daemon's own UID is allowed, but haven't formally verified that. You can allow other users, though: see the configuration files in /etc/dbus-1/, and also man dbus-daemon.This is the D-Bus server replacing old/expired cookies with new ones. According to the DBUS_COOKIE_SHA1 section of the D-Bus spec, a cookie is stored along with its creation time, and the server is supposed to delete cookies that it decides are too old. Apparently the lifetime "can be fairly short".
I'm trying to set up remote access to D-Bus, and I don't understand how authentication and authorization are (not) working. I have a D-Bus server listening on an abstract socket. $ echo $DBUS_SESSION_BUS_ADDRESS unix:abstract=/tmp/dbus-g5sxxvDlmz,guid=49bd93b893fe40d83604952155190c31I run dbus-monitor to watch what's going on. My test case is notify-send hello, which works when executed from the local machine. From another account on the same machine, I can't connect to that bus. otheraccount$ DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-g5sxxvDlmz,guid=49bd93b893fe40d83604952155190c31 dbus-monitor Failed to open connection to session bus: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken. otheraccount$ DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-g5sxxvDlmz,guid=49bd93b893fe40d83604952155190c31 notify-send helloAfter browsing the D-Bus specification, I copied ~/.dbus-keyrings/org_freedesktop_general to the other account, but it doesn't help. I tried forwarding the D-Bus socket over TCP, inspired by schedar's Access D-Bus remotely using socat. socat TCP-LISTEN:8004,reuseaddr,fork,range=127.0.0.1/32 ABSTRACT-CONNECT:/tmp/dbus-g5sxxvDlmzI can connect to the TCP socket from my account. DBUS_SESSION_BUS_ADDRESS=tcp:host=127.0.0.1,port=8004 notify-send helloBut not from the other account, neither with dbus-monitor nor with notify-send. Same error message for dbus-monitor as above with the abstract socket; notify-send now emits a trace: otheraccount$ DBUS_SESSION_BUS_ADDRESS=tcp:host=127.0.0.1,port=8004 notify-send hello** (notify-send:2952): WARNING **: The connection is closedStracing reveals that this version of notify-send doesn't try to read the cookie file, so I understand why it wouldn't be able to connect. I also tried SSHing into another machine and forwarding the TCP connection. ssh -R 8004:localhost:8004 remotehostSurprisingly, dbus-monitor works without a cookie file! I can watch the D-Bus traffic from the remote host. I see a notice about eavesdropping in my local dbus-monitor instance. remotehost$ DBUS_SESSION_BUS_ADDRESS=tcp:host=127.0.0.1,port=8004 dbus-monitor signal sender=org.freedesktop.DBus -> dest=:1.58 serial=2 path=/org/freedesktop/DBus; interface=org.freedesktop.DBus; member=NameAcquired string ":1.58" method call sender=:1.58 -> dest=org.freedesktop.DBus serial=2 path=/org/freedesktop/DBus; interface=org.freedesktop.DBus; member=AddMatch string "eavesdrop=true"If I run notify-send on the local machine, dbus-monitor on the remote host sees the notification. It's definitely reached a level of access that should require authentication. notify-send complained about not finding a cookie. After copying the cookie file, notify-send works from the remote machine. The local machine runs Debian wheezy. The remote machine runs FreeBSD 10.1. I don't understand how D-Bus authentication and authorization work.Why can I eavesdrop, as far as I can tell, with no credentials from the remote machine? What am I exposing when I forward D-Bus to a TCP connection? Why are authorizations for dbus-monitor and notify-send different? Why can I not eavesdrop from another account on the same machine, whether over the abstract socket or over the TCP connection? I noticed that the cookie file changes every few minutes (I haven't figured out if it's at regular intervals or not). 
Why? (I know I can launch a D-Bus daemon that listens on TCP. That's not the purpose of my question; I want to understand why what I did did and did not work.)
D-Bus authentication and authorization
So, it turns out the answer was actually way, way simpler than I thought it would be. I do however have to thank @jeff schaller for his comments; if it hadn't been for him I wouldn't have started looking into how the SSH 'Match' configuration works.

Anyway, the trick is to set your /etc/ssh/sshd_config file up so that the default is the configuration you would like to have for the access coming in from the external internet connection. In my case, this meant setting the following:

PermitRootLogin no
PasswordAuthentication no
UsePAM no

By doing this, I'm forcing ALL logins, no matter where they come from, to be key-based logins using an SSH key. I then, on the Windows machines, used 'PuttyGen' to generate a public/private key pair, which I saved to disk, along with an appropriate entry for the "authorized_keys" file in the external user's home directory. I pasted this SSH key into the correct place in my user's home folder, then set Putty up to use the private (ppk) file generated by PuttyGen for login, saved the profile, and sent that and the ppk key file to the external user using a secure method (encrypted email with a password-protected zip file attached).

Once the user had the ppk and profile in their copy of Putty and could log in, I then added the following as the last 2 lines of my sshd_config file:

Match Host server1,server1.internalnet.local,1.2.3.4
    PasswordAuthentication yes

In the "Match" line I've changed the server names to protect the names of my own servers. Note each server domain is separated by a comma and NO SPACES; this is important. If you put any spaces in, it causes SSHD to not load the config and report an error. The 3 matches I have in there do the following:

server1 - matches on anyone using just 'server1' with no domain to connect, EG: 'fred@server1'
server1.internalnet.local - matches on anyone using the fully qualified internal domain name, EG: 'fred@server1.internalnet.local' (NOTE: you will need an internal DNS to make this work correctly)
1.2.3.4 - matches on the specific IP address assigned to the SSH server, EG: 'fred@1.2.3.4'; this can use wild cards, or even better net/mask CIDR format, EG: 1.2.* or 192.168.1.0/8. If you do use wild cards, however, please read fchurca's answer below for some important notes.

If any of the patterns provided match the host being accessed, then the one and only change made to the running config is to turn back on the ability to have an interactive password login. You can also put other config directives in here too, and those directives will also be turned back on for internal hosts listed in the match list. Do however read this: https://man.openbsd.org/OpenBSD-current/man5/ssh_config.5 carefully, as not every configuration option is allowed to be used inside a match block. I found this out when I tried "UsePAM yes" to turn PAM authentication back on, only to be told squarely that wasn't allowed.

Once you've made your changes, type

sshd -T

followed by return to test them before attempting to restart the server; it'll report any errors you have.

In addition to everything above, I got a lot of help from the following two links too:
https://raymii.org/s/tutorials/Limit_access_to_openssh_features_with_the_Match_keyword.html
https://www.cyberciti.biz/faq/match-address-sshd_config-allow-root-loginfrom-one_ip_address-on-linux-unix/
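Once the Match block is in place, sshd's test mode is also handy for checking what a simulated connection would get, not just the syntax; the user name and addresses below are placeholders:

sudo sshd -t                # plain syntax check of /etc/ssh/sshd_config
# print the effective settings for a simulated connection and see whether
# PasswordAuthentication ends up as yes or no for it:
sudo sshd -T -C user=fred,host=server1.internalnet.local,addr=192.168.1.50 | grep -i passwordauthentication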
I'm currently trying to set up an SSH server so that access to it from outside the network is ONLY allowed using an SSH key and does not allow access to root or by any other username/password combination. At the same time, internal users inside the network still need to be able to connect to the same system, but expect to log in in the more traditional sense with a user name and password. Users both external and internal will be accessing the system from Windows using PuTTY, and the external access will be coming into the system via a port-forwarding firewall that will open the source port to the outside world on some arbitrarily chosen high-numbered port like 55000 (or whatever the admins decide). The following diagram attempts to show the traffic flows better. I know how to set up the actual login to only use keys, and I know how to deny root; what I don't know is how to separate the two login types. I had considered running two copies of SSHD listening on different ports on the same IP and having two different configurations for each port. I also considered setting up a "match" rule, but I'm not sure if I can segregate server-wide configurations using those options. Finally, the external person logging in will always be the same user; let's call them "Frank" for the purposes of this question. So "Frank" will only ever be allowed to log in from the external IP, and never actually be sat in front of any system connecting internally, whereas every other user of the system will only ever connect internally, and never connect from an external IP. Frank's IP that he connects from is dynamically assigned, but the public IP he is connecting to is static and will never change; the internal IP of the port forwarder likewise will never change, and neither will the internal IP address of the SSH server. Internal clients will always connect from an IP in the private network range that the internal SSH server's IP is part of, which has a 16-bit mask, EG: 192.168.0.0/16. Is this setup possible, using one config file and one SSH server instance? If so, how do I do it? Or am I much better off using 2 running servers with different configs? For ref, the SSH server is running on Ubuntu 18.04.
Is it possible to have 2 ports open on SSH with 2 different authentication schemes?
If you're absolutely sure that you've restarted the SSH server or told it to reload its configuration file by sending it a SIGHUP… Maybe the AllowUsers is in a Match section? If there is a previous Match directive, it might cause jonathan, arwen and the others to be only allowed in certain circumstances, as in…

Match localhost
    PasswordAuthentication yes

# Whitelist users who may ssh in
AllowGroups admin
AllowUsers jonathan daniel rafael simon thomas li arwen

(The comment is misleading: these AllowGroups and AllowUsers directives only apply when logging in to localhost. The Match localhost directive should be moved below these.)
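In other words, a corrected layout keeps the global whitelist above any Match block - a sketch, where the Match criterion is written as an Address rule since that is one plausible way the original 'localhost' match was expressed:

# global section - applies to every connection
AllowGroups admin
AllowUsers jonathan daniel rafael simon thomas li arwen

# conditional section - only applies to connections from the local host itself
Match Address 127.0.0.1
    PasswordAuthentication yes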
I've inherited the administration of a linux box in my workplace; it was set up by a colleague who is now gone. Recently, I added a new user to the system, and tried to give her ssh access as well; the way most people who use the machine access it. This, I can't get to work. Here's what happens: scmb-bkobe03m:~ xzhang$ ssh -v -X -p 22 arwen@myServer OpenSSH_5.2p1, OpenSSL 0.9.8r 8 Feb 2011 debug1: Reading configuration data /etc/ssh_config debug1: Connecting to myServer [152.98.xx.xx] port 22. debug1: Connection established. debug1: identity file /Users/xzhang/.ssh/identity type -1 debug1: identity file /Users/xzhang/.ssh/id_rsa type 1 debug1: identity file /Users/xzhang/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.5p1 Debian-6+squeeze2 debug1: match: OpenSSH_5.5p1 Debian-6+squeeze2 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.2 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host '[myServer]:22' is known and matches the RSA host key. debug1: Found key in /Users/xzhang/.ssh/known_hosts:1 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Trying private key: /Users/xzhang/.ssh/identity debug1: Offering public key: /Users/xzhang/.ssh/id_rsa debug1: Authentications that can continue: publickey debug1: Trying private key: /Users/xzhang/.ssh/id_dsa debug1: No more authentication methods to try.Now, I have of course added her public ssh-key to authorized_keys file. So I had a look in /var/log/auth.log and found Jan 7 11:37:12 sauron sshd[5002]: User arwen from myClientMachine not allowed because not listed in AllowUsersWhich is funny since I did add her to AllowUsers: daniel@sauron:~$ sudo more /etc/ssh/sshd_config | grep AllowUsers AllowUsers jonathan daniel rafael simon thomas li arwenI don't know where to go from here. Any takers?
User denied ssh access while in AllowUsers list
I was having exactly the same problem, with both client and server running Arch Linux. The solution was to use the hostname in /etc/exports instead of the IP address. I changed this:

/srv/nfs 192.168.10(rw,fsid=root,no_subtree_check)
/srv/nfs/media 192.168.10(rw,no_subtree_check)
/srv/nfs/share 192.168.10(rw,no_subtree_check)

To this:

/srv/nfs iguana(rw,fsid=root,no_subtree_check)
/srv/nfs/media iguana(rw,no_subtree_check)
/srv/nfs/share iguana(rw,no_subtree_check)

This resulted in a slightly different problem:

[root@iguana data]# mount -t nfs4 frog:/srv/nfs/media /data/media
mount.nfs4: Protocol not supported

I don't have a lot of experience with NFSv4; apparently you are not supposed to include the NFS root path in the mount command. This finally worked and mounted the volume:

[root@iguana data]# mount -t nfs4 frog:/media /data/media
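After editing /etc/exports it is worth re-exporting and checking what the server actually offers before retrying the mount - a short sketch, run on the NFS server ("frog" above):

exportfs -ra             # re-read /etc/exports and apply it
exportfs -v              # list the active exports with their options
showmount -e localhost   # what clients will be shown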
My error is: mount.nfs4: access denied by server while mounting fileserver:/export/path/oneMy question is:where would the detailed log information be on the server (under systemd)?More information: I asked a similar question from the Ubuntu client perspective on AskUbuntu. My focus in this question is on the Arch Linux server. In particular, I am looking for logs on the server that will help me understand the problem. Here's the background: Our small LAN is running an Arch Linux NFS v4 file server. We have several clients running Ubuntu 15.10 and 16.04. We have one client running Ubuntu 14.04. The 14.04 client will not connect to the file server. The others all connect fine. The settings are the same on all clients. And all clients are listed in /etc/exports on the server. I need to find more detailed error information on the Arch linux server. However, journalctl does not show anything related to nfs and it does not contain any entries that are related to the nfs access denied errors. The 14.04 client can ping the fileserver as well as log in via SSH. The user name / ID as well as group match. (I'm using the same user account / uid on both client and server. It is uid 1000.) Even more info: $ sudo mount -a (on client) mount.nfs4: access denied by server while mounting fileserver:/export/path/one mount.nfs4: access denied by server while mounting fileserver:/export/path/twoThe client can ping the fileserver (and vice versa): $ ping fileserver PING fileserver (192.168.1.1) 56(84) bytes of data. 64 bytes from fileserver (192.168.1.1): icmp_seq=1 ttl=64 time=0.310 msThe client successfully logs into the LAN-based fileserver: $ ssh fileserver Last login: Tue Aug 16 14:38:26 2016 from 192.168.1.2 [me@fileserver ~]$ The fileserver's mount export and rpcinfo are exposed to the client: $ showmount -e fileserver # on client Export list for fileserver: /export/path/one/ 192.168.1.2 /export/path/two/ 192.168.1.2,192.168.1.3$ rpcinfo -p fileserver (on client) program vers proto port service 100000 4 tcp 111 portmapper 100000 3 tcp 111 portmapper 100000 2 tcp 111 portmapper 100000 4 udp 111 portmapper 100000 3 udp 111 portmapper 100000 2 udp 111 portmapper 100024 1 udp 58344 status 100024 1 tcp 58561 status 100005 1 udp 20048 mountd 100005 1 tcp 20048 mountd 100005 2 udp 20048 mountd 100005 2 tcp 20048 mountd 100005 3 udp 20048 mountd 100005 3 tcp 20048 mountd 100003 4 tcp 2049 nfs 100003 4 udp 2049 nfsThis is the error when mounting the export directly: $ sudo mount -vvv -t nfs4 fileserver:/export/path/one /path/one/ mount: fstab path: "/etc/fstab" mount: mtab path: "/etc/mtab" mount: lock path: "/etc/mtab~" mount: temp path: "/etc/mtab.tmp" mount: UID: 0 mount: eUID: 0 mount: spec: "fileserver:/export/path/one" mount: node: "/path/one/" mount: types: "nfs4" mount: opts: "(null)" mount: external mount: argv[0] = "/sbin/mount.nfs4" mount: external mount: argv[1] = "fileserver:/export/path/one" mount: external mount: argv[2] = "/path/one/" mount: external mount: argv[3] = "-v" mount: external mount: argv[4] = "-o" mount: external mount: argv[5] = "rw" mount.nfs4: timeout set for Tue Aug 16 16:10:43 2016 mount.nfs4: trying text-based options 'addr=192.168.1.1,clientaddr=192.168.1.2' mount.nfs4: mount(2): Permission denied mount.nfs4: access denied by server while mounting fileserver:/export/path/one
Where are NFS v4 logs under systemd?
You can use pam_exec to invoke an external command. Beware that pam_exec runs in an environment that is under the control of the user who calls the login service, so don't invoke it from su, only from services with a predictable environment such as sshd or login. sudo has no option to update a user's time stamp, only to remove it. So you'll have to update the time stamp manually. If you aren't using the tty_tickets option (which is not very useful), all you need to do is update the timestamp on the directory. session optional pam_exec.so seteuid /usr/local/sbin/update_sudo_ticketwhere /usr/local/sbin/update_sudo_ticket is something like #!/bin/sh touch -c "/var/lib/sudo/$PAM_USER" 2>/dev/null
What I would like to do is be able to login and sudo commands immediately without entering a password again. It is very redundant to type my password twice in a row when I need to login and run a privileged command. I understand the security risk which requires us to reenter a password when we've been away for awhile but it seems like login should automatically set this session by default to prevent this but it doesn't for some reason. I am aware of these solutions but they both rely on gdm and I appear to only have LightDM installed for starters. Furthermore, I don't login to a GUI interface and AFAIK the console doesn't use either of these to manage logins in the first place. I'm using Ubuntu 13.04 in VMware if that matters. I do have a KDE installed but I don't usually load it unless I have a reason to. The ideal solution would also work from SSH logins. Update: Based on Gilles' suggestions I now have this working setup: ~ tail -n1 /etc/pam.d/sshd session optional pam_exec.so seteuid /usr/local/sbin/update_sudo_ticketand (be sure to use sudo visudo to edit sudoers) ~ sudo head -n12 /etc/sudoers|tail -n1 Defaults !tty_ticketsand create a new script ~ cat /usr/local/sbin/update_sudo_ticket #!/bin/sh touch -c "/var/lib/sudo/$PAM_USER" 2>/dev/nulland make it executable ( sudo chmod u=+rwx,g=+rx-w,o=-rwx /usr/local/sbin/update_sudo_ticket ): ~ ls -la /usr/local/sbin/update_sudo_ticket -rwxr-x--- 1 root root 57 Sep 21 20:52 /usr/local/sbin/update_sudo_ticket
How to automatically enter sudo's grace period upon CLI login? [duplicate]
From section DECLARING ACTIONS of polkit - Authorization Framework:defaults This element is used to specify implicit authorizations for clients. Elements that can be used inside defaults includes: allow_any Implicit authorizations that apply to any client. Optional. allow_inactive Implicit authorizations that apply to clients in inactive sessions on local consoles. Optional. allow_active Implicit authorizations that apply to clients in active sessions on local consoles. Optional. Each of the allow_any, allow_inactive and allow_active elements can contain the following values: no Not authorized. yes Authorized. auth_self Authentication by the owner of the session that the client originates from is required. auth_admin Authentication by an administrative user is required. auth_self_keep Like auth_self but the authorization is kept for a brief period. auth_admin_keep Like auth_admin but the authorization is kept for a brief period.I hope this makes it clear for you.
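To sketch how this applies to your hibernate-from-cron case: on Ubuntu 16.04 you would normally not edit the shipped .policy file (it can be overwritten on upgrades) but drop a local-authority override instead. The file name, the user name and the exact action list below are assumptions on my part, so adjust them to what your policy file actually declares:
# /etc/polkit-1/localauthority/50-local.d/allow-hibernate.pkla
[Allow my user to hibernate without authentication]
Identity=unix-user:youruser
Action=org.freedesktop.login1.hibernate;org.freedesktop.login1.hibernate-multiple-sessions
ResultAny=yes
ResultInactive=yes
ResultActive=yes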
I am using Ubuntu 16.04. There is a file located at /usr/share/polkit-1/actions/org.freedesktop.login1.policy which seems to control the permissions regarding shutdown/suspend/hibernate options. In this file, the revelant options are in this format: <defaults> <allow_any>no</allow_any> <allow_inactive>auth_admin_keep</allow_inactive> <allow_active>yes</allow_active> </defaults>corresponding to every action (shutdown, suspend etc.). Here is the full version of that file. I want to know the meaning of allow_any, allow_inactive and allow_active options. What do they mean exactly ? The reason for my curiosity is that I want to hibernate non-interactively without root (from cron), but am getting authorization errors. And it seems that those errors can be solved by modifying this file.
Explanation of file - org.freedesktop.login1.policy
With the visudo command you edited the file /etc/sudoers, and the rule you added there only applies when you prefix the command with sudo; in your case, sudo service nginx start.
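So from pepito (and from whatever Capistrano runs), the invocation needs to be, for example:
sudo service nginx restart
With your NOPASSWD rule in place sudo should not ask for a password, and the polkit prompt disappears because the command is then executed as root.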
I'm pretty new to the deployment world but this is what's going on. I have a new Ubuntu (Ubuntu 16.04.4 LTS) droplet from DigitalOcean. I installed and configured nginx and everything is working smooth. I turn it on and off with: service nginx start/service nginx stop but I need to be able to do this with a different user called pepito. When I try to run service nginx start with pepito I get: ~# service nginx restart ==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units === Authentication is required to restart 'nginx.service'. Authenticating as: pepito Password: But I'm going to be running this from Capistrano so I don't want to be asked to enter the password, so I added this to visudo like this: pepito ALL=(ALL) NOPASSWD: /usr/sbin/service nginx*Tried again and same problem. Keep googling and reading and find out that ==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units === is a message from Polkit so I read a little about it and created the following file in: /etc/polkit-1/localauthority/50-local.d/nginx.pkla Identity=unix-user:pepito Action=org.freedesktop.systemd1.manage-units ResultInactive=yes ResultActive=yesOf course it doesn't work when I try to start and stop nginx from pepito. I don't know what else to try!
Trying to run "service nginx restart" from a non root user
Have you tried mod-auth external? It allows you to plug your own custom authentication mechanism into Apache. It gives you access to environment variables such as IP, USER, PASS, etc. You can write a script in a language that you are familiar with and go fetch the authentication data from your database. The wiki has some examples. If you build a custom authentication script, make sure it's well coded (security-wise). The module is available on CentOS (mod_authnz_external) and on Ubuntu (libapache2-mod-authnz-external). Here's a basic Apache configuration example: LoadModule authnz_external_module modules.d/mod_authnz_external.so DefineExternalAuth my_auth environment /tmp/auth.sh<Location /> AuthType Basic AuthName "My super special access" AuthBasicProvider external Require valid-user AuthExternal my_auth </Location>Here's a very simple script that logs the IP, the USER and the PASSWORD, and accepts the authentication only if the user provided is 'Tony'. In this particular example, the script is saved under /tmp/auth.sh with the executable bit set. You can do whatever you want (filter by IP, username, etc.). #!/bin/bashecho $(date) ${IP} >> /tmp/log.txt echo $(date) ${USER} >> /tmp/log.txt echo $(date) ${PASS} >> /tmp/log.txt#Very basic filtering. if [[ "${USER}" != "Tony" ]] then exit 1; fi
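Side note, and I may be misremembering the exact module name shipped by the Ubuntu package: on Ubuntu you typically also have to enable the module and reload Apache after installing it:
sudo a2enmod authnz_external
sudo service apache2 reload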
I have read the answer to this question: https://stackoverflow.com/questions/4102763/apache-basic-authentication-except-for-those-allowed It helped me understand how to not authenticate some users (according to the IP): <Directory /var/www/files/> Require valid-user Allow from 192.168.1.2 Satisfy Any AuthUserFile /etc/apache2/basic.pwd AuthName "Please enter username and password" AuthType Basic </Directory>Imagine I have this DB (Different from the DB used for authentication): User IP Mark 192.168.1.2 Mike 192.168.1.3 Karl 192.168.1.41- can I allow all the IP addresses stored in the DB using a configuration in Apache? I don't want a static solution (the DB is changing dynamically)? 2- another problem is the authorization of the allowed IP is lost, can Apache use this DB for authorization, if the user is allowed to get the pages without authentication?in details: We know when Apache authenticate users, it knows the user name from authentication credentials, but with the Allowing, the user name will be lost, I want Apache to extract the user name of the IP its allowing from the same table it extract the IP address? UPDATE: Note: I think Tony answer might be helpful but I want other answers too (which don't obligate me to build a module). My Goal of this question is "single sign on":I use freeradius to authenticate the internal (inside Network) users so I don't want Apache to re-authenticate them. I want Apache to Authenticate external users using LDAP. my solution is to use Allow Directive to let the internal users without Authentication but I need to Allow them using DB ( first Problem)? and trying to configure Apache to Authorize internal users ( which I didn't authenticate) (second problem)?Note: Authorize the external users is very easy using LDAP (because Apache Knows the name of the user it's dealing with from authentication credentials). Is my suggested solution eligible to do what I want to do, If not what do you suggest as a solution?
Apache Authorization for the Allowed Users?
When troubleshooting problems with daemons, you should always check the system logs. In this particular case, if you check your system logs on the NAS host, you'll see something similar to: Authentication refused: bad ownership or modes for directory /home/adminThe problem is shown in this output: admin@NAS:~$ ls -alh drwxrwxrwx 6 admin users 4.0K Jun 26 07:28 .For security, SSH will refuse to use the authorized_keys file if any ancestor of the ~/.ssh directory is writable by someone other than the user or root (ancestor meaning /home/user/.ssh, /home/user, /home, /). This is because another user could replace the ~/.ssh directory (or ~/.ssh/authorized_keys file) with their own, and then ssh into your user. To fix, change the permissions on the directory with something like: chmod 755 ~
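While you are at it, it does not hurt to tighten the whole chain to the permissions sshd conventionally expects:
chmod 755 ~                        # or 700; the point is: not group/other-writable
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys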
I've been struggling for days with this now and can't find what I'm doing wrong. I've got a website on a VPS server. Every night I make a backup of the database. It gets stored on my VPS server. I also want to send a copy to my NAS (Synology DS214play) at home. Both servers operate on Linux. So I've logged into my VPS server (as root) and generated a ssh-keygen. On my VPS it looks like this: [root@vps /]# cd ~ [root@vps ~]# ls -alh dr-xr-x---. 7 root root 4.0K Jun 25 18:58 . dr-xr-xr-x. 24 root root 4.0K Jun 25 19:33 .. drwx------ 3 root root 4.0K Jun 25 20:29 .ssh [root@vps ~]# cd .ssh [root@vps .ssh]# ls -alh drwx------ 3 root root 4.0K Jun 25 20:29 . dr-xr-x---. 7 root root 4.0K Jun 25 18:58 .. -rw------- 1 root root 1.7K Jun 26 07:27 id_rsa -rw-r--r-- 1 root root 403 Jun 26 07:27 id_rsa.pub -rw------- 1 root root 394 Jun 25 20:29 known_hostsThen I copied the file to the NAS by using ssh-copy-id admin@NAS:/$ cd ~ admin@NAS:~$ ls -alh drwxrwxrwx 6 admin users 4.0K Jun 26 07:28 . drwxrwxrwx 13 root root 4.0K Jun 21 20:57 .. drwx------ 2 admin users 4.0K Jun 26 07:28 .ssh admin@NAS:~$ cd .ssh admin@NAS:~/.ssh$ ls -alh drwx------ 2 admin users 4.0K Jun 26 07:28 . drwxrwxrwx 6 admin users 4.0K Jun 26 07:28 .. -rw------- 1 admin users 403 Jun 26 07:27 authorized_keysWhen looking into VPS/id_rsa.pub and NAS/authorized_keys I see that both keys are identical. Now I'm trying to copy a test file from the VPS to the NAS by using: [root@vps /]# scp -i ~/.ssh/id_rsa /test.txt admin@___.___.___.___:/volume1/SQL_backupThat however results in shell asking me for the password (every time). How come that I have to keep giving my pass?
SCP command keeps asking password
If running the script is the only thing you want those other users to be able to do, then I'd go with using ssh keys. Each user should have their own ssh key, so you won't get into a hassle when somebody no longer needs access. The public part of the key should be put into ~scriptuser/.ssh/authorized_keysand in front of the actual key, you should add the text command="/path/to/script" Here's an example: from="10.23.5.32",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,command="/path/to/script" ssh-dss A........This limits the ip-address that this key can be used from, and it limits what kind of forwarding can be done, and makes sure that no pty can ever be granted when using this key, and whenever the user connects with this key then the script will be run and nothing else can happen. To add an environment variable, you just add it too to the key: from="10.23.5.32",environment="MYVARIABLE=whatever",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,command="/path/to/script" ssh-dss A........However, in order for that to work, you have to have the PermitUserEnvironment directive set to "yes" in the sshd config file. If you can't make that happen, you can instead change the line to this: from="10.23.5.32",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,command="export MYVARIABLE=whatever; /path/to/script" ssh-dss A........
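For completeness, here is roughly how each user would generate and use their own key (file and host names are just placeholders):
ssh-keygen -t ed25519 -f ~/.ssh/scriptuser_key   # run by each user on their own machine (or -t rsa on older systems)
ssh -i ~/.ssh/scriptuser_key scriptuser@server   # runs /path/to/script and nothing else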
Possible Duplicate: Creating a UNIX account which only executes one command There is a shell script which has to be executed through an existing user account XXX. Now I have various other users which shall be able to execute only this script as well without getting access to the user account XXX. Is there a way to create a ssh command (maybe through a key or anything else), which only allows to execute this specific shell script of the user XXX without knowing the password of XXX?
Allowing ssh, but only to execute a specific script [duplicate]
You can use the command keyword in authorized_keys to restrict execution to one single command for a particular key, like this: command="/usr/local/bin/mysync" ...sync public key... Update: If you specify a simple script as the command, you can verify the command the user originally supplied: #!/bin/sh case "$SSH_ORIGINAL_COMMAND" in /path/to/unison *) $SSH_ORIGINAL_COMMAND ;; *) echo "Rejected" ;; esac
I'd like to use a passwordless key to perform e.g. unison synchronization while being able to SSH into the server only with a password-protected key. The usual way of using scponly is changing the login shell of my server account, but that is too global. Can an entry in authorized_keys achieve this instead?
How to associate only one public key with a restricted shell like scponly?
The root user can be constrained in its set of capabilities. From capabilities(7): If the effective user ID is changed from nonzero to 0, then the permitted set is copied to the effective set. This implies that in the capability model, becoming the root user does not grant all permissions, unlike in the traditional model, where it does. The capability model is used in Linux 2.2 and later. The bounding set of capabilities for a process is inherited from its parent. When Docker drops capabilities from the bounding set for the thread starting the container, those capabilities are dropped for the container, affecting every process of that container, whether running as the root user or otherwise. The capabilities that are left are inherited by the root user inside the container when it gains the user ID 0 (in the given namespace created by clone(2)). The scope of these capabilities is limited by the parameters passed to clone(2), which create new namespaces for various subsystems; cgroups; and any additional security subsystems, such as AppArmor or SELinux.
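If you want to see this for yourself, a quick way to inspect the capability sets of a shell (inside or outside a container) is:
grep Cap /proc/self/status         # shows CapInh, CapPrm, CapEff, CapBnd as hex masks
capsh --decode=00000000a80425fb    # decode a mask into capability names; the value here is only an example
The decoded list for a default Docker container's root is noticeably shorter than the full set you get in a root shell on the host.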
Does the root user bypass capability checking in the kernel, or is the root user subject to capability checking starting with Linux 2.2? May applications check for and deny access for the root user, if certain capabilities are dropped from its capability set? By default the root user has a full set of capabilities. The reason I'm asking is the following excerpt from man capabilities: Privileged processes bypass all kernel permission checks. However, nothing is said about whether this rule still holds after the Linux 2.2 release. Extra: Docker removes certain capabilities from the root user while starting a new container. However, Docker doesn't use user namespaces by default, so how are the root user's capabilities restored? man capabilities: For the purpose of performing permission checks, traditional UNIX implementations distinguish two categories of processes: privileged processes (whose effective user ID is 0, referred to as superuser or root), and unprivileged processes (whose effective UID is nonzero). Privileged processes bypass all kernel permission checks, while unprivileged processes are subject to full permission checking based on the process's credentials (usually: effective UID, effective GID, and supplementary group list). Starting with kernel 2.2, Linux divides the privileges traditionally associated with superuser into distinct units, known as capabilities, which can be independently enabled and disabled. Capabilities are a per-thread attribute.
Does the root user bypass capability checking?
As explained by the very good and comprehensive wikipedia page on the subject :+ (plus) suffix indicates an access control list that can grant additional permissions. Details are available with man getfacl.Furthermore, there are three permission triads :First triad : what the owner can do Second triad : what the group members can do Third triad : what other users can doAs for the characters of the triad :First characterr : readableSecond characterw : writableThird characterx: executable s or t: executable and setuid/setgid/sticky S or T: setuid/setgid or sticky, but not executableThe setuid/setgid basically means that, if you have the permission to run the program, you will run it as if you were the owning user and/or of the owning group of that program. This is helpful when you need to run a program which needs root access but also needs to work for non-root users (to change your password, for example). The sticky bit might have different meaning depending on the system or flavor you are running and how old it is, but on linux, the wiki page states that :[...] the Linux kernel ignores the sticky bit on files. [...] When the sticky bit is set on a directory, files in that directory may only be unlinked or renamed by root or the directory owner or the file owner.
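A couple of quick illustrations (user names, sizes and dates are made up):
$ ls -l /usr/bin/passwd
-rwsr-xr-x 1 root root 59976 Mar 22  2019 /usr/bin/passwd   # 's' in the owner triad: setuid root
$ setfacl -m u:alice:r report.txt      # add an ACL entry granting read to user alice
$ ls -l report.txt
-rw-r--r--+ 1 bob bob 1024 Jun  1 10:00 report.txt          # '+' signals that an ACL is present
$ getfacl report.txt                   # show the full list of extra permissions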
If the ls -l command gives me a permission string like rwsr-s--xWhat does the 's' mean? The only sources I found mention that it can be present sometimes but do not elaborate. What does a '+' instead of a '-' mean? I have found mentions of 'extended permission' but nothing clear.
'+' and 's' in permission strings
In addition to Gilles' answer to the Can I block non-interactive sudo invocation? aspect of your question, I would suggest a work-around for this particular element of your situation:I don't like typing the password for sudo so I disabled itandNow I have a bad side effect that any shell script I run can silently execute sudo on my behalf.If your situation is that you normally sudo a bunch of various commands, but then you find yourself executing scripts that silently execute sudo to where you aren't prompted for a password to realize that root-level things are happening, you could:Take the NOPASSWD option back out Extend the timestamp timeout beyond the stock 5 minutes to be a longer period of time Kill the sudo timestamp before running an untrusted scriptYou already know how to edit sudoers for the NOPASSWD option. To extend the timestamp_timeout in the sudoers file, set it to either a really high value or a negative value. Relevant snippets from the manual page for that parameter:timestamp_timeout Number of minutes that can elapse before sudo will ask for a passwd again. The default is 5. If set to a value less than 0 the user's time stamp will not expire until the system is rebooted.When you find yourself about to execute a script that you're not sure of, simply run sudo -k to "kill" the timestamp:-k When used without a command, invalidates the user's cached credentials. In other words, the next time sudo is run a password will be required.If you run a script and find yourself being prompted by sudo for your password, you'll know that sudo was invoked and would be able to interrupt the script if you wanted. As an aside here, I recommend setting the passprompt parameter to include the text sudo in it, such as: Defaults passprompt="[sudo] password for %u:"... so that it's obvious if/when sudo is prompting for your password (versus any other tool).
I don't like typing the password for sudo so I disabled it with %sudo ALL=(ALL) NOPASSWD: ALLNow I have a bad side effect that any shell script I run can silently execute sudo on my behalf. The most typical use case would be intending to install experimental stuff to ${HOME}/local and forgetting to configure the PREFIX. Question - is there an out-of-the-box way with sudo to block non-interactive invocation (from a script)? On Arch Linux, to be specific.
block scripted sudo
I was able to reach the desired page in the end. It turned out that I wasn't following the correct sequence of URL calls; once I did, the desired page was retrieved correctly. Thank you very much for the quick responses!
I'm trying to use cURL to automate some processes that we usually do using a website. I was able to login to the website using curl and the following command: curl -k -v -i --user "[user]:[password]" -D cookiejar.txt https://link/to/home/pageHowever, when I'm trying to use the generated cookiejar.txt file for subsequent calls, I'm not getting passed the authorization. The browser sends the following data to the server: GET /[my other page] HTTP/1.1 Host [my host] User-Agent Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Firefox/31.0 Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language en-US,en;q=0.5 Accept-Encoding gzip, deflate Cookie JSESSIONID=[my session id] Authorization Basic [my encrypted string] Connection keep-aliveSo, I changed my second cURL call to something like this, to be sure that all these parameters are sent as well: curl -i -X GET -k -v \ -b cookiejar.txt \ -H "Authorization: Basic [my encrypted string]" \ -H "Host: [my host]" \ -H "User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Firefox/31.0" \ -H "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" \ -H "Content-Type: application/x-www-form-urlencoded" \ -H "Accept-Language: en-US,en;q=0.5" \ -H "Connection: Keep-Alive" \ https://[my other page]Unfortunately this doesn't work. If I omit the Authorization header, I get a 401 error. If I include it in my cURL request, I get the Login page (with the 200 OK response). There's no error in the console to give me at least a hint about what the problem is. I appreciate any idea to help me get passed this issue.
curl authentication works but I cannot reach other pages
This is just a workaround to the problem. Any suggestions on how to tackle the actual problem of bluetooth-agent stalling are welcome. I used stdbuf to disable line buffering of STDOUT when running bluetooth-agent in the background. This updates the log file in real time, thereby letting me check and trigger the rest of the activities that need to be done. stdbuf -o 0 bluetooth-agent "$PIN" 1> ./bluelog &
I am trying to manually connect between my laptop and phone. I have bluez-utils version 4.98-2ubuntu7 installed. When I run the agent on the terminal, I get: asheesh@U32U:~$ sudo bluetooth-agent 4835 Pincode request for device /org/bluez/980/hci0/dev<id> Authorizing request for /org/bluez/980/hci0/dev<id>The pincode request line gets printed when I try to pair from my phone. After I enter the passkey on being prompted, the device gets authorized. I can now send files to the laptop from my phone. However, the application gets stuck after authorizing request and control is not passed back to the terminal. Why is this happening? How do I get back control? This seems to be contrary to examples I have seen across the internet, where the terminal becomes available after authorization to run further commands. I realise that running it in the background is a possible solution, but since I need to run certain other tasks once pairing is completed, I would prefer to have it run in the foreground. I tried using this: bluetooth-agent "$PIN" 1> ./bluelog #Background run tested alsoHowever, the process does not write its output to file till it completes (or is killed), so I cannot test the output in bluelog. Is there a way to force the process to write output before completion?
Why is bluetooth-agent getting stuck on authorizing?
I would recommend creating a private/public key pair on the client machine, and copying the public key to the remote machine. You can generate such a keypair with ssh-keygen and copy it to the remote machine using ssh-copy-id. The logs are probably readable by all user accounts on the server (at least they are on my machine). You should therefore not use the root account on the server for this, as root access to your client would mean root access to the remote machine.
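A rough sketch of the whole setup (key name, user names, paths and schedule are placeholders):
ssh-keygen -t ed25519 -f ~/.ssh/logpull -N ''                  # on the client; empty passphrase so cron can use the key
ssh-copy-id -i ~/.ssh/logpull.pub loguser@remote.example.com   # loguser being a non-root account that can read the logs
# then in crontab -e on the client, e.g. every night at 02:30:
30 2 * * * scp -i /home/you/.ssh/logpull loguser@remote.example.com:/var/log/apache2/access.log /home/you/apache-logs/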
I have a remote server that I need to download Apache logs from. I can manually scp into the server and get the files, but I'd like to put this in crontab. The only way to automate it is to include the password of the target server which I'd rather not do. What would you recommend to scp into the other server, get files and download them to another machine?
Shell Script - how to scp into remote server and download files and protect password
sshd by default already checks ~/.ssh/authorized_keys and ~/.ssh/authorized_keys2. This is configurable with the AuthorizedKeysFile option in /etc/ssh/sshd_config, which can take a list of multiple files to check. From sshd_config(5):AuthorizedKeysFile Specifies the file that contains the public keys used for user authentication. The format is described in the AUTHORIZED_KEYS FILE FORMAT section of sshd(8). Arguments to AuthorizedKeysFile accept the tokens described in the TOKENS section. After expansion, AuthorizedKeysFile is taken to be an absolute path or one relative to the user's home directory. Multiple files may be listed, separated by whitespace. Alternately this option may be set to none to skip checking for user keys in files. The default is ".ssh/authorized_keys .ssh/authorized_keys2".
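So for your case, a sketch (the second file name is arbitrary): let the automation keep overwriting .ssh/authorized_keys, and keep your emergency key in a file it never touches:
# /etc/ssh/sshd_config
AuthorizedKeysFile .ssh/authorized_keys .ssh/authorized_keys_fallback
Then reload sshd (systemctl reload sshd or service ssh reload, depending on the system) and put the static fallback public key into ~/.ssh/authorized_keys_fallback.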
I'm working on a piece of automation that generates a list of allowed public keys and overwrites a server user's ~/.ssh/authorized_keys. Is there a way to prevent a mistake in the automation from completely blocking me from accessing the host? I have some limitations: the server itself is from a VM image that gets updates over time, so creating additional users is not something I would like to pursue. So far I've thought of: Would it be possible to have composition of authorized_keys? If there were 2 files, I could have a dynamic file and one file with a static fallback key. I will still do some testing before the overwrite (like checking the contents and format of the keys) to ensure I'm not copying an empty file. But still, something could go wrong. Is composition a possibility? If not, do you folks have other ideas? Thank you in advance.
Fallback for authorized_keys
The default Apache settings for /var/www meet your requirements already. You can restrict access to /var/www/private using Require group team as you suggested, by adding the missing configuration as follows. Require directives default to RequireAny so it can usually be omitted unless you need to change it as shown in the configuration below. Create a .groups file like this in a suitable location for your system: # group: memberOne memberTwo memberThree etc team: richard david jane billThen generate a .password file of users and hashed passwords: $ htpasswd -c /path/to/file/.passwords richardRun the same command for each group member who needs access, but omit the -c (create) flag or you'll overwrite the password file with a new blank one. Configure your Apache directives as follows, setting the correct path to the .passwords and .groups files you created above. <Location /private> Options Indexes AuthType basic AuthName "login info required" AuthUserFile path/to/file/.passwords AuthGroupFile path/to/file/.groups <RequireAll> Require all granted Require group team </RequireAll> </Location>Restart Apache and you're done.
I've a very easy question. However I have been digging all the manuals for an answer for a full day already. What I want is to configure Apache to give anyone read access to /var/www but restrict /var/www/private to my team only. I'm looking for the new solution of version 2.4. Thus not using deprecated directives like Allow, Deny, Order and Satisfy. I have write permission for the /etc/apache2/sites-available/* files but only read permission for /etc/apache2/apache2.conf. What I've tried so far is this: Content of /etc/apache2/apache2.conf: <Directory /> Require all denied </Directory> <Directory /var/www> Require all granted </Directory>Content of /etc/apache2/sites-available/000-default.conf: <Directory /var/www/private> Require group team </Directory>But with this configuration everyone has access to /var/www/private. And this I can understand, since Apache merges all the environments for /var/www/private to something like this: Require all denied # inherited from / Require all granted # inherited from /var/www Require group team # inherited from /var/www/privateAnd since Require directives outside <RequireAll>, <RequireAny> or <RequireNone> are equivalent to being in a <RequireAny> block, the merged view is thus: <RequireAny> Require all denied # inherited from / Require all granted # inherited from /var/www Require group team # inherited from /var/www/private </RequireAny>And this shows clearly why /var/www/private is open for everyone (the second statement always matches). My question is thus: "Can you somehow overide the Require all granted in a parent directory in a subdirectory or can you change the default <RequireAny> behaviour to <RequireAll>?"
Restrict access to subdirectory in Apache
RHEL doesn't yet have systemd, so the approach for Fedora 19 and RHEL will be dramatically different. At any rate, what you are trying to do is not sanely possible. You'd have to create a separate login role for each user and grant it ability to execute systemd without transitioning into systemd domain -- at which point you'd have to pretty much clone the entire systemd policy into each user's domain and then write another policy for executing each service. Per user. Unless you already have a really awesome understanding of SELinux and are already really excellent at writing SELinux policies (and really love M4), I strongly suggest not going down this route. Just add sudo rules per user to allow executing things like "/sbin/service foo restart" or "/bin/systemctl restart foo.service". If you want to add SELinux into the fray, make these users staff_u and the rest user_u.
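A sketch of what such sudo rules could look like (the group name and unit name are invented for the example; add the rule with visudo):
%svcadmins ALL=(root) NOPASSWD: /usr/bin/systemctl start httpd.service, /usr/bin/systemctl stop httpd.service, /usr/bin/systemctl restart httpd.service
Members of that group can then run sudo systemctl restart httpd.service, and nothing else through sudo.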
How can I give certain users authorization to start and stop certain services? I am asking specifically about Fedora 19 systems with SELinux installed. The utility to call for services administration is in this case systemctl. What type of SELinux security policy do I have to write? Where? How? Any references?
SELinux policy to authorize some users to start / stop certain services
It was my .xinitrc: it was missing the line exec startkde, so no window manager could be started.
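For reference, a minimal ~/.xinitrc for this case can be as small as:
exec startkde   # start the KDE session as the last command of .xinitrc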
I've upgraded my virtual machine from openSUSE 12.2 M3 to 12.2 Beta 1 and it hangs at startup after mysqld. However, I can manually access a terminal and start kdm as root, but kdm doesn't recognize my user. I have double-checked the password and it's not a typo, and I don't have special characters in it either, unlike the case described here: http://www.kubuntuforums.net/showthread.php?58623-Can-t-login-via-KDM-after-upgrading-to-Precise/page1.
kdm cannot authorize my user?
The simple stuff PATH=$PATH:~/opt/binor PATH=~/opt/bin:$PATHdepending on whether you want to add ~/opt/bin at the end (to be searched after all other directories, in case there is a program by the same name in multiple directories) or at the beginning (to be searched before all other directories). You can add multiple entries at the same time. PATH=$PATH:~/opt/bin:~/opt/node/bin or variations on the ordering work just fine. Don't put export at the beginning of the line as it has additional complications (see below under “Notes on shells other than bash”). If your PATH gets built by many different components, you might end up with duplicate entries. See How to add home directory path to be discovered by Unix which command? and Remove duplicate $PATH entries with awk command to avoid adding duplicates or remove them. Some distributions automatically put ~/bin in your PATH if it exists, by the way. Where to put it Put the line to modify PATH in ~/.profile, or in ~/.bash_profile or if that's what you have. (If your login shell is zsh and not bash, put it in ~/.zprofile instead.) The profile file is read by login shells, so it will only take effect the next time you log in. (Some systems configure terminals to read a login shell; in that case you can start a new terminal window, but the setting will take effect only for programs started via a terminal, and how to set PATH for all programs depends on the system.) Note that ~/.bash_rc is not read by any program, and ~/.bashrc is the configuration file of interactive instances of bash. You should not define environment variables in ~/.bashrc. The right place to define environment variables such as PATH is ~/.profile (or ~/.bash_profile if you don't care about shells other than bash). See What's the difference between them and which one should I use? Don't put it in /etc/environment or ~/.pam_environment: these are not shell files, you can't use substitutions like $PATH in there. In these files, you can only override a variable, not add to it. Potential complications in some system scripts You don't need export if the variable is already in the environment: any change of the value of the variable is reflected in the environment.¹ PATH is pretty much always in the environment; all unix systems set it very early on (usually in the very first process, in fact). At login time, you can rely on PATH being already in the environment, and already containing some system directories. If you're writing a script that may be executed early while setting up some kind of virtual environment, you may need to ensure that PATH is non-empty and exported: if PATH is still unset, then something like PATH=$PATH:/some/directory would set PATH to :/some/directory, and the empty component at the beginning means the current directory (like .:/some/directory). if [ -z "${PATH-}" ]; then export PATH=/usr/local/bin:/usr/bin:/bin; fiNotes on shells other than bash In bash, ksh and zsh, export is special syntax, and both PATH=~/opt/bin:$PATH and export PATH=~/opt/bin:$PATH do the right thing even. In other Bourne/POSIX-style shells such as dash (which is /bin/sh on many systems), export is parsed as an ordinary command, which implies two differences:~ is only parsed at the beginning of a word, except in assignments (see How to add home directory path to be discovered by Unix which command? 
for details); $PATH outside double quotes breaks if PATH contains whitespace or \[*?.So in shells like dash, export PATH=~/opt/bin:$PATH sets PATH to the literal string ~/opt/bin/: followed by the value of PATH up to the first space. PATH=~/opt/bin:$PATH (a bare assignment) doesn't require quotes and does the right thing. If you want to use export in a portable script, you need to write export PATH="$HOME/opt/bin:$PATH", or PATH=~/opt/bin:$PATH; export PATH (or PATH=$HOME/opt/bin:$PATH; export PATH for portability to even the Bourne shell that didn't accept export var=value and didn't do tilde expansion). ¹ This wasn't true in Bourne shells (as in the actual Bourne shell, not modern POSIX-style shells), but you're highly unlikely to encounter such old shells these days.
I'm wondering where a new path has to be added to the PATH environment variable. I know this can be accomplished by editing .bashrc (for example), but it's not clear how to do this. This way: export PATH=~/opt/bin:$PATHor this? export PATH=$PATH:~/opt/bin
How to correctly add a path to PATH?
If I understand the question correctly, you should be able to cycle through alternatives by repeatedly hitting Ctrl + R. E.g.: Ctrl + R grep Ctrl + R Ctrl + R ... That searches backwards through your history. To search forward instead, use Ctrl + S, but you may first need to run stty -ixon (either via .bash_profile or manually) to disable the XON/XOFF flow-control feature, which otherwise takes over Ctrl + S. If the terminal freezes anyway, use Ctrl + Q to re-enable screen output. (More details here.)
In the terminal, I can type Ctrl + R to search for a matching command previously typed in BASH. E.g., if I type Ctrl + R then grep, it lists my last grep command, and I can hit enter to use it. This only gives one suggestion though. Is there any way to cycle through other previously typed matching commands?
How to cycle through reverse-i-search in BASH?
Just use time when you call the script: time yourscript.sh
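Illustrative output (the numbers are invented):
$ time ./yourscript.sh
real    0m12.347s   # wall-clock time from start to finish
user    0m2.214s    # CPU time spent in user space
sys     0m0.531s    # CPU time spent in the kernel
The user and sys lines give you the processor-time breakdown you asked about; there is no separate I/O figure, but a real value much larger than user+sys usually means the script spent its time waiting (on I/O, network, sleep, and so on).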
I would like to display the completion time of a script. What I currently do is: #!/bin/bash date ## echo the date at start # the script contents date ## echo the date at end This just shows the start and end times of the script. Would it be possible to display fine-grained output like processor time / IO time, etc.?
How to get execution time of a script effectively?
Add the following to your ~/.bashrc: # Avoid duplicates HISTCONTROL=ignoredups:erasedups # When the shell exits, append to the history file instead of overwriting it shopt -s histappend# After each command, append to the history file and reread it PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND$'\n'}history -a; history -c; history -r"
I consistently have more than one terminal open. Anywhere from two to ten, doing various bits and bobs. Now let's say I restart and open up another set of terminals. Some remember certain things, some forget. I want a history that:Remembers everything from every terminal Is instantly accessible from every terminal (eg if I ls in one, switch to another already-running terminal and then press up, ls shows up) Doesn't forget command if there are spaces at the front of the command.Anything I can do to make bash work more like that?
Preserve bash history in multiple terminal windows
This technique allows for a variable to be assigned a value if another variable is either empty or is undefined. NOTE: This "other variable" can be the same or another variable. excerpt ${parameter:-word} If parameter is unset or null, the expansion of word is substituted. Otherwise, the value of parameter is substituted.NOTE: This form also works, ${parameter-word}. According to the Bash documentation, for all such expansions:Omitting the colon results in a test only for a parameter that is unset. Put another way, if the colon is included, the operator tests for both parameter’s existence and that its value is not null; if the colon is omitted, the operator tests only for existence.If you'd like to see a full list of all forms of parameter expansion available within Bash then I highly suggest you take a look at this topic in the Bash Hacker's wiki titled: "Parameter expansion". Examples variable doesn't exist $ echo "$VAR1"$ VAR1="${VAR1:-default value}" $ echo "$VAR1" default valuevariable exists $ VAR1="has value" $ echo "$VAR1" has value$ VAR1="${VAR1:-default value}" $ echo "$VAR1" has valueThe same thing can be done by evaluating other variables, or running commands within the default value portion of the notation. $ VAR2="has another value" $ echo "$VAR2" has another value $ echo "$VAR1"$$ VAR1="${VAR1:-$VAR2}" $ echo "$VAR1" has another valueMore Examples You can also use a slightly different notation where it's just VARX=${VARX-<def. value>}. $ echo "${VAR1-0}" has another value $ echo "${VAR2-0}" has another value $ echo "${VAR3-0}" 0In the above $VAR1 & $VAR2 were already defined with the string "has another value" but $VAR3 was undefined, so the default value was used instead, 0. Another Example $ VARX="${VAR3-0}" $ echo "$VARX" 0Checking and assigning using := notation Lastly I'll mention the handy operator, :=. This will do a check and assign a value if the variable under test is empty or undefined. Example Notice that $VAR1 is now set. The operator := did the test and the assignment in a single operation. $ unset VAR1 $ echo "$VAR1"$ echo "${VAR1:=default}" default $ echo "$VAR1" defaultHowever if the value is set prior, then it's left alone. $ VAR1="some value" $ echo "${VAR1:=default}" some value $ echo "$VAR1" some valueHandy Dandy Reference TableParameter set and not null Parameter set but null Parameter unset${parameter:-word} substitute parameter substitute word substitute word${parameter-word} substitute parameter substitute null substitute word${parameter:=word} substitute parameter assign word assign word${parameter=word} substitute parameter substitute null assign word${parameter:?word} substitute parameter error, exit error, exit${parameter?word} substitute parameter substitute null error, exit${parameter:+word} substitute word substitute null substitute null${parameter+word} substitute word substitute word substitute null(Screenshot of source table) This makes the difference between assignment and substitution explicit: Assignment sets a value for the variable whereas substitution doesn't. ReferencesParameter Expansions - Bash Hackers Wiki 10.2. Parameter Substitution Bash Parameter Expansions
I have been looking at a few scripts other people wrote (specifically Red Hat), and a lot of their variables are assigned using the following notation VARIABLE1="${VARIABLE1:-some_val}" or some expand other variables VARIABLE2="${VARIABLE2:-`echo $VARIABLE1`}" What is the point of using this notation instead of just declaring the values directly (e.g., VARIABLE1=some_val)? Are there benefits to this notation or possible errors that would be prevented? Does the :- have specific meaning in this context?
Using "${a:-b}" for variable assignment in scripts
You can use getent, which comes with glibc (so you almost certainly have it on Linux). This resolves using gethostbyaddr/gethostbyname2, and so also will check /etc/hosts/NIS/etc: getent hosts unix.stackexchange.com | awk '{ print $1 }'Or, as Heinzi said below, you can use dig with the +short argument (queries DNS servers directly, does not look at /etc/hosts/NSS/etc) : dig +short unix.stackexchange.comIf dig +short is unavailable, any one of the following should work. All of these query DNS directly and ignore other means of resolution: host unix.stackexchange.com | awk '/has address/ { print $4 }' nslookup unix.stackexchange.com | awk '/^Address: / { print $2 }' dig unix.stackexchange.com | awk '/^;; ANSWER SECTION:$/ { getline ; print $5 }'If you want to only print one IP, then add the exit command to awk's workflow. dig +short unix.stackexchange.com | awk '{ print ; exit }' getent hosts unix.stackexchange.com | awk '{ print $1 ; exit }' host unix.stackexchange.com | awk '/has address/ { print $4 ; exit }' nslookup unix.stackexchange.com | awk '/^Address: / { print $2 ; exit }' dig unix.stackexchange.com | awk '/^;; ANSWER SECTION:$/ { getline ; print $5 ; exit }'
What's the most concise way to resolve a hostname to an IP address in a Bash script? I'm using Arch Linux.
How can I resolve a hostname to an IP address in a Bash script?
If you want to say OR, use the double pipe (||): if [ "$fname" = "a.txt" ] || [ "$fname" = "c.txt" ] (The original OP code using | was simply piping the output of the left side to the right side, in the same way any ordinary pipe works.) After many years of comments and misunderstanding, allow me to clarify. To do OR you use ||. Whether you use [ or [[ or test or (( all depends on what you need on a case-by-case basis. It's wrong to say that one of those is preferred in all cases. Sometimes [ is right and [[ is wrong. But that's not what the question was. The OP asked why | didn't work. The answer is that it should be || instead.
This question is a sequel of sorts to my earlier question. The users on this site kindly helped me determine how to write a bash for loop that iterates over string values. For example, suppose that a loop control variable fname iterates over the strings "a.txt" "b.txt" "c.txt". I would like to echo "yes!" when fname has the value "a.txt" or "c.txt", and echo "no!" otherwise. I have tried the following bash shell script: #!/bin/bashfor fname in "a.txt" "b.txt" "c.txt" do echo $fname if [ "$fname" = "a.txt" ] | [ "$fname" = "c.txt" ]; then echo "yes!" else echo "no!" fi doneI obtain the output:a.txt no! b.txt no! c.txt yes!Why does the if statement apparently yield true when fname has the value "a.txt"? Have I used | incorrectly?
In a bash script, using the conditional "or" in an "if" statement
If you need to select files more specifically than just directories, use find and pass its output to while read: shopt -s dotglob find * -prune -type d | while IFS= read -r d; do echo "$d" done Use shopt -u dotglob to exclude hidden directories (or setopt dotglob/unsetopt dotglob in zsh). Use IFS= to avoid splitting filenames containing one of the $IFS characters, for example 'a b'; see AsymLabs' answer below for more find options. edit: In case you need to create an exit value from within the while loop, you can circumvent the extra subshell by this trick: while IFS= read -r d; do if [ "$d" == "something" ]; then exit 1; fi done < <(find * -prune -type d)
I have a folder with some directories and some files (some are hidden, beginning with dot). for d in *; do echo $d donewill loop through all files and directories, but I want to loop only through directories. How do I do that?
How do I loop through only directories in bash?
Straight from Greg's Wiki: # Rename all *.txt to *.text for file in *.txt; do mv -- "$file" "${file%.txt}.text" done*.txt is a globbing pattern, using * as a wildcard to match any string. *.txt matches all filenames ending with '.txt'. -- marks the end of the option list. This avoids issues with filenames starting with hyphens. ${file%.txt} is a parameter expansion, replaced by the value of the file variable with .txt removed from the end. Also see the entry on why you shouldn't parse ls. If you have to use basename, your syntax would be: for file in *.txt; do mv -- "$file" "$(basename -- "$file" .txt).text" done
I would like to change a file extension from *.txt to *.text. I tried using the basename command, but I'm having trouble changing more than one file. Here's my code: files=`ls -1 *.txt`for x in $files do mv $x "`basename $files .txt`.text" doneI'm getting this error:basename: too many arguments Try basename --help' for more information
How do I change the extension of multiple files?
An alias should effectively not (in general) do more than change the default options of a command. It is nothing more than simple text replacement on the command name. It can't do anything with arguments but pass them to the command it actually runs. So if you simply need to add an argument at the front of a single command, an alias will work. Common examples are # Make ls output in color by default. alias ls="ls --color=auto" # make mv ask before overwriting a file by default alias mv="mv -i"A function should be used when you need to do something more complex than an alias but that wouldn't be of use on its own. For example, take this answer on a question I asked about changing grep's default behavior depending on whether it's in a pipeline: grep() { if [[ -t 1 ]]; then command grep -n "$@" else command grep "$@" fi }It's a perfect example of a function because it is too complex for an alias (requiring different defaults based on a condition), but it's not something you'll need in a non-interactive script. If you get too many functions or functions too big, put them into separate files in a hidden directory, and source them in your ~/.bashrc: if [ -d ~/.bash_functions ]; then for file in ~/.bash_functions/*; do . "$file" done fiA script should stand on its own. It should have value as something that can be re-used, or used for more than one purpose.
Noone should need 10 years for asking this question, like I did. If I were just starting out with Linux, I'd want to know: When to alias, when to script and when to write a function? Where aliases are concerned, I use aliases for very simple operations that don't take arguments. alias houston='cd /home/username/.scripts/'That seems obvious. But some people do this: alias command="bash bashscriptname"(and add it to the .bashrc file). Is there a good reason to do that? I didn't come across a circumstance for this. If there is an edge case where that would make a difference, please answer below. That's where I would just put something in my PATH and chmod +x it, which is another thing that came after years of Linux trial-and-error. Which brings me to the next topic. For instance, I added a hidden folder (.scripts/) in the home directory to my PATH by just adding a line to my .bashrc (PATH=$PATH:/home/username/.scripts/), so anything executable in there automagically autocompletes. I don't really need that, do I? I would only use that for languages which are not the shell, like Python. If it's the shell, I can just write a function inside the very same .bashrc: funcname () { somecommand -someARGS "$@" }Did I miss anything? What would you tell a beginning Linux user about when to alias, when to script and when to write a function? If it's not obvious, I'm assuming the people who answer this will make use of all three options. If you only use one or two of these three (aliases, scripts, functions), this question isn't really aimed at you.
In Bash, when to alias, when to script and when to write a function?
Preventative measures If you want to run a command without saving it in history, prepend it with an extra space prompt$ echo saved prompt$ echo not saved \ > # ^ extra spaceFor this to work you need either ignorespace or ignoreboth in HISTCONTROL. For example, run HISTCONTROL=ignorespaceTo make this setting persistent, put it in your .bashrc. Post-mortem clean-up If you've already run the command, and want to remove it from history, first use historyto display the list of commands in your history. Find the number next to the one you want to delete (e.g. 1234) and run history -d 1234Additionally, if the line you want to delete has already been written to your $HISTFILE (which typically happens when you end a session by default), you will need to write back to $HISTFILE, or the line will reappear when you open a new session: history -w
I'm working in Mac OSX, so I guess I'm using bash...? Sometimes I enter something that I don't want to be remembered in the history. How do I remove it?
How to remove a single line from history?
Your best bet if on a GNU system: stat --printf="%s" file.anyFrom man stat:%s total size, in bytesIn a bash script : #!/bin/bash FILENAME=/home/heiko/dummy/packages.txt FILESIZE=$(stat -c%s "$FILENAME") echo "Size of $FILENAME = $FILESIZE bytes."NOTE: see @chbrown's answer for how to use stat on BSD or macOS systems.
How can I get the size of a file in a bash script? How do I assign this to a bash variable so I can use it later?
How can I get the size of a file in a bash script?
Always use double quotes around variable substitutions and command substitutions: "$foo", "$(foo)" If you use $foo unquoted, your script will choke on input or parameters (or command output, with $(foo)) containing whitespace or \[*?. There, you can stop reading. Well, ok, here are a few more:read — To read input line by line with the read builtin, use while IFS= read -r line; do … Plain read treats backslashes and whitespace specially. xargs — Avoid xargs. If you must use xargs, make that xargs -0. Instead of find … | xargs, prefer find …-exec …. xargs treats whitespace and the characters \"' specially.This answer applies to Bourne/POSIX-style shells (sh, ash, dash, bash, ksh, mksh, yash…). Zsh users should skip it and read the end of When is double-quoting necessary? instead. If you want the whole nitty-gritty, read the standard or your shell's manual.Note that the explanations below contains a few approximations (statements that are true in most conditions but can be affected by the surrounding context or by configuration). Why do I need to write "$foo"? What happens without the quotes? $foo does not mean “take the value of the variable foo”. It means something much more complex:First, take the value of the variable. Field splitting: treat that value as a whitespace-separated list of fields, and build the resulting list. For example, if the variable contains foo * bar ​ then the result of this step is the 3-element list foo, *, bar. Filename generation: treat each field as a glob, i.e. as a wildcard pattern, and replace it by the list of file names that match this pattern. If the pattern doesn't match any files, it is left unmodified. In our example, this results in the list containing foo, following by the list of files in the current directory, and finally bar. If the current directory is empty, the result is foo, *, bar.Note that the result is a list of strings. There are two contexts in shell syntax: list context and string context. Field splitting and filename generation only happen in list context, but that's most of the time. Double quotes delimit a string context: the whole double-quoted string is a single string, not to be split. (Exception: "$@" to expand to the list of positional parameters, e.g. "$@" is equivalent to "$1" "$2" "$3" if there are three positional parameters. See What is the difference between $* and $@?) The same happens to command substitution with $(foo) or with `foo`. On a side note, don't use `foo`: its quoting rules are weird and non-portable, and all modern shells support $(foo) which is absolutely equivalent except for having intuitive quoting rules. The output of arithmetic substitution also undergoes the same expansions, but that isn't normally a concern as it only contains non-expandable characters (assuming IFS doesn't contain digits or -). See When is double-quoting necessary? for more details about the cases when you can leave out the quotes. Unless you mean for all this rigmarole to happen, just remember to always use double quotes around variable and command substitutions. Do take care: leaving out the quotes can lead not just to errors but to security holes. How do I process a list of file names? If you write myfiles="file1 file2", with spaces to separate the files, this can't work with file names containing spaces. Unix file names can contain any character other than / (which is always a directory separator) and null bytes (which you can't use in shell scripts with most shells). Same problem with myfiles=*.txt; … process $myfiles. 
When you do this, the variable myfiles contains the 5-character string *.txt, and it's when you write $myfiles that the wildcard is expanded. This example will actually work, until you change your script to be myfiles="$someprefix*.txt"; … process $myfiles. If someprefix is set to final report, this won't work. To process a list of any kind (such as file names), put it in an array. This requires mksh, ksh93, yash or bash (or zsh, which doesn't have all these quoting issues); a plain POSIX shell (such as ash or dash) doesn't have array variables. myfiles=("$someprefix"*.txt) process "${myfiles[@]}"Ksh88 has array variables with a different assignment syntax set -A myfiles "someprefix"*.txt (see assignation variable under different ksh environment if you need ksh88/bash portability). Bourne/POSIX-style shells have a single one array, the array of positional parameters "$@" which you set with set and which is local to a function: set -- "$someprefix"*.txt process -- "$@"What about file names that begin with -? On a related note, keep in mind that file names can begin with a - (dash/minus), which most commands interpret as denoting an option. Some commands (like sh, set or sort) also accept options that start with +. If you have a file name that begins with a variable part, be sure to pass -- before it, as in the snippet above. This indicates to the command that it has reached the end of options, so anything after that is a file name even if it starts with - or +. Alternatively, you can make sure that your file names begin with a character other than -. Absolute file names begin with /, and you can add ./ at the beginning of relative names. The following snippet turns the content of the variable f into a “safe” way of referring to the same file that's guaranteed not to start with - nor +. case "$f" in -* | +*) "f=./$f";; esacOn a final note on this topic, beware that some commands interpret - as meaning standard input or standard output, even after --. If you need to refer to an actual file named -, or if you're calling such a program and you don't want it to read from stdin or write to stdout, make sure to rewrite - as above. See What is the difference between "du -sh *" and "du -sh ./*"? for further discussion. How do I store a command in a variable? “Command” can mean three things: a command name (the name as an executable, with or without full path, or the name of a function, builtin or alias), a command name with arguments, or a piece of shell code. There are accordingly different ways of storing them in a variable. If you have a command name, just store it and use the variable with double quotes as usual. command_path="$1" … "$command_path" --option --message="hello world"If you have a command with arguments, the problem is the same as with a list of file names above: this is a list of strings, not a string. You can't just stuff the arguments into a single string with spaces in between, because if you do that you can't tell the difference between spaces that are part of arguments and spaces that separate arguments. If your shell has arrays, you can use them. cmd=(/path/to/executable --option --message="hello world" --) cmd=("${cmd[@]}" "$file1" "$file2") "${cmd[@]}"What if you're using a shell without arrays? You can still use the positional parameters, if you don't mind modifying them. set -- /path/to/executable --option --message="hello world" -- set -- "$@" "$file1" "$file2" "$@"What if you need to store a complex shell command, e.g. with redirections, pipes, etc.? 
Or if you don't want to modify the positional parameters? Then you can build a string containing the command, and use the eval builtin. code='/path/to/executable --option --message="hello world" -- /path/to/file1 | grep "interesting stuff"' eval "$code"Note the nested quotes in the definition of code: the single quotes '…' delimit a string literal, so that the value of the variable code is the string /path/to/executable --option --message="hello world" -- /path/to/file1. The eval builtin tells the shell to parse the string passed as an argument as if it appeared in the script, so at that point the quotes and pipe are parsed, etc. Using eval is tricky. Think carefully about what gets parsed when. In particular, you can't just stuff a file name into the code: you need to quote it, just like you would if it was in a source code file. There's no direct way to do that. Something like code="$code $filename" breaks if the file name contains any shell special character (spaces, $, ;, |, <, >, etc.). code="$code \"$filename\"" still breaks on "$\`. Even code="$code '$filename'" breaks if the file name contains a '. There are two solutions.Add a layer of quotes around the file name. The easiest way to do that is to add single quotes around it, and replace single quotes by '\''. quoted_filename=$(printf %s. "$filename" | sed "s/'/'\\\\''/g") code="$code '${quoted_filename%.}'"Keep the variable expansion inside the code, so that it's looked up when the code is evaluated, not when the code fragment is built. This is simpler but only works if the variable is still around with the same value at the time the code is executed, not e.g. if the code is built in a loop. code="$code \"\$filename\""Finally, do you really need a variable containing code? The most natural way to give a name to a code block is to define a function. What's up with read? Without -r, read allows continuation lines — this is a single logical line of input: hello \ worldread splits the input line into fields delimited by characters in $IFS (without -r, backslash also escapes those). For example, if the input is a line containing three words, then read first second third sets first to the first word of input, second to the second word and third to the third word. If there are more words, the last variable contains everything that's left after setting the preceding ones. Leading and trailing whitespace are trimmed. Setting IFS to the empty string avoids any trimming. See Why is `while IFS= read` used so often, instead of `IFS=; while read..`? for a longer explanation. What's wrong with xargs? The input format of xargs is whitespace-separated strings which can optionally be single- or double-quoted. No standard tool outputs this format. xargs -L1 or xargs -l is not to split the input on lines, but to run one command per line of input (that line still split to make up the arguments, and continued on the next line if ending in blanks). xargs -I PLACEHOLDER does use one line of input to substitute the PLACEHOLDER but quotes and backslashes are still processed and leading blanks trimmed. You can use xargs -r0 where applicable (and where available: GNU (Linux, Cygwin), BusyBox, BSDs, OSX, but it isn't in POSIX). That's safe, because null bytes can't appear in most data, in particular in file names and external command arguments. To produce a null-separated list of file names, use find … -print0 (or you can use find … -exec … as explained below). How do I process files found by find? 
find … -exec some_command a_parameter another_parameter {} +some_command needs to be an external command, it can't be a shell function or alias. If you need to invoke a shell to process the files, call sh explicitly. find … -exec sh -c ' for x do … # process the file "$x" done ' find-sh {} +I have some other question Browse the quoting tag on this site, or shell or shell-script. (Click on “learn more…” to see some general tips and a hand-selected list of common questions.) If you've searched and you can't find an answer, ask away.
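As a quick way to see the split+glob behaviour from the top of this answer in action, here is a small throwaway demonstration; the directory and file names are made up purely for illustration:

    mkdir -p /tmp/quoting-demo && cd /tmp/quoting-demo || exit
    touch 'file one.txt' 'file two.txt'
    myfiles='*.txt'
    printf '<%s>\n' $myfiles      # unquoted: split+glob, prints the two file names
    printf '<%s>\n' "$myfiles"    # quoted: prints the literal 5-character string <*.txt>

The angle brackets make it easy to count exactly how many arguments printf received in each case.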
… or an introductory guide to robust filename handling and other string passing in shell scripts. I wrote a shell script which works well most of the time. But it chokes on some inputs (e.g. on some file names). I encountered a problem such as the following:I have a file name containing a space hello world, and it was treated as two separate files hello and world. I have an input line with two consecutive spaces and they shrank to one in the input. Leading and trailing whitespace disappears from input lines. Sometimes, when the input contains one of the characters \[*?, they are replaced by some text which is actually the names of some files. There is an apostrophe ' (or a double quote ") in the input, and things got weird after that point. There is a backslash in the input (or: I am using Cygwin and some of my file names have Windows-style \ separators).What is going on, and how do I fix this?
Why does my shell script choke on whitespace or other special characters?
bash does cache the full path to a command. You can verify that the command you are trying to execute is hashed with the type command: $ type svnsync svnsync is hashed (/usr/local/bin/svnsync)To clear the entire cache: $ hash -rOr just one entry: $ hash -d svnsyncFor additional information, consult help hash and man bash.
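For example, a quick interactive session to inspect and reset the cache might look like this (svnsync is just the command name from the question; substitute your own):

    hash              # list every hashed command with its hit count
    hash -r           # forget all remembered locations; the next use searches $PATH again
    type svnsync      # should now report the path currently found in $PATH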
When I execute a program without specifying the full path to the executable, and Bash must search the directories in $PATH to find the binary, it seems that Bash remembers the path in some sort of cache. For example, I installed a build of Subversion from source to /usr/local, then typed svnsync help at the Bash prompt. Bash located the binary /usr/local/bin/svnsync for "svnsync" and executed it. Then when I deleted the installation of Subversion in /usr/local and re-ran svnsync help, Bash responds: bash: /usr/local/bin/svnsync: No such file or directoryBut, when I start a new instance of Bash, it finds and executes /usr/bin/svnsync. How do I clear the cache of paths to executables?
How do I clear Bash's cache of paths to executables?
Two ways: Press Ctrl+V and then Tab to use "verbatim" quoted insert. cut -f2 -d' ' infileor write it like this to use ANSI-C quoting: cut -f2 -d$'\t' infileThe $'...' form of quotes isn't part of the POSIX shell language (not yet), but works at least in ksh, mksh, zsh and Busybox in addition to Bash.
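As a quick sanity check, you can feed cut a tab-separated line built with printf:

    printf 'one\ttwo\tthree\n' | cut -f2 -d$'\t'    # prints: two
    printf 'one\ttwo\tthree\n' | cut -f2            # same result: tab is cut's default delimiter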
Here is an example of using cut to break input into fields using a space delimiter, and obtaining the second field: cut -f2 -d' ' How can the delimiter be defined as a tab, instead of a space?
How to define 'tab' delimiter with 'cut' in Bash?
As is often the case with obscure terms, the Jargon File has an answer:[Unix: from runcom files on the CTSS system 1962-63, via the startup script /etc/rc] Script file containing startup instructions for an application program (or an entire operating system), usually a text file containing commands of the sort that might have been invoked manually once the system was running but are to be executed automatically each time the system starts up.Thus, it would seem that the "rc" part stands for "runcom", which I believe can be expanded to "run commands". In fact, this is exactly what the file contains, commands that bash should run.
Is it "resource configuration", by any chance?
What does "rc" in .bashrc stand for?
There's no need to do that, it's already in a variable: $ echo "$PWD" /home/terdonThe PWD variable is defined by POSIX and will work on all POSIX-compliant shells:PWDSet by the shell and by the cd utility. In the shell the value shall be initialized from the environment as follows. If a value for PWD is passed to the shell in the environment when it is executed, the value is an absolute pathname of the current working directory that is no longer than {PATH_MAX} bytes including the terminating null byte, and the value does not contain any components that are dot or dot-dot, then the shell shall set PWD to the value from the environment. Otherwise, if a value for PWD is passed to the shell in the environment when it is executed, the value is an absolute pathname of the current working directory, and the value does not contain any components that are dot or dot-dot, then it is unspecified whether the shell sets PWD to the value from the environment or sets PWD to the pathname that would be output by pwd -P. Otherwise, the sh utility sets PWD to the pathname that would be output by pwd -P. In cases where PWD is set to the value from the environment, the value can contain components that refer to files of type symbolic link. In cases where PWD is set to the pathname that would be output by pwd -P, if there is insufficient permission on the current working directory, or on any parent of that directory, to determine what that pathname would be, the value of PWD is unspecified. Assignments to this variable may be ignored. If an application sets or unsets the value of PWD, the behaviors of the cd and pwd utilities are unspecified.For the more general answer, the way to save the output of a command in a variable is to enclose the command in $() or ` ` (backticks): var=$(command)or var=`command`Of the two, the $() is preferred since it is easier to build complex commands like: command0 "$(command1 "$(command2 "$(command3)")")"Whose backtick equivalent would look like: command0 "`command1 \"\`command2 \\\"\\\`command3\\\`\\\"\`\"`"
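A minimal sketch of both forms in a script (the directory names here are only placeholders):

    dir="$PWD"           # remember the current directory; quotes protect paths with spaces
    cd /tmp || exit
    # ... do some work somewhere else ...
    cd "$dir" || exit    # and come back
    # the general command-substitution form works for any command's output:
    today=$(date +%Y-%m-%d)
    echo "$today"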
I want to have a script that puts the current working directory into a variable. The section that needs the directory is like this: dir = pwd. It just prints pwd. How do I get the current working directory into a variable?
How can I get the current working directory? [duplicate]
First of all, as ddeimeke said, aliases by default are not expanded in non-interactive shells. Second, .bashrc is not read by non-interactive shells unless you set the BASH_ENV environment variable. But most importantly: don't do that! Please? One day you will move that script somewhere where the necessary aliases are not set and it will break again. Instead set and use variables as shortcuts in your script: #!/bin/bashCMDA=/path/to/gizmo CMDB=/path/to/huzzah.shfor file in "$@" do $CMDA "$file" $CMDB "$file" done
In my ~/.bashrc file reside two definitions:commandA, which is an alias to a longer path commandB, which is an alias to a Bash scriptI want to process the same file with these two commands, so I wrote the following Bash script:#!/bin/bashfor file in "$@" do commandA $file commandB $file doneEven after logging out of my session and logging back in, Bash prompts me with command not found errors for both commands when I run this script. What am I doing wrong?
Why doesn't my Bash script recognize aliases?
The PID of the last executed command is in the $! shell variable: my-app & echo $!
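A slightly fuller sketch, keeping the my-app placeholder from the question; saving $! immediately matters because it is overwritten by the next background job:

    my-app &
    pid=$!                  # capture it right away
    echo "started my-app with PID $pid"
    wait "$pid"             # later, if you need to block until that particular process exits
    echo "my-app exited with status $?"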
I want to have a shell script like this: my-app & echo $my-app-pid But I do not know how to get the pid of the just-executed command. I know I can just use the jobs -p my-app command to grep the pid, but if I want to execute the shell multiple times, this method will not work, because the jobspec is ambiguous.
How to get the pid of the last executed command in shell script?
You can use the read command. If you are using bash: read -p "Press enter to continue" In other shells, you can do: printf "%s " "Press enter to continue" read ans As mentioned in the comments above, this command does actually require the user to press enter; a solution that works with any key in bash would be: read -n 1 -s -r -p "Press any key to continue" Explanation (by Rayne and wchargin): -n defines the required character count to stop reading, -s hides the user's input, and -r causes the string to be interpreted "raw" (without considering backslash escapes).
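If you also want the prompt to give up after a while, bash's read accepts a timeout; a small sketch:

    # continue automatically after 10 seconds if no key is pressed (bash)
    read -t 10 -n 1 -s -r -p "Press any key to continue (or wait 10s)"
    echo    # move to a fresh line after the silent read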
I'm making a script to install my theme. After it finishes installing, it displays the changelog followed by "Press any key to continue", so that users can read the changelog and then press any key to continue.
How can I make "Press any key to continue" [duplicate]
eval is part of POSIX. It's an interface which can be a shell built-in. It's described in the "POSIX Programmer's Manual": http://www.unix.com/man-page/posix/1posix/eval/ eval - construct command by concatenating argumentsIt will take an argument and construct a command of it, which will then be executed by the shell. This is the example from the manpage: foo=10 x=foo # 1 y='$'$x # 2 echo $y # 3 $foo eval y='$'$x # 5 echo $y # 6 10In the first line you define $foo with the value '10' and $x with the value 'foo'. Now define $y, which consists of the string '$foo'. The dollar sign must be escaped with '$'. To check the result, echo $y. The result of 1)-3) will be the string '$foo' Now we repeat the assignment with eval. It will first evaluate $x to the string 'foo'. Now we have the statement y=$foo which will get evaluated to y=10. The result of echo $y is now the value '10'.This is a common function in many languages, e.g. Perl and JavaScript. Have a look at perldoc eval for more examples: http://perldoc.perl.org/functions/eval.html
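One common, relatively tame use is indirect variable access; the sketch below uses HOME only as a convenient example, and also shows bash's built-in alternative that avoids eval entirely:

    name=HOME
    eval "value=\$$name"    # the escaped dollar survives until eval re-parses the string
    echo "$value"           # prints the contents of $HOME
    echo "${!name}"         # bash-specific indirection, no eval needed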
What can you do with the eval command? Why is it useful? Is it some kind of a built-in function in bash? There is no man page for it..
What is the "eval" command in bash?
You can create a section [color] in your ~/.gitconfig with e.g. the following content [color] diff = auto status = auto branch = auto interactive = auto ui = true pager = trueYou can also fine control what you want to have coloured in what way, e.g. [color "status"] added = green changed = red bold untracked = magenta bold[color "branch"] remote = yellowI hope this gets you started. And of course, you need a terminal which supports colour. Also see this answer for a way to add colorization directly from the command line.
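If you prefer not to edit the file by hand, the same settings can be written with git config; these are standard configuration keys, shown here only as a sketch:

    git config --global color.ui auto                  # creates/updates the [color] section for you
    git config --global color.status.changed "red bold"
    git config --global color.branch.remote yellow
    # force colour for a single invocation, e.g. when piping into a pager:
    git -c color.ui=always status | less -R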
Is there a way to color output for git (or any command)? Consider: baller@Laptop:~/rails/spunky-monkey$ git status # On branch new-message-types # Changes not staged for commit: # (use "git add <file>..." to update what will be committed) # (use "git checkout -- <file>..." to discard changes in working directory) # # modified: app/models/message_type.rb # no changes added to commit (use "git add" and/or "git commit -a") baller@Laptop:~/rails/spunky-monkey$ git add app/modelsAnd baller@Laptop:~/rails/spunky-monkey$ git status # On branch new-message-types # Changes to be committed: # (use "git reset HEAD <file>..." to unstage) # # modified: app/models/message_type.rb #The output looks the same, but the information is totally different: the file has gone from unstaged to staged for commit. Is there a way to colorize the output? For example, files that are unstaged are red, staged are green? Or even Changes not staged for commit: to red and # Changes to be committed: to green? Working in Ubuntu. EDIT: Googling found this answer which works great: git config --global --add color.ui true. However, is there any more general solution for adding color to a command output?
How to colorize output of git?
First, note that the -z test is explicitly for:the length of string is zeroThat is, a string containing only spaces should not be true under -z, because it has a non-zero length. What you want is to remove the spaces from the variable using the pattern replacement parameter expansion: [[ -z "${param// }" ]]This expands the param variable and replaces all matches of the pattern (a single space) with nothing, so a string that has only spaces in it will be expanded to an empty string.The nitty-gritty of how that works is that ${var/pattern/string} replaces the first longest match of pattern with string. When pattern starts with / (as above) then it replaces all the matches. Because the replacement is empty, we can omit the final / and the string value:${parameter/pattern/string} The pattern is expanded to produce a pattern just as in filename expansion. Parameter is expanded and the longest match of pattern against its value is replaced with string. If pattern begins with ‘/’, all matches of pattern are replaced with string. Normally only the first match is replaced. ... If string is null, matches of pattern are deleted and the / following pattern may be omitted.After all that, we end up with ${param// } to delete all spaces. Note that though present in ksh (where it originated), zsh and bash, that syntax is not POSIX and should not be used in sh scripts.
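A short loop makes the behaviour easy to verify interactively (bash/ksh/zsh syntax):

    for param in "" "   " "  hello  "; do
        if [[ -z "${param// }" ]]; then
            printf '<%s> -> empty or only spaces\n' "$param"
        else
            printf '<%s> -> has content\n' "$param"
        fi
    done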
The following bash syntax verifies if param isn't empty: [[ ! -z $param ]]For example: param="" [[ ! -z $param ]] && echo "I am not zero"No output and its fine. But when param is empty except for one (or more) space characters, then the case is different: param=" " # one space [[ ! -z $param ]] && echo "I am not zero""I am not zero" is output. How can I change the test to consider variables that contain only space characters as empty?
How can I test if a variable is empty or contains only spaces?
Here are a couple of things you can do: Editors + Code A lot of editors have syntax highlighting support. vim and emacs have it on by default. You can also enable it under nano. You can also syntax highlight code on the terminal by using Pygments as a command-line tool. grep grep --color=auto highlights all matches. You can also use export GREP_OPTIONS='--color=auto' to make it persistent without an alias. If you use --color=always, it'll use colour even when piping, which confuses things. ls ls --color=always Colors specified by: export LS_COLORS='rs=0:di=01;34:ln=01;36:mh=00:pi=40;33'(hint: dircolors can be helpful) PS1 You can set your PS1 (shell prompt) to use colours. For example: PS1='\e[33;1m\u@\h: \e[31m\W\e[0m\$ 'Will produce a PS1 like: [yellow]lucas@ubuntu: [red]~[normal]$ You can get really creative with this. As an idea: PS1='\e[s\e[0;0H\e[1;33m\h \t\n\e[1;32mThis is my computer\e[u[\u@\h: \w]\$ 'Puts a bar at the top of your terminal with some random info. (For best results, also use alias clear="echo -e '\e[2J\n\n'".) Getting Rid of Escape Sequences If something is stuck outputting colour when you don't want it to, I use this sed line to strip the escape sequences: sed "s/\[^[[0-9;]*[a-zA-Z]//gi"If you want a more authentic experience, you can also get rid of lines starting with \e[8m, which instructs the terminal to hide the text. (Not widely supported.) sed "s/^\[^[8m.*$//gi"Also note that those ^[s should be actual, literal ^[s. You can type them by pressing ^V^[ in bash, that is Ctrl + V, Ctrl + [.
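One more option, not covered above: let terminfo emit the right sequences for your terminal with tput instead of hard-coding ANSI escapes. A minimal sketch:

    red=$(tput setaf 1) green=$(tput setaf 2) bold=$(tput bold) reset=$(tput sgr0)
    printf '%sFAIL%s and %s%sOK%s\n' "$red" "$reset" "$bold" "$green" "$reset"

Because tput consults the terminfo entry for the current $TERM, this tends to degrade more gracefully on unusual terminals than raw escape codes.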
I spend most of my time working in Unix environments and using terminal emulators. I try to use color on the command line, because color makes the output more useful and intuitive. What options exist to add color to my terminal environment? What tricks do you use? What pitfalls have you encountered? Unfortunately, support for color varies depending on terminal type, OS, TERM setting, utility, buggy implementations, etc. Here are some tips from my setup, after a lot of experimentation:I tend to set TERM=xterm-color, which is supported on most hosts (but not all). I work on a number of different hosts, different OS versions, etc. I use everything from macOS X, Ubuntu Linux, RHEL/CentOS/Scientific Linux and FreeBSD. I'm trying to keep things simple and generic, if possible. I do a bunch of work using GNU screen, which adds another layer of fun. Many OSs set things like dircolors and by default, and I don't want to modify this on a hundred different hosts. So I try to stick with the defaults. Instead, I tweak my terminal's color configuration. Use color for some Unix commands (ls, grep, less, vim) and the Bash prompt. These commands seem to use the standard "ANSI escape sequences". For example: alias less='less --RAW-CONTROL-CHARS' export LS_OPTS='--color=auto' alias ls='ls ${LS_OPTS}'I'll post my .bashrc and answer my own question Jeopardy Style.
Colorizing your terminal and shell environment?
If you use the env command to display the variables, they should show up roughly in the order in which they were created. You can use this as a guide to if they were set by the system very early in the boot, or by a later .profile or other configuration file. In my experience, the set and export commands will sort their variables by alphabetical order, so that listing isn't as useful.
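If the ordering hint isn't enough, two more direct approaches usually find the culprit. MY_VAR is a placeholder for the variable you are hunting, and the file list below is only the usual suspects, so adjust it to your system:

    grep -H 'MY_VAR' ~/.bash_profile ~/.bashrc ~/.profile /etc/profile /etc/bash.bashrc 2>/dev/null
    grep -rH 'MY_VAR' /etc/profile.d/ 2>/dev/null
    # or trace a login shell's startup and watch where the assignment happens (bash):
    PS4='+ $BASH_SOURCE:$LINENO: ' bash -lixc : 2>&1 | grep 'MY_VAR='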
I have a Linux instance that I set up some time ago. When I fire it up and log in as root, there are some environment variables that I set up, but I can't remember or find where they came from. I've checked ~/.bash_profile, /etc/.bash_rc, and all the startup scripts. I've run find and grep to no avail. I feel like I must be forgetting to look in some place obvious. Is there a trick for figuring this out?
How to determine where an environment variable came from?
To recursively sanitize a project I use this oneliner: git ls-files -z | while IFS= read -rd '' f; do if file --mime-encoding "$f" | grep -qv binary; then tail -c1 < "$f" | read -r _ || echo >> "$f"; fi; doneExplanation:git ls-files -z lists files in the repository. It takes an optional pattern as additional parameter which might be useful in some cases if you want to restrict the operation to certain files/directories. As an alternative, you could use find -print0 ... or similar programs to list affected files - just make sure it emits NUL-delimited entries.while IFS= read -rd '' f; do ... done iterates through the entries, safely handling filenames that include whitespace and/or newlines.if file --mime-encoding "$f" | grep -qv binary checks whether the file is in a binary format (such as images) and skips those.tail -c1 < "$f" reads the last char from a file.read -r _ exits with a nonzero exit status if a trailing newline is missing.|| echo >> "$f" appends a newline to the file if the exit status of the previous command was nonzero.
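If you only need to fix a single file rather than a whole repository, a shorter test is usually enough; it relies on command substitution stripping a trailing newline, so the result is non-empty exactly when the last byte isn't a newline (the file name is a placeholder):

    f=somefile.txt
    [ -n "$(tail -c1 -- "$f")" ] && echo >> "$f"    # append a newline only if the last byte isn't one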
Using version control systems I get annoyed at the noise when the diff says No newline at end of file. So I was wondering: How to add a newline at the end of a file to get rid of those messages?
How to add a newline to the end of a file?
This is the one-liner that you need. No other config needed: mkdir longtitleproject && cd $_The $_ variable, in bash, is the last argument given to the previous command. In this case, the name of the directory you just created. As explained in man bash: _ At shell startup, set to the absolute pathname used to invoke the shell or shell script being executed as passed in the envi‐ ronment or argument list. Subsequently, expands to the last argument to the previous command, after expansion. Also set to the full pathname used to invoke each command executed and placed in the environment exported to that command. When check‐ ing mail, this parameter holds the name of the mail file cur‐ rently being checked."$_" is the last argument of the previous command.Use cd $_ to retrieve the last argument of the previous command instead of cd !$ because cd !$ gives the last argument of previous command in the shell history: cd ~/ mkdir folder && cd !$you end up home (or ~/ ) cd ~/ mkdir newfolder && cd $_you end up in newfolder under home !! ( or ~/newfolder )
I find myself repeating a lot of: mkdir longtitleproject cd longtitleprojectIs there a way of doing it in one line without repeating the directory name? I'm on bash here.
Is there a one-liner that allows me to create a directory and move into it at the same time?
That's what xargs does. ... | xargs command
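Two sketches for the specific example in the question; note that -d is a GNU xargs extension, and the find variant is safe for any file name:

    ls | grep -- pattern | xargs -d '\n' -n1 java MyProg          # one invocation per line (GNU xargs)
    find . -name '*pattern*' -print0 | xargs -0 -n1 java MyProg   # NUL-delimited, robust for all names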
I want to run a java command once for every match of ls | grep pattern -. In this case, I think I could do find pattern -exec java MyProg '{}' \; but I'm curious about the general case - is there an easy way to say "run a command once for every line of standard input"? (In fish or bash.)
Execute a command once per line of piped input?
Preamble First, I'd say it's not the right way to address the problem. It's a bit like saying "you should not murder people because otherwise you'll go to jail". Similarly, you don't quote your variable because otherwise you're introducing security vulnerabilities. You quote your variables because it is wrong not to (but if the fear of the jail can help, why not). A little summary for those who've just jumped on the train. In most shells, leaving a variable expansion unquoted (though that (and the rest of this answer) also applies to command substitution (`...` or $(...)) and arithmetic expansion ($((...)) or $[...])) has a very special meaning. The best way to describe it is that it is like invoking some sort of implicit split+glob operator¹. cmd $varin another language would be written something like: cmd(glob(split($var)))$var is first split into a list of words according to complex rules involving the $IFS special parameter (the split part) and then each word resulting of that splitting is considered as a pattern which is expanded to a list of files that match it (the glob part). As an example, if $var contains *.txt,/var/*.xml and $IFS contains ,, cmd would be called with a number of arguments, the first one being cmd and the next ones being the txt files in the current directory and the xml files in /var. If you wanted to call cmd with just the two literal arguments cmd and *.txt,/var/*.xml, you'd write: cmd "$var"which would be in your other more familiar language: cmd($var)What do we mean by vulnerability in a shell? After all, it's been known since the dawn of time that shell scripts should not be used in security-sensitive contexts. Surely, OK, leaving a variable unquoted is a bug but that can't do that much harm, can it? Well, despite the fact that anybody would tell you that shell scripts should never be used for web CGIs, or that thankfully most systems don't allow setuid/setgid shell scripts nowadays, one thing that shellshock (the remotely exploitable bash bug that made the headlines in September 2014) revealed is that shells are still extensively used where they probably shouldn't: in CGIs, in DHCP client hook scripts, in sudoers commands, invoked by (if not as) setuid commands... Sometimes unknowingly. For instance system('cmd $PATH_INFO') in a php/perl/python CGI script does invoke a shell to interpret that command line (not to mention the fact that cmd itself may be a shell script and its author may have never expected it to be called from a CGI). You've got a vulnerability when there's a path for privilege escalation, that is when someone (let's call him the attacker) is able to do something he is not meant to. Invariably that means the attacker providing data, that data being processed by a privileged user/process which inadvertently does something it shouldn't be doing, in most of the cases because of a bug. Basically, you've got a problem when your buggy code processes data under the control of the attacker. Now, it's not always obvious where that data may come from, and it's often hard to tell if your code will ever get to process untrusted data. As far as variables are concerned, In the case of a CGI script, it's quite obvious, the data are the CGI GET/POST parameters and things like cookies, path, host... parameters. For a setuid script (running as one user when invoked by another), it's the arguments or environment variables. Another very common vector is file names. 
If you're getting a file list from a directory, it's possible that files have been planted there by the attacker. In that regard, even at the prompt of an interactive shell, you could be vulnerable (when processing files in /tmp or ~/tmp for instance). Even a ~/.bashrc can be vulnerable (for instance, bash will interpret it when invoked over ssh to run a ForcedCommand like in git server deployments with some variables under the control of the client). Now, a script may not be called directly to process untrusted data, but it may be called by another command that does. Or your incorrect code may be copy-pasted into scripts that do (by you 3 years down the line or one of your colleagues). One place where it's particularly critical is in answers in Q&A sites as you'll never know where copies of your code may end up. Down to business; how bad is it? Leaving a variable (or command substitution) unquoted is by far the number one source of security vulnerabilities associated with shell code. Partly because those bugs often translate to vulnerabilities but also because it's so common to see unquoted variables. Actually, when looking for vulnerabilities in shell code, the first thing to do is look for unquoted variables. It's easy to spot, often a good candidate, generally easy to track back to attacker-controlled data. There's an infinite number of ways an unquoted variable can turn into a vulnerability. I'll just give a few common trends here. Information disclosure Most people will bump into bugs associated with unquoted variables because of the split part (for instance, it's common for files to have spaces in their names nowadays and space is in the default value of IFS). Many people will overlook the glob part. The glob part is at least as dangerous as the split part. Globbing done upon unsanitised external input means the attacker can make you read the content of any directory. In: echo You entered: $unsanitised_external_inputif $unsanitised_external_input contains /*, that means the attacker can see the content of /. No big deal. It becomes more interesting though with /home/* which gives you a list of user names on the machine, /tmp/*, /home/*/.forward for hints at other dangerous practises, /etc/rc*/* for enabled services... No need to name them individually. A value of /* /*/* /*/*/*... will just list the whole file system. Denial of service vulnerabilities. Taking the previous case a bit too far and we've got a DoS. Actually, any unquoted variable in list context with unsanitized input is at least a DoS vulnerability. Even expert shell scripters commonly forget to quote things like: #! /bin/sh - : ${QUERYSTRING=$1}: is the no-op command. What could possibly go wrong? That's meant to assign $1 to $QUERYSTRING if $QUERYSTRING was unset. That's a quick way to make a CGI script callable from the command line as well. That $QUERYSTRING is still expanded though and because it's not quoted, the split+glob operator is invoked. Now, there are some globs that are particularly expensive to expand. The /*/*/*/* one is bad enough as it means listing directories up to 4 levels down. In addition to the disk and CPU activity, that means storing tens of thousands of file paths (40k here on a minimal server VM, 10k of which directories). Now /*/*/*/*/../../../../*/*/*/* means 40k x 10k and /*/*/*/*/../../../../*/*/*/*/../../../../*/*/*/* is enough to bring even the mightiest machine to its knees. 
Try it for yourself (though be prepared for your machine to crash or hang): a='/*/*/*/*/../../../../*/*/*/*/../../../../*/*/*/*' sh -c ': ${a=foo}'Of course, if the code is: echo $QUERYSTRING > /some/fileThen you can fill up the disk. Just do a google search on shell cgi or bash cgi or ksh cgi, and you'll find a few pages that show you how to write CGIs in shells. Notice how half of those that process parameters are vulnerable. Even David Korn's own one is vulnerable (look at the cookie handling). up to arbitrary code execution vulnerabilities Arbitrary code execution is the worst type of vulnerability, since if the attacker can run any command, there's no limit on what he may do. That's generally the split part that leads to those. That splitting results in several arguments to be passed to commands when only one is expected. While the first of those will be used in the expected context, the others will be in a different context so potentially interpreted differently. Better with an example: awk -v foo=$external_input '$2 == foo'Here, the intention was to assign the content of the $external_input shell variable to the foo awk variable. Now: $ external_input='x BEGIN{system("uname")}' $ awk -v foo=$external_input '$2 == foo' LinuxThe second word resulting of the splitting of $external_input is not assigned to foo but considered as awk code (here that executes an arbitrary command: uname). That's especially a problem for commands that can execute other commands (awk, env, sed (GNU one), perl, find...) especially with the GNU variants (which accept options after arguments). Sometimes, you wouldn't suspect commands to be able to execute others like ksh, bash or zsh's [ or printf... for file in *; do [ -f $file ] || continue something-that-would-be-dangerous-if-$file-were-a-directory doneIf we create a directory called x -o yes, then the test becomes positive, because it's a completely different conditional expression we're evaluating. Worse, if we create a file called x -a a[0$(uname>&2)] -gt 1, with all ksh implementations at least (which includes the sh of most commercial Unices and some BSDs), that executes uname because those shells perform arithmetic evaluation on the numerical comparison operators of the [ command. $ touch x 'x -a a[0$(uname>&2)] -gt 1' $ ksh -c 'for f in *; do [ -f $f ]; done' LinuxSame with bash for a filename like x -a -v a[0$(uname>&2)]. Of course, if they can't get arbitrary execution, the attacker may settle for lesser damage (which may help to get arbitrary execution). Any command that can write files or change permissions, ownership or have any main or side effect could be exploited. All sorts of things can be done with file names. $ touch -- '-R ..' $ for file in *; do [ -f "$file" ] && chmod +w $file; doneAnd you end up making .. writeable (recursively with GNU chmod). Scripts doing automatic processing of files in publicly writable areas like /tmp are to be written very carefully. What about [ $# -gt 1 ] That's something I find exasperating. Some people go down all the trouble of wondering whether a particular expansion may be problematic to decide if they can omit the quotes. It's like saying. Hey, it looks like $# cannot be subject to the split+glob operator, let's ask the shell to split+glob it. Or Hey, let's write incorrect code just because the bug is unlikely to be hit. Now how unlikely is it? OK, $# (or $!, $? or any arithmetic substitution) may only contain digits (or - for some²) so the glob part is out. 
For the split part to do something though, all we need is for $IFS to contain digits (or -). With some shells, $IFS may be inherited from the environment, but if the environment is not safe, it's game over anyway. Now if you write a function like: my_function() { [ $# -eq 2 ] || return ... }What that means is that the behaviour of your function depends on the context in which it is called. Or in other words, $IFS becomes one of the inputs to it. Strictly speaking, when you write the API documentation for your function, it should be something like: # my_function # inputs: # $1: source directory # $2: destination directory # $IFS: used to split $#, expected not to contain digits...And code calling your function needs to make sure $IFS doesn't contain digits. All that because you didn't feel like typing those 2 double-quote characters. Now, for that [ $# -eq 2 ] bug to become a vulnerability, you'd need somehow for the value of $IFS to become under control of the attacker. Conceivably, that would not normally happen unless the attacker managed to exploit another bug. That's not unheard of though. A common case is when people forget to sanitize data before using it in arithmetic expression. We've already seen above that it can allow arbitrary code execution in some shells, but in all of them, it allows the attacker to give any variable an integer value. For instance: n=$(($1 + 1)) if [ $# -gt 2 ]; then echo >&2 "Too many arguments" exit 1 fiAnd with a $1 with value (IFS=-1234567890), that arithmetic evaluation has the side effect of settings IFS and the next [ command fails which means the check for too many args is bypassed. What about when the split+glob operator is not invoked? There's another case where quotes are needed around variables and other expansions: when it's used as a pattern. [[ $a = $b ]] # a `ksh` construct also supported by `bash` case $a in ($b) ...; esacdo not test whether $a and $b are the same (except with zsh) but if $a matches the pattern in $b. And you need to quote $b if you want to compare as strings (same thing in "${a#$b}" or "${a%$b}" or "${a##*$b*}" where $b should be quoted if it's not to be taken as a pattern). What that means is that [[ $a = $b ]] may return true in cases where $a is different from $b (for instance when $a is anything and $b is *) or may return false when they are identical (for instance when both $a and $b are [a]). Can that make for a security vulnerability? Yes, like any bug. Here, the attacker can alter your script's logical code flow and/or break the assumptions that your script are making. For instance, with a code like: if [[ $1 = $2 ]]; then echo >&2 '$1 and $2 cannot be the same or damage will incur' exit 1 fiThe attacker can bypass the check by passing '[a]' '[a]'. Now, if neither that pattern matching nor the split+glob operator apply, what's the danger of leaving a variable unquoted? I have to admit that I do write: a=$b case $a in...There, quoting doesn't harm but is not strictly necessary. However, one side effect of omitting quotes in those cases (for instance in Q&A answers) is that it can send a wrong message to beginners: that it may be all right not to quote variables. For instance, they may start thinking that if a=$b is OK, then export a=$b would be as well (which it's not in many shells as it's in arguments to the export command so in list context) or env a=$b. There are a few places though where quotes are not accepted. 
The main one being inside Korn-style arithmetic expressions in many shells like in echo "$(( $1 + 1 ))" "${array[$1 + 1]}" "${var:$1 + 1}" where the $1 must not be quoted (being in a list context --the arguments to a simple command-- the overall expansions still needs to be quoted though). Inside those, the shell understands a separate language altogether inspired from C. In AT&T ksh for instance $(( 'd' - 'a' )) expands to 3 like it does in C and not the same as $(( d - a )) would. Double quotes are ignored in ksh93 but cause a syntax error in many other shells. In C, "d" - "a" would return the difference between pointers to C strings. Doing the same in shell would not make sense. What about zsh? zsh did fix most of those design awkwardnesses. In zsh (at least when not in sh/ksh emulation mode), if you want splitting, or globbing, or pattern matching, you have to request it explicitly: $=var to split, and $~var to glob or for the content of the variable to be treated as a pattern. However, splitting (but not globbing) is still done implicitly upon unquoted command substitution (as in echo $(cmd)). Also, a sometimes unwanted side effect of not quoting variable is the empties removal. The zsh behaviour is similar to what you can achieve in other shells by disabling globbing altogether (with set -f) and splitting (with IFS=''). Still, in: cmd $varThere will be no split+glob, but if $var is empty, instead of receiving one empty argument, cmd will receive no argument at all. That can cause bugs (like the obvious [ -n $var ]). That can possibly break a script's expectations and assumptions and cause vulnerabilities. As the empty variable can cause an argument to be just removed, that means the next argument could be interpreted in the wrong context. As an example, printf '[%d] <%s>\n' 1 $attacker_supplied1 2 $attacker_supplied2If $attacker_supplied1 is empty, then $attacker_supplied2 will be interpreted as an arithmetic expression (for %d) instead of a string (for %s) and any unsanitized data used in an arithmetic expression is a command injection vulnerability in Korn-like shells such as zsh. $ attacker_supplied1='x y' attacker_supplied2='*' $ printf '[%d] <%s>\n' 1 $attacker_supplied1 2 $attacker_supplied2 [1] <x y> [2] <*>fine, but: $ attacker_supplied1='' attacker_supplied2='psvar[$(uname>&2)0]' $ printf '[%d] <%s>\n' 1 $attacker_supplied1 2 $attacker_supplied2 Linux [1] <2> [0] <>The uname arbitrary command was run. Also note that while zsh doesn't do globbing upon substitutions by default, as globs in zsh are much more powerful than in other shells, that means they can do a lot more damage if ever you enabled the globsubst option at the same time of the extendedglob one, or without disabling bareglobqual and left some variables unintentionally unquoted. For instance, even: set -o globsubst echo $attacker_controlledWould be an arbitrary command execution vulnerability, because commands can be executed as part of glob expansions, for instance with the evaluation glob qualifier: $ set -o globsubst $ attacker_controlled='.(e[uname])' $ echo $attacker_controlled Linux .emulate sh # or ksh echo $attacker_controlleddoesn't cause an ACE vulnerability (though it still a DoS one like in sh) because bareglobqual is disabled in sh/ksh emulation. There's no good reason to enable globsubst other than in those sh/ksh emulations when wanting to interpret sh/ksh code. What about when you do need the split+glob operator? Yes, that's typically when you do want to leave your variable unquoted. 
But then you need to make sure you tune your split and glob operators correctly before using it. If you only want the split part and not the glob part (which is the case most of the time), then you do need to disable globbing (set -o noglob/set -f) and fix $IFS. Otherwise you'll cause vulnerabilities as well (like David Korn's CGI example mentioned above). Conclusion In short, leaving a variable (or command substitution or arithmetic expansion) unquoted in shells can be very dangerous indeed especially when done in the wrong contexts, and it's very hard to know which are those wrong contexts. That's one of the reasons why it is considered bad practice. Thanks for reading so far. If it goes over your head, don't worry. One can't expect everyone to understand all the implications of writing their code the way they write it. That's why we have good practice recommendations, so they can be followed without necessarily understanding why. (and in case that's not obvious yet, please avoid writing security sensitive code in shells). And please quote your variables on your answers on this site!¹In ksh93 and pdksh and derivatives, brace expansion is also performed unless globbing is disabled (in the case of ksh93 versions up to ksh93u+, even when the braceexpand option is disabled). ² In ksh93 and yash, arithmetic expansions can also include things like 1,2, 1e+66, inf, nan. There are even more in zsh, including # which is a glob operator with extendedglob, but zsh never does split+glob upon arithmetic expansion, even in sh emulation
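As a footnote of sorts to the section above on intentionally using the split+glob operator, here is the usual shape of the precautions it describes; the comma-separated list is just an example value:

    list='alpha.txt,beta gamma.txt,*.txt'
    set -f             # no filename generation
    IFS=,              # split on commas only
    set -- $list       # deliberately unquoted: splitting happens, globbing does not
    set +f; unset IFS  # restore normal behaviour straight away
    for item do
        printf 'item: <%s>\n' "$item"
    done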
If you've been following unix.stackexchange.com for a while, you should hopefully know by now that leaving a variable unquoted in list context (as in echo $var) in Bourne/POSIX shells (zsh being the exception) has a very special meaning and shouldn't be done unless you have a very good reason to. It's discussed at length in a number of Q&A here (Examples: Why does my shell script choke on whitespace or other special characters?, When is double-quoting necessary?, Expansion of a shell variable and effect of glob and split on it, Quoted vs unquoted string expansion) That has been the case since the initial release of the Bourne shell in the late 70s and hasn't been changed by the Korn shell (one of David Korn's biggest regrets (question #7)) or bash which mostly copied the Korn shell, and that's how that has been specified by POSIX/Unix. Now, we're still seeing a number of answers here and even occasionally publicly released shell code where variables are not quoted. You'd have thought people would have learnt by now. In my experience, there are mainly 3 types of people who omit to quote their variables:beginners. Those can be excused as admittedly it's a completely unintuitive syntax. And it's our role on this site to educate them.forgetful people.people who are not convinced even after repeated hammering, who think that surely the Bourne shell author did not intend us to quote all our variables.Maybe we can convince them if we expose the risk associated with this kind of behaviours. What's the worst thing that can possibly happen if you forget to quote your variables. Is it really that bad? What kind of vulnerability are we talking of here? In what contexts can it be a problem?
Security implications of forgetting to quote a variable in bash/POSIX shells
If you don't mind being limited to single-letter argument names i.e. my_script -p '/some/path' -a5, then in bash you could use the built-in getopts, e.g. #!/bin/bashwhile getopts ":a:p:" opt; do case $opt in a) arg_1="$OPTARG" ;; p) p_out="$OPTARG" ;; \?) echo "Invalid option -$OPTARG" >&2 exit 1 ;; esac case $OPTARG in -*) echo "Option $opt needs a valid argument" exit 1 ;; esac doneprintf "Argument p_out is %s\n" "$p_out" printf "Argument arg_1 is %s\n" "$arg_1"Then you can do $ ./my_script -p '/some/path' -a5 Argument p_out is /some/path Argument arg_1 is 5There is a helpful Small getopts tutorial or you can type help getopts at the shell prompt. Edit: The second case statement in while loop triggers if the -p option has no arguments and is followed by another option, e.g. my_script -p -a5, and exits the program.
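If you want option names longer than one letter, as in the question, a hand-rolled while/case loop is the usual fallback; this is only a sketch and does no validation beyond the obvious:

    while [ "$#" -gt 0 ]; do
        case "$1" in
            -p_out) p_out="$2"; shift 2 ;;
            -arg_1) arg_1="$2"; shift 2 ;;
            --) shift; break ;;
            -*) echo "Unknown option: $1" >&2; exit 1 ;;
            *) break ;;
        esac
    done
    printf 'The Argument p_out is %s\n' "$p_out"
    printf 'The Argument arg_1 is %s\n' "$arg_1"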
Is there any easy way to pass (receive) named parameters to a shell script? For example, my_script -p_out '/some/path' -arg_1 '5'And inside my_script.sh receive them as: # I believe this notation does not work, but is there anything close to it? p_out=$ARGUMENTS['p_out'] arg1=$ARGUMENTS['arg_1']printf "The Argument p_out is %s" "$p_out" printf "The Argument arg_1 is %s" "$arg1"Is this possible in Bash or Zsh?
Passing named arguments to shell scripts
.bashrc is a Bash shell script that Bash runs whenever it is started interactively. It initializes an interactive shell session. You can put any command in that file that you could type at the command prompt. You put commands here to set up the shell for use in your particular environment, or to customize things to your preferences. A common thing to put in .bashrc are aliases that you want to always be available. .bashrc runs on every interactive shell launch. If you say: $ bash ; bash ; bashand then hit Ctrl-D three times, .bashrc will run three times. But if you say this instead: $ bash -c exit ; bash -c exit ; bash -c exitthen .bashrc won't run at all, since -c makes the Bash call non-interactive. The same is true when you run a shell script from a file. Contrast .bash_profile and .profile which are only run at the start of a new login shell. (bash -l) You choose whether a command goes in .bashrc vs .bash_profile depending on whether you want it to run once or for every interactive shell start. As a counterexample to aliases, which I prefer to put in .bashrc, you want to do PATH adjustments in .bash_profile instead, since these changes are typically not idempotent: export PATH="$PATH:/some/addition"If you put that in .bashrc instead, every time you launched an interactive sub-shell, :/some/addition would get tacked onto the end of the PATH again, creating extra work for the shell when you mistype a command. You get a new interactive Bash shell whenever you shell out of vi with :sh, for example.
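The PATH remark is worth making concrete. A common idempotent pattern, safe to keep in either file because it only appends the directory when it is missing ($HOME/bin is just an example):

    case ":$PATH:" in
        *:"$HOME/bin":*) ;;                # already there, nothing to do
        *) PATH="$PATH:$HOME/bin" ;;
    esac
    export PATH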
I found the .bashrc file and I want to know the purpose/function of it. Also how and when is it used?
What is the purpose of .bashrc and how does it work?
Ctrl+W is the standard "kill word" (aka werase). Ctrl+U kills the whole line (kill). You can change them with stty. -bash-4.2$ stty -a speed 38400 baud; 24 rows; 80 columns; lflags: icanon isig iexten echo echoe -echok echoke -echonl echoctl -echoprt -altwerase -noflsh -tostop -flusho pendin -nokerninfo -extproc -xcase iflags: -istrip icrnl -inlcr -igncr -iuclc ixon -ixoff ixany imaxbel -ignbrk brkint -inpck -ignpar -parmrk oflags: opost onlcr -ocrnl -onocr -onlret -olcuc oxtabs -onoeot cflags: cread cs8 -parenb -parodd hupcl -clocal -cstopb -crtscts -mdmbuf cchars: discard = ^O; dsusp = ^Y; eof = ^D; eol = <undef>; eol2 = <undef>; erase = ^?; intr = ^C; kill = ^U; lnext = ^V; min = 1; quit = ^\; reprint = ^R; start = ^Q; status = <undef>; stop = ^S; susp = ^Z; time = 0; werase = ^W; -bash-4.2$ stty werase ^p -bash-4.2$ stty kill ^a -bash-4.2$Note that one does not have to put the actual control character on the line, stty understands putting ^ and then the character you would hit with control. After doing this, if I hit Ctrl+P it will erase a word from the line. And if I hit Ctrl+A, it will erase the whole line.
How can I delete a word backward at the command line? I'm truly used to some editors deleting the last 'word' using Ctrl+Backspace, and I'd like that functionality at the command line too. I am using Bash at the moment and although I could jump backward a word and then delete forward a word, I'd rather have this as a quick-key, or event as Ctrl+Backspace. How can accomplish this?
How can I delete a word backward at the command line (bash and zsh)?
Function? mkcdir () { mkdir -p -- "$1" && cd -P -- "$1" }Put the above code in the ~/.bashrc, ~/.zshrc or another file sourced by your shell. Then source it by running e.g. source ~/.bashrc to apply changes. After that simply run mkcdir foo or mkcdir "nested/path/in quotes". Notes:"$1" is the first argument of the mkcdir command. Quotes around it protects the argument if it has spaces or other special characters. -- makes sure the passed name for the new directory is not interpreted as an option to mkdir or cd, giving the opportunity to create a directory that starts with - or --. -p used on mkdir makes it create extra directories if they do not exist yet, and -P used makes cd resolve symbolic links. Instead of source-ing the rc, you may also restart the terminal emulator/shell.
is there any way (what is the easiest way in bash) to combine the following: mkdir foo cd fooThe manpage for mkdir does not describe anything like that, maybe there is a fancy version of mkdir? I know that cd has to be shell builtin, so the same would be true for the fancy mkdir... Aliasing?
Combined `mkdir` and `cd`? [duplicate]
Non-printable sequences should be enclosed in \[ and \]. Looking at your PS1 it has an unenclosed sequence after \W. But the second entry is also redundant, as it repeats the previous statement "1;34". \[\033[01;32m\]\u:\[\033[01;34m\] \W\033[01;34m \$\[\033[00m\] |_____________| |_| | | +--- Let this apply to this as well.As such this should have the intended coloring: \[\033[1;32m\]\u:\[\033[1;34m\] \W \$\[\033[0m\] |_____| | +---- Bold blue.Keeping the "original" this should also work: \[\033[1;32m\]\u:\[\033[1;34m\] \W\[\033[1;34m\] \$\[\033[0m\] |_| |_| | | +-----------+-- Enclose in \[ \]Edit: The reason for the behavior is that bash believes the prompt is longer than it actually is. As a simple example, if one uses: PS1="\033[0;34m$" 1 2345678The prompt is believed to be 8 characters and not 1. As such, if the terminal window is 20 columns, after typing 12 characters, it is believed to be 20 and wraps around. This is also evident if one then tries to do backspace or Ctrl+u. It stops at column 9. However, it also does not start a new line unless one is on the last column; as a result, the first line is overwritten. If one keeps typing, the line should wrap to the next line after 32 characters.
I have an issue where if I type in very long commands in bash the terminal will not render what I'm typing correctly. I'd expect that if I had a command like the following: username@someserver ~/somepath $ ssh -i /path/to/private/key user@host The command should render on two lines. Instead it will often wrap around and start writing over the top of my prompt, somewhat like this: user@host -i /path/to/private/key If I decide to go back and change some argument there's no telling where the cursor will show up, sometimes in the middle of the prompt, but usually on the line above where I'm typing. Additional fun happens when I Up to a previous command. I've tried this in both gnome-terminal and terminator and on i3 and Cinnamon. Someone suggested it was my prompt, so here that is: \[\033[01;32m\]\u:\[\033[01;34m\] \W\033[01;34m \$\[\033[00m\] Ctrl+l, reset, and clear all do what they say, but when I type the command back in or Up the same thing happens. I checked and checkwinsize is enabled in bash. This happens on 80x24 and other window sizes. Is this just something I learn to live with? Is there some piece of magic which I should know? I've settled for just using a really short prompt, but that doesn't fix the issue.
Terminal prompt not wrapping correctly
bash stores exported function definitions as environment variables. Exported functions look like this: $ foo() { bar; } $ export -f foo $ env | grep -A1 foo foo=() { bar }That is, the environment variable foo has the literal contents: () { bar }When a new instance of bash launches, it looks for these specially crafted environment variables, and interprets them as function definitions. You can even write one yourself, and see that it still works: $ export foo='() { echo "Inside function"; }' $ bash -c 'foo' Inside functionUnfortunately, the parsing of function definitions from strings (the environment variables) can have wider effects than intended. In unpatched versions, it also interprets arbitrary commands that occur after the termination of the function definition. This is due to insufficient constraints in the determination of acceptable function-like strings in the environment. For example: $ export foo='() { echo "Inside function" ; }; echo "Executed echo"' $ bash -c 'foo' Executed echo Inside functionNote that the echo outside the function definition has been unexpectedly executed during bash startup. The function definition is just a step to get the evaluation and exploit to happen, the function definition itself and the environment variable used are arbitrary. The shell looks at the environment variables, sees foo, which looks like it meets the constraints it knows about what a function definition looks like, and it evaluates the line, unintentionally also executing the echo (which could be any command, malicious or not). This is considered insecure because variables are not typically allowed or expected, by themselves, to directly cause the invocation of arbitrary code contained in them. Perhaps your program sets environment variables from untrusted user input. It would be highly unexpected that those environment variables could be manipulated in such a way that the user could run arbitrary commands without your explicit intent to do so using that environment variable for such a reason declared in the code. Here is an example of a viable attack. You run a web server that runs a vulnerable shell, somewhere, as part of its lifetime. This web server passes environment variables to a bash script, for example, if you are using CGI, information about the HTTP request is often included as environment variables from the web server. For example, HTTP_USER_AGENT might be set to the contents of your user agent. This means that if you spoof your user agent to be something like '() { :; }; echo foo', when that shell script runs, echo foo will be executed. Again, echo foo could be anything, malicious or not.
There is apparently a vulnerability (CVE-2014-6271) in bash: Bash specially crafted environment variables code injection attack I am trying to figure out what is happening, but I'm not entirely sure I understand it. How can the echo be executed as it is in single quotes? $ env x='() { :;}; echo vulnerable' bash -c "echo this is a test" vulnerable this is a testEDIT 1: A patched system looks like this: $ env x='() { :;}; echo vulnerable' bash -c "echo this is a test" bash: warning: x: ignoring function definition attempt bash: error importing function definition for `x' this is a testEDIT 2: There is a related vulnerability / patch: CVE-2014-7169 which uses a slightly different test: $ env 'x=() { :;}; echo vulnerable' 'BASH_FUNC_x()=() { :;}; echo vulnerable' bash -c "echo test"unpatched output: vulnerable bash: BASH_FUNC_x(): line 0: syntax error near unexpected token `)' bash: BASH_FUNC_x(): line 0: `BASH_FUNC_x() () { :;}; echo vulnerable' bash: error importing function definition for `BASH_FUNC_x' testpartially (early version) patched output: bash: warning: x: ignoring function definition attempt bash: error importing function definition for `x' bash: error importing function definition for `BASH_FUNC_x()' testpatched output up to and including CVE-2014-7169: bash: warning: x: ignoring function definition attempt bash: error importing function definition for `BASH_FUNC_x' testEDIT 3: story continues with:CVE-2014-7186 CVE-2014-7187 CVE-2014-6277
What does env x='() { :;}; command' bash do and why is it insecure?
A stopped job is one that has been temporarily put into the background and is no longer running, but is still using resources (i.e. system memory). Because that job is not attached to the current terminal, it cannot produce output and is not receiving input from the user. You can see jobs you have running using the jobs builtin command in bash, probably other shells as well. Example: user@mysystem:~$ jobs [1] + Stopped python user@mysystem:~$ You can resume a stopped job by using the fg (foreground) bash built-in command. If you have multiple commands that have been stopped you must specify which one to resume by passing the jobspec number on the command line with fg. If only one program is stopped, you may use fg alone: user@mysystem:~$ fg 1 pythonAt this point you are back in the python interpreter and may exit by using control-D. Conversely, you may kill the command with either its jobspec or PID. For instance: user@mysystem:~$ ps PID TTY TIME CMD 16174 pts/3 00:00:00 bash 17781 pts/3 00:00:00 python 18276 pts/3 00:00:00 ps user@mysystem:~$ kill 17781 [1]+ Killed python user@mysystem:~$ To use the jobspec, precede the number with the percent (%) key: user@mysystem:~$ kill %1 [1]+ Terminated pythonIf you issue an exit command with stopped jobs, the warning you saw will be given. The jobs will be left running for safety. That's to make sure you are aware you are attempting to kill jobs you might have forgotten you stopped. The second time you use the exit command the jobs are terminated and the shell exits. This may cause problems for some programs that aren't intended to be killed in this fashion. In bash it seems you can use the logout command which will kill stopped processes and exit. This may cause unwanted results. Also note that some programs may not exit when terminated in this way, and your system could end up with a lot of orphaned processes using up resources if you make a habit of doing that. Note that you can create background processes that will stop if they require user input: user@mysystem:~$ python & [1] 19028 user@mysystem:~$ jobs [1]+ Stopped python user@mysystem:~$ You can resume and kill these jobs in the same way you did jobs that you stopped with the Ctrl-z interrupt.
I get the message There are stopped jobs. when I try to exit a bash shell sometimes. Here is a reproducible scenario in python 2.x:ctrl+c is handled by the interpreter as an exception. ctrl+z 'stops' the process. ctrl+d exits python for reals.Here is some real-world terminal output: example_user@example_server:~$ python Python 2.7.3 (default, Sep 26 2013, 20:03:06) [GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> ctrl+z [1]+ Stopped python example_user@example_server:~$ exit logout There are stopped jobs.Bash did not exit, I must exit again to exit the bash shell.Q: What is a 'stopped job', or what does this mean? Q: Can a stopped process be resumed? Q: Does the first exit kill the stopped jobs? Q: Is there a way to exit the shell the first time? (without entering exit twice)
There are stopped jobs (on bash exit)
Short answer (closest to your answer, but handles spaces) OIFS="$IFS" IFS=$'\n' for file in `find . -type f -name "*.csv"` do echo "file = $file" diff "$file" "/some/other/path/$file" read line done IFS="$OIFS"Better answer (also handles wildcards and newlines in file names) find . -type f -name "*.csv" -print0 | while IFS= read -r -d '' file; do echo "file = $file" diff "$file" "/some/other/path/$file" read line </dev/tty doneBest answer (based on Gilles' answer) find . -type f -name '*.csv' -exec sh -c ' file="$0" echo "$file" diff "$file" "/some/other/path/$file" read line </dev/tty ' exec-sh {} ';'Or even better, to avoid running one sh per file: find . -type f -name '*.csv' -exec sh -c ' for file do echo "$file" diff "$file" "/some/other/path/$file" read line </dev/tty done ' exec-sh {} +Long answer You have three problems:By default, the shell splits the output of a command on spaces, tabs, and newlines Filenames could contain wildcard characters which would get expanded What if there is a directory whose name ends in *.csv?1. Splitting only on newlines To figure out what to set file to, the shell has to take the output of find and interpret it somehow, otherwise file would just be the entire output of find. The shell reads the IFS variable, which is set to <space><tab><newline> by default. Then it looks at each character in the output of find. As soon as it sees any character that's in IFS, it thinks that marks the end of the file name, so it sets file to whatever characters it saw until now and runs the loop. Then it starts where it left off to get the next file name, and runs the next loop, etc., until it reaches the end of output. So it's effectively doing this: for file in "zquery" "-" "abc" ...To tell it to only split the input on newlines, you need to do IFS=$'\n'before your for ... find command. That sets IFS to a single newline, so it only splits on newlines, and not spaces and tabs as well. If you are using sh or dash instead of ksh93, bash or zsh, you need to write IFS=$'\n' like this instead: IFS=' 'That is probably enough to get your script working, but if you're interested to handle some other corner cases properly, read on... 2. Expanding $file without wildcards Inside the loop where you do diff $file /some/other/path/$filethe shell tries to expand $file (again!). It could contain spaces, but since we already set IFS above, that won't be a problem here. But it could also contain wildcard characters such as * or ?, which would lead to unpredictable behavior. (Thanks to Gilles for pointing this out.) To tell the shell not to expand wildcard characters, put the variable inside double quotes, e.g. diff "$file" "/some/other/path/$file"The same problem could also bite us in for file in `find . -name "*.csv"`For example, if you had these three files file1.csv file2.csv *.csv(very unlikely, but still possible) It would be as if you had run for file in file1.csv file2.csv *.csvwhich will get expanded to for file in file1.csv file2.csv *.csv file1.csv file2.csvcausing file1.csv and file2.csv to be processed twice. Instead, we have to do find . -name "*.csv" -print | while IFS= read -r file; do echo "file = $file" diff "$file" "/some/other/path/$file" read line </dev/tty doneread reads lines from standard input, splits the line into words according to IFS and stores them in the variable names that you specify. Here, we're telling it not to split the line into words, and to store the line in $file. Also note that read line has changed to read line </dev/tty. 
This is because inside the loop, standard input is coming from find via the pipeline. If we just did read, it would be consuming part or all of a file name, and some files would be skipped. /dev/tty is the terminal where the user is running the script from. Note that this will cause an error if the script is run via cron, but I assume this is not important in this case. Then, what if a file name contains newlines? We can handle that by changing -print to -print0 and using read -d '' on the end of a pipeline: find . -name "*.csv" -print0 | while IFS= read -r -d '' file; do echo "file = $file" diff "$file" "/some/other/path/$file" read char </dev/tty doneThis makes find put a null byte at the end of each file name. Null bytes are the only characters not allowed in file names, so this should handle all possible file names, no matter how weird. To get the file name on the other side, we use IFS= read -r -d ''. Where we used read above, we used the default line delimiter of newline, but now, find is using null as the line delimiter. In bash, you can't pass a NUL character in an argument to a command (even builtin ones), but bash understands -d '' as meaning NUL delimited. So we use -d '' to make read use the same line delimiter as find. Note that -d $'\0', incidentally, works as well, because bash not supporting NUL bytes treats it as the empty string. To be correct, we also add -r, which says don't handle backslashes in file names specially. For example, without -r, \<newline> are removed, and \n is converted into n. A more portable way of writing this that doesn't require bash or zsh or remembering all the above rules about null bytes (again, thanks to Gilles): find . -name '*.csv' -exec sh -c ' file="$0" echo "$file" diff "$file" "/some/other/path/$file" read char </dev/tty ' exec-sh {} ';'*3. Skipping directories whose names end in .csv find . -name "*.csv"will also match directories that are called something.csv. To avoid this, add -type f to the find command. find . -type f -name '*.csv' -exec sh -c ' file="$0" echo "$file" diff "$file" "/some/other/path/$file" read line </dev/tty ' exec-sh {} ';'As glenn jackman points out, in both of these examples, the commands to execute for each file are being run in a subshell, so if you change any variables inside the loop, they will be forgotten. If you need to set variables and have them still set at the end of the loop, you can rewrite it to use process substitution like this: i=0 while IFS= read -r -d '' file; do echo "file = $file" diff "$file" "/some/other/path/$file" read line </dev/tty i=$((i+1)) done < <(find . -type f -name '*.csv' -print0) echo "$i files processed"Note that if you try copying and pasting this at the command line, read line will consume the echo "$i files processed", so that command won't get run. To avoid this, you could remove read line </dev/tty and send the result to a pager like less.NOTES I removed the semi-colons (;) inside the loop. You can put them back if you want, but they are not needed. These days, $(command) is more common than `command`. This is mainly because it's easier to write $(command1 $(command2)) than `command1 \`command2\``. read char doesn't really read a character. It reads a whole line so I changed it to read line.
I wrote the following script to diff the outputs of two directores with all the same files in them as such: #!/bin/bashfor file in `find . -name "*.csv"` do echo "file = $file"; diff $file /some/other/path/$file; read char; doneI know there are other ways to achieve this. Curiously though, this script fails when the files have spaces in them. How can I deal with this? Example output of find: ./zQuery - abc - Do Not Prompt for Date.csv
Looping through files with spaces in the names? [duplicate]
Environment variables or shell variables introduced by the operating system, shell startup scripts, or the shell itself, etc., are usually all in CAPITALS [1]. To prevent your variables from conflicting with these variables, it is a good practice to use lower_case variable names. [1] A notable exception that may be worth knowing about is the path array, used by the zsh shell. This is the same as the common PATH variable but represented as an array.
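For example, a small script following that practice might look like this (the file name and variable names are purely illustrative):
#!/bin/bash
# UPPER CASE: environment variables provided by the system or shell
echo "Running from $HOME with PATH=$PATH"
# lower case: variables local to this script, so they cannot clash
# with existing environment variables such as PATH or USER
input_file="data.txt"
line_count=$(wc -l < "$input_file")
echo "$input_file has $line_count lines"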
Most languages have naming conventions for variables, the most common style I see in shell scripts is MY_VARIABLE=foo. Is this the convention or is it only for global variables? What about variables local to the script?
Are there naming conventions for variables in shell scripts?
The following command will do it for you. Use caution though if this isn't your intention as this also removes files in the directory and subdirectories. rm -rf directoryname
In bash all I know is that rmdir directorynamewill remove the directory but only if it's empty. Is there a way to force remove subdirectories?
How do I remove a directory and all its contents?
There is no need to execute an external program. bash's string manipulation can handle it (also available in ksh93 (where it comes from), zsh and recent versions of mksh, yash and busybox sh (at least)): $ VERSION='2.3.3' $ echo "${VERSION//.}" 233(In those shells' manuals you can generally find this in the parameter expansion section.)
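The same ${parameter//pattern/replacement} expansion also covers the case where you do want a replacement character; a few illustrative variations:
VERSION='2.3.3'
echo "${VERSION//./_}"   # 2_3_3 - replace every dot with an underscore
echo "${VERSION/./_}"    # 2_3.3 - a single slash replaces only the first match
echo "${VERSION//.}"     # 233   - an empty replacement simply deletes the dots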
I want to parse a variable (in my case it's development kit version) to make it dot(.) free. If version='2.3.3', desired output is 233. I tried as below, but it requires . to be replaced with another character giving me 2_3_3. It would have been fine if tr . '' would have worked. 1 VERSION='2.3.3' 2 echo "2.3.3" | tr . _
remove particular characters from a variable using bash
One problem with your first command is that you redirect stderr to where stdout is (if you changed the $ to a & as suggested in the comment) and then you redirect stdout to some log file, but that does not pull along the redirected stderr. You must do it in the other order: first send stdout to where you want it to go, and then send stderr to the address stdout is at: some_cmd > some_file 2>&1 & and then you could throw the & on the end to send it to the background. Jobs can be accessed with the jobs command. jobs will show you the running jobs, and number them. You could then talk about the jobs using a % followed by the number, like kill %1 or so. Also, without the & on the end you can suspend the command with Ctrl-z, use the bg command to put it in the background and fg to bring it back to the foreground. In combination with the jobs command, this is powerful. To clarify the above part about the order in which you write the commands: suppose stderr is address 1002, stdout is address 1001, and the file is 1008. The command is read left to right, so the first thing it sees in yours is 2>&1, which moves stderr to address 1001; it then sees > file, which moves stdout to 1008, but keeps stderr at 1001. It does not pull everything pointing at 1001 and move it to 1008, but simply references stdout and moves it to the file. The other way around, it moves stdout to 1008, and then moves stderr to the point that stdout is pointing to, 1008 as well. This way both can point to the single file.
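Applied to the command from the question, a version that should behave as intended (assuming the jar path is as you wrote it) looks like this:
# stdout goes to the log file first, then stderr is sent to the same place;
# nohup keeps the process alive after logout and the trailing & backgrounds it
nohup java -jar myProgram.jar > output.log 2>&1 &
# later on, check on it
jobs                # list the shell's jobs
tail -f output.log  # follow the combined output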
Can I redirect output to a log file and a background process at the same time? In other words, can I do something like this? nohup java -jar myProgram.jar 2>&1 > output.log &Or, is that not a legal command? Or, do I need to manually move it to the background, like this: java -jar myProgram.jar 2>$1 > output.log jobs [CTRL-Z] bg 1
Can I redirect output to a log file and background a process at the same time?
Others have answered the basic question: What is it? (Answer: It's a here string.) Let's look at why it's useful. You can also feed a string to a command's stdin like this: echo "$string" | commandHowever in Bash, introducing a pipe means the individual commands are run in subshells. Consider this: echo "hello world" | read first second echo $second $firstThe output of the 2nd echo command prints just a single space. Whaaaa? What happened to my variables? Because the read command is in a pipeline, it is run in a subshell. It correctly reads 2 words from its stdin and assigns to the variables. But then the command completes, the subshell exits and the variables are lost. Sometimes you can work around this with braces: echo "hello world" | { read first second echo $second $first }That's OK if your need for the values is contained, but you still don't have those variables in the current shell of your script. To remedy this confusing situation, use a here string: read first second <<< "hello world" echo $second $firstAh, much better!
What does <<< mean? Here is an example: $ sed 's/a/b/g' <<< "aaa" bbbIs it something general that works with more Linux commands? It looks like it's feeding the sed program with the string aaa, but isn't << or < usually used for that?
What does <<< mean? [duplicate]
I think caching the whole md device makes the most sense. Putting bcache in front of the whole md device does work against the idea of having RAID to some degree, because it introduces another single point of failure. On the other hand, failures of SSD disks are relatively rare, and bcache can be put into the writethrough/writearound mode (in contrast to the writeback mode), where no data is stored only on the cache device and a failure of the cache doesn't kill the information in the RAID, which makes it a relatively safe option. Another fact is that there is significant computational overhead with soft RAID-5; when caching each spinning RAID member separately, the computer still has to re-calculate all the parities, even on cache hits. Obviously, you'd sacrifice some expensive SSD space if you cache each spinning drive separately - unless you plan to use a RAIDed SSD cache. Neither option really affects the time of the growing process - although the option with the spinning drives cached separately has the potential to be slower due to more bus traffic. It is a fast and relatively simple process to configure bcache to remove the SSD drive when you need to replace it. Thanks to the blocks tool it should be possible to migrate the RAID setup both ways in place. You should also remember that, at the moment, most (all?) live-CD distributions don't support bcache, so you can't simply access your data with such tools, regardless of the bcache-mdraid layout option you chose.
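As a rough sketch of the whole-md layout described above (the device names are only examples - adjust them to your drives):
# build the array from the spinning disks first
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
# make the md device the bcache backing device and the SSD partition the cache
make-bcache -B /dev/md0
make-bcache -C /dev/sda1
# attach the cache set to the backing device
# (take the cset.uuid from: bcache-super-show /dev/sda1)
echo <cset-uuid> > /sys/block/bcache0/bcache/attach
# the filesystem then goes on the cached device
mkfs.ext4 /dev/bcache0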
bcache allows one or more fast disk drives such as flash-based solid state drives (SSDs) to act as a cache for one or more slower hard disk drives. If I understand correctly, an SSD* could be assigned to cache multiple backing HDDs, and then the resulting cached devices could be RAIDed with mdadm or multiple HDDs could be RAIDed into a single backing md device and the SSD assigned to cache thatI'm wondering which is the saner approach. It occurs to me that growing a RAID5/6 may be simpler with one or other technique, but I'm not sure which! Are there good reasons (eg growing the backing storage or anything else) for choosing one approach over the other (for a large non-root filesystem containing VM backing files)?* by "an SSD" I mean some sort of redundant SSD device, eg a RAID1 of two physical SSDs
bcache on md or md on bcache
Suppose you've successfully set up bcache, you are already working on it and have put a lot of important data on it - too much to simply back up and start over - when you realize that you'd better replace the caching device. This is how you can do it. This solution is based on VM trials. Let's say we are talking about the device /dev/bcache0, the new cache device is /dev/sdf1 and the backing device is /dev/md1. All commands are run as root. Make sure that nothing is using the bcache0 device. Then, in any order: Detach the old cache device just as Martin von Wittich wrote, by echoing its set UUID into /sys/block/bcache0/bcache/detach. If you want to repartition the old caching device, you need to reboot, because bcache still locks the partitions unless you unregister it. Format the new cache device with make-bcache -C /dev/sdf1 and take note of the set UUID of that device. Attach the backing device to the new cache set: echo [set UUID of new cache device] > /sys/block/bcache0/bcache/attach No need to reboot.
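Put together, the sequence looks roughly like this (same device names as above; the UUIDs are placeholders you get from bcache-super-show):
# 1. detach the old cache set from the bcache device
bcache-super-show /dev/md1 | grep cset.uuid
echo <old-cset-uuid> > /sys/block/bcache0/bcache/detach
# 2. format the new cache device and note its set UUID
make-bcache -C /dev/sdf1
bcache-super-show /dev/sdf1 | grep cset.uuid
# 3. attach the backing device to the new cache set
echo <new-cset-uuid> > /sys/block/bcache0/bcache/attach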
I believe, that once I made sure my cache device state is "clean": $ sudo cat /sys/block/bcache0/bcache/stateI can just physically remove it from the machine when it is powered off or boot with liveCD and clean the superblock with: $ sudo dd if=/dev/zero of=<backing device for cache> bs=1024 count=1024But I cannot find anywhere a confirmation, that this procedure wouldn't mess anything up.
How do I remove the cache device from bcache?
Simply re-register each bcache device in the cache set (both backing and cache devices) to the kernel: echo /dev/<path_to_device> > /sys/fs/bcache/registerOr, if the udev rules from bcache-tools are in place, then partprobe will automatically register the devices when they are scanned.
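For example, after the firmware upgrade you might bring everything back like this (device names are just an example):
echo /dev/sdb > /sys/fs/bcache/register    # backing device
echo /dev/sdc1 > /sys/fs/bcache/register   # cache device
ls /dev/bcache*                            # /dev/bcache0 should reappear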
I need to upgrade some drive firmware and I'd like to shut down bcache for the duration. The docs show how to stop a bcache device:stop Write to this file to shut down the bcache device and close the backing device.For me that will look like this: echo 1 > /sys/block/bcache0/bcache/stopand for the cache device: echo 1 > /sys/fs/bcache/<set-uuid>/stopBut how do I bring the device back (without rebooting the server)?
How to restart a 'stopped' bcache device?
To my best understanding, dm-cache does what you are asking for. I could not find a definite source for this, but here the author explains that he should have called it dm-hotspot, because it tries to find "hot spots", i.e. areas of high activity and only caches those. In the output of dmsetup status you will find two variables, namely read_promote_adjustment and write_promote_adjustment. The cache-policies file explains thatInternally the mq policy determines a promotion threshold. If the hit count of a block not in the cache goes above this threshold it gets promoted to the cache.So by adjusting read_promote_adjustment and write_promote_adjustment you can determine what exactly you mean by frequently read/written data and once the number of reads/writes exceed this threshold, the block will be "promoted" to, that is, stored in, the cache. Remember that this (pre-cache) metadata is usually kept in memory and only written to disk/SSD when the cache device is suspended.
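With the mq policy these tunables can reportedly be changed at runtime via dmsetup message; a sketch, assuming your cached device is named vg-cached (the name and values are only illustrative):
dmsetup status vg-cached    # look for read_promote_adjustment / write_promote_adjustment
# require more hits before a block is promoted, i.e. only really "hot" blocks get cached
dmsetup message vg-cached 0 read_promote_adjustment 8
dmsetup message vg-cached 0 write_promote_adjustment 8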
I'm looking for ways to make use of an SSD to speed up my system. In “Linux equivalent to ReadyBoost?” (and the research that triggered for me) I've learned about bcache, dm-cache and EnhanceIO. All three of these seem capable of caching read data on SSD. However, unless I'm missing something, all three seem to store a file/block/extent/whatever in cache the first time it is read. Large sequential reads might be an exception, but otherwise it seems as if every read cache miss would cause something to get cached. I'd like the cache to cache those reads I use often. I'm worried that a search over the bodies of all my maildir files or a recursive grep in some large directory might evict large portions of stuff I read far more often. Is there any technology to cache frequently read files, instead of recently read ones? Something which builds up some form of active set or some such? I guess adaptive replacement might be a term describing what I'm after. Lacking that, I wonder whether it might make sense to use LVM as a bottom layer, and build up several bcache-enabled devices on top of that. The idea is that e.g. mail reads would not evict caches for /usr and the likes. Each mounted file system would get its own cache of fixed size, or none at all. Does anyone have experience with bcache on top of lvm? Is there a reason against this approach? Any alternative suggestions are welcome as well. Note however that I'm looking for something ready for production use on Linux. I feel ZFS with its L2ARC feature doesn't fall in that category (yet), although you are welcome to argue that point if you are convinced of the opposite. The reason for LVM is that I want to be able to resize space allocated for those various file systems as needed, which is a pain using static partitioning. So proposed solutions should also provide that kind of flexibility.Edit 1: Some clarifications. My main concern is bootup time. I'd like to see all the files which are used for every boot readily accessible on that SSD. And I'd rather not have to worry about keeping the SSD in sync e.g. after package upgrades (which occur rather often on my Gentoo testing). If often-used data which I don't use during boot ends up in the cache as well, that's an added bonus. My current work project e.g. would be a nice candidate. But I'd guess 90% of the files I use every day will be used within the first 5 minutes after pressing the power button. One consequence of this aim is that approaches which wipe the cache after boot, like ZFS L2ARC apparently does, are not a feasible solution. The answer by goldilocks moved the focus from cache insertion to cache eviction. But that doesn't change the fundamental nature of the problem. Unless the cache keeps track of how often or frequently an item is used, things might still drop out of the cache too soon. Particularly since I expect those files I use all the time to reside in the RAM cache from boot till shutdown, so they will be read from disk only once for every boot. The cache eviction policies I found for bcache and dm-cache, namely LRU and FIFO, both would evict those boot-time files in preference to other files read on that same working day. Thus my concern.
SSD as a read cache for FREQUENTLY read data
I managed to fix this by re-creating the superblock on /dev/sdc4. Looks like the --block 4k --bucket 2M was incorrect and that's why the cache device wasn't attaching. I cleared the superblock, then ran: make-bcache -C /dev/sdc4Now when I did: echo 'uuid' > /sys/block/bcache0/bcache/attachit worked!
I have an LVM and I wanted to use bcache to cache one of its LVs. (Yes, I know I could use lvmcache, but I was having issues booting and I gave up using it.) First, I used blocks to convert the LV to a bcache backing device (this seemed to actually work!): blocks to-bcache /dev/my_vg/my_lvI created a caching device on my SSD: make-bcache --block 4k --bucket 2M -C /dev/sdc4I then attempted to attach the cache to the backing device: bcache-super-show /dev/sdc4 | grep cset.uuid echo 'above_uuid' > /sys/block/bcache0/bcache/attachI then rebooted my machine (after adding /dev/bcache0 to /etc/fstab) and realized that the cache wasn't running. # cat /sys/block/bcache0/bcache/state no cache# bcache-super-show /dev/my_vg/my_lv | grep cache_state dev.data.cache_state 0 [detached]Am I missing something? Is there another command I need to use to enable caching? Why does bcache not like my cache device and not letting me attach it to my backing device? Did I use the wrong values for --block and --bucket?
Cannot attach cache device to backing device
If you want to permanently destroy the bcache volume, you need to wipe the bcache superblock from the underlying device. This operation is not exposed through the sysfs interface. So:Stop the bcache device as usual with echo 1 > /sys/block/<device>/bcache/stop. On newer kernels this may fail with "Permission denied". In such a case, you would need to stop the device by its UUID as explained here: ls -la /sys/block/<device>/bcache/set # lrwxrwxrwx 1 root root 0 Jun 19 18:42 /sys/block/<device>/bcache/set -> ../../../../../../../../fs/bcache/<UUID> # Note: UUID is something like "89f4c92a-7fae-4d04-ab3c-7c1dd41fa1a5"echo 1 > /sys/fs/bcache/<UUID>/stopWipe the superblock with head -c 1M /dev/zero > /dev/<device>. (If you have a sufficiently new version of util-linux, you can use wipefs instead, which is more precise in wiping the bcache signature: wipefs -a /dev/<device>.) Obviously, you need to be careful to select the right device because this is a destructive operation that will wipe the header of the device. Take note that you will no longer have access to any data in the bcache volume!
I've tried echoing to detach and stop. The device will remove itself, but will show up again on reboot. One time on reboot, it restored the mdadm raid I had as a backing device! The other time I disabled the ramdrive that it was paired with, did a detach. And /dev/bcache0 came back up again after reboot. There is no unregister under /sys/fs/block/bcache I've also looked in /sys/fs/bcache... /sys/block/md0/md0p1/bcache for this non existent unregister. only register and register-quiet I even uninstalled bcache-tools, and /dev/bcache0 still shows up after reboot and is caching /dev/md0!
How to remove bcache0 volume?
I solved the same issue by setting congested_read_threshold_us and congested_write_threshold_us following bcache documentation:Traffic's still going to the spindle/still getting cache misses In the real world, SSDs don't always keep up with disks - particularly with slower SSDs, many disks being cached by one SSD, or mostly sequential IO. So you want to avoid being bottlenecked by the SSD and having it slow everything down. To avoid that bcache tracks latency to the cache device, and gradually throttles traffic if the latency exceeds a threshold (it does this by cranking down the sequential bypass). You can disable this if you need to by setting the thresholds to 0: # echo 0 > /sys/fs/bcache/<cache set>/congested_read_threshold_us # echo 0 > /sys/fs/bcache/<cache set>/congested_write_threshold_us The default is 2000 us (2 milliseconds) for reads, and 20000 for writes.All disk IO are sent to my SSD(sde) now: Device: wrqm/s r/s w/s rkB/s wkB/s await svctm %util sdb 0.00 0.00 0.30 0.00 0.00 0.00 0.00 0.00 sdd 0.00 0.10 0.30 0.80 0.00 3.00 3.00 0.12 sdc 0.00 2.20 0.30 26.00 0.00 1.76 1.76 0.44 sda 0.00 0.20 0.20 0.80 0.00 8.00 13.00 0.52 sde 293.20 81.70 232.70 1129.20 58220.00 21.05 3.18 100.00 md1 0.00 2.50 0.30 27.60 0.00 0.00 0.00 0.00 md0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 bcache0 0.00 83.00 402.40 1156.80 28994.80 31.70 2.06 99.92
I have 3 HDD and 1 SSD, I have successfully mounted all drives to bcache. pavs@VAS:~$ df -h Filesystem Size Used Avail Use% Mounted on /dev/sda1 132G 35G 90G 28% / none 4.0K 0 4.0K 0% /sys/fs/cgroup udev 3.9G 8.0K 3.9G 1% /dev tmpfs 786M 2.3M 784M 1% /run none 5.0M 0 5.0M 0% /run/lock none 3.9G 152K 3.9G 1% /run/shm none 100M 52K 100M 1% /run/user /dev/bcache1 2.7T 2.1T 508G 81% /var/www/html/directlink/FTP1 /dev/bcache2 1.8T 614G 1.2T 36% /var/www/html/directlink/FTP2 /dev/bcache0 1.8T 188G 1.6T 11% /var/www/html/directlink/FTP3 /dev/sdf1 367G 284G 65G 82% /media/pavs/e93284df-e52e-4a5d-a9e1-323a388b332fThe drives that are being cached are not OS drive. Three HDD with lots of BIG files, on average the files sizes goes from 600mb to 2GB, the smallest file size being 500mb and largest being 10GB. The files are being downloaded constantly through apache webserver. But I am only seeing marginally or no speed up in IO even on frequently accessed files. I don't know what type of cache formula bcache uses or if it can be tweaked for maximum cache performance. Ideally I would like to see frequently accessed files to be cached for at leased a day until there is no request for that file. I don't know if that level of granular cache tweaking is possible. I care about read performance only and would like to see maximum utilization of the SSD drive.EDIT: According to this. bcache "discourages" sequential cache, which if I understand correctly, is a problem for me as most of my files are large sequential files. The default sequential cutoff was 4.0M, it might have prevented the files from being cached (I don't know), so I disabled the cutoff by doing this for each backup drives: echo 0 > /sys/block/bcache0/bcache/sequential_cutoffNow wait and see if it actually improves performance.According to bcache stats all three of the drives are being cached bcache0 pavs@VAS:~$ tail /sys/block/bcache0/bcache/stats_total/* ==> /sys/block/bcache0/bcache/stats_total/bypassed <== 461G==> /sys/block/bcache0/bcache/stats_total/cache_bypass_hits <== 9565207==> /sys/block/bcache0/bcache/stats_total/cache_bypass_misses <== 0==> /sys/block/bcache0/bcache/stats_total/cache_hit_ratio <== 63==> /sys/block/bcache0/bcache/stats_total/cache_hits <== 3003399==> /sys/block/bcache0/bcache/stats_total/cache_miss_collisions <== 659==> /sys/block/bcache0/bcache/stats_total/cache_misses <== 1698297==> /sys/block/bcache0/bcache/stats_total/cache_readaheads <== 0bcache1 pavs@VAS:~$ tail /sys/block/bcache1/bcache/stats_total/* ==> /sys/block/bcache1/bcache/stats_total/bypassed <== 396G==> /sys/block/bcache1/bcache/stats_total/cache_bypass_hits <== 9466833==> /sys/block/bcache1/bcache/stats_total/cache_bypass_misses <== 0==> /sys/block/bcache1/bcache/stats_total/cache_hit_ratio <== 24==> /sys/block/bcache1/bcache/stats_total/cache_hits <== 749032==> /sys/block/bcache1/bcache/stats_total/cache_miss_collisions <== 624==> /sys/block/bcache1/bcache/stats_total/cache_misses <== 2358913==> /sys/block/bcache1/bcache/stats_total/cache_readaheads <== 0bcache2 pavs@VAS:~$ tail /sys/block/bcache2/bcache/stats_total/* ==> /sys/block/bcache2/bcache/stats_total/bypassed <== 480G==> /sys/block/bcache2/bcache/stats_total/cache_bypass_hits <== 9202709==> /sys/block/bcache2/bcache/stats_total/cache_bypass_misses <== 0==> /sys/block/bcache2/bcache/stats_total/cache_hit_ratio <== 58==> /sys/block/bcache2/bcache/stats_total/cache_hits <== 4821439==> /sys/block/bcache2/bcache/stats_total/cache_miss_collisions <== 1098==> 
/sys/block/bcache2/bcache/stats_total/cache_misses <== 3392411==> /sys/block/bcache2/bcache/stats_total/cache_readaheads <== 0
Optimizing bcache
From my limited perspective, there really isn't a reason for everyone to be using a cachepool, and it's the result of blindly following enterprise tutorials for the sake of it. Outside of using a separate physical drive for metadata and a separate physical drive for data, for the sake of improving throughput, I can't find much justification for using a cachepool. If you're only using a single SSD for caching (which, I would assume is the vast majority of desktop users), it seems that a cachevol is more than sufficient. And, it potentially maximizes your caching ability. Rather than having a reduced cache size for data in order to allow room for the metadata, you can leave that up to LVM and allocate the entire device as a cachevol. It seems like the only reason for a cachepool is for caching using two or more SSDs.
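For comparison, the two setups look roughly like this with recent LVM tools (volume group and LV names are made up; check lvmcache(7) for the exact options your version supports):
# cachevol: a single LV on the SSD holds both cache data and metadata
lvcreate -n fast -L 100G vg /dev/fast_ssd
lvconvert --type cache --cachevol fast vg/main
# cachepool: cache data and metadata are split into two sub-LVs (possibly on different PVs)
lvcreate --type cache-pool -n fastpool -L 100G vg /dev/fast_ssd
lvconvert --type cache --cachepool fastpool vg/main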
I'm aware of the differences between a cachepool and cachevol. Cachepool separates the cache data and metadata into two separate volumes, whereas a Cachevol uses a single volume for both. My question is, what is the benefit of using a cachepool instead of just using a cachevol? The only scenario I can think of that it would make the most sense would be if you wanted to dedicate a single device (or single set of devices) to the cache's metadata, and a separate device (or set of devices) for the actual cache data. But that seems like a very specific scenario, and it also doesn't address the question of Why? Why do most people default to using a cachepool instead of a cachevol, esp. when one device is used for caching? What is the motivation/pros-cons to using a cachepool vs cachevol? EDIT For context, the motivation behind this question comes from an assumption that cachepool is used in conservative setups (like home servers and desktops) simply because the guides and tutorials that people follow have trickled down from Enterprise use-cases. Outside of tooling support, and based solely on the merits of cachepool vs cachevol, is there any concrete reason (performance, implementation, etc...) that motivates people to advocate for cachepool instead? Or is it simply a victim of an Enterprise trickle-down phenomenon? If the latter is true, it may justify more conservative setups considering cachevol instead of cachepool if they don't need the flexibility and complexity of a cachepool setup.
LVMCache: Why use a cachepool instead of a cachevol?
Ad question 1. "I mount only the caching device and not any backing partitions and see files that are on those backing partitions" - not true: you mount the bcache device. It must internally be composed of at least one HDD device. The SSD cache is actually optional - so you can still access your data even if your SSD is dead, at least if you use it with default settings. The rest is true. Ad question 2. Yes, there is a heuristic in the bcache module that tries to distinguish between sequential and random reads. But it works at the level of individual system calls - bcache is filesystem agnostic: it doesn't even know that it reads files. So it all boils down to how the game actually loads the data and what system calls are eventually used. Name your game and see if anyone has benchmarked it - or better yet, benchmark it yourself. Bcache did speed up my systems considerably, but I play no games on them. Ad question 3. Yes, bcache uses UUIDs when selecting partitions. Did you read its documentation? Please do. Ad questions 4 and 5. It depends on how much you want to trade system speed against SSD degradation and how much RAM you have (/tmp is often tmpfs, which is RAM-backed). Ad question 6. There are/were at least 2 viable alternatives to bcache which I systematically evaluated. I decided to invest in bcache - mostly for speed, compatibility and popularity. That was in 2014.
I recently bought a 240 GB SSD to speed up my computer with 1 TB HDD. I dual boot Windows and Linux. I want to use my new SSD in the most effective way. Reading through many sites led me to conclusion that, for Linux, bcache is the way to go. I want to make sure that my understanding of bcache is correct. So, I wanted to put commonly used data on SSD and rest on HDD. However I play games on both Windows and Linux, so 240 GB is not quite enough for both (I also use several programs, that use several gigabytes of space). So I wanted to partition my SSD in two ~120 GB partitions, one for Intel Smart Response (for Windows) and second for caching partition for bcache. Now, here is what I've gathered about bcache:Bcache acts as a layer between HDD and RAM I can have many backing devices/partitions (that are on HDD) that are cached by caching partition (on SSD). Their size may be larger than caching partition. Recently read data is put on SSD for later use I mount only caching device and not any backing partitions and see files, that are on those backing partitions Converting existing partitions to be used as backing partitions for bcache is troublesome but possible Resizing backing partitions is also possible but troublesomeNow here are my questions:Is my knowledge about bcache correct? Sequential I/O is ignored by bcache. How does it work with loading games? Can I shuffle my partitions or move their beginnings? (does bcache use UUID or /dev/sdxx when selecting partitions?) I read, that it's not recommended to put partitions such as /tmp or /var on SSD, because constant read/writes will wear the SSD. Should I have those on separate partitions and not set them as backing partitions? What about swap? Should I put it on SSD? Is there any other solution, that would suit my needs better than bcache?Last one is a little bit complicated. I tried to set up pci passthrough of my GPU to windows guest. I managed to boot in vm the system, that I have normally installed on my hdd. So I can boot same windows either natively or through VM. Since I wanted to make minimal changes to hardware that is seen by Windows I passed through entire HDD to VM. Windows only uses its NTFS partitions and my Linux uses remaining ones. Will there be any problems with bcache and that setup? For the record, I use elementary OS (based on Ubuntu 16.04).
Is bcache solution for my ssd use case? [closed]
To answer 1., the most sensible thing to do is to put bcache on top of two LUKS virtual devices. LUKS-encrypting a bcache device might work, but there's no guarantee LUKS will consistently put the same virtual sector in the same physical sector every time. You can encrypt both LUKS devices with the same keyfile and unlock both at the same time.
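A minimal sketch of that layout (device names and the keyfile path are only placeholders):
# unlock both devices with the same keyfile
cryptsetup open /dev/sda2 crypt_hdd --key-file /etc/luks.key
cryptsetup open /dev/sdb1 crypt_ssd --key-file /etc/luks.key
# build bcache on top of the two plaintext mappings
make-bcache -B /dev/mapper/crypt_hdd
make-bcache -C /dev/mapper/crypt_ssd
echo <cset-uuid> > /sys/block/bcache0/bcache/attach
# the filesystem goes on the cached device
mkfs.ext4 /dev/bcache0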
I recently bought a new laptop with a 16Gb mSATA SSD cache drive. I haven't used that one yet. I have, however, opted for Ubuntu 13.04 with "Full Disk Encryption" for the main partition (is that what's called LUKS?). With bcache making it's way into 3.10, I'd like to take advantage of the aforementioned cache drive. According to this, one has to format one's backing and cache drives in order to take advantage of bcache. My questions are:Which order do I proceed in? Set up bcache, then (re)setup LUKS or the other way around? Except for the few files pertaining to the encrypted setup (incl. /etc/fstab), can I tar/rsync/whatever the rest of the existing fs to another disk, set up bcache and LUKS and then tar/rsync/whatever back and expect things to work? Are there other things I should know about?
BCache and disk encryption
The roadmap for bcachefs mentions it:Send and receive: Like ZFS and btrfs have, we need it to. This will give us the ability to efficiently synchronize filesystems over a network for backup/redundancy purposes - much more efficient than rsync, and it'll pair well with snapshots. Since all metadata exists as btree keys, this won't be a huge amount of work to implement: we need a network protocol, and then we need to scan and send for keys newer than version number X (if making use of key version numbers), or keys newer than a given snapshot ID.Since it's in the roadmap, this means it's not implemented yet, but there are plans for it in the future (no date).
I'm using a setup I consider to be rather fragile and prone to failure involving LUKS, LVM, btrfs, and bcache. I have used btrfs for a long time, and have never experienced any significant issues with it. But, it doesn't support caching, nor does it handle erasure coding (i.e. RAID5 or 6 style redundancy). I consider it prone to failure especially when trying to install a new version of my distribution, since it involves a setup that I suspect is very much an outlier, and not accounted for in any kind of testing for upgrade procedures, or any kind of installation of a new system while preserving most of the data from the old one. I would like to move to bcachefs, but I can't tell whether or not it supports, or intends to support anything like btrfs send and btrfs recv. These two commands are critical to my incremental backup strategy. Does it support anything like this now in November of 2023? Will it ever?
Equivalent of `btrfs send` and `btrfs recv` for bcachefs
Mounted the bcache partition to a loop device with sudo losetup -f /dev/[DEVICE] -o 8192 The bcache data is probably only 1KiB or less, but the offset needs to align with the sector size of the disk, in this case 8KiB. This worked perfectly and I've been transferring files to a stable storage pool overnight. If anyone else finds themselves stuck with this issue, get the sector size of your disk with sudo smartctl -a /dev/[DEVICE] (needs smartmontools package) and use increments of that size as an offset with the losetup command I mentioned earlier until the loop device shows a filesystem present when lsblk -f is run.
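A small helper loop along those lines (purely illustrative - substitute your device and use offsets that are multiples of your sector size):
dev=/dev/sdX2
for off in 4096 8192 16384 32768; do
    loop=$(sudo losetup -f --show -o "$off" "$dev")
    echo "offset $off -> $loop: $(sudo blkid -o value -s TYPE "$loop")"
    sudo losetup -d "$loop"
done
# keep the offset whose loop device reports a real filesystem (btrfs in my case),
# then set it up again for good with: sudo losetup -f "$dev" -o <offset>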
I have a disk with a btrfs filesystem on top of bcache that was used in an old installation I no longer have (unintentionally nuked). When I plug in the drive, /dev/bcache0 doesn't show up and I'm not allowed to echo /dev/{dev} into /sys/fs/bcache/register to force it. I have the bcache module loaded, and when I try to rmmod it I get a "module in use" message. lsblk -f indicates that bcache is present on the drive, but I can't map it to get to the btrfs filesystem underneath and recover my data. I don't think it matters, but this is all on top of a dmcrypt encrypted volume, which I have the keyfile for and is accessible without problems. System information Distro: Arch Kernel: 4.12.5-1-ARCH x86_64 bcache Version: 1.0.8-1
Getting files off bcache disk from different computer
This is a hardcoded value in the bcache driver code - linux/drivers/md/bcache/writeback.h. The only way to change this limit is to rebuild the driver from source.
I'm trying to set writeback_percent to a value > 40 but it only accepts values between 0 and 40. If I set echo 50 > /sys/block/bcache0/bcache/writeback_percent then when I read the value with more /sys/block/bcache0/bcache/writeback_percent I get 40. For values <= 40 the setting works fine. My cache mode setting is more /sys/block/bcache0/bcache/cache_mode writethrough [writeback] writearound none I know this is dangerous but this is not a problem for me. As far as I understand, writeback_percent is the percentage of the cache used by dirty data, so why can't I use 90% or 100% of the available space? Maybe I don't understand this setting quite well?
Bcache writeback_percent max value
It depends on hybrid drive type: new Seagate SSHD drives manufactured since late 2013 have far better caching than earlier models. What is also important, bcache caching strategy is totally different than Seagate Adaptive Memory strategy, and while bcache is very fast in benchmarks, AM learns data topology and can outperform bcache in real situations, eg.:loading thousands of game files (textures, sounds etc.) that have been installed sequentially with the game reading from flat file cache (eg. by Chrome) processing MTA mail queueOn the other hand, if the disk is encrypted and Adaptive Memory is not able to learn, then bcache is faster.
I read somewhere that the algorithms used by LVM/bcache are much better than the algorithms implemented by the H-HDD/SSHD drives. Is it true?
SSHD drive caching algorithm vs bcache/LVM
That appears to be expected behaviour.Write-back mode is usually safe because the caching device is journalled. Bcache will rewrite all dirty data after (unexpected) reboots to the persistent backing device, in fact bcache doesn't even finish writing dirty data at shutdown as part of its design, it will always boot dirty and continue writing back dirty data and reliably finish filesystem transactions. Only when the backing device has ack'ed all data written, the caching data is marked clean.From: https://wiki.archlinux.org/title/Talk:Bcache#Whole_article_revamp
I created a Raspberry Pi based bcache on SSD with a HDD based RAID 1 array. After populating the RAID with few TB of content, bcache showed 10% dirty cache. That would be expected, since I have /sys/block/bcache0/bcache/cache_mode set to writeback. However, it stays at 10% indefinitely. The device is running for days without any activity. I even tried to force the cache flush by echo 0 > /sys/block/bcache0/bcache/writeback_percentWhich was properly set, but no disk activity started, as evidenced by iostat, the dirty cache still remains at 10%. Does this mean there's something wrong with bcache? Should I worry, or is there any explanation for this behavior?
Can bcache have non-zero dirty cache forever?
Try this instead: rpm -ivh kernel-3.10.0-957.el7.src.rpm cd ~/rpmbuild/SOURCES rpmbuild -bp kernel.spec cd ~/rpmbuild/BUILD/kernel-3.10.0-957.el7/linux-3.10.0-957.fc32.x86_64 make menuconfig make bzImage make modules
I'm trying to compile the kernel from source on the system CentOS 7. The output of uname -a is: Linux dbn03 3.10.0-957.el7.x86_64 #1 SMP Thu Oct 4 20:48:51 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux Here is how I download the source code and compile it: wget "http://vault.centos.org/7.6.1810/os/Source/SPackages/kernel-3.10.0-957.el7.src.rpm" rpm2cpio ./kernel-3.10.0-957.el7.src.rpm | cpio -idmv make menuconfig Device Drivers ->Multiple devices driver support (RAID and LVM) -><*> Block device as cache make bzImage make modulesAs you see, I just tried to compile the kernel with the module BCACHE. However, when I executed the commands above, I got the error as below: drivers/md/bcache/request.c:675:3: warning: passing argument 2 of ‘part_round_stats’ makes integer from pointer without a cast [enabled by default] part_round_stats(cpu, &s->d->disk->part0); ^ In file included from include/linux/blkdev.h:9:0, from include/linux/blktrace_api.h:4, from drivers/md/bcache/bcache.h:181, from drivers/md/bcache/request.c:9: include/linux/genhd.h:408:13: note: expected ‘int’ but argument is of type ‘struct hd_struct *’ extern void part_round_stats(struct request_queue *q, int cpu, struct hd_struct *part); ^ drivers/md/bcache/request.c:675:3: error: too few arguments to function ‘part_round_stats’ part_round_stats(cpu, &s->d->disk->part0); ^ In file included from include/linux/blkdev.h:9:0, from include/linux/blktrace_api.h:4, from drivers/md/bcache/bcache.h:181, from drivers/md/bcache/request.c:9: include/linux/genhd.h:408:13: note: declared here extern void part_round_stats(struct request_queue *q, int cpu, struct hd_struct *part);It seems that I got a warning and an error. I think I can ignore the warning but the error is fatal. In the header, the function part_round_stats is declared that three parameters are necessary, whereas in the file drivers/md/bcache/request.c, only two parameters are passed to the function part_round_stats. I've tried to google this issue but I got nothing. So what kind of problem did I get here? Is this the error coming from the source code of linux? (I don't think so...), or is this some kind of issue of the versions? or the downloaded source code doesn't support the module BCACHE and the developer of kernel left a fatal error?
Compiling kernel from source got a fatal error: too few arguments to function 'part_round_stats'
After some detailed reviewing I realized the problem is with the block size from the different devices. When adjusted the make-bcache commands, everything worked as expected: make-bcache --block 4k -B /dev/sdb make-bcache --block 4k -C /dev/nvme0n1
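To check what block sizes you are dealing with before running make-bcache, something like this should do (device names as in my setup above):
# physical and logical sector sizes of the backing disk and the NVMe cache
blockdev --getpbsz --getss /dev/sdb
blockdev --getpbsz --getss /dev/nvme0n1
# then pick a --block value that suits both devices, e.g.
make-bcache --block 4k -B /dev/sdb
make-bcache --block 4k -C /dev/nvme0n1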
I'm trying to setup bcache in an instance in google cloud using a raid of local SSDs as caching device but everything fails in the attach phase of the process. For testing purposes I created a new instance with blank disks (two SSDs and four Local SSDs). The objective is to have one regular SSD as backing device and form a raid with the 4 Local SSDs and use that device as caching. In the output below you can see the steps taken (the outcome is the same when using a raid, I just used one Local SSD for simplicity here and because I believe the issue is not related to raid but to the disks themselves). When using the Local SSD as caching device, I fail to attach the caching device to the backing device. When instead I use a regular SSD for caching device, you can see that everything works as expected. The question for you experts out there is: Are there any known limitations for the Local SSDs or am I doing anything wrong (or maybe there are some extra steps needed)? For reference, these are devices being used: /dev/sdb => Backing Device /dev/sdc => SSD Caching Device /dev/nvme0n1 => Local SSD Single Caching Device # apt update && apt install mdadm bcache-tools -y# make-bcache -B /dev/sdb UUID: cb10650f-cf60-4a96-81eb-7149ae650f94 Set UUID: dc0a7f3a-de46-4b00-84f4-4aa40c203745 version: 1 block_size: 1 data_offset: 16# mkfs.ext4 -L cached /dev/bcache0# make-bcache -C /dev/nvme0n1 UUID: c5a33c1e-e1ef-4d3d-a5ac-5d0adc340f43 Set UUID: 228dcba5-6085-47a1-b2e9-eff68dd6ac14 version: 0 nbuckets: 768000 block_size: 8 bucket_size: 1024 nr_in_set: 1 nr_this_dev: 0 first_bucket: 1# bcache-super-show /dev/nvme0n1 sb.magic ok sb.first_sector 8 [match] sb.csum 674DD52F06C4562B [match] sb.version 3 [cache device]dev.label (empty) dev.uuid c5a33c1e-e1ef-4d3d-a5ac-5d0adc340f43 dev.sectors_per_block 8 dev.sectors_per_bucket 1024 dev.cache.first_sector 1024 dev.cache.cache_sectors 786430976 dev.cache.total_sectors 786432000 dev.cache.ordered yes dev.cache.discard no dev.cache.pos 0 dev.cache.replacement 0 [lru]cset.uuid 228dcba5-6085-47a1-b2e9-eff68dd6ac14# echo 228dcba5-6085-47a1-b2e9-eff68dd6ac14 > /sys/block/bcache0/bcache/attach -bash: echo: write error: Invalid argument# make-bcache -C /dev/sdc UUID: 55c95063-9aa7-4d2c-8c8c-d4d34d35a7ad Set UUID: 2de3ccef-a7eb-4620-8b6d-265d0a06da17 version: 0 nbuckets: 204800 block_size: 1 bucket_size: 1024 nr_in_set: 1 nr_this_dev: 0 first_bucket: 1# bcache-super-show /dev/sdc sb.magic ok sb.first_sector 8 [match] sb.csum 11E99ECE7A83EABE [match] sb.version 3 [cache device]dev.label (empty) dev.uuid 55c95063-9aa7-4d2c-8c8c-d4d34d35a7ad dev.sectors_per_block 1 dev.sectors_per_bucket 1024 dev.cache.first_sector 1024 dev.cache.cache_sectors 209714176 dev.cache.total_sectors 209715200 dev.cache.ordered yes dev.cache.discard no dev.cache.pos 0 dev.cache.replacement 0 [lru]cset.uuid 2de3ccef-a7eb-4620-8b6d-265d0a06da17# echo 2de3ccef-a7eb-4620-8b6d-265d0a06da17 > /sys/block/bcache0/bcache/attach# bcache-super-show /dev/sdc sb.magic ok sb.first_sector 8 [match] sb.csum 11E99ECE7A83EABE [match] sb.version 3 [cache device]dev.label (empty) dev.uuid 55c95063-9aa7-4d2c-8c8c-d4d34d35a7ad dev.sectors_per_block 1 dev.sectors_per_bucket 1024 dev.cache.first_sector 1024 dev.cache.cache_sectors 209714176 dev.cache.total_sectors 209715200 dev.cache.ordered yes dev.cache.discard no dev.cache.pos 0 dev.cache.replacement 0 [lru]cset.uuid 2de3ccef-a7eb-4620-8b6d-265d0a06da17# bcache-super-show /dev/sdb sb.magic ok sb.first_sector 8 [match] sb.csum 2E55F82F4131C19B [match] sb.version 1 [backing 
device]dev.label (empty) dev.uuid cb10650f-cf60-4a96-81eb-7149ae650f94 dev.sectors_per_block 1 dev.sectors_per_bucket 1024 dev.data.first_sector 16 dev.data.cache_mode 0 [writethrough] dev.data.cache_state 1 [clean]cset.uuid 2de3ccef-a7eb-4620-8b6d-265d0a06da17
bcache fails to attach local ssd as caching device in google cloud
I found a workaround which has proved itself over several updates on several machines already: after the normal system & kernel upgrade, before the reboot, simply stop bcache: echo 1 >/sys/fs/bcache/*/stop;sleep 2;sync;sync;shutdown -r now;logout On the next reboot, bcache will be back on without any corruption. Most certainly bcache doesn't change its format from kernel version to kernel version, but the fact is: rebooting the machine without a kernel update doesn't corrupt bcache's caching device, while updating the kernel does - even if it is a minor update.
After using a write-around bcache on the root device of an old laptop for more than a year (HD + SD card), I finally found out that some severe filesystem corruptions I was facing -- which lead me to resort to backups and reinstall everything twice (!!) -- were due to bcache corruption on the cache device after rebooting uppon updating/upgrading the kernel. The workaround for that is rather "easy", since it is a read cache: When the boot process says my device is corrupted beyond automatic repair habilities, I just have to remove the cache device before doing a manual fsck, recreate the cache and register it again. -- BTW, never try to fsck an allegedly corrupted file system with a corrupted write-around bcache because, then, you will really corrupt your data. Question: what would be a possible way to prevent this corruption? I use archlinux, therefore, always with a recent version of everything -- the kernel now is 4.19.4.
Corruption on BCache uppon Kernel Update/Upgrade
Settings like the cache mode (writeback) are stored permanently on disk. However, settings like sequential_cutoff, readahead, etc. have to be reset every boot (you can do this with a script / systemd service).
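For example, a small script run at boot - from a systemd oneshot service, rc.local or similar - could re-apply them (paths and values are just examples):
#!/bin/sh
# re-apply runtime-only bcache tunables after every boot
echo 0 > /sys/block/bcache0/bcache/sequential_cutoff
# cache-set-wide tunables live under /sys/fs/bcache/<cset-uuid>/
for f in /sys/fs/bcache/*/congested_read_threshold_us \
         /sys/fs/bcache/*/congested_write_threshold_us; do
    echo 0 > "$f"
done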
I recall when setting up my system, I changed the sequential_cutoff parameter to zero by executing: echo 0 | sudo tee /sys/block/bcache0/bcache/sequential_cutoffNow, after many months I see it back to the default value of 4.0M. On the other side, the cache_mode parameter is still the way I set it, writeback. Is the sequential_cutoff parameter supposed to be permanently stored on disk?
bcache: is sequential_cutoff and other parameters supposed to be permanent?
If you format a piece of the SSD drive to be used as a bcache cache, then it will not be available anymore to store any other filesystem, obviously. But nothing prevents you from using the remaining part of the SSD drive as you see fit. This applies to, for instance, the WD Black2. There are also other SSHD disks, like the Seagate Momentus XT, where the cache is hidden behind the on-disk cache controller, acting in place of the bcache logic. In such a case, unfortunately, you most certainly cannot access the cache yourself and manage it manually.
I heard that LVM/bcache can be used in Linux to store the most accessed files if there is a separate SSD drive. With a hybrid drive (SSHD/H-HDD), can the flash area be manually used by LVM/bcache as well instead of the disk algorithms?
Using the SSD area of a H-HDD drive with LVM/bcache
The stats for the cache device of a certain bcache cache set can be queried using /sys/fs/bcache/$CSET_UUID/cache0/priority_stats. Among other information, it contains the unused field.
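For example, to pull just that field for every cache set on the system (a one-liner sketch; exact field names can vary between kernel versions):
grep -H Unused /sys/fs/bcache/*/cache0/priority_stats
# prints something like: /sys/fs/bcache/<uuid>/cache0/priority_stats:Unused: 37%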
I've got a setup that includes a bcache cache device serving multiple backing devices. I would like to know how much of it is currently in use because bcache only caches certain kinds of data.
How can you check how much of a bcache cache is currently in use?