Well, aside from using e.g. bc or your shell (simply echo "$((0xB79C6440 + 1234))"), you can't. Let's be honest here: hexedit is a fine tool with a time-honored tradition, but it's hardly the hex editor of choice if you're really trying to move around complex files.
If you're looking at executable program code / libraries, radare2 is a very potent tool that also allows you to directly follow jumps, can analyze the binary to extract functions, analyze the entropy in regions of the file to find specific content, etc.
If you're looking to analyze a data storage file format, Ange Albertini's sbud is probably more what you're looking for. There's a neat presentation about that problem.
If you're used to using emacs anyway, guess what major mode exists.
vim has a pretty popular hex-editor mode: in fact, one of the most commonly used tools to just dump the content of a file as hex (as opposed to interactively jumping around in it), xxd, takes its name from it. You enter the hex-edit mode by typing :%!xxd in vim.
Looking at your dump, this looks like something that is going to get loaded as a process image or directly as bare-metal firmware: radare2's r2 is the tool to use, really.
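For the narrow question of computing the target address, shell arithmetic is enough. A minimal sketch, using the address from your dump and an example offset of 0x200 (substitute your real offset, decimal or hex):
# print the sum in uppercase hex so it can be typed straight into hexedit
printf '%X\n' "$((0xB79C6440 + 0x200))"    # prints B79C6640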
I am looking through a memory dump:
B79C6440 64 6F 6E 65 00 00 6C 5F 75 62 6C 65 20 73 68 6F done..l_uble sho
B79C6450 77 5F 00 00 5F 6F 6E 5F 72 75 70 00 00 61 63 6B w_.._on_rup..ack
B79C6460 69 72 71 5F 76 65 63 74 6F 72 73 10 10 05 30 10 irq_vectors...0.
B79C6470 06 50 10 07 70 10 08 90 10 09 B0 10 0A 98 1B FC .P..p...........
B79C6480 16 9C 1B A0 A4 A8 18 6E 6D 69 5F 63 68 65 63 6B .......nmi_check
Using hexedit I can jump to a new address by pressing Enter and typing the address in. Say I want to move from address B79C6440 by adding an offset of X bytes; how could I compute the new address I want to reach so I could type it in?
How to jump X bytes down in hexedit?
I found a sample big-endian cpio archive (it was already referenced in a comment in the libmagic file): # https://sembiance.com/fileFormatSamples/archive/cpio/skeleton2.cpio
The path entries start at the exact same spot (the 26th byte) as in the little-endian archive. So, to answer my own question: no, there's no reason not to check the 26th byte for byte-swapped cpio archives as well.
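A quick way to convince yourself, assuming you have the big-endian skeleton2.cpio from the link and a locally created little-endian archive side by side:
# dump a few bytes starting at offset 26 of each archive;
# a path string should begin right there in both
dd if=skeleton2.cpio bs=1 skip=26 count=16 2>/dev/null | xxd
dd if=/tmp/test.cpio bs=1 skip=26 count=16 2>/dev/null | xxd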
I'm on a little-endian Linux machine and would like to see the canonical hexdump of a cpio archive created on a big-endian Linux machine. Can someone please run these commands on a big-endian Linux system and post the output?
echo TESTING > /tmp/test
cpio -o <<< "/tmp/test" > /tmp/test.cpio
hexdump -C /tmp/test.cpio
If you are curious, I need this because libmagic does the following to determine the cpio archive type:
# same byteorder machine
0 short 070707
26 string >\0 cpio archive
# opposite byteorder machine
0 short 0143561 byte-swapped cpio archive
I want to see if there's a reason libmagic doesn't check the 26th byte of the archive for the opposite-byteorder machine. The output of the command on my little-endian machine:
1 block
00000000 c7 71 1b 00 57 01 a4 81 e8 03 e8 03 01 00 00 00 |.q..W...........|
00000010 ff 65 ce a4 0a 00 00 00 08 00 2f 74 6d 70 2f 74 |.e......../tmp/t|
00000020 65 73 74 00 54 45 53 54 49 4e 47 0a c7 71 00 00 |est.TESTING..q..|
00000030 00 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00 |................|
00000040 0b 00 00 00 00 00 54 52 41 49 4c 45 52 21 21 21 |......TRAILER!!!|
00000050 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00000200
CPIO Archive Hexdump on a Big-Endian Linux Machine
Binary test data (1 GB), discussed here, is created by
dd if=/dev/urandom of=sample.bin bs=64M count=16
Split by byte position: please see the thread about this here. I think this is the most appropriate way to do the split if the byte offset is fixed. You need to determine the locations of the first two event headers and count the size of the event. Consider also the tail of the last event header so you know when to end the splitting.
xxd: the answer is in FloHimself's comment:
xxd -ps sample.bin | process | xxd -ps -r
od: with od -v, one should specify the output format, like the following based on Stephen Kitt's comment:
od -v -t x1 sample.bin
giving
0334260 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0334300 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0334320 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0334340 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0334360 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0334400 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0334420 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0334440 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0334460 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0334500 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0334520 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0334540
which is easier to handle.
Comment about passing data through an ASCII channel other than hex/octal: I think the conversion from binary to hex and back to binary by xxd -ps is sufficient. I did
base64 sample.bin | less -s -M +Gg
but I noticed significantly slower processing and output looking like this:
CGgUAMA0GCSqGSIb3DQEBAQUABIIBABR2EFj76BigPN+jzlGvk9g3rYrHiPKNIjKGprJMaB91ATT6gc0Rs3xlEr6Ybzm8NVcxMnR+2chto/oSh85ExuH4Lk8mELHOIZLeAUUr8eFAXKnZ4SBZ6a8Ewr0x/zX09Bp6IMk18bdVUCT15PT2fbluvJfj7htWCDy0ewm+eU2LIJgkriK8AA0oarqjjK/CIhfglQutfN6QDEp4zqc6tJVqUO7XrEsFlGDOgcPTzeWJuWx31/8MrvEn5HcPzhq+nMI1D6NYjzGhHN08//ObF3z3zthlCDVmowbV161i2LhQ0jy9a/TNyAM0juCR0IF9j7zSyFW0/vvMZYdt5kg1J1EAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
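As a concrete, untested sketch of the xxd route for the real 1 GB file, where fafafafa is the separator described in the question (memory use may be an issue, since the whole hex stream becomes one long line):
# hex round trip: join the hex stream into one line, then split on the separator
# so each output line holds the hex of one event (the separator itself is dropped)
xxd -ps sample.bin | tr -d '\n' | sed 's/fafafafa/\n/g' > events.hex
# convert any single event (here the first) back to binary
sed -n '1p' events.hex | xxd -r -p > event-001.bin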
I need to convert binary data to hex/octave/any suitable format and back to binary when I am splitting one big 1GB file into files containing 777 events each which are not of the same size such that each event is separated by a string fafafafa in hexdump format (but note that this separator may not exist in telnet-examples so you can choose any string there for practice). I am trying to understand which of these commands are suitable for this, motivated by this answer here. The following data source, binary of telnet, is just an example. I use by purpose pseudolevel in commenting about the outputs, not confuse you with details; I have full documentation of headers and their parts but their understanding is not necessary for this task. od - v od -v /usr/bin/telnet | head 0000000 042577 043114 000402 000001 000000 000000 000000 000000 0000020 000003 000076 000001 000000 054700 000000 000000 000000 0000040 000100 000000 000000 000000 073210 000001 000000 000000 0000060 000000 000000 000100 000070 000010 000100 000034 000033 0000100 000006 000000 000005 000000 000100 000000 000000 000000 0000120 000100 000000 000000 000000 000100 000000 000000 000000 0000140 000700 000000 000000 000000 000700 000000 000000 000000 0000160 000010 000000 000000 000000 000003 000000 000004 000000 0000200 001000 000000 000000 000000 001000 000000 000000 000000 0000220 001000 000000 000000 000000 000034 000000 000000 000000Commentsfirst strings should be some header but it is odd that they go from 2, 4, 6, 10, ... so I think this may be a limitation laterhexdump -v hexdump -v /usr/bin/telnet | head 0000000 457f 464c 0102 0001 0000 0000 0000 0000 0000010 0003 003e 0001 0000 59c0 0000 0000 0000 0000020 0040 0000 0000 0000 7688 0001 0000 0000 0000030 0000 0000 0040 0038 0008 0040 001c 001b 0000040 0006 0000 0005 0000 0040 0000 0000 0000 0000050 0040 0000 0000 0000 0040 0000 0000 0000 0000060 01c0 0000 0000 0000 01c0 0000 0000 0000 0000070 0008 0000 0000 0000 0003 0000 0004 0000 0000080 0200 0000 0000 0000 0200 0000 0000 0000 0000090 0200 0000 0000 0000 001c 0000 0000 0000Commentsnumbering of the first string ok some letters between so may be later problem for readability first string different size than later four letter comboshexdump -vb hexdump -vb /usr/bin/telnet | head 0000000 177 105 114 106 002 001 001 000 000 000 000 000 000 000 000 000 0000010 003 000 076 000 001 000 000 000 300 131 000 000 000 000 000 000 0000020 100 000 000 000 000 000 000 000 210 166 001 000 000 000 000 000 0000030 000 000 000 000 100 000 070 000 010 000 100 000 034 000 033 000 0000040 006 000 000 000 005 000 000 000 100 000 000 000 000 000 000 000 0000050 100 000 000 000 000 000 000 000 100 000 000 000 000 000 000 000 0000060 300 001 000 000 000 000 000 000 300 001 000 000 000 000 000 000 0000070 010 000 000 000 000 000 000 000 003 000 000 000 004 000 000 000 0000080 000 002 000 000 000 000 000 000 000 002 000 000 000 000 000 000 0000090 000 002 000 000 000 000 000 000 034 000 000 000 000 000 000 000Command od -v gives me six letter strings like 000000 042577 which I think is an octave format. Another command hexdump -v gives me also four letter strings like 457f 464c but with some octave options hexdump -vo gives three letter words like 000000 177 105 .... Which of these commands are suitable for data manipulation of binary data such as to make splitting easy?
Commands for data manipulation of binary to octal/hex formats?
Backticks are used for command substitution; you need:
watch './md /dev/ttyUSB0 | xxd'
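With the quotes in place you can also tune the refresh interval; ./md and the device path are from the question, the rest is ordinary watch usage:
# re-run the whole pipeline every second; the single quotes keep the pipeline
# as watch's argument instead of letting the shell run it first
watch -n 1 './md /dev/ttyUSB0 | xxd'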
My tool is writing binary chars to stdout and I can view it in hex with
# ./md /dev/ttyUSB0 | xxd
0000000: 6f03 1100 0003 0084 8400 0000 0900 0a00 o...............
0000010: 0008 0004 0000 0000 2c00 0000 0000 0000 ........,.......
...
00000b0: 8000 8000 8000 8000 8000 8000 8000 8000 ................
00000c0: 8047 ffff ff6f 04fd 2180 ff02 f700 f702 .G...o..!.......
00000d0: fbb6 00bf 10e1 a57f 4004 fb00 a780 7e00 ........@.....~.
Now, when I am trying to watch this screen with
watch `./md /dev/ttyUSB0 | xxd`
watch `./md /dev/ttyUSB0 | hexdump`
it prints something that looks like either corrupted or misinterpreted output. What do I do wrong?
How to `watch` output of `xxd` or `hexdump` command?
From your other questions I take it you're using OS X. The default HFS+ filesystem on OS X is case-insensitive: you can't have two files called "abc" and "ABC" in the same directory, and trying to access either name will get to the same file. The same thing can happen under Cygwin, or with case-insensitive filesystems (like FAT32 or ciopfs) anywhere. Because grep is a real executable, it's looked up on the filesystem (in the directories of PATH). When your shell looks in /usr/bin for either grep or GREP it will find the grep executable. Shell builtins are not looked up on the filesystem: because they're built in, they are accessed through (case-sensitive) string comparisons inside the shell itself. What you're encountering is an interesting case. While cd is a builtin, accessed case-sensitively, CD is found as an executable /usr/bin/cd. The cd executable is pretty useless: because cd affects the current shell execution environment, it is always provided as a shell regular built-in, but there is a cd executable for POSIX's sake anyway, which changes directory for itself and then immediately terminates, leaving the surrounding shell where it started. You can try these out with the type builtin: $ type cd cd is a shell builtin $ type CD CD is /usr/bin/CDtype tells you what the shell will do when you run that command. When you run cd you access the builtin, but CD finds the executable. For other builtins, the builtin and the executable will be reasonably compatible (try echo), but for cd that isn't possible.
Why is this? When I do this:
CD ~/Desktop
it doesn't take me to the Desktop. But this:
echo "foo bar" | GREP bar
gives me:
bar
Why can Shell builtins not be run with capital letters but other commands can?
Rsync will not use deltas but will transmit the full file in its entirety if it - as a single process - is responsible for the source and destination files. It can transmit deltas when there is a separate client and server process running on the source and destination machines. The reason that rsync will not send deltas when it is the only process is that in order to determine whether it needs to send a delta it needs to read the source and destination files. By the time it's done that it might as well have just copied the file directly. If you are using a command of this form you have only one rsync process:
rsync /path/to/local/file /network/path/to/remote/file
If you are using a command of this form you have two rsync processes (one on the local host and one on the remote) and deltas can be used:
rsync /path/to/local/file remote_host:/path/to/remote/file
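For completeness, rsync does have a switch to force the delta algorithm even in the single-process case, though as explained above it usually gains nothing because both files still have to be read in full:
# local and mounted-path copies default to --whole-file; this overrides that
rsync --no-whole-file -av /path/to/local/file /network/path/to/remote/file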
I have a large file (2-3 GB, binary, undocumented format) that I use on two different computers (normally I use it on a desktop system but when I travel I put it on my laptop). I use rsync to transfer this file back and forth. I make small updates to this file from time to time, changing less than 100 kB. This happens on both systems. The problem with rsync as I understand it is that if it thinks a file has changed between source and destination it transfers the complete file. In my situation it feels like a big waste of time when just a small part of a file has changed. I envisage a protocol where the transfer agents on source and destination first checksum the whole file and then compare the result. When they realise that the checksum for the whole file is different, they split the file into two parts, A and B, and checksum them separately. Aha, B is identical on both machines, let's ignore that half. Now it splits A into A1 and A2. OK, only A2 has changed. Split A2 into A2I and A2II and compare, etc. Do this recursively until it has found, e.g., three parts that are 1 MB each that differ between source and destination, and then transfer just these parts and insert them in the right position in the destination file. Today, with fast SSDs and multicore CPUs, such parallelisation should be very efficient. So my question is, are there any tools that work like this (or in another manner I couldn't imagine but with a similar result) available today? A request for clarification has been posted. I mostly use Mac so the filesystem is HFS+. Typically I start rsync like this: rsync -av --delete --progress --stats - in this case I sometimes use SSH and sometimes rsyncd. When I use rsyncd I start it like this: rsync --daemon --verbose --no-detach. Second clarification: I ask for either a tool that just transfers the delta for a file that exists in two locations with small changes, and/or whether rsync really offers this. My experience with rsync is that it transfers the files in full (but now there is an answer that explains this: rsync needs an rsync server to be able to transfer just the deltas; otherwise (e.g., using an ssh shell) it transfers the whole file however much has changed).
Smarter file transfers than rsync?
You should try to rebuild the catalog file (B-tree) on the specified file system (which is HFS+) by specifying the -r option for fsck, for example:
$ fsck.hfsplus -fryd /dev/sdd2
This option currently will only work if there is enough contiguous space on the specified file system for a new catalog file and if there is no damage to the leaf nodes in the existing catalog file (in other words, fsck is able to traverse each of the nodes in the requested B-tree successfully). Of course, do a backup (a whole-disk image dump) before performing any disk operations, if you don't want to risk corrupting any data further. See more by running man fsck.hfsplus. If this doesn't help, try using some other tools to repair your disk, e.g.:
TestDisk by CGSecurity | Mac, Windows, Linux (apt-get install testdisk)
DiskWarrior by Alsoft (commercial) - bootable disk or Mac app
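A sketch of that whole-disk image dump, with an output path on some other, healthy disk that you would adjust:
# raw image of the damaged partition; noerror keeps going past unreadable
# sectors and sync pads them so the offsets in the image stay aligned
dd if=/dev/sdd2 of=/path/to/other/disk/sdd2.img bs=1M conv=noerror,sync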
My drive is formatted to hfs+ and it is not clean. For example, when I'm trying to mount the drive by mount -f -o rw, dmesg displays the error: hfs: Filesystem was not cleanly unmounted, running fsck.hfsplus is recommended. mounting read-only.So when I'm trying to repair it via fsck.hfsplus (part of hfsprogs) it says: $ fsck -dyf /media/sdd2 ** /dev/sdd2 Using cacheBlockSize=32K cacheTotalBlock=1024 cacheSize=32768K. ** Checking HFS Plus volume. ** Detected a case-sensitive catalog. ** Checking Extents Overflow file. ** Checking Catalog file. Invalid map node linkage (4, 0) ** Volume check failed. volume check failed with error 7 volume type is pure HFS+ primary MDB is at block 0 0x00 alternate MDB is at block 0 0x00 primary VHB is at block 2 0x02 alternate VHB is at block 3906291630 0xe8d547ae sector size = 512 0x200 VolumeObject flags = 0x07 total sectors for volume = 3906291632 0xe8d547b0 total sectors for embedded volume = 0 0x00 Despite using -y or -f, the drive is not being repaired. Here is the explanation of that error according to this blog:Once the B*-Tree has been checked, fsck moves on to checking the Allocation Map. fsck checks the header node as described above. Then it checks through each node making sure it identifies itself as a map node and has the proper number of records. If the node fails these checks fsck returns “Invalid Map Node.” Then fsck checks to make sure the node height is not 0 (“Invalid Node height”). Finally, if it has made it to the bottom of the tree and the mapSize,(which stores the total number of records in the tree and is decremented each time a node is processed) is not 0, fsck knows there are nodes that are orphaned and returns “Invalid map node linkage.”However, I don't know how to fix that error as it's not being corrected automatically and I can't mount the partition to be writable. Any ideas how to fix that error? P.S. Disk Utility has similar problem.
How to fix invalid map node linkage?
Apple's new APFS filesystem supports copy-on-write; CoW is automatically enabled in Finder copy operations where available, and when using cp -c on the command line. Unfortunately, cp -c is equivalent to cp --reflink=always (not auto), and will fail when copy-on-write is not possible, with:
cp: somefile: clonefile failed: Operation not supported
I'm not aware of a way to get auto behavior. You could make a shell script or function with automatic fallback, à la
cpclone() { cp -c "$@" || cp "$@"; }
but it'll be difficult to make it entirely reliable for all edge cases.
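Usage would be the same as plain cp; the file names here are just examples:
# clones instantly where the filesystem supports it (APFS), silently falls
# back to a normal copy elsewhere (e.g. HFS+)
cpclone big-video.mov /Volumes/Backup/big-video.mov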
cp --reflink=auto shows the following output on macOS:
cp: illegal option -- -
Is copy-on-write or deduplication supported for HFS? How can I COW huge files with HFS?
cp --reflink=auto for MacOS X
I may have found something: the ls command on OS X has this switch:
-O Include the file flags in a long (-l) output.
The result is:
$ ls -O Info.plist
-rw-r--r-- 1 root wheel compressed 15730 11 jui 15:02 Info.plist
I just checked (experimentally) that du always reports 0 for HFS+ compressed files. Copying a compressed file uncompresses it, so logically du reports the correct size for a copied, uncompressed file. Here is an explanation for the behaviour of du, from "HFS+ File Compression":
In Mac OS X 10.6, Apple introduced file compression in HFS+. Compression is most often used for files installed as part of Mac OS X; user files are typically not compressed (but certainly can be!). Reading and writing compressed files is transparent as far as Apple's file system APIs are concerned. Compressed files have an empty data fork. This means that forensic tools not aware of HFS+ file compression (including TSK before 4.0.0) will not see any data associated with a compressed file!
There is also a discussion of this subject in Mac OS X and iOS Internals: To the Apple's Core by Jonathan Levin, in chapter 16: To B(-Tree) or not to be - The HFS+ file systems. Also, afsctool may help to see which files are compressed in a folder:
$ afsctool -v /Applications/Safari.app/
/Applications/Safari.app/.:
Number of HFS+ compressed files: 1538
Total number of files: 2247
Total number of folders: 144
Total number of items (number of files + number of folders): 2391
Folder size (uncompressed; reported size by Mac OS 10.6+ Finder): 29950329 bytes / 34.7 MB (megabytes) / 33.1 MiB (mebibytes)
Folder size (compressed - decmpfs xattr; reported size by Mac OS 10.0-10.5 Finder): 21287197 bytes / 23.8 MB (megabytes) / 22.7 MiB (mebibytes)
Folder size (compressed): 22694835 bytes / 25.2 MB (megabytes) / 24 MiB (mebibytes)
Compression savings: 24.2%
Approximate total folder size (files + file overhead + folder overhead): 26353338 bytes / 26.4 MB (megabytes) / 25.1 MiB (mebibytes)
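Since du counts allocated data-fork blocks, BSD stat shows the same effect directly. A small check, assuming the same Info.plist as above:
# %b = blocks allocated, %z = logical size in bytes; a decmpfs-compressed
# file reports 0 blocks even though its size is non-zero
stat -f 'blocks=%b size=%z' /Applications/Safari.app/Contents/Info.plist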
What is the explanation for the difference: $ ls -l /Applications/Safari.app/Contents/Info.plist -rw-r--r-- 1 root wheel 15730 11 jui 15:02 /Applications/Safari.app/Contents/Info.plist$ du -sh /Applications/Safari.app/Contents/Info.plist 0B /Applications/Safari.app/Contents/Info.plistOnce the file is copied in my home folder, ls and du report the same number. $ cp /Applications/Safari.app/Contents/Info.plist . $ du -sh Info.plist; ls -l Info.plist 16K Info.plist -rw-r--r-- 1 ant staff 15730 17 oct 16:53 Info.plistBoth directories are on this partition ( / ) diskutil info / Device Identifier: disk0s2 Device Node: /dev/disk0s2 Part of Whole: disk0 Device / Media Name: ml2013Volume Name: OSX.10.8 Escaped with Unicode: OSX.10.8Mounted: Yes Mount Point: / Escaped with Unicode: /File System Personality: Journaled HFS+ Type (Bundle): hfs Name (User Visible): Mac OS Extended (Journaled) Journal: Journal size 40960 KB at offset 0xc83000 Owners: EnabledHere is the output of stat: $ stat Info.plist 16777218 8780020 -rw-r--r-- 1 root wheel 0 15730 "Oct 17 17:47:12 2013" \ "Jun 11 15:02:17 2013" "Jun 11 15:02:17 2013" "Apr 27 11:49:34 2013"\ 4096 0 0x20 Info.plist
Why does du report a size of 0 for some non-empty files on a HFS+ partition?
First I'll create a 500M image file: $ cd /tmp $ fallocate -l $((1024*1024*500)) ./diskNow I'll give it a GPT: $ gdisk ./diskGPT fdisk (gdisk) version 0.8.10Partition table scan: MBR: not present BSD: not present APM: not present GPT: not presentCreating new GPT entries.o for create new GPT. Command (? for help): o This option deletes all partitions and creates a new protective MBR. Proceed? (Y/N): yn for create new partition. I just press enter to select all defaults after that. Command (? for help): n Partition number (1-128, default 1): 1 First sector (34-1023966, default = 2048) or {+-}size{KMGTP}: Last sector (2048-1023966, default = 1023966) or {+-}size{KMGTP}: Current type is 'Linux filesystem' Hex code or GUID (L to show codes, Enter = 8300): Changed type of partition to 'Linux filesystem'w writes changes to disk. Command (? for help): wFinal checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING PARTITIONS!!Do you want to proceed? (Y/N): y OK; writing new GUID partition table (GPT) to ./disk. Warning: The kernel is still using the old partition table. The new table will be used at the next reboot. The operation has completed successfully.Now I'll set it up as a partitioned block device and format the first partition with a filesystem: $ sync; lp=$(sudo losetup --show -fP ./disk) $ sudo mkfs.vfat -n SOMEDISK "${lp}p1"Results: $ lsblk -o NAME,FSTYPE,LABEL,PARTUUID "$lp"NAME FSTYPE LABEL PARTUUID loop0 └─loop0p1 vfat SOMEDISK f509e1d4-32bc-4a7d-9d47-b8ed0f280b36 Now, to change that. First, destroy the block dev: $ sudo losetup -d "$lp"Now, edit the GPT: $ gdisk ./diskGPT fdisk (gdisk) version 0.8.10 Partition table scan: MBR: protective BSD: not present APM: not present GPT: present Found valid GPT with protective MBR; using GPT.i gives extended info about a single partition. Had I more than one partition, I would next be prompted to enter its partition number. The same goes for the c command later. Command (? for help): i Using 1 Partition GUID code: 0FC63DAF-8483-4772-8E79-3D69D8477DE4 (Linux filesystem) Partition unique GUID: F509E1D4-32BC-4A7D-9D47-B8ED0F280B36 First sector: 2048 (at 1024.0 KiB) Last sector: 1023966 (at 500.0 MiB) Partition size: 1021919 sectors (499.0 MiB) Attribute flags: 0000000000000000 Partition name: 'Linux filesystem'x is the xperts menu. Command (? for help): xc for change PARTUUID. Expert command (? for help): c Using 1 Enter the partition's new unique GUID ('R' to randomize): F509E1D4-32BC-4A7D-9D47-B00B135D15C5 New GUID is F509E1D4-32BC-4A7D-9D47-B00B135D15C5w writes out the changes to disk (or, in this case, to my image file). Expert command (? for help): wFinal checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING PARTITIONS!!Do you want to proceed? (Y/N): y OK; writing new GUID partition table (GPT) to ./disk. Warning: The kernel is still using the old partition table. The new table will be used at the next reboot. The operation has completed successfully.$ sync; lp=$(sudo losetup --show -fP ./disk)The results: $ lsblk -o NAME,FSTYPE,LABEL,PARTUUID "$lp"NAME FSTYPE LABEL PARTUUID loop0 └─loop0p1 vfat SOMEDISK f509e1d4-32bc-4a7d-9d47-b00b135d15c5
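If you prefer to avoid the interactive session, sgdisk (from the same gdisk family) can make the identical change non-interactively; the GUID below is the one chosen in the walkthrough above:
# set the unique partition GUID (PARTUUID) of partition 1 in the image
sgdisk --partition-guid=1:F509E1D4-32BC-4A7D-9D47-B00B135D15C5 ./disk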
I needed to make a clone of my hard drive recently (bad blocks FTW). I was using Clonezilla at the time. However, Clonezilla refused to copy the HFS+ partition, so I did it manually. The problem is that the UUIDs are out of sync. What is the command to set a specific UUID for HFS+?
Changing HFSPlus UUID from PartedMagic
No, I do not believe that either the native HFS+ driver or the Paragon HFS+ products support extended attributes. According to the HFS+ Wikipedia page, the status of these drivers is very basic in the features that they support, and they have been known to corrupt HDDs in certain situations.
Excerpt from a CentOS thread:
On Wednesday, March 07, 2012 01:17:15 PM Wessel van der Aart wrote: so i add user_xattr and acl to my fstab options but then it fails to mount. checking the error in dmesg just gives me "hfs: unable to parse mount options". does anyone know what's going on and what i should do to make this work?
Well, having used the in-kernel HFS+ filesystem driver before, and found it lacking in a number of areas (like massive corruption under heavy load or when unlinking lots of files) I bought the commercially supported Paragon NTFS&HFS drivers.
Excerpt from the same CentOS thread:
i tried their free version today. at first it did look promising but as soon i was to perform actions on files with acl's on them the whole system came down hard and leaving my external HDD corrupted. after several hours i've decided to give up and go with ext4 but still thanks!
References: HFS+ Wikipedia Page; hfs with extended attribute support
I'm in the process of setting up a file server, and I'm looking for a way to preserve extended attributes in files that come from OS X machines, and manipulate them while the file is on the server. Obviously (well, presumably) this will require using HFS+ on the server, which is not a problem (unless there are hidden downsides I should know about), but I'm concerned that the support for HFS+ is minimal and will either (1) not preserve these attributes, or (2) preserve them but require copying the file to an OS X machine to manipulate them. How complete is the support for HFS+ in Linux? Will I be able to do everything I've mentioned?
On Linux, can you write to HFS+ extended attributes?
If you have space, please back up the disk as a whole (e.g. dd if=/dev/sdb of=disk.img bs=1M) before running random programs like fsck on things that you don't think are valid partitions :p. I'm not saying you've damaged it, but there's a very good chance of doing so while experimenting.
The partition table shown by parted & the kernel looks incredible :(. But if it was created on a PowerMac, surely that's too old for GPT. And your ASCII dump (while not a recommended way of identification) does look like there's an Apple Partition Map there. Note the 'PM' signature. 'ER' also fits in. If there were a PC-style MBR you'd expect to see some error messages in the ASCII dump of the first sector. This looks mutually exclusive with GPT as well, barring black magic, which there's no reason for anyone to have loosed upon the world. (Black magic as used in Linux boot media for compatibility; see the ER link above. Your information has too many points of divergence from this case - e.g. there'd be an MBR superimposed on the first sector, containing error messages used by isolinux.)
I don't have any Mac experience, but I suggest running testdisk. It works like parted's rescue mode. See if it identifies anything reasonable, i.e. a Mac-supported filesystem that covers the majority of the drive. I think it should show a starting offset for the partition in terms of 512-byte sectors. Then you could try the offset identified by testdisk using a loop device, e.g.:
losetup -f -o offset-in-bytes /dev/sdb   => /dev/loopN
mount /dev/loopN /mnt
If you can't mount the filesystem and you only have a few files using common formats, you could try photorec (from the same link). It works like testdisk but on common file formats (originally for photos, hence the name) instead of file systems.
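Spelled out with numbers from your listings (the big HFS+ partition that parted calls number 10 starts at byte 135151616, i.e. sector 263968; substitute whatever start testdisk actually reports for you):
# read-only loop device at the partition offset, then a read-only mount attempt
losetup -r -f --show -o 135151616 /dev/sdb    # prints the loop device, e.g. /dev/loop0
mount -o ro -t hfsplus /dev/loop0 /mnt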
I would like to better understand what is on this hard disk, and how I can mount it into Linux (specifically Debian GNU/Linux, Stable): It was created on a Powerbook g4 "alu book" with the default program, and used as a backup drive. Now I'm trying to rescue it or at least just use dd to save images of the partitions where the data actually is (where?). It has been mounted a few times in old macs, but doesn't always, and should have a single partition with a handful of files in it. I was thinking to use dd to blow away the appropriate bytes; will this let me mount it like a standard GPT uefi drive? Data speaks: First 2 blocks Here is what the first 2 512-byte-blocks look like, dumped out into Bash ER���@x$����"��PM?AppleApple_partition_Manual mount Trying to mount individual pieces of this partition, is not what I want to do; I want to mount the entire drive, like it would read on a mac. I don't understand where the files are, and why there are 15-16 partitions instead of one! Output from 'Analyse' option on testdisk Disk /dev/sdb - 160 GB / 149 GiB - CHS 19457 255 63 Partition Start End Size in sectors P HFS 262208 148499399 148237192 P HFS 148499400 148523975 24576 P HFS 148786120 212717799 63931680 P HFS 212979944 271039599 58059656 P HFS 271301744 312581791 41280048and here are the preceeding partitions according to testdisk initial info: 1 P partition_map 1 63 63 2 P Driver43 64 119 56 3 P Driver43 120 175 56 4 P Driver_ATA 176 231 56 5 P Driver_ATA 232 287 56 6 P FWDriver 288 799 512 7 P Driver_IOKit 800 1311 512 8 P Patches 1312 1823 512 9 P Free 1824 263967 262144 10 P HFS 263968 ...parted: (parted) unit b (parted) p Model: ST916082 3AS (scsi) Disk /dev/sdb: 160041885696B Sector size (logical/physical): 512B/512B Partition Table: macNumber Start End Size File system Name Flags 1 512B 32767B 32256B Apple 2 32768B 61439B 28672B Macintosh 3 61440B 90111B 28672B Macintosh 4 90112B 118783B 28672B Macintosh 5 118784B 147455B 28672B Macintosh 6 147456B 409599B 262144B Macintosh 7 409600B 671743B 262144B Macintosh 8 671744B 933887B 262144B Patch Partition 10 135151616B 91240419327B 91105267712B hfs+ Apple_HFS_Untitled_1 11 91240419328B 91777290239B 536870912B hfs+ Apple_HFS_Untitled_2 13 91911507968B 113693339647B 21781831680B hfs+ Apple_HFS_Untitled_3 14 113693339648B 113727942655B 34603008B hfs+ Apple_HFS_Untitled_4 16 113862160384B 160041877503B 46179717120B hfs+ Apple_HFS_Untitled_5
Is this a so-called "Hybrid" Mac Partition table, and how can I mount this in Linux?
Note: it seems you need to mount an hfsplus filesystem read/write, which is a bit problematic because of its journal. However, you can mount it read/write as seen here and here.
The problem is that /dev/sde2 is mounted read-only, according to the ro flag in the parentheses in the last line:
/dev/sde2 on /media/dev/andre_clients type hfsplus (ro,nosuid,nodev,uhelper=udisks2)
Therefore you can't change anything on this disk. Remount it as read+write (rw):
sudo mount -o remount,rw /partition/identifier /mount/point
In your case:
sudo mount -o remount,rw /dev/sde2 /media/dev/andre_clients
Before you do that, though, make sure you remount the right partition identifier by using dmesg | tail, e.g.:
[25341.272519] scsi 2:0:0:0: Direct-Access [...]
[25341.273201] sd 2:0:0:0: Attached scsi generic sg1 type 0
[25341.284054] sd 2:0:0:0: [sde] Attached SCSI removable disk
[...]
[25343.681773] sde: sde2
The most recent sdX: sdXX line gives you a hint about which partition identifier (the sdXX one) your device is identified with. You can also check which dev your device is connected to by doing
ll /dev/disk/by-id/
This will give you all symbolic links of the device and its partitions:
lrwxrwxrwx 1 root root 9 Jul 22 16:02 usb-manufacturername_*serialnumber* -> ../../sdb
lrwxrwxrwx 1 root root 10 Jul 22 16:02 usb-manufacturername_*serialnumber*-part1 -> ../../sdb1
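If the remount still comes back read-only because of the journal (the dmesg line in the question says exactly that), the hfsplus driver's force option is the documented escape hatch, at your own risk; take a backup first:
# journaled HFS+ can only be written by the Linux driver with "force"
sudo mount -t hfsplus -o remount,force,rw /dev/sde2 /media/dev/andre_clients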
I ran the command fdisk -l to find out what my external drive is formatted to, I found out it's uses GPT partitions and the filesystem is HFS+. When I try and create a new folder on the external drive I receive the following message: chmod: changing permissions of 'file_name/': Read-only file systemIf I run mount this is the output: /dev/sda1 on / type ext4 (rw,errors=remount-ro) proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys type sysfs (rw,noexec,nosuid,nodev) none on /sys/fs/cgroup type tmpfs (rw) none on /sys/fs/fuse/connections type fusectl (rw) none on /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type securityfs (rw) udev on /dev type devtmpfs (rw,mode=0755) devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620) tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755) none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880) none on /run/shm type tmpfs (rw,nosuid,nodev) none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755) none on /sys/fs/pstore type pstore (rw) systemd on /sys/fs/cgroup/systemd type cgroup (rw,noexec,nosuid,nodev,none,name=systemd) gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,user=dev) /dev/sdc2 on /media/dev/andre backup type hfsplus (ro,nosuid,nodev,uhelper=udisks2) /dev/sde2 on /media/dev/andre_clients type hfsplus (ro,nosuid,nodev,uhelper=udisks2)So now I ran umount /dev/sde2 and unplugged the device then reconnected the device and ran the command dmesg | tail and got this information back: [429154.613747] sd 14:0:0:0: [sde] Assuming drive cache: write through [429154.615995] sd 14:0:0:0: [sde] Test WP failed, assume Write Enabled [429154.616993] sd 14:0:0:0: [sde] Asking for cache data failed [429154.616997] sd 14:0:0:0: [sde] Assuming drive cache: write through [429154.669277] sde: sde1 sde2 [429154.671369] sd 14:0:0:0: [sde] Test WP failed, assume Write Enabled [429154.672742] sd 14:0:0:0: [sde] Asking for cache data failed [429154.672747] sd 14:0:0:0: [sde] Assuming drive cache: write through [429154.672751] sd 14:0:0:0: [sde] Attached SCSI disk [429157.047244] hfsplus: write access to a journaled filesystem is not supported, use the force option at your own risk, mounting read-only.Would it now be safe to run sudo mount -o remount,rw /dev/sde2 /media/dev/andre_clients without loosing any infomation?
Changing file permissions on an HFS+ filesystem
Sure. The only reason that only NTFS and HFS+ were mentioned was because that's what the vast majority of people purchasing their product are going to use. This isn't OS-specific, but I would strongly recommend that you always make sure to properly unmount the drive before you disconnect the USB cable. USB drives aren't always as fast as internal drives, and if you disconnect the cable before the drive has completed writing you'll potentially lose data!
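A typical way to do the reformat, assuming the drive shows up as /dev/sdb with a single partition (check with lsblk first; this erases everything on it):
sudo umount /dev/sdb1                    # make sure nothing is mounted
sudo mkfs.ext4 -L canvio1tb /dev/sdb1    # -L sets a volume label of your choice
sudo eject /dev/sdb                      # safely detach before unplugging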
Today I bought a new Toshiba 1TB Canvio Ready USB 3.0 Portable External Hard Drive (Black). The specifications page of the portable hard drive says it has been formatted with the NTFS file system and can be re-formatted to HFS+ for full Mac compatibility:
File system NTFS (MS Windows) * The drive can be re-formatted to HFS+ file system for full Mac compatibility.
However, I am a GNU/Linux user and I wish to re-format the portable external hard drive to the ext4 file system. Is it okay to do so?
Is it okay to format my Toshiba Canvio Ready Portable Hard Drive to "ext4"?
You've found a bug (or, possibly, a "we'll get to it someday...") in Apple's HFS+ fsck. It sounds like it needs to try fixing your file to a different name after finding out its first attempt isn't available. This leaves you with a couple of options. First, take a backup of any data you can currently read from the FS. Ideally, take an image (bit-for-bit copy) and work on that. Corruption always makes me wonder how it happened. There are a lot of places it could have come from, but the most worrisome would be bad memory. I'd run a memory test on the machine. The file names it is printing out appear to be UTF-16LE, which gives a name α̍λογο/σελ.37.tif. It wants to change it to άλογο/σελ.37.tif - not sure why. Google Translate tells me that's Greek, and it makes sense, so I'm guessing it's right. It's possible that an rm (or mv) on one of those will work. You really want to attempt to hex-decode the file name it's giving on the command line; I used xxd -p -r to do so, but I'm not sure if you have that on Mac OS X. Who knows if that weird file name will survive copy & paste from my terminal, through my web browser, Stack Exchange, your browser, and finally copy & paste to your terminal. I also note the / in the file name; that's an actual forward slash, not something that just looks like it. I'm not sure if that's allowed by HFS+. Anyway, if all that doesn't work, you have three next approaches to try:
Format the filesystem and restore from backup.
HFS+ fsck is open source; you could download the source and attempt to fix it.
Look up the HFS+ specifications (hopefully it's documented; HFS was, and I presume Apple documented HFS+ too). Use a filesystem editor (if you can find one) or, worst case, a hex editor to fix it, or at least delete the file.
The simplest edit might be to change a few bytes of the file name. For example, you could easily change the .tif at the end (2E 00 74 00 69 00 66 00) to .bad (2E 00 62 00 61 00 64 00). Then run fsck again, and that'll hopefully lead to a non-duplicate name.
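To decode those names yourself, the hex that fsck prints can be pushed back through xxd and iconv; the hex string below is copied from the fsck output in your question and is UTF-16LE:
# turn the hex back into bytes, then interpret them as UTF-16LE text
echo 'B1 03 0D 03 BB 03 BF 03 B3 03 BF 03 2F 00 C3 03 B5 03 BB 03 2E 00 33 00 37 00 2E 00 74 00 69 00 66 00' \
  | xxd -r -p | iconv -f UTF-16LE -t UTF-8; echo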
Well... It's about on an OS X system (hfs+), but that's unix after all to. :p system has become unbootable, due to a filesystem error... fsck.hfsplus fails because .... ... ** Checking Catalog file. Illegal name illegal name is 0xB1 03 0D 03 BB 03 BF 03 B3 03 BF 03 2F 00 C3 03 B5 03 BB 03 2E 00 33 00 37 00 2E 00 74 00 69 00 66 00 replacement name is 0xB1 03 01 03 BB 03 BF 03 B3 03 BF 03 2F 00 C3 03 B5 03 BB 03 2E 00 33 00 37 00 2E 00 74 00 69 00 66 00 .... ** Repairing volume. replacement name already exists duplicate name is 0xB1 03 01 03 BB 03 BF 03 B3 03 BF 03 2F 00 C3 03 B5 03 BB 03 2E 00 33 00 37 00 2E 00 74 00 69 00 66 00 FixIllegalNames - repair failed for type 0x23B 571 ** The volume Macintosh HD could not be repaired. ...using find -mtime I managed to locate some problematic files, that.... do not actually exist.... # ls -lhai ls: cannot access USB 프린터 공유: No such file or directory ls: cannot access 시동 디스크: No such file or directory ls: cannot access 애플 메뉴 선택사항: No such file or directory ls: cannot access 인터넷: No such file or directory ls: cannot access 파일 관리자: No such file or directory total 0 152704 drwxr-xr-x 1 root root 7 Apr 23 16:49 . 152677 drwxr-xr-x 1 root root 18 Apr 23 14:55 .. ? -????????? ? ? ? ? ? 애플 메뉴 선택사항 ? -????????? ? ? ? ? ? 시동 디스크 ? -????????? ? ? ? ? ? 파일 관리자 ? -????????? ? ? ? ? ? 인터넷 ? -????????? ? ? ? ? ? USB 프린터 공유rm -r on them simply does not have any result or error. rmdir and rm -rf on the parent directory do not do the trick, because "the directory is not empty". Tried to touch those files and # ls -lhai ls: cannot access USB 프린터 공유: No such file or directory ls: cannot access 시동 디스크: No such file or directory ls: cannot access 애플 메뉴 선택사항: No such file or directory ls: cannot access 인터넷: No such file or directory ls: cannot access 파일 관리자: No such file or directory total 0 152704 drwxr-xr-x 1 root root 7 Apr 23 16:52 . 152677 drwxr-xr-x 1 root root 18 Apr 23 14:55 .. ? -????????? ? ? ? ? ? 애플 메뉴 선택사항 ? -????????? ? ? ? ? ? 시동 디스크 ? -????????? ? ? ? ? ? 파일 관리자 ? -????????? ? ? ? ? ? 인터넷 ? -????????? ? ? ? ? ? USB 프린터 공유 # touch USB\ 프린터\ 공유 # ls -lhai ls: cannot access 시동 디스크: No such file or directory ls: cannot access 애플 메뉴 선택사항: No such file or directory ls: cannot access 인터넷: No such file or directory ls: cannot access 파일 관리자: No such file or directory total 0 152704 drwxr-xr-x 1 root root 8 Apr 23 17:09 . 152677 drwxr-xr-x 1 root root 18 Apr 23 14:55 .. ? -????????? ? ? ? ? ? 애플 메뉴 선택사항 ? -????????? ? ? ? ? ? 시동 디스크 ? -????????? ? ? ? ? ? 파일 관리자 ? -????????? ? ? ? ? ? 인터넷 4641964 -rw-r--r-- 1 root root 0 Apr 23 17:09 USB 프린터 공유 4641964 -rw-r--r-- 1 root root 0 Apr 23 17:09 USB 프린터 공유dual entries with the same inode... # rm -f U*but that also brings me to the initial situation # ls -lhai ls: cannot access USB 프린터 공유: No such file or directory ls: cannot access 시동 디스크: No such file or directory ls: cannot access 애플 메뉴 선택사항: No such file or directory ls: cannot access 인터넷: No such file or directory ls: cannot access 파일 관리자: No such file or directory total 0 152704 drwxr-xr-x 1 root root 7 Apr 23 16:52 . 152677 drwxr-xr-x 1 root root 18 Apr 23 14:55 .. ? -????????? ? ? ? ? ? 애플 메뉴 선택사항 ? -????????? ? ? ? ? ? 시동 디스크 ? -????????? ? ? ? ? ? 파일 관리자 ? -????????? ? ? ? ? ? 인터넷 ? -????????? ? ? ? ? ? USB 프린터 공유any ideas of anything I could try????
Files that do not exist, prevent me from deleting them, corrupted filesystem
This was broken in OpenSUSE 12.1, but when I updated my PC to OpenSUSE 13.1, the problem simply stopped existing. Weird, but true! Must have been a kernel bug or something...
I'm having trouble trying to mount a filesystem. Basically, if I mount /dev/sdc1, it works perfectly. But if I mount /dev/sdc with a byte offset, it fails. The filesystem is HFS+ (formatted using an actual iMac). root# fdisk -l /dev/sdcDisk /dev/sdc: 320.1 GB, 320072932864 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142447 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sdc1 63 128519 64228+ af HFS / HFS+ root# mount /dev/sdc1 /mnt root# echo $? 0 root# umount /mnt root# mount --ro -o offset=32256,sizelimit=6576994 /dev/sdc /mnt mount: wrong fs type, bad option, bad superblock on /dev/loop20, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so root# dmesg | tail [164258.208493] hfs: unable to find HFS+ superblock [164398.983651] hfs: invalid secondary volume header [164398.983654] hfs: unable to find HFS+ superblock [164404.235785] hfs: invalid secondary volume header [164404.235787] hfs: unable to find HFS+ superblock [164407.461400] hfs: invalid secondary volume header [164407.461404] hfs: unable to find HFS+ superblockWhat the heck am I doing wrong here? P.S. The offsets seem to match the output from fdisk, so that's not the problem. EDIT Added fdisk output.
Can't mount filesystem by offset
mkfs.hfs -s /dev/sdd2
From man mkfs.hfs:
-s Creates a case-sensitive HFS Plus filesystem. By default a case-insensitive filesystem is created. Case-sensitive HFS Plus file systems require a Mac OS X version of 10.3 (Darwin 7.0) or later.
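Putting it together with the device and mount point from the question (-v sets an optional volume name; the name here is just an example):
mkfs.hfs -s -v 'CaseSensitive' /dev/sdd2
mount -t hfsplus /dev/sdd2 /mnt/foo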
Linux can format an (external) disk as HFS+, e.g.: apt-get install gparted hfsprogs, then gparted /dev/sdd, right-click on the partition to format, choose HFS+, click Apply, quit; mount -t hfsplus /dev/sdd2 /mnt/foo. But then you can't make both /mnt/foo/xyzzy and /mnt/foo/XYZZY, because gparted used macOS's default option, case-insensitive. So copying files onto it from Linux causes all sorts of problems. Can Linux format it as case-sensitive? Or must I plug the disk into a Mac to format it like that? Related: https://apple.stackexchange.com/questions/334330/which-filesystems-support-symbolic-links
Format disk as HFS+, but case sensitive?
The command line tools that come pre-installed on OS X come from FreeBSD, but many guides online will probably assume a Linux environment and GNU tools. They're not always the same. Compare the two man pages for FreeBSD stat and GNU stat. In FreeBSD, -f sets the output format and takes a corresponding argument. In GNU stat -f asks for the output about the filesystem (not the named file), and takes no argument. So, 1) the result is different because you're using a different tool, 2) the format options are mentioned under "Formats" in the FreeBSD man page. 3) The quotes aren't really related to stat itself, but the shell. Command line arguments that contain characters special to the shell (like whitespace, or glob characters ?*[] etc) need to be quoted to prevent the shell from processing them. But % isn't special (at least not in that context), so it doesn't matter if it's quoted or not.
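A concrete example of the FreeBSD/macOS form, using the specifiers the question mentions; the quotes only keep the format string together as one shell word:
# -f takes a format string on BSD stat: %k = optimal I/O block size,
# %z = size in bytes, %b = number of blocks allocated
stat -f 'blocksize=%k size=%z blocks=%b' /dev/disk0s2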
First of all, I'm completely ignorant regarding shell commands. So, please, be patient. Also, I'm using OS X, but I'm happy with an answer in the generality of Unix if such thing is possible. I'm trying to execute the command stat -f [some parameters here] [volume name]. According to references elsewhere, the most simple command of this form stat -f /dev/disk0s2 is syntactically correct. However I've got the following: stat -f /dev/disk0s2 /dev/disk0s2According to here (https://www.computerhope.com/unix/stat.htm), for instance, I should get a paragraph full of information. In my case, I'm mostly interested in the block size obtained through the stat command. Also, here (https://apple.stackexchange.com/questions/42509/how-to-get-hfs-filesystem-blocksize), it's mentioned the parameters "%k, %z, %b". However in the manual of stat (i.e., by using man stat) I can't find such parameters. Furthermore I have no idea why the quotation marks are being used there (I've seen both stat -f %k and stat -f "%k", for instance). So, in summary, I have three questions: 1) Why stat -f /dev/disk0s2 is not giving me the expected output? 2) What are these %k, %z and %b parameters and are they being mentioned in the manual? 3) What's the meaning of the quotation marks around the above mentioned parameters (for instance, stat -f %k and stat -f "%k")? Is this a general stuff in the syntax of Unix commands? Thanks in advance.
Some questions about the stat command for file systems
I see three problems here of which two were explainable immediately and one needed more investigation by program dvd+rw-mediainfo. First, you create an ISO 9660 filesystem and try to mount it as HFS+. This is supposed to fail with "mount: wrong fs type, ...". Well, your error message rather points to a medium problem before mount has a chance to complain about the filesystem type. Nevertheless, it looks as if you should leave out the arguments "-t hfsplus". Second, you ran into a known growisofs bug which is said to be harmless https://bugs.launchpad.net/ubuntu/+source/dvd+rw-tools/+bug/1113679 It is caused by the fact that growisofs sees an unformatted BD-R when it starts, later formats it by default, but in the end forgets that it is formatted and issues a CLOSE SESSION command which is appropriate only for unformatted media. Workaround is to use growisofs option "-use-the-force-luke=spare:none" or to format the BD-R by program dvd+rw-format before you give it to growisofs or to apply the code fix shown in https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=713016 The third and decisive problem is the operating system believing that there is no readable medium in the drive. Program dvd+rw-mediainfo shows why: The drive does not report any of its supported profiles as "current". A MMC profile is a set of features, typically associated to a particular medium type. The drive announces to support BD-RE, BD-R, BD-ROM, DVD+R/DL, DVD+R, DVD+RW, DVD-R/DL, DVD-RW, DVD-RAM, DVD-R, DVD-ROM, CD-RW, CD-R, CD-ROM, and "Removable Disk". But none of them bears the "current" bit. So the program concludes the same as the Linux kernel: No medium. I get exact this reply from an ASUS BW-16D1HT if no medium is inserted. With BD-R inserted, profile 0x0041 "sequential BD-R" is marked by the byte "01" after "41": GET [CURRENT] CONFIGURATION: 0000: 00 43 00 00 00 42 00 00 00 41 01 00 00 40 00 00 That's the "current" bit which is missing in your drive's output. So either the drive went blind or the medium is so damaged that the drive does not recognize its type. Obvious remedy proposals are: Try other drive or other medium.
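The two workarounds mentioned above, written out with the device and image name from your burn command:
# either pre-format the blank BD-R once before burning ...
dvd+rw-format /dev/sr0
# ... or skip spare-area formatting, which avoids the bogus CLOSE SESSION
growisofs -use-the-force-luke=spare:none -Z /dev/sr0=mrwizard-archive-001.iso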
I backed up my MacBook using a Blu-ray burner on my CentOS server. When I try to mount the Blu-ray disk, $ mount -t hfsplus /dev/sr0 /mnt/blurayI get the error, mount: no medium found on /dev/sr0I believe the write was successful. I use a disk cataloger immediately after I burn every disk, and I have a catalog of the disk contents built from the mounted disk. I didn't give a thought to testing the disk since it was created from files copied to the server and the disk was clearly mounted during the cataloging step. How I made the backup Because I can't find a Linux package which plays nice with my Blu-ray drive, I use K3B to write an ISO from files copied to the CentOS server. Then I use growisofs to burn the Blu-Ray: $ growisofs -Z /dev/sr0=mrwizard-archive-001.iso |& tee -a burn.log Other Linux disks I've made will mount, so I know it's not the drive or drivers. I'm 99% sure these are the steps I followed for this Mac OS backup. A grep in history shows I copied files to a directory with the same name as the Blu-ray disk image (also found in history). [UPDATE, just in case it wasn't clear, the backup was made four months ago in March] There are numerous posts around the net talking about hfs+ and CentOS. These recommend the kmod-hfsplus package which I have installed. This package was necessary to transfer the files to CentOS. Also, here's the tail from the growisofs log, 24024383488/24142608384 (99.5%) @1.8x, remaining 0:14 RBU 100.0% UBU 54.3% 24049221632/24142608384 (99.6%) @1.7x, remaining 0:11 RBU 99.8% UBU 43.5% 24078647296/24142608384 (99.7%) @2.0x, remaining 0:07 RBU 100.0% UBU 40.3% 24102764544/24142608384 (99.8%) @1.6x, remaining 0:04 RBU 100.0% UBU 45.7% 24126881792/24142608384 (99.9%) @1.6x, remaining 0:01 RBU 93.8% UBU 39.2% /dev/sr0: flushing cache /dev/sr0: closing track /dev/sr0: closing session :-[ CLOSE SESSION failed with SK=5h/INVALID FIELD IN CDB]: Input/output error.This last error seems to be related to something else. As the OP in this post says the disk is mountable and readable even with this error. Why growisofs made Blu-ray disk won't mount with Mac OS files? Why would the disk appear to be mounted, only to fail mounting later? What might happen as a result of these steps (k3b made iso, growisofs) and files from Mac OS, which might cause problems with this media? What don't I understand about hfs+ file system, k3b iso's and growisofs which makes my disk a coaster? 
$ dvd+rw-mediainfo /dev/sr0 long INQUIRY: [ATAPI ][iHBS112 2 ][CL0J] MODE SENSE[#3Fh]: 01: 00 80 00 00 00 00 00 00 00 00 05: 40 05 08 00 00 00 00 00 00 00 00 00 00 96 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 08: 04 00 00 00 00 00 00 00 00 00 0D: 00 00 00 3c 00 4b 0E: 04 00 00 00 00 4b 01 ff 02 ff 00 00 00 00 18: 00 01 00 01 00 00 00 00 00 01 00 01 00 00 00 00 00 00 00 00 00 01 00 01 00 00 1A: 00 03 00 00 02 58 00 00 04 b0 1D: 00 00 00 00 00 06 04 b0 00 00 2A: 3f 37 f1 77 29 23 21 14 01 00 20 00 21 14 00 10 21 14 21 14 00 01 00 00 00 00 21 14 00 09 00 00 21 14 00 00 1b 91 00 00 16 0d 00 00 10 8a 00 00 0b 07 00 00 05 84 00 00 00 00 00 00 00 00 00 00 00 00 30: 2e 00 00 00 00 00 00 00 00 00 00 00 00 00 GET [CURRENT] CONFIGURATION: 0000: 00 43 00 00 00 42 00 00 00 41 00 00 00 40 00 00 00 2b 00 00 00 1b 00 00 00 1a 00 00 00 16 00 00 00 15 00 00 00 14 00 00 00 13 00 00 00 12 00 00 00 11 00 00 00 10 00 00 00 0a 00 00 00 09 00 00 00 08 00 00 00 02 00 00 0001: 00 00 00 07 01 00 00 00 0002: 02 00 00 00 0003: 39 00 00 00 0100: 0105: 00 00 00 00 0108: 33 37 37 32 35 31 32 30 31 32 20 32 31 36 31 30 37 35 30 30 34 34 34 20//ERROR OUTPUT :-( no media mounted, exiting...
What about hfs+ file system, `k3b` iso's and `growisofs` makes my blu-ray disk a coaster?
I was able to image the disk with testdisk, which produced a result similar to what ddrescue normally would have, had it been successful. Then I used hfsprescue on the image created by testdisk. The process couldn't have been easier, following those two steps. I recovered 99.99% of my files, including the directory structure, by doing this. (A few files were corrupted by bad sectors.)
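Roughly, the second step looked like this; the image file name is an example, and hfsprescue walks you through its remaining steps after the initial scan:
# run hfsprescue's first (scan) step against the image rather than the raw device
hfsprescue -s1 image.dd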
I have a 1TB External drive that was formatted HFS+ for use with a Macintosh Apple Computer. The drive is completely full. I'm using Linux to try to recover all the files, or to repair the disk and then mount it and retrieve everything. I tried gddrescue, but that was taking too long to finish. At 0.06% of the recovery, it slows down to bytes/second scanned, and eventually sped up a little, but still seemed too slow for my liking. To fully recover this 1TB, it had 41 years remaining. I ran it for a day, and decided that 41 years of ddrescue is longer than the data will have value. I then started photorec, which seems to be recovering deleted files from the partition just fine. However, I would like to recover everything (including an intact directory structure, preferably) and not just the files that were deleted before the drive began to fail. I tried fsck.hfsplus -d and got these results, ** /dev/sdd1 Using cacheBlockSize=32K cacheTotalBlock=1024 cacheSize=32768K. ** Checking HFS Plus volume. Catalog file entry not found for extent (4, 0) ** Volume check failed. volume check failed with error 7 volume type is pure HFS+ primary MDB is at block 0 0x00 alternate MDB is at block 0 0x00 primary VHB is at block 2 0x02 alternate VHB is at block 1953458172 0x746f67fc sector size = 512 0x200 VolumeObject flags = 0x07 total sectors for volume = 1953458174 0x746f67fe total sectors for embedded volume = 0 0x00 Seeing the Catalog file entry not found for extent error, which I also get when trying to preen, I decided to rebuild the catalog with -r, but no success. I think a catalog must exist in order for it to be rebuilt or repaired. fsck.hfsplus -q reports a DIRTY FILESYSTEM. I tried using hsfprescue which seems like the perfect tool for my problem, but during the analyze step (hfsprescue -s1) it hangs at 0.06% just like ddrescue. Likewise testdisk hangs during analysis when it gets to cylindar 74. I gave the drive to someone that has an iMac and he tried it's GUI disk utility software to fix the drive, to no avail. (I do not know what error is produced, if any) I have a backup of this drive from some time last year, but the backup has less than half of the amount of data that's actually stored on the drive currently. I'm looking for advice on how to either repair the disk so it can be mounted and all the data copied off, or perhaps another tool similar to photorec can be recommended, but one which can recover all files and folder structures. Basically, what I should do next, at this point. Also, in your advice, feel free to refrain from "Lesson learned, constantly back up ur stuff!" lectures. The drive is not mine, and the backup that I do have was made without the owner's knowledge, consent, permission, etc. and I'm unable to convince said owner of the value of backing up data, even after events such as drive failures.
How to recover HFS+ Partition Catalog (Possible failing drive)
The actions available on file systems are listed on the GParted Features page. Currently (latest version 0.23.0) the Label and UUID features are not available for HFS and HFS+ file systems. This is due to limitations in the underlying file system tools.
I have read through all the man pages of my hfs-related packages (hfsplus-tools, hfsutils), and I cannot find a way to relabel a hfs+ volume. gparted also seems to be able on my system to do all the other operations on hfs+, but not change UUID and label (it can view them though, I suppose through libblkid). Is it really not possible?
relabel hfs+ volumes
Unfortunately, the answer seems to be that parted and other command-line tools are not capable of writing labels to HFS/HFS+ partition schemes at the moment, possibly ever. Parted is the most feature-rich tool for editing partitions, and it in turn depends on hfsprogs, which does not seem to support HFS+ relabeling at this time, so it does not seem possible. Given that this basic functionality has not been added in the 4 years since I asked this question, parted will probably never support HFS+ relabelling, though I'd be happy to be proven wrong. Source: https://gparted.org/features.php
I am attempting to rename an HFS+ partition for a friend to use. After formatting, the volume appears as "untitled". Gparted and Nemo cannot seem to relabel it due to their own limitations for dealing with HFS. I can read/write normally, but currently cannot edit the GPT label, which is what I want to do. How can I do this?
How can I relabel an HFS+ partition?
I interrupted the second dd session after about 100 GB into the disk. Then I booted off an external OS X drive with DiskWarrior, borrowed from a friend. From there I got a list of the overlapping files, which were mostly cache files, so I went ahead and deleted them from the terminal. Then I let DW rebuild the disk directory. Afterwards I restored the files from a backup or a last-minute read from the failing drive. This resolved the issue and all files are now intact.
I have a hard drive which has suddenly developed unstable sectors. I am able to read it with dd_rescue, so I transferred it completely to another new drive of the same size. The Windows partition is bootable after the transfer, however, the Mac partition behaves weird. When I boot it for the first time, it boots just fine, but forces an FSCK on the next boot. The FSCK however fails. If I boot in single-user mode and forcefully tell FSCK to rebuild the FS, then the following happens:A lot of ‘invalid node’ errors appear fsck restarts a couple of times after one of the iterations it’s getting abort()’ed if I run it once again my screen is filled with ‘Node unrecoverable’ errors afterwards if I try to continue booting, it tells me to ** REBOOT NOW ** if I obey and reboot, the partition is rendered unbootable, in case I try to mount it while booting in single-user from the failing drive I get an error of being unable to find the root in the catalogI am currently running the dd_rescue procedure the second time (and it will probably take a week again), but can I somehow forcefully mark the partition as clean? From the FSCK logs I saw that the damaged files are some of the drivers (kexts) I don’t use or calendar files from 2013 which I couldn’t care less about. Maybe somehow deleting them might work? I don’t have any third drive of the same size to save just an image of the whole thing because they are too expensive :/ Any help is appreciated. Thanks in advance!
Migrating a failed hard drive — preventing fsck
Users' home directories can be found in /Users/.
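For example, with the volume mounted read-only on a Linux rescue system (the mount point below is the one from the accompanying question; the user name alice is only a hypothetical placeholder), the per-user data sits one level below Users:
$ ls "/media/Macintosh HD/Users"
Shared  alice
$ ls "/media/Macintosh HD/Users/alice"
Desktop  Documents  Downloads  Library  Movies  Music  Pictures  Public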
I have a friend's Mac OS X disk that comes with an HFS+ partition. I am supposed to recover the personal data from this disk (it's not yet clear if the FS is corrupted or the disk dying), but for the life of me I cannot understand what is the traditional file-tree structure on a Mac OS X disk. Where is the user content located? On Windows it's in My Documents, on Linux it's in /home/user, but where is it on Mac OS X? EDIT1: If I mount the drive on Linux I get the following: liv@liv-HP-Compaq-dc7900:/media/Macintosh HD$ cat /etc/mtab | grep -i hfsplus /dev/sdc2 /media/Macintosh\040HD hfsplus ro,nosuid,nodev,uhelper=udisks 0 0And doing an ls on the mount point doesn't contain any mention of Users in it: liv@liv-HP-Compaq-dc7900:/media/Macintosh HD$ ls -lh ls: cannot access home: Input/output error ls: cannot access libpeerconnection.log: Input/output error ls: cannot access net: Input/output error ls: reading directory .: Input/output error total 20M drwxrwxr-x 1 root 80 53 Oct 18 22:07 Applications drwxr-xr-x 1 root root 39 Sep 26 00:51 bin drwxrwxr-t 1 root 80 2 Jul 9 2009 cores dr-xr-xr-x 1 root root 2 Jul 9 2009 dev lrwxr-xr-x 1 root root 11 Sep 24 2009 etc -> private/etc lrwxr-xr-x 1 root 80 60 Mar 20 2010 Guides de l’utilisateur et informations -> /Library/Documentation/User Guides and Information.localized d????????? ? ? ? ? ? home -????????? ? ? ? ? ? libpeerconnection.log drwxrwxr-t 1 root 80 58 Mar 27 2013 Library drwxrwxrwt 1 root root 4 Sep 18 2012 lost+found -rw-r--r-- 1 root root 20M Jun 8 2011 mach_kernel d????????? ? ? ? ? ? net drwxr-xr-x 1 root root 2 Jul 9 2009 Network drwxr-xr-x 1 501 80 3 Oct 26 2010 opt drwxr-xr-x 1 root root 6 Sep 24 2009 private drwxr-xr-x 1 root root 67 Sep 26 00:52 sbin drwxr-xr-x 1 root root 4 Jul 3 2011 System lrwxr-xr-x 1 root root 11 Sep 24 2009 tmp -> private/tmpBut I do see a home, though. Does this imply some weird Mac OS X configuration? Or is it likely that Users was deleted?
where is the user content on a Mac OS X disk?
100 MiB for /boot is not enough; I recommend 1 GiB. It varies between distributions, but on my system the initramfs is 36 MiB and vmlinuz is 11 MiB, so with 100 MiB you probably wouldn't be able to fit more than one bootable kernel+initramfs on your system. I would recommend 1 GiB, 500 MiB minimum. Don't forget that you'll also need /boot/efi if you are on a UEFI system. Recommended sizes for /boot/efi vary; it's usually between 100–200 MiB and 550–600 MiB. If you have enough RAM you don't necessarily need swap (unless you plan on using suspend to disk). Some distributions don't create swap by default and either just don't use swap or create swap on zram. 50 GiB for / is good. 50 GiB for /home depends on what you plan to do with the system. For me (a desktop with a single OS) it wouldn't be enough, but for a server it's not really necessary to create a separate /home (though it definitely won't hurt) -- I wouldn't expect to put anything other than an SSH key in my home on a server, but that again might depend on the "type" of the server. If you are running a server for multiple users and plan to set up something like per-user web directories with Apache (with /home/<user>/public_html), it makes sense to have a separate /home. In general the size of /home depends on how much data the user (or users) will store there. /tmp shouldn't be on disk; most distributions now use tmpfs and store /tmp in RAM. I don't really see a use case for a separate /usr, but if you want, you can do that. Edit: As @telcoM pointed out in the comments, having a separate /usr isn't a good idea and your system may be unbootable with a separate /usr. A separate /var is useful for servers that store a lot of things in /var, like webservers that use /var/www/html, or for virtualization and other similar applications that use /var a lot. So it again depends on what you plan to do with the system. A separate /var can also be useful for systems with flatpak, which installs applications to /var/lib. It can also prevent /var/log from eating all the space in / if something goes wrong with logging (but journald limits can also prevent that). You should also consider using something more modern than plain partitions, especially if you plan to create multiple mount points. Dealing with running out of space on one of them is really painful with fixed partitions, but with technologies like LVM or btrfs (sub)volumes you can make the system future-proof more easily. Moving free space from one filesystem to another (e.g. shrinking /home to make more space for /var) isn't trivial with partitions (because they cannot be resized to the left), but it is relatively easy with LVM, and with btrfs this isn't an issue at all.
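As a rough sketch of that LVM suggestion (device names and sizes below are illustrative assumptions only, not a prescription for your hardware): keep /boot and the EFI partition as plain partitions, put everything else in one volume group, and leave part of it unallocated so whichever filesystem fills up first can be grown later.
# /dev/sda1 = EFI system partition, /dev/sda2 = /boot, /dev/sda3 = LVM physical volume
pvcreate /dev/sda3
vgcreate vg0 /dev/sda3
lvcreate -L 50G -n root vg0
lvcreate -L 50G -n home vg0
lvcreate -L 20G -n var vg0        # leave the rest of vg0 unallocated for future growth
mkfs.ext4 /dev/vg0/root
mkfs.ext4 /dev/vg0/home
mkfs.ext4 /dev/vg0/var
# later, growing a logical volume and its filesystem online is a one-liner:
lvextend -r -L +20G /dev/vg0/var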
Scenario: for simplicity - consider that exists a hard disk of 500GB to only install Linux (for example Ubuntu, Debian or Fedora) - and if exists a hard disk of 750GB or 1TB then 500GB are dedicated for Linux (as first case) and the rest of the disk for Windows. I read many tutorials about best practices about to define partitions to install Linux, for the most common or general scenario is suggested: /boot 100MB /swap x2 current RAM if is minor or equals of 4GB / 50GB to 100 GB /home 50GB to 100 GB Note: for above is important consider the order of the partitions too. Until here all is OK and gparted can be used in peace. Now considering the security and administration aspects/concerns then is available the following: #1 /boot 100MB /swap x2 current RAM if is minor or equals of 4GB / 50GB to 100 GB /home 50GB #2 /var /usr /tmpQuestion 1 Is the order of part #2 correct? or it does not matter? In many tutorials these partitions of the part 2 appear or are only mentioned but never indicated about their order - therefore not sure if is not important or not. It can be problematic later. Question 2: What are the recommended sizes for /var, /usr and /tmp partitions? This question would be tricky but I am assuming is there a kind of guidance or rule thumb for them.
Define partitions to install Linux but considering Security and Administration aspects/concerns
Answer in Philippos' commentYou can't. Sorry to say that, but as soon as it's on hfs+, it's no "backup" anymore. Even without considering the problems with the linux hfs driver, backup on hfs(+) is not a good idea. I've been a mac lover for decades, but hfs(+) has always been a pain for me. TimeMachine is a great tool, but every now and then you have to start anew due to some hfs problems. I hope for apfs and pray that I don't need my backup until then.
I think the only option is to move a zip file there, but I get the following error when trying to create a directory there on the hfsplus file system at /media/masi/Elements/. masi@masi:/media/masi/Elements$ mkdir MasiWeek mkdir: cannot create directory ‘MasiWeek’: Read-only file system masi@masi:/media/masi/Elements$ df -T | grep hfs output /dev/sdb2 hfsplus 4883299280 615565288 4267733992 13% /media/masi/Elements Test command which I would like to run after successful creation of the directory on the disk: nice tar --keep-directory-symlink -czf /media/masi/MasiWeek/backup_home_19.8.2017.tar.gz /home/masi/ OS: Debian 9
How to backup Debian system to Apple hfsplus Harddisk?
!$ is a word designator of history expansion; it expands to the last word of the previous command in history. In other words, the last word of the previous entry in history. This word is usually the last argument to the command, but not in the case of redirection. In: echo "hello" > /tmp/a.txt the whole command 'echo "hello" > /tmp/a.txt' appeared in history, and /tmp/a.txt is the last word of that command. _ is a shell parameter; it expands to the last argument of the previous command. Here, the redirection is not part of the arguments passed to the command, so only hello is the argument passed to echo. That's why $_ expanded to hello. _ is no longer one of the shell's standard special parameters. It works in bash, zsh, mksh and dash only when interactive, and in ksh93 only when the two commands are on separate lines: $ echo 1 && echo $_ 1 /usr/bin/ksh $ echo 1 1 $ echo $_ 1
The question is about special variables. Documentation says:!!:$ designates the last argument of the preceding command. This may be shortened to !$.($_, an underscore.) At shell startup, set to the absolute pathname used to invoke the shell or shell script being executed as passed in the environment or argument list. Subsequently, expands to the last argument to the previous command after expansion. Also set to the full pathname used to invoke each command executed and placed in the environment exported to that command. There must be some difference I cannot catch, because: $ echo "hello" > /tmp/a.txt $ echo "!$" echo "/tmp/a.txt" /tmp/a.txt$ echo "hello" > /tmp/a.txt $ echo $_ helloWhat is the difference?
$_ vs !$. Last argument of the preceding command and output redirection
!! is expanded by bash when you type it. It's not expanded by alias substitution. You can use the history built-in to do the expansion: alias sbb='sudo $(history -p !!)'If the command is more than a simple command (e.g. it contains redirections or pipes), you need to invoke a shell under sudo: alias sbb='sudo "$BASH" -c "$(history -p !!)"'
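A quick usage sketch (the apt command is only an illustrative example):
$ alias sbb='sudo $(history -p !!)'
$ apt update            # fails with a "Permission denied" style error as a normal user
$ sbb                   # re-runs it as: sudo apt update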
I'm trying to set an alias for sudo !! in Bash. I tried alias sbb='sudo !!', but it interprets that as a literal !! and prints sudo: !!: command not found If I use double quotes, it substitutes the double bang in the string itself, so that doesn't work. Is there any way to make this work? Or an alternative alias?
How can I `alias sudo !!`?
The lone ! at the start of a command negates the exit status of the command or pipeline: if the command exits 0, it will flip into 1 (failure), and if it exits non-zero it will turn it into a 0 (successful) exit. This use is documented in the Bash manual:If the reserved word ‘!’ precedes the pipeline, the exit status is the logical negation of the exit status as described above.A ! with no following command negates the empty command, which does nothing and returns true (equivalent to the : command). It thus inverts the true to a false and exits with status 1, but produces no error.There are also other uses of ! within the test and [[ commands, where they negate a conditional test. These are unrelated to what you're seeing. In both your question and those cases it's not related to history expansion and the ! is separated from any other terms.
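A short interactive illustration of both points (the ! is followed by a space or the end of the line here, so history expansion stays out of the way):
$ ! false
$ echo $?
0
$ ! true
$ echo $?
1
$ !
$ echo $?
1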
Bash uses exclamation marks for history expansions, as explained in the answers to this question (e.g. sudo !! runs the previous command-line with sudo). However, I can't find anywhere that explains what running the following command (i.e. a single exclamation mark) does: !It appears to print nothing and exit with 1, but I'm not sure why it does that. I've looked online and in the Bash man page, but can't find anything, apart from the fact that it's a "reserved word" – but so is }, and running this: }prints an error: bash: syntax error near unexpected token `}'
What does typing a single exclamation mark do in Bash?
Social/cultural inertia. This question is in the how-humans-work problem space, so I'm going to answer from that angle, without putting forth any opinion about whether or not the feature ought to be on by default. To start, to make sure you understand the other side, consider that the annoyance you feel about having to go out of your way to turn off the feature, is the annoyance they would feel if they had to go out of their way to turn on the feature. Combine the above with the fact that enough bash users do use the feature, suggestions of removing it or turning it off by default are met with resistance from the people who are already comfortable with it being there by default. Also, bash is the default shell for many people (not just in the default login or system shell sense, but in a psychological sense). If your reference frame for shell quoting is bash, if that's the shell you learned first, the fact that ! is a special shell character will feel natural and automatic to you (or at least, when you first learn it, it'll be just part of the way the shell is, just one quirk to accept among many). And if you think about it, a lot of bash users probably encounter the history substitution syntax in a positive context: they read about it or someone shows it to them and they see the possible usefulness of it, when they're first learning bash. It's only coming from the peripheral world of other Bourne-like shells, that you'd be bitten by ! being special and thus be inclined to view it negatively: Because if you're used to shells where you never had the feature, then your first exposure to it will be when it screws you when you're trying to get something done in a hurry. TL;DR: Most users probably don't strongly care what the default is either way, some users like the feature and have the strong advantage of it already being that way, and there hasn't been enough people actively advocating against the feature to overcome that.
Does anybody know why bash still has history substitution enabled by default? My .bashrc has included set +H for many many years but some other people are still getting bitten by this feature. Given that pretty much everybody are using terminals with copy-paste features and bash compiled with readline library and history substitution is enabled by default only in interactive shells, is there really any reason to have the feature at all? None of the existing scripts would be broken even if this was disabled by default for all shells. Try this if you do not know why history substitution is broken: $ set +H # disable feature history substitution $ echo "WTF???!?!!?" WTF???!?!!? $ set -H # enable feature history substitution $ echo "WTF???!?!!?" echo WTF???echo WTF???!?!!? WTF???echo WTF???!?!!?(Clearly the feature has major issues if it's disabled by default for all scripting and a feature exists to verify the results before executing: shopt -s histverify.) See also:Why does the exclamation mark `!` sometimes upset bash? Why does Bash history not record this command? How to echo a bang! Bash: History expansion inside single quotes after a double quote inside the same line
Why is bash history substitution still enabled by default? [closed]
!1255:p will do this: ! is history recall, 1255 is the line number, :p prints but does not execute. Then you can use up-arrow to get the previous (unexecuted) command back and you can change it as you need. I often combine this with hg ("History Grep") - my favorite alias. $ alias hg # Maybe use hgr instead if you are a Mercurial CLI user. alias hg='history | tail -200 | grep -i' This searches for text on a recent history line, regardless of case, and is used this way: when I want to search for recent vi commands to edit a certain file and then I want to re-use one of them to edit the same file but with a different file extension. $ hg variables 6153 vi Variables/user-extensions.js 6176 vi Variables/user-extensions.js 6178 vi Variables/user-extensions.js 6190 vi Variables/user-extensions.js 6230 hg variables $ # Notice the difference in case with V and v is ignored $ !6190:p vi Variables/user-extensions.js $ ["up-arrow"] $ vi Variables/user-extensions.[now change .js to .html] I also define hga ("History Grep All") to search my entire history: $ alias hga alias hga='history | grep -i' but I don't use it much because my history is (intentionally) very large and I get too much output that later affects scrolling back through pages in my terminal.
I can't remember the trick where I could get the last command without running it: let's say I want to be able to access the command !1255 when pressing the up arrow key and modify the command. So what's the trick to call the command, make it show up on the command line but not be executed, and afterwards have it accessible via the up arrow key? I tried putting an echo in front, but then I have an echo before the command; I don't remember how to do it correctly.
How to recall a previous command (without execution) in order to change it?
You can use <M-.> (or <Esc>. if your Meta key is being used for something else), that is, Meta-dot (or <esc> dot), where Meta is usually the Alt key, to recall the last argument of the previous command. So, first you would type $ grep foo /usr/share/dict/american-englishAnd then if you wanted to grep for something else, you would type $ grep barAfter typing a space and then Esc. (that is, first pressing the escape key, and then the period key): $ grep bar /usr/share/dict/american-englishYou can also use either of the following: $ grep bar !:2 $ grep bar !$Where !:2 and !$ mean "second argument" and "last argument" respectively.
I am starting to learn some Regex, therefore I use this command repeatedly: grep pattern /usr/share/dict/american-english Only the part with pattern changes, so I have to write the long expression "/usr/share/dict/american-english" again and again. Someone made the remark that it is possible to expand an argument of a command from the command history by typing cryptic character combinations instead of the full expression. Could you tell me those cryptic character combinations ?
How to access the second argument from the last command in the history?
! is not a special variable. (There's a variable called !, which you can access with $!, but it's unrelated.) It's a character with a special meaning, depending on where bash sees it and on what comes after. The ! character starts history expansion. Bash performs history expansion very early when parsing a command line, when it's reading commands interactively (not when it's running a script, even a script sourced with . or source from an interactive command line). You can set the variable histchars to select a different character instead of ! (but most other ASCII characters would conflict with common usage). The character ! starts a history substitution, except when followed by a space, tab, the end of the line, ‘=’ or ‘(’ (when the extglob shell option is enabled using the shopt builtin). So things like echo "Hello, world"! or if ! grep -q foo myfile; … don't trigger any history expansion. Also, single quotes and dollar-single quotes protect from history expansion (i.e. the history expansion character does not have its special meaning within '…'), and a backslash that's quoting a character protects that character from starting a history expansion, but double quotes do not, except (since bash 4.4) when bash is invoked in POSIX compatibility mode. That is, by default, echo "!foo" makes !foo a history reference which is substituted inside the double quotes. You can't simply include an actual exclamation mark in a double-quoted string, because echo "\!foo" would include the backslash. You have to use something like echo \!"foo" or echo '!foo'. I don't know why bash does this: it's a design decision made in the 1980s. (Bash inherits history expansion from csh, where it's even worse: even single quotes don't protect from history expansion.) I can't reproduce this exact case though: bash-5.0$ echo "Hello, World!" Hello, World! But it might depend on the version or configuration of bash. The double quotes themselves don't inhibit history expansion: it's the fact that !" doesn't fit any of the forms of event designators. bash-5.0$ echo "Hello!world" bash: !world: event not found
If I type in this: echo "Hello, World!"I don't know the name of it, but it prompts me for the next line. You know the PS2 thing. Or if you type echo \ and press Enter. Why? Well I know that ! is a Special Variable that you can use to reference your history. But as soon as I use this: echo "Hello, World"!I get my desired output. What is happening and why can't you use ! inside ""?Thanks for your help :)
Echoing "!" inside a string does some weird things [duplicate]
I reported this to [emailprotected] and got this answer:History expansion is explicitly line-oriented, and always has been. There's not a clean way to make it aware of the shell's current quoting state (mostly since it's a library independent of the shell). Maybe there's a way to use one of the existing callback functions to do it.This sounds to me like "this is not a bug, because we can't do it any better with the current implementation". Update I lost interest in this subject after switching to zsh. Now I tried with bash version 5.1.4 and found the problem can't be reproduced anymore. So somewhere between 4.4.12 and 5.1.4, somebody did fix this.
I took a closer look on this phenomenon after I stumbled over it in two other questions today. I've tried all of this with the default set -H (history expansion on). To test a script, I often do things like echoing a multi-line string and pipe it through a script, but in some cases it gives an error: $ echo "foo bar" | sed '/foo/!d' bash: !d': event not found > The ! seems to trigger history expansion, although it is enclosed with single quotes. The problem seems to be the occurrence of the double quote in the same line, because $echo $'foo\nbar' | sed '/foo/!d'works as well as $echo "foo bar" | > sed '/foo/!d'My suspicion: History expansion is applied linewise, so the ' after a single " is considered to be escaped, so the following ! is not escaped. Now my question: Is this a bug or expected behavior? Reproduced with bash versions 4.2.30 and 4.4.12.
Bash: History expansion inside single quotes after a double quote inside the same line
It's possible but a bit cumbersome. In bash !# refers to the entire line already typed. You can specify a given word you want to refer to after :, in this case it would be !#:1. You can expand it in place using shell-expand-line built-in readline keybinding Control-Alt-e.
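Applied to the rename example in the question below, a possible interaction looks like this (keystrokes are shown as comments, and the path is only an example):
$ mv /tmp/myTestFileWithLongFilename.txt !#:1
# press Ctrl-Alt-e (shell-expand-line); the line is rewritten in place to:
$ mv /tmp/myTestFileWithLongFilename.txt /tmp/myTestFileWithLongFilename.txt
# now change the trailing .txt to .md and press Enter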
Often I'm writing a command in a bash prompt, where I want to get previous arguments, in the CURRENT line I'm typing out, and put in other places in the command. A simple example, would be if I want to rename a file. I wouldtype out the mv command type the filename I want to move, ~/myTestFileWithLongFilename.txt now I want to just change the extension of the file that I supplied in the first argument, without typing it again.Can I use history or bash completion in some way to autocomplete that first argument? $ mv ~/myTestFileWithLongFilename.txt ~/myTestFileWithLongFilename.mdI know of course I could execute the incomplete command, to get it into the history, and then reference it with !$, but then my history is polluted with invalid commands, and I'm wondering if there's a better way
bash - get 1st argument of current command I am editing via history, or similar? [duplicate]
You can select a particular word from the last typed command with !!: and a word designator. As a word designator you need 0. You may find ^ and $ useful too. From man bash: Word Designators 0 (zero) The zeroth word. For the shell, this is the command word. ^ The first argument. That is, word 1. $ The last argument. So in your case try: echo !!:0
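For example, right after running man ls (bash echoes the expanded line before executing it):
$ man ls
$ echo !!:0 !!:^ !!:$
echo man ls ls
man ls ls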
Example: I type man ls, then I want to get man only. By using !! I can get man ls but how do I get man?
How do I get command name of the last executed command?
For a non-interactive shell, you must specifically also enable command history: set -o history set -o histexpandThen your example will work: disabled enabled /home/schaller/tmp/502442 last_command="pwd" pwd
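Putting it together, a minimal sketch of the script from the question below with history enabled:
#!/bin/bash
set -o history       # keep a command history in this non-interactive shell
set -o histexpand    # enable ! history expansion
pwd
last_command="!!"    # now expands to: pwd
echo "$last_command"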
I have the following Bash script case $- in (*H*) echo enabled ;; (*) echo disabled ;; esac set -H case $- in (*H*) echo enabled ;; (*) echo disabled ;; esac pwd last_command="!!" echo $last_command which prints disabled enabled /home/user !! The first line of code checks to see if history expansion is enabled. The second enables history expansion. The third is the same as the first. Then it runs pwd and finally assigns what should be the last command to last_command and prints this variable. The history is not expanding. What is going on?
History expansion in scripts [duplicate]
You can't, history expansion happens before alias or parameter expansion. I personally hate history expansion and is the first thing I disable. Here, instead of aliasing a history expansion, I'd suggest creating a widget that increments a E<n> number left of the cursor: increment-episode() { emulate -L zsh setopt extendedglob LBUFFER=${LBUFFER/(#b)(*E)(<->)/$match[1]${(l:${#match[2]}::0:)$((match[2]+1))}} }zle -N increment-episodebindkey '\e+' increment-episodeAnd then, you just press Up and then Alt++ and you have a visual feedback of what's going on at every stage and can undo/redo/adapt at will, and not work blindly like with csh history expansion (a feature from the 70s that IMO made sense then but not so much now that we have faster and more capable terminals and line-editors). But if you really wanted to blindly evaluate the code in the previous command in the history with the number after E incremented, you could do: rerun-with-next-episode() { emulate -L zsh setopt extendedglob local new new=${${history:0:1}/(#b)E(<->)/E${(l:${#match[1]}::0:)$((match[1]+1))}} # display it print -ru2 -- $new # put it on the history print -rs -- $new # evaluate it eval -- $new }
I want this to work (it needs extendedglob and histsubstpattern): alias ri='^(#b)E(?)^E${(l:2::0:)$((match[1]+1))}'But it doesn't: $ alias sss='^(#b)E(?)^E${(l:2::0:)$((match[1]+1))}' $ echo /Users/evar/Downloads/Video/Teenage_Mutant_Ninja_Turtles_2003_S02E01_DVDRip_30NAMA.mkv /Users/evar/Downloads/Video/Teenage_Mutant_Ninja_Turtles_2003_S02E01_DVDRip_30NAMA.mkv $ sss zsh: command not found: PocketI wouldn't mind using a function instead of an alias, but the result was the same. I even tried export ss='^(#b)E(?)^E${(l:2::0:)$((match[1]+1))}' and then doing $ss, but that failed with zsh: command not found: ^(#b)E(?)^E${(l:2::0:)$((match[1]+1))}. Using eval '^(#b)E(?)^E${(l:2::0:)$((match[1]+1))}' also fails with zsh: command not found: Pocket. Update: Related (possibly duplicate) questions found: Alternative of bash's `history -p` in zsh? https://stackoverflow.com/questions/27494753/how-to-get-last-command-run-without-using https://stackoverflow.com/questions/48696876/using-history-expansion-in-a-bash-alias-or-function
How can I alias a history expansion in zsh?
I don't understand why you're making this so complex. Why use the (rather finicky) !:N history expansion feature when you already have everything you need passed as an argument? For example: #! /bin/bash source="/Source/$1" destination="/Destination/" folderParam="$(basename "$source")" /usr/bin/rsync -avh -r "$source" "$destination" rsyncStatus=$? if [ "$rsyncStatus" -eq 0 ]; then cp /Status/Sucesss /Result/Success_"$folderParam" else cp /Status/Failure /Result/Failure_"$folderParam" fi Or, even simpler: #! /bin/bash source="/Source/$1" destination="/Destination/" folderParam="$(basename "$source")" if /usr/bin/rsync -avh -r "$source" "$destination"; then cp /Status/Sucesss /Result/Success_"$folderParam" else cp /Status/Failure /Result/Failure_"$folderParam" fi Or even: #! /bin/bash source="/Source/$1" destination="/Destination/" folderParam="$(basename "$source")" touch "/Result/Failure_$folderParam" /usr/bin/rsync -avh -r "$source" "$destination" && mv /Result/Failure_"$folderParam" /Result/Success_"$folderParam"
I am using rsync command to sync two folder and on success of rysnc I want to copy a file success and while copying append source folder name parameter like Success_FolderName.I am using $(basename !:3) to get the third parameter i.e Folder Name. bash /Sync.sh 10_03_2016 #! /bin/bashset -o history set -o histexpand /usr/bin/rsync -avh -r /Source/$1 /Destination/ rsyncStatus=$? folderParam=$(basename !:3) if($rsyncStatus==0) then cp /Status/Sucesss /Result/Success_$folderParam else cp /Status/Failure /Result/Failure_$folderParam ifOutput Error /Sync.sh: line 7: :3: bad word specifier And File gets copied with 'Success_'
Sync two folders and on success copy one file from a location to the other
This similar question has two related answers. Specifically:As explained in the Bash manual, history lines prefixed with a * have been modified. This happens when you navigate to a command (e.g. by using the Up key), edit it and then navigate away from it without hitting Enter. ... BTW, you can revert modified commands to their unedited state by navigating to them and hitting Ctrl + _ repeatedly.Answered here by Eugene Yarmash This answer shows how to disable it by disabling mark-modified-lines with: set mark-modified-lines OffAs for your question regarding whether someone could hide their command history in this way, you can see that it's possible to set mark-modified-lines as well as revert to the original command line. So it is both possible to hide or change the history and to revert it. That being said, what is the threat model for a user hiding their history? Who is the user? In an administered environment, a user should only have access and permissions for the functions and files that are related to their role. Otherwise, if an unauthorized user has gained access, then finding modified command lines may be the least of an admin's worries.
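To make the mark-modified-lines setting permanent, the usual place is your readline init file (this is standard readline configuration, nothing distribution-specific):
# ~/.inputrc
set mark-modified-lines Off
# reload it in a running bash session with:
#   bind -f ~/.inputrc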
My modified search history lines have an asterisk next to them. I've searched unix.stackexchange.com and stackoverflow.com, but I yearn for a full explanation for the asterisks in my history (other than what the man page says).Lines listed with a * have been modified.Example: $ history | tail 11850* 11851 ./block_ip.sh '23.228.114.203' 'evil probe' 11852 ./block_ip.sh DROP '23.228.114.203' 'evil probe $In this example, a shell script had a third argument, but there was no error, and i ran it twice without specifying (DROP/ACCEPT). The modification was an attempt to blank out this history so that history-expansion would not lead me to the wrong command (again). I want to know more about this (but I don't know what I don't know). Please consider both angles of this:how can i use this (for instance can i get that original command if i need it)? how can a bad guy use this (can someone hide their command history this way)?If a generic answer is too verbose, please note some of my settings: EDITOR=/usr/bin/vim HISTFILE=/home/jim/.bash_history SHELLOPTS=braceexpand:hashall:histexpand:history:interactive-comments:monitor:viAnd this OS info (It is RedHat...but Debian/Fedora/Ubuntu shouldn't vary much...should they?): Linux qwerutyhgfjkd 3.10.0-693.11.1.el7.x86_64 #1 SMP Mon Dec 4 23:52:40 UTC 2017 x86_64 x86_64 x86_64 GNU/LinuxI am using bash as my shell.
`history` command produces asterisk * entries
I don't use it often but it's sometimes useful in conjunction with : for extracting n-th word of the command. For example: $ touch FILE.a $ echo file created $ mv FILE.a !#:1.bak mv FILE.a FILE.a.bakAnother example, although quite pointless in practice, would be using it together with cut to get contents of the variable defined in the same line in the simple command, for example: $ LETTER=a echo letter: $(cut -d '=' -f2 <<< "!#:0") letter: aNotice that this wouldn't work as $LETTER is expanded before running the command: $ LETTER=a echo letter: $LETTER letter:
From man bash:!# The entire command line typed so far.From man zshall:!# Refer to the current command line typed in so far. The line is treated as if it were complete up to and including the word before the one with the !# reference.The only thing I could think off is: cd ..;!#!#!#To go up 8 steps:))
What is the use of `!#` in csh, bash, zsh and probably other shells?
You can't do it that way. History substitution (i.e. the handling the ^ and !) is done before alias expansion. Use fc -s instead: $ alias da='fc -s diff=add' $ echo git diff git diff $ da echo git add git add
I have a workflow that first check git diff for specific file and then add it to stage. git diff ..^diff^addI want to give these command a alias but this one doesn't work alias da="^diff^add"command not found: ^diff^add
How to create alias with a caret^ command?
Those history modifiers could also be applied to variables in csh where the feature comes from. But bash chose not to copy that part. zsh did though. So you could use zsh instead of bash here: $ file=foo/bar/ $ echo $file:h # (or ${file:h}) foo(the example chosen to show that zsh actually improved upon csh which would have returned foo/bar instead; it also supports quite a few additional useful modifiers). In other shells, you can always use dirname instead: $ dir=$(dirname -- "$file") $ echo "$dir" foo(though beware it doesn't work correctly for directory names that end in newline characters). In bash, like in other POSIX shells, you can use ${file%/*} but it gives unexpected results in a few corner cases like that foo/bar/ or foo or /.
It would be handy if I could use bash history modifiers in scripts such as: !$:h to get the path of a file. Is there a way to use them in scripts? Eg ${1:h}
Can I use bash history modifiers with variables in scripts?
It can be omitted if it is the last character of the event line. First, we check what ^string1^string2^ meaning from man bash: ^string1^string2^ Quick substitution. Repeat the previous command, replacing string1 with string2. Equivalent to ``!!:s/string1/string2/'' (see Modifiers below).So it's equivalent to s/string1/string2/. Read documentation of s modifier: s/old/new/ Substitute new for the first occurrence of old in the event line. Any delimiter can be used in place of /. The final delimiter is optional if it is the last character of the event line. The delimiter may be quoted in old and new with a single backslash. If & appears in new, it is replaced by old. A single backslash will quote the &. If old is null, it is set to the last old substituted, or, if no previous history substitutions took place, the last string in a !?string[?] search.
In my terminal (bash 3), I sometimes use the quick substitution ^aa^bb^^string1^string2^ Quick substitution. Repeat the last command, replacing string1 with string2. Equivalent to `!!:s/string1/string2/` (see Modifiers below).For example $ echo aa aa aa aa $ ^aa^bb^ echo bb aa bb aaWhich is really handy in some cases. However, I found out that omitting the last ^ also works, i.e. $ echo aa aa aa aa $ ^aa^bb echo bb aa bb aaMy question is: what is the consequence of omitting the closing ^? Can I safely leave it out or are there any caveats?
Is using shorthand quick substitution of history expansion problematic?
If you have history expansion enabled, and run history -p "!23:1", the expansion happens before the history builtin sees the designator !23:1, since history expansion takes place even within double-quotes. However, if you either disable history expansion, or protect the exclamation mark with single quotes or a backslash, so that the builtin gets to handle it, you'll see that history -p outputs the result of that history expansion: $ true $ history -p '!!' trueI assume the purpose of it is to be able to script history expansions.
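That also makes history -p handy for previewing what a designator will select before you rely on it in a real command, for example:
$ tar czf backup.tgz /etc/nginx /etc/ssh
$ history -p '!!:1-2'      # show the expansion without executing anything
czf backup.tgz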
From man bash,history -p arg [arg ...] ... -p Perform history substitution on the following args and display the result on the standard output.What does 'history substitution' mean here? Can you provide an example of its use? Thanks.I understand command line history substitution, and already tried things like this: history -p "!23:1"But this is not dependent on -p, as xx "!23:1" does the same thing.
The -p option in the bash history command?
psqlcmd1="psql -c \"""alter user root with encrypted password 'D1£LF1A\!2eNZY6P$9examplePassword';""\""With history expansion turned off, the value of psqlcmd1 is psql -c "alter user root with encrypted password 'D1£LF1A\!2eNZY6PexamplePassword;"Inside double quotes, the only characters that don't stand from themselves are \"$`, plus ! if history expansion is enabled. Note in particular that ' inside double quotes stands for itself: it doesn't start a string literal,the double quotes have already started one. Also note that since $ starts a variable expansion, $9 is expanded; its value is the 9th parameter to the current script of function, empty if there were fewer than 9 parameters. Also note that "foo""bar" is the same thing as "foobar": it's two string literals joined together, so it might as well be written as a single string literal. Inside double quotes, you need to put a backslash before the characters \"$` if you want the resulting string to include that character. This doesn't work for !: "!" is the string ! in a script, but invokes history expansion if that is enabled; "\!" is always the string \!. You need to use single quotes or no quotes around the !. Simplest would be to use a single-quoted literal: all characters inside single quotes stand for themselves, except for single quotes. To include a single quote in a single-quoted literal, end the literal, then use a backlash to quote a single quote, and start a single-quoted literal; or in other words, to include a single quote in a single-quoted literal, write '\''. psqlcmd1='alter user root with encrypted password '\''D1£example!2eNZY6P$9examplePassword'\'';'
I am creating a script to configure a server from scratch, part of this is postgres. One of the issues I'm having is if a random password has an exclamation it seems to be expanded by bash: I want run the following postgres command: alter user root with encrypted password 'D1£example!2eNZY6P$9examplePassword';But from bash script using: psql -c "command;"Using: runuser -l postgres -c "above cammand"So I get this: psqlcmd1="psql -c \"""alter user root with encrypted password 'D1£LF1A\!2eNZY6P$9examplePassword';""\"" runuser -l postgres -c "$psqlcmd1"But bash expands the ! even though its between single quotes? Also, I had tried the following based on this to escape the single quotes without luck : runuser -l postgres -c "psql -c 'alter user root with encrypted password '"'"'D1£example!2eNZY6P$9examplePassword'"'"';'"
Why is bash expanding history/exclamation-mark when between single quotes
Quoting the bash manual:History expansion is performed immediately after a complete line is read, before the shell breaks it into words.History expansion is the first stage of processing, even before shell parsing, which is why double quotes don’t protect !: the latter is processed before double quotes. It is handled by the history library, which implements its own parsing, with a few ways of protecting the history operator:Only ‘\’ and ‘'’ may be used to escape the history expansion character, but the history expansion character is also treated as quoted if it immediately precedes the closing double quote in a double-quoted string.By the time the shell’s parser starts handling a string, it’s already been parsed by the history library and history expansion has already taken place.
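Those escaping rules are easy to check at the prompt (the last case relies on the "immediately precedes the closing double quote" behaviour described above, present in current bash versions):
$ echo '!!'        # single quotes protect the !
!!
$ echo \!\!        # so does a backslash
!!
$ echo "ouch!"     # a ! immediately before the closing double quote is left alone
ouch!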
When does history expansion happen? From bash manualEnclosing characters in double quotes (‘"’) preserves the literal value of all characters within the quotes, with the exception of ‘$’, ‘`’, ‘\’, and, when history expansion is enabled, ‘!’.Since double quotes are recognized at parsing stage by the parser, is it correct that history expansion must happen after parsing? If yes, when does it happen with respect to shell expansions such as brace expansion, parameter expansion, filename expansion, etc? But I think that history expansion is provided by the readline of the shell, so is processed before lexical analysis and parsing? Just like auto-completion in shell. Am I missing something?Thanks.
When does history expansion happen in bash?
As you discovered, ! doesn't trigger history expansion inside single-quotes. You could use printf with a format string containing the ! symbols in single quotes. For example: $ name="boda" $ printf 'hello! my name is %s! bye!\n' "$name" hello! my name is boda! bye!or $ name="boda" $ var=$(printf 'hello! my name is %s! bye!\n' "$name") $ echo "$var" hello! my name is boda! bye!
I can't figure out how to write ! symbol in bash scripts when putting it in double quotes strings. For example: var="hello! my name is $name! bye!"Something crazy happens: $ age=20 $ name='boda' $ var="hello! my name is $name! bye!"When I press enter at last command the command repeats itself (types itself) without the last !: $ var="hello! my name is $name! bye"If I press enter again $ var="hello! my name is $name bye"If i press enter again it disappears nothing gets output $ If I try this: $ echo "hello\! my name is $name\! bye\!"Then it outputs: hello\! my name is boda\! bye\! If i use single quotes then my name doesn't get expanded: $ echo 'hello! my name is $name! bye!'Outputs are: hello! my name is $name! bye! I have it working this way: $ echo "hello"'!'" my name is $name"'!'" bye"'!'But it's one big mess with " and ' impossible to understand/edit/maintain/update. Can anyone help?
How to write "!" symbol between double quotes in bash? [duplicate]
Zsh doesn't have non-greedy wildcards. The only place I can think of where it does non-greedy matching is when stripping a prefix with the parameter substitution forms ${VAR#PATTERN} and ${VAR%PATTERN} (as opposed to ${VAR##PATTERN} and ${VAR%%PATTERN} which match greedily). It's always possible to translate a pattern using non-greedy wildcards into one that doesn't use them, but the translation can be painful and the size of the result is exponential in the size of the original in the worst case. A classic example of when non-greedy matching would be convenient is when you want to match a numeric range followed by something else with e.g. <1-42>*: this matches 43a because <1-42> matches 4; a workaround is <1-42>([^0-9]*)#. Depending on what you want to do, other methods may be easier, for example arranging to use a prefix or suffix substitution, or approaching the problem from a different angle. For the use case of a history expansion where you want to change the command, there's a different approach which is one character shorter as your example and is more reliable in your specific example. Instead of ^ff* ^open which matches ff anywhere on the command line and only works if the argument doesn't contain a space, you can use open !ff:*which only matches ff at the beginning of the command line (!?ff would match anywhere).
I like to do non-greedy globs, but my Google searches hint that this is not supported. Is this the case? If so, why is it the case? For example I'd like to use a non-greedy glob in history expansions, e.g. ^ff* ^open to open an mp3 previously ffplayed.
Non-greedy (extended) globs in zsh
You can use bash's "reverse interactive search", usually accessible via Ctrl+R. That key combination will bring up this prompt: (reverse-i-search)`': There, you can start writing the command and it will autocomplete from your history, with the most recent first. However, it matches the entire string you enter, so tool q will immediately bring back tool qux -a -b asdf -c=100 /var/lib/foo/.... That should do what you want. From man bash: reverse-search-history (C-r) Search backward starting at the current line and moving `up' through the history as necessary. This is an incremental search.
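If you would rather type a prefix like tool q and then cycle only through matching history entries, another option (independent of the Ctrl+R approach above) is readline's prefix search, e.g. in ~/.inputrc:
"\e[A": history-search-backward    # Up arrow: previous command starting with what you typed
"\e[B": history-search-forward     # Down arrow: next match
After reloading readline (for example with bind -f ~/.inputrc), typing tool q and pressing Up walks through only the commands that begin with tool q.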
I'm working with a command-line tool that provides a number of subcommands that all use the same binary, e.g. tool foo, tool bar, etc, and as I work, these commands are placed into my Bash shell history, for example: 7322 [2021-04-16 15:37:45 +0000] tool foo . 7323 [2021-04-16 15:37:47 +0000] tool bar 7324 [2021-04-16 15:37:50 +0000] tool baz 7325 [2021-04-16 15:38:01 +0000] tool qux -a -b asdf -c=100 /var/lib/foo/... 7326 [2021-04-16 15:38:15 +0000] htop 7327 [2021-04-16 15:38:21 +0000] tool foo . -xThe exact tool is proprietary (its exact functionality is irrelevant) and doesn't have specific features to help with tracking and recalling its own commands. For example, I'd like to recall command 7325, tool qux -a -b asdf -c=100 /var/lib/foo/... (suppose that it's not recent enough to just hit the up arrow a bunch of times). The commands and parameters vary often enough that establishing a Bash alias doesn't seem practical or convenient (either I edit .bashrc or I lose the alias when the shell closes). I'm pretty confident that the last time I ran tool qux, it had the correct parameters that I would want to use, or a reasonably safe set that I would need to edit anyway. I know I could run history | grep qux to look for the history index and then run !7325. Is there a way I can directly recall it with one set of keystrokes typed into the Bash prompt? !tool qux doesn't work because in this scenario it will run tool foo . -x qux instead. I tried quoting it, but it looks like expansion happens earlier.
Is there a way to match history entries on multiple words/tokens of the command, when performing history expansion?
Using the shell's history expansion for getting the commands executed in the shell session previous to running your script would not be doable unless the invoking shell saves its history to $HISTFILE after each executed command, which bash does not do by default, and exported the HISTFILE variable, which it does not need to do. By default, the bash shell maintains an in-memory history for the current interactive shell session. This history is saved to $HISTFILE when the shell session exits. When a new session that enables command line history starts, the saved history is read from that file (assuming that new shell uses the same $HISTFILE value). Your script, unless it has the HISTFILE variable inherited from its invoking environment, will at most be able to access command line history of its own session, i.e. the commands in the script. If HISTFILE is exported, but if the invoking shell never saved its history to $HISTFILE before running the script, it would be impossible to get at the in-memory history of that parent shell session, and you would at most be able to access the historical history of sessions long since dead.
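If you control both sides, here is a sketch of the kind of setup this implies (ordinary bash settings, shown purely to illustrate the mechanism; reading the file with tail assumes you are not storing timestamps in $HISTFILE):
# in the interactive shell (e.g. ~/.bashrc)
export HISTFILE=~/.bash_history
shopt -s histappend
PROMPT_COMMAND='history -a'      # append each command to $HISTFILE as soon as it finishes
# in the script
last=$(tail -n 1 "${HISTFILE:-$HOME/.bash_history}")
echo "last interactive command: $last"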
I have the following Bash script set -o histexpand set -o history pwd lc="!!"which, when I run it in an interactive shell, prints /home/user lc="pwd"I'd like to, instead of getting lc=pwd, get the last command used in the interactive shell by using history expansion. So if I run echo foo; ./script, I hope to get /home/user lc="echo foo"I tried to set -H in the script and it doesn't work.
Using interactive's shell history expansion inside a script
This functionality already exists You don't need anything complicated to access the last word of previous commands. Just press ESC-. (i.e. Alt+.) or ESC-_ (i.e. Alt+_). This invokes the editor command insert-last-word, which inserts the last word from the previous command line. Press the key again to get the last word from the command line before that, and so on. If you press ESC-. one time too many, use C-_ (undo) to go back to the word you had just before. This command isn't bound to a key by default in vi mode, but you can bind it with bindkey. You can pass a numerical argument to get a different word: positive to start on the right (1 is the last word), zero or negative to start on the left (0 is the first word which is generally the command name, 1 is the word after that which is the first argument, etc.). For example ESC . ESC - ESC 1 ESC . inserts the first argument of the next-to-last command. Many variations on this command are possible by defining your own widget around zle insert-last-word. Zsh comes with copy-earlier-word and smart-insert-last-word which you may find useful either to use as is or as code examples. If you really want $__ to expand to the last word of the previous-but-one command, I'll give some solutions below, but first I need to explain what's going on. Why your attempt isn't working First, you aren't defining what you think you're defining. alias "$__"=… defines an alias whose name is the current value of the variable __ at the time the alias definition is executed. This is probably empty, so you're executing alias ='!-2:$' which looks for a command called '!-2:$' on the search path (the = expansion part of filename expansion). To define an alias called $__, you need to pass $__ to the alias command, e.g. with alias '$__'=… or alias \$__=…. Second, an alias is only expanded in command position, i.e. as the first word of a command (after any leading variable assignments and redirections). In order for this alias to be useful, it would need to be a global alias: alias -g '$__'=… Third, this alias wouldn't do anything useful, because alias expansion happens after history expansion. darkstar darkstar% alias -g '$__'='!-2:$' darkstar% echo $__ !-2:$$_ does not “stand for” !-1:$. $_ and !-1:$ are two ways to access the same information in common cases. You can say that $_ “is an alias” of !-1:$, or conversely that !-1:$ “is an alias” of $_, but that's using “alias” in its general English sense, not in the technical sense of shell aliases, and it's imprecise because the two don't always have the same value. !-1:$ is a history expansion (!) construct which expands to the last word (:$) of the previous command line (-1). $_ is a parameter expansion using the parameter _ which the shell sets to the last argument of the previous command. It makes a difference if you run command lines that aren't exactly one simple command, for example: darkstar% for x in 1 2 3; do echo $x; done 1 2 3 darkstar% echo $_ is not !-1:$ echo $_ is not done 3 is not done darkstar% echo $_ and !-1:$ are different; echo $_ and !-1:$ are different echo $_ and done are different; echo $_ and done are different done and done are different different and done are differentDefining $__ per command You can define a trap function called TRAPDEBUG which runs before executing each command. Remember the current value of $_ (note that you have to do this first, because the first command inside the trap will overwrite _), then “shift” the multiple-underscore variables. 
darkstar% TRAPDEBUG () { _0=$_; ___=$__; __=$_1; _1=$_0; } darkstar% echo one one darkstar% echo two two darkstar% echo three three darkstar% echo $_,$__,$___ three,two,one$_1 won't always be the same as $_, because the debug trap doesn't run in exactly the same circumstances that cause _ to be set, but it's pretty close. Defining $__ per command line You can register a hook function to run before or after entering a command line. In this case, either precmd or preexec. They run before and after executing a command respectively. preexec_set_underscore_variables () { ___=$__ __=$_1 _1=$historywords[1] } preexec_functions+=(preexec_set_underscore_variables)I use historywords to get the last word from the command line. I store it in _1 because _ is already taken. And the function “shifts” the last-word-history variables by one. darkstar% echo one one darkstar% echo two two darkstar% echo three three darkstar% echo $_ $__ $___ three two one
similar to the existing $_ which I learned stands for !-1:$, I would like to create aliases for $__, $___ and so on which refer to the 2nd or 3rd -last command. I have tried adding alias "$__"='!-2:$'in my .zshrc.local. If possible, I would like to write a zsh-function which gives back the 1st argument of the n-th last command based on the amount of underscores. arch linux kernel 5.1.4-arch zsh 5.7.1 (x86_64-pc-linux-gnu)
Setting up aliases for a history expansion pattern
You need the extendedglob option for (#b). Also 05 + 1 yields 6, not 06. You could do (with extendedglob and histsubstpattern) ^(#b)E(<->)^E${(l:2::0:)$((match[1]+1))}Or: echo ${_//(#b)E(<->)/${(l:2::0:)$((match[1]+1))}<-> is a form of <x-y> positive decimal number matching operator where both boundaries are omitted, so matches any non-empty sequence of decimal digits. Same as [0-9]## (though ## needs extended-glob while <x-y> doesn't). (l:2::0:) (note that it's a lower case L, not the 1 digit) is the left-padding parameter expansion flag, here with 0s, of length 2.
I want to accomplish this: setopt HIST_SUBST_PATTERN echo Ninja_Turtles_2003_S02E05_DVDRip_30NAMA.mkv ^E(0?)^E$((match[1]+1)) # resulting in: echo Ninja_Turtles_2003_S02E06_DVDRip_30NAMA.mkv‌But I get: echo Ninja_Turtles_2003_S02E1_DVDRip_30NAMA.mkvI tried ^(#b)E(0?)^E$((match[1]+1)), but it didn't work.
How can I increase a number found by wildcard in the previous command? (zsh)
Per the manual (emphasize mine):!?str[?] Refer to the most recent command containing str. The trailing '?' is necessary if this reference is to be followed by a modifier or followed by any text that is not to be considered part of str.so in your case it's !?reload?:pthat is, you need a trailing ? after the search string.
I could do: !systemctl:p to get systemctl reload bind result printed (as last command in the history starting with systemctl string). but doing the same with the partial search on the command history: !?reload:p results in zsh: no such event: reload:p the former looks the most recent event in the history that starts with systemctl string and prints it on the screen, thanks to :p modifier, instead of executing. i thought :p is true for !? as well on any shell. and bash also results in bash: !?reload:p: event not found. how can i achieve the printing and not executing of the found command line on partial command history search in common unix shells?
printing and not executing the result of zsh history expansion on partial search
I have two suggestions how you can approach what you want (referring to bash only): add it to the history Before typing the first history expansion command line you can disable history expansion (set +H) and "execute" the history expansion command (and then reenable with set -H). It then is part of the shell history and you can easily get back to it and modify it. A more direct approach for getting the history expansion command line in the shell history would be history -s. The earlier suggestion may be easier to remember (and may be easier in case of complicated quoting), though (depending on how familiar someone is with shell options). readline yank This is most useful when you do not need yanking during the whole operation. Type the history expansion command line but do not press Enter. Instead go to the beginning / end of the line and delete the whole line with Ctrl-K / Ctrl-U. This puts the whole line on the kill ring. You can restore the line with Ctrl-Y. Even after executing the command you can get it back this way as long as you do not put anything else in the kill ring. And even if: You can go back to older kill ring entries with Ctrl-Y Alt-Y.
I am having to rewrite history expansion commands instead of recalling them from history. For example, I have to change 35 to 36, 37, 38, ... in the following command. $ print -P '\033[35mThis is the same red as in your solarized palette\033[0m' $ !!:gs/35/36 Now I need to make it !!:gs/36/37. However, when I use the Up key, it does not show $ !!:gs/35/36. It shows print -P '\033[35mThis is the same red as in your solarized palette\033[0m' What can be done here?
View History Expansion On History
It depends on if you want to add a set of quotes around the word from history expansion, or not. Assuming foo="abc def", compare $echo $foo $ printf "<%s>\n" !$vs. $echo $foo $ printf "<%s>\n" "!$"The former produces printf "<%s>\n" $foo, invoking word splitting, and printf gets two arguments after the format string. The second produces printf "<%s>\n" "$foo", where the quotes prevent word splitting, as usual. Of course, if you were to do this: $echo "$foo"Then a !$ on a following command would already expand to "$foo", with the quotes already in place. Now, using "!$" would produce ""!$"", where the quotes effectively cancel each other.
I've just come across !$ (without quotes). I've not met this before and did some tests: $ ls -l (...some output...) $ echo !$ -l $ echo "!$" -l man bash says this in the section on history expansion: $ The last word. This is usually the last argument, but will expand to the zeroth word if there is only one word in the line. Fair enough. But should I quote it or not? Another test on history expansion leaves me in doubt: $ man bash $ "!!" "man bash" bash: man bash: command not found This could be expected. But then, what about !$? This is one word, so I guess it should be quoted... (I'll risk this new tag here: good-practice.)
Should history expansion be quoted?
$ echo "AAAAAAAAAAAAAAAAA" > test1 $ !!:gs/A/B/:s/1/2/ echo "BBBBBBBBBBBBBBBBB" > test2That is, just add the second substitution to the end of the first. Just be aware that the second substitution will act on the result of the first.
I know that I can simply substitute one string with another in the previous command by typing !!:gs/string1/string2/ But how can I perform multiple substitutions? E.g. having the command echo "AAAAAAAAAAAAAAAAA" > test1 I want to substitute A with B and 1 with 2, i.e. execute the command echo "BBBBBBBBBBBBBBBBB" > test2 How can I do it with the !! operator?
Multiple substitution when repeating the previous command
Running a shell script starts a Bash process as a non-interactive shell. In this mode, history expansion doesn’t actually get carried out. From the Bash manual page, man bash: HISTORY EXPANSION The shell supports a history expansion feature that is similar to the history expansion in csh. This section describes what syntax features are available. This feature is enabled by default for interactive shells, and can be disabled using the +H option to the set builtin command (see SHELL BUILTIN COMMANDS below). Non-interactive shells do not perform history expansion by default. I couldn’t see any mention of having a script save its commands to the Bash history file, but I’ve never seen this happen in practice. I’m assuming that you run your scripts in the standard way (as an executable file containing shell commands or by providing the script name as the argument to a bash command). If I’m not interpreting your question correctly, could you edit your question to clarify?
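To see this for yourself, put something like the following in a script (the file name and contents are just an illustration) and run it with bash; the designators come through untouched and both history-related options report off:
#!/usr/bin/env bash
echo !!                                   # printed literally: no history expansion in scripts
set -o | grep -E 'histexpand|history'     # both show "off" in a non-interactive shell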
I haven't yet been able to find this in the Bash documentation, so I was hoping it could get answered if I asked it here. Is there any way that I can, on execution of a script, branch its history (that is, have the same history as the shell in the parent process which invokes the script, without using source or . to run it) and, after completion, not have its history recorded in ~/.bash_history? The purpose of not having its history recorded is so that the last command before invoking the script only becomes the second-to-last command after invoking the script. Is the easiest way to do this to just use set -o history, record the number of commands that get executed, and at the end of the script delete that many lines from $HISTFILE?
Persistent Bash history between detached processes
Xorg -configure while X is not running did it for me - I'm on Debian Sid (unstable). You MUST NOT have X running when you do this, and must be in a console TTY. (ctrl-alt-f1/f2/f3/f4/f5/f6) To stop your X server (if running), you may have to stop a desktop manager/login manager (e.g., xdm, gdm, lightdm, kdm, but there are others). If you are running X without a login manager, I assume you already know what you're doing and how to stop X. Otherwise, the 'preferred' method of stopping your manager might vary based on your init system, but here's a couple common ways. Run these commands as root, replacing xdm with your desktop manager, if appropriate. System V Init (sysvinit): # /etc/init.d/xdm stopSystemd init (most distros use Systemd by default these days): # service xdm stopAs a catch-all that should work on many systems (Linux distros, at least; I don't think FreeBSD has pidof in a basic installation): # kill `pidof xdm`If Xorg.conf doesn't change after doing this, and the program didn't return an error but printed an Xorg.conf configuration file to the screen, do Xorg -configure > /etc/xorg.conf to pipe the output into the file. BUT the way that I got the official Nvidia drivers working in the end was to uninstall the package manager's version and download the setup program from Nvidia's site. It's been working flawlessly since. The one time it didn't work (when I was trying to run Minecraft), I set the variable LD_PRELOAD=/usr/lib/libGL.so.1 and it ran - lwjgl has problems detecting the correct libGL version to use.
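For reference, on many systems Xorg -configure writes a template file rather than (or in addition to) printing one, so the flow is typically something like this (a hedged sketch, run as root from a console TTY):
Xorg -configure                      # writes a skeleton xorg.conf.new, usually in /root
Xorg -config /root/xorg.conf.new     # optional: test the generated config first
cp /root/xorg.conf.new /etc/X11/xorg.conf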
Many people have talked about this issue but I've not found a satisfactory answer. I'm on a debian jessie. Currently I have tried nvidia-driver as the driver but it caused the system to crash; so I have purged all the nvidia packages. But the problem is that /etc/X11/xorg.conf has been replaced with NVidia settings and the backup xorg.conf.backup has been removed. The related configuration set by NVidia is: Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" EndSectionI once tried changing nvidia to intel(also NVidia -> Intel) but the resolution is much lower(my laptop has a Intel Corporation Haswell-ULT Integrated Graphics Controller as listed by lspci). So I might need to use nouveau as the driver; however simply changing nvidia to nouveau doesn't work. It seems that the recent X system can be booted without xorg.conf(by rm /etc/X11/xorg.conf) but slower. So I still prefer the xorg.conf with my current settings. The version of Xorg: X.Org X Server 1.16.0 Release Date: 2014-07-16 X Protocol Version 11, Revision 0 Build Operating System: Linux 3.14-1-amd64 x86_64 Debian Current Operating System: Linux debian 3.14-1-amd64 #1 SMP Debian 3.14.9-1 (2014-06-30) x86_64 Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.14-1-amd64 root=UUID=e9341749-9dee-4cc9-878e-3b59ed1906b2 ro quiet Build Date: 17 July 2014 10:22:36PM xorg-server 2:1.16.0-1 (http://www.debian.org/support) Current version of pixman: 0.32.4 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.So are there any ways to re-generate the configuration file?
regenerate xorg.conf with current settings
It was a bug that got fixed in the next kernel release. If someone has to use the affected kernel, they can use the radeon.nopm=0 kernel boot-time option as a workaround. The related bug report is here on freedesktop.
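If you are stuck on the affected kernel, a hedged sketch of adding that boot parameter via GRUB (the usual paths on Manjaro/Debian-style systems):
# /etc/default/grub -- append the option to the existing default line
GRUB_CMDLINE_LINUX_DEFAULT="quiet radeon.nopm=0"
# then regenerate the grub config and reboot
sudo update-grub     # or: sudo grub-mkconfig -o /boot/grub/grub.cfg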
I am using Manjaro KDE edition. I have a system with Skylake i5 processor and hybrid graphics. System: Host: aditya-laptop Kernel: 4.4.8-1-MANJARO x86_64 (64 bit gcc: 5.3.0) Desktop: KDE Plasma 5.6.3 (Qt 5.6.0) Distro: Manjaro Linux Machine: System: HP product: HP Notebook v: Type1ProductConfigId Mobo: HP model: 8136 v: 31.36 Bios: Insyde v: F.1F date: 01/18/2016 CPU: Dual core Intel Core i5-6200U (-HT-MCP-) cache: 3072 KB flags: (lm nx sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx) bmips: 9603 clock speeds: max: 2800 MHz 1: 699 MHz 2: 2694 MHz 3: 750 MHz 4: 750 MHz Graphics: Card-1: Intel Skylake Integrated Graphics bus-ID: 00:02.0 Card-2: Advanced Micro Devices [AMD/ATI] Sun XT [Radeon HD 8670A/8670M/8690M / R5 M330] bus-ID: 01:00.0 Display Server: X.Org 1.17.4 drivers: ati,radeon,intel Resolution: [emailprotected] GLX Renderer: Mesa DRI Intel HD Graphics 520 (Skylake GT2) GLX Version: 3.0 Mesa 11.2.1 Direct Rendering: Yes With an earlier version of kernel 4.4 as well as drivers, PRIME offloading worked properly with the commands xrandr --setprovideroffloadsink radeon Intel However now after updating the kernel and xf86 drivers, it does not work. $ xrandr --listproviders Providers: number : 2 Provider 0: id: 0x66 cap: 0xb, Source Output, Sink Output, Sink Offload crtcs: 4 outputs: 3 associated providers: 0 name:Intel Provider 1: id: 0x3f cap: 0x0 crtcs: 0 outputs: 0 associated providers: 0 name:HAINAN @ pci:0000:01:00.0 $ xrandr --setprovideroffloadsink 0x3f 0x66 X Error of failed request: BadValue (integer parameter out of range for operation) Major opcode of failed request: 140 (RANDR) Minor opcode of failed request: 34 (RRSetProviderOffloadSink) Value in failed request: 0x3f Serial number of failed request: 16 Current serial number in output stream: 17I don't know where exactly the problem is as many packages got updated, which includes the kernel as well as xf86-video-intel and xf86-video-ati packages. I have also installed the linux4.6 kernel but I get the same problem on that too.
Getting error while using xrandr --setprovideroffloadsink in Manjaro after update
As seen in the QA test case, you only need to specify the DRI_PRIME=1 environment variable when launching the application, like so: [dkarlovi@amelie ~]$ glxgears -info | grep REND GL_RENDERER = Mesa DRI Intel(R) Sandybridge Mobile ^C [dkarlovi@amelie ~]$ DRI_PRIME=1 glxgears -info | grep REND GL_RENDERER = Gallium 0.4 on NVD9 ^C
As shown in this blog, Fedora 25 now has NVidia binary graphics driver support, and users have the option to launch applications with "Launch with Dedicated Graphics Card" via a right click on an icon, if the computer has a hybrid GPU (Intel/NVidia) configuration. Given this option, I would like to write scripts to launch my other applications from the command line, or to make desktop launchers connected to my scripts directly, with the Dedicated Graphics Card option pre-selected. I am wondering how I can achieve this, or how this is implemented in Fedora 25, so that I can learn from it and use it in my scripts? Thank you!
Script to launch an application with dedicated graphics card (Fedora 25)
In the prime-run script, I also need to set the variable __NV_PRIME_RENDER_OFFLOAD_PROVIDER=NVIDIA-G{CARD#} using the card identifier found in xrandr --listproviders. The official Nvidia guide listed below has the resolution to this problem, but I did not read the guide closely enough and missed it on my first read. https://download.nvidia.com/XFree86/Linux-x86_64/495.44/README/primerenderoffload.html /etc/X11/xorg.conf is unnecessary for this solution. I missed the part about the OFFLOAD_PROVIDER because I was accessing the headless server remotely from a client without any X session or graphics display. The machine must be running some type of GUI and not just a headless terminal environment. In my case, I installed lightdm onto my server and then everything worked. I believe that the graphics session needs to be using X11/Xorg and not Wayland. For gdm3, Xorg can be set to run by uncommenting #WaylandEnable=false in /etc/gdm3/custom.conf and rebooting.
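A sketch of what the adjusted prime-run script could look like for the second card, assuming xrandr --listproviders reports it as NVIDIA-G1 (the provider name is an assumption and differs per system):
#!/bin/bash
export __NV_PRIME_RENDER_OFFLOAD=1
export __NV_PRIME_RENDER_OFFLOAD_PROVIDER=NVIDIA-G1   # provider name taken from xrandr --listproviders
export __GLX_VENDOR_LIBRARY_NAME=nvidia
export __VK_LAYER_NV_optimus=NVIDIA_only
export VK_ICD_FILENAMES=/usr/share/vulkan/icd.d/nvidia_icd.json
exec "$@"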
My computer has one integrated graphics card and 2 Nvidia RTX 3070 GPUS. I am using Ubuntu 20.04 and nvidia-driver-530. lspci | grep VGA 00:02.0 VGA compatible controller: Intel Corporation AlderLake-S GT1 (rev 0c) 01:00.0 VGA compatible controller: NVIDIA Corporation GA104 [GeForce RTX 3070 Lite Hash Rate] (rev a1) 05:00.0 VGA compatible controller: NVIDIA Corporation GA104 [GeForce RTX 3070 Lite Hash Rate] (rev a1)I am currently trying to test my 3070 graphics cards with the Phoronix Test Suite. I am using nvidia-prime and prime-select: on-demand to run the terminal on the intel iGPU and phoronix tests on the Nvidia 3070: prime-run phoronix-test-suite run unigine-heaven. There were some issues getting nvidia-prime to work, so I followed the suggestions from this article: https://askubuntu.com/questions/1364762/prime-run-command-not-found cat /usr/bin/prime-run #!/bin/bash export __NV_PRIME_RENDER_OFFLOAD=1 export __GLX_VENDOR_LIBRARY_NAME=nvidia export __VK_LAYER_NV_optimus=NVIDIA_only export VK_ICD_FILENAMES=/usr/share/vulkan/icd.d/nvidia_icd.json exec "$@"By using prime-run I am successfully able to run the phoronix test suite on GPU 0 which has bus id 01:00.0 / PCI:1:0:0. However, I seem unable to run any tests with GPU 1 which has bus id 05:00.0 / PCI:5:0:0. Modifying /etc/X11/xorg.conf by changing the bus number and rebooting as suggested by the following links didn't seem to do anything and still ran on GPU 0.https://stackoverflow.com/questions/18382271/how-can-i-modify-xorg-conf-file-to-force-x-server-to-run-on-a-specific-gpu-i-a https://askubuntu.com/questions/787030/setting-the-default-gpucat /etc/X11/xorg.conf # nvidia-xconfig: X configuration file generated by nvidia-xconfig # nvidia-xconfig: version 530.41.03Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" EndSectionSection "Files" EndSectionSection "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/psaux" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSectionSection "InputDevice" # generated from default Identifier "Keyboard0" Driver "kbd" EndSectionSection "Monitor" Identifier "Monitor0" VendorName "Unknown" ModelName "Unknown" Option "DPMS" EndSectionSection "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" # BusID "PCI:1:0:0" BusID "PCI:5:0:0" EndSectionSection "Screen" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 SubSection "Display" Depth 24 EndSubSection EndSectionIn fact, I deleted etc/X11/xorg.conf and was able to run the phoronix tests on GPU 0 without the conf file at all. I would guess that one of the drivers or programs I run automatically selects the nvidia card with the lowest bus id. I would like to know where I should look to change the settings or any configuration files in order to select the second RTX 3070 gpu with the bus id 05:00.0. I would be more than happy to provide any further information.
Force graphics to run on specific GPU
From the lspci output I can only see one Intel graphics card, make sure there is a AMD card and that it is enabled in BIOS. Also you can use the Additional Drivers window to install the proprietary fglrx drivers:If that or installing fglrx/fglrx-updates does not work, you can download the drivers from AMD's site (this may help there - I will add that when I had to do this recently I had to purge the existing fglrx install from the repo and ocl-icd-libopencl1) Once you have the AMD card enabled with a driver (doesn't have to be fglrx, the open source ones can work better), you might be able to use vga_switcheroo if you have a kernel older than 3.11 - otherwise it may be managed by Radeon DPM.
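If you do end up trying vga_switcheroo, the basic interaction looks like this (a sketch; needs root and debugfs mounted at /sys/kernel/debug):
cat /sys/kernel/debug/vgaswitcheroo/switch            # list the cards and their power state
echo DIS > /sys/kernel/debug/vgaswitcheroo/switch     # switch to the discrete card
echo OFF > /sys/kernel/debug/vgaswitcheroo/switch     # power down whichever card is unused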
I own a HP ProBook 450 G0 laptop, running Ubuntu 14.04 (3.16.0-33-generic x86_64). This particular laptop has two GPUs and I want to be able to switch between them. I'm looking for a free driver or utility that would allow me achieve this, but I'm wiling to install proprietary software if no other solution applies. Things I have tried so far:I tried locating vga_switcheroo, but file /sys/kernel/debug/vgaswitcheroo/switch is not present on my system. I downloaded the official AMD drivers and tried installing them using aptitude, but it didn't complete the installation because of a missing dependency (fglrx-core). I found out I am able to disable the discrete graphics card in BIOS.Output of lspci: 00:00.0 Host bridge: Intel Corporation 3rd Gen Core processor DRAM Controller (rev 09) 00:02.0 VGA compatible controller: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09) 00:14.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB xHCI Host Controller (rev 04) 00:16.0 Communication controller: Intel Corporation 7 Series/C210 Series Chipset Family MEI Controller #1 (rev 04) 00:1a.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #2 (rev 04) 00:1b.0 Audio device: Intel Corporation 7 Series/C210 Series Chipset Family High Definition Audio Controller (rev 04) 00:1c.0 PCI bridge: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 1 (rev c4) 00:1c.2 PCI bridge: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 3 (rev c4) 00:1c.3 PCI bridge: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 4 (rev c4) 00:1c.5 PCI bridge: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 6 (rev c4) 00:1d.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #1 (rev 04) 00:1f.0 ISA bridge: Intel Corporation HM76 Express Chipset LPC Controller (rev 04) 00:1f.2 SATA controller: Intel Corporation 7 Series Chipset Family 6-port SATA Controller [AHCI mode] (rev 04) 01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Mars [Radeon HD 8670A/8670M/8750M] (rev ff) 02:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS5229 PCI Express Card Reader (rev 01) 03:00.0 Network controller: Ralink corp. RT3290 Wireless 802.11n 1T/1R PCIe 03:00.1 Bluetooth: Ralink corp. RT3290 Bluetooth 04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 0c)Output of lshw -C display: *-display description: VGA compatible controller product: 3rd Gen Core processor Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 09 width: 64 bits clock: 33MHz capabilities: msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:50 memory:d0000000-d03fffff memory:c0000000-cfffffff ioport:4000(size=64)
How can I make hybrid graphics(AMD/Intel) work on Ubuntu?
The solution was finally to install a more recent OS, Debian 9 was enought. Then I installed firmware-amd-graphics from the non-free source, and now it's working: xrandr --listproviders Providers: number : 2Provider 0: id: 0x7b cap: 0xf, Source Output, Sink Output, Source Offload, Sink Offload crtcs: 3 outputs: 5 associated providers: 0 name:modesettingProvider 1: id: 0x53 cap: 0xf, Source Output, Sink Output, Source Offload, Sink Offload crtcs: 2 outputs: 2 associated providers: 0 name:OLAND @ pci:0000:03:00.0Only the names are weird, but once the configuration is done it's not a big deal anymore. Happy New Year!
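For anyone retracing this, a hedged sketch of enabling non-free and pulling in the firmware on Debian 9 (adjust the mirror and suite to your setup):
# make sure the non-free component is listed in /etc/apt/sources.list, e.g.:
deb http://deb.debian.org/debian stretch main contrib non-free
# then:
sudo apt update
sudo apt install firmware-amd-graphics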
I'd like to play Steam Linux games on my laptop. Those games work fine on the Windows partition, but on Debian's one the games run slow. I searched for the reason why they run so slow on Linux, and I found out that my 2nd graphic card wasn't used, so, now I'm trying to activate it. The reason of that post is that I struggle a lot to make it work, here are things I tried:"lspci | grep VGA" tells me that my 2nd card is here: 00:02.0 VGA compatible controller: Intel Corporation Haswell-ULT Integrated Graphics Controller (rev Ob)03:00.0 VGA compatible controller: Advanced Micro Devices, Inc [AMD/ATI] Mars [Radeon HD 8730M]"xrandr --listproviders" says the opposite: Providers number : 1Provider 0: id: 0x47 cap: 0xb, Source Output, Sink Offload crtcs, 3 outputs: 5 associated providers: 0 name:Intel"glxinfo | grep "OpenGL renderer string"" confirms what xrandr said: OpenGL renderer string: Mesa DRI Intel(R) Haswell MobileBUT "cat /sys/kernel/debug/vgaswitcheroo/switch" says: 0:IGD:+:Pwr:0000:00:02.0 1:DIS: :DynOff:000:03:00.0 2:DIS-Audio: :Off:000:03:00.1 Which means that my discrete graphic card is there, but off, but ready to be used. So I tried to activate it using switcheroo:"echo ON > /sys/kernel/debug/vgaswitcheroo/switch"Nothing happens"echo DIS > /sys/kernel/debug/vgaswitcheroo/switch"Nothing happens... Then? "echo DDIS > /sys/kernel/debug/vgaswitcheroo/switch"Nothing happensEach time I checked for the switch file, and it's content stayed the same all along, the IGD being powered, and the DIS DynOff, even after reboot. Oh, sometimes eventually switcheroo says: "vga_switcheroo client 0 refused switch" when doing those commands in su mode directly, and I don't really know what that means...Since I had no xorg.conf file, I decided to make one, with the cmd "X -configure" while on recovery mode. Then I moved the file: "cp /root/xorg.conf.new /etc/X11/xorg.conf". But when I reboot with this conf file, my computer gets stuck on the Plymouth boot screen, and the only thing I can access is the tty. Here I tried to backup the /usr/share/X11/xorg.conf.d/ folder and remove it from it's original location, and reboot. Now I don't even get stuck on Plymouth boot screen, after booting I'm redirected on the tty.At that point, I'm not even sure that the xorg conf is of any use for my original problem, but I see that I also have a problem with it, since freshly generated xorg.conf files makes my computer stuck on splash screen.Results of lsb_release -a : LSB Version: core-2.0-amd64:core-2.0-noarch:core-3.0-amd64:core-3.0-noarch:core-3.1-amd64:core-3.1-noarch:core-3.2-amd64:core-3.2- noarch:core-4.0-amd64:core-4.0-noarch:core-4.1-amd64:core-4.1-noarch:security-4.0-amd64:security-4.0-noarch:security-4.1-amd64:security-4.1-noarch Distributor ID: Debian Description: Debian GNU/Linux 8.10 (jessie) Release: 8.10 Codename: jessie Results of sudo lshw -C display *-display description: VGA compatible controller product: Haswell-ULT Integrated Graphics Controller vendor: Intel Corporation hardware ID: 2 bus information: pci@0000:00:02.0 version: 0b bits: 64 bits clock: 33MHz capabilities: msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:70 mémoire:d0000000-d03fffff mémoire:c0000000-cfffffff port(s):4000(size=64)I don't know how to make my AMD graphic card work on that Debian 8 "Jessie" OS, and I probably have a problem with my xorg, so this post is my last hope, pretty much. 
UPDATE Now, after following this post: https://askubuntu.com/questions/648426/discrete-graphics-always-dynoff I'm stuck on the Plymouth Boot Screen when I boot, and I have this message on both tty1 and Plymout screen (tty7) every 2mn: INFO: task kworker/u16:0:6 blocked for more than 120 seconds. Tainted: G C 3.16.0-4-amd64 #1 echo 0 > /proc/sys/kernel/hung_task_timeout_secs disables this message.Plus, I'm unable to reboot. The /etc/rc.local file edition may seem to be the cause, because when I comment out the line I added, it boots correctly. On a positive note, my discrete GC is now Pwr in vga_switcheroo. But still not listed in xrandr --listproviders, and I still get the message vga_switcheroo: client 0 refused switch when I try to activate it (echo DIS > /sys/kernel/debug/vgaswitcheroo/switch) Otherwise, if using radeon rather than fglrx is more complicated, would installing another desktop environment than Gnome like Cinnamon would make my life easier? (Knowing that fglrx is incompatible with Gnome) UPDATE I did a bunch of experiments to answer my previous question. I tried to install fglrx-driver and use LightDM (also works with KDM) for display management, and it works. Now, I've tried to use Cinnamon, but it seems that it doesn't support that driver, just like Gnome. So I've installed xfce4 and it seemed to work fine with the driver. So, now I have KDM for login, and xfce as desktop environment. I open my terminal and type xrandr --listproviders, but still only the Intel device shows up... I created a xorg.conf file using aticonfig --initial, then reboot, and now I have a black screen (black screen for LightDM, tty1 redirection for KDM) meaning that the xorg.conf file generated is not working... I don't know what to think of it, my xorg configuration might have something to do with it after all! Reporting another problem: Even though I have managed to make my computer work with a desktop environment along with fglrx, now commands like fglrxinfo, glxinfo & glxgears return the same error: Xlib: extension "GLX" missing on display ":0.0".UPDATE I've asked people on Reddit about my problem, and it seems that my system & graphical stack are too old, so, I'm going to backup my PC and upgrade it from Debian 8 to Debian 9, and hope for the best!
Discrete Graphic Card activation on hybrid laptop
I just tried this solution again. However, I changed /etc/bumblebee/xorg.conf.nvidia before running optirun intel-virtual-output and it worked this time. The Archwiki didn't mention that this must be done beforehand so I had only changed the configuration files afterwards and re-running optirun intel-virtual-output probably didn't work as there was already an instance of it running.
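For anyone retracing this, the edits the Arch wiki suggests for /etc/bumblebee/xorg.conf.nvidia are roughly along these lines (a sketch only; the option names and values here are illustrative assumptions, so check the wiki page for your driver version):
Section "Device"
    Identifier  "DiscreteNvidia"
    Driver      "nvidia"
    Option      "AllowEmptyInitialConfiguration" "true"
    # comment out the two options below so the card's outputs are actually exposed
    # Option    "UseEDID" "false"
    # Option    "UseDisplayDevice" "none"
EndSection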
I have a Dell Latitude e6420 laptop with a discrete Nvidia NVS 4200m graphics card in addition to an Intel HD 3000. When at home, I use a docking station which is connected to another monitor. I used to have my monitor connected to the docking station via DVI which worked perfectly fine. But as I recently got a new monitor (Dell P2418D) with a resolution which is too high for DVI, I attempted to connect it to the docking station via Displayport. When using the new monitor (connected to the docking station via Displayport) under Windows 10 (I have a Manjaro/Windows dual-boot system), it works fine. However, if I try to use it under Linux, it is recognized by xrandr, but the monitor doesn't recognize any input signal: Screen 0: minimum 320 x 200, current 2560 x 1440, maximum 8192 x 8192 LVDS-1 connected primary 1600x900+0+0 (normal left inverted right x axis y axis) 310mm x 174mm 1600x900 60.06*+ 59.99 59.94 59.95 59.82 40.32 1400x900 59.96 59.88 1440x810 60.00 59.97 1368x768 59.88 59.85 1280x800 59.99 59.97 59.81 59.91 1280x720 60.00 59.99 59.86 59.74 1024x768 60.04 60.00 960x720 60.00 928x696 60.05 896x672 60.01 1024x576 59.95 59.96 59.90 59.82 960x600 59.93 60.00 960x540 59.96 59.99 59.63 59.82 800x600 60.00 60.32 56.25 840x525 60.01 59.88 864x486 59.92 59.57 700x525 59.98 800x450 59.95 59.82 640x512 60.02 700x450 59.96 59.88 640x480 60.00 59.94 720x405 59.51 58.99 684x384 59.88 59.85 640x400 59.88 59.98 640x360 59.86 59.83 59.84 59.32 512x384 60.00 512x288 60.00 59.92 480x270 59.63 59.82 400x300 60.32 56.34 432x243 59.92 59.57 320x240 60.05 360x202 59.51 59.13 320x180 59.84 59.32 VGA-1 disconnected (normal left inverted right x axis y axis) LVDS-1-2 disconnected (normal left inverted right x axis y axis) VGA-1-2 disconnected (normal left inverted right x axis y axis) HDMI-1-1 disconnected (normal left inverted right x axis y axis) DP-1-1 connected 2560x1440+0+0 (normal left inverted right x axis y axis) 526mm x 296mm 2560x1440 59.95*+ 1920x1440 60.00 1856x1392 60.01 1792x1344 75.00 60.01 2048x1152 59.90 59.91 1920x1200 59.88 59.95 2048x1080 60.00 24.00 1920x1080 59.97 59.96 60.00 50.00 59.94 59.93 24.00 23.98 1920x1080i 60.00 50.00 59.94 1600x1200 75.00 70.00 65.00 60.00 1680x1050 59.95 59.88 1400x1050 74.76 59.98 1600x900 59.99 59.94 59.95 59.82 1280x1024 75.02 60.02 1400x900 59.96 59.88 1280x960 60.00 1440x810 60.00 59.97 1368x768 59.88 59.85 1280x800 59.99 59.97 59.81 59.91 1152x864 75.00 1280x720 60.00 59.99 59.86 60.00 50.00 59.94 59.74 1024x768 75.05 60.04 75.03 70.07 60.00 960x720 75.00 60.00 928x696 75.00 60.05 896x672 75.05 60.01 1024x576 59.95 59.96 59.90 59.82 960x600 59.93 60.00 832x624 74.55 960x540 59.96 59.99 59.63 59.82 800x600 75.00 70.00 65.00 60.00 72.19 75.00 60.32 56.25 840x525 60.01 59.88 864x486 59.92 59.57 720x576 50.00 720x576i 50.00 700x525 74.76 59.98 800x450 59.95 59.82 720x480 60.00 59.94 720x480i 60.00 59.94 640x512 75.02 60.02 700x450 59.96 59.88 640x480 60.00 75.00 72.81 75.00 60.00 59.94 720x405 59.51 58.99 720x400 70.08 684x384 59.88 59.85 640x400 59.88 59.98 576x432 75.00 640x360 59.86 59.83 59.84 59.32 512x384 75.03 70.07 60.00 512x288 60.00 59.92 416x312 74.66 480x270 59.63 59.82 400x300 72.19 75.12 60.32 56.34 432x243 59.92 59.57 320x240 72.81 75.00 60.05 360x202 59.51 59.13 320x180 59.84 59.32 DP-1-2 disconnected (normal left inverted right x axis y axis) 1600x900 (0x45) 246.000MHz -HSync +VSync DoubleScan h: width 1600 start 1728 end 1900 total 2200 skew 0 clock 111.82KHz v: height 900 start 901 end 904 total 932 clock 59.99Hz 1600x900 (0x46) 
186.500MHz +HSync -VSync DoubleScan h: width 1600 start 1624 end 1640 total 1680 skew 0 clock 111.01KHz v: height 900 start 901 end 904 total 926 clock 59.94Hz 1600x900 (0x47) 118.250MHz -HSync +VSync h: width 1600 start 1696 end 1856 total 2112 skew 0 clock 55.99KHz v: height 900 start 903 end 908 total 934 clock 59.95Hz 1600x900 (0x48) 97.500MHz +HSync -VSync h: width 1600 start 1648 end 1680 total 1760 skew 0 clock 55.40KHz v: height 900 start 903 end 908 total 926 clock 59.82Hz 1400x900 (0x4a) 103.500MHz -HSync +VSync h: width 1400 start 1480 end 1624 total 1848 skew 0 clock 56.01KHz v: height 900 start 903 end 913 total 934 clock 59.96Hz 1400x900 (0x4b) 86.500MHz +HSync -VSync h: width 1400 start 1448 end 1480 total 1560 skew 0 clock 55.45KHz v: height 900 start 903 end 913 total 926 clock 59.88Hz 1440x810 (0x4c) 198.125MHz -HSync +VSync DoubleScan h: width 1440 start 1548 end 1704 total 1968 skew 0 clock 100.67KHz v: height 810 start 811 end 814 total 839 clock 60.00Hz 1440x810 (0x4d) 151.875MHz +HSync -VSync DoubleScan h: width 1440 start 1464 end 1480 total 1520 skew 0 clock 99.92KHz v: height 810 start 811 end 814 total 833 clock 59.97Hz 1368x768 (0x4e) 85.250MHz -HSync +VSync h: width 1368 start 1440 end 1576 total 1784 skew 0 clock 47.79KHz v: height 768 start 771 end 781 total 798 clock 59.88Hz 1368x768 (0x4f) 72.250MHz +HSync -VSync h: width 1368 start 1416 end 1448 total 1528 skew 0 clock 47.28KHz v: height 768 start 771 end 781 total 790 clock 59.85Hz 1280x800 (0x50) 174.250MHz -HSync +VSync DoubleScan h: width 1280 start 1380 end 1516 total 1752 skew 0 clock 99.46KHz v: height 800 start 801 end 804 total 829 clock 59.99Hz 1280x800 (0x51) 134.250MHz +HSync -VSync DoubleScan h: width 1280 start 1304 end 1320 total 1360 skew 0 clock 98.71KHz v: height 800 start 801 end 804 total 823 clock 59.97Hz 1280x800 (0x52) 83.500MHz -HSync +VSync h: width 1280 start 1352 end 1480 total 1680 skew 0 clock 49.70KHz v: height 800 start 803 end 809 total 831 clock 59.81Hz 1280x800 (0x53) 71.000MHz +HSync -VSync h: width 1280 start 1328 end 1360 total 1440 skew 0 clock 49.31KHz v: height 800 start 803 end 809 total 823 clock 59.91Hz 1280x720 (0x54) 156.125MHz -HSync +VSync DoubleScan h: width 1280 start 1376 end 1512 total 1744 skew 0 clock 89.52KHz v: height 720 start 721 end 724 total 746 clock 60.00Hz 1280x720 (0x55) 120.750MHz +HSync -VSync DoubleScan h: width 1280 start 1304 end 1320 total 1360 skew 0 clock 88.79KHz v: height 720 start 721 end 724 total 740 clock 59.99Hz 1280x720 (0x56) 74.500MHz -HSync +VSync h: width 1280 start 1344 end 1472 total 1664 skew 0 clock 44.77KHz v: height 720 start 723 end 728 total 748 clock 59.86Hz 1280x720 (0x57) 63.750MHz +HSync -VSync h: width 1280 start 1328 end 1360 total 1440 skew 0 clock 44.27KHz v: height 720 start 723 end 728 total 741 clock 59.74Hz 1024x768 (0x58) 133.475MHz -HSync +VSync DoubleScan h: width 1024 start 1100 end 1212 total 1400 skew 0 clock 95.34KHz v: height 768 start 768 end 770 total 794 clock 60.04Hz 1024x768 (0x59) 65.000MHz -HSync -VSync h: width 1024 start 1048 end 1184 total 1344 skew 0 clock 48.36KHz v: height 768 start 771 end 777 total 806 clock 60.00Hz 960x720 (0x5a) 117.000MHz -HSync +VSync DoubleScan h: width 960 start 1024 end 1128 total 1300 skew 0 clock 90.00KHz v: height 720 start 720 end 722 total 750 clock 60.00Hz 928x696 (0x5b) 109.150MHz -HSync +VSync DoubleScan h: width 928 start 976 end 1088 total 1264 skew 0 clock 86.35KHz v: height 696 start 696 end 698 total 719 clock 60.05Hz 896x672 (0x5c) 102.400MHz 
-HSync +VSync DoubleScan h: width 896 start 960 end 1060 total 1224 skew 0 clock 83.66KHz v: height 672 start 672 end 674 total 697 clock 60.01Hz 1024x576 (0x5d) 98.500MHz -HSync +VSync DoubleScan h: width 1024 start 1092 end 1200 total 1376 skew 0 clock 71.58KHz v: height 576 start 577 end 580 total 597 clock 59.95Hz 1024x576 (0x5e) 78.375MHz +HSync -VSync DoubleScan h: width 1024 start 1048 end 1064 total 1104 skew 0 clock 70.99KHz v: height 576 start 577 end 580 total 592 clock 59.96Hz 1024x576 (0x5f) 46.500MHz -HSync +VSync h: width 1024 start 1064 end 1160 total 1296 skew 0 clock 35.88KHz v: height 576 start 579 end 584 total 599 clock 59.90Hz 1024x576 (0x60) 42.000MHz +HSync -VSync h: width 1024 start 1072 end 1104 total 1184 skew 0 clock 35.47KHz v: height 576 start 579 end 584 total 593 clock 59.82Hz 960x600 (0x61) 96.625MHz -HSync +VSync DoubleScan h: width 960 start 1028 end 1128 total 1296 skew 0 clock 74.56KHz v: height 600 start 601 end 604 total 622 clock 59.93Hz 960x600 (0x62) 77.000MHz +HSync -VSync DoubleScan h: width 960 start 984 end 1000 total 1040 skew 0 clock 74.04KHz v: height 600 start 601 end 604 total 617 clock 60.00Hz 960x540 (0x63) 86.500MHz -HSync +VSync DoubleScan h: width 960 start 1024 end 1124 total 1288 skew 0 clock 67.16KHz v: height 540 start 541 end 544 total 560 clock 59.96Hz 960x540 (0x64) 69.250MHz +HSync -VSync DoubleScan h: width 960 start 984 end 1000 total 1040 skew 0 clock 66.59KHz v: height 540 start 541 end 544 total 555 clock 59.99Hz 960x540 (0x65) 40.750MHz -HSync +VSync h: width 960 start 992 end 1088 total 1216 skew 0 clock 33.51KHz v: height 540 start 543 end 548 total 562 clock 59.63Hz 960x540 (0x66) 37.250MHz +HSync -VSync h: width 960 start 1008 end 1040 total 1120 skew 0 clock 33.26KHz v: heigTht 540 start 543 end 548 total 556 clock 59.82Hz 800x600 (0x67) 81.000MHz +HSync +VSync DoubleScan h: width 800 start 832 end 928 total 1080 skew 0 clock 75.00KHz v: height 600 start 600 end 602 total 625 clock 60.00Hz 800x600 (0x68) 40.000MHz +HSync +VSync h: width 800 start 840 end 968 total 1056 skew 0 clock 37.88KHz v: height 600 start 601 end 605 total 628 clock 60.32Hz 800x600 (0x69) 36.000MHz +HSync +VSync h: width 800 start 824 end 896 total 1024 skew 0 clock 35.16KHz v: height 600 start 601 end 603 total 625 clock 56.25Hz 840x525 (0x6a) 73.125MHz -HSync +VSync DoubleScan h: width 840 start 892 end 980 total 1120 skew 0 clock 65.29KHz v: height 525 start 526 end 529 total 544 clock 60.01Hz 840x525 (0x6b) 59.500MHz +HSync -VSync DoubleScan h: width 840 start 864 end 880 total 920 skew 0 clock 64.67KHz v: height 525 start 526 end 529 total 540 clock 59.88Hz 864x486 (0x6c) 32.500MHz -HSync +VSync h: width 864 start 888 end 968 total 1072 skew 0 clock 30.32KHz v: height 486 start 489 end 494 total 506 clock 59.92Hz 864x486 (0x6d) 30.500MHz +HSync -VSync h: width 864 start 912 end 944 total 1024 skew 0 clock 29.79KHz v: height 486 start 489 end 494 total 500 clock 59.57Hz 700x525 (0x6e) 61.000MHz +HSync +VSync DoubleScan h: width 700 start 744 end 820 total 940 skew 0 clock 64.89KHz v: height 525 start 526 end 532 total 541 clock 59.98Hz 800x450 (0x6f) 59.125MHz -HSync +VSync DoubleScan h: width 800 start 848 end 928 total 1056 skew 0 clock 55.99KHz v: height 450 start 451 end 454 total 467 clock 59.95Hz 800x450 (0x70) 48.750MHz +HSync -VSync DoubleScan h: width 800 start 824 end 840 total 880 skew 0 clock 55.40KHz v: height 450 start 451 end 454 total 463 clock 59.82Hz 640x512 (0x71) 54.000MHz +HSync +VSync DoubleScan h: width 640 start 664 
end 720 total 844 skew 0 clock 63.98KHz v: height 512 start 512 end 514 total 533 clock 60.02Hz 700x450 (0x72) 51.750MHz -HSync +VSync DoubleScan h: width 700 start 740 end 812 total 924 skew 0 clock 56.01KHz v: height 450 start 451 end 456 total 467 clock 59.96Hz 700x450 (0x73) 43.250MHz +HSync -VSync DoubleScan h: width 700 start 724 end 740 total 780 skew 0 clock 55.45KHz v: height 450 start 451 end 456 total 463 clock 59.88Hz 640x480 (0x74) 54.000MHz +HSync +VSync DoubleScan h: width 640 start 688 end 744 total 900 skew 0 clock 60.00KHz v: height 480 start 480 end 482 total 500 clock 60.00Hz 640x480 (0x75) 25.175MHz -HSync -VSync h: width 640 start 656 end 752 total 800 skew 0 clock 31.47KHz v: height 480 start 490 end 492 total 525 clock 59.94Hz 720x405 (0x76) 22.500MHz -HSync +VSync h: width 720 start 744 end 808 total 896 skew 0 clock 25.11KHz v: height 405 start 408 end 413 total 422 clock 59.51Hz 720x405 (0x77) 21.750MHz +HSync -VSync h: width 720 start 768 end 800 total 880 skew 0 clock 24.72KHz v: height 405 start 408 end 413 total 419 clock 58.99Hz 684x384 (0x78) 42.625MHz -HSync +VSync DoubleScan h: width 684 start 720 end 788 total 892 skew 0 clock 47.79KHz v: height 384 start 385 end 390 total 399 clock 59.88Hz 684x384 (0x79) 36.125MHz +HSync -VSync DoubleScan h: width 684 start 708 end 724 total 764 skew 0 clock 47.28KHz v: height 384 start 385 end 390 total 395 clock 59.85Hz 640x400 (0x7a) 41.750MHz -HSync +VSync DoubleScan h: width 640 start 676 end 740 total 840 skew 0 clock 49.70KHz v: height 400 start 401 end 404 total 415 clock 59.88Hz 640x400 (0x7b) 35.500MHz +HSync -VSync DoubleScan h: width 640 start 664 end 680 total 720 skew 0 clock 49.31KHz v: height 400 start 401 end 404 total 411 clock 59.98Hz 640x360 (0x7c) 37.250MHz -HSync +VSync DoubleScan h: width 640 start 672 end 736 total 832 skew 0 clock 44.77KHz v: height 360 start 361 end 364 total 374 clock 59.86Hz 640x360 (0x7d) 31.875MHz +HSync -VSync DoubleScan h: width 640 start 664 end 680 total 720 skew 0 clock 44.27KHz v: height 360 start 361 end 364 total 370 clock 59.83Hz 640x360 (0x7e) 18.000MHz -HSync +VSync h: width 640 start 664 end 720 total 800 skew 0 clock 22.50KHz v: height 360 start 363 end 368 total 376 clock 59.84Hz 640x360 (0x7f) 17.750MHz +HSync -VSync h: width 640 start 688 end 720 total 800 skew 0 clock 22.19KHz v: height 360 start 363 end 368 total 374 clock 59.32Hz 512x384 (0x80) 32.500MHz -HSync -VSync DoubleScan h: width 512 start 524 end 592 total 672 skew 0 clock 48.36KHz v: height 384 start 385 end 388 total 403 clock 60.00Hz 512x288 (0x81) 23.250MHz -HSync +VSync DoubleScan h: width 512 start 532 end 580 total 648 skew 0 clock 35.88KHz v: height 288 start 289 end 292 total 299 clock 60.00Hz 512x288 (0x82) 21.000MHz +HSync -VSync DoubleScan h: width 512 start 536 end 552 total 592 skew 0 clock 35.47KHz v: height 288 start 289 end 292 total 296 clock 59.92Hz 480x270 (0x83) 20.375MHz -HSync +VSync DoubleScan h: width 480 start 496 end 544 total 608 skew 0 clock 33.51KHz v: height 270 start 271 end 274 total 281 clock 59.63Hz 480x270 (0x84) 18.625MHz +HSync -VSync DoubleScan h: width 480 start 504 end 520 total 560 skew 0 clock 33.26KHz v: height 270 start 271 end 274 total 278 clock 59.82Hz 400x300 (0x85) 20.000MHz +HSync +VSync DoubleScan h: width 400 start 420 end 484 total 528 skew 0 clock 37.88KHz v: height 300 start 300 end 302 total 314 clock 60.32Hz 400x300 (0x86) 18.000MHz +HSync +VSync DoubleScan h: width 400 start 412 end 448 total 512 skew 0 clock 35.16KHz v: height 300 start 
300 end 301 total 312 clock 56.34Hz 432x243 (0x87) 16.250MHz -HSync +VSync DoubleScan h: width 432 start 444 end 484 total 536 skew 0 clock 30.32KHz v: height 243 start 244 end 247 total 253 clock 59.92Hz 432x243 (0x88) 15.250MHz +HSync -VSync DoubleScan h: width 432 start 456 end 472 total 512 skew 0 clock 29.79KHz v: height 243 start 244 end 247 total 250 clock 59.57Hz 320x240 (0x89) 12.587MHz -HSync -VSync DoubleScan h: width 320 start 328 end 376 total 400 skew 0 clock 31.47KHz v: height 240 start 245 end 246 total 262 clock 60.05Hz 360x202 (0x8a) 11.250MHz -HSync +VSync DoubleScan h: width 360 start 372 end 404 total 448 skew 0 clock 25.11KHz v: height 202 start 204 end 206 total 211 clock 59.51Hz 360x202 (0x8b) 10.875MHz +HSync -VSync DoubleScan h: width 360 start 384 end 400 total 440 skew 0 clock 24.72KHz v: height 202 start 204 end 206 total 209 clock 59.13Hz 320x180 (0x8c) 9.000MHz -HSync +VSync DoubleScan h: width 320 start 332 end 360 total 400 skew 0 clock 22.50KHz v: height 180 start 181 end 184 total 188 clock 59.84Hz 320x180 (0x8d) 8.875MHz +HSync -VSync DoubleScan h: width 320 start 344 end 360 total 400 skew 0 clock 22.19KHz v: height 180 start 181 end 184 total 187 clock 59.32Hz(The external monitor is DP-1-1.) As I suspected the graphics driver to be the problem, I used mhwd to switch from "video-linux" (which is basically nouveau) to the proprietary driver "video-hybrid-intel-nvidia-390xx-bumblebee". However, using this driver, the external monitor (along with several other outputs) was no longer shown. The output of xrandr: Screen 0: minimum 8 x 8, current 1600 x 900, maximum 32767 x 32767 LVDS1 connected primary 1600x900+0+0 (normal left inverted right x axis y axis) 310mm x 170mm 1600x900 60.06*+ 40.32 1400x900 59.88 1368x768 60.00 59.88 59.85 1280x800 59.81 59.91 1280x720 59.86 60.00 59.74 1024x768 60.00 1024x576 60.00 59.90 59.82 960x540 60.00 59.63 59.82 800x600 60.32 56.25 864x486 60.00 59.92 59.57 800x450 60.00 640x480 59.94 720x405 59.51 60.00 58.99 640x360 59.84 59.32 60.00 VGA1 disconnected (normal left inverted right x axis y axis) VIRTUAL1 disconnected (normal left inverted right x axis y axis)Using another driver (video-nvidia-390xx), the external monitor worked fine, but the laptop's own monitor (which was LVDS-1/LVDS1 in the previous xrandr outputs) was no longer shown: Screen 0: minimum 8 x 8, current 4608 x 1440, maximum 16384 x 16384 VGA-0 disconnected (normal left inverted right x axis y axis) LVDS-0 disconnected (normal left inverted right x axis y axis) DP-0 disconnected (normal left inverted right x axis y axis) DP-1 disconnected (normal left inverted right x axis y axis) HDMI-0 disconnected (normal left inverted right x axis y axis) DP-2 connected primary 2560x1440+0+0 (normal left inverted right x axis y axis) 526mm x 296mm panning 4608x1440+0+0 2560x1440 59.95*+ 2048x1080 60.00 24.00 1920x1200 59.88 1920x1080 60.00 59.94 50.00 23.98 1680x1050 59.95 1600x1200 60.00 1280x1024 75.02 60.02 1280x800 59.81 1280x720 59.94 50.00 1152x864 75.00 1024x768 75.03 60.00 800x600 75.00 60.32 720x576 50.00 720x480 59.94 640x480 75.00 59.94 59.93 DP-3 disconnected (normal left inverted right x axis y axis)(The external monitor is DP-2) This confirmed my assumption that I was dealing with a diver-issue but didn't solve my problem (as I need to use the monitor of the laptop for obvious reasons). Furthermore, this solution using intel-virtual-output (w/ the "video-hybrid-intel-nvidia-390xx-bumblebee" driver) didn't work either. 
Another solution I tried was using bbswitch (w/ the "video-hybrid-intel-nvidia-390xx-bumblebee" driver) to turn the discrete graphics card on and then running sudo systemctl restart display-manager but the external monitor still didn't show up in xrandr. How can I get both monitors (external and laptop monitor) to work at the same time? Why is the monitor recognized using the "video-linux" driver but I can't get it to receive any input? Why does the monitor work with the proprietary "video-nvidia-390xx" driver but not with the (also proprietary) "video-hybrid-intel-nvidia-390xx-bumblebee" driver? The output of inxi -Fxxxz with the "video-linux" driver inxi -G w/ "video-hybrid-intel-nvidia-390xx-bumblebee" inxi -G w/ "video-nvidia-390xx"
Unable to get Displayport output working with a Nvidia Optimus Laptop
There isn’t much to do for AMD since most of the drivers are open source. You will want to install xf86-video-amdgpu if it isn’t installed already: pacman -S xf86-video-amdgpu You can also prepend DRI_PRIME=1 to the program invocation to render on the discrete card, and the Get-The-Right-AMD-Driver website can point you to some more drivers for the laptop.
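To check which GPU actually renders, something like this works (glxinfo comes from mesa-demos/mesa-utils depending on the distro):
glxinfo | grep "OpenGL renderer"              # default: should name the integrated Renoir GPU
DRI_PRIME=1 glxinfo | grep "OpenGL renderer"  # should name the Navi 14 / RX 5500M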
I am relatively new to Linux. I have installed Endeavour OS on my laptop (an HP Victus 16), and noticed underwhelming performance in apps like Waydroid. It seems like Linux is only detecting the iGPU in my system. When I run xrandr --listproviders it gives me the output "Providers: number : 0"! Even going to Settings > About shows the graphics card as "AMD Renoir" only. Running lspci shows the dGPU connected as: Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Navi 14 [Radeon RX 5500/5500M / Pro 5500M] (rev c1), but it seems like it doesn't work anywhere else. Configuration of my laptop, if it matters: AMD Ryzen 5600h, 16 GB RAM, AMD RX 5500M graphics. And the OS details: Endeavour OS Linux x86_64, Kernel: 5.17.0-247-tkg-pds
Linux can't seem to detect my dedicated GPU on laptop
I guess the answer is that no, no-one can post a working config for HDMI out on a Dell L502x laptop running Fedora 25. That's a real pain. I'll get back to this eventually, and when I have a working config I will share it here for others.
I have a Dell XPS 15 [L502x]. It comes with an Nvidia GeForce GT525 GPU. It's an Optimus laptop, so an Intel card powers the main display but the HDMI out port connects to the Nvidia card. I have never had the HDMI out work in Fedora and would like to rectify that. Before I go back down the various rabbit-holes around Optimus, Bumblebee, nvidia, nouveau, Wayland and Xorg, I would like to find someone who has it working - just to see what they use. If you have a Dell L502x running Fedora 25 and you have your HDMI out running, please let me know. It would be great if you could post your working config. If you run a different flavour of Linux I would also be interested. If you could take follow-on questions that would be great! Disclosure: I asked this question a couple of days ago on ask.fedoraproject [https://ask.fedoraproject.org/en/question/97486/does-anyone-have-hdmi-out-working-on-dell-xps-15-l502x-with-fedora-25/] but there hasn't been any response, so I am casting my net a bit wider. I hope that's OK.
Can anyone post a working config for hdmi out Dell XPs 15 L502x with Fedora 25
I use system76-power. It works well. Also see https://askubuntu.com/questions/1341945/disabled-dedicated-gpu-powers-on-after-suspend
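Its graphics subcommands look roughly like this (a sketch; check system76-power --help for the exact set in your version):
sudo system76-power graphics              # show the current mode
sudo system76-power graphics integrated   # iGPU only, dGPU powered off
sudo system76-power graphics hybrid       # on-demand offloading to the dGPU
# log out or reboot after switching modes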
I have a laptop with integrated AMD graphics and a discrete Nvidia GTX 1650 Ti. $ sudo lspci ... 01:00.0 VGA compatible controller: NVIDIA Corporation TU117M [GeForce GTX 1650 Ti Mobile] (rev a1) 01:00.1 Audio device: NVIDIA Corporation Device 10fa (rev a1) ... 05:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Renoir (rev c7) Distro: Ubuntu 21.04, Kernel: 5.11.0-17-generic. I use $ sudo prime-select intel to disable the Nvidia graphics, and also set PCI power management to auto using TLP: $ sudo tlp-stat /sys/bus/pci/devices/0000:01:00.0/power/control = auto (0x030000, VGA compatible controller, no driver) /sys/bus/pci/devices/0000:01:00.1/power/control = auto (0x040300, Audio device, snd_hda_intel) This works great, the GPU is in low-power mode and battery life is good: $ cat /sys/bus/pci/devices/0000:01:00.0/power_state D3cold But after I suspend, the GPU starts to consume more power again: $ cat /sys/bus/pci/devices/0000:01:00.0/power_state D0 This also sometimes happens randomly while the laptop is on. Please help. This thing halves my laptop's battery life.
Disabled dedicated GPU powers on after suspend (and also just randomly)
I had a similar issue, though possibly slightly different as I don't intend to keep using Gnome. Removing ibus-gtk, ibus-gtk3, ibus-gtk3-32bit, ibus-lang, and ibus (all ibus-related packages on my system; yours may be different) seems to have worked with no ill effects after a reboot. You can remove them by running zypper rm -u ibus* - be sure to check the list for anything essential that you do not want removed.
Please, how do I get rid of the IBus service/IBus panel when running KDE? This Gnome(?) keyboard layout manager(?) can get into conflict with the layout set natively in KDE Settings. I need to switch often between CZ and UK keyboards and IBus makes it impossible. The severity of this issue ranges from the visual irritation of having two keyboard layout indicators in the tray area to something much more serious when both systems set conflicting layouts; like when in KDE I set the Czech keyboard but IBus somehow keeps the English UK layout: Can you guess which layout - EN (IBus) or CZ (KDE) - is actually active? The wrong one, of course; IBus seems to always override KDE :( If I quit the IBus panel now, it would make it even worse, because it's only the tray applet, the GUI bit, that disappears, but the IBus service is still active. I still wouldn't have my CZ keyboard and absolutely no way to change it. A very annoying variant of this problem is when the user has only a single layout set in KDE, which is by default not displayed in the tray area; but the IBus setting always is, even as a single layout. The user then has the very aggravating sensation that whatever he sets in KDE Settings is completely ignored, probably blaming KDE, unaware that the layout indicator belongs to a different system and that it is overriding the KDE settings. I once managed to kill all keyboard input completely when playing with KDE Settings and IBus properties at the same time. Really awful UX. IBus seems to be part of the Gnome stack. So why does it get activated in KDE? I suspect it appeared there only after I had also installed some Gnome/Gtk applications, like Gimp, GDM, etc. OS: openSuse 15.0 Linux. This issue was present in previous versions too. UPDATE: I also faced an issue with a US keyboard suddenly appearing as a third option. But that would be for yet another bug report. UPDATE 2: OK, I uninstalled them. Surprisingly it is possible to uninstall just ibus, not the whole of Gnome. My GDM still works. BUT - I now face another issue: I cannot switch keyboard layouts anymore, despite setting everything in System Settings and having 2 keyboard indicators in the tray; only the UK keyboard now works for me. I suspect IBus screwed something up in KDE generally. Ehhh, I sometimes feel that working with Linux means you will spend half of the time solving usability issues and writing bug reports :(
KDE: how to get rid of IBus keyboard selector
For newer versions see Koterpillar's answer. IIRC ibus uses gconf to store its settings, so you should be able to use either gconf-editor or gconftool (CLI) to get/set those settings.
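For the CLI route, a recursive dump of whatever ibus has stored would look something like this (the key path is an assumption and varies by version):
gconftool-2 -R /desktop/ibus    # recursively list keys and values under that path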
I'm wondering where the config file of ibus is stored. I checked ~/.config/ibus, and there's only a dbus socket, and no ~/.ibus folder available.
Where is config file of ibus stored?
No need to install ibus, etc. All X11 apps have access to the exact keycodes and to their xkb / xim translations, and may ignore the latter. The problem is in the terminal emulator, and with the fact that there's no standard way to represent key combos like Ctrl-Enter in the terminal. Also, each terminal emulator has (or hasn't) its own way of configuring key-bindings. In xterm, like in any xt-based app you can easily configure it with X11 resources. For instance, this will translate Ctrl-Enter to the escape corresponding to the F33 function key (according to infocmp): xterm -xrm '*VT100*translations: #override Ctrl<Key>Return:string("\033[20;5~")'Then you could bind that \e[20;5~ to whatever action you want in readline's ~/.inputrc, with bind in bash, with bindkey in zsh, etc. X11 resources are stored as the RESOURCE_MANAGER property of the root window and can be loaded there with the xrdb utility; usually, xrdb will be called from an x11 session initialization script to load the content of the ~/.Xresources file. KDE or Gnome applications like konsole and gnome-terminal have their own way of configuring key combos to actions; I don't know if that includes the ability to write arbitrary strings to the pseudo-tty master.
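Picking up that escape on the shell side could then look like this (a sketch; autosuggest-execute is the widget named in the question):
# ~/.inputrc (readline/bash): insert a marker so you can verify the sequence arrives
"\e[20;5~": "ctrl-enter"
# zsh: bind the same sequence to the desired widget
bindkey '^[[20;5~' autosuggest-execute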
I'm using Fedora 30 with KDE and am trying to bind (Zsh) autosuggest-execute to Ctrl+Enter for convenience. I'm trying to get it to work in gnome-terminal. However I discovered that showkey -a always returns ^M in these three cases: Enter, Ctrl+Enter, and Shift+Enter. I tried this method (Ctrl <Return> : "\033M" in .XCompose), but it didn't work at all as the XCompose file wasn't being read. So I decided to install ibus, as it is not shipped with my KDE install, with dnf groupinstall input-methods. Running ibus-setup gets me this warning now: GTK+ supports to output one char only: "\033M": ! Ctrl <Return> : "\033M" Unfortunately all Enter combinations still boil down to ^M in gnome-terminal as well as xterm. Is there a way to differentiate between those key combinations, with or without ibus?
Ctrl-Enter, Shift-Enter and Enter are interpreted as the same key
You can configure and use the compose key. Under the Gnome desktop: from the system settings choose keyboard >> keyboard layouts >> options, then select "position of compose key", expand it and choose a key (e.g. Caps Lock). To get an accented character press [compose key] + [accent character] + [letter], e.g.: Caps Lock + " + o = ö Caps Lock + " + u = ü Caps Lock + " + e = ë Caps Lock + " + i = ï Caps Lock + ~ + a = ã Edit: Caps Lock + " + Shift + o = Ö Under the KDE desktop: system settings >> input devices >> keyboard >> Advanced >> configure keyboard options >> compose key position, expand it, then make your choice.
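The same thing can be done from a terminal, which is handy if the desktop dialog is missing; a sketch using the standard XKB option:
setxkbmap -option compose:caps    # make Caps Lock the compose key for this session
# other common choices: compose:ralt, compose:menu, compose:rctrl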
I am Vietnamese, I use ibus to type Vietnamese on my Debian. I just started to learn Hungarian and I need to type some Hungarian characters which I can't type with ibus-unikey, like ë, ö, ő, ü or ű. Other characters like á, ó,... I can type with no problems as we have them in Vietnamese as well. I added Hungarian to my ibus but I still can't figure out how to type those characters. If you know how or if you know there are instructions somewhere, please let me know. I really appreciate. Thanks!
Type Hungarian special alphabets on Debian, ibus
If you need Zoom and don't want to repackage zoom.deb, one option is to let IBus be installed, but disable it at the user level (so that the default input manager is used) by having the line run_im none in the file .xinputrc. Edit: To fully get rid of ibus I ended up using the script from Grief's answer to repackage zoom's deb.
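On Debian/Ubuntu derivatives the same result can usually be reached through im-config, which writes that line for you (a sketch):
im-config -n none    # records "run_im none" in ~/.xinputrc
# log out and back in for the session to pick it up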
This is a variant of this question. But the provided answers either don't seem to work or would entail not being able to have Zoom (cf below). Situation:I run Kubuntu 21.04 with KDE Plasma 5.21.4 Zoom requires IBus. I have Zoom and I need it. IBus has by default an icon on the system tray in addition to the default keyboard selectorHow can I get rid of the IBus keyboard selector (the ugly leftmost one) in the system tray ? What I have tried:Uninstalling IBus means uninstalling Zoom ; not an option Start-up script to kill IBus (ibus exit) => no sys tray icon but keyboard doesn't work in certain apps Uncheck "Show icon in sys tray" in IBus Preferences => IBus still appears in system tray even after reboot even though box remains unchecked Do the same with dconf from the command line => IBus still ignores the config option Start-up script to restart IBus without panel (ibus-daemon -rd --panel=disable) => no sys tray icon and IBus is running (ibus-daemon says something about an existing instance) but the dead keys of my French keyboard ("circumflex + e" displays "e" instead of "ê") don't work anymore. Tried to hide the IBus sys tray icon in the System Tray Configuration Menu but the drop-down for display options is grayed out:
KDE: how to get rid of IBus sys tray icon and keep Zoom
Inspired by the Arch wiki, I added export QT_IM_MODULE=ibus to ~/.xprofile, which fixed it. I was initially thrown, because "normal" ibus input worked without this fix. In any case, I've edited the wiki to be a little clearer.
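For completeness, the trio of variables the Arch wiki usually recommends looks like this in ~/.xprofile (only the Qt one was strictly needed here):
export GTK_IM_MODULE=ibus
export QT_IM_MODULE=ibus
export XMODIFIERS=@im=ibus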
I'm trying to get ibus's popup selection to work. I followed the instructions in the Arch wiki, by installing the ibus and ibus-libpinyin packages. ibus appears to partially work. ibus-setup works fine, and I can select inputs. In my text editor, I can then switch between English and (say) Arabic. Input changes as expected. However, after switching to Chinese Pinyin, text just appears as normal, with no pop-up appearing. All packages are up-to-date. KDE Plasma 5.7.3-1 ibus 1.5.14-1 ibus-qt 1.3.3-6 ibus-libpinyin 1.7.92-1 libpinyin 1.5.92-1 (Previously posted on the Arch forum with no reply.) EDIT: I recently upgraded some of these packages, but I am still having this problem. KDE Plasma 5.7.4-2 libpinyin 1.6.0-1
How can I enable ibus's input popup?
UPDATE. I've found that the latest commit in the IBus source has the blacklist already implemented, and that all Latin American layouts are blacklisted by default. This affects the generation process, which is done with a Python script on build time, which in turn, sources the available X layouts from /usr/share/X11/xkb/rules/evdev.xml, as this comment clearly states. The exact commit on which this restriction was implemented is here. As for the reason why this was done, is honestly beyond me, and until this situation is properly addressed, the fix I propose below must be applied every time IBus is updated (as stated in this previous answer).I've faced the same problem in Xubuntu 22.04, and recently used a workaround that involves editing a whitelist. Even though it's been suggested that IBus 1.5.23 would include a blacklist, in place of the currently used whitelist, so that engines added would automatically appear as selectable layouts, it seems this feature is yet to be implemented (I have version 1.5.26 right now). What I did to make it work is as follows:Open the file /usr/share/ibus/component/simple.xml using sudo, and your editor of choice.Locate the xkb:es::spa engine. In my machine, it looks like this:<engine> <name>xkb:es::spa</name> <language>es</language> <license>GPL</license> <author>Peng Huang &lt;[emailprotected]&gt;</author> <layout>es</layout> <longname>Spanish</longname> <description>Spanish</description> <icon>ibus-keyboard</icon> <rank>50</rank> </engine>Once found, copy the <engine> tag and paste it beside it (as a sibling, on the same level), and change the following tag values:name, from xkb:es::spa to xkb:latam::spa. layout, from es to latam. longname, to any text of your choice so that you can distinguish it from other layouts.It should now look like this: <!-- I added this one. vvv --> <engine> <name>xkb:latam::spa</name> <language>es</language> <license>GPL</license> <author>logo_writer</author> <layout>latam</layout> <longname>Spanish Latam</longname> <description>Spanish Latam</description> <icon>ibus-keyboard</icon> <rank>50</rank> </engine> <!-- I added this one. ^^^ --><engine> <name>xkb:es::spa</name> <language>es</language> <license>GPL</license> <author>Peng Huang &lt;[emailprotected]&gt;</author> <layout>es</layout> <longname>Spanish</longname> <description>Spanish</description> <icon>ibus-keyboard</icon> <rank>50</rank> </engine>Once the new engine is added, save the file.Restart the IBus service, by issuing the command ibus restart.Once IBus restarts, type ibus list-engine and check that the new engine appears in the list.In my machine, I have the following configurations. The one I added is Spanish Latam. $ ibus list-engine | grep -A 7 Espa idioma: Español xkb:es:nodeadkeys:spa - Spanish (no dead keys) xkb:es:winkeys:spa - Spanish (Windows) xkb:es:dvorak:spa - Spanish (Dvorak) xkb:es:deadtilde:spa - Spanish (dead tilde) xkb:latam::spa - Spanish Latam xkb:es:mac:spa - Spanish (Macintosh) xkb:es::spa - SpanishUsing ibus-setup or ibus engine, set the layout to the one you previosuly created. At this point, it should work.I hope this works for you. :)
I am running Debian 11 Bullseye for AMD64 on an HP Pavilion Touch 14-N009LA laptop, using IBus and MATE as the desktop environment, having recently upgraded from Buster. Prior to the upgrade, I could use the Latin American keyboard layout with IBus; afterwards, I am no longer able to do so. The Keyboard Preferences app in MATE Control Center shows the Latin American Spanish layout, and I can manually set it with setxkbmap latam in a terminal (before IBus kicks in and replaces it), but in IBus I am only presented with the "Spanish" keyboard, which corresponds to the Spanish (Spain) keyboard that has different punctuation keys; there is no option for "Latin American" or anything similar. Running ibus list-engine gives me the following output, in which I can't see the Latin American Spanish layout, and there are no matches for latam or anything similar:
<irrelevant languages omitted>
language: Spanish
  xkb:es:nodeadkeys:spa - Spanish (no dead keys)
  xkb:es:sundeadkeys:spa - Spanish (Sun dead keys)
  xkb:es:winkeys:spa - Spanish (Windows)
  xkb:es:dvorak:spa - Spanish (Dvorak)
  xkb:es:deadtilde:spa - Spanish (dead tilde)
  xkb:es:mac:spa - Spanish (Macintosh)
  xkb:es::spa - Spanish
<irrelevant languages omitted>
So far I could only find a guide that seems to apply only to Ubuntu, and the Arch Linux guide for IBus. The former guide suggested that maybe I had to generate a Spanish locale for my system, which I did by uncommenting the es-MX locales in /etc/locale.gen and then running locale-gen. Afterwards, I rebooted my system. It didn't work. Any other ideas on how I could use the Latin American Spanish layout with IBus on Debian Bullseye?
Trying to add the Latin American Spanish keyboard layout in IBus on Debian Bullseye in MATE, but I only get Spanish (Spain)
While in the middle of posting this question, I found the answer, haha. I first entered the following into a terminal:
$ ibus engine xkb:us::eng
I then got the list of engines to find what I needed to change it to (output cropped for brevity):
$ ibus list-engine
language: Estonian
  xkb:ee::est - Estonian
language: Slovak
  xkb:sk:qwerty:slo - Slovak (qwerty)
  xkb:sk::slo - Slovak
language: Romanian
  xkb:ro::rum - Romanian
language: Japanese
  xkb:jp::jpn - Japanese
language: Japanese
  anthy - Anthy
I then selected the Anthy engine (which also has support for English input, so I don't need to keep swapping engines):
$ ibus engine anthy
Although there was no output for that command, the built-in keyboard shortcut (Ctrl + ,) now moves to the next input method (e.g. hiragana, katakana, English, etc.).
EDIT: I also found the way to swap between the engines (US to Anthy) using a keyboard shortcut. First, open the ibus settings:
$ ibus-setup
Selecting the Input Method tab, ensure the 'Customise active input methods' checkbox is ticked. Then, using the dropdown shown with the text 'Select an input method', find the Japanese Anthy input method. Then click the 'Add' button on the right-hand side of the screen. This will add 'Japanese - Anthy' to the list of input methods in IBus. Now, when you press Ctrl + Space, it will properly switch between the English input method engine and the Japanese Anthy input method engine. The commands to do so via the terminal still work; this just enables it to be done via a keyboard shortcut.
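If the shortcuts still go nowhere under spectrwm, it is worth confirming that the daemon and the input-method environment are actually present in that session. This is just a sanity-check sketch, assuming a startx/.xinitrc style login:

# Is ibus-daemon running in this X session?
pgrep -a ibus-daemon

# Do applications started from the WM see the input-method variables?
env | grep -E 'GTK_IM_MODULE|QT_IM_MODULE|XMODIFIERS'

# If either check fails, set things up in ~/.xinitrc before exec'ing spectrwm:
#   export GTK_IM_MODULE=ibus QT_IM_MODULE=ibus XMODIFIERS=@im=ibus
#   ibus-daemon -drx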
I want to get the IBus IME (Anthy engine for Japanese input) working in all my window managers. Unity is fine, along with Compiz and Metacity. But the one I really want to get it working with is spectrwm (a tiling window manager - i3 and xmonad are others). I tried running the ibus-daemon, but the keyboard shortcuts to change to a different input method don't work. I can't tell whether this is because IBus isn't working, or it requires the Gnome panel to function, or the keyboard shortcuts are just being stolen by the desktop manager and thus not passed to IBus. Where do I start in debugging this?
Getting Ibus working with tiling window manager
Tutorial
Hello guys, I'm here to address this common problem we have all faced at some point. The purpose of this tutorial is to solve this problem once and for all. The information is out there, but it is all over the place, and sometimes we feel confused by the number of different approaches we can find to resolve this problem. Here, I'll try to synthesize all the information, so we can use our dead keys and compose symbols in any application.
This problem usually appears when we use a Window Manager (WM) or when our Desktop Environment (DE) is not properly configured.
The Solutions
There are two ways to address this problem:

Disable the Input Method Engine (IME) and use X11 to compose keys - This approach only works for Latin-language characters.
Properly configure iBus or Fcitx - This approach works for every language.

Here in this tutorial I'll address both approaches and talk a bit about why you should consider them.
Disable the Input Method Engine (IME)
IMEs like iBus or Fcitx are complex engines built to compose characters for non-Latin languages (e.g. Japanese, Chinese, etc.). If you don't need to type in those languages, there is really no need to use either iBus or Fcitx, because X11 can handle composing Latin characters quite easily. Disable iBus completely and use the system X11 to compose and use your dead keys.
How to:
These steps are taken from Janek Bevendorff's Answer
You are going to need these environment variables:
export GTK_IM_MODULE=""
export QT_IM_MODULE=""
export XMODIFIERS=""
You can set these variables either system-wide in /etc/profile (or a dedicated file inside /etc/profile.d, respectively) or inside your local ~/.xprofile. Setting them in ~/.bashrc or ~/.profile will not ensure that the lines will be executed when logging into your system using a graphical login manager such as GDM, SDDM, KDM or LightDM. If you are starting your X session using XDM, Slim or startx, you need to put those lines in ~/.xinitrc.

If you configured an input method other than ibus, go to Gnome settings afterwards and make sure any ibus-related settings are disabled, especially any keyboard shortcuts. Alternatively, tell Gnome not to touch your keyboard settings using:

gsettings set org.gnome.settings-daemon.plugins.keyboard active false
After that, restart your computer and test your faulty applications.
Properly configure iBus or Fcitx
If you need a very complex Input Method Engine for your language, or if you do want the possibility of having it as an input method, you should follow these steps to properly configure your IME. Here you have a choice: you can go with either iBus or Fcitx5; start with whatever is installed in your Desktop Environment. For those who use a Window Manager with a minimal installation, check out the iBus Arch Wiki and Fcitx5 Arch Wiki to make a proper decision.
How to:
1. Input Method Configuration
First we need to set up the input method in our system to be either iBus or Fcitx5. In your terminal, type:
$ im-config
Then click OK, select YES (we wish to update the user configuration). In the next window, select the IME that you want (either ibus or fcitx), then OK and OK again. You'll be told that you need to restart your system to make the configuration active.
2. Restart your computer.
3. Configure iBus or Fcitx5
After your system restarts, configure your input methods through the GUI application:
iBus
$ ibus-setup
It'll prompt you to start ibus-daemon; click YES. (It is important that you don't have a script that autostarts ibus-daemon on your system at this moment, otherwise the new ibus-daemon with the recently configured settings won't start.)
Fcitx5
See Fcitx5 Configuration
After configuring, we need to make sure that the iBus daemon or fcitx daemon is running when our system starts. For Desktop Environments, the autostart usually works out of the box. For those of us who use a Window Manager, or if ibus-daemon isn't autostarting in your Desktop Environment, we need to create a script to start our IME with our session. So in your ~/.xprofile file you'll need these lines:
For iBus
export GTK_IM_MODULE="ibus"
export QT_IM_MODULE="ibus"
export XMODIFIERS="@im=ibus"
ibus-daemon -drxR
For Fcitx5
export GTK_IM_MODULE="fcitx"
export QT_IM_MODULE="fcitx"
export XMODIFIERS="@im=fcitx"
fcitx5 -d
For more information on Fcitx5 autostart: Fcitx5 Arch Wiki
4. Test your applications
Now test if the applications are working correctly.
5. Restart your computer
This last restart is to check that the IME autostarts after we configured it. If your applications are working fine after this last restart, you can rest easy and focus on coding :)
That's it guys! I hope after all this your system is working properly. If you guys have any questions, please comment here and I'll try to help. Cya!
*This post was originally made by me in the JetBrains issue tracker: Cannot type dead keys in Linux
My Window Manager/Desktop Environment has some applications where the dead keys (e.g. `, ~, ö) do not work by default. When I press them, they sometimes don't show up at all on key press, and sometimes they don't have the expected behavior ('`' + 'e' = è). They do work in some applications, though, like here on Firefox. How can I make them all work as expected in all applications?
How to make Dead Keys (Compose) Work with Window Managers?
I just had to move the AltGr part in between the Alphanumeric and Keypad mappings to make it work, in this way:
(input-method ar phonetic)
(map
 (arabic
  ("1" "١")
  ("2" "٢")
  .
  .
  .
  ((G-1) "۩")
  ((G-2) "ﷱ")
  .
  .
  .
  ((KP_0) "٠")
  .
  .
  .
 )
(state(init(arabic)))
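A side note under my own assumptions (not something from the answer itself): rather than editing /usr/share/m17n directly, the m17n library also reads per-user files from ~/.m17n.d, so a customized layout can survive package upgrades. The file name below is hypothetical; use whatever your layout file is actually called:

# keep the customized layout per-user instead of system-wide
mkdir -p ~/.m17n.d
cp /usr/share/m17n/ar-phonetic.mim ~/.m17n.d/    # hypothetical file name
# edit ~/.m17n.d/ar-phonetic.mim, then restart the daemon so ibus-m17n re-reads it
ibus restart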
I've added my Arabic layout in /usr/share/m17n and I have no problem with the Alphanumeric and Keypad mappings; they work fine. Here's what it looks like:
(input-method ar phonetic)
(map
 (arabic
  ("1" "١")
  ("2" "٢")
  .
  .
  .
  ((KP_0) "٠")
  .
  .
  .
  ;;((G-1) "١")
  ;;((G-2) "٢")
 )
(state(init(arabic)))
If I uncomment the last two lines of the map to enable the AltGr options, I get English instead of Arabic! I don't have an AltGr key, BUT I used Right Alt or Ctrl+Alt on Windows 10 for that.
AltGr isn't working with ibus-m17n in KDE Neon
You need ibus-anthy and ibus-unikey for those two languages, so:
yum install ibus-anthy ibus-unikey
To set IBus up, follow these Fedora 17 guides (it should be pretty much the same thing in Fedora 18): Japanese, Vietnamese.
I'm using Fedora 18. I chose ibus in the input method selector. In ibus, I opened Preferences to change the input method, but I only see the English-English method and I cannot choose an input method for another language (there is a dropdown, but every language is greyed out except English). So I don't know how to choose another language (in my case, Japanese and Vietnamese). I tried a command that I found by googling:
yum install scim-lang-japanese
but my terminal reports that this package does not exist. Please teach me how to add an input method for my language in ibus. Thanks :)
Fedora 18: install input method
No, you do not toggle between xkb and ibus. You switch fully to ibus - it will support both multi-key and single-key languages. I'm not sure how it is done on MX GNU/Linux, but in Debian derivatives there is always an "Input Method" section in the settings, where you do a complete switch to ibus, fcitx, xim, etc. (the option "none" there would be a switch back to xkb). Once you install the ibus package, you should have a tool called ibus-setup. It will allow you to set hotkeys to switch between layouts.
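If you want single keystrokes for this anyway, the ibus engine command can be scripted and bound to whatever hotkeys your desktop offers. This is only a sketch - the engine names are examples; take the real ones from ibus list-engine (e.g. mozc-jp instead of anthy):

#!/bin/sh
# Toggle between a plain xkb layout and a Japanese engine, all inside ibus.
current=$(ibus engine)
if [ "$current" = "anthy" ]; then
    ibus engine xkb:us::eng    # back to the plain US xkb layout
else
    ibus engine anthy          # switch to the Japanese engine
fi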
I am using MX GNU/Linux v19 (Patito feo). I have installed these three packages with apt:

ibus
ibus-mozc
ibus-anthy

I can confidently say that they got installed correctly, as I can see this in the Japanese section of ibus-preferences. I can see the ibus icon in my panel too! But I still can't input Japanese; it is still using my xkb keyboards. So in this case, how do I temporarily switch to ibus? I wanted to know if there are any commands for fully activating and fully deactivating ibus. I would need both of them, as I very often need to switch to my xkb layouts too. I am planning to add keyboard shortcuts for both of those commands, which I believe will make my life very easy :) GUI solutions are welcome, as they will be useful for other users having a similar issue. Thanks in advance.
How to toggle between ibus and xkb?
Found the very easy-to-read .MIM files in /usr/share/m17n/. These files have all the necessary information. Thanks to Mohan, a senior member of the Indian Linux Users Group - Chennai [ILUGC], for sharing this.
I'm using Devuan GNU/Linux [based on Debian 11 Bullseye]. I have Xfce as my DE and use ibus-m17n as my input tool. In Ubuntu, in the IBus settings dialog, I used to find the keyboard layout for the chosen language. The last version of Ubuntu that I used was 16.04 LTS. Ever since I shifted to Debian and eventually Devuan, I can't find that keyboard layout option. What I get is some sort of overlay when I hover the mouse over a particular input method. I have provided links to two such overlays below.
https://i.sstatic.net/8XAAr.png
https://i.sstatic.net/IMzoK.png
Questions

How can I get the full information shown in the overlays?
How can I see the keyboard layout for the chosen input method?

Kindly help.
Regards,
Vrajaraja Govinda Das.
Info about input method and keyboard layout
Finally, I have found a solution.
Solution
Run ibus-setup, then choose yes when prompted whether to start ibus. This way the wine application can also use ibus.
Tips
Previously, I used ibus-daemon & to start ibus, and wine couldn't use it. I'm not sure what trick ibus-setup did.
When working on Linux (Mint MATE 17.2), I need to kill the ibus daemon and restart it for some reason. After that, one of my editors, which is a wine application, can't use ibus input anymore, while other, non-wine applications can. Restarting the wine application or ibus again doesn't fix the problem. Restarting the machine fixes the issue, but that's not preferred. I'm wondering whether it's due to some kind of cache in wine or the wine application. So, any ideas? Thanks.
After restarting ibus, can't use it in wine applications
The answer for me was to install ibus, ibus-anthy and anthy again, reboot, and update my openSUSE. After doing this, it worked. I hope this helps anyone who stumbles across this weird bug in the future.
I want to be able to write kanji on my openSUSE system, so I followed this tutorial: http://www.localizingjapan.com/blog/2013/11/20/japanese-input-on-opensuse-linux-13-1-kde/ It does not work: the ibus tray is not shown and it doesn't react to the key combination which should switch the input method. (Yes, I added the input method in the ibus settings and I enabled the show-tray option.) I read somewhere else that ibus is not needed for the GNOME desktop; one should be able to add input methods via System -> Region & Language. This does not open for me anymore, though. Even after uninstalling ibus and rebooting multiple times, I cannot open System -> Region & Language (I used to be able to). Does anyone know what might be the cause of this? Johann
openSUSE 13.1 GNOME desktop Region & Language won't open after installing ibus
Based on the comments of @slm, the answer is no. You can go to https://code.google.com/p/ibus/ to ask a question.
Has this project become abandoned? As Bcooksley stated in this post, the authors have stopped developing IBus, and this is the reason why Kubuntu removed the Language option from System Settings. I want confirmation: is it dead? Or is it just not suitable for Kubuntu?
Has IBus stopped being developed?
The input method (IM) used is actually set in ~/.xinputrc. Run the command im-config to choose your input method. Or simply add run_im ibus manually inside your ~/.xinputrc: that is what im-config does. Of course, you need to restart X.
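To see what is actually in effect right now, a couple of read-only checks help. This is a sketch; the paths are the ones im-config uses on Debian-style systems, so treat them as an assumption elsewhere:

# Which input method did im-config record for this user (falling back to the system default)?
cat ~/.xinputrc 2>/dev/null || cat /etc/X11/xinit/xinputrc

# Which input-method variables do newly started applications see?
env | grep -E 'GTK_IM_MODULE|QT_IM_MODULE|XMODIFIERS'

If the second command prints nothing after restarting X, the IM framework was probably never activated for the session.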
I can't find anything clear about how Linux handles the keyboard (system-based configuration, not GUI). My problem is: I installed ibus on Linux Mint with several languages, but it just doesn't work, despite running ibus-setup, ibus-daemon -rx, and so on. Maybe my system is using an input method other than ibus? Is there a command to find out which input method my computer is actually using?
Switch to another input method
Have a look here. It is Mac-oriented. It says to try nv, and says "V is the standard convention to represent ü in pinyin input systems...". It refers to Wikipedia, which says: "Since the letter "v" is unused in Mandarin pinyin, it is universally used as an alias for ü. For example, typing "nv" into the input method would bring up the candidate list for pinyin: nǚ."
I'm using IME/iBus to type Chinese characters using Pinyin (Intelligent Pinyin 1.6.92 in Debian Jessie with Gnome 3.14.1). Now, I'm trying to type the word 旅行 (lüxing = travel), but all I get is 路性 (luxing), because it doesn't seem to recognize the two dots over the u, which makes ü. Is this a bug? Or am I missing something?
How to type ü in Pinyin IME?
I could fix this issue by editing the file /etc/environment. I added the following lines:
QT_IM_MODULE=ibus
XMODIFIERS=@im=ibus
So the file's content is now the following:
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
# Custom: Add Searchable command palette to any GTK3 Application.
GTK3_MODULES=$GTK3_MODULES:/usr/lib/x86_64-linux-gnu/libplotinus/libplotinus.so
QT_IM_MODULE=ibus
XMODIFIERS=@im=ibus
I am trying to get IBus input methods working on my Debian testing (bookworm) under GNOME 43 - for example, pinyin. I installed it with
sudo apt install ibus-pinyin
When I switch to Chinese now, Chinese input works only in GTK apps, but not, for example, in Firefox or in Electron-based apps like Atom or Signal. What could be the cause of this problem, and how can I fix it? Is this a Debian bug or a problem specific to my configuration?
IBus input method is working in GTK apps only on Debian testing (bookworm)
Actually, the problem was with KDE; it seems to have a hard time with IBus. IBus usually prefers GTK file finders, whereas in KDE the file finder package is Qt by default, so IBus gets itself stuck in a collision, and eventually the problem has no apparent solution. Even if you log out after installation, you may see that IBus is not working. Most of the time, after installing a new input method through IBus in KDE, you are advised to shut down for things to take effect.
So for stability reasons, I had to switch back to MX Linux, in its KDE version, since I am a KDE lover. But this time, something very unusual happened. I installed ibus-avro properly, added it in ibus-preferences, added the required lines to ~/.bashrc and made ibus-daemon run automatically upon booting. ibus-daemon is running in the background and I can select Avro phonetic, but whatever I type comes out in plain English, not in Bengali. For what it's worth, Bengali fonts are visible in Chrome and in docx files. But I don't know why ibus-daemon is having trouble typing Bengali, despite Avro phonetic being selected as the input method. Can anyone help me with this issue?
Installed ibus-avro and open-keyboard; ibus-daemon is running in the background but unable to type in Bengali in MX Linux KDE
With the IM environment variables set in my .xinitrc:
export GTK_IM_MODULE=ibus
export XMODIFIERS=@im=ibus
export QT_IM_MODULE=ibus
it did work after a restart. I thought I had already logged out and in, but perhaps not; that should probably be sufficient.
I'm not sure if this is a bug, or an error of expectation or configuration - but when I switch language with IBus (with the m17n engine), it appears not to affect the open window. However, if I open a new one, or close and reopen the existing application, the new language selection takes effect. My intended use case is to occasionally use a differently-scripted language that I'm learning while using Firefox (e.g. to search for a word on Wiktionary); I'd rather not have to use a separate instance of Firefox for it. I have verified that the same happens with my terminal emulator, so it's not just Firefox. I haven't been able to find documentation describing the behaviour as specifically supporting or not supporting this. Is it supposed to be this way, or can I configure it otherwise?
Change IBus language in current window
I never had to add those export ... lines to .bashrc over the past 2+ months on Linux. For libreoffice-still, I have to add these two lines:
export XMODIFIERS=@im=ibus
export QT_IM_MODULE=ibus
to ~/.bashrc, and for calligra, I have to add all three of these:
export GTK_IM_MODULE=ibus
export XMODIFIERS=@im=ibus
export QT_IM_MODULE=ibus
to ~/.bashrc on the KDE Plasma Desktop.
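One placement note, as a hedged suggestion rather than a requirement of ibus-m17n: ~/.bashrc is only read by interactive shells, so applications launched from the Plasma menu can miss these variables. Putting the same three lines in ~/.xprofile (read by most display managers at graphical login; this is an assumption about your login setup) usually makes them visible to every GUI application:

# ~/.xprofile - picked up at graphical login on most display managers
export GTK_IM_MODULE=ibus
export QT_IM_MODULE=ibus
export XMODIFIERS=@im=ibus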
I've noticed it on the desktop machine on which I installed Arch Linux yesterday. It used to work on the laptop until I ran pacman -Syu today! As far as I've seen, it never worked in libreoffice-fresh, calligra or wps. It still works with Firefox, Atom and VS Code on both the laptop and the desktop. How can I fix this issue?
ibus-m17n input method no longer works on libreoffice-still
If I need to know what it is - say, Linux/Unix and 32/64-bit - then:
uname -a
would give me almost all the information I need. If I further need to know which release it is (say CentOS 5.4, 5.5 or 5.6) on a Linux box, I would check the file /etc/issue to see its release info (or, for Debian/Ubuntu, /etc/lsb-release). An alternative way is to use the lsb_release utility:
lsb_release -a
Or do rpm -qa | grep centos-release (or redhat-release) for RHEL-derived systems.
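Putting those checks together, here is a rough "what am I on?" sketch; /etc/os-release is my own addition for newer systems, the rest are just the commands above:

#!/bin/sh
# Quick survey of an unfamiliar box.
uname -a                                  # kernel version and architecture (32/64-bit)
cat /etc/os-release 2>/dev/null           # present on most modern distributions
lsb_release -a 2>/dev/null                # if the LSB tools are installed
cat /etc/issue 2>/dev/null                # old-style release banner
rpm -qa 2>/dev/null | grep -E 'centos-release|redhat-release'   # RHEL family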
Oftentimes I will ssh into a new client's box to make changes to their website configuration without knowing much about the server configuration. I have seen a few ways to get information about the system you're using, but are there some standard commands to tell me what version of Unix/Linux I'm on and basic system information (like whether it is a 64-bit system or not), and that sort of thing? Basically, if you just logged into a box and didn't know anything about it, what things would you check out and what commands would you use to do it?
How can I tell what version of Linux I'm using?
GNU Info was designed to offer documentation that was comprehensive, hyperlinked, and possible to output to multiple formats. Man pages were available, and they were great at providing printed output. However, they were designed such that each man page had a reasonably small set of content. A man page might have the discussion of a single C function such as printf(3), or would describe the ls(1) command. That breaks down when you get into larger systems. How would you fit the documentation for Emacs into man pages? An example of the problem is the Perl man page, which lists 174 separate man pages you can read to get information. How do you browse through that, or do a search to find out what && means?
As an improvement over man pages, Info gave us:

The ability to have a single document for a large system, which contains all the information about that system (versus 174 man pages).
The ability to do full-text search across the entire document (v. man -k, which only checks keywords).
Hyperlinks to different parts of the same or different documents (v. the See Also section, which was made into hyperlinks by some, but not all, man page viewers).
An index for the document, which could be browsed, or you could hit "i" and type in a term and it would search the index and take you to the right place (v. nothing).
Linear document browsing across concepts, allowing you to read the previous and next sections if you want to, either by mouse or keystroke (v. nothing).

Is it still relevant? Nowadays most people would say "This documentation doesn't belong in a man page" and would put it in a PDF or put it up in HTML. In fact, the help systems on several OSes are based on HTML. However, when GNU Info was created (1986), HTML didn't exist yet. Nowadays Texinfo allows you to create PDF, Info, or other formats, so you can use those formats if you want.
That's why GNU Info was invented.
I understand what GNU Info is and how to use it, but what is it for? Why does it exist in parallel to the man pages? Why not write detailed man pages rather than provide a separate utility?
What is GNU Info for?
help is a bash builtin command. It uses internal bash structures to store and retrieve information about bash commands. man is a macro set for the troff (via groff) processor; the output of processing a single file is sent to a pager by the man command by default. info is a text-only viewer for archives in the Info format output by Texinfo.
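A quick way to see the difference in practice (just a sketch):

type help     # reports "help is a shell builtin" in bash
help cd       # documentation stored inside bash itself

man 1 ls      # a troff/groff-formatted page, shown through your pager
info ls       # the Texinfo document (GNU info falls back to the man page if none exists)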
I know that these commands help to get the syntax and options of commands, but my question is: how do they differ from each other?
Difference between help, info and man command
To answer your question with at least a hint of factual background, I propose to start by looking at the timeline of creation of man, info and other documentation systems.
The first man page was written in 1971 using troff (nroff was not around yet), at a time when working on a CRT-based terminal was not common and printing of manual pages was the norm. The man pages use a simple linear structure. The man pages normally give a quick overview of a command, including its command-line options/switches.
The info command actually processes the output from the Texinfo typesetting syntax. This had its initial release in February 1986, a time when working on a text-based CRT was the norm for Unix users, but graphical workstations were still exclusive. The .info output from Texinfo provides basic navigation of text documents, and from the outset had the different goal of providing complete documentation (for the GNU Project). Things like the use of the command and the command-line switches are only a small part of what a Texinfo file for a program contains. Although there is overlap, the (Tex)info system was designed to complement the man pages, and not to replace them.
HTML and web browsers came into existence in the early 90s and relatively quickly replaced text-based information systems based on WAIS and gopher. Web browsers utilised the by then available graphical systems, which allow for more information (like underlined text for a hyperlink) than text-only systems allow. As the functionality info provides can be emulated in HTML and a web browser (possibly after conversion), the browser-based systems allow for greater ease of navigation (or at least less experience/learning). HTML was expanded and could do more things than Texinfo can. So for new projects (other than GNU software) a whole range of documentation systems has evolved (and is still evolving), most of them generating HTML pages. A recent trend for these is to make their input (i.e. what the human documenter has to provide) human-readable, whereas Texinfo (and troff) is more geared to efficient processing by the programs that transform them.¹
info was not intended to be a replacement for the man pages, but it might have replaced them if the GNU software had included an info2man-like program to generate the man pages from a (subset of a larger) Texinfo file. Combine that with the fact that fully utilising the facilities that a system like Texinfo, (La)TeX, troff, HTML (+CSS) and reStructuredText provide takes time to learn, and that some of those are arguably easier to learn and/or more powerful, and there is little chance of market dominance for (Tex)info.
¹ E.g. reStructuredText, which can also be used to write man pages
As per my knowledge/understanding, both help and man came out at the same time, or with very little time between them. Then GNU Info came along and, from what I have seen, is much more verbose, much more detailed and arguably much better than man. Many entries in man are cryptic even today. I have often wondered why Info, which is superior to man in many ways, didn't succeed man at all. I still see people producing man pages rather than info pages. Was it due to a lack of helpful tools for info? Something in the licenses of the two? Or some other factor which didn't get info the success it richly deserved? I did see a few questions on Unix StackExchange, notably What is GNU Info for? and Difference between help, info and man command, among others.
Why didn't GNU Info succeed man?
Yes, info has support for pretty much any key binding scheme you like; see http://www.gnu.org/software/texinfo/manual/info-stnd/html_node/Custom-Key-Bindings.html and note in particular the --vi-keys startup option for Info.
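For example (a sketch), you can try it once and then make it your default with an alias:

# vi-style movement for one session:
info --vi-keys printf

# make it the default for future shells:
echo "alias info='info --vi-keys'" >> ~/.bashrc

The linked manual also describes a custom key-bindings file if you want finer-grained control.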
The layout of my netbook's keyboard means that using the arrow keys for navigation is slightly uncomfortable. Is there a way to make GNU Info pages use vim-style hjkl navigation? I know I can do
info printf | less
...and use j and k to scroll up and down, which is good enough since I use info pages for reading, so navigating to specific characters isn't vital; but it would be nice if I could do this within info, rather than resorting to a pipe.
Can I get vim-style (hjkl) navigation for GNU info?
Posting as an answer, as requested. Just don't use info to browse info pages. There is a standalone info browser named pinfo, and Emacs has, of course, its own Info Mode. If you're using Vim you can also install the ref and ref-info plugins. ref is essentially a generic hypertext browser. It comes with plugins for a number of sources, such as man pages, perldoc, pydoc, etc., but not for info. ref-info is a plugin for ref that adds capability to browse info pages. The combination ref+ref-info makes a decent info browser, with the only drawback that it can only search through the page it currently displays. A partial workaround for this problem is to tell the info backend to produce larger chunks before feeding them to ref-info, by adding this line to your vimrc: let g:ref_info_cmd = 'info --subnodes -o -'You'd then browse info pages like this: :Ref info <page>Of course, you can also use ref with the other sources (:Ref man <page> etc.). Read the manual for more information.
Using the ↑ and ↓ arrow keys to scroll up and down the page causes the info page viewer to unexpectedly jump to another node, which is really disorienting. How can I scroll down through the page and just have the info viewer/pager stop when it gets to the top or the bottom, and then require a separate command to jump to a different node?
How to scroll GNU info pages without unexpectedly jumping to the next node?
Ah, info brings along the texi2ps and texi2pdf programs. So if you find the Info source (info.texi), you can generate a beautiful (or bloated, depending on your point of view) PDF using:
texi2pdf info.texi
If man -t ls | ps2pdf - > ls.pdf is useful for outputting the ls man page via ps2pdf to pdf, what about info pages? I've tried something like the following but with no success: info -o info | ps2pdf - > info.pdfAll this does is output a blank pdf file called info.pdf and output the body into a text file.
How do you output an info page to pdf?
On Debian and its derivatives like Ubuntu, the info pages are not installed unless you install the corresponding package-doc package for a given package. So in your case:
apt-get install tar-doc
A notable exception (though that may only apply to Debian and not Ubuntu) is bash-doc. The Texinfo bash documentation is not considered free software by Debian, as you're not free to modify it (you have to notify the bash maintainers if you want to distribute a modified version of it, which is against Debian policy). There's a similar case for texinfo-doc, though in that case there is a texinfo-doc-nonfree package.
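To check whether such a -doc package exists for some other tool before hunting manually, apt can be searched by package name (a sketch):

# Is there a separate documentation package for tar?
apt-cache search --names-only '^tar-doc'

# Install it, and the Texinfo manual becomes available:
sudo apt-get install tar-doc
info tar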
On my computer (Ubuntu 12.04), some info pages are missing, for example for tar. When I enter info tar, it opens the tar man page instead of the tar Info manual. So how can I install these pages on my system?
Some info pages missing
pinfo was designed to emulate the behavior of the lynx web browser and make browsing info pages easier. Its interface and formatting abilities are somewhat more advanced than those of the original info, and it also supports viewing man pages, including colorizing them. It has a little bit more understanding of the content it is viewing, and can extract and follow URLs. It has considerably more key bindings, which are also user-configurable. See the pinfo documentation.
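Typical invocations look like this (a sketch; see pinfo(1) for the full option list):

pinfo tar      # browse the tar Info manual, lynx-style
pinfo -m ls    # view the ls man page through pinfo, colorized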
What's the difference between info and pinfo besides color? pinfo is:

a program for viewing info files

while info is:

Read documentation in Info format

I tried to search the web for the difference between these two commands, but found no useful information.
Difference between `info` and `pinfo`
There's a list on Wikipedia, which includes the following:

info
pinfo
tkman
tkinfo (linked page also has a list of info viewers)
khelpcenter
emacs

khelpcenter relies on info2html, which could also be used to enable reading info files with any browser. However, the converted pages lack tons of useful features, like search and access to the index; even if, like me, you find the info implementation of those features lacking, they are still better than nothing.
I enjoy having a choice of $PAGER, e.g.

more
less
most
...

Can I enjoy the same choice when reading Info documentation (i.e. info tar)? What are my options?
Alternative Info reader