How to keep dir-local variables when switching major modes?

I'm committing to a project where standard indentations and tabs are 3 chars wide, and it uses a mix of HTML, PHP, and JavaScript. Since I use Emacs for everything, and only want the 3-char indentation for this project, I set up a ".dir-locals.el" file at the root of the project to apply to all files/all modes under it:

```
;; Match project's default indent of 3 spaces per level, and don't add tabs
((nil . ((tab-width . 3)
         (c-basic-offset . 3)
         (indent-tabs-mode . nil))))
```

Which works fine when I first open a file. The problem happens when switching major modes, for example to work on a chunk of literal HTML inside of a PHP file. Then I lose all the dir-local variables.

I've also tried explicitly stating all of the modes I use in ".dir-locals.el", and adding "dir-locals-set-class-variables / dir-locals-set-directory-class" to my .emacs file. I'm glad to say they all behave consistently: initially setting the dir-local variables, and then losing them as I switch the major mode.

I'm using GNU Emacs 24.3.1.

What's an elegant way of reloading dir-local variables upon switching a buffer's major mode?

-- edit --

Thanks for the excellent answers and commentary, both Aaron and phils! After posting here, I thought it "smelled" like a bug, so I entered a report with GNU -- will send them a reference to these discussions.
As per comments to Aaron Miller's answer, here is an overview of what happens when a mode function is called (with an explanation of derived modes); how calling a mode manually differs from Emacs calling it automatically; and where `after-change-major-mode-hook` and `hack-local-variables` fit into this, in the context of the following suggested code:

```
(add-hook 'after-change-major-mode-hook 'hack-local-variables)
```

After visiting a file, Emacs calls `normal-mode`, which "establishes the proper major mode and buffer-local variable bindings" for the buffer. It does this by first calling `set-auto-mode`, and immediately afterwards calling `hack-local-variables`, which determines all the directory-local and file-local variables for the buffer, and sets their values accordingly.

For details of how `set-auto-mode` chooses the mode to call, see `C-h i g (elisp) Auto Major Mode RET`. It actually involves some early local-variable interaction (it needs to check for a `mode` variable, so there's a specific look-up for that which happens before the mode is set), but the 'proper' local variable processing happens afterwards.

When the selected mode function is actually called, there's a clever sequence of events which is worth detailing. This requires us to understand a little about "derived modes" and "delayed mode hooks"...

## Derived modes, and mode hooks

The majority of major modes are defined with the macro `define-derived-mode`. (Of course there's nothing *stopping* you from simply writing `(defun foo-mode ...)` and doing whatever you want; but if you want to ensure that your major mode plays nicely with the rest of Emacs, you'll use the standard macros.)

When you define a derived mode, you must specify the parent mode which it derives *from*. If the mode has no logical parent, you still use this macro to define it (in order to get all the standard benefits), and you simply specify `nil` for the parent. Alternatively you could specify `fundamental-mode` as the parent, as the effect is much the same as for `nil`, as we shall see momentarily.

`define-derived-mode` then defines the mode function for you using a standard template, and the very first thing that happens when the mode function is called is:

```
(delay-mode-hooks
  (PARENT-MODE)
  ,@body
  ...)
```

or if no parent is set:

```
(delay-mode-hooks
  (kill-all-local-variables)
  ,@body
  ...)
```

As `fundamental-mode` itself calls `(kill-all-local-variables)` and then immediately returns when called in this situation, the effect of specifying it as the parent is equivalent to if the parent were `nil`.

Note that `kill-all-local-variables` runs `change-major-mode-hook` before doing anything else, so that will be the first hook which is run during this whole sequence (and it happens while the previous major mode is still active, before any of the code for the new mode has been evaluated).

So that's the first thing that happens. The very *last* thing that the mode function does is to call `(run-mode-hooks MODE-HOOK)` for its own `MODE-HOOK` variable (this variable name is literally the mode function's symbol name with a `-hook` suffix).

So if we consider a mode named `child-mode` which is derived from `parent-mode` which is derived from `grandparent-mode`, the whole chain of events when we call `(child-mode)` looks something like this:

```
(delay-mode-hooks
  (delay-mode-hooks
    (delay-mode-hooks
      (kill-all-local-variables) ;; runs change-major-mode-hook
      ,@grandparent-body)
    (run-mode-hooks 'grandparent-mode-hook)
    ,@parent-body)
  (run-mode-hooks 'parent-mode-hook)
  ,@child-body)
(run-mode-hooks 'child-mode-hook)
```

What does `delay-mode-hooks` do? It simply binds the variable `delay-mode-hooks`, which is checked by `run-mode-hooks`. When this variable is non-`nil`, `run-mode-hooks` just pushes its argument onto a list of hooks to be run at some future time, and returns immediately. Only when `delay-mode-hooks` is `nil` will `run-mode-hooks` *actually* run the hooks. In the above example, this is not until `(run-mode-hooks 'child-mode-hook)` is called.

For the general case of `(run-mode-hooks HOOKS)`, the following hooks run in sequence:

- `change-major-mode-after-body-hook`
- `delayed-mode-hooks` (in the sequence in which they would otherwise have run)
- `HOOKS` (being the argument to `run-mode-hooks`)
- `after-change-major-mode-hook`

So when we call `(child-mode)`, the full sequence is:

```
(run-hooks 'change-major-mode-hook) ;; actually the first thing done by
(kill-all-local-variables)          ;; <-- this function
,@grandparent-body
,@parent-body
,@child-body
(run-hooks 'change-major-mode-after-body-hook)
(run-hooks 'grandparent-mode-hook)
(run-hooks 'parent-mode-hook)
(run-hooks 'child-mode-hook)
(run-hooks 'after-change-major-mode-hook)
```

## Back to local variables...

Which brings us back to `after-change-major-mode-hook` and using it to call `hack-local-variables`:

```
(add-hook 'after-change-major-mode-hook 'hack-local-variables)
```

We can now see clearly that if we do this, there are two possible sequences of note:

1. We manually change to `foo-mode`:

   ```
   (foo-mode)
   => (kill-all-local-variables)
   => [...]
   => (run-hooks 'after-change-major-mode-hook)
   => (hack-local-variables)
   ```

2. We visit a file for which `foo-mode` is the automatic choice:

   ```
   (normal-mode)
   => (set-auto-mode)
   => (foo-mode)
   => (kill-all-local-variables)
   => [...]
   => (run-hooks 'after-change-major-mode-hook)
   => (hack-local-variables)
   => (hack-local-variables)
   ```

Is it a problem that `hack-local-variables` runs twice? Maybe, maybe not. At minimum it's slightly inefficient, but that's probably not a significant concern for most people. For me, the main thing is that I wouldn't want to rely upon this arrangement *always* being fine in all situations, as it's certainly not the expected behaviour. (Personally I *do* actually cause this to happen in certain specific cases, and it works just fine; but of course those cases are easily tested -- whereas doing this as standard means that all cases are affected, and testing is impractical.)

So I would propose a small tweak to the technique, so that our additional call to the function does not happen if `normal-mode` is executing:

```
(defvar my-hack-local-variables-after-major-mode-change t
  "Whether to process local variables after a major mode change.
Disabled by advice if the mode change is triggered by `normal-mode',
as local variables are processed automatically in that instance.")

(defadvice normal-mode (around my-do-not-hack-local-variables-twice)
  "Prevents `after-change-major-mode-hook' from processing local variables.
See `my-after-change-major-mode-hack-local-variables'."
  (let ((my-hack-local-variables-after-major-mode-change nil))
    ad-do-it))

(ad-activate 'normal-mode)

(add-hook 'after-change-major-mode-hook
          'my-after-change-major-mode-hack-local-variables)

(defun my-after-change-major-mode-hack-local-variables ()
  "Callback function for `after-change-major-mode-hook'."
  (when my-hack-local-variables-after-major-mode-change
    (hack-local-variables)))
```

Disadvantages to this? The major one is that you can no longer change the mode of a buffer which sets its major mode using a local variable. Or rather, it will be changed back immediately as a result of the local variable processing. That's not impossible to overcome, but I'm going to call it out of scope for the moment :)
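For reference, a sketch of the same advice using the newer `advice-add` mechanism. This is an untested assumption on my part, and note it requires Emacs 24.4 or later, so it does not apply to the asker's 24.3:

```
;; Hedged sketch only: the defadvice version above is the one actually tested.
(defun my-normal-mode-advice (orig-fun &rest args)
  "Disable the extra `hack-local-variables' call while `normal-mode' runs."
  (let ((my-hack-local-variables-after-major-mode-change nil))
    (apply orig-fun args)))

(advice-add 'normal-mode :around #'my-normal-mode-advice)
```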
Switch to other instance of same application

I want my C# WinForms application to switch to another running instance if a certain event occurs. For example, suppose I have an application with just a button, and three instances are running at the moment. Now if I:

1. press the button in the first instance, focus moves to the second instance
2. press the button in the second instance, focus moves to the third instance
3. press the button in the third instance, focus moves to the first instance

How do I do that?
If you know the handle of the other instances, you can just call the Windows API **SetForegroundWindow**:

```
[DllImport("user32.dll")]
[return: MarshalAs(UnmanagedType.Bool)]
static extern bool SetForegroundWindow(IntPtr hWnd);

[DllImport("user32.dll", SetLastError = true)]
static extern IntPtr FindWindow(string lpClassName, string lpWindowName);
```

You can use the **FindWindow** API call to get the handle of the other instances, for example:

```
// Note: FindWindow returns an IntPtr, not an int
public static IntPtr FindWindow(string windowName)
{
    IntPtr hWnd = FindWindow(null, windowName);
    return hWnd;
}
```

You can search for those API calls here on SO for more examples; for example, I found this one: [How do I focus a foreign window?](https://stackoverflow.com/questions/444430/how-do-i-focus-a-foreign-window)
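For the question's exact scenario (cycling focus between instances of the same executable), here is a hedged sketch that skips window titles entirely and locates the instances by process name instead. It assumes each instance has a main window, and reuses the `SetForegroundWindow` declaration above:

```
using System.Diagnostics;
using System.Linq;

// Sketch: focus the "next" running instance of this same application.
static void FocusNextInstance()
{
    var current = Process.GetCurrentProcess();

    // All running instances of this executable, in a stable order
    var instances = Process.GetProcessesByName(current.ProcessName)
                           .OrderBy(p => p.Id)
                           .ToList();

    int index = instances.FindIndex(p => p.Id == current.Id);
    var next = instances[(index + 1) % instances.Count]; // wrap around

    SetForegroundWindow(next.MainWindowHandle);
}
```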
Why Are Parentheses Required on C# Static Constructors?

Consider:

```
class Foo
{
    static Foo()
    {
        // Static initialisation
    }
}
```

Why are the `()` required in `static Foo() {...}`? The static constructor must always be parameterless, so why bother? Are they necessary to avoid some parser ambiguity, or is it just to maintain consistency with regular parameterless constructors?

Since it looks so much like an initialiser block, I often find myself leaving them out by accident and then have to think for a few seconds about what is wrong. It would be nice if they could be elided in the same way.
I get this sort of question frequently; that is, the question "the compiler could work out that this thing is missing, so why is it required?" Here's another example of this sort of question: [C# using consts in static classes](https://stackoverflow.com/questions/2631975/c-using-consts-in-static-classes/2632844#2632844)

As I noted in that question, basically we have three choices in that situation: make the redundant text required, make it optional, or make it illegal. Each has its own downside.

The downside of making it required is that you end up with an unnecessary redundancy in the language.

The downside of making it optional is that you confuse people who think there must be a difference between the two forms. Also, you make it harder for the error-recovering parser to do its work; it thrives on redundancy. And you potentially make it harder to add new language features in the future, because more "syntactic area" is already claimed.

The downside of making it illegal is that you then create a "gotcha", where the user has to remember that oh, yeah, I'm supposed to put parens here, but not here.

The proposed feature had better have an upside that pays for the downside. The smallest downside seems to me to be the first: make it required. For the other options I would want an upside that justifies the downside, and I'm not seeing one here.
How do I add / insert a before or after pseudo element into Chrome's Inspector?

I can add a regular style rule via the + sign (New Style Rule), but I can't add one under the "Pseudo ::before Element" or "Pseudo ::after Element" sections of the Style Inspector. If I try to add the `::before` or `::after` element into the HTML via "Edit as HTML", it comes out as text.

My workaround is to add `<span class="pseudo_before"></span>` and then style that. Am I missing something?
This is the easiest way and the way I do it:

1. Inspect the element you want to add the ::before or ::after to by right-clicking it and going to "Inspect Element".

2. Now in the Developer Tools console, click on the plus sign icon, aka "New Style Rule". See the image below; the plus sign is next to the "Toggle Element State" button.

   ![enter image description here](https://i.stack.imgur.com/6iru6.png)

3. Next, you will be able to edit the selector, so add ::before / ::after to it:

   ![enter image description here](https://i.stack.imgur.com/Et5TE.png)

4. Now edit the content to whatever you like, like so:

   ```
   .grp-row::before {
       content: '> ';
   }
   ```

That's all there is to it :)
RGBa border color & element

I ran into an issue while working on a web design: trying to apply a semi-transparent (RGBa) border color to elements doesn't seem to work properly. You get a non-transparent border instead. Here's a CSS sample:

```
header > div form {
    width: 229px;
    background: url('img/connexion.png') no-repeat;
    position: absolute;
    top: 0px;
    right: 0px;
    text-align: center;
}

header > div form > p:first-child {
    color: #1B2E83;
    font-size: 16px;
    font-weight: bold;
    margin-top: 31px;
}

header > div form input[type=email],
header > div form input[type=text],
header > div form input[type=password] {
    width: 140px;
    height: 20px;
    border: 2px solid rgba(0,0,0,0.14);
}
```

Expected behavior: a gray, transparent border. I tried it on another element on the same page and it works perfectly.

Actual behavior: a gray border. That is all. The RGBa value does seem to be partially interpreted: the color given is black and the result is gray, it just ain't transparent at all though.

Tested on: Firefox 8.0, Chrome 16.0.912.63. Since it happens on both WebKit & Gecko, maybe there's something I'm doing wrong... I tried removing position: absolute on the container, and removing the background image (which is a PNG with transparency)... nothing changed.
The problem appears to be that an `input` element is a replaced element (in that it's supplied/rendered by the underlying OS, not the browser itself; though I don't know why the OS can't handle the `rgba()` color properly).

It's not an ideal solution, but wrapping the `input` elements in another element, and styling the borders of the wrapping element, works:

```
<form method="post">
    <p>Espace connexion</p>
    <div>
        <input type="email" name="mail" placeholder="Votre adresse e-mail" required="required" value="" />
    </div>
    <div>
        <input type="password" name="password" placeholder="Votre mot de passe" required="required" pattern=".{4,}" title="4 caractères minimum" />
    </div>
    <input type="submit" value="OK" />
</form>
```

With the CSS:

```
form div,
div#test {
    width: 140px;
    height: 20px;
    border: 20px solid rgba(255,0,0,0.5);
}

form div input {
    width: 100%;            /* to stop the inputs being wider than the containing div */
    box-sizing: border-box; /* to include the border as part of the width */
}
```

[Updated JS Fiddle](http://jsfiddle.net/davidThomas/VT4ye/11/).
How to implement NAT loopback/reflection?

I'm trying to access a server on my LAN via its public IP address. External clients can connect just fine, but I'm unable to do so from within the LAN. There *is* a separate rule in the NAT settings of my gateway (which translates subnet addresses to the public interface IP, but is otherwise identical), but it doesn't appear to be working properly.

How would I correctly set things up to access a local resource via a public IP address?
The most common problem is that your gateway rewrites the destination address of the packet to the internal server, but not the source. So, when the internal server responds, it sees that the packet came from something on the local network and sends the packet back directly -- and the client can't tell this is from the server, because the packet still has the internal, not the public, address on it.

The standard fix is to force the traffic to come back through your gateway. One way to achieve that is to put the server that the public address redirects to in a "DMZ", so that traffic between the client and the server has to pass through the router. The other way is to also apply NAT to the source address of internal connections to the external IP, so that they look like they come from the gateway. The internal server will then respond to the gateway, which will undo *both* NAT changes, and send the packet back to the internal client. A sketch of that second approach is shown below.
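As a hedged illustration with Linux iptables (the interface details and every address below are made-up placeholders; the real rules depend entirely on your gateway):

```
# Assumptions: LAN is 192.168.1.0/24, the internal server is 192.168.1.10,
# the public IP is 203.0.113.5, and the gateway's LAN address is 192.168.1.1.

# Hairpin DNAT: LAN clients hitting the public IP get redirected to the server
iptables -t nat -A PREROUTING -s 192.168.1.0/24 -d 203.0.113.5 \
    -p tcp --dport 80 -j DNAT --to-destination 192.168.1.10

# Hairpin SNAT: rewrite the source too, so replies return via the gateway
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -d 192.168.1.10 \
    -p tcp --dport 80 -j SNAT --to-source 192.168.1.1
```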
What's the difference between Step and Step Into in Google Chrome developer tools?

[![enter image description here](https://i.stack.imgur.com/xj1eu.png)](https://i.stack.imgur.com/xj1eu.png)

What is the difference between "Step" and "Step into" in Google Chrome Developer Tools? I can't even find it in the docs: <https://developers.google.com/web/tools/chrome-devtools/javascript/step-code>

[![enter image description here](https://i.stack.imgur.com/5mtQE.png)](https://i.stack.imgur.com/5mtQE.png)
You can spot the difference while running async code or multi-threaded code.

**Step into**: DevTools assumes that you want to pause in the asynchronous code that eventually runs.

**Step**: DevTools pauses in the code as it chronologically runs.

Consider this example:

```
setTimeout(() => {
    console.log('inside')
}, 3000);
console.log('outside')
```

After stopping on the breakpoint on the first line (`setTimeout(() => {`):

**Step into**: waits 3 seconds and then stops on the 2nd line (`console.log('inside')`).

**Step**: pauses on the 4th line (`console.log('outside')`).

Link to the docs: <https://developers.google.com/web/updates/2018/01/devtools#async>
C++ make a string to int function

I usually program in Python, which has a built-in string to integer converter, but when I use C++ it doesn't seem to work the same way, so I decided to make my own. This is what I have made so far:

```
int strtoint(string str) {
    int values [str.length()];
    int return_value = 0;
    for (int i=0; i < str.length(); ++i)
        if(str.at(str.length()-1-i) == '1')
            values[i] = 1;
        else if(str.at(str.length()-1-i) == '2')
            values[i] = 2;
        else if(str.at(str.length()-1-i) == '3')
            values[i] = 3;
        else if(str.at(str.length()-1-i) == '4')
            values[i] = 4;
        else if(str.at(str.length()-1-i) == '5')
            values[i] = 5;
        else if(str.at(str.length()-1-i) == '6')
            values[i] = 6;
        else if(str.at(str.length()-1-i) == '7')
            values[i] = 7;
        else if(str.at(str.length()-1-i) == '8')
            values[i] = 8;
        else if(str.at(str.length()-1-i) == '9')
            values[i] = 9;
    for (int i=0; i < str.length(); ++i)
        return_value += values[i]^(10*i);
    return return_value;
}
```

I seem to get very weird answers: `"12"` returns `13` and `"23"` returns `11`. I know about `stoi`, but I prefer to make my own so I can learn C++.
Three problems I can see on a quick look.

The first is that

```
int values [str.length()];
```

is not valid C++. It is using a feature (variable-length arrays) from the 1999 C standard, which some C++ compilers support as an extension, but it is still not valid C++.

The second is the lack of handling of `0`s or errors (non-digit characters) in the input.

The third is the statement

```
return_value += values[i]^(10*i);
```

`^` is the bitwise XOR operator in C++, not mathematical exponentiation.

A couple of other minor tips. You can probably simplify your code a lot by using iterators. Also, in all standard character sets, the decimal digit characters are sequential, starting with `'0'`. So a simple way to convert a digit to the numeric value you want is `digit - '0'`, which will convert `'0'` to `0`, `'1'` to `1`, ... `'9'` to `9`. A corrected version along these lines is sketched below.
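Putting those fixes together, a minimal sketch of the corrected function (still without error handling for non-digit characters, which you would want to add):

```
#include <string>

int strtoint(const std::string &str) {
    int return_value = 0;
    for (char c : str)
        // digits are sequential, so c - '0' yields the numeric value;
        // multiplying the running total by 10 replaces the broken `^`
        return_value = return_value * 10 + (c - '0');
    return return_value;
}
```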
How to re-use compiled sources in different machines

To speed up our development workflow, we split the tests and run each part on multiple agents in parallel. However, compiling the test sources seems to take most of the time for the testing steps. To avoid this, we pre-compile the tests using `sbt test:compile` and build a Docker image with the compiled targets. Later, this image is used in each agent to run the tests. However, it seems to recompile the tests and application sources even though the compiled classes exist. Is there a way to make sbt use existing compiled targets?

**Update: To give more context**

The question strictly relates to Scala and sbt (hence the sbt tag). Our CI process is broken down into multiple phases. It's roughly something like this:

- Stage 1: We use sbt to compile the Scala project into Java bytecode using `sbt compile`. We compile the test sources in the same stage using `sbt test:compile`. The targets are bundled in a Docker image and pushed to the remote repository.
- Stage 2: We use multiple agents to split and run the tests in parallel. The tests run from the built Docker image, so the environment is the same. However, running `sbt test` causes the project to recompile even though the compiled bytecode exists.

To make this clear, I basically want to compile on one machine and run the compiled test sources on another without re-compiling.

**Update**

I don't think <https://stackoverflow.com/a/37440714/8261> is the same problem, because unlike it, I don't mount volumes or build on the host machine. Everything is compiled and run within Docker, but in two build stages. The file modified times and paths are retained the same because of this.

The debug output has something like this:

```
Initial source changes:
    removed: Set()
    added: Set()
    modified: Set()
Invalidated products: Set(/app/target/scala-2.12/classes/Class1.class, /app/target/scala-2.12/classes/graph/Class2.class, ...)
External API changes: API Changes: Set()
Modified binary dependencies: Set()
Initial directly invalidated classes: Set()
Sources indirectly invalidated by:
    product: Set(/app/Class4.scala, /app/Class5.scala, ...)
    binary dep: Set()
    external source: Set()
All initially invalidated classes: Set()
All initially invalidated sources: Set(/app/Class4.scala, /app/Class5.scala, ...)
Recompiling all 304 sources: invalidated sources (266) exceeded 50.0% of all sources
Compiling 302 Scala sources and 2 Java sources to /app/target/scala-2.12/classes ...
```

It has no initial source changes, but products are invalidated.

**Update: Minimal project to reproduce**

I created a minimal sbt project to reproduce the issue: <https://github.com/pulasthibandara/sbt-docker-recomplile>

As you can see, nothing changes between the build stages, other than running the second stage in a new step (new container).
While <https://stackoverflow.com/a/37440714/8261> pointed in the right direction, the underlying issue and the solution for this were different.

**Issue**

sbt seems to recompile everything when it's run in different stages of a Docker build. This is because Docker compresses images created in each stage, which strips out the millisecond portion of the lastModifiedDate from sources. sbt depends on lastModifiedDate when determining if sources have changed, and since it's different (the milliseconds part), the build triggers a full recompilation.

**Solution**

- Java 8: Set `-Dsbt.io.jdktimestamps=true` when running sbt, as recommended in <https://github.com/sbt/sbt/issues/4168#issuecomment-417655678>, to work around this issue.
- Newer: Follow the recommendation in <https://github.com/sbt/sbt/issues/4168#issuecomment-417658294>

I solved the issue by setting the `SBT_OPTS` env variable in the Dockerfile like:

```
ENV SBT_OPTS="${SBT_OPTS} -Dsbt.io.jdktimestamps=true"
```

The [test project](https://github.com/pulasthibandara/sbt-docker-recomplile) has been updated with this workaround.
Different ways to distribute (embedded) C modular library

[picoTCP](https://github.com/tass-belgium/picotcp) (an open source embedded TCP/IP stack) has always had a development focus (both in picoTCP itself and with picoTCP) on Linux & gcc systems. The final target is usually an embedded micro (ARM based). For this reason we put quite some logic into the Makefile to keep modularity high. We're also using compile flags to enable/disable all of these modules.

[We are starting to notice](https://github.com/tass-belgium/picotcp/issues/441) that Linux isn't the only developer system in the world (here I mean the system used while integrating picoTCP into your own project), and that Windows, together with a bunch of IDEs each with their own (proprietary) compiler, isn't good friends with Makefiles. This usually means that these users have to manually add the picoTCP files to their project and expand their build system. This also means that people run into issues like:

1. which files to include (because the logic is in the Makefile)
2. in which order they should be included
3. manually updating all these files when there's a new version

We're currently looking at the different ways of distributing picoTCP in a more convenient way and would like to know the pros and cons. We've currently identified a couple of ways:

**Generate .a file**

- (+) Clean solution that keeps using the existing Makefile
- (-) Needs a Linux & gcc environment
- (-) Only works for the same target compiler/system

**Generate a single .c and .h file**

This is something the Mongoose library from Cesanta does.

- (+) Very portable & simple to use
- (-) Nightmare to do debugging
- (-) Will need external scripts to remove #includes and merge files together
- (-) All modules would be included, therefore we'll probably have to add some more compilation flags

**Build a (web-based) library generator (again one .c and .h file)**

Something commonly done in web applications (like jQuery).

- (+) Assists people in building an optimized library
- (+) Web based, so no need for any environment
- (-) The most work to set up such a system
- (-) How to keep track of many library versions?

Could you provide comments and more alternatives?
> Generate .a file
>
> Build a (web-based) library generator (again one .c and .h file)

Neither of these is a good option. The former is, for the reason you covered, platform-specific. The latter leaves anyone wanting to alter the set of modules compiled into their binary dependent on you continuing to run a server that can generate a new version. You could provide the sources for that server, but having to set one up just for that would be inconvenient or impossible in some shops.

> Generate a single .c and .h file

I think this is the best way to go, since you can generate it as part of your distribution and it's the same every time.

> (+) Very portable & simple to use

The portability issue is a big thing in the embedded space. There are lots of odd little development environments for various platforms that don't provide a lot of the kinds of features you find in a Unixy environment, so anything you can do to make it easier to incorporate will increase adoption. An added bonus is that many compilers can do size and speed optimizations on a single file that aren't possible when the code is split. Being able to squeeze your code into a smaller footprint and wring more out of it is welcome in constrained environments.

> (-) Nightmare to do debugging

Not as much as you'd think. Most debuggers don't care if you have one giant file or 50 little ones. Once you've worked the kinks out of determining what gets included and what doesn't, you probably won't notice it, because most debugging is done intra-function. Odds are quite good that most of the bugs you encounter are going to be due to flaws in the code and will show up in both the un-merged and merged versions. Speaking of same, if you have a test battery, it would be good to run it against both.

You do want to keep the original files separate, because it's lots easier to do reviews and poke around in your version control system looking at smaller chunks. One reason to avoid custom builds for each combination of features is that it effectively puts *2^n* versions of the source out there, where *n* is the number of features you can enable or disable. If somebody points out a bug in line 456 of your sources, you're going to have to determine the set of features that was enabled so you know which line 456 it is.

> (-) Will need external scripts to remove #includes and merge files together

This shouldn't be too big a deal. The important thing is to make sure the process is automated and repeatable so it becomes a set-and-forget part of your distribution process.

> (-) All modules would be included, therefore we'll probably have to add some more compilation flags

That shouldn't be too disruptive, especially if you already have feature selection built into the code. If you're doing it as part of your build environment (i.e., the decision to include or exclude something is part of your Makefile), I would recommend getting away from that.

Before diving into a single-file distribution, I would highly recommend studying [SQLite](https://sqlite.org). That project has been around for 16 years, is pretty much the gold standard for that sort of thing, and does many of the things you're looking at doing.
How to build ffmpeg with hardware decoding support for raspberry pi? (cross compilation if possible)

Is it possible to build `ffmpeg` with decoding support for the Raspberry Pi? I've read that `mmal` can do hardware accelerated decoding on the Raspberry Pi.

I've tried on Debian 10 x86_64:

```
./configure \
  --prefix=${BUILD_DIR}/desktop/${FFMPEG_ARCH_FLAG} \
  --disable-doc \
  --enable-cross-compile \
  --cross-prefix=${CROSS_PREFIX} \
  --target-os=linux \
  --arch=${FFMPEG_ARCH_FLAG} \
  --extra-cflags="-O3 -fPIC $EXTRA_CFLAGS" \
  --enable-mmal \
  --enable-omx \
  --enable-omx-rpi \
  --enable-shared \
  --disable-debug \
  --disable-runtime-cpudetect \
  --disable-programs \
  --disable-muxers \
  --disable-encoders \
  --disable-bsfs \
  --disable-pthreads \
  --disable-avdevice \
  --disable-network \
  --disable-postproc
```

where `CROSS_PREFIX=aarch64-linux-gnu-` and `FFMPEG_ARCH_FLAG=aarch64`, but obviously I get `ERROR: mmal not found`. I couldn't find MMAL to compile and install.
This is possible. You can use [this](https://gist.github.com/wildrun0/86a890585857a36c90110cee275c45fd#file-compile-ffmpeg-sh) build script, which you will need to run on the Raspberry Pi itself. The part you probably care most about is towards the end:

```
git clone --depth 1 https://github.com/FFmpeg/FFmpeg.git ~/FFmpeg \
  && cd ~/FFmpeg \
  && ./configure \
    --extra-cflags="-I/usr/local/include" \
    --extra-ldflags="-L/usr/local/lib" \
    --extra-libs="-lpthread -lm -latomic" \
    --arch=armel \
    --enable-gmp \
    --enable-gpl \
    --enable-libaom \
    --enable-libass \
    --enable-libdav1d \
    --enable-libfdk-aac \
    --enable-libfreetype \
    --enable-libkvazaar \
    --enable-libmp3lame \
    --enable-libopencore-amrnb \
    --enable-libopencore-amrwb \
    --enable-libopus \
    --enable-librtmp \
    --enable-libsnappy \
    --enable-libsoxr \
    --enable-libssh \
    --enable-libvorbis \
    --enable-libvpx \
    --enable-libzimg \
    --enable-libwebp \
    --enable-libx264 \
    --enable-libx265 \
    --enable-libxml2 \
    --enable-mmal \
    --enable-nonfree \
    --enable-omx \
    --enable-omx-rpi \
    --enable-version3 \
    --target-os=linux \
    --enable-pthreads \
    --enable-openssl \
    --enable-hardcoded-tables \
  && make -j$(nproc) \
  && sudo make install
```

Note this caveat from [RaspberryPi.org](https://www.raspberrypi.org/documentation/raspbian/applications/camera.md):

> MMAL is a Broadcom-specific API used only on VideoCore 4 systems

This makes me think the chances of cross-compiling are low, but you can always give it a shot with this script.
Can I reflect messages out of a Haskell program at runtime?

I'm writing a program that validates a complex data structure according to a number of complex rules. It inputs the data and outputs a list of messages indicating problems with the data. Think along these lines:

```
import Control.Monad (when)
import Control.Monad.Writer (Writer, tell)

data Name = FullName String String | NickName String

data Person = Person { name :: Name, age :: Maybe Int }

data Severity = E | W | C -- error/warning/comment

data Message = Message { severity :: Severity, code :: Int, title :: String }

type Validator = Writer [Message]

report :: Severity -> Int -> String -> Validator ()
report s c d = tell [Message s c d]

checkPerson :: Person -> Validator ()
checkPerson person = do
  case age person of
    Nothing -> return ()
    Just years -> do
      when (years < 0) $ report E 1001 "negative age"
      when (years > 200) $ report W 1002 "age too large"
  case name person of
    FullName firstName lastName -> do
      when (null firstName) $ report E 1003 "empty first name"
    NickName nick -> do
      when (null nick) $ report E 1004 "empty nickname"
```

For documentation, I also want to compile a list of all messages this program can output. That is, I want to obtain the value:

```
[ Message E 1001 "negative age"
, Message W 1002 "age too large"
, Message E 1003 "empty first name"
, Message E 1004 "empty nickname"
]
```

I could move the messages out of `checkPerson` into some external data structure, but I like it when the messages are defined right at the spot where they are used. I could (and probably should) extract the messages from the AST at compile time. But the touted flexibility of Haskell made me think: can I achieve that **at runtime**? That is, can I write a function

```
allMessages :: (Person -> Validator ()) -> [Message]
```

such that `allMessages checkPerson` would give me the above list? Of course, `checkPerson` and `Validator` **need not** stay the same.

I can almost (not quite) see how I could make a custom `Validator` monad with a "backdoor" that would run `checkPerson` in a sort of "reflection mode," traversing all paths and returning all `Message`s encountered. I would have to write a custom `when` function that would know to ignore its first argument under some circumstances (which ones?). So, a kind of a DSL. Perhaps I could even emulate pattern matching?

So: can I do something like this, how, and what would I have to sacrifice? Please feel free to suggest any solutions even if they do not exactly fit the above description.
This kind of half-static analysis is basically exactly what arrows were invented for. So let's make an arrow! Our arrow will basically be just a `Writer` action, but one that remembers what messages it might have spit out at any given moment. First, some boilerplate:

```
{-# LANGUAGE Arrows #-}

import Control.Arrow
import Control.Category
import Control.Monad.Writer
import Prelude hiding (id, (.))
```

Now, the type described above:

```
data Validator m a b = Validator
    { possibleMessages :: [m]
    , action :: Kleisli (Writer m) a b
    }

runValidator :: Validator m a b -> a -> Writer m b
runValidator = runKleisli . action
```

There are some straightforward instances to put in place. Of particular interest: the composition of two validators remembers messages from both the first action and the second action.

```
instance Monoid m => Category (Validator m) where
    id = Validator [] id
    Validator ms act . Validator ms' act' = Validator (ms ++ ms') (act . act')

instance Monoid m => Arrow (Validator m) where
    arr f = Validator [] (arr f)
    first (Validator ms act) = Validator ms (first act)

instance Monoid m => ArrowChoice (Validator m) where
    left (Validator ms act) = Validator ms (left act)
```

All the magic is in the operation that actually lets you report something:

```
reportWhen :: Monoid m => m -> (a -> Bool) -> Validator m a ()
reportWhen m f = Validator [m] (Kleisli $ \a -> when (f a) (tell m))
```

This is the operation that notices when you're about to output a possible message, and makes a note of it. Let's copy your types and show how to code up `checkPerson` as an arrow. I've simplified your messages a little bit, but nothing important is different there -- just less syntactic overhead in the example.

```
type Message = String

data Name = FullName String String | NickName String
-- http://www.kalzumeus.com/2010/06/17/falsehoods-programmers-believe-about-names/

data Person = Person { name :: Name, age :: Maybe Int }

checkPerson :: Validator Message Person ()
checkPerson = proc person -> do
    case age person of
        Nothing -> returnA -< ()
        Just years -> do
            "negative age" `reportWhen` (< 0) -< years
            "age too large" `reportWhen` (>200) -< years
    case name person of
        FullName firstName lastName -> do
            "empty first name" `reportWhen` null -< firstName
        NickName nick -> do
            "empty nickname" `reportWhen` null -< nick
```

I hope you'll agree that this syntax is not *too* far removed from what you originally wrote. Let's see it in action in ghci:

```
> runWriter (runValidator checkPerson (Person (NickName "") Nothing))
((),"empty nickname")
> possibleMessages checkPerson
["empty nickname","empty first name","age too large","negative age"]
```
Why `PagerAdapter::notifyDataSetChanged` is not updating the View?

I'm using the ViewPager from the compatibility library. I have successfully got it displaying several views which I can page through.

However, I'm having a hard time figuring out how to update the ViewPager with a new set of views. I've tried all sorts of things like calling `mAdapter.notifyDataSetChanged()`, `mViewPager.invalidate()`, even creating a brand new adapter each time I want to use a new list of data. Nothing has helped; the textviews remain unchanged from the original data.

**Update:** I made a little test project and I've almost been able to update the views. I'll paste the class below. What doesn't appear to update, however, is the 2nd view; the 'B' remains, when it should display 'Y' after pressing the update button.

```
public class ViewPagerBugActivity extends Activity {

    private ViewPager myViewPager;
    private List<String> data;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        data = new ArrayList<String>();
        data.add("A");
        data.add("B");
        data.add("C");

        myViewPager = (ViewPager) findViewById(R.id.my_view_pager);
        myViewPager.setAdapter(new MyViewPagerAdapter(this, data));

        Button updateButton = (Button) findViewById(R.id.update_button);
        updateButton.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(View v) {
                updateViewPager();
            }
        });
    }

    private void updateViewPager() {
        data.clear();
        data.add("X");
        data.add("Y");
        data.add("Z");
        myViewPager.getAdapter().notifyDataSetChanged();
    }

    private class MyViewPagerAdapter extends PagerAdapter {

        private List<String> data;
        private Context ctx;

        public MyViewPagerAdapter(Context ctx, List<String> data) {
            this.ctx = ctx;
            this.data = data;
        }

        @Override
        public int getCount() {
            return data.size();
        }

        @Override
        public Object instantiateItem(View collection, int position) {
            TextView view = new TextView(ctx);
            view.setText(data.get(position));
            ((ViewPager)collection).addView(view);
            return view;
        }

        @Override
        public void destroyItem(View collection, int position, Object view) {
            ((ViewPager) collection).removeView((View) view);
        }

        @Override
        public boolean isViewFromObject(View view, Object object) {
            return view == object;
        }

        @Override
        public Parcelable saveState() {
            return null;
        }

        @Override
        public void restoreState(Parcelable arg0, ClassLoader arg1) {
        }

        @Override
        public void startUpdate(View arg0) {
        }

        @Override
        public void finishUpdate(View arg0) {
        }
    }
}
```
There are several ways to achieve this.

The first option is easier, but a bit more inefficient. Override `getItemPosition` in your `PagerAdapter` like this:

```
public int getItemPosition(Object object) {
    return POSITION_NONE;
}
```

This way, when you call `notifyDataSetChanged()`, the view pager will remove all views and reload them all, and so the reload effect is obtained.

The second option, [suggested by Alvaro Luis Bustamante (previously alvarolb)](https://stackoverflow.com/a/8024557), is to use the `setTag()` method in `instantiateItem()` when instantiating a new view. Then, instead of using `notifyDataSetChanged()`, you can use `findViewWithTag()` to find the view you want to update. (A sketch of this approach follows the conclusion below.)

## Conclusion

If you have a lot of views, or want to support modifying any specific item and/or view (fast, at any time), then the second approach (tagging) is very flexible and high performant, as it prevents recreating all the **not** modified views. (Kudos to alvarolb for the original research.)

But if your app has only a "refresh" feature (without single item changes even being allowed), or has just a few items, use the first approach, as it saves development time.
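A hedged sketch of the tagging approach, adapted to the question's adapter (the tag format and the `updateItem` helper are illustrative, not part of the original answer):

```
@Override
public Object instantiateItem(View collection, int position) {
    TextView view = new TextView(ctx);
    view.setText(data.get(position));
    view.setTag("item" + position); // tag the view so it can be found later
    ((ViewPager) collection).addView(view);
    return view;
}

// Update a single page in place, without recreating any views:
private void updateItem(int position, String newText) {
    TextView view = (TextView) myViewPager.findViewWithTag("item" + position);
    if (view != null) {
        view.setText(newText);
    }
}
```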
Get Multiple Embedded Youtube Videos to Play Automatically in Sequence

Is there any simple way to have multiple embedded YouTube videos on a page and have them start playing as soon as the page is opened, and when the first one finishes have the second one start? I was hoping something like this would work:

```
<object width="425" height="350">
  <param name="movie" value="http://www.youtube.com/v/OdT9z-JjtJk&autoplay=1"></param>
  <embed src="http://www.youtube.com/v/OdT9z-JjtJk&autoplay=1" type="application/x-shockwave-flash" width="425" height="350"></embed>
</object>
<br>
<object width="425" height="350">
  <param name="movie" value="http://www.youtube.com/v/NlXTv5Ondgs&autoplay=2"></param>
  <embed src="http://www.youtube.com/v/NlXTv5Ondgs&autoplay=2" type="application/x-shockwave-flash" width="425" height="350"></embed>
</object>
```

And it does for the first one, but not the second. I would imagine that I may need to dive into the API. Anyone have any suggestions?
Using the YouTube IFrame API, you can do this easily. The only part you need to configure here is the array of YouTube IDs. You can retrieve those from the part after the /v/ in the URL. (If need be, you can modify the JavaScript to load URLs instead of IDs. I just like this way better.)

```
<div id="player"></div>
<script src="//www.youtube.com/iframe_api"></script>
<script>
  /**
   * Put your video IDs in this array
   */
  var videoIDs = [
    'OdT9z-JjtJk',
    'NlXTv5Ondgs'
  ];

  var player, currentVideoId = 0;

  function onYouTubeIframeAPIReady() {
    player = new YT.Player('player', {
      height: '350',
      width: '425',
      events: {
        'onReady': onPlayerReady,
        'onStateChange': onPlayerStateChange
      }
    });
  }

  function onPlayerReady(event) {
    event.target.loadVideoById(videoIDs[currentVideoId]);
  }

  function onPlayerStateChange(event) {
    if (event.data == YT.PlayerState.ENDED) {
      currentVideoId++;
      if (currentVideoId < videoIDs.length) {
        player.loadVideoById(videoIDs[currentVideoId]);
      }
    }
  }
</script>
```
Android Video Recording of OpenGL surface (GLSurfaceView)

I am working on a project where we need to record the rendered OpenGL surface (for example, if we use GLSurfaceView, we need to record the surface along with the audio from the MIC).

Presently I am using the MediaRecorder API by setting the video source as `VIDEO_SOURCE_GRALLOC_BUFFER`. I am using the following [sample](https://android.googlesource.com/platform/frameworks/av/+/master/media/libstagefright/tests/SurfaceMediaSource_test.cpp) as the base code.

I wanted to know:

1. Is this the right way? Is there any better alternative?
2. The sample test given in the link records the audio and video of the EGLSURFACE, but it is not displayed properly. What might be the reason?

Any help/pointers is really appreciated.

thanks,
Satish
The code you reference isn't a sample, but rather internal test code that exercises a non-public interface. `SurfaceMediaSource` could change or disappear in a future release and break your code.

**Update:** Android 4.3 (API 18) allows [Surface input](http://developer.android.com/reference/android/media/MediaCodec.html#createInputSurface%28%29) to `MediaCodec`. The [EncodeAndMuxTest](http://bigflake.com/mediacodec/#EncodeAndMuxTest) sample demonstrates recording OpenGL ES frames to a .mp4 file. The `MediaRecorder` class doesn't take Surface input, so in your case you'd need to record the audio separately and then combine it with the new [MediaMuxer](http://developer.android.com/reference/android/media/MediaMuxer.html) class.

**Update #2:** Android 5.0 (API 21) allows [Surface input](http://developer.android.com/reference/android/media/MediaRecorder.html#getSurface()) to `MediaRecorder`, which is often much more convenient than `MediaCodec`. If you need to use `MediaCodec`, there is an example showing three different ways of recording OpenGL ES output with it in [Grafika's](https://github.com/google/grafika) "record GL activity". The [MediaProjection](http://developer.android.com/reference/android/media/projection/MediaProjection.html) class can also be useful for screen recording.
numpy error when importing pandas with AWS Lambda

I'm currently having an issue with importing the library `pandas` into my AWS Lambda function. I have tried two scenarios:

- Installing pandas directly into one folder with my `lambda_function` and uploading the zipped file.
- Creating a layer with an uploaded zip file with the following structure:

  ```
  - python
    - lib
      - python3.8
        - site-packages
          - all the pandas packages here
  ```

My `lambda_function` is just:

```
import json
import pandas as pd

def lambda_handler(event, context):
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
```

This is my error:

```
START RequestId: 9e27641e-587b-4be2-b9be-c9be85007f9e Version: $LATEST
[ERROR] Runtime.ImportModuleError: Unable to import module 'main': Unable to import required dependencies:
numpy: IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!

Importing the numpy C-extensions failed. This error can happen for many reasons,
often due to issues with your setup or how NumPy was installed.

We have compiled some common reasons and troubleshooting tips at:

    https://numpy.org/devdocs/user/troubleshooting-importerror.html

Please note and check the following:

  * The Python version is: Python3.8 from "/var/lang/bin/python3.8"
  * The NumPy version is: "1.21.1"

and make sure that they are the versions you expect.
Please carefully study the documentation linked above for further help.

Original error was: No module named 'numpy.core._multiarray_umath'
```

Is there any other approach? I don't want to use Docker for this task. Thanks!
I have solved the issue, thanks to this article: <https://korniichuk.medium.com/lambda-with-pandas-fd81aa2ff25e>

In my case, I could not simply install the libraries through pip, because I'm on a Windows machine and you must install the Linux versions of pandas and numpy. Since I'm on Python 3.8 I installed these versions:

- `numpy-1.21.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl`
- `pandas-1.3.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl`

After downloading the packages, I replaced the pandas and numpy folders that originally came from `pip install pandas`. I used my first scenario as shown in my question.
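As a possible shortcut (a sketch, not part of the original fix): newer versions of pip can fetch the Linux wheels directly on a Windows machine, which avoids downloading them by hand:

```
pip download pandas numpy \
    --platform manylinux2014_x86_64 \
    --python-version 38 \
    --only-binary=:all: \
    -d ./wheels
```

The downloaded wheels can then be unpacked into the deployment package or layer as described above.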
Apps Script: how to get hyperlink from a cell where there is no formula

I have a sheet where a hyperlink is set in a cell, but *not through a formula*. When the cell is clicked, the "fx" bar only shows the value.

I searched the web, but everywhere the info is to extract the hyperlink using `getFormula()`. In my case, though, there is no formula set at all. I can see the hyperlink, as you can see in the image, but it's not there in the "formula/fx" bar.

[![enter image description here](https://i.stack.imgur.com/KF2ig.png)](https://i.stack.imgur.com/KF2ig.png)

How do I get the hyperlink of that cell using Apps Script or any formula?
When an Excel file including cells with hyperlinks is converted to a Google Spreadsheet, this situation can also be seen. In my case, I retrieve the URLs using the Sheets API. A sample script is as follows. I think that there might be several solutions, so please think of this as just one of them.

**When you use this script, please enable the Sheets API under Advanced Google Services and in the API console. You can see how to enable the Sheets API [here](https://stackoverflow.com/questions/48754286/retrieving-google-sheets-cell-border-style-programmatically/48756509#48756509).**

### Sample script:

```
var spreadsheetId = "### spreadsheetId ###";
var res = Sheets.Spreadsheets.get(spreadsheetId, {ranges: "Sheet1!A1:A10", fields: "sheets/data/rowData/values/hyperlink"});
var sheets = res.sheets;
for (var i = 0; i < sheets.length; i++) {
  var data = sheets[i].data;
  for (var j = 0; j < data.length; j++) {
    var rowData = data[j].rowData;
    for (var k = 0; k < rowData.length; k++) {
      var values = rowData[k].values;
      for (var l = 0; l < values.length; l++) {
        Logger.log(values[l].hyperlink) // You can see the URL here.
      }
    }
  }
}
```

### Note:

- Please set `spreadsheetId`.
- `Sheet1!A1:A10` is a sample. Please set the range for your situation.
- In this case, each element of `rowData` corresponds to a row index. Each element of `values` corresponds to a column index.

### References:

- [Method: spreadsheets.get](https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets/get)

If this was not what you want, please tell me. I would like to modify it.
Change subpanel order in SugarCRM 7

How can one change the order of each subpanel, either in code or through the GUI? In Sugar 6 the user could change the order simply by dragging and dropping the subpanels under each module. From what I can see, this is not possible in 7.x.

I have tried to change

```
'order' => 1
```

in

```
custom/Extension/modules/Opportunities/Ext/Layoutdefs/some_file.php
```

with no luck at all.
**UPDATE:** As UTAlan stated, this will become part of the stock functionality of Sugar starting in version 7.5.0: <https://web.sugarcrm.com/support/issues/66590>

Until then, here is the reason and the solution. The `'order' => 1,` does not seem to work on Sugar 7 at the moment.

**Solution**

Copy the file `modules/Opportunities/clients/base/layouts/subpanels/subpanels.php` to `custom/modules/Opportunities/clients/base/layouts/subpanels/subpanels.php`.

Now, add your custom subpanel definition to the beginning of the array, or in any order you desire. My example looks like this now:

```
$viewdefs['Opportunities']['base']['layout']['subpanels'] = array(
    'components' => array(
        // This is my custom module
        array(
            'layout' => 'subpanel',
            'label' => 'LBL_OPPORTUNITIES_FOOBAR_TITLE',
            'context' => array(
                'link' => 'opportunities_foobar_1',
            ),
        ),
        // Code omitted
        array(
            'layout' => 'subpanel',
            'label' => 'LBL_EMAILS_SUBPANEL_TITLE',
            'context' => array(
                'link' => 'archived_emails',
            ),
        ),
    ),
    'type' => 'subpanels',
    'span' => 12,
);
```

**Long Answer:** *Why is 'order' => 1 not working anymore?*

Inside `include/MetaDataManager/MetaDataConverter.php:327`:

```
public function toLegacySubpanelLayoutDefs(array $layoutDefs, SugarBean $bean)
{
    // ...
    foreach ($layoutDefs as $order => $def) {
        // ...
        $return[$def['context']['link']] = array(
            'order' => $order,
            // ...
    }
```

The order that is rendered in the view is based on the order in which each bean name is inserted into the 'components' key inside this file: `modules/Opportunities/clients/base/layouts/subpanels/subpanels.php`

Core modules are hard-coded inside the subpanel file for Opportunities.
Why does this globally defined css rule affect styling inside of the shadow DOM?

I created a web component with shadow mode 'open', which is used like this:

```
<scu-switch checked="true" value="switch1">
  <span id="scu-switch-label">Switch On</span>
</scu-switch>
```

and looks like this:

[![Html Toggle Button](https://i.stack.imgur.com/7eMEP.png)](https://i.stack.imgur.com/7eMEP.png)

Then I added the button to a webpage with the following global CSS:

```
text-align: center;
```

and now the button style is broken:

[![enter image description here](https://i.stack.imgur.com/0xtB4.png)](https://i.stack.imgur.com/0xtB4.png)

When I inspect the button I can see that the global style was applied to the span (and notice that it is not part of slot content) **inside** of the shadow root.

[![Developer Tools](https://i.stack.imgur.com/8RlwM.png)](https://i.stack.imgur.com/8RlwM.png)

The shadow DOM is supposed to [isolate style](https://css-tricks.com/encapsulating-style-and-structure-with-shadow-dom/) from the rest of the web page. Why was this `text-align: center` applied here, even though it was defined outside of the shadow DOM?
> One of the great features of the Shadow DOM in Web Components is that styles are encapsulated to the component - you can style your component without worrying about any specifier (id, class, etc.) conflicts or styles '**leaking out**' to other elements on the page.
>
> This often leads to the belief that the reverse is true - that styles outside of the component won't cross the Shadow boundary and '**leak in**' to your component. However this is only partly true.
>
> While specifiers do not leak in to your component (e.g. a color applied to a p element in a style rule outside of your component won't affect any p elements in your Shadow DOM, although the rule will be applied to your Light DOM, or slotted content),
>
> ***inheritable styles* applied to any elements containing your component will be applied to both your Shadow and Light DOM.**

**Source:** <https://lamplightdev.com/blog/2019/03/26/why-is-my-web-component-inheriting-styles/>
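A hedged sketch of the usual guard against this (my addition, not part of the quoted article): reset inheritable properties at the shadow boundary, inside the component's own stylesheet.

```
/* Inside the component's shadow stylesheet.
   `all: initial` resets every inheritable property crossing the shadow
   boundary; note it also clears the host's own defaults, so use deliberately. */
:host {
  all: initial;
}

/* Or, more surgically, reset only the property that leaks in: */
:host {
  text-align: initial;
}
```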
How to create a PriorityQueue with new comparator and NO specified initial capacity?

In Java, I don't know how to create a new `PriorityQueue` with a new comparator but without giving the queue length. How can I create it? I know I can write:

```
Queue<Node> theQueue = new PriorityQueue<Node>(15, new Comparator<Node>());
```

But I want the queue to work like a `LinkedList`; I mean, its length is not fixed. How can I declare that?
Modern answer, as of 2021: <https://stackoverflow.com/a/30015986/139010>

---

Pre-Java-8 answer, for posterity:

There is no such constructor. As per the JavaDocs, [the default capacity is 11](http://docs.oracle.com/javase/7/docs/api/java/util/PriorityQueue.html#PriorityQueue%28%29), so you could specify that for analogous behavior to the no-arg `PriorityQueue` constructor:

```
// Note: the anonymous Comparator needs a compare() body to compile
Queue<Node> theQueue = new PriorityQueue<Node>(11, new Comparator<Node>() {
    @Override
    public int compare(Node a, Node b) {
        // your ordering logic here
        return 0;
    }
});
```

And yes, [the queue will grow if it needs to.](http://docs.oracle.com/javase/7/docs/api/java/util/PriorityQueue.html)

> A priority queue is unbounded, but has an internal capacity governing the size of an array used to store the elements on the queue. It is always at least as large as the queue size. As elements are added to a priority queue, its capacity grows automatically. The details of the growth policy are not specified.
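For reference, a sketch of what the linked modern answer allows: since Java 8, `PriorityQueue` has a comparator-only constructor, so the capacity can be omitted entirely (`Node::getWeight` here is a hypothetical accessor, used only for illustration):

```
Queue<Node> theQueue = new PriorityQueue<>(Comparator.comparingInt(Node::getWeight));
```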
Swift MPGTextField autocomplete in tableview cell

I can't see suggestions when typing. I have a tableview cell with a textfield in it. I'm using the [MPGTextField](https://github.com/gaurvw/MPGTextField) library, [Swift version](https://github.com/gaurvw/MPGTextField/issues/22#issuecomment-154628946) (Swift 2 supported). Any solution for this?

[![enter image description here](https://i.stack.imgur.com/bGDOL.png)](https://i.stack.imgur.com/bGDOL.png)

Code:

```
@IBOutlet weak var articleField: MPGTextField_Swift!

override func viewDidLoad() {
    super.viewDidLoad()
    articleField.mDelegate = self
}

func dataForPopoverInTextField(textfield: MPGTextField_Swift) -> [Dictionary<String, AnyObject>] {
    return articles
}

func textFieldShouldSelect(textField: MPGTextField_Swift) -> Bool {
    return true
}

func textFieldDidEndEditing(textField: MPGTextField_Swift, withSelection data: Dictionary<String, AnyObject>) {
    print(data["CustomObject"])
}
```
In `MPGTextField-Swift.swift` you'll find a function `provideSuggestions()`. In this function you'll find the line:

```
self.superview!.addSubview(tableViewController!.tableView)
```

Replace this line with:

```
//BUG FIX - SHOW ON TOP
//self.superview!.addSubview(tableViewController!.tableView)
let aView = tableViewController!.tableView
var frame = aView.frame
frame.origin = self.superview!.convertPoint(frame.origin, toView: nil)
aView.frame = frame
self.window!.addSubview(aView)
////
```

I've forked the `MPGTextField` repository and made the necessary changes for demo purposes. You can find my repo at <https://github.com/rishi420/MPGTextField>

**Note**: This repo needs Xcode 7.1.1 to compile. Feel free to contribute. :-]
Pointer gets modified after a push_back

Let us consider the following C++ code:

```
#include <iostream>
#include <vector>

class A {
    int x, y;
public:
    A(int x, int y) : x(x), y(y) {}
    friend std::ostream & operator << (std::ostream & os, const A & a) {
        os << a.x << " " << a.y;
        return os;
    }
};

int main() {
    std::vector<A> a;
    std::vector<const A*> b;

    for(int i = 0; i < 5; i++) {
        a.push_back(A(i, i + 1));
        b.push_back(&a[i]);
    }

    while(!a.empty()) {
        a.pop_back();
    }

    for(auto x : b)
        std::cout << *x << std::endl;

    return 0;
}
```

Using a debugger I noticed that after the first insertion is done to `a`, the address of `a[0]` changes. Consequently, when I'm printing in the second for loop I get an invalid reference to the first entry. Why does this happen?

Thanks for your help!
```
for(int i = 0; i < 5; i++){
    a.push_back(A(i, i + 1)); // add a new item to a
    b.push_back(&a[i]);       // point at the new item in a
}
```

The immediate problem is [iterator invalidation](https://stackoverflow.com/questions/16904454/what-is-iterator-invalidation). As `a` grows, it reallocates its storage for more capacity. This may leave the pointers in `b` pointing to memory that has been returned to the freestore (probably the heap). Accessing these pointers invokes [Undefined Behaviour](https://en.cppreference.com/w/cpp/language/ub) and anything could happen.

There are a few solutions to this, such as reserving space ahead of time to eliminate reallocation (see the sketch at the end of this answer) or using a container with more forgiving [invalidation rules](https://stackoverflow.com/questions/6438086/iterator-invalidation-rules), but whatever you do is rendered moot by the next problem.

```
while(!a.empty()){
    a.pop_back(); // remove item from `a`
}
```

Since the items in `b` point to items in `a`, and there are no items in `a`, all of the pointers in `b` now reference invalid objects and cannot be accessed without invoking Undefined Behaviour.

All of the items in `a` referenced by items in `b` must remain alive as long as the item in `b` exists, or be removed from both `a` and `b`. In this trivial case the answer is simple: don't empty `a`, but that defeats the point of the example. There are many solutions to the general case (just use `a`; store copies rather than pointers in `b`; use [`std::shared_ptr`](https://en.cppreference.com/w/cpp/memory/shared_ptr) and store `shared_ptr`s to `A`s in both `a` and `b`), but to make useful suggestions we need to know how `a` and `b` are being consumed.
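To make the reservation suggestion concrete, a minimal sketch (which, as noted above, is still rendered moot by the `pop_back` problem):

```
std::vector<A> a;
a.reserve(5); // capacity fixed up front: push_back will not reallocate,
              // so pointers taken into `a` stay valid while size() <= 5
```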
CakePHP 3.5 Auth use multiple tables I have an Auth process which works fine with one userModel. But, not least because of my DB schema, I need to have **one login method/action which works with multiple models**. So far I've tried everything I was able to think of or find online - for example editing [this Cake 1.3 solution](https://stackoverflow.com/a/2974494) into Cake 3, plus a few more hints I was able to find. However, I'm not able to figure it out. Thank you for any answer.

**My AppController component load:**

```
$this->loadComponent('ExtendedAuth', [
    'authenticate' => [
        'Form' => [
            //'userModel' => 'Admins',
            'fields' => [
                'username' => 'email',
                'password' => 'password'
            ]
        ]
    ],
    'loginAction' => [
        'controller' => 'Admins',
        'action' => 'login'
    ],
    // If unauthorized, return them to page they were just on
    'unauthorizedRedirect' => $this->referer(),
]);
```

**My ExtendedAuthComponent:**

```
class ExtendedAuthComponent extends AuthComponent
{
    function identify($user = null, $conditions = null) {
        $models = array('Admins', 'Users');
        foreach ($models as $model) {
            //$this->userModel = $model; // switch model
            parent::setConfig('authenticate', [
                AuthComponent::ALL => [
                    'userModel' => $model
                ]
            ]);
            $result = parent::identify(); // let cake do its thing
            if ($result) {
                return $result; // login success
            }
        }
        return null; // login failure
    }
}
```

**EDIT1: Description of situation** I have two separate tables (Admins, Users). I need just one login action which tries to use the Admins table prior to Users. Because of the application logic I can't combine them into one table with something like an 'is\_admin' flag. So basically what I need is, instead of one specific userModel set in the Auth config, a set of models. It sounds simple, and yet I'm not able to achieve it.

**EDIT2: Chosen solution** Based on the answer below, I decided to update my schema. The Auth users table is just a simplified table with login credentials and a role, and other role-specific fields are then in separate tables which are used as a connection for other role-specific tables. Even though the answer is not exactly a solution for the asked question, it made me think more about possible changes to the schema, and I found this solution because of it, so I'm marking it as the solution. I appreciate all comments as well.
As Mark already said in a comment: Don't use two users tables. Add a type field or role or whatever else and associated data in separate tables if it's different like admin\_profiles and user\_profiles. Don't extend the Auth component. I wouldn't recommend to use it anymore any way because it's going to get deprecated in the upcoming 3.7 / 4.0 release. [Use the new official authentication](https://github.com/cakephp/authentication) and [authorization](https://github.com/cakephp/authorization) plugins instead. If you insist on the rocky path and want to make your life harder, well go for it but then you should still not extend the auth component but instead write a custom authentication adapter. This is the right place to implement your custom 2-table-weirdness. [Read this section of the manual](https://book.cakephp.org/3.0/en/controllers/components/authentication.html#creating-custom-authentication-objects) on how to do it.
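If you do write such an adapter, a rough, untested sketch of what it could look like follows (class and method names per the CakePHP 3.x manual linked above; the two-model loop and the `auth_model` key are assumptions based on the question):

```
<?php
namespace App\Auth;

use Cake\Auth\FormAuthenticate;
use Cake\Http\Response;
use Cake\Http\ServerRequest;

class MultiTableAuthenticate extends FormAuthenticate
{
    public function authenticate(ServerRequest $request, Response $response)
    {
        // Try Admins first, then fall back to Users.
        foreach (['Admins', 'Users'] as $model) {
            $this->setConfig('userModel', $model);
            $user = parent::authenticate($request, $response);
            if ($user) {
                $user['auth_model'] = $model; // remember which table matched
                return $user;
            }
        }
        return false;
    }
}
```

You would then load it via `'authenticate' => ['MultiTable' => [...]]` instead of `'Form'`. But again: a single users table with a role field is the saner design.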
How to set readable xticks in seaborn's FacetGrid? I have this plot of a dataframe with seaborn's FacetGrid:

```
import seaborn as sns
import matplotlib.pylab as plt
import pandas
import numpy as np

plt.figure()
df = pandas.DataFrame({"a": map(str, np.arange(1001, 1001 + 30)),
                       "l": ["A"] * 15 + ["B"] * 15,
                       "v": np.random.rand(30)})
g = sns.FacetGrid(row="l", data=df)
g.map(sns.pointplot, "a", "v")
plt.show()
```

seaborn plots all the xtick labels instead of just picking a few, and it looks horrible: [![enter image description here](https://i.stack.imgur.com/QnBQX.png)](https://i.stack.imgur.com/QnBQX.png) Is there a way to customize it so that it plots every n-th tick on the x-axis instead of all of them?
The `seaborn.pointplot` is not the right tool for this plot. But the answer is very simple: use the basic `matplotlib.pyplot.plot` function: ``` import seaborn as sns import matplotlib.pylab as plt import pandas import numpy as np df = pandas.DataFrame({"a": np.arange(1001, 1001 + 30), "l": ["A"] * 15 + ["B"] * 15, "v": np.random.rand(30)}) g = sns.FacetGrid(row="l", data=df) g.map(plt.plot, "a", "v", marker="o") g.set(xticks=df.a[2::8]) ``` [![enter image description here](https://i.stack.imgur.com/7yIS8.png)](https://i.stack.imgur.com/7yIS8.png)
Elixir: best practice to extract data from nested structs In Elixir we can get data from nested data structures using

```
data = %{field: %{other_field: 1}}
data[:field][:other_field]
```

If it contains lists, it can also be done using

```
data = %{field: %{other_field: [1]}}
get_in data, [:field, :other_field, Access.at(0)]
```

But how do we get that data given that data.field.other\_field is a structure? Both of the above would fail because structs don't implement `Access.fetch/2`.

```
data = %{field: %{other_field: %Struct{a: 1}}}
```

So what's the right way to access nested structs' data, other than pattern matching?
Use [`Access.key/2`](http://elixir-lang.org/docs/stable/elixir/Access.html#key/2): > > **key(key, default \\ nil)** > > > Accesses the given key in a map/struct. > > > Uses the default value if the key does not exist or if the value being accessed is `nil`. > > > ``` iex(1)> defmodule Struct do ...(1)> defstruct [:a] ...(1)> end iex(2)> data = %{field: %{other_field: %Struct{a: 1}}} %{field: %{other_field: %Struct{a: 1}}} iex(3)> get_in data, [:field, :other_field, Access.key(:a)] 1 iex(4)> get_in data, [:field, :other_field, Access.key(:b, :default)] :default ```
Pybrain Reinforcement Learning Example As the question states, I am looking for a good explanation/example of reinforcement learning in PyBrain, as the documentation on this confuses me no end. I can get it to work, but I don't understand how to apply it to other things. Thanks, Tom
Unfortunately, pybrain's documentation for its RL classes is disappointing. I have found [this blog](http://simontechblog.blogspot.ca/2010/08/pybrain-reinforcement-learning-tutorial_15.html) quite useful.

---

In summary, you need to identify the following components (for the implementation details, follow the tutorial at the link):

1. an environment: `env = Environment(...)`
2. a task --> `task = Task(env)`
3. a controller, which is a module (like a table) to keep your action-value information --> `controller = Module(...)`
4. a learner --> `learner = SARSA()` --> you may also add an Explorer to the learner. The default is epsilon-greedy with epsilon = 0.3, decay = 0.9999.
5. an agent to integrate controller and learner --> `agent = Agent(controller, learner)`
6. an experiment to integrate the task and the agent and do the actual iterations --> `experiment = Experiment(task, agent)`

Each of the capitalized classes should be replaced with the corresponding class from PyBrain. Then you simply run a do-while cycle to perform the iterations and learn. Note that there are several options to be set by the user, and in real-world problems you will most likely need to write sub-classes to generalize the basic classes of pybrain, but the steps will be the same as here.
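For orientation, here is a minimal, untested sketch of that cycle; the capitalized names are the same placeholders used in the list above (e.g. an `ActionValueTable` for the controller):

```
# Minimal sketch of the iteration loop described above.
from pybrain.rl.learners import SARSA
from pybrain.rl.agents import LearningAgent
from pybrain.rl.experiments import Experiment

controller = Module(...)             # placeholder, e.g. ActionValueTable(nStates, nActions)
learner = SARSA()
agent = LearningAgent(controller, learner)
task = Task(env)                     # placeholder task wrapping your environment
experiment = Experiment(task, agent)

while True:
    experiment.doInteractions(100)   # run 100 environment/agent interactions
    agent.learn()                    # update the action-value estimates
    agent.reset()                    # clear the agent's history before the next batch
```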
Escaping quotes and double quotes How do I properly escape the quotes in the `-param` value in the following command line?

```
$cmd="\\server\toto.exe -batch=B -param="sort1;parmtxt='Security ID=1234'""
Invoke-Expression $cmd
```

This of course fails. I tried to escape the quotes (single and double) using the escape character ` and tried various combinations, but nothing is working.
Escaping parameters like that is usually a source of frustration and feels a lot like time wasted. I see you're on v2, so I would suggest using a technique that Joel "Jaykul" Bennet [blogged about a while ago](https://web.archive.org/web/20140113132221/http://huddledmasses.org/the-problem-with-calling-legacy-or-native-apps-from-powershell/). Long story short: you just wrap your string with @' ... '@ :

```
Start-Process \\server\toto.exe @'
-batch=B -param="sort1;parmtxt='Security ID=1234'"
'@
```

(Mind that I *assumed* which quotes are needed, and which things you were attempting to escape.) If you want to work with the output, you may want to add the `-NoNewWindow` switch. BTW: this was such an important issue that since v3 you can use `--%` to stop the PowerShell parser from doing anything with your parameters:

```
\\server\toto.exe --% -batch=b -param="sort1;paramtxt='Security ID=1234'"
```

... should work fine there (with the same assumption).
When building layered jars in Spring Boot, how do you include a multi-module project's jars in a layer? According to the [Spring Boot gradle plugin reference](https://docs.spring.io/spring-boot/docs/2.3.3.RELEASE/gradle-plugin/reference/html/#packaging-executable), I should be able to package a particular pattern of jars into a specific layer (for making better docker files). I'm confused about the pattern matching used in the documentation. Here's an example:

```
tasks.getByName<BootJar>("bootJar") {
    layered {
        isIncludeLayerTools = true
        application {
            intoLayer("spring-boot-loader") {
                include("org/springframework/boot/loader/**")
            }
            intoLayer("application")
        }
        dependencies {
            intoLayer("module-dependencies") {
                include("com*:*:*")
            }
            intoLayer("dependencies")
        }
        layerOrder = listOf("dependencies", "spring-boot-loader", "module-dependencies", "application")
    }
}
```

What I don't understand is what this pattern matching is matching on: intoLayer("module-dependencies") { include("com\*:*:*") } Is it the group, artifact and version of a jar? Is it the name of the jar? If I have a multi-module project that has modules aa, ab and ac, equating to aa.jar, ab.jar and ac.jar, and an external dependency org.something:anartifact:25, equating to anartifact-25.jar, what pattern do I need to add to include aa, ab and ac in one layer and every other dependency in another layer?
For module dependencies the pattern is `<group>:<artifactid>:<version>`. You can use a trailing wildcard to match a subset of items, or omit the item entirely to match everything. For example, `com.fasterxml.jackson::` will match all artifacts and all versions in the `com.fasterxml.jackson` group. In the case of a multi-module project, by default the `artifactid` is the name of the project and the `group` is the `group` value set in your `build.gradle`. It's common to define the group in the root project's `build.gradle` file, for example:

```
allprojects {
    group "com.example"
    version = '0.0.1-SNAPSHOT'

    repositories {
        mavenCentral()
    }
}
```

You can then define the layer patterns as follows in your application module:

```
bootJar {
    layered {
        application {
            intoLayer("spring-boot-loader") {
                include("org/springframework/boot/loader/**")
            }
            intoLayer("application")
        }
        dependencies {
            intoLayer("module-dependencies") {
                include("com.example:*:*")
            }
            intoLayer("dependencies")
        }
        layerOrder = [
            "dependencies",
            "spring-boot-loader",
            "module-dependencies",
            "application"
        ]
    }
}
```

I've uploaded a sample to <https://github.com/philwebb/mutli-module-layered-gradle-example> that shows this in a complete project.
Split data using sed or awk I have a lot of data I'm trying to split into CSV. My source data has this format:

```
* USER 'field1' '[email protected]' 'field3'
* USER 'field1' '[email protected]' 'field3'
* USER 'field1' '[email protected]' 'field3'
```

And here's what I'm trying to get as output:

```
field1;[email protected];field3
field1;[email protected];field3
field1;[email protected];field3
```

**Rules**:

1. `* USER` at the beginning of the line must obviously be stripped;
2. `field1` and `field3` could be an email address, or can contain `'`;
3. `field1` could be empty `''`;
4. the second field is always an email address;
5. each field has `'` at the beginning and end of the field itself.

My idea was to strip `* USER` (`sed -e 's/^* USER //'` could be a starting point), then "find" the mail in "the center" field, and then catch the left side and right side into two vars. The last thing would be to strip the beginning and ending `'` from the vars. Unfortunately, I don't have sed or awk knowledge at this level. Any ideas on how to achieve this?

---

Here is an example

```
* USER '' '[email protected]' 'CORDINI ALBERTO'
* USER 'moglie delmonte daniele' '[email protected]' 'Anna Borghi'
* USER '' '[email protected]' 'CRAVERO ANNA MARIA'
* USER '' '[email protected]' 'D'AGOSTINO PATRIZIA'
* USER '' '[email protected]' 'DE PRA' PIERO'
* USER '' '[email protected]' 'D'INGEO VIVIANA'
```
**Update:** You can use this awk for the provided input: ``` awk -F " '" '{gsub(/^ +| +$/, "", $3); s=sprintf("%s;%s;%s;", $2,$3,$4); gsub(/'"'"';/, ";", s); print s}' file ;[email protected];CORDINI ALBERTO; moglie delmonte daniele;[email protected];Anna Borghi; ;[email protected];CRAVERO ANNA MARIA; ;[email protected];D'AGOSTINO PATRIZIA; ;[email protected];DE PRA' PIERO; ;[email protected];D'INGEO VIVIANA; ```
Setting up continuous rendering in a Mac OS X Cocoa application using OpenGL I'm starting to work on a 3D particle system editor and evolver. I've done something similar in the past with OpenGL, but this time I'm making a Mac OS X Cocoa application. I just have a few questions regarding some code I keep running into on setting up OpenGL.

1) Why do I see a lot of people on the web using...

```
[self setNeedsDisplay:YES];
```

Is this the proper way to get OpenGL to render? I now understand it leads to drawRect being called, but is it the correct way?

2) Is drawRect the proper method I should be overriding for my render frame method?

Here's the code that I continue to run into on the web:

```
-(void) prepareOpenGL
{
    [[self window] makeFirstResponder:self];
    glClearColor(1.0f, 1.0f, 0.0f, 10.f);

    NSTimer *timer = [NSTimer timerWithTimeInterval:1.0/60.0
                              target:self
                              selector:@selector(idle:)
                              userInfo:nil
                              repeats:YES];

    [[NSRunLoop currentRunLoop] addTimer:timer
                              forMode:NSDefaultRunLoopMode];
}

-(void)idle:(NSTimer *)timer
{
    if(![[NSApplication sharedApplication] isHidden])
        [self setNeedsDisplay:YES];
}

-(void) drawRect:(NSRect)dirtyRect
{
    glClear(GL_COLOR_BUFFER_BIT);
}
```
You haven't indicated whether you will be drawing your OpenGL content within an NSOpenGLView or a CAOpenGLLayer. These two have slightly different ways of updating their content for display to the screen. For an NSOpenGLView, you don't need to update the view within its `-drawRect:` method. In fact, I think you won't want to trigger `-setNeedsDisplay:` to do a refresh of the NSView because of the overhead that might incur. In one of my applications, I use a CVDisplayLink to trigger updates at 60 FPS within my own custom rendering methods in an NSOpenGLView. None of these touch `-drawRect:`. Frames are presented to the screen upon calling `[[self openGLContext] flushBuffer]`, not by forcing a redraw of the NSView. CAOpenGLLayers are a little different, in that you override `-drawInCGLContext:pixelFormat:forLayerTime:displayTime:` with your custom rendering code. This method is triggered in response to a manual `-setNeedsDisplay` or by the CAOpenGLLayer itself if its `asynchronous` property is set to YES. It knows when it's ready to present new content by the boolean value you provide in response to `-canDrawInCGLContext:pixelFormat:forLayerTime:displayTime:`. I've used both of these, and each has its advantages. CAOpenGLLayers make it much easier to overlay other UI elements on your OpenGL rendering, but their rendering methods can be difficult to get to work correctly from a background thread. NSOpenGLViews can be updated easily on a background thread using a CVDisplayLink, but are a bear to overlay content on.
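For reference, a bare-bones sketch of the CVDisplayLink setup for the NSOpenGLView route (`MyOpenGLView`, `_displayLink` and `-renderFrame` are hypothetical names; error handling omitted):

```
static CVReturn DisplayLinkCallback(CVDisplayLinkRef displayLink,
                                    const CVTimeStamp *now,
                                    const CVTimeStamp *outputTime,
                                    CVOptionFlags flagsIn,
                                    CVOptionFlags *flagsOut,
                                    void *context)
{
    // Called on a background thread at the display's refresh rate.
    MyOpenGLView *view = (__bridge MyOpenGLView *)context;
    [view renderFrame]; // hypothetical method: draw, then [[self openGLContext] flushBuffer]
    return kCVReturnSuccess;
}

- (void)startRendering
{
    CVDisplayLinkCreateWithActiveCGDisplays(&_displayLink);
    CVDisplayLinkSetOutputCallback(_displayLink, &DisplayLinkCallback,
                                   (__bridge void *)self);
    CVDisplayLinkStart(_displayLink);
}
```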
LDAP Authentication through a Firewall I have a Rails gem that does Active Directory authentication, and its test suite has quite a lot of authentication checks. When I run the test suite on our LAN everything works, no problem (as one would expect), but we are thinking of moving one of our applications off to a cloud server, which of course means it will be accessing the DC from outside the network. I've pulled a copy of the LDAP gem onto a cloud server that we can test with, set up a port forward on our firewall, and then ran the tests after putting in the settings, and they fail, but in a different way than if you get your password wrong. I'm getting a successful login appearing in the security logs on the DC, but for whatever reason it's not returning the data back to the client. I have port XXXX forwarded to port 389 on the DC; do I need any other forwards?
[LDAP](http://en.wikipedia.org/wiki/Ldap) uses TCP 389 for unsecured communication and 636 for secured communication.

> A client starts an LDAP session by connecting to an LDAP server, called a Directory System Agent (DSA), by default on TCP port 389.

and

> A common alternate method of securing LDAP communication is using an SSL tunnel. This is denoted in LDAP URLs by using the URL scheme "ldaps". The default port for LDAP over SSL is 636.

In terms of the firewall, you'll need to allow access to those ports from the "External" interface of the firewall to the "Trusted" interface. If you are using a NAT, you may need to add the rule on both the public IP as well as the LAN IP. You shouldn't need to forward any ports, but you will need to allow communication as follows:

```
permit tcp any x.x.x.x 0.0.0.0 389
permit tcp any x.x.x.x 0.0.0.0 636
```

where you allow any source IP on any source port that is destined for your server on a specific port. You should also allow any established connections; on Cisco devices it looks like `permit any established`. This will allow the response from your server back out of the firewall. Most firewalls typically do this established thing by default (in my experience), so if you have a dedicated firewall this shouldn't be an issue.
Fitting webpage contents inside a webview (Android) Simple question. I'm trying to stretch the contents of a webview so that the entire webpage within is visible, i.e. no scrolling. I've looked through the docs but can't find any method aside from zoom controls and setInitialScale. The problem with setInitialScale in this case is that it works differently for different sites. Example: 1: Wikipedia will load as expected with the zoom far enough out. 2: Google, however, will only show the center of the page. Here's the code snippet I have

```
test1 = (WebView) findViewById(R.id.webview_test_1);
test2 = (WebView) findViewById(R.id.webview_test_2);
test3 = (WebView) findViewById(R.id.webview_test_3);

test1.getSettings().setJavaScriptEnabled(true);
test2.getSettings().setJavaScriptEnabled(true);
test3.getSettings().setJavaScriptEnabled(true);

test1.setInitialScale(12);
test2.setInitialScale(12);
test3.setInitialScale(12);

test1.loadUrl("http://developer.android.com/resources/tutorials/views/hello-tablelayout.html");
test2.loadUrl("http://www.wikipedia.org/");
test3.loadUrl("http://www.google.com");
```

---

```
test1.getSettings().setDefaultZoom(ZoomDensity.FAR);
```

This seems to be an alternative to what I'm trying to do, but I can't get it to zoom far enough.
So people can use this as a tutorial in the future: there are a bunch of ways to handle zooms in Android and fitting pages. It can be pretty temperamental at times, and some methods work better than others. For most people, just use this:

```
WebView x;
x.setInitialScale(1);
```

This is the furthest zoom possible, but for some sites it just looks plain ugly. This was the second version I found:

```
test1.getSettings().setDefaultZoom(ZoomDensity.FAR);
```

That's a nice all-rounder that seems to zoom out far enough for a lot of sites, but still not what I was looking for. And now here's the final solution I have:

```
x.getSettings().setLoadWithOverviewMode(true);
x.getSettings().setUseWideViewPort(true);
```

Basically, what these do is answered in another question, like this:

```
setLoadWithOverviewMode(true)
```

Loads the WebView completely zoomed out

```
setUseWideViewPort(true)
```

Makes the WebView have a normal viewport (such as a normal desktop browser), while when false the WebView will have a viewport constrained to its own dimensions (so if the WebView is 50px\*50px the viewport will be the same size)
Flex dataGrid add button in datagridcolumn using ItemRenderer? I have this code. I want to add buttons in the second column of the data grid.

```
<mx:DataGrid width="100%" height="95%" id="id_variableRefList" >
    <mx:columns>
        <mx:DataGridColumn id="id_name" dataField=""/>
        <mx:DataGridColumn id="id_strip" dataField="">
        </mx:DataGridColumn>
    </mx:columns>
</mx:DataGrid>
```

How can I add buttons in the second column using an ItemRenderer?
There are many ways you can do this. You could use an [inline itemRenderer](http://www.adobe.com/devnet/flex/articles/itemrenderers_pt1.html) like so: ``` <fx:Script> public function myButton_clickHandler(event:Event):void { Alert.show("My button was clicked!"); } </fx:Script> <mx:DataGrid width="100%" height="95%" id="id_variableRefList" > <mx:columns> <mx:DataGridColumn id="id_name" dataField=""/> <mx:DataGridColumn id="id_strip" dataField=""> <mx:itemRenderer> <fx:Component> <mx:VBox> <mx:Button label="My Button" click="outerDocument.myButton_clickHandler(event);" /> </mx:VBox> </fx:Component> </mx:itemRenderer> </mx:DataGridColumn> </mx:columns> </mx:DataGrid> ``` Or you could create a [custom component and set the `itemRenderer` property](http://www.switchonthecode.com/tutorials/flex-using-item-renderers) of the `DataGridColumn`. ``` <mx:DataGrid width="100%" height="95%" id="id_variableRefList" > <mx:columns> <mx:DataGridColumn id="id_name" dataField=""/> <mx:DataGridColumn id="id_strip" dataField="" itemRenderer="MyCustomItemRenderer"/> </mx:columns> </mx:DataGrid> ``` **UPDATE:** To get the id of the button that was clicked, you can use the `currentTarget` property of the `event` that gets passed to your `eventListener`. ``` public function myButton_clickHandler(event:Event):void { Alert.show("Button " + Button(event.currentTarget).id + " was clicked!"); } ```
How to Generate Windows DLL versioning information with CMake I'm using CMake to build a shared library, however for the Windows DLL I need the versioning information, like: - FileDescription - FileVersion - InternalName - LegalCopyright - OriginalFilename - ProductName - ProductVersion So far, all I have are the VERSION and SOVERSION properties, but these don't seem to correlate to the FileVersion information I was expecting. ``` set(LIC_TARGET MySharedLib) add_library(${LIC_TARGET} SHARED ${SOURCES} ) SET_TARGET_PROPERTIES(${LIC_TARGET} PROPERTIES VERSION ${MY_PRODUCT_NUMBER}.${MY_PRODUCT_VERSION}.${MY_BUILD_NUMBER} SOVERSION ${MY_PRODUCT_NUMBER}) ``` I've found [manual methods](http://msdn.microsoft.com/en-us/library/aa381058(v=vs.85).aspx) (see example at the bottom) but would prefer to contain this within CMake. Help?
You could use your CMake variable values in conjunction with a **version.rc.in** file and the [configure\_file](https://cmake.org/cmake/help/latest/command/configure_file.html) command.

```
// version.rc.in
#define VER_FILEVERSION @MY_PRODUCT_NUMBER@,@MY_PRODUCT_VERSION@,@MY_BUILD_NUMBER@,0
#define VER_FILEVERSION_STR "@MY_PRODUCT_NUMBER@.@MY_PRODUCT_VERSION@.@MY_BUILD_NUMBER@.0\0"

#define VER_PRODUCTVERSION @MY_PRODUCT_NUMBER@,@MY_PRODUCT_VERSION@,@MY_BUILD_NUMBER@,0
#define VER_PRODUCTVERSION_STR "@MY_PRODUCT_NUMBER@.@MY_PRODUCT_VERSION@.@MY_BUILD_NUMBER@\0"
//
// ...along with the rest of the file from your "manual methods" reference
```

And then, in your CMakeLists.txt file:

```
# CMakeLists.txt
set(MY_PRODUCT_NUMBER 3)
set(MY_PRODUCT_VERSION 5)
set(MY_BUILD_NUMBER 49)
configure_file(
  ${CMAKE_CURRENT_SOURCE_DIR}/version.rc.in
  ${CMAKE_CURRENT_BINARY_DIR}/version.rc
  @ONLY)

set(LIC_TARGET MySharedLib)
add_library(${LIC_TARGET} SHARED
  ${SOURCES}
  ${CMAKE_CURRENT_BINARY_DIR}/version.rc)

# Alternatively you could simply include version.rc in another rc file
# if there already is one in one of the files in ${SOURCES}
```
Why can't I use SSH with these firewall rules? [WINDOWS HOST] I'm currently trying to connect to my Linux Mint guest via SSH on my Windows 10 host. Better said, I'm trying to connect to the guest successfully with particular firewall rules in mind. Using VirtualBox, I've enabled a host-only adapter for the Linux Mint virtual machine. When allowing any and all traffic through port 22 via the firewall with that particular allow-all rule, I'm able to successfully connect to the guest machine using SSH from my Windows host. However, I want the Linux guest to **only** accept *my* IP for SSH. In other words, I want it to only accept my IP for port 22 (if that makes any sense). I configured the Linux firewall with these rules and **could not** successfully connect after that:

```
GUEST IP 22/tcp ALLOW IN HOST IP 22/tcp
HOST IP 22/tcp ALLOW OUT GUEST IP 22/tcp
```

*Note: The terms 'GUEST IP' and 'HOST IP' are simple placeholders for the associative IP address given the context. In this case, GUEST IP is the host-only adapter's IP address from the eth1 interface. Needless to say, the HOST IP represents the host IP address found on my Windows 10 host.*

I've done what I can to make it work, but I honestly don't know how to get what I'm asking for at this point. My current setup:

- Windows 10 (Host) w/ MobaXterm SSH client & FreeSSHd SSH server
- Linux Mint 17 (Guest) w/ OpenSSH client & server
- Didn't touch the Windows firewall at all
- Currently have the Linux firewall set to drop, but added in the mentioned rules

I would like to let it be known that my experience with Linux/Unix-based systems is *very* limited, as it is with using shells, SSH, and configuring firewalls. I literally just installed the mentioned SSH software a day ago.

**TL;DR**: *I want to make it so that my Linux guest is configured to only accept SSH traffic from my host computer. I'm assuming this is done via IP, but I'm not sure how to do it. After adding the above firewall rules to the Linux guest, connecting was no longer successful and every attempt would time out.*

*Additional Info*:

- I have seen folks around these parts mentioning port forwarding; I tried it, and it didn't work. Seeing as how I was able to get it to work with the host-only adapter, I figured it'd be fine without that solution.

**As an aside**, is it possible at all to connect to my Windows host via my Linux guest using SSH? I tried doing it from my Linux terminal, but it kept saying I didn't have permission to do so. If you need clarification or additional information, please feel free to ask. Thanks in advance! — T.R.G.
TCP connections have two ports, source and destination. For outgoing SSH connections, the destination port is 22, but the source port is *chosen randomly*. (Only very few protocols, e.g. BGP, use identical source & destination ports.) Therefore you would need to relax the rules to: ``` allow incoming from hostIP (any)/tcp to guestIP 22/tcp allow outgoing from guestIP 22/tcp to hostIP (any)/tcp ``` If you're concerned about the *guest* connecting back to the host, you can remove the second rule as long as you're using a *stateful* firewall (which would allow outgoing packets belonging to an existing connection). In iptables, a working ruleset would look like: ``` -A INPUT -p icmp -j ACCEPT -A INPUT -p tcp --dport 22 -j ACCEPT -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -j REJECT # (Well, *I think* this should work.) -A OUTPUT -p icmp -j ACCEPT -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT -A OUTPUT -j REJECT ``` (Included "allow ICMP" because *honestly*)
Does the application execute Console.WriteLine("Test"); in the compiled version? I use a lot of lines like this

```
Console.WriteLine("Test");
```

to debug the application under VS 2010. My question is: do I have to comment out all those lines when I build the application? Thanks!
Yes. In fact, if your app was a console application, you'd really want those lines executed. Have a look at `System.Diagnostics.Debug` methods (e.g. [Debug.WriteLine](http://msdn.microsoft.com/en-us/library/9z9k5ydz.aspx)) which may be what you need. Their output is in Visual Studio's Output window, and they do nothing in Release code. More generally, you can have code that's only compiled in a Debug build by doing: ``` #if DEBUG // Debug-only code here. #endif ``` You can also put [this attribute](http://msdn.microsoft.com/query/dev11.query?appId=Dev11IDEF1&l=EN-US&k=k%28System.Diagnostics.ConditionalAttribute%29;k%28TargetFrameworkMoniker-.NETFramework,Version%3Dv4.0%29;k%28DevLang-csharp%29&rd=true) before your method definition to write a method that's not called at all when you do a Release build: ``` [System.Diagnostics.Conditional("DEBUG")] ``` All these methods have the advantage that they shouldn't affect the performance of production code. To check I'm giving you accurate advice, I compiled the following in Release mode: ``` class Program { static void Main(string[] args) { Console.WriteLine("Hello world!"); #if DEBUG Console.WriteLine("Inside #if block."); #endif WriteLine("With ConditionalAttribute."); Debug.WriteLine("Debug.WriteLine."); } [Conditional("DEBUG")] public static void WriteLine(string line) { Console.WriteLine(line); } } ``` I then used the IL Dissasembler tool to see what will actually run: ``` .method private hidebysig static void Main(string[] args) cil managed { .entrypoint // Code size 11 (0xb) .maxstack 8 IL_0000: ldstr "Hello world!" IL_0005: call void [mscorlib]System.Console::WriteLine(string) IL_000a: ret } // end of method Program::Main ``` As you can see, only the Console.WriteLine method is called. The other three alternatives are, as we had hoped, 'compiled out' of the debug code. The Debug version looks like this: ``` .method private hidebysig static void Main(string[] args) cil managed { .entrypoint // Code size 46 (0x2e) .maxstack 8 IL_0000: nop IL_0001: ldstr "Hello world!" IL_0006: call void [mscorlib]System.Console::WriteLine(string) IL_000b: nop IL_000c: ldstr "Inside #if block." IL_0011: call void [mscorlib]System.Console::WriteLine(string) IL_0016: nop IL_0017: ldstr "With ConditionalAttribute." IL_001c: call void ConditionalCompileTest.Program::WriteLine(string) IL_0021: nop IL_0022: ldstr "Debug.WriteLine." IL_0027: call void [System]System.Diagnostics.Debug::WriteLine(string) IL_002c: nop IL_002d: ret } // end of method Program::Main ```
Why does VC++ 2019 not accept this code?

```
template<int N>
void f()
{
    constexpr int n = 9;
    ++*const_cast<int*>(&n); // ok
    ++*const_cast<int*>(&N); // error C2101: '&' on constant
}

int main()
{
    f<8>();
}
```

According to [cppref](https://en.cppreference.com/w/cpp/language/value_category):

> the name of a variable, a function, a **template parameter object** (since C++20), or a data member, regardless of type, such as std::cin or std::endl. Even if the variable's type is rvalue reference, the expression consisting of its name is an lvalue expression;

Two questions: **1. Why does VC++ 2019 (with /std:c++latest) not accept the code?** **2. Why does C++20 permit a template parameter object to be an lvalue?**
*template parameter object* is a normative term, that refers only to template parameters that have a class type. > > **[temp.param/6](http://eel.is/c++draft/temp.param#6)** (emphasis mine) > > > ...An id-expression **naming a non-type template-parameter of class type T > denotes a static storage duration object of type const T, known as a > template parameter object**, whose value is that of the corresponding > template argument after it has been converted to the type of the > template-parameter. All such template parameters in the program of the > same type with the same value denote the same template parameter > object. [ Note: **If an id-expression names a non-type non-reference > template-parameter, then it is a prvalue if it has non-class type. > Otherwise, if it is of class type T, it is an lvalue and has type > const T ([expr.prim.id.unqual])**. — end note ] > > > Since `int` is not of a class type, *it's not* a template parameter object. We can consult the relevant section for the normative text on the value category here, which supports the note: > > **[expr.prim.id.unqual/2](http://eel.is/c++draft/expr.prim.id.unqual#2)** > > > ... The expression is an lvalue if the entity is a function, variable, > structured binding ([dcl.struct.bind]), data member, or template > parameter object and a prvalue otherwise ... > > > Since we are not in the "template parameter object" case, we are dealing with a prvalue, and as such may not apply unary `&` to it, like any other prvalue.
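To see the distinction in code, here is a minimal sketch assuming a C++20 compiler (the names are made up for illustration):

```
struct Wrap { int v; };    // a structural class type, usable as an NTTP in C++20

template<Wrap W>
const Wrap* addr() {
    return &W;             // OK: W denotes a template parameter object,
                           // an lvalue of type const Wrap
}

template<int N>
void f() {
    // &N;                 // ill-formed: N is a prvalue of non-class type,
                           // exactly as in the question
}

int main() {
    const Wrap* p = addr<Wrap{42}>();
    f<8>();
    (void)p;
}
```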
Asp.Net Core 2.0 - Retrieve Image Url I am trying to create a restful service that exposes some data. Everything was just fine until I realized it is a pain to expose images' full URLs from the server. Maybe it's just me, but I am finding it very complicated. Just to be in context, I am using Entity Framework to read from my database; here is the table I am working with: [![enter image description here](https://i.stack.imgur.com/Qb888.png)](https://i.stack.imgur.com/Qb888.png) As you can see, there is a field named "ImagePath" (bad name guys, it should be "ImageName"); whatever, that column is saving the exact file name of the image that represents that Champion object. All the images are currently located inside the wwwroot project folder, just like this: [![enter image description here](https://i.stack.imgur.com/ek5AW.png)](https://i.stack.imgur.com/ek5AW.png) None of the business logic is in the controller; it is in a Service folder/layer in a regular class (maybe this is not relevant, but I just wanted to clarify I am not working inside any controller): [![enter image description here](https://i.stack.imgur.com/FCFEE.png)](https://i.stack.imgur.com/FCFEE.png) And this is the method where I expect the magic to happen:

```
public async Task<IList<Champion>> GetChampions()
{
    List<Champion> champions = await _context.Champion
        .Include(x => x.PrimaryClass)
        .Include(x => x.SecondaryClass)
        .Include(x => x.ChampionUserRate)
        .ToListAsync();

    //BEFORE RETURNING THE CHAMPION TO THE CONTROLLER, I WILL NEED TO CONCATENATE MY CURRENT SERVER URL AND THE PATH TO THE IMAGE PLUS THE IMAGE NAME FILE WHICH IS ALREADY IN THE DATABASE AND ALREADY MATCHES THE IMAGES FOLDER
    string serverPathToChampionsFolder = null; //Example: http://localhost:57435/wwwroot/Champions

    foreach (var champion in champions)
    {
        champion.ImagePath = serverPathToChampionsFolder + champion.ImagePath;
    }

    return champions;
}
```

Just to be clear here, the key line is:

```
string serverPathToChampionsFolder = null; //Example: http://localhost:57435/wwwroot/Champions
```

I need to somehow get the current URL, just like in the example, to add it to every single champion so it can be used by the client side inside an image tag. If it is not possible to do it with the approach I am trying to implement, I will accept any other suggestion; the point here is to expose the image URL, no matter how.
Basically, you need to use `IHostingEnvironment` and inject it in your service constructor. Then create a string variable with the name of your folder inside the wwwroot, let's say `"Champions"`. Here's the example code:

```
private readonly IHostingEnvironment hostingEnv;

private const string ChampionsImageFolder = "Champions";

public ChampionsService(IHostingEnvironment hostingEnv){
    this.hostingEnv = hostingEnv;
}

// Suppose this method is responsible for fetching image path
public string GetImage(){
   var path = Path.Combine(hostingEnv.WebRootPath, ChampionsImageFolder);
   return path;
}
```

What the `IHostingEnvironment` interface does is "provide information about the hosting environment an application is running in." If you want to get files inside a given path, this will give you a hint:

```
var directoryFiles = Directory.GetFiles("wwwroot/Champions");
foreach (var item in directoryFiles)
{
    // do something here
}
```

If you want to create URL paths for that wwwroot folder, you need to register `UseDirectoryBrowser` in your startup. Inside your Startup.cs file, find the `Configure` method and insert the code snippet below. It will expose the files inside the `Champions` directory and create a new route on your website, `ChampionImages`, derived from the folder `Champions` in your `wwwroot`:

```
app.UseDirectoryBrowser(new DirectoryBrowserOptions()
{
    FileProvider = new PhysicalFileProvider(Path.Combine(Directory.GetCurrentDirectory(), @"wwwroot", "Champions")),
    RequestPath = new PathString("/ChampionImages")
});
```

Then you can use something like `localhost:8080/ChampionImages`, where you can see each file stored inside the Champions folder of your wwwroot. What you can do to create a URL for an image is something like this:

```
var imageUrl = $"/ChampionImages/{exactFileName}"; // this will create a string link.
```

I hope these simple code snippets help or give you an idea :)
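Putting this together with the question's `GetChampions()` method, the loop could look roughly like this (a sketch; the `/ChampionImages` prefix must match the `RequestPath` registered above, and turning it into an absolute URL from the request's scheme and host, e.g. via `IHttpContextAccessor`, is left as an assumption):

```
foreach (var champion in champions)
{
    // ImagePath currently holds just the file name stored in the database
    champion.ImagePath = $"/ChampionImages/{champion.ImagePath}";
}
```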
Ignore switchMap return value I want to resolve an observable, but I don't want the return value to replace the previous value in the pipe. Is there an asynchronous `tap()`? I need an operator like `switchMap`, but I want to ignore the return value.

```
of(1).pipe(switchMap(() => of(2))).subscribe(console.log); // expected: 1
```

I could create a custom operator, but surely there's something built into RxJS.
I ended up with this custom operator. It is like tap but resolves observables (and should be updated to also support promises). ``` export function switchTap<T, R>(next: (x: T) => Observable<R>): MonoTypeOperatorFunction<T>; export function switchTap<R>(observable: Observable<R>): MonoTypeOperatorFunction<R>; export function switchTap<T, R>( arg: Observable<T> | ((x: T) => Observable<R>) ): MonoTypeOperatorFunction<T> { const next: (x: any) => Observable<T | R> = (typeof arg === 'function') ? arg : (x: any): Observable<T> => arg; return switchMap<T, T>(value => next(value).pipe( ignoreElements(), concat(of(value)) )); } ``` Usage: ``` of(1).pipe( switchTap(of(2)) ).subscribe(console.log); // 1 ``` or with a function: ``` of(1).pipe( switchTap(value => { console.log(value); // value: 1 return of(value + 1); }) ); .subscribe(console.log); // 1 ```
Android reading a large file I have a CSV file in the **assets** folder with more than 10000 lines of data. I want to insert this data into a SQLite database when the database is created. I cannot include a database because it is a very generic application and the model demands a CSV file instead. I don't want to read all 10000 lines of data and insert it from the memory in one stretch. How do I accomplish the task effectively and efficiently?
Just insert *immediately* once you've read the line. So, just don't store the lines in some arraylist in memory. E.g.

```
while ((line = reader.readLine()) != null) {
    insert(line);
}
```

You might want to do this in a single transaction so that you can roll back whenever it fails halfway. On the JDBC side, you may want to consider `PreparedStatement#addBatch()`/`#executeBatch()` to execute the inserts in batches. An example can be found in [this answer](https://stackoverflow.com/questions/4355046/java-insert-multiple-rows-into-mysql-with-preparedstatement/4355097#4355097).

**Update**: as a completely different but more efficient alternative, you can also leave Java out of the story and use the [CSV import facility provided by SQLite](http://www.sqlite.org/cvstrac/wiki/wiki?p=ImportingFiles).
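If you do stay in Java on Android, a minimal sketch of the transactional, line-by-line insert could look like this (table and column names are made up; assumes `db` is an open `SQLiteDatabase` and `reader` wraps the asset's input stream):

```
// Untested sketch: a two-column CSV imported row by row in one transaction.
void importCsv(SQLiteDatabase db, BufferedReader reader) throws IOException {
    db.beginTransaction();
    try {
        String line;
        while ((line = reader.readLine()) != null) {
            String[] cols = line.split(",");
            ContentValues values = new ContentValues();
            values.put("name", cols[0]);   // hypothetical column names
            values.put("value", cols[1]);
            db.insert("my_table", null, values);
        }
        db.setTransactionSuccessful();     // mark for commit
    } finally {
        db.endTransaction();               // commits, or rolls back if not marked successful
    }
}
```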
Pillow - Resizing a GIF I have a `gif` that I would like to resize with `pillow` so that its size decreases. The current size of the `gif` is 2MB. I am trying to 1. resize it so its height / width is smaller 2. decrease its quality. With JPEG, the following piece of code is usually enough so that large images drastically decrease in size.

```
from PIL import Image

im = Image.open("my_picture.jpg")
im = im.resize((im.size[0] // 2, im.size[1] // 2), Image.ANTIALIAS) # decreases width and height of the image
im.save("out.jpg", optimize=True, quality=85) # decreases its quality
```

With a GIF, though, it does not seem to work. The following piece of code even makes `out.gif` bigger than the initial gif:

```
im = Image.open("my_gif.gif")
im.seek(im.tell() + 1) # loads all frames
im.save("out.gif", save_all=True, optimize=True, quality=10) # should decrease its quality

print(os.stat("my_gif.gif").st_size) # 2096558 bytes / roughly 2MB
print(os.stat("out.gif").st_size) # 7536404 bytes / roughly 7.5MB
```

If I add the following line, then only the first frame of the GIF is saved, instead of all of its frames.

```
im = im.resize((im.size[0] // 2, im.size[1] // 2), Image.ANTIALIAS) # should decrease its size
```

I've been thinking about calling `resize()` on `im.seek()` or `im.tell()`, but neither of these methods returns an Image object, and therefore I cannot call `resize()` on their output.

Would you know how I can use Pillow to decrease the size of my GIF while keeping all of its frames?

[edit] Partial solution: Following [Old Bear's response](https://stackoverflow.com/a/41755454/2564199), I have made the following changes:

- I am using [BigglesZX's script](https://gist.github.com/BigglesZX/4016539) to extract all frames. It is useful to note that this is a Python 2 script, and my project is written in Python 3 (I did mention that detail initially, but it was edited out by the Stack Overflow community). Running `2to3 -w gifextract.py` makes that script compatible with Python 3.
- I have been resizing each frame individually: `frame.resize((frame.size[0] // 2, frame.size[1] // 2), Image.ANTIALIAS)`
- I've been saving all the frames together: `img.save("out.gif", save_all=True, optimize=True)`.

The new gif is now saved and works, but there are two main problems:

- I am not sure that the resize method works, as `out.gif` is still 7.5MB. The initial gif was 2MB.
- The gif speed is increased and the gif does not loop. It stops after its first run.

Example: original gif `my_gif.gif`: [![Original gif](https://i.stack.imgur.com/WARSx.gif)](https://i.stack.imgur.com/WARSx.gif) Gif after processing (`out.gif`) <https://i.imgur.com/zDO4cE4.mp4> (I could not add it to Stack Overflow). Imgur made it slower (and converted it to mp4). When I open the gif file from my computer, the entire gif lasts about 1.5 seconds.
Using [BigglesZX's script](https://gist.github.com/BigglesZX/4016539), I have created a new script which resizes a GIF using Pillow. Original GIF (2.1 MB): [![Original gif](https://i.stack.imgur.com/WARSx.gif)](https://i.stack.imgur.com/WARSx.gif) Output GIF after resizing (1.7 MB): [![Output gif](https://i.stack.imgur.com/Y66P1.gif)](https://i.stack.imgur.com/Y66P1.gif) I have saved the script [here](https://gist.github.com/PaulineLc/46bb8ddec8a1c3279c24482ae48a1e06). It is using the `thumbnail` method of Pillow rather than the `resize` method, as I found the `resize` method did not work. The script is not perfect, so feel free to fork and improve it. Here are a few unresolved issues:

- While the GIF displays just fine when hosted by imgur, there is a speed issue when I open it from my computer, where the entire GIF only takes 1.5 seconds.
- Likewise, while imgur seems to make up for the speed problem, the GIF wouldn't display correctly when I tried to upload it to `stack.imgur`. Only the first frame was displayed (you can see it [here](https://i.stack.imgur.com/DebCV.gif)).

Full code (should the above gist be deleted):

```
def resize_gif(path, save_as=None, resize_to=None):
    """
    Resizes the GIF to a given length:

    Args:
        path: the path to the GIF file
        save_as (optional): Path of the resized gif. If not set, the original gif will be overwritten.
        resize_to (optional): new size of the gif. Format: (int, int). If not set, the original GIF will be resized to
                              half of its size.
    """
    all_frames = extract_and_resize_frames(path, resize_to)

    if not save_as:
        save_as = path

    if len(all_frames) == 1:
        print("Warning: only 1 frame found")
        all_frames[0].save(save_as, optimize=True)
    else:
        all_frames[0].save(save_as, optimize=True, save_all=True, append_images=all_frames[1:], loop=1000)


def analyseImage(path):
    """
    Pre-process pass over the image to determine the mode (full or additive).
    Necessary as assessing single frames isn't reliable. Need to know the mode
    before processing all frames.
    """
    im = Image.open(path)
    results = {
        'size': im.size,
        'mode': 'full',
    }
    try:
        while True:
            if im.tile:
                tile = im.tile[0]
                update_region = tile[1]
                update_region_dimensions = update_region[2:]
                if update_region_dimensions != im.size:
                    results['mode'] = 'partial'
                    break
            im.seek(im.tell() + 1)
    except EOFError:
        pass
    return results


def extract_and_resize_frames(path, resize_to=None):
    """
    Iterate the GIF, extracting each frame and resizing them

    Returns:
        An array of all frames
    """
    mode = analyseImage(path)['mode']

    im = Image.open(path)

    if not resize_to:
        resize_to = (im.size[0] // 2, im.size[1] // 2)

    i = 0
    p = im.getpalette()
    last_frame = im.convert('RGBA')

    all_frames = []

    try:
        while True:
            # print("saving %s (%s) frame %d, %s %s" % (path, mode, i, im.size, im.tile))

            '''
            If the GIF uses local colour tables, each frame will have its own palette.
            If not, we need to apply the global palette to the new frame.
            '''
            if not im.getpalette():
                im.putpalette(p)

            new_frame = Image.new('RGBA', im.size)

            '''
            Is this file a "partial"-mode GIF where frames update a region of a different size to the entire image?
            If so, we need to construct the new frame by pasting it on top of the preceding frames.
            '''
            if mode == 'partial':
                new_frame.paste(last_frame)

            new_frame.paste(im, (0, 0), im.convert('RGBA'))

            new_frame.thumbnail(resize_to, Image.ANTIALIAS)
            all_frames.append(new_frame)

            i += 1
            last_frame = new_frame
            im.seek(im.tell() + 1)
    except EOFError:
        pass

    return all_frames
```
JDBC - Inserting an array variable into a PostgreSQL table I am trying to insert an array variable into the table. The code is shown below

```
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

class PostgreSQLJDBC {
    public static void main(String args[]) {
        Connection c = null;
        Statement stmt = null;
        Statement stmt1 = null;
        int id[] = new int[3];
        int no = 1;
        id[0] = 2;
        id[1] = 14;
        id[2] = 4;

        try {
            Class.forName("org.postgresql.Driver");
            c = DriverManager
                    .getConnection("jdbc:postgresql://localhost:5432/ass2",
                            "postgres", "post");
            c.setAutoCommit(true);
            System.out.println("Opened database successfully");

            stmt = c.createStatement();

            String sql1 = "INSERT INTO COMPANY (NO,ID) "
                    + "VALUES (7, id);";
            stmt1 = c.createStatement();
            stmt1.executeUpdate(sql1);
            stmt1.close();
            c.close();
        } catch (Exception e) {
            System.err.println(e.getClass().getName() + ": " + e.getMessage());
            System.exit(0);
        }
        System.out.println("Operation done successfully");
    }
}
```

This code compiles [but gives a PSQLException.](https://stackoverflow.com/questions/48642761/how-to-insert-an-element-to-a-column-with-array-field) Could someone please help in fixing this?
Try to use a [Prepared Statement](https://docs.oracle.com/javase/8/docs/api/java/sql/PreparedStatement.html) so you can use [setArray](https://docs.oracle.com/javase/8/docs/api/java/sql/PreparedStatement.html#setArray-int-java.sql.Array-). But first of all, you can't set an `int[]`; you have to convert it to a `java.sql.Array`, so you can use:

```
Integer[] id = {2, 14, 4};
Array array = connection.createArrayOf("INTEGER", id);
```

Then create your Prepared Statement and set the array:

```
String sql = "INSERT INTO COMPANY (NO, ID) VALUES (?, ?)";
try (PreparedStatement pstmt = conn.prepareStatement(sql);) {
    pstmt.setInt(1, 7);       // Set NO
    pstmt.setArray(2, array); // Set ID
    pstmt.executeUpdate();    // Execute the query
}
```

---

**Note:** please avoid uppercase letters in the names of tables and columns in `PostgreSQL`! This can cause problems; instead your query should look like:

```
INSERT INTO company (no, id) VALUES (?, ?)
```
Can caffe reshape layer do transpose `Caffe` has a `reshape` layer implemented, but say I want to first reshape a blob of `(1, n, k, p)` to `(1, a, b, k, p)`, where `n = a*b`, and then transpose it to shape `(1, b, a, k, p)`. How do I implement this operation? I know I can write a separate Python layer and do all this with `numpy.reshape` and `numpy.transpose`, but that would not be efficient, would it?
`transpose` and `reshape` are two **fundamentally** different operations: While `reshape` only changes the shape of a blob, it does not affect its internal structure (and thus can be executed very efficiently). On the other hand, `transpose` re-arranges the blob's data. Let's look at a simple example. Suppose you have a 2x2 blob with values

```
[[0, 1], [2, 3]]
```

In memory the values are stored in a 1D contiguous way (row-major):

```
[0, 1, 2, 3]
```

If you `reshape` the blob to 4x1

```
[[0], [1], [2], [3]]
```

The underlying arrangement of the elements in memory is not changed. However, if you `transpose` the blob to get

```
[[0, 2], [1, 3]]
```

The underlying arrangement is also changed to

```
[0, 2, 1, 3]
```

Therefore, you **cannot** use a `"Reshape"` layer to `transpose` a blob. The Caffe SSD branch (by [Weilu](http://cs.unc.edu/~wliu/)) has a [`"Permute"`](https://github.com/BVLC/caffe/commit/b68695db42aa79e874296071927536363fe1efbf) layer which is equivalent to `transpose`. **A note about performance:** While `reshape` only changes the blob's header (`O(1)` runtime and space), `transpose` needs to re-arrange elements in memory, thus taking `O(n)` time and space. To make things worse, if you use **`numpy.transpose`** to perform the task, it means you transpose in the CPU (host memory), thus adding two sync operations between CPU and GPU memory (sync GPU->CPU, transpose in CPU, sync CPU->GPU). So, if you have no alternative but to `transpose` (aka `"Permute"`), make sure you have a GPU implementation.
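As a side note, you can verify the memory-layout argument with a quick numpy check (plain numpy on the CPU, not Caffe code):

```
import numpy as np

a = np.array([[0, 1],
              [2, 3]])              # stored row-major as [0, 1, 2, 3]

print(a.reshape(4, 1).flatten())    # [0 1 2 3] - reshape leaves the order intact
print(a.T.flatten())                # [0 2 1 3] - transpose re-arranged the data
```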
Get result set of 10 years before and 10 years after current year **What I want:** I would like a result set that shows me the 10 years prior to and the 10 years after the current year. **Example:** Being that this year is 2014: ``` 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023 2024 ``` **What I've tried:** I'm not a huge SQL guy so bear with me here: ``` SELECT YEAR(DATEADD(year, -10, GETDATE())) AS prevYear, YEAR(DATEADD(year, 10, GETDATE())) AS nextYear, YEAR(GETDATE()) AS currentYear WHERE currentYear BETWEEN prevYear AND nextYear ``` Can someone help me out? Am I even close??
``` SELECT TOP (21) YEAR(DATEADD(YEAR, number-10, GETDATE())) FROM master.dbo.spt_values WHERE type = N'P' ORDER BY number; ``` Chances are you want to do something else with this data. One example might be to aggregate the data from some table, and include years in this range that aren't found in the table. Here's how you might do that: ``` ;WITH y(d) AS ( SELECT TOP (21) DATEADD(YEAR, number-10, DATEADD(YEAR, YEAR(GETDATE())-1900,0)) FROM master.dbo.spt_values WHERE type = N'P' ORDER BY number ) SELECT y.d, COUNT(o.key) FROM y LEFT OUTER JOIN dbo.other_table AS o ON o.datetime_column >= d.d AND o.datetime_column < DATEADD(YEAR, 1, d.d) GROUP BY y.d ORDER BY y.d; ```
Combine trigram with ranked searching in django 1.10 We are working with search in Django 1.10 and we need to combine ranked searching with trigram searching. Our code is this:

```
def get_queryset(self):
    search = self.request.GET.get('text', '')
    vector = SearchVector('name',weight='A',
        config=settings.SEARCH_LANGS[
            settings.LANGUAGE
        ],
    ) + SearchVector(
        'content',
        weight='B',
        config=settings.SEARCH_LANGS[
            settings.LANGUAGE
        ],
    )
    query = SearchQuery(search)
    return Article.objects.annotate(
        rank=SearchRank(
            vector,
            query
        ),
        similarity=TrigramSimilarity(
            'name', search
        ) + TrigramSimilarity(
            'content', search
        ),
    ).filter(
        rank__gte=0.3
    ).filter(
        similarity__gt=0.3
    ).order_by(
        '-similarity'
    )[:20]
```

But this code doesn't return any results: without using the trigram we have no problems, but combining the two we can't get anything back. How can we combine trigram and ranked searching in Django 1.10?
We investigated more thoroughly and came to understand how weighted search works. According to the [documents](https://docs.djangoproject.com/en/dev/ref/contrib/postgres/search/#weighting-queries), weights can be assigned to fields, and those weights can themselves be tuned; similarly, we can use [trigrams](https://docs.djangoproject.com/en/dev/ref/contrib/postgres/search/#trigram-similarity) to filter by similarity or distance. However, the docs don't give an example of using the two together, and investigating further didn't explain much about how weights work either. A little logic tells us that if we search for a word common to all rows, we will get ranks of 0 for all of them; similarity varies much more than rank does, although it tends towards lower values than rank.

Now, text search, as far as we understand, is carried out based on the text contained in the fields you want to filter, even more than on the language set in the configuration. An example, looking at titles: the model we used had a title field and a content field, whose most common words were `how change`. Reviewing the weighted words (ranks work as part of the query, so we can use `values` or `values_list` to review the ranks and similarities, which are numerical values; we can view the weighted words by inspecting the vector object), we saw that weights were indeed assigned, but to combinations of split (stemmed) words: we found 'perfil' and 'cambi', but we did not find 'cambiar' or 'como'. However, all models contained the same text 'lorem ipsun ...', and all the words of that sentence appeared whole and with weight B. We conclude from this that searches are done based on the contents of the filtered fields more than on the language with which we configure searches.

That said, here we present the code we use for everything. First, to use trigrams we need to enable the extensions in the database:

```
from django.db import migrations
from django.contrib.postgres.operations import UnaccentExtension, TrigramExtension

class Migration(migrations.Migration):

    initial = True

    dependencies = [
    ]

    operations = [
        ...
        TrigramExtension(),
        UnaccentExtension(),
    ]
```

Import the operations from the postgres package and run them from any migration file. The next step is to change the code from the question so that the filter returns one of the queries if the other fails:

```
def get_queryset(self):
    search_query = SearchQuery(self.request.GET.get('q', ''))
    vector = SearchVector(
        'name',
        weight='A',
        config=settings.SEARCH_LANGS[settings.LANGUAGE_CODE],
    ) + SearchVector(
        'content',
        weight='B',
        config=settings.SEARCH_LANGS[settings.LANGUAGE_CODE],
    )
    if self.request.user.is_authenticated:
        queryset = Article.actives.all()
    else:
        queryset = Article.publics.all()
    return queryset.annotate(
        rank=SearchRank(vector, search_query),
        similarity=TrigramSimilarity(
            'name', search_query
        ) + TrigramSimilarity(
            'content', search_query
        ),
    ).filter(Q(rank__gte=0.3) | Q(similarity__gt=0.3)).order_by('-rank')[:20]
```

The problem with the code in the question was that it filtered one query after the other, and if the chosen word did not appear in either of the two searches, the problem was even bigger. We use a `Q` object to filter with an `OR` connector, so that if one of the two does not return a desired value, the other is used in its place. With this it is enough; however, in-depth clarifications on how these weights and trigrams work are welcome, in order to exploit to the fullest this new advantage offered by the latest version of Django.
Error: Type 'number | undefined' is not assignable to type 'number | { valueOf(): number; }'? I have this array type: ``` interface Details { Name: string; URL: string; Year: number; } interface AppState { data: Details[]; } ``` I am using D3 to create an x axis like this: ``` createChart = () => { const { data } = this.state; const width = 900, height = 600; // x-axis const x = d3 .scaleLinear() .domain([ d3.min(data, ({ Year }) => (Year ? Year - 1 : 0)), // ERROR d3.max(data, ({ Year }) => (Year ? Year + 1 : 0)) // ERROR ]) .range([0, width]); }; ``` On the marked lines using `d3.min` and `d3.max` I get the following error: > > Type 'number | undefined' is not assignable to type 'number | { > valueOf(): number; }'. Type 'undefined' is not assignable to type > 'number | { valueOf(): number; }'. > > > How do I make this work?
For your call to `d3.min()` the compiler will use the following [type definition](https://github.com/DefinitelyTyped/DefinitelyTyped/blob/master/types/d3-array/index.d.ts#L71): ``` export function min<T, U extends Numeric>(array: Iterable<T>, accessor: (datum: T, index: number, array: Iterable<T>) => U | undefined | null): U | undefined; ``` As you can see the function can return either `U`—in your case the type of `Details.Year`, i.e. `number`— or `undefined`. This, however, does not match the [type definition](https://github.com/DefinitelyTyped/DefinitelyTyped/blob/master/types/d3-scale/index.d.ts#L80) for the `.domain()` method which takes an array of either numbers or values coercible to numbers: ``` domain(domain: Array<number | { valueOf(): number }>): this; ``` This explains why you get the error. The same, obviously, holds true for `d3.max()`. Looking at the [documentation](https://github.com/d3/d3-array#min) the reasons for `d3.min()` to return `undefined` are rather limited: > > If the iterable contains no comparable values, returns undefined. > > > Given your code you are guaranteed to not run into this issue. For that reason you can safely cast the return values of `d3.min()` and `d3.max()` to `number`: ``` const x = d3 .scaleLinear() .domain([ d3.min(data, ({ Year }) => (Year ? Year - 1 : 0)) as number, // cast to number d3.max(data, ({ Year }) => (Year ? Year + 1 : 0)) as number // cast to number ]) .range([0, width]); ```
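If you prefer to avoid the casts, the nullish coalescing operator (available since TypeScript 3.7) expresses the same guarantee; this sketch is equivalent to the code above:

```
const x = d3
  .scaleLinear()
  .domain([
    d3.min(data, ({ Year }) => (Year ? Year - 1 : 0)) ?? 0, // fall back to 0 if undefined
    d3.max(data, ({ Year }) => (Year ? Year + 1 : 0)) ?? 0
  ])
  .range([0, width]);
```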
Word Embedding, LookupTable, Word Embedding Visualizations

I need to ask a few questions regarding word embeddings... they could be basic.

1. When we convert a one-hot vector of a word, for instance king `[0 0 0 1 0]`, into an embedded vector `E = [0.2, 0.4, 0.2, 0.2]`... is there any importance to each index in the resultant word vector? For instance `E[1]`, which is 0.2... what specifically does `E[1]` define (although I know it's basically a transformation into another space)... or does the word vector define context collectively rather than individually?
2. How does the dimension (reduced or increased) of a word vector matter compared to the original one-hot vector?
3. How can we define a lookup table in terms of an embedding layer?
4. Is a lookup table a kind of randomly generated table, or has it already been trained separately with respect to the instances in the data, so that we just use it later on in neural network operations?
5. Is there any method to visualize an embedded vector at a hidden layer (as we have in image-based neural network processing)?

Thanks in advance
1. Each element (or group of elements) in an embedding vector has some meaning, but it is mostly unknown to humans. Depending on what algorithm you use, a word embedding vector may have a different meaning, but it is usually useful. For example, with [Glove](https://nlp.stanford.edu/projects/glove/), similar words like 'frog' and 'toad' stay near each other in vector space, and King - Man results in a vector similar to Queen.

3. The lookup table turns vocabulary entries into indices. For example, say you have the vocabulary list:

```
[dog, cat, mouse, feed, play, with]
```

Then the sentence "Dog play with cat" becomes => 0, 4, 5, 1, while your embedding matrix is as follows:

```
[0.1, 0.1, 0]    # this is dog
[0.2, 0.5, 0.1]  # this is cat
[...]
[...]
[...]
[...]
```

where the first row is the embedding vector of dog, the second row is cat, and so on. Then the indices (0, 4, 5, 1), after the lookup, become the matrix [[0.1, 0.1, 0][...][...][0.2, 0.5, 0.1]] (see the sketch at the end of this answer).

4. Either or both:

- You can randomly initialize the embedding vectors and train them with gradient descent.
- You can take pretrained word vectors and keep them fixed (i.e. read-only, no change). You can train your word vectors in one model and use them in another model, or you can download pretrained word vectors online. An example is Common Crawl (840B tokens, 2.2M vocab, cased, 300d vectors, 2.03 GB download): glove.840B.300d.zip on [Glove](https://nlp.stanford.edu/projects/glove/).
- You can initialize with pretrained word vectors and train them with your model by gradient descent.

Update:

A **one-hot vector** does not contain any information by itself. You can think of a one-hot vector as the index of that word in the vocabulary. For example, Dog => [1, 0, 0, 0, 0, 0] and cat => [0, 1, 0, 0, 0, 0]. There are some differences between one-hot and index:

- If you input a list of indices, [0, 4, 5, 1], to your multi-layer perceptron, it cannot learn anything (I tried...). But if you input a matrix of one-hot vectors [[...1][1...][...][...]], it learns something. However, this is costly in terms of RAM and CPU.
- One-hot costs a lot of memory to store zeros. Thus, I suggest randomly initializing an embedding matrix if you don't have one: store the dataset as indices, and use the indices to look up the embedding vectors.

> "its mean that lookup table is just a matrix of embedded vectors
> (already been trained seperately via word2vec or...) for each word in
> the vocabulary. and while in the process of neural network either we
> can use an Embedding Layer or we can just refer to embedded vector in
> lookup table for that particular embedded vector against particular
> one-hot vector."
>
>

Use the "INDEX" to look up entries in the lookup table: turn dog into 0, cat into 1. A one-hot vector and an index contain the same information, but the one-hot costs more memory to store. Moreover, a lot of deep learning frameworks accept an index as input to the embedding layer (whose output is the vector representing the word at that index).

> ". How we get this embedding vector..."
>
>

=> Read the papers. Here are the papers about [Word2vec](https://arxiv.org/abs/1301.3781) and [Glove](https://nlp.stanford.edu/projects/glove/). Ask your lecturers for more detail; they are willing to help you.
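To make the index-based lookup above concrete, here is a minimal NumPy sketch (the vocabulary, the embedding dimension, and the random values are illustrative, not tied to any particular framework):

```
import numpy as np

vocab = ['dog', 'cat', 'mouse', 'feed', 'play', 'with']
word_to_index = {word: i for i, word in enumerate(vocab)}

# Randomly initialized embedding matrix: one row per vocabulary word,
# three dimensions per word (both choices are illustrative).
embedding_matrix = np.random.rand(len(vocab), 3)

sentence = ['dog', 'play', 'with', 'cat']
indices = [word_to_index[w] for w in sentence]  # [0, 4, 5, 1]

# The "lookup" is just row selection: one embedding vector per word.
embedded = embedding_matrix[indices]            # shape (4, 3)
print(indices)
print(embedded)
```

In a real model the rows of `embedding_matrix` would then be updated by gradient descent (or loaded from pretrained vectors and kept fixed), exactly as described in point 4.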
What is the usage of ":" in if and else statements? I saw the following code snippet: ``` <?php if(!empty($_POST)): // case I: what is the usage of the : if(isset($_POST['num']) && $_POST['num'] != ''): $num = (int)$_POST['num']; .... if($rows == 0): echo 'No'; else: // case II: what is usage of : echo $rows.'Yes'; endif; ``` I would like to know what the usage of ":" in php code is.
This is the [**alternative syntax for control structures**](http://php.net/manual/en/control-structures.alternative-syntax.php). So

```
if(condition):
  // code here...
else:
  // code here...
endif;
```

is equivalent to

```
if(condition) {
  // code here...
}
else {
  // code here...
}
```

This can come in very handy when dealing with HTML. Imho, it is easier to read, because you don't have to look for braces `{}`, and the PHP code and HTML don't feel mixed up.

Example:

```
<?php if(something): ?>
    <span>Foo</span>
<?php else: ?>
    <span>Bar</span>
<?php endif; ?>
```

---

I would not use the alternative syntax in "normal" PHP code though, because there the braces provide better readability.
RichFaces 4 - how to disable skins

Is there any way to disable all the skins RichFaces applies? They ruin my layout and override fonts, links, ...
You can redefine each CSS style, but it'll be boring...

Have a look at [reset css](http://cssresetr.com/); this can help you to redefine the CSS.

Or you can try to remove the styling:

```
<context-param>
  <param-name>org.richfaces.CONTROL_SKINNING</param-name>
  <param-value>disable</param-value>
</context-param>
<context-param>
  <param-name>org.richfaces.CONTROL_SKINNING_CLASSES</param-name>
  <param-value>disable</param-value>
</context-param>
```

Or try to use the plain skin:

```
<context-param>
  <param-name>org.richfaces.skin</param-name>
  <param-value>plain</param-value>
</context-param>
```
Java data type to hold only date Which data type in Java can hold just the date and doesn't require a time component? For example, just to store `12/07/2012`. I'm working with persisting the data to/from a database that has a date-only data type, so I'm looking for the best equivalent data type in Java.
from the JDK: [`java.sql.Date`](http://docs.oracle.com/javase/6/docs/api/java/sql/Date.html): > > A thin wrapper around a millisecond value that allows JDBC to identify > this as an SQL DATE value. A milliseconds value represents the number > of milliseconds that have passed since `January 1, 1970 00:00:00.000 GMT`. > > > To conform with the definition of SQL DATE, the millisecond values > wrapped by a `java.sql.Date` instance must be 'normalized' by setting > the hours, minutes, seconds, and milliseconds to zero in the > particular time zone with which the instance is associated. > > > or from [JodaTime](http://joda-time.sourceforge.net/): [`DateMidnight`](http://joda-time.sourceforge.net/api-release/org/joda/time/DateMidnight.html) or [`LocalDate`](http://joda-time.sourceforge.net/apidocs/org/joda/time/LocalDate.html) (thanks @cdeszaq) > > `DateMidnight` defines a date where the time component is fixed at > midnight. The class uses a time zone, thus midnight is local unless a > UTC time zone is used. > > > It is important to emphasise that this class represents the time of > midnight on any given day. Note that midnight is defined as 00:00, > which is at the very start of a day. > > >
Replace regex matches in attributed string with image in Objective-C My goal is to store the information for an attributed string in Parse.com. I decided to come up with an encoding for attributed text for my images that works by replacing any string `{X}` in braces with the corresponding image. For example: ``` Picture of 2 colorless mana: {X} ``` Should produce an attributed string where `{X}` is replaced by an image. This is what I've tried: ``` NSString *formattedText = @"This will cost {2}{PW}{PW} to cast."; NSRegularExpression *regex = [NSRegularExpression regularExpressionWithPattern:@"(?<=\\{)[^}]+(?=\\})" options:NSRegularExpressionAnchorsMatchLines error:nil]; NSArray *matches = [regex matchesInString:formattedText options:kNilOptions range:NSMakeRange(0, formattedText.length)]; NSMutableAttributedString *attributedString = [[NSMutableAttributedString alloc] initWithString:formattedText]; for (NSTextCheckingResult *result in matches) { NSString *match = [formattedText substringWithRange:result.range]; NSTextAttachment *imageAttachment = [NSTextAttachment new]; imageAttachment.image = [UIImage imageNamed:[NSString stringWithFormat:@"Mana%@.png", match]]; NSAttributedString *replacementForTemplate = [NSAttributedString attributedStringWithAttachment:imageAttachment]; [attributedString replaceCharactersInRange:result.range withAttributedString:replacementForTemplate]; } [_textView setAttributedText:attributedString]; ``` There are two problems with this approach currently: - The braces aren't replaced, only the text inside of them. - The range for each match is changing because the string itself is changing, and it gets more off with each replacement whose original text was of length > 1. Here's what it looks like: [![an image](https://i.stack.imgur.com/PHmwy.png)](https://i.stack.imgur.com/PHmwy.png)
Two problems:

Braces aren't replaced. That's because you're using assertions, which aren't counted as part of the match. The match you're making with your pattern only contains the stuff inside the braces. Use this pattern instead:

```
\{([^}]+)\}
```

That's: match a brace, followed by one or more things that aren't closing braces in a capture group, followed by a closing brace. The whole match includes the braces now.

That introduces another problem, though -- you're using the enclosed bits to pick the replacement image. Small change to fix this: the internal capture group now holds that information, rather than the whole match. The capture group's length tells you the range of the substring you need.

```
NSUInteger lengthOfManaName = [result rangeAtIndex:1].length;
NSString *manaName = [match substringWithRange:(NSRange){1, lengthOfManaName}];
imageAttachment.image = [UIImage imageNamed:[NSString stringWithFormat:@"Mana%@.png", manaName]];
```

Second problem: the length of the string is changing. [Just enumerate backwards](https://stackoverflow.com/questions/18919934/nsmutablestring-modify-while-iterating-text-checking-results):

```
for (NSTextCheckingResult *result in [matches reverseObjectEnumerator]) {
    //...
}
```

Changes to ranges towards the end of the string now won't affect earlier ranges.
Child elements outside parent div

It appears that **all the elements nested inside my parent divs are overflowing** from the bottom border of my parent divs. As you can see, the image divs overflow the parent, and so does the paragraph in the header.

Similar questions deal with floating elements, but that is not applicable here since I don't use them. Also, why is "position:relative" relevant?

Here is the code, and a ready fiddle for your very much appreciated tweaks. <https://jsfiddle.net/r96fxfgj/>

```
<!DOCTYPE html>
<html>
<title>DISSECTIONS</title>

<head>
  <link rel="stylesheet" type="text/css" href="dissections.css">
</head>

<body>
  <div id="header">
    <p><span>/<sup>*</sup></span>DISSECTIONS</p>
  </div>
  <div id="main">
    <div class="photo" id="one"> </div>
    <div class="photo" id="two"> </div>
    <div class="photo" id="three"> </div>
    <div class="photo" id="four"> </div>
    <span class="stretch"></span>
  </div>
  <div id="footer">
    <button id="about"> ABOUT </button>
    <button id="contact"> CONTACT </button>
  </div>
</body>

</html>
```

```
body {
  overflow: hidden; /*prevents scrolling*/
  font-family: courier;
}

div {
  width: 98vw;
}

p{
  font-size: 8vh;
}

span {
  font-size: 15vh;
}

sup {
  font-size: 8vh;
}

#header {
  border: 2px solid black;
  height: 20vh;
  padding: 0;
}

#main {
  border: 2px solid red;
  height: 60vh;
  margin-top: 5vh;
  margin-bottom: 5vh;
  padding: 0;
  text-align: justify; /*justify*/
}

.stretch { /*justify*/
  width: 100%;
  display: inline-block;
}

.photo {
  border: 2px solid black;
  height: 100%;
  display: inline-block;
  vertical-align: top;
  width: 20vw;
  margin-left: 1%;
  margin-right: 1%;
  background-repeat: no-repeat;
  background-image: url( http://www.publicdomainpictures.net/pictures/10000/nahled/1001-12380278201dXT.jpg);
}

#footer {
  border: 2px solid blue;
  height: 10vh;
  bottom: 0;
}
```
There are a few separate but similar issues here. Most boil down to you're unintentionally setting a specific height for the parent which is smaller than the things it contains. In general it's best to set specific heights or widths only when your design actually needs those specific sizes -- otherwise just let the content flow dictate the size of its parents. - text in header overflowing the container: Fonts are a bit weird when it comes to sizing -- the value you put in `font-size` will match the text itself, but will also scale the padding above and below the text to a (typically) larger value (this is in addition to the normal `padding` attribute found on other elements). You can fix this by setting values for the text's `line-height` (a cheap but often-used hack for short headers is `line-height:1px`, which will remove all the extra padding. Don't use this if there's any chance the text will wrap to a second line, though, or the second line will overlap the first.) - images overflowing `#main`: you're setting #main as a percentage of the viewport height, but images at 100% of their actual size -- so depending on the window size the images may end up larger than or smaller than the container. Either use the same units for both, or don't set a height on #main at all (therefore letting the images dictate the height of the container.) - `position:relative` -- I don't see this in your code but I've seen it confuse a lot of people: `position:relative` counterintuitively doesn't affect the DOM node you attach it to, it affects the absolute-positioned children of that node. If a parent has `position:relative`, then any children with `position:absolute` will be placed relative to the parent's position instead of relative to the full window. If you're not using `position:absolute` (and you shouldn't unless absolutely necessary!) then you don't need `position:relative`.
iOS Objective-C How to get 2 decimal rounded float value?

I have a

```
float a = 1412.244019;
```

and I need `a` to be rounded to the nearest second decimal like 1412.24000000.

I understand that if I want to present `a` with only two decimals I use %.2f, but that is NOT the case. I need the new `a` for mathematical reasons as a float.

I have tried a couple of methods found on stackoverflow without luck. The more obvious one I used, I mention below, but still had no luck using it. Can YOU see why the magic is not happening?

PS: It does do the desired effect sometimes, NOT always, which gets me thinking... hmmm...

Thanks in advance.

Code:

```
float a = 1412.244019;
NSLog(@"a is %f", a); //output a is 1412.244019
a = [[NSString stringWithFormat:@"%.2f", a] floatValue];
NSLog(@"a is %f", a); //output a is 1412.239990
```

**EDIT:** It SEEMS like when I am using the float `a` after the above surgery, it is considered to be 1412.240000 even though the NSLog says differently... strange. But I am getting what I want, so kudos for wasted time chasing nothing :)

**EDIT** I would love to choose you all as correct answers, but since I can only choose one, I chose the first good answer with extra explanation (of the two last).
Have you tried this?

```
CGFloat val = 37.777779;

CGFloat rounded_down = floorf(val * 100) / 100;      /* Result: 37.77 */
CGFloat nearest = floorf(val * 100 + 0.5) / 100;     /* Result: 37.78 */
CGFloat rounded_up = ceilf(val * 100) / 100;         /* Result: 37.78 */
```

Source: [Rounding Number to 2 Decimal Places in C](https://stackoverflow.com/questions/1343890/rounding-number-to-2-decimal-places-in-c)

To complement this: you just don't have control over how the computer will store the float value. So these rounded values may not be EXACTLY the "obvious" decimal values, but they will be very, very close, and that's the maximum guarantee you will have.
Singleton with multithreads

This question was asked in an interview. The first part was to write the singleton class:

```
class Singleton
{
    static Singleton *singletonInstance;
    Singleton() {}
public:
    static Singleton* getSingletonInstance()
    {
        if(singletonInstance == null)
        {
            singletonInstance = new Singleton();
        }
        return singletonInstance;
    }
};
```

Then I was asked how to handle this `getSingletonInstance()` in a multithreaded situation. I wasn't really sure, but I modified it as:

```
class Singleton
{
    static Singleton *singletonInstance;
    Singleton() {}
    static mutex m_;
public:
    static Singleton* getSingletonInstance()
    {
        m_pend();
        if(singletonInstance == null)
        {
            singletonInstance = new Singleton();
        }
        return singletonInstance;
    }

    static void releaseSingleton()
    {
        m_post();
    }
};
```

Then I was told that although a mutex is required, pending and posting a mutex is not efficient as it takes time, and there is a better way to handle this situation.

Does anybody know a better and more efficient way to handle the singleton class in a multithreaded situation?
In C++11, the following is guaranteed to perform thread-safe initialisation:

```
static Singleton* getSingletonInstance()
{
    static Singleton instance;
    return &instance;
}
```

In C++03, a common approach was to use double-checked locking: checking a flag (or the pointer itself) to see if the object might be uninitialised, and only locking the mutex if it might be. This requires some kind of non-standard way of atomically reading the pointer (or an associated boolean flag); many implementations incorrectly use a plain pointer or `bool`, with no guarantee that changes on one processor are visible on others. The code might look something like this, although I've almost certainly got something wrong:

```
static Singleton* getSingletonInstance()
{
    if (!atomic_read(singletonInstance))
    {
        mutex_lock lock(mutex);
        if (!atomic_read(singletonInstance))
        {
            atomic_write(singletonInstance, new Singleton);
        }
    }
    return singletonInstance;
}
```

This is quite tricky to get right, so I suggest that you don't bother. In C++11, you could use standard atomic and mutex types, if for some reason you want to keep the dynamic allocation of your example.

Note that I'm only talking about synchronised initialisation, not synchronised access to the object (which your version provides by locking the mutex in the accessor, and releasing it later via a separate function). If you need the lock to safely access the object itself, then you obviously can't avoid locking on every access.
How to detect a strictly clockwise / counter-clockwise motion in MATLAB

I need to write a small program to test whether a line (position vector) moves strictly clockwise or counter-clockwise (CCW). I tried to use atand to find the angle, but it can jump from a negative to a positive value when it passes through 90 deg, and the same thing happens if I use the slope method. However, the motion does not have to land exactly at 90 deg; it could jump from 89 to 91, and then a big slope jump could happen.

Any ideas please? Thanks
One way to do this would be to calculate the cross-product of consecutive position vectors. If all the cross-products are positive then the line moved strictly counter-clockwise; similarly, if they are all negative then the line moved strictly clockwise. If the signs are mixed then the line did not move strictly in one angular direction:

```
function checkRotation(pos)

pos(:,3) = 0;
pos = unique(sum(sign(cross(pos(1:end-1,:), pos(2:end,:))), 2));

if isequal(pos, 1)
    disp('Rotation was counter-clockwise');
elseif isequal(pos, -1)
    disp('Rotation was clockwise');
else
    disp('No strict rotation direction');
end
```

Create some random position vectors on `-10<=x<=10` and `-10<=y<=10` and test rotation:

```
>> pos = 20 * rand([10, 2]) - 10

pos =

         -8.28968405819912          9.26177078573826
         -4.75035530603335         0.936114374779359
          6.02029245539477         0.422716616080031
         -9.41559444875707         -5.36811226582952
          8.57708278956089        -0.222045121596661
          4.60661725710906          2.48120176347379
        -0.227820523928417          3.58271081731495
          1.57050122046878         -2.08969568662814
         -5.25432840456957         -2.65126702911047
        -0.823023436401378          9.75964006323266

>> checkRotation(pos)
No strict rotation direction
```

Create position vectors that move only CCW and test:

```
>> theta = 0:15:180;
>> pos = [cosd(theta)' sind(theta)'];
>> checkRotation(pos)
Rotation was counter-clockwise
```

and similarly for CW rotation:

```
>> theta = 180:-15:0;
>> pos = [cosd(theta)' sind(theta)'];
>> checkRotation(pos)
Rotation was clockwise
```

Note that the success of rotation detection is limited by your sampling rate. If the line rotates counter-clockwise by more than 180 degrees between successive samplings of the line position, it is indistinguishable from a rotation of less than 180 degrees in the clockwise direction. This is an example of [aliasing](http://en.wikipedia.org/wiki/Aliasing).
Writing to /etc/network/interfaces at boot using sed/awk?

Newbie here. I'm trying to write to the auto-generated `/etc/network/interfaces` file of a newly provisioned XEN Ubuntu (12.04/10.04/8.04) DomU server at boot time using (currently) `sed`. The auto-generated file is formatted as below:

```
auto eth0
iface eth0 inet static
address 192.168.0.88
gateway 192.168.0.254
network 255.255.255.255
auto lo
iface lo inet loopback
```

Using `sed`, I'm trying to alter lines 1 & 2, add a third line, remove the gateway and last two lines, and append four extra lines at the very end. I'm currently stuck on adding the third line, as the script adds this line every time it's run:

```
#!/bin/bash
sed -i "1s/.*/auto lo eth0/" /tmp/interfaces
sed -i "2s/.*/iface lo inet loopback/" /tmp/interfaces
sed -i "2a\iface eth0 inet static" /tmp/interfaces
sed -i "s/auto lo//g" /tmp/interfaces
```

Is it possible to add the third line only if it doesn't exist using `sed` (or `awk`)? Likewise, how can I delete the gateway and last two lines only if they do exist? I'm new to `sed`, so I'm wondering whether I should be looking at `awk` instead for achieving this.
You can do that with sed:

```
sed -i -e '4{/iface eth0 inet static/! i\
iface eth0 inet static
}'
```

You can group commands with braces. The commands in the braces will only execute on the fourth line. The `i` insert command will only execute on the fourth line **and** if that line doesn't match the string between slashes (the `!` after it tells it to execute when the line doesn't match).

You can do the same to delete:

```
sed -i -e '3{/gateway/d}'
```

Here we delete the third line only if it contains the string `gateway`. You could probably be more generic and simply do:

```
sed -i -e '/gateway/d'
```

which will delete all lines that contain gateway, but maybe that's not what you want.

As for deleting the last lines, the easiest solution would be:

```
sed -i -e '${/auto lo/d;/iface lo inet loopback/d}'
sed -i -e '${/auto lo/d;/iface lo inet loopback/d}'
```

Where the `d` delete command is executed on the last line if it matches either `auto lo` or `iface lo inet loopback`. Executing it twice will delete the last two lines if they match the patterns.

If you want to add lines to the end of the file, you can do:

```
sed -i -e '$a\
newline1\
newline2\
newline3'
```

Or maybe only add them if the last line isn't a specific line:

```
sed -i -e '${/192\.168\.1\.1/!a\
newline1\
newline2\
newline3
}'
```

Hope this helps a little =)
Drawing path point simplification / reduction in JavaScript

We are developing a drawing application for iOS and Android. I am using quadratic curves to draw smooth curves, because the cubic Bézier curve is way too slow to draw on mobile devices (mostly pads). Drawing a long quadratic curve with a lot of points is still slow on pads, so I am trying to reduce the points I have to plot on canvas to speed up drawing.

I have tried:

1. Catmull-Rom splines
2. Ramer-Douglas-Peucker

but they are for cubic curves and have not been working properly for quad curves. Is there any algorithm or technique for quad curves as well? Can any other optimization be done to speed up path drawing?
You could subdivide the spline segment recursively, until they are almost a straight line. > > **function** Subdivide( *C* : Curve, *maxDepth* : int ) > > **begin** > >     **if** *maxDepth* ≤ 1 **or** Polyline-Length( *C* ) ≤ 1px **or** StraightLineMeasure( *C* ) < ϵ **then** > >         **return** List-Single( *C* ) > >     **end** > >     *C1*, *C2* ← Split( *C* ) > >     **return** List-Concat( Subdivide( *C1*, *maxDepth* - 1 ), Subdivide( *C2*, *maxDepth* - 1 ) ) > > **end** > > > where *Polyline-Length* calculates the length of the poly-line formed by the control points. *StraightLineMeasure* returns zero for a straight line, and a small number for almost straight lines. *Split* returns two sets of control points, each of which represents half of the original curve. B-Splines are [easy to subdivide (pdf)](http://www.mpi-inf.mpg.de/~ag4-gm/slides/b-splines.pdf). --- ![Screenshot](https://i.stack.imgur.com/C0Jnt.png) [(click here for demo)](http://jsfiddle.net/h6X5m/1/) Here is an implementation in javascript: ``` $(function() { var canvas = document.createElement('canvas'); document.body.appendChild(canvas); var ctx = canvas.getContext('2d'); ctx.fillStyle = '#f00'; ctx.strokeStyle = '#f00'; ctx.lineWidth = 1; var segments = BSplineSegment.FromBSpline([ new Vector(10, 10), new Vector(110, 10), new Vector(110, 110), new Vector(10, 110), new Vector(10, 10), new Vector(110, 10), new Vector(110, 110) ]); for (var i = 0; i < segments.length; i++) { var subsegments = segments[i].subdivide(30); for (var j = 0; j < subsegments.length; j++) { var bss = subsegments[j]; ctx.fillRect(bss.p1.x, bss.p1.y, 1, 1); } } var segment = new BSplineSegment( new Vector(110, 10), new Vector(210, 10), new Vector(110, 110), new Vector(210, 110)); subsegments = segment.subdivide(50); for (var j = 0; j < subsegments.length; j++) { var bss = subsegments[j]; ctx.fillRect(bss.p1.x, bss.p1.y, 1, 1); } }); function Vector(x, y) { this.x = x; this.y = y; } Vector.prototype = { lengthSquared: function() { return this.x * this.x + this.y * this.y; }, length: function() { return Math.sqrt(this.lengthSquared()); }, add: function(other) { return new Vector(this.x + other.x, this.y + other.y); }, sub: function(other) { return new Vector(this.x - other.x, this.y - other.y); }, mul: function(scale) { return new Vector(this.x * scale, this.y * scale); }, div: function(scale) { return new Vector(this.x / scale, this.y / scale); }, cross: function(other) { return this.x * other.y - this.y * other.x; }, }; function BSplineSegment(p0, p1, p2, p3) { this.p0 = p0; this.p1 = p1; this.p2 = p2; this.p3 = p3; }; BSplineSegment.FromBSpline = function(pts) { var n = pts.length; var segments = []; for (var i = 3; i < n; i++) { segments.push(new BSplineSegment(pts[i - 3], pts[i - 2], pts[i - 1], pts[i])); } return segments; }; BSplineSegment.prototype = { polylineLength: function() { return this.p2.sub(this.p1).length(); }, straightLineMeasure: function() { var det0 = this.p1.cross(this.p2); var det1 = det0 + this.p2.cross(this.p0) + this.p0.cross(this.p1); var det2 = det0 + this.p2.cross(this.p3) + this.p3.cross(this.p1); return (Math.abs(det1) + Math.abs(det2)) / this.p2.sub(this.p1).length(); }, split: function() { var p0 = this.p0.add(this.p1).mul(0.5); var p1 = this.p0.add(this.p1.mul(6)).add(this.p2).mul(0.125); var p2 = this.p1.add(this.p2).mul(0.5); var p3 = this.p1.add(this.p2.mul(6)).add(this.p3).mul(0.125); var p4 = this.p2.add(this.p3).mul(0.5); return [new BSplineSegment(p0, p1, p2, p3), new BSplineSegment(p1, p2, p3, p4)]; }, 
subdivide: function(maxLevels) {
    if (maxLevels <= 0 || this.polylineLength() < 1.0 ||
        this.straightLineMeasure() < 1.0) {
      return [this];
    } else {
      var children = this.split();
      var left = children[0].subdivide(maxLevels - 1);
      var right = children[1].subdivide(maxLevels - 1);
      return left.concat(right);
    }
  }
};
```
Tooltip inside TextInput label is not working. Material-UI + React

I want to use an Outlined style of a [TextField](https://material-ui.com/components/text-fields/) whose label must contain a [tooltip](https://material-ui.com/components/tooltips/#tooltip) icon with some text

[![Design](https://i.stack.imgur.com/ZwCaP.png)](https://i.stack.imgur.com/ZwCaP.png)

Please refer to the [Sandbox](https://codesandbox.io/s/material-ui-7qyr6) for a live demo

**Code excerpt:**

```
const IconWithTooltip = () => (
  <Tooltip title="Text explaining stuff">
    <HelpIcon />
  </Tooltip>
);

const Example = () => {
  return (
    <div>
      <FormControl variant="outlined">
        <InputLabel htmlFor="with-label">
          FormControl with label
          <IconWithTooltip />
        </InputLabel>
        <OutlinedInput
          id="with-label"
          startAdornment={<InputAdornment position="start">$</InputAdornment>}
        />
      </FormControl>
      <TextField
        label={
          <div>
            TextField
            <IconWithTooltip />
          </div>
        }
        variant="outlined"
      />
      Just icon with tooltip
      <IconWithTooltip />
    </div>
  );
};
```

**Problem:** When hovering over the (?) icon, the tooltip does not appear. I have tried coding the input in 2 different ways, using FormControl and TextInput, but neither works.

Am I missing something?
As indicated by Nimmi in a comment, this is due to `pointer-events: none` in the [CSS for the label](https://github.com/mui-org/material-ui/blob/v4.9.4/packages/material-ui/src/InputLabel/InputLabel.js#L73).

Changing this in the manner shown below does allow the tooltip to work, **but you should NOT do this**. This causes the label to be clickable. When `pointer-events` is `none`, a click on the label passes through to the input and causes it to receive focus. When `pointer-events` is `auto`, the click stops on the label and does not bring focus to the input.

You may want to look into leveraging helper text (shown below the input) as a place to incorporate the tooltip.

```
      <TextField
        InputLabelProps={{ style: { pointerEvents: "auto" } }}
        label={
          <div>
            TextField
            <IconWithTooltip />
          </div>
        }
        variant="outlined"
        type="text"
      />
```

[![Edit Material UI](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/s/material-ui-zfe6u?fontsize=14&hidenavigation=1&theme=dark)

Related documentation:

- <https://developer.mozilla.org/en-US/docs/Web/CSS/pointer-events>
Grails: how to programatically bind command object data to domain object in service? I have a command object that I want to convert into a domain object. However, the object I want to convert the command object into may be one of two domain classes (they're both derived classes), and I need to do it in a service (which is where, based on other data, I decide which type of object it should be bound to). Is this possible and what's the best way to do this? `bindData()` only exists in a controller. Do I just have to manually map command object parameters to the appropriate domain object properties? Or is there a faster/better way?
If the parameters have the same name, then you can use [this question](https://stackoverflow.com/questions/9072307/copy-groovy-class-properties) to copy the values over. A quick summary is as follows.

**Using the Grails API**

You can cycle through the properties in a class by accessing the `properties` field of the class.

```
object.properties.each { property ->
    // Do something
}
```

You can then check to see if the property is present in the other object.

```
if(otherObject.hasProperty(property) && !(key in ['class', 'metaClass']))
```

Then you can copy it from one object to the other.

**Using Commons**

Apache Commons has a really good utility class called `BeanUtils` that provides a generic copy method, which means you can do a simple one-liner.

```
BeanUtils.copyProperties(object, otherObject);
```

That will copy values over where the name is the same. You can check out the docs [here](https://commons.apache.org/proper/commons-beanutils/apidocs/org/apache/commons/beanutils/BeanUtils.html).

**Otherwise..**

If there is no mapping between them, then you're kind of stuck, because the engine has no idea how to compare them, so you'll need to do it manually.
PostgreSQL create tablespace no permission

I've got a problem when I create a tablespace for PostgreSQL. The following are the steps:

1. `mkdir /postgres`
2. `chown postgres.postgres /postgres`
3. `su - postgres`
4. `psql`
5. `create tablespace p1 location '/postgres'`

In this step I got an error:

```
could not set permissions on directory "/postgres": Permission denied
```

The directory ownership is correct:

```
[root@dev ~]# ls -la /postgres
total 8
drwxr-xr-x.  2 postgres postgres 4096 Dec  2 13:17 .
dr-xr-xr-x. 28 root     root     4096 Dec  3 06:57 ..
```

The user is `postgres`:

```
[root@dev contrib]# ps -ef|grep postgres
postgres  1971     1  0 08:21 ?        00:00:01 /usr/bin/postmaster -p 5432 -D /var/lib/pgsql/data
```

I'm running on CentOS.

Fix: `setenforce 0`
At a wild guess I'd say you're on Mac OS X and your PostgreSQL is running as the user `postgres_` (note the underscore), as is used by some PostgreSQL packages. `ps -ef | grep postgres` or `ps aux|grep postgres` should show you what user the server is running as. Make sure the directory is owned by that user. **Update** based on extra info in comments: You're on CentOS, not Mac OS X. Your PostgreSQL is running as user `postgres`, which is the same owner as the directory. It thus seems likely that you are having issues with *SELinux*. If, *for testing purposes only*, you run: ``` setenforce 0 ``` are you then able to run the `CREATE TABLESPACE` command? (`DROP` the tablespace after creating it with SELinux temporarily off; if you don't, and restart, PostgreSQL will fail to start up). If creation fails with SELinux temporarily disabled, you must either exempt PostgreSQL from your SELinux policy, create the tablespace at a location that the SELinux policy permits, or set appropriate SELinux attributes on the tablespace directory so that PostgreSQL can manipulate it. Or you can turn SELinux off entirely, but that's not really preferable. There might be hints in dmesg, or in CentOS's SELinux helper tool, to tell you specific SELinux booleans you can turn on or off to control this. See the help for the `setsebool` command, the [Fedora Security Guide](http://docs.fedoraproject.org/en-US/Fedora/19/html/Security_Guide/index.html), the [CentOS SELinux howto](http://wiki.centos.org/HowTos/SELinux), etc. Perhaps the best option is to just change the SELinux context of the file. See [the documentation](http://docs.fedoraproject.org/en-US/Fedora/11/html/Security-Enhanced_Linux/sect-Security-Enhanced_Linux-Working_with_SELinux-SELinux_Contexts_Labeling_Files.html). You can use `chcon`, but then the change will be lost after a file system relabel. It's better to use `semanage` as discussed in the next page of the linked manual.
Single Page Application SEO and infinite scroll AngularJS

We have a site with a feed similar to Pinterest and are planning to refactor the jQuery soup into something more structured. The two most likely candidates are AngularJS and Backbone+Marionette. The site is user-generated and is mostly consumption-oriented (typical 90/9/1 rule) with the ability for users to like, bookmark, and comment on posts. From the feed we open a lightbox to see more detail about the post with comments, related posts, similar to Pinterest.

We have used Backbone sporadically and are familiar with the idea, but put off by the boilerplate. I assume Marionette would help a lot with that, but we're open to changing the direction more radically (e.g. Angular) if it will help in the long term.

The requirements:

- The initial page must be static for SEO reasons. It's important that the framework be able to start with existing content, preferably with little fight.
- We would prefer to have the data needed for the lightbox already loaded in the feed so that the transition can be faster. Some of the data is already there (title, description, photos, num likes/num bookmarks, num comments) but there is additional data that would be loaded for the detail view - comments, similar posts, who likes this, etc.
- Changes to the post that happen in the feed or detail lightbox should be reflected in the other with little work (e.g., if I like it from the feed, I should see that like and the new like count number if I go to the lightbox - or the opposite.)
- We would like to migrate our mobile site (currently in Sencha Touch) to also use the same code base for the parts that are common so we can have closer feature parity between mobile and the main site.

These requirements relate to my concerns about Angular:

1) Will it be possible/problematic to have initial page loads be static while rendering additional pages via the templates?

2) Is it problematic to have multiple data sources for different parts of the page - e.g. the main post part comes from embedded JSON data and from "see more"s in the feed, while the additional detail would come from a different AJAX call?

3) While the two-way binding is cool - I'm concerned it might be a negative in our case because of the number of items being rendered. The number of elements that we need two-way binding for is relatively small. Posts like:

- <https://stackoverflow.com/a/7654856/214545>
- [Angular JS ng-repeat consumes more browser memory](https://stackoverflow.com/questions/14065050/angular-js-consumes-more-browser-memory)

concern me for our use-case. We can easily have hundreds of posts, each with 1-2 dozen details. Can the two-way binding be "disabled" where I have fields/elements that I know won't change? Is it normal/possible to unload elements outside of the viewport to save memory? This is also connected to the mobile direction, because memory is even more of a concern there.

Would AngularJS work/perform well in our use-case? Are there any tricks/tips that would help here?
There are different methods of "infinite scroll" or feed, as you put it. The needs of your users and the size of an acceptable response payload will determine which one you choose. *It seems that here you sacrifice usability where you gain performance.*

**1.** [Append assets](http://binarymuse.github.io/ngInfiniteScroll/)

This method is your traditional append-to-bottom approach, where if the user reaches the bottom of the current scroll height, another API call will be made to "stack on more" content. It has its benefits as the most effective solution for handling cross-device caveats. The disadvantages of this solution, as you have mentioned, come from large payloads flooding memory as the user carelessly scrolls through content. There is no throttle.

```
<div infinite-scroll='getMore()' infinite-scroll-distance='0'>
  <ul>
    <li ng-repeat="item in items">
      {{item}}
    </li>
  </ul>
</div>

var page = 1;
$scope.getMore = function(){
  $scope.items.push(API.returnData(page));
  page++;
}
```

**2.** Append assets with a throttle

Here, we are suggesting that the user can continue to display more results in a feed that will infinitely append, but they must throttle or "manually" invoke the call for more data. This becomes cumbersome relative to the size of the content being returned that the user will scroll through. If there is a lot of content being returned per payload, the user will have to click the "get more" button less often. This is of course at the tradeoff of returning a larger payload.

```
<div>
  <ul>
    <li ng-repeat="item in items">
      {{item}}
    </li>
  </ul>
</div>
<div ng-click='getMore()'>
  Get More!
</div>

var page = 1;
$scope.getMore = function(){
  $scope.items.push(API.returnData(page));
  page++;
}
```

**3.** [Virtual Scroll](http://blog.stackfull.com/2013/02/angularjs-virtual-scrolling-part-1/)

This is the last and most interesting way to infinite scroll. The idea is that you are only storing the rendered version of a range of results in browser memory. That is, complicated DOM manipulation only acts on the current range specified in your configuration. This however has its own pitfalls.

The biggest is cross-device compatibility. If your handheld device has a virtual scrolling window that reaches the width of the device --- it had better be less than the total height of the page, because you will never be able to scroll past this "feed" with its own scroll bar. You will be "stuck" mid-page because your scroll will always be acting on the virtual scroll feed rather than the actual page containing the feed.

Next is reliability. If a user drags the scroll bar manually from a low index to one that is extremely high, you are forcing the browser to run these directives very, very quickly, which in testing has caused my browser to crash. This could be fixed by hiding the scroll bar, but of course a user could invoke the same scenario by scrolling very, very quickly.

[Here is the demo](http://demo.stackfull.com/virtual-scroll/#/comparison)

[The source](https://github.com/stackfull/angular-virtual-scroll)

`"The initial page must be static for SEO reasons. It's important that the framework be able to start with existing content, preferably with little fight."`

So what you are saying is that you want the page to be prerendered server side before it serves content? This approach worked well in the early 2000s, but most everyone is moving away from this and going towards the single-page app style. There are good reasons:

- The initial seed you send to the user acts as a bootstrap to fetch API data, so your servers do WAY less work.
- Lazy loading assets and asynchronous web service calls make the perceived load time much faster than the traditional "render everything on the server first, then spit it back out to the user" approach.
- Your SEO can be preserved by using a page pre-render / caching engine to sit in front of your web server and respond to web crawlers with your "fully rendered version". This concept is explained well **[here](http://theothersideofcode.com/what-is-stopping-google-from-indexing-single-page-javascript-applications)**.

`"We would prefer to have the data needed for the lightbox already loaded in the feed so that the transition can be faster. Some of the data is already there (title, description, photos, num likes/num bookmarks, num comments) but there is additional data that would be loaded for the detail view - comments, similar posts, who likes this, etc."`

If your initial payload for the feed does not contain child data points for each "feed id" and you need to use an additional API request to load them in your lightbox --- you are doing it right. That's a totally legit use case. You would be arguing over 50-100ms for a single API call, which is imperceptible latency to your end user. If you absolutely need to send the additional payload with your feed, you aren't winning much.

`"Changes to the post that happen in the feed or detail lightbox should be reflected in the other with little work (e.g., if I like it from the feed, I should see that like and the new like count number if I go to the lightbox - or the opposite.)"`

You are mixing technologies here --- the Like button is an API call to Facebook. Whether those changes propagate to other instantiations of the Facebook Like button on the same page is up to how Facebook handles it; I'm sure a quick google would help you out. For data specific to YOUR website, however, there are a couple of different use cases:

- Say I change the title in my lightbox and also want the change to propagate to the feed it's currently being displayed in. If your "save edit" action POSTs to the server, the success callback could trigger updating the new value with a websocket. This change would propagate not just to your screen, but to everyone else's screen as well.
- You could also be talking about two-way data binding (AngularJS is great at this). With two-way data binding, your "model", or the data you get back from your web service, can be bound to multiple places in your view. This way, as you edit one part of the page that is sharing the same model, the other will update in real time alongside it. This happens before any HTTP request, so it is a completely different use case.

`"We would like to migrate our mobile site (currently in Sencha Touch) to also use the same code base for the parts that are common so we can have closer feature parity between mobile and the main site."`

You should really take a look at modern responsive CSS frameworks like **[Bootstrap](http://twitter.github.io/bootstrap/)** and **[Foundation](http://foundation.zurb.com/)**. The point of using responsive web design is that you only have to build the site once to accommodate all the different screen sizes.

If you are talking about feature modularity, AngularJS takes the cake. The idea is that you can export your website components into modules that can be used for another project. This can include views as well. And if you built the views with a responsive framework, guess what --- you can use them anywhere now.
`"1) Will it be possible/problematic to have initial page loads be static while rendering additional pages via the templates?"`

As discussed above, it's really best to move away from that kind of approach. If you absolutely need it, templating engines don't care whether your payload was rendered server side or client side. Links to partial pages will be just as accessible.

`"2) Is it problematic to have multiple data sources for different parts of the page - e.g. the main post part comes from embedded JSON data and from "see more"s in the feed, while the additional detail would come from a different AJAX call?"`

Again, this is exactly what the industry is moving into. You will be saving in "perceived" and "actual" load time using an initial static bootstrap that fetches all of your external API data --- this will also make your development cycle much faster, because you are separating concerns into completely independent pieces. Your API shouldn't care about your view, and your view shouldn't care about your API. The idea is that both your API and your front-end code can become modular / reusable when you break them into smaller pieces.

`"3) While the two-way binding is cool - I'm concerned it might be a negative in our case because of the number of items being rendered. The number of elements that we need two-way binding for is relatively small."`

I'm also going to combine this question with the comment you left below:

`"Thanks for the answer! Can you clarify - it seems that 1) and 2) just deal with how you would implement infinite scrolling, not the performance issues that might come from such an implementation. It seems that 3 addresses the problem in a way similar to recent versions of Sencha Touch, which could be a good solution"`

The performance issues you will run into are totally subjective. I tried to outline the performance considerations like throttling in the discussion, because throttling can drastically reduce the amount of stress your server takes and the work your user's browser has to do with each new result set appended into the DOM.

Infinite scroll, after a while, will eat up your user's browser memory. That much I can tell you is inevitable, but only through testing will you be able to tell how much. In my experience I can tell you that a user's browser can handle a great deal of abuse, but again, how big your payload is for each result set and what directives you are running on all of your results are totally subjective. There are solutions that render only a ranged data set, as in option three I described, but those have their limitations as well. API data coming back shouldn't be any more than 1-2 KB in size and should only take about 50-200ms to return a query. If you aren't meeting those speeds, maybe it's time to re-evaluate your queries or cut down on the size of the result set coming back by using child IDs to query other endpoints for specifics.
Run python one-liner in bash script

The following is the python one-liner I want to run as part of my bash script

```
python -c "from xml.dom.minidom import parse;dom = parse('/path/to/pom.xml');print [n.firstChild.data for n in dom.childNodes[0].childNodes if n.firstChild and n.tagName == 'version']"
```

(pom.xml is a maven POM xml file)

I want to have the result of the command be assigned to the variable **MVN\_VER**

This is my base script:

```
WS="/path/to"

PY_GET_MVN_VERS="from xml.dom.minidom import parse;dom = parse(\'${WS}/pom.xml\')\;print [n.firstChild.data for n in dom.childNodes[0].childNodes if n.firstChild and n.tagName == \'version\']"

funcion test_mvn {
    MVN_VER=`python -c \"${PY_GET_MVN_VERS}\"`
    echo ${MVN_VERS}
}

test_mvn
```

However, it fails to run. If I run the script with the `-x` trace option, this is what I see:

```
++ python -c '"from' xml.dom.minidom import 'parse;dom' = 'parse(\'\''/path/to/pom.xml\'\'')\;print' '[n.firstChild.data' for n in 'dom.childNodes[0].childNodes' if n.firstChild and n.tagName == '\'\''version\'\'']"'
  File "<string>", line 1
    "from
```

I think it has something to do with escaping the python code. How can I escape it properly?
There is no need for escaping or moving the argument to its own variable. But, keeping it mostly the same, the following works for me: ``` #!/usr/bin/env bash WS="/Users/danielbeck/Desktop" PY_GET_MVN_VERS="from xml.dom.minidom import parse;dom = parse('${WS}/pom.xml');print [n.firstChild.data for n in dom.childNodes[0].childNodes if n.firstChild and n.tagName == 'version']" function test_mvn { MVN_VER=$( python -c "${PY_GET_MVN_VERS}" ) echo ${MVN_VER} } test_mvn ``` `/Users/danielbeck/Desktop/pom.xml` is the example minimal POM from the Maven docs: ``` <project> <modelVersion>4.0.0</modelVersion> <groupId>com.mycompany.app</groupId> <artifactId>my-app</artifactId> <version>1</version> </project> ``` Output: ``` [u'1'] ``` --- Please throw away your code and just use mine (after adjusting `WS`) instead of adjusting yours until it works. You have quite a few syntax errors in there.
Why don't I have to escape the equal sign?

For example:

```
"1+1=2".replace(/=/,"aa");
"1+1=2".replace(/\=/,"aa");
```

return the same result. Does this mean I don't have to escape "=" (the equal sign) in JavaScript? I remember that I always had to escape the equal sign in Java and .NET.

I tried to find some info from <https://www.ecma-international.org/ecma-262/7.0/index.html> but didn't come up with anything. Can anyone help me find where the specification talks about escaping the equal sign?
I'd go with [this table](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions#Using_special_characters) in the MDN web docs. You'll see that `=` has no special meaning - just as @Barmar stated. You referred to [lookaheads](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions#special-lookahead), which use `?=`, but without the leading `?` it's just an equal sign.

There are not many special characters which need an additional special character to "work"; I guess that's why it's confusing. Think of it as `-` in combination with `[` and `]`:

- `0-9` matches "0-9"
- `[0-9]` matches "0" **and** "1" **and** ... **and** "9"
- `[0\-9]` matches "0" **and "-"** and "9" (the escaped hyphen is a literal, so the digits in between are not included)

So `-` alone has no special meaning and does not have to be escaped; it only does when combined with square brackets. The same applies to `=` with/without `?`.
Ember: when does the controller reload (or reset)?

I've noticed that if I use the same controller for different routes it does not get reset, so I can keep data shared between routes, which is really helpful for me.

But I wonder... when does the controller reload in Ember (run its init and clean all of its properties)? And can I manually tell the controller to reload itself?

Thanks for the help guys :)
The controllers are generally singleton instances (excluding `itemController` instances); they live for the life of the page. If you need to reset some properties, you can do it during `setupController` of the route in need.

```
App.FooRoute = Ember.Route.extend({
  model: function(){
    //return something...
  },
  setupController: function(controller, model){
    this._super(controller, model);
    controller.setProperties({foo:'asdf', bar: 'ewaf'});
  }
});
```

or you can define some method on the controller that resets it all, and call it during `setupController`. Computed properties are all marked dirty and recalculated automatically when the model behind the controller is swapped out.

```
App.FooRoute = Ember.Route.extend({
  model: function(){
    //return something...
  },
  setupController: function(controller, model){
    this._super(controller, model);
    controller.reset();
  }
});

App.FooController = Ember.ObjectController.extend({
  foo: 'asdf',
  bar: 'wert',
  reset: function(){
    this.setProperties({foo:'asdf', bar: 'ewaf'});
  }// if you want it to happen on init tack on .on('init') right here
});
```

## on init

```
App.FooController = Ember.ObjectController.extend({
  foo: 'asdf',
  bar: 'wert',
  reset: function(){
    this.setProperties({foo:'asdf', bar: 'ewaf'});
  }.on('init')
});
```
How do I check if the user clicked inside the current component? I have a component called `Dialog`, in which I attach an event listener on mouse clicks on the `window` object. ``` componentDidMount() { document.addEventListener('click', this.handleClick); } componentWillUnmount() { document.removeEventListener('click', this.handleClick); } ``` How can I detect (in the `handleClick` function) whether a click has been fired inside the component or outside? Note that this dialog contains different elements and child components.
`parent.contains(child)` is your friend. This solution using `refs` might not be perfect, but simply using `this` does not work as it's not a proper DOM node. I'm using React 15 here, so keep in mind that in earlier versions you'd have to use `.getDOMNode()` on the parent. ``` class Dialog extends React.Component { constructor() { super(); this.handleClick = this.handleClick.bind(this); } componentDidMount() { document.addEventListener('click', this.handleClick); } componentWillUnmount() { document.removeEventListener('click', this.handleClick); } handleClick(e) { if (this.node.contains(e.target)) { console.log('You clicked INSIDE the component.') } else { console.log('You clicked OUTSIDE the component.') } } render() { return( <span ref={node => this.node = node}> Level 0<br/> <span> Level 1.<br/> <span>Level 2.</span> </span> </span> ); } } ReactDOM.render(<Dialog/>, document.getElementById('View')); ``` ``` <script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.1.0/react.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.1.0/react-dom.min.js"></script> <div id="View"></div> ```
JavaScript - Combine two arrays item by item

I have two arrays of arrays where the first contains the location name and the second contains the location latitude and longitude values. Each item in each array corresponds to its counterpart in the other, and both arrays have the same number of items, like so:

```
var arr1 = [['Location #1'],['Location #2']];
var arr2 = [['36.1978319','-83.02365759999999'],['38.679842','-121.7457402']];
```

What I need is a combined array of the two sets of items, but not concatenated, like so:

```
var arr3 = [['Location #1','36.1978319','-83.02365759999999'],
            ['Location #2','38.679842','-121.7457402']];
```

The only way I can think of doing this would be with something like a combined for loop, but I can't get the syntax correct and I'm not sure this is even possible... something like this:

```
for ((var a = 0; a < arr1.length; a++) && (var b = 0; b < arr2.length; b++)) {
    arr3.push(arr1[a] + ',' + arr2[b]);
}
```

Is there a way to solve this problem with pure javascript?
I suggest [`Array.map`](https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/Array/map) for its shortness: ``` var arr1 = [['Location #1'], ['Location #2']]; var arr2 = [['36.1978319', '-83.02365759999999'], ['38.679842', '-121.7457402']]; var combined = arr1.map((element, i) => element.concat(arr2[i])); console.log(combined); ``` For a more generic solution (combining an arbitrary number of arrays), refer to [Javascript equivalent of Python's zip function](https://stackoverflow.com/questions/4856717/javascript-equivalent-of-pythons-zip-function)
Why does using $this inside a class in a Wordpress plugin throw a fatal error?

I'm in the process of writing a Wordpress plugin which creates a page in the admin area, as well as executing some frontend code. The code below throws a nice `Fatal error: Using $this when not in object context` error, which is rather mystifying, as the variable is called inside the class.

Maybe I'm not following the intended Wordpress plugin structure for functions and classes, but the conceptual code below was created using the relevant entries on plugin development in the Wordpress Codex.

Could somebody explain why the error is triggered? When I create an instance of the class outside of the Wordpress codebase, everything is fine.

```
if (!class_exists("MyClass")) {
    class MyClass {
        var $test = 'Test variable';

        public function index() {
            //Index code
        }

        public function add() {
            echo $this->test;
        }
    }
}

add_action('admin_menu', 'my_plugin_menu');

function my_plugin_menu() {
    add_menu_page('My Plugin', 'My Plugin', 'manage_options', 'my-plugin', array('MyClass', 'index'));
    add_submenu_page('my-plugin', 'Add New Thing', 'Add New', 'manage_options', 'my-plugin-add', array('MyClass', 'add'));
}
```
So, I seem to have fixed it by going back to basics and asking Google the humble question: *"Using classes in Wordpress plugins".* Both the article by [Jay Fortner](http://w3prodigy.com/behind-wordpress/php-classes-wordpress-plugin/) and one on [dConstructing.com](http://www.dconstructing.com/php/2011/01/08/using-classes-to-develop-wordpress-plugins/) were helpful.

Basically, I'm now calling `add_menu_page` and `add_submenu_page` from within the class. I was under the impression those functions somehow created an object, but they obviously don't.

My code now looks something like this, and I'm able to call the declared class variable without error (note that the `admin_menu` callback must also point at the method, not at a global function):

```
if (!class_exists("MyClass")) {
    class MyClass {

        var $test = 'Test variable';

        function __construct() {
            // Register the method (not a global function) as the callback
            add_action('admin_menu', array(&$this, 'my_plugin_menu'));
        }

        function my_plugin_menu() {
            add_menu_page('My Plugin', 'My Plugin', 'manage_options', 'my-plugin', array(&$this, 'index'));
            add_submenu_page('my-plugin', 'Add New Thing', 'Add New', 'manage_options', 'my-plugin-add', array(&$this, 'add'));
        }

        public function index() {
            //Index code
        }

        public function add() {
            echo $this->test;
        }
    }

    new MyClass;
}
```
How do I deploy an OSX or iOS Delphi project from the command line?

I'm building my Delphi apps using a script like

```
call "C:\Program Files (x86)\Embarcadero\RAD Studio\11.0\bin\rsvars.bat"
msbuild.exe "C:\Projects\Foo\Bar.dproj"
```

Now I want to add an option to deploy the application to an OSX (or iOS) system by modifying this script. Is it possible to deploy an OSX or iOS Delphi project from the command line?
To deploy your application to a remote location, you must use the [paclient.exe (Platform Assistant Client Application)](http://docwiki.embarcadero.com/RADStudio/XE4/en/Paclient.exe,_the_Platform_Assistant_Client_Application) tool. To see the parameters passed, build and run your project from RAD Studio and then check the `Build` tab of the Messages window of the IDE. Check the next image for an OSX application using a profile called `Local`

![enter image description here](https://i.stack.imgur.com/jrWnE.png)

From here you can extract all the parameters passed to paclient.exe

- Delete the previously deployed files on the host (if they exist)

> c:\program files (x86)\embarcadero\rad studio\11.0\bin\paclient.exe
> --Clean="Project7.app,C:\Users\RRUZ\Desktop\Test Deploy\_@emb\_.tmp"
>
>

The `_@emb_.tmp` file is a temp file created by the IDE that contains all the files to deploy; in this case the content looks like this:

```
Project7.app\Contents\MacOS\Project7.rsm
Project7.app\Contents\Entitlements.plist
Project7.app\Contents\MacOS\libcgunwind.1.0.dylib
Project7.app\Contents\MacOS\Project7
Project7.app\Contents\Resources\Project7.icns
```

- Copy the Info.plist file (contains setup info for the .app, like the icon used and the version) to the host

> c:\program files (x86)\embarcadero\rad studio\11.0\bin\paclient.exe
> --put="OSX32\Debug\Project7.info.plist,Project7.app\Contents\,1,Info.plist" Local
>
>

- Copy the `libcgunwind.1.0.dylib` library to the host

> c:\program files (x86)\embarcadero\rad studio\11.0\bin\paclient.exe
> --put="c:\program files (x86)\embarcadero\rad studio\11.0\Redist\osx32\libcgunwind.1.0.dylib,Project7.app\Contents\MacOS\,1,libcgunwind.1.0.dylib"
> Local
>
>

- Copy the application executable to the host

> c:\program files (x86)\embarcadero\rad studio\11.0\bin\paclient.exe
> --put="OSX32\Debug\Project7,Project7.app\Contents\MacOS\,1,Project7" Local
>
>

- Copy the remote debug symbols file to the host

> c:\program files (x86)\embarcadero\rad studio\11.0\bin\paclient.exe
> --put="OSX32\Debug\Project7.rsm,Project7.app\Contents\MacOS\,1,Project7.rsm"
> Local
>
>

- Copy the project icon to the host

> c:\program files (x86)\embarcadero\rad studio\11.0\bin\paclient.exe
> --put="c:\program files (x86)\embarcadero\rad studio\11.0\bin\delphi\_PROJECTICNS.icns,Project7.app\Contents\Resources\,1,Project7.icns"
> Local
>
>

- Copy the Entitlements.plist file to the host

> c:\program files (x86)\embarcadero\rad studio\11.0\bin\paclient.exe
> --put="OSX32\Debug\Project7.entitlements,Project7.app\Contents\,1,Entitlements.plist"
> Local
>
>

# Final script

Finally you can put all this in a script file like so

```
call "C:\Program Files (x86)\Embarcadero\RAD Studio\11.0\bin\rsvars.bat"

MSBuild Project7.dproj

"c:\program files (x86)\embarcadero\rad studio\11.0\bin\paclient.exe" --Clean="Project7.app,C:\Users\RRUZ\Desktop\Test Deploy\files.txt"

"c:\program files (x86)\embarcadero\rad studio\11.0\bin\paclient.exe" --put="OSX32\Debug\Project7.info.plist,Project7.app\Contents\,1,Info.plist" Local

"c:\program files (x86)\embarcadero\rad studio\11.0\bin\paclient.exe" --put="c:\program files (x86)\embarcadero\rad studio\11.0\Redist\osx32\libcgunwind.1.0.dylib,Project7.app\Contents\MacOS\,1,libcgunwind.1.0.dylib" Local

"c:\program files (x86)\embarcadero\rad studio\11.0\bin\paclient.exe" --put="OSX32\Debug\Project7,Project7.app\Contents\MacOS\,1,Project7" Local

"c:\program files (x86)\embarcadero\rad studio\11.0\bin\paclient.exe" --put="OSX32\Debug\Project7.rsm,Project7.app\Contents\MacOS\,1,Project7.rsm" Local

"c:\program files (x86)\embarcadero\rad studio\11.0\bin\paclient.exe" --put="c:\program files (x86)\embarcadero\rad studio\11.0\bin\delphi_PROJECTICNS.icns,Project7.app\Contents\Resources\,1,Project7.icns" Local

"c:\program files (x86)\embarcadero\rad studio\11.0\bin\paclient.exe" --put="OSX32\Debug\Project7.entitlements,Project7.app\Contents\,1,Entitlements.plist" Local
```

Note: Remember to create a file with the names of all the files to deploy; in this sample script it is called `files.txt`. This file is used by paclient to clean up previously deployed files.
int16 overflow leading to infinite loop

As we know, `int16_t` has a max value of 32767, so the following code will just loop:

```
for (int16_t i = 0; i < 65535; i++) {
    // infinite loop
}
```

When I change the code, it loops as well:

```
const int32_t t = 65535;
for (int16_t i = 0; i < t; i++) {
    // infinite loop
}
```

But when I make it `uint32_t` instead of `int32_t`, it actually exits:

```
const uint32_t t = 65535;
for (int16_t i = 0; i < t; i++) {
    // actually exits
}
```

Is this due to some compiler trick it does for me? I assume that when I do the comparison:

```
i < t
```

it auto-converts for me in the last version? But then I still don't understand why it exits only in the last version...
When you compare a signed type and an unsigned type, the signed type is promoted to an unsigned type before the comparison is performed. Hence, the last block of code works.

---

Take a look at the following program

```
#include <iostream>
#include <cstdint>

int main()
{
    const uint32_t t = 65535;
    int16_t i = 32765;
    for (; i < t; i++)
    {
        std::cout << i << ", " << static_cast<uint32_t>(i) << std::endl;
    }
    std::cout << i << ", " << static_cast<uint32_t>(i) << std::endl;
}
```

and its output with g++ 6.4.0

```
32765, 32765
32766, 32766
32767, 32767
-32768, 4294934528
```

At the point of integer overflow, the value of `i` flips from the maximum value to the minimum value. More importantly, when that number is promoted to `uint32_t`, it appears to be a large value.

The binary representation of `4294934528` in `uint32_t` is:

```
11111111 11111111 10000000 00000000
```

The binary representation of `-32768` in `int16_t` is:

```
10000000 00000000
```

There seems to be a correlation there although I don't fully understand what the rules of promoting a signed type to an unsigned type are to explain the correlation.
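As a minimal sketch of how to avoid the trap in the first place (not the only possible fix): give the loop counter a type wide enough to hold the bound, so it never wraps and the signed/unsigned comparison never sees a negative value.

```
#include <cstdint>

int main()
{
    const uint32_t t = 65535;
    // Use a counter type that can hold the bound: no overflow, and no
    // surprising signed-to-unsigned conversion in the comparison.
    for (uint32_t i = 0; i < t; i++)
    {
        // runs exactly 65535 times, then exits
    }
}
```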
Converting an image to grayscale in Windows Phone 8

How can I convert a normal image to a grayscale image in Windows Phone 8? Is any provision for grayscale conversion available in WritableBitmapEx?
Try this extension method:

```
public static WriteableBitmap ToGrayScale(this WriteableBitmap bitmapImage)
{
    for (var y = 0; y < bitmapImage.PixelHeight; y++)
    {
        for (var x = 0; x < bitmapImage.PixelWidth; x++)
        {
            var pixelLocation = bitmapImage.PixelWidth * y + x;
            var pixel = bitmapImage.Pixels[pixelLocation];
            var pixelbytes = BitConverter.GetBytes(pixel);
            // Standard luminance weights; the bytes of the ARGB int are
            // B, G, R, A in memory, so [2] is red, [1] green, [0] blue.
            var bwPixel = (byte)(.299 * pixelbytes[2] + .587 * pixelbytes[1] + .114 * pixelbytes[0]);
            pixelbytes[0] = bwPixel;
            pixelbytes[1] = bwPixel;
            pixelbytes[2] = bwPixel;
            bitmapImage.Pixels[pixelLocation] = BitConverter.ToInt32(pixelbytes, 0);
        }
    }
    return bitmapImage;
}
```
Why is it not possible to update a django model instance like this: instance(\*\*update\_dict)?

I can make a new Django model object with `Model(**dictionary)`, but if I want to update an instance in the same way, I've found only `update_or_create`, which has its problems and often returns 'duplicate PK' errors. It also doesn't seem particularly Pythonic to call a custom helper function on a model object.

Alternatively, you can use this pattern:

```
Model.objects.get(id=id).update(**dict)
```

or

```
model.__dict__.update(**dict)
model.save()
```

The latter feels like a hack and I read a few times that it's not 'supposed' to be done this way. The first method requires a query call and again feels incorrect, since the model instance is likely already instantiated, and to update it we need to send a query to the DB again?

I also read that 'you can do this with a form' - but it's not always users that are updating models in the DB; there are all kinds of functions we might write which require models to be updated.

I wonder, is there a reason that the below code is not possible/implemented? Is there a reason that model objects are not directly callable and updatable in this way?

```
model_instance(**dict)
model_instance.save()
```
[**Model instances** [Django docs]](https://docs.djangoproject.com/en/dev/ref/models/instances/) do not have any built-in methods which update the field values by feeding in keyword arguments. ## Queryset `update()` The following query will update the database row immediately, but has the disadvantage that it [won't call any pre-save or post-save signals [Django docs]](https://docs.djangoproject.com/en/dev/ref/models/querysets/#update) that you may have registered to the model: ``` Model.objects.filter(id=model_instance.id).update(**dict_) ``` And as you mentioned, if you intend to keep using the same instance you would need to update its values from the database: ``` model_instance.refresh_from_db() ``` ## Why `model_instance(**dict_)` doesn't work Django model instances (objects) are not callable. Objects cannot be called like functions by default. For example, when you create a plain dict `dict_ = dict()`, you can't update it by calling `dict_(**kwargs)`. An object can be *made* callable by overriding the [`__call__` [Python docs]](https://docs.python.org/3/reference/datamodel.html#object.__call__) class method. ## A solution – custom `update()` method on model You could create your own model method so you're only making one database call (and preserving signals). For example you could create an [abstract base class [Django docs]](https://docs.djangoproject.com/en/dev/topics/db/models/#abstract-base-classes) that all of your models inherit from (rather than from `models.Model`) e.g. ``` class ModelWithUpdate(models.Model): class Meta: abstract = True def update(self, commit=False, **kwargs): for key, value in kwargs.items(): setattr(self, key, value) if commit: self.save() ``` And then your usage would be: ``` model_instance.update(**dict_) model_instance.save() # or a shortcut model_instance.update(commit=True, **dict_) ```
How to convert meters into feet and inches

I'm creating a program and I have been struggling to convert metres to feet and inches, but I finally got it working, I think. My issue now is with the variable `inchesleft`: it is a double and I am struggling to work out how to make it an integer, as I want to drop the remainder of the inches value so I can get a value of 6 feet 4 inches etc. Code below:

```
double inft, convert, inchesleft, value = 0.3048;
int ft;
string input;

Console.WriteLine("please enter amount of metres");
input = Console.ReadLine();
convert = double.Parse(input);

inft = convert / value;
ft = (int)inft;
inchesleft = convert / value % 1 * 12;

Console.WriteLine("{0} feet {1} inches.", ft, inchesleft);
Console.ReadLine();
```
**Try this:**

```
double inft, convert, value = 0.3048;
int ft, inchesleft;
string input;

Console.WriteLine("please enter amount of metres");
input = Console.ReadLine();
convert = double.Parse(input);
```

Divide the input number by 0.3048 to get feet:

```
inft = convert / value;
```

Now we have feet as a decimal. Take the whole part of the feet (before the decimal point):

```
ft = (int)inft;
```

Take the fractional part of the feet (after the decimal point) and divide it by 0.08333 (that is, 1/12) to convert it into inches:

```
double temp = (inft - Math.Truncate(inft)) / 0.08333;
```

Now we have inches as a decimal. Take the whole part of the inches (before the decimal point):

```
inchesleft = (int)temp;

// to be more accurate, use the temp variable, which keeps the
// fractional part of the inches
Console.WriteLine("{0} feet {1} inches.", ft, inchesleft);
Console.ReadLine();
```
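A more compact variant of the same arithmetic, sketched with `Math.Floor` and a multiply by 12 instead of the 0.08333 division; the variable names are just illustrative:

```
double meters = double.Parse(Console.ReadLine());
double totalFeet = meters / 0.3048;

int feet = (int)Math.Floor(totalFeet);
// The fractional feet times 12 gives inches; the cast drops the remainder.
int inches = (int)((totalFeet - feet) * 12);

Console.WriteLine("{0} feet {1} inches.", feet, inches);
```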
Why does it make sense to write str.split(line) over line.split()? For str.split(line) I'm calling a method on the str class and passing a line object, which happens to be a list full of strings, to the string object? It seems more clear to me that I should just call the split() method on my line object. I'm having trouble understanding why both ways work.
First, you're right that in this case, it's more readable (and more Pythonic, etc.) to just call `line.split()` than `str.split(line)`.

But are there any cases where `str.split` is useful? Sure. Imagine that you had a list of lines, and you wanted to split all of them. Which of these is more readable:

```
split_lines = map(str.split, lines)

split_lines = map(lambda line: line.split(), lines)
```

Because `str.split` is already a function that works on any `str`, you don't have to create a new function that works on any `str` to pass around.

---

More generally, what you're asking is why Python has "unbound methods".\* Partly it's because they just naturally fall out of the design for how methods work in Python.\*\* But mainly, it's because they're handy for passing around to higher-order functions (and because of the idea that absolutely everything should be usable as a value unless there's a good reason not to allow it).

---

As for the last part, understanding how they both work, that might be a little involved for an SO answer. You can learn the basics of how they work [in the tutorial](https://docs.python.org/3/tutorial/classes.html#method-objects); for more details, see [How methods work](http://stupidpythonideas.blogspot.com/2013/06/how-methods-work.html), which has links to other useful information. But as a quick summary:

- `line.split` is a bound method—a callable object that knows what value to pass as the `self` parameter when you later call it. So, `line.split()` just calls that bound method with no additional arguments, and `line` automatically gets passed as the `self`.
- `str.split` is an unbound method—basically just a function. So, `str.split(line)` explicitly passes `line` as the `self`.

---

\* Since 3.x, the term "unbound method" has been downplayed, because really, an unbound method is the same thing as a function.

\*\* Guido has explained this a few times; start with his 2009 blog post [First-Class Everything](http://python-history.blogspot.com/2009/02/first-class-everything.html).
What's the basic difference between JsonStore and JsonReader in the context of Ext.data?

What's the basic difference between JsonStore and JsonReader in the context of Ext.data? I mean, when should I go for JsonStore and when should I use JsonReader, as to me both seem to provide the same solution.
Actually they are two separate things. An [`Ext.data.JsonReader`](http://www.extjs.com/deploy/dev/docs/?class=Ext.data.JsonReader) reads a given JSON object and returns data records ([`Ext.data.Record`](http://www.extjs.com/deploy/dev/docs/?class=Ext.data.Record) objects) that are later stored by the respective data store.

The [`Ext.data.Store`](http://www.extjs.com/deploy/dev/docs/?class=Ext.data.Store) is the base class for all Ext storages and uses helper objects for retrieving data ([`Ext.data.DataProxy`](http://www.extjs.com/deploy/dev/docs/?class=Ext.data.DataProxy)), for writing data ([`Ext.data.DataWriter`](http://www.extjs.com/deploy/dev/docs/?class=Ext.data.DataWriter)) and for reading data ([`Ext.data.DataReader`](http://www.extjs.com/deploy/dev/docs/?class=Ext.data.DataReader)).

These base classes come in different flavors such as:

- [`Ext.data.DataProxy`](http://www.extjs.com/deploy/dev/docs/?class=Ext.data.DataProxy):
  - [`Ext.data.DirectProxy`](http://www.extjs.com/deploy/dev/docs/?class=Ext.data.DirectProxy)
  - [`Ext.data.HttpProxy`](http://www.extjs.com/deploy/dev/docs/?class=Ext.data.HttpProxy)
  - [`Ext.data.MemoryProxy`](http://www.extjs.com/deploy/dev/docs/?class=Ext.data.MemoryProxy)
  - [`Ext.data.ScriptTagProxy`](http://www.extjs.com/deploy/dev/docs/?class=Ext.data.ScriptTagProxy)
- [`Ext.data.DataWriter`](http://www.extjs.com/deploy/dev/docs/?class=Ext.data.DataWriter)
  - [`Ext.data.JsonWriter`](http://www.extjs.com/deploy/dev/docs/?class=Ext.data.JsonWriter)
  - [`Ext.data.XmlWriter`](http://www.extjs.com/deploy/dev/docs/?class=Ext.data.XmlWriter)
- [`Ext.data.DataReader`](http://www.extjs.com/deploy/dev/docs/?class=Ext.data.DataReader)
  - [`Ext.data.JsonReader`](http://www.extjs.com/deploy/dev/docs/?class=Ext.data.JsonReader)
  - [`Ext.data.XmlReader`](http://www.extjs.com/deploy/dev/docs/?class=Ext.data.XmlReader)

This all builds up to a very extendable component that allows the developer to configure exactly what he needs to tweak. To make it easier for developers (especially new ones) Ext comes with some pre-configured data stores:

- [`Ext.data.ArrayStore`](http://www.extjs.com/deploy/dev/docs/?class=Ext.data.ArrayStore) to make reading from simple Javascript arrays easier
- [`Ext.data.DirectStore`](http://www.extjs.com/deploy/dev/docs/?class=Ext.data.DirectStore), just a store preconfigured with an [`Ext.data.DirectProxy`](http://www.extjs.com/deploy/dev/docs/?class=Ext.data.DirectProxy) and an [`Ext.data.JsonReader`](http://www.extjs.com/deploy/dev/docs/?class=Ext.data.JsonReader)
- [`Ext.data.JsonStore`](http://www.extjs.com/deploy/dev/docs/?class=Ext.data.JsonStore), just a store preconfigured with an [`Ext.data.JsonReader`](http://www.extjs.com/deploy/dev/docs/?class=Ext.data.JsonReader)
- [`Ext.data.XmlStore`](http://www.extjs.com/deploy/dev/docs/?class=Ext.data.XmlStore), just a store preconfigured with an [`Ext.data.XmlReader`](http://www.extjs.com/deploy/dev/docs/?class=Ext.data.XmlReader)

So actually an [`Ext.data.JsonStore`](http://www.extjs.com/deploy/dev/docs/?class=Ext.data.JsonStore) is just a convenience class to make it easier for the developer.
The following two snippets will create the same (or comparable) stores:

```
var store = new Ext.data.JsonStore({
    url: 'get-images.php',
    root: 'images',
    idProperty: 'name',
    fields: ['name', 'url', {name:'size', type: 'float'}, {name:'lastmod', type:'date'}]
});

// or

var store = new Ext.data.Store({
    url: 'get-images.php',
    reader: new Ext.data.JsonReader({
        root: 'images',
        idProperty: 'name',
        fields: ['name', 'url', {name:'size', type: 'float'}, {name:'lastmod', type:'date'}]
    })
});
```
Get/Set from C# to Java

I'm working on a project to translate a C# project to Java. I have the following get/set block in C#:

```
public Unit[] Units
{
    get
    {
        Unit[] units_aux = new Unit[this.list_units.Count];
        this.list_units.CopyTo(units_aux);
        return units_aux;
    }
    set
    {
        if (value == null) return;
        Unit[] units_aux = (Unit[])value;
        this.list_units.Clear();
        foreach (Unit u in units_aux)
            this.list_units.Add(u);
    }
}
```

I want to translate this to Java, but I have not been successful in translating it without syntax errors. I'm very new to Java, so maybe this is a basic question, but I haven't found any information on how to do this that won't produce errors.

Thanks for your help
You'd basically have to convert it to a pair of methods: ``` public Unit[] getUnits() { // Method body } public void setUnits(Unit[] value) { // Method body } ``` Java doesn't have properties at a *language* level - the above is basically just a (very common) convention. I should note, by the way, that this C# code really isn't terribly nice: - There are simpler ways of converting an array to a list and vice versa - The setter *ignores* a null value when I'd expect it to throw an exception - By cloning the array, it doesn't have the generally-expected behaviour (at least, my expectations) if you set the property and then modify the array contents. It's usually a bad idea to have an array as a property type anyway; if you could get away with making it a read-only collection, that would be nicer and make life *much* simpler.
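For completeness, here is a hedged sketch of what those two method bodies could look like, assuming the backing field is a `List<Unit>` named `units` (the class name is illustrative and `Unit` is the type from the question). It also applies the notes above: a simpler list/array conversion and throwing on `null` instead of silently ignoring it:

```
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class UnitHolder {
    private final List<Unit> units = new ArrayList<>();

    public Unit[] getUnits() {
        // Copy out, so callers can't mutate the internal list via the array.
        return units.toArray(new Unit[0]);
    }

    public void setUnits(Unit[] value) {
        if (value == null) {
            throw new IllegalArgumentException("value must not be null");
        }
        units.clear();
        units.addAll(Arrays.asList(value));
    }
}
```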
Difference between rows in long format for R based on other column variables

I have an R dataframe such as:

```
df <- data.frame(ID = rep(c(1, 1, 2, 2), 2),
                 Condition = rep(c("A", "B"), 4),
                 Variable = c(rep("X", 4), rep("Y", 4)),
                 Value = c(3, 5, 6, 6, 3, 8, 3, 6))

  ID Condition Variable Value
1  1         A        X     3
2  1         B        X     5
3  2         A        X     6
4  2         B        X     6
5  1         A        Y     3
6  1         B        Y     8
7  2         A        Y     3
8  2         B        Y     6
```

I want to obtain the difference between each value of `Condition` (A - B) for each `Variable` and `ID` while keeping the long format. That would mean the value must appear every two rows, like this:

```
  ID Condition Variable Value diff_value
1  1         A        X     3         -2
2  1         B        X     5         -2
3  2         A        X     6          0
4  2         B        X     6          0
5  1         A        Y     3         -5
6  1         B        Y     8         -5
7  2         A        Y     3         -3
8  2         B        Y     6         -3
```

So far, I managed to do something relatively similar using the `dplyr` package, but it does not work if I want to maintain the long format:

```
df %>%
  group_by(Variable, ID) %>%
  mutate(diff_value = lag(Value, default = Value[1]) - Value)

# A tibble: 8 x 5
# Groups:   Variable, ID [4]
     ID Condition Variable Value diff_value
  <dbl> <chr>     <chr>    <dbl>      <dbl>
1     1 A         X            3          0
2     1 B         X            5         -2
3     2 A         X            6          0
4     2 B         X            6          0
5     1 A         Y            3          0
6     1 B         Y            8         -5
7     2 A         Y            3          0
8     2 B         Y            6         -3
```
You don't have to use `lag`; use `diff` instead:

```
df %>%
  group_by(Variable, ID) %>%
  mutate(diff = -diff(Value))
```

Output:

```
# A tibble: 8 x 5
# Groups:   Variable, ID [4]
     ID Condition Variable Value  diff
  <dbl> <chr>     <chr>    <dbl> <dbl>
1     1 A         X            3    -2
2     1 B         X            5    -2
3     2 A         X            6     0
4     2 B         X            6     0
5     1 A         Y            3    -5
6     1 B         Y            8    -5
7     2 A         Y            3    -3
8     2 B         Y            6    -3
```
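The same result is also possible without dplyr. A base R sketch using `ave`, assuming exactly two rows (one A, one B) per `Variable`/`ID` group, as in the example:

```
# -diff(v) is A - B for each two-row group; rep() stretches it to both rows
df$diff_value <- with(df, ave(Value, Variable, ID,
                              FUN = function(v) rep(-diff(v), length(v))))
df
```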
File upload error in servlet while using Apache Tomcat 7.0.40.0

When I use Apache Tomcat 7.0.34 for file uploading using "org.apache.tomcat.util.fileupload", no error is displayed and everything works fine. But when I use Apache Tomcat 7.0.40, an error occurs on the line "parseRequest(request)". I can't quite call it an error, because if I use RequestContext then the error goes away, but I don't know how to use the RequestContext interface. Please help me with how to use RequestContext, because I need to pass an instance to the "parseRequest(RequestContext ctx)" method.

```
public void service(HttpServletRequest request, HttpServletResponse response)
{
    response.setContentType("text/html;charset=UTF-8");
    String status = null;
    List<FileItem> items = null;
    try
    {
        if (ServletFileUpload.isMultipartContent(request))
        {
            items = new ServletFileUpload(new DiskFileItemFactory()).parseRequest(request);
            for (FileItem item : items)
            {
                if (item.getFieldName().equals("status"))
                    status = item.getString();
            }
        }
    }
    catch (Exception e)
    {
        e.printStackTrace();
    }
}
```

I need to put a RequestContext instance inside parseRequest(RequestContext ctx) but don't know how to use RequestContext.
This is **not** the right way to process a file upload in Servlet 3.0. You should instead be using the [`@MultipartConfig`](http://docs.oracle.com/javaee/6/api/javax/servlet/annotation/MultipartConfig.html) annotation on the servlet and be using the [`HttpServletRequest#getPart()`](http://docs.oracle.com/javaee/6/api/javax/servlet/http/HttpServletRequest.html#getPart%28%29) method to obtain the uploaded file, which was introduced in Servlet 3.0.

The `org.apache.tomcat.util.fileupload` package contains exactly those classes which are doing all the "behind the scenes" work of this new Servlet 3.0 feature. You shouldn't be using them directly, just as you shouldn't be using `sun.*` classes when using Java SE on a Sun/Oracle JVM, and just as you shouldn't be using `com.mysql.*` classes when using JDBC on a MySQL DB. It seems that you got confused by examples targeted at Servlet 2.5 or older using [Apache Commons FileUpload](http://commons.apache.org/fileupload), which happens to use the same classnames.

Using Tomcat-specific classes would tight-couple your webapp to the specific Tomcat version and make your webapp **unportable** to other Servlet 3.0 compatible containers and even to a different Tomcat version, as you encountered yourself. You should in this particular case stick to standard classes from the `javax.servlet` package. The right way is shown in the 2nd part of this answer: [How to upload files to server using JSP/Servlet?](https://stackoverflow.com/questions/2422468/how-to-upload-files-to-server-using-jsp-servlet/2424824#2424824)

All in all, this kickoff example should get you started:

```
<form action="upload" method="post" enctype="multipart/form-data">
    <input type="text" name="status" />
    <input type="file" name="uploadedFile" />
    <input type="submit" />
</form>
```

with

```
@WebServlet("/upload")
@MultipartConfig
public class UploadServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        String status = request.getParameter("status"); // Retrieves <input type="text" name="status">
        Part uploadedFile = request.getPart("uploadedFile"); // Retrieves <input type="file" name="uploadedFile">
        InputStream content = uploadedFile.getInputStream();
        // ... (do your job here)
    }

}
```

That's all.
Verifying route preconditions prior to loading controller I'm writing a single page application in Angular, specifically [angular.dart](https://github.com/angular/angular.dart), but I'm assuming this question still applies to AngularJS. Take for example the following routes: **/login** - Expects nobody to be logged in. If someone is authenticated but not registered, redirect to "register" route, if they are registered, redirect to the "home" route. **/register** - Expects an authenticated user who hasn't finished the registration process. If not authenticated, redirect to login. If is authenticated, redirect to home. **/home** - Expects an authenticated and registered user. If not authenticated, redirect to "login" route, if not registered, redirect to "register" route. I've done quite a bit of searching but cannot find a built-in or idiomatic way to check to make sure that certain preconditions are met before loading the controller associated with a particular route, and to redirect appropriately when these preconditions are not met. Any help would be greatly appreciated!
Angular.Dart and Angular.JS routing frameworks are very/fundamentally different. Angular.Dart is using a third-party routing framework (<https://github.com/dart-lang/route/tree/experimental_hierarchical>) and only implements angular-specific features like `ng-view` and ways to bind routes to templates via the `ViewFactory`. So your question really falls into the `route_hierarchical` package domain.

Currently you can veto a user's attempt to navigate away from a route, but I guess what you want is the ability to veto a user entering a route depending on whether the user is logged in or not. Unfortunately this is not yet supported, but it is planned.

What you could try in the meantime is creating a custom viewFactory wrapper.

```
class RecipesRouteInitializer implements RouteInitializer {
  void init(Router router, ViewFactory view) {
    router
      ..addRoute(
          name: 'login',
          path: '/login',
          enter: authView(router, view('login.html'), NOT_AUTH))
      ..addRoute(
          name: 'register',
          path: '/register',
          enter: authView(router, view('register.html'), AUTH_NO_REG))
      ..addRoute(
          name: 'home',
          path: '/home',
          enter: authView(router, view('home.html'), AUTH_AND_REG));
  }

  authView(Router router, ngView, mode) {
    return (RouteEvent e) {
      if (mode == AUTH_AND_REG) {
        if (!authService.isAuth) {
          router.go('login', {});
          return;
        }
        if (!authService.isReg) {
          router.go('register', {});
          return;
        }
      } else if (mode == AUTH_NO_REG) {
        if (!authService.isAuth) {
          router.go('login', {});
          return;
        }
        if (authService.isReg) {
          router.go('home', {});
          return;
        }
      } else if (mode == NOT_AUTH) {
        if (authService.isAuth) {
          router.go('register', {});
          return;
        }
      }
      ngView(e);
    };
  }
}
```

DISCLAIMER: this is just an idea... might totally not work.
Is it necessary to append querystrings to images in an img tag and images in css to refresh cached items?

I know that a common practice is to set an expiry time far in the future for css, javascript and image files, and then make sure that all browsers fetch the latest content as soon as the files change, by appending a querystring (or changing the filename) like this:

From this `<link rel="stylesheet" type="text/css" href="base.css">`:

to this:

```
<link rel="stylesheet" type="text/css" href="base.css?v=1234">
```

or:

```
<link rel="stylesheet" type="text/css" href="base_1234.css">
```

But what about images referenced in a css file?

```
// Inside base.css
background: url(/img/logo.png)

// Is this necessary(?):
background: url(/img/logo.png?v=1234)
```

Or will `/img/logo.png` be reloaded automatically when base.css changes filename to `base.css?v=1234` or `base_1234.css`?

And also, what about images in `src` for `img`-tags?
The browser is making these requests after determining an absolute path, so if you are 'cache busting' your static assets in this way, you do need to do it for each file individually, no matter where it's called.

You can, however, make it easier on yourself by making it a variable on the backend. You can append the string as a variable that you only have to update in one place on your backend, probably in conjunction with a CSS pre-processor like LESS or SASS to get all your images.

Or use relative paths to your advantage by adding the version to the base url (site.com/folder/styles.css => site.com/v123/folder/styles.css). This can be added to an existing static asset base url variable in your app; then on the server you can just use a UrlRewrite to strip out the version. This way all the images referred to relatively from your CSS automatically get the version too, having the same 'cache busting' effect.

You could be extra clever and set the variable automatically as part of your build process as the last commit hash from your version control system - which will also make future debugging easier.
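For the relative-path approach, the server-side rewrite can be a single rule. A hedged sketch for Apache in an `.htaccess` context (the `v123` prefix is illustrative; nginx has an equivalent `rewrite` directive):

```
# Strip a version segment, so /v123/folder/styles.css serves
# the same file as /folder/styles.css
RewriteEngine On
RewriteRule ^v\d+/(.*)$ /$1 [L]
```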
Force Auto Layout to update UIView frame correctly at viewDidLoad

As simple as it might be, I still find myself struggling with the correct solution. I'm trying to understand what is the **correct** way to find the **REAL** `UIView` (or any other subview) frame inside **`viewDidLoad`** when using Auto Layout.

The main issue is that in viewDidLoad, the views haven't had their constraints applied yet. I know that the "known" answer for this situation is

```
override func viewDidLoad() {
    super.viewDidLoad()
    view.layoutIfNeeded()
    stepContainer.layoutIfNeeded() // We need this Subview real frame!
    let realFrame = stepContainer.frame
}
```

But I found out that it's not ALWAYS working, and from time to time it gives the wrong frame (i.e. not the final frame that is displayed). After some more research I found that wrapping this code in `DispatchQueue.main.async { }` gives an accurate result. But I'm not sure if it's the correct way to handle this, or whether I'm causing some kind of under-the-hood issue by using it.

Final "working" code:

```
override func viewDidLoad() {
    super.viewDidLoad()
    DispatchQueue.main.async {
        self.stepContainer.layoutIfNeeded()
        print(self.stepContainer.frame) // Real frame..
    }
}
```

**NOTE: I need to find what the real frame is only from viewDidLoad; please don't suggest to use viewDidAppear/layoutSubviews etc.**
As @DavidRönnqvist pointed out

> The reason dispatch async gives you the "correct" value here is that
> it get scheduled to run in the next run loop; after `viewWillAppear`
> and `viewDidLayoutSubviews` has run.

**Example**

```
override func viewDidLoad() {
    super.viewDidLoad()

    DispatchQueue.main.async {
        print("DispatchQueue.main.async viewDidLoad")
    }
}

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    print("viewWillAppear")
}

override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)

    print("viewDidAppear")
}

override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()

    print("viewDidLayoutSubviews")
}
```

> viewWillAppear
>
> viewDidLayoutSubviews
>
> viewDidAppear
>
> DispatchQueue.main.async viewDidLoad

Code inside `DispatchQueue.main.async` from `viewDidLoad` is even called after `viewDidAppear`. So using `DispatchQueue.main.async` in `viewDidLoad` gives you the right frame, but it isn't as early as possible.

**Answer**

- If you want to get the right frame as early as possible, `viewDidLayoutSubviews` is the correct place to do it.
- If you **have to** put some code inside `viewDidLoad`, you are doing it the right way. It seems like `DispatchQueue.main.async` is the best way to do it.
View on the right only if there is enough space, otherwise go at the bottom

Using the `ConstraintLayout`, I have 2 `TextView`s that I want to be next to each other when there is enough space to see the content of both. But if TextView A takes a lot of space and fills the whole screen width, then I would like TextView B to go under TextView A instead of still being on the right but invisible (off the screen).

So with the code below I get my 2 `TextView`s right next to each other.

```
<android.support.constraint.ConstraintLayout
    android:layout_width="match_parent"
    android:layout_height="wrap_content">

    <TextView
        android:id="@+id/text_view_a"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_marginLeft="8dp"
        app:layout_constraintLeft_toLeftOf="parent"
        app:layout_constraintTop_toTopOf="parent"
        android:layout_marginTop="8dp"
        tools:text="a very long text ..." />

    <TextView
        android:id="@+id/text_view_b"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_marginLeft="8dp"
        android:layout_marginTop="8dp"
        app:layout_constraintLeft_toRightOf="@+id/text_view_a"
        app:layout_constraintTop_toTopOf="parent"
        tools:text="some text" />

</android.support.constraint.ConstraintLayout>
```

How should I edit this code snippet so that TextView B goes under TextView A if there isn't any space left for it?
Take a look at Google's FlexboxLayout. It may be what you need to flow your `TextView`s. You can find some documentation [here](https://github.com/google/flexbox-layout) and a tutorial [here](https://blog.stylingandroid.com/flexboxlayout-part-1/).

> FlexboxLayout is a library project which brings the similar capabilities of CSS Flexible Box Layout Module to Android.

You can look at wrapping your `TextView`s in the FlexboxLayout within the ConstraintLayout to allow them to flow as needed.

Here is a sample layout. The two `TextView`s will be stacked due to the long text of the top `TextView`. If you shorten this text, the two `TextView`s will be side-by-side. I hope this helps.

```
<?xml version="1.0" encoding="utf-8"?>
<android.support.constraint.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context="com.example.flexer.MainActivity">

    <com.google.android.flexbox.FlexboxLayout
        xmlns:android="http://schemas.android.com/apk/res/android"
        xmlns:app="http://schemas.android.com/apk/res-auto"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        app:alignContent="stretch"
        app:alignItems="stretch"
        app:flexWrap="wrap"
        app:layout_constraintLeft_toLeftOf="parent"
        app:layout_constraintRight_toRightOf="parent"
        app:layout_constraintTop_toTopOf="parent">

        <TextView
            android:id="@+id/text_view_a"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_margin="8dp"
            android:text="a very long text ...a very long text ...a very long text ...a very long text ...a very long text ...a very long text ...a very long text ...a very long text ..." />

        <TextView
            android:id="@+id/text_view_b"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_margin="8dp"
            android:text="some text" />
    </com.google.android.flexbox.FlexboxLayout>
</android.support.constraint.ConstraintLayout>
```
How do you execute a template with 500 status code in Go? I know that I can execute template with: ``` t.ParseFiles(name) t.Execute(w, page) ``` And respond 500 with a message like this: ``` http.Error(w, err.Error(), http.StatusInternalServerError) ``` But how should I return 500 with a template that contains that message?
Call [`ResponseWriter.WriteHeader`](https://godoc.org/net/http#ResponseWriter) before you execute your template: > > WriteHeader sends an HTTP response header with status code. > > If WriteHeader is not called explicitly, the first call to Write > will trigger an implicit WriteHeader(http.StatusOK). > > Thus explicit calls to WriteHeader are mainly used to send error codes. > > > ``` t.ParseFiles(name) w.WriteHeader(http.StatusInternalServerError) t.Execute(w, page) ``` If you look at the [source code of `http.Error`](https://golang.org/src/net/http/server.go#L1951), you can see it's doing the same thing.
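Putting the pieces together, here is a minimal handler sketch; the template file name, the `doWork` helper and the data map are placeholder assumptions, not part of the question:

```
package main

import (
	"errors"
	"html/template"
	"net/http"
)

// error.html is a placeholder template expecting a .Message field.
var errTmpl = template.Must(template.ParseFiles("error.html"))

func doWork() error { return errors.New("something broke") }

func handler(w http.ResponseWriter, r *http.Request) {
	if err := doWork(); err != nil {
		// The status line must be written before any of the body.
		w.WriteHeader(http.StatusInternalServerError)
		errTmpl.Execute(w, map[string]string{"Message": err.Error()})
		return
	}
	w.Write([]byte("ok"))
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8080", nil)
}
```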
Updating elements of multiple collections with dynamic functions

**Setup**: I have several collections of various data structures which represent the state of simulated objects in a virtual system. I also have a number of functions that transform (that is, create a new copy of the object based on the original and 0 or more parameters) these objects.

The goal is to allow a user to select some objects to apply transformations to (within the rules of the simulation), apply those functions to those objects and update the collections by replacing the old objects with the new ones.

I would like to be able to build up a function of this type by combining smaller transformations into larger ones, and then evaluate this combined function.

**Questions**: How do I structure my program to make this possible? What kind of combinator do I use to build up a transaction like this?

**Ideas**:

1. Put all the collections into one enormous structure and pass this structure around.
2. Use a state monad to accomplish basically the same thing
3. Use IORef (or one of its more potent cousins like MVar) and build up an IO action
4. Use a Functional Reactive Programming framework

1 and 2 seem like they carry a lot of baggage around, especially if I envision eventually moving some of the collections into a database. (Darn IO Monad)

3 seems to work well but starts to look a lot like recreating OOP. I'm also not sure at what level to use the IORef. (e.g. `IORef (Collection Obj)` or `Collection (IORef Obj)` or `data Obj {field::IORef(Type)}`)

4 feels the most functional in style, but it also seems to create a lot of code complexity without much payoff in terms of expressiveness.

---

**Example**

I have a web store front. I maintain a collection of products with (among other things) the quantity in stock and a price. I also have a collection of users who have credit with the store.

A user comes along and selects 3 products to buy and goes to check out using store credit. I need to create a new products collection that has the amount in stock for the 3 products reduced, and create a new user collection with the user's account debited.

This means I get the following:

```
checkout :: Cart -> ProductsCol -> UserCol -> (ProductsCol, UserCol)
```

But then life gets more complicated and I need to deal with taxes:

```
checkout :: Cart -> ProductsCol -> UserCol -> TaxCol
            -> (ProductsCol, UserCol, TaxCol)
```

And then I need to be sure to add the order to the shipping queue:

```
checkout :: Cart
            -> ProductsCol
            -> UserCol
            -> TaxCol
            -> ShipList
            -> (ProductsCol, UserCol, TaxCol, ShipList)
```

And so forth...

What I would like to write is something like

```
checkout = updateStockAmount <*> applyUserCredit <*> payTaxes <*> shipProducts

applyUserCredit = debitUser <*> creditBalanceSheet
```

but the type-checker would go apoplectic on me. How do I structure this store such that the `checkout` or `applyUserCredit` functions remain modular and abstract? I cannot be the only one to have this problem, right?
Okay, let's break this down. You have "update" functions with types like `A -> A` for various specific types `A`, which may be derived from partial application, that specify a new value of some type in terms of a previous value. Each such type `A` should be specific to what that function does, and it should be easy to change those types as the program develops. You also have some sort of shared state, which presumably contains all the information used by any of the aforementioned update functions. Further, it should be possible to change what the state contains, without significantly impacting anything other than the functions acting directly on it. Additionally, you want to be able to abstractly combine update functions, without compromising the above. We can deduce a few necessary features of a straightforward design: - An intermediate layer will be necessary, between the full shared state and the specifics needed by each function, allowing pieces of the state to be projected out and replaced independently of the rest. - The types of the update functions themselves are by definition incompatible with no real shared structure, so to compose them you'll need to first combine each with the intermediate layer portion. This will give you updates acting on the entire state, which can then be composed in the obvious way. - The only operations needed on the shared state as a whole are to interface with the intermediate layer, and whatever may be necessary to maintain the changes made. This breakdown allows each entire layer to be modular to a large extent; in particular, type classes can be defined to describe the necessary functionality, allowing any relevant instance to be swapped in. In particular, this essentially unifies your ideas 2 and 3. There's an inherent monadic context of some sort here, and the type class interface suggested would allow multiple approaches, such as: - Make the shared state a record type, store it in a `State` monad, and use lenses to provide the interface layer. - Make the shared state a record type containing something like an `STRef` for each piece, and combine field selectors with `ST` monad update actions to provide the interface layer. - Make the shared state a collection of `TChan`s, with separate threads to read/write them as appropriate to communicate asynchronously with an external data store. Or any number of other variations.
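A minimal sketch of the first variation (a record type in a `State` monad), with invented field names and toy update functions; in real code a lens library would replace the manual record-update boilerplate:

```
import Control.Monad.State

data Store = Store { products :: [Int], credits :: [Int] } deriving Show

-- Lift an update on one projection of the state to an update
-- on the whole state: this is the "intermediate layer".
onProducts :: ([Int] -> [Int]) -> State Store ()
onProducts f = modify (\s -> s { products = f (products s) })

onCredits :: ([Int] -> [Int]) -> State Store ()
onCredits f = modify (\s -> s { credits = f (credits s) })

-- Once lifted, updates on the whole state compose in the obvious way.
checkout :: State Store ()
checkout = do
  onProducts (map (subtract 1))
  onCredits  (map (subtract 10))

main :: IO ()
main = print (execState checkout (Store [5, 3] [100, 80]))
```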
Erlang lists:filter/2 cannot use comma instead of andalso

The following code gives me a warning at compile time: Warning: use of operator '>' has no effect

```
rd(a,{x,y}),
List = [#a{x=1,y=2}, #a{x=3,y=4}],
lists:filter(
    fun(E) ->
        E#a.x > 1,
        E#a.y =:= 2
    end, List).
```

But when I substitute the comma with andalso, there is no warning.
Using a comma in this case only separates two expressions, which have no effect on each other: `E#a.x > 1` and the next expression (which is the result of the function) `E#a.y =:= 2`.

It means that in your case, the filter function is equivalent to:

```
fun( E ) ->
    E#a.y =:= 2
end
```

Only in **guard expressions** is a comma equivalent to `andalso`; otherwise a comma is just a **separator** between expressions.

So, you may rewrite your function in two ways:

1)

```
fun (E) when E#a.x > 1, E#a.y =:= 2 -> true;
    (_Otherwise) -> false
end
```

2)

```
fun( E ) ->
    (E#a.x > 1) andalso (E#a.y =:= 2)
end
```
Why labels are mentioned three times in a single deployment

I've gone over the following documentation page: <https://kubernetes.io/docs/concepts/workloads/controllers/deployment/>

The example deployment yaml is as follows:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```

We can see here three different places where the label `app: nginx` is mentioned. Why do we need each of them? I had a hard time understanding it from the official documentation.
The **first label** is for the deployment itself; it gives a label to that particular deployment. Let's say you want to delete that deployment; then you run the following command:

```
kubectl delete deployment -l app=nginx
```

This will delete the entire deployment.

The **second label** is the `selector: matchLabels`, which tells resources (services etc.) which pods to match by label. So let's say you want to create a service that targets all the pods carrying the label `app=nginx`; then you provide the following definition:

```
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
```

The above service will use its selector to bind to pods which have the label `app: nginx` assigned to them.

The **third label** is the `podTemplate` labels; the `template` is actually a `podTemplate`. It describes the pods that get launched. So let's say you have a two-replica deployment: k8s will launch 2 pods with the label specified in `template: metadata: labels`.

This is a subtle but important difference, so you can have different labels for the deployment and for the pods generated by that deployment.
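To see the matching in action, a couple of read-only `kubectl` commands (assuming the nginx example above is applied):

```
# Show the labels the deployment stamped onto its pods
kubectl get pods --show-labels

# List only the pods a selector (e.g. the service above) would match
kubectl get pods -l app=nginx
```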
In ggplot2 how to remove all theme + remove some data but keep aspect ratio of the data displayed?

I'm trying to create a base plot and then recreate a modified version of the same plot without some of the data and without any other element (essentially `+ theme_void()`). The difficulty here is to keep the exact size and position of the data that the plot keeps between the two versions.

Say I have the following plot:

```
library(ggplot2)

# Sample data frame
d <- data.frame(group = c("A", "B", "C"), value = c(10, 15, 5))

# Create the original bar plot
g1 <- ggplot() +
  geom_col(data = d, aes(x = group, y = value, fill = group)) +
  theme_bw()
```

[![enter image description here](https://i.stack.imgur.com/ZzjB8.png)](https://i.stack.imgur.com/ZzjB8.png)

The aim is to create (and save as .SVG) three plots with one bar per plot (+ theme\_void) but with the same position/size as the first one.

**Desired plot 1:**

[![enter image description here](https://i.stack.imgur.com/loZMF.png)](https://i.stack.imgur.com/loZMF.png)

**Desired plot 2:**

[![enter image description here](https://i.stack.imgur.com/aYlVJ.png)](https://i.stack.imgur.com/aYlVJ.png)

**Desired plot 3:**

[![enter image description here](https://i.stack.imgur.com/gAFds.png)](https://i.stack.imgur.com/gAFds.png)

I guess one possibility is to make everything else white/transparent, but I want to avoid this approach, for I will further manipulate the plot saved as an .SVG, and the elements would be there haunting me (adding complexity and a bigger file size).

The other approach, which I do want to pursue, is to get in the middle of the `ggplot2` workflow, stop it at the right time (when the drawing context is already given), modify it (as in erase everything but a single bar), and finally render the modified plot.

The package `gginnards` has functions like `delete_layers()`, and themes can be replaced with the `%+%` operator, but as far as I could see they modify the size/position (as they should, but this is not what I want).

The closest thing that I found is the `ggtrace` package (particularly **"[highjack-ggproto](https://yjunechoe.github.io/ggtrace/#id_3-highjack-ggproto-remove-boxplot-outliers)"**) and the whole discussion (that is still very opaque to me) of `grid/grob`. I guess I will be learning more on those issues in the weeks to come, but any advice on that would be very appreciated!

**Edit**: from the valuable answers below I must stress:

1. This is a toy example; the real case will incorporate numerous `theme` modifications in the original plot. That is to say, a workaround of making the first plot simpler (which would facilitate the comparison) is not a solution in this case.
2. The aim is to save the result in a clean SVG. By clean I mean there are supposed to be only the visible elements in the SVG file (as I inspect its source code). For example, if I have hundreds of points in my plot and I filter for one point, this single point should be alone in the new SVG (in the exact position as it was in the first plot - the plot that has multiple theme modifications, title, legends, axis, and so on).
This is actually fairly difficult. The problem is that the exact position of the bars is determined by nested viewports. The easiest solution is probably just to walk the `gTable` of the ggplot object and make all objects that are *not* the bars into `zeroGrob`s.

Let's start with the plot itself:

```
library(ggplot2)

# Sample data frame
d <- data.frame(group = c("A", "B", "C"), value = c(10, 15, 5))

# Create the original bar plot
g1 <- ggplot() +
  geom_col(data = d, aes(x = group, y = value, fill = group)) +
  theme_bw()
```

Our first step is to build this into a `gTable`:

```
gt <- ggplot_gtable(ggplot_build(g1))
```

Note that from now on, if we want to draw the result, we can do:

```
grid::grid.newpage()
grid::grid.draw(gt)
```

![](https://i.imgur.com/dgZXpMm.png)

Now, let's make everything that isn't the panel a zero grob. The panel is always a `gTree`, so we can do:

```
gt$grobs <- lapply(gt$grobs, function(x) {
  if(class(x)[1] == 'gTree') x else zeroGrob()
})
```

Note that this wipes everything except the panel, but leaves all the spacing the same:

```
grid::grid.newpage()
grid::grid.draw(gt)
```

![](https://i.imgur.com/NNmLEOb.png)

Now we want to do the same thing within the panel, removing everything that isn't a `geom_rect` grob:

```
panel <- which(lengths(gt$grobs) > 3)

gt$grobs[[panel]]$children <- lapply(gt$grobs[[panel]]$children, function(x) {
  if(grepl('geom_rect', x)) x else zeroGrob()
})
```

This leaves us just with our three bars:

```
grid::grid.newpage()
grid::grid.draw(gt)
```

![](https://i.imgur.com/jYGW2qA.png)

To get the individual bars in their own plots, we create three copies of the plot object:

```
gt_list <- list(gt1 = gt, gt2 = gt, gt3 = gt)
```

Now we iterate through this list and remove all but one bar from each:

```
rectangles <- which(lengths(gt$grobs[[panel]]$children) > 3)

gt_list <- Map(function(x, i) {
  rect <- x$grobs[[panel]]$children[[rectangles]]
  rect$x <- rect$x[i]
  rect$y <- rect$y[i]
  rect$width <- rect$width[i]
  rect$height <- rect$height[i]
  rect$gp <- rect$gp[i]
  x$grobs[[panel]]$children[[rectangles]] <- rect
  x
}, gt_list, seq_along(gt_list))
```

We now have 3 plots with only a single graphical object in each one, yet the position of each graphical element is unchanged compared to the original plot.
``` grid::grid.newpage() grid::grid.draw(gt_list[[1]]) ``` ![](https://i.imgur.com/jg1sLJo.png) ``` grid::grid.newpage() grid::grid.draw(gt_list[[2]]) ``` ![](https://i.imgur.com/Oxdcbcb.png) ``` grid::grid.newpage() grid::grid.draw(gt_list[[3]]) ``` ![](https://i.imgur.com/m4iIJQ6.png) Furthermore, we can see that the resulting svg is not full of unnecessary invisible objects; only the bar is written to file: ``` svg('my.svg') grid::grid.newpage() grid::grid.draw(gt_list[[1]]) dev.off() ``` Resulting in **my.svg** ``` <?xml version="1.0" encoding="UTF-8"?> <svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="504pt" height="504pt" viewBox="0 0 504 504" version="1.1"> <g id="surface1"> <rect x="0" y="0" width="504" height="504" style="fill:rgb(100%,100%,100%);fill-opacity:1;stroke:none;"/> <path style=" stroke:none;fill-rule:nonzero;fill:rgb(97.254902%,46.27451%,42.745098%);fill-opacity:1;" d="M 52.492188 451.675781 L 169.050781 451.675781 L 169.050781 168.375 L 52.492188 168.375 Z M 52.492188 451.675781 "/> </g> </svg> ``` And in case there are any nagging doubts that things don't line up, let's save the plots and animate them to prove it: ``` gt <- ggplot_gtable(ggplot_build(g1)) png('plot1.png') grid::grid.newpage() grid::grid.draw(gt) dev.off() Map(function(x, f) { png(f) grid::grid.newpage() grid::grid.draw(x) dev.off() }, gt_list, c('plot2.png', 'plot3.png', 'plot4.png')) library(magick) list.files(pattern = 'plot\\d+\\.png', full.names = TRUE) |> image_read() |> image_join() |> image_animate(fps=4) |> image_write("barplot.gif") ``` [![enter image description here](https://i.stack.imgur.com/fA8oq.gif)](https://i.stack.imgur.com/fA8oq.gif) Created on 2023-08-31 with [reprex v2.0.2](https://reprex.tidyverse.org)
How to set defined values for a parameter in C#?

How can I set defined values for a parameter? I mean, for example, I have the following function:

```
int functionA(string parameterA) {}
```

Now, when I call this function and pass a value for parameterA, I want the value to be in the value set I have defined. For example, the defined value set is:

```
string[] definedParameterA = { "Hello World", "stackoverflow" };
```

The value of parameterA I pass in must be "Hello World" or "stackoverflow".
If you really want it to be a string, you'd just validate it at the start of the method:

```
private static readonly String[] ValidValues = {"Hello World", "stackoverflow"};

public void DoSomething(string text)
{
    if (!ValidValues.Contains(text))
    {
        throw new ArgumentException("Invalid value: " + text, "text");
    }
    // Carry on
}
```

However, you might also consider using an enum instead. You still need to validate that the enum value is defined, but it makes it simpler for the caller to get it right:

```
public enum Message
{
    HelloWorld,
    StackOverflow
}

public void DoSomething(Message message)
{
    if (!Enum.IsDefined(typeof(Message), message))
    {
        throw new ArgumentException("Invalid value: " + message, "message");
    }
    // Carry on
}
```

You can have a mapping from the enum value to string elsewhere if you want, of course.

Note that the above code will box `message` on every call. You could avoid that (and make it more typesafe) using my [Unconstrained Melody](https://code.google.com/p/unconstrained-melody/) project if you *really* want, but it's probably not worth it until you've proved that this boxing is actually an issue. The code would then look like this:

```
public void DoSomething(Message message)
{
    if (!message.IsNamedValue()) // Extension method
    {
        throw new ArgumentException("Invalid value: " + message, "message");
    }
    // Carry on
}
```
Failed to start The PHP 7.0 FastCGI Process Manager

I have a problem with `php7`. When I run this command I get the error:

> sudo systemctl status php7.0-fpm.service

This outputs:

```
● php7.0-fpm.service - The PHP 7.0 FastCGI Process Manager
   Loaded: loaded (/lib/systemd/system/php7.0-fpm.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since سه<U+200C>شنبه 2016-06-28 17:58:08 IRDT; 4min 27s ago
     Docs: man:php-fpm7.0(8)
  Process: 14328 ExecStart=/usr/sbin/php-fpm7.0 --nodaemonize --fpm-config /etc/php/7.0/fpm/php-fpm.conf (code=exited, status=203/EXEC)
 Main PID: 14328 (code=exited, status=203/EXEC)

ژوئن 28 17:58:08 khonok158 systemd[1]: Starting The PHP 7.0 FastCGI Process Manager...
ژوئن 28 17:58:08 khonok158 systemd[1]: php7.0-fpm.service: Main process exited, code=exited, status=203/EXEC
ژوئن 28 17:58:08 khonok158 systemd[1]: Failed to start The PHP 7.0 FastCGI Process Manager.
ژوئن 28 17:58:08 khonok158 systemd[1]: php7.0-fpm.service: Unit entered failed state.
ژوئن 28 17:58:08 khonok158 systemd[1]: php7.0-fpm.service: Failed with result 'exit-code'.
```
You probably need to do this:

```
apt-get install --reinstall php
systemctl start php7.0-fpm
systemctl enable php7.0-fpm
systemctl restart apache2
```

For me it works:

```
root@micro:~# systemctl status php7.0-fpm.service
? php7.0-fpm.service - The PHP 7.0 FastCGI Process Manager
   Loaded: loaded (/lib/systemd/system/php7.0-fpm.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2018-11-27 06:46:40 CET; 1 weeks 1 days ago
     Docs: man:php-fpm7.0(8)
 Main PID: 508 (php-fpm7.0)
   Status: "Processes active: 0, idle: 2, Requests: 5736, slow: 0, Traffic: 0req/sec"
    Tasks: 3 (limit: 4915)
   CGroup: /system.slice/php7.0-fpm.service
           tq508 php-fpm: master process (/etc/php/7.0/fpm/php-fpm.conf)
           tq755 php-fpm: pool www
           mq756 php-fpm: pool www
```
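If reinstalling alone doesn't help: `status=203/EXEC` from systemd generally means the service binary could not be executed at all, so these read-only checks usually narrow it down:

```
ls -l /usr/sbin/php-fpm7.0    # is the binary actually there and executable?
php-fpm7.0 -t                 # does the FPM configuration parse?
journalctl -u php7.0-fpm      # full log for the failed unit
```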
How to set Django and MySQL to work with UTF-8

I want to set up my Django/MySQL site to work with UTF-8. Please guide me through the changes I have to make in Django and MySQL to ensure the following:

1. User-entered data in the forms will always be encoded as UTF-8
2. Storing the data in MySQL as UTF-8
3. Displaying UTF-8 encoded data in the template correctly
4. Also I want to know whether moving to UTF-8 encoding will remove Unicode decode errors like the one below.

> UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 55: ordinal not in range(128)

Many Thanks.
Here are some tips:

1) Use UTF-8 encoding when creating the database:

```
CREATE DATABASE <dbname> CHARACTER SET utf8;
```

[docs](https://docs.djangoproject.com/en/dev/ref/databases/#creating-your-database)

2) Place the following special comment on the first or second line of your script:

```
# -*- coding: utf-8 -*-
```

[nice article about python and utf8](http://www.evanjones.ca/python-utf8.html)

3) Use Unicode strings with the `u` prefix in your \*.py files:

```
unicodeString = u"hello Unicode world!"
```

4) Use the following meta tag in the `<head>` section of your base template:

```
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
```
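On the Django side you can also make sure every database connection itself talks to the server in UTF-8. Here is a minimal sketch of the relevant `settings.py` fragment, assuming the stock MySQL backend; the database name and credentials are placeholder values:

```
# settings.py -- minimal sketch; NAME/USER/PASSWORD are placeholders
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydb',
        'USER': 'myuser',
        'PASSWORD': 'secret',
        # OPTIONS is passed through to the MySQL client library,
        # so every connection is opened with a UTF-8 charset.
        'OPTIONS': {'charset': 'utf8'},
    }
}
```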
Should class Square publicly inherit from class Rectangle?

I find this question very interesting after reading the part of "Effective C++" about public inheritance. Previously it would have been common sense for me to say yes, because every square **is a** rectangle, but not necessarily the other way around. However, consider this code:

```
void makeBigger(Rectangle& r)
{
    r.setWidth(r.width() + 10);
}
```

This code is perfectly fine for a `Rectangle`, but would break a `Square` object if we passed it to `makeBigger` - its sides would become unequal.

---

**So how can I deal with this?** The book didn't provide an answer (yet?), but I'm thinking of a couple of ways of fixing this:

1. Override the `setWidth()` and `setHeight()` methods in the `Square` class to also adjust the other side. Drawback: code duplication, two unnecessary members of `Square`.
2. Have `Square` not inherit from `Rectangle` and be on its own - have `size`, `setSize()` etc. Drawback: weird - squares are rectangles after all - it would be nice to reuse `Rectangle`'s features such as right angles etc.
3. Make `Rectangle` abstract (by giving it a pure virtual destructor and defining it) and have *a third class* that represents rectangles that are not squares and inherits from `Rectangle`. That will force us to change the above function's signature to this: `void makeBigger(NotSquare& r);` I can't see any drawbacks except having an extra class.

---

Is there a better way? I'm leaning towards the third option.
This is one of the key principles in OO design that I find gets handled incorrectly. Mr Meyer does an excellent job of discussing it in the book you are referring to.

The trick is to remember that the principles must be applied to concrete use cases. When using inheritance, remember that the key is that the "is a" relationship applies to an object when you want to *use that object as a* ... So whether a square is a rectangle or not depends on what you are going to be doing with rectangles in the future. If you will be setting the width and height of a rectangle independently, then no, a square is not a rectangle (in the context of your software), although it is mathematically.

Thus you have to consider what you will be doing with your base objects.

In the concrete example you mention, there is a canonical answer. If you make `makeBigger` a virtual member function of `Rectangle`, then each one can be scaled in a way that is appropriate to its class. But this is only good OO design if all the (public) methods which apply to a rectangle will apply to a square.

So let's see how this applies to your efforts so far:

1. I see this kind of thing in production code pretty often. It's excusable as a kludge to fix a gap in an otherwise good design, but it is not desirable. It's a problem because it leads to code which is syntactically correct, but semantically incorrect. It will compile, and do something, but the meaning is incorrect. Let's say you are iterating over a vector of rectangles, and you scale the width by 2, and the height by 3. This is semantically meaningless for a square. Thus it violates the precept "prefer compile time errors to runtime errors".
2. Here you are thinking of using inheritance in order to re-use code. There's a saying "use inheritance to *be* re-used, not to re-use". What this means is, you want to use inheritance to make sure the OO code can be re-used elsewhere, as its base object, without any manual RTTI. Remember that there are other mechanisms for code re-use: in C++ these include functional programming and composition. If squares and rectangles have shared code (e.g. computing the area based on the fact that they have right angles), you can do this by composition (each contains a common class). In this trivial example you are probably better off with a function provided at namespace level, for example: `compute_area_for_rectangle(Shape* s) { return s->GetHeight() * s->GetWidth(); }`. So if both `Square` and `Rectangle` inherit from a base class `Shape`, with `Shape` having the following public methods: `draw()`, `scale()`, `getArea()` ..., all of these would be semantically meaningful for whatever shape, and common formulas could be shared via namespace-level functions.
3. I think if you meditate on this point a little, you'll find a number of flaws with your third suggestion. Regarding the OO design perspective: as icbytes mentioned, if you're going to have a third class, it makes more sense that this class be a common base that meaningfully expresses the common uses. `Shape` is OK. If the main purpose is to draw the objects, then `Drawable` might be another good idea. There are a couple of other flaws in the way you expressed the idea, which may indicate a misunderstanding on your part of virtual destructors, and what it means to be abstract. Whenever you make a method of a class virtual so that another class may override it, you should declare the destructor virtual as well (S.M. does discuss this in Effective C++, so I guess you would find this out on your own).
This does not make it abstract, though. A class becomes abstract when you declare at least one of its methods purely virtual -- i.e. having no implementation:

```
virtual void foo() = 0; // for example
```

This means that the class in question cannot be instantiated. Obviously, since it has at least one virtual method, it should also have its destructor declared virtual.

I hope that helps. Keep in mind that inheritance is only one method by which code can be re-used. Good design comes out of the optimal combination of all methods.

For further reading I highly recommend Sutter and Alexandrescu's "C++ Coding Standards", especially the section on Class Design and Inheritance: Items 34 "Prefer composition to inheritance" and 37 "Public inheritance is substitutability. Inherit, not to reuse, but to be reused."
Draw ring with given thickness, position, and radius. (Java2D)

I need to draw a ring, with a given thickness, that looks something like this:

[![enter image description here](https://i.stack.imgur.com/gGywP.gif)](https://i.stack.imgur.com/gGywP.gif)

The center must be transparent, so that it doesn't cover previously drawn shapes (or other rings).

I've tried something like this:

```
//g is a Graphics2D object
g.setColor(Color.RED);
g.drawOval(x, y, width, height);
g.setColor(Color.WHITE);
g.drawOval(x + thickness, y + thickness, width - 2 * thickness, height - 2 * thickness);
```

which draws a satisfactory ring, but it covers other shapes; the interior is white, not transparent. How can I modify/rewrite my code so that it doesn't do that?
You can create an [`Area`](https://docs.oracle.com/javase/8/docs/api/java/awt/geom/Area.html) from an [`Ellipse2D`](https://docs.oracle.com/javase/8/docs/api/java/awt/geom/Ellipse2D.html) that describes the outer circle, and [`subtract`](https://docs.oracle.com/javase/8/docs/api/java/awt/geom/Area.html#subtract-java.awt.geom.Area-) the ellipse that describes the inner circle. This way, you will obtain an actual [`Shape`](https://docs.oracle.com/javase/8/docs/api/java/awt/Shape.html) that can either be [drawn](https://docs.oracle.com/javase/8/docs/api/java/awt/Graphics2D.html#draw-java.awt.Shape-) or [filled](https://docs.oracle.com/javase/8/docs/api/java/awt/Graphics2D.html#fill-java.awt.Shape-) (and this will only refer to the area that is actually covered by the ring!). The advantage is that you really have the **geometry** of the ring available. This allows you, for example, to check whether the ring shape [`contains`](https://docs.oracle.com/javase/8/docs/api/java/awt/Shape.html#contains-double-double-) a certain point, or to fill it with a [`Paint`](https://docs.oracle.com/javase/8/docs/api/java/awt/Paint.html) that is more than a single color: [![RingPaint01](https://i.stack.imgur.com/8PQOX.png)](https://i.stack.imgur.com/8PQOX.png) Here is an example, the relevant part is the `createRingShape` method: ``` import java.awt.Color; import java.awt.GradientPaint; import java.awt.Graphics; import java.awt.Graphics2D; import java.awt.Point; import java.awt.RenderingHints; import java.awt.Shape; import java.awt.geom.Area; import java.awt.geom.Ellipse2D; import javax.swing.JFrame; import javax.swing.JPanel; import javax.swing.SwingUtilities; public class RingPaintTest { public static void main(String[] args) { SwingUtilities.invokeLater(new Runnable() { @Override public void run() { createAndShowGUI(); } }); } private static void createAndShowGUI() { JFrame f = new JFrame(); f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); RingPaintTestPanel p = new RingPaintTestPanel(); f.getContentPane().add(p); f.setSize(800,800); f.setLocationRelativeTo(null); f.setVisible(true); } } class RingPaintTestPanel extends JPanel { @Override protected void paintComponent(Graphics gr) { super.paintComponent(gr); Graphics2D g = (Graphics2D)gr; g.setRenderingHint( RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON); g.setColor(Color.RED); g.drawString("Text", 100, 100); g.drawString("Text", 300, 100); Shape ring = createRingShape(100, 100, 80, 20); g.setColor(Color.CYAN); g.fill(ring); g.setColor(Color.BLACK); g.draw(ring); Shape otherRing = createRingShape(300, 100, 80, 20); g.setPaint(new GradientPaint( new Point(250, 40), Color.RED, new Point(350, 200), Color.GREEN)); g.fill(otherRing); g.setColor(Color.BLACK); g.draw(otherRing); } private static Shape createRingShape( double centerX, double centerY, double outerRadius, double thickness) { Ellipse2D outer = new Ellipse2D.Double( centerX - outerRadius, centerY - outerRadius, outerRadius + outerRadius, outerRadius + outerRadius); Ellipse2D inner = new Ellipse2D.Double( centerX - outerRadius + thickness, centerY - outerRadius + thickness, outerRadius + outerRadius - thickness - thickness, outerRadius + outerRadius - thickness - thickness); Area area = new Area(outer); area.subtract(new Area(inner)); return area; } } ```
Why does Java char use UTF-16?

I have been reading about how Unicode code points have evolved over time, including [this article by Joel Spolsky](http://www.joelonsoftware.com/articles/Unicode.html), which says:

> Some people are under the misconception that Unicode is simply a 16-bit code where each character takes 16 bits and therefore there are 65,536 possible characters. This is not, actually, correct.

But despite all this reading, I couldn't find the real reason why Java uses UTF-16 for a `char`.

Isn't UTF-8 far more efficient than UTF-16? For example, if I had a string which contains 1024 characters from the ASCII range, UTF-16 will take 1024 \* 2 bytes (2KB) of memory. But if Java used UTF-8, it would be just 1KB of data. Even if the string has a few characters which need 2 bytes, it will still only take about a kilobyte. For example, suppose in addition to the 1024 characters, there were 10 characters of "字" (code-point U+5b57, UTF-8 encoding e5 ad 97). In UTF-8, this will still take only (1024 \* 1 byte) + (10 \* 3 bytes) = 1KB + 30 bytes.

So this doesn't answer my question. 1KB + 30 bytes for UTF-8 is clearly less memory than 2KB for UTF-16.

Of course it makes sense that Java doesn't use ASCII for a char, but why does it not use UTF-8, which has a clean mechanism for handling arbitrary multi-byte characters when they come up? UTF-16 looks like a waste of memory in any string which has lots of non-multibyte chars. Is there some good reason for UTF-16 that I'm missing?
Java used [UCS-2](https://en.wikipedia.org/wiki/UTF-16#History) before transitioning over to UTF-16 in [2004/2005](http://www.oracle.com/technetwork/articles/javase/supplementary-142654.html). The reason for the original choice of UCS-2 is [mainly historical](http://www.oracle.com/technetwork/articles/javase/supplementary-142654.html):

> Unicode was originally designed as a fixed-width 16-bit character encoding. The primitive data type char in the Java programming language was intended to take advantage of this design by providing a simple data type that could hold any character.

This, and the birth of UTF-16, is further [explained by the Unicode FAQ page](http://www.unicode.org/faq/utf_bom.html#UTF16):

> Originally, Unicode was designed as a pure 16-bit encoding, aimed at representing all modern scripts. (Ancient scripts were to be represented with private-use characters.) Over time, and especially after the addition of over 14,500 composite characters for compatibility with legacy sets, it became clear that 16-bits were not sufficient for the user community. Out of this arose UTF-16.

As @wero has [already mentioned](https://stackoverflow.com/a/36236511/1048186), random access cannot be done efficiently with UTF-8. So all things weighed up, UCS-2 was seemingly the best choice at the time, particularly as no supplementary characters had been allocated by that stage. This then left UTF-16 as the easiest natural progression beyond that.
Django: How to access URL regex parameters inside a middleware class? I am working on a Django project on Google App Engine. I have a URL like: `http://localhost:8080/[company]/projects/project` Note that `[company]` is a URL parameter defined in my urls.py like: `(r'(^[a-zA-Z0-9-_.]*)/projects/project/(\d*)', 'projects.views.project_form'),` I want to get the value of `[company]` from a middleware where I will set the GAE datastore namespace to the `[company]` value. Is it possible to get the `[company]` parameter from the request object passed in the `process_request` method of middleware class?
If you are using the `process_view` middleware hook, you will have access to the view's arguments, and therefore the company value. Have a look at the function's definition:

```
def process_view(self, request, view_func, view_args, view_kwargs):
    ...
```

> view\_args is a list of positional arguments that will be passed to the view, and view\_kwargs is a dictionary of keyword arguments that will be passed to the view.

so you should just be able to grab it from there, something like:

```
def process_view(self, request, view_func, view_args, view_kwargs):
    company = view_kwargs.get('company', None)
```

Note that your current pattern uses an unnamed group, so the captured value would arrive in `view_args` instead; rename the group to `(?P<company>[a-zA-Z0-9-_.]*)` if you want it in `view_kwargs`.

Here's some more info from the django book on how the named and unnamed groups in your urls translate to args and kwargs in your view: <http://www.djangobook.com/en/1.0/chapter08/#cn38>

particularly

> This [named url groups] accomplishes exactly the same thing as the previous example, with one subtle difference: the captured values are passed to view functions as keyword arguments rather than positional arguments.
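Putting it together, a minimal sketch of such a middleware class might look like the following. Note that `set_namespace` is a hypothetical placeholder for whatever GAE namespace call you use, and the class name is illustrative:

```
class CompanyNamespaceMiddleware(object):
    """Minimal sketch of an old-style Django middleware class.

    Assumes the URL pattern uses a *named* group, e.g.
    r'(?P<company>[a-zA-Z0-9-_.]*)/projects/project/(\d*)'.
    """

    def process_view(self, request, view_func, view_args, view_kwargs):
        company = view_kwargs.get('company')
        if company is None and view_args:
            # Fall back to the first positional arg for unnamed groups.
            company = view_args[0]
        request.company = company
        # set_namespace(company)  # hypothetical GAE namespace call
        return None  # None means: continue processing the view as usual
```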
Deleting carriage returns caused by line reading I have a list: ``` Cat Dog Monkey Pig ``` I have a script: ``` import sys input_file = open('list.txt', 'r') for line in input_file: sys.stdout.write('"' + line + '",') ``` The output is: ``` "Cat ","Dog ","Monkey ","Pig", ``` I'd like: ``` "Cat","Dog","Monkey","Pig", ``` I can't get rid of the carriage return that occurs from processing the lines in the list. Bonus point for getting rid of the , at the end. Not sure how to just find and delete the last instance.
[str.rstrip](http://docs.python.org/2/library/stdtypes.html#str.rstrip) or simply [str.strip](http://docs.python.org/2/library/stdtypes.html#str.strip) is the right tool to strip the carriage return (newline) from the data read from the file. Note that str.strip will strip off whitespace from either end. If you are only interested in stripping off the newline, just use `strip('\n')`.

Change the line

```
sys.stdout.write('"' + line + '",')
```

to

```
sys.stdout.write('"' + line.strip() + '",')
```

Note that in your case, a simpler solution would have been

```
>>> from itertools import imap
>>> with open("list.txt") as fin:
        print ','.join(imap(str.strip, fin))

Cat,Dog,Monkey,Pig
```

or just using a generator expression

```
>>> with open("list.txt") as fin:
        print ','.join(e.strip('\n') for e in fin)

Cat,Dog,Monkey,Pig
```

This also takes care of your bonus point: `join` only puts the comma *between* items, so there is no trailing comma at the end.
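If you are on Python 3, note that `itertools.imap` is gone (the built-in `map` is already lazy there) and `print` is a function, so the equivalent one-liner would be:

```
with open("list.txt") as fin:
    print(','.join(map(str.strip, fin)))
```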
How to remove hash (#) from URL in Flutter web

The default URL of a Flutter web project contains a hashtag (`#`), as follows:

```
http://localhost:41521/#/peaple/...
```

I would like to remove this '#', so the URL looks like this:

```
http://localhost:41521/peaple/
```

How can I solve this problem?
You can now use a **simple package** and a ***single line of code*** to remove the leading hash (#) from your Flutter web app: **[`url_strategy`](https://pub.dev/packages/url_strategy)** (full disclosure: I am the author) ### Using [`url_strategy`](https://github.com/simpleclub/url_strategy) You simply add the dependency [as described here](https://pub.dev/packages/url_strategy/install) and then add the following function call to your `main` function: ``` import 'package:url_strategy/url_strategy.dart'; void main() { // Here we set the URL strategy for our web app. // It is safe to call this function when running on mobile or desktop as well. setPathUrlStrategy(); runApp(MyApp()); } ``` Calling `setPathUrlStrategy` is all you need to do --- The package also ensures that running the code will not crash on mobile (see below). Additionally, this will also run on `stable` if you build your mobile app on `stable` and only web on `beta`. ### Notes You need to make sure that you include `<base href="/">` inside the `<head>` section of your `web/index.html` when using the path URL strategy. This is added *by default* when creating a new Flutter app. Furthermore, when **deploying** your **production app**, you need to make sure that *every path* points to your `index.html`. If you use tools like Firebase hosting, this is done automatically for you when configuring your app as a **single page app**. Otherwise, you want to look up how to rewrite all paths to your `index.html` for the hosting you are using. Essentially, you want to have a single page app, where the HTTP server serves the `index.html` for all paths. --- The package implementation is based on the manual solution using `flutter_web_plugins`. The benefits of using the package are the following: - Only need to call a single function. - No need to use conditional imports (the package does it for you). - You will not get any missing implementation issues on `stable` (as the web feature is still on `beta`).
problems about the mail command in linux

I want to send email with the `mail` command, but it doesn't work. I use the following command:

```
mail -v -s "test" [email protected]
```

Then the terminal just waits, with no response. Also, there is nothing in /var/log/mail.

Can anyone help me? Thanks.

By the way, my operating system is Debian.
After that command, the process running `mail` is waiting for input on *stdin*, which you should end with Ctrl-D (end of file). You can also pipe or redirect, or use a [here document](http://tldp.org/LDP/abs/html/here-docs.html).

Example of using a pipe:

```
date | mail -s "now is" [email protected]
```

Typing a message:

```
mail -s "a message" [email protected]
body of your message
end it with Ctrl-D
```

Redirecting a file containing the body:

```
mail -s "a message in file" [email protected] < mailbody.txt
```

Using a here document:

```
mail -s "a here doc" [email protected] <<ENDMSG
this is the here doc
ended by the line below
ENDMSG
```
Flutter Return Length of Documents from Firebase

I'm trying to return the length of a list of documents with this function:

```
Future totalLikes(postID) async {
  var respectsQuery = Firestore.instance
      .collection('respects')
      .where('postID', isEqualTo: postID);

  respectsQuery.getDocuments().then((data) {
    var totalEquals = data.documents.length;
    return totalEquals;
  });
}
```

I'm initializing this in initState (with another function call):

```
void initState() {
  totalLikes(postID).then((result) {
    setState(() {
      _totalRespects = result;
    });
  });
}
```

However, when this runs, it initially returns a null value since it doesn't have time to fully complete. I have tried to put an "await" before the Firestore call within the Future function, but get the compile error "Await only futures."

Can anyone help me understand how I can wait for this function to fully return a non-null value before setting the state of `_totalRespects`? Thanks!
I think you're looking for this: ``` Future totalLikes(postID) async { var respectsQuery = Firestore.instance .collection('respects') .where('postID', isEqualTo: postID); var querySnapshot = await respectsQuery.getDocuments(); var totalEquals = querySnapshot.documents.length; return totalEquals; } ``` Note that this loads all documents, just to determine the number of documents, which is incredibly wasteful (especially as you get more documents). Consider keeping a document where you maintain the count as a field, so that you only have to read a single document to get the count. See [aggregation queries](https://firebase.google.com/docs/firestore/solutions/aggregation) and [distributed counters](https://firebase.google.com/docs/firestore/solutions/counters) in the Firestore documentation.
iOS: How to create PKCS12 (P12) keystore from private key and x509certificate in application programmatically? This question was apparently similar but had no answers of any kind: [Programmatically create a x509 certificate for iPhone without using OpenSSL](https://stackoverflow.com/questions/24683786/programmatically-create-a-x509-certificate-for-iphone-without-using-openssl) In our application (server, client), we are implementing client authentication (SSL based on X509Certificate). We already have a way to generate a `keypair`, create a `PKCS10 Certificate Signing Request`, have this signed by the `self-signed CA` and create a `X509Certificate`, send this back. However, to use this certificate in SSL requests, the `private key` and the `X509Certificate` have to be exported to a `PKCS12` (P12) `keystore`. Does anyone know anything about how to do this, or even if it's possible? The client **has** to generate the P12 file (we don't want to give out the private key), and the client is running iOS, and is a mobile device. The solution worked for Android using BouncyCastle (SpongyCastle), but we found nothing for iOS. EDIT: In Java, this export is done by the following: ``` ByteArrayOutputStream bos = new ByteArrayOutputStream(); KeyStore ks = KeyStore.getInstance("PKCS12", BouncyCastleProvider.PROVIDER_NAME); ks.load(null); ks.setKeyEntry("key-alias", (Key) key, password.toCharArray(), new java.security.cert.Certificate[] { x509Certificate }); ks.store(bos, password.toCharArray()); bos.close(); return bos.toByteArray(); ```
If you use openssl, you don't have to copy the full source code into your project, it is enough to add the libs and headers, so the openssl library can be used without any size problem. You can generate a key and a cert like that with openssl: ``` EVP_PKEY * pkey; pkey = EVP_PKEY_new(); RSA * rsa; rsa = RSA_generate_key( 2048, /* number of bits for the key - 2048 is a sensible value */ RSA_F4, /* exponent - RSA_F4 is defined as 0x10001L */ NULL, /* callback - can be NULL if we aren't displaying progress */ NULL /* callback argument - not needed in this case */ ); EVP_PKEY_assign_RSA(pkey, rsa); X509 * x509; x509 = X509_new(); ASN1_INTEGER_set(X509_get_serialNumber(x509), 1); X509_gmtime_adj(X509_get_notBefore(x509), 0); X509_gmtime_adj(X509_get_notAfter(x509), 31536000L); X509_set_pubkey(x509, pkey); X509_NAME * name; name = X509_get_subject_name(x509); X509_NAME_add_entry_by_txt(name, "C", MBSTRING_ASC, (unsigned char *)"CA", -1, -1, 0); X509_NAME_add_entry_by_txt(name, "O", MBSTRING_ASC, (unsigned char *)"MyCompany Inc.", -1, -1, 0); X509_NAME_add_entry_by_txt(name, "CN", MBSTRING_ASC, (unsigned char *)"localhost", -1, -1, 0); X509_set_issuer_name(x509, name); //X509_sign(x509, pkey, EVP_sha1()); const EVP_CIPHER *aConst = EVP_des_ede3_cbc(); ``` And you can write this into pem format with these functions: ``` PEM_write_PrivateKey(f, pkey, NULL, NULL, 0, NULL, NULL); PEM_write_X509( f, /* write the certificate to the file we've opened */ x509 /* our certificate */ ); ``` After that it is possible to write these files into a p12 file, source from here: <https://github.com/luvit/openssl/blob/master/openssl/demos/pkcs12/pkwrite.c> ``` /* pkwrite.c */ #include <stdio.h> #include <stdlib.h> #include <openssl/pem.h> #include <openssl/err.h> #include <openssl/pkcs12.h> /* Simple PKCS#12 file creator */ int main(int argc, char **argv) { FILE *fp; EVP_PKEY *pkey; X509 *cert; PKCS12 *p12; if (argc != 5) { fprintf(stderr, "Usage: pkwrite infile password name p12file\n"); exit(1); } SSLeay_add_all_algorithms(); ERR_load_crypto_strings(); if (!(fp = fopen(argv[1], "r"))) { fprintf(stderr, "Error opening file %s\n", argv[1]); exit(1); } cert = PEM_read_X509(fp, NULL, NULL, NULL); rewind(fp); pkey = PEM_read_PrivateKey(fp, NULL, NULL, NULL); fclose(fp); p12 = PKCS12_create(argv[2], argv[3], pkey, cert, NULL, 0,0,0,0,0); if(!p12) { fprintf(stderr, "Error creating PKCS#12 structure\n"); ERR_print_errors_fp(stderr); exit(1); } if (!(fp = fopen(argv[4], "wb"))) { fprintf(stderr, "Error opening file %s\n", argv[1]); ERR_print_errors_fp(stderr); exit(1); } i2d_PKCS12_fp(fp, p12); PKCS12_free(p12); fclose(fp); return 0; } ```
How do you create a generic function in Rust with a trait requiring a lifetime? I am trying to write a trait which works with a database and represents something which can be stored. To do this, the trait inherits from others, which includes the `serde::Deserialize` trait. ``` trait Storable<'de>: Serialize + Deserialize<'de> { fn global_id() -> &'static [u8]; fn instance_id(&self) -> Vec<u8>; } struct Example { a: u8, b: u8 } impl<'de> Storable<'de> for Example { fn global_id() -> &'static [u8] { b"p" } fn instance_id(&self) -> Vec<u8> { vec![self.a, self.b] } } ``` Next, I am trying to write this data using a generic function: ``` pub fn put<'de, S: Storable>(&mut self, obj: &'de S) -> Result<(), String> { ... let value = bincode::serialize(obj, bincode::Infinite); ... db.put(key, value).map_err(|e| e.to_string()) } ``` However, I am getting the following error: ``` error[E0106]: missing lifetime specifier --> src/database.rs:180:24 | 180 | pub fn put<'de, S: Storable>(&mut self, obj: &'de S) -> Result<(), String> { | ^^^^^^^^ expected lifetime parameter ``` [Minimal example on the playground.](https://play.rust-lang.org/?gist=94501043315e9b78718af49a5d3aac91&version=stable) How would I resolve this, possibly avoid it altogether?
You have defined `Storable` with a generic parameter, in this case a lifetime. That means that the generic parameter has to be propagated throughout the entire application:

```
fn put<'de, S: Storable<'de>>(obj: &'de S) -> Result<(), String> { /* ... */ }
```

You can also decide to make the generic specific. That can be done with a concrete type or lifetime (e.g. `'static`), or by putting it behind a trait object.

Serde also has [a comprehensive page about deserializer lifetimes](https://serde.rs/lifetimes.html). It mentions that you can choose to use `DeserializeOwned` as well.

```
trait Storable: Serialize + DeserializeOwned { /* ... */ }
```

You can use the same concept as `DeserializeOwned` for your own trait as well:

```
trait StorableOwned: for<'de> Storable<'de> { }

fn put<'de, S: StorableOwned>(obj: &'de S) -> Result<(), String> {
    /* ... */
}
```
How to access object prototype in javascript?

In all the articles it is written that JavaScript is a prototype-based language, meaning that every object has a prototype (or, more precisely, a prototype chain).

So far, I've tried the following code snippet:

```
var F = function() {};
F.prototype.member1 = 1;
var object1 = new F();
console.log(object1.member1); // prints 1
```

How can I access the prototype object of `object1`? Is there a browser-neutral way to do that (I mean, not relying on the `__proto__` property)? I've seen [this](https://stackoverflow.com/questions/2242518/how-can-i-see-a-javascript-objects-prototype-chain) link, but maybe there are new developments since 2010. If I can't, could you please share the rationale behind it?
```
var f = function() {};
var instance = new f();
```

If you know the name of the `instance` *constructor function*, you can simply access the prototype as:

```
var prototype = f.prototype;
prototype.someMember = someValue;
```

If you don't:

**1)**

```
var prototype = Object.getPrototypeOf(instance);
prototype.someMember = someValue;
```

**2)** or

```
var prototype = instance.__proto__;
prototype.someMember = someValue;
```

**3)** or

```
var prototype = instance.constructor.prototype;
// works only if constructor is properly assigned and not modified
prototype.someMember = someValue;
```

For compatibility you can place the following snippet into your code (and always use `Object.getPrototypeOf(instance)` to return the prototype):

```
if(!Object.getPrototypeOf) {
  if(({}).__proto__ === Object.prototype && ([]).__proto__ === Array.prototype) {
    Object.getPrototypeOf = function getPrototypeOf(object) {
      return object.__proto__;
    };
  } else {
    Object.getPrototypeOf = function getPrototypeOf(object) {
      // May break if the constructor has been changed or removed
      return object.constructor ? object.constructor.prototype : void 0;
    };
  }
}
```

**UPDATE:**

According to ECMA-262 6th Edition (June 2015), the `__proto__` property is standardized as an additional feature for Web browsers. All the latest editions of major browsers support it now. Read more about `__proto__`:

- MDN: [`Object.prototype.__proto__`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/proto)
- ECMA-262 6th Edition (June 2015): [`B.2.2.1 Object.prototype.__proto__`](http://www.ecma-international.org/ecma-262/6.0/#sec-object.prototype.__proto__)