Multiple column values in a single row I have a table like this ``` ID Status 1 5 1 6 1 7 2 5 2 6 2 7 ``` I need the result like below ``` ID col1 col2 col3 1 5 6 7 2 5 6 7 ``` Please help me
``` SELECT ID, MAX(CASE WHEN status = 5 THEN Status ELSE NULL END) col1, MAX(CASE WHEN status = 6 THEN Status ELSE NULL END) col2, MAX(CASE WHEN status = 7 THEN Status ELSE NULL END) col3 FROM tableNAME GROUP BY ID ``` - [SQLFiddle Demo](http://www.sqlfiddle.com/#!3/4836f/1) using `PIVOT` ``` SELECT * FROM ( SELECT ID, Status, CASE Status WHEN 5 THEN 'Col1' WHEN 6 THEN 'Col2' WHEN 7 THEN 'Col3' END Stat FROM tableName ) src PIVOT ( MAX(Status) FOR Stat IN ([Col1],[Col2],[Col3]) ) pivotTbl ``` - [SQLFiddle Demo](http://www.sqlfiddle.com/#!3/4836f/7)
Inheriting constructors of virtual base classes Virtual base classes are initialized in the most derived class, so my guess is that inheriting the constructor of the base class should work as well: ``` struct base { base(int) {} }; struct derived: virtual base { using base::base; }; derived d(0); ``` However, this fails to compile with GCC 5.2.0, which tries to find `base::base()`, but works fine with Clang 3.6.2. Is this a bug in GCC?
This is gcc bug [58751](https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58751) "*[C++11] Inheriting constructors do not work properly with virtual inheritance*" (aka [63339](https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63339) "*"using" constructors from virtual bases are implicitly deleted*"): From the 58751 description: > > In the document N2540 it states that: > > > Typically, inheriting constructor definitions for classes with virtual bases will be ill-formed, unless the virtual base supports default initialization, or the virtual base is a direct base, and named as the base forwarded-to. Likewise, all data members and other direct bases must support default initialization, or any attempt to use a inheriting constructor will be ill-formed. Note: ill-formed when used, not declared. > > > Hence, the case of virtual bases is explicitly considered by the committee and thus should be implemented. > > > Workaround borrowed from the bug report: ``` struct base { base() = default; // <--- add this base(int) {} }; ``` According to the bug report, in this case the constructor `base::base(int)` is called by the implicitly generated constructor `derived::derived(int)`. I have checked that [your code](http://coliru.stacked-crooked.com/a/d44786fc57bde885) does not compile. But [this](http://coliru.stacked-crooked.com/a/c49d065932b4ceb3) does and it calls the `base::base(int)` constructor.
Compile all angular templates to one js file I am trying to compile all Angular templates into a single js file. Something like what ember does with ember-cli. So I successfully managed to minify and concat all the javascript files. I have just 2 files now, vendor.js and application.js, and a whole lot of template files which I want to cram into templates.js. How do I go about it? If someone could give a step-by-step explanation, please. Any links would be appreciated too. Surprisingly there is no information about this anywhere. I am using Mimosa as the build tool; it seemed the easiest to me. Here is my mimosa config: ``` exports.config = { modules: [ "copy", "stylus", "minify-css", "minify-js", "combine", "htmlclean", "html-templates" ], watch: { sourceDir: "app", compiledDir: "public", javascriptDir: "js", exclude: [/[/\\](\.|~)[^/\\]+$/] }, vendor: { javascripts: "vendor/js" }, stylus: { sourceMap: false }, combine: { folders: [ { folder:"vendor/js", output:"vendor.js", order: [ "angular.js" ] }, { folder:"js", output:"main.js", order: [ "application/main.js" ] } ] }, htmlclean: { extensions:["html"] }, htmlTemplates: { extensions: ["tpl"] }, template: { outputFileName: "templates" } } ``` It does generate the templates.js file without any errors. But when I link it, angular spits a bunch of errors. Once compiled, how do I actually call those templates from ng-include and from the route provider? I assume that it is the same as I would call a script template using the id, which in my case is derived from the template's original file name, right? Maybe I am missing some important steps. The build tool is not important here although desirable. If someone could show how to do it manually without a build tool I would figure out the rest. Thanks.
I'm using Gulp as my build tool, and in that, there's a plugin [gulp-angular-templatecache](https://github.com/miickel/gulp-angular-templatecache) which pre-compiles and registers all templates for your module in the angular $templateCache - no changes are required to any of the calling code to use these. **EDIT**: The [Angular documentation for $templateCache](https://docs.angularjs.org/api/ng/service/$templateCache) explains how the templateCache works. It might be worth reading through the documentation for gulp-angular-templatecache to see how that pre-populates the $templateCache to see if you can crib something that would work with your build process. Here's my gulp task that does the job: ``` var templateCache = require('gulp-angular-templatecache'); gulp.task('buildjstemplates', function () { return gulp.src(['public/javascripts/app/**/*.html']) .pipe(templateCache({module: 'app'})) .pipe(gulp.dest('public/javascripts/app/')); }); ```
What exactly is code branching What is code branching? I've seen it mentioned in various places, especially with bit twiddling, but never really thought about it. How does it slow a program down and what should I be thinking about while coding? I see mention of `if` statements. I really don't understand how such code can slow a program down. If the condition is true, do the following instructions; otherwise, jump to another set of instructions? I see the other thread mentioning "branch prediction", maybe this is where I'm really lost. What is there to predict? The condition is right there and it can only be true or false. I don't believe this to be a duplicate of [this related question](https://stackoverflow.com/questions/11227809/why-is-processing-a-sorted-array-faster-than-an-unsorted-array). The linked thread is talking about "branch prediction" in reference to an unsorted array. I'm asking what branching is and why prediction is required.
Essentially imagine an assembly line in a factory. Imagine that, as each item passes through the assembly line, it will go to employee 1, then employee 2, on up to employee 5. After employee 5 is done with it, the item is finished and is ready to be packaged. Thus all five employees can be working on different items at the same time and not having to just wait around on each other. Unlike most assembly lines though, every single time employee 1 starts working on a new item, it's potentially a new type of item - not just the same type over and over. Well, for whatever weird and imaginative reason, imagine the manager is standing at the very end of the assembly line. And he has a list saying, "Make this item first. Then make that type of item. Then that type of item." And so on. As he sees employee 5 finish each item and move on to the next, the manager then tells employee 1 which type of item to start working on, looking at where they are in the list at that time. Now let's say there's a point in that list - that "sequence of computer instructions" - where it says, "Now start making a coffee cup. If it's nighttime when you finish making the cup, then start making a frozen dinner. If it's daytime, then start making a bag of coffee grounds." This is your if statement. Since the manager, in this kind of fake example, doesn't really know what time of day it's going to be until he actually sees the cup after it's finished, he could just wait until that time to call out the next item to make - either a frozen dinner or some coffee grounds. The problem there is that if he waits until the very last second like that - and he has to wait that long to be absolutely sure what time of day it'll be when the cup is finished, and thus what the next item's going to be - then workers 1-4 are not going to be working on anything at all until worker 5 is finished. That completely defeats the purpose of an assembly line! So the manager takes a guess. 
The factory is open 7 hours in the day and only 1 hour at night. So it is much more likely that the cup will be finished in the daytime, thus warranting the coffee grounds. So as soon as employee 2 starts working on the coffee cup, the manager calls out the coffee grounds to employee 1. Then the assembly line just keeps moving along like it had been, until employee 5 is finished with the cup. At that time the manager finally sees what time of day it is. If it's daytime, that's great! If it's nighttime, everything started on after that coffee cup must be thrown away, and the frozen dinner must be started on. ...So essentially branch prediction is where the manager temporarily ventures a guess like that, and the line moves along faster when he's right. **Pseudo-Edit:** It is largely hardware-related. The main search phrase would probably be "computer pipeline cpu". But the list of instructions is already made up - it's just that that list of instructions has branches within it; it's not always 1, 2, 3, etc. But as stage 5 of the pipeline is finishing up instruction 10, stage 1 can already be working on instruction 14. Usually computer instructions can be broken up like that and worked on in segments. If stages 1-n are all working on something at the same time, and nothing gets trashed later, that's just faster than finishing one before starting another.
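The "bit twiddling" the question mentions connects here: one common trick is to remove the branch entirely, so there is nothing for the "manager" to predict at all. Below is a hedged sketch of a branchless minimum (the function name is mine; Python is used only to make the bit trick visible — the performance payoff exists in compiled code, where the compare-and-jump genuinely disappears):

```python
def branchless_min(a, b):
    # (a < b) is 0 or 1; negating it gives a mask of all-zero or all-one bits.
    # The mask then selects a when a < b, otherwise b -- no conditional jump.
    return b ^ ((a ^ b) & -(a < b))

print(branchless_min(3, 7))   # 3
print(branchless_min(7, 3))   # 3
```

Compared with a `min` written using `if`, this version hands the pipeline a straight-line sequence of instructions, at a real cost in readability — which is why it is usually reserved for hot inner loops.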
Dart json.encode returns json string with key values without quotes I am trying to convert a dictionary to a JSON string. However I am not getting quotes around any of the strings. I am using Dart 2. Here is what I have ``` var resBody = {}; resBody["email"] = "[email protected]"; resBody["password"] = "admin123"; var user = {}; user["user"] = resBody; String str = json.encode(user); ``` Output is: ``` {user: {email: [email protected], password: admin123}} ``` I would like this to be like an actual json object ``` {"user": {"email": "[email protected]", "password": "admin123"}} ``` How can I tell dart to put quotes around it? I looked at [this](https://stackoverflow.com/questions/29294019/dart-convert-map-to-json-with-all-elements-quoted) thread and am doing exactly what works for that user. Am I doing something wrong?
This is working as expected ``` import 'dart:convert'; void main() { var resBody = {}; resBody["email"] = "[email protected]"; resBody["password"] = "admin123"; var user = {}; user["user"] = resBody; String str = json.encode(user); print(str); } ``` prints ``` {"user":{"email":"[email protected]","password":"admin123"}} ``` [**DartPad example**](https://dartpad.dartlang.org/bca589c77b74a3290ab1ef3e246cf384) [or] ``` import 'dart:convert'; void main() { const JsonEncoder encoder = JsonEncoder.withIndent(' '); try { var resBody = {}; resBody["email"] = "[email protected]"; resBody["password"] = "admin123"; var user = {}; user["user"] = resBody; String str = encoder.convert(user); print(str); } catch(e) { print(e); } } ``` which gives you the beautified output ``` { "user": { "email": "[email protected]", "password": "admin123" } } ```
How to make an image corner rounded programmatically I am using a text view. It has one image as background. How do I programmatically round this image's corner?
Convert your image to a bitmap, then convert that bitmap to a rounded-corner bitmap, and finally apply it as your TextView's background. The code below converts a bitmap to a rounded-corner bitmap. ``` public static Bitmap getRoundedCornerBitmap(Bitmap bitmap, int roundPixelSize) { Bitmap output = Bitmap.createBitmap(bitmap.getWidth(), bitmap.getHeight(), Config.ARGB_8888); Canvas canvas = new Canvas(output); final Paint paint = new Paint(); final Rect rect = new Rect(0, 0, bitmap.getWidth(), bitmap.getHeight()); final RectF rectF = new RectF(rect); final float roundPx = roundPixelSize; paint.setAntiAlias(true); canvas.drawRoundRect(rectF, roundPx, roundPx, paint); paint.setXfermode(new PorterDuffXfermode(Mode.SRC_IN)); canvas.drawBitmap(bitmap, rect, rect, paint); return output; } ```
Realm Cleaning Up Old Objects I've just started using Realm for caching in my iOS app. The app is a store, with merchandise. As the user browses merchandise, I'm adding the items to the database. However, as these items do not stay available forever, it does not make sense to keep them in the database past a certain point, let's say 24hrs. Is there a preferred way to batch expire objects after an amount of time? Or would it be best to add a date property and query these objects on each app launch?
There's no default cache expiration mechanism in Realm itself, but like you said, it's a relatively trivial matter of adding an `NSDate` property to each object, and simply performing a query to check for objects whose date property is older than 24 hours periodically inside your app. The logic could potentially look something like this in both languages (remember that deletions must happen inside a write transaction): **Objective-C** ``` NSDate *yesterday = [[NSDate alloc] initWithTimeIntervalSinceNow:-(24 * 60 * 60)]; RLMResults *itemsToDelete = [ItemObject objectsWhere:@"addedDate < %@", yesterday]; [[RLMRealm defaultRealm] deleteObjects:itemsToDelete]; ``` **Swift** ``` let yesterday = NSDate(timeIntervalSinceNow: -(24 * 60 * 60)) let itemsToDelete = Realm().objects(ItemObject).filter("addedDate < %@", yesterday) Realm().delete(itemsToDelete) ``` I hope that helped!
Tensorflow: When should I use or not use `feed_dict`? I am kind of confused about why we use `feed_dict`. According to my friend, you commonly use `feed_dict` when you use `placeholder`, and this is probably something bad for production. I have seen code like this, in which `feed_dict` is not involved: ``` for j in range(n_batches): X_batch, Y_batch = mnist.train.next_batch(batch_size) _, loss_batch = sess.run([optimizer, loss], {X: X_batch, Y:Y_batch}) ``` I have also seen code like this, in which `feed_dict` is involved: ``` for i in range(100): for x, y in data: # Session execute optimizer and fetch values of loss _, l = sess.run([optimizer, loss], feed_dict={X: x, Y:y}) total_loss += l ``` I understand that with `feed_dict` you are feeding in data, using `X` as the key, as in a dictionary. But here I don't see any difference. So, what exactly is the difference and why do we need `feed_dict`?
In a tensorflow model you can define a placeholder such as `x = tf.placeholder(tf.float32)`, then you will use `x` in your model. For example, I define a simple set of operations as: ``` x = tf.placeholder(tf.float32) y = x * 42 ``` Now when I ask tensorflow to compute `y`, it's clear that `y` depends on `x`. ``` with tf.Session() as sess: sess.run(y) ``` This will produce an error because I did not give it a value for `x`. In this case, because `x` is a placeholder, if it gets used in a computation you must pass it in via `feed_dict`. If you don't it's an error. Let's fix that: ``` with tf.Session() as sess: sess.run(y, feed_dict={x: 2}) ``` The result this time will be `84`. Great. Now let's look at a trivial case where `feed_dict` is not needed: ``` x = tf.constant(2) y = x * 42 ``` Now there are no placeholders (`x` is a constant) and so nothing needs to be fed to the model. This works now: ``` with tf.Session() as sess: sess.run(y) ```
Publish a Web Application from the Command Line I've got a series of .NET 4 based web applications (WCF and Web) within the same solution, but need to selectively publish, from the command line. I've tried various things so far, MSBuild, aspnet_compiler, but nothing, to date, has worked. I need to be able to specify the Project, not the solution, have any transforms run and have the output redirected to a folder...basically mimic the right-click 'Publish' option, using the File System. In addition to all of this, I'd like to leave the projects alone - not adding msbuild files all over the place, as this is a specific build, and not necessarily related to the project. Stuff I've tried: - [Publish ASP.NET MVC 2 application from command line and Web.config transformations](https://stackoverflow.com/questions/4381864/publish-asp-net-mvc-2-application-from-command-line-and-web-config-transformation) - [Equivalent msbuild command for Publish from VS2008](https://stackoverflow.com/questions/313762/equivalent-msbuild-command-for-publish-from-vs2008)
Save the following script as **publishProject.bat** ``` rem publish passed project rem params: %configuration% %destDir% %srcDir% %proj% @echo off SET DestPath=d:\projects\Publish\%2 SET SrcPath=d:\projects\Src\%3\ SET ProjectName=%4 SET Configuration=%1 :: clear existing directory RD /S /Q "%DestPath%" :: build project MSBuild "%SrcPath%%ProjectName%.vbproj" /p:Configuration=%Configuration% :: deploy project ::/t:TransformWebConfig MSBuild "%SrcPath%%ProjectName%.vbproj" /target:_CopyWebApplication /property:OutDir=%DestPath%\ /property:WebProjectOutputDir=%DestPath% /p:Configuration=%Configuration% xcopy "%SrcPath%bin\*.*" "%DestPath%\bin\" /k /y echo ========================================= echo %SrcPath%%3.vbproj is published echo ========================================= ``` I call it from another batch file ``` @echo off rem VS2010. For VS2008 change %VS100COMNTOOLS% to %VS90COMNTOOLS% call "%VS100COMNTOOLS%\vsvars32.bat" SET ex=.\publishProject.bat Release call %ex% KillerWebApp1 KillerWebApp1\KillerWebApp1 KillerWebApp1 call %ex% KillerWebApp2 KillerWebApp2\KillerWebApp2 KillerWebApp2 call %ex% KillerWebApp3 KillerWebApp3\KillerWebApp3 KillerWebApp3 call %ex% KillerWebApp4 KillerWebApp4\KillerWebApp4 KillerWebApp4 ``` **EDIT**: The code above works for most cases but not for all. E.g. we use another ASP.NET application and link it as a virtual folder in IIS. For this situation VS2008 worked fine with the code above, but VS2010 also copies files from the virtual directory while deploying. The following code works properly also in VS2010 ([solution was found here](http://www.digitallycreated.net/Blog/59/locally-publishing-a-vs2010-asp.net-web-application-using-msbuild)) Add to your **project file** (\*.csproj, \*.vbproj) ``` <Target Name="PublishToFileSystem" DependsOnTargets="PipelinePreDeployCopyAllFilesToOneFolder"> <Error Condition="'$(PublishDestination)'==''" Text="The PublishDestination property must be set to the intended publishing destination." 
/> <MakeDir Condition="!Exists($(PublishDestination))" Directories="$(PublishDestination)" /> <ItemGroup> <PublishFiles Include="$(_PackageTempDir)\**\*.*" /> </ItemGroup> <Copy SourceFiles="@(PublishFiles)" DestinationFiles="@(PublishFiles->'$(PublishDestination)\%(RecursiveDir)%(Filename)%(Extension)')" SkipUnchangedFiles="True" /> </Target> ``` Change **publishProject.bat** to: ``` rem publish passed project rem params: %configuration% %destDir% %srcDir% %proj% @echo off SET DestPath=d:\projects\Publish\%2 SET SrcPath=d:\projects\Src\%3\ SET ProjectName=%4 SET Configuration=%1 :: clear existing directory RD /S /Q "%DestPath%" :: build and publish project MSBuild "%SrcPath%%ProjectName%.vbproj" "/p:Configuration=%Configuration%;AutoParameterizationWebConfigConnectionStrings=False;PublishDestination=%DestPath%" /t:PublishToFileSystem ```
Refining AI movement logic I have the below method, which moves the AI towards the given plant. This works well; however, it feels really messy. Any input as to a better way to lay out the logic would be greatly appreciated. The logic is as follows: we subtract the AI's x and y positions from the plant's position; if a value is positive we add 30, and if it is negative we take away 30. If you need any more explanation of the logic, let me know. ``` private void movePosition(Plant p) { /* * set which direction to move,the number generated relates to the * direction as below: * 1 2 3 * 4 5 * 6 7 8 */ int xdiff = p.getXpos() - xpos; int ydiff = p.getYpos() - ypos; if (xdiff > 0){ if (ydiff > 0){ //8 xpos += 30; ypos += 30; }else if(ydiff < 0){ //3 xpos += 30; ypos -= 30; }else{ //5 xpos += 30; } }else if(xdiff < 0){ if (ydiff > 0){ //6 xpos -= 30; ypos += 30; }else if(ydiff < 0){ //1 xpos -= 30; ypos -= 30; }else{ //4 xpos -= 30; } }else{ if (ydiff < 0){ //7 ypos -= 30; }else{ //2 ypos += 30; } } if (xpos > 720) xpos = 1; if (ypos > 720) ypos = 1; if (xpos < 1) xpos = 720; if (ypos < 1) ypos = 720; } ```
Here is my refactoring: ``` private void movePosition(Plant p) { xpos += Integer.signum(p.getXpos() - xpos) * DELTA_X; ypos += Integer.signum(p.getYpos() - ypos) * DELTA_Y; xpos = Math.floorMod(xpos, MAX_X); ypos = Math.floorMod(ypos, MAX_Y); } ``` With the right `import static` this can also be: ``` private void movePosition(Plant p) { xpos = floorMod(xpos + signum(p.getXpos() - xpos) * DELTA_X, MAX_X); ypos = floorMod(ypos + signum(p.getYpos() - ypos) * DELTA_Y, MAX_Y); } ``` ### Signum `signum` implements the sign function, which gives -1, 0 or 1 for negative integers, zero and positive integers respectively. It encodes the original logic in a very short and readable expression. The sign is multiplied by the appropriate amount of units (constants are not detailed in the code, btw). I have nothing against decision tables in principle (see rolfl's answer), but I don't think this is necessary here. In his answer, palacsint cited "Code Complete". *I can do that too!* From Code Complete 2nd Edition, Chapter 19: General Control Issues, page 431: > > **Use decision tables to replace complicated conditions** > > > Sometimes you have a complicated test involving several variables. [...] > > > ... and sometimes you don't ;-) ### Modulus The wrap-around behaviour of the original code can be achieved with a modulus operation. Note that you cannot just use the `%` operator in Java because it computes the remainder, which can be negative when your position goes below zero. The `floorMod` operation actually computes modular arithmetic. Now, you might think: *this is wrong, the original code does not work like this!* Yes, but let me explain: - First, OP's coordinates range from 1 to 720 (both inclusive). I have an issue with this and I deliberately changed the approach here. The reason is that, in the original code, instead of using a coordinate space that has the origin at (0,0), the origin is translated to (1,1). 
Most of the time, you end up having to add or subtract 1 in your operations. That can eventually lead to off-by-one errors. If coordinates are from 0 to 719, however, the wrap-around logic is simply given by the modulus operation. - "*But the original behavior is not like a modulus!*", you might say. Why do you say this? Because, suppose `x` is 710, and then I add 30: with modulus, it goes back to 20, whereas using OP's code, it would be 1, because when we are out of bounds, we go back to the minimal (or maximal) position. To that, I reply that you are never at position 710, but only at 0, 30, 60, ..., 690. At least, this is what I understand from OP's code, where the object seems to move on a grid, and not freely around. I suppose the object is always located initially at a multiple of 30, and then can only move by 30 units. If I am wrong, then (1) sorry, and (2) yes, the modulus is not exactly the right answer; you might be better off using the `boundPos` function from rolfl.
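The signum-plus-floor-mod idea is easy to check in another language. Here is a minimal sketch in Python (the function name is hypothetical; note that Python's `%` already behaves like Java's `Math.floorMod`, so `math.fmod` is used below to mimic Java's truncating `%` for contrast):

```python
import math

def move_toward(pos, target, step, size):
    gap = target - pos
    sign = (gap > 0) - (gap < 0)       # -1, 0 or 1, like Integer.signum
    return (pos + sign * step) % size  # floor-mod wraps into [0, size)

print(move_toward(0, 600, 30, 720))  # 30  (one step toward the plant)
print(move_toward(10, 0, 30, 720))   # 700 (wraps below zero to the far edge)

# Why a plain remainder is not enough: it leaves negatives negative.
print(int(math.fmod(-20, 720)))      # -20 (what Java's % would give)
print((-20) % 720)                   # 700 (what Math.floorMod gives)
```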
Add Image to layout on touch in android with respect to touch position Wherever I touch on the screen, I need to add one image at that particular x y position. I tried implementing the OnTouch listener, but it is adding the image at a different position, and many images appear after many touches. I want it so that wherever I touch, that same image appears there. Please help; I am new to Android, working on a project in a company. This is my activity code ``` @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); final RelativeLayout rl = (RelativeLayout) findViewById(R.id.wwcontainer); //Set On TouchListner to the Layout rl.setOnTouchListener(new View.OnTouchListener() { @Override public boolean onTouch(View v, MotionEvent event) { Log.i("OnTouch", "On Touch View Group...."); int x = (int) event.getX(); int y = (int) event.getY(); ImageView imageView = new ImageView(getApplicationContext()); imageView.setImageResource(R.drawable.icon); imageView.setLayoutParams(new LayoutParams(x,y)); rl.addView(imageView); return true; } }); } ``` I know about drag and drop; I don't want drag and drop. Instead, I need the image to appear wherever I touch, with respect to the touch position. Please help me.
I had the same request; this is what I did, and it worked perfectly for me ``` final RelativeLayout rr = (RelativeLayout) findViewById(R.id.rr); rr.setOnTouchListener(new OnTouchListener() { @Override public boolean onTouch(View v, MotionEvent event) { if (event.getAction() == MotionEvent.ACTION_DOWN) { int x = (int) event.getX(); int y = (int) event.getY(); RelativeLayout.LayoutParams lp = new RelativeLayout.LayoutParams( RelativeLayout.LayoutParams.WRAP_CONTENT, RelativeLayout.LayoutParams.WRAP_CONTENT); ImageView iv = new ImageView(getApplicationContext()); lp.setMargins(x, y, 0, 0); iv.setLayoutParams(lp); iv.setImageDrawable(getResources().getDrawable( R.drawable.gr1)); ((ViewGroup) v).addView(iv); } return false; } }); ``` This will place the image in the RelativeLayout where you touched, with its top-left corner at the touch coordinates. If you want to center the new image on the touch point, you need to know your image's height and width, and change one line in my code to ``` lp.setMargins(x - yourImageWidth/2, y - yourImageHeight/2, 0, 0); ```
Laravel Eloquent - $fillable is not working? I have set the variable `$fillable` in my model. I wanted to test the `update` functionality, and I get this error: > > SQLSTATE[42S22]: Column not found: 1054 Unknown column '\_method' in 'field list' (SQL: update `positions` set `name` = Casual Aquatic Leader, `_method` = PUT, `id` = 2, `description` = Here is my description, `updated_at` = 2014-05-29 17:05:11 where `positions`.`client_id` = 1 and `id` = 2)" > > > Why is this yelling at `_method` when my fillable doesn't have that as a parameter? My update function is: ``` Client::find($client_id) ->positions() ->whereId($id) ->update(Input::all()); ```
Change the following: ``` ->update(Input::all()); ``` to this (exclude the `_method` from the array) ``` ->update(Input::except('_method')); ``` ## Update: Actually, the following `update` method is being called from the `Illuminate\Database\Eloquent\Builder` class, triggered by the `__call` method of the `Illuminate\Database\Eloquent\Relations` class (because you are calling `update` on a relation), and hence the `$fillable` check is not performed, so you may use `Input::except('_method')` as I answered: ``` public function update(array $values) { return $this->query->update($this->addUpdatedAtColumn($values)); } ``` If you directly call this on a Model (not on a relation): ``` Positions::find($id)->update(Input::all()); ``` Then this will not happen, because the `fillable` check will be performed within `Model.php`, since the following `update` method will be called from the `Illuminate\Database\Eloquent\Model` class: ``` public function update(array $attributes = array()) { if ( ! $this->exists) { return $this->newQuery()->update($attributes); } return $this->fill($attributes)->save(); } ```
Using concepts for function overload resolution (instead of SFINAE) Trying to say goodbye to SFINAE. Is it possible to use `concepts` to distinguish between functions, so the compiler can match the correct function based on whether or not a sent parameter meets `concept` constraints? For example, overloading these two: ``` // (a) void doSomething(auto t) { /* */ } // (b) void doSomething(ConceptA auto t) { /* */ } ``` So when called the compiler would match the correct function per each call: ``` doSomething(param_doesnt_adhere_to_ConceptA); // calls (a) doSomething(param_adheres_to_ConceptA); // calls (b) ``` --- Related question: [Will Concepts replace SFINAE?](https://stackoverflow.com/questions/28133118/will-concepts-replace-sfinae)
Yes `concepts` are designed for this purpose. If a sent parameter doesn't meet the required concept argument the function would not be considered in the overload resolution list, thus avoiding ambiguity. Moreover, if a sent parameter meets several functions, the more specific one would be selected. **Simple example:** ``` void print(auto t) { std::cout << t << std::endl; } void print(std::integral auto i) { std::cout << "integral: " << i << std::endl; } ``` Above `print` functions are a valid overloading that can live together. - If we send a non integral type it will pick the first - If we send an integral type it will prefer the second e.g., calling the functions: ``` print("hello"); // calls print(auto) print(7); // calls print(std::integral auto) ``` **No ambiguity** -- the two functions can perfectly live together, side-by-side. **No need for any SFINAE code**, such as `enable_if` -- it is applied already (hidden very nicely). --- ## Picking between two concepts The example above presents how the compiler prefers constrained type (*std::integral auto*) over an unconstrained type (*just auto*). But the rules also apply to two competing concepts. The compiler should pick the more specific one, if one is more specific. Of course if both concepts are met and none of them is more specific this will result with ambiguity. Well, what makes a concept be more specific? if it is based on the other one1. 
The generic concept - **GenericTwople**: ``` template<class P> concept GenericTwople = requires(P p) { requires std::tuple_size<P>::value == 2; std::get<0>(p); std::get<1>(p); }; ``` The more specific concept - Twople: ``` class Any; template<class Me, class TestAgainst> concept type_matches = std::same_as<TestAgainst, Any> || std::same_as<Me, TestAgainst> || std::derived_from<Me, TestAgainst>; template<class P, class First, class Second> concept Twople = GenericTwople<P> && // <= note this line type_matches<std::tuple_element_t<0, P>, First> && type_matches<std::tuple_element_t<1, P>, Second>; ``` Note that Twople is required to meet GenericTwople requirements, thus it is more specific. If you replace in our Twople the line: ``` GenericTwople<P> && // <= note this line ``` with the actual requirements that this line brings, Twople would still have the same requirements but it will no longer be more specific than GenericTwople. This, along with code reuse of course, is why we prefer to define Twople based on GenericTwople. 
--- Now we can play with all sort of overloads: ``` void print(auto t) { cout << t << endl; } void print(const GenericTwople auto& p) { cout << "GenericTwople: " << std::get<0>(p) << ", " << std::get<1>(p) << endl; } void print(const Twople<int, int> auto& p) { cout << "{int, int}: " << std::get<0>(p) << ", " << std::get<1>(p) << endl; } ``` And call it with: ``` print(std::tuple{1, 2}); // goes to print(Twople<int, int>) print(std::tuple{1, "two"}); // goes to print(GenericTwople) print(std::pair{"three", 4}); // goes to print(GenericTwople) print(std::array{5, 6}); // goes to print(Twople<int, int>) print("hello"); // goes to print(auto) ``` We can go further, as the Twople concept presented above works also with polymorphism: ``` struct A{ virtual ~A() = default; virtual std::ostream& print(std::ostream& out = std::cout) const { return out << "A"; } friend std::ostream& operator<<(std::ostream& out, const A& a) { return a.print(out); } }; struct B: A{ std::ostream& print(std::ostream& out = std::cout) const override { return out << "B"; } }; ``` add the following overload: ``` void print(const Twople<A, A> auto& p) { cout << "{A, A}: " << std::get<0>(p) << ", " << std::get<1>(p) << endl; } ``` and call it (while all the other overloads are still present) with: ``` print(std::pair{B{}, A{}}); // calls the specific print(Twople<A, A>) ``` Code: <https://godbolt.org/z/3-O1Gz> --- Unfortunately C++20 doesn't allow concept specialization, otherwise we would go even further, with: ``` template<class P> concept Twople<P, Any, Any> = GenericTwople<P>; ``` Which could add a nice possible answer to [this SO question](https://stackoverflow.com/questions/60400537/wildcard-for-c-concepts-saying-accepting-anything-for-this-template-argument), however concept specialization is not allowed. 
--- 1 The actual rules for *Partial Ordering of Constraints* are more complicated, see: [cppreference](https://en.cppreference.com/w/cpp/language/constraints#Partial_ordering_of_constraints) / [C++20 spec](https://eel.is/c++draft/temp.constr#order).
git blame with commit details in emacs From emacs, how can I see the details (e.g. commit message) of the commit that last changed the line at point? I have magit installed.
It is not necessary to use magit for this particular operation - vanilla emacs can do it. (Also, because this uses vc, this should work identically in any version control system that vc supports.) First, use the `vc-annotate` command, which is bound to the key sequence `C-x` `v` `g`. Now, the point should be at the commit id that you are interested in. (If not, you might need to use `C-x` `1` and/or `v` so that you can see which line is which, in order to navigate to the right one.) You might be able to see the diff for that file using `=` now - if you get an error beginning with "cannot diff from any revision prior...", that probably means that this revision created the file (and that's probably a bug in emacs). The author and date are already visible in the *Annotate* buffer that `vc-annotate` produced. To see the commit message, press `l` (the letter, not the number 1) in the *Annotate* buffer. To see what the file looked like at that revision, press `f` in either the *Annotate* buffer or the *vc-change-log* buffer.
Using Code Contracts With Generic Interface With Type Condition I would like to add code contracts to a generic interface using an abstract class, but where the type parameter is validated. Here is an example of what I would like to do: ``` [ContractClass(typeof(ContractsForIRepository<,>))] public interface IRepository<T, in TId> where T : IEntity { T GetById(TId id); T Persist(T entity); void Remove(TId id); } [ContractClassFor(typeof(IRepository<,>))] internal abstract class ContractsForIRepository<T, TId> : IRepository<T, TId> { public T GetById(TId id) { Contract.Requires(id != null); return default(T); } public T Persist(T entity) { Contract.Requires(entity != null); return default(T); } public void Remove(TId id) { Contract.Requires(id != null); } } ``` I can easily get it to work by dropping the condition: - `public interface IRepository<T, in TId> where T : IEntity` - `public interface IRepository<T, in TId>` But I really want to keep this. Is it possible?
Okay, it is actually trivial - specify the same condition on the abstract class! Full example below. ``` [ContractClass(typeof(ContractsForIRepository<,>))] public interface IRepository<T, in TId> where T : IEntity { T GetById(TId id); T Persist(T entity); void Remove(TId id); } [ContractClassFor(typeof(IRepository<,>))] internal abstract class ContractsForIRepository<T, TId> : IRepository<T, TId> where T : IEntity { public T GetById(TId id) { Contract.Requires(id != null); return default(T); } public T Persist(T entity) { Contract.Requires(entity != null); return default(T); } public void Remove(TId id) { Contract.Requires(id != null); } } ```
How to get vuex state from a javascript file (instead of a vue component) I am working with vuex (2.1.1) and get things working within vue single file components. However to avoid too much cruft in my vue single file component I moved some functions to a `utils.js` module which I import into the vue-file. In this `utils.js` I would like to read the vuex state. How can I do that? As it seems approaching the state with getters etc is presuming you are working from within a vue component, or not? I tried to `import state from '../store/modules/myvuexmodule'` and then refer to `state.mystateproperty` but it always gives 'undefined', whereas in the vue-devtools I can see the state property does have proper values. My estimate at this point is that this is simply not 'the way to go' as the state.property value within the js file will not be reactive and thus will not update or something, but maybe someone can confirm/ prove me wrong.
It is possible to access the store as an object in an external js file; I have also added a test to demonstrate the changes in the state. Here is the external js file: ``` import { store } from '../store/store' export function getAuth () { return store.state.authorization.AUTH_STATE } ``` The state module: ``` import * as NameSpace from '../NameSpace' /* Import everything in NameSpace.js as an object. call that object NameSpace. NameSpace exports const strings. */ import { ParseService } from '../../Services/parse' const state = { [NameSpace.AUTH_STATE]: { auth: {}, error: null } } const getters = { [NameSpace.AUTH_GETTER]: state => { return state[NameSpace.AUTH_STATE] } } const mutations = { [NameSpace.AUTH_MUTATION]: (state, payload) => { state[NameSpace.AUTH_STATE] = payload } } const actions = { [NameSpace.ASYNC_AUTH_ACTION]: ({ commit }, payload) => { ParseService.login(payload.username, payload.password) .then((user) => { commit(NameSpace.AUTH_MUTATION, {auth: user, error: null}) }) .catch((error) => { commit(NameSpace.AUTH_MUTATION, {auth: [], error: error}) }) } } export default { state, getters, mutations, actions } ``` The store: ``` import Vue from 'vue' import Vuex from 'vuex' import authorization from './modules/authorization' Vue.use(Vuex) export const store = new Vuex.Store({ modules: { authorization } }) ``` So far all I have done is create a js file which exports a function returning the `AUTH_STATE` property of the `authorization` state module. 
A component for testing: ``` <template lang="html"> <div> <label class="login-label" for="username">Username <input class="login-input-field" type="text" name="username" v-model="username"> </label> <label class="login-label" for="password">Password <input class="login-input-field" type="password" name="password" v-model="password"> </label> <button class="login-submit-btn primary-green-bg" type="button" @click="login(username, password)">Login</button> </div> </template> <script> import { mapActions, mapGetters } from 'vuex' import * as NameSpace from '../../store/NameSpace' import { getAuth } from '../../Services/test' export default { data () { return { username: '', password: '' } }, computed: { ...mapGetters({ authStateObject: NameSpace.AUTH_GETTER }), authState () { return this.authStateObject.auth }, authError () { return this.authStateObject.error } }, watch: { authError () { console.log('watch: ', getAuth()) // ------------------------- [3] }, authState () { if (this.authState.sessionToken) { console.log('watch: ', getAuth()) // ------------------------- [2] } } }, methods: { ...mapActions({ authorize: NameSpace.ASYNC_AUTH_ACTION }), login (username, password) { this.authorize({username, password}) console.log(getAuth()) // ---------------------------[1] } } } </script> ``` On the button click the default state is logged to the console [1]. The action in my case results in an api call, leading to a state change if the username-password combination had a record. On success the `authState` watcher fires [2] and the imported function prints the changes made to the state. Likewise, on a fail case, the watch on `authError` [3] will show the changes made to the state.
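Stripped of the Vue, Vuex, and Parse specifics, the pattern in this answer boils down to importing a module-level singleton and reading its state from any plain function. A dependency-free sketch of the same idea (all names here are illustrative, not taken from the answer):

```javascript
// store.js stand-in: a module-level singleton holding state plus a mutation.
const store = {
  state: {
    authorization: {
      AUTH_STATE: { auth: {}, error: null }
    }
  },
  commit(payload) {
    // mirrors the AUTH_MUTATION above: replace the state slice wholesale
    this.state.authorization.AUTH_STATE = payload
  }
}

// utils.js stand-in: any plain function can read the same singleton.
function getAuth() {
  return store.state.authorization.AUTH_STATE
}

// Simulate a successful login action committing a mutation.
store.commit({ auth: { sessionToken: 'abc123' }, error: null })
console.log(getAuth().auth.sessionToken) // 'abc123'
```

Because both the component and the external file import the very same object, the external function always sees the latest committed state.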
How to display shadow for NSView? I have gone through many threads here and other blogs but not able to solve this issue. I have a added a subview in content view of window. Here is the storyboard-- [![enter image description here](https://i.stack.imgur.com/o6KF6.png)](https://i.stack.imgur.com/o6KF6.png)- I have dragged out outlet of customView to view controller and here is the code for view controller - ``` import Cocoa import QuartzCore class ViewController: NSViewController { @IBOutlet weak var customView: NSView! override func viewDidLoad() { super.viewDidLoad() // Do any additional setup after loading the view. self.view.wantsLayer = true self.customView.wantsLayer = true self.customView.layer?.backgroundColor = NSColor.redColor().CGColor self.customView.layer?.cornerRadius = 5.0 self.customView.layer?.shadowOpacity = 1.0 self.customView.layer?.shadowColor = NSColor.blackColor().CGColor self.customView.layer?.shadowOffset = NSMakeSize(0, -3) self.customView.layer?.shadowRadius = 20 } override var representedObject: AnyObject? { didSet { // Update the view, if already loaded. } } } ``` I have added QuartzCore frame work in my project - [![enter image description here](https://i.stack.imgur.com/sWqTV.png)](https://i.stack.imgur.com/sWqTV.png) But the shadow is not appearing, here is the screen shot - [![enter image description here](https://i.stack.imgur.com/VxOuW.png)](https://i.stack.imgur.com/VxOuW.png). I am not able to solve what appears to be trivial. What am I missing? thanks for your help.
If I add the following line it solves the problem:

```
self.customView.shadow = NSShadow()
```

Final code is:

```
import Cocoa
import QuartzCore

class ViewController: NSViewController {
    @IBOutlet weak var customView: NSView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        self.view.wantsLayer = true
        self.view.superview?.wantsLayer = true
        self.customView.wantsLayer = true
        self.customView.shadow = NSShadow()
        self.customView.layer?.backgroundColor = NSColor.redColor().CGColor
        self.customView.layer?.cornerRadius = 5.0
        self.customView.layer?.shadowOpacity = 1.0
        self.customView.layer?.shadowColor = NSColor.greenColor().CGColor
        self.customView.layer?.shadowOffset = NSMakeSize(0, 0)
        self.customView.layer?.shadowRadius = 20
    }

    override var representedObject: AnyObject? {
        didSet {
            // Update the view, if already loaded.
        }
    }
}
```

I am not able to identify why the explicit `NSShadow` is needed; maybe someone here will point it out.
SQL Union in Separate Columns I made a database to enter all my headache and migraine tracking data into. I'm pulling some queries that show counts of different headache severity by month for a certain year. I have one query that gets all headaches by month, another gets headaches under a certain severity, and the last gets headaches over a certain severity. There are two columns I'm using in the database: HeadacheDate and Severity. I'd like to do a query that would have the following columns as output: ``` Month, Count of All Headaches, Count of Headaches under 6 Severity, Count of Headaches Over 5 Severity ``` I've made a union query that takes 3 queries and gives me the data I want but I just can't figure out how to do a query that will move the data around to give me the column format I want. Here are my union queries: ``` SELECT DateName(month, DateAdd(month, MONTH(HeadacheDate), -1)) AS HeadacheMonth, COUNT(Severity) as SeverityCount FROM Headaches WHERE Severity > 0 AND YEAR(HeadacheDate) = 2013 GROUP BY MONTH(HeadacheDate) UNION SELECT DateName(month, DateAdd(month, MONTH(HeadacheDate), -1)) AS HeadacheMonth, COUNT(Severity) as SeverityCount FROM Headaches WHERE Severity > 0 AND Severity < 6 AND YEAR(HeadacheDate) = 2013 GROUP BY MONTH(HeadacheDate) UNION SELECT DateName(month, DateAdd(month, MONTH(HeadacheDate), -1)) AS HeadacheMonth, COUNT(Severity) as SeverityCount FROM Headaches WHERE Severity > 5 AND YEAR(HeadacheDate) = 2013 GROUP BY MONTH(HeadacheDate); ``` This returns results something like this: ``` April 3 April 11 April 14 August 5 August 10 August 15 December 2 December 11 December 13 July 5 July 6 July 11 June 4 June 10 June 14 March 1 March 2 March 3 May 5 May 8 May 13 November 1 November 13 November 14 October 4 October 9 October 13 September 4 September 10 September 14 ``` What I want is this: ``` Month, Count of All Headaches, Count of Headaches under 6 Severity, Count of Headaches Over 5 Severity January, 20, 15, 5 February, 18, 13, 5 ``` 
and so on. I'd also like to include months where one of the count fields could be zero.
You can use conditional aggregation: ``` SELECT [HeadacheMonth] = DATENAME(month, DateAdd(month , MONTH(HeadacheDate), -1)) ,[SeverityCountTotal] = COUNT(CASE WHEN Severity > 0 THEN 1 END) ,[SeverityCount_1_5] = COUNT(CASE WHEN Severity > 0 AND Severity < 6 THEN 1 END) ,[SeverityCount_6] = COUNT(CASE WHEN Severity > 5 THEN 1 END) FROM Headaches WHERE YEAR(HeadacheDate) = 2013 GROUP BY MONTH(HeadacheDate); ``` `YEAR(HeadacheDate) = 2013` is not SARGable, so if an index exists on that column the query optimizer will skip it. You could consider using: ``` HeadacheDate >= '2013-01-01T00:00:00' AND HeadacheDate < '2014-01-01T00:00:00' ```
Is it possible to remove an app's results from Modern UI search on the start screen? How can I remove all search items (for example emails) from a particular pre-installed Modern UI app when searching by typing on the start screen?
Open the Windows-8 Charm Bar by moving the mouse pointer to the top-right/lower-right corner of the screen and click on **Settings**. Alternatively, you can directly press the `Windows`+`I` hotkey to open the `Settings` sidebar. Here click on the `Change PC Settings` link at the bottom to open the Windows-8 Modern UI settings. ![enter image description here](https://i.stack.imgur.com/zShGF.png) Navigate to **Search** settings in `PC settings` and click on the **Delete history** button to delete all the searches you have performed. To disable the search history tracker, toggle the `Let Windows save my searches as future search suggestions` setting from on to off. When you disable the history tracker, the Delete history button will be disabled, as all your history will be deleted automatically. If you want to exclude a particular app from Windows-8 search, you can switch it off here as well, which prevents the app from showing results on the start screen.
Best possible way to get device Id in Android I just wonder what would be the best way to get the device ID. I have 2 options: either I use the code below, which needs the run time permission `android.permission.READ_PHONE_STATE`: ``` private String android_id = Secure.getString(getContext().getContentResolver(), Secure.ANDROID_ID); ``` Or secondly I have the option of a UUID, which will not need any run time permission but needs to be mapped to user data (like email or phone) to identify the user. ``` String uniqueID = UUID.randomUUID().toString(); ``` Any suggestions on which one to use, and why, would be appreciated. Thanks
**ANDROID ID:** On a device's first boot, a random value is generated and stored. This value is available via Settings.Secure.ANDROID\_ID. It’s a 64-bit number that should remain constant for the lifetime of a device. ANDROID\_ID seems a good choice for a unique device identifier because it’s available for smartphones and tablets. *Issues:* However, the value may change if a factory reset is performed on the device. There is also a known bug with a popular handset from one manufacturer where every instance has the same ANDROID\_ID. Clearly, the solution is not 100% reliable. **UUID:** As the requirement for most applications is to identify a particular installation and not a physical device, a good solution to get a unique id for a user is to use the UUID class. *Issues:* The UUID.randomUUID() method generates a unique identifier for a specific installation. You just have to store that value and your user will be identified at the next launch of your application. You can also try to associate this solution with the Android Backup service to keep the information available for the user even if they install your application on another device. **CONCLUSION:** Identifying a particular device on Android is not an easy thing. There are many good reasons to avoid that. The best solution is probably to identify a particular installation by using the UUID solution. However, if you absolutely want to identify a particular physical device, you can try to use the ANDROID\_ID solution. It is not 100% reliable, but it is better than the alternatives.
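The UUID recommendation above hinges on persisting the generated value, since `UUID.randomUUID()` returns something new on every call. A minimal sketch of the lazy create-then-reuse pattern (plain Java; the in-memory map is a stand-in for Android's `SharedPreferences`, and all names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

class InstallationId {
    private static final String KEY = "installation_id";
    // Stand-in for SharedPreferences; on Android you would persist this
    // via context.getSharedPreferences(...) so it survives restarts.
    private static final Map<String, String> prefs = new HashMap<>();

    // Generates the identifier once, then returns the same value forever after.
    static synchronized String get() {
        String id = prefs.get(KEY);
        if (id == null) {
            id = UUID.randomUUID().toString();
            prefs.put(KEY, id);
        }
        return id;
    }
}
```

Note that even when persisted, this identifier is lost on an uninstall/reinstall unless you pair it with the Backup service mentioned above.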
Why is catching an exception non-pure, but throwing an exception is pure? In Haskell, you can throw an exception from purely functional code, but you can only catch in IO code. - Why? - Can you catch in other contexts or only the IO monad? - How do other purely functional languages handle it?
Because *throwing* an exception inside a function doesn't make that function's result dependent on anything but the argument values and the definition of the function; the function remains pure. OTOH *catching* an exception inside a function *does* (or at least can) make that function no longer a pure function. I'm going to look at two kinds of exceptions. The first is nondeterministic; such exceptions arise unpredictably at runtime, and include things like out of memory errors. The existence of these exceptions is not included in the *meaning* of the functions that might generate them. They're just an unpleasant fact of life we have to deal with because we have actual physical machines in the real world, which don't always match up to the abstractions we're using to help us program them. If a function throws such an exception, it means that that one particular attempt to evaluate the function failed to produce a value. It doesn't necessarily mean that the function's result is undefined (on the arguments it was invoked on this time), but the system was unable to produce the result. If you could catch such an exception within a pure caller, you could do things like have a function that returns one (non-bottom) value when a sub-computation completes successfully, and another when it runs out of memory. This doesn't make sense as a pure function; the value computed by a function call should be uniquely determined by the values of its arguments and the definition of the function. Being able to return something different depending on whether the sub-computation ran out of memory makes the return value dependent on something else (how much memory is available on the physical machine, what other programs are running, the operating system and its policies, etc.); by definition a function which can behave this way is not pure and can't (normally) exist in Haskell. 
Because of purely operational failures, we do have to allow that evaluating a function may produce bottom instead of the value it "should" have produced. That doesn't completely ruin our semantic interpretation of Haskell programs, because we know the bottom will cause all the callers to produce bottom as well (unless they didn't need the value that was supposed to be computed, but in that case non-strict evaluation implies that the system never would have tried to evaluate this function and failed). That sounds bad, but when we place our computation inside the `IO` monad then we can safely catch such exceptions. Values in the `IO` monad *are* allowed to depend on things "outside" the program; in fact they can change their value dependent on anything in the world (this is why one common interpretation of `IO` values is that they are as if they were passed a representation of the entire universe). So it's perfectly okay for an `IO` value to have one result if a pure sub-computation runs out of memory and another result if it doesn't. --- But what about *deterministic* exceptions? Here I'm talking about exceptions that are *always* thrown when evaluating a particular function on a particular set of arguments. Such exceptions include divide-by-zero errors, as well as any exception explicitly thrown from a pure function (since its result can only depend on its arguments and its definition, if it evaluates to a throw once it will *always* evaluate to the same throw for the same arguments[1]). It might seem like this class of exceptions *should* be catchable in pure code. After all, the value of `1 / 0` just **is** a divide-by-zero error. If a function can have a different result depending on whether a sub-computation evaluates to a divide-by-zero error by checking whether it's passing in a zero, why can't it do this by checking whether the result is a divide-by-zero error? Here we get back to the point larsmans made in a comment. 
If a pure function can observe *which* exception it gets from `throw ex1 + throw ex2`, then its result becomes dependent on the order of execution. But that's up to the runtime system, and it could conceivably even change between two different executions of the same system. Maybe we've got some advanced auto-parallelising implementation which tries different parallelisation strategies on each execution in order to try to converge on the best strategy over multiple runs. This would make the result of the exception-catching function depend on the strategy being used, the number of CPUs in the machine, the load on the machine, the operating system and its scheduling policies, etc. Again, the *definition* of a pure function is that only information which comes into a function through its arguments (and its definition) should affect its result. In the case of non-`IO` functions, the information affecting which exception gets thrown doesn't come into the function through its arguments or definition, so it can't have an effect on the result. But computations in the `IO` monad implicitly are allowed to depend on any detail of the entire universe, so catching such exceptions is fine there. --- As for your second dot point: no, other monads wouldn't work for catching exceptions. All the same arguments apply; computations producing `Maybe x` or `[y]` aren't supposed to depend on anything outside their arguments, and catching any kind of exception "leaks" all sorts of details about things which aren't included in those function arguments. Remember, there's nothing particularly special about monads. They don't work any differently than other parts of Haskell. The monad typeclass is defined in ordinary Haskell code, as are almost all monad implementations. *All* the same rules that apply to ordinary Haskell code apply to all monads. It's `IO` itself that is special, not the fact that it's a monad. 
--- As for how other pure languages handle exception catching, the only other language with *enforced* purity that I have experience with is Mercury.[2] Mercury does it a little differently from Haskell, and you *can* catch exceptions in pure code. Mercury is a logic programming language, so rather than being built on functions, Mercury programs are built from *predicates*; a call to a predicate can have zero, one, or more solutions (if you're familiar with programming in the list monad to get nondeterminism, it's a little bit like the entire language is in the list monad). Operationally, Mercury execution uses backtracking to recursively enumerate all possible solutions to a predicate, but the semantics of a nondeterministic predicate is that it simply *has* a set of solutions for each set of its input arguments, as opposed to a Haskell function which calculates a single result value for each set of its input arguments. Like Haskell, Mercury is pure (including I/O, though it uses a slightly different mechanism), so each call to a predicate must uniquely determine a single *solution set*, which depends only on the arguments and the definition of the predicate. Mercury tracks the "determinism" of each predicate. Predicates which always result in exactly one solution are called `det` (short for deterministic). Those which generate *at least* one solution are called `multi`. There are a few other determinism classes as well, but they're not relevant here. Catching an exception with a `try` block (or by explicitly calling the higher-order predicates which implement it) has determinism `cc_multi`. The cc stands for "committed choice". It means "this computation has at least one solution, and operationally the program is only going to get one of them". 
This is because running the sub-computation and seeing whether it produced an exception has a solution set which is the union of the sub-computation's "normal" solutions plus the set of all possible exceptions it could throw. Since "all possible exceptions" includes every possible runtime failure, most of which will never actually happen, this solution set can't be fully realised. There's no possible way the execution engine could actually backtrack through every possible solution to the `try` block, so instead it just gives you **a** solution (either a normal one, or an indication that all possibilities were explored and there was no solution or exception, or the first exception that happened to arise). Because the compiler keeps track of the determinism, it will not allow you to call `try` in a context where the complete solution set matters. You can't use it to generate all solutions which don't encounter an exception, for example, because the compiler will complain that it needs all solutions to a `cc_multi` call, which is only going to produce one. However you also can't call it from a `det` predicate, because the compiler will complain that a `det` predicate (which is supposed to have exactly one solution) is making a `cc_multi` call, which will have multiple solutions (we're just only going to know what one of them is). So how on earth is this useful? Well, you can have `main` (and other things it calls, if that's useful) declared as `cc_multi`, and they can call `try` with no problems. This means that the *entire program* has multiple "solutions" in theory, but running it will generate **a** solution. This allows you to write a program that behaves differently when it happens to run out of memory at some point. 
But it doesn't spoil the declarative semantics because the "real" result it would have computed with more memory available is still *in* the solution set (just as the out-of-memory exception is still in the solution set when the program actually does compute a value), it's just that we only end up with one arbitrary solution. It's important that `det` (there is exactly one solution) is treated differently from `cc_multi` (there are multiple solutions, but you can only have one of them). Similarly to the reasoning about catching exceptions in Haskell, exception catching can't be allowed to happen in a non-"committed choice" context, or you could get pure predicates producing different solution sets depending on information from the real world that they shouldn't have access to. The `cc_multi` determinism of `try` allows us to write programs *as if* they produced an infinite solution set (mostly full of minor variants of unlikely exceptions), and prevents us from writing programs that actually need more than one solution from the set.[3] --- [1] Unless evaluating it encounters a nondeterministic error first. Real life's a pain. [2] Languages which merely encourage the programmer to use purity without enforcing it (such as Scala) tend to just let you catch exceptions wherever you want, same as they allow you to do I/O wherever you want. [3] Note that the "committed choice" concept is not how Mercury handles pure I/O. For that, Mercury uses unique types, which is orthogonal to the "committed choice" determinism class.
how to prevent blur() running when clicking a link in jQuery? I have: ``` <input type="text" /> ``` and ``` $('input').blur(function(){ alert('stay focused!'); }); ``` I want to prevent the blur function running when I'm "blurring" by clicking on an anchor element. I.e. if I tab to another input, click somewhere on the page, etc. I want the blur to fire, but if I click a link, I don't want it to fire. Is this easily achievable, or do I need to hack about with delegates and semaphores? Thanks
I had to solve this problem myself today, too. I found that the mousedown event fires *before* the blur event, so all you need to do is set a variable that indicates that a mousedown event occurred first, and then manage your blur event appropriately if so. ``` var mousedownHappened = false; $('input').blur(function() { if(mousedownHappened) // cancel the blur event { alert('stay focused!'); $('input').focus(); mousedownHappened = false; } else // blur event is okay { // Do stuff... } }); $('a').mousedown(function() { mousedownHappened = true; }); ``` Hope this helps you!!
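A newer alternative to the flag (my addition, not part of the original answer): the blur event's `relatedTarget` property holds the element that is receiving focus, so the handler itself can check whether focus went to a link. Browser behavior varies here (notably Safari does not focus plain links on click), so test before relying on it. A small, DOM-free sketch of the decision logic:

```javascript
// Decide whether a blur should be treated as "focus moved to a link".
// `event.relatedTarget` is the element gaining focus; it may be null,
// e.g. when clicking empty page space.
function blurWentToLink(event) {
  const target = event.relatedTarget;
  return !!target && target.tagName === 'A';
}

// With jQuery this could be wired up roughly like:
// $('input').on('blur', function (e) {
//   if (blurWentToLink(e.originalEvent)) return; // skip the alert for links
//   alert('stay focused!');
// });

console.log(blurWentToLink({ relatedTarget: { tagName: 'A' } }));   // true
console.log(blurWentToLink({ relatedTarget: { tagName: 'DIV' } })); // false
console.log(blurWentToLink({ relatedTarget: null }));               // false
```

The upside over the mousedown flag is that there is no shared mutable state to reset; the downside is the browser-support caveat above.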
Disable scrolling for listview and enable for whole layout Hi, I am currently working on an Android application; it has two list views inside the main activity. What I want is to disable the scrolling of the two lists and allow only the whole page to scroll. Is there any way to do that? Please do help.

my code:

```
package com.example.listviewdemo;

import android.app.Activity;
import android.os.Bundle;
import android.view.Menu;
import android.view.MenuItem;
import android.view.MotionEvent;
import android.view.View;
import android.widget.AdapterView;
import android.widget.ArrayAdapter;
import android.widget.ListView;
import android.widget.Toast;

public class MainActivity extends Activity {

    ListView list, list2;
    String[] name = {"Happy", "always", "try", "hard", "you will", "get it!", "Believe", "in", "God", "everything", "will", "work well!", "Believe", "in", "God", "everything", "will", "work well!"};
    String[] name2 = {"Believe", "in", "God", "everything", "will", "work well!", "Believe", "in", "God", "everything", "will", "work well!"};

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        list = (ListView) findViewById(R.id.listview);
        list2 = (ListView) findViewById(R.id.listview2);
        list.setAdapter(new ArrayAdapter<String>(MainActivity.this, android.R.layout.simple_list_item_1, name));
        list2.setAdapter(new ArrayAdapter<String>(MainActivity.this, android.R.layout.simple_list_item_1, name2));

        list.setOnItemClickListener(new AdapterView.OnItemClickListener() {
            @Override
            public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
                Toast.makeText(getBaseContext(), name[position], Toast.LENGTH_SHORT).show();
            }
        });

        list2.setOnItemClickListener(new AdapterView.OnItemClickListener() {
            @Override
            public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
                Toast.makeText(getBaseContext(), name2[position], Toast.LENGTH_SHORT).show();
            }
        });
    }
}
```

my xml code is:

```
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/LinearLayout1"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical"
    tools:context="${relativePackage}.${activityClass}" >

    <TextView
        android:id="@+id/text_id1"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/str1" />

    <ListView
        android:id="@+id/listview"
        android:layout_width="match_parent"
        android:layout_height="wrap_content" />

    <TextView
        android:id="@+id/text_id2"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/str2" />

    <ListView
        android:id="@+id/listview2"
        android:layout_width="match_parent"
        android:layout_height="wrap_content" />
</LinearLayout>
```
You can try this. For the XML part, put your entire layout under one ScrollView, for example:

```
<ScrollView
    android:id="@+id/scrollViewId"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:fillViewport="true" >

    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:orientation="vertical" >

        <!-- YOUR FIRST LIST VIEW -->
        <ListView
            android:id="@+id/list"
            android:layout_width="match_parent"
            android:layout_height="wrap_content" />

        <!-- YOUR SECOND LIST VIEW -->
        <ListView
            android:id="@+id/list2"
            android:layout_width="match_parent"
            android:layout_height="wrap_content" />

        <!-- Add your other views as per requirement... -->
    </LinearLayout>
</ScrollView>
```

Now in the Java class, just add this custom method to your code after setting the adapter on the list view:

```
setListViewHeightBasedOnChildren(listview)
```

For example:

```
list = (ListView) findViewById(R.id.listview);
list.setAdapter(new ArrayAdapter<String>(MainActivity.this, android.R.layout.simple_list_item_1, name));
setListViewHeightBasedOnChildren(list);
```

Do the same for the second list view too. 
Here is the body of the `setListViewHeightBasedOnChildren` method: ``` public static void setListViewHeightBasedOnChildren(ListView listView) { ListAdapter listAdapter = listView.getAdapter(); if (listAdapter == null) return; int desiredWidth = MeasureSpec.makeMeasureSpec(listView.getWidth(), MeasureSpec.UNSPECIFIED); int totalHeight = 0; View view = null; for (int i = 0; i < listAdapter.getCount(); i++) { view = listAdapter.getView(i, view, listView); if (i == 0) view.setLayoutParams(new ViewGroup.LayoutParams(desiredWidth, LayoutParams.MATCH_PARENT)); view.measure(desiredWidth, MeasureSpec.UNSPECIFIED); totalHeight += view.getMeasuredHeight(); } ViewGroup.LayoutParams params = listView.getLayoutParams(); params.height = totalHeight + ((listView.getDividerHeight()) * (listAdapter.getCount())); listView.setLayoutParams(params); listView.requestLayout(); } ``` Hope it works for you.
Can files uploaded to Amazon S3 get auto-deleted after a few days?

The Amazon S3 API has added Object Expiration, which deletes all the files uploaded **within a folder** after a few days. Is it possible to do the same for each file, counting from the day it was uploaded? For example, when I upload foo.png, delete that file after X days, not all the files within the folder.
Your file path is not more than a *prefix* in S3. So, if you have a structure as follows: ``` / | +--folder1 | +--folder2 | +--folder3 | | | +--foo.png | | | +--foo2.png | +--bar.png ``` And you want your rule to apply only to foo.png, set it to "folder1/folder3/foo.png" (there will be only one file matching the "entire-name" prefix in your bucket). But be aware of the limits regarding number of rules. From [Object Expiration docs](http://docs.amazonwebservices.com/AmazonS3/latest/dev/ObjectExpiration.html): *To set an object’s expiration, you add a lifecycle configuration to your bucket, which describes the lifetime of various objects in your bucket. A lifecycle configuration can have up to 100 rules. Each rule identifies an object prefix and a lifetime for objects that begin with this prefix. The lifetime is the number of days since creation when you want the object removed.*
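For reference, the lifecycle configuration for the example above might look like the following (the rule ID and the number of days are illustrative); it is uploaded with a PUT Bucket lifecycle request or set through the AWS console:

```xml
<LifecycleConfiguration>
  <Rule>
    <ID>expire-foo-png</ID>
    <!-- matches exactly one object in the layout above -->
    <Prefix>folder1/folder3/foo.png</Prefix>
    <Status>Enabled</Status>
    <Expiration>
      <!-- delete the object this many days after creation -->
      <Days>10</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
```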
Use Java HashMap with multiple types

I'm using a HashMap as my in-memory cache. So basically, this is what I have:

```
private static final Map<String, Object> lookup = new HashMap<String, Object>();

public static Object get(CacheHelper key) {
    return lookup.get(key.getId());
}

public static void store(CacheHelper key, Object value) {
    lookup.put(key.getId(), value);
}
```

That's fine, but for every Object I "get" from the Map, I have to cast, which is very ugly. I want to put ArrayList and many other different things into it. Does anybody know another solution that is typesafe? (For sure, I can create a getter and setter for every type, but is that the only solution?) So, is there a better way to create an in-memory cache, or does somebody have an idea of how to wrap the HashMap to be more typesafe?
One solution to this problem is to make your CacheHelper type generic, with `CacheHelper<T>`. Then create a wrapper for your map: ``` class MyCache { private final Map<CacheHelper<?>, Object> backingMap = new HashMap<>(); public <T> void put(CacheHelper<T> key, T value) { backingMap.put(key, value); } @SuppressWarnings("unchecked") // as long as all entries are put in via put, the cast is safe public <T> T get(CacheHelper<T> key) { return (T) backingMap.get(key); } } ``` The Java compiler actually uses this approach internally; see e.g. [here](http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b27/com/sun/tools/javac/util/Context.java#Context.get%28com.sun.tools.javac.util.Context.Key%29). You don't have to pass around explicit `Class` objects, but you do have to know what type is *actually* associated with each key, which is as it should be in well-behaved applications.
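For illustration, here is a minimal self-contained sketch of the whole approach. The `CacheHelper` implementation below is an assumption (the question never shows one), reduced to an id with `equals`/`hashCode`; the key names and values are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

public class CacheDemo {

    // A typed key: the type parameter T records what kind of value this
    // key is associated with; it is never used at runtime (assumed shape).
    static final class CacheHelper<T> {
        private final String id;
        CacheHelper(String id) { this.id = id; }
        @Override public boolean equals(Object o) {
            return o instanceof CacheHelper && ((CacheHelper<?>) o).id.equals(id);
        }
        @Override public int hashCode() { return id.hashCode(); }
    }

    static final class MyCache {
        private final Map<CacheHelper<?>, Object> backingMap = new HashMap<>();

        <T> void put(CacheHelper<T> key, T value) { backingMap.put(key, value); }

        @SuppressWarnings("unchecked") // safe as long as entries only enter via put()
        <T> T get(CacheHelper<T> key) { return (T) backingMap.get(key); }
    }

    public static void main(String[] args) {
        CacheHelper<String> name = new CacheHelper<>("name");
        CacheHelper<Integer> hits = new CacheHelper<>("hits");

        MyCache cache = new MyCache();
        cache.put(name, "alice");
        cache.put(hits, 42);

        String s = cache.get(name);   // no cast at the call site
        int n = cache.get(hits);
        System.out.println(s + " " + n);
    }
}
```

The compiler rejects mismatches such as `cache.put(hits, "oops")`, which is exactly the type safety the question asks for.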
Does "Rename Symbol" work across files when editing JavaScript? Does the "Rename Symbol" feature work across files, when editing JavaScript? Currently it's only working within the current file being edited. I seem to recall it working across files, though. It would open all the files that it had made changes to. It's no longer doing that, though. Am I remembering incorrectly or does it work this way?
The feature `Rename Symbol` only works for the current file. But there is another feature, which fits your description: `Replace in Files` (Menu Bar: `Edit` > `Replace in Files`). From [Visual Studio Code User Guide](https://code.visualstudio.com/docs/editor/codebasics#_search-across-files): > > You can also Search and Replace across files. Expand the Search widget > to display the Replace text box. > > > When you type text into the Replace text box, you will see a diff > display of the pending changes. You can replace across all files from > the Replace text box, replace all in one file or replace a single > change. > > > For a quick use you can select a word in your source and hit `Ctrl`+`Shift`+`H`.
Javascript compare timestamps

I am trying to sort a two-dimensional array by a timestamp column, in descending order. This column (index 11) is in the format `'yyyy-MM-dd HH:mm:ss'`. I have tried multiple things. According to the topics I've read, this code should work:

```
List.sort(function(x, y){
    return Date.parse(y[11]) - Date.parse(x[11]);
});
```

Thank you in advance!
That will work on up-to-date browsers that support the only-recently-defined input format for `Date.parse` (prior to ES5, it was just "parse whatever `Date#toString` spits out"). Although never spec'd, older browsers will support it with `/` rather than `-` in the date, so: ``` List.sort(function(x, y){ return Date.parse(y[11].replace(/-/g, '/')) - Date.parse(x[11].replace(/-/g, '/')); }); ``` Always test on your target browsers, of course, because again this was never specified. For example, on IE8 and earlier: ``` display(Date.parse("2012-06-01 14:22:17")); ``` ...is `NaN`, but: ``` display(Date.parse("2012/06/01 14:22:17")); ``` ...is `1338556937000`.
Modify list of strings to only have max n-length strings (use of Linq) Suppose we have a list of strings like below: ``` List<string> myList = new List<string>(){"one", "two", "three", "four"}; ``` There are some items with the length of more than 3. By the help of Linq I want to divide them into new items in the list, so the new list will have these items: ``` {"one", "two", "thr", "ee", "fou", "r"}; ``` Don't ask me why Linq. I am a Linq fan and don't like unLinq code :D
For real code a basic `for` loop would likely be better (i.e. as shown in [other answer](https://stackoverflow.com/a/34364197/477420). If you really need LINQ, split each string into 3-letter chunks and then merge all of them with `SelectMany`:

```
var list = new[] { "", "a", "abc", "dee", "eff", "aa", "rewqs" };
var result = list
    .Select(s =>
        Enumerable.Range(0, s.Length / 3 + (s.Length == 0 || (s.Length % 3 > 0) ? 1 : 0))
            .Select(i => s.Substring(i * 3, Math.Min(s.Length - i * 3, 3))))
    .SelectMany(x => x);
```

`Range` creates an enumerable for all segments of the string (which is either length/3 if all pieces are exactly 3 characters, or one more if the last one is shorter than 3 characters).

`.Select(i => s.Substring...` splits the string into chunks of 3 or fewer characters (the length needs to be adjusted carefully to avoid an index-out-of-range error).

`.SelectMany` flattens the list of lists of 3-character segments into one flat list of segments.

---

Note: this LINQ code should be used for entertainment/learning purposes. If you must use a similar LINQ solution in production code, at least move the splitting of the string into a more readable helper function.
OpenCV 4.0.0 SystemError: returned a result with an error set Hello I am trying to create a facial recognition program but I have a peculiar error: here is my code: ``` import cv2 as cv gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY) face_cascade = cv.CascadeClassifier("lbpcascade_frontalface.xml") faces = face_cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5); ``` and this error is the output ``` SystemError: <class 'cv2.CascadeClassifier'> returned a result with an error set ``` I have "lbpcascade\_frontalface.xml" in the working directory so that shouldn't be an issue if it helps when I enter ``` cv.__version__ ``` I get ``` '4.0.0' ```
**New Answer**

OpenCV seems to now have a directory dedicated to cascades; they are placed in `data`, and I am seeing something like this floating around in tutorials now:

```
haar_cascade_face = cv.CascadeClassifier('data/haarcascade/haarcascade_frontalface_default.xml')
```

You may have to find where `data` is on your machine, or the above may work. I have not touched this project since I finished it in early 2019. Bear in mind this only works for frontal faces; if you want to use Haar's cascade for eyes, that is a separate file.

**Old answer**

Turns out I didn't need to download another file and use it, because OpenCV comes with it. This little bit of code worked:

```
cv.CascadeClassifier(cv.data.haarcascades + "haarcascade_frontalface_default.xml")
```
OpenMP nested for, unequal num. of iterations

I am using OpenMP to parallelize loops. In the normal case, one would use:

```
#pragma omp for schedule(static, N_CHUNK)
for(int i = 0; i < N; i++) {
    // ...
}
```

For nested loops, I can put the `pragma` on the inner or outer loop:

```
#pragma omp for schedule(static, N_CHUNK) // can be here...
for(int i = 0; i < N; i++) {
    #pragma omp for schedule(static, N_CHUNK) // or here...
    for(int k = 0; k < N; k++) {
        // both loops have a constant number of iterations
        // ...
    }
}
```

But! I have two loops, where the number of iterations in the 2nd loop depends on the 1st loop:

```
for(int i = 0; i < N; i++) {
    for(int k = i; k < N; k++) {
        // k starts from i, not from 0...
    }
}
```

What is the best way to balance CPU usage for this kind of loop?
As always:

- it depends; profile.
- In this case: see also the [OMP\_NESTED environment variable](http://publib.boulder.ibm.com/infocenter/comphelp/v101v121/index.jsp?topic=/com.ibm.xlf121.aix.doc/proguide/ompnest.html)

The things that are going to make the difference here are not being shown:

- (non)linear memory addressing (also watch the order of the loops)
- use of shared variables

As to your last scenario:

```
for(int i = 0; i < N; i++) {
    for(int k = i; k < N; k++) {
        // k starts from i, not from 0...
    }
}
```

I suggest *parallelizing the outer loop* for the following reasons:

- all other things being equal, coarse-grained parallelizing usually leads to better performance due to
  - increased cache locality
  - reduced frequency of locking required

  (note that this hinges on *assumptions* about the loop contents that I can't really make; I'm basing it on my experience of /usual/ parallelized code)
- the inner loop might become so short as to be inefficient to parallelize (IOW: the outer loop's range is predictable, the inner loop less so, or doesn't lend itself to static scheduling as well)
How to mock events with internal constructors I have a service responsible for subscribing to EWS for new mail notification. I've created an interface for the service in order to mock it and test a dummy implementation. However, I'm running into a wall whenever I try to manually tell what my events are supposed to do. Here is my concrete implementation. ``` public interface IExchangeService { void Subscribe(); } public class ExchangeServiceSubscriber : IExchangeService { private readonly ExchangeService _exchangeService; private readonly IConsumer<IEmail> _consumer; public ExchangeServiceSubscriber( ExchangeService exchangeService, IConsumer<IEmail> consumer) { _exchangeService = exchangeService; _consumer = consumer; } public void Subscribe() { // code to subscribe streamingConnection.OnNotificationEvent += OnEvent; streamingConnection.Open(); } public void OnEvent(object sender, NotificationEventArgs args) { foreach (NotificationEvent triggeredEvent in args.Events) { if (triggeredEvent is ItemEvent) { var propertySet = new PropertySet(ItemSchema.UniqueBody, ItemSchema.Attachments) { RequestedBodyType = BodyType.Text }; EmailMessage email = EmailMessage.Bind(args.Subscription.Service, ((ItemEvent)triggeredEvent).ItemId, propertySet); _consumer.Consume(new ExchangeEmail { Body = email.UniqueBody }); } } } } ``` And unfortunatly, almost every class in EWS is either sealed or has an internal constructor which really limits how I decouple them, it seems. I've attempted to set the expectation for `NotificationEventArgs` (for example) but it uses an internal constructor. Here is some ideas I've been fiddling with. You can read about mocking events [here](http://code.google.com/p/moq/wiki/QuickStart#Events). `mock.Setup(x => x.OnEvent(new object(), new NotificationEventArgs()));` Issue with that is `NotificationEventArgs` uses an internal constructor. I could see getting this working with some sort of wrapper but I'm not exactly sure what it would look like. 
One of the big problems is the way EWS is made pretty much prevents anyone from manually injecting dependencies. I'm essentially trying to test that whenever event `OnEvent` fires that the email will actually get consumed. Also, while I would like to test this functionality I'm not sure it's worth fighting EWS every step of the way.
Let's first see what you can't do:

- You can't subclass `NotificationEventArgs` because the ctor is internal.
- You can't create an instance directly, for the same reason.

So basically, you can't create an instance of this class the "normal way". I assume you already checked for a factory method or class? This leaves us with only one option: instantiate the class using reflection, e.g. with the help of the `Activator.CreateInstance` method: [Unit testing exception handling for third party exceptions with internal constructors](https://stackoverflow.com/questions/5498162/unit-testing-exception-handling-for-third-party-exceptions-with-internal-construc/5498261#5498261), like so:

```
mock.Setup(x => x.OnEvent(new object(),
    (NotificationEventArgs)Activator.CreateInstance(
        typeof(NotificationEventArgs),
        BindingFlags.NonPublic | BindingFlags.Instance,
        null, null, null)));
```

(The cast is needed because `Activator.CreateInstance` returns `object`.)
Deploy a GitHub branch automatically to AWS Elastic Beanstalk Say I have a branch `stable` on GitHub that I want to automatically deploy to my AWS EB instances when there is a commit. I have looked at `CodePipeline`, which is not available in the region I am hosting my instances. I have also looked at `CodeDeploy` but this appears to only be for a single instance of EC2 and not for `Elastic Beanstalk`. Please correct me if I am wrong. All help is appreciated. I do not want to have to manually deploy every time a commit is made.
For anyone who stumbles on this: yes, it is possible.

1. You need to create an application on Elastic Beanstalk (this application should be based on the runtime environment of your code, for example nodejs for an application built with nodejs, php for a php application).
2. Make sure to select the sample application provided by AWS when creating the Beanstalk application.
3. This spins up an environment that runs that sample application (feel free to configure the environment the way you want).
4. Launch the application and then create a new pipeline, but be sure to select GitHub as your source in the pipeline (this enables your branch from GitHub to be selected as the source).
5. In the Deploy stage of the pipeline, be sure to select Elastic Beanstalk for deployment.

A well-detailed explanation can be found here: <https://medium.com/@faithfulanere/how-to-create-cicd-using-github-as-source-and-elastic-beanstalk-244319a2a350>
Mockito Mock a static void method with Mockito.mockStatic()

I'm using Spring Boot, and in one of my unit tests I need to mock the `Files.delete(somePath)` function, which is a static void method. I know that with Mockito it is possible to mock a void method:

```
doNothing().when(MyClass.class).myVoidMethod()
```

And since July 10th 2020, it is possible to mock a static method:

```
try (MockedStatic<MyStaticClass> mockedStaticClass = Mockito.mockStatic(MyStaticClass.class)) {
    mockedStaticClass.when(MyStaticClass::giveMeANumber).thenReturn(1L);
    assertThat(MyStaticClass.giveMeANumber()).isEqualTo(1L);
}
```

But I can't manage to mock a static void method such as `Files.delete(somePath)`. This is my pom.xml file (only test-related dependencies):

```
<dependency>
    <groupId>org.mockito</groupId>
    <artifactId>mockito-inline</artifactId>
    <version>3.5.15</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.mockito</groupId>
    <artifactId>mockito-core</artifactId>
    <version>3.5.15</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.mockito</groupId>
    <artifactId>mockito-junit-jupiter</artifactId>
    <version>3.5.15</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
    <version>2.2.6.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-test</artifactId>
    <scope>test</scope>
</dependency>
```

Is there a way to mock static void methods without using PowerMockito? If it is possible, what is the correct syntax to do so?
In general mocking static calls is the last resort, that is not supposed to be used as default approach. For example, for testing the code, that works with file system, there are better means. E.g. depending on the `junit` version either use [`TemporaryFolder` rule](https://junit.org/junit4/javadoc/4.13/org/junit/rules/TemporaryFolder.html) or [`@TempDir` annotation](https://junit.org/junit5/docs/5.4.0/api/org/junit/jupiter/api/io/TempDir.html). Also, please note, that `Mockito.mockStatic` might significantly slow down your tests (e.g. look at the notes below). Having said the caution above, find the snippet below, that shows how to test, that file got removed. ``` class FileRemover { public static void deleteFile(Path filePath) throws IOException { Files.delete(filePath); } } class FileRemoverTest { @TempDir Path directory; @Test void fileIsRemovedWithTemporaryDirectory() throws IOException { Path fileToDelete = directory.resolve("fileToDelete"); Files.createFile(fileToDelete); FileRemover.deleteFile(fileToDelete); assertFalse(Files.exists(fileToDelete)); } @Test void fileIsRemovedWithMockStatic() throws IOException { Path fileToDelete = Paths.get("fileToDelete"); try (MockedStatic<Files> removerMock = Mockito.mockStatic(Files.class)) { removerMock.when(() -> Files.delete(fileToDelete)).thenAnswer((Answer<Void>) invocation -> null); // alternatively // removerMock.when(() -> Files.delete(fileToDelete)).thenAnswer(Answers.RETURNS_DEFAULTS); FileRemover.deleteFile(fileToDelete); removerMock.verify(() -> Files.delete(fileToDelete)); } } } ``` **Notes:** 1. `Mockito.mockStatic` is available in Mockito 3.4 and above, so check you're using correct version. 2. The snippet deliberatly shows two approaches: `@TempDir` and `Mockito.mockStatic`. When run both tests you'll notice that `Mockito.mockStatic` is much slower. E.g. on my system test with `Mockito.mockStatic` runs around 900 msec vs 10 msec for `@TempDir`.
simple javascript search() function is not working I am trying to find some string part in another string. And I found a function called `search()` and tried this: ``` if("http://www.google.de".search("http://") > 0){ alert('with http'); } else { alert('no http'); } ``` but it is giving me `no http` even if it has `http://` part in it. here is the fiddle: <http://jsfiddle.net/xXTuY/2/> can you please help me out?
First, [`String#search`](https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Global_Objects/String/search) expects a regex, not a string. If it encounters a non-regex, it tries to convert it into a regex via `new RegExp(patt)`. In the case of a string, it treats the string as a regex pattern. This means that your search will behave unexpectedly (match more than desired, match less than desired, or even throw a syntax error if the string is not a valid regex) if the search string contains characters that have a special meaning in regular expressions.

Use [`indexOf`](https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Global_Objects/String/indexOf) instead.

Second, `search` and `indexOf` return the position of the first match, or -1 if no match has been found. This means that if the return value is less than zero, nothing has been found. If the return value is zero, the match was made at the beginning of the string. Also note there is a handy shortcut for `x != -1`: the bitwise negation `~x`.

```
if("http://www.google.de".indexOf("http://") > -1){
    alert('with http');
} else {
    alert('no http');
}
```
Rails, Devise, Rspec: Undefined method 'sign\_in' I am trying to write Rspec tests in Rails, using Devise helper methods for signing in and out. The sign\_in method is not working. However, it had been working earlier, before a slew of changes to the app. Things I have tried: - I am including the test helpers in Rspec.configure. - Using Warden's login\_as - Clearing the Rails cache. - Getting rid of Capybara to see if that were causing the issue - I am not setting the session explicitly in my controller specs (e.g. no valid\_session) So far, no dice. What do I need to do differently to test my controllers with a signed-in user? Error message: ``` OrderItemsController GET #index renders the :index view Failure/Error: sign_in :admin NoMethodError: undefined method `sign_in' for # <RSpec::ExampleGroups::OrderItemsController_2::GETIndex:0x00000102c002d0> # ./spec/controllers/order_items_controller_spec.rb:6:in `block (2 levels) in <top (required)>' ``` Controller Spec ``` require 'spec_helper' describe OrderItemsController do before (:each) do admin = create(:admin) sign_in :admin end describe "GET #index" do it "renders the :index view" do get :index expect( response ).to render_template :index end end end ``` spec\_helper.rb ``` require 'rspec/rails' require 'capybara/rspec' RSpec.configure do |config| config.include ApplicationHelper config.include ControllersHelper config.include UsersHelper config.include Devise::TestHelpers, type: :controller config.include FactoryGirl::Syntax::Methods end ``` Gemfile ``` group :development, :test do gem 'rspec-rails', '~> 3.0.0.beta' gem 'capybara' gem 'factory_girl_rails' gem 'faker' gem 'dotenv-rails' gem 'guard' gem 'guard-annotate' gem 'guard-rspec', require: false gem 'guard-livereload', require: false gem 'foreman' end ``` factories/user.rb ``` FactoryGirl.define do factory :user do first { Faker::Name.first_name } last { Faker::Name.last_name } email { Faker::Internet.email } admin false password "secrets1" 
password_confirmation "secrets1" confirmed_at Date.today factory :admin do admin true end end end ``` Thanks in advance.
Did you recently upgrade to RSpec 3 like I did? This is from [the RSpec 3 documentation](https://www.relishapp.com/rspec/rspec-rails/v/3-0/docs/directory-structure): > > Automatically Adding Metadata > RSpec versions before 3.0.0 automatically added metadata to specs based on > their location on the filesystem. This was both confusing to new users and not > desirable for some veteran users. > > > In RSpec 3, this behavior must be explicitly enabled: > > > > ``` > ​# spec/rails_helper.rb > RSpec.configure do |config| > config.infer_spec_type_from_file_location! > end > > ``` > > Since this assumed behavior is so prevalent in tutorials, the default > configuration generated by rails generate rspec:install enables this. > > > If you follow the above listed canonical directory structure and have > configured infer\_spec\_type\_from\_file\_location!, RSpec will automatically > include the correct support functions for each type. > > > After I add that configuration snippet, I no longer have to specify the spec type (e.g. `type: :controller`).
How to count number of vowels - SQL Server

I want to write a procedure that counts the number of vowels in a column, so that whenever I need it, I do not have to write the logic again. I want to use it like `find_vowel()`.

Example:

```
Select Column_String From Tablo1 Where Column_ID=1
```

Result: "I Am gonna find it"

Vowels: "I, A, o, a, i, i"

There are 6 vowels in the column (including upper- and lowercase characters). So how can I find the number of vowels in the columns?

I'm using Microsoft SQL Server 2014. Thanks
Replace all vowels with the empty string (to delete them), then subtract the length of the vowel-less string from the original length:

```
select len(Column_String) - len(
           replace(replace(replace(replace(replace(
               lower(Column_String), 'a', ''), 'e', ''), 'i', ''), 'o', ''), 'u', '')
       ) as vowel_count
from ...
```

---

As a function (note that parameters in T-SQL need the `@` prefix):

```
create function dbo.vowel_count(@str nvarchar(1024))
returns int
as
begin
    return len(@str) - len(
        replace(replace(replace(replace(replace(
            lower(@str), 'a', ''), 'e', ''), 'i', ''), 'o', ''), 'u', ''));
end;
```
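Once created, the function can be called directly in queries; scalar functions must be referenced with a two-part name, so assuming it lives in the `dbo` schema:

```sql
SELECT Column_String,
       dbo.vowel_count(Column_String) AS vowel_count
FROM Tablo1
WHERE Column_ID = 1;
```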
clientaccesspolicy.xml not requested the first time in some browsers I'm running into a weird issue with a crossdomain webservice call in Silverlight 4. Immediately after starting, the application calls a webservice on the same host from where it has been downloaded but on a different port (for ex. the application resides at <http://www.mydomain.com:80> and the webservice is at <http://www.mydomain.com:81>). No SSL involved. The host provides a proper clientaccesspolicy.xml file and everything works correctly *most of the time* (like 99.9%). In some cases however, the browser does not request clientaccesspolicy.xml and as a result the webservice call is blocked and fails with a cross-domain error. In the typical case this is the sequence of requests you see with Fiddler or Chrome developer tools: - index.html (the page hosting the silverlight app) - silverlight.js - application.xap - clientaccesspolicy.xml (requested and downloaded correctly) - webservice call In some instances however you only see - index.html (the page hosting the silverlight app) - silverlight.js - application.xap - -> cross domain error (no clientaccesspolicy requested, no web service call). This only happens on a minority of machines (all running Windows 7) if all these conditions are true: - application running within **Chrome, Firefox or out-of-the browser** (IE always works) - it's **the first time you load the page** (i.e. if you hit the browser's reload button the problem goes away. Close/restart browser and the first time you still have the problem) - **no Fiddler** running (if you run traffic through Fiddler the problem goes away). Chrome developer tools have no effect though. - the machine is inside the **same domain as the application** server. If you access the page from an external network (with the same machine), the problem is not there. On those machines, under those circumstances, the problem is 100% reproducible. What could be causing this? 
What steps can I perform to track the issue?
This problem is obviously quite rare, but with some help from Microsoft I've found the solution. I'm posting it for future reference so that hopefully [this](http://xkcd.com/979/) won't happen again. As a security measure, **Silverlight blocks any cross-domain call between the Internet zone and the Local Intranet zone**. In that case it does not even request clientaccesspolicy.xml. So if the application is hosted on www.myhost.com (Internet zone), Silverlight prevents him from calling a webservice on www.another.com (Local Intranet zone). [This blog post](http://blogs.msdn.com/b/fiddler/archive/2010/11/22/fiddler-and-silverlight-cross-zone-cross-domain-requests.aspx) explains it in detail. So if you have one or several of the following symptoms (despite having discarded the obvious crossdomain errors like a malformed or misplaced clientaccesspolicy.xml): - crossdomain error from some apparently random machines (several different locations/domains), but working from other machines - clientaccesspolicy.xml not requested at all - works with some browsers, does not with others. Apparently random, sometimes not working with any browser. - sometimes no problem when Fiddler is open, but error without Fiddler running - everything works correctly on localhost It may be worth to attempt the following, in order to put the application host and the web service in the same security zone: - go to IE security settings (these settings are also used by any application that access the network, i.e. any other browser as well) - add the address that hosts the application AND the address that hosts the webservice to the **Local Intranet** sites OR - uncheck the "Automatically detect intranet network" flag (so that they **both end up in the Internet zone**)
How to get sms sent confirmation for each contact/person in android?

I want to send SMS to multiple people and verify whether each SMS was sent or not. I checked multiple links (mentioned here) and got the idea of using a `PendingIntent` and a `BroadcastReceiver` for confirmation.

> [Practical way to find out if SMS has been sent](https://stackoverflow.com/questions/9520277/practical-way-to-find-out-if-sms-has-been-sent)
>
> [Sending text messages programmatically in android](https://stackoverflow.com/questions/8578689/sending-text-messages-programmatically-in-android)
>
> <http://mobiforge.com/design-development/sms-messaging-android>

But the key problem is that I have 50 different contact numbers in an `ArrayList` and their different messages in another `ArrayList`. I use this code:

```
for (Condition) {
    sms = SmsManager.getDefault();
    try {
        .
        .
        .
        sms.sendTextMessage(phoneNumbers[i], null, messages[i], sentPI, deliveredPI);
    } catch(IllegalArgumentException e) {
    }
}
```

Now, I can't identify how many people got their message and how many didn't, because, as shown in the posts linked above, every time we just get one message, "SMS delivered". So please let me know how I can put "extras" in the `Intent` when I send a message, and get the extras from the `BroadcastReceiver` to get the details of the specific contact/person.

**One more thing**: there are four different options for the flag value in `PendingIntent` (`FLAG_ONE_SHOT`, `FLAG_NO_CREATE`, `FLAG_CANCEL_CURRENT`, `FLAG_UPDATE_CURRENT`). Which one should I use when I send messages in a for loop to get correct results?
Attaching and retrieving extras on those `Intent`s is no different here than when passing data between Activities. The only real catch is that `PendingIntent`s might not behave as one expects. The `get*()` methods will return a distinct `PendingIntent` only if the passed `Intent` is different than all currently active ones per the [`Intent#filterEquals()` method](https://developer.android.com/reference/android/content/Intent.html#filterEquals(android.content.Intent)), or if the request code is not currently in use for an equal `Intent`. Different extras on an otherwise-same `Intent` with the same request code will *not* cause a new `PendingIntent` to be created. Depending on the flag passed in that case, those extras might be ignored, or they might overwrite those in a currently active `PendingIntent`, possibly leading to incorrect results. Any of the filter properties in `Intent` – i.e., the component, the action, the data URI, etc. – can be used to distinguish them, and also to relay information in some form, so there are a few slightly different ways to set up the construction of the `PendingIntent`s, and their receipt and processing on the other end. [The original version of this answer](https://stackoverflow.com/revisions/24845193/7) showed a somewhat simplistic yet straightforward Java example in a single `Activity` class, using single-part messages with redundant extras on each result. The current version comprises relevant snippets from a complete, proper example and is therefore a bit more involved, so if you don't need anything terribly complex, that previous revision might be a preferable starting point. ### Sending This example uses only one Receiver class for both the send and delivery results, so we will use different actions to distinguish `Intent`s up to result type. If you prefer separate Receiver classes for each, you don't necessarily need the two different actions, since the components would be different in that case. 
``` const val ACTION_SMS_SENT = "com.your.app.action.SMS_SENT" const val ACTION_SMS_DELIVERED = "com.your.app.action.SMS_DELIVERED" ``` NB: The full package name prefix isn't technically necessary – just need a couple of unique strings – but it can help prevent confusion during debugging, especially if you're testing additional SMS stuff alongside this. To track the individual message, the data URI is used to pass the ID for the message in our app's database, since this corresponds with Android's general design around `ContentProvider`s and content URIs, though we won't be using a Provider here. Also, as mentioned, the URI is a filter criterion, so now we can ensure distinct send and delivery results for each message. To illustrate, an `Intent` for a send report: ``` val sendIntent = Intent( ACTION_SMS_SENT, Uri.fromParts("app", "com.your.app", messageId.toString()), context, SmsResultReceiver::class.java ) ``` The next thing to consider is multi-part messages, which require `ArrayList`s of `PendingIntent`s, because each part is actually an individual SMS message with its own results. As far as the send results are concerned, at least, we should consider a message failed if any one part fails, so we need a distinct result for each part. This is a good use for `PendingIntent`'s `requestCode`, and passing the part number for that parameter in the `getBroadcast()` calls will ensure a new `PendingIntent` for each: ``` val sendPending = PendingIntent.getBroadcast( context, partNumber, sendIntent, PendingIntent.FLAG_ONE_SHOT or PendingIntent.FLAG_MUTABLE ) ``` `FLAG_ONE_SHOT` automatically cancels the `PendingIntent` after it's sent, and since we only need these once, there's no need to have the system keep them around. If we need to resend, we'll request them again. 
As for the other available flags, briefly: - `FLAG_MUTABLE` and `FLAG_IMMUTABLE` dictate whether the eventual sender of our `Intent` can modify it with [`Intent#fillIn()`](https://developer.android.com/reference/android/content/Intent#fillIn(android.content.Intent,%20int)) before sending it back to us. Both send and delivery reports involve extras on the received `Intent`, so we use `FLAG_MUTABLE` to allow those to be added. (The only possible extra on a send result is an optional error code for general failures, so we could use `FLAG_IMMUTABLE` for that one if we're not planning to check that.) Unfortunately, since API level 31 we're required to specify one or the other, and of course they didn't always exist, so you'll need to do an ugly `Build.VERSION.SDK_INT` check, though I've omitted that above. - `FLAG_CANCEL_CURRENT` is for cases where you've previously handed out a `PendingIntent` with some data to someone, and now you need to update that data and also make sure that the first someone can't blindly send that `Intent` with data that was changed underneath it. This isn't a concern for our particular situation. - `FLAG_UPDATE_CURRENT` is for the same cases as `FLAG_CANCEL_CURRENT`, but where you don't need to prevent that first someone from sending the updated data. - `FLAG_NO_CREATE` is used when you want to check if the given `PendingIntent` currently exists. This can be handy for several different things – e.g., determining if an alarm has been set – but isn't necessary for our design here. Lastly for the send, because we're supporting multi-part messages, we need some way to determine in the Receiver when the results are complete for a given message. Since we don't really care which specific part may fail, we don't need to track part numbers all the way through, so we'll simply include a `Boolean` extra as a flag for the final part. 
This actually ends up being the only extra needed for this design, but all of the previous considerations were certainly necessary to make sure that the correct values are set on each part. ``` const val EXTRA_IS_LAST_PART = "com.your.app.action.IS_LAST_PART" ``` If we put all of that together, with a little refactoring, we end up with a send function that looks like: ``` fun sendMessage(context: Context, id: Int, address: String, body: String) { val manager = getSmsManager(context) val parts = manager.divideMessage(body) val partCount = parts.size val send = arrayListOf<PendingIntent>() val delivery = arrayListOf<PendingIntent?>() for (partNumber in 1..partCount) { val isLastPart = partNumber == partCount send += PendingIntent.getBroadcast( context, partNumber, createResultIntent(context, id) .setAction(ACTION_SMS_SENT) .putExtra(EXTRA_IS_LAST_PART, isLastPart), RESULT_FLAGS ) delivery += if (isLastPart) { PendingIntent.getBroadcast( context, 0, createResultIntent(context, id) .setAction(ACTION_SMS_DELIVERED), RESULT_FLAGS ) } else null } manager.sendMultipartTextMessage(address, null, parts, send, delivery) } private fun createResultIntent(context: Context, messageId: Int) = Intent( null, Uri.fromParts("app", "com.your.app", messageId.toString()), context, SmsResultReceiver::class.java ) private val RESULT_FLAGS = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.S) { PendingIntent.FLAG_ONE_SHOT or PendingIntent.FLAG_MUTABLE } else { PendingIntent.FLAG_ONE_SHOT } ``` Note that the delivery list has null entries for all except the last part, as we're assuming that the status of the last part will be the overall status for the message as a whole. I'm not sure that that's a valid assumption to make, but all of the AOSP messaging apps do it this way. You can, of course, rework that to get delivery results for each part just like the send results, if you'd prefer. 
### Results

As you might've noticed above, we're using explicit `Intent`s, so we need to register `SmsResultReceiver` in the manifest. For example:

```
<receiver android:name=".SmsResultReceiver" />
```

The outline for the Receiver is quite simple: pull the message ID from the data URI, check the action for the result type, then grab the relevant values and figure the result appropriately.

- For a send, the `resultCode` indicates success if it's equal to `Activity.RESULT_OK`; otherwise, it's an error code corresponding to a constant in `SmsManager`. The full list of possible values is shown in [the docs for `sendMultipartTextMessage()`](https://developer.android.com/reference/android/telephony/SmsManager#sendMultipartTextMessage(java.lang.String,%20java.lang.String,%20java.util.ArrayList%3Cjava.lang.String%3E,%20java.util.ArrayList%3Candroid.app.PendingIntent%3E,%20java.util.ArrayList%3Candroid.app.PendingIntent%3E)).
- For delivery, the result is actually the `status` of an `SmsMessage` that arrives encoded in a byte-array extra. Android recognizes only three broad categories for this – the `Telephony.Sms.STATUS_*` constants shown below – but that status value can encode a specific failure mode, if you'd want to check that, for whatever reason. [This answer](https://stackoverflow.com/a/33240109) has the details.
``` class SmsResultReceiver : BroadcastReceiver() { override fun onReceive(context: Context, intent: Intent) { val messageId = intent.data?.fragment?.toIntOrNull() ?: return when (intent.action) { ACTION_SMS_SENT -> { val isLastPart = intent.getBooleanExtra(EXTRA_IS_LAST_PART, false) if (resultCode == Activity.RESULT_OK) { // Success } else { // Failed; resultCode is an error code, the // values for which are in SmsManager constants } } ACTION_SMS_DELIVERED -> { val status = getResultMessageFromIntent(intent)?.status ?: return when { status == Telephony.Sms.STATUS_COMPLETE -> { … } status >= Telephony.Sms.STATUS_FAILED -> { … } // >=, not == else -> { … } // Telephony.Sms.STATUS_PENDING } } } } } private fun getResultMessageFromIntent(intent: Intent): SmsMessage? = SmsMessage.createFromPdu( intent.getByteArrayExtra("pdu"), intent.getStringExtra("format") ) ``` It is important to note that not all carriers offer delivery reports. For those that don't, the delivery `PendingIntent`s just won't fire, so don't rely on that happening. Unfortunately, the emulators won't ever fire those either, so testing this particular part is not as easy as the other stuff. However, the example project linked below includes a fake report mechanism that can send the delivery `Intent`s upon receipt of the test messages if it's set up to have the device/emulator send those messages to itself. ### Full Example <https://github.com/gonodono/sms-sender> The example is in Kotlin and uses coroutines, Compose, Room, Hilt, WorkManager, and a little (optional) API desugaring for `java.time`. Kotlin, coroutines, and Compose are ready out of the box in the appropriate project templates. The remainder all require additional configuration, and though the example project is already good to go, you might want to check the setup pages to see exactly what went where. 
| Feature | Setup link | | --- | --- | | Room | <https://developer.android.com/training/data-storage/room#setup> | | Hilt | <https://developer.android.com/training/dependency-injection/hilt-android#setup> | | WorkManager | <https://developer.android.com/guide/background/persistent/getting-started> | | Hilt w/ WorkManager | <https://developer.android.com/training/dependency-injection/hilt-jetpack#workmanager> | | API desugaring | <https://developer.android.com/studio/write/java8-support#library-desugaring> | The design is a modern update to the classic pattern that uses a `Service` to send messages queued through a `ContentProvider` or local `SQLiteDatabase`. The Room database comprises two entities – `Message` and `SendTask` – and their corresponding DAOs. Each DAO has functions for the CRUD operations you would expect, and `MessageDao` also provides a `Flow` on a query for the most recent queued `Message`, which greatly simplifies the send routine. That routine is executed in a `Worker` which is started immediately for our demonstration, but which could be easily modified to be scheduled for whenever, with whatever constraints are needed. The actual work for the send is handled in `SmsSendRepository`, and Hilt lets us easily maintain a singleton of that which we can also inject into the statically-registered Receiver for the results, allowing us to keep the overall logic in one place. The UI is done in minimal Compose, and is basically just text logs with a couple of buttons. The left image shows the middle of a successful send of six test messages. The right image shows the same send attempted again, but with the emulator's airplane mode enabled. [![Screenshot of a running send](https://i.stack.imgur.com/VDQzet.png)](https://i.stack.imgur.com/VDQze.png) The last thing to note is the fake delivery report mechanism. This only works when the test device is sending to itself. 
It's nothing more than a regular `BroadcastReceiver` registered for `SMS_RECEIVED` that checks each incoming message against those sent and, upon a match, sends the same kind of broadcast you would get in a real run, complete with a valid result PDU attached as an extra. This was mainly meant to make testing on the emulators more convenient, but it'll work on a real device too, should your particular carrier not provide delivery reports. As is, the app assumes that it'll be running on a single default emulator; i.e., it sends messages to the hard-coded port number 5554.
Test helper method with Minitest I would like to test a helper method using **Minitest** ([`minitest-rails`](https://rubygems.org/gems/minitest-rails)) - but the helper method depends on [`current_user`, a Devise helper method available to controllers and view](https://github.com/plataformatec/devise#controller-filters-and-helpers). **app/helpers/application\_helper.rb** ``` def user_is_admin? # want to test current_user && current_user.admin? end ``` **test/helpers/application\_helper\_test.rb** ``` require 'test_helper' class ApplicationHelperTest < ActionView::TestCase test 'user is admin method' do assert user_is_admin? # but current_user is undefined end end ``` Note that I am able to test other helper methods that do not rely on `current_user`.
When you test a helper in Rails, the helper is included in the test object. (The test object is an instance of `ActionView::TestCase`.) Your helper's `user_is_admin?` method expects a method named `current_user` to also exist. On the controller and `view_context` objects this method is provided by Devise, but it is not yet defined on your test object. Let's add it:

```
require 'test_helper'

class ApplicationHelperTest < ActionView::TestCase
  def current_user
    users :default
  end

  test 'user is admin method' do
    assert user_is_admin?
  end
end
```

The object returned by `current_user` is up to you. Here we've returned a data fixture. You could return any object here that would make sense in the context of your test.
Post-processing to selected meshes In three.js is it possible to apply postprocessing effects only to selected meshes? For example to have a cube with grain effect while the rest of the scene does not have it. Thank you!
Yes. There is a [three.js example](http://threejs.org/examples/webgl_postprocessing_advanced.html) that shows how to apply postprocessing to selected meshes using masking. I think that example can be improved for clarity, but you can modify the example like so: ``` composer4 = new THREE.EffectComposer( renderer, new THREE.WebGLRenderTarget( rtWidth, rtHeight, rtParameters ) ); composer4.addPass( renderScene ); composer4.addPass( renderMask ); composer4.addPass( effectDotScreen ); composer4.addPass( clearMask ); composer4.addPass( effectVignette ); ``` You will get an output like this: [![postprocessing to selected mesh](https://i.stack.imgur.com/5cDWU.png)](https://i.stack.imgur.com/5cDWU.png) It is a complicated example, so *you will have to study it carefully*. three.js.r.77
Objective-C String(yyyy-mm-dd) to NSDate My app works with JSON which includes a date as a string (such as "2011-10-01"). I would like to know how I can convert it into an NSDate. It needs to be displayed in a different format, such as "1 October, 2011".

E.g. this code doesn't work:

```
NSString *date1 = @"2010-11-12";
NSDateFormatter *dateFormat = [[NSDateFormatter alloc] init];
[dateFormat setDateFormat:@"yyyy-MMMM-dd"];
NSDate *date2 = [dateFormat dateFromString:date1];
NSString *strdate = [dateFormat stringFromDate:date2];
```
As Tug writes in [his blog post on the subject](http://tugdualgrall.blogspot.com/2011/01/ios-101-how-to-convert-string-to-nsdate.html): > > Objective-C and iOS SDK provide a class to help formatting date > (marshaling and unmarshaling), this class is [NSDateFormatter](http://developer.apple.com/library/mac/#documentation/Cocoa/Reference/Foundation/Classes/NSDateFormatter_Class/Reference/Reference.html). No > surprise, the NSDateFormatter uses the [Unicode Date Format](http://unicode.org/reports/tr35/#Date_Format_Patterns) > Patterns. > > > ``` NSDateFormatter *dateFormatter = [[NSDateFormatter alloc]init]; [dateFormatter setDateFormat:@"yyyy-MM-dd"]; NSDate *date = [dateFormatter dateFromString:publicationDate ]; [dateFormatter release]; ``` where publicationDate in this case is an NSString.
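To also get the display format from the question ("1 October, 2011"), run the parsed `NSDate` back through a second formatter. This is a sketch — the `d MMMM, yyyy` pattern is my guess at the desired output, and the month name depends on the device locale:

```
NSDateFormatter *inputFormatter = [[NSDateFormatter alloc] init];
[inputFormatter setDateFormat:@"yyyy-MM-dd"];
NSDate *date = [inputFormatter dateFromString:@"2011-10-01"];
[inputFormatter release];

NSDateFormatter *outputFormatter = [[NSDateFormatter alloc] init];
[outputFormatter setDateFormat:@"d MMMM, yyyy"];
// "1 October, 2011" in an English locale
NSString *displayDate = [outputFormatter stringFromDate:date];
[outputFormatter release];
```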
Windows 10: Mouse unresponsive to left and right click Mouse left and right click buttons are unresponsive for periods of time, seemingly at random.

- Mice tried: wired, wireless, Bluetooth, Surface Type Cover touchpad, and all possible combinations at the same time. Left and right click will be locked for periods of time. Persists through restarts. Persists through unplugging/replugging all mouse-type devices.
- Mouse scrolling and cursor movement are not affected (only right and left click do not work). Keyboard works. Touchscreen works.
- Sometimes mouse clicks work on the taskbar, but not on any foreground applications.
- The only fix known so far is starting an application called X-Mouse Button Control. It is able to "unlock" my mouse buttons. Before finding this solution, I needed to restart multiple times.
# The solution to the problem is found!

It turns out the problem was that 'Button 4' on one of my mice may spontaneously activate and **stay activated**. This produces the observed behavior where it seems like left and right mouse buttons are disabled for all currently connected mice and any mouse connected in the future.

The behavior that made the problem worse is that **'Mouse button 4' stays activated even if the mouse's wireless adapter is disconnected**! **Mouse button 4 also stays activated across reboots if the wireless mouse adapter is not disconnected during the reboot.** That's why we observed the symptoms persisting across reboots and disconnect/reconnect of a range of mice.

The solution is to explicitly disable 'Button 4' through software for the semi-broken mouse (or all mice). The easiest solution is to use a free program called X-Mouse Button Control. An AutoHotkey script that disables 'Button 4' should also work well.
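For reference, the AutoHotkey route is only a couple of lines — this is an untested sketch, and `XButton1` is AutoHotkey's name for mouse button 4:

```
; Swallow mouse button 4 (both press and release) so a stuck
; button can no longer lock out left/right clicks.
XButton1::return
XButton1 Up::return
```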
How do you copy a multi-dimensional array (i.e. an array of arrays) in awk? The intent of this question is to post a canonical answer to a problem with a non-obvious solution - copying arrays of arrays (requires GNU awk for arrays of arrays). Given an array of arrays such as shown in the gawk manual on the section about [traversing arrays](https://www.gnu.org/software/gawk/manual/gawk.html#Walking-Arrays): ``` BEGIN { a[1] = 1 a[2][1] = 21 a[2][2] = 22 a[3] = 3 a[4][1][1] = 411 a[4][2] = 42 walk_array(a, "a") } function walk_array(arr, name, i) { for (i in arr) { if (isarray(arr[i])) walk_array(arr[i], (name "[" i "]")) else printf("%s[%s] = %s\n", name, i, arr[i]) } } ``` how would you write a `copy_array` function that can handle arrays of arrays to copy an existing array to a new array such that a subsequent call to `walk_array()` for the newly copied array would output the same values for the new array as for the original, i.e. so that this: ``` BEGIN { a[1] = 1 a[2][1] = 21 a[2][2] = 22 a[3] = 3 a[4][1][1] = 411 a[4][2] = 42 walk_array(a, "a") copy_array(a, b) print "----------" walk_array(b, "b") } ``` would output: ``` a[1] = 1 a[2][1] = 21 a[2][2] = 22 a[3] = 3 a[4][1][1] = 411 a[4][2] = 42 ---------- b[1] = 1 b[2][1] = 21 b[2][2] = 22 b[3] = 3 b[4][1][1] = 411 b[4][2] = 42 ```
```
$ cat tst.awk
BEGIN {
    a[1] = 1
    a[2][1] = 21
    a[2][2] = 22
    a[3] = 3
    a[4][1][1] = 411
    a[4][2] = 42

    walk_array(a, "a")
    copy_array(a, b)
    print "----------"
    walk_array(b, "b")
}

function copy_array(orig, copy,     i)
{
    delete copy     # Empty "copy" for first call and delete the temp
                    # array added by copy[i][1] below for subsequent.
    for (i in orig) {
        if (isarray(orig[i])) {
            copy[i][1]   # Force copy[i] to also be an array by creating a temp
            copy_array(orig[i], copy[i])
        }
        else {
            copy[i] = orig[i]
        }
    }
}

function walk_array(arr, name,     i)
{
    for (i in arr) {
        if (isarray(arr[i]))
            walk_array(arr[i], (name "[" i "]"))
        else
            printf("%s[%s] = %s\n", name, i, arr[i])
    }
}
```

.

```
$ awk -f tst.awk
a[1] = 1
a[2][1] = 21
a[2][2] = 22
a[3] = 3
a[4][1][1] = 411
a[4][2] = 42
----------
b[1] = 1
b[2][1] = 21
b[2][2] = 22
b[3] = 3
b[4][1][1] = 411
b[4][2] = 42
```

The use of `copy[i][1]` to create a temp array before the internal call to `copy_array()` (which is then deleted on entry to `copy_array()`) is to avoid the subsequent code assuming that what exists at `copy[i]` is a scalar. This is the same as how you have to create a temp array before using `split()` (which internally first deletes the array you pass as an argument) to populate a subarray, because the content of an array element is assumed to be a scalar by default for backward compatibility with code written for awks that do not support arrays of arrays (e.g. POSIX awks):

```
$ printf 'a b\nc d\n' | awk '{split($0,arr[NR])} END{for (i in arr) for (j in arr[i]) print i,j,arr[i][j]}'
awk: cmd. line:1: (FILENAME=- FNR=1) fatal: split: second argument is not an array

$ printf 'a b\nc d\n' | awk '{arr[NR][1]; split($0,arr[NR])} END{for (i in arr) for (j in arr[i]) print i,j,arr[i][j]}'
1 1 a
1 2 b
2 1 c
2 2 d
```
Python executables: py2exe or PyInstaller? To create executable files (Windows), I assume that we should use one of them: py2exe or PyInstaller. What are the differences between them?
py2exe and PyInstaller are both wrappers, but here are a few differences that I noticed:

1. py2exe is compatible with Python 2.4+ including Python 3.0 & 3.1, whereas PyInstaller is currently compatible with Python 2.7 and 3.3–3.5.
2. As far as I know, py2exe didn't support signing, whereas PyInstaller has support for signing from version 1.4.
3. In PyInstaller it is easy to create a single exe; by default, both create a bunch of exes & dlls.
4. In py2exe it's easier to embed a manifest file in the exe, useful for run-as-administrator mode in Windows Vista and beyond.
5. PyInstaller is modular and has a feature of hooks to include the files you like in the build. I don't know about this feature in py2exe.

Hope this helps you in your decision making.

[Update] - It looks like PyInstaller is actively developed (<https://github.com/pyinstaller/pyinstaller/>) and released. py2exe is still using SourceForge and its release cycle is very random; on PyPI there is no build after 2014, though their code shows development in 2017 as well (<https://sourceforge.net/p/py2exe/svn/HEAD/tree/trunk/py2exe-3/py2exe/>).

So, I recommend using PyInstaller until py2exe stabilizes its release cycle in favor of developers.
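For example, the single-exe build with PyInstaller is just this (the script name is a placeholder):

```
pip install pyinstaller
pyinstaller --onefile your_script.py
# the bundled executable ends up in the dist/ folder
```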
Command-line to list DNS servers used by my system Is there a command to list dns servers used by my system? I tried ``` $ cat /etc/resolv.conf # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8) # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN nameserver 127.0.0.1 $ cat /etc/network/interfaces # interfaces(5) file used by ifup(8) and ifdown(8) auto lo iface lo inet loopback ``` But it doesn't list any servers, if I go to "Network Manager GUI Tool", in Wireless section it lists "DNS 192.168.1.1 8.8.8.8 8.8.4.4" Can I get same information from command line? I am using Ubuntu 12.04 LTS
resolv.conf isn't really used anymore, unless you implement it yourself. The network manager does it now. I created an alias to list the DNS servers on my system, as I sometimes switch from OpenDNS to Google's open DNS. **Ubuntu >= 15** ``` nmcli device show <interfacename> | grep IP4.DNS ``` **Ubuntu <= 14** ``` nmcli dev list iface <interfacename> | grep IP4 ``` In my case, `<interfacename>` is `eth0`, which is common, but not always the case. You can see your interfaces with ``` nmcli device status ``` See if this is what you want. EDIT: I think resolv.conf is actually used indirectly, because the network manager creates the server that listens on 127.0.0.1, but I was told that this is an implementation detail that should not be counted on. I think that if you enter DNS addresses before this entry, they might get used, but I'm not sure exactly how this works. I think it's best to use the network manager in most cases, when possible.
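For reference, the alias I mentioned is nothing fancy — something like this in `~/.bashrc`, with `eth0` replaced by your interface name:

```
# Ubuntu >= 15 syntax; for <= 14 use: nmcli dev list iface eth0 | grep IP4
alias dnslist='nmcli device show eth0 | grep IP4.DNS'
```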
Memory layout of vector of POD objects Suppose I have a simple C++ class,

```
class Data {
public:
    float data[3];

    void clear() {
        data[0] = 0.0f;
        data[1] = 0.0f;
        data[2] = 0.0f;
    }
};
```

And a vector of `Data` objects,

```
std::vector<Data> v(10);
```

Is it safe to assume that `&v[0].data[0]` points to an array of 30 floats?
From the standard:

> **23.3.6.1 Class template vector overview**
>
> The elements of a vector are stored contiguously, meaning that if v is a vector<T, Allocator> where T is some type other than bool, then it obeys the identity &v[n] == &v[0] + n for all 0 <= n < v.size()

so `&v[0]` indeed points to the beginning of 10 contiguous `Data` objects.

But for the layout of `Data` we have:

> **9.2.13 Class members**
>
> Nonstatic data members of a (non-union) class with the same access control (Clause 11) are allocated so that later members have higher addresses within a class object. The order of allocation of non-static data members with different access control is unspecified (11). Implementation alignment requirements might cause two adjacent members not to be allocated immediately after each other; **so might requirements for space for managing virtual functions (10.3) and virtual base classes (10.1).**

so we cannot be sure that `sizeof(Data) == 3*sizeof(float)`; therefore the general answer should be: it's not safe to assume 30 contiguous floats.
Why kernel version doesn't match Ubuntu version in a Docker container? I have a Docker container built from Ubuntu 14.10. When I log in to the container to check the Ubuntu version and kernel version I see the following: ``` root@~$>> lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 14.10 Release: 14.10 Codename: utopic root@~$>> uname -a Linux ambiata-aws 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux ``` I thought that Ubuntu 14.10 was supposed to be kernel version 3.16 (as stated [here](https://wiki.ubuntu.com/UtopicUnicorn/ReleaseNotes)), so why do I have kernel version 3.13.0-24-generic ? The reason I am asking is because there is a patch in 3.13.0-29-generic that I would like to have (that is, having fallocate working on AUFS in my docker container) which is discussed [here](https://www.mail-archive.com/[email protected]/msg04660.html).
From [What is Docker?](https://www.docker.com/what-docker): > > LIGHTWEIGHT > > > Containers running on a single machine share the same operating system > kernel; they start instantly and use less RAM. Images are constructed > from layered filesystems and share common files, making disk usage and > image downloads much more efficient. > > > Containers run on the host OS kernel. In your case, the host could be a Ubuntu 14.04 (running the original kernel) or a Ubuntu 12.04 (running kernel from trusty's [hardware enablement](https://askubuntu.com/q/248914/65926) stack). If the host is Ubuntu 14.04 you could install kernel 3.16: ``` sudo apt-get install linux-generic-lts-utopic ``` Or kernel 3.19: ``` sudo apt-get install linux-generic-lts-vivid ``` For Ubuntu 12.04, kernel 3.13 is latest official one.
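You can confirm this directly — the container reports the host's kernel (assuming Docker is installed; the image tag is just an example):

```
uname -r                               # kernel on the host
docker run --rm ubuntu:14.10 uname -r  # prints the same version string
```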
ASP.NET Core Authorize Redirection to wrong URL I am trying to run a web application with the following route mapped: ``` app.UseMvc(routes => { routes.MapRoute( "default", "WoL/{controller=Account}/{action=Login}/{id?}"); }); ``` If the user is not authenticated and tries to access a action having the AuthorizeAttribute, the user should be redirected to the default login URL (as seen above). But the user gets redirected to "/Account/Login" instead of "/WoL/Account/Login". How can I redirect the user to "/WoL/Account/Login", if the user is not authenticated? I have configured the following Cookie Authentication: ``` app.UseCookieAuthentication(new CookieAuthenticationOptions { LoginPath = new PathString("/WoL/Account/Login"), AutomaticChallenge = true }); ```
@Dmitry's answer no longer works in ASP.NET Core 3.1.

Based on the documentation that you can find [here](https://learn.microsoft.com/en-us/aspnet/core/security/authentication/scaffold-identity?view=aspnetcore-3.1&tabs=visual-studio#create-full-identity-ui-source), you have to add the following code to `ConfigureServices`:

```
services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Latest)
    .AddRazorPagesOptions(options =>
    {
        options.Conventions.AuthorizeAreaFolder("Identity", "/Account/Manage");
        options.Conventions.AuthorizeAreaPage("Identity", "/Account/Logout");
    });

services.ConfigureApplicationCookie(options =>
{
    options.LoginPath = $"/Identity/Account/Login";
    options.LogoutPath = $"/Identity/Account/Logout";
    options.AccessDeniedPath = $"/Identity/Account/AccessDenied";
});
```
GHCi on raspberry pi 2? I'm working on a few haskell projects that run on a raspberry pi 2 and the version of ghc that you can install with apt-get from raspbian (7.4.1). It has no GHCi though, which prevents some vital packages (like Vector) from compiling. I've seen a few rumors about being able to get later versions of ghc (with ghci) onto the pi, but nothing recent. The entry on the haskell wiki looks a couple years out of date. Has anyone had any luck with this?
I have had some luck with this! > > `sagemuej@sagemuej-Aspire-5742G:~$ ssh pi-loc` > > Linux raspberrypi 3.12.28+ #709 PREEMPT Mon Sep 8 15:28:00 BST 2014 armv6l > > > > The programs included with the Debian GNU/Linux system are free software; > > the exact distribution terms for each program are described in the > > individual files in /usr/share/doc/\*/copyright. > > > > Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent > > permitted by applicable law. > > Last login: Wed Apr  1 00:24:44 2015 from sagemuej-aspire-5742g.localdomain > > `pi@raspberrypi:~$ ghci` > > GHCi, version 7.8.2: <http://www.haskell.org/ghc/>  :? for help > > Loading package ghc-prim ... linking ... done. > > Loading package integer-gmp ... linking ... done. > > Loading package base ... linking ... done. > > `Prelude> :m +Data.Vector` > > `Prelude Data.Vector> fromList [1,2,3]` > > Loading package array-0.5.0.0 ... linking ... done. > > Loading package deepseq-1.3.0.2 ... linking ... done. > > Loading package primitive-0.5.2.1 ... linking ... done. > > Loading package vector-0.10.9.1 ... linking ... done. > > fromList [1,2,3] > > > > Ain't it nice? It is a full GHC 7.8. I had to bootstrap it from source, though. Worked pretty fine by the normal setup instructions if I recall correctly – though it took ages to build (like, half a *week*). The main issue is memory: for some of the larger modules, the compiler needs more than the π can offer even when running without X. What I did was simply, I connected a USB hard drive and set up a swap partition on it. Of course, swapping makes everything even slower, but eventually it'll succeed. Did for me, at least.
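For anyone repeating this, the swap setup was along these lines — `/dev/sda1` is an assumption here, so substitute your USB drive's partition (check with `lsblk`), and beware that `mkswap` destroys whatever is on it:

```
sudo mkswap /dev/sda1   # format the USB partition as swap
sudo swapon /dev/sda1   # enable it
swapon -s               # confirm it's active before starting the build
```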
Is the reason for weak wireless signal on new laptop the wireless card itself or the antennas? I just bought a new laptop [ASUS N550JV](https://www.asus.com/Notebooks_Ultrabooks/N550JV/#overview). I wanted to test the wifi receivers of my two laptops (the old and the new), so I put them together and scanned for the neighboring wireless connections. My old laptop (Atheros AR5B93) has 10+ AP's on the list while the new laptop (Atheros AR9485WB-EG) has only 2, and the signals were substantially weaker. I gotta say any signal weaker than -75 dB won't show up on new laptop while I can list wireless connections with -90 dB on my older one. This will cause problems in my college building, since with my old laptop I could often barely connect to the network. With the new laptop, I believe I most certainly will not be able to connect. I first thought I would swap the wireless cards and fix this but then the root of the problem being the antennas made more sense. So, does this issue originate from the wireless network card itself or the antennas? # Edit I want to present some screenshots: In the scenario, I have 10-15 meters and a wall between laptops and router. The red is my connection, blue and green are the common APs that the laptops can enlist, and yellows are the ones that only the old laptop can enlist. **Old Laptop** ![enter image description here](https://i.stack.imgur.com/ZvBDF.png) **New Laptop** ![enter image description here](https://i.stack.imgur.com/f4kpS.png)
This is a tricky question. The gold standard for antennas in laptops is usually two (or even three) dedicated panel antennas installed in the display section behind the LCD panel. Most manufacturers do this, although a few cheaper ones will try to get away with antennas in the main section of the laptop (where there's a lot of metal and RFI to complicate reception). Unfortunately this is not the kind of thing anyone puts on a spec sheet, so it's hard to know without cracking a laptop open. Given that the N550 is a fairly expensive model I suspect they've done it properly and put the antennas in the display. This doesn't necessarily mean that the antenna design in the Asus isn't inferior, but given that wifi antenna design is a pretty solidly understood field I would be surprised if you saw such a big difference for that reason. Given the magnitude of the difference I am more prone to suspect a defective antenna or antenna cable, bad physical connection of the antenna to the card, or, as you suspected, a very cheap WiFi adapter. If you have the time I would recommend swapping the cards out as a test. When you open up the Asus to swap the card, take a good look at the antenna cables and make sure that the micro RF connectors are firmly in place and that the cable is not visibly being excessively pinched anywhere - especially where it passes from display panel to main body. It should be routed so that it twists here rather than bends (sort of an S curve through a display hinge usually), or it should at least not be bent too sharply. Do let us know what you find out.
Elixir Remove Duplicates From List Would someone be willing to provide some alternate solution to the removal of duplicate values from a List (X) using Functional Programming and Elixir Constructs? ``` X = [1,26,3,40,5,6,6,7] # the 6 being the duplicate ``` The stock solution in my mind for solving this problem, would be to iterate the list (X), and add to a new list (Y) where the key doesn't already exist. Thank you
`Enum.uniq` does what you want, for example: ``` iex(6)> Enum.uniq([1,26,3,40,5,6,6,7]) [1, 26, 3, 40, 5, 6, 7] ``` In terms of how you'd implement it you could write a recursive function like so: ``` defmodule Test do def uniq(list) do uniq(list, MapSet.new) end defp uniq([x | rest], found) do if MapSet.member?(found, x) do uniq(rest, found) else [x | uniq(rest, MapSet.put(found, x))] end end defp uniq([], _) do [] end end iex(3)> Test.uniq([1, 1, 2, 3, 4, 4, 1]) [1, 2, 3, 4] ```
Firebase offline cache & original firebase.js source code My question is a follow-up to [this topic](https://stackoverflow.com/questions/14670394/does-firebase-allow-an-app-to-start-in-offline-mode). I love the simplicity and performance of Firebase from what I have seen so far. As I understand, firebase.js syncs data snapshots from the server into an object in Javascript memory. However there is currently no functionality to cache this data to disk. As a result: 1. Applications are required to have a connection when they start-up, thus there is no true offline access. 2. Bandwidth is wasted every time an app starts up by re-transmitting all previous data. Since the snapshot data is sitting in memory as a Javascript object, it should be quite trivial to serialize it as JSON and save it to localStorage, so the exact application state can be loaded next time the app is started, online or not. But as the firebase.js code is minified and cryptic I have no idea where to look. [PouchDB](http://pouchdb.com) handles this very well on a CouchDB backend. (But it lacks the quick response time and simplicity of Firebase.) **So my questions are:** **1. What data would I need to serialize to save a snapshot to localStorage? How can I then load this back into Firebase when the app starts?** **2. Where can I download the original non-minified dev source code for firebase.js?** (By the way, two features that would help Firebase blow the competition out of the water: offline caching and map reduce.)
Offline caching and map reduce-like functionality are both in development. The firebase.js [source is available here](http://cdn.firebase.com/v0/firebase-debug.js) for dev and debugging. You can serialize a snapshot locally using [exportVal](https://www.firebase.com/docs/javascript/datasnapshot/exportval.html) to preserve all priority data. If you aren't using priorities, a simple value will do: ``` var fb = new Firebase(URL); fb.once('value', function(snapshot) { console.log('values with priorities', snapshot.exportVal()); console.log('values without priorities', snapshot.val()); }); ``` Later, if Firebase is offline (use [.info/connected](https://www.firebase.com/docs/managing-presence.html) to help determine this) when your app is loaded, you can call [.set()](https://www.firebase.com/docs/javascript/firebase/set.html) to put that data back into the local Firebase. When/if Firebase comes online, it will be synced. However, this is truly only suitable for static data that only one person will access and change. Consider, for example, the fallout if I download the data, keep it locally for a week, and it's modified by several other users during that time, then I load my app offline, make one minor change, and then come online. My stale changes would blow away all the work done in between. There are lots of ways to deal with this--conflict resolution, using security rules and update counters/timestamps to detect stale data and prevent regressions--but this isn't a simple affair and needs deep consideration before you head down this route.
Unable to get Rails 3.1, Compass, Sass, Blueprint working on Heroku Cedar For the most part I've followed the direction laid out [here](http://tesoriere.com/2011/08/08/migrating-from-rails-3.1-rc4-to-rc5-using-heroku%27s-cedar-stack--also-compass--unicorn--and-sendgrid-/), which resulted in the following error coming from the initializer it asked me to create:

```
from /app/config/initializers/sass.rb:1:in `<top (required)>' 2011-09-05T16:45:42+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/railties- 3.1.0/lib/rails/railtie/configuration.rb:78:in `method_missing': undefined method `sass' for # <Rails::Application::Configuration:0x00000003845528> (NoMethodError)
```

The Heroku page on getting started isn't much help either. It is basically the same instructions, only without the initializer. However, without it, it can't find any of the Blueprint stuff, so I still can't start. Has anyone out there made it further than I have? Edit for more history: I went through a number of errors to get here, so I figured I should write them all out. The first problem I had was that html5-boilerplate was in :assets, which meant that the ie\_html method wasn't found, so I pulled that out of :assets. This resulted in this error because html5-boilerplate depends on compass:

```
2011-09-05T17:15:47+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/bundler/gems/compass-b7f44a48d375/lib/compass/version.rb:56:in `const_missing': uninitialized constant Compass::Frameworks (NameError) 2011-09-05T17:15:47+00:00 app[web.1]: from /app/vendor/bundle/ruby/1.9.1/bundler/gems/compass-html5-boilerplate-405f9ddbca56/lib/html5-boilerplate.rb:1:in `<top (required)>'
```
In the end the final solution was to also make sass-rails global (or at least it appears to have been). I sort of feel like I finally got this to work by coincidence, but here it is. I pulled compass out of :assets and made it global too, which then led to errors compiling the SCSS files, which finally led me to upgrade to Cedar, which then resulted in the Blueprint-missing errors. Lastly I added the initializer which, I assume, is meant to add the compass framework stuff to the config path. Hope that all helps. Here is the relevant code:

```
gem 'heroku'
gem 'haml'
gem 'compass', :git => 'git://github.com/chriseppstein/compass.git'
gem 'html5-boilerplate', :git => 'git://github.com/sporkd/compass-html5-boilerplate.git'
gem 'sass-rails', " ~> 3.1.0"
```

Note the github versions for compass and html5-boilerplate (you don't need h5bp if you don't use it). The initializer is:

```
Rails.configuration.sass.tap do |config|
  config.load_paths << "#{Gem.loaded_specs['compass'].full_gem_path}/frameworks/compass/stylesheets"
end
```
Implementing Autocompletion in iPhone UITextField for contacts in address book I would like to have a UITextField or UITextView where, as the user types, suggestions appear below (similar to typing an address in the Mail application) and can be tapped, so there is no need to type the complete word, address, or phone number. I know how to fetch data from the Address Book framework, and I know how to handle input in a UITextField/UITextView and its delegates, but I don't know what kind of structure to use for fetching and showing the data as the user types. I know basic Core Data, if that matters. I hope I can get some help.

UPDATE (2010/3/10): I don't have a problem making a native-like GUI, but I am asking about the algorithm. Does anybody know what kind of algorithm is best for this? Maybe some binary tree? Or should I just fetch data from Core Data every time? Thanks Ignacio

UPDATE (2010/03/28): I've been very busy these days, so I have not tried UISearchResults, but it seems fine to me. BUT I wonder: was there a necessity to delete the winning answer? I don't think it's fair that my reputation went down and I couldn't see the winning answer. ;(
You don't need some advanced algorithm to do this kind of thing... if you want to search the address book, then you can do so each time the user types in a character (or however frequent you need to seach). To do this, just take a look at the [UISearchDisplayController](http://developer.apple.com/iphone/library/documentation/UIKit/Reference/UISearchDisplayController_Class/Reference/Reference.html) class. I learned how to do almost the exact thing by looking at Apple's [TableSearch](http://developer.apple.com/iphone/library/samplecode/TableSearch/index.html) sample app. That app searches a list of objects using different fields (All, Device, Desktop, Portable)... so you could adapt it to Address Book fields (First Name, Last Name, Address...). The only thing you need to change is the search within the Address Book. I don't know exactly what your requirements ask for but this should be what you need to get it done. If you have any trouble with the code let me know... but this example really helped me, so hopefully it works for you.
Vectorize large NumPy multiplication I am interested in calculating a large NumPy array. I have a large array `A` which contains a bunch of numbers. I want to calculate the sum of different combinations of these numbers. The structure of the data is as follows: ``` A = np.random.uniform(0,1, (3743, 1388, 3)) Combinations = np.random.randint(0,3, (306,3)) Final_Product = np.array([ np.sum( A*cb, axis=2) for cb in Combinations]) ``` My question is if there is a more elegant and memory efficient way to calculate this? I find it frustrating to work with `np.dot()` when a 3-D array is involved. If it helps, the shape of `Final_Product` ideally should be (3743, 306, 1388). Currently `Final_Product` is of the shape (306, 3743, 1388), so I can just reshape to get there.
[`np.dot()`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html) won't give you the desired output, unless you involve extra step(s) that would probably include `reshaping`. Here's one `vectorized` approach using [`np.einsum`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html) to do it in one shot without any extra memory overhead -

```
Final_Product = np.einsum('ijk,lk->lij',A,Combinations)
```

For *completeness*, here's with `np.dot` and `reshaping` as discussed earlier -

```
M,N,R = A.shape
Final_Product = A.reshape(-1,R).dot(Combinations.T).T.reshape(-1,M,N)
```

Runtime tests and output verification -

```
In [138]: # Inputs ( smaller version of those listed in question )
     ...: A = np.random.uniform(0,1, (374, 138, 3))
     ...: Combinations = np.random.randint(0,3, (30,3))
     ...:

In [139]: %timeit np.array([ np.sum( A*cb, axis=2) for cb in Combinations])
1 loops, best of 3: 324 ms per loop

In [140]: %timeit np.einsum('ijk,lk->lij',A,Combinations)
10 loops, best of 3: 32 ms per loop

In [141]: M,N,R = A.shape

In [142]: %timeit A.reshape(-1,R).dot(Combinations.T).T.reshape(-1,M,N)
100 loops, best of 3: 15.6 ms per loop

In [143]: Final_Product =np.array([np.sum( A*cb, axis=2) for cb in Combinations])
     ...: Final_Product2 = np.einsum('ijk,lk->lij',A,Combinations)
     ...: M,N,R = A.shape
     ...: Final_Product3 = A.reshape(-1,R).dot(Combinations.T).T.reshape(-1,M,N)
     ...:

In [144]: print np.allclose(Final_Product,Final_Product2)
True

In [145]: print np.allclose(Final_Product,Final_Product3)
True
```
position: relative appearing over position:absolute Having a strange issue in IE7. In a number of spots, I have a DIV which has position: absolute on it (faux dropdown). Whenever there is something behind it which has position: relative, the relatively positioned item will show through the other div. The relatively positioned item does not have any z-index set, while the absolutely positioned item (the one I want on top) has a z-index of 1000. <http://skitch.com/louiswalch/dub5h/microsoft-windows-vista>
I suspect you've already tried it, but set a `z-index` on your relatively positioned element that's lower than your absolutely positioned element's `z-index` as the first test. If that doesn't work, you need to make sure both elements are in the same [stacking context](http://therealcrisp.xs4all.nl/meuk/IE-zindexbug.html). In IE, whenever you apply the `position` CSS rule to an element, it generates a new stacking context within that element. That means that z-index will only be properly respected within that element's children and children in other stacking contexts *with* lower z-indexes may still stack above. In your case, you either need to put the dropdown and button in the same stacking context **or** apply z-index to the 2 elements that are generating their separate stacking contexts.
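For illustration, here is a minimal sketch of the second fix (the class names and markup are made up, not taken from the original page): give the two context-creating positioned ancestors explicit z-index values, so IE stacks the contexts themselves as intended:

```html
<!-- In IE7 each positioned element starts its own stacking context, so the
     battle is decided between these two siblings, not their children. -->
<div class="dropdown-wrap" style="position: relative; z-index: 2;">
  <!-- The faux dropdown: its z-index only matters within its parent's context -->
  <div class="dropdown" style="position: absolute; z-index: 1000;">dropdown</div>
</div>
<div class="content" style="position: relative; z-index: 1;">page content</div>
```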
ClassNotFoundException: Didn't find class "androidx.work.Worker" on path: DexPathList I have been trying to make an Android library, wherein I'm making use of WorkManager's PeriodicWorkRequest. It works flawlessly as a module in the host development app. However, once I export it as an aar file and use it in another app, I get the following error,

```
java.lang.NoClassDefFoundError: Failed resolution of: Landroidx/work/Worker;
.
.
.
Caused by: java.lang.ClassNotFoundException: Didn't find class "androidx.work.Worker" on path: DexPathList[[zip file "/data/app/com.example.testapp-0WusKIYjC1qsHRMASLjm1Q==/base.apk"],nativeLibraryDirectories=[/data/app/com.example.testapp-0WusKIYjC1qsHRMASLjm1Q==/lib/arm64, /system/lib64, /vendor/lib64]]
```

Any idea how I can fix this?
An AAR by itself does **not** embed or otherwise encode anything about the transitive dependencies (such as your AAR's dependency on WorkManager), so it is expected that if you're just using an AAR as a [local binary dependency](https://developer.android.com/studio/build/dependencies#dependency-types) that you'd need to redeclare all of the transitive dependencies. As per the [Gradle Declaring Dependencies documentation](https://docs.gradle.org/current/userguide/declaring_dependencies.html), a proper dependency is in the form of a maven repository (either local or remote). A maven repository, besides hosting the AAR itself, also includes a [POM file](https://maven.apache.org/guides/introduction/introduction-to-the-pom.html) that declares the transitive dependencies your library depends on. This ensures that there's only one version of each library included in your build (as it can deduplicate transitive dependencies across multiple libraries). You don't see this same issue when locally using a project since those transitive dependencies are included as part of the build process.
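As a sketch of what the consuming app would need when using the AAR as a plain local binary (the file name and WorkManager version below are assumptions; match the version the library was actually built against):

```groovy
// app/build.gradle of the consuming project
dependencies {
    // The local AAR carries no metadata about its own dependencies...
    implementation files('libs/mylibrary.aar')

    // ...so its transitive dependencies, such as WorkManager, must be
    // redeclared by hand in the consuming app.
    implementation "androidx.work:work-runtime:2.7.1"
}
```

Publishing the library to a (local or remote) Maven repository instead generates the POM that declares these dependencies for you, so consumers don't have to redeclare anything.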
Use of splines in parameter estimation The question here is about a passage in the very frequently (10,000+) cited paper by Storey and Tibshirani "Statistical significance for genomewide studies" (<https://doi.org/10.1073/pnas.1530509100>). For their method they need to estimate a parameter named $\pi\_0$, which is more or less the overall false discovery rate, for which they compute some $\hat \pi\_0(\lambda)$ ($\lambda$ being some bias-variance tradeoff parameter) for various values of $\lambda$ from 0.01, 0.02, 0.03 and so on up to 0.95. Then they supposedly fit some natural cubic spline to that and extrapolate it to $\lambda=1$. In their own words:

> 
> Consider Fig. 3, where we have plotted $\hat π\_0(λ)$ versus λ for λ = 0, 0.01, 0.02,..., 0.95. By fitting a natural cubic spline to these data (solid line), we have estimated the overall trend of $\hat π\_0(λ)$ as λ increases. We purposely set the degrees of freedom of the natural cubic spline to 3; this means we limit its curvature to be like a quadratic function, which is suitable for our purposes. It can be seen from Fig. 3 that the natural cubic spline fits the points quite well.
> 
> 

And this is Fig. 3: [![enter image description here](https://i.stack.imgur.com/AmFqFm.png)](https://i.stack.imgur.com/AmFqFm.png) Can anybody make sense of this description of the "natural cubic spline fit"? AFAIK the spline should interpolate the data, which it clearly doesn't. It looks more like just one section of a spline, but given the natural boundary conditions on the spline, that could only be a straight line IMHO, which it also isn't. I also don't understand the sentence about setting the degrees of freedom and how that "limits the curvature to a quadratic function". The curvature of a cubic spline is related to its second derivative, which is obviously constant, and not quadratic. I would shrug that off if it weren't for those 10000+ citations and the authors being well respected in the field. Maybe I overlooked something?
Anyway, the way the method is presented, I can't even implement it as I don't have a clue what's meant here. **REFERENCE** Storey, John D., and Robert Tibshirani. "Statistical significance for genomewide studies." Proceedings of the National Academy of Sciences 100.16 (2003): 9440-9445.
I found their code on the Wayback Machine, and they used the `smooth.spline` function in R. The paper points to <http://genomine.org/qvalue/results.html> for code and data, which is defunct, but can still be found in a snapshot from 2004 which redirects to [http://faculty.washington.edu/~jstorey](http://faculty.washington.edu/%7Ejstorey), also defunct, but also with a snapshot, so here are the code and data: <https://web.archive.org/web/20040810001627/http://faculty.washington.edu/%7Ejstorey/qvalue/results.html> And here's the code for fig. 3 at the bottom of the code file:

```
#Figure 3
library(modreg)
lam <- seq(0,0.95,0.01)
pi0 <- rep(0,length(lam))
for(i in 1:length(lam)) {
  pi0[i] <- mean(p>lam[i])/(1-lam[i])
}
spi0 <- smooth.spline(lam,pi0,df=3,w=(1-lam))
plot(lam,pi0,xlab=expression(lambda),ylab=expression(hat(pi)[0](lambda)))
lines(spi0)
```

`library(modreg)` is now part of base R: <https://stat.ethz.ch/pipermail/bioconductor/2010-June/034197.html>

So the trick for making the fit so stable at the end seems to be that they used $1-\lambda$ as weights. Also, "natural splines" might have been a misnomer. I'm not sure what we have learned here, except that the Wayback Machine is very useful and maybe worth a donation <https://archive.org/>
How to retrieve SQL result column value using column name in Python? Is there a way to retrieve SQL result column value using column name instead of column index in Python? I'm using Python 3 with mySQL. The syntax I'm looking for is pretty much like the Java construct: ``` Object id = rs.get("CUSTOMER_ID"); ``` I've a table with quite a number of columns and it is a real pain to constantly work out the index for each column I need to access. Furthermore the index is making my code hard to read. Thanks!
The [MySQLdb](http://sourceforge.net/projects/mysql-python/) module has a [DictCursor](http://mysql-python.sourceforge.net/MySQLdb-1.2.2/public/MySQLdb.cursors.DictCursor-class.html): Use it like this (taken from [Writing MySQL Scripts with Python DB-API](http://www.kitebird.com/articles/pydbapi.html)): ``` cursor = conn.cursor(MySQLdb.cursors.DictCursor) cursor.execute("SELECT name, category FROM animal") result_set = cursor.fetchall() for row in result_set: print "%s, %s" % (row["name"], row["category"]) ``` **edit:** According to user1305650 this works for [`pymysql`](https://github.com/petehunt/PyMySQL) as well.
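Note that the example above uses Python 2's `print`; with Python 3 the same pattern is available via `pymysql.cursors.DictCursor`. Since that needs a running MySQL server, here is a self-contained sketch of the column-name access idea using the standard library's `sqlite3` row factory instead (the table and data are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row  # rows become accessible by column name

cur = conn.cursor()
cur.execute("CREATE TABLE animal (name TEXT, category TEXT)")
cur.execute("INSERT INTO animal VALUES ('snake', 'reptile')")

cur.execute("SELECT name, category FROM animal")
for row in cur.fetchall():
    # No column indexes needed; readable and robust to column reordering
    print("%s, %s" % (row["name"], row["category"]))
```

With pymysql the equivalent would be opening the cursor as `conn.cursor(pymysql.cursors.DictCursor)` and then indexing rows by column name in the same way.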
Variable in switch case: UB or compiler bug? I'm trying to determine whether I invoked undefined behavior or encountered a compiler error. I was developing some code to interpret messages sent to an Arduino from an external component over a serial connection. Here's a simplified version of the member function I began with. [The `Serial.println` commands are the Arduino equivalent of printf debugging.]

```
void decodeMessage() {
  switch (getType()) {
    case 0x3A:  Serial.println("foo message");  break;
    case 0x3B:  Serial.println("bar message");  break;
    case 0x3C:  Serial.println("zerz message"); break;
    ...  // and so on for 0x3D through 0x40
    case 0x41:  Serial.println("ack message");  break;
    default:    Serial.println("unknown message type");  break;
  }
}
```

This worked fine for all of the message types. Then I modified the case for 0x3B to also check some bits in the message's parameter:

```
case 0x3B:
  Serial.println("bar message");
  const auto mask = getParam();
  if (mask & 0x01) Serial.println("bit 0 set");
  if (mask & 0x02) Serial.println("bit 1 set");
  break;
```

With this code substituted in for the original 0x3B case, everything worked *except* for the last message type (0x41, "ack"). It's as though the body of that case was gone. The default case continued to work, as did 0x3A through 0x40. After many attempts at figuring out the cause of the problem, I realized that I'd introduced a const variable (`mask`) in the middle of a switch without scoping it to that particular case. When I added braces, it once again worked for all cases:

```
case 0x3B: {
  Serial.println("bar message");
  const auto mask = getParam();
  if (mask & 0x01) Serial.println("bit 0 set");
  if (mask & 0x02) Serial.println("bit 1 set");
  break;
}  // braces to limit scope of `mask`
```

**Questions:** - Did the broken version invoke undefined behavior or is this a compiler bug? If UB, what section of the spec should I be re-reading?
- Other compilers I've used (e.g., VC++) give a warning when you introduce a variable inside a switch case without limiting its scope. Is there an option to get a warning like this from gcc (which is the compiler the Arduino IDE uses)?
I believe this code should have been ill-formed based on [[stmt.dcl]/3](http://eel.is/c++draft/stmt.dcl#3): > > It is possible to transfer into a block, but not in a way that bypasses declarations with initialization (including ones in conditions and init-statements). **A program that jumps from a point where a variable with automatic storage duration is not in scope to a point where it is in scope is ill-formed** unless the variable has vacuous initialization ([dcl.init]). > > > emphasis mine. Your variable `mask` does not have [vacuous initialization](http://eel.is/c++draft/basic#life-1). At least as far as my understanding goes, "ill-formed" implicitly requires a diagnostic, i.e., a standard-conforming compiler must produce an error message. So no undefined behavior, this should simply never have compiled. Thus, I would say that the lack of a diagnostic here is definitely to be considered a compiler bug. Note, however, that none of the GCC versions that one can try on godbolt accept this code (and they go quite far back). 
It would seem that the Arduino IDE must be using a hopelessly outdated/broken version of GCC if it did indeed compile this code without a flinch… [example of proper compilers complaining about this](https://godbolt.org/#z:OYLghAFBqd5QCxAYwPYBMCmBRdBLAF1QCcAaPECAKxAEZSAbAQwDtRkBSAJgCFufSAZ1QBXYskwgA5BwAMAQTnyAbqjzoA1ADMIeFgQ0APAJRKOAdj4KNNjYIDuhZAg0QTS2xotX5n5E0FMDVoQD09bNBZBAz0DAE8vAGYAEQ0AFi4ORJ9w2wAjYkwmAGssnzCbf0CNLlDrXI0CotLsiq9zZLMOpSljRmkAVilSFmlZYdRpAGF%2BfjtRcSDuRNphgjHevuKQLkSAOkSucwBOI/NzWgA2Li5LxP6pNOGAWzpZWRGN0kmpYcEQD7rKTjPpwWAwRAoVDPAAOeAYmDIFAgaFh8MRKGYbAA%2BgRiCIWMVSFp4QREf8IHkvnk9ExiHFpKtSKjnph9AB5FgMBnA4ZYZ6sYAIr74QrIAh4ZSYf680iYQyYZAiMmM4axTAML4MPAFOlxKYYSRSJl4vCvY29RiClCzXiMHX/SB9VAwiWoKLSP4LCR0PqaqRDT6yn6GAAclwAtJc0hpkFjgK48QTisZXLhCCQvLt6BoDWiEcQsytUzNePw1htjFsdqG9tcjsdrmlm4k7rsHk9SK8Bh9RsHPUIAaQgSDSGDIaICDDleRKKi4QW6HL8ERiEv7MQmDDVX7BsM%2B%2BNvtJlrQNI4CC4w5Ho7H44n8YTjBXeVXSAgilg1xBd48XiAe0Gh4/H8Q4jpspDbLcezmK2pznAMtBpIkaRXA8iT7l8wHDpWY4QuO%2BFIPO6JInO0ILhiwCCCwW6CAgqAEMSpLkpQVKyjS1H0qqzLQqyHJcjyh78oKwqyqKioSlKMqHvKirKkaTLqv6h7arq9IGlgXGmuaqx%2BtanBlnaKmOt%2B3yung7oyl6Yg%2BrQP6BgeEzSFeUYxsAyDIPeyaphA6arkWOZ5uRhbLFwJa2jwz4gn075MJ%2BlDVgB/qdt2vaYQO/yAjhBEgJO04MciRGLlwy4Zmu9AbluO4PPZaUyLsXBnoQl7hi5GhuR5EBJo%2BkXgTFcUmUlf4AQ5R6/IOmUvglsh7AMxzNgh5ihokiQDAMK1oRh/ZjWBoJ4XhhFkcRs4oodi7KMgMIwtiyi0Mc2JcLI2KGNGjEMGSxAUqxh7sXqXEsmyBCctyIqYAKbAiYJeBihJ0pfDJSoqha5D6BqWo6puamGppxBmlV8Y2gZAhGfAzpmRZ6XepItnVZtQFOTezyCOdGg3ccewPWmK6ZiepC5qdiJFokYWEz1r59RiJnbIk5gzR2Q2pVtIETaO2W5TOBX82u9yYFzZWkBV24WnZtOOXVy2NReRgM0zHms%2Bzh47dFH4Sz%2ByX/grdNjRl2GTRBIBpGzaSyKtaShlwVzRmklwDBtgGm6L1YB3sxwrYhzbmJcc2HLQMf%2Blwf60O8cejQnDz512bwe/HPtRaQUofeZoxpEAA%3D%3D) To fix the issue, simply wrap your variable in a block scope such that there's no possible control flow that could enter the scope in which the variable is declared without passing its declaration, as you have already discovered yourself. 
For example, turn this ``` void f(int x) { switch (x) { case 1: const int y = 42; // error break; case 2: break; } } ``` into ``` void f(int x) { switch (x) { case 1: { const int y = 42; // OK break; } case 2: break; } } ```
Inline CSS formatting best practices - Two questions Question #1 - When specifying an inline style in an HTML element, is it necessary to include a trailing semi-colon? For example ... ``` <div style="padding:10px;">content</div> ``` Question #2 - When specifying an inline style should a space be inserted after the colon separating attribute name from attribute value? ``` <div style="padding: 10px;">content</div> ``` vs. ``` <div style="padding:10px;">content</div> ```
**Answer #1:** No. Semi-colons are required only **between** declarations. > > A declaration-block (also called a > {}-block in the following text) starts > with a left curly brace ({) and ends > with the matching right curly brace > (}). In between there must be a list > of zero or more semicolon-separated > (;) declarations. > > > Source: <http://www.w3.org/TR/css3-syntax/#rule-sets> > > The value of the style attribute must > match the syntax of the contents of a > CSS declaration block (excluding the > delimiting braces) > > > Source: <http://www.w3.org/TR/css-style-attr/#syntax> Since you have only one declaration, there is nothing to separate, so no semicolons are needed. However, the CSS syntax allows for **empty declarations**, which means that you can add leading and trailing semicolons as you like. For instance, this is valid CSS: ``` .foo { ;;;display:none;;;color:black;;; } ``` and is equivalent to this: ``` .foo { display:none;color:black } ``` --- **Answer #2:** No. > > A declaration is either empty or > consists of a property, followed by a > colon (:), followed by a value. Around > each of these there may be **whitespace**. > > > Source: <http://www.w3.org/TR/css3-syntax/#declarations> You can add spaces in order to improve readability, but they have no relevance.
Display an object for a few milliseconds in Unity3D I'm using Unity3D to set up an "experiment" (university related) where I designed a maze and users receive subliminal cues (visual arrows displayed on the screen) that should appear for just 14 milliseconds and then disappear. I already designed the 3D maze and it's really minimal (I don't need anything fancy for the experiment), therefore I can achieve 1000fps when I navigate with a 1st person controller into it (the framerate won't be a problem). I'm planning to program a script in C# to display these arrows when some objects are triggered. I'm wondering if in Unity it's possible to display objects for just a few milliseconds (in my case, 14ms) with acceptable accuracy... If so, which approach should I use? Any ideas or previous experience with similar issues? Thank you in advance
Use coroutines to display objects for just a short amount of time:

```
void AnyTrigger() // e.g. replace with OnTriggerEnter if using triggers
{
    StartCoroutine(ShowObject(14f / 1000f));
}

IEnumerator ShowObject(float timeInSeconds)
{
    // Show object here
    yield return new WaitForSeconds(timeInSeconds);
    // Hide object
}
```

## Accuracy

The accuracy of the time for which the object is shown depends highly on the system you are using. If you are running Unity3D on a fast system it might be very accurate. If the system is slow, the time is very likely to be over 14ms.

**Examples (not accurate):**

- Fast system, 950-1000 frames per second: likely between 0ms and 1ms
- Medium system, 300-600 frames per second: likely between 0.8ms and 3.6ms
- Slow system, 50-100 frames per second: likely between 5ms and 20ms
No node for selector while using puppeteer on facebook messenger I'm using Puppeteer to make a Facebook Messenger API (kind of) for my home project. Till now, I can log into my account using Puppeteer successfully. The real problem starts when I want to automate after logging in. I can't click on any element.

---

**For Example**: ***I want to click on the little 'i' icon***: [![enter image description here](https://i.stack.imgur.com/vkP29.png)](https://i.stack.imgur.com/vkP29.png) ***Then I copied the selector like***: [![enter image description here](https://i.stack.imgur.com/KNUqk.png)](https://i.stack.imgur.com/KNUqk.png) ***I'm getting the following error in the console***:

```
(node:4771) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): AssertionError [ERR_ASSERTION]: No node found for selector: #cch_feb64f1b00e628 > div._5742 > ul > li:nth-child(4) > a > div > svg
(node:4771) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
```

**Code**:

```
const puppeteer = require('puppeteer');

(async function() {
  // This function is used to log into the account.
  async function login() {
    console.log('=====In Login=====');
    const email = '#email';
    const pass = '#pass';
    const submit = '#loginbutton';
    await page.waitFor(3000);
    await page.click(email);
    await page.waitFor(2000);
    await page.keyboard.type('[email protected]');
    await page.waitFor(2000);
    await page.click(pass);
    await page.waitFor(2000);
    await page.keyboard.type('not_a_real_password');
    await page.waitFor(2000);
    await page.click(submit);
    return 0;
  }

  const browser = await puppeteer.launch({
    headless: false
  });
  const page = await browser.newPage();
  page.on('load', () => console.log('=====Page loaded!====='));
  await page.goto('https://messenger.com');
  await login();
  await page.waitForNavigation();
  await page.waitFor(3000); // I'm waiting just for safe side to make sure the element is loaded.

  // This is the line where the 'i' is clicked.
  // This is the line where the error occurs.
  await page.click(
    '#cch_feb64f1b00e628 > div._5742 > ul > li:nth-child(4) > a > div > svg'
  );
})();
```

So, according to the above error, I know that the selector is wrong. Then what is the correct selector? Not only that, I'm getting the same error when I try it on other elements, like the emoji element.
The issue appears to be that the selector you are using is dynamically generated at runtime and changes on every page refresh. Assuming you want to click on just the I (info) button, the following click/selector using the data-testid works for me:

```
await page.click(
  '[data-testid="info_panel_button"]'
);
```

If you want to test the other icons at the top, they unfortunately don't seem to have the same data-testid, meaning they would be more challenging to select. Full example:

```
const puppeteer = require('puppeteer');

(async function() {
  // This function is used to log into the account.
  async function login() {
    console.log('=====In Login=====');
    const email = '#email';
    const pass = '#pass';
    const submit = '#loginbutton';
    await page.waitFor(3000);
    await page.click(email);
    await page.waitFor(2000);
    await page.keyboard.type('[email protected]');
    await page.waitFor(2000);
    await page.click(pass);
    await page.waitFor(2000);
    await page.keyboard.type('not_a_real_password');
    await page.waitFor(2000);
    await page.click(submit);
    return 0;
  }

  const browser = await puppeteer.launch({
    headless: false
  });
  const page = await browser.newPage();
  page.on('load', () => console.log('=====Page loaded!====='));
  await page.goto('https://messenger.com');
  await login();
  await page.waitForNavigation();
  await page.waitFor(3000); // I'm waiting just for safe side to make sure the element is loaded.

  // This is the line where the 'i' is clicked.
  await page.click(
    '[data-testid="info_panel_button"]'
  );
})();
```
Pyspark counting the occurrence of values with keys I have a list of `(key,value)` pairs of the form:

```
x=[(('cat','dog'),('a','b')),(('cat','dog'),('a','b')),(('mouse','rat'),('e','f'))]
```

I want to count the number of times each value tuple appears with the key tuple. Desired output:

```
[(('cat','dog'),('a','b',2)),(('mouse','rat'),('e','f',1))]
```

A working solution is:

```
xs=sc.parallelize(x)
xs=xs.groupByKey()
xs=xs.map(lambda (x,y):(x,Counter(y)))
```

however for large datasets, this method fills up the disk space (~600GB). I was trying to implement a similar solution using `reduceByKey`:

```
xs=xs.reduceByKey(Counter).collect()
```

but I get the following error:

```
TypeError: __init__() takes at most 2 arguments (3 given)
```
Here is how I usually do it:

```
xs=sc.parallelize(x)
a = xs.map(lambda x: (x, 1)).reduceByKey(lambda a,b: a+b)
```

`a.collect()` yields:

```
[((('mouse', 'rat'), ('e', 'f')), 1), ((('cat', 'dog'), ('a', 'b')), 2)]
```

I'm going to assume that you want the counts (here, 1 and 2) inside the second key in the (key1, key2) pair. To achieve that, try this:

```
a.map(lambda x: (x[0][0], x[0][1] + (x[1],))).collect()
```

The last step basically remaps it so that you get the first key pair (like `('mouse','rat')`), then takes the second key pair (like `('e','f')`), and then appends the `tuple` version of `x[1]`, which is the count, to that second key pair.
Using Consolas font on Linux What is the most hassle-free way to install and use the MS Consolas font on Debian Linux? Since it doesn't come with the mscorettf package, what could be the way to install it with as few commands as possible, yet from official (or trustworthy) repos?
**The most hassle-free way to install a font that doesn't come prepackaged by your distribution is probably to drop it under `/usr/local/share/fonts` or `~/.fonts`. For this, you need the font file in some unpackaged format, such as plain TrueType (.ttf).** At least on Debian Wheezy, you will find an XML file /etc/fonts/fonts.conf which provides the system-wide fontconfig configuration. One of the purposes of this is to specify where the system should look for fonts. Modern Linux systems mostly use TrueType and OpenType fonts, while I think Windows still mainly uses TrueType. So you will have to grab the TrueType font file (.ttf extension) from somewhere trustworthy first. Note that a single font may consist of multiple font files. Then, place the .ttf file(s) in one of the two directories (depending on whether you want it available system-wide or if you want it available only to you), possibly restart the software in which you plan on using the font (to make it re-scan the available fonts), and the font should just show up in the list of available fonts. If the font doesn't show up, run `fc-cache -f` to recreate the font cache files, then try again. I suggest creating subdirectories under either `fonts` directory to keep things organized, but that's a preference, not a requirement by the system.
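A minimal sketch of the per-user route described above (assuming you already have the `.ttf` files from a trustworthy source; the filename pattern and subdirectory name below are placeholders):

```shell
# Per-user install (no root needed); for system-wide use /usr/local/share/fonts instead
mkdir -p "$HOME/.fonts/consolas"

# Copy the font files (obtained from a trustworthy source) into it, e.g.:
#   cp Consolas*.ttf "$HOME/.fonts/consolas/"

# Rebuild the font cache so applications pick up the new font
# (guarded, in case fontconfig is not installed in this environment)
command -v fc-cache >/dev/null && fc-cache -f "$HOME/.fonts" || true

# Afterwards the font should show up in:  fc-list | grep -i consolas
```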
What's the difference between ln -s and mount --rbind? What's the difference between: ``` ln -s /mnt/extra/home / ``` and ``` mkdir /home mount --rbind /mnt/extra/home /home ```
[`mount --rbind`](http://manpages.ubuntu.com/manpages/zesty/en/man8/mount.8.html) makes a recursive bind-mount; that is, the filesystem hierarchy mounted on `/mnt/extra/home` will also be accessible through `/home`. In practice, the major difference between the `ln -s` solution and the `mount --rbind` solution is that with `ln -s` `/home` is a symlink while with `mount --rbind` it's a directory; this affects tools like `find`, `df`, `test`/`[` etc. Also, the `ln -s` will fail if `/home` exists, while `mount --rbind` will fail if it does not exist, or it is not an empty directory. Mark's comment below is also important: `ln -s` needs a writable file system on which to create the symlink.
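The symlink-vs-directory difference is easy to demonstrate in a scratch directory (a sketch; `mount --rbind` itself needs root, so only the `ln -s` side is exercised here):

```shell
scratch=$(mktemp -d)
mkdir "$scratch/real"
ln -s "$scratch/real" "$scratch/home"

# test/[ dereferences -d, but only the symlink matches -L
[ -d "$scratch/home" ] && echo "passes -d (dereferenced)"
[ -L "$scratch/home" ] && echo "passes -L (it is a symlink)"

# find -type d reports the real directory but not the symlink;
# a bind-mounted /home would show up under -type d instead
find "$scratch" -maxdepth 1 -type d
find "$scratch" -maxdepth 1 -type l
```

With a bind mount in place of the symlink, `-L` would fail and `find -type d` would list `/home` like any other directory.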
What's the meaning of `()` in git command SYNOPSIS? In the [synopsis](http://git-scm.com/docs/git-reset) of `git reset`: > > 'git reset' (--patch | -p) [<tree-ish>] [--] [<paths>...] > > > I have an issue with the markers' meaning. I know `[]` stands for options, `<>` stands for replacement. But, what's the meaning of `()`? If there's no `|`, are the parentheses still needed? I didn't find relative clues in POSIX [Utility Conventions](http://pubs.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap12.html).
This is covered in git's [CodingGuidelines](https://github.com/git/git/blob/master/Documentation/CodingGuidelines), found on their Github. It gives contributors a style guide while also describing how help options should be written. Other sources like POSIX or BSD should not be taken as authoritative, especially since they don't always conform to POSIX1. The following excerpt is near the bottom of the file: ``` Placeholders are spelled in lowercase and enclosed in angle brackets: <file> --sort=<key> --abbrev[=<n>] Optional parts are enclosed in square brackets: [<extra>] (Zero or one <extra>.) --exec-path[=<path>] (Option with an optional argument. Note that the "=" is inside the brackets.) [<patch>...] (Zero or more of <patch>. Note that the dots are inside, not outside the brackets.) Multiple alternatives are indicated with vertical bars: [-q | --quiet] [--utf8 | --no-utf8] Parentheses are used for grouping: [(<rev> | <range>)...] (Any number of either <rev> or <range>. Parens are needed to make it clear that "..." pertains to both <rev> and <range>.) [(-p <parent>)...] (Any number of option -p, each with one <parent> argument.) git remote set-head <name> (-a | -d | <branch>) (One and only one of "-a", "-d" or "<branch>" _must_ (no square brackets) be provided.) And a somewhat more contrived example: --diff-filter=[(A|C|D|M|R|T|U|X|B)...[*]] Here "=" is outside the brackets, because "--diff-filter=" is a valid usage. "*" has its own pair of brackets, because it can (optionally) be specified only when one or more of the letters is also provided. ``` 1: The following excerpt is at the top of the file: > > Like other projects, we also have some guidelines to keep to the code. > For Git in general, a few rough rules are: > > > - Most importantly, we never say "It's in POSIX; we'll happily ignore your needs should your system not conform to it." We live in > the real world. > - However, we often say "Let's stay away from that construct, it's not even in POSIX". 
> - In spite of the above two rules, we sometimes say "Although this is not in POSIX, it (is so convenient | makes the code much more > readable | has other good characteristics) and practically all the > platforms we care about support it, so let's use it". > > > Again, we live in the real world, and it is sometimes a > > judgement call, the decision based more on real world constraints > people face than what the paper standard says. > > >
JavaScript push to array How do I push new values to the following array? ``` json = {"cool":"34.33","alsocool":"45454"} ``` I tried `json.push("coolness":"34.33");`, but it didn't work.
It's not an array. ``` var json = {"cool":"34.33","alsocool":"45454"}; json.coolness = 34.33; ``` or ``` var json = {"cool":"34.33","alsocool":"45454"}; json['coolness'] = 34.33; ``` you could do it as an array, but it would be a different syntax (and this is almost certainly not what you want) ``` var json = [{"cool":"34.33"},{"alsocool":"45454"}]; json.push({"coolness":"34.33"}); ``` Note that this variable name is highly misleading, as *there is no JSON here*. I would name it something else.
python3 multiprocess shared numpy array(read-only) I'm not sure if this title is appropriate for my situation: the reason why I want to share a numpy array is that it might be one of the potential solutions to my case, but if you have other solutions that would also be nice. My task: I need to implement an **iterative** algorithm with **multiprocessing**, while each of these processes needs to have a copy of the data (this data is large, **read-only**, and won't change during the iterative algorithm). I've written some pseudo code to demonstrate my idea: ``` import multiprocessing def worker_func(data, args): # do sth... return res def compute(data, process_num, niter): result = [] args = init() for iter in range(niter): args_chunk = split_args(args, process_num) pool = multiprocessing.Pool() for i in range(process_num): result.append(pool.apply_async(worker_func,(data, args_chunk[i]))) pool.close() pool.join() # aggregate result and update args for res in result: args = update_args(res.get()) if __name__ == "__main__": compute(data, 4, 100) ``` The problem is that in each iteration, I have to pass the data to each subprocess, which is very time-consuming. I've come up with two potential solutions: 1. share data among processes (it's ndarray), that's the title of this question. 2. Keep subprocesses alive, like a daemon process or something...and wait for calls. By doing that, I only need to pass the data at the very beginning. So, is there any way to share a read-only numpy array among processes? Or if you have a good implementation of solution 2, it also works. Thanks in advance.
If you absolutely must use Python multiprocessing, then you can use Python multiprocessing along with [Arrow's Plasma object store](https://arrow.apache.org/docs/python/plasma.html) to store the object in shared memory and access it from each of the workers. See [this example](https://github.com/apache/arrow/blob/master/python/examples/plasma/sorting/sort_df.py), which does the same thing using a Pandas dataframe instead of a numpy array. If you don't absolutely need to use Python multiprocessing, you can do this much more easily with [Ray](https://github.com/ray-project/ray). One advantage of Ray is that it will work out of the box not just with arrays but also with Python objects that contain arrays. Under the hood, Ray serializes Python objects using [Apache Arrow](https://arrow.apache.org/), which is a zero-copy data layout, and stores the result in [Arrow's Plasma object store](https://arrow.apache.org/docs/python/plasma.html). This allows worker tasks to have read-only access to the objects without creating their own copies. You can read more about [how this works](https://ray-project.github.io/2017/10/15/fast-python-serialization-with-ray-and-arrow.html). Here is a modified version of your example that runs. ``` import numpy as np import ray ray.init() @ray.remote def worker_func(data, i): # Do work. This function will have read-only access to # the data array. return 0 data = np.zeros(10**7) # Store the large array in shared memory once so that it can be accessed # by the worker tasks without creating copies. data_id = ray.put(data) # Run worker_func 10 times in parallel. This will not create any copies # of the array. The tasks will run in separate processes. result_ids = [] for i in range(10): result_ids.append(worker_func.remote(data_id, i)) # Get the results. 
results = ray.get(result_ids) ``` Note that if we omitted the line `data_id = ray.put(data)` and instead called `worker_func.remote(data, i)`, then the `data` array would be stored in shared memory once per function call, which would be inefficient. By first calling `ray.put`, we can store the object in the object store a single time.
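If you'd rather stay on the standard library, Python 3.8+ offers `multiprocessing.shared_memory`, which gives the same zero-copy access pattern without extra dependencies. A minimal sketch (shown with a raw byte buffer so it stays dependency-free; numpy users would wrap the same buffer with `np.ndarray(shape, dtype=..., buffer=shm.buf)` instead):

```python
from multiprocessing import shared_memory

data = bytes(range(10))  # stand-in for the large read-only array

# Create a shared block once and copy the data in
shm = shared_memory.SharedMemory(create=True, size=len(data))
shm.buf[:len(data)] = data

# A worker process would attach by name; no copy of the payload is made
worker_view = shared_memory.SharedMemory(name=shm.name)
seen = bytes(worker_view.buf[:len(data)])

worker_view.close()
shm.close()
shm.unlink()  # free the block once every user is done

print(seen == data)
```

In the iterative setup from the question, you would create the block once before the loop, pass only `shm.name` to each worker, and `unlink()` after the last iteration.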
SwiftUI view with rounded corners AND border ``` ZStack { VStack { Text("some text") Button("Ok") {} .foregroundColor(.cyan) .padding() } .padding() } .background(.red) .border(.blue, width: 5) .cornerRadius(20) ``` [![enter image description here](https://i.stack.imgur.com/lTEG9.png)](https://i.stack.imgur.com/lTEG9.png) I want the entire view to have the blue border with rounded corners (instead of the red square overlapping the rounded blue border. How? I've tried seemingly all variations of ordering the modifiers.
SwiftUI borders have straight edges no matter what corner radius you apply ([`.cornerRadius`](https://developer.apple.com/documentation/swiftui/path/cornerradius(_:antialiased:)) simply clips the view to a rounded mask and doesn't adjust the border's appearance). If you want a rounded border, you'll need to overlay and [`.stroke`](https://developer.apple.com/documentation/swiftui/shape/stroke(_:linewidth:)) a rounded rectangle: ``` VStack { Text("some text") Button("Ok") {} .foregroundColor(.cyan) .padding() } .padding() .background(.red) .cornerRadius(20) /// make the background rounded .overlay( /// apply a rounded border RoundedRectangle(cornerRadius: 20) .stroke(.blue, lineWidth: 5) ) ``` Result: [![Rounded blue border](https://i.stack.imgur.com/XUZbO.png)](https://i.stack.imgur.com/XUZbO.png)
Azure Function (using DI) cannot remove "/api" route prefix So I'm aware of these two questions that seem to be asking the same thing: [How to remove the word ‘api’ from Azure functions url](https://stackoverflow.com/questions/47312531/how-to-remove-the-word-api-from-azure-functions-url) [How to change the base "/api" path on Azure Functions (v2)?](https://stackoverflow.com/questions/49362853/how-to-change-the-base-api-path-on-azure-functions-v2) However, I can still not get rid of the "api" prefix in my route. My `host.json` looks like: ``` { "version": "2.0", "extensions": { "http": { "routePrefix": "" } } } ``` And on my HttpTrigger I'm setting my custom route: ``` [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "myapp")] HttpRequest request, ``` However, when I run the app locally the end point is coming up as: ``` [POST] http://localhost:7071/api/myapp ``` If I change my `host.json` to: ``` { "version": "2.0", "extensions": { "http": { "routePrefix": "something" } } } ``` My app is now running on: ``` [POST] http://localhost:7071/something/myapp ``` So it appears that giving an empty string `""` is just not working. Any ideas? (I've done all the usual stuff: clean solution, delete bin/obj folder etc.) 
FYI from my function app I'm using: ``` <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="3.0.7" /> ``` **EDIT:** I'm also referencing these packages from the function app (though I don't see how this would cause this problem): ``` <PackageReference Include="Microsoft.Azure.Functions.Extensions" Version="1.0.0" /> <PackageReference Include="Microsoft.Extensions.Configuration.AzureKeyVault" Version="3.1.1" /> ``` **EDIT 2** I've narrowed down the problem to this code that is called in `Startup.cs`: ``` IConfiguration configuration = builder.Services.BuildServiceProvider().GetService<IConfiguration>(); IConfiguration combinedConfig = new ConfigurationBuilder() .AddConfiguration(configuration) .AddAzureKeyVault(kvEndpoint, kvClient, new DefaultKeyVaultSecretManager()) .Build(); builder.Services.AddSingleton(combinedConfig); // <-- this line causes the prefix "" to revert to "/api" ``` It essentially is adding key vault as a configuration provider to the stack of providers already there. (Note it doesn't matter what .Add methods I call on the configuration builder; it's the registration that is causing a problem). Is there another way to write this maybe?
So the mistake I seem to have made was very small. In the following code: ``` IConfiguration configuration = builder.Services.BuildServiceProvider().GetService<IConfiguration>(); IConfiguration combinedConfig = new ConfigurationBuilder() .AddConfiguration(configuration) .AddAzureKeyVault(kvEndpoint, kvClient, new DefaultKeyVaultSecretManager()) .Build(); builder.Services.AddSingleton(combinedConfig); ``` `ConfigurationBuilder.Build()` actually returns an `IConfigurationRoot`, NOT an `IConfiguration` (`IConfigurationRoot` is a superset of `IConfiguration`). So when registering it I was losing something (probably config provider information). Simply changing: ``` IConfiguration combinedConfig ``` to ``` IConfigurationRoot combinedConfig ``` fixes the problem (or you can use `var`, which I probably should have!). Though this fixes the problem, I am still a little confused as to why changing the `routePrefix` in `host.json` to some non-empty string works but setting it to an empty string does not. I would have thought simply having the setting in the `host.json` at all would just apply the value, and not having it there would mean reverting to the default `"api"`.
Approximating cos using the Taylor series I'm using the [Taylor series](https://www.efunda.com/math/taylor_series/trig.cfm) to calculate the cos of a number. With small numbers the function returns accurate results; for example, `cos(5)` gives `0.28366218546322663`. But with larger numbers it returns inaccurate results, such as `cos(1000)` giving `1.2194074101485173e+225` ``` def factorial(n): c = n for i in range(n-1, 0, -1): c *= i return c def cos(x, i=100): c = 2 n = 0 for i in range(i): if i % 2 == 0: n += ((x**c) / factorial(c)) else: n -= ((x**c) / factorial(c)) c += 2 return 1 - n ``` I tried using `round(cos(1000), 8)` but it still returns a number written in scientific notation `1.2194074101485173e+225` with the e+ part. `math.cos(1000)` gives `0.5623790762907029`, how can I round my numbers so they are the same as the math.cos method?
A Maclaurin series uses Euler's ideas to approximate the value of a function using appropriate polynomials. The polynomials obviously diverge from a function like `cos(x)` because they all go towards infinity at some point, while `cos` doesn't. An order 100 polynomial can approximate at most 50 periods of the function on each side of zero. Since 50 \* 2pi << 1000, your polynomial *can't* approximate `cos(1000)`. To get even close to a reasonable solution, the order of your polynomial must be at least `x / pi`. You can try to compute a polynomial of order 300+, but you're very likely to run into some major numerical issues because of the finite precision of floats and the enormity of factorials. Instead, use the periodicity of `cos(x)` and add the following as the first line of your function: ``` x %= 2.0 * math.pi ``` You'll also want to limit the order of your polynomial to avoid problems with factorials that are too large to fit in a float. Furthermore, you can, and should, compute your factorials by incrementing prior results instead of starting from scratch at every iteration. Here is a concrete example: ``` import math def cos(x, i=30): x %= 2 * math.pi c = 2 n = 0 f = 2 for i in range(i): if i % 2 == 0: n += x**c / f else: n -= x**c / f c += 2 f *= c * (c - 1) return 1 - n ``` ``` >>> print(cos(5), math.cos(5)) 0.28366218546322663 0.28366218546322625 >>> print(cos(1000), math.cos(1000)) 0.5623790762906707 0.5623790762907029 >>> print(cos(1000, i=86)) ... OverflowError: int too large to convert to float ``` You can further get away from numerical bottlenecks by noticing that the incremental product is `x**2 / (c * (c - 1))`. 
This is something that will remain well bounded for much larger `i` than you can support with a direct factorial: ``` import math def cos(x, i=30): x %= 2 * math.pi n = 0 dn = x**2 / 2 for c in range(2, 2 * i + 2, 2): n += dn dn *= -x**2 / ((c + 1) * (c + 2)) return 1 - n ``` ``` >>> print(cos(5), math.cos(5)) 0.28366218546322675 0.28366218546322625 >>> print(cos(1000), math.cos(1000)) 0.5623790762906709 0.5623790762907029 >>> print(cos(1000, i=86), math.cos(1000)) 0.5623790762906709 0.5623790762907029 >>> print(cos(1000, i=1000), math.cos(1000)) 0.5623790762906709 0.5623790762907029 ``` Notice that past a certain point, no matter how many loops you do, the result doesn't change. This is because now `dn` converges to zero, as Euler intended. You can use this information to improve your loop even further. Since floats have finite precision (53 bits in the mantissa, to be specific), you can stop iteration when `|dn / n| < 2**-53`: ``` import math def cos(x, conv=2**-53): x %= 2 * math.pi c = 2 n = 1.0 dn = -x**2 / 2.0 while abs(dn / n) > conv: n += dn c += 2 dn *= -x**2 / (c * (c - 1)) return n ``` ``` >>> print(cos(5), math.cos(5)) 0.28366218546322675 0.28366218546322625 >>> print(cos(1000), math.cos(1000)) 0.5623790762906709 0.5623790762907029 >>> print(cos(1000, 1e-6), math.cos(1000)) 0.5623792855306163 0.5623790762907029 >>> print(cos(1000, 1e-100), math.cos(1000)) 0.5623790762906709 0.5623790762907029 ``` The parameter `conv` is not just the bound on `|dn/n|`. Since the following terms switch sign, it is also an upper bound on the overall precision of the result.
Installing PHP INTL on Mac, not getting it right I have installed `php56-intl` using [Homebrew](http://brew.sh) like so: `brew install php56-intl`, and when I do `php -m | grep intl` it gives me `intl`. But when I check my `phpinfo()` output, `intl` doesn't show up. Not sure what I am missing. **Update** All I want is to install `intl` on my computer so that I can run my `zend` application. But whatever I do, I don't get it done. > > An error occurred An error occurred during execution; please try again > later. > > > Additional information: > > > Zend\I18n\Exception\ExtensionNotLoadedException File: > `/Applications/XAMPP/xamppfiles/htdocs/skeleton/vendor/zendframework/zend-i18n/src/Filter/AbstractLocale.php:24` > > > Message: > > > Zend\I18n\Filter component requires the intl PHP extension > > >
The problem was, intl was installed on my Mac but not in XAMPP. Alright, after a long hassle, here is how I succeeded: Go to your terminal and `cd /Applications/XAMPP/bin`. Then `php -m | grep intl`; if it returns `intl`, then it is installed. If it is not installed, it should return nothing. Now, `sudo ./pecl install intl`. It will ask you to `specify where ICU libraries and headers can be found`; simply hit enter. This will install intl, and it will also return this message: `You should add "extension=intl.so" to php.ini`. So simply add this to your `php.ini` file and restart your apache: `sudo apachectl restart`, or hard restart from XAMPP itself. Hope this helps someone else.
Python xlrd read as string I'm having difficulties in reading a particular cell value from Excel in xlrd. Whatever value I'm reading (date value) is getting converted to a number. I know there are solutions to convert it into a python date format, but can I read directly the string value in xlrd?
xlrd does NOT convert dates to float. Excel stores dates as floats. Quoting from [the xlrd documentation](http://xlrd.readthedocs.io/en/latest/dates.html) (scroll down a page): > > **Dates in Excel spreadsheets** > > > In reality, there are no such things. > What you have are floating point > numbers and pious hope. There are > several problems with Excel dates: > > > (1) Dates are not stored as a separate > data type; they are stored as floating > point numbers and you have to rely on > (a) the "number format" applied to > them in Excel and/or (b) knowing which > cells are supposed to have dates in > them. This module helps with (a) by > inspecting the format that has been > applied to each number cell; if it > appears to be a date format, the cell > is classified as a date rather than a > number. > > > (2) ... When using this package’s [`xldate_as_tuple()`](http://xlrd.readthedocs.io/en/latest/api.html#xlrd.xldate.xldate_as_tuple) function to convert > numbers from a workbook, you must use the `datemode` attribute of the > `Book` object. > > > See also the section on the [Cell class](http://xlrd.readthedocs.io/en/latest/api.html#xlrd.sheet.Cell) to learn about the type of cells, and the various [Sheet methods](http://xlrd.readthedocs.io/en/latest/api.html#xlrd.sheet.Sheet) which extract the type of a cell (text, number, date, boolean, etc). Check out [python-excel.org](http://www.python-excel.org/) for info on other Python Excel packages.
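The underlying arithmetic is simple enough to sketch in plain Python. For the default 1900 date system (`datemode == 0`) and serials above 60, the float is just a day count from an epoch of 1899-12-30 (the odd epoch compensates for Excel's historical 1900 leap-year bug). The helper below is hypothetical and ignores the pre-1900-03-01 corner cases that `xldate_as_tuple` handles properly:

```python
from datetime import datetime, timedelta

def excel_serial_to_datetime(serial, datemode=0):
    """Rough equivalent of xlrd.xldate_as_tuple for common cases.

    datemode 0 = 1900 system (Windows default), 1 = 1904 system (old Mac).
    """
    epoch = datetime(1904, 1, 1) if datemode else datetime(1899, 12, 30)
    return epoch + timedelta(days=serial)

# 43831.0 is how Excel stores 2020-01-01 in the 1900 system
print(excel_serial_to_datetime(43831.0))
```

For real workbooks, prefer `xlrd.xldate_as_tuple(cell_value, book.datemode)`, which also extracts the time-of-day from the fractional part.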
The auxService:mapreduce\_shuffle does not exist When I am trying to run the below command: ``` # sqoop import --connect jdbc:mysql://IP Address/database --username root --password PASSWORD --table table_name --m 1 ``` for importing the data from mysql database to HDFS, I am getting the error: > > The auxService:mapreduce\_shuffle does not exist. > > > Searched and browsed many sites, nothing helped. How to get rid of this issue? Please let me know if any more inputs are required.
It's an entry that you are missing in yarn-site.xml. Apply these entries on both namenodes and datanodes. If you read this <http://dataheads.wordpress.com/2013/11/21/hadoop-2-setup-on-64-bit-ubuntu-12-04-part-1/> , you will see that yarn-site.xml **must have** these entries: ``` <property> <name>yarn.nodemanager.aux-services</name> <value>mapreduce_shuffle</value> </property> <property> <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name> <value>org.apache.hadoop.mapred.ShuffleHandler</value> </property> ``` Be careful when you write **aux-services**, because the "-" in the middle is probably what's causing your problem.
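Since a single typo in the property name (e.g. `mapreduce.shuffle` instead of `mapreduce_shuffle`) reproduces the same error, a quick sanity check of the file can save time. A sketch (the XML string is inline here for illustration; in practice you would point `ET.parse()` at the real yarn-site.xml):

```python
import xml.etree.ElementTree as ET

# Inline stand-in for yarn-site.xml; replace with ET.parse(path).getroot()
yarn_site = """<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>"""

root = ET.fromstring(yarn_site)
props = {p.findtext("name"): p.findtext("value") for p in root.iter("property")}

assert props.get("yarn.nodemanager.aux-services") == "mapreduce_shuffle"
assert "ShuffleHandler" in props.get(
    "yarn.nodemanager.aux-services.mapreduce_shuffle.class", "")
print("yarn-site.xml entries look correct")
```

Run this against the copy of the file on every node, since the NodeManagers each read their own local configuration.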
What's the difference between a torrent file and a Magnet link? What's the difference between a torrent file and a [Magnet](http://en.wikipedia.org/wiki/Magnet_URI_scheme) link? What is the difference between usage, can I use [μTorrent](http://en.wikipedia.org/wiki/%CE%9CTorrent) to download files from a Magnet link?
μTorrent is compatible with Magnet links, so you can use them. **Short version:** Instead of downloading the .torrent file from a webserver, you download it directly from a seed/leecher. The biggest advantage is that you might be able to download the content of the torrent, even if the tracker is down or closed for registration. **Long version:** Traditionally, .torrent files are downloaded from torrent sites. A torrent client then calculates a torrent hash (a kind of fingerprint) based on the files it relates to, and seeks the addresses of peers from a tracker (or the DHT network) before connecting to those peers and downloading the desired content. Sites can save on bandwidth by calculating torrent hashes themselves and allowing them to be downloaded instead of .torrent files. Given the torrent hash – passed as a parameter within a Magnet link – clients immediately seek the addresses of peers and connect to them to download first the torrent file, and then the desired content. It is worth noting that BitTorrent can not ditch the .torrent format entirely and rely solely on Magnet links. The .torrent files hold crucial information that is needed to start the downloading process, and this information has to be available in the swarm.
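For a concrete look at what a Magnet link actually carries, here is a sketch that pulls the torrent hash (the `xt` parameter) out of a link using only Python's standard library (the link and its hash are made-up examples):

```python
from urllib.parse import urlparse, parse_qs

# A made-up Magnet link: xt holds the torrent hash, dn a display name,
# tr an optional tracker URL
link = ("magnet:?xt=urn:btih:c12fe1c06bba254a9dc9f519b335aa7c1367a88a"
        "&dn=example&tr=udp%3A%2F%2Ftracker.example.org%3A80")

params = parse_qs(urlparse(link).query)
info_hash = params["xt"][0].split(":")[-1]  # strip the urn:btih: prefix

print(info_hash)       # the hex "fingerprint" clients use to find peers
print(params["dn"][0])  # human-readable display name
```

The `info_hash` is exactly the value a client announces to the DHT or tracker to locate peers, from whom it then fetches the torrent metadata itself.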
R ggplot geom\_tile without fill color I am trying to add a geom\_tile layer to a plot without the filled color (just the outline). Is there a way to get a transparent tile where only the boundary is visible? Thanks
I *think* you are after the `alpha` parameter. Minimal example: 1. Create a plot with dummy data where you set `color` (for "boundary") and no `fill`: ``` p <- ggplot(pp(20)[sample(20*20, size=200), ], aes(x = x, y = y, color = z)) ``` 2. Add `geom_tile()` with `alpha` set to `zero`: ``` p <- p + geom_tile(alpha=0) ``` 3. Add `theme_bw()` as transparent tiles look lame with a dark gray background :) ``` p + theme_bw() ``` ![enter image description here](https://i.stack.imgur.com/MdK0u.png)
Creating PDF/A-3: Embedded file shall contain valid Params key I'm trying to create a PDF/A-3 with itextpdf-5.4.5 and itext-pdfa-5.4.5. When I set the PdfFileSpecification, I get the following Exception: ``` com.itextpdf.text.pdf.PdfAConformanceException: Embedded file shall contain valid Params key. ``` This is the part where I create the PdfDictionary: ``` PdfDictionary params = new PdfDictionary(); params.put(PdfName.MODDATE, new PdfDate()); PdfFileSpecification fileSpec = PdfFileSpecification.fileEmbedded( writer, "./src/main/resources/com/itextpdf/invoice.xml", "invoice.xml", null, false, "text/xml", params); ``` I found the method where the check happens, but I don't see any solution: ``` com.itextpdf.text.pdf.internal.PdfA3Checker.checkEmbeddedFile protected void checkEmbeddedFile(PdfDictionary embeddedFile) { PdfDictionary params = getDirectDictionary(embeddedFile.get(PdfName.PARAMS)); if (params == null) { throw new PdfAConformanceException(embeddedFile, MessageLocalization.getComposedMessage("embedded.file.shall.contain.valid.params.key")); } ``` Any idea? Thank you in advance!
## In general PDF/A-3 has some special requirements concerning embedded files, among them > > An embedded file's stream dictionary should contain a **Params** key whose value shall be a dictionary containing at least a **ModDate** key whose value shall be the latest modification date of the source file. > > > *(Annex E.1 of ISO 19005-3)* > > > iText support explains on [PDF/A-3 with iText](https://developers.itextpdf.com/tutorial/pdfa-3-itext) how to create PDF/A-3 compliant PDFs from scratch, and they also demonstrate there how to embed files in a PDF/A-3 compliant manner; their sample code: ``` PdfDictionary parameters = new PdfDictionary(); parameters.put(PdfName.MODDATE, new PdfDate()); PdfFileSpecification fileSpec = PdfFileSpecification.fileEmbedded( writer, "./src/main/resources/com/itextpdf/invoice.xml", "invoice.xml", null, "application/xml", parameters, 0); fileSpec.put(new PdfName("AFRelationship"), new PdfName("Data")); writer.addFileAttachment("invoice.xml", fileSpec); PdfArray array = new PdfArray(); array.add(fileSpec.getReference()); writer.getExtraCatalog().put(new PdfName("AF"), array); ``` This example also adds the associated file entry (**AF**) to the Catalog, which is another requirement. BTW, *strictly speaking* the **Params** dictionary is only required on a *should* basis, not *shall*. Thus, this is actually a recommendation, and there can be valid PDF/A-3 documents with attachments in the wild which do not have this entry. As there is no apparent reason why iText would be better off not following this recommendation when creating PDF/A-3 files, though, the strict interpretation by its checks is ok. ## Issue in iText 5.4.5 The check the OP found has been introduced recently. This check is already triggered while storing the file in the PDF file. Unfortunately, at this time the params dictionary has not yet been assigned; only a reference has been reserved. Thus, the check fails, even though shortly after the dictionary would have been written. 
From `PdfFileSpecification.java`: ``` stream.put(PdfName.TYPE, PdfName.EMBEDDEDFILE); stream.flateCompress(compressionLevel); PdfDictionary param = new PdfDictionary(); if (fileParameter != null) { param.merge(fileParameter); } if (!param.contains(PdfName.MODDATE)) { param.put(PdfName.MODDATE, new PdfDate()); } if (fileStore != null) { param.put(PdfName.SIZE, new PdfNumber(stream.getRawLength())); stream.put(PdfName.PARAMS, param); } else stream.put(PdfName.PARAMS, refFileLength); if (mimeType != null) stream.put(PdfName.SUBTYPE, new PdfName(mimeType)); ref = writer.addToBody(stream).getIndirectReference(); ``` During this operation the check happens (and fails). ``` if (fileStore == null) { stream.writeLength(); param.put(PdfName.SIZE, new PdfNumber(stream.getRawLength())); writer.addToBody(param, refFileLength); } ``` And here, directly thereafter, the params would have been written. ## A work-around `PdfFileSpecification.fileEmbedded` allows you to present the data as a `byte[]` instead of as a file in the file system. As you can see in the source above the processing differs in that case: `fileStore` contains that `byte[]` argument. If you follow up on the `fileStore` use in there, you'll see that for a non-`null` value of it the params dictionary is written as a direct object and, therefore, present in the test. Thus, you can use iText 5.4.5 for PDF/A-3 file attachments if you supply the files as `byte[]` instances instead.
Why are string::resize and string::substr O(1)? I am working on a coding problem in which I have to delete all occurrences of a substring T in a string S (keeping in mind that removing one occurrence of T in S may generate a new occurrence of T), and then return the resulting string S after all deletions. The size of both S and T can be up to 10^6. For example, if I have S = "aabcbcd" and T = "abc", then removing all occurrences of abc in S results in S = "d". The sample solution to this problem involves building a string R from S one character at a time, and whenever the end of R matches T, we delete it from R (the comparison between the end of R and T is determined by string hashing). The solution says that > > Since this deletion is at the end of R this is just a simple O(1) resize operation. > > > However, according to <https://m.cplusplus.com/reference/string/string/resize/> the time complexity of string::resize is linear in the new string length. Ben Voigt confirms this in [Why is string::resize linear in complexity?](https://stackoverflow.com/questions/29319712/why-is-stringresize-linear-in-complexity). Also, in the solution the code involves using string::substr to double check if the end of R and T match (since hash(the end of R)==hash(T) does not guarantee the end of R equals T): ``` /* If the end of R and T match truncate the end of R (and associated hash arrays). */ if (hsh == thsh && R.substr(R.size() - T.size()) == T) { //... } ``` Once again, <https://m.cplusplus.com/reference/string/string/substr/> says that string::substr has linear time complexity. Even if string::substr weren't linear, comparing the two strings directly would still be linear in the size of T. If this is true, wouldn't the time complexity of the solution be at least O(S.length()\*T.length()), instead of O(S.length()) (according to the solution)? Any help is appreciated!
`string::resize` isn't *always* linear. If you're expanding a string, it's linear in the number of characters copied, which is potentially the total number in the resulting string (but could be less, if the string already has enough space for the character(s) you add, so it only has to write the new characters). Using `resize` to *reduce* the size of a string will normally take constant time. In simplified form (and leaving out a lot of other "stuff") `string` can look something like this: ``` class string { char *data; size_t allocated_size; size_t in_use_size; public: void resize(size_t new_size) { if (new_size < in_use_size) { in_use_size = new_size; data[in_use_size] = '\0'; } else { // code to expand string to desired size in O(n) time } } // ... }; ``` So although it'll be linear when expanding the string, it'll *typically* have constant complexity when reducing the size. As for using `substr`, yes, in the case where the hashes match, `substr` itself will be linear (it creates a new `string` object) and you're going to do a linear-complexity comparison. I'd guess they're pretty much just presuming hash collisions are rare enough to ignore, so for most practical purposes, this only happens when you have an actual match.
Netty 4 handle channel events In Netty 3 I can handle open/close events with ChannelUpstreamHandler, like ``` pipeline.addLast("channelGroupHandler", new SimpleChannelUpstreamHandler() { public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent e) { ... } public void channelDisconnected(ChannelHandlerContext ctx, ChannelStateEvent e) { ... } }); ``` but in Netty 4 it doesn't work. So, how can I handle these events now?
If you check [New and noteworthy in 4.0](http://netty.io/wiki/new-and-noteworthy-in-4.0.html#wiki-h4-19) you can see that:

- In 3.x, when a new connected `Channel` is created, at least three `ChannelStateEvents` are triggered: `channelOpen`, `channelBound`, and `channelConnected`. When a `Channel` is closed, at least 3 more: `channelDisconnected`, `channelUnbound`, and `channelClosed`.
- In 4.x, `channelOpen`, `channelBound`, and `channelConnected` have been merged into `channelActive`. Likewise, `channelDisconnected`, `channelUnbound`, and `channelClosed` have been merged into `channelInactive`.

You should use the new API and that's it.
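So the 3.x snippet from the question would, under 4.x, look something like the following sketch (this assumes Netty 4.x on the classpath; the method names are the merged ones listed above, and the `super` calls forward the event down the pipeline):

```java
pipeline.addLast("channelGroupHandler", new ChannelInboundHandlerAdapter() {
    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        // replaces channelOpen / channelBound / channelConnected
        super.channelActive(ctx);
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        // replaces channelDisconnected / channelUnbound / channelClosed
        super.channelInactive(ctx);
    }
});
```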
Need more disk space for root Several months ago I bought a new computer and installed/configured dual boot with Arch/Ubuntu Studio. I wanted to try Arch (because it was suggested as a great distro for audio production [since it is comparatively lean]), but I wanted to get going with Ubuntu Studio right away in case I had bitten off more than I could chew with Arch. Hence, the dual boot scenario. I'm not an experienced Linux system administrator; I found this to be very difficult. I read a lot of information from a massive amount of online sources and eventually figured it out. I never ended up using Ubuntu much at all, so it's just been taking up 128G on my 256G disk. When I followed the tutorials from the Arch Wiki, the sample suggested 15G for the root partition. At the time, I should've thought harder about this, because it did seem like a low number for the kinds of work I was planning to do (a lot of varied things = a lot of applications). I'm using i3 as my window manager, and I noticed right away that the drive space was going fast. I've got 1G left. **Long story short**, I didn't make enough space for the root (`/`) partition, and I don't know how to correct it. My initial reaction was to just copy `/` over to where Ubuntu is. So I deleted the Ubuntu partition and repartitioned it with cgdisk. Then I booted with a live USB Ubuntu and mounted `/dev/sda3` (my current `/`) to `/oldroot` and mounted `/dev/sda2` (where Ubuntu used to be) to `/newroot` and `cp -R /oldroot/* /newroot`. I'm not sure what to do from here, though. I have a separate boot partition that looks like this: ``` / /EFI /EFI/ubuntu /EFI/ubuntu/shimx64.efi /EFI/ubuntu/grubx64.efi /EFI/ubuntu/grub.cfg /EFI/arch_grub /EFI/arch_grub/grubx64.efi ``` I don't remember what I did to create these `.efi` files, but I would imagine that their purpose is to direct the process to the current `/boot/grub/grub.cfg` script (it looks like a bash script). I'm not sure. 
I think I remember this being done as part of running `grub-mkconfig` or something like that, but I don't know where I was when I did that or at what stage in the process, so now I've hit a wall. Maybe I shouldn't even be trying to switch partitions--maybe I should be trying to simply increase the size of the root partition as it is, but I don't know how to do that. What should I do from here?
Migrating your root filesystem to a new partition should be possible.

> 
> 
> ```
> cp -R /oldroot/* /newroot
> 
> ```
> 
> 

`-R` is the wrong argument in this situation, because `cp` will not preserve file attributes like owners and permissions by default. Delete the copied root file system and start over with:

```
cp -a /oldroot/* /newroot
```

`-a` should preserve everything, or at least everything that is important. After you have copied it again, you need to do the following:

- mount the `boot` partition to `/newroot/boot`
- bind mount `sys`, `proc`, `dev` and `run` in `/newroot`
- `chroot` into `/newroot`
- run `update-initramfs -u` and `update-grub`

The system should then boot from the new partition.
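A sketch of those steps as commands from the live environment. The boot device name (`/dev/sda1`) is an assumption; adjust it to your layout, and run everything as root. Note also that `update-initramfs`/`update-grub` are the Debian/Ubuntu tools; on an Arch root the equivalents would be `mkinitcpio -P` and `grub-mkconfig -o /boot/grub/grub.cfg`:

```shell
# boot/EFI partition; /dev/sda1 is an assumption here
mount /dev/sda1 /newroot/boot

# bind mount the virtual filesystems the chrooted tools need
for fs in sys proc dev run; do
    mount --bind /"$fs" /newroot/"$fs"
done

chroot /newroot /bin/bash

# inside the chroot:
update-initramfs -u
update-grub
```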
Selecting a Range while looping through worksheets in VBA I've shortened my code a bit for the purposes of the question, but the error I'm getting is the same. When trying to select the cells with data in column A on each worksheet and doing stuff with it, I get an error after the first worksheet: ``` Sub quickSub() Dim sh As Worksheet For Each sh In Worksheets sh.Range("A6", Range("A6").End(xlDown)).Select ''Random bits of code here where I manipulate selection on each worksheet Next End Sub ``` The error I get is: ``` "Run-time error '1004': Method 'Range' of object'_Worksheet' failed. ```
Try this:

```
sh.Activate
sh.Range("A6", "A" & sh.Range("A6").End(xlDown).row).Select
```

I made sure the `End(xlDown)` reference is evaluated on the right sheet, and I had it return the final row number and concatenated that with the column letter, which may not be needed but may make it easier for you to debug.

**Update:** Added activate line. Selection may require the sheet be active.

**Update2:** Here's the 'right' way to do this WITHOUT using Select. This method directly references the worksheet data instead of needing to move around worksheet by worksheet. This best practice will improve your code's performance:

```
Sub quickSub()

Dim sh As Worksheet

For Each sh In Worksheets
     With sh.Range("A6", "A" & sh.Range("A6").End(xlDown).row)
        '- lines that manipulate the 'selection' in the above With
        .Value = "NewValue"
        .font.bold = true
     End With
    ''Random bits of code here where I manipulate selection on each worksheet
Next

End Sub
```
Draw on Windows 10 wallpaper in C++ Is there a way to get an handle to Windows's wallpaper (behind icons) in C++ in order to draw on it? That would allow to make an active desktop (discontinued after Windows XP) equivalent, a Wallpaper Engine equivalent, or any other similar tool. (Temperature and resources usage monitoring on the wallpaper in my case). Note: the handle returned by `GetDesktopWindow()` returns the window at desktop icons level, not behind it. Solutions from [similar questions](https://stackoverflow.com/questions/1683791/drawing-on-the-desktop-background-as-wallpaper-replacement-windows-c) aren't working for me. Specifically i tried VLC media player's [wallpaper mode](http://git.videolan.org/?p=vlc.git;a=blob;f=modules/video_output/msw/directx.c;h=678e7e7e40acf84d6b362c0c09a417116aa0d244;hb=HEAD#l1770) code. Key code is: ``` hwnd = FindWindow( _T("Progman"), NULL ); if( hwnd ) hwnd = FindWindowEx( hwnd, NULL, _T("SHELLDLL_DefView"), NULL ); if( hwnd ) hwnd = FindWindowEx( hwnd, NULL, _T("SysListView32"), NULL ); if( !hwnd ) { msg_Warn( p_vout, "couldn't find \"SysListView32\" window, " "wallpaper mode not supported" ); return; } ``` But it will not draw on the wallpaper.
Credits to this [draw behind desktop icons C#](https://www.codeproject.com/articles/856020/draw-behind-desktop-icons-in-windows-plus) page as reference. The article explains the theory behind the solution, which applies regardless of the programming language being used.

Long story short, the smooth fading animation you see on Windows 10 when changing wallpaper is achieved by creating a new window that does exactly what you're asking for, drawing under the icons. That window achieves the fade-in effect for the new wallpaper, and is created by the Program Manager.

The mentioned article walks through every step alongside the C# implementation. Here I'll write a C++ equivalent, keeping the comments from the source.

```
BOOL CALLBACK EnumWindowsProc(HWND hwnd, LPARAM lParam)
{
    HWND p = FindWindowEx(hwnd, NULL, L"SHELLDLL_DefView", NULL);
    HWND* ret = (HWND*)lParam;

    if (p)
    {
        // Gets the WorkerW Window after the current one.
        *ret = FindWindowEx(NULL, hwnd, L"WorkerW", NULL);
    }
    return true;
}

HWND get_wallpaper_window()
{
    // Fetch the Progman window
    HWND progman = FindWindow(L"ProgMan", NULL);
    // Send 0x052C to Progman. This message directs Progman to spawn a
    // WorkerW behind the desktop icons. If it is already there, nothing
    // happens.
    SendMessageTimeout(progman, 0x052C, 0, 0, SMTO_NORMAL, 1000, nullptr);

    // We enumerate all Windows, until we find one, that has the SHELLDLL_DefView
    // as a child.
    // If we found that window, we take its next sibling and assign it to workerw.
    HWND wallpaper_hwnd = nullptr;
    EnumWindows(EnumWindowsProc, (LPARAM)&wallpaper_hwnd);

    // Return the handle you're looking for.
    return wallpaper_hwnd;
}
```

The C-like casts can be replaced with `reinterpret_cast`s, according to your coding preferences.
--- **One note** that isn't mentioned in the article: Since when changing wallpaper a new WorkerW window is generated to achieve the fading effect, if the user tries to change wallpaper while your program is actively drawing and refreshing your instance of WorkerW, the user set background will be placed on top of your drawing, start fading in until it reaches 100% opacity, and lastly be destroyed, leaving your WorkerW still running.
MYSQL UPDATE SET on the Same Column but with multiple WHERE Clauses With MYSQL I'm using this query: ``` UPDATE CustomerDetails_COPY SET Category_ID = 10 WHERE Category_ID = 2 ``` Thats fine but I'd like to ad 15+ more `SET/WHERE` to it like: ``` UPDATE CustomerDetails_COPY SET Category_ID = 9 WHERE Category_ID = 3 SET Category_ID = 12 WHERE Category_ID = 4 SET Category_ID = 11 WHERE Category_ID = 5 ..... ``` How would I add to this? EDIT: As Per Hunters Suggestion: ``` UPDATE CustomerDetails_COPY SET Category_ID = CASE Category_ID WHEN 2 THEN 10 WHEN 3 THEN 9 WHEN 4 THEN 12 WHEN 5 THEN 11 END WHERE Category_ID IN (2,3,4,5) ``` This works Great! Thanks
Something like this should work for you: ``` UPDATE CustomerDetails_COPY SET Category_ID = CASE Category_ID WHEN 2 THEN 10 WHEN 3 THEN 9 WHEN 4 THEN 12 WHEN 5 THEN 11 END WHERE Category_ID IN (2,3,4,5) ``` Alternatively, as Simon suggested, you could do this to save from entering the values twice: ``` UPDATE CustomerDetails_COPY SET Category_ID = CASE Category_ID WHEN 2 THEN 10 WHEN 3 THEN 9 WHEN 4 THEN 12 WHEN 5 THEN 11 ELSE Category_ID END ``` Source: <http://www.karlrixon.co.uk/writing/update-multiple-rows-with-different-values-and-a-single-sql-query/>
Enabling annotation in Adobe AxAcroPDFLib I embedded a PDF viewer in a C# Winform using `AxAcroPDFLib`. However, the annotation buttons in the toolbar (comments...) are disabled. I searched and found that they are disabled by default, but some reported enabling them using Javascript: ``` Collab.showAnnotToolsWhenNoCollab = True ``` Is there a way to do this here? **Edit:** Is it possible to use the browser plugin in a WebBrowser Control? If so, how can this be done?
**Update - The first section is relevant only to Acrobat Reader. For information on when using full versions of Acrobat, see the second section.**

**Acrobat Reader**

I'll preface all of this by stating this is *probably* not the answer you're looking for, but I felt this warranted more of an explanation than just a comment.

A similar, self-answered question was asked on SO ([here](https://stackoverflow.com/questions/4737456/adobe-reader-x-10-0-activex-annotations-are-disabled)), where the OP came to the conclusion that this behavior is by design and nothing can be done about it, which I agree with, almost.

While I'm sure you've seen that Reader itself can add annotations, the only straightforward means of accomplishing this using the Reader Plugin (AcroPDFLib) is for the document being loaded to be "Reader Enabled," at which point annotations become available just as they are in Reader. If you have control of the documents you wish the plugin to load, this may be a solution for you.

To your question about possibly setting `Collab.showAnnotToolsWhenNoCollab = True` as a workaround, my searches only showed this being a viable workaround for those using a full version of Acrobat, not Reader. More specifically, on an Adobe forum ([here](https://forums.adobe.com/message/1924485#1924485)), an Adobe staff commented on the use of this property directly:

> 
> No, it is not [about allowing commenting in Adobe Reader]. It is
> about enabling commenting in a browser for Acrobat Standard or
> Professional. If you wish to enable commenting in Reader, then you
> need to "Reader Enable" the PDFs themselves using Acrobat professional
> or Adobe Livecycle Reader Extension Server.
> 
> 

Granted, this comment was in reference to Acrobat 9, but it appears to still be valid for Acrobat XI.

One last bit.
I don't know the scope of your application, so this may be completely irrelevant, but if this is a commercial application, even if you do find a functional workaround, I'd be hesitant to use it, as it might violate the Adobe Reader license agreement ([here](http://www.adobe.com/legal/licenses-terms.html)); specifically section 4.3.3, Disabled Features. The short version is, as with most companies, they don't want you circumventing their protections.

**Full versions of Acrobat**

The following code will create a PDF viewer (using the Form's window for drawing), open a PDF, then set `collab.showAnnotToolsWhenNoCollab = true` to allow annotations on the open PDF. This requires a reference to the Acrobat type library.

```
void CreatePdfViewerAndOpenFile(string pdfFile)
{
    short AV_DOC_VIEW = 2;
    short PDUseBookmarks = 3;
    short AVZoomFitWidth = 2;

    Type AcroExch_AVDoc = Type.GetTypeFromProgID("AcroExch.AVDoc");
    _acroExchAVDoc = (Acrobat.CAcroAVDoc)Activator.CreateInstance(AcroExch_AVDoc);

    bool ok = _acroExchAVDoc.OpenInWindowEx(pdfFile, this.Handle.ToInt32(), AV_DOC_VIEW, -1, 0, PDUseBookmarks, AVZoomFitWidth, 0, 0, 0);

    if (ok)
    {
        CAcroPDDoc pdDoc = (CAcroPDDoc)_acroExchAVDoc.GetPDDoc();
        object jsObj = pdDoc.GetJSObject();
        Type jsObjType = jsObj.GetType();

        object collab = jsObjType.InvokeMember("collab",
            BindingFlags.GetProperty | BindingFlags.Public | BindingFlags.Instance,
            null, jsObj, null);

        jsObjType.InvokeMember("showAnnotToolsWhenNoCollab",
            BindingFlags.SetProperty | BindingFlags.Public | BindingFlags.Instance,
            null, collab, new object[] { true });
    }
}
```

Call this method from wherever you want to display the PDF. When finished, be sure to call the `Close` method or the PDF file will remain open in the Acrobat process in the background.

```
_acroExchAVDoc.Close(-1);
```

Bear in mind that a lot of "normal" functionality is left out of this example, like form resize handling, etc., but it should get you started.
Because resizing isn't handled by this example, you'll probably want to maximize the form before invoking the method, so the viewer is big enough to be useful. For more information on how to use the viewer in this fashion, download the Acrobat SDK ([here](http://www.adobe.com/devnet/acrobat.html)) and look at the ActiveViewVB sample project, which is what I used to build some of this example. For reference, I used the Acrobat XI SDK.
Why would you not use the 'using' directive in C#? The existing coding standards on a large C# project includes a rule that all type names be fully qualified, forbidding employment of the 'using' directive. So, rather than the familiar: ``` using System.Collections.Generic; .... other stuff .... List<string> myList = new List<string>(); ``` (It's probably no surprise that `var` also is prohibited.) I end up with: ``` System.Collections.Generic.List<string> myList = new System.Collections.Generic.List<string>(); ``` That's a 134% increase in typing, with none of that increase providing useful information. In my view, 100% of the increase is noise (clutter) that actually *impedes* understanding. In 30+ years of programming, I've seen such a standard proposed once or twice, but never implemented. The rationale behind it escapes me. The person imposing the standard is not stupid, and I don't think he's malicious. Which leaves misguided as the only other alternative unless I'm missing something. Have you ever heard of such a standard being imposed? If so, what was the reason behind it? Can you think of arguments other than "it's stupid," or "everybody else employs `using`" that might convince this person to remove this prohibition? ## Reasoning The reasons for this prohibition were: 1. It's cumbersome to hover the mouse over the name to get the fully qualified type. It's better to always have the fully qualified type visible all the time. 2. Emailed code snippets don't have the fully qualified name, and therefore can be difficult to understand. 3. When viewing or editing the code outside of Visual Studio (Notepad++, for example), it's impossible to get the fully qualified type name. My contention is that all three cases are rare, and that making everybody pay the price of cluttered and less-understandable code just to accommodate a few rare cases is misguided. Potential namespace conflict issues, which I expected to be the primary concern, weren't even mentioned. 
That's especially surprising because we have a namespace, `MyCompany.MyProject.Core`, which is an especially bad idea. I learned long ago that naming anything `System` or `Core` in C# is a quick path to insanity. As others have pointed out, namespace conflicts are easily handled by refactoring, namespace aliases, or partial qualification.
### The broader question: > > Have you ever heard of such a standard being imposed? If so, what was the reason behind it? > > > Yes, I've heard of this, and using fully qualified object names prevents name collisions. Though rare, when they happen, they can be exceptionally thorny to figure out. --- **An Example:** That type of a scenario is probably better explained with an example. Let's say we have two `Lists<T>` belonging to two separate projects. ``` System.Collections.Generic.List<T> MyCorp.CustomCollections.Optimized.List<T> ``` When we use the fully qualified object name, it's clear as to which `List<T>` is being used. That clarity obviously comes at the cost of verbosity. And you might be arguing, "Wait! No one would ever use two lists like that!" Which is where I'll point out the maintenance scenario. You've written module `Foo` for your corporation which uses the corporation approved, optimized `List<T>`. ``` using MyCorp.CustomCollections.Optimized; public class Foo { List<object> myList = ...; } ``` Later on, a new developer decides to extend the work you've been doing. Not being aware of the company's standards, they update the `using` block: ``` using MyCorp.CustomCollections.Optimized; using System.Collections.Generic; ``` And you can see how things go bad in a hurry. It should be trivial to point out that you could have two proprietary classes of the same name but in different namespaces within the same project. So it's not just a concern about colliding with .NET Framework supplied classes. ``` MyCorp.WeightsAndLengths.Measurement(); MyCorp.TimeAndSpace.Measurement(); ``` --- **The reality:** Now, is this *likely* to occur in most projects? No, not really. But when you're working on a large project with a lot of inputs, you do your best to minimize the chances of things exploding on you. Large projects with multiple contributing teams are a special kind of beast in the application world. 
Rules that seem unreasonable for other projects become more pertinent due to the input streams to the project and the likelihood that those contributing haven't read the project's guidelines. This can also occur when two large projects are merged together. If both projects had similarly named classes, then you'll see collisions when you start referencing from one project to the other. And the projects may be too large to refactor or management won't approve the expense to fund the time spent on refactoring. --- **Alternatives:** While you didn't ask, it's worth pointing out that this is ***not a great solution*** to the problem. It's not a good idea to be creating classes that will collide without their namespace declarations. `List<T>`, in particular, ought to be treated as a reserved word and not used as the name for your classes. Likewise, individual namespaces within the project should strive to have unique class names. Having to try and recall which namespace's `Foo()` you're working with is mental overhead that is best avoided. Said another way: having `MyCorp.Bar.Foo()` and `MyCorp.Baz.Foo()` is going to trip your developers up and best avoided. If nothing else, you can use partial namespaces in order to resolve the ambiguity. For example, if you absolutely can't rename either `Foo()` class you could use their partial namespaces: ``` Bar.Foo() Baz.Foo() ``` --- ### Specific reasons for your current project: You updated your question with the specific reasons you were given for your current project following that standard. Let's take a look at them and really digress down the bunny trail. > > It's cumbersome to hover the mouse over the name to get the fully qualified type. It's better to always have the fully qualified type visible all the time. > > > "Cumbersome?" Um, no. Annoying perhaps. Moving a few ounces of plastic in order to shift an on-screen pointer is not *cumbersome*. But I digress. 
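Along the same lines, a namespace alias keeps call sites short while staying unambiguous. A sketch reusing the hypothetical names from the earlier example (not compilable as-is, since `MyCorp` is fictional):

```csharp
using Opt = MyCorp.CustomCollections.Optimized;
using Std = System.Collections.Generic;

public class Foo
{
    // Each alias makes it obvious which List<T> is meant,
    // without spelling out the full namespace at every use.
    Opt.List<object> optimizedList = new Opt.List<object>();
    Std.List<object> plainList = new Std.List<object>();
}
```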
This line of reasoning seems more like a cover-up than anything else. Offhand, I'd guess that the classes within the application are weakly named and you have to rely upon the namespace in order to glean the appropriate amount of semantic information surrounding the class name. This is not a valid justification for fully qualified class names, perhaps it's a valid justification for using partially qualified class names. > > Emailed code snippets don't have the fully qualified name, and therefore can be difficult to understand. > > > This (continued?) line of reasoning reinforces my suspicion that classes are currently poorly named. Again, having poor class names is not a good justification for requiring *everything* to use a fully qualified class name. If the class name is difficult to understand, there's a lot more wrong than what fully qualified class names can fix. > > When viewing or editing the code outside of Visual Studio (Notepad++, for example), it's impossible to get the fully qualified type name. > > > Of all the reasons, this one nearly made me spit out my drink. But again, I digress. I'm left wondering *why* is the team frequently editing or viewing code outside of Visual Studio? And now we're looking at a justification that's pretty well orthogonal to what namespaces are meant to provide. This is a tooling backed argument whereas namespaces are there to provide organizational structure to the code. It sounds like the project you own suffers from poor naming conventions along with developers who aren't taking advantage of what the tooling can provide for them. And rather than resolve those actual issues, they attempt to slap a band-aid over one of the symptoms and are requiring fully qualified class names. I think it's safe to categorize this as a misguided approach. Given that there are poorly named classes, and assuming you can't refactor, the correct answer is to use the Visual Studio IDE to its full advantage. 
Possibly consider adding in a plugin like the VS PowerTools package. Then, when I'm looking at `AtrociouslyNamedClass()` I can click on the class name, press F12 and be taken directly to the definition of the class in order to better understand what it's trying to do. Likewise, I can press Shift-F12 to find all the spots in the code currently suffering from having to use `AtrociouslyNamedClass()`. Regarding the outside tooling concerns - the best thing to do is to just stop it. Don't email snippets back and forth if they aren't immediately clear what they refer to. Don't use other tools outside of Visual Studio as those tools don't have the intelligence surrounding the code that your team needs. Notepad++ is an awesome tool, but it's not cut out for this task. So I agree with your assessment regarding the three specific justifications you were presented with. That said, what I think you were told was "We have underlying issues on this project that can't / won't address and this is how we 'fixed' them." And that obviously speaks to deeper issues within the team that may serve as red flags for you.
How to define a shell script with variable number of arguments? I would like to define a simple abbreviation of a call to `gs` (ghostscript) via a shell script. The first argument(s) give all the files that should be merged, the last one gives the name of the output file. Obviously, the following does not work (it's just for showing the goal): ``` #!/bin/sh gs -dBATCH -dNOPAUSE -q -sDEVICE=pdfwrite -sOUTPUTFILE=$last $1 $2 ... ``` How can this be done? One would typically call this script via `myscript infile1.pdf infile2.pdf ... outfile.pdf` or `myscript *.pdf outfile.pdf`.
The bash variables `$@` and `$*` expand into the list of command line arguments. Generally, you will want to use `"$@"` (that is, `$@` surrounded by double quotes). This will do the right thing if someone passes your script an argument containing whitespace. So if you had this in your script: ``` outputfile=$1 shift gs -dBATCH -dNOPAUSE -q -sDEVICE=pdfwrite -sOUTPUTFILE=$outputfile "$@" ``` And you called your script like this: ``` myscript out.pdf foo.ps bar.ps "another file.ps" ``` This would expand to: ``` gs -dBATCH -dNOPAUSE -q -sDEVICE=pdfwrite -sOUTPUTFILE=out.pdf foo.ps bar.ps "another file.ps" ``` Read the ["Special Parameters"](https://www.gnu.org/software/bash/manual/html_node/Special-Parameters.html) section of the `bash` man page for more information.
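If you'd rather keep the question's calling convention (output file *last*), bash can slice the argument list from the other end; note this needs bash, not plain `sh`. A sketch, written as a function that just echoes the command it would run so the argument handling is visible:

```shell
build_gs_cmd() {
    local outputfile="${!#}"          # last argument (bash indirect expansion)
    local inputs=("${@:1:$#-1}")      # everything except the last argument
    echo gs -dBATCH -dNOPAUSE -q -sDEVICE=pdfwrite \
        -sOUTPUTFILE="$outputfile" "${inputs[@]}"
}

build_gs_cmd in1.pdf in2.pdf out.pdf
# gs -dBATCH -dNOPAUSE -q -sDEVICE=pdfwrite -sOUTPUTFILE=out.pdf in1.pdf in2.pdf
```

In the real script you would drop the `echo` and run `gs` directly.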
Navigation error - This navigation graph is not referenced to any layout files? I am trying to apply the `navigation` feature to my project. And I have this error:

```
This navigation graph is not referenced to any layout files(expected to find it in at least one layout file with a NavHostFragment with app:navGraph="@navigation/@navigation" attribute
```

```
<?xml version="1.0" encoding="utf-8"?>
<navigation xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/navigation"
    android:label="fragment_init"
    app:startDestination="@id/initFragment"
    >

    <fragment
        android:id="@+id/initFragment"
        android:label="fragment_init"
        tools:layout="@layout/fragment_init">
        <action
            android:id="@+id/action_initFragment_to_authenticationFragment5"
            app:destination="@id/authenticationFragment" />
        <action
            android:id="@+id/action_initFragment_to_settingFragment3"
            app:destination="@id/settingFragment" />
    </fragment>

    <fragment
        android:id="@+id/authenticationFragment"
        android:name="com.example.AuthenticationFragment"
        android:label="fragment_authentication"
        tools:layout="@layout/fragment_authentication" />
    <fragment
        android:id="@+id/settingFragment"
        android:name="com.example.view.main.fragment.SettingFragment"
        android:label="SettingFragment"
        tools:layout="@layout/fragment_setting" />
</navigation>
```

I added that attribute here and there (the navigation graph and the fragments), and also to the layout files in the layout folder that are used in navigation.xml, but it didn't work. 
This is `activity_main.xml`

```
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:background="#000"
    android:tint="#555"
    tools:context="com.example.view.main.MainActivity">

    <ImageView
        android:id="@+id/iv_flame"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_centerInParent="true" />

    <fragment
        android:id="@+id/nav_host_fragment"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        app:navGraph="@navigation/navigation"/>

</RelativeLayout>
```
You haven't set up for your `<fragment>` correctly - every `<fragment>` needs an `android:name` pointing to the Fragment class it is loading. In the case of Navigation, it must reference the `androidx.navigation.fragment.NavHostFragment` as per the [Getting Start documentation](https://developer.android.com/guide/navigation/navigation-getting-started#add-navhostfragment): ``` <fragment android:id="@+id/nav_host_fragment" android:name="androidx.navigation.fragment.NavHostFragment" android:layout_width="match_parent" android:layout_height="match_parent" app:defaultNavHost="true" app:navGraph="@navigation/navigation"/> ``` Once you actually tie your navigation graph to a `NavHostFragment`, the error will go away.
Debugger times out at "Collecting data..." I am debugging a Python (`3.5`) program with PyCharm (`PyCharm Community Edition 2016.2.2 ; Build #PC-162.1812.1, built on August 16, 2016 ; JRE: 1.8.0_76-release-b216 x86 ; JVM: OpenJDK Server VM by JetBrains s.r.o`) on Windows 10. **The problem: when stopped at some breakpoints, the Debugger window is stuck at "Collecting data", which eventually timeout.** (with *Unable to display frame variables*) The data to be displayed is neither special, nor particularly large. It is somehow available to PyCharm since a conditional break point on some values of the said data works fine (the program breaks) -- it looks like the process to gather it for display only (as opposed to operational purposes) fails. When I step into a function around the place I have my breakpoint, its data is displayed correctly. When I go up the stack (to the calling function, the one I stepped down from and where I wanted initially to have the breakpoint) - I am stuck with the "Collecting data" timeout again. There have been numerous issues raised with the same point since at least 2005. Some were fixed, some not. The fixes were usually updates to the latest version (which I have). **Is there a general direction I can go to in order to fix or work around this family of problems?** --- EDIT: a year later the problem is still there and there is still no reaction from the devs/support after the bug was raised. --- **EDIT April 2018: It looks like the problem is solved in the 2018.1 version**, the following code which was hanging when setting a breakpoint on the `print` line now works (I can see the variables): ``` import threading def worker(): a = 3 print('hello') threading.Thread(target=worker).start() ```
I think that this is caused by some classes having a default `__str__()` method that is too verbose. PyCharm calls this method to display the local variables when it hits a breakpoint, and it gets stuck while loading the string.

A trick I use to overcome this is manually editing the class that is causing the error and replacing the `__str__()` method with something less verbose.

As an example, it happens for the pytorch `_TensorBase` class (and all tensor classes extending it), and can be solved by editing the pytorch source `torch/tensor.py`, changing the `__str__()` method as:

```
def __str__(self):
    # All strings are unicode in Python 3, while we have to encode unicode
    # strings in Python2. If we can't, let python decide the best
    # characters to replace unicode characters with.
    return str() + ' Use .numpy() to print'
    #if sys.version_info > (3,):
    #    return _tensor_str._str(self)
    #else:
    #    if hasattr(sys.stdout, 'encoding'):
    #        return _tensor_str._str(self).encode(
    #            sys.stdout.encoding or 'UTF-8', 'replace')
    #    else:
    #        return _tensor_str._str(self).encode('UTF-8', 'replace')
```

Far from optimum, but comes in handy.

UPDATE: The error seems solved in the last PyCharm version (2018.1), at least for the case that was affecting me.
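The same trick can be applied without editing the installed library: monkey-patch the class from your own code at debug time. A minimal, self-contained sketch with a stand-in class (`Chatty` and `short_str` are hypothetical names for illustration):

```python
class Chatty:
    """Stand-in for a library class whose __str__ is huge/slow."""
    def __str__(self):
        return "line\n" * 100000  # imagine a giant tensor dump

def short_str(self):
    # Compact replacement the debugger can render instantly.
    return f"<{type(self).__name__} (verbose __str__ suppressed)>"

# Shadow the method on the class itself; affects all instances.
Chatty.__str__ = short_str

print(str(Chatty()))  # <Chatty (verbose __str__ suppressed)>
```

Since the patch only lives in your debug session, the library source stays untouched and survives upgrades.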
Change attribute of multiple elements on hover by using CSS I have two div elements which are placed next to each other, i want to change the attribute `background-color` of both of them if the user hovers over one of them. So the background-color should be initially set to `#d8d8d8` for both divs and should change to `#cacaca` on both divs if i hover over one of the divs. I solved it using jquery: ``` $("#FOO").hover ( function() { $(this).css("background-color","#CACACA"); $("#BAR").css("background-color","#CACACA"); }, function() { $(this).css("background-color","#D8D8D8"); $("#BAR").css("background-color","#D8D8D8"); } ) $("#BAR").hover ( function() { $(this).css("background-color","#CACACA"); $("#FOO").css("background-color","#CACACA"); }, function() { $(this).css("background-color","#D8D8D8"); $("#FOO").css("background-color","#D8D8D8"); } ) ``` ``` .buttons { border:1px solid black; background-color: #D8D8D8; height: 100px; font-family: play; font-size: 30px; text-align:center; vertical-align: middle; line-height: 100px; } ``` ``` <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css" rel="stylesheet"/> <div class="col-xs-10 buttons" id="FOO" style="border-right: 0px"> <span style="padding-left:100px">FOO</span> </div> <div class="col-xs-2 buttons" id="BAR" style="border-left: 0px"> <span>BAR</span> </div> ``` Is there a better way to achieve this? Maybe only with css?
You can wrap columns in `div` and add `:hover` on it: ``` .buttons { border:1px solid black; background-color: #D8D8D8; height: 100px; font-family: play; font-size: 30px; text-align:center; vertical-align: middle; line-height: 100px; } .row:hover > .buttons { background-color: #CACACA; } ``` ``` <link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css" rel="stylesheet"/> <div class='row'> <div class="col-xs-10 buttons" id="FOO" style="border-right: 0px"> <span style="padding-left:100px">FOO</span> </div> <div class="col-xs-2 buttons" id="BAR" style="border-left: 0px"> <span>BAR</span> </div> </div> ```
Can someone explain the significance of using uint32_t from the `<cstdint>` library?

I recently saw someone's code on Code Chef that was using this type and header. I was wondering if someone can explain the benefits of using `uint32_t` as opposed to `int`, `float`, `double`, etc. Also, in what cases should I use it, and when should I avoid it?

Link to code: <http://www.codechef.com/viewsolution/131898>
The advantage is that a `uint32_t` is always guaranteed to be exactly 32 bits long, as opposed to the primitive types, whose lengths are platform-dependent. For instance, while `int` is 32 bits on x86 and x86_64, it is only 16 bits on some older and/or embedded architectures, and nothing in the standard stops an implementation from making it wider.

One of the cases where it may be beneficial to use a `uint32_t`, then, could be when you read binary data directly to/from disk/network. You can always just copy 4 bytes into a `uint32_t` and be sure that it fits. (You'd still have to watch out for e.g. differences in endianness, however.)

You may also want to use a `uint32_t` if you simply want predictable overflow/underflow behavior. Or if you're doing calculations defined in some specific size, like when running some hashing algorithms.

The only thing I have been left wondering is why there aren't corresponding `float32_t` and `float64_t` types. :)
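As a small illustration (a minimal sketch of my own; `wrap_demo` is just a made-up name for this example), the fixed width can be checked at compile time, and unsigned wrap-around is fully defined by the standard:

```cpp
#include <cstdint>

// uint32_t is guaranteed to be exactly 32 bits (4 bytes) wherever it exists,
// unlike int, whose width is platform-dependent.
static_assert(sizeof(std::uint32_t) == 4, "uint32_t is always 4 bytes");

// Unsigned arithmetic wraps modulo 2^32 -- well defined on every platform,
// in contrast to signed overflow, which is undefined behaviour.
inline std::uint32_t wrap_demo() {
    std::uint32_t x = 0xFFFFFFFFu; // largest 32-bit value
    return x + 1;                  // wraps around to 0
}
```

Compiled on a machine with a 16-bit or 64-bit `int`, this code still behaves identically, which is exactly the point of the fixed-width types.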
Setting path to Clang library in CMake

I build LLVM from git and want to use the libraries in a project, especially libclang. The "makefiles" are generated by means of CMake, and for the LLVM part I found the setting `LLVM_DIR` to reroute the path for the LLVM libraries, but for Clang I cannot find such a variable, and I still see in my link line (it is a Cygwin system): `/usr/lib/libclang.dll.a /usr/lib/libclangTooling.dll.a`.

Question: which environment variable do I set to pick up the Clang libraries from the right build?
The variable is `Clang_DIR`.

Just in case, I attach a minimalistic example `CMakeLists.txt` file as well.

```
cmake_minimum_required(VERSION 3.12)

# Find CMake file for Clang
find_package(Clang REQUIRED)

# Add path to LLVM modules
set(CMAKE_MODULE_PATH
  ${CMAKE_MODULE_PATH}
  "${LLVM_CMAKE_DIR}"
)

# import LLVM CMake functions
include(AddLLVM)

include_directories(${LLVM_INCLUDE_DIRS})
include_directories(${CLANG_INCLUDE_DIRS})

add_definitions(${LLVM_DEFINITIONS})
add_definitions(${CLANG_DEFINITIONS})

add_llvm_executable(myTool main.cpp)
set_property(TARGET myTool PROPERTY CXX_STANDARD 11)
target_link_libraries(myTool PRIVATE clangTooling)
```
RabbitMQ queue and routing key

In the documentation <https://docs.spring.io/spring-amqp/reference/htmlsingle/> I see

```
@RabbitListener(bindings = @QueueBinding(
        value = @Queue(value = "myQueue", durable = "true"),
        exchange = @Exchange(value = "auto.exch", ignoreDeclarationExceptions = "true"),
        key = "orderRoutingKey")
)
public void processOrder(Order order) {
}

@RabbitListener(bindings = @QueueBinding(
        value = @Queue,
        exchange = @Exchange(value = "auto.exch"),
        key = "invoiceRoutingKey")
)
public void processInvoice(Invoice invoice) {
}
```

Here there is one queue and two different routing keys, one for each method. But my code doesn't dispatch messages by key!

```
@RabbitListener(bindings = @QueueBinding(
        value = @Queue(value = DRIVER_QUEUE, durable = "true"),
        exchange = @Exchange(value = "exchange", ignoreDeclarationExceptions = "true", autoDelete = "true"),
        key = "order")
)
public void getOrders(byte[] message) throws InterruptedException {
    System.out.println("Rout order");
}

@RabbitListener(bindings = @QueueBinding(
        value = @Queue(value = DRIVER_QUEUE, durable = "true"),
        exchange = @Exchange(value = "exchange", ignoreDeclarationExceptions = "true", autoDelete = "true"),
        key = "invoice")
)
public void getInvoices(byte[] message) throws InterruptedException {
    System.out.println("Rout invoice");
}
```

Both listeners take messages from the queue without regard to the key... The site sends a message to the queue with key "invoice", and I see "Rout order" in the console.

What's the problem? Thanks a lot!

rabbitmq 3.7.3, spring 4.2.9, org.springframework.amqp 1.7.5
The error is that you send all messages to the same queue. You should use a different queue for each listener. Your bindings only say that messages with RK="invoice" and RK="order" must go to the same queue, not that each listener processes only the queue elements with its RK.

You should bind, e.g., the exchange to DRIVER_QUEUE1 (e.g. "queue-orders") via key "order" and the exchange to DRIVER_QUEUE2 (e.g. "queue-invoices") via key "invoice". In this way you separate the messages, and you can put two listeners in place, one for orders and one for invoices. E.g. something like this:

```
@RabbitListener(queues = "queue-orders")
public void handleOrders(@Payload Order in, @Header(AmqpHeaders.RECEIVED_ROUTING_KEY) String key) {
    logger.info("Key: {}, msg: {}", key, in.toString());
}

@RabbitListener(queues = "queue-invoices")
public void handleInvoices(@Payload Invoice in, @Header(AmqpHeaders.RECEIVED_ROUTING_KEY) String key) {
    logger.info("Key: {}, msg: {}", key, in.toString());
}
```

I don't like the complete binding annotation, since once the broker configuration is made, IMHO the full annotation becomes useless (or rather, adds an extra check that is useless for me). But if you prefer it, the full annotation should look like

```
@RabbitListener(bindings = @QueueBinding(
        value = @Queue(value = "queue-orders", durable = "true"),
        exchange = @Exchange(value = "exchange", ignoreDeclarationExceptions = "true", autoDelete = "true"),
        key = "order")
)
```

Then you can send messages via convertAndSend(exchangeName, routingKey, object), as in

```
Order order = new Order(...);
rabbitTemplate.convertAndSend("exchange", "order", order);

Invoice invoice = new Invoice(...);
rabbitTemplate.convertAndSend("exchange", "invoice", invoice);
```

If your Boot application implements RabbitListenerConfigurer, then you can configure everything as e.g.

```
@SpringBootApplication
public class MyApplication implements RabbitListenerConfigurer {

    // other config stuff here....

    @Bean("queue1")
    public Queue queue1() {
        return new Queue("queue-orders", true);
    }

    @Bean("queue2")
    public Queue queue2() {
        return new Queue("queue-invoices", true);
    }

    @Bean
    public Binding binding1(@Qualifier("queue1") Queue queue, TopicExchange exchange) {
        return BindingBuilder.bind(queue).to(exchange).with("order");
    }

    @Bean
    public Binding binding2(@Qualifier("queue2") Queue queue, TopicExchange exchange) {
        return BindingBuilder.bind(queue).to(exchange).with("invoice");
    }

    @Bean
    public RabbitTemplate rabbitTemplate(final ConnectionFactory connectionFactory) {
        final RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
        rabbitTemplate.setMessageConverter(producerJackson2MessageConverter());
        return rabbitTemplate;
    }

    @Bean
    public Jackson2JsonMessageConverter producerJackson2MessageConverter() {
        return new Jackson2JsonMessageConverter();
    }

    // Converter used by the listener method factory below.
    @Bean
    public MappingJackson2MessageConverter consumerJackson2MessageConverter() {
        return new MappingJackson2MessageConverter();
    }

    @Bean
    public DefaultMessageHandlerMethodFactory messageHandlerMethodFactory() {
        DefaultMessageHandlerMethodFactory factory = new DefaultMessageHandlerMethodFactory();
        factory.setMessageConverter(consumerJackson2MessageConverter());
        return factory;
    }

    @Override
    public void configureRabbitListeners(final RabbitListenerEndpointRegistrar registrar) {
        registrar.setMessageHandlerMethodFactory(messageHandlerMethodFactory());
    }

    // Exchange.
    @Bean
    public TopicExchange exchange() {
        return new TopicExchange("exchange");
    }
}
```

Hope this answers your question.
Questions regarding C++ non-POD unions

C++11 gave us the possibility to use non-POD types within unions. Say I have the following piece of code:

```
union {
    T one;
    V two;
} uny;
```

Somewhere within my class, only one member will be active at a time. Now my questions are rather simple:

1. What is the default value of uny? Undefined?
2. Whenever my class is destructed, which members (within the union), if any, will be destructed? Suppose I use std::typeinfo to keep track of which is the active member; should I then call the destructor explicitly on that member in the class's destructor?
3. Does anyone have a link to the language proposal which changed unions to accept non-POD types?
You're mostly on your own. A note in the standard explains this (9.5/2): > > If any non-static data member of a union has a non-trivial default > constructor (12.1), copy constructor (12.8), move constructor (12.8), copy assignment operator (12.8), move > assignment operator (12.8), or destructor (12.4), the corresponding member function of the union must be > user-provided or it will be implicitly deleted (8.4.3) for the union. > > > So if any of the member constructors are non-trivial, you need to write a constructor for the union (if they are all trivial, the default state will be uninitialized, like for `union { int; double; }`). If any members have a destructor, you need to write a destructor for the union which must take care of figuring out the active element. There's a further note (9.5/4) about typical usage of an unconstrained union: > > In general, one must use explicit destructor calls and placement new operators to change the active > member of a union. > > >
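To make that note concrete, here is a minimal sketch of how such a union is typically handled (the `Value`/`Tag` wrapper is my own illustration, not something from the standard): the enclosing type tracks the active member, activates the non-trivial one with placement new, and calls its destructor explicitly.

```cpp
#include <new>
#include <string>

// A tagged wrapper around an unconstrained union. Because std::string has
// non-trivial special member functions, the union's own constructor and
// destructor are implicitly deleted, so we must provide them ourselves.
struct Value {
    enum class Tag { Int, Str } tag;

    union U {
        int i;
        std::string s;
        U() : i(0) {}  // start with the trivial member active
        ~U() {}        // real cleanup is done by ~Value, which knows the tag
    } u;

    Value(int v) : tag(Tag::Int) { u.i = v; }

    Value(const std::string& v) : tag(Tag::Str) {
        new (&u.s) std::string(v);  // placement new activates the string member
    }

    ~Value() {
        if (tag == Tag::Str)
            u.s.~basic_string();    // explicit destructor call on the active member
    }

    Value(const Value&) = delete;             // copying would need the same
    Value& operator=(const Value&) = delete;  // tag-dispatching treatment
};
```

Changing the active member at runtime follows the same pattern: destroy the old member explicitly, then placement-new the new one.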
Implementing an Auto-Increment column in a DataFrame

I'm trying to implement an auto-increment column in a DataFrame. I already found a solution, but I want to know if there's a better way to do this.

I'm using the `monotonically_increasing_id()` function from `pyspark.sql.functions`. The problem with it is that it starts at 0 and I want it to start at 1. So, I did the following and it works fine:

`(F.monotonically_increasing_id()+1).alias("songplay_id")`

```
dfLog.join(dfSong, (dfSong.artist_name == dfLog.artist) & (dfSong.title == dfLog.song))\
    .select((F.monotonically_increasing_id()+1).alias("songplay_id"), \
            dfLog.ts.alias("start_time"), dfLog.userId.alias("user_id"), \
            dfLog.level, \
            dfSong.song_id, \
            dfSong.artist_id, \
            dfLog.sessionId.alias("session_id"), \
            dfLog.location, \
            dfLog.userAgent.alias("user_agent"))
```

Is there a better way to implement what I'm trying to do? I think it's too much work to implement a UDF just for that, or is it just me?

Thanks.
The sequence `monotonically_increasing_id` is not guaranteed to be consecutive, but the ids are guaranteed to be monotonically increasing. Each task of your job will be assigned a starting integer from which it increments by 1 at every row, but you'll have gaps between the last id of one batch and the first id of another. To verify this behavior, you can create a job containing two tasks by repartitioning a sample data frame:

```
import pandas as pd
import pyspark.sql.functions as psf

spark.createDataFrame(pd.DataFrame([[i] for i in range(10)], columns=['value'])) \
    .repartition(2) \
    .withColumn('id', psf.monotonically_increasing_id()) \
    .show()

+-----+----------+
|value|        id|
+-----+----------+
|    3|         0|
|    0|         1|
|    6|         2|
|    2|         3|
|    4|         4|
|    7|8589934592|
|    5|8589934593|
|    8|8589934594|
|    9|8589934595|
|    1|8589934596|
+-----+----------+
```

In order to make sure your index yields consecutive values, you can use a window function.

```
from pyspark.sql import Window
w = Window.orderBy('id')

spark.createDataFrame(pd.DataFrame([[i] for i in range(10)], columns=['value'])) \
    .withColumn('id', psf.monotonically_increasing_id()) \
    .withColumn('id2', psf.row_number().over(w)) \
    .show()

+-----+---+---+
|value| id|id2|
+-----+---+---+
|    0|  0|  1|
|    1|  1|  2|
|    2|  2|  3|
|    3|  3|  4|
|    4|  4|  5|
|    5|  5|  6|
|    6|  6|  7|
|    7|  7|  8|
|    8|  8|  9|
|    9|  9| 10|
+-----+---+---+
```

**Notes:**

- `monotonically_increasing_id` allows you to set an order on your rows as they are read; it starts at `0` for the first task and increases, but not necessarily in a sequential manner
- `row_number` sequentially indexes the rows in an ordered window and starts at `1`
$.when() not waiting for ajax response

I have a problem: if I call a function that contains an Ajax call and returns its result, the caller always receives a null return value before the Ajax call has returned.

Example fiddle: [ajax null return value example](http://jsfiddle.net/yp0goac2/)

```
var returnVal;

$.when(fun()).done(function(a1) {
    console.log("finished ajax");
    returnVal = a1;
});

alert(returnVal);

function fun() {
    $.ajax({//this ajax call should return an ip
        url: 'http://ip.jsontest.com/',
        data: "",
        dataType: 'json',
        success: function (data) {
            console.log("fun(id) ajax success");
            return data;
        }
    });
}
```
You must return the deferred given by `$.ajax`:

```
function fun() {
    return $.ajax({//this ajax call should return an ip
```

From [the documentation](http://api.jquery.com/jquery.when/), to explain why your callback is executed:

> If a single argument is passed to jQuery.when() and it is not a Deferred or a Promise, it will be treated as a resolved Deferred and any doneCallbacks attached will be executed immediately

There's another problem in your code: you're alerting before the asynchronous call is done. You should move the `alert` inside the callback. And there's no reason to have a success function; you handle the result in your `done` callback.

You can't really `alert` an object and get something meaningful. You should either do `alert(JSON.stringify(returnVal))` or use `console.log` (less painful anyway; hit F12 to show the console).

```
$.when(fun()).done(function(a1){
    console.log("finished ajax");
    var returnVal = a1;
    console.log(returnVal);
});

function fun() {
    return $.ajax({//this ajax call should return an ip
        url: 'http://ip.jsontest.com/',
        data: "",
        dataType: 'json'
    });
}
```
Javascript keydown function conflict on input focus How to prevent keydown event when input is focused? I have a function that assigns some action to keyboard keys (mainly player control), but when using `<input>` all assigned keys trigger that action as well. Therefore it is impossible to use `<input>` properly when typing. ``` function keyboardShortcuts(e){ //Do some stuff when keyboard keys are pressed switch(e.code){ case 'KeyJ': console.log('KeyJ'); break; //... } } document.addEventListener('keydown',keyboardShortcuts); ``` ``` <input id="search" placeholder="search..."> ``` How to fix it? Should I check if input is not focused? ``` function keyboardShortcuts(e){ if(document.getElementById('search') !== document.activeElement){ switch(e.code){ //... } } } ``` Or maybe there is some method similar to `e.preventDefault()`? ``` document.getElementById('search').addEventListener('focus',function(){ //... prevent keyboardShortcuts frum running ????? }); ```
## Answer:

Since you're applying the `keydown` event handler to the entire document, the best way to handle this is to bail out of the function early based upon which element initiated the event. You can find this out by using `event.target`.

Combining this with `HTMLElement.prototype.matches` allows you to avoid elements that match a `selector`, for instance any `input` elements.

```
if(e.target.matches("input")) return;
```

---

## Example:

```
function keyboardShortcuts(e){
  //Do some stuff when keyboard keys are pressed
  if(e.target.matches("input")) return;
  switch(e.code){
    case 'KeyJ':
      console.log('KeyJ');
      break;
      //...
  }
}

document.addEventListener('keydown',keyboardShortcuts);
```

```
<h3> Will Escape Custom KeyDown </h3>
<input id="search" placeholder="search...">
<hr />
<h3> Will Trigger Custom KeyDown </h3>
<textarea></textarea>
<hr/>
<small> Pressing <kbd>J</kbd> will fire <highlight>console.log</highlight> from Custom KeyDown Event</small>
```

You'll notice, in the above, `input` doesn't fire a console log when pressing `j`, but `textarea` still does.

---

## Adding a Class:

I would suggest adding a specific class to your markup that is used for avoiding the global `keydown` handler. For instance, something like `defaultKeys`.

---

## Example:

```
function keyboardShortcuts(e){
  //Do some stuff when keyboard keys are pressed
  if(e.target.matches(".defaultKeys")) return;
  switch(e.code){
    case 'KeyJ':
      console.log('KeyJ');
      break;
      //...
  }
}

document.addEventListener('keydown',keyboardShortcuts);
```

```
<h3> Defaulted KeyDown Events </h3>
<hr/>
<input id="search" class="defaultKeys" placeholder="search...">
<br>
<textarea class="defaultKeys"></textarea>
<hr/>
<h3> Custom KeyDown Events </h3>
<input />
<br>
<textarea></textarea>
<hr/>
<small> Pressing <kbd>J</kbd> will fire <highlight>console.log</highlight> from Custom KeyDown Event</small>
```

---

## Avoiding Groups of Elements:

It may be beneficial to avoid entire containers of elements - e.g.
if you have a submission form or a player chat, etc. You can utilize the same basic principles, but because `selector`s currently cannot look up the tree (they can't say *am I a great-grandchild of an element that **does** match `.defaultKeys`?*), we have to recursively check up the tree.

This is done with a simple `do...while` loop that continues until it finds a match for the specified selector OR it hits the `body` tag. The second out (the `body` tag check) is normally not necessary, as `parentElement` should only be able to go up to the `HTML` element; however, when dealing with IFrames, DOMParser, and, in the future, **Realms**, you may have multiple DOMs injected into one another, and therefore multiple `body` tags. Basically, we simply want to give a for-sure recursive out when the loop reaches the top of the **current** DOM.

```
function keyboardShortcuts(e) {
  // get init element and set default flag
  let element = e.target,
    useDefault = false;
  // if this element has defaultKeys class
  // OR if any parent element has defaultKeys class
  // set default flag to true
  do {
    if (element.matches(".defaultKeys")) {
      useDefault = true;
      break;
    }
    if ( element.tagName === "BODY" ) break;
  } while ((element = element.parentElement));
  // if flag is true, escape function
  if (useDefault) return;
  // check for custom key events
  switch (e.code) {
    case "KeyJ":
      console.log("KeyJ");
  }
}

document.addEventListener("keydown", keyboardShortcuts);
```

```
<div id="group" class="defaultKeys">
  <h3> These elements are in the same parent div </h3>
  <input />
  <br/>
  <textarea></textarea>
</div>
<h3> Outside of Group </h3>
<textarea></textarea>
```

---

## Conclusion

Hope this helps!

**Happy coding!**
remove ids generated during the tests

For load testing, in the `vu` stage I generate a lot of objects with unique ids that I put in the database. I want to delete them during the `teardown` stage in order not to pollute the database.

When keeping the state like this

```
let ids = [];

export function setup() {
  ids.push('put in setup id');
}

export default function () {
  ids.push('put in vu id');
}

export function teardown() {
  ids.push('put in teardown id');
  console.log('Resources: ' + ids);
}
```

it doesn't work, as the array only ever contains the data I put in the `teardown` stage. Passing data between stages also doesn't work due to the well-known `Cannot extend Go slice` issue, and even apart from that, you cannot pass data from the `vu` stage to `teardown`, as it always gets the data from the `setup` stage.

The only remaining solutions are either playing around with `console.log` or just using a plain preset of ids in the tests. Is there another way?
The `setup()`, `teardown()`, and the VUs' `default` functions are executed in completely different JavaScript runtimes. For distributed execution, they may be executed on completely different machines. So you can't just have a global `ids` variable that you're able to access from everywhere. That limitation is the reason why you're supposed to return any data you care about from `setup()` - k6 will copy it and pass it as a parameter to the `default` function (so you can use whatever resources you set up) and `teardown()` (so you can clean them up). Your example has to look somewhat like this: ``` export function setup() { let ids = []; ids.push('put in setup id'); return ids; } export default function (ids) { // you cannot push to ids here console.log('Resources: ' + ids); } export function teardown(ids) { console.log('Resources: ' + ids); } ``` You can find more information at <https://k6.io/docs/using-k6/test-life-cycle>
How to use the map function in Haskell?

I'm trying to use map to return a list of lists, but I keep getting an error. I know map takes in a function and then applies that function.

```
map (take 3) [1,2,3,4,5]
```

This is supposed to return [[1,2,3],[2,3,4],[3,4,5]], but instead it returns this error:

```
<interactive>:6:1: error:
    • Non type-variable argument in the constraint: Num [a]
      (Use FlexibleContexts to permit this)
    • When checking the inferred type
        it :: forall a. Num [a] => [[a]]
```

Is it hitting null? Is that why?
Let's take a look at exactly what the error message is saying.

```
map (take 3) [1, 2, 3, 4, 5]
```

`map`'s type signature is

```
map :: (a -> b) -> [a] -> [b]
```

So it takes a function from `a` to `b` and returns a function from `[a]` to `[b]`. In your case, the function is `take 3`, which takes a list and returns a list. So `a` and `b` are both `[t]`. Therefore, the second argument to `map` should be `[[t]]`, a list of lists. Now, Haskell looks at the second argument and sees that it's a list of numbers. So it says "How can I make a number into a list?" Haskell doesn't know of any good way to do that, so it complains that it doesn't know of any instance `Num [t]`.

Now, as for what you meant to do, I believe it was mentioned in the comments. The `tails` function [1] takes a list and returns the list of all tails of that list. So

```
tails [1, 2, 3, 4, 5]
-- ==> [[1, 2, 3, 4, 5], [2, 3, 4, 5], [3, 4, 5], [4, 5], [5], []]
```

Now you can apply the `take` function to each element.

```
map (take 3) (tails [1, 2, 3, 4, 5])
-- ==> [[1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5], [5], []]
```

Oops! We've got some extra values we don't want. We only want the values that have three elements in them. So let's *filter* out the ones we don't want. `filter` takes a predicate (which is just a fancy way of saying "a function that returns a Boolean") and a list, and returns a list containing only the elements that satisfy the predicate. The predicate we want is one that takes a list and returns whether or not that list has three elements.

```
\x -> ...               -- We want only the lists
\x -> length x ...      -- whose length
\x -> length x == 3     -- is exactly equal to 3
```

So that's our function. Now we pass that to `filter`.

```
filter (\x -> length x == 3) (map (take 3) (tails [1, 2, 3, 4, 5]))
-- ==> [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
```

---

[1] Note that you may need to `import Data.List` to get the `tails` function.
Difference between ASP.NET Web App and ASP.NET MVC 3 Empty Web App? I want to build my own web framework, and I want to build it in C#. I figure the easiest way to get started is to use ASP.NET to handle all the server communication stuff. The rest I want to build my self. I know next to nothing about ASP.NET. (But know C#, the MVC pattern, and other web frameworks quite well). In Visual Studio 2010 I see - ASP.NET Web Application - ASP.NET MVC 2 Empty Web Application - ASP.NET MVC 3 Empty Web Application I figure one of these should be good as a base. I just want some "entry point" into some C# code. I started with PHP so it's a little bit weird for me to not be able to just load up a file in my browser. Anyway, which should I use? What's the difference between a plain ASP.NET Web App and an empty MVC 3 app? If it's "empty" it shouldn't be using any of the MVC framework, should it? I just want to make sure I use the latest and greatest "ASP" for handling the server stuff before I embark down this road.
Perhaps you shouldn't start with an Empty Web Application, because it just gives you the references to the assemblies but doesn't give you any sample code.

**ASP.NET Web App** is NOT an empty web app (it comes with a basic set of files) and is meant to be used with ASP.NET WebForms. For tutorials: <http://www.asp.net/web-forms>

**ASP.NET MVC 2 Empty Web Application** is an empty project meant to be used with ASP.NET MVC version 2 (the current version). For tutorials: <http://www.asp.net/mvc>

**ASP.NET MVC 3 Empty Web Application** is an empty project meant to be used with ASP.NET MVC version 3 (now a release candidate). For info: <http://www.asp.net/mvc/mvc3>

Before creating your framework, dig into the tutorials, play around, and only then start building your framework.

And about WebForms vs MVC, that's a whole other discussion. I personally switched to MVC for my new project, and perhaps it's better suited for your framework app.