Reloading tensorflow model I have two separate tensorflow processes, one which is training a model and writing out graph\_defs with `tensorflow.python.client.graph_util.convert_variables_to_constants`, and another which is reading the graph\_def with `tensorflow.import_graph_def`. I would like the second process to periodically reload the graph\_def as it gets updated by the first process. Unfortunately, it appears that every subsequent time I read the graph\_def the old one is still used, even if I close the current session and create a new one. I have also tried wrapping the `import_graph_def` call with `sess.graph.as_default()`, to no avail. Here is my current graph\_def loading code: ``` if self.sess is not None: self.sess.close() self.sess = tf.Session() graph_def = tf.GraphDef() with open(self.graph_path, 'rb') as f: graph_def.ParseFromString(f.read()) with self.sess.graph.as_default(): tf.import_graph_def(graph_def, name='') ```
The problem here is that, when you create a `tf.Session` with no arguments, it uses the **current default graph**. Assuming you don't create a `tf.Graph` anywhere else in your code, you get the global default graph that is created when the process starts, and this is shared between all of the sessions. As a result, `with self.sess.graph.as_default():` has no effect. It's hard to recommend a new structure from the snippet you showed in the question—in particular, I've no idea about how you created the previous graph, or what the class structure is—but one possibility would be to replace the `self.sess = tf.Session()` with the following: ``` self.sess = tf.Session(graph=tf.Graph()) # Creates a new graph for the session. ``` Now the `with self.sess.graph.as_default():` will use the graph that was created for the session, and your program should have the intended effect. A somewhat preferable (to me, at least) alternative would be to build the graph explicitly: ``` with tf.Graph().as_default() as imported_graph: tf.import_graph_def(graph_def, ...) sess = tf.Session(graph=imported_graph) ```
MySQL vs SQLite on Amazon EC2 I have a Java program and PHP website I plan to run on my Amazon EC2 instance with an EBS volume. The program writes to and reads from a database. The website only reads from the same database. On AWS you pay for the amount of IOPS (I/O requests Per Second) to the volume. Which database has the least IOPS? Also, can SQLite handle queries from both the program and website simultaneously?
The amount of IO is going to depend a lot on how you have MySQL configured and how your application uses the database. Caching, log file sizes, database engine, transactions, etc. will all affect how much IO you do. In other words, it's probably not possible to predict in advance although I'd guess that SQLite would have more disk IO simply because the database file has to be opened and closed all the time while MySQL writes and reads (in particular) can be cached in memory by MySQL itself. This site, [Estimating I/O requests](http://www.ghidinelli.com/2009/05/26/estimating-io-requests-ec2-ebs-costs), has a neat method for calculating your actual IO and using that to estimate your EBS costs. You could run your application on a test system under simulated loads and use this technique to measure the difference in IO between a MySQL solution and a SQLite solution. In practice, it may not really matter. The cost is $0.10 per million IO requests. On a medium-traffic e-commerce site with heavy database access we were doing about 315 million IO requests per month, or $31. This was negligible compared to the EC2, storage, and bandwidth costs which ran into the thousands. You can use the [AWS cost calculator](http://calculator.s3.amazonaws.com/calc5.html) to plug in estimates and calculate all of your AWS costs. You should also keep in mind that the SQLite folks only [recommend that you use it for low to medium traffic websites](http://www.sqlite.org/whentouse.html). MySQL is a better solution for high traffic sites.
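As a rough sketch of the cost arithmetic above (the per-million-request price and the monthly traffic figure are the ones quoted in this answer, and may be outdated):

```python
# Back-of-the-envelope EBS I/O cost estimate.
# Assumed figures (from the answer above): $0.10 per million I/O requests,
# ~315 million requests per month on a medium-traffic site.
PRICE_PER_MILLION_IO_USD = 0.10

def monthly_ebs_io_cost(requests_per_month):
    """Return the estimated monthly EBS I/O cost in USD."""
    return requests_per_month / 1_000_000 * PRICE_PER_MILLION_IO_USD

print(round(monthly_ebs_io_cost(315_000_000), 2))  # ~31.5 USD/month
```

Plugging in your own measured request count (e.g. from the estimation technique linked above) gives a quick sense of whether the IOPS cost is worth optimizing at all.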
My attempt to use a "connection" while trying to read in input causes R to freeze or crash Sorry, but the terminology I use in the title may not be used correctly. Whenever I try to run this code, it seems like it is trying to run it but never completes the command. When I click the stop command sign (red), it doesn't do anything. I cannot close out of R. So why is this taking forever to run? ``` con <- file('stdin', open = 'r') inputs <- readLines(con) ```
When working in RStudio, you need to use `readLines(stdin())` rather than `readLines(file('stdin'))`, though you can use either if running R in the terminal. However, there is also an issue from not specifying the number of lines of input since you are using RStudio. When reading input from stdin, `Ctrl`+`D` signals the end of input. However, if you are doing this from RStudio rather than from the terminal [`Ctrl`+`D` is unavailable](https://support.rstudio.com/hc/en-us/community/posts/200656347-Is-there-a-way-to-emulate-Ctrl-D-terminate-stdin-), so without specifying the lines of input there is no way to terminate the reading from stdin. So, if you are running R from the terminal, your code will work, and you signal the end of input via `Ctrl`+`D`. If you must work from RStudio, you can still use `readLines(stdin())` [if you know the number of lines of input](https://stackoverflow.com/a/32936135/8386140); e.g., ``` > readLines(stdin(), n=2) Hello World [1] "Hello" "World" ``` An alternate workaround is to use `scan()`, e.g.: ``` > scan(,'') 1: Hello 2: World 3: Read 2 items [1] "Hello" "World" ``` (On the third line I just pressed `Enter` to terminate input). The advantage there is that you don't need to know the number of lines of input beforehand.
Using vim, what is " '<,'>"? While using Vim, in visual mode, selecting text and then calling a colon command shows `: '<,'>` instead of just `:` as it would show when I do other things (such as opening a file). What does `'<,'>` mean? Using `linux (debian)`, `gnome-terminal`, `vim7.2`
It means that the command that you type after `:'<,'>` will operate on the part of the file that you've selected. For example, `:'<,'>d` would delete the selected block, whereas `:d` deletes the line under the cursor. Similarly, `:'<,'>w fragment.txt` would write the selected block to the file called `fragment.txt`. The two comma-separated things (`'<` and `'>`) are marks that correspond to the start and the end of the selected area. From the help pages (`:help '<`): ``` *'<* *`<* '< `< To the first line or character of the last selected Visual area in the current buffer. For block mode it may also be the last character in the first line (to be able to define the block). {not in Vi}. *'>* *`>* '> `> To the last line or character of the last selected Visual area in the current buffer. For block mode it may also be the first character of the last line (to be able to define the block). Note that 'selection' applies, the position may be just after the Visual area. {not in Vi}. ``` When used in this manner, the marks simply specify the range for the command that follows (see `:help range`). They can of course be mixed and matched with other line number specifiers. For example, the following command would delete all lines from the start of the selected area to the end of the file: `:'<,$d` The Vim Wiki has [more information](http://vim.wikia.com/wiki/Ranges) on Vim ranges.
Screensaver that just disables keyboard and mouse Does anyone know of a way that I can lock my computer without a screensaver being shown? I run some graphs that I like to check every now and then from a distance. But I don't want other people to have access to the computer. I would love if there was a way to basically disable the keyboard and mouse - or activate the locked screen once any mouse or keyboard activity is detected and require a login. Does anyone know how I could do this?
I believe that a program called [ClearLock](http://joshstine.wordpress.com/tag/windows-7-lock-screen-app/) will do the trick if you're using one monitor. > > The Windows+L shortcut is handy for quickly password-protecting your machine, but if you want to lock it while keeping an eye on your desktop, ClearLock will lock your desktop with a transparent layer so you can see what’s going on. > > > And it can be downloaded from [here](http://www.swanrivercomputers.com/programs/clearlock/) If you don't want to run an exe file then you should check some code samples at [Lock Screen Apps](http://code.msdn.microsoft.com/windowsapps/Lock-screen-apps-sample-9843dc3a) (by Microsoft). One caveat though: it's meant for Windows 8.
Open link from Android Webview in normal browser as popup I have a simple webview which loads a page. This page has a few links that open within the webview. That's what it's supposed to do, so it's all working fine. But there is one single link from that page which should load as a popup, so I want it to open in the normal browser when people click it. But as I stated, all links are opening in the webview, so that link does it also. My question is, how can I make this link open in the normal browser as a kind of a popup? Is it even possible? The link is variable so it's always changing; it cannot be hardcoded within the application to open in a new browser window. Is it possible and how can I do it?
Here's an example of overriding webview loading to stay within your webview or to leave: ``` import android.app.Activity; import android.os.Bundle; import android.webkit.WebView; import android.webkit.WebViewClient; public class TestWebViewActivity extends Activity { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); WebView webView = (WebView) findViewById(R.id.webview); webView.setWebViewClient(new MyWebViewClient()); } } class MyWebViewClient extends WebViewClient { @Override public boolean shouldOverrideUrlLoading(WebView view, String url) { if(url.contains("somePartOfYourUniqueUrl")){ // Could be cleverer and use a regex return super.shouldOverrideUrlLoading(view, url); // Leave webview and use browser } else { view.loadUrl(url); // Stay within this webview and load url return true; } } } ```
Laravel 5.5 Storing image from base64 string In a Laravel 5.5 project, I have successfully saved the info into the product table in MySQL. The info includes a base64 string, which is basically an image. However, I'm facing an issue while storing the image in the public folder of the Laravel project. Below is my code for the ProductController.php ``` public function update(Request $request, $id) { $data = $request->validate([ 'product_name' => 'required', 'description' => 'required', 'rating' => 'required' ]); $uploaded_image = $request->input('uploaded_image'); $data['uploaded_image'] = $uploaded_image['value']; // base64 string $product = Product::findOrFail($id); $product->update($data); // the data stored into the database with no issue $image_base64 = base64_decode($uploaded_image['value']); $path = public_path(); $success = file_put_contents($path, $image_base64.".png"); return response()->json($data); } ``` I see the following error below: ``` message:"file_put_contents(C:\xampp\htdocs\laravel-api\public): failed to open stream: Permission denied" ``` By seeing different sources, I did the following, but nothing changed. 1. php artisan clear-compiled 2. Icacls public /grant Everyone:F 3. composer dump-autoload Any idea?
As per our discussion you need to give permissions like: ``` icacls "public" /grant USER:(OI)(CI)F /T ``` Where `USER` is your pc's user Also, if you want to save base64 image in storage path then use the following code: ``` //Function to save a base64 image in laravel 5.4 public function createImageFromBase64(Request $request){ $file_data = $request->input('uploaded_image'); //generating unique file name; $file_name = 'image_'.time().'.png'; //@list($type, $file_data) = explode(';', $file_data); //@list(, $file_data) = explode(',', $file_data); if($file_data!=""){ // storing image in storage/app/public Folder \Storage::disk('public')->put($file_name,base64_decode($file_data)); } } ``` Hope this helps you!
How to place the geographical coordinates around polygon with sf? I have the following polygon. ``` library(ggplot2) library(sf) #> Linking to GEOS 3.11.1, GDAL 3.6.2, PROJ 9.1.1; sf_use_s2() is TRUE poly <- st_polygon(list(rbind( c(-90, 70), c(-40, 70), c(-40, 74), c(-90, 74), c(-90, 70) ))) |> st_sfc() |> st_segmentize(5) |> st_set_crs(4326) |> st_as_sf() |> st_transform(3413) |> st_cast("POLYGON") ggplot() + geom_sf(data = poly) + theme( panel.background = element_blank() ) ``` ![](https://i.stack.imgur.com/aok3E.png) Is it possible to place the coordinate labels in a way that they would follow the “shape” of the polygon (instead of the plotting area)? Created on 2023-01-11 with [reprex v2.0.2](https://reprex.tidyverse.org)
This isn't natively possible with ggplot, but it is feasible to draw the axes in using `geomtextpath`: ``` library(geomtextpath) xvals <- seq(-90, -40, 10) yvals <- c(70, 72, 74) xaxis <- lapply(xvals, function(x) { st_linestring(cbind(c(x - 5, x + 5), c(69, 69)))})|> st_sfc() |> st_set_crs(4326) |> st_transform(crs = 3413) |> st_as_sf() |> within(label <- as.character(xvals)) yaxis <- lapply(yvals, function(x) { st_linestring(cbind(c(-93, -91), c(x, x)))})|> st_sfc() |> st_set_crs(4326) |> st_transform(crs = 3413) |> st_as_sf() |> within(label <- as.character(yvals)) ggplot() + geom_sf(data = poly) + geom_textsf(data = xaxis, aes(label = label), linewidth = NA) + geom_textsf(data = yaxis, aes(label = label), linewidth = NA) + coord_sf(crs = 3413) + theme_void() ``` [![enter image description here](https://i.stack.imgur.com/JaC5T.png)](https://i.stack.imgur.com/JaC5T.png)
Is it possible to create identical guids Is it possible to create identical guids in one application ``` Guid id = Guid.NewGuid(); ```
Technically, yes, but the odds are astronomically against it. A created Guid looks for example like this: ``` 26de36b7-76f5-4f17-8f9d-44eb429f151b ``` That is 32 hexadecimal digits (16 possibilities per position), for a total of 16^32 = 2^128 ≈ 3.4 × 10^38 possible values. (`Guid.NewGuid()` actually produces a version 4 Guid, in which a few bits are fixed by the format, leaving 122 random bits, or about 5.3 × 10^36 possible values.) By the birthday paradox, you would need to generate roughly 2.7 × 10^18 Guids, which at a billion Guids per millisecond would still take over a month of continuous generation, before the probability of even a single collision reaches 50%. So in practice: no ;)
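To put a number on how unlikely a duplicate is, here is a small sketch using the birthday-paradox approximation p ≈ n² / 2^(bits+1) (assuming version-4 GUIDs with 122 random bits, which is what `Guid.NewGuid()` produces):

```python
# Birthday-paradox estimate of the chance of at least one duplicate
# among n randomly generated GUIDs. A version-4 GUID has 122 random bits.
RANDOM_BITS = 122

def collision_probability(n, bits=RANDOM_BITS):
    """Approximate P(at least one duplicate) among n values drawn
    uniformly from 2**bits possibilities (valid while p is small)."""
    return n * n / 2 ** (bits + 1)

# Even a trillion GUIDs give a vanishingly small collision chance:
print(collision_probability(10**12))  # ~9.4e-14
```

In other words, any realistic application will run out of disk, memory, and time long before it has a meaningful chance of seeing two identical GUIDs.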
How to assign same value to multiple variables in Kotlin I have two variables and want to assign same value to both the variables at the same time something like below: ``` var allGood: Boolean = false val deviceId: String = "3550200583" var isValidId: Boolean = false allGood = isValidId = deviceId.length > 0 && deviceId.length <= 16 ``` is there any way to achieve this?
Because assignment is not an expression in Kotlin, you can't do multiple assignments that way.  But there are other ways.  The most obvious is simply: ``` isValidId = deviceId.length > 0 && deviceId.length <= 16 allGood = isValidId ``` A more idiomatic (if longer) way is: ``` (deviceId.length > 0 && deviceId.length <= 16).let { allGood = it isValidId = it } ``` (By the way, you can simplify the condition to `deviceId.length in 1..16`.) There are a couple of reasons why Kotlin doesn't allow this.  The main one [is](https://discuss.kotlinlang.org/t/assignments-as-expressions/1564) that it's incompatible with the syntax for calling a function with named parameters: `fn(paramName = value)`.  But it also avoids any confusion between `=` and `==` (which could otherwise cause hard-to-spot bugs).  See also [here](https://discuss.kotlinlang.org/t/assignment-not-allow-in-while-expression/339/15).
How do I get all of the output from my .exe using subprocess and Popen? I am trying to run an executable and capture its output using `subprocess.Popen`; however, I don't seem to be getting all of the output. ``` import subprocess as s from subprocess import Popen import os ps = Popen(r'C:\Tools\Dvb_pid_3_0.exe', stdin = s.PIPE,stdout = s.PIPE) print 'pOpen done..' while True: line = ps.stdout.readline() print line ``` It prints two lines less than the original exe file when opened manually. I tried an alternative approach with the same result: ``` f = open('myprogram_output.txt','w') proc = Popen('C:\Tools\Dvb_pid_3_0.exe ', stdout =f) line = proc.stdout.readline() print line f.close() ``` Can anyone please help me to get the full data of the exe? ## As asked by Sebastian: Original exe file last few lines o/p: ``` -Gdd : Generic count (1 - 1000) -Cdd : Cut start at (0 - 99) -Edd : Cut end at (1 - 100) Please select the stream file number below: 1 - .\pdsx100-bcm7230-squashfs-sdk0.0.0.38-0.2.6.0-prod.sao.ts ``` The o/p I get after running: ``` -P0xYYYY : Pid been interested -S0xYYYY : Service ID been interested -T0xYYYY : Transport ID been interested -N0xYYYY : Network ID been interested -R0xYYYY : A old Pid been replaced by this PID -Gdd : Generic count (1 - 1000) ``` So we can see some lines are missing. I have to write 1 and choose a value after "Please select the stream file number below" appears. I tried to use `ps.stdin.write('1\n')`. It didn't print the value in the exe file. New code: ``` #!/usr/bin/env python from subprocess import Popen, PIPE cmd = r'C:\Tools\Dvb_pid_3_0.exe' p = Popen(cmd, stdin=PIPE, stdout=None, stderr=None, universal_newlines=True) stdout_text, stderr_text = p.communicate(input="1\n\n") print("stdout: %r\nstderr: %r" % (stdout_text, stderr_text)) if p.returncode != 0: raise RuntimeError("%r failed, status code %d" % (cmd, p.returncode)) ``` Thanks Sebastian. I am able to see the entire output but not able to feed in any input with the current code.
To get all stdout as a string: ``` from subprocess import check_output as qx cmd = r'C:\Tools\Dvb_pid_3_0.exe' output = qx(cmd) ``` To get both stdout and stderr as a single string: ``` from subprocess import STDOUT output = qx(cmd, stderr=STDOUT) ``` To get all lines as a list: ``` lines = output.splitlines() ``` To get lines as they are being printed by the subprocess: ``` from subprocess import Popen, PIPE p = Popen(cmd, stdout=PIPE, bufsize=1) for line in iter(p.stdout.readline, ''): print line, p.stdout.close() if p.wait() != 0: raise RuntimeError("%r failed, exit status: %d" % (cmd, p.returncode)) ``` Add `stderr=STDOUT` to the `Popen()` call to merge stdout/stderr. Note: if `cmd` uses block-buffering in the non-interactive mode then lines won't appear until the buffer flushes. The [`winpexpect`](https://bitbucket.org/geertj/winpexpect/wiki/Home) module might be able to get the output sooner. To save the output to a file: ``` import subprocess with open('output.txt', 'wb') as f: subprocess.check_call(cmd, stdout=f) # to read line by line with open('output.txt') as f: for line in f: print line, ``` If `cmd` always requires input, even an empty one, set `stdin`: ``` import os with open(os.devnull, 'rb') as DEVNULL: output = qx(cmd, stdin=DEVNULL) # use subprocess.DEVNULL on Python 3.3+ ``` You could combine these solutions e.g., to merge stdout/stderr, and to save the output to a file, and to provide an empty input: ``` import os from subprocess import STDOUT, check_call as x with open(os.devnull, 'rb') as DEVNULL, open('output.txt', 'wb') as f: x(cmd, stdin=DEVNULL, stdout=f, stderr=STDOUT) ``` To provide all input as a single string you could use the `.communicate()` method: ``` #!/usr/bin/env python from subprocess import Popen, PIPE cmd = ["python", "test.py"] p = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE, universal_newlines=True) stdout_text, stderr_text = p.communicate(input="1\n\n") print("stdout: %r\nstderr: %r" % (stdout_text, stderr_text)) if p.returncode != 0: raise RuntimeError("%r failed, status code %d" % (cmd, p.returncode)) ``` where `test.py`: ``` print raw_input('abc')[::-1] raw_input('press enter to exit') ``` If your interaction with the program is more like a conversation then you might need the [`winpexpect` module](https://bitbucket.org/geertj/winpexpect/wiki/Home). Here's an [example from `pexpect` docs](http://www.noah.org/wiki/Pexpect#Overview): ``` # This connects to the openbsd ftp site and # downloads the recursive directory listing. from winpexpect import winspawn as spawn child = spawn ('ftp ftp.openbsd.org') child.expect ('Name .*: ') child.sendline ('anonymous') child.expect ('Password:') child.sendline ('[email protected]') child.expect ('ftp> ') child.sendline ('cd pub') child.expect('ftp> ') child.sendline ('get ls-lR.gz') child.expect('ftp> ') child.sendline ('bye') ``` To send special keys such as `F3`, `F10` on Windows you might need the [`SendKeys` module](http://www.rutherfurd.net/python/sendkeys/) or its pure Python implementation [`SendKeys-ctypes`](http://code.google.com/p/sendkeys-ctypes/). Something like: ``` from SendKeys import SendKeys SendKeys(r""" {LWIN} {PAUSE .25} r C:\Tools\Dvb_pid_3_0.exe{ENTER} {PAUSE 1} 1{ENTER} {PAUSE 1} 2{ENTER} {PAUSE 1} {F3} {PAUSE 1} {F10} """) ``` It doesn't capture output.
Deleted DataTable row gets added again after sorting I am using the [DataTables](https://datatables.net) jQuery plugin to display an HTML table and I have made an `AJAX` row deletion function that sends the deletion `POST` request in the background and displays the returned `HTML` message in an `#info` div and removes the related `HTML` row from the `DOM` using the jQuery `remove()` function. Here's the function that gets called as `deleteData($id)` from the row's delete button: ``` function deleteData(id) { var data = "id="+id; $.ajax( { type: "POST", url: "delete.php", data: data } ).done ( function(data) { $('#info').html(data); var setID = id; $('#row' + setID).remove(); } ); } ``` Everything works good so far, the row gets deleted and the return message is shown, however, when I click a header to sort the rows again (ascendingly or descendingly) the deleted row reappears (in current session, not after a page reload) and it's still searchable, I wish to fix that. From what I've read, the issue is that DataTables only loads the table once, but there's some way to make it load from the `DOM` at each sort. I have tried many different ways to do that but it doesn't work.
I had never used DataTables before, but reading your question I was curious to learn a bit. Then I noticed that you're removing the row merely using jQuery `.remove()` on the given row (in other words removing the real exposed DOM row), while [this DataTable page](https://datatables.net/reference/api/row%28%29.remove%28%29) states you should use the dedicated `.row().remove()`. So in your example I guess you should replace `$('#row' + setID).remove();` by `yourDataTable.row($('#row' + setID)).remove();`. EDIT. Thanks to @Bryan Ramsey's comment, I just realized that my suggested solution was incomplete: - the statement used by the OP `$('#row' + setID).remove();` *only removes the row from the DOM*, but keeps it in the DataTable object, so it appears again later - then I suggested to rather use `yourDataTable.row($('#row' + setID)).remove();`, which really removes the row from the DataTable object, *but now keeps it in the DOM* so it doesn't visually disappear before the next change happens! - so the real complete solution is `yourDataTable.row($('#row' + setID)).remove().draw();`, where `draw()` ensures the row immediately disappears. **NOTE: (if you're getting `.row is not a function` or `$datatable is not defined`) You must reinitialize your datatable before removing the row, as** ``` var $datatable = $('#datatable-selector').DataTable(); $datatable.row($deletingRowSelector).remove().draw(); ```
C/C++ returning struct by value under the hood (This question is specific to my machine's architecture and calling conventions, Windows x86\_64) I don't exactly remember where I had read this, or if I had recalled it correctly, but I had heard that, when a function should return some struct or object by value, it will either stuff it in `rax` (if the object can fit in the register width of 64 bits) or be passed a pointer to where the resulting object would be (I'm guessing allocated in the calling function's stack frame) in `rcx`, where it would do all the usual initialization, and then a `mov rax, rcx` for the return trip. That is, something like ``` extern some_struct create_it(); // implemented in assembly ``` would really have a secret parameter like ``` extern some_struct create_it(some_struct* secret_param_pointing_to_where_i_will_be); ``` Did my memory serve me right, or am I incorrect? How are large objects (i.e. wider than the register width) returned by value from functions?
Here's a simple disassembly of some code illustrating what you're describing: ``` typedef struct { int b; int c; int d; int e; int f; int g; char x; } A; A foo(int b, int c) { A myA = {b, c, 5, 6, 7, 8, 10}; return myA; } int main() { A myA = foo(5,9); return 0; } ``` and here's the disassembly of the foo function, and the main function calling it **main:** ``` push ebp mov ebp, esp and esp, 0FFFFFFF0h sub esp, 30h call ___main lea eax, [esp+20] ; placing the addr of myA in eax mov dword ptr [esp+8], 9 ; param passing mov dword ptr [esp+4], 5 ; param passing mov [esp], eax ; passing myA addr as a param call _foo mov eax, 0 leave retn ``` **foo:** ``` push ebp mov ebp, esp sub esp, 20h mov eax, [ebp+12] mov [ebp-28], eax mov eax, [ebp+16] mov [ebp-24], eax mov dword ptr [ebp-20], 5 mov dword ptr [ebp-16], 6 mov dword ptr [ebp-12], 7 mov dword ptr [ebp-8], 9 mov byte ptr [ebp-4], 0Ah mov eax, [ebp+8] mov edx, [ebp-28] mov [eax], edx mov edx, [ebp-24] mov [eax+4], edx mov edx, [ebp-20] mov [eax+8], edx mov edx, [ebp-16] mov [eax+0Ch], edx mov edx, [ebp-12] mov [eax+10h], edx mov edx, [ebp-8] mov [eax+14h], edx mov edx, [ebp-4] mov [eax+18h], edx mov eax, [ebp+8] leave retn ``` Now let's go through what just happened. When calling foo, the parameters were passed in the following way: 9 was at the highest address, then 5, then the address where myA in main begins: ``` lea eax, [esp+20] ; placing the addr of myA in eax mov dword ptr [esp+8], 9 ; param passing mov dword ptr [esp+4], 5 ; param passing mov [esp], eax ; passing myA addr as a param ``` Within `foo` there is some local `myA` which is stored on the stack frame. Since the stack grows downwards, the lowest address of `myA` begins at `[ebp - 28]`; the -28 offset could be caused by struct alignment, so I'm guessing the size of the struct here is 28 bytes and not 25 as expected.
and as we can see in `foo`, after the local `myA` of `foo` was created and filled with parameters and immediate values, it is copied and re-written to the address of `myA` passed from `main` (this is the actual meaning of return by value) ``` mov eax, [ebp+8] mov edx, [ebp-28] ``` `[ebp + 8]` is where the address of `main::myA` was stored (memory addresses go upwards, hence ebp + old ebp (4 bytes) + return address (4 bytes), at overall ebp + 8, to get to the first byte of `main::myA`); as said earlier, `foo::myA` is stored at `[ebp-28]` as the stack goes downwards ``` mov [eax], edx ``` place `foo::myA.b` in the address of the first data member of `main::myA`, which is `main::myA.b` ``` mov edx, [ebp-24] mov [eax+4], edx ``` place the value that resides at the address of `foo::myA.c` in edx, and place that value at the address of `main::myA.b` + 4 bytes, which is `main::myA.c`; as you can see, this process repeats itself throughout the function ``` mov edx, [ebp-20] mov [eax+8], edx mov edx, [ebp-16] mov [eax+0Ch], edx mov edx, [ebp-12] mov [eax+10h], edx mov edx, [ebp-8] mov [eax+14h], edx mov edx, [ebp-4] mov [eax+18h], edx mov eax, [ebp+8] ``` which basically shows that when returning by value a struct that could not fit in a register, the address where the return value should reside is passed as a param to the function, and within the function being called the values of the returned struct are copied into the address passed as a parameter... hope this example helped you visualize what happens under the hood a little bit better :) **EDIT** I hope that you've noticed that my example was using 32 bit assembly, and **I KNOW** you've asked regarding x86-64, but I'm currently unable to disassemble code on a 64 bit machine, so I hope you take my word on it that the concept is exactly the same for both 64 bit and 32 bit, and that the calling convention is nearly the same
Ruby object references I came across a snippet of code in Ruby that confused me. Boiling it down to a simple example below to illustrate. Why is it that sArr does not have its contents automatically updated in the first case but it happens in the second case? The snippets are short and self explanatory with three returns separating the two scenarios. ``` 2.0.0-p598 :068 > str = "Hello" => "Hello" 2.0.0-p598 :069 > sArr = [str] => ["Hello"] 2.0.0-p598 :070 > str = str.upcase => "HELLO" 2.0.0-p598 :071 > str => "HELLO" 2.0.0-p598 :072 > sArr => ["Hello"] # Why is this not ["HELLO"] like in the sequence below? 2.0.0-p598 :073 > 2.0.0-p598 :074 > 2.0.0-p598 :075 > 2.0.0-p598 :076 > str = "Hello" => "Hello" 2.0.0-p598 :077 > sArr = [str] => ["Hello"] 2.0.0-p598 :078 > str.upcase! => "HELLO" 2.0.0-p598 :079 > sArr => ["HELLO"] ```
This is because a new String instance is created with `str = str.upcase` but not `str.upcase!`: ``` str = "Hello" str.object_id # => 70132476337960 str = str.upcase # => "HELLO" str.object_id # => 70132476374360 (new instance) str = "Hello" str.object_id # => 70132476415240 str.upcase! # => "HELLO" str.object_id # => 70132476415240 (same instance) ``` `sArr` is unaffected by `str = str.upcase` since it is holding a reference to the initial `"Hello"`. Assigning to `str` doesn't change the object in `sArr` any more than if you did `str = 'World'`. In contrast, `str.upcase!` modifies the instance held by `sArr`.
Is there a way to write custom JsonConverter per object I have a Json object something like this: ``` {"company": "My Company", "companyStart" : "2015/01/01", "employee" : { "name" : "john doe", "startDate" : 1420434000000 } } ``` And my json object like this: ``` public class Company { public string company; public DateTime companyStart; public Employee employee; } public class Employee { public string name; public DateTime startDate; } ``` My original code deserialize like this ``` JsonConvert.DeserializeObject<Company>(jsonString); ``` This code converts Company.companyStart without trouble, but when it gets to Employee.startDate it doesn't know what to do with the Long. [This](https://stackoverflow.com/questions/18088406/how-to-deserialize-date-milliseconds-with-json-net) post showed me how to create custom JsonConverter to convert long to DateTime, but as you can see in my case, this would give me trouble converting Company.companyStart to DateTime. So... I was thinking of doing something like this: ``` public class Company : JsonBase { ... } public class Employee : JsonBase { ... Employee() { Converter = new CustomDateConverter(); } } public class JsonBase { private JsonConverter converter; [JsonIgnore] public JsonConverter Converter => converter ?? (converter = new StandardConverter()); } ``` JsonBase would contain either the standard converter or and in my code I would convert something like this: ``` public T CreateJsonObject<T>() where T : JsonBase { JsonBase json = (T) Activator.CreateInstance(typeof (T)); JsonConvert.DeserializeObject<T>(jsonString, json.Converter); } ``` The problem is that this doesn't quite work because this method will simply use the top most Converter to convert everything instead of using converter per object. Is there a way to use the converter per object? Or perhaps there is a better way to do this.
How about adapting the custom converter that you wrote to understand both formats: ``` public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer) { if (reader.ValueType == typeof(string)) { return DateTime.Parse((string)reader.Value); } else if (reader.ValueType == typeof(long)) { return new DateTime(1970, 1, 1).AddMilliseconds((long)reader.Value); } throw new NotSupportedException(); } ``` Alternatively you could apply the converter only to a specific property of your model by decorating it with the `JsonConverter` attribute: ``` public class Employee { public string name; [JsonConverter(typeof(MyConverter))] public DateTime startDate; } ``` This way you don't need to register the converter globally and it won't mess up with the other standard date formats.
Setting up Git on EC2 to pull from GitHub repo I'm kind of new to both EC2 and Git, and I have just set up my first instance of EC2, using a clean Amazon Linux AMI. I also installed MySQL, Apache and PHP and opened some ports to make it work as a normal web server, responding to an elastic IP as well. Now, my code is on a private repo on GitHub, and I would like to perform simple deployments by doing `git pull` or something like that. Git is also installed on the server already. I know I could set up my git repo on the server using my personal ssh key, but it seems odd. I guess another solution would be to create a new GitHub user and use it on the server, but it doesn't seem right either. How do I achieve this in an elegant, safe way?
To avoid having to keep an SSH private key on your EC2 instance, people often use a workflow that involves pushing to that remote server in order to deploy. Essentially, you set up a bare git repository there with a `pre-receive` hook that deploys to another directory. There is a simple example of doing this in [this tutorial](http://toroid.org/ams/git-website-howto). Then you only need to have your SSH *public* key in `~/.ssh/authorized_keys` on the server. However, with this workflow, you couldn't deploy directly from your GitHub repository - you would need to pull it locally and then push to the EC2 machine. An alternative is to use GitHub's [deploy keys](http://help.github.com/deploy-keys/) mechanism. This would involve creating a new SSH key-pair on your EC2 instance, and adding the public key as a deploy key into your private repository on GitHub. Then you can pull directly from your private GitHub repository to your EC2 instance.
Why use a hashmap? Someone told me hashmaps are rather slow. So I am just wondering whether to use hashmap or a switch case logic. My requirement is this. I have a set of CountryNames and CountryCodes. My ListView displays the names of the countries. When an country name item is clicked, I must Toast the CountryCode. In such a scenario, should I maintain a HashMap of CountryNames and Codes and access this to get the corresponding Code?: ``` myMap.put("US", 355); myMap.put("UK", 459); //etc ``` Or is it better to write a switch case like so ``` switch (vCountryNamePos): { case 0: //US vCountryCode = 355; break; case 1: //UK vCountryCode = 459; break; //etc } ``` Which is faster? If not Hashmaps, then in what practical scenarios would a Map be used? -Kiki
For two values, a switch will be faster. A hashmap will always at least check for equality of your key, so it can't beat one or two .equals() tests. For many values, a hash will be faster. A switch has to test every value until it finds the right one.

For a small number of values (say up to 10 or so), prefer a switch. It's gonna be lighter and faster.

For a big number of values (in upwards of 50), prefer a hash. A hash will not have to check all values, so it will be faster than a switch when the number of values increases.

For 10~50 values, I'd suggest you do what you feel is more readable because the performance is gonna be similar.

Now if you are looking into extreme performance on static strings known at compile time, you may look into code-generating tools like GNU gperf.

If you don't know your strings at compile time but you know they are gonna be decently short and decently uniform in length, or with common prefixes, you are probably gonna be fastest with a Trie data structure.

If you want to keep performance on a great number of very heterogeneous strings, or on objects that may not be Strings, then HashMap is the way to go. It's pretty much unbeatable when the number of objects is very high (in the billions or more).
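The tradeoff described above (hash lookup vs. a chain of equality tests standing in for a switch) is language-agnostic, so here is a rough sketch in Python rather than the question's Java, purely to illustrate the shape of it. The country codes come from the question and are placeholders, not real dialing codes; absolute timings are machine-dependent and say nothing about Android specifically.

```python
import timeit

# A hash lookup vs. a chain of equality tests (the moral
# equivalent of a switch). Codes taken from the question.
codes = {"US": 355, "UK": 459}

def by_map(name):
    # One hash + one equality check, regardless of table size.
    return codes.get(name)

def by_chain(name):
    # Walks each branch in turn until one matches.
    if name == "US":
        return 355
    elif name == "UK":
        return 459
    return None

# Which is faster depends on how many branches the chain has to
# walk before it hits a match; with only two entries the chain wins.
map_time = timeit.timeit(lambda: by_map("UK"), number=50_000)
chain_time = timeit.timeit(lambda: by_chain("UK"), number=50_000)
print(f"map: {map_time:.4f}s  chain: {chain_time:.4f}s")
```

Both functions return the same results; only their scaling behavior differs as the table grows.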
Css fade-in-out blinking I'm trying to make a div flash, but I don't want the text inside it to flash, just the button itself. I'm not sure how to get around this. I hope this makes sense. Can anyone help, please? Here is the code:

```
@-moz-keyframes blink {0%{opacity:1;} 50%{opacity:0.5;} 100%{opacity:1;}} /* Firefox */
@-webkit-keyframes blink {0%{opacity:1;} 50%{opacity:0.5;} 100%{opacity:1;}} /* Webkit */
@-ms-keyframes blink {0%{opacity:1;} 50%{opacity:0.5;} 100%{opacity:1;}} /* IE */
@keyframes blink {0%{opacity:1;} 50%{opacity:0.5;} 100%{opacity:1;}} /* Opera */

.download {
    background-color: red;
    padding: 15px 15px 15px 15px;
    text-align:center;
    margin-bottom: 4px;
    font-size: 24px;
    border-radius: 5px;
    -moz-transition:all 0.5s ease-in-out;
    -webkit-transition:all 0.5s ease-in-out;
    -o-transition:all 0.5s ease-in-out;
    -ms-transition:all 0.5s ease-in-out;
    transition:all 0.5s ease-in-out;
    -moz-animation:blink normal 1.5s infinite ease-in-out; /* Firefox */
    -webkit-animation:blink normal 1.5s infinite ease-in-out; /* Webkit */
    -ms-animation:blink normal 1.5s infinite ease-in-out; /* IE */
    animation:blink normal 1.5s infinite ease-in-out; /* Opera */
}
```

```
<div class="download">DOWNLOAD TRIAL</div>
```
```
@keyframes blink {
  0% { background-color: rgba(255,0,0,1) }
  50% { background-color: rgba(255,0,0,0.5) }
  100% { background-color: rgba(255,0,0,1) }
}

@-webkit-keyframes blink {
  0% { background-color: rgba(255,0,0,1) }
  50% { background-color: rgba(255,0,0,0.5) }
  100% { background-color: rgba(255,0,0,1) }
}

.download {
    padding: 15px 15px 15px 15px;
    text-align:center;
    margin-bottom: 4px;
    font-size: 24px;
    border-radius: 5px;
    -moz-transition:all 0.5s ease-in-out;
    -webkit-transition:all 0.5s ease-in-out;
    -o-transition:all 0.5s ease-in-out;
    -ms-transition:all 0.5s ease-in-out;
    transition:all 0.5s ease-in-out;
    -moz-animation:blink normal 1.5s infinite ease-in-out; /* Firefox */
    -webkit-animation:blink normal 1.5s infinite ease-in-out; /* Webkit */
    -ms-animation:blink normal 1.5s infinite ease-in-out; /* IE */
    animation:blink normal 1.5s infinite ease-in-out; /* Opera */
}
```

```
<div class="download">
  <h1>DOWNLOAD</h1>
</div>
```

`opacity` will affect the div and all its children. What I suspect you need is a background color with an alpha (transparency) component. So... use RGBA colors on the background.
Syntax sugar for signal slot Recently I searched the internet for a good signal-slot library and wondered why on Earth we need such a cumbersome syntax for connecting member methods to signals. Usually we have to write something like this:

```
mySignal.connect(&MyClassName::myMethodName, this);
```

or like that:

```
mySignal += std::bind(&MyClassName::myMethodName, this, std::placeholders::_1);
```

There is obvious duplication and unnecessary typing. Is it possible in modern C++ to implement such functionality in, for example, the C# way:

```
mySignal += myMethodName
```

and automatically capture the pointer to the member function and the this pointer from the context?
> Is it possible in modern C++ to implement such functionality in, for example, the C# way? [...]

No, that's not possible in C++. The syntax for taking the address of a member function requires qualifying the function name with the class name (i.e. `&MyClassName::myMethodName`).

If you don't want to specify the class name, one possibility is to use lambdas. In particular, if you can afford a C++14 compiler, generic lambdas allow writing:

```
mySignal.connect([this] (auto x) { myMethodName(x); });
```

Sadly, you can't get much terser than this. You can use default lambda capture to save some syntactic noise:

```
mySignal.connect([&] (auto x) { myMethodName(x); });
```

However, Scott Meyers warns against the pitfalls of default lambda capture modes in his new book *Effective Modern C++*. From the readability point of view, I'm not sure this improves things a lot compared to your first option.

Besides, things soon become awkward if you want your lambda to perfectly forward its argument(s) to `myMethodName`:

```
mySignal.connect([&] (auto&& x) { myMethodName(std::forward<decltype(x)>(x)); });
```

If you don't mind macros (I usually do), you can employ a preprocessor-based solution [as suggested by Quentin in their answer](https://stackoverflow.com/a/28398551/1932150). However, I would prefer using a perfect-forwarding lambda in that case:

```
#define SLOT(name) \
    [this] (auto&&... args) { name (std::forward<decltype(args)>(args)...); }
```

Which you could use like so:

```
e.connect(SLOT(foo));
```

Here is a [*live demo on Coliru*](http://coliru.stacked-crooked.com/a/2d5ff31eab8cd993).
How do I parse a date in PowerShell? I'm writing a script that removes backups older than five days. I check by the name of the directory and not the actual date. How do I parse the directory name to a date to compare them? Part of my script:

```
...
foreach ($myDir in $myDirs)
{
    $dirName=[datetime]::Parse($myDir.Name)
    $dirName= '{0:dd-MM-yyyy}' -f $dirName
    if ($dirName -le "$myDate")
    {
        remove-item $myPath\$dirName -recurse
    }
}
...
```

Maybe I'm doing something wrong, because it still does not remove last month's directories. The whole script with Akim's suggestions is below:

```
Function RemoveOldBackup([string]$myPath)
{
    $myDirs = Get-ChildItem $myPath

    if (Test-Path $myPath)
    {
        foreach ($myDir in $myDirs)
        {
            #variable for directory date
            [datetime]$dirDate = New-Object DateTime

            #check that directory name could be parsed to DateTime
            if([datetime]::TryParse($myDir.Name, [ref]$dirDate))
            {
                #check that directory is 5 or more days old
                if (([DateTime]::Today - $dirDate).TotalDays -ge 5)
                {
                    remove-item $myPath\$myDir -recurse
                }
            }
        }
    }
    Else
    {
        Write-Host "Directory $myPath does not exist!"
    }
}

RemoveOldBackup("E:\test")
```

Directory names are, for example, 09-07-2012, 08-07-2012, ..., 30-06-2012, and 29-06-2012.
Try to calculate the difference between `[DateTime]::Today` and the result of parsing the directory name:

```
foreach ($myDir in $myDirs)
{
    # Variable for directory date
    [datetime]$dirDate = New-Object DateTime

    # Check that directory name could be parsed to DateTime
    if ([DateTime]::TryParseExact($myDir.Name,
                                  "dd-MM-yyyy",
                                  [System.Globalization.CultureInfo]::InvariantCulture,
                                  [System.Globalization.DateTimeStyles]::None,
                                  [ref]$dirDate))
    {
        # Check that directory is 5 or more days old
        if (([DateTime]::Today - $dirDate).TotalDays -ge 5)
        {
            remove-item $myPath\$($myDir.Name) -recurse
        }
    }
}
```
Switching between LWUIT Form and LCDUI Form I have built an LWUIT UI class which contains the Midlet. I am basically using a theme from this midlet. But I need to jump to another LCDUI form which contains some LCDUI controls, and I need to set the display to that LCDUI form. So is it possible to jump from an LWUIT form to an LCDUI form and set the display to the LCDUI form? If possible, how?
I used the following code to show both the LWUIT Form and the LCDUI Form. See the sample code.

```
com.sun.lwuit.Form lwuitForm;

protected void startApp() throws MIDletStateChangeException {
    Display.init(this);
    lwuitForm = new com.sun.lwuit.Form("LWUIT Form");
    lwuitForm.addComponent(new TextField(""));
    final MIDlet midlet = this;
    final Command abtUsCmd = new Command("Next") {
        public void actionPerformed(ActionEvent evt) {
            javax.microedition.lcdui.Form frm = new javax.microedition.lcdui.Form("LCDUI Form");
            StringItem item = new StringItem("Text", "Sample text");
            frm.append(item);
            final javax.microedition.lcdui.Command cmd =
                new javax.microedition.lcdui.Command("Back", javax.microedition.lcdui.Command.BACK, 0);
            CommandListener cmdLis = new CommandListener() {
                public void commandAction(javax.microedition.lcdui.Command c, Displayable d) {
                    if(c == cmd) {
                        Display.init(midlet);
                        lwuitForm.show();  // Show the LWUIT form again
                    }
                }
            };
            frm.setCommandListener(cmdLis);
            frm.addCommand(cmd);
            javax.microedition.lcdui.Display.getDisplay(midlet).setCurrent(frm); // show the LCDUI Form
        }
    };
    lwuitForm.addCommand(abtUsCmd);
    lwuitForm.show();  // Show the LWUIT Form
}
```
class for handling custom exception I would like to create a class which takes std::function and allows handling specified exceptions, but I'm not sure if it is possible. Here is a pseudo draft:

```
//exception types
template<class... Args>
class CustomExceptionHandler
{
public:
    CustomExceptionHandler(std::function<void()> clb): clb_(std::move(clb)){}

    void ExecuteCallback()
    {
        try
        {
            clb_();
        }
        /*catch specified exception types*/
    }

private:
    std::function<void()> clb_;
};

//usage
CustomExceptionHandler<std::out_of_range, std::overflow_error> handler(clb);
handler.ExecuteCallback();
```

I don't know how to use a variadic template to grab the exception types and use them later. Is it possible? I guess that a tuple may be helpful.
It's possible! I've made a solution (which you can run [here](http://coliru.stacked-crooked.com/a/37ffaff563cd1723)) that expands the parameter pack of exception types into a series of recursive function calls, where each function attempts to catch one type of exception. The innermost recursive call then invokes the callback. ``` namespace detail { template<typename First> void catcher(std::function<void()>& clb){ try { clb(); // invoke the callback directly } catch (const First& e){ // TODO: handle error as needed std::cout << "Caught an exception with type \"" << typeid(e).name(); std::cout << "\" and message \"" << e.what() << "\"\n"; } } template<typename First, typename Second, typename... Rest> void catcher(std::function<void()>& clb){ try { catcher<Second, Rest...>(clb); // invoke the callback inside of other handlers } catch (const First& e){ // TODO: handle error as needed std::cout << "Caught an exception with type \"" << typeid(e).name(); std::cout << "\" and message \"" << e.what() << "\"\n"; } } } template<class... Args> class CustomExceptionHandler { public: CustomExceptionHandler(std::function<void()> clb): clb_(std::move(clb)){} void ExecuteCallback() { detail::catcher<Args...>(clb_); } private: std::function<void()> clb_; }; int main(){ std::function<void()> clb = [](){ std::cout << "I'm gonna barf!\n"; throw std::out_of_range("Yuck"); //throw std::overflow_error("Ewww"); }; CustomExceptionHandler<std::out_of_range, std::overflow_error> handler(clb); handler.ExecuteCallback(); return 0; } ``` Output: > > `I'm gonna barf!` > > > `Caught an exception with type "St12out_of_range" and message "Yuck"` > > >
Google's Python exercise about lists very different from the given solution I found some Python exercises that were made by Google in their [Python classes](https://developers.google.com/edu/python/) and decided to spend some time with them. Given the following description:

> E. Given two lists sorted in increasing order, create and return a merged list of all the elements in sorted order. You may modify the passed in lists. Ideally, the solution should work in "linear" time, making a single pass of both lists.

So, knowing that comparing two characters is \$O(1)\$, Python's `sorted()` function in this situation is \$O(n \log{n})\$ and thinking that merging two lists into a new one is \$O(k)\$ where \$k\$ is the number of elements in `list1 + list2`, which is quite "linear" in the sense of what was asked, I did...

```
def linear_merge(list1, list2):
    return sorted(list1 + list2)
```

However, when looking at the problem's solution, I found it to be somewhat different:

```
def linear_merge(list1, list2):
  result = []
  # Look at the two lists so long as both are non-empty.
  # Take whichever element [0] is smaller.
  while len(list1) and len(list2):
    if list1[0] < list2[0]:
      result.append(list1.pop(0))
    else:
      result.append(list2.pop(0))

  # Now tack on what's left
  result.extend(list1)
  result.extend(list2)
  return result
```

which is followed by the following comment:

> Note: the solution above is kind of cute, but unfortunately `list.pop(0)` is not constant time with the standard Python list implementation, so the above is not strictly linear time. An alternate approach uses `pop(-1)` to remove the endmost elements from each list, building a solution list which is backwards. Then use reversed() to put the result back in the correct order. That solution works in linear time, but is more ugly.

This confused me a little bit, since my solution looks... better in general (code and complexity, given the last comment paragraph).
Are any of my assumptions about my version of the code wrong? Keep in mind that I wrote this version using Python 3, whereas Google's Python classes use Python 2. I'm not really sure, but this may have something to do with it. Here is the rest of the related source to give a full example:

```
def test(got, expected):
    if got == expected:
        prefix = ' OK '
    else:
        prefix = '  X '
    print('{} got: {} expected: {}'.format(prefix, repr(got), repr(expected)))


# Calls the above functions with interesting inputs.
def main():
    print('linear_merge')
    test(linear_merge(['aa', 'xx', 'zz'], ['bb', 'cc']),
         ['aa', 'bb', 'cc', 'xx', 'zz'])
    test(linear_merge(['aa', 'xx'], ['bb', 'cc', 'zz']),
         ['aa', 'bb', 'cc', 'xx', 'zz'])
    test(linear_merge(['aa', 'aa'], ['aa', 'bb', 'bb']),
         ['aa', 'aa', 'aa', 'bb', 'bb'])


if __name__ == '__main__':
    main()
```
In a nutshell, yours is better, for two reasons.

First, Python isn't designed for speed. It's decently fast, but the goal is code like yours: so clear, concise, obvious, and readable that anyone can glance at it and immediately see what it does. You can then spend the rest of the project's development time working on the difficult problems (like attending meetings).

Second, the "answer" code doesn't really answer the exercise, as it notes. It looks right in theory, but popping elements from the beginning of a `list` is not the most performant operation in most languages, including Python. It would be a more reasonable solution with a linked list, which is a type that was probably omitted from Python precisely because its utility is mostly limited to fixing the micro-optimizations in bloated code. This is like optimizing a recursive function by making it tail-recursive, and then admitting that it doesn't make any difference because Python doesn't have tail call optimization.

If you were using this code in a real program and determined through actual testing that this `linear_merge` function was taking too much time due to the extra sorting, you might then be justified in optimizing it.

For fun, here's something with indexing instead of `pop()`:

```
def linear_merge(list1, list2):
    result = []
    c1 = c2 = 0
    while c1 < len(list1) and c2 < len(list2):
        if list1[c1] <= list2[c2]:
            result.append(list1[c1])
            c1 += 1
        else:
            result.append(list2[c2])
            c2 += 1
    result.extend(list1[c1:])
    result.extend(list2[c2:])
    return result
```

This might be faster due to not `pop()`ing items from the beginning of each `list`, but it also might be slower (or possibly more memory-intensive) due to having to slice a `list` at the end. I leave it as an exercise to you to time these approaches... but remember that the most important time to conserve is usually your own, not your computer's.
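For completeness, the "more ugly" linear approach that the exercise's own note describes (pop from the end of each list with `pop(-1)`, building the result backwards, then reverse it) can be sketched like this. This is my own illustration of the hinted-at approach, not Google's official code:

```python
def linear_merge_reversed(list1, list2):
    # Build the result backwards: pop() from the end is O(1) on a
    # Python list, unlike pop(0), so every element costs constant time.
    result = []
    while list1 and list2:
        if list1[-1] >= list2[-1]:
            result.append(list1.pop())
        else:
            result.append(list2.pop())
    # One list is now empty; drain the other, still in descending order.
    while list1:
        result.append(list1.pop())
    while list2:
        result.append(list2.pop())
    result.reverse()  # put the merged list back into ascending order
    return result
```

Like the original "cute" solution, this consumes the input lists, which the exercise explicitly allows ("You may modify the passed in lists").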
How can I access a single XML element's value using C#.net web-pages with WebMatrix? I've looked at a lot of resources, done a lot of research, and tried many "best-guesses" to access a single element at a time using WebMatrix with C#, web-pages, however nothing I am trying is getting through. Consider a simple xml document that looks like this: ``` <root> <requisitionData> <element1>I am element 1</element1> <element2>I am element 2</element2> </requisitionData> </root> ``` I know I can use a foreach loop, like so: ``` @using System.Xml.Linq XDocument doc = XDocument.Load(Server.MapPath("~/User_Saves/cradebaugh/testFile.xml")); foreach (XElement element in doc.Descendants("requisitionData")) { @element.Value } ``` And that, of course, works fine. But what if I simply wanted to store the single element, `<element1>`'s value in a string variable? I've looked here (link below), but I can't make heads or tails of this code (it barely even looks like C# to me, but then again, I'm so new to parsing XML...): <http://social.msdn.microsoft.com/Forums/en-US/csharpgeneral/thread/b14ce4d1-77f1-420d-ad91-0989794a1d45/> I've also checked here: [How to Get XML Node from XDocument](https://stackoverflow.com/questions/752271/how-to-get-xml-node-from-xdocument) But the code shown makes no sense to me here either. I keep thinking there must be a simpler way to do this, hopefully without learning a whole new querying approach. 
---------------------------------THINGS I'VE TRIED---------------------------------

```
XDocument doc = XDocument.Load(Server.MapPath("~/User_Saves/cradebaugh/testFile.xml"));
string element = doc.Descendants("requisitionData").Descendants("element1").Value;
```

**Error I receive:** "missing using directive or assembly reference"

```
XDocument doc = XDocument.Load(Server.MapPath("~/User_Saves/cradebaugh/testFile.xml"));
XElement element = doc.Descendants("element1");
string val = element.Value;
```

**Error I receive:** Cannot implicitly convert type 'System.Collections.Generic.IEnumerable' to 'System.Xml.Linq.XElement'. An explicit conversion exists (are you missing a cast?)

I have, indeed, tried other things, but I get pretty much the same errors as shown above. Am I making this harder than it is, or am I oversimplifying it?

-------------------------UPDATE------------------------------

I was able to get this to work:

```
string element = doc.Element("root").Element("requisitionData").Element("element1").Value;
@element
```

However, one thing that concerns me about this approach is that `.Element` selects the 'first' match, so in an xml document that looks like this:

```
<root>
  <requisitionData>
    <element1>I am element 1</element1>
    <element2>I am element 2</element2>
  </requisitionData>
  <requisitionData>
    <element1>I am element 1</element1>
    <element2>I am element 2</element2>
  </requisitionData>
</root>
```

How could I access the second occurrence of `<element1>`?
``` @using System.Xml.Linq XDocument doc = XDocument.Load(Server.MapPath("~/User_Saves/cradebaugh/testFile.xml")); foreach (XElement element in doc.Element("root").Element("requisitionData").Descendants()) { string value = element.Value; } ``` or with XPath: ``` @using System.Xml.Linq @using System.Xml.XPath XDocument doc = XDocument.Load(Server.MapPath("~/User_Saves/cradebaugh/testFile.xml")); foreach (XElement element in doc.XPathSelectElement("//requisitionData").Descendants()) { string value = element.Value; } ``` --- UPDATE: And if you wanted to select for example the second `<element1>` node from your updated example: ``` string value = doc.XPathSelectElement("//requisitionData[2]/element1").Value; ```
Android - Get Notified when a new access point is detected? Does Android provide a notification of being in the vicinity of a new Wifi network? Whether the device is configured to connect to that Wifi network depends on whether the device has the Wifi configuration set for that particular network, but is it possible to get a notification whenever entering any new Wifi network? I saw the WifiManager class, but the states inside the class do not seem to achieve what I am trying to do. Any ideas?
Use a `BroadcastReceiver` registered to receive intents with action: `WifiManager.NETWORK_STATE_CHANGED_ACTION`. In this BroadcastReceiver, you can extract a [NetworkInfo](http://developer.android.com/reference/android/net/NetworkInfo.html) object from the intent: ``` NetworkInfo ni = (NetworkInfo) intent.getParcelableExtra(WifiManager.EXTRA_NETWORK_INFO); ``` Then process `ni.getState()` to check connections/disconnections from wifi networks. Is this what you were looking for? --- *Edit after answer* So if you want to know which wifi networks are available, use [WifiManager.getScanResults()](http://developer.android.com/reference/android/net/wifi/WifiManager.html#getScanResults%28%29) This gives you the list of nearby access points in `Scanresult` objects. Those contain the SSID and BSSID of the access points, which are respectively their network name and mac address. You can get this information asynchronously by using a `BroadcastReceiver` registered to receive intents with action `WifiManager.SCAN_RESULTS_AVAILABLE_ACTION`. Then you will be notified each time the system performs a wifi scan, and you can check if a new SSID (i.e. network name) has appeared since the last scan. And finally if you wish to scan more often than the system does by default, you can trigger wifi scans yourself using `WifiManager.startScan()`.
What is the equivalent of the Bootstrap 3 'btn-default' class in Bootstrap 4? There was a nice button created by `btn-default` in Bootstrap 3. ``` <a class="btn btn-default">link</a> ``` Is there an equivalent in Bootstrap 4?
The `btn-outline-secondary` class and `btn-outline-light` class in Bootstrap 4 are the 2 closest alternatives to what used to be `btn-default` in Bootstrap 3. (there's no exact equivalent in Bootstrap 4) Here's a code snippet with live preview (notice the difference between a `button` and an `a` tag): ``` <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css" integrity="sha384-Gn5384xqQ1aoWXA+058RXPxPg6fy4IWvTNh0E263XmFcJlSAwiGgFAW/dAiS6JXm" crossorigin="anonymous"> <div class="container m-3"> <a class="btn btn-outline-secondary">outline-secondary link</a> <button type="button" class="btn btn-outline-secondary">outline-secondary 'button'</button> <br><br> <a class="btn btn-outline-light">outline-light link</a> <button type="button" class="btn btn-outline-light">outline-light 'button'</button> </div> ``` Reference: <https://getbootstrap.com/docs/4.0/components/buttons/#outline-buttons>
What does PorterDuff.Mode mean in Android graphics? What does it do? I would like to know what **PorterDuff.Mode** means in Android graphics. I know that it is a *transfer mode*. I also know that it has attributes such as DST\_IN, Multiply, etc.
Here's an excellent article with illustrations by a Google engineer: <http://ssp.impulsetrain.com/porterduff.html>

PorterDuff is described as a way of combining images as if they were "irregular shaped pieces of cardboard" overlaid on each other, as well as a scheme for blending the overlapping parts.

The default Android way of composing images is [PorterDuff.Mode.SRC\_OVER](http://developer.android.com/reference/android/graphics/PorterDuff.Mode.html#SRC_OVER), which equates to drawing the source image/color *over* the target image. In other words, it does what you would expect and draws the source image (the one you're drawing) on top of the destination image (the canvas) with the destination image showing through to the degree defined by the source image's alpha.

![PorterDuff infographic from the article](https://i.stack.imgur.com/B9syg.png)

You can use the key below to understand the algebra that [the Android docs](http://developer.android.com/reference/android/graphics/PorterDuff.Mode.html) use to describe the other modes (see [the article](http://ssp.impulsetrain.com/porterduff.html) for a fuller description with similar terms).

- **Sa** Source alpha
- **Sc** Source color
- **Da** Destination alpha
- **Dc** Destination color

Where alpha is a value `[0..1]`, and color is substituted once per channel (so use the formula once for each of red, green and blue)

The resulting values are specified as a pair in square braces as follows.

```
[<alpha-value>,<color-value>]
```

Where `alpha-value` and `color-value` are formulas for generating the resulting alpha channel and each color channel respectively.
Wix Custom Dialog Validation How can you validate fields in a Wix Custom Dialog? I've got a combo box that I'm using to set a property that cannot be null.
It's going to depend on the complexity of your validation. For a simple "one control must have a value" rule, you could do something like:

```
<UI...>
  <Dialog...>
    <Control Id="Next"...>
      <Publish Event="SpawnDialog" Value="ErrorsDlg">Not SomeProperty</Publish>
      <Publish Event="NewDialog" Value="NextDialog">SomeProperty</Publish>
    </Control>
  </Dialog>
</UI>
```

Where ErrorsDlg is a dialog that you create to resemble a MessageBox-style dialog. If you have more complicated validation, you can write a custom action that reads properties, evaluates rules and sets a flag along with an error message to be displayed. That would look more like this:

```
<UI...>
  <Dialog...>
    <Control Id="Next"...>
      <Publish Event="DoAction" Value="ValidateCA">1</Publish>
      <Publish Event="SpawnDialog" Value="ErrorsDlg">Not DataValid</Publish>
      <Publish Event="NewDialog" Value="NextDialog">DataValid</Publish>
    </Control>
  </Dialog>
</UI>
```
How to detect if Azure Powershell session has expired? I'm writing an Azure PowerShell script and to login to Azure I call `Add-AzureAccount` which will popup a browser login window. I'm wondering what's the best way to check if the authentication credentials have expired or not and thus if I should call `Add-AzureAccount` again? What I now do is that I just call `Get-AzureVM` and see if `$?` equals to `$False`. Sounds a bit hackish to me, but seems to work. And does it still work if the subscription doesn't have any virtual machines deployed?
You need to run Get-AzureRmContext and check if the Account property is populated. In the latest version of AzureRM, Get-AzureRmContext doesn't raise an error (the error is raised by cmdlets that require an active session). However, apparently in some other versions it does. This works for me:

```
function Login
{
    $needLogin = $true
    Try
    {
        $content = Get-AzureRmContext
        if ($content)
        {
            $needLogin = ([string]::IsNullOrEmpty($content.Account))
        }
    }
    Catch
    {
        if ($_ -like "*Login-AzureRmAccount to login*")
        {
            $needLogin = $true
        }
        else
        {
            throw
        }
    }

    if ($needLogin)
    {
        Login-AzureRmAccount
    }
}
```

If you are using the new Azure PowerShell API, it's much simpler:

```
function Login($SubscriptionId)
{
    $context = Get-AzContext

    if (!$context -or ($context.Subscription.Id -ne $SubscriptionId))
    {
        Connect-AzAccount -Subscription $SubscriptionId
    }
    else
    {
        Write-Host "SubscriptionId '$SubscriptionId' already connected"
    }
}
```
Making the JTextArea or JEditorPane in a JFrame scrollable I've been doing some research about Swing in order to build a CSS editor with Java. I'm stuck trying to export CSS and HTML in JTextAreas (I'll create the .css document afterwards). Here is the GridLayout that my main layout calls after clicking the "Build" menu item.

```
package csseditor_gui_built;

import java.awt.GridLayout;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JTextArea;
import javax.swing.JScrollPane;
import javax.swing.JScrollBar;
import javax.swing.text.DefaultCaret;
import java.awt.Font;
import java.awt.Color;

public class ExportGridLayout extends JFrame {

    public ExportGridLayout(String HTML, String CSS){

        GridLayout layout = new GridLayout(1,2,2,2);
        setLayout(layout);

        JTextArea textAreaHtml = new JTextArea();
        JTextArea textAreaCss = new JTextArea();

        //Creating a new font.
        Font fontumuz = new Font("Courier New", Font.PLAIN, 12);

        // Setting constructor strings
        textAreaHtml.setText(HTML);
        textAreaCss.setText(CSS);

        //Additional details..
        textAreaHtml.setEditable(false);
        textAreaCss.setEditable(false);

        //Appending font to the textArea's
        textAreaHtml.setFont(fontumuz);
        textAreaCss.setFont(fontumuz);

        // Adding the objects to JFrame
        add(textAreaHtml);
        add(textAreaCss);
    }
}
```

It's pretty straightforward. Please help me add scroll bars or panes to these text areas. The other suggestions on this website did not work for me.
It's this way...

`JTextArea text = new JTextArea();`

`JScrollPane scroll = new JScrollPane(text);`

**Edited part**

`add(scroll);`

Here is one working example for your help:

```
import java.awt.*;
import javax.swing.*;

public class JTextAreaExample
{
    private void createAndDisplayGUI()
    {
        JFrame frame = new JFrame("JTextArea Scrollable");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);

        JPanel contentPane = new JPanel();
        contentPane.setLayout(new GridLayout(1, 2, 2, 2));
        JTextArea tArea1 = new JTextArea();
        tArea1.setLineWrap(true);
        JTextArea tArea2 = new JTextArea();
        tArea2.setLineWrap(true);

        tArea1.setText("I got a long long line of text in my JTextArea");
        tArea2.setText("I got a long long line of text in my JTextArea");

        JScrollPane scroller1 = new JScrollPane();
        JScrollPane scroller2 = new JScrollPane();
        scroller1.setViewportView(tArea1);
        scroller2.setViewportView(tArea2);

        contentPane.add(scroller1);
        contentPane.add(scroller2);

        frame.setContentPane(contentPane);
        frame.setSize(100, 100);
        frame.setLocationByPlatform(true);
        frame.setVisible(true);
    }

    public static void main(String... args)
    {
        SwingUtilities.invokeLater(new Runnable()
        {
            public void run()
            {
                new JTextAreaExample().createAndDisplayGUI();
            }
        });
    }
}
```
Adding namespace to an already created XML document I am creating a W3C Document object using a String value. Once I have created the Document object, I want to add a namespace to its root element. Here's my current code:

```
Document document = builder.parse(new InputSource(new StringReader(xmlString)));
document.getDocumentElement().setAttributeNS("http://com", "xmlns:ns2", "Test");
document.setPrefix("ns2");

TransformerFactory tranFactory = TransformerFactory.newInstance();
Transformer aTransformer = tranFactory.newTransformer();
Source src = new DOMSource(document);
Result dest = new StreamResult(new File("c:\\xmlFileName.xml"));
aTransformer.transform(src, dest);
```

What I use as input:

```
<product>
  <arg0>DDDDDD</arg0>
  <arg1>DDDD</arg1>
</product>
```

What the output should look like:

```
<ns2:product xmlns:ns2="http://com">
  <arg0>DDDDDD</arg0>
  <arg1>DDDD</arg1>
</ns2:product>
```

I need to add the prefix value and namespace to the input XML string as well. If I try the above code I get this exception:

```
NAMESPACE_ERR: An attempt is made to create or change an object in a way which is incorrect with regard to namespaces.
```

Appreciate your help!
Since there is no easy way to rename the root element, we'll have to replace it with an element that has the correct namespace and prefix, and then copy all the original children into it. Forcing the namespace declaration is not needed: by giving the element the correct namespace (URI) and setting the prefix, the declaration is added automatically.

Replace the `setAttributeNS` and `setPrefix` calls (lines 2 and 3) with this:

```
String namespace = "http://com";
String prefix = "ns2";

// Upgrade the DOM level 1 to level 2 with the correct namespace
Element originalDocumentElement = document.getDocumentElement();
Element newDocumentElement = document.createElementNS(namespace, originalDocumentElement.getNodeName());

// Set the desired namespace and prefix
newDocumentElement.setPrefix(prefix);

// Copy all children
NodeList list = originalDocumentElement.getChildNodes();
while (list.getLength() != 0) {
    newDocumentElement.appendChild(list.item(0));
}

// Replace the original element
document.replaceChild(newDocumentElement, originalDocumentElement);
```

In the original code the author tried to declare an element namespace like this:

```
.setAttributeNS("http://com", "xmlns:ns2", "Test");
```

The first parameter is the namespace of the attribute, and since it's a namespace attribute it needs to have the <http://www.w3.org/2000/xmlns/> URI. The namespace being declared should go in the 3rd parameter:

```
.setAttributeNS("http://www.w3.org/2000/xmlns/", "xmlns:ns2", "http://com");
```
SSH terminal in a webapp using ASP.NET Hello, I am creating a webapp that has a working SSH terminal similar to PuTTY. I'm using the [SSH Library](http://sshnet.codeplex.com/) as a means of handling the SSH stream. However, there is a problem. I can log into a Cisco 2950 and type in commands, but the output comes back jumbled and on one line. Also, when I try "conf t" it gets into the configuration terminal, but then you can't do anything and this pops up: "Line has invalid autocommand "?"". Here is the code I have so far.

This is the SSH.cs that interacts with the library:

```
public class SSH
{
    public string cmdInput { get; set; }

    public string SSHConnect()
    {
        var PasswordConnection = new PasswordAuthenticationMethod("username", "password");
        var KeyboardInteractive = new KeyboardInteractiveAuthenticationMethod("username");

        // jmccarthy is the username
        var connectionInfo = new ConnectionInfo("10.56.1.2", 22, "username", PasswordConnection, KeyboardInteractive);
        var ssh = new SshClient(connectionInfo);
        ssh.Connect();

        var cmd = ssh.CreateCommand(cmdInput);
        var asynch = cmd.BeginExecute(delegate(IAsyncResult ar)
        {
            //Console.WriteLine("Finished.");
        }, null);

        var reader = new StreamReader(cmd.OutputStream);
        var myData = "";
        while (!asynch.IsCompleted)
        {
            var result = reader.ReadToEnd();
            if (string.IsNullOrEmpty(result))
                continue;
            myData = result;
        }

        cmd.EndExecute(asynch);
        return myData;
    }
}
```

This is the code in the .aspx.cs that displays the output on the web page:

```
protected void CMD(object sender, EventArgs e)
{
    SSH s = new SSH();
    s.cmdInput = input.Text;
    output.Text = s.SSHConnect();
}
```

Any help would be appreciated.
From looking through the test cases in the code for the SSH.NET library, you can use the `RunCommand` method instead of `CreateCommand`, which will process the command synchronously. I also added a using block for the `SshClient ssh` object since it implements `IDisposable`. Remember to call `Disconnect` as well so you don't get stuck with open connections.

Also, the `SshCommand.Result` property (used in the `command.Result` call below) encapsulates the logic to pull the results from the `OutputStream`, and uses `this._session.ConnectionInfo.Encoding` to read the `OutputStream` with the proper encoding. This should help with the jumbled lines you were receiving.

Here is an example:

```
public string SSHConnect()
{
    var PasswordConnection = new PasswordAuthenticationMethod("username", "password");
    var KeyboardInteractive = new KeyboardInteractiveAuthenticationMethod("username");
    string myData = null;

    var connectionInfo = new ConnectionInfo("10.56.1.2", 22, "username", PasswordConnection, KeyboardInteractive);
    using (SshClient ssh = new SshClient(connectionInfo))
    {
        ssh.Connect();
        var command = ssh.RunCommand(cmdInput);
        myData = command.Result;
        ssh.Disconnect();
    }

    return myData;
}
```
WildFly 9 Access Logging I am trying to set up access logging using WildFly 9 in domain mode. I have found a few resources which suggest using something like this in the domain.xml file:

```
<host name="default-host" alias="localhost">
    <location name="/" handler="welcome-content"/>
    <filter-ref name="server-header"/>
    <filter-ref name="x-powered-by-header"/>
    <access-log pattern="%A%t%h%l%u%r%s%b%T%I" directory="${jboss.server.log.dir}" prefix="access" suffix=".log"/>
</host>
```

I then restarted WildFly, but no logging is occurring and there are no errors in the WildFly startup, so I am just banging my head against the wall. I would really appreciate any help that anyone can provide. Also, is there a way to register access logging using the CLI in domain mode?
There should be a way to add all resources via the CLI in both domain mode and standalone. It's possible you're editing the wrong profile in the XML; regardless, using the CLI is the preferred solution.

The first thing you need to know is which profile you're running under. You can determine this from the server group(s) running:

```
[domain@localhost:9990 /] /server-group=*:read-attribute(name=profile)
{
    "outcome" => "success",
    "result" => [
        {
            "address" => [("server-group" => "main-server-group")],
            "outcome" => "success",
            "result" => "full"
        },
        {
            "address" => [("server-group" => "other-server-group")],
            "outcome" => "success",
            "result" => "full-ha"
        }
    ]
}
```

We'll assume here we're using the `main-server-group`. You then need to add the [`access-log` setting](http://wildscribe.github.io/Wildfly/9.0.0.Final/subsystem/undertow/server/host/setting/access-log/index.html) to the `undertow` subsystem:

```
/profile=full/subsystem=undertow/server=default-server/host=default-host/setting=access-log:add(pattern="%A%t%h%l%u%r%s%b%T%I", directory="${jboss.server.log.dir}", prefix=access, suffix=".log")
```

This will add access logging to all servers in that server group. You will need to access a server via a web request before the log will be created. No restart or reload is required either.

One extra note: you can see what settings are available on the `setting` resource in Undertow with the following command.

```
/profile=full/subsystem=undertow/server=default-server/host=default-host/setting=*:read-resource-description
```
How to convert in both directions between year,month,day and dates in R? How can I convert between year, month, day and dates in R? I know one can do this via strings, but I would prefer to avoid converting to strings, partly because there may be a performance hit, and partly because I worry about regionalization issues, where some of the world uses "year-month-day" and some uses "year-day-month".

It looks like ISODate provides the direction year,month,day -> DateTime, although it first converts the numbers to a string, so if there is a way that doesn't go via a string I'd prefer that. I couldn't find anything that goes the other way, from datetimes to numerical values. I would prefer not to need `strsplit` or things like that.

Edit: just to be clear, what I have is a data frame which looks like:

```
year month day hour somevalue
2004     1   1    1   1515353
2004     1   1    2   3513535
....
```

I want to be able to freely convert to this format:

```
time(hour units) somevalue
               1   1515353
               2   3513535
....
```

... and also be able to go back again.

Edit: to clear up some confusion on what 'time' (hour units) means, ultimately what I did was the following, using information from [How to find the difference between two dates in hours in R?](https://stackoverflow.com/questions/12977073/how-to-find-the-difference-between-two-dates-in-hours-in-r):

**forwards direction:**

```
lh$time <- as.numeric(difftime(ISOdate(lh$year, lh$month, lh$day, lh$hour),
                               ISOdate(2004, 1, 1, 0), units="hours"))
lh$year <- NULL; lh$month <- NULL; lh$day <- NULL; lh$hour <- NULL
```

**backwards direction:** ... well, I didn't do backwards yet, but I imagine something like:

- create a difftime object out of lh$time (somehow...)
- add ISOdate(2004,1,1,0) to the difftime object
- use one of the solutions below to get the year, month, day, hour back

I suppose in the future I could ask about the exact problem I'm trying to solve, but I was trying to factorize my specific problem into generic reusable questions; maybe that was a mistake?
Because there are so many ways in which a date can be passed in from files, databases etc., and for the reason you mention of dates being written in different orders or with different separators, representing the *inputted* date as a character string is a convenient and useful solution. R doesn't hold the actual dates as strings, and you don't need to process them as strings to work with them. Internally R is using the operating system to do these things in a standard way. You don't need to manipulate strings at all; just perhaps convert some things from character to their numerical equivalent.

For example, it is quite easy to wrap up both operations (forwards and backwards) in simple functions you can deploy:

```
toDate <- function(year, month, day) {
    ISOdate(year, month, day)
}

toNumerics <- function(Date) {
    stopifnot(inherits(Date, c("Date", "POSIXt")))
    day <- as.numeric(strftime(Date, format = "%d"))
    month <- as.numeric(strftime(Date, format = "%m"))
    year <- as.numeric(strftime(Date, format = "%Y"))
    list(year = year, month = month, day = day)
}
```

I forgo a single call to `strptime()` and subsequent splitting on a separator character because you don't like that kind of manipulation.

```
> toDate(2004, 12, 21)
[1] "2004-12-21 12:00:00 GMT"
> toNumerics(toDate(2004, 12, 21))
$year
[1] 2004

$month
[1] 12

$day
[1] 21
```

Internally R's datetime code works well and is well tested and robust, if a bit complex in places because of timezone issues etc. I find the idiom used in `toNumerics()` more intuitive than having a date-time as a list and remembering which elements are 0-based. Building on the functionality provided would seem easier than trying to avoid string conversions etc.
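For readers coming from other languages, the round trip the questioner sketches in the edit (year/month/day/hour to hours since a fixed reference, and back) looks like this in Python; the 2004-01-01 reference is taken from the question, everything else is purely illustrative:

```python
from datetime import datetime, timedelta

# Reference instant, matching ISOdate(2004, 1, 1, 0) in the question
REF = datetime(2004, 1, 1, 0)

def to_hours(year, month, day, hour):
    """Forward: (year, month, day, hour) -> hours since the reference."""
    return (datetime(year, month, day, hour) - REF).total_seconds() / 3600

def from_hours(hours):
    """Backward: hours since the reference -> (year, month, day, hour)."""
    d = REF + timedelta(hours=hours)
    return d.year, d.month, d.day, d.hour
```

The backward step is exactly the bullet-point plan in the question: build a duration from the hour count, add it to the reference date, then read the components back off the resulting date.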
Delta method and correlated variables I have been reading about the delta method in regard to autoregressive distributed lag models. This is very new to me, so excuse any beginner mistakes.

The problem is as follows: we have a model for gasoline consumption. $g$ is the per capita consumption, $y$ disposable income, $p$ is price, $g\_{t-1}$ is lagged consumption. All the values are in logs.

$$g\_t = \alpha\_0 + \beta\_1 p\_t + \beta\_2 y\_t + \omega g\_{t-1} + u\_t$$

$\beta\_i$ denote the short-run effects and $\beta\_i/(1-\omega)$ denote the long-run effects. The problem is that these long-run estimates do not have standard errors calculated in most studies. I found only two papers that do: *Bentzen & Engsted (2001)* and *Pesaran & Shin (1997)*. They propose to calculate the standard error using the delta method.

The problem that I see is that $y\_t$ (or $p\_t$) and $g\_{t-1}$ are highly correlated, thus violating the delta method assumption (as far as I understand it). The correlation is quite clear since both $y\_t$ and $p\_t$ are significant in the regression above, so taking

$$g\_{t-1} = \alpha\_0 + \beta\_1 p\_{t-1} + \beta\_2 y\_{t-1} + \omega g\_{t-2} + u\_{t-1},$$

we know that there is correlation between $g\_{t-1}$ and $p\_{t-1}$ (or $y\_{t-1}$); given price (or income) persistence, correlation between $g\_{t-1}$ and $p\_t$ or $y\_t$ is surely there. I even dug up a whole lot of data from Eurostat to confirm the sample correlation, and it was there, higher than 0.5 in absolute value. You can also see that the standard error estimated using the delta method is much larger than the ones estimated using other methods. That indicates the omitted correlation might cause the overestimation of the standard error.

---

So the question is: can I use the delta method to estimate the standard error of the non-linear transformation while knowing these variables are correlated? Or does the non-linear nature of the transformation change things?
Yes, you can still use the delta method with correlated variables. Let us label your function $f(\theta)$, where $\theta = (\beta, \omega)^T$ and $f(\theta) = \beta / (1-\omega)$.

The delta method is based upon the Taylor expansion:

$$f(\hat{\theta}) \approx f(\theta) + (\hat{\theta} - \theta)^T f'(\theta)$$

Rearranging terms and squaring both sides results in:

$$(f(\hat{\theta}) - f(\theta))^2 \approx (\hat{\theta} - \theta)^T f'(\theta) f'(\theta)^T (\hat{\theta} - \theta)$$

Taking expectations:

$$\text{Var}\, f(\hat{\theta}) \approx \mathbb{E}\,(\hat{\theta} - \theta)^T f'(\theta) f'(\theta)^T (\hat{\theta} - \theta)$$

Taking derivatives of $f$ and evaluating $f'$ at $\hat{\theta}$ gives:

$$f'(\hat{\theta})f'(\hat{\theta})^T = \frac{1}{(1-\hat{\omega})^2} \begin{bmatrix} 1 & \hat{\beta} / (1 - \hat{\omega}) \\ \hat{\beta} / (1 - \hat{\omega}) & \hat{\beta}^2 / (1 - \hat{\omega})^2 \end{bmatrix}$$

Writing out the full expression for $\text{Var}\, f(\hat{\theta})$ and substituting estimates:

$$\widehat{\text{Var}}\, f(\hat{\theta}) = \frac{1}{(1-\hat{\omega})^2}\left(\hat{\sigma}^2\_{\beta} + 2\hat{\sigma}\_{\beta \omega} \hat{\beta} / (1-\hat{\omega}) + \hat{\sigma}^2\_{\omega}\hat{\beta}^2 / (1 - \hat{\omega})^2\right)$$

You can see that positive correlation between $\beta$ and $\omega$ is going to increase the variance of the estimate of the long-run effect; it means there's a negative correlation between the estimates of $\beta$ and $1 - \omega$, the numerator and denominator of the long-run effect, so the estimated numerator and denominator tend to move in opposite directions, which naturally increases variability relative to the uncorrelated case.
Note that the delta method can fail miserably, so you might want to check its performance via simulation, e.g., by specifying all the parameters and creating many data sets with different errors, estimating the long run effect for each data set, calculating the standard deviation of the long run effect estimates, and comparing that to the delta method estimates of the standard error for the various data sets.
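Following that suggestion, here is a small pure-Python sketch of such a check. All parameter values here are made up for illustration (a long-run effect of $\beta/(1-\omega) = 2.5$, modest standard errors, correlation 0.5), and instead of re-estimating the regression on each simulated data set, it draws the coefficient estimates directly from a correlated normal, which is enough to compare the delta-method standard error against the simulated spread:

```python
import math
import random

def delta_se(beta, omega, var_b, var_w, cov_bw):
    """Delta-method SE of the long-run effect beta / (1 - omega),
    using the variance expression derived above."""
    lr = beta / (1.0 - omega)
    k = 1.0 / (1.0 - omega) ** 2
    return math.sqrt(k * (var_b + 2.0 * cov_bw * lr + var_w * lr ** 2))

def simulated_se(beta, omega, sd_b, sd_w, rho, n=20000, seed=0):
    """Draw n correlated (beta_hat, omega_hat) pairs (via a 2x2 Cholesky
    by hand) and return the sample SD of the implied long-run effects."""
    rng = random.Random(seed)
    vals = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        b = beta + sd_b * z1
        w = omega + sd_w * (rho * z1 + math.sqrt(1 - rho * rho) * z2)
        vals.append(b / (1.0 - w))
    m = sum(vals) / n
    return math.sqrt(sum((v - m) ** 2 for v in vals) / (n - 1))
```

With small estimation uncertainty the two numbers agree closely; blowing up `sd_w` (so that `1 - omega_hat` gets near zero) is an easy way to see the delta approximation start to break down.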
Elixir Supervisors — How do you name a Supervised Task I'm really struggling with Elixir supervisors and figuring out how to name them so that I can use them. Basically, I'm just trying to start a supervised `Task` which I can send messages to. So I have the following:

```
defmodule Run.Command do
  def start_link do
    Task.start_link(fn ->
      receive do
        {:run, cmd} -> System.cmd(cmd, [])
      end
    end)
  end
end
```

with the project entry point as:

```
defmodule Run do
  use Application

  # See http://elixir-lang.org/docs/stable/elixir/Application.html
  # for more information on OTP Applications
  def start(_type, _args) do
    import Supervisor.Spec, warn: false

    children = [
      # Define workers and child supervisors to be supervised
      worker(Run.Command, [])
    ]

    # See http://elixir-lang.org/docs/stable/elixir/Supervisor.html
    # for other strategies and supported options
    opts = [strategy: :one_for_one, name: Run.Command]
    Supervisor.start_link(children, opts)
  end
end
```

At this point, I don't even feel confident that I'm using the right thing (`Task` specifically). Basically, all I want is to spawn a process or task or GenServer or whatever is right when the application starts that I can send messages to, which will in essence do a `System.cmd(cmd, opts)`. I want this task or process to be supervised. When I send it a `{:run, cmd, opts}` message such as `{:run, "mv", ["/file/to/move", "/move/to/here"]}` I want it to spawn a new task or process to execute that command. For my use, I don't even need to get the response back from the task, I just need it to execute.

Any guidance on where to go would be helpful. I've read through the getting started guide, but honestly it left me more confused, because when I try to do what is done there it never turns out as it does in the guide. Thanks for your patience.
I would just use a GenServer, set up like the following:

```
defmodule Run do
  use Application

  def start(_, _) do
    import Supervisor.Spec, warn: false
    children = [worker(Run.Command, [])]
    Supervisor.start_link(children, strategy: :one_for_one)
  end
end

defmodule Run.Command do
  use GenServer

  def start_link do
    GenServer.start_link(__MODULE__, [], name: __MODULE__)
  end

  def run(cmd, opts) when is_list(opts), do: GenServer.call(__MODULE__, {:run, cmd, opts})
  def run(cmd, _), do: GenServer.call(__MODULE__, {:run, cmd, []})

  def handle_call({:run, cmd, opts}, _from, state) do
    {:reply, System.cmd(cmd, opts), state}
  end
  def handle_call(request, from, state), do: super(request, from, state)
end
```

You can then send the running process a command to execute like so:

```
# If you want the result
{contents, _} = Run.Command.run("cat", ["path/to/some/file"])
# If not, just ignore it
Run.Command.run("cp", ["path/to/source", "path/to/destination"])
```

Basically, we're creating a "singleton" process: only one process can be registered under a given name, and we're registering the Run.Command process under the name of the module, so any subsequent calls to `start_link` while the process is running will fail. However, this makes it easy to set up an API (the `run` function) which can transparently execute the command in the other process without the calling process having to know anything about it. I used `call` vs. `cast` here, but it's a trivial change if you never care about the result and don't want the calling process to block.

This is a better pattern for something long-running. For one-off things, `Task` is a lot simpler and easier to use, but personally I prefer `GenServer` for global processes like this.
Azure vs Appharbor vs Amazon EC2 I am starting to develop a new online business and I am not sure what technology to use for hosting. I have used Microsoft Azure in my previous projects; I did not have any problems with it, it was just expensive. My choices are Azure, AppHarbor or Amazon EC2. I am not even sure if comparing them is right. I am looking for something which is really easy to set up and takes less time. We are only two developers, so we have just enough time for developing our website. I have heard EC2 will be time consuming.
[AppHarbor](https://appharbor.com/) will definitely get the job done. Azure is also a PaaS (with lots of infrastructure features), but there's no [add-on program](https://appharbor.com/addons) so you're stuck with whatever services Microsoft decides to offer (or you have to install, configure and maintain them on your own VM-role instances). And as you mention, Azure gets expensive quickly. AWS Elastic Beanstalk also has [.NET support](http://aws.typepad.com/aws/2012/05/net-support-for-aws-elastic-beanstalk-amazon-rds-for-sql-server-.html) now, giving it some PaaS-features. The deployment-model is not very sophisticated though (and neither is Azure's): You have to have a Visual Studio plugin create packages that are then pushed and deployed whereas AppHarbor integrates with GitHub, Bitbucket, Codeplex, etc., runs your unit tests, sends build notifications and lots more. (Full disclosure: I'm co-founder of [AppHarbor](https://appharbor.com/))
Why is hardware acceleration not working on my View? I'm using [Facebook's Rebound library](http://facebook.github.io/rebound/) to replicate the bouncy animations seen in their chat heads implementation. The problem is, most of the time the animation stutters. A few pictures will explain this better.

Here's the buttery-smooth chat heads animation:

![Facebook Messenger](https://i.stack.imgur.com/uxtUY.gif)

And here's my attempt (notice how the animation for the white `View` skips nearly all frames):

![Stuttering animation](https://i.stack.imgur.com/7QCJj.gif)

Once in a while it works smoothly:

![Smooth animation](https://i.stack.imgur.com/ydKQJ.gif)

Below is the code I'm currently using (the entire project is [up on Github](https://github.com/vickychijwani/BubbleNote/tree/eb708e3910a7279c5490f614a7150009b59bad0b) if you want to set it up quickly). I'm guessing this has something to do with hardware acceleration not being enabled correctly on my `View`. There are 2 `Spring`s in my `SpringSystem`: one for the "bubble" (the Android icon) and another for the content (the white `View` that is displayed on tapping the bubble). Any help on how to solve this issue would be greatly appreciated. Thanks.

`AndroidManifest.xml`:

```
<application
    android:hardwareAccelerated="true"
    ...>
    ...
</application>
```

`AppService.java`:

```
// the following code is in AppService#onCreate()
// AppService extends android.app.Service
// full code at https://github.com/vickychijwani/BubbleNote

mContent.setLayerType(View.LAYER_TYPE_HARDWARE, null);

final Spring bubbleSpring = system.createSpring();
bubbleSpring.setCurrentValue(1.0);
bubbleSpring.addListener(new SpringListener() {
    @Override
    public void onSpringUpdate(Spring spring) {
        float value = (float) spring.getCurrentValue();
        params.x = (int) (mPos[0] * value);
        params.y = (int) (mPos[1] * value);
        mWindowManager.updateViewLayout(mBubble, params);
        // fire the second animation when this one is about to end
        if (spring.isOvershooting() && contentSpring.isAtRest()) {
            contentSpring.setEndValue(1.0);
        }
    }
    // ...
});

final Spring contentSpring = system.createSpring();
contentSpring.setCurrentValue(0.0);
contentSpring.addListener(new SpringListener() {
    @Override
    public void onSpringUpdate(Spring spring) {
        // always prints false?!
        Log.d(TAG, "hardware acc = " + mContent.isHardwareAccelerated());
        float value = (float) spring.getCurrentValue();
        // clamping is required to prevent flicker
        float clampedValue = Math.min(Math.max(value, 0.0f), 1.0f);
        mContent.setScaleX(value);
        mContent.setScaleY(value);
        mContent.setAlpha(clampedValue);
    }
    // ...
});
```
I've figured it out by going through the framework source code.

**TL;DR**: add `WindowManager.LayoutParams.FLAG_HARDWARE_ACCELERATED` to the layout flags when you manually attach a `View` to a `Window` / `WindowManager`; setting `android:hardwareAccelerated="true"` in the manifest won't work.

---

I'm [manually attaching my `View` to the `WindowManager`](https://github.com/vickychijwani/BubbleNote/blob/eb708e3910a7279c5490f614a7150009b59bad0b/app/src/main/java/io/github/vickychijwani/bubblenote/BubbleNoteService.java#L54) (because I need to create my UI in a `Service` to emulate chat heads) like so:

```
// code at https://github.com/vickychijwani/BubbleNote/blob/eb708e3910a7279c5490f614a7150009b59bad0b/app/src/main/java/io/github/vickychijwani/bubblenote/BubbleNoteService.java#L54
mWindowManager = (WindowManager) getSystemService(WINDOW_SERVICE);
LayoutInflater inflater = (LayoutInflater) getSystemService(LAYOUT_INFLATER_SERVICE);
mBubble = (LinearLayout) inflater.inflate(R.layout.bubble, null, false);
// ...
final WindowManager.LayoutParams params = new WindowManager.LayoutParams(
        WindowManager.LayoutParams.WRAP_CONTENT,
        WindowManager.LayoutParams.WRAP_CONTENT,
        WindowManager.LayoutParams.TYPE_PHONE,
        WindowManager.LayoutParams.FLAG_LAYOUT_NO_LIMITS
                | WindowManager.LayoutParams.FLAG_NOT_FOCUSABLE,
        PixelFormat.TRANSLUCENT);
// ...
mWindowManager.addView(mBubble, params);
```

Let's go digging...

### Welcome to the Android framework

I started debugging at [`View#draw(...)`](http://grepcode.com/file/repository.grepcode.com/java/ext/com.google.android/android/4.4.4_r1/android/view/View.java?av=f#14069), then went up the call stack to [`ViewRootImpl#draw(boolean)`](http://grepcode.com/file/repository.grepcode.com/java/ext/com.google.android/android/4.4.4_r1/android/view/ViewRootImpl.java?av=f#2280).
Here I came across this piece of code:

```
if (!dirty.isEmpty() || mIsAnimating) {
    if (attachInfo.mHardwareRenderer != null && attachInfo.mHardwareRenderer.isEnabled()) {
        // Draw with hardware renderer.
        mIsAnimating = false;
        mHardwareYOffset = yoff;
        mResizeAlpha = resizeAlpha;

        mCurrentDirty.set(dirty);
        dirty.setEmpty();

        attachInfo.mHardwareRenderer.draw(mView, attachInfo, this,
                animating ? null : mCurrentDirty);
    } else {
        // If we get here with a disabled & requested hardware renderer, something went
        // wrong (an invalidate posted right before we destroyed the hardware surface
        // for instance) so we should just bail out. Locking the surface with software
        // rendering at this point would lock it forever and prevent hardware renderer
        // from doing its job when it comes back.
        // Before we request a new frame we must however attempt to reinitiliaze the
        // hardware renderer if it's in requested state. This would happen after an
        // eglTerminate() for instance.
        if (attachInfo.mHardwareRenderer != null &&
                !attachInfo.mHardwareRenderer.isEnabled() &&
                attachInfo.mHardwareRenderer.isRequested()) {

            try {
                attachInfo.mHardwareRenderer.initializeIfNeeded(mWidth, mHeight,
                        mHolder.getSurface());
            } catch (OutOfResourcesException e) {
                handleOutOfResourcesException(e);
                return;
            }

            mFullRedrawNeeded = true;
            scheduleTraversals();
            return;
        }

        if (!drawSoftware(surface, attachInfo, yoff, scalingRequired, dirty)) {
            return;
        }
    }
}
```

In my case [`ViewRootImpl#drawSoftware()`](http://grepcode.com/file/repository.grepcode.com/java/ext/com.google.android/android/4.4.4_r1/android/view/ViewRootImpl.java?av=f#2420) was being called, which uses the software renderer. Hmm... that means the `HardwareRenderer` is `null`.
So I went searching for the point of construction of the `HardwareRenderer`, which is in [`ViewRootImpl#enableHardwareAcceleration(WindowManager.LayoutParams)`](http://grepcode.com/file/repository.grepcode.com/java/ext/com.google.android/android/4.4.4_r1/android/view/ViewRootImpl.java?av=f#681):

```
// Try to enable hardware acceleration if requested
final boolean hardwareAccelerated =
        (attrs.flags & WindowManager.LayoutParams.FLAG_HARDWARE_ACCELERATED) != 0;

if (hardwareAccelerated) {
    // ...
    mAttachInfo.mHardwareRenderer = HardwareRenderer.createGlRenderer(2, translucent);
    // ...
}
```

Aha! There's our culprit!

### Back to the problem at hand

In this case Android does not automatically set `FLAG_HARDWARE_ACCELERATED` for this `Window`, even though I've set `android:hardwareAccelerated="true"` in the manifest. So the fix is simply:

```
mWindowManager = (WindowManager) getSystemService(WINDOW_SERVICE);
LayoutInflater inflater = (LayoutInflater) getSystemService(LAYOUT_INFLATER_SERVICE);
mBubble = (LinearLayout) inflater.inflate(R.layout.bubble, null, false);
// ...
final WindowManager.LayoutParams params = new WindowManager.LayoutParams(
        WindowManager.LayoutParams.WRAP_CONTENT,
        WindowManager.LayoutParams.WRAP_CONTENT,
        WindowManager.LayoutParams.TYPE_PHONE,
        // NOTE
        WindowManager.LayoutParams.FLAG_HARDWARE_ACCELERATED
                | WindowManager.LayoutParams.FLAG_LAYOUT_NO_LIMITS
                | WindowManager.LayoutParams.FLAG_NOT_FOCUSABLE,
        PixelFormat.TRANSLUCENT);
// ...
mWindowManager.addView(mBubble, params);
```

Although the animation is still not as smooth as Facebook's. I wonder why... (before anyone asks: no, there are no copious logs during the animation; and yes, I've tried with a release build)
What is fn.bind.apply(fn, arguments) doing? I saw this shortcut given as an answer in a code kata, but I am having difficulty understanding exactly what the example below is doing.

```
function func(fn) {
  return fn.bind.apply(fn, arguments);
}
```

So far my understanding is that bind creates a new function, similar to doing the following:

```
function func(fn) {
  return function () {
    return fn.apply(fn, arguments);
  };
}
```

Is this the case? Any clearer answers or breakdowns of what is going on would be great.
```
fn.bind
```

is just

```
Function.prototype.bind
```

So we're [applying](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/apply) [bind](https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_objects/Function/bind) to `fn`, returning

```
fn.bind(arguments[0]/* doesn't matter, it's fn*/, arguments[1], arguments[2], etc.)
```

So the bound function is called with its arguments being the arguments of `func` after `fn`. Another way to write it would have been:

```
function func(fn) {
  var args = [].slice.call(arguments, 1);
  return function () {
    var localArgs = [].slice.call(arguments);
    return fn.apply(fn, args.concat(localArgs));
  };
}
```

The fact that the context of the call is the initial function (`arguments[0]`) is most certainly only a side effect. The important thing is that we wrap the arguments with the function, but make it possible to dynamically pass other arguments.

**Example 1, wrapping all arguments:**

```
function add(a, b){ return a + b }
var f = func(add, 2, 3);
// f is a function which will always apply add to 2 and 3
console.log(f()) // logs 5
```

**Example 2, [currying](http://en.wikipedia.org/wiki/Currying):**

```
function multiply(a, b){ return a * b }
var multBy2 = func(multiply, 2);
console.log(multBy2(3)) // logs 6
```
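This pre-binding pattern exists outside JavaScript too. As a cross-language illustration (not part of the original answer), the equivalent of `func` can be sketched in Python with `functools.partial`, which likewise pre-binds leading arguments while letting the caller supply more at call time:

```python
from functools import partial

def func(fn, *args):
    """Pre-bind *args to fn, analogous to fn.bind.apply(fn, arguments)."""
    return partial(fn, *args)

# Example 1: wrapping all arguments
f = func(lambda a, b: a + b, 2, 3)

# Example 2: currying / partial application
mult_by_2 = func(lambda a, b: a * b, 2)
```

As in the JavaScript version, `f()` evaluates the fully bound call, while `mult_by_2` still expects its remaining argument.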
How do I set the `startAt` date when using Angular Material Calendar? I'm using material2 v2.0.0-beta.10 as well as Angular v4 and am having issues when using the `md-calendar` component. My problem is that I'm unable to set the `startAt` date to anything. I set the start date to yesterday as follows:

```
this.startAt = new Date();
this.startAt = this.startAt.setDate(this.startDate - 1);
```

[Here](http://plnkr.co/edit/wRAa5eyW5cTCdKEBIcXI?p=preview) is a Plunkr of attempting to set the `startAt` date to yesterday. What am I missing?
The start date accepts the following format:

```
startDate = new Date(1990, 0, 1);
```

Citing from the docs:

> The month or year that the calendar opens to is determined by first checking if any date is currently selected, if so it will open to the month or year containing that date. Otherwise it will open to the month or year containing today's date. This behavior can be overridden by using the startAt property of md-datepicker. In this case the calendar will open to the month or year containing the startAt date.

So if you wish, say, to set the starting date to the next month, the following code should work:

```
let today = new Date();
let month = today.getMonth() + 1; // next month
let year = today.getUTCFullYear();
let day = today.getDate(); // day of the month (note: getDay() would give the day of the week)
this.startAt = new Date(year, month, day);
```

**[DEMO](https://stackblitz.com/edit/angular-ukqeff?file=src/app/datepicker-overview-example.ts)**
Custom FileDialog in QML I was willing to use FileDialog from QML but it turns out not usable for **SaveAs** situations (because you cannot specify a non-existing file name) and moreover the feel of the dialog is not really modern or mobile. As a workaround I have decided to build a simple **MyFileDialog** which looks like this: ``` import QtQuick 2.7 import QtQuick.Controls 2.0 import QtQuick.Controls.Styles 1.4 import QtQuick.Controls.Material 2.0 import QtQuick.Layouts 1.3 Popup { implicitWidth: window.width / 3 * 2 implicitHeight: window.height / 3 * 2 x: (window.width - width) / 2 y: 20 modal: true focus: true property alias title: popupLabel.text contentItem: ColumnLayout { id: settingsColumn spacing: 20 // Popup title. Label { id: popupLabel font.bold: true anchors.horizontalCenter: parent.horizontalCenter } // File path. TextField { id: field placeholderText: "File path..." implicitWidth: parent.width } // Buttons. RowLayout { spacing: 10 Button { id: okButton text: "Ok" onClicked: { onOkClicked(); close();} Material.foreground: Material.primary Material.background: "transparent" Material.elevation: 0 Layout.preferredWidth: 0 Layout.fillWidth: true } Button { id: cancelButton text: "Cancel" onClicked: { state = false; } Material.background: "transparent" Material.elevation: 0 Layout.preferredWidth: 0 Layout.fillWidth: true } } } } ``` Now I would like this dialog to be reusable for several situations, e.g. to open files, to import files, to save files... But then this means that the behavior of **okButton.onClicked** is different for each of these situations. I have tried several ways to specify a custom (or say changeable) behavior for **okButton.onClicked** but with no great luck so far. Here is what I have tried: 1. Make a property alias of **okButton.onClicked** in Popup 2. Define **okButton.onClicked** where I use the Popup 3. 
Define a behavior function outside the Popup and provide it to the Popup. None of these attempts worked, and I always get compilation errors. Any idea of what I could do to make my code reusable? Also, I could find no recent and clean example on the internet; any idea of where I could find one? Thanks, Antoine.
`FileDialog` from the `QtQuick.Dialogs` import has a [`selectExisting`](http://doc.qt.io/qt-5/qml-qtquick-dialogs-filedialog.html#selectExisting-prop) property which you can use to save as: > > Whether only existing files or directories can be selected. > > > By default, this property is true. This property must be set to the desired value before opening the dialog. Setting this property to false implies that the dialog is for naming a file to which to save something, or naming a folder to be created; therefore selectMultiple must be false. > > > If you want a modern mobile interface, you're better off making your own. I wouldn't go with a dialog, though, as that's more desktop-centric. Dropbox, for example, uses something like a `ListView` in its mobile UI: [![dropbox](https://i.stack.imgur.com/0UOsB.png)](https://i.stack.imgur.com/0UOsB.png)
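For the save-as case specifically, a minimal sketch (the property values here are illustrative):

```qml
import QtQuick 2.7
import QtQuick.Dialogs 1.2

FileDialog {
    id: saveDialog
    title: "Save As"
    selectExisting: false      // allow the user to type a non-existing file name
    selectMultiple: false      // required when selectExisting is false
    nameFilters: ["Text files (*.txt)", "All files (*)"]
    onAccepted: console.log("Saving to " + saveDialog.fileUrl)
}
```

Remember to set `selectExisting` before calling `open()`, as the docs quoted above require.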
Vim insert after specific number of delimiters I am looking to make some code easier to read by taking something like this: ``` 0x44, 0x44, 0x44, 0x44, 0x44, 0x44, 0x44, 0x44, 0x44, 0x44 .... ``` and adding new lines to create increments of 8:

```
0x44, 0x44, 0x44, 0x44, 0x44, 0x44, 0x44, 0x44,
0x44, 0x44 ....
```

Anyone know any vim magic that will allow me to do this with specifically selected lines?
This is not the prettiest solution and I am sure that it can be cleaned up / simplified some. First select the text and then enter this command: ``` :'<,'>s/\(\S\+,\s*\)\{8}/&\r/g ``` This outputs something like this:

```
0x44, 0x44, 0x44, 0x44, 0x44, 0x44, 0x44, 0x44,
0x44, 0x44, 0x44, 0x44, 0x44, 0x44, 0x44, 0x44,
0x44, 0x44, 0x44, 0x44,
...
```
Hook not updating when being called in useEffect I'm having problem with updating state after retrieving data from my API. The API response is just fine but for some reason my weatherData-hook does not update and returns undefined. What am I doing wrong? Code: ``` const renderForecastTable = ({ weatherData }) => ( <table> <thead> <th>Tid</th><th>Temperatur</th><th>Vind</th><th>Nederbörd</th> </thead> <tbody> { weatherData.Map(hour => <tr> <td>{hour.Time}</td><td>{hour.Temp}</td> <td>{hour.WindSpeed} m/s {hour.WindDirection} grader</td> <td>{hour.AvgPrecipitationIntensity} mm</td> </tr>) } </tbody> </table> ) const Weather = (props) => { const [weatherData, setWeatherData] = useState([]); const [loading, setLoading] = useState(true); const location = useLocation(); useEffect(() => { const getWeather = async () => { const response = await fetch(`weather`); const json = await response.json(); setWeatherData(json); setLoading(false); } getWeather(); }, []) return( loading ? <></> : renderForecastTable(weatherData) ) } export default Weather; ``` JSON-response: ``` [ {"time":"4/9/2021 8:00:00 AM","temp":"6","windDirection":"216","windSpeed":"8.7","avgPrecipitationIntensity":"0"}, {"time":"4/9/2021 9:00:00 AM","temp":"5.5","windDirection":"213","windSpeed":"7.9","avgPrecipitationIntensity":"0.2"}, {"time":"4/9/2021 10:00:00 AM","temp":"4.7","windDirection":"218","windSpeed":"7.1","avgPrecipitationIntensity":"0.3"}, {"time":"4/9/2021 11:00:00 AM","temp":"5.5","windDirection":"214","windSpeed":"7.3","avgPrecipitationIntensity":"0.3"}, ... ] ```
`renderForecastTable` consumes an argument and attempts to destructure a `weatherData` property, but it is passed the `weatherData` state array, `renderForecastTable(weatherData)`. Looks like you also have a typo, `weatherData.Map` should probably be `weatherData.map`, with a lowercase "map" function. There is a second, subtler mismatch: the JSON response uses lower-camel-case keys (`time`, `temp`, `windSpeed`, ...), so `hour.Time`, `hour.Temp`, etc. will all be `undefined`; the property names must match the response exactly. Since the `weatherData` is defined as an array I'll assume you meant to simply pass it to the `renderForecastTable` function. Consume the `weatherData` array without destructuring, fix the "map" typo, and use the lower-case property names. ``` const renderForecastTable = (weatherData) => ( <table> <thead> <tr><th>Tid</th><th>Temperatur</th><th>Vind</th><th>Nederbörd</th></tr> </thead> <tbody> { weatherData.map(hour => <tr key={hour.time}> <td>{hour.time}</td><td>{hour.temp}</td> <td>{hour.windSpeed} m/s {hour.windDirection} grader</td> <td>{hour.avgPrecipitationIntensity} mm</td> </tr>) } </tbody> </table> ); ``` `fetch` can return rejected promises and there could be errors processing the response, so you should surround the fetching logic in a `try/catch`. ``` const Weather = (props) => { const [weatherData, setWeatherData] = useState([]); const [loading, setLoading] = useState(true); const location = useLocation(); useEffect(() => { const getWeather = async () => { try { const response = await fetch(`weather`); const json = await response.json(); setWeatherData(json); } catch(error) { // handle any rejected promises or thrown errors processing response } finally { setLoading(false); } } getWeather(); }, []) return( loading ? null : renderForecastTable(weatherData) ) } ```
printf changes my output [I'm trying to solve this challenge](https://www.hackerrank.com/challenges/lisa-workbook) on Hackerrank. I've reached a problem where I cannot proceed, but I can't see where I've gone wrong - and I'm hoping someone here can help. My current solution is as follows: ``` int main() { int n,k,p,count,total; int t[n]; scanf("%d %d",&n,&k); for(int i = 0; i < n; i++){ scanf("%d",&t[i]); } p = 1; total=0; for(int x = 0; x < n; x++){ for(int j = 1; j <= t[x]; j++, count++){ if(count>k){ count = 1; p++; } if(j==p){ total++; } //printf("j: %d p: %d\tcount: %d\n",j,p,count); } p++; count=1; } printf("%d",total); return 0; } ``` The printf that I have commented out is what changes my eventual output. For example, with an input of: > > 10 5 > > > 3 8 15 11 14 1 9 2 24 31 > > > I should be getting an answer of 8. If I un-comment that `printf()` function, then I can see the current problem number and page number to see if it's 'special'. If I leave it un-commented, my eventual output is 8, which is what I want. But I don't want all the iterations printed out as well. The problem I have is that when I remove that line, or comment it out, the output becomes 5, not 8. What is causing this to change?
In your code, while defining `int t[n];`, you're using `n` uninitialized. That invokes [undefined behavior](https://en.wikipedia.org/wiki/Undefined_behavior). To elaborate, `n` is an automatic local variable that is not initialized explicitly, so the content of that variable is indeterminate. An attempt to use an indeterminate value leads to UB. Quoting `C11`, chapter §6.7.9 > > If an object that has automatic storage duration is not initialized explicitly, its value is > indeterminate. [...] > > > and, annex §J.2, undefined behavior, > > The value of an object with automatic storage duration is used while it is > indeterminate > > > You need to move the definition of `int t[n];` **after** you have successfully scanned the value from the user. Check the return value of `scanf()` to ensure it succeeded. Note that `count` has the same problem: it is read in `count > k` and incremented before it is ever assigned, so give it an initial value (e.g. `count = 1`) as well.
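A corrected sketch along those lines, with the VLA declared only after `n` is read and every counter initialized before use (the counting loop is pulled into a helper; the function names here are mine):

```c
#include <stdio.h>

/* Same counting logic as in the question, but with p/count/total
   all given initial values before they are read. */
int count_special(int n, int k, const int t[])
{
    int p = 1, count = 1, total = 0;
    for (int x = 0; x < n; x++) {
        for (int j = 1; j <= t[x]; j++, count++) {
            if (count > k) {   /* page is full: start a new page */
                count = 1;
                p++;
            }
            if (j == p)        /* problem number matches page number */
                total++;
        }
        p++;                   /* each chapter starts on a fresh page */
        count = 1;
    }
    return total;
}

/* Reads "n k" and then n chapter sizes, declaring the VLA only
   after n is known. Returns the total, or -1 on bad input. */
int solve_from(FILE *in)
{
    int n, k;
    if (fscanf(in, "%d %d", &n, &k) != 2 || n <= 0)
        return -1;

    int t[n];                  /* fine: n now holds a definite value */
    for (int i = 0; i < n; i++)
        if (fscanf(in, "%d", &t[i]) != 1)
            return -1;

    return count_special(n, k, t);
}
```

With the initialization fixed, the sample input from the question yields 8 whether or not the debugging `printf` is present.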
Bug Report System - Getting people to use it I've already built a bug report system. It looks like this: ![enter image description here](https://i.stack.imgur.com/Hl6ZE.png) It is accessed using a secret key combination detected by the JavaScript code. Not hard. It has everything - it sends the standard emails to management and testers, and logs the JavaScript variables and current DOM HTML. We are also improving the backend of the system to act as a full bug report system. Previously, it was only via email. The problem is, unfortunately, that the bug reporters *still* prefer email. Even when I refused to answer their queries until they sent a bug report, they would not consistently send them. I don't understand it, because for them it should be easy, but they act like it is a big deal. (The appearance was slightly more complex before, but still, all they had to do, minimally, was type a subject and press SUBMIT.) I can't ask them, because they really have no idea. The second issue, more difficult, is to create a way to manage these issues that will be suitable for us developers. I have actually been satisfied managing new issues and bugs via my Gmail inbox, but my co-worker would like to use a system. He gets more mail than I do and has other reasons. I am completely at a loss for what to do. My problem is solved as much as I need it to be. When I look at other bug report systems, they tend to both look terrible and be expensive. Moreover, if even one person keeps reporting bugs via email, it will be a disaster! We can't refuse every bug sent via email, and then we would have two bug reporting systems. Sorry to put out all these complaints here, but maybe someone has some insightful advice that hasn't already been said.
It seems you are "influencing" a change in the culture, and that takes time. In addition, bug management systems carry a real cost: our email accounts are already open all day in the background, whereas a bug management system asks users to open yet another system which they have to learn to use. A few things you can try: - Get the project leader involved and discuss the importance of not losing bugs. - If there were any bugs that were missed, bring that up as a case study. - You can have regular status meetings (or triage) that are run only from the bug reporting system. - Send team-wide reports (esp. if your project leader can do it) that show who opened how many bugs and how many are assigned to each engineer; that will have people realize the "benefit" they get (e.g. testers will be happy when they get recognition because of the bug publicity. As a tester, this was a great feature of bug management systems for me :-)) - I have used tools that automatically generate bug reports and send them to the team or have desktop widgets, etc. - You can consider integrating your bug management system with email, so that if a message is sent to the team along with a special address, it will automatically open a bug. - If someone sees an email that's really a bug, they can add the bug management system alias to cc and there you go. Many incident management systems take this approach. Despite all of this, you could still have some bugs that will be missed, because people tend to start email discussions asking about a behavior, and very soon it turns out it's a bug, and no one opens one in the bug management system. Having that email integration may help, but then such issues just have to be managed along the way...
Distortion on human voices but not music I bought a Dell Inspiron N4110. The sound plays fine from the speaker, but when I plug in my earphones (which have the 4-segment TRRS jack, i.e. stereo+mic), human speech gets distorted while background music tends to play fine; this causes speech to get drowned in the music (in movies this means I can't hear what people are saying despite my ears getting hurt by the loud background music). I tried multiple different videos; they all exhibit the same symptom: the earphones somehow distinguish human voice from background music and filter out only the human voices, in all the videos I tried. I tried using different earphones (with a 3-segment TRS jack, i.e. stereo), and those play fine. It isn't the TRRS earphones themselves being faulty either, as I regularly use them with my mobile phone without problems. So my guess is that the distortion happens because I'm plugging a TRRS jack into a TRS socket and they don't work well together. The question is: 1. is there any workaround to use TRRS earphones on a TRS socket without distortion (I don't need to use the mic)? 2. (bonus) why does it distort only human voices?
This is partly taken from [my answer](https://superuser.com/questions/271943/bad-sound-quality-of-3-5mm-headphone-with-mic-on-laptop/271958#271958) here: > > ### Bad sound quality of 3.5mm headphone with mic on laptop > > > The problem is that when I connect headphone to my Dell N5010 laptop to listen to music, the quality is horrible, with very weak or no vocals. > > > My answer: > > The thing is: Your phone will fit the TRRS jack. Your laptop however probably won't - it could be that one of the stereo rings doesn't match the laptop's output jack perfectly. The laptop will only have two internal connectors (for stereo), whereas the jack has three. The stereo ones will have to overlap exactly. > > > --- So, you asked: > > is there any workaround to use a TRRS earphone on a TRS socket without distortion (I don't need to use the mic)? > > > Unfortunately, not really. You might want to look for a *TRRS to TRS* converter, but they are really rare. Or get some other earplugs with normal TRS connectors. > > (bonus) why is it distorting only on human voices? > > > That's easy. Contemporary music is mixed the following way: - Vocals should be up front to the listener. They should sound like they are *in your head*, which is why they are often mixed centered. This means, the vocal signal from a left speaker/headphone is (almost) exactly the same as the vocal signal from the right speaker/headphone. - Instruments like guitars, synthesizers and drums should sound very spacious and therefore are mixed using stereo to its full extent. In order to perceive a stereo effect, the signal for left and right speakers/headphones must be different. Now because the TRRS is shaped differently than the TRS, the ground contact in the laptop's TRS jack might overlap with the microphone contact of the TRRS earset. That's why you'll hear the common parts of the stereo signal canceling each other out (i.e. the vocals).
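The cancellation effect above can be sketched numerically. Using small made-up integer samples, a centre-mixed vocal disappears completely from the L - R difference signal:

```python
# Toy model: each channel is the (identical, centre-mixed) vocal part
# plus a channel-specific instrument part. All sample values are made up.
vocals = [5, -2, 3]          # same in both channels
guitar_left = [1, 4, -3]     # only in the left channel
synth_right = [-2, 1, 2]     # only in the right channel

left = [v + g for v, g in zip(vocals, guitar_left)]
right = [v + s for v, s in zip(vocals, synth_right)]

# With the mismatched ground contact, the drivers effectively play L - R:
difference = [l - r for l, r in zip(left, right)]

# The vocal term cancels exactly; only the stereo instrument parts survive.
assert difference == [g - s for g, s in zip(guitar_left, synth_right)]
print(difference)  # [3, 3, -5]
```

Anything panned dead centre (typically vocals, bass, kick drum) vanishes this way, while wide-panned instruments remain audible.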
Styling Bars and Lines with Chart.js We have been using Chart.js for several months now and like the power it gives us with ease of programming. One of the things we would like to start adding to the charts produced from Chart.js is a little nicer styling of the charts we generate. Most of the charts we are using are bar charts, with a few line charts thrown in. When I use the term "styling" what I am really talking about is making the bars or lines look a little nicer. Specifically I would like to add a drop shadow behind the bar and line charts, and maybe even a bevel to the bars. I've looked through many questions, and can't seem to find what I am looking for. I've also done some experimenting myself by modifying the Chart.js file to add a drop shadow and blur to the javascript, but I'm not getting it added in the correct place. I put these changes inside of the Chart.Element.extend draw function: ``` ctx.shadowColor = '#000'; ctx.shadowBlur = 10; ctx.shadowOffsetX = 8; ctx.shadowOffsetY = 8; ``` I put it right before the ctx.fill() and it almost does what I want. The result is I get a drop shadow that looks pretty good on both the bar and line charts I am drawing, but I also get a drop shadow on the labels for the x and y axes, which does not look good. I'd like to have the drop shadow on just the bars and the lines, not on the labels. Any help you can provide would be greatly appreciated. I am not experienced with javascript, but have been able to pull off quite a bit of coding I wouldn't otherwise be able to do without the help of everyone on Stack Overflow.
## Adding a Drop Shadow for Line Charts You can extend the line chart type to do this --- **Preview** [![enter image description here](https://i.stack.imgur.com/vdGmM.png)](https://i.stack.imgur.com/vdGmM.png) --- **Script** ``` Chart.types.Line.extend({ name: "LineAlt", initialize: function () { Chart.types.Line.prototype.initialize.apply(this, arguments); var ctx = this.chart.ctx; var originalStroke = ctx.stroke; ctx.stroke = function () { ctx.save(); ctx.shadowColor = '#000'; ctx.shadowBlur = 10; ctx.shadowOffsetX = 8; ctx.shadowOffsetY = 8; originalStroke.apply(this, arguments) ctx.restore(); } } }); ``` and then ``` ... var myChart = new Chart(ctx).LineAlt(data, { datasetFill: false }); ``` --- Fiddle - <https://jsfiddle.net/7kbz1L4t/>
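The same wrapping trick covers bar charts too: bars are filled via `ctx.fill`, so you can wrap that method instead of (or as well as) `ctx.stroke`. Here is a generic sketch of the idea, independent of any particular Chart.js version (the helper name is mine):

```javascript
// Wrap a canvas-context method so a drop shadow is applied around every
// call, then restored. Works for ctx.fill (bars) and ctx.stroke (lines).
function withShadow(ctx, methodName, shadow) {
  var original = ctx[methodName];
  ctx[methodName] = function () {
    ctx.save();
    ctx.shadowColor = shadow.color;
    ctx.shadowBlur = shadow.blur;
    ctx.shadowOffsetX = shadow.offsetX;
    ctx.shadowOffsetY = shadow.offsetY;
    var result = original.apply(this, arguments);
    ctx.restore();
    return result;
  };
}
```

Because the shadow settings are applied and restored around each wrapped call, axis labels drawn through other context methods (e.g. `fillText`) are left untouched, which avoids the shadowed-labels problem described in the question.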
Designing a splash screen (java) I want to design a splash screen that can show the current loading process with a progress bar, much like the NetBeans startup screen, which shows ``` loading... modules, done!.... loading modules and so on ``` and after the loading finishes, the main application comes up. I have read many articles related to creating a splash screen, but none of them addresses how to display the progress of different background tasks on a splash screen. How can I achieve this? Can I use JavaFX 2 for the splash screen while the rest of the application is written using Java? **Solved!** I finally got it to work. **My mistake** was that I was updating the GUI content in the Task thread, so my Task thread was blocked and could not execute the next instructions after the GUI update instructions. I moved those GUI-updating instructions to after task completion, and it's working. Thanks jewelsea for pointing me down the right path.
I created a [splash page sample](https://gist.github.com/1588531) for a standalone JavaFX 2.0 application previously. I updated the sample to demonstrate monitoring of the load progress via a progress bar and progress text on the splash page. To adapt the code to monitor the initialization progress of your application, rather than tying the `ProgressBar` to a WebEngine's `loadWorker.workDone` property, create a JavaFX [Task](http://docs.oracle.com/javafx/2.0/api/javafx/concurrent/Task.html) which performs expensive initialization work, and monitor the progress of the `Task` via the Task's [progressProperty](http://docs.oracle.com/javafx/2.0/api/javafx/concurrent/Task.html#progressProperty) and [messageProperty](http://docs.oracle.com/javafx/2.0/api/javafx/concurrent/Task.html#messageProperty). Here is a link to a [splash based sample](https://gist.github.com/2305098) based upon the Task approach outlined in the prior paragraph. For a WebStart or browser embedded JavaFX application, use a [preloader](http://docs.oracle.com/javafx/2.0/api/javafx/application/Preloader.html) as suggested by assylias.
How should I use Rails to index and query a join table? I have a ruby on Rails 4 app, using devise and with a User model and a Deal model. I am creating a `user_deals` table for has\_many/has\_many relationship between User and Deal. Here is the migration ``` class CreateUserDeals < ActiveRecord::Migration def change create_table :user_deals do |t| t.belongs_to :user t.belongs_to :deal t.integer :nb_views t.timestamps end end end ``` When a user load a Deal (for example Deal id= 4), I use a method called `show` ``` controllers/deal.rb #for the view of the Deal page def show end ``` In the view of this Deal id=4 page, I need to display the nb of views of the Devise's `current_user` **inside the Deal page the user is currently on**. ``` deal/show.html here is the nb of views of user: <% current_user.#{deal_id}.nb_views%> ``` Lets' say I have 10M+ `user_deals` lines, I wanted to know if I should use an index ``` add_index :user_deals, :user_id add_index :user_deals, :deal_id ``` or maybe ``` add_index(:deals, [:user_id, deal_id]) ``` Indeed in other situations I would have said Yes, but here I don't know how Rails works behind the scenes. It feels as if Rails is aware of what to do without me needing to speed up the process,...as if when Rails loads this view that there is no SQL query (such as 'find the nb of views WHERe `user_id= x` and `deal_id= Y`')....because I'm using just for the `current_user` who is logged-in (via devise's `current_user`) and for `deal_id` Rails knows it as we are on the very page of this deal (show page) so I just pass it as a parameter. So do I need an index to speed it up or not?
Your question on indexes is a good one. Rails **does** generate SQL\* to do its magic so the normal rules for optimising databases apply. The magic of devise only extends to the current\_user. It fetches their details with a SQL query which is efficient because the user table created by devise has helpful indexes on it by default. But these aren't the indexes you'll need. Firstly, there's a neater more idiomatic way to do what you're after ``` class CreateUserDeals < ActiveRecord::Migration def change create_join_table :users, :deals do |t| t.integer :nb_views t.index [:user_id, :deal_id] t.index [:deal_id, :user_id] t.timestamps end end end ``` You'll notice that migration included two indexes. If you never expect to create a view of all users for a given deal then you won't need the second of those indexes. However, as @chiptuned says indexing each foreign key is nearly always the right call. An index on an integer costs few write resources but pays out big savings on read. It's a very low cost default defensive position to take. You'll have a better time and things will feel clearer if you put your data fetching logic in the controller. Also, you're showing a *deal* so it will feel right to make that rather than `current_user` the centre of your data fetch. You can actually do this query without using the `through` association because you can do it without touching the users table. (You'll likely want that `through` association for other circumstances though.) Just `has_many :user_deals` will do the job for this. To best take advantage of the database engine and do this in one query your controller can look like this: ``` def show @deal = Deal.includes(:user_deals) .joins(:user_deals) .where("user_deals.user_id = ?", current_user.id) .find(params["deal_id"]) end ``` Then in your view... 
I can get info about the deal: ``` <%= @deal.description %> ``` And thanks to the `includes` I can get the user's nb\_views without a separate SQL query (the `where` already restricted `user_deals` to the current user, so `first` picks out exactly that row): ``` <%= @deal.user_deals.first.nb_views %> ``` \* If you want to see what SQL Rails is magically generating, just put `.to_sql` on the end of a relation, e.g. `sql_string = current_user.deals.to_sql`
wordpress wp\_signon function not working I am using the wp\_signon() function to log in the user. I am doing this like: ``` $creds = array(); $creds['user_login'] = $username; $creds['user_password'] = $password; $creds['remember'] = true; $user = wp_signon( $creds, false ); ``` I want to send the user to the home page after login, but I am facing the following error: > Warning: Cannot modify header information - headers already sent by (output started at E:\xampp\htdocs\wpmoodle\wp-content\themes\twentyten\header.php:12) in E:\xampp\htdocs\wpmoodle\wp-includes\pluggable.php on line 690. Thanks in advance.
`wp_signon()` needs to run before you've sent any of your actual page to the browser. This is because part of what `wp_signon()` does is to set your authentication cookies. It does this by outputting a "Set-Cookie: ..." header -- if you look at line 690 of `pluggable.php`, where your error comes from, you'll see that that line sets a cookie. So, because `wp_signon()` outputs *headers*, you can't already have sent any *content* -- because headers must always be output before content. However, the error indicates that you've already sent some output -- on line 12 of `header.php`, presumably some of the first HTML of the standard WordPress theme. This basically indicates that you need to move your `wp_signon()` call to somewhere earlier in the WordPress processing, so it has a chance to output its headers before any page content is sent.
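A sketch of one way to restructure this, assuming the login form is processed by your theme or plugin on a hook that fires before any HTML is echoed (for example `template_redirect`), rather than from inside the template body:

```php
// Runs before get_header() / any echo, so wp_signon() can still send headers.
$creds = array(
    'user_login'    => $username,
    'user_password' => $password,
    'remember'      => true,
);

$user = wp_signon( $creds, false );

if ( is_wp_error( $user ) ) {
    // Login failed: remember the message and render it later in the page.
    $login_error = $user->get_error_message();
} else {
    wp_safe_redirect( home_url() );  // send the user to the home page
    exit;
}
```

Because the redirect is issued before the template starts rendering, no content has been sent yet, so both the auth cookies and the `Location` header can still go out.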
How to create atomic from unsafe memory in Rust I am learning unsafe Rust and trying to create an atomic backed by a pointer to some unsafe memory (e.g. a buffer from C or memory mapped file). I tried this: ``` use std::sync::atomic::{AtomicI64, Ordering}; fn main() -> () { let mut v = vec![1i64, 2i64]; let ptr = &mut v[0] as *mut i64; unsafe { let a = std::mem::transmute::<*mut i64, AtomicI64>(ptr); println!("{}", a.load(Ordering::Relaxed)); } } ``` But it prints the address of the pointer (e.g. `2119547391296`) instead of `1`. What is the correct way to create an atomic located in some external buffer? I want the same functionality such as e.g. C# `Interlocked.CompareExchange(ref *(long*)ptr, ...)`, so maybe there are other ways to get lock-free sync primitives in Rust? **Update:** It looks like I need `std::intrinsics::{*}`, but they are not available in stable Rust. **Update 2:** This compiles and prints `1 2 2` (i.e. `v[0]` is updated as expected via `AtomicI64` created via pointer cast and then dereferencing `AtomicI64` via `& *ptr`). But is this correct? ``` use std::sync::atomic::{AtomicI64, Ordering}; fn main() -> () { let v = vec![1i64, 2i64]; let ptr = &v[0] as *const i64 as *const AtomicI64; unsafe { let a = & *ptr; println!("{}", a.load(Ordering::SeqCst)); a.fetch_add(1i64, Ordering::SeqCst); println!("{}", a.load(Ordering::SeqCst)); println!("{}", v[0]); } } ```
The [documentation for `AtomicI64`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicI64.html) says this: > > This type has the same in-memory representation as the underlying integer type, i64. > > > However, you're trying to transmute a *pointer* to an `i64` to an `AtomicI64`: ``` unsafe { let a = std::mem::transmute::<*mut i64, AtomicI64>(ptr); // is a pointer ^^^^^^^^ // ^^^^^^^^^ is not a pointer } ``` Instead, you'd need to transmute `*mut i64` into a pointer or reference to `AtomicI64`. This can be implemented like this (safe and unsafe variants): ``` // if we have a mut reference, it must have unique ownership over the // referenced data, so we can safely cast that into an immutable reference // to AtomicI64 fn make_atomic_i64<'a>(src: &'a mut i64) -> &'a AtomicI64 { unsafe { &*(src as *mut i64 as *const AtomicI64) } } // if we have a mut pointer, we have no guarantee of ownership or lifetime, and // therefore it's unsafe to cast into an immutable reference to AtomicI64 unsafe fn make_ptr_atomic_i64<'a>(src: *mut i64) -> &'a AtomicI64 { &*(src as *const AtomicI64) } ``` --- [Example](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=6a81e661cd16c3a499c01f03ecd1c901): ``` use std::sync::atomic::{AtomicI64, Ordering}; fn main() -> () { // declare underlying buffer let mut v = vec![1i64, 2i64]; { // get atomic safely let atomic = make_atomic_i64(&mut v[0]); // try to access atomic println!("{}", atomic.swap(10, Ordering::Relaxed)); // = 1 } unsafe { // get atomic unsafely let atomic = make_ptr_atomic_i64(&mut v[0] as *mut i64); // try to access atomic println!("{}", atomic.swap(100, Ordering::Relaxed)); // = 10 } // print final state of variable println!("{}", v[0]); // = 100 } ```
How to test private member objects without making a new object I'm trying to write unit test against a class. I can't change the class, but I think it's possible to test using reflection. I just don't know how to do it. Here's the class: ``` public class MyClass extends AnotherClass implements TheInterface { private enum SomeTypes { SAMPLE01, SAMPLE02, SAMPLE03 } private CircularList<SomeTypes> someTypesList; Date date= new Date(); private SomeOtherClassProcessor01 someOtherClassProcessor01; private SomeOtherClassProcessor02 someOtherClassProcessor02; private SomeOtherClassProcessor03 someOtherClassProcessor03; public Properties initialize (Properties properties) throws Exception { Properties propertiesToReturn = super.initialize(properties); someTypesList = new CircularList<SomeTypes> (Arrays.asList(SomeTypes.values())); someOtherClassProcessor01 = new SomeOtherClassProcessor01(); someOtherClassProcessor02 = new SomeOtherClassProcessor02(); someOtherClassProcessor03 = new SomeOtherClassProcessor03(); return propertiesToReturn; } @Override public void get(ImportedClass someParams) throws Exception { SomeTypes types = someTypesList.getFirstAndRotate(); switch(types) { case SAMPLE01: someOtherClassProcessor01.doSomething(someParams, date); break; case SAMPLE02: someOtherClassProcessor02.doSomething(someParams, date); break; case SAMPLE03: someOtherClassProcessor03.doSomething(someParams, date); break; default: throw new IllegalArgumentException("This " + types + " was not implemented."); } } } ``` For my test this is what I have so far... not sure how to actually do it. 
``` @RunWith(PowerMockRunner.class) @PrepareForTest(MyClass.class) public class TestingMyClass { MyClass mockMyClass; SomeOtherClassProcessor01 someOtherClassProcessor01; SomeOtherClassProcessor02 someOtherClassProcessor02; SomeOtherClassProcessor03 someOtherClassProcessor03; Date date; @Before public void initialize () throws Exception { mockMyClass = spy(new MyClass()); mockSomeOtherClassProcessor01 = mock(SomeOtherClassProcessor01.class); mockSomeOtherClassProcessor02 = mock(SomeOtherClassProcessor02.class); mockSomeOtherClassProcessor03 = mock(SomeOtherClassProcessor03.class); } @Test public void testingGet() throws Exception { date = new Date(); //this is where I'm stuck Whitebox.setInternalState(mockMyClass, "someOtherClassProcessor01", mockSomeOtherClassProcessor01); } } ``` Would it be possible to use Whitebox for this? I need to make sure that there's a call inside the getter for those objects. Should I try something like `when(someOtherClassProcessor01.doSomething(any(), date)).thenReturn(true)`? Please let me know if you need more details. edit: is it even possible to mock the private enum SomeTypes?
One option is to substitute your own (mocked) implementations of `SomeOtherClassProcessor` into `MyClass` using reflection: ``` MyClass myClass = new MyClass(); SomeOtherClassProcessor01 mockProcessor01 = mock(SomeOtherClassProcessor01.class); // reflection bit: find the field by its name // handle NoSuchFieldException Field someProcessorField = MyClass.class.getDeclaredField("someOtherClassProcessor01"); // the field is declared as private, so make it accessible in order to work with it someProcessorField.setAccessible(true); // now set your mocked processor into the field. // First argument is the object to change; second argument - new value for the field someProcessorField.set(myClass, mockProcessor01); ``` PS. Using PowerMock and/or reflection is **surrender to bad design** (as per Timothy :). You should not be depending on code that isn't already well-tested, and if it is, you shouldn't try to test it again. Suppose your testing actually reveals a bug - how would you fix it if you don't control the code? Suppose Java 11 becomes a thing and prohibits your use of reflection. Suppose the code you're testing changes and the fields get renamed - with reflection, you don't have compile-time safety... The list of potential issues goes on.
Check if bit has toggled in C I am working on bitwise manipulation (in C) and I wanted to know how to check if a bit has toggled between a previous value and the new value. ``` Example : oldValue = 0x0FF //0000 1111 1111 in binary newValue = 0x100 //0001 0000 0000 in binary ``` In this example I want to check if bit 8 (the 9th bit) has toggled from 0 to 1. I know that if I want to know whether a bit is set, I can use: ``` value & (1 << 8) ``` So, is this correct?: ``` if( (oldValue & (1 << 8)) == (newValue & (1 << 8)) ) //return 0 if toggled ```
You can do it in two steps: First, use `XOR` to find *all* bits that have toggled: ``` int allToggled = oldValue ^ newValue; ``` Then mask the bit that you want to keep - for example, by shifting `allToggled` to the right, so that the target bit is at position zero, and mask with `1`: ``` int targetBitToggled = (allToggled >> 8) & 1; ``` Now combine these two expressions into a single condition: ``` if ((oldValue ^ newValue) & (1 << 8)) { // ... bit at position 8 has toggled } ``` Note that instead of shifting the `XOR`-ed values right I shifted the bit mask left.
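Wrapped up as a small helper, the combined expression becomes (the function name here is mine):

```c
#include <stdbool.h>

/* True when the bit at position `pos` differs between the two values. */
bool bit_toggled(unsigned old_value, unsigned new_value, unsigned pos)
{
    return ((old_value ^ new_value) >> pos) & 1u;
}
```

For what it's worth, the comparison in the question also works: the masked values are equal exactly when the bit has not toggled. The XOR form simply computes all toggled bits in one step instead of masking each value separately.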
Replacing all occurrences of a string with values from an array I am parsing a string before sending it to a DB. I want to go over all `<br>` in that string and replace them with unique numbers that I get from an array followed by a newLine. For example: ``` str = "Line <br> Line <br> Line <br> Line <br>" $replace = array("1", "2", "3", "4"); my function would return "Line 1 \n Line 2 \n Line 3 \n Line 4 \n" ``` Sounds simple enough. I would just do a while loop, get all the occurances of `<br>` using strpos, and replace those with the required numbers+\n using str\_replace. Problem is that I always get an error and I have no idea what I am doing wrong? Probably a dumb mistake, but still annoying. Here is my code ``` $str = "Line <br> Line <br> Line <br> Line <br>"; $replace = array("1", "2", "3", "4"); $replaceIndex = 0; while(strpos($str, '<br>') != false ) { $str = str_replace('<br>', $replace[index] . ' ' .'\n', $str); //str_replace, replaces the first occurance of <br> it finds index++; } ``` Any ideas please? Thanks in advance,
I would use a regex and a custom callback, like this: ``` $str = "Line <br> Line <br> Line <br> Line <br>"; $replace = array("1", "2", "3", "4"); $str = preg_replace_callback( '/<br>/', function( $match) use( &$replace) { return array_shift( $replace) . ' ' . "\n"; }, $str); ``` Note that this assumes we can modify the `$replace` array. If that's not the case, you can keep a counter: ``` $str = "Line <br> Line <br> Line <br> Line <br>"; $replace = array("1", "2", "3", "4"); $count = 0; $str = preg_replace_callback( '/<br>/', function( $match) use( $replace, &$count) { return $replace[$count++] . ' ' . "\n"; }, $str); ``` You can see from [this demo](http://viper-7.com/IXZcbM) that this outputs: ``` Line 1 Line 2 Line 3 Line 4 ```
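As a cross-language aside, the same callback idea looks like this in Python with `re.sub` (a hypothetical port, not part of the PHP answer):

```python
import re

s = "Line <br> Line <br> Line <br> Line <br>"
replacements = iter(["1", "2", "3", "4"])

# re.sub calls the lambda once per match, left to right, so each
# <br> consumes the next value from the iterator.
result = re.sub(r"<br>", lambda m: next(replacements) + " \n", s)
print(result)
```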
image manipulation width and height setting in my project i just do image watermarking or image combine it's working fine and code for that. ``` <!DOCTYPE html> <html> <head> <title>test</title> </head> <body> <?php if(isset($_POST['submit'])) { // Give the Complete Path of the folder where you want to save the image $folder="uploads/"; move_uploaded_file($_FILES["fileToUpload"]["tmp_name"], "$folder".$_FILES["fileToUpload"]["name"]); $file='uploads/'.$_FILES["fileToUpload"]["name"]; $uploadimage=$folder.$_FILES["fileToUpload"]["name"]; $newname= time(); $ext = pathinfo($_FILES["fileToUpload"]["name"], PATHINFO_EXTENSION); // Set the thumbnail name $thumbnail = $folder.$newname.".".$ext; $imgname=$newname.".".$ext; // Load the mian image if ($ext=="png" || $ext=="PNG") { $source = imagecreatefrompng($uploadimage); } else if ($ext=="gif" || $ext=="GIF") { $source = imagecreatefromgif($uploadimage); } else if ($ext=="bmp" || $ext=="BMP") { $source = imagecreatefrombmp($uploadimage); } else{ $source = imagecreatefromjpeg($uploadimage); } // load the image you want to you want to be watermarked $watermark = imagecreatefrompng('uploads/logo1.png'); // get the width and height of the watermark image $water_width = imagesx($source)/2; $water_height = imagesy($watermark); // get the width and height of the main image image $main_width = imagesx($source); $main_height = imagesy($source); $im_middle_w = $main_width/2; $im_middle_h = $main_height/2; // Set the dimension of the area you want to place your watermark we use 0 // from x-axis and 0 from y-axis $dime_x = $im_middle_w - $water_width/2; $dime_y = $im_middle_h - $water_height/2; // copy both the images imagecopy($source, $watermark, $dime_x, $dime_y, 0, 0, $water_width, $water_height); // Final processing Creating The Image imagejpeg($source, $thumbnail, 100); unlink($file); } ?> <img src='uploads/<?php echo $imgname;?>'> </body> </html> ``` but problem with setting **$water\_width** and i want set as half of my 
source image. But when the source image is narrower or wider than the watermark, it comes out like the images below. When the source image width is greater:

[![enter image description here](https://i.stack.imgur.com/NlY5p.jpg)](https://i.stack.imgur.com/NlY5p.jpg)

and when the width is less:

[![enter image description here](https://i.stack.imgur.com/JHj5v.jpg)](https://i.stack.imgur.com/JHj5v.jpg)

So my problem is: how do I set **$water\_width** to half of the source image width? With Alex's answer it came up like this:

[![enter image description here](https://i.stack.imgur.com/7Vlrp.jpg)](https://i.stack.imgur.com/7Vlrp.jpg)
This will resize watermark to half-width of original image and put it in the centre: ``` // load the image you want to you want to be watermarked $watermark = imagecreatefrompng('uploads/logo1.png'); // get the width and height of the watermark image $water_width = imagesx($watermark); $water_height = imagesy($watermark); // get the width and height of the main image image $main_width = imagesx($source); $main_height = imagesy($source); // resize watermark to half-width of the image $new_height = round($water_height * $main_width / $water_width / 2); $new_width = round($main_width / 2); $new_watermark = imagecreatetruecolor($new_width, $new_height); // keep transparent background imagealphablending( $new_watermark, false ); imagesavealpha( $new_watermark, true ); imagecopyresampled($new_watermark, $watermark, 0, 0, 0, 0, $new_width, $new_height, $water_width, $water_height); // Set the dimension of the area you want to place your watermark we use 0 // from x-axis and 0 from y-axis $dime_x = round(($main_width - $new_width)/2); $dime_y = round(($main_height - $new_height)/2); // copy both the images imagecopy($source, $new_watermark, $dime_x, $dime_y, 0, 0, $new_width, $new_height); // Final processing Creating The Image imagejpeg($source, $thumbnail, 100); imagedestroy($source); imagedestroy($watermark); imagedestroy($new_watermark); ```
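The key step is the proportional resize, so here is that arithmetic in isolation as a Python sketch (the pixel sizes are made-up examples, not from the question):

```python
# Hypothetical dimensions: a 200x100 watermark on an 800px-wide image.
water_width, water_height = 200, 100
main_width = 800

# The watermark should span half the main image's width; the height
# is scaled by the same factor so the aspect ratio is preserved.
new_width = round(main_width / 2)
new_height = round(water_height * main_width / water_width / 2)

print(new_width, new_height)  # 400 200

# Aspect ratio is unchanged:
assert new_width / new_height == water_width / water_height
```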
LINQ method to sort a list based on a bigger list ``` List<int> _lstNeedToOrder = new List<int>(); _lstNeedToOrder.AddRange(new int[] { 1, 5, 6, 8 }); //I need to sort this based on the below list. List<int> _lstOrdered = new List<int>();//to order by this list _lstOrdered.AddRange(new int[] { 13, 5, 11, 1, 4, 9, 2, 7, 12, 10, 3, 8, 6 }); order will be -->_lstNeedToOrder = 5,1,8,6 ``` How can I do it?
Well the *simple* - but inefficient - way would be:

```
var result = _lstNeedToOrder.OrderBy(x => _lstOrdered.IndexOf(x));
```

An alternative would be to work out a *faster* way of obtaining the desired index of a value. If your values will always be in the range [1...n] you could just invert that "ordered" list to be a "list of indexes by value". At which point you could use:

```
var result = _lstNeedToOrder.OrderBy(x => indexes[x]);
```

(where `indexes` would have an extra value at the start for 0, just to make things simpler).

Alternatively, you could create a `Dictionary<int, int>` from value to index. That would be more general, in that it would handle a very wide range of values without taking a lot of memory. But a dictionary lookup is obviously less efficient than an array or list lookup.

Just as a side note which wouldn't format well as a comment, your initialization can be simplified using a collection initializer:

```
var listToOrder = new List<int> { 1, 5, 6, 8 };
var orderedList = new List<int> { 13, 5, 11, 1, 4, 9, 2, 7, 12, 10, 3, 8, 6 };
```
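The value-to-index dictionary trick is language-neutral; here is a small Python sketch of the same idea, using the lists from the question:

```python
to_order = [1, 5, 6, 8]
ordered = [13, 5, 11, 1, 4, 9, 2, 7, 12, 10, 3, 8, 6]

# Build the value -> position lookup once (O(n)); each lookup during
# the sort is then O(1), unlike a repeated IndexOf/linear scan.
index = {value: pos for pos, value in enumerate(ordered)}
result = sorted(to_order, key=index.__getitem__)
print(result)  # [5, 1, 8, 6]
```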
How to print map object with Python 3? This is my code

```
def fahrenheit(T):
    return ((float(9)/5)*T + 32)

temp = [0, 22.5, 40,100]
F_temps = map(fahrenheit, temp)
```

This returns a map object, so I tried something like this

```
for i in F_temps:
    print(F_temps)

<map object at 0x7f9aa050ff28>
<map object at 0x7f9aa050ff28>
<map object at 0x7f9aa050ff28>
<map object at 0x7f9aa050ff28>
```

I am not sure, but I think my solution was possible with Python 2.7; how do I change this for 3.5?
You have to turn the map into a list or tuple first. To do that,

```
print(list(F_temps))
```

This is because maps are lazily evaluated, meaning the values are only computed on-demand. Let's see an example

```
def evaluate(x):
    print(x)

mymap = map(evaluate, [1,2,3]) # nothing gets printed yet
print(mymap) # <map object at 0x106ea0f10>

# calling next evaluates the next value in the map
next(mymap) # prints 1
next(mymap) # prints 2
next(mymap) # prints 3
next(mymap) # raises the StopIteration error
```

When you use map in a for loop, the loop automatically calls `next` for you, and treats the StopIteration error as the end of the loop. Calling `list()` on a map forces all of its remaining values to be evaluated. Note that the map above is already exhausted by the `next` calls, so we have to build a fresh one first:

```
mymap = map(evaluate, [1,2,3]) # recreate the map; the old one is used up
result = list(mymap) # prints 1, 2, 3
```

However, since our `evaluate` function has no return value, `result` is simply `[None, None, None]`
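A related gotcha worth noting: a map object is an iterator, so it can be consumed only once. A short sketch:

```python
m = map(str, [1, 2, 3])

first = list(m)   # evaluates everything
second = list(m)  # the iterator is already exhausted

print(first)   # ['1', '2', '3']
print(second)  # []

# If the results are needed more than once, materialize them up front:
results = list(map(str, [1, 2, 3]))
print(results[0], results[-1])  # 1 3
```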
ScalaCheck: choose an integer with custom probability distribution I want to create a generator in ScalaCheck that generates numbers between say 1 and 100, but with a bell-like bias towards numbers closer to 1. `Gen.choose()` distributes numbers randomly between the min and max value: ``` scala> (1 to 10).flatMap(_ => Gen.choose(1,100).sample).toList.sorted res14: List[Int] = List(7, 21, 30, 46, 52, 64, 66, 68, 86, 86) ``` And `Gen.chooseNum()` has an added bias for the upper and lower bounds: ``` scala> (1 to 10).flatMap(_ => Gen.chooseNum(1,100).sample).toList.sorted res15: List[Int] = List(1, 1, 1, 61, 85, 86, 91, 92, 100, 100) ``` I'd like a `choose()` function that would give me a result that looks something like this: ``` scala> (1 to 10).flatMap(_ => choose(1,100).sample).toList.sorted res15: List[Int] = List(1, 1, 1, 2, 5, 11, 18, 35, 49, 100) ``` I see that `choose()` and `chooseNum()` take an implicit [Choose](https://www.scalacheck.org/files/scalacheck_2.11-1.12.5-api/index.html#org.scalacheck.Gen$$Choose) trait as an argument. Should I use that?
You could use `Gen.frequency()` [(1)](https://www.scalacheck.org/files/scalacheck_2.11-1.12.5-api/index.html#org.scalacheck.Gen$@frequency[T](gs:(Int,org.scalacheck.Gen[T])*):org.scalacheck.Gen[T]): ``` val frequencies = List( (50000, Gen.choose(0, 9)), (38209, Gen.choose(10, 19)), (27425, Gen.choose(20, 29)), (18406, Gen.choose(30, 39)), (11507, Gen.choose(40, 49)), ( 6681, Gen.choose(50, 59)), ( 3593, Gen.choose(60, 69)), ( 1786, Gen.choose(70, 79)), ( 820, Gen.choose(80, 89)), ( 347, Gen.choose(90, 100)) ) (1 to 10).flatMap(_ => Gen.frequency(frequencies:_*).sample).toList res209: List[Int] = List(27, 21, 31, 1, 21, 18, 9, 29, 69, 29) ``` I got the frequencies from <https://en.wikipedia.org/wiki/Standard_normal_table#Complementary_cumulative>. The code is just a sample of the table (% 3 or mod 3), but I think you can get the idea.
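If you want to prototype the shape of the distribution outside ScalaCheck first, the same bucket-frequency idea can be sketched in Python (the weights are the ones from the table above; this is just an illustration, not ScalaCheck code):

```python
import random

random.seed(0)  # deterministic for the demo

buckets = [(0, 9), (10, 19), (20, 29), (30, 39), (40, 49),
           (50, 59), (60, 69), (70, 79), (80, 89), (90, 100)]
weights = [50000, 38209, 27425, 18406, 11507, 6681, 3593, 1786, 820, 347]

def biased_sample():
    # Pick a bucket with the given relative frequency, then a uniform
    # value inside it -- the same mechanics as Gen.frequency.
    lo, hi = random.choices(buckets, weights=weights)[0]
    return random.randint(lo, hi)

samples = [biased_sample() for _ in range(50_000)]
mean = sum(samples) / len(samples)
print(round(mean, 1))  # small values dominate; the mean lands near 21-22
```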
What is $ in Haskell, and how to apply functions to functions I'm doing Project Euler with Haskell, and found something to learn when completing [the very first problem](https://projecteuler.net/problem=1). Here's my code:

```
isValid x = (mod x 3 == 0) || (mod x 5 == 0)

listValid :: Integer -> [Integer]
listValid n = [x | x <- [1..n-1], isValid x]
```

The function `listValid` will get all the positive integers less than `n` that are divisible by either 3 or 5. Easy enough.

```
*Main> listValid 10
[3,5,6,9]
```

Now I need to sum them. I figure the sum function is the right way to do this. What I don't understand is why the first two versions work, and the third doesn't.

```
*Main> sum (listValid 10)
23
*Main> sum $ listValid 10
23
*Main> sum listValid 10

<interactive>:4:5:
    Couldn't match type ‘[Integer]’ with ‘a0 -> t’
    Expected type: Integer -> a0 -> t
      Actual type: Integer -> [Integer]
    Relevant bindings include it :: t (bound at <interactive>:4:1)
    In the first argument of ‘sum’, namely ‘listValid’
    In the expression: sum listValid 10
```

Is this an order of operations problem, where I need to wrap in parentheses to assert which function should be applied first? And if so, what is the `$` doing in the second version?
It's about associativity. Function application is left-associative, so `sum listValid 10` is equivalent to `(sum listValid) 10`, not `sum (listValid 10)`. And if you think about it, it has to be that way: If you define `add x y = x+y`, you wouldn't want `add 1 2` to be equivalent to `add (1 2)`. So the issue here is that in `sum listValid 10`, it doesn't see `listValid 10` as the argument to `sum`; it sees `listValid` as the argument to `sum` and then `10` as the argument to `sum listValid`. `$` resolves this issue because it's an infix operator and it's perfectly clear that `sum` is its left operand and `listValid 10` is its right operand (keeping in mind that function application has higher precedence than any infix operator, so it can't be seen as `(sum $ listValid) 10`).
Auto login after successful registration Hey all, I want to perform an automatic login after successful registration in Spring. Meaning: I have protected pages which require a login to access, and after registration I want to skip the login page and log the user in automatically so they can see the protected page. I am using Spring 3.0 and Spring Security 3.0.2. How can I do this?
This can be done with Spring Security in the following manner (semi-pseudocode):

```
import org.springframework.security.web.savedrequest.RequestCache;
import org.springframework.security.web.savedrequest.SavedRequest;

@Controller
public class SignupController {

    @Autowired
    RequestCache requestCache;

    @Autowired
    protected AuthenticationManager authenticationManager;

    @RequestMapping(value = "/account/signup/", method = RequestMethod.POST)
    public String createNewUser(@ModelAttribute("user") User user, BindingResult result, HttpServletRequest request, HttpServletResponse response) {
        //After successfully Creating user
        authenticateUserAndSetSession(user, request);

        return "redirect:/home/";
    }

    private void authenticateUserAndSetSession(User user, HttpServletRequest request) {
        String username = user.getUsername();
        String password = user.getPassword();
        UsernamePasswordAuthenticationToken token = new UsernamePasswordAuthenticationToken(username, password);

        // generate session if one doesn't exist
        request.getSession();

        token.setDetails(new WebAuthenticationDetails(request));
        Authentication authenticatedUser = authenticationManager.authenticate(token);

        SecurityContextHolder.getContext().setAuthentication(authenticatedUser);
    }
}
```

Update: trimmed to only contain how to create the session after the registration
How to vectorize comparisons instead of for-loop in R? I would like to run a discrete-time simulation (simplified version below). I generate a data frame of population members (one member per row) with their timestamps for entering and exiting a website. I then wish to count at each time interval how many members are on the site. Currently I am looping through time and at each second counting how many members have entered and not yet exited. (I have also tried destructive iteration by removing exited members at each interval, which takes even longer. I also understand that I can use larger time intervals in the loop.) How do I use linear algebra to eliminate the for-loop and excess runtime? My current approach does not scale well as population increases, and of course it is linear with respect to duration. ``` popSize = 10000 simDuration = 10000 enterTimestamp <- rexp(n = popSize, rate = .001) exitTimestamp <- enterTimestamp + rexp(n = popSize, rate = .001) popEvents <- data.frame(cbind(enterTimestamp,exitTimestamp)) visitorLoad <- integer(length = simDuration) for (i in 1:simDuration) { visitorLoad[i] <- sum(popEvents$enterTimestamp <= i & popEvents$exitTimestamp > i) if (i %% 100 == 0) {print(paste('Sim at',i,'out of',simDuration, 'seconds.',sep=' ') )} } plot(visitorLoad, typ = 'l', ylab = 'Visitor Load', xlab='Time Elapsed (sec)') ```
You can obtain the counts of visitors entering and exiting at different times and then use the cumulative sum to compute the number of visitors there at a particular time. This seems to meet your requirement of the code running quickly, though it does not use linear algebra. ``` diffs = rep(0, simDuration+1) # Store the number of times a visitor enters and exits at each timestep. The table # will contain headers that are the timesteps and values that are the number of # people entering or exiting at the timestep. tabEnter = table(pmax(1, ceiling(enterTimestamp))) tabExit = table(pmin(simDuration+1, ceiling(exitTimestamp))) # For each time index, add the number of people entering and subtract the number of # people exiting. For instance, if in period 20, 3 people entered and 4 exited, then # diffs[20] equals -1. as.numeric(names(tabEnter)) is the periods for which at least # one person entered, and tabEnter is the number of people in each of those periods. diffs[as.numeric(names(tabEnter))] = diffs[as.numeric(names(tabEnter))] + tabEnter diffs[as.numeric(names(tabExit))] = diffs[as.numeric(names(tabExit))] - tabExit # cumsum() sums the diffs vector through a particular time point. visitorLoad2 = head(cumsum(diffs), simDuration) ```
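The same enter/exit difference-then-cumulative-sum mechanics, written out in plain Python with toy timestamps to make the logic explicit (a cross-language sketch, not R):

```python
from itertools import accumulate
from math import ceil

enter = [0.5, 1.2, 2.7]
exit_ = [2.5, 3.5, 5.1]
sim_duration = 6

# A visitor counts at second i when enter <= i < exit,
# i.e. from ceil(enter) through ceil(exit) - 1.
diffs = [0] * (sim_duration + 2)
for e, x in zip(enter, exit_):
    diffs[min(max(1, ceil(e)), sim_duration + 1)] += 1
    diffs[min(ceil(x), sim_duration + 1)] -= 1

load = list(accumulate(diffs))[1:sim_duration + 1]
print(load)  # [1, 2, 2, 1, 1, 0]

# Cross-check against the direct O(n * duration) count:
direct = [sum(e <= i < x for e, x in zip(enter, exit_))
          for i in range(1, sim_duration + 1)]
assert load == direct
```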
How to open gmail in android I just wanted to open the Gmail app through my app and wanted to set email, subject and message from my application. I have tried GmailService but it is not supporting bcc or cc emails. Link: <https://github.com/yesidlazaro/GmailBackground> ``` BackgroundMail.newBuilder(this) .withUsername("[email protected]") .withPassword("password12345") .withMailto("[email protected]") .withType(BackgroundMail.TYPE_PLAIN) .withSubject("this is the subject") .withBody("this is the body") .withOnSuccessCallback(new BackgroundMail.OnSuccessCallback() { @Override public void onSuccess() { //do some magic } }).withOnFailCallback(new BackgroundMail.OnFailCallback() { @Override public void onFail() { //do some magic } }).send(); ``` I would like to use bcc and cc functionality along with the attachment, subject, and message.
open gmail via Intent ``` Intent intent = new Intent(Intent.ACTION_VIEW); intent.setData(Uri.parse("[email protected]")); intent.setClassName("com.google.android.gm", "com.google.android.gm.ComposeActivityGmail"); intent.putExtra(Intent.EXTRA_CC, new String[]{"[email protected]"}); intent.putExtra(Intent.EXTRA_BCC, new String[]{"[email protected]"}); intent.putExtra(Intent.EXTRA_SUBJECT, "your subject goes here..."); intent.putExtra(Intent.EXTRA_TEXT, "Your message content goes here..."); startActivity(intent); ``` just pass `EXTRA_CC` & `EXTRA_BCC` in intent argument **Edit** Below answer will work on android 11 ``` Intent intent = new Intent(Intent.ACTION_SENDTO); intent.setData(Uri.parse("mailto:")); intent.putExtra(Intent.EXTRA_EMAIL, new String[]{"[email protected]"}); intent.putExtra(Intent.EXTRA_SUBJECT, "Your subject here..."); intent.putExtra(Intent.EXTRA_TEXT,"Your message here..."); startActivity(intent); ``` Edit 2 ``` val selectorIntent = Intent(Intent.ACTION_SENDTO) selectorIntent.data = Uri.parse("mailto:") val emailIntent = Intent(Intent.ACTION_SEND) emailIntent.putExtra(Intent.EXTRA_EMAIL, arrayOf("[email protected]")) emailIntent.putExtra(Intent.EXTRA_SUBJECT, "Subject here...") emailIntent.putExtra(Intent.EXTRA_TEXT, "Email Body...") emailIntent.selector = selectorIntent activity!!.startActivity(Intent.createChooser(emailIntent, "Send email...")) ```
c++ concept help, pointers So I have a reasonable understanding of pointers, but I was asked what the difference between these is:

```
void print(int* &pointer)

void print(int* pointer)
```

I'm still a student myself and I'm not 100% sure. I'm sorry if this is basic, but my googling skills failed me. Is there any way you can help me understand this concept a bit better? I haven't used C++ in a long time, and I am trying to tutor a student, so I am trying to solidify my conceptual knowledge for her.
The first passes the pointer by reference, the second by value. If you use the first signature, you can modify both the memory the pointer points to, as well as which memory it points to. For example:

```
void printR(int*& pointer) //by reference
{
    *pointer = 5;
    pointer = NULL;
}

void printV(int* pointer) //by value
{
    *pointer = 3;
    pointer = NULL;
}

int* x = new int(4);
int* y = x;

printV(x); //the pointer is passed by value
//the pointer itself cannot be changed
//the value it points to is changed from 4 to 3
assert ( *x == 3 );
assert ( x != NULL );

printR(x); //here, we pass it by reference
//the pointer is changed - now is NULL
//also the original value is changed, from 3 to 5
assert ( x == NULL ); // x is now NULL
assert ( *y == 5 );
```
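For what it's worth, Python can only mimic the *by value* case: every argument is a copy of a reference, so mutating the pointed-to object is visible to the caller while rebinding the parameter is not (a loose analogy only, since Python has no true pass-by-reference):

```python
def print_v(lst):   # analogous to print(int* pointer): the handle is copied
    lst[0] = 3      # like *pointer = 3 -- the caller sees this change
    lst = None      # like pointer = NULL -- only rebinds the local name

x = [4]
print_v(x)
print(x)  # [3] -- the value changed, the caller's reference did not
```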
Change widget's text dynamically flutter/dart I tried to write code that generates a random value from an array and shows it on a card widget (every time the card is opened, another value from the array {string} should be shown). In practice, the code chooses a cell from the array once, and every time I open the widget it shows the same one that was chosen first (it never changes). I worked with the slimycard package (<https://pub.dev/packages/slimy_card>)

```
// in 'homepage' class widget build
child: StreamBuilder(
  initialData: false,
  stream: slimyCard.stream,
  builder: ((BuildContext context, AsyncSnapshot snapshot) {
    return ListView(
      padding: EdgeInsets.only(top: 150),
      children: <Widget>[
        // SlimyCard is being called here.
        SlimyCard(
          color: Colors.transparent.withOpacity(0.2),
          topCardHeight: 450,
          width: 400,
          topCardWidget: topCardWidget(),
          bottomCardWidget: bottomCardWidget(),
        ),
      ],
    );
  }),
)

// the widget whose text I want to change every time it's opened.
Widget bottomCardWidget() {
  return FutureBuilder(
      future: _getfromlist(widget.mode),
      initialData: 'loading..',
      builder: (BuildContext context, AsyncSnapshot<String> text) {
        return new Text(
          text.data,
          style: TextStyle(
            color: Colors.white,
            fontSize: 19,
            fontWeight: FontWeight.w500,
          ),
          textAlign: TextAlign.center,
        );
      });
}

// _getfromlist func
Future<String> _getfromlist(int type) async {
  final getter = Random().nextInt(myList[type].length);
  var setter = myList[type][getter];
  var text = '$setter';
  return await new Future(() => text);
}
```

Hope you understood my intent. Please help, thank you guys :)
You can copy paste run full code below Step 1: In `initState`, listen open/close ``` @override void initState() { slimyCard.stream.listen((value) { if (value) { handleFuture(); } }); ``` Step 2: Use `ValueNotifier` and `ValueListenableBuilder` to build `bottomCardWidget` ``` final ValueNotifier<String> _notify = ValueNotifier<String>(""); void handleFuture() async { String text = await _getfromlist(1); _notify.value = text; } ... Widget bottomCardWidget() { return ValueListenableBuilder( valueListenable: _notify, builder: (BuildContext context, String value, Widget child) { return Text( value, style: TextStyle( color: Colors.white, fontSize: 19, fontWeight: FontWeight.w500, ), textAlign: TextAlign.center, ); }); } ``` working demo [![enter image description here](https://i.stack.imgur.com/H4ynv.gif)](https://i.stack.imgur.com/H4ynv.gif) full code ``` import 'dart:math'; import 'package:flutter/material.dart'; import 'package:flutter/services.dart'; import 'package:slimy_card/slimy_card.dart'; void main() { WidgetsFlutterBinding.ensureInitialized(); //Don't worry about these codes here, as they are not relevant for this example. 
SystemChrome.setSystemUIOverlayStyle(SystemUiOverlayStyle( statusBarColor: Colors.transparent, statusBarIconBrightness: Brightness.dark, systemNavigationBarColor: Colors.white, systemNavigationBarIconBrightness: Brightness.dark, systemNavigationBarDividerColor: Colors.transparent, )); SystemChrome.setPreferredOrientations([DeviceOrientation.portraitUp]); runApp(MyApp()); } class MyApp extends StatelessWidget { @override Widget build(BuildContext context) { return MaterialApp( debugShowCheckedModeBanner: false, theme: ThemeData( scaffoldBackgroundColor: Colors.white, fontFamily: 'Poppins', ), home: HomePage(), ); } } class HomePage extends StatefulWidget { @override _HomePageState createState() => _HomePageState(); } class _HomePageState extends State<HomePage> { final ValueNotifier<String> _notify = ValueNotifier<String>(""); void handleFuture() async { String text = await _getfromlist(1); _notify.value = text; } @override void initState() { slimyCard.stream.listen((value) { if (value) { handleFuture(); } }); super.initState(); } @override Widget build(BuildContext context) { return Scaffold( body: StreamBuilder( // This streamBuilder reads the real-time status of SlimyCard. initialData: false, stream: slimyCard.stream, //Stream of SlimyCard builder: ((BuildContext context, AsyncSnapshot snapshot) { return ListView( padding: EdgeInsets.zero, children: <Widget>[ SizedBox(height: 100), SlimyCard( // In topCardWidget below, imagePath changes according to the // status of the SlimyCard(snapshot.data). topCardWidget: topCardWidget((snapshot.data) ? 'https://picsum.photos/250?image=9' : 'https://picsum.photos/250?image=15'), bottomCardWidget: bottomCardWidget(), ) ], ); }), ), ); } // This widget will be passed as Top Card's Widget. 
Widget topCardWidget(String imagePath) { return Column( mainAxisAlignment: MainAxisAlignment.center, children: <Widget>[ Container( height: 70, width: 70, decoration: BoxDecoration( color: Colors.white, borderRadius: BorderRadius.circular(15), image: DecorationImage(image: NetworkImage(imagePath)), boxShadow: [ BoxShadow( color: Colors.black.withOpacity(0.1), blurRadius: 20, spreadRadius: 1, ), ], ), ), SizedBox(height: 15), Text( 'The Rock', style: TextStyle(color: Colors.white, fontSize: 20), ), SizedBox(height: 15), Text( 'He asks, what your name is. But!', style: TextStyle( color: Colors.white.withOpacity(0.8), fontSize: 12, fontWeight: FontWeight.w500), ), SizedBox(height: 10), ], ); } // This widget will be passed as Bottom Card's Widget. Widget bottomCardWidget() { return ValueListenableBuilder( valueListenable: _notify, builder: (BuildContext context, String value, Widget child) { return Text( value, style: TextStyle( color: Colors.white, fontSize: 19, fontWeight: FontWeight.w500, ), textAlign: TextAlign.center, ); }); } // _getfromlist func Future<String> _getfromlist(int type) async { final getter = Random().nextInt(100); //var setter = myList[type][getter]; var text = '$getter'; return Future.value(text); } } ```
Count bits 1 on an integer as fast as GCC `__builtin_popcount(int)` I wrote an algorithm (taken from "The C Programming Language") that counts the number of 1-bits very fast:

```
int countBit1Fast(int n)
{
    int c = 0;
    for (; n; ++c)
        n &= n - 1;
    return c;
}
```

But a friend told me that `__builtin_popcount(int)` is a lot faster, but less portable. I gave it a try and it was MANY times faster! Why is it so fast? I want to count bits as fast as possible, but without sticking to a particular compiler.

**EDIT:** I may use it on PIC micro-controllers and maybe on non-intel processors, so I need the maximum portability.
> > I write a algorithm (taken from "The C Programming Language") that counts the number of 1-bits very fast: > > > I don't see why anyone would characterize your approach as "very fast". It's a bit clever, and it should be faster on average than naive alternatives. It also does not depend on the width of the representation of `int`, which is a plus. I observe that it has undefined behavior for negative arguments, but that's a common theme for bitwise operators and functions. Let's analyze, supposing a non-negative argument: ``` int c = 0; for (; n; ++c) n &= n - 1; ``` - How many loop iterations are performed? 1 for each 1 bit in the binary representation of the value, irrespective of *where* in the value each bit lies - How much work is performed per iteration - one increment of `c` - one comparison of `n` against zero (plus one more of these when breaking out of the loop) - one decrement of `n` by 1 - one bitwise 'and'That ignores reads and stores, which very likely can be made free or especially cheap by keeping the operands in registers. If we assume equal cost for each of those, that's four operations per iteration. For random 32-bit integers, there will be an average of 16 iterations, for a total of **65 operations on average**. (Best case is just one operation, but worst is 129, which is no better than a naive implementation). `__builtin_popcount()`, on the other hand, uses **a single instruction** regardless of input on platforms that support it, such as yours very likely is. Even on those that don't have a for-purpose instruction, however, it can be done faster (on average). @dbush has presented one such mechanism that has similar advantages to the one you present. In particular, it does not depend on a pre-chosen integer width, and although it does depend on *where* in the representation the 1 bits reside, it does run faster for some arguments (smaller ones) than others. 
If I'm counting right, that one will average **around 20 operations** on random 32-bit inputs: five in each of four loop iterations (only 0.4% of random inputs would require fewer than four iterations). I'm counting one table read per iteration there, which I assume can be served from cache, but which is probably still not as fast as an arithmetic operation on values already held in registers. One that is strictly computational would be: ``` int countBit1Fast(uint32_t n) { n = (n & 0x55555555u) + ((n >> 1) & 0x55555555u); n = (n & 0x33333333u) + ((n >> 2) & 0x33333333u); n = (n & 0x0f0f0f0fu) + ((n >> 4) & 0x0f0f0f0fu); n = (n & 0x00ff00ffu) + ((n >> 8) & 0x00ff00ffu); n = (n & 0x0000ffffu) + ((n >>16) & 0x0000ffffu); return n; } ``` That's pretty easy to count: five additions, five shifts, and ten bitwise 'and' operations, and 5 loads of constants for a total of **25 operations** for every input (and it goes up only to 30 for 64-bit inputs, though those are now 64-bit operations instead of 32-bit ones). This version is, however, intrinsically dependent on a particular size of the input data type.
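The three approaches are easy to cross-check empirically. Here is a Python sketch of that check (Python integers are arbitrary-width, so the 32-bit masks do the truncation the C types would):

```python
def kernighan(n):
    """Clear the lowest set bit until nothing is left."""
    c = 0
    while n:
        n &= n - 1
        c += 1
    return c

def parallel_sum(n):
    """The branch-free pairwise-sum version from the answer."""
    n = (n & 0x55555555) + ((n >> 1) & 0x55555555)
    n = (n & 0x33333333) + ((n >> 2) & 0x33333333)
    n = (n & 0x0F0F0F0F) + ((n >> 4) & 0x0F0F0F0F)
    n = (n & 0x00FF00FF) + ((n >> 8) & 0x00FF00FF)
    n = (n & 0x0000FFFF) + ((n >> 16) & 0x0000FFFF)
    return n

for v in (0, 1, 0x80000000, 0xFFFFFFFF, 0x12345678):
    assert kernighan(v) == parallel_sum(v) == bin(v).count("1")
print("all three methods agree")
```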
CDF of a random vector I am reading a book that in one page it talks about cdf of a random vector. This is from the book: > > Given $X=(X\_1,...,X\_n)$, each of the random variables $X\_1, ... ,X\_n$ can be characterized from a probabilistic point of view by its cdf. > > > However the cdf of each coordinate of a random vector does not completely describe the probabilistic behaviour of the whole vector. For instance, if $U\_1$ AND $U\_2$ are two independent random variables with the same cdf $G(x)$, the vectors $X=(X\_1, X\_2)$ defined respectively by $X\_1=U\_1$, $X\_2=U\_2$ and $X\_1=U\_1$, $X\_2=U\_1$ have each of their coordinates with the same cdf, and they are quite different. My question is: From the very last paragraph, it says $U\_1$ and $U\_2$ are coming from the same c.d.f. And then they define $X=(X\_1, X\_2)$, but they say $X=(X\_1, X\_2)$ is different from $X=(X\_1, X\_1)$. I don't really understand why the two $X$ are different. (i.e. I don't understand why $X=(X\_1, X\_2)$ and $X=(X\_1, X\_1)$ are different). Isn't $X\_1$ the same as $X\_2$, so it doesn't matter whether you put two $X\_1$ to form $X=(X\_1, X\_1)$ or put one $X\_1$ and one $X\_2$ to form $X=(X\_1, X\_2)$. Shouldn't they be the same? why does the author says they are "quite different"? Could someone explain why they are different?
Let us take the simplest example of Bernoulli random variables with parameter $\frac12$. The value of the (joint) CDF $F\_{X\_1,X\_2}(x,y)$ of $X\_1$ and $X\_2$ is the total probability mass in the southwest quadrant with northeast corner $(x,y)$.

- If $X\_1$ and $X\_2$ are two *independent* Bernoulli random variables, then we have *four* probability masses of $\frac14$ sitting at $(0,0), (1,0), (0,1)$, and $(1,1)$. Hence $$F\_{X\_1,X\_2}\left(\frac12,\frac12\right) = \frac14.$$
- If $X\_2 = 1-X\_1$, then we have *two* probability masses of $\frac12$ sitting at $(0,1)$ and $(1,0)$. Hence $$F\_{X\_1,X\_2}\left(\frac12,\frac12\right) = 0.$$
- If $X\_2 = X\_1$, then we have *two* probability masses of $\frac12$ sitting at $(0,0)$ and $(1,1)$. Hence $$F\_{X\_1,X\_2}\left(\frac12,\frac12\right) = \frac12.$$

Thus, the *joint* CDF of $X\_1$ and $X\_2$ *does* depend on what kind of relationship (if any) they have with each other, and just knowing the common CDF of $X\_1$ and $X\_2$ (these are *marginal* CDFs) tells us nothing about the behavior of the joint CDF.
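The three values are easy to confirm by simulation; here is a rough Python sketch estimating $F\_{X\_1,X\_2}(\frac12,\frac12) = P(X\_1=0, X\_2=0)$ in each of the three cases:

```python
import random

random.seed(1)
N = 100_000

def est(pairs):
    """Empirical P(X1 <= 1/2 and X2 <= 1/2), i.e. P(X1 = 0 and X2 = 0)."""
    return sum(a == 0 and b == 0 for a, b in pairs) / N

u = [random.randint(0, 1) for _ in range(N)]
v = [random.randint(0, 1) for _ in range(N)]

independent = est(zip(u, v))              # close to 1/4
complement = est((x, 1 - x) for x in u)   # exactly 0
identical = est((x, x) for x in u)        # close to 1/2

print(round(independent, 2), complement, round(identical, 2))
```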
Missing php\_soap.dll in Ubuntu 16 I am trying to install Composer on Ubuntu 16.

```
curl -sS https://getcomposer.org/installer | php
```

and receiving this warning:

```
PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib/php/20151012/php_soap.dll' - /usr/lib/php/20151012/php_soap.dll: cannot open shared object file: No such file or directory in Unknown on line 0
All settings correct for using Composer
Downloading 1.2.0...

Composer successfully installed to: //composer.phar
Use it: php composer.phar
```

I installed soap using:

```
sudo apt-get install php-soap
```

There really is no such file in the `/usr/lib/php/20151012/` directory. The output of `php -i | grep -i soap`:

```
PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib/php/20151012/php_soap.dll' - /usr/lib/php/20151012/php_soap.dll: cannot open shared object file: No such file or directory in Unknown on line 0
/etc/php/7.0/cli/conf.d/20-soap.ini,
soap
Soap Client => enabled
Soap Server => enabled
soap.wsdl_cache => 1 => 1
soap.wsdl_cache_dir => /tmp => /tmp
soap.wsdl_cache_enabled => 1 => 1
soap.wsdl_cache_limit => 5 => 5
soap.wsdl_cache_ttl => 86400 => 86400
```

How do I resolve the problem with `php_soap.dll`?
First, try to search for where the mentioned .dll file is referenced with the following command:

```
grep -r "soap.dll" /etc/php/7.0/cli/
```

If you get any matches from the command, check for a `;` before them. If they don't have it, open the file(s) and comment the lines out, e.g.

```
nano /etc/php/7.0/cli/php.ini
```

Then press `ctrl` + `W` to invoke the search, paste the string you're searching for (in this case `soap.dll`), and you should reach the line containing that string. Put a `;` in front of it and save the file with `ctrl` + `X`, followed by `Y` and `enter`. Try running `php -i | grep -i soap` to see if you still get any errors.
R Shiny: Interactively modify application theme I am trying to find a way to interactively modify the application theme from a text input. Here is an example of my ui.R.

```
shinyUI(fluidPage(
  tabsetPanel(
    tabPanel("Main"),
    tabPanel("Settings",
             textInput("skin", "Select Skin", value = "bootstrap1.css")
    ),
    type = "pills", position = "above"
  ), theme = input$skin
))
```

I am getting the following error: "ERROR: object 'input' not found" As a final note, I have created a folder www within the app folder which does contain bootstrap1.css among other css files.
The `theme` option in `fluidPage` inserts a CSS stylesheet link with the following call:

```
tags$head(tags$link(rel = "stylesheet", type = "text/css", href = input$Skin))
```

You can just add this HTML as a reactive element in your ui:

```
library(shiny)
runApp(list(ui = fluidPage(
  tabsetPanel(
    tabPanel("Main"),
    tabPanel("Settings",
             textInput("Skin", "Select Skin", value = "bootstrap1.css")
    ),
    type = "pills", position = "above"
  ),
  uiOutput("myUI")
)
, server = function(input, output, session){
  output$myUI <- renderUI({
    tags$head(tags$link(rel = "stylesheet", type = "text/css", href = input$Skin))
  })
}
))
```
What are good heuristics for inlining functions? Considering that you're trying solely to optimize for speed, what are good heuristics for deciding whether to inline a function or not? Obviously code size should be important, but are there any other factors typically used when (say) gcc or icc is determining whether to inline a function call? Has there been any significant academic work in the area?
Wikipedia has [a](http://en.wikipedia.org/wiki/Inline_function#Problems_with_inline_functions) [few](http://en.wikipedia.org/wiki/Inline_expansion#Problems) paragraphs about this, with some links at the bottom: - In addition to memory size and cache issues, [another consideration is register pressure](http://en.wikipedia.org/wiki/Inline_expansion#Problems). From the compiler's point of view "the added variables from the inlined procedure may consume additional registers, and in an area where register pressure is already high this may force spilling, which causes additional RAM accesses." Languages with JIT compilers and runtime class loading have other tradeoffs since the virtual methods aren't known statically, yet the JIT can collect runtime profiling information, such as method call frequency: - [Design, Implementation, and Evaluation of Optimizations in a Just-in-Time Compiler](http://cseweb.ucsd.edu/classes/sp00/cse231/openjit.pdf) (for Java) talks about method inlining of static methods and dynamically loaded classes and its improvements on performance. - [Practicing JUDO: Java Under Dynamic Optimizations](http://pllab.cs.nthu.edu.tw/cs5403/Papers/JVM/p13-cierniak.pdf) claims that their "inlining policy is based on the code size and profiling information. If the execution frequency of a method entry is below a certain threshold, the method is then not inlined because it is regarded as a cold method. To avoid code explosion, we do not inline a method with a bytecode size of more than 25 bytes. . . . To avoid inlining along a deep call chain, inlining stops when the accumulated inlined bytecode size along the call chain exceeds 40 bytes." Although they have runtime profiling information (method call frequency) they are still careful to avoid inlining large functions or chains of functions to prevent bloat. 
[A search on Google Scholar](http://scholar.google.com/scholar?q=function%20inlining) reveals a number of papers, such as

- [The effect of code expanding optimizations on instruction cache design](http://eprints.kfupm.edu.sa/69697/1/69697.pdf)
- [Function Inlining under Code Size Constraints for Embedded Processors](http://www.cs.york.ac.uk/rts/docs/SIGDA-Compendium-1994-2004/papers/1999/iccad99/pdffiles/05b_1.pdf)

[A search on Google Books](http://books.google.com/books?q=function+inlining&btnG=Search+Books) reveals quite a number of books with papers or chapters about function inlining in various contexts.

- [The Compiler Design Handbook: Optimizations and Machine Code Generation](http://books.google.com/books?id=1kqAv-uDEPEC&pg=SA8-PA14&dq=function+inlining+heuristics&ei=D7ZdS6etEp_4lASZ8YnKAw&cd=3) has a chapter about Statistical and Machine Learning Techniques in Compiler Design, with heuristics to set various parameters, profiling the results. This chapter references the Vaswani et al paper [Microarchitecture Sensitive Empirical Models for Compiler Optimizations](http://research.microsoft.com/pubs/74546/Microarchitecture%20Sensitive%20Empirical%20Models%20for%20Compiler%20Optimizations.pdf) where they propose "the use of empirical modeling techniques for building microarchitecture sensitive models for compiler optimizations".
- (Some other books talk about inlining from the programmer's point of view, such as [C++ for Game Programmers](http://books.google.com/books?id=jvv2CVYXV1cC&pg=PA121&dq=function+inlining&ei=KLRdS5qwK5LMlAT3l6HrCQ&cd=6), which talks about the dangers of inlining functions too often and the differences between inlining and macros. Compilers often ignore the programmer's inline requests if they can determine that they would do more harm than good; this can be overridden with macros as a last resort.)
Dot product between two 3D tensors I have two 3D tensors, tensor `A` which has shape `[B,N,S]` and tensor `B` which also has shape `[B,N,S]`. What I want to get is a third tensor `C`, which I expect to have `[B,B,N]` shape, where the element `C[i,j,k] = np.dot(A[i,k,:], B[j,k,:])`. I also want to achieve this in a vectorized way. Some further info: The two tensors `A` and `B` have shape `[Batch_size, Num_vectors, Vector_size]`. The tensor `C` is supposed to represent the dot product between each element in the batch from `A` and each element in the batch from `B`, across all of the different vectors. Hope that it is clear enough and looking forward to your answers!
```
In [331]: A=np.random.rand(100,200,300)
In [332]: B=A
```

The suggested `einsum`, working directly from the

```
C[i,j,k] = np.dot(A[i,k,:], B[j,k,:])
```

expression:

```
In [333]: np.einsum( 'ikm, jkm-> ijk', A, B).shape
Out[333]: (100, 100, 200)
In [334]: timeit np.einsum( 'ikm, jkm-> ijk', A, B).shape
800 ms ± 25.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```

`matmul` does a `dot` on the last 2 dimensions, and treats the leading one(s) as batch. In your case 'k' is the batch dimension, and 'm' is the one that should obey the `last of A and 2nd to the last of B` rule. So rewriting the `ikm,jkm...` to fit, and transposing `A` and `B` accordingly:

```
In [335]: np.einsum('kim,kmj->kij', A.transpose(1,0,2), B.transpose(1,2,0)).shape
Out[335]: (200, 100, 100)
In [336]: timeit np.einsum('kim,kmj->kij',A.transpose(1,0,2), B.transpose(1,2,0)).shape
774 ms ± 22.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```

Not much difference in performance. But now use `matmul`:

```
In [337]: (A.transpose(1,0,2)@B.transpose(1,2,0)).transpose(1,2,0).shape
Out[337]: (100, 100, 200)
In [338]: timeit (A.transpose(1,0,2)@B.transpose(1,2,0)).transpose(1,2,0).shape
64.4 ms ± 1.17 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
```

and verify that the values match (though more often than not, if shapes match, values do too).

```
In [339]: np.allclose((A.transpose(1,0,2)@B.transpose(1,2,0)).transpose(1,2,0),np.einsum( 'ikm, jkm->
     ...: ijk', A, B))
Out[339]: True
```

I won't try to measure memory usage, but the time improvement suggests it too is better. In some cases `einsum` is optimized to use `matmul`. Here that doesn't seem to be the case, though we could play with its parameters. I'm a little surprised that `matmul` is doing so much better.

===

I vaguely recall another SO question about `matmul` taking a shortcut when the two arrays are the same thing, `A@A`. I used `B=A` in these tests.
``` In [350]: timeit (A.transpose(1,0,2)@B.transpose(1,2,0)).transpose(1,2,0).shape 60.6 ms ± 1.17 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [352]: B2=np.random.rand(100,200,300) In [353]: timeit (A.transpose(1,0,2)@B2.transpose(1,2,0)).transpose(1,2,0).shape 97.4 ms ± 164 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) ``` But that only made a modest difference. ``` In [356]: np.__version__ Out[356]: '1.16.4' ``` My BLAS etc is standard Linux, nothing special.
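As a self-contained sanity check (not part of the original timings; the shapes are shrunk so the brute-force loop stays fast), the following sketch confirms that both vectorized forms reproduce the loop definition of `C`:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((4, 5, 6))   # [B, N, S]
B = rng.random((4, 5, 6))

# Direct einsum form of C[i, j, k] = dot(A[i, k, :], B[j, k, :]).
C_einsum = np.einsum('ikm,jkm->ijk', A, B)

# Batched matmul over the 'k' axis, then move the batch axis to the back.
C_matmul = (A.transpose(1, 0, 2) @ B.transpose(1, 2, 0)).transpose(1, 2, 0)

# Brute-force reference with explicit dot products.
C_ref = np.empty((4, 4, 5))
for i in range(4):
    for j in range(4):
        for k in range(5):
            C_ref[i, j, k] = np.dot(A[i, k, :], B[j, k, :])

print(np.allclose(C_einsum, C_ref), np.allclose(C_matmul, C_ref))  # True True
```

Once the small case checks out, the timings above apply unchanged to the full `(100, 200, 300)` shapes.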
core audio guidance / primer I've been doing some reading up on Core Audio for iOS 4 with the aim of building a little test app. I'm pretty confused at this point in research with all the APIs. Ideally what I want to know is how to extract a number of samples from two mp3s into arrays. Then in a callback loop I want to mix these samples together and send them to the speaker. There are examples on the Apple dev site but I'm finding them difficult to dissect and digest. Is anybody aware of a nice stripped down example somewhere? Also I can't determine which APIs to use. There are ExtendedAudioFile and AudioFile. These seem to be the ones for extracting audio. Which one should I use? Is it absolutely necessary to use the mixer unit, or would I be as well off writing my own mixing code (I want as much sample control as possible)? Do I need to use Audio Queue Services? I've heard that they provide poor latency, is this true? Finally, do I have to use an audio session service? Would an audio app work without it? How would the audio session fit into the whole audio extraction and callback? Is it purely just to handle interruptions?
The documentation on Core Audio has improved very much over the past years, but it's still incomplete, sometimes confusing and sometimes just wrong. And I find the structure of the framework itself quite confusing (AudioToolbox, AudioUnit, CoreAudio, ... what is what?). But my suggestions to tackle your task are these (Warning: I haven't done the following in iOS, only MacOS, but I think it's roughly the same):

1. Use ExtendedAudioFile (declared in the AudioToolbox framework) to read the mp3s. It does just what the name suggests, it extends the capabilities of AudioFile. I.e. you can assign an audio stream format (AudioStreamBasicDescription) to an eaf, and when you read from it, it will convert into that format for you (for further processing with audio units you use the format ID 'kAudioFormatLinearPCM' and format flags 'kAudioFormatFlagsAudioUnitCanonical').
2. Then, you use ExtAudioFile's 'ExtAudioFileRead' to read the converted audio into an AudioBufferList struct, which is a collection of AudioBuffer structs (both declared in the CoreAudio framework), one for each channel (so usually two). Check out the 'Core Audio Data Types Reference' in the Audio section of the Docs for things like AudioStreamBasicDescription, AudioBufferList and AudioBuffer.
3. Now, use audio units to play back and mix the files, it's not that hard. Audio units seem like this 'big thing' but they really aren't. Look into 'AudioUnitProperties.h' and 'AUComponent.h' (in the AudioUnit framework) for descriptions of available audio units. Check out 'Audio Unit Hosting Guide for iOS' in the docs. The only problem here is that there is no audio file player unit for iOS... If I remember correctly, you have to feed your audio units with samples manually.
4. Audio units live in an AUGraph (declared in the AudioToolbox framework) and are interconnected like audio hardware through a patchbay. The graph also handles the audio output for you.
You can check out the 'PlaySoftMIDI' and 'MixerHost' example code regarding this (actually, I just had a look into MixerHost again and I think it's just what you want to do!). A rule of thumb: Look into the header files! They yield more complete and precise information than the docs, at least that was my impression. It can help a lot to look at the headers of the above mentioned frameworks and try to get familiar with them. Also, there will be a book about Core Audio ('Core Audio' by Kevin Avila and Chris Adamson), but it's not yet released. Hope all this helps a little! Good luck, Sebastian
Angular 2+ pass directives to a custom component I created a custom component which has its own `@Input()`, `@Output()` and so on. This component has an `<input />` field where a user can enter some value. E.g: `<my-component ...></my-component>` I reference it in my html and it works flawlessly. I also created several directives which validate form input data via simple regexps. I can use them on a plain input inside a form like: `<input type="text" validator1 validator2 validator3 />` Is there a way to pass one or more of these directives (but also none of them) to my custom component without hardcoding them in the source of the component? Some kind of `...params` to evaluate? Thanks in advance for all your help Valerio
The pattern you're looking for is definitely possible, but not achievable with directives in the sense you're trying to. This is due to the fact that Angular is *compiled*, meaning you cannot **not** "hard-code" a directive (at least not without doing weird stuff that's not recommended in production). Your component can accept an input named `validators`, which should be an array of functions (or instances of a class, if you need it), and then use that to validate. For example, you can have the following three super-simple validators:

```
export const required = value => value != null && value != ''
export const minLength3 = value => value == null || value.length > 3
export const maxLength9 = value => value == null || value.length < 9
```

Your `my-component` accepts an array of these validators. For simplicity's sake, a validator is actually a predicate of a string. In other words, it is a function with the same signature as the three functions above: `(value: string) => boolean`. We initialize this input as an empty array, effectively making this the default value in case nothing is passed down to it.

```
@Input() validators: ((value: string) => boolean)[] = []
```

In the consumer component's template (the component using `my-component`), we now use the component by passing down an array of validators to it.

```
<my-component [validators]="[required, maxLength9]"></my-component>
```

Of course, to use them, we have to either DI them or simply instantiate them as members of the component class. To use it with DI, validators would have to be classes (at least as far as versions 5.x.x and below go).

```
import {required, maxLength9} from '../validators'

export class ConsumerComponent {
  public required = required
  public maxLength9 = maxLength9
}
```

The `my-component` component should, of course, make use of these validators. For example, the following function can be run on each `change` or `input` or `blur` event, depending on when you want to run the validators.
```
public validate(value: string): boolean {
  let valid: boolean = true
  this.validators.forEach(validator => {
    const result = validator(value)
    valid = valid && result
  })
  return valid
}
```

You now have better dynamic control of which validators you want to run on the field. You can also change these dynamically during the run-time of the application, of course. This comes at the following cost: no tree-shaking of unused validators. The Angular compiler can no longer determine which validators you are using, which means that all of them have to be imported in the final bundle of your app, even though some of them might never be used.

---

You might be interested in **reactive forms in Angular**. You can read about [reactive forms in official documentation](https://angular.io/guide/reactive-forms), or take a look at [Todd Motto's article on reactive forms in Angular](https://toddmotto.com/angular-2-forms-reactive), or [Reactive Forms in Angular by Pascal Precht on thoughtram](https://blog.thoughtram.io/angular/2016/06/22/model-driven-forms-in-angular-2.html).
identify the point of intersection from two distributions Dear StackExchange community, I have a problem of identifying where two distributions $F$ and $G$ intersect (cross each other). In particular, I have an empirical estimation of $F$ and $G$ from the data I have, and I'm looking for a point above which $G$ is likely true over $F$. In other words, $F$ is a reference distribution and $G$ is the target. Therefore, we would like to know if an item with value $x$ is positive (which means that $x$ has a higher probability under $G$ than under $F$). The problem is that the empirical estimations of $F$ and $G$ result in multi-modal distributions, and hence it has been a difficult task for me to obtain one most plausible intersection point. Please also see the attached image for an example (a simple scenario). Note that the empirical estimation was obtained using kernel density estimation (`density()` of R).

[![enter image description here](https://i.stack.imgur.com/ysPlB.png)](https://i.stack.imgur.com/ysPlB.png)

Please let me know if there is a method I can try to obtain the intersection point. Thanks in advance.
A tiny bit of statistics is needed here, if only to point out the need to control the bandwidth and study the sensitivity of the solutions to the bandwidth. Provided a solution is bracketed closely by data points, it will tend to be stable even when the bandwidth is varied substantially. Here is an example involving datasets with 23 points (black density) and 14 points (light blue).

[![Figure](https://i.stack.imgur.com/5WIer.png)](https://i.stack.imgur.com/5WIer.png)

Red vertical lines mark the solutions. The data are shown as rug plots at the bottom. The default bandwidth for these data will be around $1/2,$ as shown in the middle panel. You can see from this example how one solution (the right hand one in the right panel) persists across all bandwidths. Another solution (the left hand one in the right panel) varies appreciably because data are scarce in its neighborhood. Spurious solutions pop up when using a relatively small bandwidth (left panel). These examples were created by this `R` code.

```
set.seed(17)
x <- rnorm(23)
y <- rnorm(14, 2, 3/2)

bw <- 0.25 # or 0.5, or 1.5, or even "SJ", etc: see the help page for `density`
obj <- intersect(x, y, kernel = "gaussian", n = 512, bw = bw, from = -4, to = 8)
```

All kernel densities produce a discrete grid of density estimates. The solution implemented by `intersect` allows you to exploit the default methods of finding endpoints, bandwidths, *etc* by first computing a density for the combined data. Those defaults are then used to recompute the densities for the data separately. Because both densities are computed on the same grid, it's a simple matter to locate the places where they cross and interpolate linearly on the grid. Linear interpolation is more than precise enough, because it errs less than the mesh of the grid, which presumably is already small enough for your purposes.

```
#
# Find all points where density $g$ exceeds density $f.$
#
intersect <- function(x, y, bw = "nrd0", from, to, ...)
{
  #
  # Compute a density for all points combined.
  #
  largs <- list(x = c(x,y), bw = bw)
  if (!missing(from)) largs <- c(largs, from = from)
  if (!missing(to)) largs <- c(largs, to = to)
  largs <- c(largs, list(...))
  obj <- do.call(density, largs) # Compute a common density
  #
  # Compute densities for the datasets separately.
  #
  x.0 <- obj$x
  f.x <- density(x, bw = obj$bw, from = min(x.0), to = max(x.0), ...)
  f.y <- density(y, bw = obj$bw, from = min(x.0), to = max(x.0), ...)
  #
  # Find the crossings.
  #
  d <- zapsmall(f.y$y - f.x$y)
  abscissae <- sapply(which(d[-1] * d[-length(d)] < 0), function(i) {
    w <- d[i+1] - d[i]
    if (w != 0) (d[i+1] * x.0[i] - d[i] * x.0[i+1]) / w else (x.0[i] + x.0[i+1]) / 2
  })
  list(Points = abscissae, xlim = range(x.0), f = f.x, g = f.y)
}
```
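For readers working in Python rather than `R`, the grid-and-interpolate idea can be sketched as follows. This is a rough analogue, not a translation: it hand-rolls a Gaussian KDE with a fixed, hand-picked bandwidth instead of reusing a common default bandwidth, so the exact crossings will differ from the figure above.

```python
import numpy as np

def kde(data, grid, bw):
    # Plain Gaussian kernel density estimate evaluated on a fixed grid.
    z = (grid[:, None] - data[None, :]) / bw
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(data) * bw * np.sqrt(2 * np.pi))

rng = np.random.default_rng(17)
x = rng.normal(0.0, 1.0, 23)   # reference sample (F)
y = rng.normal(2.0, 1.5, 14)   # target sample (G)

# Evaluate both densities on a shared grid, as the R code does.
grid = np.linspace(-4, 8, 512)
bw = 0.5                       # bandwidth fixed by hand; vary it to study sensitivity
f = kde(x, grid, bw)
g = kde(y, grid, bw)

# Sign changes of g - f bracket the crossings; interpolate linearly in each cell.
d = g - f
i = np.nonzero(d[:-1] * d[1:] < 0)[0]
crossings = grid[i] - d[i] * (grid[i + 1] - grid[i]) / (d[i + 1] - d[i])
print(crossings)
```

As in the `R` version, rerunning with several bandwidths and keeping only the crossings that persist is the sensible way to use this.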
Why can't declaration-only friend functions have default arguments? I've learned that the C++11 standard doesn't allow friend functions to have default arguments unless the friend declaration is a definition. So this isn't allowed:

```
class bar {
  friend int foo(int seed = 0);
};

inline int foo(int seed) {
  return seed;
}
```

but this is:

```
class bar {
  friend int foo(int seed = 0) {
    return seed;
  }
};
```

(Example courtesy <http://clang-developers.42468.n3.nabble.com/Clang-compile-error-td4033809.html>)

What is the rationale behind this decision? Friend functions with default arguments are useful, e.g. if the function is too complex to declare in place, so why are they now disallowed?
In looking at [DR 136](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2000/n1263.html), it looks like there are issues when a friend declaration combines with namespace-level declarations with default arguments that make the semantics hard to reason about (and perhaps difficult to issue quality diagnostics against), especially in the context of templates. The proposed DR resolution given on that page is to only allow default arguments in them when the declaration is the only one in the program. Since a function definition is also a declaration, that would mean the only useful way to specify default arguments in a friend declaration is to make it a definition. I would guess the C++11 standard simply chose to make this practical usage requirement explicit. (Technically, if by "program" they mean "translation unit", one could construct a complete program where the function were defined in a completely different translation unit, but since this function's definition would not have the class definition visible, the benefits of the friendship grant would be largely useless.) The workaround for this hiccup seems pretty straightforward. Declare the friend without using default arguments, and then declare it again at namespace scope with whatever default arguments are desired.
Disable or Uninstall Malicious Software Removal Tool on Windows Server KB890830 Automatic Maintenance We have a script that declines [KB890830 updates](https://www.microsoft.com/en-us/download/malicious-software-removal-tool-details.aspx) for our on-premise Windows Update Server, but we recently found someone approved one of the monthly updates before the script could run and the Malicious Software Removal Tool (MRT) was installed on all of our servers. We've had issues with MRT in the past and want to remove it, but now the script has declined the update and we cannot find anything under the `View installed updates` section to remove it. We also tried running `wusa.exe /uninstall /KB:890830` but it returned the error: > > The update KB890830 is not installed on this computer. > > > According to the `C:\Windows\debug\mrt.log`, `C:\Windows\System32\MRT.exe` is being run daily during the "Automatic Maintenance" window defined in the Action Center section of the control panel. So it is definitely installed and being run daily. I tried using [SysInternals AutoRuns](https://technet.microsoft.com/en-us/sysinternals/bb963902.aspx) and looking at the Scheduled Tasks but was not able to find where it was being started. How can we disable or uninstall the Malicious Software Removal Tool on our Windows Servers to prevent it from running?
Turns out the Automatic Maintenance tasks are managed by `C:\Windows\System32\MSchedExe.exe` and the Scheduled Tasks under the `\Microsoft\Windows\TaskScheduler` folder. It then will run other tasks that are defined but don't have a specified trigger, one being the `MRT_HB` task under `\Microsoft\Windows\RemovalTools\`. [![Malicious Software Removal Tool Scheduled Task](https://i.stack.imgur.com/ju5CC.png)](https://i.stack.imgur.com/ju5CC.png) Here you can see it calling MRT.exe to run the scan, and the last run time matches the information from the Action Center: [![Action Center Last Run Date](https://i.stack.imgur.com/ojogc.png)](https://i.stack.imgur.com/ojogc.png) If you disable this Scheduled Task it should prevent the Malicious Software Removal Tool from running. You also can delete the task and the MRT.exe program using the following in an elevated PowerShell prompt: ``` Unregister-ScheduledTask -TaskName 'MRT_HB' -TaskPath '\Microsoft\Windows\RemovalTools\' -Confirm:$false Remove-Item 'C:\Windows\System32\MRT.exe' -Force ``` Note, however, that if you haven't disabled the KB890830 update in WSUS or via [the registry](https://superuser.com/a/895554/26374) it likely will be reinstalled, as MRT gets updated every patch Tuesday.
Generics: Why does the implemented collection return an Object instead of the specified type? I am trying to implement an OrderedMapEntry list with custom MapEntries - as I need a custom solution with Vectors I cannot use a TreeMap (<http://docs.oracle.com/javase/7/docs/api/java/util/TreeMap.html>). I implemented a custom list without any errors, but when I use the `OrderedMapEntries` class in an enhanced for loop it returns an `Object`.

- How can I ensure type safety when using an enhanced for loop? What is my implementation doing wrong in ensuring this type safety?

```
public class OrderedMapEntries<K, V> implements Iterator, Iterable {

    private Vector<MapEntry<K, Vector<V>>> vector;
    private int vectorIndex; // initializes with -1

    class MapEntry<A, B extends AbstractList<V>> implements Iterator, Iterable {
        // MapEntry implementation
    }

    public void insert(int index, K key, Vector<V> vec) {
        MapEntry<K, Vector<V>> mapEntry = new MapEntry<>(key, vec);
        vector.add(index, mapEntry);
    }

    @Override
    public MapEntry<K, Vector<V>> next() {
        vectorIndex++;
        return vector.get(vectorIndex);
    }
}
```

I tried to iterate over the collection with an enhanced for loop, but it fails as next() returns an Object and not my specified element.

```
OrderedMapEntries<Integer, String> ome = new OrderedMapEntries<>();
// I filled it with some test data
for (OrderedMapEntries<Integer, String>.MapEntry<Integer, Vector<String>> entry : ome) {
    ;
}
```
You are implementing `Iterable`, a raw type, not `Iterable<K>` or `Iterable<V>`. If you omit type parameter like this, then the signature of [iterator](http://docs.oracle.com/javase/7/docs/api/java/lang/Iterable.html#iterator()) becomes - ``` Iterator iterator(); ``` which returns an `Iterator`, not `Iterator<K>` or `Iterator<V>`, whose [next](http://docs.oracle.com/javase/7/docs/api/java/util/Iterator.html#next%28%29) method will have a signature like below - ``` Object next(); ``` This is the reason why you are getting `Object` in the enhanced for loop, as it internally calls the `Iterator`'s `next` to get the next element. You can almost never ensure proper type safety if you use raw types like this. For more information, please check out [Effective Java](https://rads.stackoverflow.com/amzn/click/com/0321356683), Item 23 - *Don't use raw types in new code*. Also, your `OrderedMapEntries` class should only implement `Iterable<E>` (please check out how [ArrayList](http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/util/ArrayList.java#ArrayList.iterator%28%29) does this). Implement the `iterator` method in it such that it returns an appropriate `Iterator<E>` to the enhanced for loop.
Computing cumulative values for each year of a dataframe separately I have the following pandas dataframe with a datetime index:

```
datetime         VAL
2000-01-01  -283.0000
2000-01-02  -283.0000
2000-01-03   -10.6710
2000-01-04   -12.2700
2000-01-05   -10.7855
2001-01-06    -9.1480
2001-01-07    -9.5300
2001-01-08   -10.4675
2001-01-09   -10.9205
2001-01-10   -11.5715
```

I would like to compute cumulative values for each year and replace the VAL column with the cumulative values. E.g., it will look something like this:

```
datetime         VAL
2000-01-01  -283.0000
2000-01-02  -283.0000 + -283.0000
2000-01-03   -10.6710 + -283.0000 + -283.0000
2000-01-04   -12.2700 + -10.6710 + -283.0000 + -283.0000
2000-01-05   -10.7855 + -12.2700 + -10.6710 + -283.0000 + -283.0000
2001-01-06    -9.1480
2001-01-07    -9.5300 + -9.1480
2001-01-08   -10.4675 + -9.5300 + -9.1480
2001-01-09   -10.9205 + -10.4675 + -9.5300 + -9.1480
2001-01-10   -11.5715 + -10.9205 + -10.4675 + -9.5300 + -9.1480
```

I haven't done the actual calculations, which is why you see -283.0000 + -283.0000 instead of -566.0000. Not sure how to proceed with this; I could do a groupby and then what?
You can access the year via `.year` on a DateTimeIndex, and pass that to `groupby`: ``` >>> df["cumulative_VAL"] = df.groupby(df.index.year)["VAL"].cumsum() >>> df VAL cumulative_VAL datetime 2000-01-01 -283.0000 -283.0000 2000-01-02 -283.0000 -566.0000 2000-01-03 -10.6710 -576.6710 2000-01-04 -12.2700 -588.9410 2000-01-05 -10.7855 -599.7265 2001-01-06 -9.1480 -9.1480 2001-01-07 -9.5300 -18.6780 2001-01-08 -10.4675 -29.1455 2001-01-09 -10.9205 -40.0660 2001-01-10 -11.5715 -51.6375 ```
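For completeness, here is a runnable end-to-end sketch that rebuilds the question's frame from scratch and overwrites `VAL` in place (the question asks to replace the column rather than add a new one):

```python
import pandas as pd

df = pd.DataFrame(
    {"VAL": [-283.0, -283.0, -10.671, -12.27, -10.7855,
             -9.148, -9.53, -10.4675, -10.9205, -11.5715]},
    index=pd.to_datetime(
        ["2000-01-01", "2000-01-02", "2000-01-03", "2000-01-04", "2000-01-05",
         "2001-01-06", "2001-01-07", "2001-01-08", "2001-01-09", "2001-01-10"]),
)
df.index.name = "datetime"

# Group by calendar year so the cumulative sum restarts each January.
df["VAL"] = df.groupby(df.index.year)["VAL"].cumsum()
print(df)
```

The 2000 rows run down to -599.7265 and the 2001 rows restart at -9.148, matching the table in the answer.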
Regex for no duplicate characters from a limited character pool Is there a way to write a regex to match a string that only contains certain characters, and never repeats those characters? I already wrote some code using a set to implement this, but would like to know if there's a regex way to do it. So for example, if I only wanted a string that contains [A,B,C], I want to match strings that never duplicate any of those characters, e.g. A, B, C, AB, AC, B, BC, ABC, and so on, but never match AA, BB, CC, etc. Thanks!
That's easy to do with a [negative lookahead assertion](http://www.regular-expressions.info/lookaround.html): ``` ^(?!.*(.).*\1)[ABC]+$ ``` matches exactly as you described. Test it [live on regex101.com](http://regex101.com/r/fN3eR1/2). **Explanation:** ``` ^ # Start of the string (?! # Assert that it's impossible to match... .* # Any number of characters (including zero) (.) # followed by one character (remember this one in group 1) .* # that's followed by any number of characters \1 # and the same character as before ) # End of lookahead [ABC]+ # Match one or more characters from this list $ # until the end of the string ```
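The pattern ports unchanged to Python's `re` module, which also supports lookaheads and backreferences; a quick sketch to sanity-check it outside regex101:

```python
import re

# Negative lookahead rejects any string containing a repeated character;
# the character class restricts the pool to A, B, C.
pattern = re.compile(r'^(?!.*(.).*\1)[ABC]+$')

accepted = ["A", "B", "C", "AB", "AC", "BC", "ABC", "CBA"]
rejected = ["", "AA", "BB", "ABB", "ABCA", "AD"]

print([s for s in accepted if pattern.match(s)])  # all of them
print([s for s in rejected if pattern.match(s)])  # []
```

Note that the empty string is rejected because of the `+` quantifier; change it to `*` if an empty match should be allowed.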
Adding a dynamic servlet using servlet 3.0 throws exception I need to add servlets at runtime. When I run the following code:

```
protected void processRequest(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    response.setContentType("text/html;charset=UTF-8");
    PrintWriter out = response.getWriter();
    try {
        out.println("<html>");
        out.println("<head>");
        out.println("<title> URI out</title>");
        out.println("</head>");
        out.println("<body>");
        Integer generatedKey = Math.abs(randomiser.nextInt());
        out.print(generatedKey);
        createServlet(Integer.toString(generatedKey), request.getServletContext());
    } finally {
        out.println("</body>");
        out.println("</html>");
        out.close();
    }
}

private void createServlet(String generatedKey, ServletContext servletContext) {
    String servletMapping = "/" + generatedKey;
    ServletRegistration sr = servletContext.addServlet(generatedKey, "com.path.lbs.servlets.testDynamic");
    sr.setInitParameter("keyname", generatedKey);
    sr.addMapping(servletMapping);
}
```

I get the following error:

> java.lang.IllegalStateException: PWC1422: Unable to configure mapping for servlet 1114600676 of servlet context /123-LBS, because this servlet context has already been initialized

Is it impossible to add new servlets at runtime, i.e. after the ServletContext is initialised, or am I doing something wrong?
> > *Is it impossible to add new servlets at runtime i.e. after the Servlet Context is initialised?* > > > That's correct. You need to do it in [`ServletContextListener#contextInitialized()`](http://download.oracle.com/javaee/6/api/javax/servlet/ServletContextListener.html#contextInitialized%28javax.servlet.ServletContextEvent%29). ``` @WebListener public class Config implements ServletContextListener { @Override public void contextInitialized(ServletContextEvent event) { // Do it here. } @Override public void contextDestroyed(ServletContextEvent event) { // ... } } ``` However, for your particular functional requirement, a single controller servlet in combination with command pattern is much better suited. You could then add commands (actions) during runtime and intercept on it based on the request URI. See also [my answer on *Design Patterns web based applications*](https://stackoverflow.com/questions/3541077/design-patterns-web-based-applications/3542297#3542297) for a kickoff.
Send file to gearman worker I'm currently separating our video conversion part of the web page (kinda like youtube where users upload videos and we convert them to flv/mp4) to a different server. I already have the system running with [gearman](http://gearman.org/) on the same machine. So when a user uploads a video file to server A in gets picked by a gearman worker on the same server A. Now I moved the worker to server B. So worker on server B needs to access the uploaded file on server A. Currently I use SCP to copy the file from A to B and then process it. This method works but I feel like there should be a more clean way of doing it but I haven't found any information about sending files (or large files) to gearman workers. How would you approach this problem? Preferably the client would send the video file as part of the command to start a background job, so I don't have to worry where the file actually is from within the worker. That way I can add more conversion servers without to much hassle. I'm using PHP (with Gearman [extension](http://pecl.php.net/package/gearman)) for both my webpage and the worker.
As was suggested in the comments, having a shared FS is the (usual) way to implement this, and simply pass the path around in the job request from gearman. Gearman is not well-suited for passing around large blobs of data, as it has to keep all of the information for a job in memory. It was never designed for handling the transfer and distribution of large files. Since MogileFS was also initially developed at Danga, there simply was no need to also incorporate file transfer and handling in Gearman (and that's a good thing, there's quite a few technologies that solve that problem better than Gearman would ever do). We're using NFS for handling distributed workers when videos arrive, and the encoder puts the encoded video back onto the NFS share that's available to the public when it's done. Haven't had a serious issue yet, NFS is stable and it's problems are well known and already solved for the kind of loads you'll see.
How do I find objects with a property inside another object in JavaScript I have an object with all my users, like so: `var users = {user1:{}, user2:{}}`, and every user has an `isPlaying` property. How do I get all users that have `isPlaying` false?
You should use `Object.keys`, `Array.prototype.filter` and `Array.prototype.map`:

```
// This will turn users object properties into a string array
// of user names
var userNames = Object.keys(users);

// #1 You need to filter out the users who aren't playing. So, you
// filter by accessing the users object by user name and checking that
// user.isPlaying is false
//
// #2 Using Array.prototype.map, you turn user names into user objects
// by projecting each user name into the user object!
var usersNotPlaying = userNames.filter(function(userName) {
    return !users[userName].isPlaying;
}).map(function(userName) {
    return users[userName];
});
```

With ECMAScript 6, you could do the same using arrow functions:

```
// Compact and nicer!
var usersNotPlaying = Object.keys(users)
    .filter(userName => !users[userName].isPlaying)
    .map(userName => users[userName]);
```

## Using `Array.prototype.reduce`

As @RobG has pointed out, you can also use `Array.prototype.reduce`. While I don't want to overlap with his own answer, I believe the `reduce` approach is more practical if it returns an array of user objects *not playing*. Basically, if you return an object instead of an array, the issue is that another caller (i.e. a function which calls the one doing the `reduce`) may need to call `reduce` again to perform a new operation, while an array is already prepared to fluently call other `Array.prototype` functions like `map`, `filter`, `forEach`... The code would look this way:

```
// #1 We turn user properties into an array of property names
// #2 Then we call "reduce" on the user property name array. Reduce
// takes a callback that will be called for every array item and it receives
// the array reference given as second parameter of "reduce" after
// the callback.
// #3 If the user is not playing, we add the user object to the resulting array
// #4 Finally, "reduce" returns the array that was passed as second argument
// and contains the user objects not playing ;)
var usersNotPlaying = Object.keys(users).reduce(function (result, userName) {
    if (!users[userName].isPlaying)
        result.push(users[userName]);

    return result;
}, []); // <-- [] is the new array which will accumulate each user not playing
```

Clearly, using `Array.prototype.reduce` concentrates both `map` and `filter` in a single loop, and on a large array reducing should outperform the "filter+map" approach, because looping over a large array twice (once to filter out the users who are playing, and again to map the remaining names to objects) can be heavy...

Summary: I would still use *filter+map* over *reduce* when we are talking about few items, because sometimes readability/productivity is more important than optimization, and in our case the *filter+map* approach requires less explanation (self-documented code!) than *reduce*. Anyway, readability/productivity is subjective to whoever does the actual coding...
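To make the comparison concrete, here is a small self-contained run (Node, with made-up user data; `alice`, `bob` and `carol` are just placeholder names) showing that the filter+map and reduce approaches produce the same result:

```javascript
var users = {
  alice: { isPlaying: false },
  bob:   { isPlaying: true },
  carol: { isPlaying: false }
};

// filter + map: two passes over the key array
var viaFilterMap = Object.keys(users)
  .filter(function (name) { return !users[name].isPlaying; })
  .map(function (name) { return users[name]; });

// reduce: one pass that both filters and collects
var viaReduce = Object.keys(users).reduce(function (result, name) {
  if (!users[name].isPlaying) result.push(users[name]);
  return result;
}, []);

console.log(viaFilterMap.length);                 // 2
console.log(viaFilterMap[0] === users.alice &&
            viaFilterMap[1] === users.carol);     // true
console.log(JSON.stringify(viaFilterMap) ===
            JSON.stringify(viaReduce));           // true
```

Both arrays hold references to the very same user objects, so mutating a user through either result mutates `users` as well.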
JsonIgnoreProperties not working with spring boot Currently, the Spring Boot sample application is created normally. In the request, if there are any unknown fields coming, then we need to throw an error. For this the `@JsonIgnoreProperties(ignoreUnknown = false)` annotation is being used. However, when I am accessing the URL, it is not working. Please find code snippet as follows:

```
@RestController
@RequestMapping(value = "/")
@JsonIgnoreProperties(ignoreUnknown = false)
public class UserController {

    private final Logger LOG = LoggerFactory.getLogger(getClass());

    private final UserRepository userRepository;

    private final UserDAL userDAL;

    public UserController(UserRepository userRepository, UserDAL userDAL){
        this.userRepository = userRepository;
        this.userDAL = userDAL;
    }

    @RequestMapping(
        value = "/create",
        method = RequestMethod.POST,
        consumes = MediaType.APPLICATION_JSON_VALUE,
        produces = MediaType.APPLICATION_JSON_VALUE
    )
    public User addNewUsers(@RequestBody @Valid User user) throws JsonProcessingException {
        LOG.info("Saving user.");
        CardInfo cardInfo = new CardInfo();
        cardInfo.setCardId("12345678901");
        user.setCardInfo(cardInfo);
        ObjectMapper mapper = new ObjectMapper();
        String jsonString = mapper.writeValueAsString(cardInfo);
        user.setCardInfo1(jsonString);
        userDAL.getAllUsers();
        return userRepository.save(user);
    }
```

Please find sample Pom as follows:

```
<modelVersion>4.0.0</modelVersion>

<groupId>com.journaldev.spring</groupId>
<artifactId>spring-boot-mongodb</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>jar</packaging>

<name>spring-boot-mongodb</name>
<description>Spring Boot MongoDB Example</description>

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.5.9.RELEASE</version>
    <relativePath /> <!-- lookup parent from repository -->
</parent>

<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding> <java.version>1.8</java.version> </properties> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-mongodb</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins> </build> ```
You have at least 3 options:

1. Put `@JsonIgnoreProperties` on the class you deserialize, not on the Spring controller.
	- However, I see that the class you want to deserialize is `com.journaldev.bootifulmongodb.model.User`, so most probably you can't modify it.
2. Configure your [ObjectMapper](https://fasterxml.github.io/jackson-databind/javadoc/2.9/com/fasterxml/jackson/databind/ObjectMapper.html) instance:

```
mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, true);
```

3. [Customize Spring Boot's Jackson Object Mapper](https://docs.spring.io/spring-boot/docs/current/reference/html/howto-spring-mvc.html#howto-customize-the-jackson-objectmapper) by setting the corresponding environment property:

```
spring.jackson.deserialization.fail-on-unknown-properties=true
```

For further information, please refer to [section 76.3 of Spring Boot's reference](https://docs.spring.io/spring-boot/docs/current/reference/html/howto-spring-mvc.html).
C# method name expected I just trying to pass some values but it's throwing an error all the time. Can some one correct me what I am missing here? Am getting error here ``` Thread t_PerthOut = new Thread(new ThreadStart(ReadCentralOutQueue("test")); ``` I want to pass this string value to `ReadCentralOutQueue`. ``` class Program { public void Main(string[] args) { Thread t_PerthOut = new Thread(new ThreadStart(ReadCentralOutQueue("test")); t_PerthOut.Start(); } public void ReadCentralOutQueue(string strQueueName) { System.Messaging.MessageQueue mq; System.Messaging.Message mes; string m; while (true) { try { } else { Console.WriteLine("Waiting for " + strQueueName + " Queue....."); } } } catch { m = "Exception Occured."; Console.WriteLine(m); } finally { //Console.ReadLine(); } } } } ```
This code: ``` Thread t_PerthOut = new Thread(new ThreadStart(ReadCentralOutQueue("test")); ``` tries to call `ReadCentralOutQueue` and *then* create a delegate from the result. That isn't going to work, because it's a void method. Normally you'd use a *method group* to create a delegate, or an *anonymous function* such as a lambda expression. In this case a lambda expression will be easiest: ``` Thread t_PerthOut = new Thread(() => ReadCentralOutQueue("test")); ``` You can't just use `new Thread(ReadCentralOutQueue)` as the `ReadCentralOutQueue` doesn't match the signature for either `ThreadStart` or `ParameterizedThreadStart`. It's important that you understand *why* you're getting this error, as well as how to fix it. EDIT: Just to prove it *does* work, here's a short but complete program: ``` using System; using System.Threading; class Program { public static void Main(string[] args) { Thread thread = new Thread(() => ReadCentralOutQueue("test")); thread.Start(); thread.Join(); } public static void ReadCentralOutQueue(string queueName) { Console.WriteLine("I would read queue {0} here", queueName); } } ```
Finding roughly matching genome sequences in Python dictionary The purpose of my code here is to play a part in genome sequencing analysis, and while functional it takes days to run, so I am looking for any way I can improve speed. The input is up to 500 million lines long (making speed code efficiency important) and contains sequencing reads and corresponding info. Each read takes up 4 lines within the input file and looks something like this: ``` @A001 <-header AAAAACCCCCCCCCCCC <-seq read (finalRead) + ################# <-quality (trimmed_quality) ``` The portion of my code that is very slow takes a dictionary as input, which contains all of the data found within the input sequencing file and is in the form shown below: ``` duplexDict[umi] = {'header':header, 'seq':finalRead, 'qual':trimmed_quality} ``` In the first part of the code I am looking for pairs of sequences by checking for similar keys (termed umi in the code). The goal is to find keys that when converted to complement sequence are only different by a single letter. Then for each key if there is only one closely matching key, the associated dictionaries are retained. If there are no matches or more than one matching key, all of these keys should be ignored. 
``` from Levenshtein import distance deDuplexDict = {} # dict that will contain key pairs finalList = [] # list to keep track of valid key pairs for i in duplexDict: # dict with sequencing file info tempList = [] for j in duplexDict: complement = str(Seq(j).complement()) # this is just finding complementary sequence if distance(i,complement) <= 1: # find similar umi/read seq pairs tempList.append(j) # make a list of all similar pairs # only keep a complementary pair if there are exactly two matching consensus reads if len(tempList) == 1: if i not in finalList and j not in finalList: finalList.append(i) finalList.append(j) # only retain those dict values that are true pairs for key in finalList: deDuplexDict[key] = duplexDict[key] ``` The second piece is designed to now collapse combine the sequences of two matching dictionary keys together and output to file. This is done by taking the complement of one of the sequences and then comparing each character position along the sequence strings. If anything doesn't match the character in a final string is just set to 'N' rather than the character found in the reads. 
``` from itertools import combinations prevScanned=[] plus = '+' # only pairs now exist, just search for them for key1, key2 in combinations(deDuplexDict, 2): finalRead = '' complement = str(Seq(key2).complement()) # complement of second read sequence # if neither key has been analysed and they are a matching pair then use for consensus read if distance(key1, complement) <= 1 and key1 not in prevScanned and key2 not in prevScanned: prevScanned.extend([key1,key2]) # keep track of analyzed keys # convert to complementary matches refRead = deDuplexDict[key1]['seq'] compRead = str(Seq(deDuplexDict[key2]['seq']).complement()) # iterate through by locus and derive consensus for base in range(readLength): if refRead[base] == compRead[base]: finalRead += refRead[base] else: finalRead += 'N' # only perfect matches are permitted # output consensus and associated info target = open(final_output_file, 'a') target.write(deDuplexDict[umi]['header'] + '\n' + finalRead + '\n' + plus + '\n' + deDuplexDict[umi]['qual'] + '\n') target.close() ```
## You need a faster search method

You are comparing every entry in `duplexDict` directly with every entry in `duplexDict`. This means the number of operations will increase with the square of the number of entries in `duplexDict`. This stands out from the lines:

```
for i in duplexDict:
    ...
    for j in duplexDict:
```

More formally, your algorithm runs in \$\mathcal{O}(n^2)\$, where n is the length of the input dictionary. So, for 500 million (5e8) reads of sequence data, you need to run about 250 thousand trillion (25e16) operations. This is why it takes days to run.

You will need to index your reads based on the sequences themselves. Find and implement an architecture, whether hash tables, binary trees, or something else, that allows fast searching of the input list of sequencing reads. Of course, a hashing method is built in with Python's `dict`. There is no hard limit on the length of the key strings, and the number of entries you can put in the `dict` is limited only by available memory.

## Using dictionary search

In your case, in order to use Python's built-in dictionary to make and search the hash table, you might first do something like this:

```
seq_dict = {}
for umi, entry in duplexDict.items():
    # store whatever info you need to find the original
    # entry again in `duplexDict`
    seq_dict[entry['seq']] = {'info': umi}
```

The resulting dictionary has the sequences themselves as the keys. Searching for a particular sequence takes \$\mathcal{O}(1)\$ operations. (Note: I am assuming you need to be able to refer back to the original `duplexDict` once you have your hits. Then, you don't need all of the extra information to accompany the sequencing reads in the new dictionary. If each entry in `duplexDict` has an identifier like `umi`, just put that alone as the value in the new dictionary.)

## Generate mismatches and search

As you are only looking for sequences different by one base, you can just generate all possible mismatches.
If you eventually want to include other types of sequence similarity, you will need to use more specialized sequence analysis tools (such as BLAST).

So, you will need a simple function to generate all possible one-base-pair mismatches from each input sequence. Assuming you are using only the genomic bases A, T, C, G, for each sequence there will be 3 x n possible one-base-pair mismatches, where n is the length of the sequencing reads (plus the original sequence itself, which this function also emits; that is convenient here, since your `distance <= 1` test accepts exact matches too).

```
def get_one_bp_mismatches(seq):
    mismatches = []
    bases = ['A', 'T', 'G', 'C']
    for e, i in enumerate(seq):
        for b in bases:
            # when b == i this reproduces the original sequence,
            # which covers the distance-0 (exact match) case
            mismatches.append(seq[:e] + b + seq[e+1:])
    return mismatches
```

Then, search the dictionary like this:

```
for umi, entry in duplexDict.items():
    for seq in get_one_bp_mismatches(entry['seq']):
        if seq in seq_dict:
            finalList.append({'search_seq': umi,
                              'found_seq': seq_dict[seq]['info']})
```

Your `finalList` will contain all matching pairs, identified by whatever information you use to look them up in the original `duplexDict`. The whole search process will take on the order of \$\mathcal{O}(n)\$ operations, and should likely finish within minutes for 500 million sequencing reads.

You can then use the last lines of your existing code to generate the output file.
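Putting the pieces together, here is a minimal runnable sketch of the index-and-lookup idea. The four-base reads and `umi` keys are made-up toy data, and the complementing step from your pipeline is left out for brevity; the point is only the hash-based pairing:

```python
def get_one_bp_mismatches(seq):
    """Return seq itself plus every sequence differing from it by one base."""
    bases = 'ATGC'
    variants = set()
    for pos in range(len(seq)):
        for b in bases:
            variants.add(seq[:pos] + b + seq[pos + 1:])
    return variants

# Toy stand-in for duplexDict: umi -> read info
duplexDict = {
    'umi1': {'seq': 'AACC'},
    'umi2': {'seq': 'AACG'},   # one mismatch away from umi1's read
    'umi3': {'seq': 'GGGG'},   # no close partner
}

# Index every read by its sequence: O(n) to build, O(1) per lookup
seq_dict = {entry['seq']: umi for umi, entry in duplexDict.items()}

# Find all pairs of reads at most one base apart
pairs = set()
for umi, entry in duplexDict.items():
    for variant in get_one_bp_mismatches(entry['seq']):
        hit = seq_dict.get(variant)
        if hit is not None and hit != umi:
            # frozenset de-duplicates (a, b) vs (b, a)
            pairs.add(frozenset((umi, hit)))

print(sorted(sorted(p) for p in pairs))  # [['umi1', 'umi2']]
```

Each read generates only about 4n candidate strings, so the total work grows linearly with the number of reads instead of quadratically.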
gdb debugging process after exec call I don't know how to debug the process after calling `execle`. I've looked at other websites and some suggested using `set follow-fork-mode child`, which helped me get into the fork. However, after the fork, I exit into the main function and never get into the program I am exec'ing. Here is the code:

```
} else if (!(pid_T2 = fork())) {
    char **env = NULL;
    char *units_env = NULL;
    char *sleep_env = NULL;
    size_t sleep_sz = 16;
    env = (char **) malloc(3 * sizeof(char *));
    sleep_env = (char *) malloc(sleep_sz * sizeof(char));
    snprintf(sleep_env, sleep_sz, "TSTALL=%d", cmd_args->sleep_num);
    if (cmd_args->kb) {
        units_env = "UNITS=1";
    } else {
        units_env = "UNITS=0";
    }
    *(env) = units_env;
    *(env + 1) = sleep_env;
    *(env + 2) = "TMOM=0";
    /*printf("%s %s\n", *(env), *(env + 1));*/
    close(pipe_A2toT2[1]);
    dup2(pipe_A2toT2[0], 0);
    close(pipe_A2toT2[0]);
    execle("totalsize", "totalsize", NULL, env);
    //Exits to main after this line, never goes into program.
}
```

I know that the process image gets replaced by the exec call, so why am I still exiting to this program's main instead of going into the `totalsize` program?
> > Here is the code: > > > That's not *the* code. That's an un-compilable and meaningless snippet of the code. You also didn't tell what OS you are using, or which GDB commands you used. Here is an example showing how this is *supposed* to work, on Linux: // echo.c ``` #include <stdio.h> int main(int argc, char *argv[0]) { for (int i = 1; i < argc; ++i) { if (i != 1) printf(" "); printf("%s", argv[i]); } printf("\n"); return 0; } ``` // exec.c ``` #include <stdio.h> #include <stdlib.h> #include <sys/types.h> #include <sys/wait.h> #include <unistd.h> int main() { pid_t pid = fork(); int status; if (pid == 0) { execlp("./echo", "echo", "aa", "bb", (char*)0); abort(); } else { printf("parent %d waiting for %d\n", getpid(), pid); waitpid(pid, &status, 0); printf("child %d exited %d\n", pid, status); } return status; } ``` Now let's compile this all and run under GDB: ``` gcc -g -std=c99 -o echo echo.c gcc -g -o exec exec.c gdb -q ./exec Reading symbols from ./exec...done. (gdb) set follow-fork-mode child (gdb) break main Breakpoint 1 at 0x4006a5: file exec.c, line 9. (gdb) run Starting program: /tmp/exec Breakpoint 1, main () at exec.c:9 9 pid_t pid = fork(); (gdb) c Continuing. [New process 9851] ``` Note how GDB attached new program above, because `follow-fork-mode` told it to. ``` parent 9832 waiting for 9851 process 9851 is executing new program: /tmp/echo ``` Note how GDB noticed that the process is executing a new binary. ``` [Switching to process 9851] Breakpoint 1, main (argc=3, argv=0x7fffffffe8d8) at echo.c:4 4 for (int i = 1; i < argc; ++i) { ``` Note that we are now stopped in a *different* `main`.
Flags to enable thorough and verbose g++ warnings Often in C under `gcc`, I will start with the following set of warning flags (painfully assembled from multiple sources): ``` -Wall -Wextra -Wformat-nonliteral -Wcast-align -Wpointer-arith -Wbad-function-cast \ -Wmissing-prototypes -Wstrict-prototypes -Wmissing-declarations -Winline -Wundef \ -Wnested-externs -Wcast-qual -Wshadow -Wwrite-strings -Wno-unused-parameter \ -Wfloat-equal -pedantic -ansi ``` I will build (at least my debug versions) with this set of warnings and fix everything I possibly can (usually everything), and then only remove flags if they are either not relevant or not fixable (almost never the case). Sometimes, I'll also add `-Werror` if I have to step away while compiling. I'm just picking up C++ (yes, I'm 15 years behind the times), and I'd like to start off on the right foot. My question is: Does someone have a precompiled similar set of complete warning flags for C++ under `g++`? (I know many of them will be the same.)
D'oh, all of my original searches turned up 99% of posts on how to *suppress* warnings (scarily enough), but I just ran across [this comment](https://stackoverflow.com/questions/399850/best-compiler-warning-level-for-c-c-compilers/401276#401276), which has this lovely set of flags (some less relevant): Cross checked with: <http://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html> ``` -g -O -Wall -Weffc++ -pedantic \ -pedantic-errors -Wextra -Waggregate-return -Wcast-align \ -Wcast-qual -Wconversion \ -Wdisabled-optimization \ -Werror -Wfloat-equal -Wformat=2 \ -Wformat-nonliteral -Wformat-security \ -Wformat-y2k \ -Wimplicit -Wimport -Winit-self -Winline \ -Winvalid-pch \ -Wlong-long \ -Wmissing-field-initializers -Wmissing-format-attribute \ -Wmissing-include-dirs -Wmissing-noreturn \ -Wpacked -Wpadded -Wpointer-arith \ -Wredundant-decls \ -Wshadow -Wstack-protector \ -Wstrict-aliasing=2 -Wswitch-default \ -Wswitch-enum \ -Wunreachable-code -Wunused \ -Wunused-parameter \ -Wvariadic-macros \ -Wwrite-strings ``` So, I think that's a good starting point. Didn't realize this was a dupe, but at least it was deeply buried. :-)
Calculate gesture distance in Android I'm trying to find a way to calculate the distance traveled during a Gesture. I can get the distance between two points using MotionEvent.ACTION\_DOWN and MotionEvent.ACTION\_UP or MotionEvent.ACTION\_MOVE. But that doesn't account for moving in say, a circle. It would calculate 0 because you moved all the way back around. I'm looking for total distance traveled, preferably in pixels so I can manipulate it further, if needed.
You can use the *historic* stuff of the MotionEvent. Based on the example of the API Doc you could do something like so (for simplicity my example doesn't deal with multi-touch): On ACTION\_MOVE and ACTION\_UP do this, where `startX`, `startY` would be the last known coordinates, e.g. from last ACTION\_DOWN event. ``` float getDistance(float startX, float startY, MotionEvent ev) { float distanceSum = 0; final int historySize = ev.getHistorySize(); for (int h = 0; h < historySize; h++) { // historical point float hx = ev.getHistoricalX(0, h); float hy = ev.getHistoricalY(0, h); // distance between startX,startY and historical point float dx = (hx - startX); float dy = (hy - startY); distanceSum += Math.sqrt(dx * dx + dy * dy); // make historical point the start point for next loop iteration startX = hx; startY = hy; } // add distance from last historical point to event's point float dx = (ev.getX(0) - startX); float dy = (ev.getY(0) - startY); distanceSum += Math.sqrt(dx * dx + dy * dy); return distanceSum; } ``` ![example image](https://i.stack.imgur.com/Qq5tF.png)
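Stripped of the Android `MotionEvent` plumbing, the accumulation is just a sum of segment lengths over the sampled points. A plain-Java sanity check of that math (the class and method names here are made up for the sketch):

```java
public class PathLength {

    // Sum of distances between consecutive sampled points; mirrors the
    // historical-point loop in the MotionEvent version above.
    static float pathLength(float[] xs, float[] ys) {
        float sum = 0f;
        for (int i = 1; i < xs.length; i++) {
            float dx = xs[i] - xs[i - 1];
            float dy = ys[i] - ys[i - 1];
            sum += (float) Math.sqrt(dx * dx + dy * dy);
        }
        return sum;
    }

    public static void main(String[] args) {
        // A square path that returns to its start: the straight-line
        // displacement is 0, but the travelled distance is 40.
        float[] xs = {0, 10, 10, 0, 0};
        float[] ys = {0, 0, 10, 10, 0};
        System.out.println(pathLength(xs, ys)); // 40.0
    }
}
```

This is exactly why comparing only ACTION_DOWN and ACTION_UP coordinates reports 0 for a circle or any closed gesture, while the per-segment sum does not.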
vertical-align image in div I have a problem with vertically aligning an image in a div:

```
.img_thumb {
    float: left;
    height: 120px;
    margin-bottom: 5px;
    margin-left: 9px;
    position: relative;
    width: 147px;
    background-color: rgba(0, 0, 0, 0.5);
    border-radius: 3px;
}

.img_thumb img {
    display: block;
    margin-left: auto;
    margin-right: auto;
    vertical-align: middle;
}

<div class="img_thumb">
    <a class="images_class" href="large.jpg" rel="images"><img src="small.jpg" title="img_title" alt="img_alt" /></a>
</div>
```

As far as I know I need "display: block;" to position the image in the center, and that works. I have also found many answers in tutorials, but they are not "useful" for me, because my images are NOT all the same height!
If you have a fixed height in your container, you can set line-height to be the same as height, and it will center vertically. Then just add text-align to center horizontally. Here's an example: <http://jsfiddle.net/Cthulhu/QHEnL/1/> **EDIT** Your code should look like this: ``` .img_thumb { float: left; height: 120px; margin-bottom: 5px; margin-left: 9px; position: relative; width: 147px; background-color: rgba(0, 0, 0, 0.5); border-radius: 3px; line-height:120px; text-align:center; } .img_thumb img { vertical-align: middle; } ``` The images will always be centered horizontally and vertically, no matter what their size is. Here's 2 more examples with images with different dimensions: <http://jsfiddle.net/Cthulhu/QHEnL/6/> <http://jsfiddle.net/Cthulhu/QHEnL/7/> **UPDATE** It's now 2016 (the future!) and looks like a few things are changing (finally!!). Back in 2014, [Microsoft announced](https://blogs.msdn.microsoft.com/ie/2014/08/07/stay-up-to-date-with-internet-explorer/) that it will stop supporting IE8 in all versions of Windows and will encourage all users to update to IE11 or Edge. Well, this is supposed to happen next Tuesday (12th January). Why does this matter? With the announced **death of IE8**, we can finally start using **CSS3** magic. With that being said, here's an updated way of aligning elements, both horizontally and vertically: ``` .container { position: relative; } .container .element { position: absolute; left: 50%; top: 50%; transform: translate(-50%, -50%); } ``` Using this `transform: translate();` method, you don't even need to have a fixed height in your container, **it's fully dynamic**. Your element has fixed height or width? Your container as well? No? It doesn't matter, it will always be centered because all centering properties are fixed on the child, it's independent from the parent. Thank you CSS3. If you only need to center in one dimension, you can use `translateY` or `translateX`. 
Just try it for a while and you'll see how it works. Also, try to change the values of the `translate`, you will find it useful for a bunch of different situations. Here, have a new fiddle: <https://jsfiddle.net/Cthulhu/1xjbhsr4/> For more information on `transform`, [here's a good resource](https://css-tricks.com/almanac/properties/t/transform/).
Pinning memory in .NET the lifetime of an object I recently learned that pinning in .NET is no actual process. It's "just" creating a pinned local variable in IL and everything this variable is pointing to is considered pinned by the GC. You can read more about this [here](https://mattwarren.org/2016/10/26/How-does-the-fixed-keyword-work/). Now I wonder: Is it possible to pin a field of a `class` or `struct` so that the `object` it points to is assumed as pinned by the GC without using `GCHandle` or so. Something like this *(pseudocode!)*: ``` public unsafe [class|struct] Something { public byte[] Data = new byte[4096]; private /*some keywords*/ byte* ptr = /*some keywords like fixed*/ Data; } ``` If this is not possible within plain C#, is it possible when using IL? Or can't `struct` or `class` fields have the effect of pinning objects? (Maybe it's only possible for local variables?)
Not as a *field*, no. Essentially, you're absolutely correct here: > > Maybe it's only possible for local variables? > > > Yes, it is only possible for local variables. The point here is that the GC does not want to have to crawl the heap to find pins (it is happy to look at the stack - it already needs to do that), and there is no concept of an object *by itself* electing to be pinned. You can of course use a pinned *local* to achieve this: ``` fixed(byte* ptr = obj.Data) { RunYourMainCode(obj); } ``` but this requires the pinned local to span the code that needs the method to retain pinned. If you really want something to not move **and you can't use a local**: - use a `GCHandle` (that's what it is *for*), or - use unmanaged memory Note that with `Memory<T>` and `Span<T>`, you can still use *managed* code (i.e. almost zero `unsafe` usage) to talk to *unmanaged* memory. Specifically, a `Memory<T>` can be constructed over unsafe memory, and the `.Span` from that provides `ref T` access to the data (`ref T` is a managed pointer, contrast to `T*` which is an unmanaged pointer; very similar, but managed pointers work with GC and do not require `unsafe`).
Is code that terminates on a random condition guaranteed to terminate? If I had code which terminated based on whether a random number generator returned a particular result (as follows), would it be 100% certain that the code would terminate if it was allowed to run forever?

```
while (random(MAX_NUMBER) != 0): // random returns a random number between 0 and MAX_NUMBER
    print('Hello World')
```

I am also interested in any distinctions between purely random and the deterministic random that computers generally use. Assume the seed is not able to be known in the case of the deterministic random. Naively it could be suggested that the code will exit; after all, every number has some possibility, and there is all of time for that possibility to be exercised. On the other hand, it could be argued that there is the random chance it may not ever meet the exit condition -- the generator could generate 1 'randomly' until infinity. (I suppose one would question the validity of the random number generator if it was a deterministic generator returning only 1's 'randomly', though.)
By definition, it must be possible for a truly random sequence to never contain a 0 (for example, an endless run of 1s), so this program must be able to run forever. Otherwise, the source could not be considered random from a mathematical/statistical point of view: you cannot rule out a specific, legitimate sequence and still consider your system truly random. An infinite run without a single 0 would fail to be recognized as a legitimate random sequence by most if not all of the standard statistical methods of analysis we use in practice, but, despite this, it actually is a perfectly legitimate random sequence in theory.

That is the theory. In practice, we all know that, given enough time, we will eventually draw a 0 and the program will terminate.

The random generators used by computers are considered random when a reasonably long sequence of numbers they generate (say, some millions of numbers) cannot be distinguished from a truly random one when analyzed with standard statistical tools. That is: we do not really know whether the sequence of numbers they generate is really random, but we cannot tell it apart from a genuine random sequence when we analyze a finite-length sample. This is a big difference, because given a longer sequence you might discover that your sequence is not really random and can be reproduced. In cryptography, this would be a very bad discovery.

As said above, the statistical methods of analysis we use would not recognize an endless run with no 0 as a legitimate random sequence. Nevertheless, this is a failure of the analytical methods, not of the random generator. Mathematically speaking, you cannot rule out such a sequence just because it does not satisfy your analytical system or your personal taste. If you do not have a real, mathematical reason to rule it out, it is legitimate.
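The distinction drawn here ("possible in theory, certain in practice") is what probability theory calls termination *with probability 1*: if each call returns 0 with probability p, the chance of still running after n iterations is (1-p)^n, which tends to 0 but never reaches it. A quick sketch, with a made-up `MAX_NUMBER` (so `random(MAX_NUMBER)` yields one of MAX_NUMBER+1 equally likely values):

```python
import random

MAX_NUMBER = 9                    # draws are 0..9, so p = 1/10
p = 1.0 / (MAX_NUMBER + 1)

def prob_still_running(n):
    """Probability the loop has NOT yet drawn a 0 after n iterations."""
    return (1 - p) ** n

print(prob_still_running(10))     # ~0.35
print(prob_still_running(100))    # ~2.7e-5
print(prob_still_running(1000))   # astronomically small, but never exactly 0

# Empirically, the loop terminates quickly: the iteration count follows a
# geometric distribution with mean (1 - p) / p = 9 for these parameters.
rng = random.Random(42)           # fixed seed so the demo is repeatable
def iterations_until_zero():
    count = 0
    while rng.randrange(MAX_NUMBER + 1) != 0:
        count += 1
    return count

mean = sum(iterations_until_zero() for _ in range(10_000)) / 10_000
print(mean)                       # close to 9
```

So "guaranteed" is the wrong word, but the probability of the pathological all-1s run decays exponentially, which is why no one ever observes it.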
if constexpr - why is discarded statement fully checked? I was messing around with c++20 consteval in GCC 10 and wrote this code ``` #include <optional> #include <tuple> #include <iostream> template <std::size_t N, typename Predicate, typename Tuple> consteval std::optional<std::size_t> find_if_impl(Predicate&& pred, Tuple&& t) noexcept { constexpr std::size_t I = std::tuple_size_v<std::decay_t<decltype(t)>> - N; if constexpr (N == 0u) { return std::nullopt; } else { return pred(std::get<I>(t)) ? std::make_optional(I) : find_if_impl<N - 1u>(std::forward<decltype(pred)>(pred), std::forward<decltype(t)>(t)); } } template <typename Predicate, typename Tuple> consteval std::optional<std::size_t> find_if(Predicate&& pred, Tuple&& t) noexcept { return find_if_impl<std::tuple_size_v<std::decay_t<decltype(t)>>>( std::forward<decltype(pred)>(pred), std::forward<decltype(t)>(t)); } constexpr auto is_integral = [](auto&& x) noexcept { return std::is_integral_v<std::decay_t<decltype(x)>>; }; int main() { auto t0 = std::make_tuple(9, 1.f, 2.f); constexpr auto i = find_if(is_integral, t0); if constexpr(i.has_value()) { std::cout << std::get<i.value()>(t0) << std::endl; } } ``` Which is supposed to work like the STL find algorithm but on tuples and instead of returning an iterator, it returns an optional index based on a compile time predicate. Now this code compiles just fine and it prints out > > 9 > > > But if the tuple does not contain an element that's an integral type, the program doesn't compile, because the i.value() is still called on an empty optional. Now why is that?
This is just how [constexpr if](https://en.cppreference.com/w/cpp/language/if#Constexpr_If) works. If we check [[stmt.if]/2](https://timsong-cpp.github.io/cppwp/stmt.select#stmt.if-2):

> If the if statement is of the form if constexpr, the value of the condition shall be a contextually converted constant expression of type bool; this form is called a constexpr if statement. If the value of the converted condition is false, the first substatement is a discarded statement, otherwise the second substatement, if present, is a discarded statement. **During the instantiation of an enclosing templated entity ([temp.pre]), if the condition is not value-dependent after its instantiation, the discarded substatement (if any) is not instantiated.**[...]

emphasis mine

So the discarded substatement goes uninstantiated only when we are in a template and the condition is value-dependent. `main` is not a function template, so the body of the if statement is still checked by the compiler for correctness.

Cppreference also covers this in its section on constexpr if:

> If a constexpr if statement appears inside a templated entity, and if condition is not value-dependent after instantiation, the discarded statement is not instantiated when the enclosing template is instantiated.
>
> ```
> template<typename T, typename ... Rest>
> void g(T&& p, Rest&& ...rs) {
>     // ... handle p
>     if constexpr (sizeof...(rs) > 0)
>         g(rs...);  // never instantiated with an empty argument list.
> }
> ```
>
> Outside a template, a discarded statement is fully checked. if constexpr is not a substitute for the #if preprocessing directive:
>
> ```
> void f() {
>     if constexpr(false) {
>         int i = 0;
>         int *p = i; // Error even though in discarded statement
>     }
> }
> ```
Regex pattern isn't matching certain show titles Using C# regex to match and return data parsed from a string is returning unreliable results. The pattern I am using is as follows : ``` Regex r=new Regex( @"(.*?)S?(\d{1,2})E?(\d{1,2})(.*)|(.*?)S?(\d{1,2})E?(\d{1,2})", RegexOptions.IgnoreCase ); ``` **Following are a couple test cases that fail** --- ``` Ellen 2015.05.22 Joseph Gordon Levitt [REPOST] The Soup 2015.05.22 [mp4] Big Brother UK Live From The House (May 22, 2015) ``` Should return - Show Name (eg, `Ellen`) - Date (eg, `2015.05.22`) - Extra Info (eg, `Joseph Gordon Levitt [REPOST]`) --- ``` Alaskan Bush People S02 Wild Times Special ``` Should return - Show Name (eg, `Alaskan Bush People`) - Season (eg, `02`) - Extra Info (eg, `Wild Times Special`) --- ``` 500 Questions S01E03 ``` Should return - Show Name (eg, `500 Questions`) - Season (eg, `01`) - Episode (eg, `03`) **Examples that work and return proper data** ``` Boyster S01E13 – E14 Mysteries at the Museum S08E08 Mysteries at the National Parks S01E07 – E08 The Last Days Of… S01E06 Born Naughty? S01E02 Have I Got News For You S49E07 ``` *What it seems like, is that the pattern is ignoring the S and the E if not found, and then using the first set of matching numbers to fill in that slot.* It is clear that there is more work needed on this pattern to work with the above varying strings. Your assistance in this matter is much appreciated.
# Divide and Conquer You're trying to parse too much with one simple expression. That's not going to work very well. The *best* approach in this case is to divide the problem into smaller problems, and solve each one separately. Then, we can combine everything into one pattern later. Let's write some patterns for the data you want to extract. - Season/episode: ``` S\d+(?:E\d+(?:\s*\p{Pd}\s*E\d+)?)? ``` I used `\p{Pd}` instead of `-` to accommodate for any dash type. - Date: ``` \d{4}\.\d{1,2}\.\d{1,2} ``` Or... ``` (?i:January|February|March|April|May|June|July|August|September|October|November|December) \s*\d{1,2},\s*\d{4} ``` - Write a simple pattern for extra info: ``` .*? ``` (yeah, that's pretty generic) - We can also detect the show format like this: ``` \[.*?\] ``` - You can add additional parts as required. Now, we can put everything into one pattern, using group names to extract data: ``` ^\s* (?<name>.*?) (?<info> \s+ (?: (?<episode>S\d+(?:E\d+(?:\s*\p{Pd}\s*E\d+)?)?) | (?<date>\d{4}\.\d{1,2}\.\d{1,2}) | \(?(?<date>(?i:January|February|March|April|May|June|July|August|September|October|November|December)\s*\d{1,2},\s*\d{4})\)? | \[(?<format>.*?)\] | (?<extra>(?(info)|(?!)).*?) ))* \s*$ ``` Just ignore the `info` group (it's used for the conditional in `extra`, so that `extra` doesn't consume what should be part of the show name). And you can get multiple `extra` infos, so just concatenate them, putting a space in between each part. Sample code: ``` var inputData = new[] { "Boyster S01E13 – E14", "Mysteries at the Museum S08E08", "Mysteries at the National Parks S01E07 – E08", "The Last Days Of… S01E06", "Born Naughty? S01E02", "Have I Got News For You S49E07", "Ellen 2015.05.22 Joseph Gordon Levitt [REPOST]", "The Soup 2015.05.22 [mp4]", "Big Brother UK Live From The House (May 22, 2015)", "Alaskan Bush People S02 Wild Times Special", "500 Questions S01E03" }; var re = new Regex(@" ^\s* (?<name>.*?) 
(?<info> \s+ (?: (?<episode>S\d+(?:E\d+(?:\s*\p{Pd}\s*E\d+)?)?) | (?<date>\d{4}\.\d{1,2}\.\d{1,2}) | \(?(?<date>(?i:January|February|March|April|May|June|July|August|September|October|November|December)\s*\d{1,2},\s*\d{4})\)? | \[(?<format>.*?)\] | (?<extra>(?(info)|(?!)).*?) ))* \s*$ ", RegexOptions.IgnorePatternWhitespace); foreach (var input in inputData) { Console.WriteLine(); Console.WriteLine("--- {0} ---", input); var match = re.Match(input); if (!match.Success) { Console.WriteLine("FAIL"); continue; } foreach (var groupName in re.GetGroupNames()) { if (groupName == "0" || groupName == "info") continue; var group = match.Groups[groupName]; if (!group.Success) continue; foreach (Capture capture in group.Captures) Console.WriteLine("{0}: '{1}'", groupName, capture.Value); } } ``` And the output of this is... ``` --- Boyster S01E13 - E14 --- name: 'Boyster' episode: 'S01E13 - E14' --- Mysteries at the Museum S08E08 --- name: 'Mysteries at the Museum' episode: 'S08E08' --- Mysteries at the National Parks S01E07 - E08 --- name: 'Mysteries at the National Parks' episode: 'S01E07 - E08' --- The Last Days Ofâ?¦ S01E06 --- name: 'The Last Days Ofâ?¦' episode: 'S01E06' --- Born Naughty? S01E02 --- name: 'Born Naughty?' episode: 'S01E02' --- Have I Got News For You S49E07 --- name: 'Have I Got News For You' episode: 'S49E07' --- Ellen 2015.05.22 Joseph Gordon Levitt [REPOST] --- name: 'Ellen' date: '2015.05.22' format: 'REPOST' extra: 'Joseph' extra: 'Gordon' extra: 'Levitt' --- The Soup 2015.05.22 [mp4] --- name: 'The Soup' date: '2015.05.22' format: 'mp4' --- Big Brother UK Live From The House (May 22, 2015) --- name: 'Big Brother UK Live From The House' date: 'May 22, 2015' --- Alaskan Bush People S02 Wild Times Special --- name: 'Alaskan Bush People' episode: 'S02' extra: 'Wild' extra: 'Times' extra: 'Special' --- 500 Questions S01E03 --- name: '500 Questions' episode: 'S01E03' ```
Filtering results from Google Analytics Reporting API I am successfully downloading results from Google Analytics using the reporting API (version 4), with the PHP client library. But I have not figured out how to correctly filter these results. I see how this would work via cURL, but not through the client library. I looked through the client library code, and there is a class method: ``` apiclient-services/Google/Service/AnalyticsReporting/ReportRequest.php: public function setMetricFilterClauses($metricFilterClauses) ``` I do not see any documentation or any usage of the associated get method: ``` public function getMetricFilterClauses() ``` Are there examples of using filters through the PHP client library?
## Background

The [Google API Client libraries](https://developers.google.com/api-client-library/) are generated from the [Google Discovery Service](https://developers.google.com/discovery/). And the [PHP client library](https://developers.google.com/api-client-library/php/) generates a `setProperty` and `getProperty` for every property of a resource.

## Analytics Reporting API V4

The [Analytics Reporting API V4 reference docs](https://developers.google.com/analytics/devguides/reporting/core/v4/rest/v4/reports/batchGet) exhaustively describe the API. The [Developer Guide](https://developers.google.com/analytics/devguides/reporting/core/v4/basics#filtering_1) gives the underlying JSON example which the client libraries will generate:

```
POST https://analyticsreporting.googleapis.com/v4/reports:batchGet
{
  "reportRequests": [
    {
      "viewId": "XXXX",
      "dateRanges": [
        {"endDate": "2014-11-30", "startDate": "2014-11-01"}
      ],
      "metrics": [
        {"expression": "ga:pageviews"},
        {"expression": "ga:sessions"}
      ],
      "dimensions": [{"name": "ga:browser"}, {"name": "ga:country"}],
      "dimensionFilterClauses": [
        {
          "filters": [
            {
              "dimensionName": "ga:browser",
              "operator": "EXACT",
              "expressions": ["Chrome"]
            }
          ]
        }
      ]
    }
  ]
}
```

And the [Samples page](https://developers.google.com/analytics/devguides/reporting/core/v4/samples) gives many example requests in Python, Java, PHP and JavaScript, which should give you a good sense of how to work with the individual client libraries. But you are correct that there is not an explicit example of PHP using a filter.

## PHP Filter Example

Below is the same example as the request above:

```
// Create the DateRange object.
$dateRange = new Google_Service_AnalyticsReporting_DateRange();
$dateRange->setStartDate("2014-11-01");
$dateRange->setEndDate("2014-11-30");

// Create the Metrics object.
$pageviews = new Google_Service_AnalyticsReporting_Metric();
$pageviews->setExpression("ga:pageviews");

$sessions = new Google_Service_AnalyticsReporting_Metric();
$sessions->setExpression("ga:sessions");

//Create the Dimensions object.
$browser = new Google_Service_AnalyticsReporting_Dimension();
$browser->setName("ga:browser");

$country = new Google_Service_AnalyticsReporting_Dimension();
$country->setName("ga:country");

// Create the DimensionFilter.
$dimensionFilter = new Google_Service_AnalyticsReporting_DimensionFilter();
$dimensionFilter->setDimensionName('ga:browser');
$dimensionFilter->setOperator('EXACT');
$dimensionFilter->setExpressions(array('Chrome'));

// Create the DimensionFilterClauses
$dimensionFilterClause = new Google_Service_AnalyticsReporting_DimensionFilterClause();
$dimensionFilterClause->setFilters(array($dimensionFilter));

// Create the ReportRequest object.
$request = new Google_Service_AnalyticsReporting_ReportRequest();
$request->setViewId("XXXX");
$request->setDateRanges($dateRange);
$request->setDimensions(array($browser, $country));
$request->setDimensionFilterClauses(array($dimensionFilterClause));
$request->setMetrics(array($pageviews, $sessions));

$body = new Google_Service_AnalyticsReporting_GetReportsRequest();
$body->setReportRequests( array( $request) );
return $analyticsreporting->reports->batchGet( $body );
```

As you probably noticed, I never once used `$object->getProperty()`. Basically, all it would do is give me its current value. When calling the API, you should only ever need `$object->setProperty($value);`. Hence why I gave you the background that the client libraries are generated.

## Conclusion

The Analytics Reporting API itself is complex and there are many client library languages. It is not always possible to give an example of every possible usage of an API in every possible client library language.
That is why it is necessary to understand how to read the reference docs and how the client libraries are generated from the structure they describe.
Assign column of repeating values from a list Suppose I have a list of data, e.g. `[1,2,3,4,5]`, and I have `1704` rows in my DataFrame. Now I want to add a new column with these values, repeated till the last row, as shown below:

```
1
2
3
4
5
1
2
3
4
5
..
```

and so on till the last record. I tried `df['New Column']=pd.Series([1,2,3,4,5])` but it inserts values only in the first 5 rows, while I want this series to be repeated till the last row. I referred to many posts on SO but didn't find any relevant ones. I am a newbie to the pandas framework. Please help me with this. Thanks in advance.
Below, I propose two solutions that also handle situations where the length of `df` is not a perfect multiple of the list length. ### [`np.tile`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.tile.html) ``` v = pd.Series([1, 2, 3, 4, 5]) df['NewCol'] = np.tile(v, len(df) // len(v) + 1)[:len(df)] ``` --- ### `cycle` and `islice` A pure-python approach featuring `itertools`. ``` from itertools import cycle, islice it = cycle([1, 2, 3, 4, 5]) df['NewCol'] = list(islice(it, len(df))) ```
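As a quick sanity check of the truncation logic (using a hypothetical 7-row frame, so the length is deliberately not a multiple of the list length), the `np.tile` approach slices off the surplus repetition cleanly:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'x': range(7)})  # 7 rows: not a multiple of 5
v = pd.Series([1, 2, 3, 4, 5])

# Tile one repetition too many, then slice down to the frame's length.
df['NewCol'] = np.tile(v, len(df) // len(v) + 1)[:len(df)]

print(df['NewCol'].tolist())  # [1, 2, 3, 4, 5, 1, 2]
```

When `len(df)` happens to be an exact multiple of `len(v)`, the extra tiled repetition is simply discarded by the slice, so the same one-liner covers both cases.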
prevent BufferedReader from closing the file for a list of files I have a class which extends BufferedReader, and a list of file streams. `b.close()` ends up being called for all except the last stream, but I want to keep the streams open. How do I do this? Thanks

```
class TestReader(BufferedReader):
    pass

def test(streams):
    for stream in streams:
        b=TestReader(stream)
        do_something(b)
    #all the streams except streams[-1] are closed, how do I prevent this?

streams=[open('test1.txt','rb'),open('test2.txt','rb')]
test(streams)
streams.do_something_else()
```
Even though in the implementation `BufferedIOBase` classes wrap an `IOBase` object, their interface is a stream (*everything* inherits from `IOBase`), so the ordinary behavior of an `IOBase` object is to close itself when it goes out of scope. `BufferedIOBase` just delegates the `close()` call to the underlying stream.

**You shouldn't view a `BufferedReader` as a stream wrapper (although that is how it is implemented), but as a type-casting of an existing stream.** The state of the two streams is completely bound together. However, you can unbind the wrapped stream with `detach()`, but this leaves the `BufferedIOBase` object useless.

Additionally, `io.open` already returns a `BufferedReader` when the mode is `rb`, so you are double-buffering. You should use `io.FileIO` instead.

You have a few choices:

1. Create a new stream and a new underlying file descriptor, and pass around file names instead of streams. This is your easiest and safest option.

2. Create raw file descriptors and create streams from them as needed. This requires some care that multiple streams are not using the same file descriptor at the same time. For example:

```
fd = os.open('test.txt', os.O_RDONLY)

file1 = FileIO(fd, 'r', closefd=False)
file2 = FileIO(fd, 'r', closefd=False)

file1.read(100)
assert file1.tell() == 100
file2.read(100)
assert file1.tell() == 200
```

3. `detach()` the underlying stream before your `BufferedIOBase` object closes its stream. (Remember to rewind the stream!)
``` def test(streams): for stream in streams: b=TestReader(stream) do_something(b) wrappedstream = b.detach() assert wrappedstream is stream ``` You can even implement this in your destructor: ``` class TestReader(BufferedReader): def __del__(self): self.detach() # self.raw will not be closed, # rather left in the state it was in at detachment ``` Or just disable `close()` delegation completely if you think the semantics are wrong: ``` class TestReader(BufferedReader): def close(self): self.closed = True ``` I don't have the big picture of what you are doing (possibly you need a different design), but this is how I would implement the code I see: ``` from io import FileIO, BufferedReader import io import os class TestReader(BufferedReader): pass def test(streams): for stream in streams: b = TestReader(stream) def test_reset(streams): """Will try to leave stream state unchanged""" for stream in streams: pos = stream.tell() b = TestReader(stream) do_something(b) b.detach() stream.seek(pos) filenames = ['test1.txt', 'test2.txt'] # option 1: just make new streams streams = [FileIO(name, 'r') for name in filenames] test(streams) streams = [io.open(name, 'rb') for name in filenames] #etc # option 2: use file descriptors fds = [os.open(name, os.O_RDONLY) for name in filenames] #closefd = False means "do not close fd on __del__ or __exit__" #this is only an option when you pass a fd instead of a file name streams = [FileIO(fd, 'r', closefd=False) for fd in fds] test(streams) streams = [] for fd in fds: os.lseek(fd, 0, os.SEEK_SET) streams.append(io.open(fd, 'rb', closefd=False)) # you can also .seek(0) on the BufferedReader objects # instead of os.lseek on the fds # option 3: detach streams = [FileIO(name, 'r') for name in filenames] test_reset(streams) # streams[*] should still be in the same state as when you passed it in ```
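To see the close-delegation behavior in isolation, here is a small self-contained check using an in-memory `BytesIO` in place of real files (an illustrative substitute; the semantics are the same for any wrapped stream): closing the `BufferedReader` closes the underlying stream, while `detach()` unbinds it first.

```python
import io

# Closing the wrapper closes the wrapped stream as well.
raw = io.BytesIO(b"some data")
buffered = io.BufferedReader(raw)
buffered.close()
assert raw.closed

# detach() unbinds the wrapped stream, leaving it open
# (but possibly repositioned by read-ahead, hence the seek).
raw2 = io.BytesIO(b"some data")
buffered2 = io.BufferedReader(raw2)
buffered2.read(4)
detached = buffered2.detach()
assert detached is raw2 and not raw2.closed

detached.seek(0)  # rewind, as recommended above
assert detached.read() == b"some data"
```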
Angular: Extending Services and Passing Parameters I'm having a hard time understanding how to extend services in Angular. I have a service that connects to Firebase and does all sorts of common tasks (get, set, update, list, etc.) and instead of re-writing it for my special components I tried just extending it. The idea was I could pass just the new part of the path but that throws an error: ``` Cannot resolve all parameters for 'FirebaseService'(?). Make sure that all the parameters are decorated with Inject or have valid type annotations and that 'FirebaseService' is decorated with Injectable. ``` The issue is in the constructor and my lack of OOP brains. I can pass other services or providers into my service but I can no longer pass simple string parameters unless I'm missing something. I tried setting properties but I don't think I'm getting the context right. I was thinking it was an issue with the @Injectable but I'm not sure. Here's a simplified version of what I tried first: **UPDATED TO INCLUDE PLUNKER LINKS:** [Plunker for passing with parameters](http://plnkr.co/edit/6cXqsD9M1O19BjThv61p?p=preview) [Plunker for passing with constructor](http://plnkr.co/edit/7UtV4VEEpR02xIsBqnGR?p=preview) ``` @Injectable() export class FirebaseService { rootPath:string = "https://mysite.firebase.com/"; childPath:string; pathToUse: string; constructor() { if(this.childPath){ this.pathToUse = this.rootPath + this.childPath; }else{ this.pathToUse = this.rootPath; } console.log(this.pathToUse); } } //The in project_service.ts @Injectable() export class ProjectService extends FirebaseService{ childPath:string = "projects"; constructor(){ super(); } } ``` I expected it to have the "projects" line attached. It doesn't, it just repeats. 
So Then I tried passing through the constructor: ``` @Injectable() export class FirebaseService { rootPath:string = "https://mysite.firebase.com"; pathToUse: string; constructor(childPath:string) { if(childPath){ this.pathToUse = this.rootPath + childPath; }else{ this.pathToUse = this.rootPath; } console.log(this.pathToUse); } } //The in project_service.ts @Injectable() export class ProjectService extends FirebaseService{ constructor(){ super('projects'); } } ``` Which just blows everything up. I have a way around it but it seems like a total hack. What is the correct way to pass the "projects" parameter to the parent class?
So after some good work by CH Buckingham, I've concluded that doing it the "typical" way is impossible: Angular2 simply takes over the constructor() function with the injector. What does work, however, is making an alternate "init" function that you can then pass parameters to.

```
@Injectable()
export class ParentService {
  root:string = "This content comes from: ";
  myString:string = "The Parent";
  resultString:string;

  constructor(){
    this.init();
  }

  init() {
    this.resultString = this.root + this.myString;
  }
}

@Injectable()
export class ChildService extends ParentService {
  constructor(){
    super();
  }

  init() {
    this.myString = "The Child";
    super.init();
  }
}
```

In this way you can set values on the child object or pass them through to the parent. [Plunker of this in action](http://plnkr.co/edit/6cXqsD9M1O19BjThv61p?p=preview)
How to asynchronously copy memory from the host to the device using thrust and CUDA streams I would like to copy memory from the host to the device using thrust as in ``` thrust::host_vector<float> h_vec(1 << 28); thrust::device_vector<float> d_vec(1 << 28); thrust::copy(h_vec.begin(), h_vec.end(), d_vec.begin()); ``` using CUDA streams analogously to how you would copy memory from the device to the device using streams: ``` cudaStream_t s; cudaStreamCreate(&s); thrust::device_vector<float> d_vec1(1 << 28), d_vec2(1 << 28); thrust::copy(thrust::cuda::par.on(s), d_vec1.begin(), d_vec1.end(), d_vec2.begin()); cudaStreamSynchronize(s); cudaStreamDestroy(s); ``` The problem is that I can't set the execution policy to CUDA to specify the stream when copying from the host to the device, because, in that case, thrust would assume that both vectors are stored on the device. Is there a way to get around this problem? I'm using the latest thrust version from github (it says 1.8 in the version.h file).
As indicated in the comments, I don't think this will be possible directly with `thrust::copy`. However we can use `cudaMemcpyAsync` in a thrust application to achieve the goal of asynchronous copies and overlap of copy with compute. Here is a worked example: ``` #include <thrust/host_vector.h> #include <thrust/device_vector.h> #include <thrust/system/cuda/experimental/pinned_allocator.h> #include <thrust/system/cuda/execution_policy.h> #include <thrust/fill.h> #include <thrust/sequence.h> #include <thrust/for_each.h> #include <iostream> // DSIZE determines duration of H2D and D2H transfers #define DSIZE (1048576*8) // SSIZE,LSIZE determine duration of kernel launched by thrust #define SSIZE (1024*512) #define LSIZE 1 // KSIZE determines size of thrust kernels (number of threads per block) #define KSIZE 64 #define TV1 1 #define TV2 2 typedef int mytype; typedef thrust::host_vector<mytype, thrust::cuda::experimental::pinned_allocator<mytype> > pinnedVector; struct sum_functor { mytype *dptr; sum_functor(mytype* _dptr) : dptr(_dptr) {}; __host__ __device__ void operator()(mytype &data) const { mytype result = data; for (int j = 0; j < LSIZE; j++) for (int i = 0; i < SSIZE; i++) result += dptr[i]; data = result; } }; int main(){ pinnedVector hi1(DSIZE); pinnedVector hi2(DSIZE); pinnedVector ho1(DSIZE); pinnedVector ho2(DSIZE); thrust::device_vector<mytype> di1(DSIZE); thrust::device_vector<mytype> di2(DSIZE); thrust::device_vector<mytype> do1(DSIZE); thrust::device_vector<mytype> do2(DSIZE); thrust::device_vector<mytype> dc1(KSIZE); thrust::device_vector<mytype> dc2(KSIZE); thrust::fill(hi1.begin(), hi1.end(), TV1); thrust::fill(hi2.begin(), hi2.end(), TV2); thrust::sequence(do1.begin(), do1.end()); thrust::sequence(do2.begin(), do2.end()); cudaStream_t s1, s2; cudaStreamCreate(&s1); cudaStreamCreate(&s2); cudaMemcpyAsync(thrust::raw_pointer_cast(di1.data()), thrust::raw_pointer_cast(hi1.data()), di1.size()*sizeof(mytype), cudaMemcpyHostToDevice, s1); 
cudaMemcpyAsync(thrust::raw_pointer_cast(di2.data()), thrust::raw_pointer_cast(hi2.data()), di2.size()*sizeof(mytype), cudaMemcpyHostToDevice, s2); thrust::for_each(thrust::cuda::par.on(s1), do1.begin(), do1.begin()+KSIZE, sum_functor(thrust::raw_pointer_cast(di1.data()))); thrust::for_each(thrust::cuda::par.on(s2), do2.begin(), do2.begin()+KSIZE, sum_functor(thrust::raw_pointer_cast(di2.data()))); cudaMemcpyAsync(thrust::raw_pointer_cast(ho1.data()), thrust::raw_pointer_cast(do1.data()), do1.size()*sizeof(mytype), cudaMemcpyDeviceToHost, s1); cudaMemcpyAsync(thrust::raw_pointer_cast(ho2.data()), thrust::raw_pointer_cast(do2.data()), do2.size()*sizeof(mytype), cudaMemcpyDeviceToHost, s2); cudaDeviceSynchronize(); for (int i=0; i < KSIZE; i++){ if (ho1[i] != ((LSIZE*SSIZE*TV1) + i)) { std::cout << "mismatch on stream 1 at " << i << " was: " << ho1[i] << " should be: " << ((DSIZE*TV1)+i) << std::endl; return 1;} if (ho2[i] != ((LSIZE*SSIZE*TV2) + i)) { std::cout << "mismatch on stream 2 at " << i << " was: " << ho2[i] << " should be: " << ((DSIZE*TV2)+i) << std::endl; return 1;} } std::cout << "Success!" << std::endl; return 0; } ``` For my test case, I used RHEL5.5, Quadro5000, and cuda 6.5RC. This example is designed to have thrust create very small kernels (only a single threadblock, as long as `KSIZE` is small, say 32 or 64), so that the kernels that thrust creates from `thrust::for_each` are able to run concurrently. When I profile this code, I see: ![nvvp output for thrust streams application](https://i.stack.imgur.com/RoGmb.png) This indicates that we are achieving proper overlap both between thrust kernels, and between copy operations and thrust kernels, as well as asynchronous data copying at the completion of the kernels. 
Note that the `cudaDeviceSynchronize()` operation "fills" the timeline, indicating that all the async operations (data copying, thrust functions) were issued asynchronously and control returned to the host thread before any of the operations were underway. All of this is expected, proper behavior for full concurrency between host, GPU, and data copying operations.
Post-increment and retro compatibility I have an example where I can't figure out why the result is not what I expect. Code:

```
class A {
    protected $a = 1;

    function a(){
        echo $this->a++;
    }
}

class B extends A {
    protected $a = 10;

    function b(){
        echo $this->a++;
        $this->a() . PHP_EOL;
    }
}

$b = new B();
$b->b();
```

In my mind, the most I could expect as a result would be something like 1011, but it turns out the result is 10111213. I can't figure out why I'm getting this result. Can someone explain to me what is going on under the hood?
It's because your methods have the same name (case insensitive) as the classes that contain them. They're behaving like [constructors](http://php.net/manual/en/language.oop5.decon.php) as well as your explicit calls. From the PHP documentation I linked above: > > For backwards compatibility with PHP 3 and 4, if PHP cannot find a \_\_construct() function for a given class, it will search for the old-style constructor function, by the name of the class. > > > So `$b = new B();` produces 1011, and `$b->b();` produces 1213. This behavior is deprecated, and you'll get a warning to that effect if you have that level of error reporting enabled. I see this message in my dev environment: > > Deprecated: Methods with the same name as their class will not be constructors in a future version of PHP; A has a deprecated constructor in C:\Apache24\htdocs\example.php on line 2 > > > Obviously the easiest way to avoid the unexpected behavior is to change the name of the methods to something other than the class name. If you don't want to do that, you can add a modern constructor. ``` function __construct() {} ``` Even if it doesn't actually do anything else, it will stop the `a()` and `b()` methods from acting like constructors.
Kendo UI: Place Grid Summary Values in Footer Using the Kendo UI Grid and MVC 4, I haven't been able to find a way to put summary totals (financial) at the bottom of the grid for select columns. Is this possible?
Yes indeed! check [DataSource Aggregate](http://docs.kendoui.com/api/framework/datasource#aggregate%20array%20%20objectdefault%29). Example: ``` var stocksDataSource = new kendo.data.DataSource({ transport:{ read:function (options) { } }, schema :{ model:{ fields:{ name :{ type:"string" }, price:{ type:"number" } } } }, aggregate:[ { field:"price", aggregate:"sum" } ], pageSize :10 }); ``` I have defined a `DataSource` with two fields: the items `name` and `price`. I want to totalize the `price` so I defined an `aggregate` for `price` and what I'm going to do is `sum` (you can also `min`, `max`, `average` and `count`). Then in the `Grid` when I define the columns I write: ``` columns :[ { field:"name", title:"Product" }, { field:"price", title:"Price", footerTemplate:"Sum: #= sum # " } ], ``` And that's it!
using operators with zipWithN Say, we can write something like this:

```
zipWith (,) [1,2,3] [4,5,6]
```

If we want to tuple 3 lists, we can write:

```
zipWith3 (,,) [1,2,3] [4,5,6] [7,8,9]
```

We can also use `zipWith4 (,,,)`, `zipWith5 (,,,,)` and so on. Now, I want to do the same thing, but using addition instead of the comma operator. Is there any way to define it in the same terse way, not using lambdas as in

```
zipWith3 (\a b c -> a + b + c) [1, 2, 3] [4, 5, 6] [7, 8, 9]
```

Thanks in advance for any answer.
It sounds like you want "point free" style code for `\a b c -> a + b + c`. Let it be known that, in general, `\a b c -> a + b + c` is often preferable to point free code because it is much easier to read four weeks later when you found a bug. There is a wiki article on point free programming ([source](http://www.haskell.org/haskellwiki/Pointfree)). You can also install the `pointfree` package, which lets you solve these problems on the command line. For example, ``` $ pointfree '\x y z -> x + y + z' ((+) .) . (+) ``` So `((+) .) . (+)` is the point free version (x, y, and z are the "points", in case you were wondering, and no, this has nothing to do with geometry). You can use that definition if you'd like, but most people will look at your code and will have no idea what that funny looking piece of ASCII art is supposed to do. Half of them will work it out with pencil and paper, but isn't the original `\x y z -> x + y + z` so much easier on the eyes? Hint: If you ever need to figure out what some point free code does, look at the type: ``` Prelude> :t ((+) .) . (+) ((+) .) . (+) :: (Num a) => a -> a -> a -> a ``` Or you can install the `pointful` package, which is approximately the inverse of `pointfree`. **Summary:** Welcome to the world of points free programming, proceed with caution lest your code be unreadable.