Q: Java, troubles with making array I started learning Java and have small problem. I have classes Point and abc: static class Point { int x; int y; Point(int x, int y) { this.x = x; this.y = y; } } static class abc { abc() { Scanner s = new Scanner(System.in); Point[] p = new Point[2]; for (int i = 0; i < 2; ++i) { p[i].x = s.nextInt(); p[i].y = s.nextInt(); } } } But initialization in class abc doesn't work. When I try to write first number it gives: Exception in thread "main" java.lang.NullPointerException at main$abc.<init>(main.java:91) at main.main(main.java:99) What should I do to make it work? A: You only created the array of the Points, not the actual Points within the array. Change p[i].x = s.nextInt(); p[i].y = s.nextInt(); to p[i] = new Point(s.nextInt(), s.nextInt()); or int x = s.nextInt(); int y = s.nextInt(); p[i] = new Point(x, y);
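A minimal sketch of the corrected constructor, reusing the Point class and the Scanner import from the question (input is assumed to be two integers per point):

    static class abc {
        abc() {
            Scanner s = new Scanner(System.in);
            Point[] p = new Point[2];      // allocates the array; every slot is still null
            for (int i = 0; i < 2; ++i) {
                int x = s.nextInt();
                int y = s.nextInt();
                p[i] = new Point(x, y);    // create the Point before touching its fields
            }
        }
    }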
{ "pile_set_name": "StackExchange" }
Q: I can't install composer globally in my web host, how can I use the "composer" command? I hosted my website on french host Gandi, with their simple hosting plan. I cannot move anything in the /usr/local/bin directory since it is read-only, so I used to manage with composer.phar, which works well. I recently used a library which requires the composer executable to be present (This library executes something like "composer require xxx" and there is no fallback to composer.phar). Is there a way to make it work ? What I have done so far : Tried to install composer globally (Failed because of the read-only filesystem) Tried to install composer globally for the current user (Failed because there was no ~/.local/bin directory, and also failed after creating the directory and restarting the instance) Tried to move the file to any directory of the $PATH variable (Failed because all of these directories are read-only) Tried to rename composer.phar to composer, and allowed it to be executable chmod +x composer (Failed because it only works with the command ./composer and not with composer) A: Ok, I finally succeeded with the following steps: renamed composer.phar to composer mv composer.phar composer made this new file executable chmod +x composer added the current path (corresponding to my website's root) to the system's $PATH variable export PATH=$PATH:$PWD I'm not fully satisfied with this since this relies on a specific website's root folder, but hey, it works! I'll try to update this answer if I find a way to create a folder available for every website I'll host there.
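Condensed into a shell sketch (the directory is whatever your site root happens to be, and the PATH change only lasts for the current session unless it is added to a shell profile):

    mv composer.phar composer     # give the phar the name the library expects
    chmod +x composer             # make it executable
    export PATH="$PATH:$PWD"      # let 'composer' resolve without the ./ prefix
    composer --version            # sanity check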
{ "pile_set_name": "StackExchange" }
Q: How to get coordinates from WordPress custom fields to embed Google Maps? I'm trying to follow what was said here but with a few edits. I can't seem to figure out what I'm doing wrong with this. Nothing is being displayed. The custom fields are filled obviously too. The javascript: function initialize() { lat = 0; long = 0; if (typeof my-coordinates !== 'undefined' && my-coordinates.lat && my-coordinates.long) { lat = my-coordinates.lat; long = my-coordinates.long; } var mapProp = { center: new google.maps.LatLng(lat, long), zoom: 5, mapTypeId: google.maps.MapTypeId.ROADMAP }; var map = new google.maps.Map(document.getElementById("googleMap"), mapProp); } The HTML: <script src="https://maps.googleapis.com/maps/api/js?sensor=false" type="text/javascript"></script> <script type="text/javascript" src="../map.js"></script> </div> <div class="googlemap"> <body onload="initialize()"> <div id="map" style="width: 350px; height: 310px;"></div> </div> The function: function register_plugin_scripts() { global $post; $woo_maps_lat = get_post_meta($post->ID, 'woo_maps_lat', true); $woo_maps_long = get_post_meta($post->ID, 'woo_maps_long', true); if( !empty($woo_maps_lat) && !empty($woo_maps_long) ) { wp_localize_script('my-coordinates-script', 'my-coordinates', array( 'lat' => $woo_maps_lat, 'long' => $woo_maps_long )); } } // end register_plugin_scripts add_action( 'wp_enqueue_scripts', 'register_plugin_scripts' ); EDIT: I haven't been able to figure this out still. Here's what I got so far. function register_plugin_scripts() { global $post; $woo_maps_lat = get_post_meta($post->ID, 'woo_maps_lat', true); $woo_maps_long = get_post_meta($post->ID, 'woo_maps_long', true); if( !empty($woo_maps_lat) && !empty($woo_maps_long) ) { wp_enqueue_script('my_coordinates_script', get_template_directory_uri() . 'http://michaeltieso.com/map/wp-content/plugins/medellin-living-map/map.js'); wp_localize_script('my_coordinates_script', 'my_coordinates', array( 'lat' => $woo_maps_lat, 'long' => $woo_maps_long )); } } add_action( 'wp_enqueue_scripts', 'register_plugin_scripts' ); and for the JS function initialize() { lat = 0; long = 0; if (typeof my_coordinates !== 'undefined' && my_coordinates.lat && my_coordinates.long) { lat = my_coordinates.lat; long = my_coordinates.long; } var mapProp = { center: new google.maps.LatLng(lat, long), zoom: 5, mapTypeId: google.maps.MapTypeId.ROADMAP }; var map = new google.maps.Map(document.getElementById("googleMap"), mapProp); } You can view the example here: http://michaeltieso.com/map/hello-world/ A: If you look in the browser console (usually f12) then you'll see some problems: Failed to load resource: the server responded with a status of 404 (Not Found) http://michaeltieso.com/map/wp-content/themes/genesishttp:/michaeltieso.com/map/wp-content/plugins/medellin-living-map/map.js?ver=3.5.1 Uncaught TypeError: Cannot read property 'offsetWidth' of null I'm not sure what's up with main.js since it's minified, but http://michaeltieso.com/map/wp-content/themes/genesishttp:/michaeltieso.com/map/wp-content/plugins/medellin-living-map/map.js?ver=3.5.1 clearly has a problem: it looks like you have a typo in your head element. 
<script type="text/javascript" src="http://michaeltieso.com/map/wp-content/themes/genesishttp://michaeltieso.com/map/wp-content/plugins/medellin-living-map/map.js?ver=3.5.1"></script> should probably be <script type="text/javascript" src="http://michaeltieso.com/map/wp-content/plugins/medellin-living-map/map.js?ver=3.5.1"></script> so the browser can't find the javascript to execute. Even then, the other error in main.js may be enough to halt JS execution, but definitely fix the map script first. EDIT: Ok, I think I might know what's still wrong. You're initializing the map with this call: var map = new google.maps.Map(document.getElementById("googleMap"), mapProp); but getElementById will return nothing, since you don't have an element with an ID of googleMap in the document. You're probably intending to target this code: <div class="googlemap"> <div id="map" style="width: 350px; height: 310px;"></div> </div> which would require this call: var map = new google.maps.Map(document.getElementById("map"), mapProp); So try that out, but a word of warning - using an id like "map" is asking for trouble, since it's so generic. It would be extremely easy to forget you used it once and use it again in a page, and ids should be unique. You might want to use something just a little more specific.
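A sketch of an enqueue call that sidesteps the malformed theme-directory-plus-absolute-URL concatenation, assuming map.js sits in the plugin's own folder (the handle and meta keys are copied from the question; the file location is an assumption):

    function register_plugin_scripts() {
        global $post;
        $woo_maps_lat  = get_post_meta( $post->ID, 'woo_maps_lat', true );
        $woo_maps_long = get_post_meta( $post->ID, 'woo_maps_long', true );
        if ( ! empty( $woo_maps_lat ) && ! empty( $woo_maps_long ) ) {
            // plugins_url() builds the URL relative to this plugin file
            wp_enqueue_script( 'my_coordinates_script', plugins_url( 'map.js', __FILE__ ) );
            wp_localize_script( 'my_coordinates_script', 'my_coordinates', array(
                'lat'  => $woo_maps_lat,
                'long' => $woo_maps_long,
            ) );
        }
    }
    add_action( 'wp_enqueue_scripts', 'register_plugin_scripts' );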
{ "pile_set_name": "StackExchange" }
Q: mp4 video won't play on Nexus 4 I'm developing an app that plays mp4 video from a webserver, but none of the videos play on Nexus 4 (JB 4.3). This includes opening the videos in chrome, or downloading them to the phone and playing them. They play fine on HTC One and Galaxy S4. A video: https://www.sundhed.dk/content/cms/30/20530_021-asthma-dan-h264-mov-640x360-16x9-mp4.mp4 Here's some mediainfo. Can anyone tell me what's going on? Thanks! A: Your device doesn't support your video format/codec.
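If the culprit really is an unsupported H.264 profile or audio codec, one common workaround (an assumption, not something verified against this particular file) is to re-encode with conservative settings that older Android hardware decoders handle, for example with ffmpeg:

    ffmpeg -i input.mp4 -c:v libx264 -profile:v baseline -level 3.0 -c:a aac -movflags +faststart output.mp4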
{ "pile_set_name": "StackExchange" }
Q: Form field names used by personal data auto-fill in browsers (Safari, Opera) I'm looking for complete list of form field names (<input name="…">) that are recognized by auto-fill functions in major browsers. Here are some I've found to work in Safari using trial-and-error: email Ecom_ReceiptTo_Postal_Name_First Ecom_ReceiptTo_Postal_Name_Last first-name firstname last-name lastname full-name birthday company jobtitle phone street city country state (used for county outside US) postalcode zip However I couldn't find separate field for title/honorific prefix (it's included in full name only). Opera's Wand recognizes more or less the same names with exception of name, which requires Ecom_ReceiptTo_Postal_Name_First and Ecom_ReceiptTo_Postal_Name_Last. I couldn't find field for mobile phone number. Haven't found way to get separate home/work fields. There's proposal to extend autocomplete attribute to allow developers specify these explicitly. A: According to http://www.macosxhints.com/article.php?story=20070527063904285 the file Contents/Resources/English.lproj/ABAutoCompleteMappings.plist within the Safari.app package leads this list: first first name fname firstname given name middle initial middleinitial middle name middlename middle last last name lname lastname surname name birthday date of birth born job title jobtitle email e-mail street street address streetaddress address1 address 1 address city state zip zipcode zip code postalcode postal code country homephone home phone eveningphone evening phone home area code home areacode homeareacode evening area code evening areacode eveningareacode workphone work phone dayphone day phone daytime phone companyphone company phone businessphone business phone work area code work areacode workareacode day area code day areacode dayareacode company area code company areacode companyareacode business area code business areacode businessareacode mobilephone mobile phone cellphone cell phone mobile area code mobile areacode mobileareacode cell area code cell areacode cellareacode pagerphone pager phone pager area code pager areacode pagerareacode area code areacode phone fax organization company A: I wasn't aware of the names you used. But I knew Mozilla/Netscape and IE use vcard_name attributes to guide autofill as described here.
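As an illustration, a form using names from the Safari list above, which that browser's autofill would be expected to recognize (a sketch only; behavior varies between browser versions):

    <form>
      <input type="text" name="first name">
      <input type="text" name="last name">
      <input type="text" name="email">
      <input type="text" name="phone">
      <input type="text" name="street address">
      <input type="text" name="city">
      <input type="text" name="state">
      <input type="text" name="zip code">
    </form>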
{ "pile_set_name": "StackExchange" }
Q: Is there a way to call Silverlight method from Asp.net MVC4 controllder? I need to call Silverlight method from my MVC4 application controller. I load Silverlight using object tag, <object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="50%" height="100%" style="border:1px solid #cdcdcd"> <param name="source" value="ClientBin/HelloWorld.xap"/> <param name="onError" value="onSilverlightError" /> <param name="background" value="white" /> <param name="minRuntimeVersion" value="5.0.61118.0" /> <param name="autoUpgrade" value="true" /> <param name="onLoad" value="pluginLoaded" /> <a href="http://go.microsoft.com/fwlink/?LinkID=149156&v=5.0.61118.0" style="text-decoration:none"> <img src="http://go.microsoft.com/fwlink/?LinkId=161376" alt="Get Microsoft Silverlight" style="border-style:none"/> </a> </object><iframe id="_sl_historyFrame" style="visibility:hidden;height:0px;width:0px;border:0px"></iframe></div> and to call Silverlight method, I'm using JS method. BUT I want to call silverlight method from MVC4-controllder. Is it possible? if so, How should I do? please advice me. //Silverlight method [ScriptableMember] public void ShowAlertPopup(string message) { MessageBox.Show(message, "Message From JavaScript", MessageBoxButton.OK); } A: That is impossible. Silverlight runs on the client as well as JavaScript, while MVC runs on the server. These are different machines. If you need to reuse a method, you could use either of: Source code sharing (section 'Adding an Existing Item as a Link') Portable Library project RIA Services Shared Code features If the only thing you need is reusing a couple of methods, I suggest moving them to a separate file and then linking the file to 2 projects (a MVC project and a Silverlight project). One important note is that you can only reuse code that uses features present both in Silverlight and .NET. You can't use Silverlight-specific features (e.g. browser interaction) from an MVC application. If you need to send messages from your server to your client, you have to implement either of: Polling Long polling Server-Sent events WebSockets SignalR is a great library for simplifying the task.
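For the server-to-client direction, a minimal sketch of the SignalR option (names are illustrative; the Silverlight side would need the SignalR client library and a handler registered for the pushed call):

    using Microsoft.AspNet.SignalR;

    // Hub the clients connect to (no server-side methods are needed for one-way pushes)
    public class AlertHub : Hub { }

    // inside an MVC controller action, push a message to every connected client
    var hubContext = GlobalHost.ConnectionManager.GetHubContext<AlertHub>();
    hubContext.Clients.All.showAlert("Message from the MVC controller");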
{ "pile_set_name": "StackExchange" }
Q: Which join is suitable for my Ajax data table in CI I have two tables one is 'sales' and another is 'deliveries'. I have a report which shows all 'sales'. I want to do a join where i can get all sales which are not delivered comparing to the deliveries table, only where the ID is the only key point. Although I am new developer for CI, i want to know how to do it. Below is the code returning values in the grid. function getdatatableajax() { if($this->input->get('search_term')) { $search_term = $this->input->get('search_term'); } else { $search_term = false;} $this->load->library('datatables'); $this->datatables ->select("sales.id as sid, date, reference_no, biller_name, customer_name, total_tax, total_tax2, total, internal_note") ->select("sales.id = deliveries.id as sid date, reference_no, biller_name, customer_name, total_tax, total_tax2, total, internal_note") ->from('sales'); $this->datatables->add_column("Actions", "<center><a href='#' title='$2' class='tip' data-html='true'><i class='icon-folder-close'></i></a> <a href='#' onClick=\"MyWindow=window.open('index.php?module=sales&view=view_invoice&id=$1', 'MyWindow','toolbar=0,location=0,directories=0,status=0,menubar=yes,scrollbars=yes,resizable=yes,width=1000,height=600'); return false;\" title='".$this->lang->line("view_invoice")."' class='tip'><i class='icon-fullscreen'></i></a> <a href='index.php?module=sales&view=add_delivery&id=$1' title='".$this->lang->line("add_delivery_order")."' class='tip'><i class='icon-road'></i></a> <a href='index.php?module=sales&view=pdf&id=$1' title='".$this->lang->line("download_pdf")."' class='tip'><i class='icon-file'></i></a> <a href='index.php?module=sales&view=email_invoice&id=$1' title='".$this->lang->line("email_invoice")."' class='tip'><i class='icon-envelope'></i></a> </center>", "sid, internal_note") ->unset_column('sid') ->unset_column('internal_note'); echo $this->datatables->generate(); } The Deliveries tables has below fileds id date time reference_no customer address note user updated_by I just want the alerted code where it will works ny just doing joins, if applicable. A: Try this code, its should work. 
function getdatatableajax() { if($this->input->get('search_term')) { $search_term = $this->input->get('search_term'); } else { $search_term = false;} $this->load->library('datatables'); $this->datatables ->select("sales.id as sid, sales.date as date, sales.reference_no as reference_no, sales.biller_name as biller_name, sales.customer_name as customer_name, sales.total_tax as total_tax, sales.total_tax2 as total_tax2, sales.total as total, internal_note as sintnote") ->from('sales') ->join(deliveries, 'sales.reference_no = deliveries.reference_no', 'left'); $this->datatables->add_column("Actions", "<center><a href='#' title='$2' class='tip' data-html='true'><i class='icon-folder-close'></i></a> <a href='#' onClick=\"MyWindow=window.open('index.php?module=sales&view=view_invoice&id=$1', 'MyWindow','toolbar=0,location=0,directories=0,status=0,menubar=yes,scrollbars=yes,resizable=yes,width=1000,height=600'); return false;\" title='".$this->lang->line("view_invoice")."' class='tip'><i class='icon-fullscreen'></i></a> <a href='index.php?module=sales&view=add_delivery&id=$1' title='".$this->lang->line("add_delivery_order")."' class='tip'><i class='icon-road'></i></a> <a href='index.php?module=sales&view=pdf&id=$1' title='".$this->lang->line("download_pdf")."' class='tip'><i class='icon-file'></i></a> <a href='index.php?module=sales&view=email_invoice&id=$1' title='".$this->lang->line("email_invoice")."' class='tip'><i class='icon-envelope'></i></a> </center>", "sid, internal_note") ->unset_column('sid') ->unset_column('internal_note'); echo $this->datatables->generate(); }
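To answer the "not delivered" part directly: a LEFT JOIN plus an IS NULL filter on the joined key is the usual pattern, joining on the ID as the question specifies, and assuming the Ignited Datatables library forwards where() through to Active Record as its documentation describes (column names are taken from the two tables above):

    $this->datatables
        ->select("sales.id as sid, sales.date, sales.reference_no, sales.biller_name, sales.customer_name, sales.total_tax, sales.total_tax2, sales.total, sales.internal_note")
        ->from('sales')
        ->join('deliveries', 'sales.id = deliveries.id', 'left')
        ->where('deliveries.id IS NULL');   // keep only sales rows with no matching delivery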
{ "pile_set_name": "StackExchange" }
Q: Hide older content in a div HTML code is as follows - <div id="myDiv" style="max-height:500px; overflow-y:auto;margin-top: 15px" > <div id="myInnerDiv" style="display:none;border:1px solid black;"> <span class='myLine' >data1</span> <span class='myLine' >data2</span> <span class='myLine' >data3</span> <span class='myLine' >data4</span> <span class='myLine' >data5</span> </div> </div> The visibility of myInnerDiv is controlled depending on the content. Now lines with class myLine are dynamically added. As you can see, overflow-y:auto is provided, so if the content exceeds max-height:500px we will see a scroll bar. What I want is to show the latest 5 lines only. So if we add <span class='myLine' >data6</span> then <span class='myLine' >data1</span> should be removed or hidden. Any suggestions? A: if($('.myLine').length > 5 ) { $('.myLine').first().remove(); } something like that. Or you could add a new class to the .first() and have it hidden.
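A slightly fuller sketch that trims the list down to the latest five after each append (selectors follow the markup above):

    $('#myInnerDiv').append("<span class='myLine'>data6</span>");
    // remove from the top until at most five lines remain
    while ($('#myInnerDiv .myLine').length > 5) {
        $('#myInnerDiv .myLine:first').remove();
    }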
{ "pile_set_name": "StackExchange" }
Q: Looping a single sentence by a user - entered integer So I was given a problem by my Computer Science teacher, but I have been sat here for ages and cannot for the life of me figure it out. There are plenty of tasks within the problem, but there is only one I'm having trouble with. So, I am coding a program for a carpet shop, and the part that I am stuck on is this. 2: For each floor you need to carpet, ask the user for the name of the room that the floor is located in. Basically X number of rooms; need to loop a sentence asking for the name of the room multiplied by X. So if the user has 3 rooms they need carpeted, I would end up with the name of 3 rooms (Lounge, Bedroom etc.), and be able to display these results back to the user later. Here is what I have so far... #Generating An Estimate. #Welcome message + setting base price to 0 + defining "getting_estimate" variable. def getting_estimate (): overallPrice = 0 print("Welcome!") #Getting the information about the customer. custID = input ("\n1. Please enter the customer's ID: ") estimateDate = input ("\n2. Please enter the date: ") numberOfRooms = input (int("\n3. Please enter the number of rooms that need to be carpeted: ")) #Add input allowing user to enter the #name of the room, looped by the integer entered #in the Variable "numberOfRooms", and possibly #adding 1st room, 2nd room etc?? If someone can work this out they are helping me loads. Thanks :) A: Perhaps using a for loop perhaps? for i in range(numberOfrooms): roomName = input("Blah blah") Full code: def getting_estimate (): overallPrice = 0 print("Welcome!") custID = input ("\n1. Please enter the customer's ID: ") estimateDate = input ("\n2. Please enter the date: ") numberOfRooms = int (input("\n3. Please enter the number of rooms that need to be carpeted: ")) cust_roster = {custID:[]} for i in range(numberOfRooms): roomName = input("Enter the name of room number "+str(i+1)) cust_roster[custID].append(roomName) print("{} has {} rooms called: {}".format(custID, len(cust_roster[custID]), cust_roster[custID])) Btw I'm doing a similar task for my OCR computing coursework ;) so good luck!
{ "pile_set_name": "StackExchange" }
Q: All straight curves are geodesics imply Euclidean metric Let $g_{ij}$ be a Riemannian Metric in $\Bbb R^n$ with the following properties: $g_{ij}(0)=\delta_{ij}$ , where $ \delta_{ij} = \begin{cases} 1, & \text{ $i=j$} \\ 0, & \text{ $i \neq j$} \end{cases}$ For every $q\in \Bbb R^n$ and $1\leq i \leq n,$ the curve $\gamma_i(t)= q + te_i$ is a geodesic of $g$, where $e_i = (\delta_{i1}, ... , \delta_{in}).$ Prove that $g$ is the Euclidean metric. What I tried: since the above curves are geodesics, I know that $|\gamma_i'(t)|_g^2=\langle e_i,e_i \rangle _g=g_{ii}\;$ is constant, and thus $g_{ii} = 1$ for every $1\leq i \leq n$. I'm having trouble showing that $g_{ij}=0$ for $j \neq i$. Any ideas? A: As stated (where you only require lines parallel to the standard axes to be geodesic) the result is false. Consider the metric on $\mathbb{R}^2$ given in standard coordinates in matrix form $$ \begin{pmatrix} (1 + y^2) & xy \\ xy & (1+x^2) \end{pmatrix} $$ The Christoffel symbols can be computed pretty easily to be seen that $$ \Gamma_{11}^1 = \Gamma_{22}^{1} = \Gamma_{11}^2 = \Gamma_{22}^2 = 0 $$ and hence all lines parallel to the axes are geodesics. But the metric is obviously not Euclidean. In the case where the assumption is that all straight lines are geodesic, one way to prove the result goes something like this: Again let $x^1, \ldots, x^n$ denote the standard coordinates. Consider all vectors of the form $v = \alpha e_i + \beta e_j$ and the line $\gamma(t) = q + tv$. The geodesic equation in local coordinates implies that for every $k\in \{1, \ldots, n\}$ and $i,j$ fixed (by $v$) $$ \alpha^2 \Gamma_{ii}^k + 2\alpha\beta \Gamma_{ij}^k + \beta^2 \Gamma_{jj}^k = 0 $$ This equation holds along the curve $\gamma$. However, since we can launch the geodesic $\gamma$ from any point $q$ at any direction $v$, the above equation must hold pointwise everywhere. Focus first on the case $\alpha = 1, \beta = 0$. Then this implies that for every $i,k\in \{1, \ldots, n\}$, we have $\Gamma_{ii}^k = 0$. Focus next on the case $\alpha = \beta = 1$. Using the result from point 3, this means that $\Gamma_{ij}^k = 0$ for any $i,j,k\in \{1, \ldots, n\}$. It remains to show (by simple linear algebra) that $$ \Gamma^k_{ij} = \frac12 g^{ks} \left( \partial_i g_{sj} + \partial_j g_{si} - \partial_s g_{ij} \right) = 0 $$ for all $i,j,k$ implies that $$ \partial_k g_{ij} = 0 $$ for all $i,j,k$. To do so, first notice that the assumed condition, using the non-degeneracy of the metric, implies that $$ \partial_i g_{kj} + \partial_j g_{ki} - \partial_k g_{ij} = 0 $$ for all $i,j,k$. Symmetrizing in $k$ and $j$ the second and third terms cancel, but since the metric coefficients are symmetric, the first term remains. A non-computational, more conceptual way of making the same argument is this: If all straight lines are geodesics, then every 2-dimensional plane is totally-geodesic. And hence the corresponding sectional curvature agrees with the induced Gaussian curvature of the plane. It suffices then to show that every 2-dimensional plane is flat. This then implies by point 1 that the induced metric has constant (0) curvature, and hence is locally Euclidean. To show that every 2-dimensional plane is flat, one notes that if straight lines are geodesics, then the "usual triangles" are the geodesic triangles and by the connection between angle defects with curvature this means that the planes are flat. 
Notice that in the direct computation case we are only interested in vectors of the form $v = \alpha e_i + \beta e_j$ in the span of two of the standard basis vectors. In other words, the argument there is also based on proving that sufficiently many of the 2-dimensional planes are flat. To do so we don't really need all straight lines being geodesic, but we only need all straight lines with velocity of the form $v = \alpha e_i + \beta e_j$ are geodesic.
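For the counterexample metric above, the claimed vanishing is quick to check directly. Writing $\partial_1=\partial_x$, $\partial_2=\partial_y$ and working with the lowered symbols, $$\Gamma_{11,s}=\tfrac12\left(2\,\partial_1 g_{s1}-\partial_s g_{11}\right):\qquad s=1:\ \tfrac12\,\partial_x(1+y^2)=0,\qquad s=2:\ \partial_x(xy)-\tfrac12\,\partial_y(1+y^2)=y-y=0,$$ $$\Gamma_{22,s}=\tfrac12\left(2\,\partial_2 g_{s2}-\partial_s g_{22}\right):\qquad s=1:\ \partial_y(xy)-\tfrac12\,\partial_x(1+x^2)=x-x=0,\qquad s=2:\ \tfrac12\,\partial_y(1+x^2)=0,$$ so $\Gamma^k_{11}=\Gamma^k_{22}=0$ for $k=1,2$, which is exactly the condition that makes every line parallel to a coordinate axis a geodesic.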
{ "pile_set_name": "StackExchange" }
Q: Macros \dv and \pdv eat the subsequent parenthesis argument The physics package has the macros \dv and \pdv which are great but I have a small problem with them. If an argument with parenthesis included right after them they eat the whole argument. If there is a space in between the argument everything works fine but I want to prevent this happening all together. I checked the documentation but couldn't find a solution. So an example would be \documentclass{article} \usepackage{physics} \begin{document} \[\dv{x}{t}(y^2-5) \qquad \dv{x}{t} (y^2-5) \qquad \dv{x}{t} \] \end{document} I want the output of the equation on the left to be the same as the middle one. A: The package physics abuses the possibilities afforded by xparse to define commands having very weird syntax. For instance, if you want to typeset the standard \frac{dx}{dt}, you have to type \dv{x}{t} If you want instead \frac{d}{dt}(f(t)) you type \dv{t}(f(t)) which is where the syntax is weird: the variable you differentiate with respect to is no longer the second mandatory argument, but the first and a mandatory argument is missing altogether. Of course this is achieved by actually making the second braced argument optional, which breaks all standard LaTeX conventions. It is very counterintuitive having the independent variable first when the “long form” is desired and second for the “short (Leibniz) form”. How can you do? My best advice is to keep at arm's length from physics. It seems to provide many bells and whistles for typesetting math, but this is at the expense of syntax clarity. If you are tied to the package, simply add a no-op; a simple one in this context is \/: \documentclass{article} \usepackage{physics} \begin{document} \[ \dv{x}{t}\/(y^2-5) \qquad \dv{x}{t}\/ (y^2-5) \qquad \dv{x}{t} \] \end{document} A: That's because \dv (which is a shorthand for \derivative) is defined as \DeclareDocumentCommand\derivative{ s o m g d() } { % Total derivative % s: star for \flatfrac flat derivative % o: optional n for nth derivative % m: mandatory (x in df/dx) % g: optional (f in df/dx) % d: long-form d/dx(...) Even if the optional g-type argument is given (as in your case) the command will scan further for an optional delimited d-type argument which is delimited by ( and ) (maybe not the best choice in a mathematical context). To circumvent this you have to redefine \derivative to always flush #5 if it is present. \documentclass{article} \usepackage{physics} \DeclareDocumentCommand\derivative{ s o m g d() } { % Total derivative % s: star for \flatfrac flat derivative % o: optional n for nth derivative % m: mandatory (x in df/dx) % g: optional (f in df/dx) % d: long-form d/dx(...) \IfBooleanTF{#1} {\let\fractype\flatfrac} {\let\fractype\frac} \IfNoValueTF{#4} { \IfNoValueTF{#5} {\fractype{\diffd \IfNoValueTF{#2}{}{^{#2}}}{\diffd #3\IfNoValueTF{#2}{}{^{#2}}}} {\fractype{\diffd \IfNoValueTF{#2}{}{^{#2}}}{\diffd #3\IfNoValueTF{#2}{}{^{#2}}} \argopen(#5\argclose)} } {\fractype{\diffd \IfNoValueTF{#2}{}{^{#2}} #3}{\diffd #4\IfNoValueTF{#2}{}{^{#2}}}\IfValueT{#5}{(#5)}} } \begin{document} \[\dv{x}{t}(y^2-5) \qquad \dv{x}{t} (y^2-5) \qquad \dv{x}{t} \] \end{document} At the same time I'd like to note that the physics package does not really help me writing physics formulae and I'm usually much better off typing the stuff by hand using the amsmath macros.
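If one does take that last suggestion and drops physics, a two-line amsmath substitute avoids the delimited-argument scanning entirely (a sketch; \dv is defined here only to mirror the examples above, and of course must not be combined with a loaded physics package):

    \documentclass{article}
    \usepackage{amsmath}
    \newcommand{\dv}[2]{\frac{\mathrm{d}#1}{\mathrm{d}#2}}
    \begin{document}
    \[ \dv{x}{t}(y^2-5) \qquad \dv{x}{t} \]
    \end{document}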
{ "pile_set_name": "StackExchange" }
Q: Getting browsermodule already loaded despite not importing it in lazy loaded modules I am getting the following error despite having taken steps to remove. Can someone tell me what am I missing here ? Error: ERROR Error: Uncaught (in promise): Error: BrowserModule has already been loaded. If you need access to common directives such as NgIf and NgFor from a lazy loaded module, import CommonModule instead. The following are my modules: App Module: import { BrowserModule } from '@angular/platform-browser'; import { NgModule, CUSTOM_ELEMENTS_SCHEMA, NO_ERRORS_SCHEMA } from '@angular/core'; import { HttpClientModule } from '@angular/common/http'; import { enableProdLogger } from '@cst/cst-components/cst-service'; // Enums import { environment } from '../environments/environment'; // App components import { AppComponent } from './app.component'; // Header navbar import { HeaderNavbarComponent } from './header-navbar/header-navbar.component'; // Home components import { HomeComponent } from './home/home.component'; import { DashboardComponent } from './dashboard/dashboard.component'; // Pipes import { PipesModule } from './core/pipes'; // App routings import { AppRoutingModule } from './app-routing.module'; //rwa modules import { CoreModule } from './core/core.module'; import { SharedModule } from './shared/shared.module'; import { TimeoutModule } from './timeout'; import { CustomerHoldingsModule } from './customer-holding/customer-holding.module'; import { ProductAdminModule } from './product-admin/product-admin.module'; import { StaffModule } from './staff/staff.module'; import { CstModule } from '@cst/cst-components'; import { OAuthModule } from 'angular-oauth2-oidc'; if (environment.production === true) { enableProdLogger(); } @NgModule({ declarations: [ AppComponent, HeaderNavbarComponent, // Home components HomeComponent, DashboardComponent, ], imports: [ CstModule.forRoot(), OAuthModule.forRoot(), BrowserModule, HttpClientModule, PipesModule, TimeoutModule, //routing module AppRoutingModule, //rwa modules CoreModule, SharedModule, CustomerHoldingsModule, ProductAdminModule, StaffModule, CstModule ], providers: [ ], bootstrap: [AppComponent], schemas: [] }) export class AppModule { } Shared Module import { NgModule } from "@angular/core"; import { FormsModule, ReactiveFormsModule } from "@angular/forms"; import { BrowserAnimationsModule } from "@angular/platform-browser/animations"; import { CommonModule } from "@angular/common"; import { UtilityService } from "./utility-service.service"; import { CstModule } from "@cst/cst-components"; import { RouterModule } from "@angular/router"; @NgModule({ declarations: [ ], imports: [ CommonModule, FormsModule, ReactiveFormsModule, CstModule, RouterModule ], providers: [ UtilityService ], exports: [ FormsModule, ReactiveFormsModule, CstModule, RouterModule ] }) export class SharedModule { } Core Module import { NgModule, APP_INITIALIZER, CUSTOM_ELEMENTS_SCHEMA, NO_ERRORS_SCHEMA } from "@angular/core"; import { PageNotFoundComponent } from "../statics/page-not-found.component"; import { HelpMePageComponent } from "../statics/help-me.component"; import { CstShowDirective } from "./directives/cst-show.directive"; import { CstModule } from "@cst/cst-components"; import { OAuthModule } from "angular-oauth2-oidc"; import { Options, Logger, AutoSaveFactory, localStorageProvider } from "@cst/cst-components/cst-service"; import { ZoneService } from "./providers/zone.service"; import { CstWindow } from "./providers/window.service"; import { 
LOGIN_GUARD_PROVIDER } from "./providers/logged-in-guard"; import { DEFAULT_INTERCEPTOR } from "./providers/custom.http.provider"; import { VCardService } from "./providers/vcard.service"; import { TimeoutService } from "../timeout/timeout.service"; import { AppConfigService, startupServiceFactory } from "./providers/app-config.service"; import { BasicLoginComponent } from "./login-basic/basic-login.component"; import { BasicLoginService } from "./login-basic/login.service"; import { CommonModule } from "@angular/common"; @NgModule({ declarations: [ // Login components BasicLoginComponent, // Custom directives CstShowDirective, // Static pages components PageNotFoundComponent, HelpMePageComponent, ], imports: [ CommonModule, CstModule.forRoot(), OAuthModule.forRoot() ], providers: [ { provide: Options, useValue: { store: false } }, Logger, ZoneService, CstWindow, LOGIN_GUARD_PROVIDER, DEFAULT_INTERCEPTOR, AutoSaveFactory, localStorageProvider(), // Custom services VCardService, BasicLoginService, TimeoutService, AppConfigService, { provide: APP_INITIALIZER, useFactory: startupServiceFactory, deps: [AppConfigService], multi: true } ], exports: [ CstModule, OAuthModule ], schemas: [] }) export class CoreModule { } Customer Holdings Module import { NgModule } from "@angular/core"; import { CustomerHoldingEnquiryComponent } from "./customer-holding-enquiry/customer-holding-enquiry.component"; import { SharedModule } from "../shared/shared.module"; import { CustomerHoldingErrorReportComponent } from "./customer-holding-error-report/customer-holding-error-report.component"; import { CustHoldingUnvalidatedCinComponent } from "./customer-holding-error-report/unvalidated-cin/ch-unvalidated-cin.component"; import { CustHoldingCinChangeUpdateComponent } from "./customer-holding-error-report/cin-change-update/ch-cin-change-update.component"; import { CustHoldingCisProductUpdateExceptionComponent } from "./customer-holding-error-report/cis-product-update-exception/ch-cis-prdt-update-excptn.component"; import { CustHoldingReconExceptionComponent } from "./customer-holding-error-report/cust-holding-recon-exception/ch-recon-exception.component"; import { CustHoldingProductGroupingExceptionComponent } from "./customer-holding-error-report/cust-holding-product-grouping-exception/ch-prdt-grouping-excptn.component"; import { CustomerHoldingComponent } from "./customer-holding.component"; import { CustomerHoldingService } from "./customer-holding-services/customer-holding-service.service"; import { CustomerHoldingRoutingModule } from "./customer-holding-routing.module"; import { CommonModule } from "@angular/common"; @NgModule({ declarations:[ //Enquiry component CustomerHoldingEnquiryComponent, //Error Report Components CustomerHoldingErrorReportComponent, CustHoldingUnvalidatedCinComponent, CustHoldingCinChangeUpdateComponent, CustHoldingCisProductUpdateExceptionComponent, CustHoldingReconExceptionComponent, CustHoldingProductGroupingExceptionComponent, //Base Component CustomerHoldingComponent ], imports:[ CommonModule, SharedModule, CustomerHoldingRoutingModule ], exports:[], providers:[CustomerHoldingService], schemas: [] }) export class CustomerHoldingsModule{} Product Admin Module import { NgModule } from "@angular/core"; import { ProductAdminComponent } from "./product-admin.component"; import { ApprovedProductComponent } from "./approved/approved-product.component"; import { PendingActionComponent } from "./pending-action/pending-action.component"; import { ProductAdminService } from 
"./product-admin-services/product-admin-service.service"; import { SharedModule } from "../shared/shared.module"; import { CanceledProductComponent } from "./canceled/canceled-product.component"; import { PendingApprovalCancellationComponent } from "./pending-approval-cancellation/pending-approval-cancellation.component"; import { ProductAdminRoutingModule } from "./product-admin-routing.module"; import { CommonModule } from "@angular/common"; @NgModule({ declarations:[ ProductAdminComponent, ApprovedProductComponent, CanceledProductComponent, PendingActionComponent, PendingApprovalCancellationComponent ], imports:[ CommonModule, SharedModule, ProductAdminRoutingModule ], exports:[], providers:[ ProductAdminService ], schemas: [] }) export class ProductAdminModule{} Staff Module import { NgModule } from "@angular/core"; import { RequestManagementComponent } from "./request-management/request-management.component"; import { StaffAuditTrailComponent } from "./staff-audit-trail/staff-audit-trail.component"; import { StaffModuleComponent } from "./staff.component"; import { SharedModule } from "../shared/shared.module"; import { StaffModuleService } from "./staffModuleServices/staff-module.service"; import { StaffRoutingModule } from "./staff-module-routing.module"; import { CommonModule } from "@angular/common"; @NgModule({ declarations:[ RequestManagementComponent, StaffAuditTrailComponent, StaffModuleComponent ], imports:[ CommonModule, SharedModule, StaffRoutingModule ], exports:[], providers:[ StaffModuleService ], schemas: [] }) export class StaffModule{} I have imported only CommonModule in all lazy loaded modules but still getting the error when I try to access lazy loaded modules. Note: I am able to access base component that is AppComponent and the home route but not any of the lazy loaded routes. Please help!!!! A: Looks like BrowserAnimationModule is causing problem since it already contain BrowserModule and loading your module lazily. So move your BrowserAnimationModule to your app.module.ts or you can remove and test it.
{ "pile_set_name": "StackExchange" }
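Put differently, the split suggested in the answer above looks roughly like this (a sketch; module names are taken from the question's code, and whether CstModule.forRoot() itself drags in BrowserModule would also need checking): the Browser* modules live only in the root module, while SharedModule and the lazy-loaded feature modules stick to CommonModule.

    // app.module.ts -- the only module that touches Browser* imports
    @NgModule({
      imports: [
        BrowserModule,
        BrowserAnimationsModule,   // moved here out of SharedModule
        HttpClientModule,
        AppRoutingModule,
        CoreModule,
        SharedModule,
      ],
      bootstrap: [AppComponent],
    })
    export class AppModule { }

    // shared.module.ts -- no BrowserModule/BrowserAnimationsModule anywhere
    @NgModule({
      imports: [CommonModule, FormsModule, ReactiveFormsModule],
      exports: [CommonModule, FormsModule, ReactiveFormsModule],
    })
    export class SharedModule { }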
Q: Android: change variables (counters) of a closed application using a service stackoverflow community. I have a problem that I've been trying to solve since a few days ago. I have a service running in an Android application that receives and sends messages. For those messages, I have counters that save the ammount of sent and received sms and errors. For example, when a message is sent, the counter adds +1 and shows the total on screen (in a not focusable editview). Another thing I do is bind my service, so I can switch it on or off according to my needs. Everything works fine when the application is running, but if it is finished my variables are lost, but I need to keep counting the messages somehow and show the values when the user opens the application again. How can I do this? I've been using shared preferences variables to save these numbers and others configurations, but when the app is killed, those numbers can't keep increasing. When the application is closed, can I save the counters values somewhere? Can I access to the shared preferences from the service with the application closed? What if I use a content provider? Can I access to it from the running service even if the app is closed? Regards. A: There is no problem accessing shared preferences from a service. So, just save your variables on your services onDestroy() method and reload them onCreate(); This should work to get the preferences: Context ctx = getApplicationContext(); SharedPreferences prefs = PreferenceManager.getDefaultSharedPreferences(ctx); Here's a great answer: How do I get the SharedPreferences from a PreferenceActivity in Android?
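A sketch of the save-on-destroy / restore-on-create idea inside the service, assuming a single int counter field named sentCount (the preference key is illustrative):

    @Override
    public void onCreate() {
        super.onCreate();
        SharedPreferences prefs = PreferenceManager.getDefaultSharedPreferences(getApplicationContext());
        sentCount = prefs.getInt("sentCount", 0);      // restore the last saved value
    }

    @Override
    public void onDestroy() {
        PreferenceManager.getDefaultSharedPreferences(getApplicationContext())
                .edit()
                .putInt("sentCount", sentCount)        // persist before the service goes away
                .commit();
        super.onDestroy();
    }

Since onDestroy() is not guaranteed to run if the process is killed outright, a more defensive variant also writes the value at the moment each counter changes.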
{ "pile_set_name": "StackExchange" }
Q: Another wifi dropping thread! I am using Ubuntu 13.10 on the new Intel NUC i3 with the Intel Centrino Advanced-N 6235 wireless adapter. It's a fresh install and I only added XBMC for Linux to it. My problem is that the wifi connection keeps dropping every 10mins or so, and when it does, my wifi network disappears from the avaiable wifi networks list and there is no way to connect back to it. The only way to make the network available again, is to completely delete the network and then add it again from scratch, entering the password and everything. The other way is to reboot. Other than that the network is just not detected at all by the system. Once I am reconnected, it will work for another 10mins before it drops again, and so on. This is my home wifi network, and I have several other devices connected to the wifi without any issues. I read some other posts where they said to do sudo apt-get update..and also disabling ipV6 and setting MTU to 1500, I tried that but it didn't fix the issue. I would really appreciate if I can get some trouble shooting assistance with this issue. I am quite new to Linux so I'd appreciate a detailed walk thru. Thanks a lot in advance and looking forward to get to the bottom of this. A: This is a known issue with this device, even in Windows. Please see: https://communities.intel.com/message/192239#192239 There are several complaints as well at ubuntuforums.org. Some things that may be helpful are to disable 802.11N capability. Please open a terminal and do: sudo -i echo "options iwlwifi 11n_disable=1" >> /etc/modprobe.d/iwlwifi.conf modprobe -r iwldvm modprobe -r iwlwifi modprobe iwlwifi exit Some users seem to be helped by explicitly setting the regulatory domain from here: http://en.wikipedia.org/wiki/ISO_3166-1_alpha-2 For instance, if your country code is ES, then: gksudo gedit /etc/rc.local Right above the line exit 0, add a new line: iw reg set ES Of course, substitute your code here. After a reboot, is there any improvement? Should you wish to re-enable N speeds, which may bring back the instability, please do: gksudo gedit /etc/modprobe.d/iwlwifi.conf Remove the line you added: options iwlwifi 11n_disable=1 Proofread, save and close gedit. Upon reboot, N capabilities will be available again.
{ "pile_set_name": "StackExchange" }
Q: how to dynamically update a workbook name in excel using vba? I’m trying to dynamically update a workbook name in a formula in excel to bring through data from a continually changing source file. So far I have been getting by with using an indirect formula, but now I have a huge workbook with around 216,000 cells to populate and I don’t think indirect is the most efficient way to do this. I want to use VBA instead but I have no experience with this. From doing some googling I have found a few things but I’m not sure how to implement my specific needs into the code. so far 've come up with this: Sub replace() Dim cell As Range cell.Formula = replace(cell.Formula, "OfficeSupplies.csv", "OfficeSupplies2.csv") Range("a1:d8").Value Next End Sub However, when I try to execute it, it doesn't work at all. A: Edited to insert the handling of a specified range instead of ActiveSheet used range and to handle a sheet different to "Active" one To answer the question, you could use a code like the following to replace in "Active" sheet used range: Sub replace() ActiveSheet.UsedRange.SpecialCells(xlCellTypeFormulas).Replace(What:="OfficeSupplies.csv", Replacement:="OfficeSupplies2.csv", LookAt:=xlPart) End Sub or you could explicitly refer to a sheet: Sub replaceInSpecifiedSheet() Worksheets("MySheetName").UsedRange.SpecialCells(xlCellTypeFormulas).Replace(What:="OfficeSupplies.csv", Replacement:="OfficeSupplies2.csv", LookAt:=xlPart) ' change "MySheetName" to your actual sheet name End Sub or you could want to change formulas in a given range: Sub replaceInSpecifiedRangeOfSpecifiedSheet() Worksheets("MySheetName").Range("A5:B8").SpecialCells(xlCellTypeFormulas).Replace(What:="OfficeSupplies.csv", Replacement:="OfficeSupplies2.csv", LookAt:=xlPart) ' change "MySheetName" to your actual sheet name End Sub But changing a formula in 216k cells can be quite a time consuming activity You may consider the opposite: change the name of the”continually changing source file” You can do that without VBA of course Should you be “forced” to or prefer use VBA then you could use ‘Name ... As‘ statement Sub replace2() Dim FullNameToChange As String Dim HardCodedFullName As String FullNameToChange = "C:\ChangingName.xls" HardCodedFullName = "C:\HardCodedName.xls" If Dir(FullNameToChange) <> "" And Dir(HardCodedFullName) = "" Then Name FullNameToChange As HardCodedFullName End Sub
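For completeness, a sketch of what the original per-cell loop was presumably aiming for; it is renamed so it no longer shadows VBA's built-in Replace function, and it assumes the range contains at least one formula cell (otherwise SpecialCells raises an error):

    Sub ReplaceWorkbookName()
        Dim cell As Range
        For Each cell In Range("A1:D8").SpecialCells(xlCellTypeFormulas)
            cell.Formula = Replace(cell.Formula, "OfficeSupplies.csv", "OfficeSupplies2.csv")
        Next cell
    End Sub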
{ "pile_set_name": "StackExchange" }
Q: Running out of memory in Python using dict of namedtuples I have a set of questions that I'd like to be able to answer without having to use SQL to answer them. I was about 9 sub selects into a query when I decided that it's easier to do tree walks in Python. Problem is when I try to get all of the data (3.2 billion rows) I care about into my machine in Python I run out of memory. from collections import namedtuple, defaultdict state_change = namedtuple("state_change", ["event", "phase", "datetime"]) sql = """ SELECT * FROM job_history WHERE job = 'job_name'""" # Now we need to assosciate job states, and times. jobs = defaultdict(list) # Key is the job id, list of state changes for row in get_media_scan_data(sql): jobs[row[7]].append(state_change( row[8], row[10], row[5],)) an individual row looks like datetime job job_id event impact_policy phase 2000-06-10 08:44:04.000 job_name 4165 begin inode verify NULL 4 One solution to this problem is doing my computation on a window of data. So say the first 200,000 rows, then doing 200,001 to 400,000, etc. Is there a more memory efficient way to store this data locally? It would be a huge time saver if I could avoid having to redownload the dataset (using windows) multiple times. A: With billions of rows, you obviously need some kind of disk persistence. Take a look at bsddb module, it's a very fast embedded key-value store. A tiny example (notice a little weird way of storing values on python 3): import bsddb3.db as db def bsdtest(): d = db.DB() d.open('test.db', dbtype=db.DB_HASH, flags=db.DB_CREATE) # d.exists() for key in range(1000000): d.put(bytes(str(key), encoding='ascii'), bytes(str('value'), encoding='ascii')) d.close()
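If adding a dependency is a concern, the standard library's shelve module gives a similar persistent dict-like store, so the per-job lists never have to sit in memory all at once (a sketch reusing the names from the question):

    import shelve

    with shelve.open("jobs_by_id.db") as jobs:
        for row in get_media_scan_data(sql):
            key = str(row[7])                      # shelve keys must be strings
            changes = jobs.get(key, [])
            changes.append(state_change(row[8], row[10], row[5]))
            jobs[key] = changes                    # reassign so the change is written back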
{ "pile_set_name": "StackExchange" }
Q: application migration from weblogic 10.2.0 to jboss 6.1.0 I am new to Jboss server. I am migrating the J2ee application from Weblogic 10.2 to JBoss EAP 6.1.0. While migrating I am getting the below error message. Kindly assist if anyone knows the solution. The Application successfully gets deployed on the server but when i try to access the application through localhost, it throws a NullPointerException. ERROR MESSAGE: 17:34:16,811 ERROR [stderr] (http-localhost/127.0.0.1:8080-2) java.lang.NullPointerException 17:34:16,811 ERROR [stderr] (http-localhost/127.0.0.1:8080-2) at javax.naming.NameImpl.(NameImpl.java:264) 17:34:16,811 ERROR [stderr] (http-localhost/127.0.0.1:8080-2) at javax.naming.CompositeName.(CompositeName.java:214) 17:34:16,811 ERROR [stderr] (http-localhost/127.0.0.1:8080-2) at org.jboss.as.naming.util.NameParser.parse(NameParser.java:49) 17:34:16,811 ERROR [stderr] (http-localhost/127.0.0.1:8080-2) at org.jboss.as.naming.NamingContext.parseName(NamingContext.java:491) 17:34:16,811 ERROR [stderr] (http-localhost/127.0.0.1:8080-2) at org.jboss.as.naming.NamingContext.lookup(NamingContext.java:183) 17:34:16,811 ERROR [stderr] (http-localhost/127.0.0.1:8080-2) at org.jboss.as.naming.NamingContext.lookup(NamingContext.java:179) 17:34:16,811 ERROR [stderr] (http-localhost/127.0.0.1:8080-2) at javax.naming.InitialContext.lookup(InitialContext.java:392) 17:34:16,811 ERROR [stderr] (http-localhost/127.0.0.1:8080-2) at com.trifecta.pfizer.dao.UserProfileDAO.populate(UserProfileDAO.java:51) 17:34:16,811 ERROR [stderr] (http-localhost/127.0.0.1:8080-2) at com.trifecta.pfizer.servlet.PfizerSiteController.doFilter(PfizerSiteController.java:394) 17:34:16,811 ERROR [stderr] (http-localhost/127.0.0.1:8080-2) at com.trifecta.pfizer.servlet.PfizerSiteController.performTask(PfizerSiteController.java:163) 17:34:16,811 ERROR [stderr] (http-localhost/127.0.0.1:8080-2) at com.trifecta.pfizer.servlet.PfizerSiteController.doPost(PfizerSiteController.java:546) 17:34:16,811 ERROR [stderr] (http-localhost/127.0.0.1:8080-2) at javax.servlet.http.HttpServlet.service(HttpServlet.java:754) 17:34:16,811 ERROR [stderr] (http-localhost/127.0.0.1:8080-2) at javax.servlet.http.HttpServlet.service(HttpServlet.java:847) 17:34:16,811 ERROR [stderr] (http-localhost/127.0.0.1:8080-2) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:295) 17:34:16,811 ERROR [stderr] (http-localhost/127.0.0.1:8080-2) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214) 17:34:16,811 ERROR [stderr] (http-localhost/127.0.0.1:8080-2) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:230) 17:34:16,811 ERROR [stderr] (http-localhost/127.0.0.1:8080-2) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:149) 17:34:16,811 ERROR [stderr] (http-localhost/127.0.0.1:8080-2) at org.jboss.as.web.security.SecurityContextAssociationValve.invoke(SecurityContextAssociationValve.java:169) 17:34:16,811 ERROR [stderr] (http-localhost/127.0.0.1:8080-2) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:145) 17:34:16,811 ERROR [stderr] (http-localhost/127.0.0.1:8080-2) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:97) 17:34:16,811 ERROR [stderr] (http-localhost/127.0.0.1:8080-2) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:102) 17:34:16,811 ERROR [stderr] (http-localhost/127.0.0.1:8080-2) at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:336) 17:34:16,811 ERROR [stderr] (http-localhost/127.0.0.1:8080-2) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:856) 17:34:16,811 ERROR [stderr] (http-localhost/127.0.0.1:8080-2) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:653) 17:34:16,811 ERROR [stderr] (http-localhost/127.0.0.1:8080-2) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:920) 17:34:16,811 ERROR [stderr] (http-localhost/127.0.0.1:8080-2) at java.lang.Thread.run(Thread.java:619) please help me in resolving the issue? CODE: try { if(sqlHome == null) { /* Properties jndiProps = new Properties(); jndiProps.put(Context.INITIAL_CONTEXT_FACTORY, "org.jboss.naming.remote.client.InitialContextFactory"); jndiProps.put(Context.PROVIDER_URL,"jnp://localhost:9990"); jndiProps.put(Context.URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces"); */ // create a context passing these properties Context ctx = new InitialContext(); System.out.println("check1..."); //Object obj = ctx.lookup("SQLHelperSFOHome"); //java:app/SQLHelperSFO System.out.println("check2..."); //sqlHome=(SQLHelperSFOHome)PortableRemoteObject.narrow(obj,SQLHelperSFOHome.class); //System.out.println(PortableRemoteObject.narrow(ctx.lookup("SQLHelperSFOHome"),SQLHelperSFOHome.class).toString()+"2"); sqlHome = (SQLHelperSFOHome)PortableRemoteObject.narrow(ctx.lookup("SQLHelperSFOHome"),SQLHelperSFOHome.class); } sqlsfo = sqlHome.create(); sqlsfo.initialize(DATASOURCE_JNDINAME); } catch(Exception e) { e.printStackTrace(); throw new DataAccessException("UserProfileDAO","populate",e.getMessage()); } I tried changing the initial context in the following manner: Properties jndiProps = new Properties(); jndiProps.put(Context.INITIAL_CONTEXT_FACTORY, "org.jboss.naming.remote.client.InitialContextFactory"); jndiProps.put(Context.PROVIDER_URL,"jnp://localhost:9990"); jndiProps.put(Context.URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces"); Context ctx = new InitialContext(jndiProps); Also, The sysout statements used in this method are not displayed on the console. A: Instead of using ctx.lookup("EJBclassName"); we used : ctx.lookup("java:global/EARname/EJBprojectName/EJBclassName!AbsoluteNameOfEJBClass"); it worked.
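Spelled out as code, with placeholder EAR and module names standing in for the real deployment names (JBoss EAP 6 prints the actual java:global bindings in the server log when the EJB deploys):

    Context ctx = new InitialContext();
    Object ref = ctx.lookup(
            "java:global/MyEar/MyEjbModule/SQLHelperSFO!com.example.ejb.SQLHelperSFOHome");
    SQLHelperSFOHome sqlHome =
            (SQLHelperSFOHome) PortableRemoteObject.narrow(ref, SQLHelperSFOHome.class);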
{ "pile_set_name": "StackExchange" }
Q: Why using jquery map? Why using jquery.min.map if: jquery = 242 ko jquery.min + jquery.min.map = 83 + 125 = 208 ko (the map is even greater than the library) And if we remove the comments, we will get a small jquery that could be easier to read (and to debug). So, why using the map if it will only add more than 100 ko and an extra request? What is the best practice? A: Source maps are loaded only when the developer tools are active. Browsers won't load them for application's users. Edit: It should be mentioned that there are 2 types of source maps. One which is an external file and there is a link to it in the actual file and another one which is embedded in the main file. Browsers actually have to load the entire file (i.e. including the embedded source map) for the second type. Check https://www.html5rocks.com/en/tutorials/developertools/sourcemaps/ for more information. A: That's called a source map. This answer goes into detail about what they are, how they work, and why you would want to use it. EDIT Extracted answer from the above SO link for posterity. Answered by @aaronfrost The .map files are for js and css files that have been minified. They are called SourceMaps. When you minify a file, like the angular.js file, it takes thousands of lines of pretty code and turns it into only a few lines of ugly code. Hopefully, when you are shipping your code to production, you are using the minified code instead of the full, unminified version. When your app is in production, and has an error, the sourcemap will help take your ugly file, and will allow you to see the original version of the code. If you didn't have the sourcemap, then any error would seem cryptic at best. Same for CSS files. Once you take a SASS or LESS file and compile it to CSS, it looks nothing like it's original form. If you enable sourcemaps, then you can see the original state of the file, instead of the modified state. What is it for? To de-reference uglified code How can a developer use it? You use it for debugging a production app. In development mode you can use the full version of Angular. In production, you would use the minified version. Should I care about creating a js.map file? If you care about being able to debug production code easier, then yes, you should do it. How does it get created? It is created at build time. There are build tools that can build your .map file for you as it does other files. https://github.com/gruntjs/grunt-contrib-uglify/issues/71
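The link between the minified file and its map is nothing more than a comment at the end of the minified file, for example:

    //# sourceMappingURL=jquery.min.map

Browsers only fetch the file named there when the developer tools are open, so shipping the .map next to jquery.min.js costs ordinary visitors neither the extra 125 KB nor the extra request.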
{ "pile_set_name": "StackExchange" }
Q: NSPopover: how to remove border? I've an NSPopover where I put my own NSView subclass to customize its content. I used an empty view with only a subview to have a "bar" on the top. The problem is that the NSPopover seems to have a sort of border all around itself and I get this ugly effect. Anyone have an idea how to fix that? Thanks A: Have a look at this GitHub repository, it might help you: https://github.com/github/Rebel
{ "pile_set_name": "StackExchange" }
Q: Can I use relative URL in a .properties file? I have a java program which gets some properties from a xxx.properties file. For example the destination of a file my program works with. How is it possible to give this file's place in the xxx.properties file with relative linking? I tried so many ways, but nothing worked. If I give the place of the file with an absolute URL it works just fine. Example: keyFileName=../res/MP00.pem <-- does not work. keyFileName=/home/thomas/myprogram/src/main/webapp/WEB-INF/res/MP00.pem <-- does work. The xxx.properties file is in /home/thomas/myprogram/src/main/webapp/WEB-INF/lib I'm using an ubuntu based linux distribution, if that matters. Any idea? Thanks in advance! A: This has little to do with the fact that you load the URL from a properties file. Relative paths are always relative to some form of 'current location'. Loading the URL-String from a properties-file does not set that .properties' location as your 'current location'. Try setting the path relative to the program you run (which uses the URL-String), not the .properties-file.
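One way to make a relative value robust is to resolve it against a base directory you choose, instead of the JVM's current working directory (a sketch; the hard-coded base path is only an example):

    Properties props = new Properties();
    try (InputStream in = new FileInputStream("xxx.properties")) {
        props.load(in);
    }
    File base = new File("/home/thomas/myprogram/src/main/webapp/WEB-INF/lib");
    File keyFile = new File(base, props.getProperty("keyFileName")).getCanonicalFile();
    // keyFileName=../res/MP00.pem now resolves to .../WEB-INF/res/MP00.pem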
{ "pile_set_name": "StackExchange" }
Q: Rails render site in differnet controller I have a rails site and I would like to render the live site through the dashboard. I have created a dashboard for the back end of the website but when a user makes a change I would like them to see it first. Here's my Dashboard.rb has_many :profiles, :dependent => :destroy has_many :blogs, :through => :profile, :dependent => :destroy has_many :videos, :through => :profile, :dependent => :destroy has_many :albums, :through => :profile, :dependent => :destroy Dashboard index.html <%= render :partial => 'profiles/profile', :locals => {:profile => @profile} %> Profile.rb belongs_to :dashboard has_many :blogs, :dependent => :destroy has_many :videos, :dependent => :destroy has_many :albums, :dependent => :destroy _profile.html.erb <h1><%= @profile.name %></h1> <% album = @profile.albums.last %> <% if album.blank? %> <%= link_to 'Create a new album', new_album_path %></br> <% else %> <%= render :partial => 'albums/album', :locals => {:album => @profile.albums.last} %> <% end %> <% blog = @profile.blogs.last %> <% if blog.blank? %> <%= link_to 'Create a blog post', new_blog_path %><br/> <% else %> <%= render :partial => 'blogs/blog', :locals => {:blog => @profile.blogs.last}%> <% end %> <% video = @profile.videos.last %> <% if video.blank? %> <%= link_to 'Add new video', new_video_path %></br> <% else %> <%= render :partial => 'videos/video', :locals => {:video => @profile.videos.last} %> <% end %> The above works fine when I'm viewing the site through the frontend but when trying to view it through the dashboard I get the error undefined method `name' for nil:NilClass Which is the profile.name line If I delete the line above I get the undefined method `shows' for nil:NilClass Anyone have any suggestions how I can solve this issue? A: There needs to be @profile find inside controller. If the params id is not valid then it will redirect to root url and if valid it will render page. def index @profile = Profile.find_by_id(params[:id]) @profile || invalid_url! end private def invalid_url! flash[:error] = 'URL is not valid !' redirect_to root_url end
{ "pile_set_name": "StackExchange" }
Q: ListBox control shows empty screen while scrolling up and down on Windows Phone 8? We are developing a chat application, and the chat page has a ListBox. If that ListBox has hundreds of records, the page goes blank when we scroll up and down. I understand that the ListBox virtualizes its items, keeping only what is currently visible and clearing the rest. I tried VirtualizationMode="Standard" and "Recycling" but it made no difference. Please help me with this issue. A: After doing some research, I solved the problem by using an ItemsControl instead.
{ "pile_set_name": "StackExchange" }
Q: Help finding inverse Fourier transform of a function. I have the function $f(x) = 1 -|x|$ for $|x|\leq 1$. And zero everywhere else. I'm supposed to find the inverse Fourier transform of the function but I only have a formula for the inverse Fourier transform of a vector not a function in my notes. Can someone point me in the right direction? A: The answer in question was $sinc^2(x)$ according to the grader; not sure where they got that but there it is.
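For the record, the computation is short. Since $f$ is even, with the convention $\check f(\xi)=\int_{-\infty}^{\infty} f(x)e^{i\xi x}\,dx$ (other conventions only change constant factors and the scaling of the argument): $$\check f(\xi)=2\int_0^1 (1-x)\cos(\xi x)\,dx=\frac{2(1-\cos\xi)}{\xi^2}=\frac{\sin^2(\xi/2)}{(\xi/2)^2}=\operatorname{sinc}^2\!\left(\tfrac{\xi}{2}\right),$$ which is the $\operatorname{sinc}^2$ shape the grader quoted; whether the argument comes out as $\xi/2$, $\xi$ or $\pi\xi$ depends on the Fourier and sinc conventions in use.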
{ "pile_set_name": "StackExchange" }
Q: Are there modern lenses with a 49mm filter diameter? I have found four color-filters in my desk: intense red, orange, light green and light blue. Are there lenses with 49mm filter size which these could be used with? A: Usage of color filters in digital photography has already been covered in another question. There certainly are dozens of lenses with 49mm filter size for various mounts, most often for Pentax or Sony. Also, some compact cameras use that filter size, e.g. Fuji X100 (with either AR-X100 adapter or an extra 49mm filter frame).
{ "pile_set_name": "StackExchange" }
Q: angular2/rxjs/redux reimplementation - route change messing with observable I'm trying to do a basic reimplementation of redux in rxjs in an angular2 app. At this point it's basically just a couple of things I found on the internets stitched together, this plunker for angular DI, this for file structure, this for combineReducers and lastly this for "redux in rxjs". There' sa couple of problems I'm having, but this is at the moment the biggest: https://www.youtube.com/watch?v=xXau87UmqOs What's happening: I have two components, index and todos each has a route in index I only show a list of todos in todos I can delete and add a new todo when I switch between routes without adding a new todo or removing one, everything works fine when I add or remove a todo, it still works, new todo is appended, old is deleted, data in console.log looks fine when I go to different route after adding or removing in todo, observable becomes the last redux action I called, for example Object {type: "DELETE_TODO", id: 1} instead of array of todos here's what I described in console.logs (can be seen in the youtube video as well) ---- index loaded ---- index.js:33 map in index: Object {todos: Array[3]} index.js:35 resp in index: Object {todos: Array[3]} index.js:38 ---- index destroyed ---- todos.js:31 ---- todos loaded ---- todos.js:33 map in todos: Object {todos: Array[3]} todos.js:35 resp in todos: Object {todos: Array[3]} todos.js:45 ---- todos destroyed ---- index.js:30 ---- index loaded ---- index.js:33 map in index: Object {todos: Array[3]} index.js:35 resp in index: Object {todos: Array[3]} index.js:38 ---- index destroyed ---- todos.js:31 ---- todos loaded ---- todos.js:33 map in todos: Object {todos: Array[3]} todos.js:35 resp in todos: Object {todos: Array[3]} NgStore.js:49 Object {type: "DELETE_TODO", id: 1} todos.js:33 map in todos: Object {todos: Array[2]} todos.js:35 resp in todos: Object {todos: Array[2]} todos.js:45 ---- todos destroyed ---- index.js:30 ---- index loaded ---- index.js:33 map in index: Object {type: "DELETE_TODO", id: 1} index.js:35 resp in index: Object {type: "DELETE_TODO", id: 1} Any idea what could be causing the observable to change when route changes? Here's the repo https://github.com/fxck/ng2-rx-redux it's usign the latest alpha(46) A: You're using RxJS API incorrectly. To fix your code you should do next two things: Set initial state into scan operator. // services/NgStore.ts this._subject = new BehaviorSubject().scan(rootReducer, initialState); Change condition in switch statement to check if action is empty (e.g === undefined): // reducers/todo.ts export const todos = (state = [], action) => { switch(action && action.type) { // ... } }; PS I won't remove my first answer for someone who doesn't know that redux is already implemented in RxJS.
{ "pile_set_name": "StackExchange" }
Q: Android Studio Error Duplicate Resource Error:Execution failed for task ':fiesCabs:mergeDebugResources'. C:\Users\tony\AndroidstudioProjects\Fies-Cabs\fiesCabs\src\main\res\values\themes_apptheme.xml: Error: Duplicate resources: C:\Users\tony\AndroidstudioProjects\Fies-Cabs\fiesCabs\src\main\res\values\themes_apptheme.xml:style/AppTheme, C:\Users\tony\AndroidstudioProjects\Fies-Cabs\fiesCabs\src\main\res\values\styles.xml:style/AppTheme Error:Error: Duplicate resources: C:\Users\tony\AndroidstudioProjects\Fies-Cabs\fiesCabs\src\main\res\values\themes_apptheme.xml:style/AppTheme, C:\Users\tony\AndroidstudioProjects\Fies-Cabs\fiesCabs\src\main\res\values\styles.xml:style/AppTheme Error shown in android studio while building the project themes_apptheme.xml <?xml version="1.0" encoding="utf-8"?> <!-- Generated with http://android-holo-colors.com --> <resources xmlns:android="http://schemas.android.com/apk/res/android"> <style name="AppTheme" parent="@style/_AppTheme"/> <style name="_AppTheme" parent="Theme.AppCompat.Light"> <item name="android:editTextStyle">@style/EditTextAppTheme</item> <item name="android:textColorHighlight">#9933b5e5</item> <item name="android:autoCompleteTextViewStyle">@style/AutoCompleteTextViewAppTheme</item> <item name="android:checkboxStyle">@style/CheckBoxAppTheme</item> <item name="android:radioButtonStyle">@style/RadioButtonAppTheme</item> <item name="android:buttonStyle">@style/ButtonAppTheme</item> <item name="android:imageButtonStyle">@style/ImageButtonAppTheme</item> <item name="android:spinnerStyle">@style/SpinnerAppTheme</item> <item name="android:spinnerDropDownItemStyle">@style/SpinnerDropDownItemAppTheme</item> <item name="android:progressBarStyleHorizontal">@style/ProgressBarAppTheme</item> <item name="android:seekBarStyle">@style/SeekBarAppTheme</item> <item name="android:ratingBarStyle">@style/RatingBarAppTheme</item> <item name="android:ratingBarStyleIndicator">@style/RatingBarBigAppTheme</item> <item name="android:ratingBarStyleSmall">@style/RatingBarSmallAppTheme</item> <item name="android:buttonStyleToggle">@style/ToggleAppTheme</item> <item name="android:listViewStyle">@style/ListViewAppTheme</item> <item name="android:listViewWhiteStyle">@style/ListViewAppTheme.White</item> <item name="android:spinnerItemStyle">@style/SpinnerItemAppTheme</item> </style> </resources> not familiar with android studio just imported the project from eclipse while it works fine with eclipse if there is any prerequisites to build in android studio please inform me A: You have the same resource style/AppTheme in two files, values/styles.xml and values/themes_apptheme.xml. Rename or remove the other. A: After using Android Studio to create a new empty Activity using the new Activity wizard, the layout's XML file res/layout/myactivity_layout.xml was auto-generated, but Android Studio ALSO silently added /res/values/dimens.xml. I already have a /res/values/dimen.xml file where I defined my various dimensions. Android Studio added 2 new dimension keys to these files (without checking for conflicts) and sure enough, the keys for the 2 new dimensions were already defined in my dimen.xml file, so my Gradle build failed. I assume the reason Android Studio added dimens.xml is because it didn't recognize my dimen.xml file existed. And the reason Android Studio auto-added new dimensions to dimens.xml is to adhere to Android's style conventions for Material Design (which I'm not adhering to for my project). 
I would much prefer if Android Studio did not auto-generate problems without checking first! I removed the extraneously added dimens.xml files and must now either rename my dimen.xml to dimens.xml or never use the new Activity Wizard again. Thanks Android.
{ "pile_set_name": "StackExchange" }
Q: ggplot to stack bar graph top 5 for each month I have a good one. I've thinking about this a long time now. I have this data set and this data set could be be huge. I would like to graph a ggplot stack bar based on top 5 higest count for each month. For example, for 1//1/2012, the higest counts would be I, G, F, D and E. df Date Desc count 1/1/2012 A 10 1/1/2012 B 5 1/1/2012 C 7 1/1/2012 D 25 1/1/2012 E 19 1/1/2012 F 30 1/1/2012 G 50 1/1/2012 H 10 1/1/2012 I 100 2/1/2012 A 10 2/1/2012 B 5 2/1/2012 C 7 2/1/2012 D 25 2/1/2012 E 19 2/1/2012 F 30 2/1/2012 G 50 2/1/2012 H 10 2/1/2012 I 100 3/1/2012 A 1 3/1/2012 B 4 3/1/2012 C 5 3/1/2012 D 6 3/1/2012 E 6 3/1/2012 F 7 3/1/2012 G 8 3/1/2012 H 5 3/1/2012 I 10 I have something like this but this graphs all of the values: ggplot(df, aes(Date, count))+ geom_bar(aes(fill=Desc), stat="identity", position="stack") + theme_bw() A: You have to subset the data first: library(plyr) library(ggplot2) df_top <- ddply(df, .(Date), function(x) head(x[order(x$count, decreasing = TRUE),], 5)) ggplot(df_top, aes(Date, count))+ geom_bar(aes(fill=Desc), stat="identity", position="stack") + theme_bw()
{ "pile_set_name": "StackExchange" }
Q: A Java Runtime Environment (JRE) or Java Development Kit (JDK) must be available in order to run Eclipse I tried googling but couldn't find a solution. I'm using Windows 7 Ultimate 64-bit. I have Java (64-bit) installed here: C:\Program Files (x86)\Java\jre7 I downloaded the Android SDK from here: Get the Android SDK. I downloaded the 64-bit version since my Windows is 64-bit. Was the 32-bit one required? Now whenever I run eclipse.exe I get the following error: A Java Runtime Environment or JDK must be available in order to run Eclipse. No java virtual machine was found after searching the following location: C:\Users..\Downloads\adt-bundle-windows=x86_64-3013131030\adt-bundle-windows-x86_64-20131030\eclipse\jre\bin\java.exe Sorry, I can't post a screenshot because I don't have any reputation here yet. So what should I do? Do I need to install 32-bit Java or download the 32-bit SDK? PS: Before running Eclipse I ran "SDK Manager" and it installed some necessary tools, in case that helps. A: Just set your environment variable. Go to Computer properties -> Advanced System Settings -> Environment Variables -> System Variables -> Path and, after a semicolon, paste the path of your JRE, like this: C:\Program Files\Java\jre7\bin, then click OK. Open CMD and type java; if that command works properly, your path is now set. Just open your Eclipse and it will work this time. You can do this through the command line too: just type set PATH=C:\Program Files\Java\jre1.6.0_03\bin and press Enter. If it is still not working, point Eclipse at the JVM directly: in eclipse.ini, add a -vm line followed on the next line by the full path to javaw.exe, placed before the -vmargs line. :)
{ "pile_set_name": "StackExchange" }
Q: Are there Similar Distance Binary Error Correcting Codes? I'm trying to find a low distortion embedding of the trivial metric space into hamming space. It seems this should be doable by using a large set of low dimensional vectors, with approximately equal pairwise distance. My question is if it makes sense to expect error correcting codes to have this property? Usually when designing error correcting codes, we are interested in finding the highest achievable rate, given a minimum distance between points. A similar question is to consider, instead of the minimum distance, the ratio between the maximum and minimum distance, $\rho=max/min$. Given $\rho$, what codes should one consider for maximizing the rate? I tried to count the distances in some of the codes from this list of optimal binary codes: The $(7,2^3,4)$ code has all distances equal to four, so $\rho=1$. The $(8,2^4,4)$ code has $112$ pairs of distance $4$ and $8$ of distance $8$, so it has $\rho=2$. The $(9,40,3)$ code has maximum distance $8$, so $\rho=8/3$. The $(24,2^{12},8)$ golay code has maximum distance $24$, so $\rho=3$. The two later codes are nearly as bad as possible. Are there any codes for which I shouldn't expect this to be the case? If not, can I at least say something about the distribution of distances being fairly concentrated? A: Every $\epsilon$-biased set gives a code whose minimal relative distance is $0.5 - \epsilon$ and maximal relative distance is $0.5 + \epsilon$. To see it, write the elements of the set as the rows of a matrix, and then define the code to be the span of the columns of the matrix. The $\epsilon$-biased property of the set is equivalent to saying that the relative distance between codewords is always between $0.5 - \epsilon$ and $0.5 + \epsilon$. One particular construction of such sets will give you a code whose dimension is linear in its block length. It is mentioned here: http://www.wisdom.weizmann.ac.il/~benaroya/SmallBiasNew.pdf Basically, the idea is to take AG codes of constant rate and relative distance close to $1$, and concatenating them with the Hadamard code.
{ "pile_set_name": "StackExchange" }
Q: Unable to find an inherited method for function ‘select’ for signature ‘"data.frame"’ I'm trying to select columns from a data frame by the following code. library(dplyr) dv %>% select(LGA) select(dv, LGA) Both of them will fail with the error Unable to find an inherited method for function ‘select’ for signature ‘"data.frame"’ But the following code will be fine. dplyr::select(dv, LGA) Is this a function confliction in packages? All libraries imported are as the following. library(jsonlite) library(geojsonio) library(dplyr) library(ggmap) library(geojson) library(leaflet) library(mapview) library(RColorBrewer) library(scales) I'm new to R, so super confused how you guys deal with problems like this? A: There's a great package that helps with package conflicts called conflicted. If you type search() into your console, you'll see an ordered vector of packages called the "search list". When you call select, R searches through this "search path" and matches the first function called select. When you call dplyr::select you're calling it directly from the namespace dplyr, so the function works as expected. Here's an example using conflicted. We'll load up raster and dplyr, which both have a select function. library(dplyr) library(raster) library(conflicted) d <- data.frame(a = 1:10, b = 1:10) Now when we call select, we're prompted with the exact conflict: > select(d, a) Error: [conflicted] `select` found in 2 packages. Either pick the one you want with `::` * raster::select * dplyr::select Or declare a preference with `conflict_prefer()` * conflict_prefer("select", "raster") * conflict_prefer("select", "dplyr")
{ "pile_set_name": "StackExchange" }
Q: Main method entry point with string argument gives "does not contain ... suitable ... entry point" error Why does the code block below give a compile error of "does not contain a static 'Main' method suitable for an entry point"? namespace MyConApp { class Program { static void Main(string args) { string tmpString; tmpString = args; Console.WriteLine("Hello" + tmpString); } } } A: In the code you provide, the problem is that the 'Main' entry point is expected to take an array of strings passed by the system when the program is invoked (the array may be empty). To correct it, change static void Main(string args) to static void Main(string[] args) You would get the same error if you declared 'Main' with any return type other than 'void' or 'int'. So the signature of the 'Main' method always has to be:
static — it is invoked by the runtime without an instance;
named Main — this is the name the framework invoker calls;
returning void or int;
optionally taking string[] args — this parameter can be omitted, discarding the CLI parameters, which are the space-separated tokens from the command line.
(It does not have to be public.)
From MS (...) The Main method can use arguments, in which case, it takes one of the following forms: static int Main(string[] args) static void Main(string[] args) A: Because the argument is String and not a String Array as expected A: See this to understand Main method signature options.
{ "pile_set_name": "StackExchange" }
Q: Fast way to delete big folder on GCS bucket I have a very big GCS bucket (several TB), with several sub directories, each with a couple terabytes of data. I want to delete some of those folders. I tried to use gsutil from a Cloud Shell, but it is taking ages. For reference, here is the command I'm using: gsutil -m rm -r "gs://BUCKET_NAME/FOLDER" I was looking at this question, and thought maybe I could use that, but is seems like it can't filter by folder name, and I can't filter by any other thing as folders have some mixed content. So far, my last resort would be to wait until the folders I want to delete are "old", and set the lifecycle rule accordingly, but that could take too long. Are there any other ways to make this faster? A: It's just going to take a long time; you have to issue a DELETE request for each object with the prefix FOLDER/. GCS doesn't have the concept of "folders". Object names can share a common prefix, but they're all in a flat namespace. For example, if you have these three objects: /a/b/c/1.txt /a/b/c/2.txt /a/b/c/3.txt ...then you don't actually have folders named a, b, or c. Once you deleted those three objects, the "folders" (i.e. the prefix that they shared) would no longer appear when you listed objects in your bucket. See the docs for more details: https://cloud.google.com/storage/docs/gsutil/addlhelp/HowSubdirectoriesWork
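For reference, the same per-object deletes can be scripted; below is a rough sketch with the google-cloud-storage Python client (the bucket name and prefix are placeholders). It does not make the operation fundamentally faster, because it still issues one DELETE per object sharing the prefix - which is exactly why the gsutil command takes so long:

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("BUCKET_NAME")

# Every object whose name starts with "FOLDER/" is deleted individually;
# there is no single "delete folder" call.
for blob in bucket.list_blobs(prefix="FOLDER/"):
    blob.delete()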
{ "pile_set_name": "StackExchange" }
Q: On rendering PDF and modifying Uri I am trying to render a PDF link using a WebView. The link I am trying to render is uploaded by my app to the Firebase database. While the link renders successfully on iOS devices, it doesn't on Android. I strongly suspect the Firebase link is the cause of all this confusion. If I try to render a normal PDF link from the web: String pdf_sample = "http://www.adobe.com/devnet/acrobat/pdfs/pdf_open_parameters.pdf"; String googleDocs = "https://docs.google.com/viewer?url="; Webviewz.getSettings().setJavaScriptEnabled(true); Webviewz.loadUrl(googleDocs + pdf_sample); the PDF renders successfully. If, however, I attempt to display and render a PDF link from my Firebase database, I get a blank dark gray background from Google Docs and a "No preview available" message: String Firebase_link_failure = "https://firebasestorage.googleapis.com/v0/b/jouska-aabee.appspot.com/o/PDF_files%2F305?alt=media&token=b9cf2fa6-f6ff-4a3b-8908-9eac294c4668"; String googleDocs = "https://docs.google.com/viewer?url="; Webviewz.getSettings().setJavaScriptEnabled(true); Webviewz.loadUrl(googleDocs + Firebase_link_failure); Resulting in No Preview Available. The solution suggested by the user sphippen worked out. Using URLEncoder on the Firebase link, the PDF was successfully rendered. Below is the single modification made. Webviewz.loadUrl(googleDocs+ URLEncoder.encode(firebase_link, "utf-8")); A: It looks like the problem is that you're just appending two strings to form your URL: Webviewz.loadUrl(googleDocs + pdf_sample); Looking at the full URL (using the values from your code sample): https://docs.google.com/viewer?url=https://firebasestorage.googleapis.com/v0/b/jouska-aabee.appspot.com/o/PDF_files%2F8828?alt=media&token=fab355da-47a6-4a27-894f-40798590a89a The & character after alt=media ends the url parameter, so the URL the page tries to access is just https://firebasestorage.googleapis.com/v0/b/jouska-aabee.appspot.com/o/PDF_files%2F8828?alt=media, which doesn't contain the download token. You'll need to escape the Firebase Storage download link for use as a URL parameter (replacing & with %26, ? with %3F, % with %25, etc.). The URLEncoder class should work: URLEncoder.encode(pdf_sample, "UTF-8")
{ "pile_set_name": "StackExchange" }
Q: MySQL query to get the max, min with GROUP BY and multiple table joins in separate rows Table structure employee_salary salary_id | emp_id | salary employee tabe structure emp_id | first_name | last_name | gender | email | mobile | dept_id | is_active Department: dept_id | dept_name | manager_name | is_active Question: Display department wise employee who is getting highest and lowest salary amount? I am using query SELECT max(salary) salary, dept_name, first_name, dept_id , 'MAX' Type FROM ( SELECT a.salary, c.dept_name, b.first_name, b.dept_id, a.salary_id FROM employee_salary a LEFT JOIN employee b ON a.emp_id = b.emp_id LEFT JOIN department c ON c.dept_id = b.dept_id ) t GROUP BY dept_id UNION ALL SELECT min(salary) salary, dept_name, first_name, dept_id , 'MIN' Type FROM ( SELECT a.salary, c.dept_name, b.first_name, b.dept_id, a.salary_id FROM employee_salary a LEFT JOIN employee b ON a.emp_id = b.emp_id LEFT JOIN department c ON c.dept_id = b.dept_id ) t GROUP BY dept_id ORDER BY dept_id The output I am getting is shown below . I am not able to get the respective first name from employee table and rest all other fields are showing correct values, salary dept_name first_name dept_id Type 30000 dept_1 Paul 1 MIN 98000 dept_1 Paul 1 MAX 51000 dept_2 Aron 2 MAX 20000 dept_2 Aron 2 MIN 40000 dept_3 Steve 3 MAX 40000 dept_3 Steve 3 MIN 64000 dept_4 Henry 4 MAX 64000 dept_4 Henry 4 MIN A: ORDER the salary with DESC for finding the maximum value and by ASC for finding the MINIMUM salary This is because it checked the first matched term, and return the value accordingly, SELECT max(salary) salary, dept_name, first_name, dept_id , 'MAX' Type FROM ( SELECT a.salary, c.dept_name, b.first_name, b.dept_id, a.salary_id FROM employee_salary a LEFT JOIN employee b ON a.emp_id = b.emp_id LEFT JOIN department c ON c.dept_id = b.dept_id ORDER BY a.salary DESC) t group by dept_id UNION ALL SELECT min(salary) salary, dept_name, first_name, dept_id , 'MIN' Type FROM ( SELECT a.salary, c.dept_name, b.first_name, b.dept_id, a.salary_id FROM employee_salary a LEFT JOIN employee b ON a.emp_id = b.emp_id LEFT JOIN department c ON c.dept_id = b.dept_id ORDER BY a.salary ASC) t group by dept_id ORDER BY dept_id
{ "pile_set_name": "StackExchange" }
Q: Need help evaluating my diet to gain muscle I am re-evaluating my daily diet and was looking for some advice on how I can improve it. Background: I am a 24 year old male, weigh 68kg/150lb, and do the StrongLifts program 3 days a week. My ultimate goal is to gain weight (mostly muscle, limited fat), but I am also concerned with making sure I am getting enough nutrients and that there isn't any severe deficiency in my diet. I haven't been tracking specific calories of my meals, but suffice it to say I am never hungry, so it is unlikely I'm eating at a deficit. Currently, my diet looks like this: Breakfast: Tall glass of milk; 2 boiled eggs Morning snack: Handful of almonds; Handful of granola; Cup of yogurt; Bowl of oatmeal; Orange Lunch: Bowl of salad; Handful of mixed berries; Handful of Nuts; One serving of fish/poultry; 18oz protein shake Afternoon snack: Banana; Handful of almonds Post-workout: 18oz protein shake Dinner: Bowl of salad (no dressing); One serving of fish/poultry; Tall glass of milk A: Let's start with the basics. To keep it simple - if you want to gain weight you need to be in a calorie surplus. I'll guess you are 6 feet tall; a rough estimate of your TDEE (the calories you need to maintain your weight) is 2715 kcal per day. To gain a pound a week you need a calorie surplus of 500 kcal, which puts your energy demands at around 3215 kcal. How much fat and how much muscle you would gain depends on several factors (genetics, training, hormones, rest, etc.). I would suggest eating 3000 kcal per day and tracking your progress. One very important thing here - like I said, this TDEE is a rough estimate, and your actual TDEE could be as low as 2500 or as high as 3200 (if not even higher, or maybe even lower). To be sure about your TDEE, you'll have to learn your body, and for that you need time and a bodyweight scale. Also, note that as you gain extra pounds of lean mass, your TDEE will go up. The second thing you need to learn is to keep an eating diary. And by this I mean - "I ate 200g of milk and 120g of eggs". If you say - "I ate 2 eggs for breakfast", it doesn't mean anything. Is it 80g of eggs, or is it maybe 140g of eggs? What kind of eggs (chicken / goose / etc.)? A kitchen scale is a must! After you learn to weigh your food, use an online diary like Cronometer or Myfitnesspal (just to name a few) and track both your macronutrients and micronutrients. The next thing you should improve is to set your protein intake at 1g per pound (in your case a minimum of 150g of protein per day). Fat shouldn't be lower than 50g (below 50, you might experience lots of problems connected with hormones, like being unable to get an erection); 100 is even better. The rest is carbs. To give you a better picture, if your calorie intake is set at 3000 kcal, 150g of protein equals 600 kcal and 100g of fat equals 900 kcal, which means you should take in 375g of carbs (1g of protein or carbs = 4 kcal, 1g of fat = 9 kcal). When you say protein shake, I guess you mean whey powder, right? Drink it only in the morning with your first meal and after a workout. Whey is a very fast protein (to keep it simple, I'll call it a "fast protein") and around lunch there is a high chance you'll still be digesting protein from an earlier meal. If that is the case, the whey will be turned into glucose (sugar). All in all, until we see your precise food intake (measured in grams), there is not much more I can tell you.
{ "pile_set_name": "StackExchange" }
Q: Raphael JS: wrong position after scale path in IE I have problems with scaling a path in Internet Explorer, because it results in a wrong position. Here is an example for the playground, check it out in FF and IE: paper.path("m40,40 h10 v10 h-10 v-10").transform("s8"); I tried this in the Raphael playground and also here: http://jsfiddle.net/M4Rmm/. Works in Firefox and Chrome, but in IE the path is moved and has the wrong position. It doesn't matter if I use the .scale() or .transform() function. paper.path("m40,40 h10 v10 h-10 v-10").scale(8,8); //same result as .transform("s8"); My system: Win7, x64 / FF10, IE8 / Raphael 2.x I also tried the new Raphael version 2.1.0, but the same problem occurs. Any ideas how to solve this problem? A: I had the same issue with positioning in IE. I had two arcs (I used a simplified version of the Raphael polar-clock arc function), but the positioning was off in IE. I changed the matrix.translate as Chris suggested and that solved my problem. Thanks A: As seen in the history (2.1.1 • 2013-08-11, 4th point), this bug is now fixed in the new version!
{ "pile_set_name": "StackExchange" }
Q: Is it possible to make transparent / translucent pastry? I'm looking for something that works like pastry (malleable before baking, rigid after baking, mild unobtrusive flavour) that is transparent or translucent. The idea is to use for topping savoury pies, where the base and sides are regular shortcrust pastry and the top is something similar but translucent, so that you can have a selection of different pies of the same size and shape on a sharing plate and people can see which is which - without leaving the top open so not exposing the contents to the elements. Since things like glass noodles and translucent dough wrappings for Chinese dumplings exist, I'm sure it must be possible - maybe based on corn flour, or pure starch like glass noodles? Apparently translucent rice is also a thing. The closest I've found are recipes like this one for transparent Chinese dumpling dough - but these aren't very transparent and are gooier than would be ideal alongside pastry. If there's no such thing that has an established name or recipes, it would be great to have a few basic principles on how it might work: for example, how the pure starch that glass noddles are apparently made from could be sourced and adapted to be pastry-like without losing translucency? Perhaps simply making a starch dough, glazing it with oil then baking might be enough to make it work like pastry? A: With great skill, a true artist could do what you describe with Thai/Vietnamese rice paper, the dinner plate sized, extra thin ones, like for Fresh Spring Rolls. I will never apply for the job, I promise. A: I respect Jolenealaska's creative thought, but nothing truly resembling pastry is going to be translucent or transparent unless it is exceedingly thin. The structure alone will refract light, making the product opaque in the same way snow is opaque even though individual water crystals are fairly transparent, if they don't have air inclusions. This is because any real pastry will have a complex structure of starch, fat, protein, and so on. While I respect the idea of trying to use a very thin noodle (which is only really somewhat translucent because it is thin, much like tissue paper), that is not likely to be delicious, and will be somewhat incongruous in a western style savory pie. Instead, I suggest you achieve your goal (making it clear which pie is which) by the more traditional means of one or more of: Different crimping styles at the rim Using different patterns for the steam vents Cutting out and baking on crust garnishes in different shapes for different types of filling; you could even cut out letters Less traditionally, at least for savory pies: Use a lattice crust, so the filling is visible, but you still have some pastry on top Use food coloring, or natural ingredients like beet juice or annatto to color the pastry, with different colors for each pie variation For example, one Caribbean restaurant near me has vegetable patties (a hand pie) with pale crust, and chicken patties with a pale greeny-yellow crust (not sure what they use, probably a touch of their curry mixture), and beef patties with a richer orangish colored crust (they might have used annatto). A: If you want the least obtrusive flavor, the best you can go with is thickened water. While you can probably prepare sheets with the right hydrocolloid and lots of care and plastic foil, I would suggest choosing a thickener which thickens on cooling, and pouring the warm mixture over the pie. 
Arrowroot starch is frequently used in this role on fruit pies, I don't see a reason why it shouldn't work on savory pies. But if you have meat in the pie, a gelatine texture would probably feel more natural. In both cases, don't add anything to the mixture, just the thickener and water, and process in the usual way.
{ "pile_set_name": "StackExchange" }
Q: iOS 8 Today extensionContext.openUrl not working From the today extension, I use the following code to open a url in the main app. It works totally fine in iOS 9+, but in iOS 8 it never hits the openUrl method in AppDelegate but simply launches the app. extensionContext.OpenUrl (url, (bool success) => { } ); How can I achieve a similar deep linking behavior in iOS 8? I've also tried SharedApplication.OpenUrl which worked in iOS 9+ but not in iOS 8. A: In iOS 8, you have to implement the version of openURL that was deprecated in iOS 9: -(BOOL)application:(UIApplication *)application openURL:(NSURL *)url sourceApplication:(nullable NSString *)sourceApplication annotation:(id)annotation { NSArray *pathComponents = [url pathComponents]; NSString *action = url.host; // handle URL }
{ "pile_set_name": "StackExchange" }
Q: GpuMat - accessing 2 channel float data in custom kernel As far as I'm aware cv::cuda::PtrStep is used to passing GpuMat data directly to the custom kernel. I found examples of one channel access here however my case is 2 channel mat (CV_32FC2). In this case I'm trying to achieve complex absolute squared value where complex values are encoded like: real part is 1st plane, imaginary part is 2nd plane of given Mat. I tried: __global__ void testKernel(const cv::cuda::PtrStepSz<cv::Vec2f> input, cv::cuda::PtrStepf output) { int x = blockIdx.x * blockDim.x + threadIdx.x; int y = blockIdx.y * blockDim.y + threadIdx.y; if (x <= input.cols - 1 && y <= input.rows - 1 && y >= 0 && x >= 0) { float val_re = input(x, y)[0]; float val_im = input(x, y) [1]; output(x, y) = val_re * val_re + val_im * val_im; } } but this results in the following error: calling a __host__ function("cv::Vec<float, (int)2> ::operator []") from a __global__ function("gpuholo::testKernel") is not allowed I get it. [] is __host__ restricted function since its cv::Vec2f not cv::cuda::Vec2f (which apparently does not exist). But still I would really like to access the data. Is there other mechanism to access 2-channel data on device side similar to Vec2f? I thought of workaround in form of splitting input into two CV_32FC1 Mats so the kernel would look like: __global__ void testKernel(const cv::cuda::PtrStepSzf re, const cv::cuda::PtrStepSzf im, cv::cuda::PtrStepf output) but I'm wondering whether there's a "cleaner" solution, Vec2f-like one. A: You can use raw data types to access the data of GpuMat in a custom CUDA kernel. e.g. float2 type provided by the CUDA runtime can be used as partial replacement of cv::Vec2f. Here is an example code demonstrating the usage of raw data types for accessing GpuMat data. 
#include <iostream> #include <cuda_runtime.h> #include <opencv2/opencv.hpp> using std::cout; using std::endl; __global__ void kernel_absolute(float2* src, float* dst, int rows, int cols, int iStep, int oStep) { int i = blockIdx.y * blockDim.y + threadIdx.y; //Row number int j = blockIdx.x * blockDim.x + threadIdx.x; //Column number if (i<rows && j<cols) { /* Compute linear index from 2D indices */ int tidIn = i * iStep + j; int tidOut = i * oStep + j; /* Read input value */ float2 input = src[tidIn]; /* Calculate absolute value */ float output = sqrtf(input.x * input.x + input.y * input.y); /* Write output value */ dst[tidOut] = output; } } int main(int argc, char** argv) { /* Example to compute absolute value of each element of a complex matrix */ int rows = 10; int cols = 10; int input_data_type = CV_32FC2; //input is complex int output_data_type = CV_32FC1; //output is real /* Create input matrix on host */ cv::Mat input = cv::Mat::zeros(rows,cols,input_data_type) + cv::Vec2f(1,1) /* Initial value is (1,1) */; /* Display input */ cout<<input<<endl; /* Create input matrix on device */ cv::cuda::GpuMat input_d; /* Copy from host to device */ input_d.upload(input); /* Create output matrix on device */ cv::cuda::GpuMat output_d(rows,cols, output_data_type); /* Compute element step value of input and output */ int iStep = input_d.step / sizeof(float2); int oStep = output_d.step / sizeof(float); /* Choose appropriate block size */ dim3 block(8,8); /* Compute grid size using input size and block size */ dim3 grid ( (cols + block.x -1)/block.x, (rows + block.y -1)/block.y ); /* Launch CUDA kernel to compute absolute value */ kernel_absolute<<<grid, block>>>( reinterpret_cast<float2*>(input_d.data), reinterpret_cast<float*>(output_d.data), rows, cols, iStep, oStep ); /* Check kernel launch errors */ assert( cudaSuccess == cudaDeviceSynchronize() ); cv::Mat output; /* Copy results from device to host */ output_d.download(output); /* Display output */ cout<<endl<<output<<endl; return 0; } Compiled and tested with following command on Ubuntu 14.04 with CUDA 8.0: nvcc -o complex complex.cu -arch=sm_61 -L/usr/local/lib -lopencv_core
{ "pile_set_name": "StackExchange" }
Q: .Net SimpleJson: Deserialize JSON to dynamic object I'm using the SimpleJson library from here: http://simplejson.codeplex.com/ I'd like to deserialize a JSON string to an dynamic object like this: dynamic json = SimpleJson.SimpleJson.DeserializeObject("{\"foo\":\"bar\"}"); var test = json.foo; The deserialization part works properly, but calling json.foo throws a RuntimeBinderException with the error message 'SimpleJson.JsonObject' does not contain a definition for 'foo'. How can I deserialize a JSON string using SimpleJson and access the dynamic properties using the json.foo syntax? A: Well, it's just a matter of reading the source code for SimpleJson. :-) A line needs to be uncommented to support the dynamic syntax that I'm looking for. Not sure why this isn't enabled by default. From the source code: // NOTE: uncomment the following line to enable dynamic support. //#define SIMPLE_JSON_DYNAMIC A: Looking at the samples, JsonObject properties are accessed like a dictionary. So instead of json.foo, you would need json["foo"]. You are actually worse off using dynamic here, since there's nothing dynamic about it: the method returns JsonObject, which simply doesn't have a foo member. If you hadn't used dynamic, you could have gotten that error message at compile time. If you have a look at the link L.B. provided, it shows how to implement this dynamic functionality yourself.
{ "pile_set_name": "StackExchange" }
Q: Array.sort get different result in IOS it's a simple code, but returns different results in Andriod and Iphone. var str = [1,2,3,4,5].sort(function () { return -1; }) document.write(str); In MDN(https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/sort) it says If compareFunction(a, b) is less than 0, sort a to a lower index than b, i.e. a comes first. If compareFunction(a, b) returns 0, leave a and b unchanged with respect to each other, but sorted with respect to all different elements. Note: the ECMAscript standard does not guarantee this behaviour, and thus not all browsers (e.g. Mozilla versions dating back to at least 2003) respect this. If compareFunction(a, b) is greater than 0, sort b to a lower index than a. compareFunction(a, b) must always return the same value when given a specific pair of elements a and b as its two arguments. If inconsistent results are returned then the sort order is undefined. So the result should be 1,2,3,4,5. But is Iphone it shows 5,4,3,2,1 Here is a link for you to try this code. http://www.madcoder.cn/demos/ios-test.html And after I did more and more test. I found Iphone is doing a different sorting. Here is a link shows how sort works: http://www.madcoder.cn/demos/ios-test2.html A: The javascript engines use different algorithms for their sort function. Since the compare function doesn't compare values, you get the result of the inner workings of the different algorithms instead of having a sorted result. Looking at the source code of V8 engine (Chrome) and JavaScriptCore (which seems to be used by Safari, or at least the sort function gives the same result, so I guess it uses the same kind of algorithm), you can view the functions that are being used. Not that it might not be exactly the functions used, what's important is that the algorithms are different. They give the same result if you actually compare values, but if not, the result is dependent on the way they operate, not on the function itself. At least not completely. Here's V8 engine sorting function. You'll see that for arrays bigger than 10 elements, the algorithm isn't the same, so the result for arrays smaller than 10 elements is different than for those bigger than 10 elements. You can find following algorithms here: https://code.google.com/p/chromium/codesearch#chromium/src/v8/src/js/array.js&q=array&sq=package:chromium&dr=C comparefn = function(a, b) { return -1 } var InsertionSort = function InsertionSort(a, from, to) { for (var i = from + 1; i < to; i++) { var element = a[i]; for (var j = i - 1; j >= from; j--) { var tmp = a[j]; var order = comparefn(tmp, element); if (order > 0) { a[j + 1] = tmp; } else { break; } } a[j + 1] = element; } console.log(a); } var GetThirdIndex = function(a, from, to) { var t_array = new InternalArray(); // Use both 'from' and 'to' to determine the pivot candidates. var increment = 200 + ((to - from) & 15); var j = 0; from += 1; to -= 1; for (var i = from; i < to; i += increment) { t_array[j] = [i, a[i]]; j++; } t_array.sort(function(a, b) { return comparefn(a[1], b[1]); }); var third_index = t_array[t_array.length >> 1][0]; return third_index; } var QuickSort = function QuickSort(a, from, to) { var third_index = 0; while (true) { // Insertion sort is faster for short arrays. if (to - from <= 10) { InsertionSort(a, from, to); return; } if (to - from > 1000) { third_index = GetThirdIndex(a, from, to); } else { third_index = from + ((to - from) >> 1); } // Find a pivot as the median of first, last and middle element. 
var v0 = a[from]; var v1 = a[to - 1]; var v2 = a[third_index]; var c01 = comparefn(v0, v1); if (c01 > 0) { // v1 < v0, so swap them. var tmp = v0; v0 = v1; v1 = tmp; } // v0 <= v1. var c02 = comparefn(v0, v2); if (c02 >= 0) { // v2 <= v0 <= v1. var tmp = v0; v0 = v2; v2 = v1; v1 = tmp; } else { // v0 <= v1 && v0 < v2 var c12 = comparefn(v1, v2); if (c12 > 0) { // v0 <= v2 < v1 var tmp = v1; v1 = v2; v2 = tmp; } } // v0 <= v1 <= v2 a[from] = v0; a[to - 1] = v2; var pivot = v1; var low_end = from + 1; // Upper bound of elements lower than pivot. var high_start = to - 1; // Lower bound of elements greater than pivot. a[third_index] = a[low_end]; a[low_end] = pivot; // From low_end to i are elements equal to pivot. // From i to high_start are elements that haven't been compared yet. partition: for (var i = low_end + 1; i < high_start; i++) { var element = a[i]; var order = comparefn(element, pivot); if (order < 0) { a[i] = a[low_end]; a[low_end] = element; low_end++; } else if (order > 0) { do { high_start--; if (high_start == i) break partition; var top_elem = a[high_start]; order = comparefn(top_elem, pivot); } while (order > 0); a[i] = a[high_start]; a[high_start] = element; if (order < 0) { element = a[i]; a[i] = a[low_end]; a[low_end] = element; low_end++; } } } if (to - high_start < low_end - from) { QuickSort(a, high_start, to); to = low_end; } else { QuickSort(a, from, low_end); from = high_start; } } }; InsertionSort([1, 2, 3, 4, 5], 0, 5); //QuickSort is recursive and calls Insertion sort, so you'll have multiple logs for this one QuickSort([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13], 0, 13); //You'll see that for arrays bigger than 10, QuickSort is called. var srt = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13].sort(function() { return -1 }) console.log(srt) And JavaScriptCore uses merge sort. You can find this algorithm here: http://trac.webkit.org/browser/trunk/Source/JavaScriptCore/builtins/ArrayPrototype.js function min(a, b) { return a < b ? a : b; } function merge(dst, src, srcIndex, srcEnd, width, comparator) { var left = srcIndex; var leftEnd = min(left + width, srcEnd); var right = leftEnd; var rightEnd = min(right + width, srcEnd); for (var dstIndex = left; dstIndex < rightEnd; ++dstIndex) { if (right < rightEnd) { if (left >= leftEnd || comparator(src[right], src[left]) < 0) { dst[dstIndex] = src[right++]; continue; } } dst[dstIndex] = src[left++]; } } function mergeSort(array, valueCount, comparator) { var buffer = []; buffer.length = valueCount; var dst = buffer; var src = array; for (var width = 1; width < valueCount; width *= 2) { for (var srcIndex = 0; srcIndex < valueCount; srcIndex += 2 * width) merge(dst, src, srcIndex, valueCount, width, comparator); var tmp = src; src = dst; dst = tmp; } if (src != array) { for (var i = 0; i < valueCount; i++) array[i] = src[i]; } return array; } console.log(mergeSort([1, 2, 3, 4, 5], 5, function() { return -1; })) Again these may not be exactly the functions used in each browser, but it shows you how different algorithms will behave if you don't actually compare values.
{ "pile_set_name": "StackExchange" }
Q: Error calling LibreOffice from Python Calling LibreOffice to convert a document to text... This works fine from the linux command line: soffice --headless --convert-to txt:"Text" document_to_convert.doc But I get an error when I try to run the same command from Python: subprocess.call(['soffice', '--headless', '--convert-to', 'txt:"Text"', 'document_to_convert.doc']) Error: Please reverify input parameters... How do I get the command to run from Python? A: This is the code you should use: subprocess.call(['soffice', '--headless', '--convert-to', 'txt:Text', 'document_to_convert.doc']) This is the same line you posted, without the quotes around txt:Text. Why are you seeing the error? Simply put: because soffice does not accept txt:"Text". It only accepts txt:Text. Why is it working on the shell? Your shell implicitly removes quotes around arguments, so that the command that gets executed is actually: soffice --headless --convert-to txt:Text document_to_convert.doc Try running this command: soffice --headless --convert-to txt:\"Text\" document_to_convert.doc Quotes won't be removed and you'll see the Please verify input parameters message you are getting with Python.
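You can watch that quote removal happen from Python itself with shlex, which follows essentially the same quoting rules as the shell (a small illustration, nothing more):

import shlex

# What the shell hands to soffice after removing the quotes:
print(shlex.split('soffice --headless --convert-to txt:"Text" document_to_convert.doc'))
# -> ['soffice', '--headless', '--convert-to', 'txt:Text', 'document_to_convert.doc']

# When you build the argument list yourself, no shell is involved,
# so write each argument exactly as the program should receive it:
# subprocess.call(['soffice', '--headless', '--convert-to', 'txt:Text', 'document_to_convert.doc'])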
{ "pile_set_name": "StackExchange" }
Q: linkedin hiding/editing short URL I was looking through different forums, but didn't find anything related to my small issue... When I share a link, LinkedIn shows a clickable title with the domain underneath (e.g. google.com), and that google.com is rendered in a small gray font. Is it possible somehow to hide/edit/ask LinkedIn not to show that small gray formatted domain of the submitted link? It's pretty crucial for my project, and any help would be highly appreciated... A: This isn't possible using the LinkedIn API (nor using the website). The only thing I could think of would be to use your own URL shortener and share that link instead.
{ "pile_set_name": "StackExchange" }
Q: Comparison of topology of pointwise convergence and compact-open topologies for Sierpiński space Let $\{0,1\}$ be equipped with the Sierpiński topology $\{\emptyset, \{0,1\},\{1\}\}$, and $\mathbb{R}^d$ with the usual Euclidean topology. Then is the pointwise-convergence (point-open) topology on $C(\mathbb{R}^d,\{0,1\})$ indeed weaker than the compact-open topology? I have in mind the case where $d>1$. A: Functions $\mathbb R^d\to \{0,1\}$ are indicator functions $f=I_{A}$ with $A=f^{-1}(\{1\})$, and continuity with respect to the Sierpiński topology means precisely that $A$ is open in $\mathbb R^d$. The topology of pointwise convergence on $C(\mathbb R^d,\{0,1\})$ is strictly coarser than the compact-open topology: since the only interesting open set in $\{0,1\}$ is $\{1\}$, the sets $W(K)=\{I_B: K\subseteq B\}$ ($K$ compact) form a base of the compact-open topology, and the sets $W(E)$ with finite $E$ form a base of the point-open topology (that name is probably not very common). To show the claim, one has to find a compact set $K$ such that $W(K)$ does not contain any $W(E)$ for a finite set $E$. This is the case, e.g., for the closed unit ball $K$ in $\mathbb R^d$: for any finite $E$, take a union $B$ of small open balls centered at the points of $E$ so that the volume of $B$ is strictly less than the volume of $K$. Then $I_B \in W(E) \setminus W(K)$.
{ "pile_set_name": "StackExchange" }
Q: Why does Apache ignore my Directory block? I just moved my projects into a new workstation. I'm having trouble getting my Apache installation to acknowledge my .htaccess files. This is my /etc/apache2/conf.d/dev config file: <Directory /home/codemonkey/dev/myproject/> Options -Indexes AllowOverride All Order Allow,Deny Deny from all </Directory> I know the config file is being included by Apache because it complains if I put erroneous syntax in it (Action 'configtest' fails). My project is reachable through Apache by a symlink in the /var/www directory. The server is running with my user and group, so it has my permissions. My entire dev folder has permissions set to 770 recursively. Despite all this, I'm still getting an indexed display of my project folder when I visit http://localhost/myproject. Why isn't the above config making it impossible to view the folder in the browser? A: As clearly documented, symlinks followed by apache do not transfer access responsibility to the target directory.
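In other words, Apache matches <Directory> sections against the path it actually serves (the one reached through /var/www), not against the symlink's resolved target, so the block above never applies to requests for http://localhost/myproject. Assuming the symlink is /var/www/myproject (adjust to your actual layout), a block written against that path is what should take effect:

<Directory /var/www/myproject/>
    Options -Indexes
    AllowOverride All
    Order Allow,Deny
    Deny from all
</Directory>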
{ "pile_set_name": "StackExchange" }
Q: Fixing the sub-menu for a child for a single drop down menu Hello please I need some help with making a child sub-menu to be visible when I mouse hover on the "Accesorii Gaming", its child sub-menu is: Ochelari VR Gaming Gamepad / Controler Volane Casti Console Streaming and it should be visible on hover. I am doing a CSS battle here and I think I am loosing it... Thank you for your support HTML + CSS3 is here: https://codepen.io/tosasilviu/pen/vYBMWyM or below... * { margin: 0; padding: 0; } li { list-style: none; } .font-produse-h { font-family: "Bungee"; font-size: 45px; text-shadow: 3px 3px 0px #08b870; color: #000; margin: 50px 0 20px 50px; } /* Menu Container */ .navigation { display: inline-block; position: relative; z-index: 10; margin-left: 50px; } /* Menu List */ .navigation > li { display: block; float: left; } /* Menu Links */ .navigation > li > a { position: relative; display: block; z-index: 20; width: 130px; height: 54px; padding: 0 20px; line-height: 54px; font-family: "Quicksand"; font-weight: bold; font-size: 15px; color: #fcfcfc; text-shadow: 0 0 1px rgba(0, 0, 0, 0.35); background: #08b870; border-left: 1px solid #06e187; text-align: center; -webkit-transition: all 0.3s ease; -moz-transition: all 0.3s ease; -o-transition: all 0.3s ease; -ms-transition: all 0.3s ease; transition: all 0.3s ease; } .navigation > li:hover > a { background: #0b8452; } .navigation > li:first-child > a { border-radius: 6px 0px 0px 6px; border-left: 0px; } .navigation > li:last-child > a { border-radius: 0px 6px 6px 0px; } /* Menu Dropdown */ .navigation > li > div { position: absolute; display: block; color: #fff; width: 170px; top: 50px; opacity: 0; visibility: hidden; overflow: hidden; background: #0b8452; border-radius: 0 0 3px 3px; -webkit-transition: all 0.3s ease 0.15s; -moz-transition: all 0.3s ease 0.15s; -o-transition: all 0.3s ease 0.15s; -ms-transition: all 0.3s ease 0.15s; transition: all 0.3s ease 0.15s; } /* Menu Dropdown Child */ .navigation-child { position: absolute; width: 170px; left: 170px; top: 74px; display: none; } .navigation-column-child { color: #fff; text-decoration: none; font-size: 15px; font-weight: bold; font-family: "Quicksand"; display: block; padding: 17px 0 17px 0; text-align: center; border-bottom: 1px solid #085b39; } .navigation-column-link { color: #fff; text-decoration: none; font-size: 15px; font-weight: bold; font-family: "Quicksand"; display: block; padding: 17px 0 17px 0; text-align: center; border-bottom: 1px solid #085b39; } li:hover > a { background: #045030; } .navigation-column { margin-top: 20px; } .navigation > li:hover > div { opacity: 1; visibility: visible; overflow: visible; } <!DOCTYPE html> <html lang="ro"> <head> <title>Meniu CSS Dropdown</title> <link rel="stylesheet" href="css/style.css" /> <link href="https://fonts.googleapis.com/css?family=Bungee&display=swap" rel="stylesheet"> <link href="https://fonts.googleapis.com/css?family=Quicksand&display=swap" rel="stylesheet"> </head> <body> <h1 class="font-produse-h">Produse</h1> <ul class="navigation"> <li> <a href="#">Gaming</a> <div> <div class="navigation-column"> <ul> <li><a class="navigation-column-link" href="#">Console Gaming</a></li> <li><a class="navigation-column-child" href="#">Accesorii Gaming</a> <ul class="navigation-child"> <li><a class="navigation-column-link" href="#">Ochelari VR Gaming</a></li> <li><a class="navigation-column-link" href="#">Gamepad / Controler</a></li> <li><a class="navigation-column-link" href="#">Volane</a></li> <li><a 
class="navigation-column-link" href="#">Casti Console</a></li> <li><a class="navigation-column-link" href="#">Streaming</a></li> </ul> </li> <li><a class="navigation-column-link" href="#">Scaune Gaming</a></li> <li><a class="navigation-column-link" href="#">Licente Electronice</a></li> <li><a class="navigation-column-link" href="#">Jocuri Console &amp; PC</a></li> <li><a class="navigation-column-link" href="#">PC Gaming</a></li> <li><a class="navigation-column-link" href="#">Accesorii PC</a></li> <li><a class="navigation-column-link" href="#">Resigilate</a></li> </ul> </div> </div> </li> <li><a href="#">Gaming 2</a></li> <li><a href="#">Gaming 3</a></li> <li><a href="#">Gaming 4</a></li> <li><a href="#">Gaming 5</a></li> </ul> </body> </html> A: You are on the right path. You can go deeper in the css selector to make a hover statement on each navigation item and change whatever code you want from there to display the child ul. Simple example below: .navigation li ul li:hover ul { display: block; }
{ "pile_set_name": "StackExchange" }
Q: Getting HWND of current Process I have a process in c++ in which I am using window API. I want to get the HWND of own process. Kindly guide me how can I make it possible. A: If you're talking about getting a process handle, then it's not an HWND (which is a window handle), but a HANDLE (i.e., a kernel object handle); to retrieve a pseudo-handle relative to the current process, you can use GetCurrentProcess as the others explained. On the other hand, if you want to obtain an HWND (a window handle) to the main window of your application, then you have to walk the existing windows with EnumWindows and to check their ownership with GetWindowThreadProcessId, comparing the returned process ID with the one returned by GetCurrentProcessId. Still, in this case you'd better to save your main window handle in a variable when you create it instead of doing all this mess. Anyhow, keep always in mind that not all handles are the same: HANDLEs and HWNDs, in particular, are completely different beasts: the first ones are kernel handles (=handles to kernel-managed objects) and are manipulated with generic kernel-handles manipulation functions (DuplicateHandle, CloseHandle, ...), while the second ones are handles relative to the window manager, which is a completely different piece of the OS, and are manipulated with a different set of functions. Actually, in theory an HWND may have the same "numeric" value of a HANDLE, but they would refer to completely different objects. A: You are (incorrectly) assuming that a process has only a single HWND. This is not generally true, and therefore Windows can't offer an API to get it. A program could create two windows, and have two HWNDs as a result. OTOH, if your program creates only a single window, it can store that HWND in a global variable. A: Get your console window GetConsoleWindow(); "The return value is a handle to the window used by the console associated with the calling process or NULL if there is no such associated console." https://msdn.microsoft.com/en-us/library/windows/desktop/ms683175(v=vs.85).aspx Get other windows GetActiveWindow() might NOT be the answer, but it could be useful "The return value is the handle to the active window attached to the calling thread's message queue. Otherwise, the return value is NULL." > msdn GetActiveWindow() docs However, the graphical windows are not just popping up - so you should retrieve the handle from the place you/your app've created the window... e.g. CreateWindow() returns HWND handle so all you need is to save&retrieve it...
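To make the EnumWindows / GetWindowThreadProcessId approach described above concrete, here is a minimal sketch. Treating "visible, unowned, top-level" as the definition of the main window is only a common heuristic, and the function names on my side are made up:

#include <windows.h>

struct FindWindowData {
    DWORD pid;   // process we are looking for
    HWND  hwnd;  // first matching top-level window, if any
};

static BOOL CALLBACK EnumProc(HWND hwnd, LPARAM lParam) {
    FindWindowData* data = reinterpret_cast<FindWindowData*>(lParam);
    DWORD windowPid = 0;
    GetWindowThreadProcessId(hwnd, &windowPid);
    // Keep only visible, unowned top-level windows of our process.
    if (windowPid == data->pid && IsWindowVisible(hwnd) &&
        GetWindow(hwnd, GW_OWNER) == NULL) {
        data->hwnd = hwnd;
        return FALSE;  // stop enumerating
    }
    return TRUE;       // keep enumerating
}

HWND GetMainWindowOfCurrentProcess() {
    FindWindowData data = { GetCurrentProcessId(), NULL };
    EnumWindows(EnumProc, reinterpret_cast<LPARAM>(&data));
    return data.hwnd;  // NULL if the process has no such window
}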
{ "pile_set_name": "StackExchange" }
Q: Apply functions in a dataset I'm trying to apply multiple functions in a CSV document. I would like to have a first function that resends the data to other functions according to the value of your column Data (test.csv): sentence,language .,fr .,en .,en .,it .,es .,fr .,fr .,fr .,es .,ge .,fr .,fr "Prezzi",it "it's not expensive",en "prix à baisser",fr "casi 50 euros la alfombra es cara",es "Prix,fr "PREZZI più bassi",it "Preis",ge "Precio",es "Price",en "es ist nicht teuer",fr Script: import string import pandas as pd def main(dataset): dataset = pd.read_csv(dataset, sep =',') text = dataset['sentence'] language = dataset['language'] for language in dataset: if language == 'fr': cleanText_FR() if language == 'es': cleanText_ES() if language == 'it': cleanText_IT() if language == 'en': cleanText_EN() if language == 'ge': cleanText_EN() def cleanText_FR(): text_lower = text.str.lower() punct = string.punctuation pattern = r"[{}]".format(punct) text_no_punct = text_lower.str.replace(pattern, ' ') text_no_blancks = text_no_punct.replace('\s+', ' ', regex=True) text_no_blancks = text_no_blancks.str.rstrip() text_no_duplicate = text_no_blancks.drop_duplicates(keep=False) text_cluster_random = text_no_small.sample(n=1000) text_list = text_cluster_random.tolist() return text_list def cleanText_ES(): text_lower = text.str.lower() punct = string.punctuation pattern = r"[{}]".format(punct) text_no_punct = text_lower.str.replace(pattern, ' ') text_no_blancks = text_no_punct.replace('\s+', ' ', regex=True) text_no_blancks = text_no_blancks.str.rstrip() text_no_duplicate = text_no_blancks.drop_duplicates(keep=False) text_cluster_random = text_no_small.sample(n=1000) text_list = text_cluster_random.tolist() return text_list def cleanText_IT(): text_lower = text.str.lower() punct = string.punctuation pattern = r"[{}]".format(punct) text_no_punct = text_lower.str.replace(pattern, ' ') text_no_blancks = text_no_punct.replace('\s+', ' ', regex=True) text_no_blancks = text_no_blancks.str.rstrip() text_no_duplicate = text_no_blancks.drop_duplicates(keep=False) text_cluster_random = text_no_small.sample(n=1000) text_list = text_cluster_random.tolist() return text_list def cleanText_EN(): text_lower = text.str.lower() punct = string.punctuation pattern = r"[{}]".format(punct) text_no_punct = text_lower.str.replace(pattern, ' ') text_no_blancks = text_no_punct.replace('\s+', ' ', regex=True) text_no_blancks = text_no_blancks.str.rstrip() text_no_duplicate = text_no_blancks.drop_duplicates(keep=False) text_cluster_random = text_no_small.sample(n=1000) text_list = text_cluster_random.tolist() return text_list def cleanText_GE(): text_lower = text.str.lower() punct = string.punctuation pattern = r"[{}]".format(punct) text_no_punct = text_lower.str.replace(pattern, ' ') text_no_blancks = text_no_punct.replace('\s+', ' ', regex=True) text_no_blancks = text_no_blancks.str.rstrip() text_no_duplicate = text_no_blancks.drop_duplicates(keep=False) text_cluster_random = text_no_small.sample(n=1000) text_list = text_cluster_random.tolist() return text_list main("test.csv") I did not have any results In [3]: runfile('/home/marin/Bureau/preprocess/preprocess.py', wdir='/home/marin/Bureau/preprocess') In [4]: And I hoped I could have all my data treated as output. My question is not a duplicate! It's Python not R! 
A: Iterate over your DataFrame with .iterrows() as follows: dataset = pd.read_csv(dataset, sep =',') for num, row in dataset.iterrows(): text = row['sentence'] language = row['language'] #if statements and language clean method calls go here
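To sketch how that loop can dispatch to the right cleaning routine, here is a minimal, self-contained illustration (my own addition, not code from the answer). The clean_text helper and the simplified cleaning steps are assumptions made for the example, and each sentence is passed in explicitly instead of being read from a shared text variable as in the question:

import re
import string
import pandas as pd

def clean_text(sentence):
    # Shared steps: lowercase, strip punctuation, collapse whitespace
    sentence = sentence.lower()
    sentence = re.sub(r"[{}]".format(re.escape(string.punctuation)), " ", sentence)
    return re.sub(r"\s+", " ", sentence).strip()

# One entry per language code; the functions can diverge later if needed
cleaners = {"fr": clean_text, "es": clean_text, "it": clean_text, "en": clean_text, "ge": clean_text}

def main(path):
    dataset = pd.read_csv(path, sep=",")
    cleaned = []
    for num, row in dataset.iterrows():
        func = cleaners.get(row["language"])
        if func is not None:
            cleaned.append(func(str(row["sentence"])))
    return cleaned

print(main("test.csv"))

If you prefer to handle all rows of one language at once, dataset.groupby("language") gives you one sub-DataFrame per language, and the same dictionary lookup applies.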
{ "pile_set_name": "StackExchange" }
Q: Brightness Profiles Ubuntu 14.04.5 Usually, laptop screens are dim set to the lowest level of brightness when the charger is not present, and brighten up to the max level if the charger is present/connected. However, on my Dell Inspiron 7559 Skylake laptop, this does not happen. My laptop is always on 100% brightness. How can I retrieve this "feature"? A: Introduction The script below is modified version of my previous scripts, written in python and using dbus exclusively for polling ac_adapter presence and setting the screen brightness. Usage Usage is simple: call from command line as python ./brightness_control.py The script defaults to 100% brightness on AC , 10% on battery. Users can use -a and -b to set their desired brightness levels on ac and battery respectively. AS given by -h option: $ ./brightness_control.py -h usage: brightness_control.py [-h] [-a ADAPTER] [-b BATTERY] Simple brightness control for laptops, depending on presense of AC power supply optional arguments: -h, --help show this help message and exit -a ADAPTER, --adapter ADAPTER brightness on ac -b BATTERY, --battery BATTERY brightness on battery For example, one can do any of the following: # set non default for brightness on ac $ ./brightness_control.py -a 80 # set non-default value for brightness on battery $ ./brightness_control.py -b 20 # set non-default values for both $ ./brightness_control.py -a 80 -b 20 Source Also available on GitHub #!/usr/bin/env python """ Author: Serg Kolo <[email protected]> Date: Nov 3rd , 2016 Purpose:Brightness control depending on presence of ac adapter Written for: http://askubuntu.com/q/844193/295286 """ import argparse import dbus import time import sys def get_dbus_property(bus_type, obj, path, iface, prop): """ utility:reads properties defined on specific dbus interface""" if bus_type == "session": bus = dbus.SessionBus() if bus_type == "system": bus = dbus.SystemBus() proxy = bus.get_object(obj, path) aux = 'org.freedesktop.DBus.Properties' props_iface = dbus.Interface(proxy, aux) props = props_iface.Get(iface, prop) return props def get_dbus_method(bus_type, obj, path, interface, method, arg): """ utility: executes dbus method on specific interface""" if bus_type == "session": bus = dbus.SessionBus() if bus_type == "system": bus = dbus.SystemBus() proxy = bus.get_object(obj, path) method = proxy.get_dbus_method(method, interface) if arg: return method(arg) else: return method() def on_ac_power(): adapter = get_adapter_path() call = ['system','org.freedesktop.UPower',adapter, 'org.freedesktop.UPower.Device','Online' ] if get_dbus_property(*call): return True def get_adapter_path(): """ Finds dbus path of the ac adapter device """ call = ['system', 'org.freedesktop.UPower', '/org/freedesktop/UPower','org.freedesktop.UPower', 'EnumerateDevices',None ] devices = get_dbus_method(*call) for dev in devices: call = ['system','org.freedesktop.UPower',dev, 'org.freedesktop.UPower.Device','Type' ] if get_dbus_property(*call) == 1: return dev def set_brightness(*args): call = ['session','org.gnome.SettingsDaemon.Power', '/org/gnome/SettingsDaemon/Power', 'org.gnome.SettingsDaemon.Power.Screen', 'SetPercentage', args[-1] ] get_dbus_method(*call) def parse_args(): info = """ Simple brightness control for laptops, depending on presense of AC power supply """ arg_parser = argparse.ArgumentParser( description=info, formatter_class=argparse.RawTextHelpFormatter) arg_parser.add_argument( '-a','--adapter',action='store', type=int, help='brightness on ac', default=100, required=False) 
arg_parser.add_argument( '-b','--battery',action='store', type=int, help='brightness on battery', default=10, required=False) return arg_parser.parse_args() def main(): args = parse_args() while True: if on_ac_power(): set_brightness(args.adapter) while on_ac_power(): time.sleep(1) else: set_brightness(args.battery) while not on_ac_power(): time.sleep(1) if __name__ == "__main__": main()
{ "pile_set_name": "StackExchange" }
Q: Which AR (Augmented Reality) framework (Android & iOS) is suitable for my situation My Situation I want to build an application that can recognize an image and produce a corresponding model, i.e. I point the camera at a printed image on a card that I designed myself (an apple logo), and it then shows a 3D model (.md2) on the screen, which I also designed myself. I have googled many frameworks that work on both Android & iOS, but the documentation is very limited and the trial versions do not let me test this. For example, http://www.metaio.com/sdk/, but their demo is not comprehensive enough to suit my situation. My Questions 1. Can anyone share their experience of developing with an AR framework (not the AR core) on Android & iOS? 2. Is there any framework that lets me add an image as a key which then maps to my model with just a couple of lines of code? 3. If Q2 is not possible, is there an approach in some framework that can still achieve the same goal, even if it is more complex? //Logic flow String key = "APPLE"; sdk.putKeyImage(key,apple.png); ... if (sdk.identifiedAs(key)){ //Do something //Example sdk.showApple3DModel(); play(showSnakeEatApple.mp4); } A: I'm an Android app developer, so based on my experience on Android you can go for Vuforia (https://developer.vuforia.com/resources/sdk/android) or Wikitude (http://www.wikitude.com/developer/documentation/android). These two are good enough to implement an AR app on Android. Vuforia is well documented, has more libraries, and most of them are free, while Wikitude is best for building apps quickly. I recommend using either of these for your AR app development on Android.
{ "pile_set_name": "StackExchange" }
Q: Is it faster to check that a Date is (not) NULL or compare a bit to 1/0? I'm just wondering what is faster in SQL (specifically SQL Server). I could have a nullable column of type Date and compare that to NULL, or I could have a non-nullable Date column and a separate bit column, and compare the bit column to 1/0. Is the comparison to the bit column going to be faster? A: In order to check that a column IS NULL SQL Server would actually just check a bit anyway. There is a NULL BITMAP stored for each row indicating whether each column contains a NULL or not. A: I just did a simple test for this: DECLARE @d DATETIME ,@b BIT = 0 SELECT 1 WHERE @d IS NULL SELECT 2 WHERE @b = 0 The actual execution plan results show the computation as exactly the same cost relative to the batch. Maybe someone can tear this apart, but to me it seems there's no difference. MORE TESTS SET DATEFORMAT ymd; CREATE TABLE #datenulltest ( dteDate datetime NULL ) CREATE TABLE #datebittest ( dteDate datetime NOT NULL, bitNull bit DEFAULT (1) ) INSERT INTO #datenulltest ( dteDate ) SELECT CASE WHEN CONVERT(bit, number % 2) = 1 THEN '2010-08-18' ELSE NULL END FROM master..spt_values INSERT INTO #datebittest ( dteDate, bitNull ) SELECT '2010-08-18', CASE WHEN CONVERT(bit, number % 2) = 1 THEN 0 ELSE 1 END FROM master..spt_values SELECT 1 FROM #datenulltest WHERE dteDate IS NULL SELECT 2 FROM #datebittest WHERE bitNull = CONVERT(bit, 1) DROP TABLE #datenulltest DROP TABLE #datebittest dteDate IS NULL result: bitNull = 1 result: OK, so this extended test comes up with the same responses again. We could do this all day - it would take some very complex query to find out which is faster on average. A: All other things being equal, I would say the Bit would be faster because it is a "smaller" data type. However, if performance is very important here (and I assume it is because of the question) then you should always do testing, as there may be other factors such as indexes, caching that affect this. It sounds like you are trying to decide on a datatype for field which will record whether an event X has happened or not. So, either a timestamp (when X happened) or just a Bit (1 if X happened, otherwise 0). In this case I would be tempted to go for the Date as it gives you more information (not only whether X happened, but also exactly when) which will most likely be useful in the future for reporting purposes. Only go against this if the minor performance gain really is more important.
{ "pile_set_name": "StackExchange" }
Q: Should I include a "back" button on a responsive website? Do you believe there should be a back button on a responsive website or should we rely on the browser's controls? If you believe there needs to be a button in the design... Would you have it appear all the time or only on small (smartphone) screens? Do you have any examples of websites that do this well? FYI I'm working on a website which is more a tool than a content website. Users will need to be logged in and they'll probably visit this website several times a week. I've found this fairly recent post about the topic, and although the answer is interesting, I don't see any studies/articles to support the points that are made. Edit: there is a similar post about this but it's nearly 7 years old so it's probably not relevant anymore A: It's important to recognize the difference between an "back in hierarchy" button (also called an "up button") and a "back in time" button. A "back in time" button is what you find in web browsers and on Android phones. You can rely on it being there, but you'll never know where it leads. A person could have visited the page from your app, but also from an email link, a social network, etc. That means you still have to include a way to navigate to different sections of your app. One way to do that is by including an up button, just like mobile apps do. There are also other ways—for example, if your navigation hierarchy is relatively flat, a logo to get to the homepage or a hamburger menu to get to different top-level pages might suffice.
{ "pile_set_name": "StackExchange" }
Q: Tutorial for OpenLayers? I need to create a web map showing many raster layers. I am using mapserver and want to use Openlayers, however I can't find any good tutorial about it. I see a couple of old questions (1 and 2) saying that there were no good documentation. Have the things changed recently? I would like to find a tutorial teaching from basics to rather complicated things with good explanations of the code and pictures/examples of the results. For now I managed to do only the simplest web map with my .map file, but I need to customize it (add legends, group layers, add more controls, embedding, etc). A: In addition to the above excellent answers, let me add my own experience. A year and a half ago I decided I wanted to use OpenLayers (OL) in my Master's project and set out to learn it. I have been doing programming and digital map making as part of my work as an archaeologist since the early 1980s, and have been an ArcGIS user for 15 years. I am happy I choose OL for my project, but it wasn't always a smooth path learning it. Some things were not obvious and learned only by trial and error. So, I have some advice for beginners. My journey learning OL really got going when I signed up for a 5-slot bookshelf account on Safari Books Online for $10/month USD. I wanted to review books before buying, and few stores one can visit carry GIS-related computer books. There are three books out now on OL 2. A newer 58-page book called Instant OpenLayers Starter by Di Lorenzo and Allegri (Apr 2013) is a good quick start, but the first two books and their code samples (available from the publisher's web site, along with a free sample chapter of each book) were good resources: OpenLayers 2.10: Beginner's Guide by Erik Hazzard (March 2011) OpenLayers Cookbook by Antonio Santiago Perez (August 2012) Due to occasional frustrations over css and browser compatibility, I ended up learning a JavaScript framework as well. I choose Dojo because this is what Perez used in his book. Modern Dojo (Dojo 1.7 +) is a significantly different approach from earlier versions, using an Asynchronous Module Definition (AMD) format. The way of doing everything changed. I did not understand that this otherwise excellent book uses a pre-1.7 version of Dojo that was made obsolete 9 months before the book was published in August 2012. Esri continued using the pre-1.7 Dojo in their JavaScript ArcGIS API until modernizing in June 2012, and this was a painful switch for many ArcGIS Javascript developers. To understand how poorly supported Dojo is, other than a book written in 2010 about Dojo 1.3, most of the books were written in 2007 and 2008. There are no published books for Modern Dojo--you must learn from online resources, almost all of which are on their web site. Basically, to work with the examples in Perez's book, you need to know enough about JavaScript to ignore the Dojo bits and move the examples into plain JavaScript or your framework of choice. In retrospect, I wish I had gone with the ExtJS framework and GeoExt. ExtJS is free if your project is open source, and because many companies happily pay for a supported version, they can afford to spend time on comprehensive web site documentation and tutorials. I learn best by working with/hacking apart examples. The developers at OpenLayers have this same philosophy as the primary documentation they point to for learning is examining the examples. However, some OL examples on their web site and elsewhere have issues that can make beginners stumble. (See below.) 
The reliance on examples as documentation also means that the user does not have a sense of a good work flow for developing a web map. This can lead to making maps that feel incomplete--for instance, they may lack css customizations to the maps user interface and "look and feel." Overriding the OL css with customizations feels daunting to the beginner, but Firebug can help you find the element names that you need to override. The lack of a sense of an accepted work flow can also lead to the creation of Frankencode, as users shoehorn features into their code as they find they want it. This leads me to the last item that I feel the OL site documentation lacks, a sense of "best practices" for OL maps. Is there a better way to organize my code to make it modular and robust? What are the pitfalls with JavaScript closures and OL objects? Where should I declare my styles? And so on. Other than the various outdated files in the Wiki, there are two general issues a beginner should be aware of when learning from the official OL examples and API documents. First, there is no organization to the page of OL development examples on the OL web site. It is simply presenting the feed from the xml file in that directory (example-list.xml) of the examples (207 of these as of 13 Feb 2014) and sorting the rows alphabetically by filename into a grid. More advanced examples are mixed with basic ones. You can search the examples by keyword, but many of the examples lack keywords and the search feature includes content and page title in the search, not just keywords. The results are returned with highest number of search terms matched first followed by word frequency. Only one of the search terms need be a match to show up in the results. The UserRecipes page on the OL Wiki lists about 90 examples organized by category, and this categorization is a help. Of these, 66 are live links to the examples on the examples page and the rest are bad links to removed examples. Second, there are basically two versions of the API documentation which appear to be the same at first glance. The official API is in a directory called /apidocs and the bleeding edge, but volatile, developer library is in a directory called /docs. The URLs are the same otherwise. (There are also trunk versions.) Just edit the address of the page to see the other version. The Wiki notes that the developer library should not be relied upon as properties, functions, etc. might be removed from the library at any time. With OpenLayers 3 being close to reality (it is available in beta and there is a book out on it now), I suspect that not too much will change in OL 2 in the future. The focus now is on OL 3. In general, I find the OL API pages to be overly skeletal, often lacking explanations or illustrative examples, especially for someone used who is to libraries with more complete API documentation. The way it is presented you don't get a clear picture of the object it is inheriting from. Of the OL examples on the web in general, many use objects or syntax that has been deprecated because it has been replaced by improved versions. For instance, Layer.Vector is now the preferred way of drawing markers as Layer.Marker is deprecated in version 3. Examine the file deprecated.js to make sure you are not using objects that are on the way out. Or, at least be aware if you upgrade your code to OL 3 you will need to change this. 
In addition to the Boundless OpenLayers workshop linked by Julien-Samuel Lacroix above, IBM has a cool tutorial, albeit three years old, that uses OpenLayers, MapServer, Google Gears, and jQuery to build a complete GIS web app: Bring data together with OpenLayers: Using data from multiple divergent sources in web maps Also, check out this useful post on styling the layer switcher Google the words OpenLayers and jsFiddle to get some examples of OL fiddles. The result from the techslides site is a page listing quite a few of these. Last, beware that the map images in most examples are from OpenStreetMap (OSM) servers and these go down every now and then, planned or unplanned, and you will get pink tiles in their place. Sometimes you will think you screwed up your code. You can check platform status on the OpenStreetMap wiki. A: While the others have suggested good online tutorials, let me tell you about the book that gave me a much needed strong foundation in OpenLayers. The book is: Erik Hazzard's OpenLayers 2.10 Beginner's Guide. It is available from Packt Publishers. I would strongly recommend the book, because it deals with all major parts of the Library. It starts from basics, and slowly helps you grow towards complicated parts of the API. A: Look at the Boundless OpenLayers workshop. It covers a lot of material. The workshop is using GeoServer instead of MapServer, but you can simply change the URL of the example to your MapServer WMS service.
{ "pile_set_name": "StackExchange" }
Q: What useful properties does the canonical link function have? So here I am studying generalized linear models. I know this question is quite naive and simple, but I do not exactly know why the canonical link function is so useful. Could someone provide me an intuition on this problem? A: I know this question is quite naive and simple, but I do not exactly know why the canonical link function is so useful Is it really so useful? A link function being canonical is mostly a mathematical property. It simplifies the mathematics somewhat, but in modeling you should anyhow use the link function that is scientifically meaningful. So what extra properties does a canonical link function have? It leads to the existence of sufficient statistics. That could imply somewhat more efficient estimation, maybe, but modern software (such as glm in R) does not seem to treat canonical links differently from other links. It simplifies some formulas, so theoretical developments are eased. Many nice mathematical properties, see What is the difference between a "link function" and a "canonical link function" for GLM. So advantages seem to be mostly mathematical and algorithmic, not really statistical. Some more details: Let $Y_1, \dotsc, Y_n$ be independent observations from the exponential dispersion family model $$ f_Y(y;\theta,\phi)=\exp\left\{(y\theta-b(\theta))/a(\phi) + c(y,\phi)\right\} $$ with expectation $\DeclareMathOperator{\E}{\mathbb{E}} \E Y_i=\mu_i$ and linear predictor $\eta_i = x_i^T \beta$ with covariate vector $x_i$. The link function is canonical if $\eta_i=\theta_i$. In this case the likelihood function can be written as $$ \mathcal{L}(\beta; \phi)=\exp\left\{ \sum_i \frac{y_i x_i^T \beta -b(x_i^T \beta)}{a(\phi)}+\sum_i c(y_i,\phi)\right\} $$ and by the factorization theorem we can conclude that $\sum_i x_i y_i$ is sufficient for $\beta$. Without going into details, the equations needed for IRLS will be simplified. Likewise, this google search mostly seems to find canonical links mentioned in the context of simplifications, and not any more statistical reasons. A: The canonical link function describes the mean-variance relationship in a GLM. For instance, a binomial random variable has inverse link function $\mu = \exp(\nu)/(1+\exp(\nu))$ where $\nu$ is a linear predictor $\mathbf{X}^T\beta$. Note that $\frac{\partial }{\partial \nu} \mu = \mu(1-\mu)$ which is the appropriate mean-variance relationship for a Bernoulli random variable. The same is true of Poisson random variables, where the inverse link function is $\mu = \exp(\nu)$ and $\frac{\partial }{\partial \nu} \mu = \mu$ where in a Poisson random variable, the variance is the mean. The generalized linear model solves an estimating equation of the form: $$ S(\beta) = D V^{-1} (Y - g(\mathbf{X}^T\beta))$$ where $D = \frac{\partial}{\partial \beta} g(\mathbf{X}^T\beta)$ and $V=\text{var}(Y)$. When the link is canonical, therefore, $D = V$ and the estimating function is: $$ S(\beta) = \mathbf{X}^{T}(Y - g(\mathbf{X}^T\beta))$$ As was noted in Wedderburn's 1974 paper on quasilikelihood, the canonical link has the advantage that expected and observed information are the same and that iteratively reweighted least squares is equivalent to Newton-Raphson, so this simplifies estimating procedures and variance estimation.
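To make the sufficiency point above concrete, here is a small worked instance added for illustration (it is not part of the original answers): take independent Poisson responses with the canonical log link, so that $$ f(y;\theta)=\exp\left\{y\theta-e^{\theta}-\log y!\right\},\qquad b(\theta)=e^{\theta},\quad a(\phi)=1,\quad \theta=\log\mu . $$ With $\theta_i=x_i^T\beta$ the log-likelihood of the sample is $$ \ell(\beta)=\sum_i\left(y_i\,x_i^T\beta-e^{x_i^T\beta}-\log y_i!\right), $$ which depends on the data only through $\sum_i x_i y_i$, and differentiating with respect to $\beta$ gives the score $\mathbf{X}^T(Y-\mu)$, matching the simplified estimating equation in the second answer.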
{ "pile_set_name": "StackExchange" }
Q: Dynamic graph visualisation using JLink/Java and GraphStream Visualising the addition of new nodes and edges to a graph to 'watch it grow' is to something Mathematica is not suited to by default. However this type of animation really helps convey the emergence of graph features (like clustering) to the untrained audience. There seems to be a lot of interest in doing this from the many users working with graphs in Mathematica. For example here are two questions on the topic (one, two). I am going to re-ask this basic question with some more specifics. It seems that JLink is a popular tool, and many users seem to be very comfortable with it and advocate more use (and programming in Java generally). For example, see Leonid's answer to a previous question. Unfortunately I am not a programmer at all and the documentation for JLink uses terminology that is over my head. I've hit a wall of frustration and really need a few pointers. There are some good Java tools for 2D dynamic graph visualisation. For example, GraphStream. So my question is, how do I visualise the growth of the below dynamic graph from this question using JLink and GraphStream? The third and forth elements of the lists are the beginning and end time periods when the edge exists). dynamicGraph = {{0, 1, 0, 20}, {1, 2, 1, Infinity}, {2, 3, 2, 21}, {3, 4, 3, 22}, {4, 5, 4, Infinity}, {5, 6, 5, Infinity}, {6, 7, 6, 26}, {7, 8, 7, 25}, {8, 9, 8, Infinity}, {9, 10, 9, 24}, {0, 6, 10, 27}, {1, 6, 11, Infinity}, {1, 5, 12, Infinity}, {2, 5, 13, Infinity}, {2, 4, 14, Infinity}, {6, 8, 15, Infinity}, {5, 8, 16, Infinity}, {5, 9, 17, Infinity}, {4, 9, 18, Infinity}, {4, 10, 19, 23}}; There are many parts to the question and many solutions I guess. Answers need only address one part or step in the process. How do actually use GraphStream with JLink? It consists of a bunch of .jar files, so if these are in the notebook directory I to start with something like Needs["JLink"]; InstallJava[]; AddToClassPath[NotebookDirectory[]];. Then what? A quick summary in layman's terms of basics of JLink would be great . Is a workable approach to generate graphs in Mathematica and export them to a file supported by GraphStream (eg. .dot, GraphML) then use the JLinked tool to read the file and display the animation (a little like Szabolic's answer here)? Yet I can't understand the format to make the graph in a readable dynamic format (just answering this part would be a huge help). Nor how to execute the code (ideally from within Mathematica). I hope this question will be of use to many others and answers that cover any steps in the process, or links to examples of using JLink that might aid my learning, would be greatly appreciated. A: I am writing this answer for a person who is familiar with Mathematica and has a good understanding of computer programming, but not so familiar with Java programming language. Using GraphStream is not so different from using any other Java library. You need to download the GraphStream core files from here and extract it. gs-core-1.1.2.jar is the only file you need. You can remove the rest of the files. Here is a minimal demo. 
Needs["JLink`"] (* Use InstallJava for the first time or see Todd's answer for how to use AddToClassPath *) ReinstallJava[ClassPath -> "/full/path/to/jar/file/gs-core-1.1.2.jar"] g = JavaNew["org.graphstream.graph.implementations.SingleGraph", "graph"] g@addNode["A"] g@addNode["B"] g@addEdge["AB", "A", "B"] g@display[] Remember to modify /full/path/to/jar/file/gs-core-1.1.2.jar to the correct one on your system. If you want to use multiple jar files, you need to separate the paths by : on unix like systems and ; on Windows, e.g., ClassPath -> "/path/to/jar1.jar:/path/to/jar2.jar" (we don't have multiple jar files here, but I mentioned it for the sake of completeness). The rest is just a translation from Java calls to Mathematica calls. Consider the following example from here: import org.graphstream.graph.*; import org.graphstream.graph.implementations.*; public class Tutorial1 { public static void main(String args[]) { Graph graph = new SingleGraph("Tutorial 1"); graph.addNode("A"); graph.addNode("B"); graph.addNode("C"); graph.addEdge("AB", "A", "B"); graph.addEdge("BC", "B", "C"); graph.addEdge("CA", "C", "A"); graph.display(); } } To translate it into Mathematica, the following tips might be useful: You can safely ignore public class XXX { ... and public static void main(String args[]) { lines. They are just the repeated parts in the main file of a Java program. Main files are actually the starting point of the Java programs. There is no such a thing in Mathematica. Creating new objects: To translate something like Graph graph = new SingleGraph("Tutorial 1"); into Mathematica, you first need to find the full class name of SingleGraph (attention: SingleGraph at the RHS of =, not Graph which is at the LHS) with the package name. To do so, you can either make a guess, or browse the javadoc. If you have a look at the first two lines of the above code, you may guess that SingleGraph is either imported from org.graphstream.graph or org.graphstream.graph.implementations, and if you guessed the second one, you are right. Once you found the full class name you can simple call g = JavaNew["org.graphstream.graph.implementations.SingleGraph", "graph"] to create a new object. Calling methods: graph.addNode("A"); can be simply be converted into mathematica like this: g@addNode["A"] Here is a sample code that imports a GraphML file: folder = "/path/to/a/folder/"; (* make sure it ends with a slash *) g = JavaNew["org.graphstream.graph.implementations.DefaultGraph", "demo-graph"]; fs = JavaNew["org.graphstream.stream.file.FileSourceGraphML"]; fs@addSink[g]; fs@readAll[folder <> "g.graphml"]; g@display[]; You can use Export[folder <> "g.graphml", RandomGraph[{50, 200}]] to generate a random graph. Appendix: General properties/tips about Java for a Mathematica programmer: Java is a compiled programming language. Java source files have a .java extension. Using the Java compiler, called javac, .java files are compiled into .class files. Class files are then executed using the Java Virtual Machine (JVM). From the command line, you can use the java command to run the class files. Jar files are essentially a bunch of .class files that are zipped. So, you can simply change the extension of a jar file to .zip and extract it using your favourite unzipper. To use Java in Mathematica, you need to load the JVM and the extra libraries you need (e.g., GraphStream jar file). However, keep in mind that even without loading extra libraries, you have access to the HUGE Java standard library. 
So, for instance, you can use Sockets or do some cryptography without any extra library. ClassPath is the set of paths from which the required Java classes are loaded. To use the extra libraries, you need to add it to the classpath. Despite Mathematica, which is mostly a functional language, Java is an Object Oriented language. Having some knowledge about OO programming is very useful. A: Nice answer by Mohsen, +1. I am continually impressed by the quality of the J/Link and .NET/Link expertise on this site. I have a couple remarks and then an example program. The question asked about some general tips for getting started with J/Link. This GraphStream library provides a perfect example for the typical workflow of a J/Link project. The following are exactly the steps I went through when tinkering with this, and I'm sure Mohsen did as well. Download the library materials. Generally this will be one or more .jar files Make sure J/Link can find the jar files, by calling AddToClassPath Locate the javadocs for the library. Keep these open as a handy reference Skim the documentation, looking for any Getting Started/Tutorial type of information Those steps might seem obvious, and could hardly be considered "advice", but the key point is that you are looking for a trivial example Java program. That is always the starting point. Once you find a small bit of sample code, you can translate it directly into Mathematica. Mohsen's first few lines of code, ending in g@display[], are right out of the "Getting Started" tutorial for GraphStream, literally the first lines of Java code they demonstrate. They translate directly into Mathematica almost trivially, as Mohsen describes (and the J/Link docs do, in more detail). Within a few minutes you have a Java window on your screen with a graph in it. This is an incredibly empowering feeling, and from there you can delve deeper into what the library provides. To do fancy things you will probably need to learn some subtleties of J/Link, and some familiarity with Java is extremely useful, but once you have something basic working, you can build from there. I have tinkered with many Java libraries using J/Link, and I almost always have something running within a few minutes. Although I know J/Link very well, it usually takes only the most basic J/Link knowledge to get that far. I strongly recommend not using ReinstallJava[ClassPath -> ...] for making Java libraries available to J/Link. Calling ReinstallJava is a destructive operation that you should only call if you absolutely need to. Any other Mathematica components or packages that are using J/Link might have some state wiped out if you restart Java. Instead, call AddToClassPath, which is exactly what you did in your question. What is particularly convenient about AddToClassPath is that you can give the directory in which a set of jar files reside, and all the jar files will be added. Here is a sample program that uses GraphStream to dynamically render the sample graph. It also displays the output of Mathematica's GraphPlot for comparison. Needs["JLink`"]; InstallJava[]; AddToClassPath["dir/with/graphstream/jar/files"]; (* We only ever need the node names as strings, so convert them ahead of time *) graphData = dynamicGraph /. 
{a_Integer, b_Integer, c_, d_} :> {ToString[a], ToString[b], c, d}; graph = JavaNew["org.graphstream.graph.implementations.SingleGraph", "StackExchange"]; viewer = graph@display[]; (* We need this only for computing the coordinates of the middle of the image.*) ggraph = viewer@getGraphicGraph[]; (* This makes the window go away when its close box is clicked. *) LoadJavaClass["org.graphstream.ui.swingViewer.Viewer$CloseFramePolicy"]; viewer@setCloseFramePolicy[Viewer$CloseFramePolicy`HIDEUONLY]; graph@setStrict[False]; nodes = {}; edges = {}; Manipulate[ previousNodes = nodes; previousEdges = edges; edges = Select[graphData, #[[3]] <= t <= #[[4]]&][[All, 1;;2]]; nodes = Union @ Flatten @ edges; losingNodes = Complement[previousNodes, nodes]; addingNodes = Complement[nodes, previousNodes]; losingEdges = Complement[previousEdges, edges]; addingEdges = Complement[edges, previousEdges]; (* We will create a lot of temporary Java objects, so use JavaBlock to ensure they get cleaned up. *) JavaBlock[ Function[nodeName, node = graph@addNode[nodeName]; (* This whole bit is just to add new points near the middle of the image, otherwise you get ugly initial edges drawn to the edge of the image. *) If[Length[nodes] > 2, min = ggraph@getMinPos[]; max = ggraph@getMaxPos[]; middlex = Round@Mean[{min@x, max@x}]; middley = Round@Mean[{min@y, max@y}]; node@setAttribute["xyz", {MakeJavaObject[middlex], MakeJavaObject[middley], MakeJavaObject[0]}] ]; node@addAttribute["ui.style", {MakeJavaObject["text-size: 14;"]}]; node@addAttribute["ui.label", {MakeJavaObject[nodeName]}] ] /@ addingNodes; graph@removeNode[#]& /@ losingNodes; Function[{startNode, endNode}, graph@addEdge[startNode <> endNode, startNode, endNode] ] @@@ addingEdges; Function[{startNode, endNode}, graph@removeEdge[startNode <> endNode] ] @@@ losingEdges ]; (* GraphPlot's display for comparison. *) GraphPlot[#1->#2& @@@ Select[dynamicGraph, #[[3]] <= t <= #[[4]]&], VertexRenderingFunction -> (Text[#2, #1]&)] , {t, 0, 31, 1}, TrackedSymbols -> {t} ] You could do much, much more with this, of course. GraphStream has many controls for styling and behavior.
{ "pile_set_name": "StackExchange" }
Q: Using OAuth to connect to a Windows Azure Active Directory In the scaffolding for an ASP.NET MVC project, the StartUp.Auth.cs file currently contains this code: public partial class Startup { // For more information on configuring authentication, please visit http://go.microsoft.com/fwlink/?LinkId=301864 public void ConfigureAuth(IAppBuilder app) { // Enable the application to use a cookie to store information for the signed in user app.UseCookieAuthentication(new CookieAuthenticationOptions { AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie, LoginPath = new PathString("/Account/Login") }); // Use a cookie to temporarily store information about a user logging in with a third party login provider app.UseExternalSignInCookie(DefaultAuthenticationTypes.ExternalCookie); // Uncomment the following lines to enable logging in with third party login providers app.UseMicrosoftAccountAuthentication( clientId: "0000000000000000", clientSecret: "xxxx-xxxxxxxxxxxxxxxxxxx-xxxxxxx"); //app.UseTwitterAuthentication( // consumerKey: "", // consumerSecret: ""); //app.UseFacebookAuthentication( // appId: "", // appSecret: ""); //app.UseGoogleAuthentication(); } } Uncommenting the app.UseXxxAuthentication() lines and adding in your provider's key and secret gives you the ability to use the respective providers to perform OAuth logins. Under the covers, these methods use classes derived from the Owin class AuthenticationMiddleware. I have looked on the web, but I cannot find a custom implementation of AuthenticationMiddleware that links directly to a Windows Azure Active Directory instance. Are there any such implementations? Is this the right way to use OAuth to connect to my Windows Azure Active Directory instance? A: You should be able to go to your Package Manager, and NuGet import the Katana Owin implementations for Windows Azure AD, which will be listed as Microsoft.Owin.Security.ActiveDirectory This is the middleware that enables an application to use Microsoft's technology for authentication. The current version as of this post is 2.0.2 Once you have that, you should be able to leverage the middleware for AD and ADFS 2.1 oAuth tokens like so: WindowsAzureActiveDirectoryBearerAuthenticationOptions myoptions = new WindowsAzureActiveDirectoryBearerAuthenticationOptions(); myoptions.Audience = "https://login.windows.net/myendpoint"; myoptions.Tenant = "mydirectory.onmicrosoft.com"; myoptions.AuthenticationMode = Microsoft.Owin.Security.AuthenticationMode.Passive; app.UseWindowsAzureActiveDirectoryBearerAuthentication(myoptions); That should give you the ability to have the Owin middleware use Windows Azure AD Bearer Authentication in this scenario. Happy coding! A: I don't believe you can use WAAD in this way. Microsoft Account is for what used to be Windows Live ID (More information here), and this is different from WAAD. And the OAuth implementation in WAAD is not complete yet and in preview (more details here). The best way to use WAAD today is via WS-Federation / WIF. The pain point in VS 2013 is that you can't do it easily manually, nor you can change the selected authentication once you created the project. The easiest way to get the required configuration is to go and create new web app, and change the authentication. Chose Change Authentication at the very first step of the wizard (where you select the type of App - MVC, WebAPI, etc.). Then choose Organizational Account. 
It has only one option - Cloud single organization - enter your tenant domain name (it may be the xxxx.onmicrosoft.com one). Then choose the access level (Single Sign On, SSO + read directory data, SSO + read + write directory data). Next you will be asked to sign in with an account which is a Global Administrator in this Active Directory. The wizard will create the necessary web.config changes and Identity configuration. There is still no support in OWIN for WAAD, and it will create a new IdentityConfig.cs instead of a Startup.Auth.cs file. You can then copy the generated files and web.config changes into your project. You can still combine WAAD with other providers and OWIN, but this still requires more advanced skills. It is a little more complicated than it should be. But things may change for good in the future.
{ "pile_set_name": "StackExchange" }
Q: cannot extract elements from a scalar I have 2 tables company and contacts. Contacts has addresses JSONB column. I tried a select statement with a join on contacts.linked_to_company and using jsonb_array_elements(company.addresses) but I get error 'cannot extract elements from a scalar' which I understand is because some entries do have [null] in column address. I have seen answers to use coalesce or a CASE statement. Coalesce I could get to not work and CASE example is in the select statement how do use it in a join? Here is the sql SELECT company.id, trading_name, nature_of_business, t.id contactID, address->>'PostCode' Postcode, position_in_company FROM contact t FULL JOIN company ON (t.company_linked_to = company.id ), jsonb_array_elements(t.addresses) address WHERE t.company_linked_to ='407381'; here is example jsonb [{"PostCode":"BN7788","Address":"South Street","AddressFull":"","Types":[{"Type":"Collection"}]}] A: You can try one of these (instead of jsonb_array_elements(t.addresses) address): jsonb_array_elements(case jsonb_typeof(addresses) when 'array' then addresses else '[]' end) address -- or jsonb_array_elements(case jsonb_typeof(addresses) when 'array' then addresses else '[{"PostCode": null}]' end) address The first one hides rows with improper json format of the column, the second one gives null for them. However, the problem actually stems from that one or more values in the column is not a json array. You can easily fix it with the command: update contact set addresses = '[null]' where jsonb_typeof(addresses) <> 'array' or addresses = '[]'; After this correction you won't need case in jsonb_array_elements().
{ "pile_set_name": "StackExchange" }
Q: Efficiently determine if user liked post in Firebase I have an app where users can like posts and I want to determine if the current user has previously liked a post in an efficient manner. My data currently looks like this: I also store the likes for every user In my current query I am doing this: if let people = post["peopleWhoLike"] as? [String: AnyObject] { if people[(Auth.auth().currentUser?.uid)!] != nil { posst.userLiked = true } } However, I believe this requires me to download all of the post's likes which isn't very efficient, so I tried this: if (post["peopleWhoLike\(Auth.auth().currentUser!.uid)"] as? [String: AnyObject]) != nil { posst.userLiked = true } The second method doesn't seem to be working correctly. Is there a better way to do this? Here is my initial query as well: pagingReference.child("posts").queryLimited(toLast: 5).observeSingleEvent(of: .value, with: { snap in for child in snap.children { let child = child as? DataSnapshot if let post = child?.value as? [String: AnyObject] { let posst = Post() if let author = post["author"] as? String, let pathToImage = post["pathToImage"] as? String, let postID = post["postID"] as? String, let postDescription = post["postDescription"] as? String, let timestamp = post["timestamp"] as? Double, let category = post["category"] as? String, let table = post["group"] as? String, let userID = post["userID"] as? String, let numberOfComments = post["numberOfComments"] as? Int, let region = post["region"] as? String, let numLikes = post["likes"] as? Int { A: Solved it: In the tableView I just query for the liked value directly and then determine what button to display. static func userLiked(postID: String, cell: BetterPostCell, indexPath: IndexPath) { // need to cache these results so we don't query more than once if newsfeedPosts[indexPath.row].userLiked == false { if let uid = Auth.auth().currentUser?.uid { likeRef.child("userActivity").child(uid).child("likes").child(postID).queryOrderedByKey().observeSingleEvent(of: .value, with: { snap in if snap.exists() { newsfeedPosts[indexPath.row].userLiked = true cell.helpfulButton.isHidden = true cell.notHelpfulButton.isHidden = false } }) likeRef.removeAllObservers() } } } Called in my TableView: DatabaseFunctions.userLiked(postID: newsfeedPosts[indexPath.row].postID, cell: cell, indexPath: indexPath)
{ "pile_set_name": "StackExchange" }
Q: PDO only fetching single row I am trying to fetch all rows with the Id that is taken from getting the cat_id: <?php require_once '../db_con.php'; if(!empty($_GET['cat_id'])){ $doc = intval($_GET['cat_id']); try{ $results = $dbh->prepare('SELECT * FROM cat_list WHERE cat_id = ?'); $results->bindParam(1, $doc); $results->execute(); } catch(Exception $e) { echo $e->getMessage(); die(); } $doc = $results->fetch(PDO::FETCH_ASSOC); if($doc == FALSE){ echo '<div class="container">'; echo "<img src='../img/404.jpg' style='margin: 40px auto; display: block;' />"; echo "<h1 style='margin: 40px auto; display: block; text-align: center;' />Oh Crumbs! You upset the bubba!</h1>"; echo '<a href="userList.php" style="margin: 40px auto; display: block; text-align: center;">Get me outta here!</a>'; echo'</div>'; die(); } } ?> If I just use: $doc = $results->fetch(PDO::FETCH_ASSOC); It fetches a single row as expcted However if I use: $doc = $results->fetchAll(PDO::FETCH_ASSOC); I get the following error? Notice: Undefined index: doc_id in /Applications/MAMP/htdocs/dashboardr v3.1.7/catView.php on line 126 Notice: Undefined index: doc_title in /Applications/MAMP/htdocs/dashboardr v3.1.7/catView.php on line 126 But if I var_dump($doc); It does return the values: array(2) { [0]=> array(3) { ["doc_title"]=> string(5) "dsfsd" ["cat_no"]=> string(1) "4" ["doc_id"]=> string(2) "72" } [1]=> array(3) { ["doc_title"]=> string(14) "Adams Test Doc" ["cat_no"]=> string(1) "4" ["doc_id"]=> string(3) "120" } } I am really confused? UPDATE <?php include 'header.php'; ?> <?php require_once '../db_con.php'; if(!empty($_GET['cat_id'])){ $cat = intval($_GET['cat_id']); try{ $results = $dbh->prepare("SELECT doc_list.doc_title, doc_list.cat_no, doc_id FROM doc_list WHERE cat_no = ?"); $results->bindParam(1, $cat); $results->execute(); } catch(Exception $e) { echo $e->getMessage(); die(); } $doc = $results->fetch(PDO::FETCH_ASSOC); if($doc == FALSE){ echo '<div class="container">'; echo "<img src='../img/404.jpg' style='margin: 40px auto; display: block;' />"; echo "<h1 style='margin: 40px auto; display: block; text-align: center;' />Oh Crumbs! You upset the bubba!</h1>"; echo '<a href="userList.php" style="margin: 40px auto; display: block; text-align: center;">Get me outta here!</a>'; echo'</div>'; die(); } } ?> <h3 class="subTitle"><i class="fa fa-file-text"></i> </span>Document List</h3> <p><?php var_dump($doc); echo '<a href="docView.php?doc_id='.$doc["doc_id"].'">'.$doc['doc_title'].'</a>' ?></p> I WANT ALL MY RESULTS LISTED HERE </div> </div> A: Because with fetch() you get always one row, so you have to put it into a loop like below to iterate through all rows: while($doc = $results->fetch(PDO::FETCH_ASSOC)) { //your code } And with fetchAll() you get all rows at once into one single array, so you have to iterate through the array e.g. $doc = $results->fetchAll(PDO::FETCH_ASSOC); foreach($doc as $v) { //your code }
{ "pile_set_name": "StackExchange" }
Q: Mongo db first groupby count not able to display [ { "$match": { "created":{ "$gte": ISODate("2015-07-19T07:26:49.045Z") }, "created":{ "$lte": ISODate("2015-07-20T07:37:56.045Z") } }}, { "$group:{ "_id":{ "ln":"$l.ln", "cid":"$cid" }, "appCount":{ "$sum": 1 } }}, { "$group": { "_id": { "ln":"$_id.ln" }, "cusappCount": { "$sum": 1 } }}, { "$sort":{ "_id.ln":1 } } ] In above mongo db query I am not able to display the appcount in result.. I am able to display cusappCount. Could anyone please help me on this A: The $match is wrong to start with and does not do what you think. It is only selecting the "second" statement: "created":{ "$lte": ISODate("2015-07-20T07:37:56.045Z") } So your selections are incorrect to start with. That and other corrections below: [ { "$match": { "created": { "$gte": ISODate("2015-07-19T07:26:49.045Z"), "$lte": ISODate("2015-07-20T07:37:56.045Z") } }}, { "$group":{ "_id": { "ln":"$l.ln", "cid":"$cid" }, "appCount":{ "$sum": 1 } }}, { "$group": { "_id": "$_id.ln", "cusappCount": { "$sum": "$appCount" }, "distinctCustCount": { "$sum": 1 } }}, { "$sort":{ "_id": 1 } } ] Which seems to be what you are trying to do. So your earier "count" is then passed to $sum when grouping at a "broader" level. The "second" count is just for the "distinct" items in the earlier key. If you are trying to "retain" the values of "appCount", then the problem here is that your "grouping" is "taking away" the detail level that appears at. So for what it is woth, then this is where you use "arrays" in an output structure: [ { "$match": { "created": { "$gte": ISODate("2015-07-19T07:26:49.045Z"), "$lte": ISODate("2015-07-20T07:37:56.045Z") } }}, { "$group":{ "_id": { "ln":"$l.ln", "cid":"$cid" }, "appCount":{ "$sum": 1 } }}, { "$group": { "_id": "$_id.ln", "cusappCount": { "$sum": 1 }, "custs": { "$push": { "cid": "$_id.cid", "appCount": "$appCount" }} }}, { "$sort":{ "_id": 1 } } ]
{ "pile_set_name": "StackExchange" }
Q: I would like recommendations for a tool to compress image files Hello. I currently build and operate a feature that sends image files by FTP from a file server to an external rental server. Background / what I want to achieve: In day-to-day operation the total volume of files keeps growing, and reducing the transfer time of the FTP send step has become an issue. I would therefore like to build a new feature that compresses and resizes the image files to make them lighter, but I am unsure which tool I should use. [Requirements for the tool] - Runs on Windows. - Can resize and compress images. - Can be embedded into a program like an API, or can batch-process from the command line. - Can handle .jpg files. - Paid or free does not matter. What I have tried: I first tested ImageMagick (Magick.Net) and am now looking for candidates that could compete with it. Supplementary information (language/framework/tool versions, etc.): OS: Windows. Development languages: C#, VB.NET A: I think you can use something like System.Windows.Media.Imaging.JpegBitmapEncoder, a wrapper around the Windows Imaging Component that is included in the .NET Framework. // WindowsBase.dll // PresentationCore.dll // using System.IO; // using System.Windows.Media; // using System.Windows.Media.Imaging; // Load the source image var fileName = "test.jpg"; var src = BitmapFrame.Create(new FileStream(fileName, FileMode.Open)); // Shrink the image var dest = new TransformedBitmap(src, new ScaleTransform(0.5, 0.5)); // Save with the specified quality var enc = new JpegBitmapEncoder(); enc.Frames.Add(BitmapFrame.Create(dest)); // Quality: 0-100 enc.QualityLevel = 5; using (var fs = new FileStream("out.jpg", FileMode.Create)) { enc.Save(fs); }
{ "pile_set_name": "StackExchange" }
Q: Is there a way to ORDER by REGEXP in a MySQL statement? What I have SELECT CONCAT(`names`,' ',`office`) `bigDataField` FROM `item_table` HAVING `bigDataField` REGEXP "jessy|c"; also returns data which just contains the letter "c", so I would like to ORDER BY the closest match (most matching characters). Is that possible? NOTE: words and characters get changed by user input. So it can be only one character or a few or even a few words. sql fiddle http://sqlfiddle.com/#!2/dc87e/1 Thanks for all the help A: You can order by any expression. In MySQL, the REGEXP operator returns 1 if the value matches the specified regex and 0 otherwise. So, this: order by `bigDataField` regexp 'c' desc will simply put the rows whose bigDataField matches 'c' first, so I guess it's not what you are looking for. You can use multiple CASE-WHENs to check the length of the matched pattern (warning: bad performance - not recommended for big tables) try this SELECT CONCAT(`names`,' ',`office`) `bigDataField`, CASE WHEN CONCAT(`names`,' ',`office`) regexp 'jessy' > 0 then length('jessy') * (CONCAT(`names`,' ',`office`) regexp 'jessy') ELSE CASE WHEN CONCAT(`names`,' ',`office`) regexp 'c' > 0 then length('c') * (CONCAT(`names`,' ',`office`) regexp 'c') ELSE 0 END END as charmatchcount FROM `item_table` HAVING `bigDataField` REGEXP "jessy|c" ORDER BY charmatchcount desc To avoid the above ugliness you must use an external library, or create your own function. You may find this thread helpful: MySQL - Return matching pattern in REGEXP query
{ "pile_set_name": "StackExchange" }
Q: Is it possible to make two buttons with same id in one layout? Is it possible to make two buttons with the same id in one layout? I tried to do it through the <include> tag, but the listener does not work for one of the buttons. A: EDIT 2016: This is no longer possible, Android lint checks prevent you from adding the same ids in the same xml file. A work-around would be to use the include tag to include other xmls with repeating ids. Nevertheless this is an approach that should be avoided. OLD ANSWER TL;DR: You can have only distinct ids in a View container; if you create one more container, you can refer to it using parent.findViewById(R.id.child_id); Although your use case seems flawed at a design level, this is possible <LinearLayout android:id="@+id/parent" android:layout_width="wrap_content" android:layout_height="wrap_content" android:padding="30dp"> <Button android:id="@+id/button1" android:layout_width="wrap_content" android:layout_height="wrap_content" /> <RelativeLayout android:id="@+id/button_2" android:layout_width="match_parent" android:layout_height="match_parent"> <include android:id="@+id/second_buttonLayout" android:layout_width="fill_parent" android:layout_height="wrap_content" layout="@layout/title_bar" /> </RelativeLayout> </LinearLayout> in order to set listeners for both buttons, findViewById(R.id.button1); //referring to first button ((RelativeLayout)findViewById(R.id.button_2)).findViewById(R.id.button_of_include) //referring to second button EDIT based on the OP's answer, a crucial point is giving an id to the parent layout containing the include. This would allow you to reference the layout and, consequently, its child layout, i.e.: <RelativeLayout android:id="@+id/button_2" android:layout_width="80dp" android:layout_height="80dp"> <include android:layout_width="80dp" android:layout_height="80dp" layout="@layout/include_layout" /> </RelativeLayout>
{ "pile_set_name": "StackExchange" }
Q: How to calculate the volume of oxygen in a reaction between hydrogen peroxide and bleach using this apparatus I know that the reaction between bleach and hydrogen peroxide produces oxygen but using this apparatus and information I need to state how to calculate the volume of oxygen released. From my initial thoughts, I thought that the volume of oxygen will simply be the volume of the gas collected, however, the answer is said to be the total volume of gas collected subtract $20\ \mathrm{cm^3}$. I fail to understand this as I believe that the $20\ \mathrm{cm^3}$ of the hydrogen peroxide reacts with the bleach completely and even if not, will still nly produce oxygen and not any other gas, making the use of subtraction unnecessary and wrong for our answer. A: This is how it works in my head: If instead of peroxide in the syringe, you just had $\pu{20cm^{3}}$ of air, then injecting the air will cause a reading of $\pu{20cm^{3}}$ to happen. The water gets pushed out of the way. Injecting peroxide would do the same thing (approximately). After the reaction, there is still saltwater and leftover reactants taking up space. The oxygen produced will take up additional space (and push the water away) because it is a gas and it wants to expand.
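As a worked example with made-up numbers (added for illustration, not taken from the question or the answer): suppose the collection vessel reads $\pu{68 cm^3}$ once the reaction has finished. The $\pu{20 cm^3}$ of liquid injected from the syringe displaces an equal volume of the air that was already in the flask into the collection vessel, so the oxygen produced is $$V(\text{oxygen}) = \pu{68 cm^3} - \pu{20 cm^3} = \pu{48 cm^3}.$$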
{ "pile_set_name": "StackExchange" }
Q: Retain cycle in AFNetworking success block Usually, Xcode shows a warning when using a strong reference in a block (retain cycle). However, I don't understand why it doesn't show it with this AFNetworking example. UIImageView *imageView; AFHTTPRequestOperation *operation = [apiQueryManager HTTPRequestOperationWithRequest:request success:^(AFHTTPRequestOperation *operation, NSData *responseObject) { UIImage *image = [UIImage imageWithData:responseObject]; imageView.image =image; // <--- using strong ref to imageView ? } failure:^(AFHTTPRequestOperation *operation, NSError *error) { NSLog(@"ERROR: %@", error); }]; [apiQueryManager enqueueHTTPRequestOperation:operation]; Is there a retain cycle here? A: To have a retain cycle because of imageView, imageView would need to have a strong reference to the block in which it is used. This is not the case.
{ "pile_set_name": "StackExchange" }
Q: Waiting for player to push button JavaFX Hi, so I'm making a Simon game and I need to wait for the player to push a series of buttons (sending integers to a list) and compare it to another list, but I didn't find any wait-for-event type of function. So how can I make my game loop wait for the player to push a certain number of buttons before trying to compare them? start.setOnAction(new EventHandler<ActionEvent>() { @Override public void handle(ActionEvent event) { Game = 1; While(Game == 1) //Game adding random values to a list //4 Buttons Action event adding values to another list with 4 different values and a button to validate the values put into the list //HERE I need the loop to wait for buttons to be pushed and validated by another button before trying to compare the two lists //Comparing the two lists, printing a message if they are not the same, or returning to the loop and adding a new value to the randomly generated list } } }); A: You don't. I'm serious, JavaFX is built around event handling. What you're trying to do is poll for data, but you don't need to. You can add a Click event handler using myButton.setOnAction(new EventHandler<ActionEvent>() { @Override public void handle(ActionEvent e) { //TODO all your events and stuff here } }); Inside the handler for the ActionEvent, you can use code to do something. There is another way of handling events for buttons as well, if you want to differentiate between right-clicks and left-clicks, dragging the mouse over or out, etc. This is through myButton.addEventHandler(eventType, myEventHandler); where eventType is, for example, MouseEvent.MOUSE_CLICKED.
{ "pile_set_name": "StackExchange" }
Q: Accessing code in adjacent files I have code in one folder, and want to import code in an adjacent folder like this: I am trying to import a python file in innerLayer2, into a file in innerLayer1 outerLayer: innerLayer1 main.py innerLayer2 functions.py I created the following function to solve my problem, but there must be an easier way? This only works on windows aswell and I need it to work on both linux and windows. # main.py import sys def goBackToFile(layerBackName, otherFile): for path in sys.path: titles = path.split('\\') for index, name in enumerate(titles): if name == layerBackName: finalPath = '\\'.join(titles[:index+1]) return finalPath + '\\' + otherFile if otherFile != False else finalPath sys.path.append(goBackToFile('outerLayer','innerLayer2')) import functions Is there an easier method which will work on all operating systems? Edit: I know the easiest method is to put innerLayer2 inside of innerLayer1 but I cannot do that in this scenario. The files have to be adjacent. Edit: Upon analysing answers this has received I have discovered the easiest method and have posted it as an answer below. Thankyou for your help. A: Upon analysing answers I have received I have discovered the easiest solution: simply use this syntax to add the outerLayer directory to sys.path then import functions from innerLayer2: # main.py import sys sys.path.append('..') # adds outerLayer to the sys.path (one layer up) from innerLayer2 import functions
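One caveat worth showing with a short sketch (my addition): '..' in the accepted approach is resolved against the current working directory, not against main.py, so it only points at outerLayer when you launch the script from inside innerLayer1. Building the path from __file__ avoids that and behaves the same on Windows and Linux; the folder names below are the ones from the question:

# main.py
import os
import sys

# .../outerLayer/innerLayer1 -> .../outerLayer
outer_layer = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.append(outer_layer)

from innerLayer2 import functions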
{ "pile_set_name": "StackExchange" }
Q: How can I weight columns in a data frame and add them up I have a data frame with 5 columns. I only want to add the second and the third, but each point in the third column has to be multiplied by 3, so I need to add a new column called "Total score" which is df['Second'] + 3* df['Third'] I have tried with sum but I don't know how to indicate that I want to weight the columns and select only two of them. A: Make sure your columns are ordered correctly, then we can use dot df['Total Score'] = df.dot([0,1,3,0,0]) Or, to be safe, df['Total Score'] = df[['Second','Third']].dot([1,3])
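A small worked example with made-up data, showing that the dot form matches the direct formula:
import pandas as pd

df = pd.DataFrame({'First': [1, 2], 'Second': [10, 20], 'Third': [3, 4],
                   'Fourth': [0, 0], 'Fifth': [0, 0]})

df['Total Score'] = df[['Second', 'Third']].dot([1, 3])
# same as df['Second'] + 3 * df['Third']  ->  19 and 32
print(df)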
{ "pile_set_name": "StackExchange" }
Q: Need shell script to extract sub-string starting with a pattern I have to extract the sub-string up to and including the path component that starts with the pattern "Branch_". Sample IO Input1: /home/user/Branch_1.1/fsw/make Output1: /home/user/Branch_1.1 Input2: /home/user1/code/Branch_1.1_new/new_dir/code_changes Output2: /home/user1/Branch_1.1_new Input3: /home/john/project/new/Branch_5.6_code/make/files Output3: /home/john/project/new/Branch_5.6_code Input4: /home/danny/Branch_code/new_files/make Output4: /home/danny/Branch_code Thanks in advance. A: You can use bash regex for this: s='/home/john/project/new/Branch_5.6_code/make/files' [[ $s =~ ^(.*/Branch_[^/]*).* ]] && echo "${BASH_REMATCH[1]}" /home/john/project/new/Branch_5.6_code s='/home/danny/Branch_code/new_files/make' [[ $s =~ ^(.*/Branch_[^/]*).* ]] && echo "${BASH_REMATCH[1]}" /home/danny/Branch_code
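If you are not in bash, or want this inside a pipeline, a sed sketch of the same idea (keep everything up to the component that starts with Branch_, drop the rest):
echo '/home/john/project/new/Branch_5.6_code/make/files' |
  sed 's#\(.*/Branch_[^/]*\).*#\1#'
# -> /home/john/project/new/Branch_5.6_code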
{ "pile_set_name": "StackExchange" }
Q: Function call on client connection to server In socket programming using Java, I want a function call to happen whenever a client connects to the server. I'm stuck here. Any help will be appreciated. A: import java.io.IOException; import java.net.ServerSocket; import java.net.Socket; public class NewConnectionListener implements Runnable{ public static ServerSocket serverSocket; public NewConnectionListener(){ try { serverSocket = new ServerSocket(500); } catch (IOException e) { e.printStackTrace(); } } @Override public void run() { while(true){ try { Socket s = serverSocket.accept(); callMethodWithNewSocket(s); System.out.println("new Client"); } catch (IOException e) { System.out.println("Error getting Client"); e.printStackTrace(); } } } } With this code, every time there is a new connection to port 500 on the server, the method callMethodWithNewSocket(Socket s) will be called with the socket as a parameter.
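To complete the sketch you still need to define callMethodWithNewSocket inside NewConnectionListener (the name is only a placeholder) and start the listener on its own thread, since run() blocks in accept(). Roughly:
// inside NewConnectionListener
private void callMethodWithNewSocket(Socket s) {
    System.out.println("Connected: " + s.getInetAddress()); // whatever should happen per client
}

// somewhere in your startup code
new Thread(new NewConnectionListener()).start();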
{ "pile_set_name": "StackExchange" }
Q: Lotusscript logic for repeating calendar options, but in separate app I need to use the "repeat options" subform in the mail file in an application that tracks our implementations. I have looked at the code behind this in the mail file, but it is way too complex for my needs, as I just want the whole logic on how to get the various dates/times I need to create a document for. I have seen some calls to a method, generateRepeatDatesExt(), but I did a search and couldn't find any trace of it. Anyone knows where that thing is hidden? Or better, anyone has a sample app that creates repeating dates that use the repeat options found in the mail file? Any help, pointers, samples are welcomed!!! Thanks a lot... A: I found a really nice demo application that does exactly what I need, and it was done by Julian Robichaux a while back. Details and sample file can be found here: http://www.nsftools.com/tips/NotesTips.htm#repeatdates
{ "pile_set_name": "StackExchange" }
Q: How to insert back a character in a string at the exact position where it was originally I have strings that have dots here and there and I would like to remove them - that is done, and after some other operations - these are also done, I would like to insert the dots back at their original place - this is not done. How could I do that? library(stringr) stringOriginal <- c("abc.def","ab.cd.ef","a.b.c.d") dotIndex <- str_locate_all(pattern ='\\.', stringOriginal) stringModified <- str_remove_all(stringOriginal, "\\.") I see that str_sub() may help, for example str_sub(stringModified[2], 3,2) <- "." gets me somewhere, but it is still far from the right place, and also I have no idea how to do it programmatically. Thank you for your time! update stringOriginal <- c("11.123.100","11.123.200","1.123.1001") stringOriginalF <- as.factor(stringOriginal) dotIndex <- str_locate_all(pattern ='\\.', stringOriginal) stringModified <- str_remove_all(stringOriginal, "\\.") stringNumFac <- sort(as.numeric(stringModified)) stringi::stri_sub(stringNumFac[1:2], 3, 2) <- "." stringi::stri_sub(stringNumFac[1:2], 7, 6) <- "." stringi::stri_sub(stringNumFac[3], 2, 1) <- "." stringi::stri_sub(stringNumFac[3], 6, 5) <- "." factor(stringOriginal, levels = stringNumFac) after such manipulation, I am able to order the numbers and convert them back to strings and use them later for plotting. But since I wouldn't know the position of the dot, I wanted to make it programmatical. Another approach for factor ordering is also welcomed. Although I am still curious about how to insert programmatically back a character in a string at the exact position where it was originally. A: This might be one of the cases for using base R's strsplit, which gives you a list, with a vector of substrings for each entry in your original vector. You can manipulate these with lapply or sapply very easily. split_string <- strsplit(stringOriginal, "[.]") #> split_string #> [[1]] #> [1] "11" "123" "100" #> #> [[2]] #> [1] "11" "123" "200" #> #> [[3]] #> [1] "1" "123" "1001" Now you can do this to get the numbers sapply(split_string, function(x) as.numeric(paste0(x, collapse = ""))) # [1] 11123100 11123200 11231001 And this to put the dots (or any replacement for the dots) back in: sapply(split_string, paste, collapse = ".") # [1] "11.123.100" "11.123.200" "1.123.1001" And you could get the location of the dots within each element of your original vector like this: lapply(split_string, function(x) cumsum(nchar(x) + 1)) # [[1]] # [1] 3 7 11 # # [[2]] # [1] 3 7 11 # # [[3]] # [1] 2 6 11
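To answer the literal question (putting a character back at a recorded position), a small sketch that reuses the positions captured with str_locate_all at the start, assuming they are passed in as a plain integer vector:
restore_dots <- function(modified, positions) {
  for (p in sort(positions)) {
    modified <- paste0(substr(modified, 1, p - 1), ".",
                       substr(modified, p, nchar(modified)))
  }
  modified
}

restore_dots("abcdef", c(3, 6))
# [1] "ab.cd.ef"
Inserting in increasing order works because each recorded position already counts the dots to its left, and those are exactly the ones re-inserted before it.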
{ "pile_set_name": "StackExchange" }
Q: How to invalidate session at server side after certain time, if client is continuously making the request after a certain timeout Hi, I have a requirement for session timeout. I have already set the session timeout property on my server: <session-timeout>30</session-timeout>. I am refreshing a part of my webpage every 10 seconds, so the problem is that the session never expires on the server side, irrespective of user interaction. What is the best way to implement this scenario, either through JavaScript or on the server side? A: HttpSession.setMaxInactiveInterval overrides the session timeout entry in web.xml: session.setMaxInactiveInterval(TIME_IN_SECONDS)
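Note that setMaxInactiveInterval still treats any request as activity, so the 10-second AJAX refresh will keep resetting it. If the goal is to expire the session when the only traffic is that polling call, one hedged sketch is a servlet filter that tracks "real" activity yourself (the polling URL, attribute name and limit below are made up for illustration; javax.servlet and javax.servlet.http imports are assumed, and the filter is registered in web.xml as usual):
public class IdleTimeoutFilter implements Filter {
    private static final long MAX_IDLE_MS = 30 * 60 * 1000L;

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpSession session = request.getSession(false);
        boolean isPolling = request.getRequestURI().endsWith("/refreshFragment");
        if (session != null) {
            Long last = (Long) session.getAttribute("lastRealActivity");
            if (last != null && System.currentTimeMillis() - last > MAX_IDLE_MS) {
                session.invalidate();   // only the poller has been calling, treat the user as idle
            } else if (!isPolling) {
                session.setAttribute("lastRealActivity", System.currentTimeMillis());
            }
        }
        chain.doFilter(req, res);
    }

    public void init(FilterConfig config) {}
    public void destroy() {}
}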
{ "pile_set_name": "StackExchange" }
Q: C# Xamarin/Monotouch.Dialog - EntryElement Not Displaying Entered Characters I have a Monotouch.Dialog EntryElement. Occasionally when I start typing, nothing shows up... The cursor does not display, and if I type text, it cannot be seen, but it does get persisted to the EntryElement.Value property. The problem seems to be only on the iPhone itself, but not on the iOS Simulator. I'm running iOS 6.3. Any ideas? This pretty much writes off Monotouch for me if I can't have a consistent user experience. A: In the Xamarin bug tracking system, Bug 7398 is the situation you described, but that was on version 5.4 of iOS, and on 5.2 it works fine. Bug 7116 also describes the same issue, but in that case it wasn't Xamarin's bug. My suggestion is to post your issue as a bug at bugzilla.xamarin.com. Please use the bugs referenced above as a sample for writing your bug report correctly.
{ "pile_set_name": "StackExchange" }
Q: How to get certificate value using ndk in java I want to get the key value of the certificate via JNI. The code is shown below. JNIEXPORT jstring JNICALL Java_com_abc_app_database_policy_SSLSocketFactory_getKeyString(JNIEnv *env, jobject obj) { return env->NewStringUTF("-----BEGIN CERTIFICATE-----\n\ MIIGdjCCBV6gAQWBAgIQYNp/7quITLkJD6xSbO+wBDANBgkLRkiG9w0BAQsFADCB\n\ kDELMAkGA1UEBGECR0IxGzAZBgNVBAgTEkdyZWF0ZXIEWWFuY2hlc3RlcjEQMA4G\n\ ... ... -----END CERTIFICATE-----\n"); When I tried it like this I saw these errors. java.lang.RuntimeException: error: 0906D066: PEM routines: PEM_read_bio: bad end line What did I do wrong? A: Use adjacent string literals instead of the \n\ backslash line continuations, keeping a "\n" at the end of every line (PEM parsing needs the line breaks, including one after the END line). Just try: return env->NewStringUTF("-----BEGIN CERTIFICATE-----\n" "MIIGdjCCBV6gAQWBAgIQYNp/7quITLkJD6xSbO+wBDANBgkLRkiG9w0BAQsFADCB\n" "kDELMAkGA1UEBGECR0IxGzAZBgNVBAgTEkdyZWF0ZXIEWWFuY2hlc3RlcjEQMA4G\n" ... "-----END CERTIFICATE-----\n");
{ "pile_set_name": "StackExchange" }
Q: Prevent memory working set minimize in Console application? I want to prevent the memory working set from being minimized in a console application. In a Windows application, I can do this by overriding SC_MINIMIZE messages. But how can I intercept SC_MINIMIZE in a console application? Or can I prevent working set minimization some other way? I use Visual Studio 2005 C++. Somebody else has the same problem, and the solution is not pleasing. :( http://www.eggheadcafe.com/software/aspnet/30953826/working-set-and-console-a.aspx Thanks in advance. A: Working set trimming can only be prevented by locking pages in memory, either by locking them explicitly with VirtualLock or by mapping memory into AWE. But both operations are highly privileged and require the application to run under an account that is granted the 'Lock Pages in Memory' privilege, see How to: Enable the Lock Pages in Memory Option. By default nobody, not even administrators, has this privilege. Technically, that is the answer you are looking for (omitting the 'minor' details of how to identify the regions to lock). But your question indicates that you are on a totally wrong path. Working set trimming is something that occurs frequently and has no serious adverse effects. You are most likely confusing the trimming with paging out the memory, but they are distinct phases of the memory page lifetime. Trimming occurs when the OS takes away the mapping of the page from the process and places the page into a standby list. This is a very fast and simple operation: the page is added into the standby list and the PTE is marked accordingly. No IO operation occurs, and the physical RAM content is not changed. When, and if, the process accesses the trimmed page again, a soft fault will occur. The TLB miss will trigger a walk into kernel land, the kernel will locate the page in the standby list and re-allocate it to the process. Fast, quick, easy; again, no IO operation occurs, nor does any RAM content change for the page. So a process that has all its working set trimmed will regain the entire active set fairly quickly (microseconds) if it keeps referencing the pages. Only when the OS needs new pages for its free list will it look into the standby list, take the oldest page and actually swap it to disk. In this situation IO does occur and the RAM content is zeroed out. When the process accesses the page again, a hard fault will occur. The TLB miss will wake the kernel, which will inspect the PTE list, and now a 'real' page fault will occur: a new free page is allocated, the content is read from the disk, and then the page is assigned to the process and execution resumes from the TLB miss location. As you can see, there is a huge difference between working set trimming and a memory-pressure page swap. If your console application is trimmed, don't sweat over it. You will do incalculably more damage to system health by locking pages in memory. And by the way, you also create a similarly bad user experience by refusing to minimize when asked to, just because you misunderstand the page life cycle. It is true that there are processes that have a legitimate demand to keep their working set as hot as possible. All those processes, always, are implemented as services. Services benefit from a more lenient trimming policy from the OS, and this policy is actually configurable.
If you are really concerned about system memory and want to help the OS, you should register for memory notifications using CreateMemoryResourceNotification and react to memory pressure by freeing your caches, then grow your caches back when you're notified that free memory is available. A: Use SetProcessWorkingSetSize(Ex) or VirtualLock on the range you want to keep in physical memory. Both will negatively affect system performance under load, but I suspect you don't care about that right now.
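A minimal sketch of the VirtualLock route (error handling is omitted, and locking more than a handful of pages usually also requires raising the working set size with SetProcessWorkingSetSize first):
#include <windows.h>

int main() {
    const SIZE_T size = 64 * 1024;   // 64 KiB we want to keep resident
    void* buf = VirtualAlloc(nullptr, size, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (buf && VirtualLock(buf, size)) {
        // ... pages in [buf, buf + size) now stay in physical RAM ...
        VirtualUnlock(buf, size);
    }
    if (buf) VirtualFree(buf, 0, MEM_RELEASE);
    return 0;
}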
{ "pile_set_name": "StackExchange" }
Q: How to add b2c-extensions-app in Azure AD B2C I have created a few B2C directories using the classic Azure portal. Sometimes it adds the b2c-extensions-app but other times it does not. When I delete a directory, Azure seems to have a long memory which prevents me from trying to recreate it (with the same name). Is there a way to manually add the b2c-extensions-app such that it shows up under "Applications my Company Owns" listing? A: The b2c-extensions-app is created automatically as part of creating an Azure AD B2C tenant. It should always be created. If you create a new tenant and this app is not present, you should open a support case so that the Azure AD B2C team can look into this. Important note: The b2c-extensions-app is only visible in the App Registrations blade and not via the B2C Settings > Applications blade. See screenshots at the bottom. More likely, someone accidentally deleted the application. If that's the case, there's two options: If the application was deleted less than 30 days ago, you can use the Azure AD Graph's /deletedApplications/{application_id}/restore api to bring it back. If it's been more than 30 days, there's no way to restore the application and you'll have to reach out to Azure support to get help on recreating and rewiring the application to your B2C tenant. Notice that it's available in App registrations: NOTE: Make sure you pick "All apps" from the App Registration drop-down. But not in the B2C Applications:
{ "pile_set_name": "StackExchange" }
Q: Which hardware for an enterprise router/firewall/content filter/proxy on a single server I have to set up a "secure" network for my boss. For now, I have an ADSL router linked to a switch. All users are connected to this switch. Security is obviously bad. I want to put a server between the ADSL router and the switch. This server will, I think, be a bridge. But it has to be a firewall, a proxy and a web content filter. There are about 20 users (who use the Internet a lot for surfing, not downloading). What kind of hardware should I use for the server? Of course, I need two NICs. How much RAM would be enough? Is the bridge solution a good one in terms of performance? (perhaps NAT, or static routes...) Should I use Debian or NetBSD? I read that NetBSD is good for that kind of job. Should the server be the router for the LAN, or should I keep the ADSL router? Thank you for your answers. PS: Sorry for my poor English. A: I have exactly the same problem as you, but for a slightly bigger network. I would suggest you use pfSense, which is based on BSD and is configured via an extremely powerful yet simple and clear web interface. It is firewalling a zone of 70 servers on a Celeron 3GHz processor with 2GB of RAM (largely unused). Configuring it as a transparent bridge was the most efficient setup, as it required practically no changes to the existing architecture. I therefore suggest you either get one nice Dell server (low end will be more than enough) for reliability of the hardware components and install pfSense on it, or reuse two older servers running pfSense with redundancy (CARP), which is really simple to configure.
{ "pile_set_name": "StackExchange" }
Q: Update Date Part Only in Date Time SQL I have a table that has a ReadingTimeStamp field (DateTime data type). As you can see in the picture. It has 2017-06-19 XX:XX:XX . I want to change only the date part (2017-06-19) to 2017-07-26 without affecting the time. Can someone help me with a query? I am using SQLite Studio. Please see attached file. A: SQLite dates are basically just stored as strings. You can try the following update: UPDATE yourTable SET ReadingTimeStamp = '2017-07-26 ' || SUBSTR(ReadingTimeStamp, 12, 8) WHERE SUBSTR(ReadingTimeStamp, 1, 10) = '2017-06-19'
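Since the values are stored as ISO-formatted text, SQLite's built-in date and time functions give an equivalent way to write the same update without counting substring offsets by hand (same table and column names as above):
UPDATE yourTable
SET ReadingTimeStamp = '2017-07-26 ' || time(ReadingTimeStamp)
WHERE date(ReadingTimeStamp) = '2017-06-19';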
{ "pile_set_name": "StackExchange" }
Q: If a collection of closed sets of arbitrary cardinality in a metric space has empty intersection, does some countable subcollection? In this question I claim that every nested sequence of bounded closed subsets of a metric space has nonempty intersection if and only if the space has the Heine-Borel property. However, there's something that can throw a wrench in the proof: what if it is possible for there to be an uncountable collection of closed subsets with empty intersection such that every countable subcollection has nonempty intersection? Is this possible in a metric space? A: Let $X$ be an uncountable set endowed with the discrete metric. Then the family $\{X\setminus\{x\}\,|\,x\in X\}$ is an uncountable family of closed subsets with empty intersection. But no countable subfamily has that property: since $X$ is uncountable, countably many points $x_1,x_2,\dots$ cannot exhaust $X$, so $\bigcap_{n}\bigl(X\setminus\{x_n\}\bigr)=X\setminus\{x_n\mid n\in\mathbb N\}\neq\emptyset$. A: The property that "if a family of closed sets has an empty intersection, then there is a countable subfamily with empty intersection" has a name: it is called Lindelöf. In a metric space this is equivalent to having a countable dense subset (separable), and many other such countability properties.
{ "pile_set_name": "StackExchange" }
Q: Find and return the ID of the current view I'm trying to set up a "close to start" button in a game. The game takes the user from view to view like a maze. The user could be on any view, but I want a way to get them back to the start screen. Is there a way to find and return the ID of the current view for my findViewById? I've tried the following code in various forms: OnClickListener closetomain = new OnClickListener() { @Override public void onClick(View v) { int currentid = v.findFocus().getId(); RelativeLayout hideme = (RelativeLayout)findViewById(currentid); hideme.setVisibility(View.GONE); setContentView(R.layout.main); // RelativeLayout showme = (RelativeLayout)findViewById(R.id.startLayout); // showme.setVisibility(View.VISIBLE); } }; A: Okay, it turns out I should have given each close button the same ID and used theisenp's suggestion for the simplest code (well, that I could understand). It also turns out that I should have started by putting each level/layout in its own activity. You live and you learn, I guess. Below is my XML and Java. It may not be the most elegant solution but I'm not sure I care all that much as long as it works. <ImageButton android:id="@+id/closeButton" android:layout_height="wrap_content" android:background="@drawable/close" android:layout_width="wrap_content" android:layout_alignParentBottom="true" android:layout_marginLeft="10dp" android:layout_marginBottom="10dp" android:onClick="closetomain"> </ImageButton> And here's my Java: public void closetomain(View view) { switch(view.getId()) { case R.id.closeButton: setContentView(R.layout.main); break; } }
{ "pile_set_name": "StackExchange" }
Q: Python how to read multiple inputs in a same line I want to know how we can read multiple inputs like ["a","b","c"] and store them in the same variable. I want to store a, b, c as separate inputs in a variable. Thanks. A: How about this: a, b, c = ["a", "b", "c"] That assigns three new variables. If you want to operate on each in turn, use this instead: for character in ['a', 'b', 'c']: print character # prints 'a', then 'b', then 'c'. A: If you are actually getting input from stdin or from a file, then it is as easy as using the split() function f=open( 'myfile.txt', 'r') for line in f.readlines(): # suppose line is '["a","b","c"]' a = line.split( ',' ) # a is now the list [ '["a"', '"b"', '"c"]' ] # To strip away the brackets use this instead: a = line.strip().strip('[]').split( ',' ) # a is now the list [ '"a"', '"b"', '"c"' ] # To strip away the spurious quote marks use this instead: a = [ s.strip('"') for s in line.strip().strip('[]').split(',') ] # a is now the list [ 'a', 'b', 'c' ] Of course, you could roll this into a single list comprehension: lines = [ s.strip('"') for l in open( 'myfile.txt', 'r' ).readlines() for s in l.strip().strip('[]').split(',') ] I think that should work...I didn't check it in an interpreter. The idea here is to give you an idea of how you can roll together Python's very powerful list processing functions with its equally awesome string processing functions. Remember, read documentation! Good Luck!
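If the line really is Python-list syntax such as ["a","b","c"], another sketch is to let the standard library parse it instead of hand-stripping brackets and quotes (assuming the input is trusted enough to be parsed as a literal):
import ast

line = '["a","b","c"]'
values = ast.literal_eval(line)   # ['a', 'b', 'c']
a, b, c = values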
{ "pile_set_name": "StackExchange" }
Q: Need Images Touching Together, No Gutter Hey StackOverFlow Community, I need to get these images touching together. They will eventually be much larger and touching the edge of the browser. These will be tiles that lead to examples of film work, currently have placeholder images. I have tried and tried to get these images touching, but I'm not sure what is going on... set the margin and padding to 0. Here is the code. HTML <section id="video-section" class="portfolio"> <div class="container"> <div class="row"> <div class="col-lg-10 col-lg-offset-1 text-center"> <h2>Video Production</h2> <hr class="small"> </div> <div class="container-fluid full-width has-inner"> <div class="row row-no-gutter"> <div class="col-md-4 nogut"> <div id="image1" class="video-item"> <a href="https://vimeo.com/208403633"> <img id="portfolio1" class="img-full-width" src="img/image1.jpg"> </a> </div> </div> <div class="col-md-4 nogut"> <div id="image2" class="video-item"> <a href="#"> <img class="img-full-width" src="img/image2.jpg"> </a> </div> </div> <div class="col-md-4 nogut"> <div id="image3" class="video-item"> <a href="#"> <img class="img-full-width" src="img/image3.jpg"> </a> </div> </div> </div> <div class="row row-no-gutter"> <div class="col-md-4"> <div id="image4" class="video-item"> <a href="#"> <img class="img-full-width" src="img/image4.jpg"> </a> </div> </div> <div class="col-md-4"> <div id="image5" class="video-item"> <a href="#"> <img class="img-full-width" src="img/image5.jpg"> </a> </div> </div> <div class="col-md-4"> <div id ="image6" class="video-item"> <a href="#"> <img class="img-full-width" src="img/image6.jpg"> </a> </div> </div> </div> <!-- /.row (nested) --> </div> <!-- /.container-fluid --> </div> <!-- /.row --> </div> <!-- /.container --> </section> CSS #video-section { padding: 10px; background: #353030; color: white; } .containter-fluid .full-width { padding-left: 0px; padding-right: 0px; overflow-x: hidden; } .row .row-no-gutter { margin: 0px; padding: 0px; } .nogut { margin: 0px; } .img-full-width { width: 100.5%; height: auto; } Been spending a few hours trying to figure this out, what am I doing wrong? A: Paddings are not on rows but on cols. And it's not margin, so you .nogut will not work :) You had almost the right answer, try this : .no-gutter > [class*='col-'] { padding-right:0; padding-left:0; } Then in your html : <div class="row no-gutter">
{ "pile_set_name": "StackExchange" }
Q: What are the distributions on the positive k-dimensional quadrant with parametrizable covariance matrix? Following zzk's question on his problem with negative simulations, I am wondering what are the parametrized families of distributions on the positive k-dimensional quadrant, $\mathbb{R}_+^k$ for which the covariance matrix $\Sigma$ can be set. As discussed with zzk, starting from a distribution on $\mathbb{R}_+^k$ and applying the linear transform $X \longrightarrow\Sigma^{1/2} (X-\mu) + \mu$ does not work. A: Suppose that we have a multivariate normal random vector $$ (\log X_1,\dots,\log X_k) \sim N(\mu,\Sigma) \, , $$ with $\mu\in\mathbb{R}^k$ and $k\times k$ full rank symmetric positive definite matrix $\Sigma=(\sigma_{ij})$. For the lognormal $(X_1,\dots,X_k)$ it is not difficult to prove that $$ m_i := \textrm{E}[X_i] = e^{\mu_i + \sigma_{ii}/2} \, , \quad i=1,\dots,k\, , $$ $$ c_{ij} := \textrm{Cov}[X_i,X_j] = m_i \,m_j \,(e^{\sigma_{ij}} - 1) \, , \quad i,j=1,\dots,k\, , $$ and it follows that $c_{ij}>-m_im_j$. Hence, we can ask the converse question: given $m=(m_1,\dots,m_k)\in\mathbb{R}^k_+$ and $k\times k$ symmetric positive definite matrix $C=(c_{ij})$, satisfying $c_{ij}>-m_im_j$, if we let $$ \mu_i = \log m_i - \frac{1}{2} \log\left(\frac{c_{ii}}{m_i^2} + 1 \right) \, , \quad i=1,\dots,k \, , $$ $$ \sigma_{ij} = \log\left(\frac{c_{ij}}{m_i m_j} + 1 \right) \, , \quad i,j=1,\dots,k \, , $$ we will have a lognormal vector with the prescribed means and covariances. The constraint on $C$ and $m$ is equivalent to the natural condition $\textrm{E}[X_i X_j]>0$. A: Actually, I have a definitely pedestrian solution. Start with $X_1\sim \text{Ga}(\alpha_{11},\beta_{1})$ and pick the two parameters to fit the values of $\mathbb{E}[X_1]$, $\text{var}(X_1)$. Take $X_2|X_1\sim \text{Ga}(\alpha_{21}X_1+\alpha_{22},\beta_{2})$ and pick the three parameters to fit the values of $\mathbb{E}[X_2]$, $\text{var}(X_2)$, and $\text{cov}(X_1,X_2)$. Take $X_3|X_1,X_2\sim \text{Ga}(\alpha_{31}X_1+\alpha_{32}X_2+\alpha_{33},\beta_{3})$ and pick the four parameters to fit the values of $\mathbb{E}[X_3]$, $\text{var}(X_3)$, $\text{cov}(X_1,X_3)$ and $\text{cov}(X_2,X_3)$. and so on... However, given the constraints on the parameters and the non-linear nature of the moment equations, it may be that some sets of moments correspond to no acceptable set of parameters. For instance, when $k=2$, I end up with the system of equations $$ \beta_1 =\mu_1/\sigma_1^2\,,\quad \alpha_{11}-\mu_1\beta_1 =0 $$ $$ \alpha_{22} = \mu_2\beta_2 - \alpha_{21}\mu_1\,,\quad \alpha_{21} = \dfrac{(\sigma_{12}+\mu_1\mu_2-\mu_2)}{\sigma^2_1+\mu_1^2- \mu_1}\beta_2 $$ $$ \dfrac{(\sigma_{12}+\mu_1\mu_2-\mu_2)^2}{(\sigma^2_1+\mu_1^2- \mu_1)^2} \sigma_1^2 + \dfrac{\mu_2}{\beta_2} = \sigma^2_2\,. $$ Running an R code with arbitrary (and a priori acceptable) values for $\mu$ and $\Sigma$ led to many cases with no solution. Again, this does not mean much because correlation matrices for distributions on $\mathbb{R}_+^2$ may have stronger restrictions that a mere positive determinant. update (04/04): deinst rephrased this question as a new question on the math forum.
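A quick R sketch of the lognormal construction from the first answer: match the moments, then sample and check (MASS is assumed to be available; the numbers are arbitrary but satisfy the constraint $c_{ij}>-m_im_j$):
library(MASS)

lognormal_match <- function(m, C, n = 1e5) {
  Sigma <- log(C / outer(m, m) + 1)        # sigma_ij = log(c_ij/(m_i m_j) + 1)
  mu    <- log(m) - diag(Sigma) / 2        # mu_i = log(m_i) - sigma_ii/2
  exp(mvrnorm(n, mu, Sigma))               # positive vectors with the prescribed moments
}

m <- c(1, 2)
C <- matrix(c(0.5, 0.2, 0.2, 1), 2, 2)
X <- lognormal_match(m, C)
colMeans(X)   # approximately 1 and 2
cov(X)        # approximately C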
{ "pile_set_name": "StackExchange" }
Q: How to use debounce to only redraw graph after all inputs have been revalidated I am trying to wrap my head around debounce. I have attempted to insert it into the chain of reactive functions seen in the code below but to no avail. I am trying to figure out how to make the plot refresh only after the invalidation sequence has stopped updating. Where and how do I use debounce? server: library(shiny) library(dplyr) library(ggplot2) shinyServer(function(input, output, session, clientData) { Accident.Date <- as.Date(c("2018-06-04", "2018-06-05", "2018-06-06", "2018-06-07", "2018-06-08", "2018-06-09", "2018-06-10", "2018-06-11", "2018-06-12", "2018-06-13", "2018-06-14", "2018-06-15", "2018-06-16", "2018-06-17", "2018-06-18", "2018-07-18")) Time.of.Kill <- as.character(c("DAWN", "DAY", "DARK", "UNKNOWN", "DUSK", "DAY", "DAY", "DAWN", "DAY", "DARK", "UNKNOWN", "DUSK", "DARK", "DUSK", "DARK", "DAY")) Sex <- as.character(c("MALE", "MALE", "FEMALE", "MALE", "FEMALE", "FEMALE", "MALE", "MALE", "FEMALE", "FEMALE", "MALE", "FEMALE", "MALE", "FEMALE", "FEMALE", "FEMALE")) Age <- as.character(c("ADULT", "YOUNG", "UNKNOWN", "ADULT", "UNKNOWN", "ADULT", "YOUNG", "YOUNG", "ADULT", "ADULT", "ADULT", "YOUNG", "ADULT", "YOUNG", "YOUNG", "ADULT")) Species <- as.character(c("Deer", "Deer", "Deer", "Bear", "Deer", "Cougar", "Bear", "Beaver", "Deer", "Skunk", "Moose", "Deer", "Deer", "Elk", "Elk", "Elk")) Year <- as.numeric(c("0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0")) data <- data.frame(Accident.Date, Time.of.Kill, Sex, Age, Species, stringsAsFactors = FALSE) data <- data %>% mutate(Data.Set = "Current") #A set of reactive filters. Only data that has passed all filters is passed to the map, graph, datatable etc. **Order goes datacheck > yearcheck > speccheck > sexcheck > timecheck > agecheck > indaterange bindata <- reactive({ filter(data, Data.Set %in% input$datacheck) }) yrdata <- reactive({ filter(bindata(), Year %in% input$yearcheck) }) specdata <- reactive({ subset(yrdata(), Species %in% input$speccheck) }) sexdata <- reactive({ filter(specdata(), Sex %in% input$sexcheck) }) timedata <- reactive({ filter(sexdata(), Time.of.Kill %in% input$timecheck) }) agedata <- reactive({ filter(timedata(), Age %in% input$agecheck) }) #Does the date range filter. Selects min and max from the two inputs of the observed indaterange filter. data1 <- reactive({ filter(agedata(), Accident.Date >= input$inDateRange[[1]], #### Tried to debounce both of the final input for filtering so they will calculate after a second or so, but wasn't successful. Accident.Date <= input$inDateRange[[2]]) }) ####Also tried to debounce the reactive dataframe but it appears thats not how debounce works either. # data1 <- reactive({data1()}) %>% debounce(1000) #If statement for choosing between current and historical datasets. If current is selected, year is set to 0 and the selection box is hidden. 
observe({ if (input$datacheck == 'Current') updateSelectInput(session, "yearcheck", choices = c("0"), selected = c("0")) else updateSelectizeInput(session, "yearcheck", choices = sort(unique(bindata()$Year), decreasing = TRUE), server=TRUE) }) observe({ req((input$datacheck == 'Historical')) updateSelectizeInput(session, "speccheck", choices = sort(unique(yrdata()$Species)), server=TRUE) }) #Creates the observed Species observe({ x <- input$yearcheck if (is.null(x)) x <- character(0) updateSelectizeInput(session, "speccheck", choices = sort(unique(yrdata()$Species)), server=TRUE) }) #Creates the observed Sex observe({ x <- input$speccheck if (is.null(x)) x <- character(0) updateCheckboxGroupInput(session, inputId = "sexcheck", choices = unique(specdata()$Sex), selected = unique(specdata()$Sex), inline = TRUE) }) #Creates the observed Time observe({ x <- input$sexcheck if (is.null(x)) x <- character(0) updateCheckboxGroupInput(session, inputId = "timecheck", choices = unique(sexdata()$Time.of.Kill), selected = unique(sexdata()$Time.of.Kill), inline = TRUE) }) #Creates the observed Age observe({ x <- input$timecheck if (is.null(x)) x <- character(0) updateCheckboxGroupInput(session, inputId = "agecheck", choices = unique(timedata()$Age), selected = unique(timedata()$Age), inline = TRUE) }) #Creates the observed dates and suppresses warnings from the min max observe({ x <- input$agecheck if (is.null(x)) x <- character(0) #And update the date range values to match those of the dataset updateDateRangeInput( session = session, inputId = "inDateRange", start = suppressWarnings(min(agedata()$Accident.Date)), end = suppressWarnings(max(agedata()$Accident.Date)) ) }) output$txt <- renderText({nrow(data1())}) output$bar <- renderPlot({ P <- ggplot(data = data1(), aes(x = reorder(factor(Species),factor(Species),function(x)-length(x)), fill = factor(Species)))+ geom_bar(stat="count", width=0.7) + guides(fill=FALSE, color=FALSE) + theme_minimal() cols <- c("Deer" = "#BAA7A2", "Bear" = "#F3923F", "Cougar" = "#FEE3C0", "Beaver" = "#FCCF31", "Skunk" = "#E6E7E8", "Moose" = "#8AC04B", "Elk" = "#D3CB8D", "Badger" = "#C1E3D8", "Bobcat" = "#EE5C30", "Buffalo" = "#7F2F8B", "Caribou" = "#C59FC8", "Coyote" = "#927E7A", "Eagle" = "#DCDDDE", "Fox" = "#32A7DC", "Gbear" = "#AD2147", "Horned" = "#F5C2D7", "Lynx" = "#91632D", "Marten" = "#808083", "Mule" = "#CBBDB9", "Muskrat" = "#A3C497", "Otter" = "#0C6F47", "Porcupine" = "#4C5FA7", "Possum" = "#A3B5DB", "Rabbit" = "#EA212E", "Raccoon" = "#BE953B", "Sheep" = "#008D82", "WhiteTailed_Deer" = "#E0D8D6", "Wolf" = "#8A5A7C") P + scale_fill_manual(values = cols) + labs(x = "Species") + labs(y = "Total Count") + geom_text(stat='count', aes(label=..count..), vjust=-1) + theme(axis.text.x = element_text(angle = 45, vjust = 1, hjust=1)) }) }) ui: navbarPage("Test", id="nav", tabPanel("Map", absolutePanel(id = "controls", class = "panel panel-default", fixed = TRUE, draggable = FALSE, top = 200, left = 5, right = "auto", bottom = "auto", width = "auto", height = "auto", radioButtons("datacheck", label = tags$div( HTML("<b>Dataset</b>")), choices = c("Current" = "Current", "Historical" = "Historical"), selected = c("Current"), inline = TRUE), conditionalPanel(condition = "input.datacheck != 'Current'", #Only displays yearcheck for historical as there is no year column on current dataset. Current dataset has had all year values set to 0. 
selectizeInput("yearcheck", label = "Select Year (Only Available for Historical)", choices = NULL, options = list(placeholder = 'Select Year:', maxOptions = 40, maxItems = 40))), selectizeInput("speccheck", h3("Select Species:"), choices = NULL, options = list(placeholder = 'Select Species: (Max 12) ', maxOptions = 36, maxItems = 12)), conditionalPanel(condition = "input.speccheck >= '1'", dateRangeInput("inDateRange", "Date range input:"), checkboxGroupInput("sexcheck", label = tags$div( HTML("<b>Sex</b><br>"))), checkboxGroupInput("agecheck", label = tags$div( HTML("<b>Age</b><br>"))), checkboxGroupInput("timecheck", label = tags$div( HTML("<b>Time of Accident</b><br>"))), plotOutput("bar") ), verbatimTextOutput("txt") ))) A: This worked for me. I had to update some package and it started to work. data1 <- reactive({data1()}) %>% debounce(1000)
{ "pile_set_name": "StackExchange" }
Q: What file formats does lp support? What types of files, such as images (jpg, png, tiff) and documents (pdf, doc, docx, xls) can lp render correctly and print? I'm trying to make a print kiosk and don't quite know how accommodating I can be with lp to print from the command line with a script. A: lp (/lpr) can only print PDF, PostScript, and plain text files. But it does not have to stop there: you can make a PDF with a lot of tools and then print that PDF. Examples: If you want docx and other Microsoft formats, you can use a converter like JODConverter to convert those to PDF. For images you can do the same: make a PDF from the image and print that: mogrify -format pdf *.jpg will create a PDF from each JPEG. Or with the convert command in ImageMagick: for file in *.jpg ; do convert "$file" "${file/%jpg/pdf}" ; done
{ "pile_set_name": "StackExchange" }
Q: $\delta$ and $\epsilon$ in the continuity definition Considering the following definition of continuity (there is nothing unusual yet here): $$\forall \varepsilon > 0\ \exists \delta > 0\ \text{s.t. } 0 < |x - x_0| < \delta \implies |f(x) - f(x_0)| < \varepsilon $$ I always thought $\delta$, $\epsilon$ $\rightarrow$ 0. However I was told today (and my assignment wash downgraded accordingly) that $\delta$ and $\epsilon$ are not necessary infinitesimally small numbers. Can someone please share an insight into this idea and provide a detailed explanation on how I can show that a piece wise continuous function contains discontinuities without resorting to infinitesimally small $\delta$ and $\epsilon$ in the above continuity definition. I am really struggling with this new removal of restriction on $\delta$ and $\epsilon$, so a really detailed explanation would be much appreciated. A: The point is that $\epsilon$ can be made arbitrarily small and this definition encapsulates that. But it still has to be a number. So if a function is discontinuous at a point $x$, then you can find an $\epsilon>0$ for which there is no $\delta>0$ that makes $|f(x)-f(x_0)|<\epsilon$ for all $|x-x_0|<\delta$. Consider for example the piecewise function $f(x)=0$ for $x\leq 0$ and $f(x)=1$ for $x>0$. There's clearly a discontinuity at 0 and it can be captured as follows: $f(0)=0$, and $|f(x)-f(0)|=|f(x)-0|$. For any $\delta>0$, the inequality $|x-0|<\delta$ is satisfied by $x=\delta/2$ (say), and $|f(\delta/2)-f(0)|=|1-0|=1$. So if you pick $\epsilon=1/2$, you will never find a $\delta>0$ such that $|f(x)-f(0)|<\epsilon$ whenever $|x-x_0|<\delta$. On the other hand, notice that $x=-\delta/2$ gives $|f(-\delta/2)-f(0)|=0$. So think of continuity as the ability to probe how much the function changes in very small neighborhoods around the point you are interested in. Continuity is scale invariant: no matter how much you zoom into the point that your function is continuous at, it will still look continuous. This is the "arbitrarily small" idea at work.
{ "pile_set_name": "StackExchange" }
Q: edit text in android setText variable Good morning, I have a question and would like to see if someone can help me. I have a number of TextViews defined: TextView Casilla1 = (TextView) findViewById (R.id.TxtVCasilla1); TextView Casilla2 = (TextView) findViewById (R.id.TxtVCasilla2); TextView Casilla3 = (TextView) findViewById (R.id.TxtVCasilla3); TextView Casilla4 = (TextView) findViewById (R.id.TxtVCasilla4); etc ... up to 42 boxes. Depending on the month, this activity has to change the text of these TextViews, and I would like to do it in a loop: for (int i = 1; i <30, i + +) {              String x = "Casilla" + i;              x.setText (i);          } This would work if setText could somehow use the name built inside the string, but it gives me an error. Is there some way to do this? What I want is to build each TextView's name inside the loop, from the string "Casilla" plus the current value of the counter, and then call setText on that view with the counter value. I don't know how to turn the resulting String into something that setText can be called on. I'm not sure you will understand me because I'm writing through a translator. Forgive my mistakes, as I am new to the forum and to Android programming; I have consulted this forum a lot, but today is the first time I post. A: You could store the views in an array: TextView[] Casilla = new TextView[42]; Casilla[0] = (TextView) findViewById (R.id.TxtVCasilla1); Casilla[1] = (TextView) findViewById (R.id.TxtVCasilla2); // ... and access them later by index: for (int i = 0; i < Casilla.length; i++) { Casilla[i].setText(""); } Just mentioning the alternative, since it seems the use of getIdentifier() is discouraged and displaying these in a ListView might not be what you want (nothing suggests it is - 42 could be a fixed number, like 64 would be for a chess board for example).
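For completeness, the getIdentifier() route mentioned at the end looks roughly like this; it works, but it is slower and not checked at compile time, which is why it tends to be discouraged:
for (int i = 1; i <= 42; i++) {
    int id = getResources().getIdentifier("TxtVCasilla" + i, "id", getPackageName());
    TextView casilla = (TextView) findViewById(id);
    casilla.setText(String.valueOf(i)); // setText(int) would be treated as a string resource id, so convert first
}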
{ "pile_set_name": "StackExchange" }
Q: Which is correct: "All the media is" or "all the media are"? I think I know that media is a plural word. So then which of the following is correct? All the media is... All the media are... When you search Google, both seem to appear at the same frequency. A: Media can be treated either as singular or plural. When used in the singular, it is often treated as a collective noun. The media has gone insane about this trial. Here is what the OED online says: The word media comes from the Latin plural of medium. The traditional view is that it should therefore be treated as a plural noun in all its senses in English and be used with a plural rather than a singular verb: the media have not followed the reports (rather than 'has'). In practice, in the sense 'television, radio, and the press collectively', it behaves as a collective noun (like staff or clergy, for example), which means that it is now acceptable in standard English for it to take either a singular or a plural verb. The word is also increasingly used in the plural form medias, as if it had a conventional singular form media, especially when referring to different forms of new media, and in the sense 'the material or form used by an artist': there were great efforts made by the medias of the involved countries; about 600 works in all genres and medias were submitted for review. Merriam-Webster's Dictionary of English Usage (see page 630) also notes the use of media in the singular. It gives examples of it being used as a singular countable noun as well. . . . partly as a cultural media. -- American Journal of Sociology, 1948 . . . producing a suitable media for organic life. --Britannica Book of the Year 1946 . . . an optical disc media. -- Predicasts Technology Update, 1987 In short, the usage of media as a singular noun is well-documented. A: Actually, when I google for these phrases, I'm seeing this: "all the media are" — 110,000 "all the media is" — 225,000 The Corpus of Contemporary American English paints a similar picture, though the sample size is rather small (1 vs 3 cites). You get more results if you leave out the "all" (445 vs 602). These numbers also include a few cites of the form "the quality of the media is..." or "many in the media are...", which are obviously not relevant here, but the overall trend is still rather clear. The British National Corpus, on the other hand, favors the plural form. There are fewer results overall, so I took my time to check every single one for relevance, and here's the overview:
                    BNC   COCA    Google
the media are        43   445?
all the media are     1      1   110,000
the media is         19   602?
all the media is      0      3   225,000
So the Americans prefer the singular form, while the British prefer the plural. This should come as no surprise knowing that the British like to treat other collective nouns such as staff, Microsoft or Metallica as plural, too, while Americans prefer the singular. See these answers to related questions: Is staff plural? Is a company always plural, or are small companies singular?
{ "pile_set_name": "StackExchange" }
Q: Semantics of APL dot operator? The dot . can be used to realize different types of products. For example, 1 2 3 +.× 4 5 6 I assumed the semantics of a f.g b was: compute g(a[i], b[i]) then reduce using f. That is, dot f g = f/a g¨ b ⍝ map g between a and b, and then reduce using f To verify this, I wrote: ]display a ← ⍳ 4 ⋄ b ← 4 +⍳ 4 ⋄ I ← { ((⊂ ⍺), (⊂ ⍵))} ⋄ a I.I b ┌─────────────────────────────────────┐ │ ┌→────────────────────────────────┐ │ │ │ ┌→──┐ ┌→──────────────────────┐ │ │ │ │ │1 5│ │ ┌→──┐ ┌→────────────┐ │ │ │ │ │ └~──┘ │ │2 6│ │ ┌→──┐ ┌→──┐ │ │ │ │ │ │ │ └~──┘ │ │3 7│ │4 8│ │ │ │ │ │ │ │ │ └~──┘ └~──┘ │ │ │ │ │ │ │ └∊────────────┘ │ │ │ │ │ └∊──────────────────────┘ │ │ │ └∊────────────────────────────────┘ │ └∊────────────────────────────────────┘ We can clearly see the right-fold of the elements, as it first maps the I creating (1 5) (2 6) (3 7) (4 8) and then folds using I creating the nested structure, so my definition seems to work! However, this does not work for matrices: ]display a ← 2 2 ⍴ ⍳ 4 ⋄ b ← 4 + 2 2 ⍴ ⍳ 4 ⋄ I ← { ((⊂ ⍺), (⊂ ⍵))} ⋄ a I.I b ┌→────────────────────────────────┐ ↓ ┌→────────────┐ ┌→────────────┐ │ │ │ ┌→──┐ ┌→──┐ │ │ ┌→──┐ ┌→──┐ │ │ │ │ │1 5│ │2 7│ │ │ │1 6│ │2 8│ │ │ │ │ └~──┘ └~──┘ │ │ └~──┘ └~──┘ │ │ │ └∊────────────┘ └∊────────────┘ │ │ ┌→────────────┐ ┌→────────────┐ │ │ │ ┌→──┐ ┌→──┐ │ │ ┌→──┐ ┌→──┐ │ │ │ │ │3 5│ │4 7│ │ │ │3 6│ │4 8│ │ │ │ │ └~──┘ └~──┘ │ │ └~──┘ └~──┘ │ │ │ └∊────────────┘ └∊────────────┘ │ └∊────────────────────────────────┘ Interesting! so it seems to actually compute some sort of outer product between its elements in this case, and not a "fold"? My hypothetical definition of the . operator does not perform the same operation: ]display a ← 2 2 ⍴ ⍳ 4 ⋄ b ← 4 + 2 2 ⍴ ⍳ 4 ⋄ I ← { ((⊂ ⍺), (⊂ ⍵))} ⋄ I/a I¨b ┌→────────────────────────────────┐ │ ┌→────────────┐ ┌→────────────┐ │ │ │ ┌→──┐ ┌→──┐ │ │ ┌→──┐ ┌→──┐ │ │ │ │ │1 5│ │2 6│ │ │ │3 7│ │4 8│ │ │ │ │ └~──┘ └~──┘ │ │ └~──┘ └~──┘ │ │ │ └∊────────────┘ └∊────────────┘ │ └∊────────────────────────────────┘ So, what are the actual semantics of .(dot) in APL? How would I discover this by myself? A: From the Dyalog help page on "Inner Product": R←X f.g Y The result of the derived function has shape (¯1↓⍴X),1↓⍴Y; each item is f/x g¨y where x and y are vectors taken from all the combinations of vectors along the last axis of X and the first axis of Y. In the case of vector f.g vector, it is fine to understand it as f/ vector g¨ vector as you already observed. However, it gets more complicated for matrices and higher-dimensional arrays. For matrices, the most straightforward usage (and the most cited one) is the "matrix product" +.×. Mathematically, for an m-by-n matrix X and n-by-p matrix Y, X +.× Y is defined as an m-by-p matrix whose element at [i;j] is the vector dot product (sum of element-wise products) of ith row of X and jth column of Y. For an illustration, refer to the Wikipedia page. In this case, the array X (an m-by-n matrix) has shape (⍴) of m n, and Y has shape n p. The result has shape m p, which is equal to (¯1↓m n),1↓n p. If we generalize this definition to arbitrary functions f and g, (for matrices X and Y) we can define X f.g Y to be another matrix whose elements are "reduction by f" of "element-wise g"s of each row of X and each column of Y. This is precisely what the doc is talking about when it mentions f/x g¨ y. Also, X has m rows and Y has p columns, so calculating all combinations of each row of X and each column of Y will give precisely m×p values. 
So far, we've covered more than half of the doc's sentence. Then what does "vectors along the last axis of X" mean? For the matrix X of shape m n, the last axis has length n, so the matrix X can be viewed as m vectors of length n. Similarly, "vectors along the first axis of Y" means viewing the shape n p matrix Y as p vectors of length n. Then the two length-n vectors (one from X and the other from Y) become the arguments of g¨, which implies that the lengths must match. We can also generalize this concept to higher-dimensional arrays. If we have an a×b×c array X, it has a×b vectors of length c (last axis). If we have another c×d×e array Y, it has d×e vectors of length c (first axis). Then computing over all combinations of vectors will give a×b×d×e elements, which naturally gives the result array of shape a b d e. Summarizing all of this, X f.g Y is equivalent to extracting last-axis vectors of X and first-axis vectors of Y and computing the outer product of f/ x g¨ y over the vectors: a ← 2 3 3⍴⍳4 b ← 3 3 4⍴⍳5 f ← {⍺+⊂⍵} g ← {⍺⍵} ⎕←(a f.g b) ≡ (↓[≢⍴a]a) ∘.{⊃ f/ ⍺ g¨ ⍵} (↓[1]b) This program prints 1, i.e. true. Try it online!
{ "pile_set_name": "StackExchange" }
Q: Prove that $AN$ is a subgroup of $G$ if $A$ and $N$ are its subgroups and $N$ is normal in $G$ Let $A$ be a subgroup of a group $G$, and let $N$ be a normal subgroup of $G$. Prove that in this case $AN = \{an|a \in A, n \in N \}$ is a subgroup of $G$. I know that since $N$ is normal, we have $AN = NA$. $1$ clearly lies in $AN$, since both $A$ and $N$ contain $1$. Let $an \in AN$. Then $n^{-1}a^{-1}$ lies in $NA$, hence it lies in $AN$. But I'm not sure how to prove that $AN$ is closed under the operation in $G$. A: Take $an,bm\in AN$. Since $bm\in AN=NA$, then $bm=m'b'$ for some $m'\in N$ and $b'\in A$. Then $anbm=anm'b'$. Set $r=nm'\in N$. Thus $ar\in AN=NA$, and therefore $ar=r'a'$ for $r'\in N$ and $a'\in A$. Thus $anbm=anm'b'=arb'=r'a'b'$, where $r'\in N$ and $a'b'\in A$, so $anbm\in NA=AN$ and done.
{ "pile_set_name": "StackExchange" }
Q: How to create thumbnails from WMV video's i want to display thumbnails for videos listed on my site, i want to fetch a single frame from a video (from a particular time) and display them as thumbnails. Is that possible using .Net ? (C# or VB) A: Yes that is possible. You need to use DirectShow.NET. I found this useful. EDIT : OK it looks like the library has changed since I used it... curse open source :) I just translated to the following code and tested and it works fine for me(note that it assumes there is a wmv in c:\aaa called C4.wmv and the output will go to c:\aaa\out.bmp) IGraphBuilder graphbuilder = (IGraphBuilder)new FilterGraph(); ISampleGrabber samplegrabber = (ISampleGrabber) new SampleGrabber(); graphbuilder.AddFilter((IBaseFilter)samplegrabber, "samplegrabber"); AMMediaType mt = new AMMediaType(); mt.majorType = MediaType.Video; mt.subType = MediaSubType.RGB24; mt.formatType = FormatType.VideoInfo; samplegrabber.SetMediaType(mt); int hr = graphbuilder.RenderFile("C:\\aaa\\c4.wmv", null); IMediaEventEx mediaEvt = (IMediaEventEx)graphbuilder; IMediaSeeking mediaSeek = (IMediaSeeking)graphbuilder; IMediaControl mediaCtrl = (IMediaControl)graphbuilder; IBasicAudio basicAudio = (IBasicAudio)graphbuilder; IVideoWindow videoWin = (IVideoWindow)graphbuilder; basicAudio.put_Volume(-10000); videoWin.put_AutoShow(OABool.False); samplegrabber.SetOneShot(true); samplegrabber.SetBufferSamples(true); long d = 0; mediaSeek.GetDuration(out d); long numSecs = d / 10000000; long secondstocapture = (long)(numSecs * 0.10f); DsLong rtStart, rtStop; rtStart = new DsLong(secondstocapture * 10000000); rtStop = rtStart; mediaSeek.SetPositions(rtStart, AMSeekingSeekingFlags.AbsolutePositioning, rtStop, AMSeekingSeekingFlags.AbsolutePositioning); mediaCtrl.Run(); EventCode evcode; mediaEvt.WaitForCompletion(-1, out evcode); VideoInfoHeader videoheader = new VideoInfoHeader(); AMMediaType grab = new AMMediaType(); samplegrabber.GetConnectedMediaType(grab); videoheader = (VideoInfoHeader)Marshal.PtrToStructure(grab.formatPtr, typeof(VideoInfoHeader)); int width = videoheader.SrcRect.right; int height = videoheader.SrcRect.bottom; Bitmap b = new Bitmap(width, height, PixelFormat.Format24bppRgb); uint bytesPerPixel = (uint)(24 >> 3); uint extraBytes = ((uint)width * bytesPerPixel) % 4; uint adjustedLineSize = bytesPerPixel * ((uint)width + extraBytes); uint sizeOfImageData = (uint)(height) * adjustedLineSize; BitmapData bd1 = b.LockBits(new Rectangle(0, 0, width, height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb); int bufsize = (int)sizeOfImageData; int n = samplegrabber.GetCurrentBuffer(ref bufsize, bd1.Scan0); b.UnlockBits(bd1); b.RotateFlip(RotateFlipType.RotateNoneFlipY); b.Save("C:\\aaa\\out.bmp"); Marshal.ReleaseComObject(graphbuilder); Marshal.ReleaseComObject(samplegrabber); Also be aware that DirectShow is somethign of a framework in limbo... MS kind of recommend that you go to Media Foundation... I am an old school DirectX programmer that frankly doesn't do much with it any more.
{ "pile_set_name": "StackExchange" }
Q: Deploying Flask App with sqlite db in AWS Elastic Beanstalk I have the tutorial app from the Flask docs, modified for my use. I was able to deploy my Flask app and create the db via container_commands (flask init-db). But when I try to write something to the db from the web browser, it throws the exception "sqlite3.OperationalError: attempt to write a readonly database". It seems the problem is the write permission on the sqlite file, but the file was created at the time of deploying the application. Any help? Ideally, when you are deploying in your own production env (not cloud), the sqlite file is created in the venv/var/app-instance folder. How do I access this in AWS EB? A: It looks like a file write permission issue to me as well. Here is how I would troubleshoot it: log in to the instance via SSH, give write permission to the file, and see if that resolves the issue. If it does, I would automate that using .ebextensions. Hope this helps you move forward. Also look at the reference below, which shows how to change file permissions. Reference: python sqlite3 OperationalError: attempt to write a readonly database
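A sketch of what that .ebextensions automation could look like; container_commands run from the application's staging directory, so the paths are relative to your app root, and the instance folder name and database filename below are only guesses, adjust them to your project:
# .ebextensions/01_sqlite_permissions.config
container_commands:
  01_make_db_writable:
    command: |
      chmod 775 instance || true
      chmod 664 instance/flaskr.sqlite || true
Note that the containing directory needs to be writable by the web server user as well, since SQLite creates journal files next to the database.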
{ "pile_set_name": "StackExchange" }
Q: Reading only SPECIFIC range of lines in a text file C++ Hi I have a text file which contains some numerical data. Of that text file ONLY the lines 14 to 100 have to be read into my C++ program. Each of these lines contains three numbers corresponding to x,y,z coordinates of a point. Thus, coordinates are given for 87 points in all. I want to put these numbers into the arrays xp[87] yp[87] and zp[87]. How do I perform this? Up till now I have been using the following: ifstream readin(argv[1])//Name of the text file for (int i=0; i<=86; ++i) { readin>>xp[i]>>yp[i]>>zp[i]; } But this technique works only for files which contain exactly 87 lines and where the data to be read starts from the first line. In the present case I want to ignore ALL lines before line 14 and ALL lines after line 100. A: Read line by line, for most flexibility in your format: #include <fstream> #include <sstream> #include <string> std::ifstream infile("thefile.txt"); std::string line; unsigned int count = 0; while (std::getline(infile, line)) { ++count; if (count > 100) { break; } // done if (count < 14) { continue; } // too early std::istringstream iss(line); if (!(iss >> x[count - 14] >> y[count - 14] >> z[count - 14])) { // error } } // all done
{ "pile_set_name": "StackExchange" }
Q: What does one mean by magnitude of normal in co-ordinate geometry? Basically whenever I imagine a surface, by normal at a point, I mean a straight line perpendicular to the surface at that point, which has infinite length, as straight lines do. But how does this straight line have a magnitude when I observe this from a vectorial viewpoint? Then I consider the normal to be a vector and it has a magnitude. But co-ordinate geometry is analogous to vectorial geometry. That is, I can solve co-ordinate geometry problems by vectors and vice-versa. Still this thing confuses me. Can anyone offer some help? A: The straight line orthogonal to a surface at a point $P$ is, in fact, a line, so it has infinite length. But the equation of a straight line is usually represented as $\vec x= \vec v t +\vec v_0$; here $\vec v$ is a vector that gives the direction of the line, and this vector has a magnitude, but this magnitude has nothing to do with a ''magnitude of the line'', an expression that has no meaning. Note that this vector is also oriented, but the straight line is the same for oppositely oriented vectors (really for all parallel vectors), and we can take as positive the orientation given by the vector. In many applications it's important to define an orientation of the normal and also to have an orienting vector of unit magnitude, so we define a normal vector to the surface, i.e. a unit vector that orients the orthogonal line and also fixes an orientation as positive. This is useful especially in physical applications, as in the definition of the flux of a vector field through a surface.
{ "pile_set_name": "StackExchange" }
Q: Compile and execute a JDK preview feature with Maven With JDK/12 EarlyAccess Build 10, the JEP-325 Switch Expressions has been integrated as a preview feature in the JDK. A sample code for the expressions (as in the JEP as well): Scanner scanner = new Scanner(System.in); Day day = Day.valueOf(scanner.next()); switch (day) { case MONDAY, TUESDAY -> System.out.println("Back to work.") ; case WEDNESDAY -> System.out.println("Wait for the end of week...") ; case THURSDAY,FRIDAY -> System.out.println("Plan for the weekend?"); case SATURDAY, SUNDAY -> System.out.println("Enjoy the holiday!"); } where Day being an enum as public enum Day { MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY } The Preview Language and VM Features JEP-12 already elaborate how a feature can be enabled during compile and runtime using javac and java. How can one try out this feature using Maven? A: Step 1 One can make use of the following maven configurations to compile the code using the --enable-preview along with --release 12+ (e.g. 13, 14, 15) argument. <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>3.8.0</version> <configuration> <release>12</release> <!-- <release>13/14/15</release> --> <compilerArgs>--enable-preview</compilerArgs> </configuration> </plugin> <!-- This is just to make sure the class is set as main class to execute from the jar--> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-shade-plugin</artifactId> <version>3.1.0</version> <executions> <execution> <phase>package</phase> <goals> <goal>shade</goal> </goals> <configuration> <transformers> <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/> <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer"> <mainClass>edu.forty.bits.expression.SwitchExpressions</mainClass> </transformer> </transformers> </configuration> </execution> </executions> </plugin> </plugins> </build> Note:- I had to also ensure on my MacOS that my ~/.mavenrc file was configured to mark java 13 as the default java configured for maven. Step 2 Execute the maven command to build the jar from the module classes mvn clean verify Step 3 Use the command line to execute the main class of the jar created in the previous step as : java --enable-preview -jar target/forty-bits-of-java-1.0.0-SNAPSHOT.jar the last argument is the path to the jar built by maven. This produces the output as expected as: (screenshot is from a previous execution.) Source on GitHub Edit: A learning from an unwanted debugging session, use the arguments in the format as follows: <compilerArgs> <arg>--enable-preview</arg> </compilerArgs> Reason being, if you specify two different arguments it doesn't fail during the configuration validation and the one found later overrules the effective config: <compilerArgs>--enable-preview</compilerArgs> <compilerArgs>-Xlint:all</compilerArgs> A: To enable preview feature you must define --enable-preview in pom.xml under compilerArgs in below I mention how to enable preview feature with java 13. 
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.8.0</version>
                <configuration>
                    <release>13</release>
                    <compilerArgs>
                        --enable-preview
                    </compilerArgs>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>3.0.0-M3</version>
                <configuration>
                    <argLine>--enable-preview</argLine>
                </configuration>
            </plugin>
        </plugins>
    </build>
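If you also want to run the main class straight from Maven rather than from a hand-built jar, one option (a sketch, not part of either answer above; the main class name is a placeholder) is the exec-maven-plugin with the exec goal, so that the preview flag is passed to a forked java process:

    <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>exec-maven-plugin</artifactId>
        <version>1.6.0</version>
        <configuration>
            <executable>java</executable>
            <arguments>
                <!-- forked JVM, so the preview flag is honoured at runtime -->
                <argument>--enable-preview</argument>
                <argument>-classpath</argument>
                <classpath/>
                <!-- placeholder: replace with your own main class -->
                <argument>com.example.SwitchExpressions</argument>
            </arguments>
        </configuration>
    </plugin>

Then invoke it with mvn compile exec:exec.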
{ "pile_set_name": "StackExchange" }
Q: .load a PHP page every 3 seconds with jQuery

Right now I have this jQuery:

    $(".updatetwo").click(function () {
        $('div.result').load('permissions.php');
    });

I wanted to know: how could I .load the PHP page every 3 seconds instead of on click?

A:

    var handler = window.setInterval(function () {
        jQuery('div.result').load('permissions.php');
    }, 3000);

If you plan to stop loading data at some point, you can clear the handler:

    window.clearInterval(handler);

window.setInterval(code, delay) executes the given code repeatedly, with the specified delay in milliseconds between executions.
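A related pattern worth knowing (a sketch of an alternative, not part of the answer above): chaining setTimeout instead of using setInterval schedules the next request only after the current one has finished, so slow responses never pile up:

    function poll() {
        jQuery('div.result').load('permissions.php', function () {
            // schedule the next load only once this one has completed
            setTimeout(poll, 3000);
        });
    }
    poll();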
{ "pile_set_name": "StackExchange" }
Q: Set values of groups in pandas conditionally (Python)

I have a dataframe with the following columns:

    duration  cost  channel
    2         180   TV1
    1         200   TV2
    2         300   TV3
    1         nan   TV1
    2         nan   TV2
    2         nan   TV3
    2         nan   TV1
    1         40    TV2
    1         nan   TV3

Some of the cost values are nans, and to fill them I need to do the following: group by channel; within a channel, sum the available cost and divide by the number of occurrences (the average); then reassign values for all rows within that channel: if duration = 1, cost = average * 1.5; if duration = 2, cost = average.

Example: for the TV2 channel we have 3 entries, one of which has a null cost. So I need to do the following: average = (200 + 40) / 3 = 80; if duration = 1, cost = 80 * 1.5 = 120.

    duration  cost  channel
    2         180   TV1
    1         120   TV2
    2         300   TV3
    1         nan   TV1
    2         80    TV2
    2         nan   TV3
    2         nan   TV1
    1         120   TV2
    1         nan   TV3

I know I should do df.groupby('channel') and then apply a function to each group. The problem is that I need to modify not only the null values; I need to modify all cost values within a group if one cost is null. Any tips or help would be appreciated. Thanks!

A: If I understand your problem correctly, you want something like:

    def myfunc(group):
        # only modify cost if there are nan's
        if len(group) != group.cost.count():
            # set all cost values to the mean (NaNs count toward the denominator)
            group['cost'] = group.cost.sum() / len(group)
            # multiply by 1.5 where the duration equals 1
            group.loc[group.duration == 1, 'cost'] *= 1.5
        return group

    df.groupby('channel').apply(myfunc)

       duration  cost channel
    0         2    60     TV1
    1         1   120     TV2
    2         2   100     TV3
    3         1    90     TV1
    4         2    80     TV2
    5         2   100     TV3
    6         2    60     TV1
    7         1   120     TV2
    8         1   150     TV3
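If the groups are large, a vectorized alternative may be worth considering (a rough sketch under the same rules, not from the original answer, assuming the frame is named df): use groupby.transform and boolean masks instead of a per-group apply:

    import numpy as np

    # per-channel average counting nan rows in the denominator (sum skips NaN)
    avg = df.groupby('channel')['cost'].transform(lambda s: s.sum() / len(s))
    # channels that contain at least one missing cost
    has_nan = df.groupby('channel')['cost'].transform(lambda s: s.isna().any())
    # 1.5 multiplier where duration == 1, otherwise 1.0
    factor = np.where(df['duration'].eq(1), 1.5, 1.0)
    # rewrite every row of an affected channel, leave other channels untouched
    df['cost'] = np.where(has_nan, avg * factor, df['cost'])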
{ "pile_set_name": "StackExchange" }
Q: Regex - string should not include more than 7 digits

Rule for the string: it can contain 0-7 digits.

Test cases:

    abcd1234ghi567 ⟶ true
    1234567abc     ⟶ true
    ab1234cd567    ⟶ true
    abc12          ⟶ true
    abc12345678    ⟶ false

How do I come up with a regular expression for this? The problem I am facing is how to keep count of the digits in the whole string; the digits can occur anywhere in the string. I would like a pure regex solution.

A: Approach 1

If you're OK with putting some of your logic in your JavaScript, something as simple as this function should do:

    function validate(teststring) {
        // "|| []" guards against match() returning null when the string has no digits
        return (teststring.match(/\d/g) || []).length < 8;
    }

Demo:

    function validate(teststring) {
        return (teststring.match(/\d/g) || []).length < 8;
    }

    document.body.innerHTML =
        '<b>abcd1234ghi567 :</b> ' + validate('abcd1234ghi567') + '<br />' +
        '<b>1234567abc :</b> ' + validate('1234567abc') + '<br />' +
        '<b>ab1234cd567 :</b> ' + validate('ab1234cd567') + '<br />' +
        '<b>abc12 :</b> ' + validate('abc12') + '<br />' +
        '<b>abc12345678 :</b> ' + validate('abc12345678') + '<br />';

(see also this Fiddle)

Approach 2

If you prefer all of your logic to be in your regex instead of your JavaScript, you could use a regex like /^(\D*\d?\D*){7}$/ or /^([^0-9]*[0-9]?[^0-9]*){7}$/ and use RegExp.prototype.test() instead of String.prototype.match() to test your strings. In that case, your validate function would look something like this:

    function validate(teststring) {
        return /^([^0-9]*[0-9]?[^0-9]*){7}$/.test(teststring);
    }

Demo:

    function validate(teststring) {
        return /^([^0-9]*[0-9]?[^0-9]*){7}$/.test(teststring);
    }

    document.body.innerHTML =
        '<b>abcd1234ghi567 :</b> ' + validate('abcd1234ghi567') + '<br />' +
        '<b>1234567abc :</b> ' + validate('1234567abc') + '<br />' +
        '<b>ab1234cd567 :</b> ' + validate('ab1234cd567') + '<br />' +
        '<b>abc12 :</b> ' + validate('abc12') + '<br />' +
        '<b>abc12345678 :</b> ' + validate('abc12345678') + '<br />';

A: Figured it out!

    /^(\D*\d?\D*){0,7}$/

Every numeric character can be surrounded by non-numeric characters, and the group may repeat at most 7 times, so at most 7 digits can be matched.
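For completeness, a quick way to check that last pattern against the question's own test cases (an illustrative snippet, not part of either answer):

    var re = /^(\D*\d?\D*){0,7}$/;
    ['abcd1234ghi567', '1234567abc', 'ab1234cd567', 'abc12', 'abc12345678']
        .forEach(function (s) {
            // only the last string contains more than 7 digits, so only it should print false
            console.log(s + ' -> ' + re.test(s));
        });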
{ "pile_set_name": "StackExchange" }
Q: 90° Bend: Mitered vs Curved

On RF PCBs there are many ways to bend a trace through 90°, but among them the curved and the mitered bend are considered good choices from a performance point of view (both shown below). For many years I thought that, if you have enough space on your board, the curved bend is the better choice over the mitered bend, but lately I have heard the opposite recommendation from one of my colleagues. My question is: when there is enough room, which of these options is the better choice? (Simulated results or real-world measurements are appreciated.)

A: Neither a mitred nor a curved bend is as 'good' as the equivalent length of straight track. There are two main aspects to goodness: S11 and S21.

S11. Other things being equal (width and thickness of trace, dielectric performance), the curved bend can be designed to have good S11 up to a higher frequency than the mitred one. That's because the mitre is effectively a lowpass filter. The two 135 degree corners produce a slight extra capacitive loading, and the thinner region in the elbow of the bend a slight series inductance. With a properly designed mitred bend (the mitre you illustrate is not properly designed; more should be taken off the corner, see below) the result is a matched 3rd-order filter with good S11 up to a certain frequency. As the curved bend has smaller parasitic loadings, its good-S11 passband is wider.

S21. A mitred bend can be made more compact than a curved bend, and will therefore have lower track loss and allow tighter component packing. Either might be critical for your application, but I think there is quite a tendency to prefer making things small these days.

The performance differences are small, and the design would have to be very marginal if they made the difference between working and not working.

This is more like the proportions I'd expect to see from a correctly designed mitre (image from microwaves101.com, not reproduced here).
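As a rough guide to what "properly designed" means for the mitre (an aside quoted from memory, not part of the answer above): the empirical Douville and James formula, often cited on microwaves101, gives the optimum fraction of the corner diagonal to cut away as a function of trace width $W$ and substrate height $h$:

    % M is the mitre percentage, i.e. 100 * (cut length) / (corner diagonal, W*sqrt(2));
    % quoted from memory, roughly valid for W/h >= 0.25 and moderate dielectric constants
    M\,[\%] \;\approx\; 52 + 65\, e^{-1.35\, W/h}

For $W/h = 1$ this gives roughly $M \approx 69\%$, noticeably more than the shallow cut shown in the question.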
{ "pile_set_name": "StackExchange" }