Q: PHP upload filename to mysql This code transfers the file to the specified folder and db table, but when I launch/open/run the page in the browser, it automatically sends something to the db table and the filename field is empty. I haven't even clicked/uploaded anything yet. I don't know if I explained it properly. The problem is that when I open the page and check the db table rows, an empty row has been created (the id increments, btw, and the filename field is empty), without my having uploaded anything yet. What's wrong with the code? <?php if(isset($_FILES['filename'])){ $errors= array(); $file_name = $_FILES['filename']['name']; $file_size =$_FILES['filename']['size']; $file_tmp =$_FILES['filename']['tmp_name']; $file_type=$_FILES['filename']['type']; $file_ext=strtolower(end(explode('.',$_FILES['filename']['name']))); $expensions= array("jpeg","jpg","png"); if(in_array($file_ext,$expensions)=== false){ $errors[]="extension not allowed, please choose a JPEG or PNG file."; } if($file_size > 2097152){ $errors[]='File size must not exceed 2 MB'; } if(empty($errors)==true){ move_uploaded_file($file_tmp,"uploads/".$file_name); echo "Success"; }else{ print_r($errors); } } ?> <?php $servername = "localhost"; $username = "root"; $password = ""; $dbname = "admin"; $filename = false; if(isset($_FILES['filename'])){ $filename = $_FILES['filename']['name']; } // Create connection mysql_connect($servername, $username, $password) or die ('MySQL Not found // Could Not Connect.'); mysql_select_db("admin") or die(mysql_error()) ; mysql_query("INSERT INTO upload_test (fileName) VALUES ('$filename')") ; ?> my form: <form name="form" method="POST" enctype="multipart/form-data" > <input name="filename" type="file" id="filename" /> <input name="submit" type="submit" id="submit"/> </form> A: That is because you are always executing the INSERT statement. You only want to insert a record once the file has been uploaded. if(isset($_FILES['filename'])){ $errors = array(); $file_name = $_FILES['filename']['name']; $file_size =$_FILES['filename']['size']; $file_tmp =$_FILES['filename']['tmp_name']; $file_type=$_FILES['filename']['type']; $file_ext=strtolower(end(explode('.',$_FILES['filename']['name']))); $expensions= array("jpeg","jpg","png"); if(in_array($file_ext,$expensions)=== false){ $errors[]="extension not allowed, please choose a JPEG or PNG file."; } if($file_size > 2097152){ $errors[]='File size must not exceed 2 MB'; } // if there are no errors... if (empty($errors)==true) { // upload the file... move_uploaded_file($file_tmp,"uploads/".$file_name); $servername = "localhost"; $username = "root"; $password = ""; $dbname = "admin"; // and create a new record in the database mysql_connect($servername, $username, $password) or die ('MySQL Not found // Could Not Connect.'); mysql_select_db("admin") or die(mysql_error()) ; mysql_query("INSERT INTO upload_test (fileName) VALUES ('$file_name')") ; echo "Success"; }else{ print_r($errors); } } On a side note, a shorter way to get the extension of a file is to use pathinfo(): $file_ext = pathinfo($_FILES['filename']['name'], PATHINFO_EXTENSION);
Q: How to display FieldSet for an sObject List in PageBlockSection? I have seen examples where people display a field set for a single sObject. But what if, instead of mySobject, I want to display the field set for a list of sObjects? A: I suppose you mean a list of SObjects of the same SObject type, say Contact. Then the code can be pretty easy. page <apex:page standardController="Contact" extensions="fieldSetExtension"> <apex:repeat value="{!contactList}" var="c"> <apex:repeat value="{!$ObjectType.Contact.FieldSets.Name_Set}" var="f"> <apex:outputText value="{!c[f]}" /> <br/> </apex:repeat> <br/> </apex:repeat> </apex:page> Controller public class fieldSetExtension { public List<Contact> contactList {get; set;} public fieldSetExtension(ApexPages.StandardController stdController) { contactList = [Select Id, Name, LastName, Email, MobilePhone, Level__c From Contact]; } } Of course, you should be using Dynamic SOQL here, which I am too lazy to implement. But the concept is the same.
Q: Can't CREATE EXTERNAL DATA SOURCE in SQL I'm trying to create an external data source to access Azure Blob Storage. However, I'm having issues with creating the actual data source. I've followed the instructions located here: Examples of bulk access to data in azure blob storage and Create external data source - transact sql. I'm using SQL Server 2016 on a VM, accessing via SSMS on a client machine using Windows Authentication with no issues. The instructions say creating this external data source works for SQL Server 2016 and Azure Blob Storage. I have created the Master Key: CREATE MASTER KEY ENCRYPTION BY PASSWORD = <password> and the database scoped credential: CREATE DATABASE SCOPED CREDENTIAL UploadCountries WITH IDENTITY = 'SHARED ACCESS SIGNATURE', SECRET = <key>; I have verified both of these exist in the database by querying sys.symmetric_keys and sys.database_scoped_credentials. However, when I try executing the following code it fails with "Incorrect syntax near 'EXTERNAL'": CREATE EXTERNAL DATA SOURCE BlobCountries WITH ( TYPE = BLOB_STORAGE, LOCATION = 'https://<somewhere>.table.core.windows.net/<somewhere>', CREDENTIAL = UploadCountries ); Your thoughts and help are appreciated! Steve. A: In "Examples of Bulk Access to Data in Azure Blob Storage", we can find: Bulk access to Azure blob storage from SQL Server requires at least SQL Server 2017 CTP 1.1. And in the Arguments section of "CREATE EXTERNAL DATA SOURCE (Transact-SQL)", we can find similar information: Use BLOB_STORAGE when performing bulk operations using BULK INSERT or OPENROWSET with SQL Server 2017 You are using SQL Server 2016, so you get the "Incorrect syntax near 'EXTERNAL'" error when you create an external data source for Azure Blob Storage.
Q: single word for 'Hospital' and 'Clinic' I am developing software that requires users to enter a hospital or clinic name. The software treats clinics and hospitals the same way. I wanted to know a single word that can be used for any medical institution. Example use cases: In forms: Hospital/clinic name: ___________ In URL: http://website.com/hospital/search While I can use something like "Medical Facility" as a blanket term for clinic, hospital, trauma center, nursing home, etc., I do not like it because: it's long. it has two words, so URLs wouldn't look great: http://website.com/medical_facility/search. Thanks. By the way, 'no' is a perfectly OK answer. A: In that case, 'no'. :) If you must use a single word (and I don't see why two words is bad here), I would just use facility. In the context of a medical application, it would be just as clear as medical facility. A: I would just use Hospital: Definition of HOSPITAL 1: a charitable institution for the needy, aged, infirm, or young 2: an institution where the sick or injured are given medical or surgical care —usually used in British English without an article after a preposition It is understandable, and I really doubt that your user is going to say "Ah, darn it, they only deal with hospitals and I am looking for a clinic". You could also make it clear in the text of your web page that you are using hospital as a blanket word. If hospital won't serve, perhaps facility or institution will. Your context seems to make it very clear that you are referring to medical institutions.
Q: Angular 2 Mobile Toolkit --mobile flag doesn't work I'm trying to create a new mobile app via the Angular 2 Mobile Toolkit (https://mobile.angular.io/). When I type ng new hello-mobile --mobile, I get this error: `The option '--mobile' is not registered with the new command. Run 'ng new --help' for a list of supported options. In ng new --help I don't see anything about mobile. Here is my ng -v result: @angular/cli: 1.0.2 node: 6.10.1 os: win32 x64 What am I missing? Does this only work with some other Angular version? A: The angular-cli --mobile flag was removed and a new solution is under design. The readme in the mobile-toolkit repo is out of date in its description of the angular-cli project generation, since the Angular CLI doesn't support the --mobile flag. See these GitHub issues: https://github.com/angular/mobile-toolkit/issues/138 https://github.com/angular/angular-cli/issues/2228
Q: What is the moment generating function of the generalized (multivariate) chi-square distribution? To be specific, suppose we have an $(n,1)$ random vector $x \sim N(\mu, \Sigma)$ where $\mu$ is $(n,1)$ and $\Sigma$ is $(n,n)$. Define: \begin{align*} Y & = x'Ax + b'x + c \end{align*} Then what is the following (for $t \in \mathbb{R}$)? \begin{align*} E(e^{tY}) \end{align*} A: I will build on my answer from here: https://math.stackexchange.com/questions/442472/sum-of-squares-of-dependent-gaussian-random-variables/442916#442916 and use notation from there. First I will look at the case without the linear and constant terms, then we will see how to take them into account. So let $Q(X)=X^T A X$ be a quadratic form in the multivariate normal vector $X$, with expectation $\mu$ and covariance matrix $\Sigma$. We found that $$ Q(X)=\sum_{j=1}^n \lambda_j (U_j+b_j)^2 $$ where $Z=Y-\Sigma^{-1/2}\mu$ (here $Y=\Sigma^{-1/2}X$ as in the linked answer, so that $Z$ is standard normal), we use the spectral theorem to write $\Sigma^{1/2}A \Sigma^{1/2} = P^T \Lambda P$, with $P$ orthogonal and $\Lambda$ diagonal with positive diagonal elements $\lambda_j$, and $U=PZ$, so that $U$ has independent standard normal components $U_j$. Then we can define $b=P \Sigma^{-1/2} \mu$. To summarize so far, $Q(X)$ is written above as a sum of independent scaled noncentral chi-square random variables. Using https://en.wikipedia.org/wiki/Noncentral_chi-squared_distribution we can see that $(U_j+b_j)^2$ is noncentral chi-square with one degree of freedom and noncentrality parameter $b_j^2$. Then its moment generating function (mgf) is given by $$ M_j(t) = \frac{\exp\left(\frac{t b_j^2}{1-2t} \right)}{(1-2t)^{1/2}} $$ Then the mgf of $\lambda_j (U_j+b_j)^2$ is $M_j(\lambda_j t)$, and the mgf $M(t)$ of the sum $Q(X)$ is the product of these: $$ M(t) = \frac{\exp\left(\sum_{j=1}^n \frac{b_j^2 \lambda_j t}{1-2t\lambda_j} \right)}{\exp(\frac12 \sum_1^n \log(1-2t\lambda_j))} $$ which is the mgf for the quadratic form in the case without linear and constant terms. To use this result for the general case, write, as in the question, $$ Y=X^T B X + f^T X + g $$ (where we have changed the names of the constants to avoid name clashes). To use the above result we must transform $X$ to eliminate the linear term. To achieve this, replace $X$ with $X-h$ where $$ h = -\frac12 B^{-1}f $$ Then we obtain $$ Y = (X-h)^T B (X-h) +g - h^T B h $$ And then we are ready to apply the mgf found in the first part, now with $B$ in place of $A$ and with $X-h$ having mean $\mu-h$: $$ \DeclareMathOperator{\E}{\mathbb{E}} \E e^{tY} = e^{t(g-h^T B h)} M(t) $$ where $M(t)$ is the mgf from the first part.
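The closed form above is easy to sanity-check numerically. Below is a minimal Monte Carlo sketch in Python (my own illustration, not part of the original answer) that evaluates the final formula E e^{tY} = e^{t(g - h^T B h)} M(t) for a small two-dimensional example and compares it with a sample average of e^{tY}. It assumes NumPy and SciPy are available and that B and Sigma are positive definite, so all lambda_j > 0.

import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
mu = np.array([0.5, -1.0])
Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])    # covariance of X
B = np.array([[2.0, 0.4], [0.4, 1.0]])        # symmetric positive definite
f = np.array([1.0, -0.5])
g = 0.7
t = 0.05                                      # small enough that 2*t*lambda_j < 1

# Complete the square: Y = (X-h)^T B (X-h) + g - h^T B h, with h = -B^{-1} f / 2
h = -0.5 * np.linalg.solve(B, f)
c = g - h @ B @ h

# MGF of the pure quadratic form (matrix B, mean mu - h, covariance Sigma)
S_half = np.real(sqrtm(Sigma))
lam, V = np.linalg.eigh(S_half @ B @ S_half)  # columns of V are eigenvectors
b = V.T @ np.linalg.solve(S_half, mu - h)     # b = P Sigma^{-1/2} (mu - h)
assert np.all(2 * t * lam < 1)                # stay inside the MGF's domain
M = np.prod(np.exp(t * lam * b**2 / (1 - 2 * t * lam)) / np.sqrt(1 - 2 * t * lam))
mgf_formula = np.exp(t * c) * M

# Monte Carlo estimate of E[exp(tY)]
X = rng.multivariate_normal(mu, Sigma, size=1_000_000)
Y = np.einsum('ij,jk,ik->i', X, B, X) + X @ f + g
print(mgf_formula, np.exp(t * Y).mean())      # the two values should agree closely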
Q: AngularJS intellisense not working on Visual Studio 2015 According to this post, intellisense should also be working in the new VS 2015, but so far I only get intellisense for the angular object and not for the dependencies or my custom modules. Here's what I did: Added the angular.intellisense.js to the global javascript references at C:\Program Files (x86)\Microsoft Visual Studio 14.0\JavaScript\References Restarted VS2015 And then nothing; it just showed exclamation marks whenever I tried to use intellisense on a $http object. I also added the file to the same place as my angular.js, but it still didn't work. The question that I have in this case is: where should I place the file? In the public angular folder with only my angular.js, or in my dev angular folder where all the files downloaded from bower are? I also tried adding it directly into the tools/options/text editor/javascript/intellisense/reference menu, in the Implicit (Web) reference group, but it still didn't work. In my project I have the following folder structure inside the src folder: wwwroot app (my angular site stuff) controllers services views lib (js dependencies, only the .min.js file of each library) angular angular-route .... _references.js (the visual studio js references file, contains references to the files inside the app and lib folders) Libraries (contains the full libraries as downloaded by bower) angular angular-route ... As a side note, I don't have a /scripts folder and therefore no /scripts/_references.js file. A: This was not working for me in Visual Studio 2015 RTM in a web project, but I solved the problem. This project was not created with Visual Studio and does not have a _references.js file anywhere, so I think this will work in any situation. I removed all other intellisense resources from within the VS UI to make sure what I did was what fixed it. Go to https://www.angularjs.org and pull up the download dialog box. Copy the Uncompressed CDN url. Today that happens to be https://ajax.googleapis.com/ajax/libs/angularjs/1.4.4/angular.js In Visual Studio 2015 RTM, go to Tools, Options, Text Editor, Javascript, Intellisense, References. Choose the appropriate Reference Group; for most web projects this is Implicit (Web). Paste the url into the bottom text box and click the Add button. Don't dismiss the dialog box yet. Under Text Editor, Javascript, Intellisense, General, make sure the check box is checked for Download remote references. Click the OK button. (optional) If you want intellisense for the angular providers that you create (not part of the angular framework), add _references.js to the root of your project. Don't bother making a Scripts folder. Right click on it and choose auto-sync, then choose update. Go into it and remove any js files created by a build process. If you don't, they can be so large they will break intellisense. Be prepared for a ~5-10 second delay the first time you use intellisense, as it has to load all these references from your project. You may need to disable intellisense in Resharper for javascript if it interferes with the native intellisense. Restart Visual Studio. It will not work until you do this. Also, I'm paranoid about closing all other instances first, so these settings "stick". So I suggest you do that before restarting this instance.
A: As @Balthasar pointed out (and in case you are using ReSharper), you will need to enable intellisense from Visual Studio for it to work: ReSharper -> Options -> Environment -> IntelliSense -> General, select 'Custom IntelliSense', and for JavaScript you can select Visual Studio. Alternatively you can use the 'Visual Studio' statement completion (second option) A: I've just realized that the automatic order the _references.js file uses (first my files, then the framework's files) prevented intellisense from working in files other than app.js. This is how my _references.js looks now: /// <autosync enabled="false" /> /// <reference path="angular.js" /> /// <reference path="angular-resource.js" /> /// <reference path="angular-ui-router.min.js" /> /// <reference path="jquery-2.1.4.js" /> /// <reference path="materialize/materialize.js" /> /// <reference path="../App/App.js" /> /// <reference path="../App/Controllers/productsController.js" /> /// <reference path="../App/Controllers/productsEditController.js" /> /// <reference path="../App/Controllers/valuesController.js" /> /// <reference path="../common/common.services.js" /> /// <reference path="../common/productsResource.js" /> /// <reference path="../common/valuesResource.js" />
Q: How do I set the MONGOHQ_URL environment variable in the run configuration of Netbeans? We're working on deploying a Java project to Heroku that uses MongoDB. According to the Heroku docs, the DB connection parameters are read from an environment variable, MONGOHQ_URL. When I run the project in Netbeans on my laptop, how do I set this variable? I tried adding it as a VM option with -DMONGOHQ_URL=... in Run -> Set Project Configuration -> Customize -> Run, as well as in Actions -> Run project and Run file via main(), but to no avail. When the program reads it with System.getenv, it's not set. A: Ok, I figured it out. This may be obvious to Java coders, but I'm not one, so here is what I cobbled together. String mongo_url = System.getenv("MONGOHQ_URL"); // If env var not set, try reading from Java "system properties" if (mongo_url == null) { mongo_url = System.getProperty("MONGOHQ_URL"); } MongoURI mongoURI = new MongoURI(mongo_url); this.db = mongoURI.connectDB(); // Only authenticate if username or password provided if (!"".equals(mongoURI.getUsername()) || mongoURI.getPassword().length > 0) { Boolean success = this.db.authenticate(mongoURI.getUsername(), mongoURI.getPassword()); if (!success) { System.out.println("MongoDB Authentication failed"); return; } } this.my_collection = db.getCollection("my_collection");
Q: setting Live Tile back image issue I'm trying to implement a live tile in my app. This is the easiest process I found online, but I get a Uri exception. Here is the code: private void PhoneApplicationPage_Loaded(object sender, RoutedEventArgs e) { InternetIsAvailable(); GetDataFeed(); BackTile(); } public void BackTile() { StandardTileData backData = new StandardTileData { BackBackgroundImage = new Uri(@"https://dl.dropbox.com/u/27136243/AchivementHunters/Images/LatestTile.png", UriKind.Absolute), }; ShellTile tile = ShellTile.ActiveTiles.First(); tile.Update(backData); } I want to update the back tile by just replacing the image. I see many tutorials that involve servers, but I do not know anything about servers. If I include the file in the solution and use this: BackBackgroundImage = new Uri(@"LatestTile.png", UriKind.Absolute), it works fine. How can I download the image from the URL and save it to the specific path needed for the back image? A: The documentation for StandardTileData states that "Secondary Tiles can be created only using local resources for images". You will need to use a WebClient to download the image and save it to IsolatedStorage, and then specify that isostore URI for the live tile to use. Hope this helps!
Q: rtl_fm stream with ffmpeg and low bandwidth I currently try to stream audio from rtl_fm via ffmpeg to node-media-server. This is working fine: rtl_fm -f 103.0M -M fm -s 44.1k -A std -l 1 -g 40 | ffmpeg -f s16le -ac 1 -i pipe:0 -f flv rtmp://192.168.178.42/live/lorem But when I want to listen to the signal from the frequency e.g. 83.0M, the bandwidth (-s) is set to 20k, and then the streamed audio is too fast. The audio sounds pitched up, and the terminal output of ffmpeg shows a speed of about 0.5x instead of 1x. How can I stream this frequency with a bandwidth of 20k without getting bad output? A: As per the rtl_fm guide, -s is the output sampling rate, so you need to adjust the ffmpeg input parameters to match: rtl_fm -f 83.0M -M fm -s 20k -A std -l 1 -g 40 | ffmpeg -f s16le -channels 1 -sample_rate 20k -i pipe:0 -f flv rtmp://192.168.178.42/live/lorem
Q: XSLT 2.0 - change namespace without discarding existing prefix bindings Here's my input XML document: <test xmlns="http://www.example.com/v1"> <qnameValue xmlns:foo="http://foo.example.com/">foo:bar</qnameValue> </test> I want to use XSLT (2.0) to change the namespace of this document to v2, i.e. the desired output is: <test xmlns="http://www.example.com/v2"> <qnameValue xmlns:foo="http://foo.example.com/">foo:bar</qnameValue> </test> I'm trying to use this stylesheet: <xsl:stylesheet version='2.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform' xmlns:previous='http://www.example.com/v1'> <xsl:output encoding='UTF-8' indent='yes' method='xml'/> <!-- Identity transform --> <xsl:template match='@*|node()'> <xsl:copy> <xsl:apply-templates select='@*|node()'/> </xsl:copy> </xsl:template> <!-- Previous namespace -> current. No other changes required. --> <xsl:template match='previous:*'> <xsl:element name='{local-name()}' namespace='http://www.example.com/v2'> <xsl:apply-templates select='@* | node()' /> </xsl:element> </xsl:template> </xsl:stylesheet> Unfortunately the output turns out as: <test xmlns="http://www.example.com/v2"> <qnameValue>foo:bar</qnameValue> </test> i.e. the crucial namespace binding on qnameValue has gone. Is there any way to force a copy of all namespace bindings to the output? A: This should do it, and is XSLT 1.0 compatible: <xsl:stylesheet version='2.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform' xmlns:previous='http://www.example.com/v1'> <xsl:output encoding='UTF-8' indent='yes' method='xml'/> <!-- Identity transform --> <xsl:template match='@*|node()'> <xsl:copy> <xsl:apply-templates select='@*|node()'/> </xsl:copy> </xsl:template> <!-- Previous namespace -> current. No other changes required. --> <xsl:template match='previous:*'> <xsl:element name='{local-name()}' namespace='http://www.example.com/v2'> <xsl:copy-of select='namespace::*[not(. = namespace-uri(current()))]' /> <xsl:apply-templates select='@* | node()' /> </xsl:element> </xsl:template> </xsl:stylesheet> When run on your sample input, the result is: <test xmlns="http://www.example.com/v2"> <qnameValue xmlns:foo="http://foo.example.com/">foo:bar</qnameValue> </test> This is a similar approach that might be a small bit more efficient by storing the old uri in a variable and accessing it from there: <xsl:stylesheet version='2.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform' xmlns:previous='http://www.example.com/v1'> <xsl:output encoding='UTF-8' indent='yes' method='xml'/> <xsl:variable name='oldUri' select='namespace-uri((//previous:*)[1])' /> <!-- Identity transform --> <xsl:template match='@*|node()'> <xsl:copy> <xsl:apply-templates select='@*|node()'/> </xsl:copy> </xsl:template> <!-- Previous namespace -> current. No other changes required. --> <xsl:template match='previous:*'> <xsl:element name='{local-name()}' namespace='http://www.example.com/v2'> <xsl:copy-of select='namespace::*[not(. = $oldUri)]' /> <xsl:apply-templates select='@* | node()' /> </xsl:element> </xsl:template> </xsl:stylesheet>
Q: Read app.json (or exp.json) programmatically Is there a way to read the contents of app.json programmatically from within your app, so you could, for example, get the current version number and show it within an About screen? A: You can access this through Constants.manifest. This includes your app.json config without any of the potentially sensitive information such as API secret keys. import Constants from 'expo-constants'; Constants.manifest.version A: For Expo SDK 33 use: import Constants from "expo-constants"; {`v${Constants.manifest.version}`}
Q: Querying 1:1 private rooms efficiently with Firebase I am using Firebase and trying to build 1:1 conversations. For rooms, I used a structure like this: - rooms - "user1_user2" // roomName - timestampedId1 - message: "hello" - sender: "user2" - timestampedId2 - message: "hey" - sender: "user1" - "user2_user3" - timestampedId3 - message: "Mew" - sender: "user3" For room names, I used (which I got from this answer): let roomName = (from<to ? from+"_"+to : to+"_"+from) However, now I am trying to retrieve the rooms, and I got confused. With Firebase, what is the proper structure for creating private rooms and retrieving them? Should I store 'from' and 'to' individually inside the 'roomName' node? But if so, how can I compare them and list the rooms by descending timestamp (new to old)? I think there should be a way of doing it with one request. But how can I do it with this 'roomName' approach? Or is there any other, better way to achieve it? let roomRef = Firebase(url: self.url + "/rooms") // some query? .observeEventOfType(.Value, withBlock: { details in }) What is the proper way of handling this kind of case? Should I change the structure completely, or is there a way to query it properly? A: I would structure it as follows and have three nodes: roommembers, chat and lastmessages: roommembers user1iduser2id user1id: true user2id: true chat user1iduser2id -KIyPwdDfAA6GxMlwLJB message userid etc (-KIyPwdDfAA6GxMlwLJB is a childByAutoId() which is also your timestamp) lastmessages user1id user1iduser2id lastmessage lastuser etc. user2id user1iduser2id lastmessage lastuser etc.
Q: Reaction of methacrylic acid with BH3 If methacrylic acid (2-methylprop-2-enoic acid) is reacted first with $\ce{BH_3/THF}$ and second with $\ce{H_2O_2/HO^{-}}$, the $\ce{OH}$ group is attached at the less substituted carbon. But if it is treated with $\ce{CH_3COOH}$ in the second step, what is the end product? A: It's simply a protonolysis (the reference picture shows the same mechanism, except that D is used instead of H). Hydroboration-based reactions always proceed via this boron-ate complex (negatively charged), then migration and loss of the leaving group. Search Google for carbonylation, amination, and cyanidation using boron.
Q: Convert GLSL to C# Hello guys, can someone help me with converting GLSL to C#? I'm new to GLSL and I really need a lot of help! I'd gladly appreciate all your help! :) #version 120 uniform sampler2D tex; void main() { vec4 pixcol = texture2D(tex, gl_TexCoord[0].xy); vec4 colors[3]; colors[0] = vec4(0.,0.,1.,1.); colors[1] = vec4(1.,1.,0.,1.); colors[2] = vec4(1.,0.,0.,1.); float lum = (pixcol.r+pixcol.g+pixcol.b)/3.; int ix = (lum < 0.5)? 0:1; vec4 thermal = mix(colors[ix],colors[ix+1],(lum-float(ix)*0.5)/0.5); gl_FragColor = thermal; } A: You don't need to convert GLSL to C# to use it, as it is consumed by the OpenGL API directly. There are several OpenGL wrappers for C#; I'm not sure whether all of them support shaders, but OpenTK supports them for sure. An example is here: using (StreamReader sr = new StreamReader("vertex_shader.glsl")) { GL.ShaderSource(m_shader_handle, sr.ReadToEnd()); } You can load the shader either from a file or from a string directly: string shader = "void main() { // your shader code }"; GL.ShaderSource(m_shader_handle, shader);
Q: Injective objects in Mor(Ab) Consider the abelian (Grothendieck) category $\mathcal{C} := \mathrm{Fun}(\{0<1\},\mathrm{Ab}) = \mathrm{Mor}(\mathrm{Ab})$. Objects are morphisms $(A \to B)$ of abelian groups, morphisms are commutative diagrams. Equivalently, this is the category of abelian sheaves on the Sierpinski space. Question. How do injective objects in $\mathcal{C}$ look like? Since injective sheaves are stable under restriction (use extension by zero), clearly $(A \to B)$ injective implies that $A$ is injective. But is this sufficient (probably not)? When $A,B$ are injective, is the same true for $(A \to B)$? A: I will use notation $A_0 \to A_1$ for objects of $\mathrm{Mor}(\mathrm{Ab})$. EDIT: previously I claimed something stronger (that I can produce lifting properties in the functor category without factorizations), but I am not so sure about it. The following is a lot more general than necessary, but I think this added generality is also useful. Let $(\mathcal{L}, \mathcal{R})$ be a weak factorization system in a category $\mathcal{C}$ with enough colimits and limits for the following to make sense. Let $J$ be a Reedy category. Then in the functor category $\mathcal{C}^J$ the "Reedy $\mathcal{L}$-cofibrations" and "Reedy $\mathcal{R}$-fibrations" form a weak factorization system. By "Reedy $\mathcal{L}$-cofibrations" I mean morphisms of diagrams $X \to Y$ such that for every $j \in J$ the latching morphism $X_j \sqcup_{L_j X} L_j Y \to Y_j$ is in $\mathcal{L}$ and dually "Reedy $\mathcal{R}$-fibrations" are morphisms $X \to Y$ such that for every $j \in J$ the matching morphism $X_j \to M_j X \times_{M_j Y} Y_j$ is in $\mathcal{R}$. The proof is exactly as in the construction of the Reedy model structures and can be found for example in Hovey's Model Categories. Now we take $\mathcal{C} = \mathrm{Ab}$, $\mathcal{L} = $ monomorphisms and $J = [1]$. Then $\mathcal{R}$ are split epimorphisms with injective kernel. The lifting properties are easily verified while the factorizations use the fact that there are enough injectives in $\mathrm{Ab}$. If $f : A \to B$ is a map in $\mathrm{Ab}$, pick an injective hull $i : A \to \hat A$, then $f$ factors as an injection $[i, f] : A \to \hat A \oplus B$ followed by a split surjection with injective kernel $\hat A \oplus B \to B$. We consider $J$ as a Reedy category where $0$ has degree $1$ and $1$ has degree $0$. Then "Reedy $\mathcal{L}$-cofibrations" are monomorphisms again, so an object $X$ is injective if and only if the map $X \to 0$ is a "Reedy $\mathcal{R}$-fibration" i.e. when both $X_1 \to 0$ and $X_0 \to X_1$ are split epimorphisms with injective kernel i.e. when $X_0 \to X_1$ is a split epimorphism with injective source.
Q: Insert a random number into a table There is a query that updates a table and inserts a random number: UPDATE `user1` SET `u1_01_00`= FLOOR(RAND() * '0.287') + '0.001', `u1_02_00`= FLOOR(RAND() * '0.009') + '0.001', `u1_03_00`= FLOOR(RAND() * '0.356') + '0.001', `u1_04_00`= FLOOR(RAND() * '0.356') + '0.001', `u1_05_00`= FLOOR(RAND() * '0.356') + '0.001', `u1_06_00`= FLOOR(RAND() * '0.356') + '0.001', `u1_07_00`= FLOOR(RAND() * '1.356') + '0.001', `u1_08_00`= FLOOR(RAND() * '2.356') + '0.001', `u1_09_00`= FLOOR(RAND() * '4.356') + '0.001', `u1_10_00`= FLOOR(RAND() * '9.356') + '0.001', `u1_11_00`= FLOOR(RAND() * '0.356') + '0.001', `u1_12_00`= FLOOR(RAND() * '0.356') + '0.001', `u1_13_00`= FLOOR(RAND() * '0.356') + '0.001', `u1_14_00`= FLOOR(RAND() * '0.356') + '0.001', `u1_15_00`= FLOOR(RAND() * '12.356') + '0.001', `u1_16_00`= FLOOR(RAND() * '3.356') + '0.001', `u1_17_00`= FLOOR(RAND() * '5.356') + '0.001', `u1_18_00`= FLOOR(RAND() * '8.356') + '0.001', `u1_19_00`= FLOOR(RAND() * '0.356') + '0.001', `u1_20_00`= FLOOR(RAND() * '18.356') + '0.001', `u1_21_00`= FLOOR(RAND() * '1.356') + '0.001', `u1_22_00`= FLOOR(RAND() * '2.356') + '0.001', `u1_23_00`= FLOOR(RAND() * '3.356') + '0.001', `u1_23_59`= FLOOR(RAND() * '0.356') + '0.001' where `user1_date`=current_date and `user_id` = '1'; But the result is not quite random. It almost always takes the minimum value, and the larger values repeat the same pattern. Where is the mistake? If you look at what has already been inserted into the table, you can see that practically the MINIMUM unit was inserted: while the maximum is 0.287 and the minimum is 0.001, in all 10 cases 0.001 was inserted, and never anything else (for example, 0.057, 0.009 or 0.156). When the larger maximum values are used, the digits after the decimal point are still .001. That is the problem. It is not really a reliable random number between the two bounds. A: What was the reasoning behind choosing 0.356 and 0.001? And, in general, any similar pair, like 18.356 / 0.001? If you read the manual, you can see that: 1) your randomization formula is written incorrectly: FLOOR(RAND() * k) + l, whereas it should be FLOOR(l + RAND() * (k - l)). Then we get a random R that lies between l and k; 2) k and l are integers, that is, whole numbers. If you need a decimal, get a reliable random integer first, and then divide it by 1000.
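To make the failure mode concrete: RAND() returns a value in [0, 1), so multiplying it by a constant smaller than 1 and applying FLOOR always gives 0, which is why every row ends up at the minimum 0.001. Here is a small Python stand-in for the SQL expressions (my own illustration of the answer's point; Python's random module plays the role of RAND()):

import random

# The original expression: FLOOR(RAND() * 0.287) + 0.001.
# int() truncates like FLOOR for non-negative values, and
# random.random() * 0.287 is always below 1, so the result is always 0.001.
broken = [int(random.random() * 0.287) + 0.001 for _ in range(5)]
print(broken)   # [0.001, 0.001, 0.001, 0.001, 0.001]

# The fix from the answer: draw a random *integer* between the bounds
# scaled by 1000, then divide by 1000 to get the decimal value.
fixed = [random.randint(1, 287) / 1000 for _ in range(5)]
print(fixed)    # e.g. [0.113, 0.28, 0.005, 0.199, 0.042]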
Q: remove parent element but keep the child element using jquery in HTML From the HTML below I want to remove the div and h2 tags but keep the ul tag, using jQuery. Please advise how I can achieve this: <div class="row"> <h2><asp:Literal ID="categoryTitle" runat="server"></asp:Literal></h2> <ul class="tree js-catTree" id="treeContainer" runat="server"></ul> </div> Thanks A: You can use replaceWith(): $('.row').replaceWith(function() { return $('ul', this); }); Working Demo A: I stumbled across this page when simply trying to remove a parent element (whilst still keeping the children); just in case it's useful to others, I found unwrap to be exactly what I needed. For me it was a simple find-an-element-and-unwrap: $('.target-child').unwrap(); However, the addition of the h2 removal in this case makes things a little more involved: $('.row h2').siblings('ul').unwrap().end().remove(); http://jsfiddle.net/t8uQ8/ The above should be more optimal as it doesn't rely on the creation and call of an anonymous function.
Q: Flask not displaying http address when I run it I'm trying to run the Hello World example using the Flask framework: from flask import Flask app = Flask(__name__) @app.route('/') def hello() -> str: return 'Hello world from Flask!' app.run() Then I go to my cmd and run my script according to the documentation: set FLASK_APP = flaskhello.py python -m flask run What I get is a grey window with a "click me" header, and when I click it I get the X and Y, but I don't get the HTTP address in my cmd to open it in the browser. What should I do to correct this? I've already installed Flask correctly, as far as I can tell, but I'm not sure. Edit: I also tried creating a new venv and the same thing happens. A: You are mixing old and new documentation. You can lose the last line in flaskhello.py (app.run()). Then, don't pass the flask run command to python, but run it directly in the CMD. So not python -m flask run, but flask run.
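Putting the answer together, a minimal corrected version of the script might look like the sketch below (my own recap of the answer's steps, not an official example). One extra detail worth noting: Windows cmd treats spaces around = literally, so set FLASK_APP = flaskhello.py would define a variable literally named "FLASK_APP " with a trailing space; write it without spaces.

# flaskhello.py -- note there is no app.run() at the bottom;
# the `flask run` command starts the development server itself
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello() -> str:
    return 'Hello world from Flask!'

# In CMD (no spaces around '='):
#   set FLASK_APP=flaskhello.py
#   flask run
# Flask then prints the address, e.g. "Running on http://127.0.0.1:5000/".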
Q: Repeatedly calling a hotkey I've tried to implement a T flip-flop (I think this is what it's called) in my program but am having some issues with it. The idea is to have the program start and stop using the same hotkey. This is what I have so far. looping := false pass = 0 max = 2 ^r:: pass++ looping := true while(looping = true AND pass < max) { Send, stack overflow, save me! } looping := false pass = 0 return When I run the program and hit the hotkey, the while loop starts. However, when I attempt to break the loop by pressing ^r, I get no response and the program keeps looping. A: I think you are referring to a "toggle" script. I am not sure what you are trying to achieve exactly, but the key is toggling with a logical not: looping := !looping. More about it here. looping := false pass = 0 max = 2 ^r:: pass++ looping := !looping while (looping && pass < max) { Send, stack overflow, save me! } pass = 0 return There are a lot of resources for this; here are a few: https://autohotkey.com/boards/viewtopic.php?t=11952 http://maul-esel.github.io/ahkbook/en/toggle-autofire.html https://www.reddit.com/r/AutoHotkey/comments/6wqgbu/how_do_i_toggle_hold_down_a_key/dmad0xx
Q: Laplace transform of a random variable My professor says that the Laplace transform of a nonnegative RV uniquely determines the RV up to distributional equality among all nonnegative RVs. He says one can argue this by appealing to a fact I already know, which is that the distribution of an RV is determined by its characteristic function. I don't see how to attack the problem this way, nor any other way. If the RVs were bounded, I think I'd know what to do: the characteristic function would then have an analytic continuation to the entire complex plane, and a perpendicular slice of the function gives the Fourier transform, so both must have the same characteristic function. A: Let $X$ and $Y$ be non-negative random variables which have the same Laplace transform. Define $$f(z):=\int_{\Bbb R}e^{-tz}d\mu_X(t)-\int_{\Bbb R}e^{-tz}d\mu_Y(t),$$ where $z\in U:=\{z=x+iy,\ x>0\}$. Then $f$ is analytic on the connected open set $U$, and is $0$ on the non-discrete subset $\{x,\ x>0\}\subset U$. It thus follows that $f(z)=0$ for all $z\in U$, and by dominated convergence, $f(is)=0$ for all $s\in\Bbb R$.
Q: Why can't you exchange points for dollars on Stack Exchange? As I see it, Stack Exchange is a marketplace for the exchange of ideas and for collaborative consumption. When a user submits a question and an expert answers it correctly, the expert gets rewarded with points and keeps building points. What is happening here is that "experts" have some extra time, and they are using that extra time to answer questions. This is very similar to how Uber/Airbnb works. Somebody has an extra "car and time"/"extra room in their house", and they use that "car and time"/"extra room in their house" to give other people rides in their car or to host guests. Of course, some people could make it their entire job to drive people around/become full-time hosts, but that doesn't negate the benefit that everybody gets from the Uber/Airbnb marketplace. On Stack Exchange this happens when users who need an answer and experts who have answers to that question come together. The experts have free time on their hands and valuable expertise, with which they help users. When experts answer said questions correctly, Stack Exchange gives them points, which makes them want to come back and answer more questions. However, the busiest and most knowledgeable people frequently don't actually have time to answer these kinds of questions regularly, especially as the answers move further and further away from the "how to code that 'hello world' C++ program" type of question. To get more experts to answer such questions, a user can put up bounties. However, if the user is new, they won't have enough points to entice very busy people to share their valuable time to answer these questions. As a result, a lot of questions probably have poor answers or no answers. I think it would be a great improvement if Stack Exchange went beyond the "free-for-all" type of platform to a somewhat more commercialized form of website, and it would benefit all: the people who answer the questions, the people who ask the questions, and Stack Exchange. Perhaps this will turn off some people who think that everything should be free and open-source software rules all, but that's not how the real world works. One solution that I propose is to allow users to buy points that can then be used to buy bounties. Part of these bounties would go to Stack Exchange to pay for its operating costs. The rest of the bounty would go to the expert with the best answer. I am sure there are some loopholes, and I can see some of them, but I am sure we can come up with something to close them. Advantages of this system: Pays for Stack Exchange/Stack Overflow/Super User, which we all love, and lets us support them. Reduces the barrier to entry for new users who have hard or complicated questions, with very few "expert" users to answer said questions. Entices expert users who have limited time to answer questions and who are not necessarily motivated by gaining points, which are otherwise useless to them. Will enable Stack Exchange to make high-quality answers available for free. Disadvantages of this system: Proponents of the "free-for-all" system will cry foul :) Opponents will point out that EE has failed, which is misguided for two reasons: 1. EE hasn't failed, and 2. even if it didn't do as well as Stack Exchange, that's because of EE's founding business structure and how Google works, among a host of other reasons. Some people will point out how people are gaming the SO system, and that it will get worse with bounties. Whenever there is money there will be corruption. That doesn't mean we have to stop progress. P.S. I didn't say you should add the points to a user's reputation, as the thread at the top of this post seems to suggest. I specifically clarified below that if you can buy your way to higher reputation with dollars, that will destroy the reputation of this place. I suggested that you create a category which allows users to buy points with $, which can then be used to create bounties to answer hard questions with few "expert" users. This would partially pay for the cost of running Stack Exchange, and bring in people to answer detailed questions, if they want to make money out of answering questions. I am not suggesting you close down the free-for-all system. Personally I think the debate over whether to allow members to buy points (with $) is similar to the closed-source/open-source software debate. Yes, everybody likes free information and the free use of somebody else's time, but that's not how the real world works. I'm not a software guy, but I have been a "power" user of lots of different software. Based on this I will hypothesize that commercial software dwarfs open-source software in volume. I think a free/open platform is great for scaling fast, as a lot of software platforms have done. It's good at the start of a platform, but you have to monetize it at some point or other. You could go on with an endless donation-based system like Wikipedia or Reddit, which periodically ask for donations. But I think moving to a different system like the one I proposed above will benefit all parties and will lead to further growth of Stack Exchange. You can never please everybody, and I don't see the above points raised in any of the other posts. A: Wouldn't it be great if you could actually exchange the points for real dollars, so you could then use the real dollars to buy something else you might be interested in? No, it would make users furious if you close their question, or someone else's question that they know an answer to. Giving reputation points as an incentive to help is already enough to attract people who couldn't care less about site rules and really educating people, including helping them to try things for themselves. Also, where does the money come from? Users who want their question answered? If you want to have someone fix your code, why not hire someone? Also read: The problem with extrinsic motivation, as suggested by Oded.
Q: Not Able to Add contact in google Contacts using AuthSub in Asp.net My problem is that I am not able to add contacts in Google. I am using ASP.NET 2008. When I do the same thing with Google Calendar, it saves without any problem. I am not sure whether it is a token problem or something else, so I decided to ask here. Below is my code for adding contacts: protected void Create_Click() { GAuthSubRequestFactory authFactory_con = new GAuthSubRequestFactory("cp", "ContactApp"); authFactory_con.Token = (String)Session["token"]; ContactsService ser = new ContactsService(authFactory_con.ApplicationName); ser.RequestFactory = authFactory_con; string str = ""; ContactDetail contact = new ContactDetail { Name = NameTextBox.Text + " " + LastTextBox.Text, EmailAddress1 = primaryEmailTextBox.Text, EmailAddress2 = secondryEmailTextBox.Text, Phone = phoneTextBox.Text, Mobile = MobileTextBox.Text, Street = StreetTextBox.Text, City = CityTextBox.Text, Region = RegionTextBox.Text, PostCode = PostCodeTextBox.Text, Country = CountryTextBox.Text, Details = detailsTextBox.Text }; GoogleContactService.AddContact(contact,ser); str = "<script>alert('Contact Added Sucessfully')</script>"; Response.Write(str); } The above function calls the AddContact function of GoogleContactService. Below is the code for the AddContact function: public void AddContact(ContactDetail contact, ContactsService GContactService) { ContactEntry newEntry = new ContactEntry(); newEntry.Title.Text = contact.Name; //newEntry.Name.FullName = contact.Name; newEntry.Name = new Name(); newEntry.Name.FullName = contact.Name; EMail primaryEmail = new EMail(contact.EmailAddress1); primaryEmail.Primary = true; primaryEmail.Rel = ContactsRelationships.IsWork; newEntry.Emails.Add(primaryEmail); EMail secondaryEmail = new EMail(contact.EmailAddress2); secondaryEmail.Rel = ContactsRelationships.IsHome; newEntry.Emails.Add(secondaryEmail); PhoneNumber phoneNumber = new PhoneNumber(contact.Phone); phoneNumber.Rel = ContactsRelationships.IsHome ; newEntry.Phonenumbers.Add(phoneNumber); PhoneNumber phoneNumber_ = new PhoneNumber(contact.Mobile ); phoneNumber_.Primary = true; phoneNumber_.Rel = ContactsRelationships.IsMobile ; newEntry.Phonenumbers.Add(phoneNumber_); newEntry.PostalAddresses.Add(new StructuredPostalAddress() { Rel = ContactsRelationships.IsWork, Primary = true, Street = contact.Street , City = contact.City , Region = contact.Region , Postcode = contact.PostCode , Country = contact.Country , FormattedAddress = contact.Street + " , " + contact.City + " , " + contact.Region + " , " + contact.PostCode + " , " + contact.Country, }); newEntry.Content.Content = contact.Details; Uri feedUri = new Uri(ContactsQuery.CreateContactsUri("default")); // Uri feedUri = new Uri("http://www.google.com/m8/feeds/contacts/default/full"); System.Net.ServicePointManager.Expect100Continue = false; ContactEntry createdEntry = (ContactEntry)GContactService.Insert(feedUri, newEntry); } I am getting the token at page load. Below is the error I get when trying to add a contact: GDataRequestException was unhandled by user code Execution of request failed: https//www.google.com/m8/feeds/contacts/default/full A: Well, I found the solution to my problem; it's a very silly mistake. What I was doing was using a multi-scope token (for calendar and contacts). For getting the multi-scope token I was using the code below: string nextUrl = Request.Url.ToString(); const string scope = "http://www.google.com/calendar/feeds/%20http://www.google.com/m8/feeds/"; const bool secure = false; const bool session = true; string authSubUrl = AuthSubUtil.getRequestUrl(nextUrl, scope, secure, session); Response.Redirect(authSubUrl); From the above "const string scope" variable you have to remove %20 and use a space instead. In the Google API documentation it is given as %20, but that doesn't work. You have to remove %20 and add a space between the scopes, like the string below: const string scope = "http://www.google.com/calendar/feeds/ http://www.google.com/m8/feeds/"; By using %20 in the string you only get access to the first service, not the second. That's why the stated error was occurring.
Q: Which Star Trek species are not uniform? Which recurring Star Trek species (having appeared in 5 or more separate episodes) are physically not uniform? E.g. Humans Have different skin colour Have different eye shapes They are therefore not uniform. Vulcans Have different skin colour They are not uniform. Bajorans Have different skin colour They are therefore not uniform. Compared to Klingons Are dark skinned Have forehead ridges (although every Klingon has a different ridge. Can the family they belong to be determined by that?) They are therefore, from what I know, uniform. Ferengi All have the same skin colour All have the same nose shape All have the same ear shape They are uniform. Half-species (like Commander Sela) do not count as exceptions for the Romulan brow ridge. A: Andorians. In 'Enterprise' (Bakula et al.), we find out that there is a minority Andorian offshoot called the Aenar: blind albinos with some telepathic capability who live under the snow and ice on Andor. A: By far the most diverse species we see is the Xindi from Enterprise, who all share biological origins but come in: reptilian insectoid primate amphibian arboreal avian (extinct) The relevant Memory Alpha page says: The different Xindi species were extremely similar in their functionally important DNA, sharing over 99.5% despite the apparent physical differences. (ENT: "The Xindi") All the Xindi species shared distinctive ridges on their cheekbones and foreheads. At least with the primates, you also see variations in skin tone. A: You've omitted the most obvious example, the Ariannians, who had two entirely distinct races, with appearances so opposite that nobody could possibly mistake one for the other. As with races on 20th and 21st century Earth, Ariannians' social and economic status depended strongly on their coloration. (And of course the episode in which they appear was intended very clearly to demonstrate the absurdity of Earth's racial divisions)
Q: How to design the max function for integers using only additions and multiplications? I want to design a function which outputs the maximum value between two integers, something like this: $f(x,y) = \begin{cases} 1, & \text{if } x > y, \\ 0, & \text{otherwise}. \end{cases}$ , using only additions, subtractions and multiplications. I would like to have a single equation $f(x,y) = x *y *x ..$ where $*$ may be any operation from the ones mentioned above; if necessary, division can be used too, but preferably not. I can restrict the operations to a finite algebraic field $Z_t$. For equality comparison, I can define the function $EQU(x,y) = \begin{cases} 1, & \text{if } x == y, \\ 0, & \text{otherwise}. \end{cases}$ If $t$ is prime, the equality comparison can be computed like this: $EQU(x,y)=1-(x-y)^\phi$, where the Euler totient $\phi(t)=t-1$ because $t$ is prime. Now I am asking whether something similar could be done for greater-than comparison. I need these comparison functions for a homomorphic encryption application where functions are computed as arithmetic circuits. A: A function defined using only addition, subtraction and multiplication is a polynomial function. The function $\max(x, y)$ is not a polynomial function. To see this, note that if $\max(x, y)$ were a polynomial, then the function $g$ defined by $g(x)= \max(x, 0)$ would be a non-zero polynomial function with infinitely many roots.
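While exact greater-than or max is ruled out over the integers by the argument above, the equality test from the question does work over Z_t for prime t, by Fermat's little theorem. Here is a tiny Python check of that formula (my own illustration, not part of the original answer):

t = 7  # any prime modulus

def equ(x, y):
    # (x - y)^(t-1) mod t is 1 when x != y (Fermat) and 0 when x == y,
    # so 1 - (x - y)^(t-1) is exactly the EQU indicator from the question
    return (1 - pow(x - y, t - 1, t)) % t

for x in range(t):
    for y in range(t):
        assert equ(x, y) == (1 if x == y else 0)
print("EQU agrees with equality on all of Z_%d" % t)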
Q: How to create an ensureCapacity method that deals with Array Generics in Java So, I am creating a generic data structure named "Sack". With it I add items to a sack, grab a random item, see if it's empty, dump out its contents, etc. I'm also creating it to expand to hold as many items as needed. I am currently working on an ensureCapacity method; it should ensure that the sack has the capacity for its parameter value, and if not, create a new underlying data structure for the sack that is one more than twice the current capacity of the sack. I've tried numerous methods of doing this, but I keep receiving an error. I'll post most of my code, along with the method I've tried, pointing out the errors I receive. public class Sack<E> { public static final int DEFAULT_CAPACITY = 10; private E [] elementData; private int size; @SuppressWarnings("unchecked") public Sack() { elementData = (E[]) new Object[DEFAULT_CAPACITY]; } @SuppressWarnings("unchecked") public Sack(int capacity) { if(capacity < 0) { throw new IllegalArgumentException("capacity " + capacity); } this.elementData = (E[]) new Object[capacity]; } public boolean isEmpty() { if(size == 0) { return true; } else { return false; } } public E [] dump() { E [] E2 = Arrays.copyOf(elementData, size); for(int i = 0; i < size; i++) { elementData[i] = null; } size = 0; return E2; } First one: with this version, the error appears mainly when I run my tests, saying AssertionFailedError: ensureCapacity is not working correctly private void ensureCapacity(int capacity) { if (size != capacity) { int newCapacity = (capacity * 2) + 1; elementData[capacity] = elementData[newCapacity]; } } A little update: I will post my tests. You can check them out and let me know; however, I cannot modify my tests at all, only my code. I commented the first line, since that's where my error currently occurs. @Test public void testEnsureCapacity() { assertEquals(2, ensureCapacity.getModifiers(), "ensureCapacity does not have the correct modifiers"); // My error occurs here currently. try { for(int i=0; i<=10; ++i) { ensureCapacity.invoke(s, i); assertEquals(10, ((Object[])elementData.get(s)).length, "ensureCapacity is not working correctly (capacity changing unnecessarily)"); } ensureCapacity.invoke(s, 11); assertEquals(21, ((Object[])elementData.get(s)).length, "ensureCapacity is not working correctly (capacity not increased correctly)"); Random rand = new Random(); int capacity = rand.nextInt(100)+1; s = new Sack<Integer>(capacity); for(int i=0; i<=capacity; ++i) { ensureCapacity.invoke(s, i); assertEquals(capacity, ((Object[])elementData.get(s)).length, "ensureCapacity is not working correctly (capacity changing unnecessarily)"); } ensureCapacity.invoke(s, capacity+1); assertEquals(capacity*2+1, ((Object[])elementData.get(s)).length, "ensureCapacity is not working correctly (capacity not increased correctly)"); } catch (Exception e) { fail("ensureCapacity is not working correctly"); } } A: I figured it out; here's the solution to my question. private void ensureCapacity(int capacity) { if (elementData.length < capacity) { int newCapacity = elementData.length * 2 + 1; elementData = Arrays.copyOf(elementData, newCapacity); } }
Q: What connects a bird, a children's toy, and a mode of transport? I got asked this today and I don't know the answer. Will someone please help me figure it out? Here is the riddle: "What connects a bird, a children's toy, and a mode of transport?" A: My answer would have to be Superman... "It's a bird, it's a plane, it's Superman!" A: Going for the Monkey Island solution: a rubber chicken with a pulley inside. It's a chicken (bird), it's a kids' toy (toy), and you can use the pulley on a zip wire (transport).
Q: Compress UIImage I need help resizing a UIImage. For example: I'm displaying a lot of images in a UICollectionView, but the size of those images is 2 to 4 MB. I need to compress or resize those images. I found this: How to compress/resize image on iPhone OS SDK before uploading to a server? but I don't understand how to implement it. A: Not quite sure if you want to resize or compress or both. Below is the code for just compression. Use JPEG compression in two simple steps: 1) Convert UIImage to NSData: UIImage *rainyImage = [UIImage imageNamed:@"rainy.jpg"]; NSData *imgData = UIImageJPEGRepresentation(rainyImage, 0.1 /*compressionQuality*/); This is lossy compression and the image size is reduced. 2) Convert back to UIImage: UIImage *image = [UIImage imageWithData:imgData]; For scaling you can use the answer provided by Matteo Gobbi. But scaling might not be the best alternative. You would rather prefer to have a thumbnail of the actual image via compression, because scaling might make your image look bad on a retina display device. A: I wrote this function to scale an image: - (UIImage *)scaleImage:(UIImage *)image toSize:(CGSize)newSize { CGSize actSize = image.size; float scale = actSize.width/actSize.height; if (scale < 1) { newSize.height = newSize.width/scale; } else { newSize.width = newSize.height*scale; } UIGraphicsBeginImageContext(newSize); [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)]; UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext(); return newImage; } The use is easy, for example: [self scaleImage:yourUIImage toSize:CGSizeMake(300,300)]; A: lowResImage = [UIImage imageWithData:UIImageJPEGRepresentation(highResImage, quality)];
Q: Cytoscape.js - draw edges below compound node I have a set of nodes grouped inside a parent (compound) node. I would like to display the edges from the "outer" nodes (those outside the compound node) to the "inner" nodes (those inside the compound node) below the compound node. (Approximately like this demo.) Thus far, I've tried setting the z-index property like this, with z-index-compare set to manual, but it doesn't work: style: [ { selector: 'node', style: { 'z-index-compare': 'manual', 'width': 10, 'height': 10, 'background-color': '#46A', 'z-index': 3 } }, { selector: ':parent', style: { 'z-index-compare': 'manual', 'background-color': '#CDF', 'z-index': 9 } }, { selector: 'edge', style: { 'z-index-compare': 'manual', 'width': 1, 'line-color': '#BCE', 'z-index': 1 } }, { selector: '.dense', style: { 'z-index-compare': 'manual', 'width': 0.5, 'z-index': 1 } } ] The documentation for Cytoscape.js says nothing about where to specify the z-index-compare property, so maybe there's an error in my CSS. A: One solution I found was to remove the z-index tags and use z-compound-depth on the :parent selector, like this: style: [ { selector: 'node', style: { 'width': 10, 'height': 10, 'background-color': '#46A' } }, { selector: ':parent', style: { 'z-compound-depth': 'top', 'background-color': '#CDF' } }, { selector: 'edge', style: { 'width': 1, 'line-color': '#BCE' } }, { selector: '.dense', style: { 'width': 0.5 } } ]
Q: Ansible serialize roles I have some tasks which need to be executed one server at a time. I have separated them into a separate role. My question is whether it's possible to somehow tell a role to run on one server at a time. A: The playbook needs to be split into separate playbooks; the first playbook is then included in the second one, which can be set to serial. That way, all the roles in the first playbook will run in parallel, while the rest of the roles, from the second playbook, will run serially. Also, if any facts need to be carried over from the first playbook to the second one, fact caching needs to be enabled.
{ "pile_set_name": "StackExchange" }
Q: Does normal paper currency contain enough narcotics residue to attract a drug-sniffing dog? It seems pretty well established that paper currency (e.g. in the US) is commonly contaminated with trace amounts of cocaine or other illegal narcotics. (See Snopes for instance, though I'm happy to have this assumption challenged.) Is this contamination sufficient to attract alerts by drug detection dogs? Wikipedia says yes: "The drug content is too low for prosecution but not too low to trigger response to drug-sniffing dogs". But their citation is to a newspaper article that makes no mention of money or currency. This 1998 Slate article says: "In 1994, a U.S. Circuit Court held that ordinary money contains enough cocaine to attract a drug-sniffing dog." They don't give a reference to the case, but I think it might be US v. Florez; however, the court's opinion only seems to say that the defense claimed this was true (note 12), and doesn't seem to come to a conclusion as to whether it actually is. There is also US v $30,060 in US Currency, in which the court found that a dog's alert on currency is not evidence that the possessor is somehow involved with narcotics, partly due to the fact that money is commonly contaminated. But this seems like a somewhat different question and a different standard of proof, and anyway I'm not inclined to consider judges as scientific experts. On the other hand, the American Society of Canine Trainers says that "currency in circulation does not contain enough narcotic scent for a narcotic detector dog to alert to", and they also cite case law in their favor. What data actually supports any of these claims? Does currency commonly contain enough narcotics residue to trigger an alert by a typical drug detection dog, at a rate significantly higher than the usual rate of false positives for such dogs? [I became interested in this claim after it appeared in this travel.SE answer. Thanks RoboKaren, even though you're not a notable source by yourself.] A: Evidence: Research has shown that drug detection dogs act routinely based on the behavioral cues of their handlers, rather than only acting on their sense of smell for odor detection. In conclusion, these findings confirm that handler beliefs affect working dog outcomes, and human indication of scent location affects distribution of alerts more than dog interest in a particular location. These findings emphasize the importance of understanding both human and human–dog social cognitive factors in applied situations. Source: Handler beliefs affect scent detection dog outcomes Research has also showed that some of the high odor compounds are not used in the manufacture of training scents used in training drug detection dogs which might lead to failure of detection of those drugs. A small number of volatile and semi-volatile compounds present in very low concentrations and associated with very low odor detection thresholds cause the characteristic smell of a drug. These high odor impact compounds are not being used to manufacture surrogate training scents used in training forensic canines. This omission could explain why these surrogate scents are generally not effective. This information could lead to increased understanding of what drug detection canines are using as the signature odor of street drugs. 
Source: Investigating the aroma of marijuana, cocaine, and heroin for forensic applications using simultaneous multidimensional gas chromatography - mass spectrometry - olfactometry
Various factors such as breed type, drug type and search environment type might influence detection performance in drug detector dogs.
The olfactory acuity of dogs’ sense of smell toward various volatile chemical compounds may differ considerably, though results may also reflect different experimental designs of different laboratories. Odors of different drugs may be differently sensed by dogs, and consequently ease of detection may differ. These differences may be related to polymorphic forms of olfactory receptor genes or their breed specific allelic variants or to the proportion of functional vs. non-functional genes showing affinity to the volatile chemical compounds characteristic of a drug. It is well known that scent detection dog performance depends not only on olfactory acuity but also on canine cognitive and learning abilities. The detection performance of sniffer dogs is context-dependent.
Source: Efficacy of drug detection by fully-trained police dogs varies by breed, training level, type of drug and search environment
TL;DR: Based on current research, the microgram levels of cocaine present on circulated US currency are insufficient to draw an alert from drug detector dogs.
The authors concluded that as the average level of cocaine present in a single bill (10 µg) is 100,000 times less than the average level required for a drug-detector canine alert (1 g), “it is not plausible that innocently-contaminated US currency contains sufficiently enough quantities [sic] of cocaine and associated volatile chemicals to signal an alert from a properly-trained drug detector dog.”
Source: Drug Contamination of U.S. Paper Currency and Forensic Relevance of Canine Alert to Paper Currency: A Critical Review of the Scientific Literature.
However, there are only two published experimental studies on drug-detector dog alerts to U.S. currency spiked with various amounts of cocaine behind the conclusion above, and more research is needed to draw a firm conclusion on whether normal US paper currency contains enough narcotics residue to attract a drug-sniffing dog.
{ "pile_set_name": "StackExchange" }
Q: Magento & Google Analytics - Tracking Code or Google API? Is there any difference between using the Google Analytics Tracking Code or using the inbuilt Google API analytics section in Magento? Using the Google API section seems 'neater' as I don't have to copy & paste code into my site. I assume Magento inserts this code automatically once configured. I just wondered if there was much difference in terms of usability and features? A: It depends on which version of Magento you are using. The new Universal Analytics Code is only available since version 1.9.1.0. 1.9.0.1 on the other hand only had the old code, which made it a pain to add any multiple event tracking. So, if you're on 1.9.1.0, use the built in Google API. Anything prior to that I would consider adding the universal code yourself.
{ "pile_set_name": "StackExchange" }
Q: Real time PCR standard curve As blunt as possible: when performing real time PCR it is a routine step to run one PCR in order to plot a "standard curve" with several decreasing dilution ratios from your sample. what is the real purpose of this? how should results be used/interpreted? A: You can use 3 or 4 dilutions- 1:1, 1:5, 1:25, 1:125 Purpose of doing this is to calculate the primer efficiency. Ideally primer efficiency should be 2 i.e. two molecules of DNA are formed in a round of PCR. So after n-rounds of PCR there should be $2^n$ DNA. However, this may not be the case always. You calculate primer efficiency like this: Plot $Ct$ vs $log_2(Conc)$ Get the trendline (in excel) or use a linear regression command for other applications Get the slope ($s$) of the trendline Efficiency in this case would be $2^{-s}$. Some people use base of 10 in the log instead of 2 (for which you have to do $10^{-s}$ instead. This, you can directly use to find out how many copies are produced after n cycles i.e instead of $2^n$ it will be $x^n$ where $x$ is the calculated efficiency. This is particularly useful when you are calculating the fold changes using the comparative Ct method. For details see this article.
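The slope-to-efficiency step is easy to script; here is a minimal sketch in Python (the Ct values and dilution concentrations are invented for the example, and any linear-regression routine works equally well):
import numpy as np

# Hypothetical standard curve from 1:1, 1:5, 1:25, 1:125 dilutions
conc = np.array([1.0, 1/5, 1/25, 1/125])   # relative concentrations
ct   = np.array([18.1, 20.5, 22.8, 25.2])  # measured Ct values (made up)

# Fit Ct vs log2(concentration) and take the slope
s, intercept = np.polyfit(np.log2(conc), ct, 1)

efficiency = 2 ** (-s)   # ideally close to 2
print("slope = %.3f, efficiency = %.3f" % (s, efficiency))
With these made-up numbers the slope is about -1.02, giving an efficiency of roughly 2.03, i.e. slightly more than a doubling per cycle, which is within the range usually considered acceptable.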
{ "pile_set_name": "StackExchange" }
Q: How do I run Counter-Strike 1.6 in Zombie mode? Can someone explain in detail how to run Counter-Strike 1.6 in zombie mode?
A: The most recommended way of using add-ons is by running them on top of AMX MOD X. This is a system which runs on Counter-Strike and allows for plugins to be installed.
You need to set up and install AMX MOD X.
Now, you need to download the actual plugin. You could use: Allied Modders - Zombie MOD.
If you are unsure of what to install where, both archives should be extracted inside the cstrike or czero directory.
Zombie MOD.
Zombie MOD Resources.
Now just start a server and this should work.
{ "pile_set_name": "StackExchange" }
Q: How to receive an app update from Play Store I have an app that I used to host on a private server, sending the link to each new version by using GCM. I now host the app on the Play Store. When I have an updated version, I increment the version code and publish it to production. Can anyone tell me why the phones are not receiving a notification of the update? I have checked the notification checkbox in the app itself and I have checked auto-update from within the Play Store app settings. I've had a look around SO and certain individuals seem to think that in order to receive a notification, the app must implement GCM? Others think that the Play Store app regularly checks the version number of the installed app against the new one hosted on the Play Store, then notifies. Can anyone explain what users ought to see from an updated app on the Play Store and what I have to do for them to receive a notification. Thanks in advance, Matt
A: Uncheck auto-update in Google Play for your app: if it is checked, the update will be installed automatically without asking the user for permission, so no notification is shown. Also have patience; it can sometimes take a while for Google Play to push the update out to users. There is no need for GCM in this process; uploading the new version is enough.
{ "pile_set_name": "StackExchange" }
Q: Setup of Amazon Cloudfront with EC2 instance as origin and custom domain name Can you guys help me out in identifying what I am doing wrong in setting up CloudFront for my EC2 instance (web server) with a custom domain of mine. I am using my domain name (www.example.com) as the origin domain name. I have also supplied a certificate to CloudFront (*.example.com) using ACM. The problem I am facing is, when I point my custom domain name to CloudFront's domain name in Route 53 using an alias record, my website responds with an error 502. I'll really appreciate any help. I have explored all the content provided by AWS in respect to this but nothing seems to work till now.
A: Most 502 errors from CloudFront are caused by the SSL communication between CloudFront and the origin. CloudFront makes sure that your origin:
1. Has a trusted certificate
2. Has matching ciphers
3. CloudFront uses the SNI field in the Client Hello, which is set to the origin domain name. In most cases, if you have a certificate on EC2 with a www.example.com CN, you can forward the HOST header and it should solve your problem.
If you don't have HTTPS running on the origin, you can select "HTTP only" in the Origin protocol policy, as by default it is set to "Match Viewer".
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/http-502-bad-gateway.html
{ "pile_set_name": "StackExchange" }
Q: Finish recursive promise function execution in Javascript I have 3 functions: func1() returns some API data to func2(), and func2() is called from func3(). func2() has a Promise return type; in func2() I resolve only if certain conditions are met, else I want to call the same func2() again until the condition is met. But when I execute func3() I do not see my response from func2(). I get the error: TypeError: "callback" argument must be a function.
//func1()
const apiRequest = (options, func_callback) => {
  request(options, (err, res, body) => {
    let result = {
      body: body,
      error: err,
      res: res
    }
    func_callback(result);
  });
};

//func2
const getPromise = (options) => {
  return new Promise((resolve, reject) => {
    apiRequest(options, (response) => {
      if (response.error) {
        reject(response.error);
      }
      if (response.body.hasOwnProperty('message')) {
        console.error(`Error: Invalid token`);
        new Promise((resolve, reject) => {
          const payload = {
            url: 'https://abc',
            form: {},
            method: 'post'
          };
          request(payload, (err, res, body) => {
            if (err) {
              reject(err);
            } else {
              resolve(body);
            }
          });
        }).then((result) => {
          options.headers.Authorization = 'Bearer ' + result;
          getPromise(options); // seems Issue having this line to call again
        });
      } else {
        resolve(response.body);
      }
    });
  });
};

// func3()
function getSession() {
  const options = { url: 'someurl' };
  getPromise(options).then(result => {
    console.log('all ID' + result); // I can not see result here
  }).catch(error => {
    console.log('Error ', error);
  });
}
A: In the if condition where you create the new Promise(…).then(…), you never resolve the outer promise. You could solve that by adding resolve in the right places, but you shouldn't create promises within promises anyway. You should promisify at the lowest possible level. Make apiRequest return a promise instead of having it take a callback.
// func1()
function apiRequest(options) {
  return new Promise((resolve, reject) => {
    request(options, (err, res, body) => {
      if (err) reject(err);
      else resolve({ body, res });
    });
  });
}
You can even reuse it, and use proper promise chaining:
//func2
function getPromise(options) {
  return apiRequest(options).then(response => {
    if (response.body.hasOwnProperty('message')) {
      console.error(`Error: Invalid token`);
      const payload = {
        url: 'https://abc',
        form: {},
        method: 'post'
      };
      return apiRequest(payload).then(result => {
        options.headers.Authorization = 'Bearer ' + result.body;
        return getPromise(options); // retry with the refreshed token
      });
    } else {
      return response.body;
    }
  });
}
{ "pile_set_name": "StackExchange" }
Q: wx.ProgressDialog causing seg fault and/or GTK_IS_WINDOW failure when being destroyed This only happens on Linux (possible OS X also, can't test atm), works fine on Windows. I have a wx.ProgressDialog that is spawned with the main thread. I send the work off to another thread, and it periodically calls back to a callback function in the main thread that will update the ProgressDialog or, at the end of the work, destroy it. However, I get an interesting message on Linux when this happens: (python:12728): Gtk-CRITICAL **: IA__gtk_window_set_modal: assertion 'GTK_IS_WINDOW (window)' failed The dialog does close, but if I try to spawn it again it looks like it's already almost finished. Sometimes a seg fault will follow this message as well. I've tried to simulate it with a stripped down version here: import wxversion wxversion.select("2.8") import wx import sys import threading MAX_COUNT = 100 ## This class is in a different area of the codebase and class WorkerThread(threading.Thread): def __init__(self, callback): threading.Thread.__init__(self) self.callback = callback def run(self): # simulate work done. IRL, this calls another function in another # area of the codebase. This function would generate an XML document, # which loops through a list of items and creates a set of elements for # each item, calling back after each item. Here, we simply set up a for # loop and simulate work with wx.MilliSleep for i in xrange(MAX_COUNT): print i wx.MilliSleep(30) wx.CallAfter(self.callback, i) # Send done signal to GUI wx.CallAfter(self.callback, -1) class Frame(wx.Frame): def __init__(self, title): wx.Frame.__init__(self, None, title=title, pos=(150,150), size=(350,200)) panel = wx.Panel(self) box = wx.BoxSizer(wx.VERTICAL) m_btn = wx.Button(panel, wx.ID_ANY, "Run Stuff") m_btn.Bind(wx.EVT_BUTTON, self.OnRunButton) box.Add(m_btn, 0, wx.ALL, 10) panel.SetSizer(box) panel.Layout() def OnRunButton(self, event): self.progressDialog = wx.ProgressDialog("Doing work", "Doing Work", maximum=MAX_COUNT, parent=self, style=wx.PD_APP_MODAL | wx.PD_ELAPSED_TIME) self.worker(self.threadCallback) self.progressDialog.ShowModal() def worker(self, callback): # This bit is in another part of the codebase originally. In the test, # I could have added it to OnRunButton, but I wanted function calls to # be similar between test and actual code thread = WorkerThread(callback) thread.start() def threadCallback(self, info): # We update based on position, or destroy if we get a -1 if info == -1: self.progressDialog.Destroy() else: self.progressDialog.Update(info) app = wx.App(redirect=False) top = Frame("ProgressDialog Test") top.Show() app.MainLoop() (we select 2.8, but ideally any fix should work in both 2.8 and 3.0. I actually haven't been able to test it in 3.0 in linux due to a bad 3.0 build) This does a good job at representing the issue: works fine in Windows, but seg fault when it tries to destroy the progress dialog. However, I can't get the example to show the GTK_IS_WINDOW Ive tried searching for solutions. I've read that it might be due to the fact that the worker thread finishes too quickly, and thus leaves the GUI with some messages in it's queue. I'm not sure I completely understand this (never got the hang of Yields and messages, etc), but what I believe this to mean is that when the worker is at 100%, the ProgressDialog (being slower), might only be at 75%, and still has the extra 25% of messages to use to "Update" the GUI, but instead gets destroyed. 
I'd like some clarification on if I'm understanding that correctly or not. Also, I believe .Hide() works as a work around, but I'd like to Destroy it instead because that's the proper thing to do. Regardless, any help would be greatly appreciated. =) A: I've tried your code, also many modifications been tried to overcome this issue, but failed. Anyway, I've created the following wxPython script to fulfill your purpose, see below: import wxversion wxversion.select("2.8") # version 3.0 works, too. import wx import sys import threading import time MAX_COUNT = 200 class WorkerThread(threading.Thread): def __init__(self, target, countNum): threading.Thread.__init__(self, target = target) self.setDaemon(True) self.cnt = countNum self.target = target self.pb = self.target.pb def run(self): for i in xrange(self.cnt): print i+1 wx.MilliSleep(50) wx.CallAfter(self.pb.SetValue, i+1) wx.CallAfter(self.target.MakeModal, False) wx.CallAfter(self.target.Close) class ProgressBarFrame(wx.Frame): def __init__(self, parent, title, range = 100) : wx.Frame.__init__(self, parent = parent, title = title) self.range = range self.createProgressbar() self.SetMinSize((400, 10)) self.Centre() self.Show() self.t0 = time.time() self.elapsed_time_timer.Start(1000) def createProgressbar(self): self.pb = wx.Gauge(self) self.pb.SetRange(range = self.range) self.elapsed_time_st = wx.StaticText(self, label = 'Elapsed Time:') self.elapsed_time_val = wx.StaticText(self, label = '00:00:00') vbox_main = wx.BoxSizer(wx.VERTICAL) hbox_time = wx.BoxSizer(wx.HORIZONTAL) hbox_time.Add(self.elapsed_time_st, 0, wx.ALIGN_LEFT | wx.EXPAND | wx.ALL, 5) hbox_time.Add(self.elapsed_time_val, 0, wx.ALIGN_LEFT | wx.EXPAND | wx.ALL, 5) vbox_main.Add(self.pb, 0, wx.EXPAND | wx.ALL, 5) vbox_main.Add(hbox_time, 0, wx.EXPAND | wx.ALL, 5) self.SetSizerAndFit(vbox_main) self.elapsed_time_timer = wx.Timer(self) self.Bind(wx.EVT_TIMER, self.onTickTimer, self.elapsed_time_timer) def onTickTimer(self, event): fmt='%H:%M:%S' self.elapsed_time_val.SetLabel(time.strftime(fmt, time.gmtime(time.time()-self.t0))) class Frame(wx.Frame): def __init__(self, title): wx.Frame.__init__(self, None, title=title, pos=(150,150), size=(350,200)) panel = wx.Panel(self) box = wx.BoxSizer(wx.VERTICAL) m_btn = wx.Button(panel, wx.ID_ANY, "Run Stuff") self.Bind(wx.EVT_BUTTON, self.OnRunButton, m_btn) box.Add(m_btn, 0, wx.ALL, 10) panel.SetSizer(box) def OnRunButton(self, event): self.progressbar = ProgressBarFrame(self, 'Working Processing', MAX_COUNT) self.progressbar.MakeModal(True) worker = WorkerThread(self.progressbar, MAX_COUNT) worker.start() app = wx.App(redirect=False) top = Frame("ProgressDialog Test") top.Show() app.MainLoop() I'm using wx.Gauge to do what wx.ProgressDialog does, as well as an additional wx.Timer to show the elapsed time. MakeModal() method is used to mimic the ShowModal effect which is the default style that Dialog shows, do not forget to release the Modal status by MakeModal(False) or the frame would be freezed. You can add more stuff in the ProgressBarFrame class. I'm thinking the segment fault error may arise from the events calling, especially when multithreading issue is involved, maybe carefully inspect into the wx.ProgressDialog class would show some clue.
{ "pile_set_name": "StackExchange" }
Q: Flex Text Control Undo I'm having trouble finding any resource for adding ctrl-z undo capability to a Flex RichTextEditor control (a lack it apparently shares with other Flex text controls). I'm baffled that it's not in the native forms because it's such a fundamental capability, available in even standard browser text controls I believe. Any mention of this issue on the Flex sites (there are several) conflict; one says the issue is "Closed" and the resolution is "External" (whatever that means). Does anyone have any insight to offer? I've got an app the heavily requires extensive text editing. Flex in general works nicely, but this trivial lack is just about fatal, as anyone would imagine. A: I've read elsewhere -- in fact, in the answers to one of my questions on SO -- that the issue is not going to be resolved in Flex 3. Which seems to be correct since we are in 3.2 or maybe even beyond that, and there's no undo in sight. I was brave/stupid enough to implement an undo-redo in this component myself. At that time I was working on Windows. Now I'm on OSX and I realize just how non-cross-platform my solution is. The very statement of the problem (adding ctrl-z undo capability) is a large part of the problem (OSX has control AND this Apple key thing). Now I have to check how much work it would be to make the thing cross-platform... could be trivial. By amazing coincidence, just today I've been thinking about NOT using the RichTextEditor but rather something external (FckEditor comes to mind) because the RTE leaves so much to be desired (hence I arrived at your question). I've worked with the RTE a ton and gotten it to do a lot of what I want, but I still wonder why they didn't "finish" this component...
{ "pile_set_name": "StackExchange" }
Q: Unsure how to find out whether the Child process is terminated via completion or a signal I'm trying to write a program where, My parent process should be able to ignore the SIGINT signal and My child process, on the other hand, should perform the default function of the SIGINT signal.And finally, I should find out whether the Child process was terminated by any particular signal.My child process basically executes an execvp(), based on some input.Hence, the way I went about trying to do the same is, I added a SIGINT handler for my parent process and my guess is when the execvp() function is invoked, the child process would perform the SIGINT process using the default handlers. void sigint_handler(int signum){ printf(""); } int main(int argc, char **argv){ pid_t pid = fork(); if(pid<0) { printf("fork: error"); exit(-1); } else if(pid>0){ int status; signal(SIGINT,sigint_handler); pid_t child_pid= waitpid(pid, &status, 0); printf(""); } else if(pid==0){ printf(""); if(execvp(argv[1],&argv[1]) == -1) { printf(" "); exit(-1); } } } Is this code sufficient to handle the same? Also, if my child process is terminated by SIGINT or SIGSEGV etc, how would I go about finding whether my child process was terminated after completion or because of a signal and what signal was used. A: The value from wait() or friends for the status tells you — use the macros WIFEXITED() and WIFEXITSTATUS() for ordinary exit, or WIFSIGNALED() and WIFTERMSIG() for signal (the IF macros test how it exited, of course). There are also WIFSTOPPED(), WIFCONTINUED() and WSTOPSIG() for job control. See POSIX waitpid() (which documents wait() too). And most Unix-based systems also provide a WCOREDUMP() too, but POSIX does not mandate it. You can also look at the exit status in hex (4 digits) and it does not take long to spot the patterns, but you should use the macros. For the first part of my question, would my above-mentioned code ignore the SIGINT signal while it is still in the parent part of the process? Beware: avoid using printf() in signal handlers. It wasn't clear how your signal handling code was working; you'd chopped too much code from the question. However, you've now added enough of the missing code, and, as I guessed, your code isn't bullet proof because the signal() call shown is after the fork() and in the parent process. You should disable the signal before calling fork(); then the child should re-enable it. That protects the processes properly. For example (file sig11.c — compiled to create sig11 using gcc -O3 -g -std=c11 -Wall -Wextra -Werror -Wmissing-prototypes -Wstrict-prototypes sig11.c -o sig11): #include <assert.h> #include <signal.h> #include <stdbool.h> #include <stdio.h> #include <stdlib.h> #include <sys/wait.h> #include <unistd.h> int main(int argc, char **argv) { if (argc < 2) { fprintf(stderr, "Usage: %s command [arg ...]\n", argv[0]); return 1; } bool handle_signals = (signal(SIGINT, SIG_IGN) != SIG_IGN); pid_t pid = fork(); if (pid < 0) { fprintf(stderr, "fork: error\n"); return 1; } else if (pid > 0) { int status; pid_t child_pid = waitpid(pid, &status, 0); printf("child %d exited with status 0x%.4X\n", (int)child_pid, status); } else { assert(pid == 0); if (handle_signals) signal(SIGINT, SIG_DFL); execvp(argv[1], &argv[1]); fprintf(stderr, "failed to exec %s\n", argv[1]); exit(-1); } return 0; } Example output: $ ./sig11 sleep 40 ^Cchild 19721 exited with status 0x0002 $ We can debate whether you should be using sigaction() rather than signal() another day. 
The check for handle_signals means that if the program is run with SIGINT ignored, it does not suddenly enable the signal. This is a standard defensive technique. Note that errors are reported on standard error, not standard output. There's no need to check the return value from execvp(); it doesn't return if it is successful and if it does return it definitively failed. It would be possible to report a better error using errno and strerror(), but what's shown is not wholly unreasonable. The check for enough arguments is another defensive measure; reporting the correct usage is good manners. I also converted the if / else if / else if into if / else if / else + assertion — the third test is unnecessary. Using the macros from <sys/wait.h>: #include <assert.h> #include <signal.h> #include <stdbool.h> #include <stdio.h> #include <stdlib.h> #include <sys/wait.h> #include <unistd.h> int main(int argc, char **argv) { if (argc < 2) { fprintf(stderr, "Usage: %s command [arg ...]\n", argv[0]); return 1; } bool handle_signals = (signal(SIGINT, SIG_IGN) != SIG_IGN); pid_t pid = fork(); if (pid < 0) { fprintf(stderr, "fork: error\n"); return 1; } else if (pid > 0) { int status; int corpse = waitpid(pid, &status, 0); printf("child %d exited with status 0x%.4X\n", corpse, status); if (WIFEXITED(status)) printf("child %d exited normally with exit status %d\n", corpse, WEXITSTATUS(status)); else if (WIFSIGNALED(status)) printf("child %d exited because of signal %d\n", corpse, WTERMSIG(status)); else printf("child %d neither exited normally nor because of a signal\n", corpse); } else { assert(pid == 0); if (handle_signals) signal(SIGINT, SIG_DFL); execvp(argv[1], &argv[1]); fprintf(stderr, "failed to exec %s\n", argv[1]); exit(-1); } return 0; } Example outputs: $ ./sig11 sleep 4 child 19839 exited with status 0x0000 child 19839 exited normally with exit status 0 $ ./sig11 sleep 10 ^Cchild 19842 exited with status 0x0002 child 19842 exited because of signal 2 $
{ "pile_set_name": "StackExchange" }
Q: Limit data coming into Spotfire by a different data table I have Table A prompted on Year/Month and Table B. Table B also has a Year/Month column. Table A is the default data table (gets pulled in first). I have set up a relationship between Table A and B on the common Year/Month column. The goal is to get Table B to only pull through data where the Year/Month matches the Year/Month on Table A (what the user entered). The purpose is to keep the user from entering the Year/Month multiple times. The issue is Table B contains almost 35 million records. What I do not want to do is have Spotfire pull across all 35 Million records. What is currently happening is Spotfire is pulling all those records, then by setting filtering to include Filtered Rows Only on Table B, I am limiting what is seen in the visualization to under 200,000 rows. I would much rather just pull across 200,000 rows to start with. The question: Is there a way to force Spotfire to filter the data table (Table B) by another data table (Table A) as it pulls the data table (Table B) across, thus only pulling a small number of records into memory? A: I'm writing this off the basis that most people utilize information links to get data into Spotfire, especially large data sets where the data is not embedded in the analysis. With that being said, I prefer to handle as much if not all of the joining / filtering / massaging at the data source versus the Spotfire application. Here are my views on the best practices and why. Tables / Views vs Procedures as Information Links Most people are familiar with the Table / View structure and get data into Spotfire in one of 2 ways Create all joins / links in information designer based off data relations defined by the author by selecting individual tables from the data sources avaliable Create a view (or similar object) at the data source where all joining / data relations are done, thus giving Spotfire a single flat file of data Personally, option 2 is much easier IF you have access to the data source since the data source is designed to handle this type of work. Spotfire just makes it available but with limited functionality (i.e. complex queries, Intellisense, etc aren't available. No native IDE). What's even better is Stored Procedures IMHO and here is why. In options 1 and 2 above, if you want to add a column you have to change the view / source code at the data source, or individually add a column in the information designer. This creates dwarfed objects and clutters up your library. For example, when you create an information link there is a folder with all the elements associated with it. If you want to add columns later, you'll have another folder for any columns added, and this gets confusing and hard to manage. If you create a procedure at the data source to return the data you need, and later want to add some columns, you only have to change this at the data source. i.e. change the procedure. Everything else will be inherited by Spotfire... all you have to do is click the "reload data" button in Spotfire. You don't have to change anything in the information designer. Additionally, you can easily add new parameters, set default parameter properties or prompt the user, making this a very efficient method of data retrieval. This is perfect when the data source is an OLTP and not a data-mart/data-warehouse (i.e. the data isn't already aggregated / cleansed) but can also be powerful in data warehouse environments as well. 
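As a rough illustration of the stored-procedure approach just described (all table, column, and parameter names here are invented for the example), the data-source side could look something like this in T-SQL:
-- Hypothetical procedure used as the Spotfire information link source.
-- Adding a column later only requires editing this SELECT list;
-- Spotfire picks it up on the next "reload data".
CREATE PROCEDURE dbo.GetSalesForYearMonth
    @YearMonth char(6)  -- prompted in Spotfire, e.g. '201406'
AS
BEGIN
    SET NOCOUNT ON;

    SELECT s.OrderId,
           s.YearMonth,
           s.Amount,
           c.CustomerName
    FROM   dbo.Sales s
    JOIN   dbo.Customer c ON c.CustomerId = s.CustomerId
    WHERE  s.YearMonth = @YearMonth;  -- filtering happens at the source,
                                      -- so only matching rows cross the wire
END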
Ditch the GUI, Edit the SQL I find managing conditions, parameters, join paths, etc a bit annoying--but that's me. Instead, when possible, I prefer to click "Edit SQL" next to all the elements in my Information Link and alter the SQL there. This will allow database guys to work in an environment which is more familiar.
{ "pile_set_name": "StackExchange" }
Q: Adding attachments to feature with ArcPy I was trying to add some pictures as attachments to my features in a feature class in a .mdb with ArcPy. I'm fairly new to this, so I was just playing with some test data and I modified the demo code from the ArcGIS tutorial like this:
# encoding: utf-8
import arcpy
import _csv
import os
import sys

arcpy.env.workspace = "F:/全国省级、地市级、县市级行政区划shp/全国省级、地市级、县市级行政区划shp/New Personal Geodatabase.mdb"
input = "F:/New Personal Geodatabase.mdb/Province"
inputField = "OBJECTID"
matchTable = r"F:\matchtable.csv"
matchField = "OBJECTID"
pathField = "Picture"
picFolder = r"F:\MGS"

writer = _csv.writer(open(matchTable, "wb"), delimiter=",")
writer.writerow([matchField, pathField])
for file in os.listdir(picFolder):
    if str(file).find(".jpg") > -1:
        writer.writerow([str(file).replace(".jpg", ""), file])
del writer

arcpy.EnableAttachments_management(input)
arcpy.AddAttachments_management(input, inputField, matchTable, matchField, pathField, picFolder)
(Because I'm Chinese, the file names contain Chinese characters. However, in other applications these file names worked with no problem.) Then I got an error like this:
Traceback (most recent call last):
  File "F:/���ֿμ��͹�������/ʵϰ/ArcPy Sandbox/AutomateScript/BatchAddAttribute.py", line 29, in <module>
    arcpy.AddAttachments_management(input, inputField, matchTable, matchField, pathField, picFolder)
  File "F:\ArcGIS 10.2.1\Engine10.2\arcpy\arcpy\management.py", line 124, in AddAttachments
    raise e
arcgisscripting.ExecuteError: ERROR 000228: Cannot open the dataset.
Failed to execute (AddAttachments).
ArcGIS version is 10.2. Any idea why?
A: I suspect half your problem is you're trying to match up an OBJECTID to an image name:
writer.writerow([str(file).replace(".jpg", ""), file])
You can't join an OBJECTID (long integer) to a text field, so you're going to need a different field. I personally don't use CSV files as I find them to fail at the worst possible time - and you can never trust you've accessed all the records.. perhaps this might work for you:
import arcpy, os, sys

arcpy.env.workspace = "F:/全国省级、地市级、县市级行政区划shp/全国省级、地市级、县市级行政区划shp/New Personal Geodatabase.mdb"
input = "F:/New Personal Geodatabase.mdb/Province"
inputField = "PicName"  # field in your feature class that has the name of the picture
matchTable = r"F:\matchtable.dbf"
matchField = "PicName"  # field in your new table that has the name of the picture
pathField = "Picture"
picFolder = r"F:\MGS"

# purge any existing table first, then (re)create it
if arcpy.Exists(matchTable):
    try:
        arcpy.Delete_management(matchTable)
    except:
        arcpy.AddError("Unable to purge existing table")
        sys.exit(-1)

arcpy.CreateTable_management("F:\\", "matchtable.dbf")
arcpy.AddField_management(matchTable, matchField, "TEXT", field_length=255)
arcpy.AddField_management(matchTable, pathField, "TEXT", field_length=255)

with arcpy.da.InsertCursor(matchTable, [matchField, pathField]) as ICur:
    for file in os.listdir(picFolder):
        fName, fExt = os.path.splitext(file)
        if fExt.upper() == '.JPG':
            arcpy.AddMessage(file)
            ICur.insertRow([fName, os.path.join(picFolder, file)])

arcpy.EnableAttachments_management(input)
arcpy.AddAttachments_management(input, inputField, matchTable, matchField, pathField, picFolder)
{ "pile_set_name": "StackExchange" }
Q: Does the Eastern Orthodox Church believe in Aerial Toll Houses? In the Oration, St. Gregory of Nazianzus talked about a “last baptism,” the baptism of eschatological fire, longer lasting and more painful than one’s penance for sins (Or. 39.19). Some Protestants have been using arguments from a few Orthodox who refuse to acknowledge any intermediary state for the departed, claiming that such is a Gnostic idea which crept into Eastern Orthodoxy from Latin scholasticism and has no Apostolic foundation. This question does not necessarily coincide with the controversy between Fr. Seraphim Rose and Fr. Lazar Puhalo, but comments surrounding the dispute can be included in the answer. In Eastern-rite Catholicism we believe in Aerial Toll Houses; it's a consensus in our tradition, but apparently this is not the case in Eastern Orthodoxy.
A: It's a complex issue which might not be addressed completely by this answer right now, but given enough time I'll elaborate in more detail to address many aspects related to this controversy in great depth in my next update. For those unfamiliar with the debate between Hieromonk Seraphim Rose and Archbishop Lazarus Puhalo, this might be a good preliminary introduction to the Toll Houses controversy within the Eastern Orthodox communion.1 For now I'll provide support for this view. Later, I'll add a counter-argument from those who reject this teaching. Each side in this controversy reads the same passages from the Fathers but comes to opposite conclusions: Fr. Seraphim heralds this teaching as an Orthodox faith while Fr. Lazarus condemns it as a heterodox innovation. The former claims to preserve this faith from the Fathers while the latter claims that this novelty is unknown to the Fathers.2
We believe that the souls of those that have fallen asleep are either at rest or in torment, according to what each has done; — for when they are separated from their bodies, they depart immediately either to joy, or to sorrow and lamentation; though confessedly neither their enjoyment nor condemnation are complete. For after the common resurrection, when the soul shall be united with the body, with which it had behaved itself well or ill, each shall receive the completion of either enjoyment or of condemnation. And the souls of those involved in mortal sins, who have not departed in despair but while still living in the body, though without bringing forth any fruits of repentance, have repented — by pouring forth tears, by kneeling while watching in prayers, by afflicting themselves, by relieving the poor, and finally by showing forth by their works their love towards God and their neighbor, and which the Catholic Church has from the beginning rightly called satisfaction — [their souls] depart into Hades, and there endure the punishment due to the sins they have committed. But they are aware of their future release from there, and are delivered by the Supreme Goodness, through the prayers of the Priests, and the good works which the relatives of each do for their Departed; especially the unbloody Sacrifice benefiting the most; which each offers particularly for his relatives that have fallen asleep, and which the Catholic and Apostolic Church offers daily for all alike. Of course, it is understood that we do not know the time of their release. We know and believe that there is deliverance for such from their direful condition, and that before the common resurrection and judgment, but when we know not.
Patriarch Dositheous of Jerusalem, Council of Jerusalem in 1672, Decree 18.
Some EOs argue that this confession might show Latin influence. As I'll later show, such accusations can't be sustained because this synod is accepted by the Patriarchal Sees of Constantinople, Serbia, and Moscow. If it were true that Dositheous was influenced by the Jesuits, those Sees wouldn't have accepted this synod.
To make it more complicated, I'll go through EO saints one by one and discuss whether or not they teach the doctrine of Toll Houses explicitly:
Two Nuns, who had both been Abbesses, died. The Lord revealed to me how their souls had been subjected to the aerial tests, how they had been tried and then condemned. For three days and nights I prayed, wretched as I am, entreating the Mother of God for them, and the Lord in His goodness pardoned them through the prayers of the Mother of God; they passed all the aerial tests and received forgiveness through God's mercy.
Archimandrite Lazarus Moore, St. Seraphim of Sarov: A Spiritual Biography, New Sarov Press, 1994.
This is a very controversial subject to discuss. The Orthodoxy of this teaching as it now stands has not been decided. Personally, as a Byzantine-rite Catholic I believe in Toll Houses, especially because Pope St. John Paul II referred to Hieromonk Seraphim of Sarov as a Catholic saint. To keep my neutrality on this subject, in my next update I'll add counter-arguments from those who deny Toll Houses. The purpose of this answer is to bring awareness and to inform Catholics, Orthodox, and Evangelicals alike about this controversy: it's not true that the idea of purgatory is unique to Latin theology. As this answer will later show, both East and West developed this doctrine around the same time period. The Latins dogmatized this teaching at the Council of Florence in 1439, while the Greeks kept it as theological discourse among the Fathers without dogmatizing it.
1 In my next update, I'll include the Eastern Catholic and Oriental Orthodox positions on Toll Houses.
2 To be neutral in my answer I'll address both sides as thoroughly as possible. Because some of the debates revolve around the variations between the Greek and Russian texts, I'll try my best to simplify my answer by discussing only the most debated texts, for brevity.
{ "pile_set_name": "StackExchange" }
Q: How to change PageView after changing tabs? I apologize immediately for my English. I found some code on Google and ran it in Android Studio, but the result of the code is not the one I want. When I click on a tab, control moves to another tab (for example, item1, item2, item3, item4); going from item1 to item2 works, but the ListView elements remain the same. How do I make the ViewPager contents change after clicking a tab?
final TabLayout tab = findViewById(R.id.tabs);
final ViewPager viewPager = findViewById(R.id.view_pager);
tab.removeAllTabs();
for (int k = 0; k < GoodsGroupList.size(); k++) {
    tab.addTab(tab.newTab().setText("" + GoodsGroupList.get(k)));
}
tab.setTabMode(TabLayout.MODE_SCROLLABLE);
PlansPagerAdapter adapter = new PlansPagerAdapter(getSupportFragmentManager(), tab.getTabCount());
viewPager.setAdapter(adapter);
viewPager.addOnPageChangeListener(new TabLayout.TabLayoutOnPageChangeListener(tab));
Fragment_home.xml
<com.google.android.material.tabs.TabLayout
    android:id="@+id/tabs"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:background="#000000"
    tools:ignore="MissingConstraints" />

<androidx.viewpager.widget.ViewPager
    android:id="@+id/view_pager"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:layout_marginTop="48dp"
    app:layout_behavior="@string/appbar_scrolling_view_behavior"
    android:layout_below="@+id/tabs"/>
Fragment_menu.xml
<ListView
    android:id="@+id/ListViewer"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:divider="@drawable/list_divider"
    tools:ignore="MissingConstraints" />
A: TabLayout does not change tab on touch. I found a similar problem, added part of its code, and my problem was resolved.
tab.addOnTabSelectedListener(new TabLayout.OnTabSelectedListener() {
    @Override
    public void onTabSelected(TabLayout.Tab tab) {
        viewPager.setCurrentItem(tab.getPosition());
        Log.d("Tab", String.valueOf(tab.getPosition()));
    }

    @Override
    public void onTabUnselected(TabLayout.Tab tab) {
    }

    @Override
    public void onTabReselected(TabLayout.Tab tab) {
    }
});
{ "pile_set_name": "StackExchange" }
Q: Replace values in multiple columns by values of another column on condition I have a data frame similar to this: > var1<-c("01","01","01","02","02","02","03","03","03","04","04","04") > var2<-c("0","4","6","8","3","2","5","5","7","7","8","9") > var3<-c("07","41","60","81","38","22","51","53","71","72","84","97") > var4<-c("107","241","360","181","238","222","351","453","171","372","684","197") > df<-data.frame(var1,var2,var3,var4) > df var1 var2 var3 var4 1 01 0 07 107 2 01 4 41 241 3 01 6 60 360 4 02 8 81 181 5 02 3 38 238 6 02 2 22 222 7 03 5 51 351 8 03 5 53 453 9 03 7 71 171 10 04 7 72 372 11 04 8 84 684 12 04 9 97 197 I want to replace all values of the variables var2,var3,var4 with "0" that exist where var1 is 02 and/or 03. The digit number also needs to be the same so that df looks like this: var1 var2 var3 var4 1 01 0 07 107 2 01 4 41 241 3 01 6 60 360 4 02 0 00 000 5 02 0 00 000 6 02 0 00 000 7 03 0 00 000 8 03 0 00 000 9 03 0 00 000 10 04 7 72 372 11 04 8 84 684 12 04 9 97 197 Now, I also need to be sure the command will be executed, even if var1 would not contain 02 or 03. Basically something like if var1 contains 01 or 02 set the corresponding values in var2,var3 and var4 to 0 according to the number of digits in var2,var3 and var4 (e.g. 97 will be 00 and 197 will be 000) and if not, do nothing. Any suggestions? A: One solution is to use mutate and case_when from dplyr library(dplyr) df <- df %>% mutate(var2 = case_when(var1 %in% c('02','03') ~ '0', TRUE ~ as.character(var2)), var3 = case_when(var1 %in% c('02','03') ~ '00', TRUE ~ as.character(var3)), var4 = case_when(var1 %in% c('02','03') ~ '000', TRUE ~ as.character(var4)))
{ "pile_set_name": "StackExchange" }
Q: possible collectionName values for System.Data.OleDb.OleDbConnection.GetSchema(string collectionName) The code below works swimmingly. I'm not trying to accomplish anything specific. I am, however, convinced that there must be more possible values for the collectionName parameter. Does someone know the full list of possible values?
void Foo(string pathToAccessDb)
{
    OleDbConnection conn = new OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0; " +
                                               "Data Source=" + pathToAccessDb);
    DataTable tables = conn.GetSchema("Tables");
    DataTable columns = conn.GetSchema("Columns");
    //DataTable other = conn.GetSchema("other values ???");
}
A: The valid values for GetSchema's collectionName parameter are found in the OleDbMetaDataCollectionNames Class.
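You can also ask the provider itself at run time: every ADO.NET provider exposes a special "MetaDataCollections" collection that lists the collection names it supports. A small sketch, reusing the question's connection string:
using System;
using System.Data;
using System.Data.Common;
using System.Data.OleDb;

void ListCollections(string pathToAccessDb)
{
    using (var conn = new OleDbConnection(
        "Provider=Microsoft.Jet.OLEDB.4.0; Data Source=" + pathToAccessDb))
    {
        conn.Open();
        // "MetaDataCollections" enumerates every valid collectionName
        DataTable meta = conn.GetSchema(DbMetaDataCollectionNames.MetaDataCollections);
        foreach (DataRow row in meta.Rows)
            Console.WriteLine(row["CollectionName"]);
    }
}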
{ "pile_set_name": "StackExchange" }
Q: JSON_decode with file_get_contents, why is it not working? I use the following simple code to calculate a copper price on my website.
<?php
$copper_data = json_decode(file_get_contents('https://www.quandl.com/api/v3/datasets/LME/PR_CU.json?limit=1&api_key=XXXXXXX'), true);
$currency_data = json_decode(file_get_contents('https://openexchangerates.org/api/latest.json?app_id=XXXXXXX'), true);
$copper_lv_per_ton = $copper_data['dataset']['data'][0][2]*$currency_data['rates']['BGN'];
?>
The code works just fine in a static PHP page, but when included in a Joomla article (via a plugin called Sourcerer) it does not work.
A: The problem was that allow_url_fopen was disabled in php.ini or php73-fcgi.ini:
allow_url_fopen = 1 (or On)
On my server there are many files (php73-fcgi.ini, php72-fcgi.ini and more). In all of them I found allow_url_fopen and enabled it (set it to 1 or On); then it worked.
Be careful, because the change took 10-15 minutes to take effect in my case, I believe because of caching.
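To make this kind of failure visible instead of silent, you can check the setting at run time and fall back to cURL when URL wrappers are off. A hedged sketch (the URL is the question's own; error handling is kept minimal):
<?php
function fetch_json($url) {
    if (ini_get('allow_url_fopen')) {
        $raw = file_get_contents($url);
    } else {
        // allow_url_fopen is off - use cURL instead
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $raw = curl_exec($ch);
        curl_close($ch);
    }
    if ($raw === false) {
        return null; // request failed
    }
    return json_decode($raw, true);
}

$copper_data = fetch_json('https://www.quandl.com/api/v3/datasets/LME/PR_CU.json?limit=1&api_key=XXXXXXX');
?>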
{ "pile_set_name": "StackExchange" }
Q: How are File Timestamps recorded in classic Mac OS? When I save a file it has a 'Created At' and 'Updated At' Value saved as well. I would like to know HOW these values are saved? Are they saved as an integer representing a number of seconds since 1904? or are they saved as some kind of string data? Also Where are they saved? as part of the file itself or in the desktop file? A: The HFS filesystem stores file metadata in a single large file called the "catalog file", with one record for each file or directory. Creation and modification times are stored as 32-bit unsigned integers representing a count of seconds since midnight, January 1, 1904. (Source: Inside Macintosh: Files)
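As an illustration of that representation (not something classic Mac OS itself would run), converting such a value to a calendar date is straightforward. A sketch in Python, noting that HFS stores these values in local time and the sample value is chosen only to show the epoch arithmetic:
from datetime import datetime, timedelta

HFS_EPOCH = datetime(1904, 1, 1)  # midnight, January 1, 1904

def hfs_to_datetime(seconds):
    """Convert a 32-bit unsigned HFS timestamp to a datetime."""
    return HFS_EPOCH + timedelta(seconds=seconds)

print(hfs_to_datetime(3029529600))  # -> 2000-01-01 00:00:00
Because the field is a 32-bit unsigned count, the representable range runs from 1904 until it wraps around in 2040.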
{ "pile_set_name": "StackExchange" }
Q: SQL Server 2008 "issues" There is something wrong with my installation of SQL Server that I can't put my finger on, which is why the vague title of this post. As an example, I was trying to open an MDF file that is part of an example solution from MSDN for DataGrid templates, and got the following error. The info about my install from SQL Server Management Studio is below. Can someone help me solve the problem I'm having opening the MDF file? Is this a symptom of my install in general, as I suspect? If so, can you suggest how to resolve the install? Cheers, Berryl Microsoft SQL Server Management Studio 10.0.2531.0 Microsoft Data Access Components (MDAC) 6.0.6002.18005 Microsoft MSXML 3.0 5.0 6.0 Microsoft Internet Explorer 9.0.8112.16421 Microsoft .NET Framework 2.0.50727.4216 Operating System 6.0.6002 A: Your server is version 612 which is actually SQL Server 2005 (SELECT @@VERSION to verify this) and the file your attaching is 622 which is SQL Server 2008, you cannot attach an mdf sourced from a newer version of SQL server.
{ "pile_set_name": "StackExchange" }
Q: SQL Server Determining Hard Coded Date as Larger When It's Not? An old employee left a massive query behind that I've been debugging and it appears that the issue has come down to SQL Server itself determining a comparison differently than what I would have expected. I have a table with a column col1 containing the value 20191215 as a datetime. The part in question is similar to the following: select case when col1 > '01/01/2020' then 1 else 0 end This statement is returning 1, suggesting that '12/15/2019' is larger than '01/01/2020'. I do not need assistance correcting the query, as I have already made changes to do so other than using the comparison the previous employee was using, I am simply curious as to why SQL Server would evaluate this as I have described. I understand that this is not the typically way SQL Server would store dates as well, would the issue simply be the formatting of the dates? Current SQL Server version is: SQL Server 2014 SP3 CU3. SQL Fiddle link that shows the same results Please note that the link does not contain an exact replica of my case Edit: Included additional info relevant to actual query. A: It is a string comparison not a date comparison: select case when '12/15/2019' > '01/01/2020' then 1 else 0 end vs select case when CAST('12/15/2019' AS DATE) > CAST('01/01/2020' AS DATE) then 1 else 0 end db<>fiddle demo I am simply curious as to why SQL Server would evaluate this as I have described. '12/15/2019' it is a string literal, SQL Server does not know you want to treat a date unless you explicitly express your intention. I have a table with a column col1 containing the value 20191216 If you are comparing with a column then the data type of column matters and data type precedence rules
{ "pile_set_name": "StackExchange" }
Q: How do I position an element full-screen with a given aspect ratio? I'm making a canvas game. The canvas needs to be positioned so that it keeps its proportions when the window is resized and always touches either the top and bottom or the left and right edges of the window. In the picture: black is the window, red is the canvas. The proportions are the same everywhere. Thanks for the help!
A:
html, body {
  height: 100%;
  margin: 0;
  padding: 0;
  overflow: hidden;
}

.wrapper,
.wrapper > canvas {
  position: absolute;
  top: 50%;
  left: 50%;
  transform: translate(-50%, -50%);
}

.wrapper {
  width: 100vw;
  height: 0;
  padding-bottom: 60%;
}

.wrapper > canvas {
  width: calc(100vh * (1 / 0.6));
  height: 100%;
  max-width: 100%;
  max-height: 100vh;
  box-shadow: inset 0 0 0 3px #f00;
}
<div class="wrapper">
  <canvas></canvas>
</div>
It seems to display incorrectly in the snippet here; the same thing is on CodePen.
{ "pile_set_name": "StackExchange" }
Q: Silverlight: How to databind on a condition I have an ItemsControl object and am setting the DataTemplate to hold a Grid with a couple of controls in it. The controls are databound to a collection of some object MyObj, specifically a TextBlock and a ComboBox. MyObj has its own collection inside of it for a property. If that property has only 1 object in its collection, only the TextBlock is visible. But if there is more than 1 object in the collection, the TextBlock is visible and the ComboBox becomes visible once the TextBlock is clicked on. I have the ComboBox filled up with what it needs, I just can't figure out how to specify which ComboBox needs to become visible when the TextBlock is clicked on. I guess my question is, how would I even go about doing this? Or, is there a better way to think about this problem? I'm new to databinding in Silverlight and running into a bunch of issues on my own. Any help is always appreciated. Thanks in advance.
A: One thing that you could do is add an extra property to the data item that you are binding to, something like 'IsSelectionAvailable'. Make the visibility of your ComboBox bound to this property (via a boolean-to-Visibility value converter). Finally, add a click event handler for the TextBlock that sets the IsSelectionAvailable property to true for the object it is bound to. Hope that helps.
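For reference, here is a minimal sketch of such a converter. Silverlight has no built-in one, so the class name is up to you, and the bound property must raise INotifyPropertyChanged for the ComboBox to appear when the click handler flips it:
using System;
using System.Globalization;
using System.Windows;
using System.Windows.Data;

// Maps true -> Visible, false -> Collapsed, for bindings like
// Visibility="{Binding IsSelectionAvailable, Converter={StaticResource BoolToVis}}"
public class BoolToVisibilityConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        bool flag = (value is bool) && (bool)value;
        return flag ? Visibility.Visible : Visibility.Collapsed;
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        return (value is Visibility) && ((Visibility)value == Visibility.Visible);
    }
}
Declare an instance of it as a resource (x:Key="BoolToVis") and reference it from the ComboBox's Visibility binding.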
{ "pile_set_name": "StackExchange" }
Q: CSS - menu items overlapping I have a menu with two menu items and when the user clicks on each item, a sub-menu is displayed. The issues is both menues displayed at the same spot - under the first item. I have been tweaking it for a while but cannot figure out a way to fix the issue. Also I need to make sure as one menu item clicked, the sub-menu for the other item disappears. Can anyone point me in the right direction? $(document).ready(function(){ $('.menu-item').on('click', function() { $(this).children(".dropdown-content").toggle(); }); }); #nav { width: 100%; height: 3em; color: #fff; line-height: 3em; } #nav .nav-wrapper { height: 100%; position: relative; top: 0; } .right {float: right !important;} #nav-mobile { list-style-type: none; margin-top: 0; } #nav-mobile li { display: inline; margin: 0 2.5em 1.5em 1.5em; font-family: Roboto, Helvetica, Arial, sans-serif; } #nav-mobile li a { text-decoration: none; /*position: relative;*/ } #nav-mobile li img { position: relative; top: .4em; } #nav-mobile li .dropdown-content { display: none; position: absolute; color: #188CCC; background-color: white; z-index: 1; box-shadow: 0 .5em 1.5em 0 rgba(28, 24, 28, 0.65); min-width: 120px; } #nav-mobile li .dropdown-content li { display: block; margin:0; width: 100%; } #nav-mobile li .dropdown-content li a { display: block; margin:0; padding: 0.25em 1.75em 0.25em 1.2em; } #nav-mobile li .dropdown-content li:hover { background-color: #E0E0E0; } <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <ul id="nav-mobile"> <li class="menu-item"> <img src="images/img1.png"> <a class="hide-on-med-and-down white-text" href='#'><span id="lblLinks">Links</span></a> <ul id="linksdrop" class="dropdown-content"> <li><a href="#">Link1</a></li> <li><a href="#">Link2</a></li> <li><a href="#">Link3</a></li> </ul> </li> <li class="menu-item"> <img src="images/img2.png"> <a class="hide-on-med-and-down white-text" href='#'> <span>User</span></a> <ul id="userdrop" class="dropdown-content"> <li><a href="profile.html">Profile</a></li> <li><a href="logout.html">Log Off</a></li> </ul> </li> </ul> A: For the positioning, set display: inline-block on #nav-mobile li and give it a width. In this example, I set its min-width to 5em but do what makes sense in your design. To close the other one there’s a few options but just doing $(this).siblings().children(".dropdown-content").hide(); may be enough. 
$(document).ready(function(){ $('.menu-item').on('click', function() { $(this).children(".dropdown-content").toggle(); $(this).siblings().children(".dropdown-content").hide(); }); }); #nav { width: 100%; height: 3em; color: #fff; line-height: 3em; } #nav .nav-wrapper { height: 100%; position: relative; top: 0; } .right {float: right !important;} #nav-mobile { list-style-type: none; margin-top: 0; } #nav-mobile li { display: inline-block; margin: 0 2.5em 1.5em 1.5em; font-family: Roboto, Helvetica, Arial, sans-serif; min-width: 5em; } #nav-mobile li a { text-decoration: none; /*position: relative;*/ } #nav-mobile li img { position: relative; top: .4em; } #nav-mobile li .dropdown-content { display: none; position: absolute; color: #188CCC; background-color: white; z-index: 1; box-shadow: 0 .5em 1.5em 0 rgba(28, 24, 28, 0.65); min-width: 120px; } #nav-mobile li .dropdown-content li { display: block; margin:0; width: 100%; } #nav-mobile li .dropdown-content li a { display: block; margin:0; padding: 0.25em 1.75em 0.25em 1.2em; } #nav-mobile li .dropdown-content li:hover { background-color: #E0E0E0; } <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <ul id="nav-mobile"> <li class="menu-item"> <img src="images/img1.png"> <a class="hide-on-med-and-down white-text" href='#'><span id="lblLinks">Links</span></a> <ul id="linksdrop" class="dropdown-content"> <li><a href="#">Link1</a></li> <li><a href="#">Link2</a></li> <li><a href="#">Link3</a></li> </ul> </li> <li class="menu-item"> <img src="images/img2.png"> <a class="hide-on-med-and-down white-text" href='#'> <span>User</span></a> <ul id="userdrop" class="dropdown-content"> <li><a href="profile.html">Profile</a></li> <li><a href="logout.html">Log Off</a></li> </ul> </li> </ul>
{ "pile_set_name": "StackExchange" }
Q: How to create regions on App Fabric via Powershell I think the title is clear, I'm surfing the web about an hour but every single page talks about creating regions dynamically using .net. I'm sure we have a command to execute on powershell. do you know it? Thanks in advance, A: There's no Powershell commandlet out of the box for creating/managing regions. The solution - write one! As Daniel Richnak says in the comments, Powershell is .NET under the covers, and this means you can write extra Powershell commandlets to fill in the gaps. A commandlet is a regular class that inherits from System.Management.Automation.Cmdlet, and it's decorated with the System.Management.Automation.Cmdlet attribute as well. Making it work is then a matter of overriding the ProcessRecord method. Command-line parameters are implemented as properties on the class, decorated with the System.Management.Automation.Parameter attribute. So a commandlet for creating regions would look something like: using System.Management.Automation; using Microsoft.ApplicationServer.Caching; [Cmdlet(VerbsCommon.New, "CacheRegion")] public class NewCacheRegion : Cmdlet { [Parameter(Mandatory = true, Position = 1)] public string Cache { get; set; } [Parameter(Mandatory = true, Position = 2)] public string Region { get; set; } protected override void ProcessRecord() { base.ProcessRecord(); DataCacheFactory factory = new DataCacheFactory(); DataCache cache = factory.GetCache(Cache); try { cache.CreateRegion(Region); } catch (DataCacheException ex) { if (ex.ErrorCode == DataCacheErrorCode.RegionAlreadyExists) { Console.WriteLine(string.Format("There is already a region named {0} in the cache {1}.", Region, Cache)); } } } }
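Once that class is compiled into an assembly, loading and calling it from PowerShell works like any other commandlet. A sketch (the DLL path and cache/region names are illustrative):
# Load the compiled cmdlet assembly as a binary module (PowerShell 2.0+)
Import-Module .\MyCacheCmdlets.dll

# The cmdlet name comes from the [Cmdlet(VerbsCommon.New, "CacheRegion")] attribute
New-CacheRegion -Cache "MyCache" -Region "MyRegion"

# Positional parameters work too (Position = 1 and 2 above)
New-CacheRegion "MyCache" "AnotherRegion"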
{ "pile_set_name": "StackExchange" }
Q: Lebesgue integral on given set I'd like to calculate the following integral $\int\limits_{\mathbb{R^3}}\chi_B\,d\cal L^3$ with Lebesgue measure on the set $B:= \{(x,y,z) \in \mathbb{R^3}: 0≤y≤x\sqrt{3}\,,0≤z≤2\,,1≤x^2+y^2≤4\}$, where $\chi_B$ is defined as follows: $\chi_B:=\begin{cases} 0,\, x\not\in B \\ 1,\,x\in B \end{cases}$ I'd like to do it with cylinder coordinates $x=r\cdot \cos(\varphi)\\y=r\cdot \sin(\varphi) \\ z=z$ I have tried it on my own but I can't find the right limits for the transformation. Can someone tell me the right limits and how to get them, please? Thanks in advance A: Firstly, notice that $\int\limits_{\mathbb{R^3}}\chi_B\,d\cal L^3$ = $\int\limits_{B}d\cal L^3$. You have the correct transformation. To find the limits: Put $x=r\cdot \cos{(\phi)}$, $y=r\cdot \sin{(\phi)}$ in $1\leq x^2+y^2\leq 4$. You get: $1\leq r^2\leq 4$. Hence, $1\leq r\leq 2$. Now, with $0\leq y\leq \sqrt{3}x$ you get with the transformation the following: $$ 0\leq r\sin{(\phi)}\leq \sqrt{3}r \cos{(\phi)} $$ Dividing by $r\cos{(\phi)}$ yields $$ 0\leq \tan{(\phi)}\leq \sqrt{3} $$ Hence, $0\leq \phi\leq \pi/3$. Now, remembering the Jacobian factor $r$ of the cylindrical change of variables, we have $$ \int_0^2\int_0^{\pi/3}\int_1^2 r\,dr\,d\phi\, dz=\frac{3}{2}\cdot \int_0^2\int_0^{\pi/3}d\phi\, dz= \frac{3}{2}\cdot\frac{\pi}{3}\cdot 2 = \pi. $$ Best regards, serdar
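A quick sanity check of the result $\pi$, as a Monte Carlo sketch in Python (the planar section of $B$ sits inside the box $[0,2]^2$, and the constraint $0\le z\le 2$ just multiplies the area by the height $2$):

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x, y = rng.uniform(0.0, 2.0, (2, n))  # sample the bounding square [0,2]^2, area 4

# Membership in the planar section: 0 <= y <= sqrt(3)*x and 1 <= x^2 + y^2 <= 4.
inside = (y <= np.sqrt(3) * x) & (x**2 + y**2 >= 1.0) & (x**2 + y**2 <= 4.0)

# Volume = height 2 times the section area (4 times the hit fraction).
print("Monte Carlo estimate:", 2.0 * 4.0 * inside.mean())  # should be close to pi
print("exact value:         ", np.pi)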
{ "pile_set_name": "StackExchange" }
Q: Manage external files when Shadow build is set in Qt (Win8) My problem is managing some text tables contained in a folder used in my Qt project; when I compile a release or debug version, I must copy the same folder to every place where the project is compiled. The disadvantage is that when I change data in one of the text tables I can forget to copy that table to all the other places. Is it possible to set an absolute path or a unique folder for all these files, in some way external to the Qt sources, headers and class files? A: You can use QMAKE_POST_LINK to add a custom copy command after your build process is finished. For example, I use this code in my PRO file to copy the release executable into a dedicated bin folder that is used for building a setup executable. win32 { release { COPY_CMD = "$$OUT_PWD\\release\\$${TARGET}.exe $$PWD\\bin\\$${TARGET}.exe" COPY_CMD = $${QMAKE_COPY} $$replace( COPY_CMD, "/", "\\" ) $$escape_expand( \\n\\t ) QMAKE_POST_LINK += $$COPY_CMD } } You can do the same, just in the other direction, and copy your needed files into your build path. For example, this would create a folder MyList in your build path and copy every *.txt from your source root to this folder. win32 { release { COPY_CMD += mkdir $$OUT_PWD/MyList & COPY_CMD += copy $$IN_PWD/MyList/*.txt $$OUT_PWD/MyList/*.txt & COPY_CMD = $$replace( COPY_CMD, "/", "\\" ) $$escape_expand( \\n\\t ) QMAKE_POST_LINK += $$COPY_CMD } }
{ "pile_set_name": "StackExchange" }
Q: How does Arduino Wiring Language work? I am new to Arduino and just read from the book < Intel Galileo and Intel Galileo Gen 2 API Features and Arduino Projects for Linux Programmers > that: In 2003, a student named Hernando Barragan created a hardware thesis describing an IDE and the integration with circuit boards powered by micro-controllers. With contributions from other researches the concept evolved allowing developers to write just a few lines of code in order to reproduce simple connections of hardware components. Could anyone explain how software could change hardware wiring as the bold part says? A: This is not talking about changing physical wires. It means the code can drive a micro-controller to communicate with the hardware. Each pin of a micro-controller can do different things and speak with different hardware but you do have to physically connect the hardware yourself. For example:- To communicate with different hardware, such as a gps, we plug the gps wires into pins of the micro-controller and then use code to monitor the pins. The Arduino will monitor the voltage on the pins to determine power on/off (0's and 1's) and allow you to know the result in your own code. It is similar to morse code but much faster. Eight zero's or 1's gives us one byte, one byte is one letter or number. Wait long enough and we have a whole message (in reality it takes a few milliseconds for quite a big message) Some hardware uses 0's and 1's as described above, some uses analog values to give readings. For example a temperature sensor, when powered, might produce a voltage between 0 and 5 volts. It would have a wire that plugs into one of the Analog pins on the Arduino. The Arduino code can read the voltage of the temperature sensor connected to an analog pin, perform a bunch of calculations and determine what the temperature is. Some hardware such as motors and other sensors use more complex messaging systems but all connect to pins of the Arduino micro-controller to be read or written to using the methods described in the specification of the hardware. Normally this involves some quite complex code but Arduino/Wiring is a simple set of instructions that in the background uses the complex code.
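To make the temperature-sensor paragraph concrete, here is the arithmetic such a sketch performs, written out in Python. The 10-bit ADC range (0–1023 over 0–5 V) is standard for classic Arduino boards; the 500 mV offset and 10 mV/°C scale are assumptions matching a TMP36-style sensor, since the answer names no specific part:

# Convert a raw 10-bit ADC reading into a temperature.
# Assumes a 5 V reference and a TMP36-style sensor (0.5 V at 0 degrees C,
# plus 10 mV per degree C) -- both assumptions, not taken from the answer above.
def adc_to_celsius(raw, vref=5.0, bits=10):
    volts = raw * vref / (2**bits - 1)  # 0..1023 maps onto 0..5 V
    return (volts - 0.5) * 100.0        # undo the offset, then scale to degrees

for raw in (103, 143, 205):
    print(raw, "->", round(adc_to_celsius(raw), 1), "degrees C")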
{ "pile_set_name": "StackExchange" }
Q: Gaussian-Like integral What is the integral of this? $$\int_0^\infty xe^{-(ax^2+bx)}\,\mathrm{d}x$$ $a$ and $b$ are positive integers. A: $$ ax^{2} + bx = a\left[\left(x+\frac{b}{2a}\right)^{2} - \frac{b^{2}}{4a^{2}}\right] $$ make the substitution $$ z = \sqrt{2a}\left(x +\frac{b}{2a}\right) $$ (so that $x=0$ corresponds to $z=\frac{b}{\sqrt{2a}}$) and you will end up with an integral like this $$ \frac{\mathrm{e}^{\frac{b^{2}}{4a}}}{2a}\int_{\frac{b}{\sqrt{2a}}}^{\infty} \left(z-\frac{b}{\sqrt{2a}}\right)\mathrm{e}^{-\frac{z^{2}}{2}}dz $$ or if we split it up $$ \frac{\mathrm{e}^{\frac{b^{2}}{4a}}}{2a}\left[\int^{\infty}_{\frac{b}{\sqrt{2a}}} z\mathrm{e}^{-\frac{z^{2}}{2}}dz -\frac{b}{\sqrt{2a}} \int_{\frac{b}{\sqrt{2a}}}^{\infty} \mathrm{e}^{-\frac{z^{2}}{2}}dz \right] = \int_{0}^{\infty}x\mathrm{e}^{-\left(ax^{2} + bx\right)}dx $$ at this point you can make use of $$ \int z\mathrm{e}^{-\frac{z^{2}}{2}}dz = \int -\frac{d}{dz}\mathrm{e}^{-\frac{z^{2}}{2}}dz = -\mathrm{e}^{-\frac{z^{2}}{2}} $$ for the first piece and of the complementary error function for the second. The rest you can probably do. Any errors, or need further help, comment :). Edit: As pointed out by @harrypeter, the limits must start at $\frac{b}{\sqrt{2a}}$ rather than $0$; evaluating both pieces with that lower limit gives $$ \frac{\mathrm{e}^{\frac{b^{2}}{4a}}}{2a}\left[\mathrm{e}^{-\frac{b^{2}}{4a}} - \frac{b}{\sqrt{2a}}\sqrt{\frac{\pi}{2}}\operatorname{erfc}\left(\frac{b}{2\sqrt{a}}\right)\right], $$ which simplifies to the same result as the other answer. A: $$\int_0^\infty xe^{-(ax^2+bx)}~dx$$ $$=\int_0^\infty xe^{-a\left(x^2+\frac{bx}{a}\right)}~dx$$ $$=\int_0^\infty xe^{-a\left(x^2+\frac{bx}{a}+\frac{b^2}{4a^2}-\frac{b^2}{4a^2}\right)}~dx$$ $$=e^\frac{b^2}{4a}\int_0^\infty xe^{-a\left(x+\frac{b}{2a}\right)^2}~dx$$ $$=e^\frac{b^2}{4a}\int_\frac{b}{2a}^\infty\left(x-\dfrac{b}{2a}\right)e^{-ax^2}~dx$$ $$=e^\frac{b^2}{4a}\int_\frac{b}{2a}^\infty xe^{-ax^2}~dx-\dfrac{be^\frac{b^2}{4a}}{2a}\int_\frac{b}{2a}^\infty e^{-ax^2}~dx$$ $$=e^\frac{b^2}{4a}\int_\frac{b}{2a}^\infty xe^{-ax^2}~dx-\dfrac{be^\frac{b^2}{4a}}{2a}\int_\frac{b}{2\sqrt a}^\infty e^{-x^2}~d\left(\dfrac{x}{\sqrt a}\right)$$ $$=e^\frac{b^2}{4a}\left[-\dfrac{e^{-ax^2}}{2a}\right]_\frac{b}{2a}^\infty-\dfrac{be^\frac{b^2}{4a}}{2a\sqrt a}\int_\frac{b}{2\sqrt a}^\infty e^{-x^2}~dx$$ $$=\dfrac{1}{2a}-\dfrac{b\sqrt\pi e^\frac{b^2}{4a}}{4a\sqrt a}\text{erfc}\left(\dfrac{b}{2\sqrt a}\right)$$
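A numerical cross-check of the final expression, sketched with SciPy (note the $e^{b^2/4a}\operatorname{erfc}(\cdot)$ product can overflow for large $b$, so the test values are kept modest):

import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def closed_form(a, b):
    # 1/(2a) - b*sqrt(pi)*e^{b^2/4a} / (4a*sqrt(a)) * erfc(b / (2*sqrt(a)))
    return 1/(2*a) - b*np.sqrt(np.pi)*np.exp(b**2/(4*a))/(4*a*np.sqrt(a)) * erfc(b/(2*np.sqrt(a)))

for a, b in [(1, 1), (2, 3), (5, 2)]:
    numeric, _ = quad(lambda x: x*np.exp(-(a*x**2 + b*x)), 0, np.inf)
    print((a, b), numeric, closed_form(a, b))  # the two columns should agree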
{ "pile_set_name": "StackExchange" }
Q: Performance and profiling on SELECT * FROM [table] ISNULL([column], '') = '' VS EXIST (SELECT * FROM [table] WHERE [condition]) I asked a similar question here. The answerer told me to ask about profiling and performance here on [dba.stackexchange]. I have two conditions in my queries that both do the same thing. I was wondering how I can measure the performance of both to choose the best one. I know I should do profiling, but I don't know how. I've read this and found it to be more confusing than helpful. And I do not wish to profile the whole query, only this condition. IF (SELECT * FROM [table] WHERE ISNULL([column], '')) = '' IF EXIST (SELECT [column] FROM [table]) I expect the second way to be more performant because it does not call a function, but I am far from being an expert in SQL. How can I profile them? Or which one is the best? A: Well, the IF (SELECT * FROM [table] WHERE ISNULL([column], '')) = '' test isn't really a working test, so it is possible that you made whatever code you are working with too generic when posting here. HOWEVER, in a very general sense, the IF EXISTS should nearly always be better because it is designed to stop processing its inner query upon the first row being returned to it. The first condition you posted does not have that built-in efficiency and would process all rows in the inner query. Of course, sometimes the Query Optimizer can re-write your code to be an IF EXISTS if it recognizes that your non-IF EXISTS query would logically be the same, but I am not sure if that would happen in this specific case. I believe I have seen that the Query Optimizer does rewrite IF ( (SELECT COUNT(*) FROM [table]) > 0 ) as an IF EXISTS. Regarding how to test the efficiency of each, I would use a base query against the same table so that I could use the same column(s) for filtering (to hopefully make use of the same indexes that the real queries would use) and repeat that query, once for each of the two conditions you are testing for. Then, wrap each in SET STATISTICS TIME, IO [ON | OFF]; as follows: SET STATISTICS IO, TIME ON; IF (SELECT * FROM dbo.Table WHERE ISNULL()... ) BEGIN PRINT 'First'; END; SET STATISTICS IO, TIME OFF; PRINT '-----------'; SET STATISTICS IO, TIME ON; IF (EXISTS(SELECT * FROM dbo.Table WHERE condition)) BEGIN PRINT 'Second'; END; SET STATISTICS IO, TIME OFF; Please read the MSDN page for the EXISTS operator for more info regarding how to use it in WHERE conditions.
{ "pile_set_name": "StackExchange" }
Q: Passing a Parameter Array by ref to C# DLL via Reflection All, I have a number of C# DLLs that I want to call from my application at runtime using System.Reflection. The core code I use is something like DLL = Assembly.LoadFrom(Path.GetFullPath(strDllName)); classType = DLL.GetType(String.Format("{0}.{0}", strNameSpace, strClassName)); if (classType != null) { classInstance = Activator.CreateInstance(classType); MethodInfo methodInfo = classType.GetMethod(strMethodName); if (methodInfo != null) { object result = null; result = methodInfo.Invoke(classInstance, parameters); return Convert.ToBoolean(result); } } I would like to know how I can pass in the array of parameters to the DLL as ref so that I can extract information from what happened inside the DLL. A clear portrayal of what I want (but of course will not compile) would be result = methodInfo.Invoke(classInstance, ref parameters); How can I achieve this? A: Changes to ref parameters are reflected in the array that you pass into MethodInfo.Invoke. You just use: object[] parameters = ...; result = methodInfo.Invoke(classInstance, parameters); // Now examine parameters... Note that if the parameter in question is a parameter array (as per your title), you need to wrap that in another level of arrayness: object[] parameters = { new object[] { "first", "second" } }; As far as the CLR is concerned, it's just a single parameter. If this doesn't help, please show a short but complete example - you don't need to use a separate DLL to demonstrate, just a console app with a Main method and a method being called by reflection should be fine.
{ "pile_set_name": "StackExchange" }
Q: Haskell hello world won't compile What is wrong with this code? Trying to do a basic haskell hello world. module Main ( hello ) where hello :: [Char] -> [Char] hello p = "Hello " ++ p ++ "!" main = let msg = hello "World" putStrLn msg A: You're missing a do: main = do let msg = hello "World" putStrLn msg You'll also want to export your main: module Main ( main ) where Since this is the main module, there is no need to export hello. A: You're missing a in: main = let msg = hello "World" in putStrLn msg
{ "pile_set_name": "StackExchange" }
Q: broken link avatar image So I must get this off my chest. ;-) Why do so many people have a broken image symbol as their avatar image? Do they provide a broken image link? Or is it Ask Ubuntu? Can this be fixed? Doesn't this have to do with the default avatar? I see this for new members a lot. A: I don't know what's giving you the broken link image. I use NoScript and Disconnect in Firefox, and silb's avatar won't display here either. I selected Inspect Element from the context menu and found that his avatar is hosted on Facebook. Enabling Facebook in Disconnect made the avatar show up.
{ "pile_set_name": "StackExchange" }
Q: How does Stack Exchange prevent all its contents from being stolen through its API? There's something I have hard time to understand with this world of APIs: when you have your WebApp/MobileApp ecosystem entirely based on an API giving access to your database, how do you prevent all of it to be parsed and stolen as one pleases? How does Stack Exchange (or any other API based WebApp) protects its core content (blog posts/questions/answers) from being stolen and duplicated elsewhere? A: They don't. At the very bottom of every page, you will notice this: user contributions licensed under cc by-sa 3.0 with attribution required This pretty much explicitly says that you can "steal" posts—as long as you both link to the post and state that your copy also uses the same license. I'm sure there are many sites out there that copy SE's posts and properly satisfy both of those conditions, but, unfortunately, there are also many sites out there that copy SE's posts and pretend that they're their own. (If you do happen to see a site like this, please refer to this post on the subject.) So, they do not in any way try to prevent people from copying content. The only thing they are concerned about is unlawful duplication of content. In terms of trying to prevent that, I'm not sure what they do—other than ask people to report it when they see it. A: They don't. All user-generated content is licensed under CC BY-SA 3.0, so anybody else is free to copy and use it elsewhere as long as they give attribution to the original author. It's even easier than using the API, in fact: Stack Exchange provides a data dump too, so you can download every question, answer, comment, etc. ever posted on any site in one go.
{ "pile_set_name": "StackExchange" }
Q: Split View Controller Object Not Showing up I started developing an app for the iPhone, but then decided I wanted it to be universal for both iPhone and iPad. What I did was just go to the project target -> build settings -> targeted device family -> iPhone/iPad and also in the summary -> devices -> universal. I'm pretty sure this doesn't finish the conversion because when I go to one of the controller nib files, I can't create a Split View Controller. What is the proper way to convert the iPhone application to universal? A: You have to change TARGETED_DEVICE_FAMILY in the project settings to run the app on both devices. http://i-and-world.appspot.com/2011/01/10/converting-iphone-apps-to-universal.html http://iphonedevelopment.blogspot.com/2010/04/converting-iphone-apps-to-universal.html
{ "pile_set_name": "StackExchange" }
Q: Numbered listbox I have a sorted listbox and need to display each item's row number. In this demo I have a Person class with a Name string property. The listbox displays a a list of Persons sorted by Name. How can I add to the datatemplate of the listbox the row number??? XAML: <Window x:Class="NumberedListBox.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Height="300" Width="300"> <ListBox ItemsSource="{Binding Path=PersonsListCollectionView}" HorizontalContentAlignment="Stretch"> <ListBox.ItemTemplate> <DataTemplate> <TextBlock Text="{Binding Path=Name}" /> </DataTemplate> </ListBox.ItemTemplate> </ListBox> </Window> Code behind: using System; using System.Collections.ObjectModel; using System.Windows.Data; using System.Windows; using System.ComponentModel; namespace NumberedListBox { public partial class Window1 : Window { public Window1() { InitializeComponent(); Persons = new ObservableCollection<Person>(); Persons.Add(new Person() { Name = "Sally"}); Persons.Add(new Person() { Name = "Bob" }); Persons.Add(new Person() { Name = "Joe" }); Persons.Add(new Person() { Name = "Mary" }); PersonsListCollectionView = new ListCollectionView(Persons); PersonsListCollectionView.SortDescriptions.Add(new SortDescription("Name", ListSortDirection.Ascending)); DataContext = this; } public ObservableCollection<Person> Persons { get; private set; } public ListCollectionView PersonsListCollectionView { get; private set; } } public class Person { public string Name { get; set; } } } A: Finally! If found a way much more elegant and probably with better performance either. (see also Accessing an ItemsControl item as it is added) We "misuse" the property ItemsControl.AlternateIndex for this. Originally it is intended to handle every other row within a ListBox differently. (see http://msdn.microsoft.com/en-us/library/system.windows.controls.itemscontrol.alternationcount.aspx) 1. Set AlternatingCount to the amount of items contained in the ListBox <ListBox ItemsSource="{Binding Path=MyListItems}" AlternationCount="{Binding Path=MyListItems.Count}" ItemTemplate="{StaticResource MyItemTemplate}" ... /> 2. Bind to AlternatingIndex your DataTemplate <DataTemplate x:Key="MyItemTemplate" ... > <StackPanel> <Label Content="{Binding RelativeSource={RelativeSource TemplatedParent}, Path=TemplatedParent.(ItemsControl.AlternationIndex)}" /> ... </StackPanel> </DataTemplate> So this works without a converter, an extra CollectionViewSource and most importantly without brute-force-searching the source collection. A: This should get you started: http://weblogs.asp.net/hpreishuber/archive/2008/11/18/rownumber-in-silverlight-datagrid-or-listbox.aspx It says it's for Silverlight, but I don't see why it wouldn't work for WPF. Basically, you bind a TextBlock to your data and use a custom value converter to output the current item's number. A: The idea in David Brown's link was to use a value converter which worked. Below is a full working sample. The list box has row numbers and can be sorted on both name and age. 
XAML: <Window x:Class="NumberedListBox.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:local="clr-namespace:NumberedListBox" Height="300" Width="300"> <Window.Resources> <local:RowNumberConverter x:Key="RowNumberConverter" /> <CollectionViewSource x:Key="sortedPersonList" Source="{Binding Path=Persons}" /> </Window.Resources> <Grid> <Grid.RowDefinitions> <RowDefinition /> <RowDefinition Height="Auto"/> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition /> <ColumnDefinition /> </Grid.ColumnDefinitions> <ListBox Grid.Row="0" Grid.Column="0" Grid.ColumnSpan="2" ItemsSource="{Binding Source={StaticResource sortedPersonList}}" HorizontalContentAlignment="Stretch"> <ListBox.ItemTemplate> <DataTemplate> <StackPanel Orientation="Horizontal"> <TextBlock Text="{Binding Converter={StaticResource RowNumberConverter}, ConverterParameter={StaticResource sortedPersonList}}" Margin="5" /> <TextBlock Text="{Binding Path=Name}" Margin="5" /> <TextBlock Text="{Binding Path=Age}" Margin="5" /> </StackPanel> </DataTemplate> </ListBox.ItemTemplate> </ListBox> <Button Grid.Row="1" Grid.Column="0" Content="Name" Tag="Name" Click="SortButton_Click" /> <Button Grid.Row="1" Grid.Column="1" Content="Age" Tag="Age" Click="SortButton_Click" /> </Grid> </Window> Code behind: using System; using System.Collections.ObjectModel; using System.Windows.Data; using System.Windows; using System.ComponentModel; using System.Windows.Controls; namespace NumberedListBox { public partial class Window1 : Window { public Window1() { InitializeComponent(); Persons = new ObservableCollection<Person>(); Persons.Add(new Person() { Name = "Sally", Age = 34 }); Persons.Add(new Person() { Name = "Bob", Age = 18 }); Persons.Add(new Person() { Name = "Joe", Age = 72 }); Persons.Add(new Person() { Name = "Mary", Age = 12 }); CollectionViewSource view = FindResource("sortedPersonList") as CollectionViewSource; view.SortDescriptions.Add(new SortDescription("Name", ListSortDirection.Ascending)); DataContext = this; } public ObservableCollection<Person> Persons { get; private set; } private void SortButton_Click(object sender, RoutedEventArgs e) { Button button = sender as Button; string sortProperty = button.Tag as string; CollectionViewSource view = FindResource("sortedPersonList") as CollectionViewSource; view.SortDescriptions.Clear(); view.SortDescriptions.Add(new SortDescription(sortProperty, ListSortDirection.Ascending)); view.View.Refresh(); } } public class Person { public string Name { get; set; } public int Age { get; set; } } } Value converter: using System; using System.Windows.Data; namespace NumberedListBox { public class RowNumberConverter : IValueConverter { public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { CollectionViewSource collectionViewSource = parameter as CollectionViewSource; int counter = 1; foreach (object item in collectionViewSource.View) { if (item == value) { return counter.ToString(); } counter++; } return string.Empty; } public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { throw new NotImplementedException(); } } }
{ "pile_set_name": "StackExchange" }
Q: Why do stepwise feature selection methods not perform well when there is a large number of explanatory variables I have the feeling that when I have a large number of predictors, it is better to use a feature-selecting regression model such as the lasso to fit the model, and better not to use a stepwise feature selection method to pick out important predictors. Stepwise feature selection methods are only useful when I have just a few predictors and sufficient samples. I am not sure whether my understanding is correct. If it is, why do stepwise feature selection methods not perform well when there is a large number of explanatory variables? A: Your probability of spurious correlations grows as the number of dimensions grows, since you have many more "sub manifolds" over which the data can covary. Here's a more in-depth treatment: http://www.jmlr.org/papers/volume17/16-068/16-068.pdf Therefore, simple stepwise regression will "overfit" to apparent correlations, whilst regularized approaches like the lasso will tend to correct for this if you use cross-validation to set your regularization parameter. However, even the lasso and friends start to have a hard time in very high dimensions (in that they would not be guaranteed to converge to the true model as sample size increases). However, if you keep in mind that all models are wrong, but some are useful, then as long as your "inconsistent" lasso model is working for predictions, I wouldn't really worry about the theoretical inconsistency for infinite sample sizes and a perfect model.
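A small simulation makes the first paragraph tangible. This is a Python sketch (sizes and seed are arbitrary) showing that with pure-noise predictors and far more columns than rows, the single best predictor looks convincingly correlated, while a cross-validated lasso keeps essentially nothing:

import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 50, 500                   # many more predictors than samples
X = rng.standard_normal((n, p))  # pure-noise predictors
y = rng.standard_normal(n)       # response unrelated to every column of X

# The greedy first move of forward stepwise: pick the best-correlated column.
Xc, yc = X - X.mean(axis=0), y - y.mean()
corr = (Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
print("largest spurious |correlation|:", np.abs(corr).max())  # often around 0.5

# The lasso with a cross-validated penalty shrinks (almost) everything to zero.
lasso = LassoCV(cv=5).fit(X, y)
print("nonzero lasso coefficients:", int(np.count_nonzero(lasso.coef_)))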
{ "pile_set_name": "StackExchange" }
Q: Openssl adding a telephone number or cell number to the distinguished_name In the openssl.cnf file, I see fields like countryName, stateOrProvinceName etc which are typically present in the distinguished_name. Where can I get a list of fields which can be added to the distinguished_name? I want to add a mobile(cellular) number to the distinguished_name. How do I go about doing this? I tried adding stuff like telephoneName, telephone_default, telephone_min, telephone_max to distinguished_name in openssl.cnf but openssl seems to ignore it. A: telephoneNumber is what you're looking for. Add the following to openssl.cnf; I used the Ubuntu 14 system version to start with, section names may vary by your distribution: Under policy_match and policy_anything sections (probably policy_* in whatever openssl.cnf you're using as a template): telephoneNumber = optional Under req_distinguished_name section: telephoneNumber = Telephone Number telephoneNumber_min = 11 telephoneNumber_max = 40 Then generate a key and CSR as normal. The interactive prompts will now include a telephone number: What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [AU]:US State or Province Name (full name) [Some-State]:Massachusetts Locality Name (eg, city) []:Boston Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (e.g. server FQDN or YOUR name) []:www.example.com Email Address []:[email protected] Telephone Number []:617-555-1234 And you can verify that the telephone number makes it into the cert: $ openssl x509 -in certificate.crt -text -noout | grep Subject: Subject: C=US, ST=Massachusetts, L=Boston, O=Internet Widgits Pty Ltd, CN=www.example.com/[email protected]/telephoneNumber=617-555-1234 $ Some references as to how I figured this out: RFC 5280 says that Standard sets of attributes have been defined in the X.500 series of specifications [X.520]. A bit of Googling led me to Naming and Structuring Guidelines for X.500 (RFC 1617) which has section "4.4.7 Telecom Attributes". There's not much info there but it does state telephoneNumber. So I Googled for that along with openssl.cnf and found an example openssl.cnf file incorporating telephoneNumber. From there it was simple to test the necessary parameters on my own system. Finally, knowing that telephoneNumber is a valid thing, it was possible to Google "site:openssl.org telephoneNumber" and the source code for obj_mac.h has that along with facsimileTelephoneNumber, homeTelephoneNumber, mobileTelephoneNumber, and pagerTelephoneNumber. (Personally, I think if you're just putting one number in, you should be general ("telephoneNumber") rather than pedantic ("mobileTelephoneNumber"), so that's what I've used above).
{ "pile_set_name": "StackExchange" }
Q: How to convert XML to HTML in C#? Which is the best way to convert XML to HTML? Currently I am using XPathNavigator and XPathNodeIterator to query and traverse the XML. This works fine, but I need to convert this XML to HTML and display it in a browser with some tables; which is the best way to achieve this? The XML is not always constant; I mean the XML is generated dynamically. The XML is below: <win32_networkadapter> <entity> <index>1</index> <speed /> <manufacturer /> <pnpdeviceid /> <name>RAS Async Adapter</name> <adaptertype /> </entity> <entity> <index>2</index> <speed /> <manufacturer>Microsoft</manufacturer> <pnpdeviceid>ROOT\MS_L2TPMINIPORT\0000</pnpdeviceid> <name>WAN Miniport (L2TP)</name> <adaptertype /> </entity> </win32_networkadapter> This XML contains the details of the network adapters in a system; WMI is used to get the details and the XML is generated dynamically. So one system may contain 2 network adapters as in the above XML, and another system may contain 3 or 4 network adapters, in which case the XML grows. In this case, how can I generate HTML from this XML dynamically and display the network details in tables in a browser? Thank you A: An XML file can be transformed to HTML using XSLT. A: Instead of converting the XML to HTML you can style the XML document itself; you can read more about that here: documentation, w3 for xml stylesheets, Styling XML Documents with CSS and adding style to xml
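To sketch the XSLT route against the adapter XML above (element names taken from the question; in C# the stylesheet would be applied with System.Xml.Xsl.XslCompiledTransform via its Load and Transform methods):

<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/win32_networkadapter">
    <html>
      <body>
        <table border="1">
          <tr><th>Index</th><th>Name</th><th>Manufacturer</th></tr>
          <!-- one row per adapter; extra entity elements simply add rows -->
          <xsl:for-each select="entity">
            <tr>
              <td><xsl:value-of select="index"/></td>
              <td><xsl:value-of select="name"/></td>
              <td><xsl:value-of select="manufacturer"/></td>
            </tr>
          </xsl:for-each>
        </table>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>

Because the for-each walks whatever entity elements are present, the same stylesheet handles a system with two adapters or four without any changes.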
{ "pile_set_name": "StackExchange" }
Q: Testing my package from Sonatype Staging I have a library that is in maven central. When I push a new version it goes to the sonatype staging first before I have to promote it on to production. I want to create a sample app that will pull the lib from staging so I can run some tests and what not before I promote it on to production. What URL would I use in my build.gradle for the staging repo? Just to clarify I have tried using: https://oss.sonatype.org/content/repositories/staging/ But my project is not there yet, only versions I have promoted on to production are in this repo. A: The url https://oss.sonatype.org/content/repositories/staging/ is actually correct. Before it will sync into staging you have to "close" it. That will sync it to staging where you can do the tests. Once you are satisfied simply "release" if you are happy, or "drop" if you need to redo it and start over with a new upload.
{ "pile_set_name": "StackExchange" }
Q: alert message not displayed in jquery mobile application I am new to jquery and jquery mobile. I am currently trying to create a simple mobile application that, once a page is loaded, displays an alert message. I have added in the head section of the html file, as suggested, the following code: <link rel="stylesheet" href="jquery.mobile-1.3.0.min.css" /> <script type="text/javascript" src="jquery-1.9.1.min.js"></script> <script type="text/javascript"> $('#chart').live('pageinit', function() { alert('jQuery Mobile: pageinit for Config page'); }); </script> <script type="text/javascript" src="jquery.mobile-1.3.0.min.js"></script> In the body section I have added, as expected, the code for the chart page: <div data-role="page" data-theme="b" id="chart"> <div data-role="header" data-position="fixed"> <a href="#main-page" data-icon="arrow-r" data-iconpos="left" data-transition="slide" data-direction="reverse">Back</a> <h1>Year Chart</h1> </div> <div data-role="content"> <div id="container"></div> </div> <div data-role="footer" data-position="fixed"> <div data-role="navbar"> </div> </div> </div> However, when the chart page is loaded in the browser of my computer (not the phone), no alert message is displayed. Does anyone know where the problem may lie? Thanks in advance! A: Here is the full HTML and code that I can success alert in my all browser(IE, chrome, firefox). I guess some of your javascript are not exist or typo. Try to use CDN. <html> <head> <link rel="stylesheet" href="http://code.jquery.com/mobile/1.3.0/jquery.mobile-1.3.0.min.css" /> <script src="http://code.jquery.com/jquery-1.8.2.min.js"></script> <script src="http://code.jquery.com/mobile/1.3.0/jquery.mobile-1.3.0.min.js"></script> <script type="text/javascript"> $( '#chart' ).live( 'pageinit',function(event){ alert( 'This page was just enhanced by jQuery Mobile!' ); }); </script> </head> <body> <div data-role="page" data-theme="b" id="chart"> <div data-role="header" data-position="fixed"> <a href="#main-page" data-icon="arrow-r" data-iconpos="left" data-transition="slide" data-direction="reverse">Back</a> <h1>Year Chart</h1> </div> <div data-role="content"> <div id="container"></div> </div> <div data-role="footer" data-position="fixed"> <div data-role="navbar"> </div> </div> </div> </body> </html> UPDATED: If you are using jquery 1.9+, we should use .on instead of .live. It is because .live has been removed from 1.9. http://api.jquery.com/live/ So, the code should change like this if you are using 1.9 $( document ).on( 'pageinit','#chart', function(event){ alert( 'This page was just enhanced by jQuery Mobile!' ); });
{ "pile_set_name": "StackExchange" }
Q: Trying to display two concatenated variables in email subject line using PHP I am trying to display two concatenated variables in a subject line through an mailer.php page, but the subject line in the email always comes in blank. Below is the pertinent code. /* Subject and To */ $to = '[email protected]'; $subject = $company . ' ' . $name; /* Gathering Data Variables */ $name = $_POST['name']; $email = $_POST['email']; $company = $_POST['company']; $body = <<<EOD <br><hr><br> Email: $email <br> Name: $name <br> Company: $company <br> EOD; $headers = "From: $email\r\n"; $headers .= "Content-type: text/html\r\n"; $success = mail($to, $subject, $body, $headers); A: $to = '[email protected]'; $subject = $company . ' ' . $name; /* Gathering Data Variables */ $name = $_POST['name']; $email = $_POST['email']; $company = $_POST['company']; You're not setting $company and $name until after you use them in $subject Try switching the lines round: /* Gathering Data Variables */ $name = $_POST['name']; $email = $_POST['email']; $company = $_POST['company']; $to = '[email protected]'; $subject = $company . ' ' . $name;
{ "pile_set_name": "StackExchange" }
Q: Unable to get CiviCRM running on Drupal 8/Ubuntu server 18.04 LTS/Linode I have tried following the Roundearth instructions twice to get CiviCRM up and running on a clean install of Drupal 8, on Ubuntu server 18.04. First time through, the installer failed with a permissions error but CiviCRM was reported as installed—if there's a way of reverting from this situation, I couldn't figure it out, and I ended up having to start over. Second time through, it all looked good, but going to any CiviCRM page came up blank. Checking PHP errors, it was maxing out memory. So I increased the limit to 256 MB, 512 MB, and unlimited—each time, it maxed out on memory (3.2 GB in the case of unlimited, whereas the server has 4), and I gave up and went home. Only to discover when I got there that the pages now loaded, and continued to load with sane memory restrictions. (???) However, there's no CiviCRM menu (it appears to be there, but empty), and when I go to hand-pasted admin URLs, things that are supposed to be there according to the docs are blank. Both of these were with the Roundearth instructions—I found a different set here that I'm trying out, but I don't feel good about it. I have to deploy a staged site by Friday, and if I can't get CiviCRM to work, I'll plow on without it—but I gather that it really wants to be installed before I start piling on other modules. Any help greatly appreciated. EDIT: It occurs to me that I have not tried installing Drupal from apt-get (if it's there), because the Roundearth composer script had it included. Could try that next before moving on to the linked instructions. Or not, as apparently it's not in apt. A: This ended up being a combination of adding the settings listed at https://hq.megaphonetech.com/projects/commons/wiki/CiviCRM_for_Drupal_8_installation_notes to civicrm.settings.php and fixing the file system permissions.
{ "pile_set_name": "StackExchange" }
Q: OpenCV, area between two curves I work with the OpenCV library in Python. The question is how to select, as a separate ROI, the area between two curves. The curves are defined by two quadratic polynomials. I want to find the count of black pixels in the area restricted between curve 1 and curve 2. A: You can create a mask by drawing ellipses, but you need the following data from your equations: center – the center of the ellipse (here I used the centre of the image); axes – half of the size of the ellipse's main axes (here I used image size/2 and image size/4 respectively for the two curves); angle – the ellipse rotation angle in degrees (here I used 0); startAngle – the starting angle of the elliptic arc in degrees (here I used 0); endAngle – the ending angle of the elliptic arc in degrees (here I used -180). If you have the above data for both curves, you can simply draw ellipses with thickness=CV_FILLED: first draw the largest ellipse with color=255, then draw the second ellipse with color=0. See an example: Mat src(480,640,CV_8UC3,Scalar(0,0,0)); ellipse(src,Point(src.cols/2,src.rows/2), Size (src.cols/2,src.rows/2), 0, 0,-180,Scalar(0,0,255), -1,8, 0); ellipse(src,Point(src.cols/2,src.rows/2), Size (src.cols/4,src.rows/4), 0, 0,-180,Scalar(0,0,0), -1,8, 0); Draw it on a single-channel image if you want to use it as a mask. Edit: To find the area, draw the above onto a single-channel image with color=255, then use countNonZero to get the white pixel count.
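Since the question mentions Python, here is the same mask-building idea as a Python sketch (the centre, axes, and angles mirror the C++ example above and would need adjusting to your actual polynomial curves):

import numpy as np
import cv2

h, w = 480, 640
mask = np.zeros((h, w), dtype=np.uint8)
center = (w // 2, h // 2)

# Fill the outer curve white, then the inner curve black,
# leaving only the band between the two curves set to 255.
cv2.ellipse(mask, center, (w // 2, h // 2), 0, 0, -180, 255, -1)
cv2.ellipse(mask, center, (w // 4, h // 4), 0, 0, -180, 0, -1)

print("band area in pixels:", cv2.countNonZero(mask))
# For the black pixels of a grayscale image restricted to the band:
# count = int(np.count_nonzero((gray == 0) & (mask == 255)))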
{ "pile_set_name": "StackExchange" }
Q: What's the meaning of 角刈り in this context? I came across this sentence: 冗談はその角刈りだけにしてくださいよ (Source: https://www.youtube.com/watch?v=i1e-WwxCZ3w) From the context, I reckon it means "Please don't joke /Give me a break", but I'm stumped by 角刈り. My dictionary says 角刈り = a crew cut (haircut), which makes no sense here. Is this some sort of slang/idiom? A: Actually, you're pretty much right. The literal translation would be "Let the only joke be your crew cut (or hairstyle) and I would localize it to something like "The only joke here is your hair." Basically, the speaker is mocking his target's hairstyle. It should be noted that usually this line isn't used with 角刈り, but with 顔, as in 「冗談は顔だけにしてくれ」(thus taking a jab at someone's looks).
{ "pile_set_name": "StackExchange" }
Q: Is there an efficiency measure for airports? For large, busy airports (such as JFK), how do they measure how well they are being utilized? I assume airport management wants to serve as many flights as possible, for maximum revenue. Any time their gates are empty, or runways aren't being utilized, the airport has more capacity to sell. Is there some sort of industry standard measurement for "airport utilization efficiency"? A: The Air Transport Research Society (ATRS) has a benchmarking system for determining the efficiency of airports. Basically, it takes the airport size and related infrastructure (runways, number of gates, etc.) as inputs and measures the airport's efficiency in terms of aircraft and passenger movement, revenue, etc. There is a yearly Global Airport Benchmarking Report.
{ "pile_set_name": "StackExchange" }
Q: Question on expansion into Neumann eigenfunctions Let $\Omega$ be an open bounded domain with a boundary $\partial\Omega$. Consider the following Neumann eigenvalue problem for Laplacian: find $(\phi_n,\lambda_n)\in H^1(\Omega)\times \mathbb{R}$ \begin{align*} -\Delta \phi_n& = \lambda_n \phi_n\quad \mbox{in }\Omega,\\ \frac{\partial \phi_n}{\partial \nu} & = 0 \quad \mbox{on }\partial\Omega. \end{align*} Now by spectral theory (see the notes at here https://faculty.math.illinois.edu/~laugesen/595Lectures.pdf), it is known that the sequence of eigenfunctions $\phi_n$ (ordered nondecreasingly by the eigenvalues, with multiplicity counted) can be taken to be a complete orthonormal basis in $L^2(\Omega)$, and also forms a complete orthogonal basis in $H^1(\Omega)$. Thus any function $u\in H^1(\Omega)$ can be expanded into \begin{equation*} u = \sum_{n=1}^\infty (u,\phi_n)_{L^2(\Omega)}\phi_n\quad \mbox{in } L^2(\Omega), \end{equation*} and this expansion holds also in $H^1(\Omega)$, since $u\in H^1(\Omega)$ by assumption. I am puzzled over the fact that all the eigenfunctions $\phi_n$ have zero Neumann boundary condition, so any $n$-term truncation $u_n$ $$ u_n = \sum_{i=1}^n(u,\phi_n)_{L^2(\Omega)}\phi_n $$ has a zero Neumann boundary condition. However, a function $u$ in $H^1(\Omega)$ may not have a zero Neumann boundary condition. How shall one understand the convergence in $H^1(\Omega)$ ? A: The boundary has measure=0, and so there is no contradiction, because the convergence takes place in $L^2$ (or in $H^1$). The convergence is not pointwise. It is probably best to convince yourself of this for an interval $\Omega=(0,L)$ in $\mathbb R$.
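Following that suggestion, here is the interval case written out. For $\Omega=(0,L)$ the Neumann eigenpairs are $$\phi_0=\frac{1}{\sqrt L},\qquad \phi_n(x)=\sqrt{\frac{2}{L}}\cos\left(\frac{n\pi x}{L}\right),\qquad \lambda_n=\left(\frac{n\pi}{L}\right)^2,$$ and every $\phi_n'$ vanishes at $0$ and $L$. Take $u(x)=x\in H^1(0,L)$, for which $u'(0)=1\neq 0$. Its cosine series converges to $u$ in $H^1$, since the termwise differentiated series is precisely the $L^2$-convergent sine series of $u'\equiv 1$, yet every partial sum has zero derivative at the endpoints. There is no contradiction, because the functional $v\mapsto v'(0)$ is not continuous on $H^1(0,L)$: the Neumann property of the $\phi_n$ simply does not pass to the $H^1$ limit.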
{ "pile_set_name": "StackExchange" }
Q: Missing user password with parse REST Api when linking facebook user I'm trying to implement Facebook login under a Symfony2 project using parse.com REST Api. Here is the code I'm using for making the CURL call: $headers = array( "Content-Type: application/json", "Content-Length: " . strlen($facebookJson), "X-Parse-Application-Id: " . $applicationId, "X-Parse-REST-API-Key: " . $parseRestAPIKey ); $handle = curl_init(); curl_setopt($handle, CURLOPT_URL, $url); curl_setopt($handle, CURLOPT_HTTPHEADER, $headers); curl_setopt($handle, CURLOPT_SSL_VERIFYPEER, false); curl_setopt($handle, CURLOPT_RETURNTRANSFER, true); curl_setopt($handle, CURLOPT_POSTFIELDS, $facebookJson); $data = curl_exec($handle); curl_close($handle); The $facebookJson variable represents the Facebook authData: { "facebook": { "id": "user's Facebook id number as a string", "access_token": "an authorized Facebook access token for the user", "expiration_date": "token expiration date of the format: yyyy-MM-dd'T'HH:mm:ss.SSS'Z'" } } In this example, the user is not a registered user in parse, so it should create a new user (as stated in the documentation), but instead of creating a new user, it gives the following error message: "{"code":201,"error":"missing user password"} Since I'm using Facebook to register the user, I don't think it should ask for a password, the documentation doesn't mention this. Any suggestions how to fix this? Should I try a different approach in order to implement Facebook login with parse? (may be it's easier with parse javascript sdk for example). A: You've missed authData field. Here is the example of JSON from Parse documentation: { "authData": { "twitter": { "id": "12345678", "screen_name": "ParseIt", "consumer_key": "SaMpLeId3X7eLjjLgWEw", "consumer_secret": "SaMpLew55QbMR0vTdtOACfPXa5UdO2THX1JrxZ9s3c", "auth_token": "12345678-SaMpLeTuo3m2avZxh5cjJmIrAfx4ZYyamdofM7IjU", "auth_token_secret": "SaMpLeEb13SpRzQ4DAIzutEkCE2LBIm2ZQDsP3WUU" } } }
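Concretely, wrapping the payload from the question inside an authData field gives a request body of this shape (placeholder values kept from above):

{
  "authData": {
    "facebook": {
      "id": "user's Facebook id number as a string",
      "access_token": "an authorized Facebook access token for the user",
      "expiration_date": "token expiration date of the format: yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"
    }
  }
}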
{ "pile_set_name": "StackExchange" }
Q: Replacing 5 V pin with battery not working I have a small breadboard that has an L293D motor controller and 2 DC motors. The board is hooked up to a Nordic DK and everything on the PWM side is fine. The breadboard is powered by the development board's 5 V pin, but I am ready to move to the next step of a PCB so I will need a battery. I have two 2032 coin cells that were taped together and have jumpers for positive and negative, but whenever I try to use it as the power source on the breadboard it fails to work. I checked it with a multimeter and it's throwing out at least 5.6 V. Is there something I am missing? Are lithium batteries not applicable for this? Can I still make a PCB with what I have now? A: Your coin cells have far too little peak current capacity to run anything but the tiniest sort of motor. As a result the voltage is probably much lower under load. Your L293D being a bipolar bridge will also have very high loss; probably in excess of a 1-volt drop by the time you count both top and bottom switches. Further, your development board may not be designed to handle the (lightly loaded) voltage of two coin cells in series, so you may have already damaged it. If you want an "easy" way to replace a 5 V supply with a battery, you might consider using a USB power bank, though they can have various sorts of turn-on behavior and some may turn themselves off below a minimum current draw. Doing it yourself is likely to require either a number of AA cells to stay well above the target voltage even at end of life, followed by a linear regulator, or, better, a switching regulator, potentially a boost converter running from a lower battery voltage (which incidentally is what a USB power bank is: some buck-regulate from 2 lithium cells, others boost from 1, both typically using cells good for well over an amp).
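A back-of-the-envelope sketch of that sag, in Python; the per-cell internal resistance is an assumed ballpark figure (CR2032 datasheets vary, roughly 10–40 ohms, rising as the cell drains):

emf = 2 * 3.0      # two CR2032 cells in series, nominal no-load volts
r_int = 2 * 15.0   # assumed ~15 ohms per cell, also in series

for i_load in (0.01, 0.05, 0.10):  # amps demanded by the motor
    v_terminals = emf - i_load * r_int
    print(f"{i_load * 1000:.0f} mA -> {v_terminals:.1f} V at the pack terminals")

# At 100 mA the pack is already down to ~3 V, before the L293D
# bridge drops another volt or more on its own.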
{ "pile_set_name": "StackExchange" }
Q: Is an SN1 mechanism feasible with allylic or benzylic halides as substrates? My teacher stated that allylic carbocations are comparable in stability to tertiary carbocations. $\def\SN#1{\mathrm{S_N}#1}$ With this in mind I am confused why question 13 of this MIT practice test states that these two molecules are only expected to undergo $\SN 2$ reactions: An $\SN 1$ reaction on the first molecule with the phenyl substituent would yield a primary carbocation, yes, but this primary carbocation is also benzylic, and can be stabilized through resonance. An $\SN 1$ on the second molecule similarly yields a carbocation, but this is an allylic carbocation, and can easily be stabilized through resonance and delocalization of pi-electrons. So, would $\SN 1$ be valid pathways of reactions for both of these molecules, or is $\SN 2$ simply the only pathway for these molecules to react? A: I planned to add this as a comment but I don't have enough reputation ... A definite answer can only come by referring to the literature and/or from the lab. In the context of an undergraduate exam it is reasonable to accept both possible answers ($\mathrm{S_N1}$ and $\mathrm{S_N2}$ ) for allylic and benzylic halides. $\ce{EtOH}$ solvolysis of benzyl bromide is a known reaction and the hypothesis that it goes via $\mathrm{S_N1}$ is the most reasonable one. Is it possible that the solvolysis acually goes via $\mathrm{S_N2}$ instead? The answer lies on performing the appropriate mechanistic studies. However, in this case, having checked the actual exam question, it specifies acetone as the solvent. This polar aprotic solvent favours $\mathrm{S_N2}$ conditions, which is aprotic (see this question also). Incidentally, aside from the use of $\ce{KI}$ over $\ce{NaI}$, this halide exchange is basically the Finkelstein reaction, which occurs via $\mathrm{S_N2}$. (Credit to Greg E. who wrote this in a now deleted comment.)
{ "pile_set_name": "StackExchange" }
Q: Issue identifying pattern with vertica match clause I'm having some difficulty understanding how to leverage Vertica's match clause to identify sessions in which a user searched for something on our site (event_category ='Search') and then saw a product carousel item (product_list ='banner' AND event_action ='impression'). Varying events are captured before, after, and during the pattern I'd like to identify, as the number of products that appear on a page and a user's engagement with our site vary can from session to session and user to user. Raw Data Example | hit_number | product_list | Event_Category | Event_Action | Event_Label | |------------|----------------------|----------------|--------------|---------------| | 105 | (null) | Search | Submit | chocolate | | 106 | (null) | eec | impression | search-result | | 107 | search-result | eec | impression | sendData | | 107 | search-result | eec | impression | sendData | | 107 | search-result | eec | impression | sendData | | 107 | search-result | eec | impression | sendData | | 108 | (null) | (null) | (null) | (null) | | 109 | (null) | eec | impression | banner | | 110 | banner-105-chocolate | eec | impression | sendData | | 110 | banner-105-chocolate | eec | impression | sendData | | 110 | banner-105-chocolate | eec | impression | sendData | For the pattern to be valid, there must be at least 1 search event and 1 banner impression, I've set the pattern to (Search+ Banner+) to reflect this, but I'm not returning any results when I run execute the SQL query shown below. SELECT page_title ,event_label ,event_name() ,match_id() ,pattern_id() FROM (SELECT unique_visit_id ,hit_number ,event_category ,event_label ,event_action ,product_list FROM atomic.ga_sessions_hits_product_expanded WHERE 1=1 AND ga_sessions_date >= CURRENT_DATE -3 AND unique_visit_id = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' ORDER BY hit_number ASC) base Match (Partition by unique_visit_id Order by hit_number Define Search as event_category ='Search' and event_action = 'Submit', Banner as product_list ilike 'banner-%' and event_action ='impression' Pattern P as (Search+ BannerImpression+) ROWS MATCH FIRST EVENT) Please let me know if there's anything I should clarify, any insights or assistance would be greatly appreciated! A: First, the column you're partitioning by is not in the example input. I added it and gave it the value 42 for all rows in your input data. Your problem is that there are no patterns, in that data snippet, where an event that you named banner immediately follows an event that you named search I added yet another event into the DEFINE clause, at the end. If the other two don't evaluate to true, the last, which is just defined as other AS true, will be picked (that's the behaviour of ROWS MATCH FIRST EVENT) . And the pattern then becomes (search+ other* banner+), and that one is then found. 
See here: WITH ga_sessions_hits_product_expanded( unique_visit_id,hit_number,product_list,Event_Category,Event_Action,Event_Label ) AS ( SELECT 42,105,NULL,'Search','Submit','chocolate' UNION ALL SELECT 42,106,NULL,'eec','impression','search-result' UNION ALL SELECT 42,107,'search-result','eec','impression','sendData' UNION ALL SELECT 42,107,'search-result','eec','impression','sendData' UNION ALL SELECT 42,107,'search-result','eec','impression','sendData' UNION ALL SELECT 42,107,'search-result','eec','impression','sendData' UNION ALL SELECT 42,108,NULL,NULL,NULL,NULL UNION ALL SELECT 42,109,NULL,'eec','impression','banner' UNION ALL SELECT 42,110,'banner-105-chocolate','eec','impression','sendData' UNION ALL SELECT 42,110,'banner-105-chocolate','eec','impression','sendData' UNION ALL SELECT 42,110,'banner-105-chocolate','eec','impression','sendData' ) SELECT * , event_name() , pattern_id() , match_id() FROM ga_sessions_hits_product_expanded MATCH( PARTITION BY unique_visit_id ORDER BY hit_number DEFINE search AS event_category='Search' AND event_action='Submit' , banner AS product_list ILIKE 'banner-%' AND event_action='impression' , other AS true PATTERN p AS (search+ other* banner+) ROWS MATCH FIRST EVENT ); -- out Null display is "NULL". -- out unique_visit_id | hit_number | product_list | Event_Category | Event_Action | Event_Label | event_name | pattern_id | match_id -- out -----------------+------------+----------------------+----------------+--------------+---------------+------------+------------+---------- -- out 42 | 105 | NULL | Search | Submit | chocolate | search | 1 | 1 -- out 42 | 106 | NULL | eec | impression | search-result | other | 1 | 2 -- out 42 | 107 | search-result | eec | impression | sendData | other | 1 | 3 -- out 42 | 107 | search-result | eec | impression | sendData | other | 1 | 4 -- out 42 | 107 | search-result | eec | impression | sendData | other | 1 | 5 -- out 42 | 107 | search-result | eec | impression | sendData | other | 1 | 6 -- out 42 | 108 | NULL | NULL | NULL | NULL | other | 1 | 7 -- out 42 | 109 | NULL | eec | impression | banner | other | 1 | 8 -- out 42 | 110 | banner-105-chocolate | eec | impression | sendData | banner | 1 | 9 -- out 42 | 110 | banner-105-chocolate | eec | impression | sendData | banner | 1 | 10 -- out 42 | 110 | banner-105-chocolate | eec | impression | sendData | banner | 1 | 11 -- out (11 rows) -- out -- out Time: First fetch (11 rows): 50.632 ms. All rows formatted: 50.721 ms
{ "pile_set_name": "StackExchange" }
Q: how to clear or replace a cached image I know there are many ways to prevent image caching (such as via META tags), as well as a few nice tricks to ensure that the current version of an image is shown with every page load (such as image.jpg?x=timestamp), but is there any way to actually clear or replace an image in the browsers cache so that neither of the methods above are necessary? As an example, lets say there are 100 images on a page and that these images are named "01.jpg", "02.jpg", "03.jpg", etc. If image "42.jpg" is replaced, is there any way to replace it in the cache so that "42.jpg" will automatically display the new image on successive page loads? I can't use the META tag method, because I need everuthing that ISN"T replaced to remain cached, and I can't use the timestamp method, because I don't want ALL of the images to be reloaded every time the page loads. I've racked my brain and scoured the Internet for a way to do this (preferrably via javascript), but no luck. Any suggestions? A: If you're writing the page dynamically, you can add the last-modified timestamp to the URL: <img src="image.jpg?lastmod=12345678" ... A: <meta> is absolutely irrelevant. In fact, you shouldn't try use it for controlling cache at all (by the time anything reads content of the document, it's already cached). In HTTP each URL is independent. Whatever you do to the HTML document, it won't apply to images. To control caching you could change URLs each time their content changes. If you update images from time to time, allow them to be cached forever and use a new filename (with a version, hash or a date) for the new image — it's the best solution for long-lived files. If your image changes very often (every few minutes, or even on each request), then send Cache-control: no-cache or Cache-control: max-age=xx where xx is the number of seconds that image is "fresh". Random URL for short-lived files is bad idea. It pollutes caches with useless files and forces useful files to be purged sooner. If you have Apache and mod_headers or mod_expires then create .htaccess file with appropriate rules. <Files ~ "-nocache\.jpg"> Header set Cache-control "no-cache" </Files> Above will make *-nocache.jpg files non-cacheable. You could also serve images via PHP script (they have awful cachability by default ;) A: Contrary to what some of the other answers have said, there IS a way for client-side javascript to replace a cached image. The trick is to create a hidden <iframe>, set its src attribute to the image URL, wait for it to load, then forcibly reload it by calling location.reload(true). That will update the cached copy of the image. You may then replace the <img> elements on your page (or reload your page) to see the updated version of the image. (Small caveat: if updating individual <img> elements, and if there are more than one having the image that was updated, you've got to clear or remove them ALL, and then replace or reset them. If you do it one-by-one, some browsers will copy the in-memory version of the image from other tags, and the result is you might not see your updated image, despite its being in the cache). I posted some code to do this kind of update here.
{ "pile_set_name": "StackExchange" }
Q: Would this JS function be considered recursive? Not sure whether or not this function can be considered recursive. var capitalizeWords = function(input) { var results = []; if(typeof input === 'string'){ return input.toUpperCase(); }else{ input.forEach(function(word){ results = results.concat(capitalizeWords(word)); }); } return results; }; //capitalizes all words in the array A: Yes, but it's not direct recursion but indirect recursion. The recursion doesn't happen in the actual function but in an anonymous higher order function.
{ "pile_set_name": "StackExchange" }
Q: How do i get two drop down list to be displayed side by side? I've been trying to make two drop down lists to be displayed side by side but can't figure it out. What CSS element property to set to do this. I have to show it in the following format: [company] [mobile] instead of [company] [mobile] There are 3 such pairs. Also the pair of 2 select drop boxes doesn't seem to stick to its division. <html> <head> <style> body { background-image:url('gradient1.jpg'); background-repeat:repeat-x; } .ex { margin:auto; width:90%; padding:10px; border:outset; } select { display:inline; cursor:pointer; } .ey { display:inline; } .gap { clear:both; margin-bottom:2px; } </style> </head> <body> <div class="ex"> <form id='dd1.mob1' name='dd1.mob1' method='post' action=' '> <p><label>Select Company</label></p><br/> <select onchange=filter.submit() name='dd1mob1' id='dd1mob1'> <option>1</option> <option>2</option>" . $options . " </select> </form> <form class="ey" id='dd2.mob1' name='dd2.mob1' method='post' action=''> <p><label>Select Mobile</label></p><br/> <select onchange=filter.submit() name='dd2mob1' id='dd2mob1'> " . $options . " </select> </form> </div> <div class="ex" class="gap" > <form id='dd1.mob2' name='dd1.mob2' method='post' action=' '> <p><label>Select Company</label></p><br/> <select onchange=filter.submit() name='dd1mob2' id='dd1mob2'> <option>1</option> <option>2</option>" . $options . " </select> </form> <form class="ey" id='dd2.mob2' name='dd2.mob2' method='post' action=''> <p><label>Select Mobile</label></p><br/> <select onchange=filter.submit() name='dd2mob2' id='dd2mob2'> " . $options . " </select> </form> </div> <div class="ex" class="gap"> <form id='dd1.mob3' name='dd1.mob3' method='post' action=' '> <p><label>Select Company</label></p><br/> <select onchange=filter.submit() name='dd1mob3' id='dd1mob3'> <option>1</option> <option>2</option>" . $options . " </select> </form> <form class="ey" id='dd2.mob3' name='dd2.mob3' method='post' action=''> <p><label>Select Mobile</label></p><br/> <select onchange=filter.submit() name='dd2mob3' id='dd2mob3'> " . $options . " </select> </form> </div> </body> </html> A: Try this- Example .ey { display:inline-block; } form{ display:inline-block; } See this thread for a good explanation of display: inline-block; What is the difference between display: inline and display: inline-block? A: this displays 2 dropdown list side by side: <!DOCTYPE html> <html> <head> <meta name="viewport" content="width=device-width, initial-scale=1.0"> </head> <body> <div style="display:block;"> <select> <option>test1</option> <option>test2</option> </select> <select> <option>test1</option> <option>test2</option> </select> </div> </body> </html>
{ "pile_set_name": "StackExchange" }
Q: JsViews doesn't seem to work with numeric properties When I try to pass a JavaScript object with numeric properties

{ 1: "One", 2: "Two", 3: "Three" }

data-binding doesn't render the property values, only numbers, as in this example:

$.templates("template", "#template");
$.link.template("#row", { 1: "One", 2: "Two", 3: "Three" });

<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jsviews/0.9.90/jsviews.min.js"></script>
<script id="template" type="text/x-jsrender">
    <td>{{:1}}</td>
    <td>{{:2}}</td>
    <td>{{:3}}</td>
</script>
<table>
    <tr id="row">
    </tr>
</table>

But if I change the property names of the object to something beginning with a letter it works OK:

$.templates("template", "#template");
$.link.template("#row", { n1: "One", n2: "Two", n3: "Three" });

<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jsviews/0.9.90/jsviews.min.js"></script>
<script id="template" type="text/x-jsrender">
    <td>{{:n1}}</td>
    <td>{{:n2}}</td>
    <td>{{:n3}}</td>
</script>
<table>
    <tr id="row">
    </tr>
</table>

Is it a bug or a feature? How can I make JsViews work with numeric properties without converting the passed object?

A: If you write {{:4}} for some integer, then JsRender treats that as an expression, and evaluates it. (For example {{:4*12+2}} will render 50.)
In JavaScript, if an object property name (key) is not a valid identifier name you have to use the square bracket accessor syntax. In JsRender/JsViews templates, the same is true. (See www.jsviews.com/#paths.)
Here are multiple examples:

$.templates("template", "#template");
$.link.template("#row", { 1: "One", "2": "Two", 3: "Three", other: { 50: "fifty" }, 4: { 5: "five"}, "a b": "AB" });

<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jsviews/0.9.90/jsviews.min.js"></script>
<script id="template" type="text/x-jsrender">
    <td>{{:#data[1]}}</td>
    <td>{{:#data[1+1]}}</td>
    <td>{{:#data["3"]}}</td>
    <td>{{:other[50]}}</td>
    <td>{{:~root[1]}}</td>
    <td>{{:#data[4]["5"]}}</td>
    <td>{{:#data["a b"]}}</td>
</script>
<table>
    <tr id="row">
    </tr>
</table>
{ "pile_set_name": "StackExchange" }
Q: Javascript: getting wrong dates of current weeks I want to get all seven dates (Monday to Sunday) of the current week. The code below works well:

let curr = new Date(); // today's date is: 15th April 2020
let week = []
for (let i = 1; i <= 7; i++) {
    let first = curr.getDate() - curr.getDay() + i;
    let day = new Date(curr.setDate(first)).toISOString().slice(0, 10)
    week.push(day);
}
console.log(week);
// output: ["2020-04-13", "2020-04-14", "2020-04-15", "2020-04-16", "2020-04-17", "2020-04-18", "2020-04-19"]

However, assume the current date is 19th April 2020. Then the code returns the wrong dates:

let curr = new Date('2020-04-19'); // today's date is: 19th April 2020
let week = []
for (let i = 1; i <= 7; i++) {
    let first = curr.getDate() - curr.getDay() + i;
    let day = new Date(curr.setDate(first)).toISOString().slice(0, 10)
    week.push(day);
}
console.log(week);
// output: ["2020-04-20", "2020-04-21", "2020-04-22", "2020-04-23", "2020-04-24", "2020-04-25", "2020-04-26"]

It should return output like ["2020-04-13", "2020-04-14", "2020-04-15", "2020-04-16", "2020-04-17", "2020-04-18", "2020-04-19"]

A: It looks like if curr falls on a Sunday, you want to jump back an entire week, so I would change this line:

let first = curr.getDate() - curr.getDay() + i;

to:

let first = curr.getDate() - ( curr.getDay() ? curr.getDay() : 7 ) + i;
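Put together, the whole loop with that one-line fix applied looks like this (a sketch; the logic is otherwise unchanged from the question):

let curr = new Date('2020-04-19'); // a Sunday
let week = [];
for (let i = 1; i <= 7; i++) {
    let day = curr.getDay() || 7; // treat Sunday (0) as day 7
    let first = curr.getDate() - day + i;
    week.push(new Date(curr.setDate(first)).toISOString().slice(0, 10));
}
console.log(week);
// ["2020-04-13", "2020-04-14", "2020-04-15", "2020-04-16", "2020-04-17", "2020-04-18", "2020-04-19"]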
{ "pile_set_name": "StackExchange" }
Q: Serving resources from `node_modules` dir I want to serve files from the node_modules directory in the project root. So for example I added this to my page:

[:link {:href "/font-awesome/css/font-awesome.css"
        :rel "stylesheet"
        :type "text/css"}]

Now I need to tell compojure to serve statically anything that's in the node_modules directory, and I can't find a way. It works if I move node_modules to the resources/public dir, but I don't want that. I need to find a way to serve files from anywhere in the project directory (in this case from ./node_modules).
I tried adding :resource-paths ["node_modules"] to profiles.clj, and I tried (compojure.route/resources "node_modules" {:root "../.." }), but that still didn't work.

A: This is what I did: added :resource-paths ["node_modules"] to project.clj - take a look at leiningen's sample - and then (compojure.route/resources "/" {:root "" }). Seems it worked.
upd: this apparently exposed things that should not be exposed, e.g. it's possible now to download project.clj by navigating to it in the browser. Not good.
upd 2: (compojure.route/files "/" {:root "node_modules" }) - this time it's right.
{ "pile_set_name": "StackExchange" }
Q: How do I run a graphical sudo in bash on kubuntu 18.04 now that kdesudo is gone? TL;DR: What's the new right way to do a graphical sudo from a shell script?
Flailing: I just upgraded from kubuntu 16.04 to 18.04 and I'm doing the normal triage. kdesudo is gone in 18.04 (unmaintained). I use it a lot in bash scripts with GUI i/o. Some post said use kdesu - which seems weird. I seem to recall that it messes with the effective user or something like that. That's not installed in my PATH. I found it at

bigbird@sananda:~/pq$ ls -l /etc/alternatives/kdesu
lrwxrwxrwx 1 root root 41 Aug 19 03:23 /etc/alternatives/kdesu -> /usr/lib/kde4/libexec/kdesu-distrib/kdesu

which still says kde4. I tried sudo -A ls and it said

bigbird@sananda:~$ sudo -A ls
sudo: no askpass program specified, try setting SUDO_ASKPASS

I went in a few circles looking at ksshaskpass and ssh-askpass, but both say they're not intended to be called directly. I am not doing anything with ssh.
I need this for bash scripts that do almost everything as a normal user and then run one or two commands as root. These scripts are often launched from desktop icons where there is no terminal window open (and I don't need or want one.) They often use yad (like zenity or kdialog) to interface with the user.

A: As you have discovered, you can use the -A option with sudo, but you need a gui method of supplying the password to sudo. You can write such a tool any way you want, as long as it passes the password back to sudo on stdout.
I use a simple solution which someone suggested to me a very long time ago, that uses kdialog, and like all simple solutions, it has remained my go-to ever since.
So create yourself a simple kdialog script, such as this

#!/bin/bash
kdialog --password "Password required to proceed"

Now you use this with sudo like this

#!/bin/bash
export SUDO_ASKPASS=<path to your kdialog script>
sudo -A foo

You can of course use any language you want for your gui password provider if you don't have kde.
EDIT: Solution to bypassing sudo passwd_tries
So that you can ask for the password once only (as you want to do), you can capture the password in a variable within the script and pass that variable directly to the sudo command using the -S switch. This has the advantage that it ignores the sudo passwd_tries rule, and still requires the interactive password input, so the password is not stored within the script.

PASSWD=$(kdialog --password "sudo password required")
echo $PASSWD | sudo -S foo

You can also do it directly on one line, if you do not need multiple sudo commands in the script, like this

echo $(kdialog --password "sudo password required") | sudo -S foo

And of course you can use your own kdialog script that we discussed earlier in place of using kdialog here, if you want a standard kdialog prompt in all your scripts.
The problem with bypassing sudo's passwd_tries, from my POV, is that if you get the password wrong, your script will continue processing any commands after the sudo command, so if the sudo-elevated command was critical to the script's success then you have problems.
The caveat is that the password from kdialog (or an alternative such as zenity) is written on stdout, something I should have mentioned before, so anyone that has captured the PID's stdout would see your password.
{ "pile_set_name": "StackExchange" }
Q: What's the reasoning behind the phrase "dissertation submitted in partial fulfillment of the requirements"? The phrase "dissertation submitted in partial fulfillment of the requirements" seems strange. Can one submit a dissertation in full fulfillment of the requirements?

A: Most doctoral programs have other requirements for completion of the degree, such as a certain number of course hours and the passing of qualifying exams. Hence, while the dissertation is the culmination of the doctoral program, on its own it does not satisfy all the requirements for graduation.
{ "pile_set_name": "StackExchange" }
Q: Will Cassini contaminate Saturn? Cassini is going to crash into Saturn later this month to avoid contaminating one of its moons. Why isn't anyone worried that Cassini will contaminate Saturn itself? Life might exist in Saturn's atmosphere. Different altitudes, pressures & temperatures may have environments where life has evolved. It's closed-minded to rule this out.
If we REALLY wanted to not contaminate anything, we should sling it out of the plane of the solar system entirely.

A: If we were going to send a probe into Saturn's atmosphere and were concerned about contamination of a potential ecosphere there, we would sterilize the probe first, e.g. with dry heat microbial reduction, to make sure that nothing viable was on the probe.
Cassini has no protection from the entry heat like a probe would, and will be entering at an incredible velocity, so every tiny bit of Cassini will be massively sterilized in the fiery entry, far beyond puny humans' ability to do so.

A: I was wondering this same thing myself. I did some cursory research and apparently there is an unidentified strain of a genus of thermally resistant bacteria, Microbispora, that survived the reentry and crash of the Columbia, as well as a strain of thermophilic bacteria, Thermoanaerobacter siderophilus, placed in basalt disks on the exterior of the Russian satellite, Foton-M4, with 1/6 of the cultures surviving.
Granted, Saturn's atmosphere is significantly thicker, but it still seems at least very vaguely plausible that certain extremophiles could survive or be whisked off into the less dense upper atmosphere, kept afloat by the potent winds and storms.
I realize at a certain point it all becomes a matter of "good enough", as there are very few, if any, practical ways to avert any and all risk of contamination, but it still seems like an odd and interesting subject. Love to hear if anybody knows anything more about this! :DDD
Sources:
Microbispora - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3144675/
Thermoanaerobacter siderophilus - https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0132611
{ "pile_set_name": "StackExchange" }
Q: Silverlight 5 and MVVM. Do I really need other frameworks? What is the best way for rapid development? I've been reading and watching videos on MVVM and Silverlight. I'm pretty new to Silverlight but not new to .NET. Interesting that I used MVVM in my WPF apps without knowing that it's MVVM. I was creating easily-bindable classes to serve as a layer between data and XAML :)
The project we are starting will be done with Silverlight 5 and WCF on the back end. The project will be rather large, with several modules of 50 or so screens each; ideally I would like to load them on demand. MOST (not all) of the UI will be straightforward data-entry stuff.
I'm trying to see what our architecture should be and how it will benefit us in the future. I think I GOT IT as far as what MVVM is and WHY. I also checked Caliburn Micro (and understood what it does). I see ReactiveUI and MVVMLight. To be honest I don't like external libraries/dependencies. Also, I don't really care about using naming conventions to satisfy an external framework and am perfectly OK with binding in XAML. Since we have good commands support and XAML debugging in SL5 - I don't think I need an external framework.
So, I think having ViewModels and binding via XAML with minimal view-related code in code-behind will be perfectly fine with me. Here is my dilemma:
If I use RIA services, 80% of my UI will bind perfectly to RIA-generated stuff, with some converters of course. Will it be bad architecture to have everything bind directly and just some more complex views use a ViewModel?
Should I use RIA services?! I think YES. I'm all for generated code, especially when it's plain data-entry stuff. Will it keep client code synced with the server side?
From what I see - ViewModels have to be manually coded. Am I correct? Again, for 80% of the project that's probably going to be a waste of effort.
If I want to have multiple xap files that load on demand, should I use some kind of framework? I think keeping it in one file may get too big.
Thanks!

A: I built a MVVM (WPF) application for a client about a year ago. The app consisted of multiple modules, tens of screens with up to 50 fields per screen. I used Prism for the MVVM framework for the screen/region management. I added another simple framework to handle multi-lingual support and built security myself.
I found it beneficial having Prism handle all of the screen region stuff. It was a chunk of work I did not have to worry about, plus I could get working samples and on-line help when dealing with problems. In some areas where guidance was missing or light on (popup screens and field security) I went my own way. Prism handled all of my module discovery and screen handling capabilities.
Silverlight will have a different set of concerns, such as network latency. These concerns would have been dealt with if you use an existing framework.
Some of my modules have web service back-ends, so had hand-coded VMs and Models. One had an Entity Framework back-end, so the VM was hand-coded and the Model was generated. We briefly looked at a RIA back-end; it seemed well set up for a 5-minute demo, but I'm not sure how well it would scale for a complex application with lots of business logic.
Long and short, if you are going down the full MVVM path, I'd suggest using a MVVM framework but be aware it may not cover everything you need. If your application really is that simple, direct binding to RIA could work, but validate it against your most complex screen before you commit to it - this means checking security and validation as well as the simple stuff.
{ "pile_set_name": "StackExchange" }
Q: Finding the convex hull of an object in opencv? I've written this based on the tutorial here but I'm unable to obtain the convex hull of the image (I'm using a similar hand image as shown in the tutorial). I get the source and edges output fine, but the "Drawings" output, which should draw the contour and convex hull lines, doesn't show anything drawn and is instead completely black. Any ideas as to why this could be?

#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <opencv/cxcore.h>

int main(int argc,char **argv)
{
    cvNamedWindow( "Source", 1 );
    cvNamedWindow( "edges window", 1 );
    cvNamedWindow( "Drawings", 1 );

    IplImage* src = cvLoadImage( "img.jpg", 0 );
    IplImage* edges = cvCreateImage( cvGetSize(src), 8, 1 );

    // Finding edges
    cvThreshold( src, edges, 150, 255, CV_THRESH_BINARY );

    CvMemStorage* storage = cvCreateMemStorage();
    CvSeq* first_contour = NULL;
    int Nc = cvFindContours( edges, storage, &first_contour, sizeof(CvContour), CV_RETR_LIST );

    // Finding convex Hull
    CvMemStorage* hull_storage = cvCreateMemStorage();
    CvSeq* retHulls = NULL;
    for(CvSeq* i = first_contour; i != 0; i = i->h_next){ // note h_next is next sequence.
        retHulls = cvConvexHull2(first_contour,hull_storage,CV_CLOCKWISE,1);
    }

    // drawing contours and hull
    IplImage* draw = cvCreateImage(cvGetSize(edges), 8, 3 );
    for(CvSeq* i = first_contour; i != 0; i = i->h_next){
        cvDrawContours(draw,first_contour,cvScalar(255,0,0,0),cvScalar(255,0,0,0),0,1,8);
        cvDrawContours(draw,retHulls,cvScalar(255,0,0,0),cvScalar(255,0,0,0),0,1,8);
    }

    cvShowImage( "Source", src );
    cvShowImage( "edges window", edges );
    cvShowImage( "Drawings", draw );
    cvWaitKey();

    cvDestroyAllWindows();
    cvReleaseImage( &src );
    cvReleaseImage( &edges );
    cvReleaseImage( &draw );
    return 0;
}

A: You have to make the following changes:
Change the parameter from CV_RETR_LIST to CV_RETR_EXTERNAL in the cvFindContours function.
Change CV_THRESH_BINARY to CV_THRESH_OTSU in cvThreshold.
Here's proof (input/output):
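Applied to the snippet above, the two changed lines would look like this (a sketch; with the Otsu flag the threshold is computed automatically so the 150 is ignored, and the flag is often written OR-ed with CV_THRESH_BINARY for explicitness):

// pick the threshold automatically with Otsu's method
cvThreshold( src, edges, 150, 255, CV_THRESH_BINARY | CV_THRESH_OTSU );
// retrieve only the extreme outer contours
int Nc = cvFindContours( edges, storage, &first_contour,
                         sizeof(CvContour), CV_RETR_EXTERNAL );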
{ "pile_set_name": "StackExchange" }
Q: Error while fitting data in auto.arima - R I am running auto.arima for forecasting time series data and getting the following error:

1: The time series frequency has been rounded to support seasonal differencing.
2: In value[3L] : The chosen test encountered an error, so no seasonal differencing is selected. Check the time series data.

This is what I am executing:

fit <- auto.arima(data, seasonal = TRUE, approximation = FALSE)

I have weekly time series data. This is what dput(data) looks like:

structure(c(12911647L, 12618317L, 12827388L, 12967840L, 13264925L,
13557838L, 13701131L, 13812463L, 13971928L, 13837658L, 13550635L,
13022371L, 13507596L, 13456736L, 12992393L, 12831883L, 13262301L,
12831691L, 12808893L, 12726330L, 11893457L, 12434051L, 12363464L,
12077055L, 12107221L, 11986124L, 11997087L, 12264971L, 12164412L,
12438279L, 12733842L, 12543251L, 12627134L, 12480153L, 12276238L,
12443655L, 12497753L, 12279060L, 12549138L, 12308591L, 12416680L,
12516725L, 12326545L, 12772578L, 12524848L, 13429830L, 14188044L,
16611840L, 16476565L, 15659941L, 10785585L, 12150894L, 13436366L,
12985213L, 13097555L, 13204872L, 13786040L, 13760281L, 13295389L,
14734578L, 15043941L, 14821169L, 14361765L, 14300180L, 14357964L,
14271892L, 13248168L, 13813784L, 14092489L, 14100024L, 13378374L,
13225650L, 12582444L, 13267163L, 13026181L, 12747286L, 12707074L,
12534595L, 12546094L, 13030406L, 12950360L, 12814398L, 13405187L,
13277755L, 13142375L, 12742153L, 12610817L, 12267747L, 12570075L,
12704157L, 12835948L, 12851893L, 12978880L, 13104906L, 12754018L,
13213958L, 13584642L, 13963433L, 14471672L, 16312595L, 16630000L,
16443882L, 11555299L, 12018373L, 13031876L, 13013945L, 13164137L,
13313246L, 13652605L, 13803606L, 13308310L, 14466211L, 15092736L,
15346015L, 14467260L, 14767785L, 13914271L, 14185070L, 13851028L,
13605858L, 13597999L, 13876994L, 13026270L, 13113250L, 12288727L,
12925846L, 13525010L, 12594472L, 12654512L, 12888260L), .Tsp = c(2016.00819672131,
2018.48047598209, 52.1785714285714), class = "ts")

This is how I am reading data from the csv:

read_data <- read.csv(file="data.csv", header=TRUE)
data_ts <- ts(read_data, freq=365.25/7, start=decimal_date(ymd("2016-1-4")))
data <- data_ts[, 2:2]

This is the data in the csv:

Year si_act
1/4/16 12911647
1/11/16 12618317
1/18/16 12827388
1/25/16 12967840
2/1/16 13264925
2/8/16 13557838
2/15/16 13701131
2/22/16 13812463
2/29/16 13971928
3/7/16 13837658
3/14/16 13550635
3/21/16 13022371
3/28/16 13507596
4/4/16 13456736
4/11/16 12992393
4/18/16 12831883
4/25/16 13262301
5/2/16 12831691
5/9/16 12808893
5/16/16 12726330
5/23/16 11893457
5/30/16 12434051
6/6/16 12363464
6/13/16 12077055
6/20/16 12107221
6/27/16 11986124
7/4/16 11997087
7/11/16 12264971
7/18/16 12164412
7/25/16 12438279
8/1/16 12733842
8/8/16 12543251
8/15/16 12627134
8/22/16 12480153
8/29/16 12276238
9/5/16 12443655
9/12/16 12497753
9/19/16 12279060
9/26/16 12549138
10/3/16 12308591
10/10/16 12416680
10/17/16 12516725
10/24/16 12326545
10/31/16 12772578
11/7/16 12524848
11/14/16 13429830
11/21/16 14188044
11/28/16 16611840
12/5/16 16476565
12/12/16 15659941
12/19/16 10785585
12/26/16 12150894
1/2/17 13436366
1/9/17 12985213
1/16/17 13097555
1/23/17 13204872
1/30/17 13786040
2/6/17 13760281
2/13/17 13295389
2/20/17 14734578
2/27/17 15043941
3/6/17 14821169
3/13/17 14361765
3/20/17 14300180
3/27/17 14357964
4/3/17 14271892
4/10/17 13248168
4/17/17 13813784
4/24/17 14092489
5/1/17 14100024
5/8/17 13378374
5/15/17 13225650
5/22/17 12582444
5/29/17 13267163
6/5/17 13026181
6/12/17 12747286
6/19/17 12707074
6/26/17 12534595
7/3/17 12546094
7/10/17 13030406
7/17/17 12950360
7/24/17 12814398
7/31/17 13405187
8/7/17 13277755
8/14/17 13142375
8/21/17 12742153
8/28/17 12610817
9/4/17 12267747
9/11/17 12570075
9/18/17 12704157
9/25/17 12835948
10/2/17 12851893
10/9/17 12978880
10/16/17 13104906
10/23/17 12754018
10/30/17 13213958
11/6/17 13584642
11/13/17 13963433
11/20/17 14471672
11/27/17 16312595
12/4/17 16630000
12/11/17 16443882
12/18/17 11555299
12/25/17 12018373
1/1/18 13031876
1/8/18 13013945
1/15/18 13164137
1/22/18 13313246
1/29/18 13652605
2/5/18 13803606
2/12/18 13308310
2/19/18 14466211
2/26/18 15092736
3/5/18 15346015
3/12/18 14467260
3/19/18 14767785
3/26/18 13914271
4/2/18 14185070
4/9/18 13851028
4/16/18 13605858
4/23/18 13597999
4/30/18 13876994
5/7/18 13026270
5/14/18 13113250
5/21/18 12288727
5/28/18 12925846
6/4/18 13525010
6/11/18 12594472
6/18/18 12654512
6/25/18 12888260

I was able to read the data without any errors before. Initially I had 160 records and the model did not throw any error, but then for an 80-20 test I removed the last 30 records and this error cropped up. Now also, if I run with all the data I don't get any error, but if I run it with the first 130 records as the 80% I get this error.

A: When using auto.arima with seasonal = TRUE the parameter S is not calibrated but taken from the frequency of the ts object you are providing. So in your case S = 52.17. In case the frequency of the time series is not an integer, S is rounded to the next integer, so auto.arima takes S = 52.
With S = 52 and data of length 150 it becomes difficult to calibrate a seasonal arima model: e.g. if P = 2 and all other variables are zero, the first 104 observations cannot be used. I guess that is what the warning is about. You are being told that the seasonal component cannot be calibrated due to the large coefficient S (or due to your short data).
So either you get a longer data history, or you aggregate your data to monthly data (such that S = 12).
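A sketch of the monthly-aggregation route (this uses the xts package, which the thread itself doesn't mention, and assumes the Year column parses with format "%m/%d/%y"):

library(forecast)
library(xts)
read_data <- read.csv("data.csv", header = TRUE)
x <- xts(read_data$si_act, order.by = as.Date(read_data$Year, format = "%m/%d/%y"))
monthly <- apply.monthly(x, sum)           # one value per month
data_m <- ts(as.numeric(monthly), frequency = 12, start = c(2016, 1))
fit <- auto.arima(data_m, seasonal = TRUE) # now S = 12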
{ "pile_set_name": "StackExchange" }
Q: Only seeing main in xDebug I'm debugging a huge and messy PHP codebase. The application currently misbehaves and redirects all traffic to the login screen because it seems unable to start sessions. I traced such a scenario with xDebug and only see main, with no branches at all.
Does this mean that an uncaught exception is unwinding the stack completely? If that should be the case, is there a way to get a call graph even if that happens?

A: The callgraph should show all functions that have been called; of course, if none have been called you only see "main()" (for example, if in main you tried to call an undefined function).
With Xdebug, you can trace which functions are called through "function tracing", which you can enable by setting "xdebug.auto_trace=1". You will then get a file in /tmp ending in .xt that lists all the function calls. You can also include more information, as you can read about at http://www.xdebug.org/docs/execution_trace#collect_assignments
Another way of tackling the debugging is by using single-step debugging (also called remote debugging), which many IDEs support combined with Xdebug. See for some more information: http://www.xdebug.org/docs/remote
{ "pile_set_name": "StackExchange" }
Q: Compiling java code I wrote some code with Netbeans some time ago. I ended up moving the code to a new server, which I access remotely and which does not have netbeans installed. I recently made some changes to that code and compiled with this command:

javac -classpath /home/me/JSAP-2.1.jar /home/me/Fin2/src/fin2/Fin2.java /home/me/Fin2/src/fin2/CommandLine.java /home/me/Fin2/src/fin2/Reader.java /home/me/Fin2/src/fin2/Manager.java -Xlint

But it seems like the new code never compiled. I am getting the same output as before I made the changes. When I have previously run across this problem on the old server, I would just open netbeans and reset the 'main project' to the program I was trying to run, recompile from within netbeans and it would work fine. Without doing that, I have no idea how to fix the problem. When I run the code I run it with

java -jar /home/me/NetBeansProjects/Fin2/dist/Fin2.jar [commandline args]

Can anyone make any suggestions?

A: You are missing a step that your IDE does for you. You will need to create the jar file from the class file output. See
http://docs.oracle.com/javase/tutorial/deployment/jar/build.html
http://docs.oracle.com/javase/6/docs/technotes/tools/solaris/javac.html
Since you are not adding a -d, the class files are placed in the directory of the source files. Let's actually add a compiled output directory:

rm -r /home/me/Fin2/build/
mkdir /home/me/Fin2/build/

Now let's add that folder to the javac command, so the classes are created in that folder:

javac -Xlint -d /home/me/Fin2/build/ -classpath /home/me/JSAP-2.1.jar /home/me/Fin2/src/fin2/Fin2.java /home/me/Fin2/src/fin2/CommandLine.java /home/me/Fin2/src/fin2/Reader.java /home/me/Fin2/src/fin2/Manager.java

Last we need a jar of that output. The jar file is basically just a zip of the directory; in fact, you can open .jar files as zip files. You must also tell it what class has your main method entry point (note: use -C here so the jar entries are rooted at the build directory rather than at the absolute path):

jar cvfe /home/me/Fin2/dist/Fin2.jar [entry point] -C /home/me/Fin2/build .

Now, when you run it, run the newly created jar:

java -jar /home/me/Fin2/dist/Fin2.jar [commandline args]
{ "pile_set_name": "StackExchange" }
Q: Extract raster values of particular polygons of a SpatialPolygonsDataFrame (indexation) I have a SpatialPolygonsDataFrame with 120 polygons and some associated data. Now I'd like to extract the mean of the values on a raster within each polygon separately. I succeeded in plotting individual polygons with:

plot(SpatialPolygons(SPdataframe@polygons)[i])

But it did not work to extract the values in the same manner:

extract(raster, SpatialPolygons(SPdataframe@polygons)[i], fun="mean", na.rm=TRUE, method="simple")

Can anyone explain the difference between the use of the same indexation in these two cases? What is the official way to choose particular polygons of a SpatialPolygonsDataFrame with indices?
Thank you a lot for your help in advance!

A: The correct indexation for single polygons of a SpatialPolygonsDataFrame is:

SPdataframe[i,]

(Merci to R-sig-geo user Rafael Wüest)
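Putting that together with the question's call gives something like this (a sketch; note that fun should be the function mean rather than the string "mean", since raster::extract expects a function):

library(raster)
extract(raster, SPdataframe[i, ], fun = mean, na.rm = TRUE, method = "simple")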
{ "pile_set_name": "StackExchange" }
Q: What is the "M- notation" and where is it documented? The Man-Page of cat says: -v, --show-nonprinting use ^ and M- notation, except for LFD and TAB What is the M- notation and where is it documented? Example: $cat log -A wrote 262144 bytes from file test.x in 9.853947s (25.979 KiB/s)^M$ ^M> ^H^H ^H^H> What means ^M and ^H? A: I was wondering this too. I checked the source but it seemed easier to create a input file to get the mapping. I created a test input file with a Perl scrip for( my $i=0 ; $i < 256; $i++ ) { print ( sprintf( "%c is %d %x\n", $i, $i ,$i ) ); } and then ran it through cat -v Also if you see M-oM-;M-? at the start of a file it is the UTF-8 byte order mark. Scroll down through these to get to the M- values: ^@ is 0 0 ^A is 1 1 ^B is 2 2 ^C is 3 3 ^D is 4 4 ^E is 5 5 ^F is 6 6 ^G is 7 7 ^H is 8 8 (9 is tab) (10 is NL) ^K is 11 b ^L is 12 c ^M is 13 d ^N is 14 e ^O is 15 f ^P is 16 10 ^Q is 17 11 ^R is 18 12 ^S is 19 13 ^T is 20 14 ^U is 21 15 ^V is 22 16 ^W is 23 17 ^X is 24 18 ^Y is 25 19 ^Z is 26 1a ^[ is 27 1b ^\ is 28 1c ^] is 29 1d ^^ is 30 1e ^_ is 31 1f ...printing chars removed... ^? is 127 7f M-^@ is 128 80 M-^A is 129 81 M-^B is 130 82 M-^C is 131 83 M-^D is 132 84 M-^E is 133 85 M-^F is 134 86 M-^G is 135 87 M-^H is 136 88 M-^I is 137 89 M-^J is 138 8a M-^K is 139 8b M-^L is 140 8c M-^M is 141 8d M-^N is 142 8e M-^O is 143 8f M-^P is 144 90 M-^Q is 145 91 M-^R is 146 92 M-^S is 147 93 M-^T is 148 94 M-^U is 149 95 M-^V is 150 96 M-^W is 151 97 M-^X is 152 98 M-^Y is 153 99 M-^Z is 154 9a M-^[ is 155 9b M-^\ is 156 9c M-^] is 157 9d M-^^ is 158 9e M-^_ is 159 9f M- is 160 a0 M-! is 161 a1 M-" is 162 a2 M-# is 163 a3 M-$ is 164 a4 M-% is 165 a5 M-& is 166 a6 M-' is 167 a7 M-( is 168 a8 M-) is 169 a9 M-* is 170 aa M-+ is 171 ab M-, is 172 ac M-- is 173 ad M-. is 174 ae M-/ is 175 af M-0 is 176 b0 M-1 is 177 b1 M-2 is 178 b2 M-3 is 179 b3 M-4 is 180 b4 M-5 is 181 b5 M-6 is 182 b6 M-7 is 183 b7 M-8 is 184 b8 M-9 is 185 b9 M-: is 186 ba M-; is 187 bb M-< is 188 bc M-= is 189 bd M-> is 190 be M-? is 191 bf M-@ is 192 c0 M-A is 193 c1 M-B is 194 c2 M-C is 195 c3 M-D is 196 c4 M-E is 197 c5 M-F is 198 c6 M-G is 199 c7 M-H is 200 c8 M-I is 201 c9 M-J is 202 ca M-K is 203 cb M-L is 204 cc M-M is 205 cd M-N is 206 ce M-O is 207 cf M-P is 208 d0 M-Q is 209 d1 M-R is 210 d2 M-S is 211 d3 M-T is 212 d4 M-U is 213 d5 M-V is 214 d6 M-W is 215 d7 M-X is 216 d8 M-Y is 217 d9 M-Z is 218 da M-[ is 219 db M-\ is 220 dc M-] is 221 dd M-^ is 222 de M-_ is 223 df M-` is 224 e0 M-a is 225 e1 M-b is 226 e2 M-c is 227 e3 M-d is 228 e4 M-e is 229 e5 M-f is 230 e6 M-g is 231 e7 M-h is 232 e8 M-i is 233 e9 M-j is 234 ea M-k is 235 eb M-l is 236 ec M-m is 237 ed M-n is 238 ee M-o is 239 ef M-p is 240 f0 M-q is 241 f1 M-r is 242 f2 M-s is 243 f3 M-t is 244 f4 M-u is 245 f5 M-v is 246 f6 M-w is 247 f7 M-x is 248 f8 M-y is 249 f9 M-z is 250 fa M-{ is 251 fb M-| is 252 fc M-} is 253 fd M-~ is 254 fe M-^? is 255 ff A: ^M is for Control-M (a carriage return), ^H for Control-H (a backspace). M-Something is Meta-Something (Meta- is what the Alt key does in some terminals).
{ "pile_set_name": "StackExchange" }
Q: Getting text after a match from stdout I want to extract the next word/number after specific words I find using grep or whatnot. As an example, let's say this is what I have in stdout

string-a0 match-a1 string-a2 string-a3
string-b0 string-b1 match-b2 string-b3
match-c0 string-c1 string-c2 string-c3

I want to be left with just this

string-a2
string-b3
string-c1

mind that match-a1 != match-b2 != match-c0
EDIT
A concrete example... stdout is this

open 0.23 date ...
close 1.52 date ...
open 2.34 date ...
close 5.92 date ...
open 10.78 date ...
close 24.21 date ...
total 45.3

I'm searching for the words open, close and total, so the output should be

0.23
1.52
2.34
5.92
10.78
24.21
45.3

A: This doesn't match the general case, but works for your example:

awk '/^open|^close|^total/{print $2}' input

For the general case, if your definition of "string" is based on whitespace, perhaps you want:

tr -s ' \n' \\n < input | awk 'p{print; p=0} {p=/^open|^close|^total/}'
{ "pile_set_name": "StackExchange" }
Q: Auto populate per user information from a SharePoint List Is it possible to auto-populate per-user information from a SharePoint list? I have a form with the following text inputs:

Payroll Name
Job Title Description
Home Department Description
Location Description
Reports To Name
Work Contact: Work Email

I also have a SharePoint list with all the users' information in the columns above. I wanted to see if there is a way to auto-populate that user's information once they open up the form. Thank you in advance!

A: Yes, that should be possible.
Make the connection to the SharePoint list where the users' info is stored.
In the form you want to pre-populate information, select a card, e.g. Payroll Name.
Change the Default property of that card to something like

First(Filter('User Info List', username = currentUser)).PayrollName

'User Info List' is the connection name
username is a column in that list
"user" is the information you are matching to find list items
Since Filter returns a table we use First to get a record
currentUser is the variable that has the user's e-mail
You can get the current user's details by using User().Email and store that in the OnVisible of that screen:

Set(currentUser, User().Email)
{ "pile_set_name": "StackExchange" }
Q: Programmatically get memory usage in Chrome How can I programmatically get the memory usage (JS and total) of my website in Google Chrome? I looked at doing it from a Chrome extension using the undocumented HeapProfiler (see here), but I can't find a way to get data from that. I want to measure the memory consumption at every release, so this needs to be programmatic.
EDIT: I figured out how to get the HeapProfiler method to work. Each addHeapSnapshotChunk event has a chunk of a JSON object.

chrome.browserAction.onClicked.addListener(function(tab) {
    var heapData, debugId = {tabId:tab.id};
    chrome.debugger.attach(debugId, '1.0', function() {
        chrome.debugger.sendCommand(debugId, 'Debugger.enable', {}, function() {
            function headerListener(source, name, data) {
                if(source.tabId == tab.id && name == 'HeapProfiler.addProfileHeader') {
                    function chunkListener(source, name, data) {
                        if(name == 'HeapProfiler.addHeapSnapshotChunk') {
                            heapData += data.chunk;
                        } else if(name == 'HeapProfiler.finishHeapSnapshot') {
                            chrome.debugger.onEvent.removeListener(chunkListener);
                            chrome.debugger.detach(debugId);
                            //do something with data
                            console.log('Collected ' + heapData.length + ' bytes of JSON data');
                        }
                    }
                    chrome.debugger.onEvent.addListener(chunkListener);
                    chrome.debugger.sendCommand(debugId, 'HeapProfiler.getHeapSnapshot', {uid:data.header.uid, type:data.header.typeId});
                }
                chrome.debugger.onEvent.removeListener(headerListener);
            }
            chrome.debugger.onEvent.addListener(headerListener);
            chrome.debugger.sendCommand(debugId, 'HeapProfiler.takeHeapSnapshot');
        });
    });
});

When parsed, the JSON has nodes, edges, and descriptive metadata about the node and edge types and fields. Alternatively, I could use Timeline events if I just want totals. That said, are there any better ways than what I've found out here?

A: For anyone that finds this in the future, since version 20 Chrome supports window.performance.memory, which returns something like:

{
    totalJSHeapSize: 21700000,
    usedJSHeapSize: 13400000,
    jsHeapSizeLimit: 1620000000
}

A: An alternative approach: write a web page scraper pointing to this URL: chrome://system/
(note: in case this URL changes again, this is the 'master' URL that lists all the chrome diagnostic pages: chrome://chrome-urls/)
The page has a section 'mem_usage' that gives details of memory usage.
Maybe there is some way to script Chrome as a user (say in AutoIT or Python?) that loads this URL in Chrome, then presses the Update button, and then parses the JSON to get the memory usage for whatever tabs you are interested in.
OTHER approaches:
from JavaScript - use window.performance
from JavaScript - use browser-report.js - https://www.npmjs.com/package/browser-report

A: The chrome dev channel has a processes api, chrome.processes. You can query it for a tab's process information, which includes all kinds of memory information.
http://developer.chrome.com/extensions/processes.html
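Building on the performance.memory answer above, a small sketch for logging heap usage over time (Chrome-only, non-standard API, so guard for its absence; the interval and formatting are arbitrary choices):

if (window.performance && performance.memory) {
    setInterval(function () {
        var m = performance.memory;
        console.log('JS heap: ' +
            (m.usedJSHeapSize / 1048576).toFixed(1) + ' MB used of ' +
            (m.totalJSHeapSize / 1048576).toFixed(1) + ' MB');
    }, 1000);
}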
{ "pile_set_name": "StackExchange" }
Q: Is the news story about the missing hard drive containing £4 million GBP of Bitcoins technically feasible? I'll admit to being a complete newbie to Bitcoins. The whole thing had up until now passed me by, until my interest was piqued by this news story about a gentleman in the UK who supposedly threw away a hard drive containing £4 million GBP of Bitcoins, and is now apparently searching through his local landfill.
I'm a little skeptical to say the least, and have a few questions:
Is this technically feasible? I have seen comments about the place suggesting that one individual wouldn't have had access to the computing power needed to mine this many bitcoins in 2009.
Is there any way to verify this guy's story? Surely there is a record somewhere of which bitcoins are registered to whom.
Doesn't Bitcoin have a 'Forgot my password' feature or something similar? Surely something like this exists, or there is some administrative body who could be contacted in this scenario.
Thanks

A: Yes - it's feasible. Bitcoins are released at a constant rate determined by the protocol. At the time, 50 Bitcoins were being generated for every block (this is now 25); and blocks are supposed to be found every 10 minutes. Miners compete amongst each other for this prize. In 2009, you could mine using your computer's CPU and you were only competing with very few other people. There weren't large mining pools and dedicated hardware, so it was easy for anyone to run the client and mine hundreds or thousands of Bitcoins. It was so easy that you wouldn't think that losing your wallet was a big deal.
Maybe, but it'd be a lot of work. All transactions - including mined Bitcoins - are recorded in a publicly accessible global ledger. Many people download the entire transaction history on their own computer, and there are tools for working with it. It might be possible with some detective work to try to track down what address they were likely stored in using the clues given, but it's not easy.
Nope - if you lose your wallet those Bitcoins are lost forever. That's terrible for the person who owned them, but it doesn't impair the rest of the Bitcoin economy. It makes the value of the remaining Bitcoins go up to compensate.
There are already exchanges that will hold your wallet for you if you're worried about things like this. They're more secure than holding them on your personal computer, and in the future it's possible some of them might offer insurance or other guarantees in case they lose your money.
It's a design feature that there's no administrative body who can restore your Bitcoins. You would have to trust this administrative organization not to abuse their power. The developers of the original Bitcoin client have no more power than anyone else using Bitcoin.
{ "pile_set_name": "StackExchange" }