Submitting form with GET method, can't receive on other page My code is super simple, yet it's not working. It used to work, but then I changed something up and now I'm back to basics again. Can you see why it doesn't work? **Client code** ``` <h1>Logg inn</h1> <form method="get" action="herpaderp.php"> <input type="text" class="form-control" id="title" value="Testtittel"> <input type="text" class="form-control" id="desc" value="Testbeskrivelse"> <input type="text" class="form-control" id="src" value="lollolol.jpg"> <button type="submit" class="btn btn-primary">Logg inn</button> </form> ``` **Server code:** ``` <?php if(isset($_GET["title"])) echo $_GET["title"]; if(isset($_GET["desc"])) echo $_GET["desc"]; if(isset($_GET["src"])) echo $_GET["src"]; ?> ```
Simple: your form elements need to be named, not just given an ID. You have `id="title"`, yet it should be `name="title"` (added to what you presently have). You can use `id="title"`, but just make sure you also include `name="title"` - **Do the same for the others** --- (***A lookahead***) => I also noticed you are using the word `desc`; be careful if you're using that in conjunction with a DB, as `desc` is a MySQL [**reserved word**](http://dev.mysql.com/doc/refman/5.5/en/reserved-words.html), *just saying.* An example: The following will throw an error: ``` "INSERT INTO yourTable (desc) VALUES ('value')" ``` Correct method: (using the backtick ` character around the column name) ``` "INSERT INTO yourTable (`desc`) VALUES ('value')" ```
Is there a std::noncopyable (or equivalent)? There's a [boost::noncopyable](http://www.boost.org/doc/libs/master/libs/core/doc/html/core/noncopyable.html) and I have my own noncopyable class in my library. Is there a std::noncopyable or equivalent knocking around in the latest C++ standard? It's a small thing but deriving from such a class makes the intention much clearer.
No, because there is a standard way to make a class non-copyable: ``` class MyClass { MyClass(const MyClass&) = delete; MyClass& operator=(const MyClass&) = delete; }; ``` A class that is non-copyable can however be made movable by providing a constructor taking `MyClass&&`. The declaration to make the class non-copyable (above) can be in the public or private section. If you don't really want to type all that out every time, you can always define a macro something like: ``` #define NONCOPYABLE(Type) Type(const Type&)=delete; Type& operator=(const Type&)=delete class MyClass { NONCOPYABLE(MyClass); // etc. }; ```
Save (Private) Application Settings on iOS? I'm aware of `NSUserDefaults` for saving/restoring ***user*** preferences. What is the equivalent class for an ***application***? For example, the application may have a "last run" field; or it may have a field for a unique identification of the device for use at the application level. My intention is to keep the application's settings (not user's settings) out of the Settings Application, and not backup those settings in iTunes, Time Machine, {whatever}. I'm getting a lot of noise for Java and C#, but not much for iOS/iPhone/iPad.
If you can store a value with NSUserDefaults, then it is fine to store application preferences there too. Alternatively, add a settings.plist to your project and read from it (for values you are not changing later). You can use it like this: ``` + (NSDictionary*)getBundlePlist:(NSString *)plistName { NSString *errorDesc = nil; NSPropertyListFormat format; NSString *plistPath = [[NSBundle mainBundle] pathForResource:plistName ofType:@"plist"]; NSData *plistXML = [[NSFileManager defaultManager] contentsAtPath:plistPath]; NSDictionary *temp = (NSDictionary *)[NSPropertyListSerialization propertyListFromData:plistXML mutabilityOption:NSPropertyListMutableContainersAndLeaves format:&format errorDescription:&errorDesc]; return temp; } + (id) getPropValue:(NSString *)PropertyName { // I am supposing you have added your app preferences to settings.plist. return [[Property getBundlePlist:@"settings"] objectForKey:PropertyName]; //here Property is my class name, then you can use the value via //NSString *value = [Property getPropValue:@"setting1"]; } ```
Why are UTF-8 encoded chars not being displayed correctly on the remote machine only When I view this webpage, which is physically encoded as UTF-8 and has UTF-8 specified as the charset, on my PC (using Firefox) it does not properly display characters that need to be encoded with multiple bytes, such as the ö in Björk. Please click on the link and then open up the B section to see what I mean <http://www.jthink.net/songkong/reports/FixSongsReport00084/FixSongsReport00084_index.html> (The page is hosted on a linux server using jakarta-tomcat) However the original file displays perfectly okay in Firefox when stored as a file on my harddrive. I even copied the file back from the remote site to my local PC to ensure I had the same file, and it still displays okay. So how come it doesn't display ok on the website, could it be a Tomcat problem? **EDIT** In the comment on the first answer it says I need to set the response encoding correctly, how do I do this - the html page is not generated by code, Tomcat is just serving the page as provided. Note I don't want to parse uri parameters as utf8, and I don't want the jsp pages that I created encoded as UTF8, these work fine as ISO-8859-1 and may break if I change them. I just want .html pages to be displayed as UTF8, and only for this application, I have multiple applications in the webapps folder and I am using Tomcat 7 **EDIT** So as suggested in the answer below I've added to my web.xml file ``` <filter> <filter-name>CharacterEncoding</filter-name> <filter-class>org.apache.catalina.filters.SetCharacterEncodingFilter</filter-class> <init-param> <param-name>encoding</param-name> <param-value>UTF-8</param-value> </init-param> </filter> <mime-mapping> <extension>html</extension> <mime-type>text/html;charset=UTF-8</mime-type> </mime-mapping> ``` and that sort of works, the url now displays correctly, but the link doesn't work. When I have a look at the source it seems to be using the correct link, **but the error message shows it as an expanded notation rather than UTF8**. Here's the whole report, so you can click on a link in the left-hand side and see the result in the right-hand side <http://www.jthink.net/songkong/reports/FixSongsReport00084/FixSongsReport00084.html> Even if I copy the link and paste it, it doesn't work, as the link that gets pasted seems to be wrong, although it then corrects itself
Your page is returning this header: > > Content-Type:text/html; charset=**ISO-8859-1** > > > but your page is encoded in UTF-8. You can follow this thread to see how to change the response header: [Tomcat 7.0.35 set HTTP response header Content-Type charset for static HTML files](https://stackoverflow.com/questions/14716878/tomcat-7-0-35-set-http-response-header-content-type-charset-for-static-html-file) **[EDIT]** The second problem relates to the encoding your server is expecting the urls to be encoded with. As they will be encoded with utf-8, you can just update your tomcat config with this: ``` <Connector port="<whatever>" URIEncoding="UTF-8"/> ``` But what I'd strongly recommend is not to use this kind of characters either in your urls or in your html file names. There are more things involved here, such as the encoding used by the user account that starts the server ..... and many more tweaks you will need to take care of. Just avoiding these chars will keep you away from these problems. **[/EDIT]** Hope it helps.
Build Objective-C Library with CMake with ARC enabled I am trying to build an Objective-C ARC enabled library using CMake. When using the "Unix Makefiles" generator I run into a warning: > > method possibly missing a [super dealloc] call > > > I don't run into this warning when using the XCode generator. Is there a flag I can pass to CMake to make sure that the command line build also recognizes this to be an ARC build and not have that warning generated? Thanks
You need to let CMake know that you want to build the project with ARC. Otherwise, it will show the warning. **Option 1** `CTSetObjCARCEnabled` (see Option 2) is available only if the cmake-toolkit is installed. If it is not installed, you can use the following: ``` set_property (TARGET target_name APPEND_STRING PROPERTY COMPILE_FLAGS "-fobjc-arc") ``` **Option 2 (deprecated since 3.0)** Use `CTSetObjCARCEnabled`. Reference is available [here](http://nickhutchinson.me/cmake-toolkit/): > > Enables or disables Objective-C Automatic Reference Counting on a per-directory, per-target or per-source basis. > > > ``` CTSetObjCARCEnabled(<value> <DIRECTORY | TARGETS targets... | SOURCES sources... >) ``` **Useful Tip** Also, as recommended in this [answer](https://stackoverflow.com/a/12159090/1035008), it is helpful to use the following to make sure the project is compiled with ARC enabled: ``` #if ! __has_feature(objc_arc) #error "ARC is off" #endif ```
`do` block inside a computation expressions got error? The following code can be compiled. ``` async { //do ( use outStream = File.Create(downloading) do! httpRequestStreamCopyTo url outStream //) if File.Exists(fullname) then File.Delete(fullname) File.Move(downloading, fullname) } ``` And I will need to close the `outStream` before renaming. So it's changed to ``` async { do ( use outStream = File.Create(downloading) do! httpRequestStreamCopyTo url outStream // Error ) if File.Exists(fullname) then File.Delete(fullname) File.Move(downloading, fullname) } ``` And it got the following error on `do! httpRequestStreamCopyTo (reportingUri url) outStream`? > > Error FS0750 This construct may only be used within computation expressions > > >
You can await an embedded `async` body like this so that `outStream` is properly scoped: ``` async { do! async { use outStream = File.Create(downloading) do! httpRequestStreamCopyTo url outStream } if File.Exists(fullname) then File.Delete(fullname) File.Move(downloading, fullname) } ``` Since that embedded body is blocking, this is conceptually equivalent to sequential `async` calls: ``` async { use outStream = File.Create(downloading) do! httpRequestStreamCopyTo url outStream } async { if File.Exists(fullname) then File.Delete(fullname) File.Move(downloading, fullname) } ```
How to Pivot table in BigQuery I am using Google Big Query, and I am trying to get a pivoted result out from public sample data set. A simple query to an existing table is: ``` SELECT * FROM publicdata:samples.shakespeare LIMIT 10; ``` This query returns following result set. ![enter image description here](https://i.stack.imgur.com/Oj73n.png) Now what I am trying to do is, get the results from the table in such way that if the word is brave, select "BRAVE" as column\_1 and if the word is attended, select "ATTENDED" as column\_2, and aggregate the word count for these 2. Here is the query that I am using. ``` SELECT (CASE WHEN word = 'brave' THEN 'BRAVE' ELSE '' END) AS column_1, (CASE WHEN word = 'attended' THEN 'ATTENDED' ELSE '' END) AS column_2, SUM (word_count) FROM publicdata:samples.shakespeare WHERE (word = 'brave' OR word = 'attended') GROUP BY column_1, column_2 LIMIT 10; ``` But, this query returns the data ![enter image description here](https://i.stack.imgur.com/y4XB5.png) What I was looking for is ![enter image description here](https://i.stack.imgur.com/gaM5T.png) I know this pivot for this data set does not make sense. But I am just taking this as an example to explain the problem. It will be great if you can put in some directions for me. EDITED: I also referred to [How to simulate a pivot table with BigQuery?](https://stackoverflow.com/questions/19414609/how-to-simulate-a-pivot-table-with-bigquery) and it seems it also has the same issue I mentioned here.
**Update 2020:** Just call `fhoffa.x.pivot()`, as detailed in this post: - <https://medium.com/@hoffa/easy-pivot-in-bigquery-one-step-5a1f13c6c710> For the 2019 example, for example: ``` CREATE OR REPLACE VIEW `fh-bigquery.temp.a` AS ( SELECT * EXCEPT(SensorName), REGEXP_REPLACE(SensorName, r'.*/', '') SensorName FROM `data-sensing-lab.io_sensor_data.moscone_io13` ); CALL fhoffa.x.pivot( 'fh-bigquery.temp.a' , 'fh-bigquery.temp.delete_pivotted' # destination table , ['MoteName', 'TIMESTAMP_TRUNC(Timestamp, HOUR) AS hour'] # row_ids , 'SensorName' # pivot_col_name , 'Data' # pivot_col_value , 8 # max_columns , 'AVG' # aggregation , 'LIMIT 10' # optional_limit ); ``` **Update 2019:** Since this is a popular question, let me update to #standardSQL and a more general case of pivoting. In this case we have multiple rows, and each sensor looks at a different type of property. To pivot it, we would do something like: ``` #standardSQL SELECT MoteName , TIMESTAMP_TRUNC(Timestamp, hour) hour , AVG(IF(SensorName LIKE '%altitude', Data, null)) altitude , AVG(IF(SensorName LIKE '%light', Data, null)) light , AVG(IF(SensorName LIKE '%mic', Data, null)) mic , AVG(IF(SensorName LIKE '%temperature', Data, null)) temperature FROM `data-sensing-lab.io_sensor_data.moscone_io13` WHERE MoteName = 'XBee_40670F5F' GROUP BY 1, 2 ``` [![enter image description here](https://i.stack.imgur.com/Qz7lf.png)](https://i.stack.imgur.com/Qz7lf.png) As an alternative to `AVG()` you can try `MAX()`, `ANY_VALUE()`, etc. --- **Previously**: I'm not sure what you are trying to do, but: ``` SELECT NTH(1, words) WITHIN RECORD column_1, NTH(2, words) WITHIN RECORD column_2, f0_ FROM ( SELECT NEST(word) words, SUM(c) FROM ( SELECT word, SUM(word_count) c FROM publicdata:samples.shakespeare WHERE word in ('brave', 'attended') GROUP BY 1 ) ) ``` ![enter image description here](https://i.stack.imgur.com/gaM5T.png) UPDATE: Same results, simpler query: ``` SELECT NTH(1, word) column_1, NTH(2, word) column_2, SUM(c) FROM ( SELECT word, SUM(word_count) c FROM publicdata:samples.shakespeare WHERE word in ('brave', 'attended') GROUP BY 1 ) ```
Why is it impossible for Google to port V8 along with Chrome's codebase in C/Obj-C on iOS? Google has given the reason that all apps on iOS have to run in the sandbox environment except for special apps by Apple that get to tap into some private APIs. So Chrome on iOS uses whatever the UIWebView provides plus it might be doing its UI stuff and some external caching to add something extra to it. But why does Google need access to the Nitro engine that Safari on iOS has access to? Can't Chrome port the whole WebKit engine and V8 for iOS? Xamarin 2.0 manages to port Mono (the .NET runtime) on iOS along with every app.
There are two reasons. 1. Apple wants to review all code that runs on an iOS device in order to ensure the quality of the overall platform. Obviously, in order to *review* the code, they need to *have* it. So, Apple requires that all code that is run by your app, needs to be either part of the app or part of the public iOS APIs. You *can* embed an execution engine in your app, but you *cannot* execute code downloaded from the web. However, the *whole point* of an ECMAScript engine in a web browser is to execute arbitrary code from the web. 2. [Note: I'm not 100% sure about this.] The iOS security model does not allow an app to execute native code from writeable memory. Memory is either writeable or executable (but read-only). However, V8 is a pure compiler, it doesn't have an interpreter. It compiles ECMAScript to native code in memory, then executes it. But the security model prevents this. So, Google would have to first develop an interpreter for V8. But that would be potentially devastating for performance, and would be a substantial development effort. > > Xamarin 2.0 manages to port mono (.net runtime) on iOS along with every app. > > > 1. The code that is executed by the Mono runtime in this case is all part of the app. No code is downloaded from anywhere. 2. Mono contains an interpreter.
EF Core 3 Linq could not be translated I tried to build a query in ef core 3.0 that gets the full process from the db server ``` IEnumerable<int> stIds = stateIds; var rtables = await db.Order. Join(db.OrderDetail, order => order.OrderId, orderdetail => orderdetail.OrderId, (order, orderdetail) => new { order, orderdetail }). Where(x => x.order.SellerId == sellerId && stIds.Contains(x.orderdetail.OrderStateId)&&x.order.RtableId != null) .GroupBy(x => x.order.RtableId ) .Select(x => new RtableState { RtableId = x.Key ?? 0, OrderStateId = x.OrderByDescending(x => x.orderdetail.OrderStateId).Select(x => x.orderdetail.OrderStateId).FirstOrDefault() }).ToListAsync(); ``` I get this error: { "Message": "Processing of the LINQ expression 'AsQueryable<<>f\_\_AnonymousType52>(OrderByDescending<<>f\_\_AnonymousType52, int>(\r\n source: NavigationTreeExpression\r\n Value: default(IGrouping, <>f\_\_AnonymousType52>)\r\n Expression: (Unhandled parameter: e), \r\n keySelector: (x) => x.orderdetail.OrderStateId))' by 'NavigationExpandingExpressionVisitor' failed. This may indicate either a bug or a limitation in EF Core. See <https://go.microsoft.com/fwlink/?linkid=2101433> for more detailed information.", "Inner": "" } I know the query is too complex for EF Core 3.0, but is this a bug or should it not work? My solution is to split the request. ``` IEnumerable<int> stIds = stateIds; var rtableStatesServer = await db.Order. Join(db.OrderDetail, order => order.OrderId, orderdetail => orderdetail.OrderId, (order, orderdetail) => new { order, orderdetail }). Where(x => x.order.SellerId == sellerId && stIds.Contains(x.orderdetail.OrderStateId) && x.order.RtableId != null) .GroupBy(x => new RtableState { RtableId = x.order.RtableId ?? 0, OrderStateId = x.orderdetail.OrderStateId }) .Select(x => new RtableState { RtableId = x.Key.RtableId, OrderStateId = x.Key.OrderStateId }).ToListAsync(); var rtableStates = rtableStatesServer.GroupBy(r => r.RtableId, (key, value) => new RtableState { RtableId = key, OrderStateId = value.OrderByDescending(x=>x.OrderStateId).Select(x => x.OrderStateId).FirstOrDefault() }).ToList(); ```
As indicated in the exception message, the problem is caused by the expression ``` x.OrderByDescending(y => y.orderdetail.OrderStateId) .Select(y => y.orderdetail.OrderStateId) .FirstOrDefault() ``` where `x` is the `IGrouping<,>` produced by the `GroupBy` operator. > > This may indicate either a bug or a limitation in EF Core. > > > I would consider it a limitation, which might never be fixed because a `GroupBy` result containing expressions other than key and aggregate expressions has no natural SQL equivalent. The general solution is to avoid `GroupBy` where possible and use alternative constructs with correlated subqueries. But this particular query has a simple natural solution because the expression ``` set.OrderByDescending(item => item.Property).Select(item => item.Property).FirstOrDefault() ``` can be expressed with ``` set.Max(item => item.Property) ``` which is a standard (thus supported) aggregate. Replace the aforementioned problematic expression with ``` x.Max(y => y.orderdetail.OrderStateId) ``` and the problem will be solved.
VBA auto hide ribbon in Excel 2013 How to `Auto-hide Ribbon` in Excel 2013 in VBA? I would like to achieve exactly what I get by clicking on the upper arrow icon at the right top of the Excel menu marked with blue in the picture below and then clicking on the first option marked with orange: [![enter image description here](https://i.stack.imgur.com/x95d8.png)](https://i.stack.imgur.com/x95d8.png) I would also be interested in VBA switching back to the third option `Show Tabs and Commands`. An important thing for me is to keep the upper arrow icon (marked with blue) in the Excel menu. I have tried hints shown in this thread: [VBA minimize ribbon in Excel](https://stackoverflow.com/questions/19019546/vba-minimize-ribbon-in-excel) but I am not satisfied with the results. **Attempt 1** ``` Application.ExecuteExcel4Macro "Show.ToolBar(""Ribbon"",False)" ``` This is good but hides the blue icon. **Attempt 2** ``` CommandBars.ExecuteMso "MinimizeRibbon" ``` This is close to what I want. This keeps the blue icon but does not hide the entire menu. It switches to the second option displayed in the picture, `Show Tabs`. **Attempt 3** ``` SendKeys "^{F1}" ``` The attempt does not work at all. Moreover, it is supposed to imitate attempt 2. So even that would not satisfy me.
I can't see that anyone else has brought this up... This isn't a workaround, this is the actual idMSO for what I think you're looking for. This code makes my Excel window look like everything is gone the same way the first option does for `Auto-Hide Ribbon`. **Before the code runs, my window looks like this, in the 'Restore' size:** [![enter image description here](https://i.stack.imgur.com/rZ9yl.png)](https://i.stack.imgur.com/rZ9yl.png) **Running the following code:** ``` Sub HideTheRibbon() CommandBars.ExecuteMso "HideRibbon" End Sub ``` **Will make your window look like this, in the maximized window size (just like what would happen if you were to press the `Auto-Hide Ribbon` button manually):** [![enter image description here](https://i.stack.imgur.com/mWIoA.png)](https://i.stack.imgur.com/mWIoA.png) **If you want the ribbon automatically hidden when the workbook opens, put this in the workbook code:** ``` Sub Workbook_Open() CommandBars.ExecuteMso "HideRibbon" End Sub ``` **Alternatively, to achieve the same thing, you could put this code in a module:** ``` Sub Auto_Open() CommandBars.ExecuteMso "HideRibbon" End Sub ``` **If you want the window to revert back to normal, you run the exact same code again.** In other words, the following code would make no visual change at all when run because the idMSO "HideRibbon" is a toggleButton: ``` Sub HideTheRibbon() CommandBars.ExecuteMso "HideRibbon" CommandBars.ExecuteMso "HideRibbon" End Sub ``` **If you want a full list of all the idMSO in Excel, click the following that apply to you: [Excel 2013+](https://www.microsoft.com/en-us/download/details.aspx?id=36798), [Excel 2010](https://www.microsoft.com/en-us/download/details.aspx?id=6627), [Excel 2007](https://msdn.microsoft.com/en-us/library/dd909393(v=office.12).aspx)**
How to integrate line and area google chart in the same chart I need to make a chart which uses the same data and displays a line chart and an area chart. How do I combine line and area charts? This is the data: ['Year', 'Sales', 'Expenses','Total'], ['2004', 1000, 400,600], ['2005', 1100, 200,900], ['2006', 6000, 5000,1000], ['2007', 1000, 500,500] And I need Sales and Expenses as line charts and Total as an area chart.
You could use a [combo chart](https://developers.google.com/chart/interactive/docs/gallery/combochart). These are charts that let you render each series as a different marker type from the following list: line, area, bars, candlesticks and stepped area. To assign a default marker type for series, specify the seriesType property. Use the series property to specify properties of each series individually. There is an [example](http://code.google.com/apis/ajax/playground/?type=visualization#combo_chart) in the link that you could edit. You used to be able to do a [compound chart](https://developers.google.com/chart/image/docs/gallery/compound_charts) but these are sadly deprecated now. Example of area and line: ``` <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="content-type" content="text/html; charset=utf-8"/> <title> Google Visualization API Sample </title> <script type="text/javascript" src="http://www.google.com/jsapi"></script> <script type="text/javascript"> google.load('visualization', '1', {packages: ['corechart']}); </script> <script type="text/javascript"> function drawVisualization() { // Create and populate the data table. var data = google.visualization.arrayToDataTable([ ['Year', 'Sales', 'Expenses', 'Total'], ['2004', 1000, 400, 600 ], ['2005', 1100, 200, 900 ], ['2006', 6000, 5000, 1000], ['2007', 1000, 500, 500 ], ]); // Create and draw the visualization. var ac = new google.visualization.ComboChart(document.getElementById('visualization')); ac.draw(data, { title : 'Sales & Expenses by Year', width: 600, height: 400, vAxis: {title: "Sales"}, hAxis: {title: "Year"}, seriesType: "area", series: {5: {type: "line"}} }); } google.setOnLoadCallback(drawVisualization); </script> </head> <body style="font-family: Arial;border: 0 none;"> <div id="visualization" style="width: 600px; height: 400px;"></div> </body> </html> ``` prints ![Sales and Expenses](https://i.stack.imgur.com/AaPhz.png)
algorithm to detect time, date and place from invitation text I am researching some Natural Language Processing algorithms to read a piece of text, and if the text seems to be trying to suggest a meeting request, it sets up that meeting for you automatically. For example, if an email text reads: > > Let's **meet tomorrow** someplace in **Downtown at 7pm**". > > > The algorithm should be able to detect the Time, date and place of the event. Does someone know of some already existing NLP algorithms that I could use for this purpose? I have been researching some NLP resources (like [NLTK](http://nltk.org) and [some tools in R](http://cran.r-project.org/web/views/NaturalLanguageProcessing.html)), but did not have much success. Thanks
This is an application of [information extraction](https://en.wikipedia.org/wiki/Information_extraction), and can be solved more specifically with sequence segmentation algorithms like hidden Markov models (HMMs) or conditional random fields (CRFs). For a software implementation, you might want to start with the [MALLET toolkit](http://mallet.cs.umass.edu/) from UMass-Amherst, it's a popular library that implements CRFs for information extraction. You would treat each token in a sentence as something to be labeled with the fields you are interested in (or 'x' for none of the above), as a function of word features (like part of speech, capitalization, dictionary membership, etc.)... something like this: ``` token label features ----------------------------------- Let x POS=NNP, capitalized 's x POS=POS meet x POS=VBP tomorrow DATE POS=NN, inDateDictionary someplace x POS=NN in x POS=IN Downtown LOCATION POS=NN, capitalized at x POS=IN 7pm TIME POS=CD, matchesTimeRegex . x POS=. ``` You will need to provide some hand-labeled training data first, though.
Hashing string into integer in Java applet - how does it work? I'm now working on a challenge from the website <http://www.net-force.nl/challenges/> and I stand before an interesting problem I can't solve. I'm not asking for the whole result (as it would be breaking the rules), but I need help with the programming theory of the hash function. Basically, it's based on a Java applet with one textfield, where the user has to enter the right password. When I decompile the .class file, one of the methods I get is this hash method. `String s` contains the entered password, immediately given to the method: ``` private int hash(String s) { int i = 0; for(int j = 0; j < s.length(); j++) i += s.charAt(j); return i; } ``` The problem is that **the method returns an integer** as the "hash", but *how can characters be converted to an integer at all*? I got an idea that maybe the password is a number, but it doesn't lead anywhere at all. Another idea talks about ASCII, but still nothing. Thanks for any help or tips.
The trick is that it's converting each *character* into an integer. Each character (`char`) in Java is a *UTF-16 code unit*. For the most part1, you can just think of that as each character is mapped to a number between 0 and 65535 inclusive, in a scheme called [Unicode](http://unicode.org). For example, 65 is the number for 'A', and if you'd typed in the Euro symbol, that would map to [Unicode U+20AC](http://www.fileformat.info/info/unicode/char/20ac/index.htm) (8364). Your hashing function basically adds together the numbers for each character in the string. It's a very poor hash (in particular it gives the same results for the same characters regardless of ordering), but hopefully you'll get the idea. --- 1 Things get trickier when you need to bear in mind *surrogate pairs*, where a single Unicode character is actually made up of two UTF-16 code units - that's for characters with a Unicode number of more than 65535. Let's stick to the basics for the moment though :)
Update vue.js data automatically when bound input field's value is changed by JavaScript I'm new to vue.js. It seems [Two-way binding](http://vuejs.org/guide/index.html#Two-way_Binding "Two-way binding") of vue.js only listens to input events from the user; if you change the value by JS, vue.js's value is not updated. Here is an example of what I mean: ``` function setNewValue() { document.getElementById('my-field').value = 'New value'; } new Vue({ el: '#app', data: { message: 'Old value' } }) ``` ``` <script src="https://cdn.jsdelivr.net/vue/latest/vue.js"></script> <div id="app"> <p>{{ message }}</p> <input id="my-field" v-model="message"> </div> <button onclick="setNewValue()">Set new value</button> ``` If you click the "Set new value" button, the field's value is changed to "New value" but the text above it is still "Old value" instead of "New value". But if you change the text in the field directly, the text above is changed synchronously. Is there any way we can keep this in sync when updating the value with JavaScript?
Besides using `$set`, as @taggon suggested, you can also work with the data model directly. ``` function setNewValue() { model.message = 'New value'; } var model = { message: 'Old value' }, app = new Vue({ el: '#app', data: model }); ``` ``` <script src="https://cdn.jsdelivr.net/vue/latest/vue.js"></script> <div id="app"> <p>{{ message }}</p> <input id="my-field" v-model="message"> </div> <button onclick="setNewValue()">Set new value</button> ``` Or better still, manipulate the data with a method of your Vue instance, and use `v-on:` to handle the `click` event: ``` var app = new Vue({ el: '#app', data: { message: 'Old value' }, methods: { setNewValue: function () { this.message = 'New value'; } } }); ``` ``` <script src="https://cdn.jsdelivr.net/vue/latest/vue.js"></script> <div id="app"> <p>{{ message }}</p> <input id="my-field" v-model="message"> <button v-on:click="setNewValue">Set new value</button> </div> ``` The `button` must be part of the `app` template in this case, otherwise the `v-on:` event binding doesn't work.
Deno on CentOS 7: 'GLIBC\_2.18' not found How to run [Deno](https://deno.land/) on [Webfaction](https://www.webfaction.com/)'s CentOS 7 (64-bit)? It gives an error: ``` deno: /lib64/libc.so.6: version `GLIBC_2.18' not found (required by deno) ```
Current Deno release 1.0.0 (latest today) is not compatible with CentOS 7. I tried on the latest distribution released on 27 April 2020: ``` $ cat /etc/redhat-release CentOS Linux release 7.8.2003 (Core) ``` This issue [7 GLIBC\_2.18 not found](https://github.com/denoland/deno/issues/1658) suggests that there should be a way to solve this problem, but the thread seems to have been abandoned for a year. As of today Deno requires `GLIBC_2.18`, but unfortunately CentOS 7 is running `2.17`, an old version of glibc which is not enough: ``` $ ldd --version ldd (GNU libc) 2.17 ``` If you need to run Deno on CentOS you'll need to use `CentOS 8`. I tested it and it works. From [How to fix “/lib/x86\_64-linux-gnu/libc.so.6: version `GLIBC\_2.14' not found”](https://superuser.com/a/537694/251249): > > That means the program was compiled against glibc version 2.14, and it requires that version to run, but your system has an older version installed. You'll need to either recompile the program against the version of glibc that's on your system, or install a newer version of glibc (the "libc6" package in Debian). > > > It will not happen for CentOS 7. From [glibc\_2.18 on Centos 7](https://forums.centos.org/viewtopic.php?t=71740): > > No. Never going to happen. We ship glibc 2.17 as part of CentOS 7 and that will never change. It's part of the basic RHEL standards that stuff like this does not change within a major version. > > > I wouldn't count on Deno — which is a new technology — to backport with old compilers. Consider upgrading your servers to CentOS 8.
Passing one class/struct vs passing several parameters I am wondering about performance considerations. Would it be better **performance-wise** to pass into a method call a class or a struct (which?) holding two int parameters and an object reference, or would it be better to pass these three as three parameters? Note: I am developing on the Unity3D platform, so it's possible some things work differently than in classical .NET
If you pass a `class`, then the first thing you need to do is create an object, which will later need to be garbage collected. Not a problem normally, but on some Unity target platforms that could be undesirable *if* you are doing this lots. If you pass a `struct` *normally*, then it needs to copy the struct. If it was big enough for you to be asking questions about encapsulation (i.e. representing more than a few parameters), then it is probably non-trivially big, and could be a pain to copy all the time, especially if passing down a few layers. Passing individual parameters is fairly well known, and shouldn't cause a huge problem. Optional parameters in 4.0 can make adding more parameters less painful, *but* they are still passed (just with their default values). An interesting option **for unity** is passing a `struct` *by ref*, since this **doesn't** copy the contents (it just passes a reference to the value you created, typically on the stack), and **doesn't** require any garbage collection, i.e. ``` var args = new SomeArgs(123, "abc", 23.4F, false); // <=== a struct SomeMethod(ref args); ``` However, in most useful cases, I can't see that this gains you much over just ``` SomeMethod(123, "abc", 23.4F, false); ``` unless of course you are passing the parameters down a few layers - in which case it has the advantage that it doesn't need any copying. But watch out: the values are no longer independent between layers in the call-stack (regular parameters, not passed `ref`, are isolated to the method). I *think* that at least describes some of the things to *consider*, but I don't have a conclusive answer for you. If you want something more scientific, you probably need to measure.
How to paint over safe area in jetpack compose? I am trying to paint that dark blue area with the gradient as well. ![](https://i.stack.imgur.com/rkRtH.png). I am basically looking for `ignoreSafeArea` (iOS SwiftUI) ![](https://i.stack.imgur.com/O146K.png) Equivalent for Jetpack Compose. I could try painting that bar the same shade of blue I used for my gradient but I don't think that is the best solution. I have tried changing the `appBar` color but the result is not what I am looking for.
This bar is the Android [Status Bar](https://material.io/design/platform-guidance/android-bars.html#status-bar). To change its color in Jetpack Compose you can use the Google [Accompanist](https://google.github.io/accompanist/) library, specifically the [System UI Controller](https://google.github.io/accompanist/systemuicontroller/). > > System UI Controller provides easy-to-use utilities for updating the System UI bar colors within Jetpack Compose. > > > Specifically the `setSystemBarsColor` or `setStatusBarColor` functions. ``` systemUiController.setStatusBarsColor( color = Color.Transparent, //set your color here darkIcons = true ) ``` Then, to draw under the status bar area you can use the [`WindowCompat`](https://developer.android.com/reference/kotlin/androidx/core/view/WindowCompat) in your `MainActivity` ``` WindowCompat.setDecorFitsSystemWindows(window, false) setContent { MyApp( ... ) } ``` To prevent content (like AppBar) from going under system icons I used [Inset-aware layouts](https://google.github.io/accompanist/insets/#inset-aware-layouts-insets-ui) by setting a Box with top padding passed from Accompanist Scaffold. ``` Box(Modifier.padding(top = contentPadding.calculateTopPadding())) { // my app content } ```
CORS Error in Laravel 7 using Laravel Lighthouse I have an API built with Laravel and Lighthouse-php(for GraphQL). My client is built with Vue js and uses Apollo for the graphQL client-side implementation. Anytime I make a request, I get the following error: ``` Access to fetch at 'http://localhost:8000/graphql' from origin 'http://localhost:8080' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled. ``` Naturally, I proceeded to install laravel-cors package but I realized afterwards that it came by default with my Laravel installation (7.2.2). Which meant that `\Fruitcake\Cors\HandleCors::class` was already added to the middleware array in `Kernel.php` and the cors config file was already in my config directory. After some googling, I realized that I needed to add `\Fruitcake\Cors\HandleCors::class` to the `route.middleware` array in my `config/lighthouse.php` file It still did not work. I have restarted the server, cleared cache, cleared config and run `composer dump-autoload` but I still get the error. I have no idea how to get past this. Any help will be appreciated. **Versions** Laravel 7.2.2 Laravel Lighthouse 4.10
I got some help from the folks at lighthouse [here](https://github.com/nuwave/lighthouse/issues/1266). The problem was with my cors configuration. I needed to add `graphql` to the path array in config/cors but I mistakenly added `graphql/*`. So the path array looked like this ``` 'paths' => ['api/*', 'graphql/*'], ``` instead of this ``` 'paths' => ['api/*', 'graphql'], ``` After making the changes, I run the following: `php artisan cache:clear`, `php artisan config:clear` and `composer dump-autoload` before the CORS error was gone. The full configuration that worked for me was ``` return [ 'paths' => ['api/*', 'graphql'], 'allowed_methods' => ['*'], 'allowed_origins' => ['*'], 'allowed_origins_patterns' => [], 'allowed_headers' => ['*'], 'exposed_headers' => false, 'max_age' => false, 'supports_credentials' => false, ]; ```
Correctly Mapping viewmodel to entity I have my entity as: ``` public class User { public int Id { get; set; } public string Name { get; set; } public string Address { get; set; } } ``` I have my UserViewModel as ``` public class UserViewModel { public int Id { get; set; } public string Name { get; set; } public string Address { get; set; } } ``` I am using these as below in my controller: ``` //This is called from my view via ajax public void Save(UserViewModel uv) { // this throws error: cannot convert from UserViewModel to Entity.User MyRepository.UpdateUser(uv); } ``` My UpdateUser in repository class is as below: ``` public void UpdateUser(User u) { var user = GetUserDetails(u.Id); user.Name = u.Name; user.Address = u.Address; //using entity framework to save _context.SaveChanges(); } ``` How can I correctly map UserViewModel in my controller to my entity
By using [AutoMapper](https://www.nuget.org/packages/AutoMapper/) you can do something like: ``` public void Save(UserViewModel uv) { var config = new MapperConfiguration(cfg => { cfg.CreateMap<UserViewModel, User>(); }); User u = config.CreateMapper().Map<User>(uv); MyRepository.UpdateUser(u); } ``` Or manually: ``` public void Save(UserViewModel uv) { User u = new User() { Id = uv.Id, Name = uv.Name, Address = uv.Address }; MyRepository.UpdateUser(u); } ``` Doing it manually is not ideal: if you change your view-model or your model, you have to change this code as well, but with `AutoMapper` you don't need to change the code. **EDIT1:** It is not a good idea to use the view-model in the repository (the data access core), so it would be better to keep your `public void UpdateUser(User u)` and not change it; from the outside it is better to pass a `User` to `UpdateUser`, not a `UserViewModel`, like what you have done before. **EDIT2:** In my opinion none of the posted answers respects SOC (Separation of Concerns), not even mine... 1- If I pass `UserViewModel` to the repository I violate SOC.... 2- On the other side, if I use `User` directly in the presentation layer I also violate SOC. I think the best way is a middle layer as a proxy.... Presentation <----> Proxy <----> Repository.
How can I log a message on every Marionette view render? I have a Marionette application with a large number of views. I want to log a debug message to the console when each is rendered (and possibly, in the future, on other events) I am currently doing this by logging from each View's `onRender` method: ``` MyModule.MyViewType = Marionette.ItemView.extend({ // ... view properties and methods onRender: function() { // ... other onRender code console.debug('MyModule.MyViewType %s %s', this.cid, 'render'); } }); ``` This works, but it has several disadvantages: - The logging code must be added manually to each view. - Several views don't have custom `onRender` actions so I'm adding methods for the purpose of debugging only. That feels wrong. - If I want to alter or remove the logging methods completely (e.g. to go to production), I need to alter a lot of code. - If I want to add code for another event, e.g. `show`, I will need to add an event handler or a new method to every view. **Is there a way to log every View render without adding code to each View?**
Yes. You can [decorate](http://en.wikipedia.org/wiki/Decorator_pattern) `Backbone.View.constructor` to hook into the View creation lifecycle. You can then register callbacks for any event on all View instances. ``` !function() { // Development: log view renders and other events // Don't execute this function in production. // Save a reference to the original Backbone.View and create a new type // that replaces the constructor method var OriginalView = Backbone.View, LoggingView = OriginalView.extend({ constructor: function() { // Execute the original constructor first OriginalView.apply(this, arguments); // Allow views to define a `type` property to clarify log messages var type = this.type || 'Unknown View Type', cid = this.cid; // Bind to Marionette.View's `render` event (and any other events) this.listenTo(this, 'render', function(e,b) { console.debug('%s %s - %s', type, cid, 'render'); }); } }); // Replace Backbone.View with our decorated view Backbone.View = LoggingView; }(); ``` To log view types, add a property to your View implementations: ``` MyModule.MyViewType = Marionette.ItemView.extend({ type: 'MyModule.MyViewType', // ... rest of the view code }); ``` Reliably determining a JavaScript object's "type" (constructor name) is [problematic](https://stackoverflow.com/questions/9686001/get-a-backbone-model-instances-model-class-name), so adding this property is the best approach to determine the type of view that is being rendered. This answer is easily generalised to multiple event types by providing an array of the events you want to be logged: ``` var events = ['render', 'show', 'beforeClose']; events.forEach(function(eventType) { this.listenTo(this, eventType, function() { console.debug('%s %s - %s', type, cid, eventType) }); }, this); ``` Example output: ``` Projects.ViewType1 view13 - render Projects.ViewType2 view3 - render Projects.ViewType3 view6 - render Projects.ViewType4 view9 - render Projects.ViewType4 view17 - render Projects.ViewType4 view19 - render Projects.ViewType2 view3 - render ``` This approach solves all of the problems described in the question - there is a small amount of code, Views don't need altering directly and there is a single function call that can be omitted to remove logging in production code. This approach is specific to Marionette - because vanilla Backbone's `render` method is user-defined there is no equivalent of the `render` event. For more detail on extending Backbone constructor methods see [Derick Bailey's blog](http://lostechies.com/derickbailey/2013/07/29/prototypes-constructor-functions-and-taxidermy/).
Strange behavior of unset in foreach by reference loop I know one should not modify physical structure of array while looping by reference, but I need explanation of what is going on in my code. Here we go: ``` $x= [[0],[1],[2],[3],[4]]; foreach ($x as $i => &$upper) { print $i; foreach ($x as $j => &$lower) { if($i == 0 && $j == 2) { unset($x[2]); } else if($i == 1 && $j == 3) { unset($x[3]); } } } ``` The output is `01`. Surprising that outer loop iterates only twice, for indices `0` and `1`. I was expecting the output to be `014`. I have read lots of blog posts and questions about hazards of using array references, but nothing that can explain this phenomenon. I am breaking my head over it for hours now. # EDIT: The code above is the minimal reproducible code. One explanation (but an *incorrect* one) that might seem to be the case is this: *The outer loop goes through two iterations before the internal pointer is set to index `2`. But the loop does not find any element at index `2` and thus thinks no elements are left and quits.* The problem with this theory is it doesn't quite explain this code: ``` $x= [[0],[1],[2],[3],[4]]; foreach ($x as $i => &$upper) { print $i; foreach ($x as $j => &$lower) { if($i == 0 && $j == 2) { unset($x[2]); // No if else here unset($x[3]); } } } ``` By the same token, the above code should also produce `01`, but its actual output is `014`, as expected. Even when two items in a series are removed, php knows that are still elements left to be iterated over. Could this possibly be a bug with php scripting engine?
A simple code to reproduce your issue: ``` $x = [0, 1, 2]; foreach ($x as $k => &$v) { print $k; if ($k == 0) { unset($x[1]); } end($x); // move IAP to end next($x); // move IAP past end (that's the same as foreach ($x as $y) {} would do) } ``` If you foreach over an array, it's copied (= no problem when iterating, you'll iterate over the full original array). But if you foreach by reference, the array is *not* copied (the reference needs to match the original array, so copying impossible). Foreach internally always saves the position of the next element to iterate over. But when the next position of an array is removed, foreach needs to go back to the array and check it's internal array pointer (IAP). In this case the next position is destroyed *and* the IAP is past the end, it ends the loop. That's what you're seeing here. --- Also interesting: hhvm has a different behaviour to php here: <http://3v4l.org/81rl8> --- Addendum: The infinite foreach loop: ``` $x = [0,1,2]; foreach ($x as $k => &$v) { print $k; if ($k == 1) { unset($x[2]); } else { $x[2] = 1; } reset($x); } ``` If you understood my explanations above, guess why that loops indefinitely.
operator const char\* overwrites (?) another variable of mine in a strange way ``` #include <iostream> #include <sstream> class Vector { double _x; double _y; public: Vector(double x, double y) : _x(x), _y(y) {} double getX() { return _x; } double getY() { return _y; } operator const char*() { std::ostringstream os; os << "Vector(" << getX() << "," << getY() << ")"; return os.str().c_str(); } }; int main() { Vector w1(1.1,2.2); Vector w2(3.3,4.4); std::cout << "Vector w1(" << w1.getX() << ","<< w1.getY() << ")"<< std::endl; std::cout << "Vector w2(" << w2.getX() << ","<< w2.getY() << ")"<< std::endl; const char* n1 = w1; const char* n2 = w2; std::cout << n1 << std::endl; std::cout << n2 << std::endl; } ``` Output of this program: ``` $ ./a.out Vector w1(1.1,2.2) Vector w2(3.3,4.4) Vector(3.3,4.4) Vector(3.3,4.4) ``` I don't understand why I get this output. It seems that "const char\* n2 = w2;" overwrites n1 and then I get "Vector(3.3,4.4)" two times. Could anybody explain this phenomenon to me?
It's *undefined behavior* that sometimes works (by luck), sometimes doesn't. You're returning a pointer to a temporary local object: the pointer points into the internals of the temporary string object obtained by calling `os.str().c_str()`. If you want to print those objects easily with `cout`, you can overload operator `<<` for output streams (note that `getX()` and `getY()` need to be declared `const` for this to compile), like: ``` ostream& operator<<(ostream& out, const Vector &a) { std::ostringstream os; os << "Vector(" << a.getX() << "," << a.getY() << ")"; out << os.str(); return out; } ``` and then ``` std::cout << w1 << std::endl; std::cout << w2 << std::endl; ```
Storing 100+ Objects in Node.js "global" variable My configuration settings are dynamic, so I decided to save them in a database. These configurations are classified by the `NODE_ENV`, so I want to bootstrap the configuration by querying the database once the server starts, because I need the configuration for every request. So, I keep the configuration in a global object. My questions are: 1. Is this a feasible approach? 2. Is there any way to bootstrap the configuration? 3. Will storing this object lead to a memory leak? 4. How much space will it take to store the configuration in a global? 5. Will the global objects be garbage collected?
1. It is definitely feasible. However, I don't know the context of your application, so I can't say if it would be good. 2. In order to load the configuration, you'd have to connect to the database and perform the load before you attempted to access it. Doing this would be dependent on your application, but you can always connect to the database before you give up control to the application. Then, it would be a simple matter of picking the global name to store it in, and store it in the global. 3. No. Not unless you keep creating objects over and over again uncontrollably, and somehow have them referenced from some object that is in scope somewhere, which is extremely difficult to do without noticing. 4. Honestly? Not much. The standard library of node.js is probably much bigger than your configuration. Hell, probably the library you'd be using to connect to the database to get your configuration would be bigger than your configuration. 5. No. Not as long as you are able to access it. Node.JS uses a heavily modified mark-and-sweep garbage collector; this means that Node.JS has a list of objects that are *never, ever* garbage collected (the "roots"), and marks every object it can access from these objects. Anything not marked is collected, and freed. The global namespace is one of these roots, meaning anything that the global namespace references is not collected. Hope that helps.
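To make point 2 concrete, here is a minimal Node.js sketch of that bootstrap sequence; the `loadConfig` helper, its example SQL, and the port are hypothetical placeholders for whatever database client and server setup you actually use:

```
// bootstrap.js - load the configuration before the app starts handling requests
var http = require('http');

// Hypothetical helper: replace the body with a query against your own database
// client, e.g. SELECT key, value FROM app_config WHERE node_env = ?
function loadConfig(env, callback) {
  callback(null, { featureX: true, cacheTtlSeconds: 300 });
}

loadConfig(process.env.NODE_ENV || 'development', function (err, config) {
  if (err) {
    console.error('Failed to bootstrap configuration', err);
    process.exit(1);
    return;
  }

  // Store the loaded settings in a global so every request handler
  // can read them without querying the database again.
  global.appConfig = config;

  http.createServer(function (req, res) {
    // The global is only referenced here, never recreated, so it stays
    // reachable from the roots and is not garbage collected.
    res.end(JSON.stringify(global.appConfig));
  }).listen(3000);
});
```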
Json Parser Example for iOS5 I have created an app in iOS5. I want to store and retrieve data on a web server. For that I want to use a JSON parser. I have used the demo given by Ray Wenderlich for iOS 4. I heard that for iOS 5 some ready-made frameworks are available. So does anybody have any idea regarding this? I am using sqlite for data storage. Please help.
You may want to refer to the SBJSON framework which is frequently used with Objective-C; I have used this framework in many of my iOS apps. **You may get a reference about it from these links:** <https://github.com/stig/json-framework> <http://stig.github.com/json-framework/> **Example Code**: Also here is an example on how to parse JSON with the SBJSON parser: [How to parse JSON into Objective C - SBJSON](https://stackoverflow.com/questions/4919249/how-to-parse-json-into-objective-c-sbjson) <http://jasarien.com/?p=428> Here is how you need to configure SBJSON into your app: <http://deshartman.wordpress.com/2011/09/02/configuring-sbjson-framework-for-xcode-4-2/> Hope this helps you.
Setup Grails in GGTS 3.4 behind Proxy I wish to share the knowledge that how I fix the proxy problem to setup GGTS in my company's PC after I run the setup.exe of GGTS 3.4 downloaded from SpringSource (by googling for a whole day around www ;)) Problem: When I start to build my hello world project after the IDE installation, GGTS prompts me this error message: "Error Failed to resolve dependencies". How to fix it?
I'll answer my own question: 1. Go to IE and find the proxy url from the specific proxy file (in Internet Connection Settings), or just ask your firm's support guys. 2. Go to your Grails bin folder and run this: ``` grails add-proxy client "--host=your.proxy.com" "--port=xxxx" "--noproxy='localhost'" ``` 3. It will give you a line of feedback like: "Added proxy client to \\path.grails\ProxySettings.groovy". Now you should open the file, check the url, port, username & password, and also make sure it contains a second line like this: **currentProxy='client'** 4. There is an article suggesting changing a string in the first line of the ProxySettings.groovy from **http.proxyUser** to **http.proxyUserName**. In my own case the proxy doesn't require un/pw so I am not sure if it is vital or not (source: <http://web.archive.org/web/20130910035021/http://jira.grails.org/browse/GRAILS-10097>) 5. Now right click the project in GGTS, Grails Tools > Refresh Dependencies, or just re-create the hello world project. Hurray!
Display transparent box on a HTML table row when mouse over (not highlight row) I want to display a transparent box (with some buttons on it) surrounding the table row which the user mouses over. I searched on Google, but all pages just explain how to highlight a row on mouse over. I use JavaScript to add the mouse over event. ``` $('tr').on('mouseover', displayBox); ``` Can you help me solve this problem or give me some reference article? For example: ![Pic](https://s11.postimg.org/l8la6vcab/Example.png)
## The Overlay We can create the overlay with a [`:before` pseudo element](https://developer.mozilla.org/en-US/docs/Web/CSS/::before) — `tbody tr td:first-child:before`: - It is given 100% width and will stretch the width of the row. - It is given the same height as the `td` and will stretch the height of the row - The table is made `position: relative` so that the cells `:before` child is positioned relative to the table and can stretch across the entire row. ## The Buttons div The buttons can be provided in a div of the last cell in each row — no javascript is needed. This will need to be tweaked slightly as they are offset slightly too low in Firefox. - The div inside each rows last cell is hidden with `opacity` until the row is hovered. When hovered it is shown with: ``` tr:hover td > div { opacity: 1; } ``` - The `td:last-child` is made `position: relative` so that the overlay div which has `position: absolute` will be positioned relative to its parent td ## Working Example ``` * { box-sizing: border-box; } table, tr td:last-child { position: relative; } th, td { padding: 0 10px; height: 2em; } td > div { position: absolute; opacity: 0; transition: opacity 0.5s; right: 0; top: 0.5em; /* 1/4 height of td*/ height: 2em; /*height of td*/ } tr:hover td > div { opacity: 1; } tbody tr td:first-child:before { width: 100%; content: ''; display: block; height: 2em; position: absolute; background: rgba(0, 0, 0, 0); margin-top: -6px; /* off set space above text */ left: 0; transition: background 0.5s; } tbody tr:hover td:first-child:before { background: rgba(0, 0, 0, 0.6); } td > div > a { margin: 0 0.25em 0 0; background: #1DE9B6; color: #FFF; text-decoration: none; border-radius: 2px; padding: 3px; transition: color 0.5s, background 0.5s; } /*Not important -- example only*/ td > div > a:hover { background: #A7FFEB; color: #000; } table { border-collapse: collapse; border: solid 1px #EEE; } th, td { border: solid 1px #EEE; transition: background 0.5s; } tr:nth-child(even) { background: #E3F2FD; } ``` ``` <table> <thead> <tr> <th>Heading</th> <th>Heading</th> <th>Heading</th> <th>Heading</th> </tr> </thead> <tbody> <tr> <td>Content</td> <td>Content</td> <td>Content</td> <td>Content <div><a href="#">Action</a><a href="#">Action</a><a href="#">Action</a></div> </td> </tr> <tr> <td>Content</td> <td>Content</td> <td>Content</td> <td>Content <div><a href="#">Action</a><a href="#">Action</a><a href="#">Action</a></div> </td> </tr> <tr> <td>Content</td> <td>Content</td> <td>Content</td> <td>Content <div><a href="#">Action</a><a href="#">Action</a><a href="#">Action</a></div> </td> </tr> <tr> <td>Content</td> <td>Content</td> <td>Content</td> <td>Content <div><a href="#">Action</a><a href="#">Action</a><a href="#">Action</a></div> </td> </tr> </tbody> <tfoot> <tr> <td colspan="4">Footer</td> </tr> </tfoot> </table> ```
Rvalue reference: Why aren't rvalues implicitly moved? In the Artima article about C++ rvalue references (<http://www.artima.com/cppsource/rvalue.html>) there are these words: *That's why it is necessary to say move(x) instead of just x when passing down to the base class. This is a key safety feature of move semantics designed to prevent accidently moving twice from some named variable.* I can't think of a situation where such a double move could happen. Can you give an example of this? In other words, what would go wrong if every use of a `T&&` variable were treated as an rvalue and not just a reference?
Consider this scenario: ``` void foo(std::string x) {} void bar(std::string y) {} void test(std::string&& str) { // to be determined } ``` We want to call `foo` with `str`, then `bar` with `str`, both with the same value. The best way to do this is: ``` foo(str); // copy str to x bar(std::move(str)); // move str to y; we move it because we're done with it ``` It would be a mistake to do this: ``` foo(std::move(str)); // move str to x bar(std::move(str)); // move str to y...er, except now it's empty ``` Because after the first move the value of `str` is unspecified. So in the design of rvalue references, this implicit move is not there. If it were, our best way above would not work because the first mention of `str` would be `std::move(str)` instead.
What is special about deleting an empty list? Consider the following... ``` In [1]: del [] In [2]: del {} File "<ipython-input-2-24ce3265f213>", line 1 SyntaxError: can't delete literal In [3]: del "" File "<ipython-input-3-95fcb133aa75>", line 1 SyntaxError: can't delete literal In [4]: del ["A"] File "<ipython-input-5-d41e712d0c77>", line 1 SyntaxError: can't delete literal ``` What is special about `[]`? I would expect this to raise a `SyntaxError` too. Why doesn't it? I've observed this behavior in Python2 and Python3.
The [`del` statement syntax](http://docs.python.org/3/reference/simple_stmts.html#the-del-statement) allows for a [`target_list`](http://docs.python.org/3/reference/simple_stmts.html#grammar-token-target_list), and that includes a list or tuple of variable names. It is intended for deleting several names at once: ``` del [a, b, c] ``` which is the equivalent of: ``` del (a, b, c) ``` or ``` del a, b, c ``` But python does not enforce the list to actually have *any* elements. The expression ``` del () ``` on the other hand *is* a syntax error; `()` is seen as a literal empty tuple in that case.
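A quick interactive check (Python 3, though Python 2 behaves the same) shows both halves of this: the bracketed form really does delete each listed name, and the empty list is simply a no-op:

```
>>> a, b = 1, 2
>>> del [a, b]        # same as: del a, b
>>> a
Traceback (most recent call last):
  ...
NameError: name 'a' is not defined
>>> del []            # empty target list: nothing to delete, no error
```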
Reliable iOS image flipping for RTL languages I'm trying to get image flipping to work reliably, but I'm seeing some issues. I manage my images in an Xcode asset catalog, and the ones that need to be flipped in RTL languages have been configured with the 'Direction' property set to 'Left to right, Mirrors'. In some cases I have 2 separate assets, since flipping was not enough. Those are configured with the 'Direction' property set to 'Both' and two separate image assets are provided. All images are PDF assets with 'Preserve vector data' enabled. This all worked great as long as I tested with iOS 11 and Xcode 9.2. The problems start when I test on iOS 9. Images that are configured for RTL don't show up when I launch the app, even when I'm not running in an RTL language. Since Xcode 9.3, I seem to have a new problem. The asset catalogs get updated automatically; the RTL images are reconfigured to direction 'Both' and changes are made to the json files in the catalog. Selecting 'Left to right, Mirrors' does not work anymore. Is there anyone who has figured out how to get this to work reliably? Or am I just looking at the latest bugs in Xcode and some old issues with iOS 9? [edit] After further analysis of the IPA file that is generated, it seems that the Assets.car file does not contain .png files for the RTL images. It seems that they are not generated (since the source files are PDF), which would explain the missing images in iOS 9 (which does not use PDF images).
I fixed it, but it took someone at the Apple Developer Forum to point out that the asset catalog 'direction' property was not introduced until Xcode 8 / iOS 10. This means it just doesn't work in iOS 9, and it is probably an Xcode bug that you can even select it when the deployment target is set below iOS 10. So, don't try to use this feature when you want to be compatible with older iOS devices! You can still get it to work programmatically. If you have an image in, for example, a UIBarButtonItem, you can make an outlet to this button and run the following in viewDidLoad:

```
self.someButton.image = [self.someButton.image imageFlippedForRightToLeftLayoutDirection];
```

This works because imageFlippedForRightToLeftLayoutDirection **is** supported by iOS 9. It only flips the image when your app is in RTL mode. If you need to load a completely different image, you can do that as follows:

```
if ([UIApplication sharedApplication].userInterfaceLayoutDirection == UIUserInterfaceLayoutDirectionRightToLeft) {
    self.someButton.image = [UIImage imageNamed:@"someRTLImage"];
}
```
Python RandomForest - Unknown label Error I'm having trouble using the RandomForest fit function. This is my training set:

```
     P1   Tp1  IrrPOA    Gz  Drz2
0   0.0   7.7     0.0  -1.4  -0.3
1   0.0   7.7     0.0  -1.4  -0.3
2   ...   ...     ...   ...   ...
3  49.4   7.5     0.0  -1.4  -0.3
4  47.4   7.5     0.0  -1.4  -0.3
... (10k rows)
```

I want to predict P1 from all the other variables using sklearn.ensemble's RandomForest:

```
colsRes = ['P1']
X_train = train.drop(colsRes, axis = 1)
Y_train = pd.DataFrame(train[colsRes])

rf = RandomForestClassifier(n_estimators=100)
rf.fit(X_train, Y_train)
```

Here is the error I get:

```
ValueError: Unknown label type: array([[  0. ],
       [  0. ],
       [  0. ],
       ...,
       [ 49.4],
       [ 47.4],
```

I did not find anything about this label error; I use Python 3.5. Any advice would be a great help!
When you pass label (y) data to `rf.fit(X, y)`, it expects `y` to be a 1D list. Slicing the pandas DataFrame always results in a 2D list, hence the conflict in your use case. You need to convert the 2D list provided by the pandas DataFrame to the 1D list expected by the fit function. Try using a 1D list first:

```
Y_train = list(train.P1.values)
```

If this does not solve the problem, you can try the solution mentioned in [MultinomialNB error: "Unknown Label Type"](https://stackoverflow.com/questions/20722986):

```
Y_train = np.asarray(train['P1'], dtype="|S6")
```

So your code becomes:

```
colsRes = ['P1']
X_train = train.drop(colsRes, axis = 1)
Y_train = np.asarray(train['P1'], dtype="|S6")

rf = RandomForestClassifier(n_estimators=100)
rf.fit(X_train, Y_train)
```
How to split the HTTP error code from the contents in cURL? Yes, this is related to [Getting curl to output HTTP status code?](https://superuser.com/questions/272265) but unfortunately not the same. In a script I would like to run: ``` curl -qSfsw %{http_code} URL ``` where the `-f` option ensures that the exit code is non-zero to signal an error. On success I want to get the (textual) output from the fetched file, whereas otherwise I want to use the error code. Problem: - Due to race conditions I must not use more than a single HTTP request - I cannot use a temporary file for storage of the content How can I still split the HTTP return code from the actual output? --- Pseudo code: ``` fetch URL if http-error then print http-error-code else print http-body # <- but without the HTTP status code endif ```
There is no need to use a temporary file. The following bash script snippet sends a single request and prints the exit code of `curl` and the HTTP status code, or the HTTP status code and response, as appropriate.

```
# get output, append HTTP status code in separate line, discard error message
OUT=$( curl -qSfsw '\n%{http_code}' http://superuser.com ) 2>/dev/null

# get exit code
RET=$?

if [[ $RET -ne 0 ]] ; then
    # if error exit code, print exit code
    echo "Error $RET"

    # print HTTP error
    echo "HTTP Error: $(echo "$OUT" | tail -n1 )"
else
    # otherwise print last line of output, i.e. HTTP status code
    echo "Success, HTTP status is:"
    echo "$OUT" | tail -n1

    # and print all but the last line, i.e. the regular response
    echo "Response is:"
    echo "$OUT" | head -n-1
fi
```

`head -n-1` (print all but the last line) requires GNU `head`; it doesn't work on BSD/OS X.
Share Arc between closures I'm trying to write a simple tcp server which would read and broadcast messages. I'm using Tokio, but I think it's more of a general Rust question. I have an Arc with a shared state: `let state = Arc::new(Mutex::new(Shared::new(server_tx)));` Later I want to spawn 2 threads which would use a reference to that state: ``` let server = listener.incoming().for_each(move |socket| { // error[E0382]: capture of moved value: `state` process(socket, state.clone()); Ok(()) }).map_err(|err| { println!("accept error = {:?}", err); }); let receive_sensor_messages = sensors_rx.for_each(move |line| { println!("Received sensor message, broadcasting: {:?}", line); // error[E0597]: borrowed value does not live long enough // error[E0507]: cannot move out of borrowed content for (_, tx) in state.clone().lock().unwrap().clients { tx.unbounded_send(line.clone()).unwrap(); } Ok(()) }).map_err(|err| { println!("line reading error = {:?}", err); }); ``` ([playground](https://play.rust-lang.org/?gist=2100d6844c06e91c01e152db2b636fe9&version=stable&mode=debug&edition=2015)) As far as I understand what it's trying to tell me is that `state` is borrowed in the first closure `listener.incoming().for_each(move |socket| {` so when I try to do it again in `sensors_rx.for_each(move |line| {` it's saying it's not possible. My question is how do I solve it? Isn't `Arc` supposed to solve the issue of sharing a variable between threads? I tried different combinations of `clone` (doing clone outside of the closure and then doing `clone` inside again), but none worked. Cheers!
Essentially, your problem can be boiled down [to the following MCVE:](https://play.rust-lang.org/?version=stable&mode=debug&edition=2015&gist=a7103fe508a732f56c912b394426a3c4)

```
use std::sync::{Arc, Mutex};

struct Bar;

fn foo(_ : &Bar){
    println!("foo called");
}

fn main(){
    let example = Arc::new(Mutex::new(Bar));

    std::thread::spawn(move ||{
        let _ = example.clone();
    });
    // --- (1) ---
    std::thread::spawn(move ||{
        foo(&example.clone().lock().unwrap());
    });
}
```

Now, the first problem here is that `example` is moved. That is, as soon as we crossed `(1)`, the original `example` is considered to be moved from. Instead, we need to *first* `clone` and *then* `move`:

```
let example = Arc::new(Mutex::new(Bar));

let local_state = example.clone();
std::thread::spawn(move ||{
    let _ = local_state; // now fine!
});
```

The other error stems from the short-lived `Arc`. Essentially, it only lives long enough for you to use `lock` on the underlying `Mutex`. While *we* know that there is at least one other `Arc` pointing to the memory, the compiler cannot prove that. However, if we get rid of the `clone()` it's fine:

```
let local_state = example.clone();
std::thread::spawn(move ||{
    foo(&local_state.lock().unwrap());
});
```

However, you also loop over your container by consuming its contents (the `clients`). Instead, use `&` there (e.g. `&local_state.lock().unwrap().clients`). You can find the complete fixed code below [or on the playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2015&gist=9ead81d80700d1b97d6cdfff9331d632):

```
use std::sync::{Arc, Mutex};

struct Bar;

fn foo(_ : &Bar){
    println!("foo called");
}

fn main(){
    let example = Arc::new(Mutex::new(Bar));

    let local_state = example.clone();
    std::thread::spawn(move ||{
        let _ = local_state;
    });

    let local_state = example.clone();
    std::thread::spawn(move ||{
        foo(&local_state.lock().unwrap());
    }).join();
}
```
Why does sbt console not see packages from subproject in multi-module project? This is my **project/Build.scala**: ``` package sutils import sbt._ import Keys._ object SutilsBuild extends Build { scalaVersion in ThisBuild := "2.10.0" val scalazVersion = "7.0.6" lazy val sutils = Project( id = "sutils", base = file(".") ).settings( test := { }, publish := { }, // skip publishing for this root project. publishLocal := { } ).aggregate( core ) lazy val core = Project( id = "sutils-core", base = file("sutils-core") ).settings( libraryDependencies += "org.scalaz" % "scalaz-core_2.10" % scalazVersion ) } ``` This seems to be compiling my project just fine, but when I go into the console, I can't import any of the code that just got compiled?! ``` $ sbt console scala> import com.github.dcapwell.sutils.validate.Validation._ <console>:7: error: object github is not a member of package com import com.github.dcapwell.sutils.validate.Validation._ ``` What am I doing wrong here? Trying to look at the usage, I don't see a way to say which subproject to load while in the console ``` $ sbt about [info] Loading project definition from /src/sutils/project [info] Set current project to sutils (in build file:/src/sutils/) [info] This is sbt 0.13.1 [info] The current project is {file:/src/sutils/}sutils 0.1-SNAPSHOT [info] The current project is built against Scala 2.10.3 [info] Available Plugins: org.sbtidea.SbtIdeaPlugin [info] sbt, sbt plugins, and build definitions are using Scala 2.10.3 ```
There's the solution from @Alexey-Romanov to start the `console` task in the project the classes to import are in. ``` sbt sutils/console ``` There's however another solution that makes the root `sutils` project *depend on* the other `core`. Use the following snippet to set up the project - note `dependsOn core` that will bring the classes from the `core` project to `sutils`'s namespace. ``` lazy val sutils = Project( id = "sutils", base = file(".") ).settings( test := { }, publish := { }, // skip publishing for this root project. publishLocal := { } ).aggregate( core ).dependsOn core ``` BTW, you should really use a simpler `build.sbt` for your use case as follows: ``` scalaVersion in ThisBuild := "2.10.0" val scalazVersion = "7.0.6" lazy val sutils = project.in(file(".")).settings( test := {}, publish := {}, // skip publishing for this root project. publishLocal := {} ).aggregate(core).dependsOn(core) lazy val core = Project( id = "sutils-core", base = file("sutils-core") ).settings( libraryDependencies += "org.scalaz" %% "scalaz-core" % scalazVersion ) ``` You could make it even easier when you'd split the build to two `build.sbt`s, each for the projects.
How to validate a checkbox in ZF2 I've read numerous workarounds for Zend Framework's lack of default checkbox validation. I have recently started using ZF2 and the documentation is a bit lacking out there. Can someone please demonstrate how I can validate a checkbox to ensure it was ticked, using the Zend Form and Validation mechanism? I'm using the array configuration for my Forms (using the default set-up found in the example app on the ZF website).
Try this Form element : ``` $this->add(array( 'type' => 'Zend\Form\Element\Checkbox', 'name' => 'agreeterms', 'options' => array( 'label' => 'I agree to all terms and conditions', 'use_hidden_element' => true, 'checked_value' => 1, 'unchecked_value' => 'no' ), )); ``` In filters, add digit validation ``` use Zend\Validator\Digits; // at top $inputFilter->add($factory->createInput(array( 'name' => 'agreeterms', 'validators' => array( array( 'name' => 'Digits', 'break_chain_on_failure' => true, 'options' => array( 'messages' => array( Digits::NOT_DIGITS => 'You must agree to the terms of use.', ), ), ), ), ))); ```
subset with pattern Say I have a data frame df ``` df <- data.frame( a1 = 1:10, b1 = 2:11, c2 = 3:12 ) ``` I wish to subset the columns, but with a pattern ``` df1 <- subset( df, select= (pattern = "1") ) ``` To get ``` > df1 a1 b1 1 1 2 2 2 3 3 3 4 4 4 5 5 5 6 6 6 7 7 7 8 8 8 9 9 9 10 10 10 11 ``` Is this possible?
It is possible to do this via

```
subset(df, select = grepl("1", names(df)))
```

For automating this as a function, one can use `[` to do the subsetting. Couple that with one of R's regular expression functions and you have all you need.

By way of an example, here is a custom function implementing the ideas I mentioned above.

```
Subset <- function(df, pattern) {
  ind <- grepl(pattern, names(df))
  df[, ind]
}
```

Note this does no error checking etc. and just relies upon `grepl` to return a logical vector indicating which columns match `pattern`, which is then passed to `[` to subset by columns. Applied to your `df` this gives:

```
> Subset(df, pattern = "1")
   a1 b1
1   1  2
2   2  3
3   3  4
4   4  5
5   5  6
6   6  7
7   7  8
8   8  9
9   9 10
10 10 11
```
Decorators with and without arguments *Disclaimer*: This is my first post here, so I'm not completely sure if this is on-topic. I recently added a decorator that wraps a context manager in [pyfakefs](https://github.com/jmcgeheeiv/pyfakefs) (I'm a contributor), which has some optional arguments. In order to make the usage more convenient, I allowed both the usage with and without parentheses (mostly arguments are not needed, so the call without parentheses is the default). The code works but is not nice, and also probably not good if performance matters. I will show the full code here, and the question is just - can this be written nicer, and without the need to call another decorator function in the default case. This is the complete code including comments: ``` def _patchfs(f): """Internally used to be able to use patchfs without parentheses.""" @functools.wraps(f) def decorated(*args, **kwargs): with Patcher() as p: kwargs['fs'] = p.fs return f(*args, **kwargs) return decorated def patchfs(additional_skip_names=None, modules_to_reload=None, modules_to_patch=None, allow_root_user=True): """Convenience decorator to use patcher with additional parameters in a test function. Usage:: @patchfs test_my_function(fs): fs.create_file('foo') @patchfs(allow_root_user=False) test_with_patcher_args(fs): os.makedirs('foo/bar') """ def wrap_patchfs(f): @functools.wraps(f) def wrapped(*args, **kwargs): with Patcher( additional_skip_names=additional_skip_names, modules_to_reload=modules_to_reload, modules_to_patch=modules_to_patch, allow_root_user=allow_root_user) as p: kwargs['fs'] = p.fs return f(*args, **kwargs) return wrapped # workaround to be able to use the decorator without using calling syntax # (the default usage without parameters) # if using the decorator without parentheses, the first argument here # will be the wrapped function, so we pass it to the decorator function # that doesn't use arguments if inspect.isfunction(additional_skip_names): return _patchfs(additional_skip_names) return wrap_patchfs ``` Some more usage context: The decorator can be used in unittest methods to execute the test in a fake filesystem. The actual work is done by the patcher, which is instantiated by the decorator. The fake filesystem is represented by the argument `fs`,which is taken from the patcher instance, and can be used for some convenience functions like file creation or copying of files from the real file system. In most cases, this will work out of the box (the decorator without parentheses can be used), but in some cases additional configuration is needed, which can be done by adding some optional arguments to the decorator, which are passed to the patcher.
Your `_patchfs` and `wrap_patchfs` functions are virtually identical. You don’t need the `_patchfs` version, just the internal one. One function instead of two is easier to maintain: ``` def patchfs(_func=None, *, additional_skip_names=None, modules_to_reload=None, modules_to_patch=None, allow_root_user=True): """Your docstring here ...""" def wrap_patchfs(f): @functools.wraps(f) def wrapped(*args, **kwargs): with Patcher( additional_skip_names=additional_skip_names, modules_to_reload=modules_to_reload, modules_to_patch=modules_to_patch, allow_root_user=allow_root_user) as p: kwargs['fs'] = p.fs return f(*args, **kwargs) return wrapped if _func: if not callable(_func): raise TypeError("Decorator argument not a function.\n" "Did you mean `@patchfs(additional_skip_names=...)`?") return wrap_patchfs(_func) return wrap_patchfs ``` The `if not callable: ...` ensures you don’t accidentally try to use `@patchfs(names_to_skip)`. Using `*` forces the remaining arguments to keyword only arguments; you cannot just list the four arguments, which makes the decorator a little less error-prone. --- Your docstring's `Usage::` examples lack the required `def` keywords for defining functions.
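For completeness, here is a usage sketch (not part of the original answer) showing both call styles of the merged decorator. It assumes pyfakefs' `Patcher` injects the fake filesystem as the `fs` keyword argument, exactly as in the code above; the test class and test names are made up:

```
import os
import unittest


class ExampleTest(unittest.TestCase):

    @patchfs                          # bare form: default Patcher settings
    def test_create_file(self, fs):
        fs.create_file('/foo/bar.txt', contents='hello')
        self.assertTrue(os.path.exists('/foo/bar.txt'))

    @patchfs(allow_root_user=False)   # called form: keyword-only options
    def test_as_non_root(self, fs):
        os.makedirs('/foo/baz')
        self.assertTrue(os.path.isdir('/foo/baz'))
```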
pythonic implementation of Bayesian networks for a specific application **This is why I'm asking this question:** Last year I made some C++ code to compute posterior probabilities for a particular type of model (described by a Bayesian network). The model worked pretty well and some other people started to use my software. Now I want to improve my model. Since I'm already coding slightly different inference algorithms for the new model, I decided to use python because runtime wasn't critically important and python may let me make more elegant and manageable code.

Usually in this situation I'd search for an existing Bayesian network package in python, but the inference algorithms I'm using are my own, and I also thought this would be a great opportunity to learn more about good design in python. I've already found a great python module for network graphs (networkx), which allows you to attach a dictionary to each node and to each edge. Essentially, this would let me give nodes and edges properties.

For a particular network and its observed data, I need to write a function that computes the likelihood of the unassigned variables in the model. For instance, in the classic "Asia" network (<http://www.bayesserver.com/Resources/Images/AsiaNetwork.png>), with the states of "XRay Result" and "Dyspnea" known, I need to write a function to compute the likelihood that the other variables have certain values (according to some model).

**Here is my programming question:** I'm going to try a handful of models, and in the future it's possible I'll want to try another model after that. For instance, one model might look exactly like the Asia network. In another model, a directed edge might be added from "Visit to Asia" to "Has Lung Cancer." Another model might use the original directed graph, but the probability model for the "Dyspnea" node given the "Tuberculosis or Cancer" and "Has Bronchitis" nodes might be different. All of these models will compute the likelihood in a different way.

All the models will have substantial overlap; for instance, multiple edges going into an "Or" node will always make a "0" if all inputs are "0" and a "1" otherwise. But some models will have nodes that take on integer values in some range, while others will be boolean.

In the past I've struggled with how to program things like this. I'm not going to lie; there's been a fair amount of copied and pasted code and sometimes I've needed to propagate changes in a single method to multiple files. This time I *really* want to spend the time to do this the right way.

Some options:

1. I was already doing this the right way. Code first, ask questions later. It's faster to copy and paste the code and have one class for each model. The world is a dark and disorganized place...
2. Each model is its own class, but also a subclass of a general BayesianNetwork model. This general model will use some functions that are going to be overridden. Stroustrup would be proud.
3. Make several functions in the same class that compute the different likelihoods.
4. Code a general BayesianNetwork library and implement my inference problems as specific graphs read in by this library. The nodes and edges should be given properties like "Boolean" and "OrFunction" which, given known states of the parent node, can be used to compute the probabilities of different outcomes. These property strings, like "OrFunction", could even be used to look up and call the right function.
Maybe in a couple of years I'll make something similar to the 1988 version of Mathematica! Thanks a lot for your help. **Update:** Object oriented ideas help a lot here (each node has a designated set of predecessor nodes of a certain node subtype, and each node has a likelihood function that computes its likelihood of different outcome states given the states of the predecessor nodes, etc.). OOP FTW!
I've been working on this kind of thing in my spare time for quite a while. I think I'm on my third or fourth version of this same problem right now. I'm actually getting ready to release another version of Fathom (https://github.com/davidrichards/fathom/wiki) with dynamic bayesian models included and a different persistence layer. As I've tried to make my answer clear, it's gotten quite long. I apologize for that. Here's how I've been attacking the problem, which seems to answer some of your questions (somewhat indirectly): I've started with Judea Pearl's breakdown of belief propagation in a Bayesian Network. That is, it's a graph with prior odds (causal support) coming from parents and likelihoods (diagnostic support) coming from children. In this way, the basic class is just a BeliefNode, much like what you described with an extra node between BeliefNodes, a LinkMatrix. In this way, I explicitly choose the type of likelihood I'm using by the type of LinkMatrix I use. It makes it easier to explain what the belief network is doing afterwards as well as keeps the computation simpler. Any subclassing or changes that I'd make to the basic BeliefNode would be for binning continuous variables, rather than changing propagation rules or node associations. I've decided on keeping all data inside the BeliefNode, and only fixed data in the LinkedMatrix. This has to do with ensuring that I maintain clean belief updates with minimal network activity. This means that my BeliefNode stores: - an array of children references, along with the filtered likelihoods coming from each child and the link matrix that is doing the filtering for that child - an array of parent references, along with the filtered prior odds coming from each parent and the link matrix that is doing the filtering for that parent - the combined likelihood of the node - the combined prior odds of the node - the computed belief, or posterior probability - an ordered list of attributes that all prior odds and likelihoods adhere to The LinkMatrix can be constructed with a number of different algorithms, depending on the nature of the relationship between the nodes. All of the models that you're describing would just be different classes that you'd employ. Probably the easiest thing to do is default to an or-gate, and then choose other ways to handle the LinkMatrix if we have a special relationship between the nodes. I use MongoDB for persistence and caching. I access this data inside of an evented model for speed and asynchronous access. This makes the network fairly performant while also having the opportunity to be very large if it needs to be. Also, since I'm using Mongo in this way, I can easily create a new context for the same knowledge base. So, for example, if I have a diagnostic tree, some of the diagnostic support for a diagnosis will come from a patient's symptoms and tests. What I do is create a context for that patient and then propagate my beliefs based on the evidence from that particular patient. Likewise, if a doctor said that a patient was likely experiencing two or more diseases, then I could change some of my link matrices to propagate the belief updates differently. If you don't want to use something like Mongo for your system, but you are planning on having more than one consumer working on the knowledge base, you will need to adopt some sort of caching system to make sure that you are working on freshly-updated nodes at all times. My work is open source, so you can follow along if you'd like. 
It's all Ruby, so it would be similar to your Python, but not necessarily a drop-in replacement. One thing that I like about my design is that all of the information needed for humans to interpret the results can be found in the nodes themselves, rather than in the code. This can be done in the qualitative descriptions, or in the structure of the network. So, here are some important differences I have with your design: - I don't compute the likelihood model inside the class, but rather between nodes, inside the link matrix. In this way, I don't have the problem of combining several likelihood functions inside the same class. I also don't have the problem of one model vs. another, I can just use two different contexts for the same knowledge base and compare results. - I'm adding a lot of transparency by making the human decisions apparent. I.e., if I decide to use a default or-gate between two nodes, I know when I added that and that it was just a default decision. If I come back later and change the link matrix and re-calculate the knowledge base, I have a note about why I did that, rather than just an application that chose one method over another. You could have your consumers take notes about that kind of thing. However you solve that, it's probably a good idea to get the step-wise dialog from the analyst about why they are setting things up one way over another. - I may be more explicit about prior odds and likelihoods. I don't know for sure on that, I just saw that you were using different models to change your likelihood numbers. Much of what I'm saying may be completely irrelevant if your model for computing posterior beliefs doesn't break down this way. I have the benefit of being able to make three asynchronous steps that can be called in any order: pass changed likelihoods up the network, pass changed prior odds down the network, and re-calculate the combined belief (posterior probability) of the node itself. One big caveat: some of what I'm talking about hasn't been released yet. I worked on the stuff I am talking about until about 2:00 this morning, so it's definitely current and definitely getting regular attention from me, but isn't all available to the public just yet. Since this is a passion of mine, I'd be happy to answer any questions or work together on a project if you'd like.
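Since the question asked for something pythonic and this answer is all prose, here is a minimal Python sketch of the node layout described above. To be clear, this is not the Fathom implementation (which is Ruby); every name in it is invented for illustration, and it only performs a single-node update rather than full Pearl-style propagation:

```
# Hypothetical sketch only -- names and structure are illustrative, not Fathom's API.

class LinkMatrix:
    """Conditional table P(child_state | parent_state) sitting between two nodes."""
    def __init__(self, table):
        self.table = table  # dict: parent_state -> {child_state: probability}

    def prior_for_child(self, parent_beliefs):
        # Filter the parent's belief through the table (causal support).
        out = {}
        for p_state, p_prob in parent_beliefs.items():
            for c_state, c_prob in self.table[p_state].items():
                out[c_state] = out.get(c_state, 0.0) + p_prob * c_prob
        return out


class BeliefNode:
    def __init__(self, name, states):
        self.name = name
        self.states = list(states)                            # ordered attributes
        self.parents = []                                     # (parent, LinkMatrix) pairs
        self.likelihood = {s: 1.0 for s in states}            # diagnostic support
        self.prior = {s: 1.0 / len(states) for s in states}   # causal support
        self.belief = dict(self.prior)                        # posterior probability

    def add_parent(self, parent, link):
        self.parents.append((parent, link))

    def update(self):
        # Combine causal support from all parents...
        if self.parents:
            prior = {s: 1.0 for s in self.states}
            for parent, link in self.parents:
                filtered = link.prior_for_child(parent.belief)
                for s in self.states:
                    prior[s] *= filtered.get(s, 0.0)
            self.prior = prior
        # ...with diagnostic support, then normalise into the posterior belief.
        unnormalised = {s: self.prior[s] * self.likelihood[s] for s in self.states}
        total = sum(unnormalised.values()) or 1.0
        self.belief = {s: v / total for s, v in unnormalised.items()}
        return self.belief


# Tiny usage example: rain -> wet grass.
rain = BeliefNode("rain", ["yes", "no"])
rain.belief = {"yes": 0.2, "no": 0.8}
wet = BeliefNode("wet_grass", ["yes", "no"])
wet.add_parent(rain, LinkMatrix({"yes": {"yes": 0.9, "no": 0.1},
                                 "no":  {"yes": 0.2, "no": 0.8}}))
print(wet.update())   # roughly {'yes': 0.34, 'no': 0.66}
```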
How to convert a "raw" string into a normal string? In Python, I have a string like this: ``` '\\x89\\n' ``` How can I decode it into a normal string like: ``` '\x89\n' ```
If your input value is a `str` string, use `codecs.decode()` to convert: ``` import codecs codecs.decode(raw_unicode_string, 'unicode_escape') ``` If your input value is a `bytes` object, you can use the [`bytes.decode()` method](https://docs.python.org/3/library/stdtypes.html#bytes.decode): ``` raw_byte_string.decode('unicode_escape') ``` Demo: ``` >>> import codecs >>> codecs.decode('\\x89\\n', 'unicode_escape') '\x89\n' >>> b'\\x89\\n'.decode('unicode_escape') '\x89\n' ``` Python 2 byte strings can be decoded with the `'string_escape'` codec: ``` >>> import sys; sys.version_info[:2] (2, 7) >>> '\\x89\\n'.decode('string_escape') '\x89\n' ``` For *Unicode* literals (with a `u` prefix, e.g. `u'\\x89\\n'`), use `'unicode_escape'`.
Recover 1TB disk erased with startup disk creator Hello. I was trying to make a bootable flash drive from the startup disk creator, and when I went to erase the flash drive I accidentally erased the wrong drive, a 1TB removable drive. The operation took less than a second and gave no warning. I tried erasing the flash drive and it took about a minute, but it took but a second to erase the 1 TB drive, which is very surprising. Please help, Gabriel
Try to recover the partition with TestDisk. Careful: don't write anything to the drive. Step-by-step instructions:

1. Install TestDisk
2. Mount the drive
3. Launch `sudo testdisk` (enlarge the terminal if necessary)
4. Create a new log file
5. Select the drive
6. Select the partition table type (usually Intel should be good)
7. Analyse
8. Quick Search (this should find only the current partition)
9. [Enter]
10. Deeper Search (this should find your old partition; you can stop it once it has been found)

Once you have found what you think is your partition, select it with the up/down arrows.

11. [P] to list files and check whether it looks like your partition
12. [q] to quit the file listing
13. [Enter]
14. Write, to save the new partition table to the MBR.

Look carefully: this doesn't overwrite / recover any data, just the MBR of the disk.
Define functions for multiple shells I'm currently using the fish shell. As I frequently use `fish`, `zsh`, and `bash`, how can I define a function in one place so that it is available in all of these shells? Do I have to define it in `.profile`?

[![enter image description here](https://i.stack.imgur.com/hl9oT.png)](https://i.stack.imgur.com/hl9oT.png)

Once I leave the terminal and come back to it later, I get:

[![enter image description here](https://i.stack.imgur.com/8l17a.png)](https://i.stack.imgur.com/8l17a.png)
You could just define the function in the `~/.*rc` file of each of the shells you use. Or you could make a new file for that function and other shell functions that you want to use in all shells... for example... ``` nano shell-functions ``` I define my function inside the file... ``` hi() { echo "How are you $1?" ; } ``` save and exit, then I edit my `~/.bashrc` and `~/.zshrc` and at the end of each of them add the line: ``` source shell-functions ``` or just ``` . shell-functions ``` Which does the same thing. After editing my `~/.*rc` files, I open a new shell and the function is available: ``` $ bash $ hi zanna how are you zanna ? $ zsh % hi zanna how are you zanna ? ``` --- The `source` command reads files and executes commands from them *in the current shell* (unlike when you run a script like `./script` which executes the script in a new shell). In this case, you want to define a function for the shell being opened so you want to `source` the file that contains it to make it available in the shell. If you look in your `~/.profile` you can see an example of one configuration file sourcing another like this: ``` # if running bash if [ -n "$BASH_VERSION" ]; then # include .bashrc if it exists if [ -f "$HOME/.bashrc" ]; then . "$HOME/.bashrc" ``` So the default `~/.profile` sources `~/.bashrc` in Ubuntu. You can also test out `source` by making a file, let's call it `file1`, with some command in it like (for bash) `PS1='I messed up my prompt '` save, exit, then in the shell type `source file1` and you will see the effect (open a new shell (for example type `bash` or open a new terminal window) and everything will be back to normal)...
How to get the default Store code within the Magento backend I am trying to find out the default store code from within Magento's backend. While this sounds rather simple, I just couldn't find any solution. The snippets I found are either ``` Mage::app()->getStore()->getCode() ``` (although this doesn't correspond to the default but the current store) or ``` Mage::app()->getStore(Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID)->getCode(); ``` But from within the backend these will only return "admin" (since the backend is treated as some kind of special store with store ID 0 - which is the value of `DEFAULT_STORE_ID`). Could anyone please point me to a way to get the actual default store code from anywhere? (That store code that is set by Magento if both "Add Store Code to URLs" and "Auto-redirect to Base URL" options are activated) Just a little background why I need this: I need to generate a URL within the Magento configuration that still works if "Add Store Code to URLs" is activated. I can set any store code, so if I'm within the configuration scope of one of them, I can just use that one. But since it also has to work if the configuration scope is set to default or website, I want to use the default store code in that case. I found a solution with: ``` $websites = Mage::app()->getWebsites(); $code = $websites[1]->getDefaultStore()->getCode(); ``` However, this leaves me with some follow-up questions. Why does `Mage::app()->getWebsite()` return a special website object that only includes the special `admin` store, while `Mage::app()->getWebsites()` will return an array that **only** includes the usual frontend website, but **not** the object returned by `getWebsite()`? Why does the frontend website object occupy index 1 in the array, while index 0 is unused? I would really like to know the reason for having to use a magic number there (if I have to).
There is no such thing as a default store in Magento. The only special store is the admin one; all other stores have the same rights in Magento. > > [...] while Mage::app()->getWebsites() will return an array that only > includes the usual frontend website, but not the object returned by > getWebsite()? > > > You should look at `Mage_Core_Model_App::getWebsites()`'s source code: ``` public function getWebsites($withDefault = false, $codeKey = false) { $websites = array(); if (is_array($this->_websites)) { foreach ($this->_websites as $website) { if (!$withDefault && $website->getId() == 0) { continue; } //... } } return $websites; } ``` If you call `$websites = Mage::app()->getWebsites(true);`, you will get an array of websites, with the admin one at index 0.
How to "join" two partitions I have a laptop hard drive that I'm using now as an external USB hard drive. The drive has two partitions - I don't want this configuration and only want one partition. How can I join the partitions into one, without losing any existing data from them?
If you're using **Windows**, you can go into the built-in **Disk Management** console and delete a partition that does not have any data and extend the other partition into the unallocated space. As Shinrai pointed out in the comments, this does not work if you want to do this for the active drive. **UPDATE**: How to open the **Disk Management** console (Windows 7; similar for other Windows OS's): Right click **Computer** and select **Manage**: ![enter image description here](https://i.stack.imgur.com/kzCNP.png) Select **Storage** then **Disk Management**: ![enter image description here](https://i.stack.imgur.com/UKLfo.png) The **Disk Management** console is now open. ![enter image description here](https://i.stack.imgur.com/URp2f.png) **UPDATE 2:** How to delete one partition and extend the other into it: Open the **Disk Management** console Delete one partition, preferably the one that does not have data on it by right clicking on the partition and selecting **Delete Volume**: ![enter image description here](https://i.stack.imgur.com/6iGyD.png) You will then have an **Unallocated** partition: Right click on the other partition and select **Extend Volume**: ![enter image description here](https://i.stack.imgur.com/dhU36.png)
Best way to manage update-review-publish workflow? I manage an online catalog. Currently in-house personnel update it manually, and their changes are immediately visible. Now we want to add a verification step: Tom makes a change, Jerry approves it. I see two paths, but neither seems elegant. 1. Keep a second, 'working copy' of the whole database. 2. Keep a second set of 'dirty' tables inside the same database. Both of these seem to require a lot of code just for housekeeping, as well as double the work whenever the structure of a table changes. Is there a better way? In case it matters, the DBMS is SQL Server, the web app is asp.net. **Edited to add:** 1. The two changes I outlined above are both backward-compatible with existing code. I know I'm going to have to make some changes, but I can't alter *every* query. 2. I think my key constraints prohibit simply cloning rows and marking them 'pending'. Let's say Supplier with SupplierID 99 has two Products. (Products can belong to only one SupplierID.) The Supplier's phone number has changed, so I clone the Supplier record, change the phone number, and mark it 'pending'. But the new record can't have an ID of 99, so there's no longer a way to connect it to either its Products or even the record it's intended to replace. I suppose I could add an unconstrained identifier, SupplierPseudoID, but this seems just as complicated and error-prone as the above ideas.
Why do you need a copy of the tables? Why not just add an `approved` field to the table instead?

---

Answer to the Edit:

If you have a table like

```
id | name | text    | modified | etc
-----------------------------------
1  | aaaa | blabla  | 20100210 | xxx
2  | bbbb | yadayada| 20100212 | yyy
3  | cccc | asdfkad | 20090102 | zzz
```

you can just alter it to add a new field called `approved` and make the primary key be both `id` and `modified`

```
id | name | text    | modified | etc | approved
-----------------------------------------------
1  | aaaa | blabla  | 20100210 | xxx | 1
2  | bbbb | yadayada| 20100212 | yyy | 1
3  | cccc | asdfkad | 20090102 | zzz | 1
3  | cccc | qwerklj | 20100219 | zzz | 0
```

You create a view that only brings you

```
id | name | text    | modified | etc
-----------------------------------
1  | aaaa | blabla  | 20100210 | xxx
2  | bbbb | yadayada| 20100212 | yyy
3  | cccc | asdfkad | 20090102 | zzz
```

by defining it as something like `SELECT id, name, text, modified, etc FROM catalog WHERE approved = 1;`. That way you only have to modify the "table" the queries select from.

To avoid having to modify the insertion, give `approved` a default value of `0` and change the update queries to insert a new, unapproved revision instead, something like

```
INSERT INTO catalog (id, name, text, modified, etc, approved)
VALUES (@id, @name, @text, NOW(), @etc, 0)
```

which would end up with something like

```
id | name | text    | modified | etc | approved
-----------------------------------------------
1  | aaaa | blabla  | 20100210 | xxx | 1
2  | bbbb | yadayada| 20100212 | yyy | 1
3  | cccc | asdfkad | 20090102 | zzz | 1
3  | cccc | qwerklj | 20100219 | zzz | 0
```

and the new bit of interface that you will have to build to "approve a field" would do something like

```
UPDATE catalog SET approved = 1 WHERE id = @id AND modified = @modified;
DELETE FROM catalog WHERE id = @id AND approved = 1 AND modified < @modified;
```

which would result in

```
id | name | text    | modified | etc | approved
-----------------------------------------------
1  | aaaa | blabla  | 20100210 | xxx | 1
2  | bbbb | yadayada| 20100212 | yyy | 1
3  | cccc | qwerklj | 20100219 | zzz | 1
```

This last bit could be simplified even more if you make a trigger or a stored procedure to do it.

*This is a very vague example, adapt to your needs.*
How does curl protect a password from appearing in ps output? I noticed some time ago that usernames and passwords given to `curl` as command line arguments don't appear in `ps` output (although of course they may appear in your bash history). They likewise don't appear in `/proc/PID/cmdline`. (The length of the combined username/password argument can be derived, though.) Demonstration below: ``` [root@localhost ~]# nc -l 80 & [1] 3342 [root@localhost ~]# curl -u iamsam:samiam localhost & [2] 3343 [root@localhost ~]# GET / HTTP/1.1 Authorization: Basic aWFtc2FtOnNhbWlhbQ== User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.15.3 zlib/1.2.3 libidn/1.18 libssh2/1.4.2 Host: localhost Accept: */* [1]+ Stopped nc -l 80 [root@localhost ~]# jobs [1]+ Stopped nc -l 80 [2]- Running curl -u iamsam:samiam localhost & [root@localhost ~]# ps -ef | grep curl root 3343 3258 0 22:37 pts/1 00:00:00 curl -u localhost root 3347 3258 0 22:38 pts/1 00:00:00 grep curl [root@localhost ~]# od -xa /proc/3343/cmdline 0000000 7563 6c72 2d00 0075 2020 2020 2020 2020 c u r l nul - u nul sp sp sp sp sp sp sp sp 0000020 2020 2020 0020 6f6c 6163 686c 736f 0074 sp sp sp sp sp nul l o c a l h o s t nul 0000040 [root@localhost ~]# ``` **How is this effect achieved?** Is it somewhere in the source code of `curl`? (I assume it is a `curl` feature, not a `ps` feature? Or is it a kernel feature of some sort?) --- Also: **can this be achieved from outside the source code of a binary executable?** E.g. by using shell commands, probably combined with root permissions? In other words could I somehow mask an argument from appearing in `/proc` or in `ps` output (same thing, I think) that I passed to some *arbitrary* shell command? (I would guess the answer to this is "no" but it seems worth including this extra half-a-question.)
When the kernel executes a process, it copies the command line arguments to read-write memory belonging to the process (on the stack, at least on Linux). The process can write to that memory like any other memory. When `ps` displays the argument, it reads back whatever is stored at that particular address in the process's memory. Most programs keep the original arguments, but it's possible to change them. The [POSIX description of `ps`](http://pubs.opengroup.org/onlinepubs/9699919799/utilities/ps.html) states that > > It is unspecified whether the string represented is a version of the argument list as it was passed to the command when it started, or is a version of the arguments as they may have been modified by the application. Applications cannot depend on being able to modify their argument list and having that modification be reflected in the output of ps. > > > The reason this is mentioned is that most unix variants do reflect the change, but POSIX implementations on other types of operating systems may not. This feature is of limited use because the process can't make arbitrary changes. At the very least, the total length of the arguments cannot be increased, because the program can't change the location where `ps` will fetch the arguments and can't extend the area beyond its original size. The length can effectively be decreased by putting null bytes at the end, because arguments are C-style null-terminated strings (this is indistinguishable from having a bunch of empty arguments at the end). If you really want to dig, you can look at the source of an open-source implementation. On Linux, the source of `ps` isn't interesting, all you'll see there is that it reads the command line arguments from the [proc filesystem](https://en.wikipedia.org/wiki/Procfs), in `/proc/*PID*/cmdline`. The code that generates the content of this file is in the kernel, in [`proc_pid_cmdline_read` in `fs/proc/base.c`](http://elixir.free-electrons.com/linux/v4.12/source/fs/proc/base.c#L207). The part of the process's memory (accessed with `access_remote_vm`) goes from the address `mm->arg_start` to `mm->arg_end`; these addresses are recorded in the kernel when the process starts and can't be changed afterwards. Some daemons use this ability to reflect their status, e.g. they change their `argv[1]` to a string like `starting` or `available` or `exiting`. Many unix variants have a [`setproctitle`](https://www.freebsd.org/cgi/man.cgi?query=setproctitle&sektion=3) function to do this. Some programs use this ability to hide confidential data. Note that this is of limited use since the command line arguments are visible while the process starts. Most high-level languages copy the arguments to string objects and don't give a way to modify the original storage. Here's a C program that demonstrates this ability by changing `argv` elements directly. ``` #include <stdlib.h> #include <stdio.h> #include <string.h> int main(int argc, char *argv[]) { int i; system("ps -p $PPID -o args="); for (i = 0; i < argc; i++) { memset(argv[i], '0' + (i % 10), strlen(argv[i])); } system("ps -p $PPID -o args="); return 0; } ``` Sample output: ``` ./a.out hello world 0000000 11111 22222 ``` You can see `argv` modification in the curl source code. Curl defines a [function `cleanarg` in `src/tool_paramhlp.c`](https://github.com/curl/curl/blob/45a560390c4356bcb81d933bbbb229c8ea2acb63/src/tool_paramhlp.c#L139) which is used to change an argument to all spaces using `memset`. In `src/tool_getparam.c` this function is used a few times, e.g. 
by redacting the [user password](https://github.com/curl/curl/blob/581011a3d2bb7d2c6f74e4f4dea9f8c12e7cc382/src/tool_getparam.c#L1934). Since the function is called from the parameter parsing, it happens early in a curl invocation, but dumping the command line before this happens will still show any passwords. Since the arguments are stored in the process's own memory, they cannot be changed from the outside except by using a debugger.
How to make a continuous WebJob? I created an Azure WebJob as shown in the following script, which dequeues an item from an Azure Storage Queue and then stores it in an Azure Table. Each run finishes within 1 or 2 seconds, but it only runs a few times a minute (and occasionally halts after approximately 10 minutes). Overall, it does not work well. What am I missing? Maybe I'm mixing up triggered jobs and continuous jobs, but it is hard to find an appropriate sample.

```
class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("started at {0}", DateTime.Now.ToString("s"));

        // Continuous job should have infinite loop.
        while(true){
            var host = new JobHost();
            host.RunAndBlock();
        }
    }

    public static void ProcessQueueMessage([QueueInput("blogqueue")] string json)
    {
        var storageAccount = CloudStorageAccount.Parse(ConfigurationManager.ConnectionStrings["StorageConnectionString"].ConnectionString);
        var tableClient = storageAccount.CreateCloudTableClient();

        // Store it to Azure Table Storage
        // omitted...

        Console.WriteLine("Received JSON: {0}", json);
    }
}
```
A few comments:

1. You don't need the `while(true)` statement. `RunAndBlock` already has a while loop inside and it will block there.
2. You don't need to manually retrieve the table. You can bind to a table as shown in the samples [here](https://github.com/Azure/azure-webjobs-sdk-samples/blob/master/BasicSamples/TableOperations/Program.cs)
3. You can bind to the storage account; you don't need to manually read the connection string from the configuration file and create the instance.

Here is an example of binding to the storage account:

```
public static void ProcessQueueMessage([QueueInput("blogqueue")] string json, CloudStorageAccount storageAccount)
{
    ...
}
```

This is a wild guess but, from your code, it seems that the storage connection string is stored in `StorageConnectionString` in the config file. The `JobHost` expects either (1) the connection strings for runtime and data to be stored in `AzureJobsData` and `AzureJobsRuntime` (as described [here](http://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/getting-started-with-windows-azure-webjobs)) or (2) the connection string to be passed as a parameter to the `JobHost` constructor. I think this is the reason for the crash.
RequireJS load string In my app there are dynamic parts that are loaded from a database as strings that look like:

```
"define(['dependency1', 'dependency2'], function(){"+
"  // function body" +
"})"
```

which is just a simple RequireJS module, as a string. I want to lazy-load the script above using an async require call. So, my main RequireJS script looks like:

```
require(["jquery"], function($){
    $(document).ready(function(){
        // logic to load specific script from database
        var scriptString = functionToLoadTheStringAbove();

        // ideally I would like to call it like this
        require([scriptString], function(){
            // scriptString, dependency1, dependency2 are loaded
        });
    });
});
```

How do I load such a string in RequireJS? I know about the text plugin, but it only allows loading from files. I tried eval but it doesn't resolve dependencies correctly.
This is quite late, but I'm posting my solution here in case anyone needs it. So I ended up asking in the RequireJS forum and examining the source of the [text! plugin](https://github.com/requirejs/text) and the [json! plugin](https://github.com/millermedeiros/requirejs-plugins/blob/master/src/json.js). The cleanest way to load a module from a String in RequireJS is to make your own plugin to load the String, and then use [onLoad.fromText()](http://requirejs.org/docs/plugins.html#apiload), which will `eval` your String and resolve all dependencies. Example of my plugin (let's call it the `db!` plugin):

```
define([], function(){
    var db = new Database(); // string is loaded from LocalStorage

    return {
        load: function(name, req, onLoad, reqConfig){
            db.get(name, function(err, scriptString){
                if (err) onLoad(err);
                else onLoad.fromText(scriptString);
            });
        }
    }
});
```

You can then use the plugin like:

```
require(["jquery", "db!myScript"], function($, myScript){
    // jQuery, myScript and its dependencies are loaded from database
});
```

*Note:*

1. There's no way to `require()` from a String without `eval`. This is what `onLoad.fromText()` does internally. Since eval is evil, you should only use it if you know what String you're going to `eval()`. If you're using it in a browser extension, you might want to relax the CSP policy.
2. To name your String module, you can use the [explicit naming syntax](http://requirejs.org/docs/api.html#modulename). This way, your module will always have the same absolute name.
Difference between onResume() and onResumeFragments() [`FragmentActivity.onResume()` javadoc:](http://developer.android.com/reference/android/support/v4/app/FragmentActivity.html#onResume()) > > Dispatch onResume() to fragments. Note that for better inter-operation with older versions of the platform, at the point of this call the fragments attached to the activity are not resumed. This means that in some cases the previous state may still be saved, not allowing fragment transactions that modify the state. To correctly interact with fragments in their proper state, you should instead override onResumeFragments(). > > > [`FragmentActivity.onResumeFragments()` javadoc:](http://developer.android.com/reference/android/support/v4/app/FragmentActivity.html#onResumeFragments()) > > This is the fragment-orientated version of onResume() that you can override to perform operations in the Activity at the same point where its fragments are resumed. Be sure to always call through to the super-class. > > > Does the above mean that the platform *guarantees* that: - fragments are **never** going to be resumed (their `onResume()` not called) while executing `FragmentActivity.onResume()` and - fragments are **always** going to be resumed (their `onResume()` called) while executing `FragmentActivity.onResumeFragments()`? If not, how can a developer correctly utilize and be vigilant regarding the above?
**Will `onResume()` be called?** Yes, `FragmentActivity.onResume()` will still be called (same context as `Activity.onResume()`). Even if you override `FragmentActivity.onResumeFragments()` (additional method from `FragmentActivity` that knows it contains `Fragments`). **What is the difference between `onResume()` and `onResumeFragments()`?** `FragmentActivity.onResumeFragments()` is a callback on `FragmentActivity` as to *when the `Fragments` it contains are resuming*, which is not the same as *when the `Activity` resumes*. > > This is the fragment-orientated version of onResume() that you can override to perform operations in the Activity at the same point where its fragments are resumed. Be sure to always call through to the super-class. > > > **When to use which method?** If you are using the support-v4 library and `FragmentActivity`, try to always use `onResumeFragments()` instead of `onResume()` in your `FragmentActivity` implementations. [FragmentActivity#onResume() documentation](https://developer.android.com/reference/android/support/v4/app/FragmentActivity.html#onResume()): > > To correctly interact with fragments in their proper state, you should instead override onResumeFragments(). > > > Difference is subtle, see <https://github.com/xxv/android-lifecycle/issues/8>: > > onResume() should be used for normal Activity's and onResumeFragments() when using the v4 compat library. > This is only required when the application is waiting for the initial FragmentTransaction's to be completed by the FragmentManager. > > >
Onchange event with AngularStrap select element I want to execute a function when the value of the select element changes (the select element in angular-strap is an HTML `button` tag). My HTML:

```
<button type="button" class="btn btn-default" ng-model="selectedCriteria" data-html="1" ng-options="choice.value as choice.label for choice in selectChoices" bs-select>
    Action <span class="caret"></span>
</button>
```

My JS:

```
$scope.selectedCriteria = "Location";
$scope.selectChoices = [{'value':'Location','label':'<i class=\'fa fa-map-marker\'></i> Location'},
    {'value':'Age','label':'<i class=\'fa fa-male\'></i> Age'}];
```

I tried putting an ng-click directive with a function in the controller, but it captures the currently selected value on click, not when the value changes.

Thanks
There are a couple of options. One is using `ngChange` ([Reference](https://docs.angularjs.org/api/ng/directive/ngChange)). The other is using `$watch`; see the $watch section of the [`scope` API reference](https://docs.angularjs.org/api/ng/type/$rootScope.Scope).

**An example using `$watch`** *(this would be in your controller)*

```
$scope.$watch('selectedCriteria', function() {
    $scope.SomeFunction();
});
```

**An example using `ngChange`**

```
<button type="button" class="btn btn-default" ng-change="SomeFunction()" ng-model="selectedCriteria" data-html="1" ng-options="choice.value as choice.label for choice in selectChoices" bs-select>
    Action <span class="caret"></span>
</button>
```
rotate camera around center of object wpf Expression Blend enables you to import 3D models. I want to animate a 3D object with code. I just can't seem to figure out which property values I have to modify in order to make an object rotate. Let me show you what I mean:

![enter image description here](https://i.stack.imgur.com/556BS.png)

So if I want to rotate this object I could use the camera orbit tool

![enter image description here](https://i.stack.imgur.com/fu9nq.png)

and if I use it I can end up with something like:

![enter image description here](https://i.stack.imgur.com/2XI8X.png)

I know I can create a storyboard and create the animation by modifying the object. I need to rotate the object along the x axis with a slider. If I modify just one value it will rotate in a weird way; I actually have to change several properties to get the rotation I want. For example, when I am rotating the object along the x-axis with the camera orbit tool I can see that all of these

![enter image description here](https://i.stack.imgur.com/RNynU.png)

properties are changing. I need to figure out what algorithm is being used to rotate the object.
The math to move the camera position around so that you appear to be rotating around the X axis is just the parametric equation of a circle: ![Parametric Equation of a Circle](https://i.stack.imgur.com/NSHye.png) where *t* is the angle from zero to *2 pi*. Imagine you are standing on the street looking at a house. The camera's coordinates have to follow a circle around the house and the latitude and longitude are continuously changing to keep the same distance from the house. So there is no one value you can change to make it rotate. Once you know the camera position, the direction is just the difference between the origin and the camera position. All this is not hard to calculate but there is an easier way. Instead, **keep the camera fixed and rotate the object**. This makes animations much easier. Here is an MSDN article contains examples of that approach, including animations: - [3-D Transformations Overview](http://msdn.microsoft.com/en-us/library/ms753347.aspx) That article is meant for WPF and Visual Studio but you can easily adapt the same ideas to Expression Blend.
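As a rough illustration of that approach (names like `model3D` and `angleSlider` are placeholders, not from your project), you can attach an `AxisAngleRotation3D` to the model and drive its angle from the slider:

```csharp
// Rotate the model itself around the X axis instead of moving the camera.
var rotation = new AxisAngleRotation3D(new Vector3D(1, 0, 0), 0);
model3D.Transform = new RotateTransform3D(rotation);

// e.g. in the slider's ValueChanged handler:
rotation.Angle = angleSlider.Value; // angle in degrees
```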
Custom Map Marker Icons with Dynamically Generated Numbers
I want to have numbered map markers for my Google Maps and currently I'm using the Google Charts API method to dynamically create numbered markers. However, I am not able to use my own icons with that method.

**Is there a way to use my own custom map marker icons, then overlay/have a number on top of them?**

Alternatively, is there a quick way to create 1000 .PNG markers running from number `1` to `1000`, like a batch process in Photoshop?
I borrowed this code from an article I wrote and tweaked it a little. You should download [this image](http://maps.gstatic.com/intl/en_ALL/mapfiles/marker.png), edit it a bit in Photoshop and place it in the same directory as that of the PHP script. Tweak the numbers in the script until you get something decent.

```
<?php
define("FONT_SIZE", 6); // font size in points
define("FONT_PATH", "c:/windows/fonts/arial.ttf"); // path to a ttf font file
define("FONT_COLOR", 0x00000000); // 4 byte color
// alpha -- 0x00 thru 0x7F; solid thru transparent
// red -- 0x00 thru 0xFF
// green -- 0x00 thru 0xFF
// blue -- 0x00 thru 0xFF

$text = $_GET["text"];

$gdimage = imagecreatefrompng("marker.png");
imagesavealpha($gdimage, true);

list($x0, $y0, , , $x1, $y1) = imagettfbbox(FONT_SIZE, 0, FONT_PATH, $text);

$imwide = imagesx($gdimage);
$imtall = imagesy($gdimage) - 14; // adjusted to exclude the "tail" of the marker

$bbwide = abs($x1 - $x0);
$bbtall = abs($y1 - $y0);

$tlx = ($imwide - $bbwide) >> 1; $tlx -= 1; // top-left x of the box
$tly = ($imtall - $bbtall) >> 1; $tly -= 1; // top-left y of the box

$bbx = $tlx - $x0; // top-left x to bottom left x + adjust base point
$bby = $tly + $bbtall - $y0; // top-left y to bottom left y + adjust base point

imagettftext($gdimage, FONT_SIZE, 0, $bbx, $bby, FONT_COLOR, FONT_PATH, $text);

header("Content-Type: image/png");
header("Expires: " . gmdate("D, d M Y H:i:s", time() + 60 * 60 * 24 * 180) . " GMT");
imagepng($gdimage);
?>
```

Sample output on my system:

![/marker.php?text=9](https://i.stack.imgur.com/sI5b8.png) ![/marker.php?text=99](https://i.stack.imgur.com/CVN6H.png) ![/marker.php?text=999](https://i.stack.imgur.com/vkUSD.png) ![/marker.php?text=AA](https://i.stack.imgur.com/OHnLi.png)
How can I play a raw PCM file that I dumped from a codec?
I'm debugging an audio codec and need to test if the data is coming out of the codec properly. I have my code dumping the raw PCM stream into a file and now I want to try to play it in VLC or another OS X-compatible player. I've heard that this is possible with VLC from the command line.
Short of writing some code to output your own header followed by your PCM data (which will of course work), you could use [Audacity](https://www.audacityteam.org/) to import the file as RAW data, where you then specify:

```
Encoding : signed 16 bit PCM
Byte order : little endian (typically unless you're on powerpc or such)
Channels : 1 mono
start offset : 0
sample rate 44100 Hz (typical)
```

Once imported, Audacity will let you play the clip ... not as slick as a command line, but easy to do nonetheless.

This also works:

```
ffplay -autoexit -f s16le myrawaudio.pcm
```

The above assumes your raw audio uses a bit depth of 16 bits, which is typical of CD-quality audio.
How to write a CASE clause with another column as a condition using knex.js
So my code is like the one below:

```
.select('id','units',knex.raw('case when units > 0 then cost else 0 end'))
```

but it gives me an error like this one

> 
> hint: "No operator matches the given name and argument type(s). You might need to add explicit type casts."
> 
> 

Any idea how I should write my code so I can use one column as a condition for a different column?
I don't get the same error you do: > > CASE types integer and character varying cannot be matched > > > but regardless, the issue is that you're trying to compare apples and oranges. Postgres is quite strict on column types, so attempting to put an integer `0` and a string (value of `cost`) in the same column does not result in an implicit cast. Turning your output into a string does the trick: ``` .select( "id", "units", db.raw("CASE WHEN units > 0 THEN cost ELSE '0' END AS cost") ) ``` Sample output: ``` [ { id: 1, units: null, cost: '0' }, { id: 2, units: 1.2, cost: '2.99' }, { id: 3, units: 0.9, cost: '4.50' }, { id: 4, units: 5, cost: '1.23' }, { id: 5, units: 0, cost: '0' } ] ```
How to configure google identity platform with CLI sdk?
I am trying to change the Google Identity Platform configuration from the command line using the gcloud CLI SDK for Linux. To be specific, I want to call these APIs: [getConfig](https://cloud.google.com/identity-platform/docs/reference/rest/v2/projects/getConfig) and [updateConfig](https://cloud.google.com/identity-platform/docs/reference/rest/v2/projects/updateConfig).

Is there a way to do this using the gcloud CLI? There seems to be a group called `Identity` as per the docs, but this does not seem to do what I want.
I struggled with Google's Identity Toolkit/Platform too, and with how these correspond to Firebase Auth. The term is overloaded by Google, and Cloud Identity, Identity Platform and Firebase Auth have overlapping sets of functionality.

| Name | URL |
| --- | --- |
| Cloud Identity API | cloudidentity.googleapis.com |
| Identity Toolkit API | identitytoolkit.googleapis.com |

> 
> **NOTE** Identity Toolkit is **inaccessible** through Google [APIs Explorer](https://developers.google.com/apis-explorer)
> 
> 

I wanted to be able to programmatically update Firebase Auth's authorized domains. By observing the calls made by Firebase Console, I needed to use [`getConfig`](https://cloud.google.com/identity-platform/docs/reference/rest/v2/projects/getConfig) and [`updateConfig`](https://cloud.google.com/identity-platform/docs/reference/rest/v2/projects/updateConfig) too. These **aren't** surfaced through `gcloud`. Essentially:

1. `GET` the response from `getConfig`
2. I used [jq](https://stedolan.github.io/jq/) to transform it into my desired state
3. `PATCH` `config`1 using an `updateMask`

1 The endpoint for the `updateConfig` method is actually (just) `config`

This worked for me and hopefully helps you better understand how you can use these methods directly. I blogged about it more comprehensively [here](https://pretired.dazwilkin.com/posts/211026/)
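As a rough sketch of those three steps (the project ID is a placeholder, and the exact request URL should be confirmed against the `getConfig`/`updateConfig` reference pages linked above):

```bash
PROJECT=my-project-id   # placeholder
TOKEN=$(gcloud auth print-access-token)

# 1. GET the current config (path per the getConfig reference above)
curl -s -H "Authorization: Bearer ${TOKEN}" \
  "https://identitytoolkit.googleapis.com/admin/v2/projects/${PROJECT}/config" > config.json

# 2. transform config.json into the desired state, e.g. with jq

# 3. PATCH it back, limiting the change with an updateMask
curl -s -X PATCH \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d @config.json \
  "https://identitytoolkit.googleapis.com/admin/v2/projects/${PROJECT}/config?updateMask=authorizedDomains"
```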
How to create a custom deserializer in Jackson for a generic type?
Imagine the following scenario:

```
class Foo<T> {
    ....
}

class Bar {
    Foo<Something> foo;
}
```

I want to write a custom Jackson deserializer for Foo. In order to do that (for example, in order to deserialize a `Bar` class that has a `Foo<Something>` property), I need to know the concrete type of `Foo<T>`, used in `Bar`, at deserialization time (e.g. I need to know that `T` is `Something` in that particular case).

How does one write such a deserializer? It should be possible to do it, since Jackson does it with typed collections and maps.

**Clarifications:**

It seems there are 2 parts to the solution of the problem:

1) Obtain the declared type of property `foo` inside `Bar` and use that to deserialize `Foo<Something>`

2) Find out at deserialization time that we are deserializing property `foo` inside class `Bar` in order to successfully complete step 1)

How does one complete 1 and 2?
You can implement a custom [`JsonDeserializer`](https://fasterxml.github.io/jackson-databind/javadoc/2.7/com/fasterxml/jackson/databind/JsonDeserializer.html) for your generic type which also implements [`ContextualDeserializer`](https://fasterxml.github.io/jackson-databind/javadoc/2.7/com/fasterxml/jackson/databind/deser/ContextualDeserializer.html). For example, suppose we have the following simple wrapper type that contains a generic value: ``` public static class Wrapper<T> { public T value; } ``` We now want to deserialize JSON that looks like this: ``` { "name": "Alice", "age": 37 } ``` into an instance of a class that looks like this: ``` public static class Person { public Wrapper<String> name; public Wrapper<Integer> age; } ``` Implementing `ContextualDeserializer` allows us to create a specific deserializer for each field in the `Person` class, based on the generic type parameters of the field. This allows us to deserialize the name as a string, and the age as an integer. The complete deserializer looks like this: ``` public static class WrapperDeserializer extends JsonDeserializer<Wrapper<?>> implements ContextualDeserializer { private JavaType valueType; @Override public JsonDeserializer<?> createContextual(DeserializationContext ctxt, BeanProperty property) throws JsonMappingException { JavaType wrapperType = property.getType(); JavaType valueType = wrapperType.containedType(0); WrapperDeserializer deserializer = new WrapperDeserializer(); deserializer.valueType = valueType; return deserializer; } @Override public Wrapper<?> deserialize(JsonParser parser, DeserializationContext ctxt) throws IOException { Wrapper<?> wrapper = new Wrapper<>(); wrapper.value = ctxt.readValue(parser, valueType); return wrapper; } } ``` It is best to look at `createContextual` here first, as this will be called first by Jackson. We read the type of the field out of the `BeanProperty` (e.g. `Wrapper<String>`) and then extract the first generic type parameter (e.g. `String`). We then create a new deserializer and store the inner type as the `valueType`. Once `deserialize` is called on this newly created deserializer, we can simply ask Jackson to deserialize the value as the inner type rather than as the whole wrapper type, and return a new `Wrapper` containing the deserialized value. In order to register this custom deserializer, we then need to create a module that contains it, and register that module: ``` SimpleModule module = new SimpleModule() .addDeserializer(Wrapper.class, new WrapperDeserializer()); ObjectMapper objectMapper = new ObjectMapper(); objectMapper.registerModule(module); ``` If we then try to deserialize the example JSON from above, we can see that it works as expected: ``` Person person = objectMapper.readValue(json, Person.class); System.out.println(person.name.value); // prints Alice System.out.println(person.age.value); // prints 37 ``` There are some more details about how contextual deserializers work in the [Jackson documentation](http://wiki.fasterxml.com/JacksonFeatureContextualHandlers).
Does Golang Copy the String on Modification/Write? After reading the answer on [Does Go language use Copy-on-write for strings](https://stackoverflow.com/questions/8532127/does-go-language-use-copy-on-write-for-strings/) , I feel the question was not adequately answered. Given the example below, what is actually happening under the hood? ``` package main import "fmt" func main() { s := "Hello" t := s // t shares the same data as s s += "World" // a new string is created t += "There" // a new string is created. fmt.Printf("%s %s\n", s, t) } ``` output: ``` HelloWorld HelloThere ``` The question is when will golang determine there is a need to create a new copy?
In Go, string values are `read-only` byte slices and you cannot change their elements (they are immutable). Since a string is a slice, it has a backing (underlying) array with a defined capacity. That being said, we can say that a string is a data structure that points to a read-only backing array.

Strings are optimized for high reusability and are thus read-only. Whenever you modify a string, a new string (byte slice) is created in the background, which makes it a somewhat costly operation. One recommendation is to convert a string to an actual byte slice `[]byte(string)` and work with bytes, or use [strings.Builder](https://golang.org/pkg/strings/#Builder) when your program needs to do a lot of string manipulation.

```
s := "Hello" // backing array for "Hello" created; `s` points to the backing array
t := s       // `t` is a new string structure and points to the same backing array as `s`
s += "World" // new backing array created for "HelloWorld"; `s` points to the new backing array
t += "There" // `t` was still pointing to "Hello"; with this operation, a new backing array is created for "HelloThere" and `t` points to it
```

[StringHeader - data structure](https://golang.org/src/reflect/value.go?s=56526:56578#L1873)

[String Data Type in GO](https://medium.com/rungo/string-data-type-in-go-8af2b639478)
Why is casting from char to std::byte potentially undefined behavior?
The `std::byte` of C++17 is required to be an enum class:

```
enum class byte : unsigned char {};
```

We may want to use that `std::byte` to represent raw memory instead of one of the `char` types, since it is more type-safe, has its byte-specific operators defined and can't promote to `int` out of the blue like the `char` types do. We need to use explicit casts or `to_integer` to convert `std::byte` to other integers. However, from a lot of sources we still get `char` (or more likely whole buffers of `char`) and so we may want to convert it:

```
void fn(char c)
{
    std::byte b = static_cast<std::byte>(c);
    // ... that may invoke undefined behavior, read below
}
```

The signedness of `char` is implementation-defined, so `std::numeric_limits<char>::is_signed` may be `true`. Therefore the above `c` may have negative values that are outside the range of `unsigned char`. Now, in the C++17 standard, in 8.2.9 Static cast [expr.static.cast] paragraph 10, we can read that:

> 
> A value of integral or enumeration type can be explicitly converted to
> a complete enumeration type. The value is unchanged if the original
> value is within the range of the enumeration values (10.2). Otherwise,
> the behavior is undefined.
> 
> 

And from 10.2 we can see that the mentioned range is the range of the underlying type. Therefore, to avoid undefined behavior, we have to write more code. For example, we can add a cast to `unsigned char` to get the defined effects of modular arithmetic during the cast:

```
void fn(char c)
{
    std::byte b = static_cast<std::byte>(static_cast<unsigned char>(c));
    // ... now we have done it in a portable manner?
}
```

Did I misunderstand something? Isn't that over-abundantly complicated and restrictive? Why can't an `enum class` that has an unsigned underlying type follow modular arithmetic like its underlying type does? Note that the whole row of casts is most likely compiled into nothing by the compiler anyway. When `char` is signed it has to be two's complement since C++14, and so its bitwise representation has to be the same as after the modular arithmetic conversion to `unsigned char`. Who benefits from that formal undefined behavior and how?
This is going to be fixed in the [next standard](http://eel.is/c++draft/expr.static.cast#10): > > A value of integral or enumeration type can be explicitly converted to a complete enumeration type. If the enumeration type has a fixed underlying type, **the value is first converted to that type by integral conversion, if necessary, and then to the enumeration type**. If the enumeration type does not have a fixed underlying type, the value is unchanged if the original value is within the range of the enumeration values ([dcl.enum]), and otherwise, the behavior is undefined > > > [Here's](http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_defects.html#1766) the rationale behind the change from (C++11) unspecified to (C++17) undefined:  > > Although issue 1094 clarified that the value of an expression of enumeration type might not be within the range of the values of the enumeration after a conversion to the enumeration type (see 8.2.9 [expr.static.cast] paragraph 10), **the result is simply an unspecified value. This should probably be strengthened to produce undefined behavior, in light of the fact that undefined behavior makes an expression non-constant.** > > > And [here's](https://wg21.cmeerw.net/cwg/issue2338) the rationale behind the C++2a fix: > > The specifications of std::byte (21.2.5 [support.types.byteops]) and bitmask (20.4.2.1.4 [bitmask.types]) have revealed a problem with the integral conversion rules, according to which both those specifications have, in the general case, **undefined behavior**. The problem is that a conversion to an enumeration type has undefined behavior unless the value to be converted is in the range of the enumeration. > > > For enumerations with an unsigned fixed underlying type, **this requirement is overly restrictive, since converting a large value to an unsigned integer type is well-defined**. > > >
Generate secure URL safe token ``` using Microsoft.AspNetCore.WebUtilities; using System.Security.Cryptography; namespace UserManager.Cryptography { public class UrlToken { private const int BYTE_LENGTH = 32; /// <summary> /// Generate a fixed length token that can be used in url without endcoding it /// </summary> /// <returns></returns> public static string GenerateToken() { // get secure array bytes byte[] secureArray = GenerateRandomBytes(); // convert in an url safe string string urlToken = WebEncoders.Base64UrlEncode(secureArray); return urlToken; } /// <summary> /// Generate a cryptographically secure array of bytes with a fixed length /// </summary> /// <returns></returns> private static byte[] GenerateRandomBytes() { using (RNGCryptoServiceProvider provider = new RNGCryptoServiceProvider()) { byte[] byteArray = new byte[BYTE_LENGTH]; provider.GetBytes(byteArray); return byteArray; } } } } ``` I've created the above class (C#, .Net Core 2.0) to generate a cryptographically secure string token that is URL safe, so it can be used in an URL without the necessity to be encoded. I will use that token as a `GET` parameter (e.g. `www.site.com/verify/?token=v3XYPmQ3wD_RtOjH1lMekXloBGcWqlLfomgzIS1mCGA`) in a user manager application where I use the token to verify the user email or to recover a user password. The above link will be sent as email to the user that has requested the service. I store the token into a DB table with an associated expiration datetime. I've seen other implementations on this and other sites but all seem to be unnecessarily complicated. Am I missing something?
Minor suggestions: ``` public class UrlToken ``` The class has no instance data, so it could be made `static`: ``` public static class UrlToken ``` [Microsoft's Naming Guidelines](https://docs.microsoft.com/en-us/dotnet/standard/design-guidelines/general-naming-conventions) and their [Framework Design Guidelines](https://docs.microsoft.com/en-us/dotnet/standard/design-guidelines/capitalization-conventions) suggest not using underscores and also using PascalCasing for constants, so ``` private const int BYTE_LENGTH = 32; ``` could be: ``` private const int ByteLength = 32; ``` However, even that name doesn't tell us much of what it is for. Let's try again: ``` private const int NumberOfRandomBytes = 32; ``` Typo/misspelling in the XML doc comment: "encoding" is written as "endcoding". There is mixed curly brace formatting. Microsoft guidelines (see links above) suggest the opening and closing curly braces should be on their own line. ``` using (RNGCryptoServiceProvider provider = new RNGCryptoServiceProvider()) { ``` to: ``` using (RNGCryptoServiceProvider provider = new RNGCryptoServiceProvider()) { ``` By the way, kudos to you on your proper use of the `using` construct! Looks fantastic!
Is it a good practice to always scale/normalize data for machine learning?
My understanding is that when some features have different ranges in their values (for example, imagine one feature being the age of a person and another one being their salary in USD), this will negatively affect algorithms because the feature with bigger values will have more influence. Is it a good practice to simply ALWAYS scale/normalize the data?

It looks to me that if the values are already similar among themselves, then normalizing them will have little effect, but if the values are very different normalization will help; however, it feels too simple to be true :)

Am I missing something? Are there situations/algorithms where it is actually desirable to let some features deliberately outweigh others?
First things first, I don't think there are many questions of the form "Is it a good practice to always X in machine learning" where the answer is going to be definitive. Always? Always always? Across parametric, non-parametric, Bayesian, Monte Carlo, social science, purely mathematic, and million feature models? That'd be nice, wouldn't it!

Concretely though, here are a few ways in which it just depends.

**Some times when normalizing is good:**

1) Several algorithms, in particular SVMs come to mind, can sometimes converge far faster on normalized data (although why, precisely, I can't recall).

2) When your model is sensitive to magnitude, and the units of two different features are different, and arbitrary. This is like the case you suggest, in which something gets more influence than it should.

But of course -- not all algorithms *are* sensitive to magnitude in the way you suggest. Linear regression coefficients will be identical if you do, or don't, scale your data, because it's looking at *proportional* relationships between them.

**Some times when normalizing is bad:**

1) When you want to interpret your coefficients, and they don't normalize well. Regression on something like dollars gives you a meaningful outcome. Regression on proportion-of-maximum-dollars-in-sample might not.

2) When, in fact, the units on your features are meaningful, and distance *does* make a difference! Back to SVMs -- if you're trying to find a max-margin classifier, then the units that go into that 'max' matter. Scaling features for clustering algorithms can substantially change the outcome. Imagine four clusters around the origin, each one in a different quadrant, all nicely scaled. Now, imagine the y-axis being stretched to ten times the length of the x-axis. Instead of four little quadrant-clusters, you're going to get the long squashed baguette of data chopped into four pieces along its length! (And, the important part is, you might prefer either of these!)

In I'm sure unsatisfying summary, the most general answer is that you need to ask yourself seriously what makes sense with the data, and model, you're using.
Inserting multiple dictionary objects into a MySQL database using python and MySQLdb
I have been struggling with this for hours and I feel like crying now as I'm unable to fathom out what is happening. Here is a simplified version of my data:

```
mydata = [
    {
        'id': 123,
        'thing': 'ghi',
        'value': 1
    },
    {
        'id': 456,
        'thing': 'xyz',
        'value': 0
    }
]
```

This is the code I have:

```
import MySQLdb as mdb

con = None
con = mdb.connect('localhost', 'testing', 'anothervalue', 'andanother');
cur = con.cursor()
sql = "INSERT INTO `tablename` ( `id`, `thing`, `value` ) VALUES ( %(id)s, %(thing)s, %(value)s )"
cur.executemany( sql, ( mine for mine in mydata ) )
con.close()
```

What I expected to happen was that 2 rows would be inserted into **tablename**. What actually happens is that the script executes without any errors and no rows are inserted. What am I doing wrong? If I do a single INSERT by hand it inserts into the table properly, so I know it's not a problem with the MySQL database but rather with how I'm passing the variables into the database.

An additional question I have is: how do I insert **value** as a float? At the moment I have the definition of **value** in the table as **TINYTEXT NOT NULL**; I would like this to be **FLOAT NOT NULL** but I'm not sure how to handle the substitution above.
There is no need to use a generator to loop over `mydata`. Just pass in `mydata` directly:

```
cur.executemany(sql, mydata)
```

The database adapter will loop over `mydata` for you and a generator expression only adds a needless extra loop.

If you do not get any error messages but there are no changes either, check the following:

- Make sure you commit the transaction; run `con.commit()` after the `.executemany()` call.
- *Triple-check* that `mydata` is not empty.

The database adapter will correctly handle float values for you; if a column is marked as `FLOAT NOT NULL` and you pass in a Python float value for that column, Things Just Work. That's what SQL parameters are for, handling quoting of different datatypes correctly.
AWS Public Subnet Internet Access for Non Public IP Instances
I have a public subnet in AWS and I have 3 instances in it ...

1. WebApp01 (Elastic IP - 54.23.61.239 for example)
2. WebApp02 (Private IP - 192.168.0.24)
3. WebApp03 (Private IP - 192.168.0.25)

And my route table is set up as:

- 192.168.0.0/16 -> local
- 0.0.0.0/0 -> Internet Gateway

I can see that the instance that has the public IP has internet access, but the instances that don't have a public IP are not able to access the internet. How can I give internet access to the other instances inside the Public Subnet? I'm a newbie in networking and any help will be appreciated.

Just FYI: I know that creating a NAT, then creating a separate route table with 0.0.0.0 -> NAT and associating that route with the Private Subnet, gives internet access to the instances in the private subnet, but I can't figure out how to give internet access to the non-public-IP instances in the public subnet. Please help!
You will need to assign public IP addresses to your instances that do not have one, or add an EIP, in order for them to access the Internet.

An AWS Internet Gateway is a special type of NAT Gateway (1 - 1 address mapping). Without a public IP address there is nothing for the Internet Gateway to map to the EC2 instance - one public IP maps to one private IP inside the Internet Gateway.

Although you add a NAT Gateway to a public subnet with an Internet Gateway, the NAT Gateway does not provide address translation (NAT) to instances in the public subnet - only to instances in the private subnets. The default route in each private subnet points to the NAT Gateway.

If you want these instances to be protected by only having private IP addresses then you will need to move the instances to a private subnet (one with a NAT Gateway).

An instance in a public subnet without a public IP address is orphaned from the Internet. The instance can talk to other instances in the VPC (the Intranet) but cannot talk to the Internet.

There is a lot of confusion about what AWS VPC subnets are. There are three types. 1) Public subnet (one that has an IGW). 2) Private subnet (one that has a NAT Gateway or NAT instance, or neither). 3) Hybrid (one that has a VGW routing to a data center or similar).
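Coming back to the first point, attaching an Elastic IP to one of the private-IP-only instances can be done from the AWS CLI (the IDs below are placeholders):

```
# Allocate a new Elastic IP in the VPC, then attach it to e.g. WebApp02
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0abc1234def567890
```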
Advantages of using Core Graphics
I would like to know what kind of advantages I get from using Core Graphics instead of OpenGL ES. My main question is based on these points:

- Creating simple View animations.
- Creating some visually appealing objects (Graphics like Core Plot for instance, Animated Objects, etc).
- Time consuming (both learning and implementing)
- Simple 2D Games
- Complex 2D Games
- 3D Games
- Code maintenance and also cleaner code.
- Easier integration with other UI elements.

Thanks.
First, I want to clear up a little terminology here. When people talk about Core Graphics, they generally are referring to Quartz 2D drawing, which is a 2-D vector-based drawing API. It is used to draw out vector elements either to the screen or to offscreen contexts like PDFs. Core Animation is responsible for animation, layout, and some limited 3-D effects involving rectangular layers and UI elements. OpenGL ES is a lower-level API for talking with the graphics hardware on iOS devices for both 2-D and 3-D drawing. You're asking a lot in your question, and the judgment on what's best in each scenario is subjective and completely up to the developer and their particular needs. I can, however, provide a few general tips. In general, a recommendation you'll see in Apple's documentation and in presentations by engineers is that you're best off using the highest level of abstraction that solves your particular problem. If you need to just draw a 2-D user interface, the first thing you should try is to implement this using Apple's provided UIKit elements. If they don't have the capability you need, make custom UIViews. If you are designing Mac-iOS cross-platform code (like in the Core Plot framework), you might drop down to using custom Core Animation CALayers. Each step down in this process requires you to write more code to handle things that the level above did for you. You can do a surprising amount of stuff with Core Animation, with pretty good performance. This isn't just limited to 2-D animations, but can extend into some simple 3-D work as well. OpenGL ES is underneath the drawing of everything you see on the screen for an iOS device, although this is not exposed to you. As such, it provides the least abstraction for onscreen rendering, and requires you to write the most code to get something done. However, it can be necessary in situations where you want to extract the most performance from 2-D display (say, in an action game) or to render true 3-D objects and environments. Again, I tend to recommend that people start at the highest level of abstraction when writing an application, and only drop down when they find that they cannot do something or the performance is not within the specification they are trying to hit. Fewer lines of code makes applications easier to write, debug, and maintain. That said, there are some nice frameworks that have developed around abstracting away OpenGL ES, such as cocos2D and Unity 3D, which might make working with OpenGL ES easier in many situations. For each case, you'll need to evaluate what makes sense for the particular needs of your application.
How to trigger click on a button I've [this page](https://www.flipkart.com/kemei-km-1006-high-precision-hair-beard-trimmer-men/p/itme8gdd7zk7gyz7?pid=SHVEH782895ZGURK&otracker=hp_omu_Deals%20of%20the%20Day_2_Just%20%E2%82%B9349_8849cc32-8e31-4ec0-bdd2-937feda25f24). I need to trigger a click on the BUY NOW button on this page using AngularJS. I've tried these ways to click on this "BUY NOW" in content script(myscript.js) but does not work: ``` angular.element($('ul form button:contains("BUY NOW")').get(0)).triggerHandler('click'); $('ul form button:contains("BUY NOW")').get(0).click(); $('ul form button:contains("BUY NOW")').get(0).dispatchEvent(new MouseEvent('click', { 'view': window, 'bubbles': true, 'cancelable': true })); ``` The manifest.json looks like this: ``` "content_scripts": [ { "run_at": "document_end", "all_frames": false, "matches": ["*://www.flipkart.com/*"], "css": [ "jqueryui/jquery-ui.css", "js/slidenavi/sidenavi-right.css","main.css", "js/bootstrap-switch-master/dist/css/bootstrap3/bootstrap-switch.min.css"], "js": ["jquery-2.1.4.min.js", "jqueryui/jquery-ui.min.js","js/angular.min.js", "js/jquery.cookie.js", "jqueryui/jquery-ui.min.js","js/slidenavi/SideNavi.js", "client_server_common.js", "user-selections.js", "jquery.countdown.min.js", "js/bootstrap-switch-master/dist/js/bootstrap-switch.min.js", "js/cryptojs/rollups/md5.js", "common.js", "myscript.js" ] } ], ``` **What is the way to make it work?**
Try with this code; it simulates a mouse left click on the element by a quick succession of mousedown, mouseup and click events fired in the center of the button: ``` var simulateMouseEvent = function(element, eventName, coordX, coordY) { element.dispatchEvent(new MouseEvent(eventName, { view: window, bubbles: true, cancelable: true, clientX: coordX, clientY: coordY, button: 0 })); }; var theButton = document.querySelector('ul form button'); var box = theButton.getBoundingClientRect(), coordX = box.left + (box.right - box.left) / 2, coordY = box.top + (box.bottom - box.top) / 2; simulateMouseEvent (theButton, "mousedown", coordX, coordY); simulateMouseEvent (theButton, "mouseup", coordX, coordY); simulateMouseEvent (theButton, "click", coordX, coordY); ```
fabric8 docker-maven-plugin: include additional tags on build I have the fabric8 docker-maven-plugin configured in my pom.xml as follows: ``` <build> ... <plugins> ... <plugin> <groupId>io.fabric8</groupId> <artifactId>docker-maven-plugin</artifactId> <version>${docker.plugin.version}</version> <executions> <execution> <phase>package</phase> <goals> <goal>build</goal> </goals> </execution> </executions> <configuration> <images> <image> <name>${docker.image.prefix}/${project.artifactId}:%l</name> <build> <dockerFile>Dockerfile</dockerFile> <assembly> <descriptorRef>artifact</descriptorRef> </assembly> </build> </image> </images> </configuration> </plugin> ... </plugins> ... </build> ``` I'm using the `%l` placeholder which tags the image with the `latest` label if the version contains `-SNAPSHOT`, otherwise it uses the pom version. When building from CI, I'd like to include some additional tags (possibly more then one) to my image (e.g. build number / branch name) but I'd like to keep `%l` placeholder behavior. I think that it should be possible using maven properties from command line, but I couldn't figure it out from the plugin docs (<https://dmp.fabric8.io/>) How can I include additional tags when executing the docker:build goal?
You can use the `<tags>` tag: <https://dmp.fabric8.io/#build-configuration> ``` <properties> ... <!-- set default --> <docker.image-tag>${project.version}</docker.image-tag> ... </properties> <image> ... <name>repo/something/%a:%l</name> <build> ... <tags> <tag>${docker.image-tag}</tag> </tags> ... </build> ... </image> ``` this will tag your image with both the `%l` behavior and the custom set `${docker.image-tag}`. ``` mvn docker:build -Ddocker.image-tag=mytag ```
Forward declaration of objects with STL containers
Consider the following code snippet, where the first line serves only as a forward declaration

```
class A;
```

followed by defining a new class

```
class B
{
    vector<A> Av; //line 1
    map<int, A> Am; //line 2
    pair<int, A> Ap; //line 3
};
```

line 1 and line 2 seem to be fine with the forward declaration (which may tell me that those containers use a pointer type of implementation), whereas line 3 does not seem to compile on VS2012. My question is: is that behavior dictated by the standard, or is it specific to the compiler I am using?
The relevant rules for the standard library types are in [res.on.functions]:

> 
> In particular, the effects are undefined in the following cases: [...] if an incomplete type (3.9) is used as a template argument when instantiating a template component, unless specifically allowed for that component.
> 
> 

This:

```
vector<A> Av;
```

is fine. `std::vector` is allowed to be instantiated with an incomplete type, as long as it becomes complete before you use any of the members. There is an explicit exception for this in the standard in [vector.overview]:

> 
> An incomplete type `T` may be used when instantiating `vector` if the allocator satisfies the allocator completeness
> requirements 17.6.3.5.1. `T` shall be complete before any member of the resulting specialization of vector
> is referenced.
> 
> 

There is similar wording for `std::list` and `std::forward_list`.

This:

```
map<int, A> Am;
```

is ill-formed. `std::map` requires a complete type at point of instantiation as per the first quote. There is no exception for this container in the way that there is for `vector`.

This:

```
pair<int, A> Ap;
```

cannot possibly ever work, since `pair` is just a simple struct with two members. In order to have a member of type `A`, you need a complete type.
Managed Stacks in Process Monitor
Is it possible to see managed stack traces in Process Monitor for file access events of a given process? If not, is there another way to accomplish this task?
Unfortunately Process Monitor does not yet support managed call stacks. But [perfview](https://github.com/Microsoft/perfview/blob/master/documentation/Downloading.md) is pretty good at decomposing managed stacks and it also provides a way to collect File I/O events: ### Select File I/O events in Run/Collect dialog: ![enter image description here](https://i.stack.imgur.com/41h7e.png) ### After tracing is finished you should have File I/O events available in the directory tree: ![enter image description here](https://i.stack.imgur.com/vTWEj.png) ### Double clicking it should present you a window with all collected File I/O operations ![enter image description here](https://i.stack.imgur.com/8rdQf.png) Each operation has a call stack registered and you may browse it on other tabs available in the events window.
Is there a frame drop option in X11Forwarding?
I'm running a computer vision model on a headless remote VM (Ubuntu 16.04) over X11Forwarding with good ol' Putty and Xming as my Windows X Server. All is well, but it seems there is no frame dropping if the client-server bandwidth can't keep up, which means my application is slowed down and only renders a few frames a second when it could do hundreds if bandwidth were plentiful.

Is there a *force frame drop* option built into X11 forwarding, and if there is, how do I turn it on?
I highly recommend [Xpra](https://www.xpra.org/) for this sort of use-case: not only does it provide the ability to disconnect and reconnect to X applications running on a remote host, it also supports a variety of [image encodings](https://github.com/Xpra-org/xpra/blob/master/docs/Usage/Encodings.md) to provide a decent experience in different circumstances, can [accelerate OpenGL applications](https://github.com/Xpra-org/xpra/blob/master/docs/Usage/OpenGL.md) and [use OpenGL in the client for better performance](https://github.com/Xpra-org/xpra/blob/master/docs/Usage/Client-OpenGL.md). It has a [native Windows client](https://github.com/Xpra-org/xpra/wiki/Download) so it should be easy enough to set up. You’ll need to install it on the remote VM too, but that’s as easy as `apt install xpra` on Ubuntu.
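A rough sketch of the workflow (the display number, user/host and launch script are placeholders; check `xpra --help` for the options available in your version):

```
# on the remote VM: start an xpra session and launch the application inside it
xpra start :100 --start-child=./run_vision_model.sh

# on the Windows client: attach to that session over SSH
xpra attach ssh:user@remote-vm:100
```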
Disable Chrome's default headers & footers in headless print-to-pdf
I've got a multi-page HTML document, which I want to export to PDF using headless Google Chrome / Chromium from the command line: `chrome --headless --print-to-pdf [...]`. The issue with this is that Chrome adds auto-generated headers and footers to the page when "printing".

Others have suggested using `@page {margin: 0}` in my CSS, but as some others also stated, that only works for a few pages for some magic reason, hence there's a footer on the last page of my example.

I am aware of an [NPM package](https://www.npmjs.com/package/chrome-headless-render-pdf) that supports export without headers, but that export takes around 30% more time than headless Chrome itself and would require installing and setting up NPM and the package on my company's servers.

If anyone knows any way to hide Google Chrome's default headers / footers on headless PDF export via CSS or some setting, please let me know.

---

By the way, since I did not figure out another solution, I went with the [NPM package](https://www.npmjs.com/package/chrome-headless-render-pdf) instead. It's been working very well and reliably so far; it just took about 30% more time in my tests, so keep that in mind.
There is an option available in latest Google Chrome Canary builds for this. Use the `--print-to-pdf-no-header` option. ``` canary \ --headless \ --disable-gpu \ --run-all-compositor-stages-before-draw \ --print-to-pdf-no-header \ --print-to-pdf=example.pdf \ http://example.com ``` ref: [source code](https://source.chromium.org/chromium/chromium/src/+/master:headless/app/headless_shell_switches.cc?originalUrl=https:%2F%2Fcs.chromium.org%2Fchromium%2Fsrc%2Fheadless%2Fapp%2Fheadless_shell_switches.cc)
Autostart MySQL Server on Mac OS X Yosemite/El Capitan
I would like to auto-start the MySQL server on startup. This was possible in Mavericks but does not seem to work on Yosemite.

**edit**: it seems this works with El Capitan as well

![enter image description here](https://i.stack.imgur.com/iHIPl.png)
@dcc was very close. This is how MySQL autostarts again on Yosemite: The `com.mysql.mysql.plist` in `/Library/LaunchDaemons`: ``` <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>KeepAlive</key> <true/> <key>Label</key> <string>com.mysql.mysqld</string> <key>ProgramArguments</key> <array> <string>/usr/local/mysql/bin/mysqld_safe</string> <string>--user=mysql</string> </array> </dict> </plist> ``` Additionally I've changed the permissions based on [this answer](https://stackoverflow.com/questions/5555589/mysql-5-5-10-mac-10-6-x-auto-start#5557091) ``` sudo chown root:wheel /Library/LaunchDaemons/com.mysql.mysql.plist sudo chmod 644 /Library/LaunchDaemons/com.mysql.mysql.plist ``` Finally I run this command ``` sudo launchctl load -w /Library/LaunchDaemons/com.mysql.mysql.plist ``` If you have any addition please share below!
How to properly work with Timezone?
I'm reading a lot about **timezone**, **offset**, **utc**, **local time**, **javascript functions**, **DST**, **bacon** and I'm trying to put this all together to build a solid/correct structure for my app. Suppose my app is something like StackOverflow.

**This is how I'm doing it ...**

- The server is in another country, so I set it to UTC 00:00.
- I'm storing date as `DateTimeOffset`.
- I'm **not** storing `TimeZoneID`.
- Date is being sent to the client in this format: `2012-07-19T14:30:00-03:00`.
- I'm using an Angular filter to convert it to local time.

I have a few questions about it ...

# 1º Server TimeZone?

About my server (single server) ... should it be running with a "neutral" UTC (+00:00)? And what if, in the future, we move to a farm where servers run in different locations?

# 2º What should I store?

Currently, I'm storing just the date as `DateTimeOffset`. I'm reading about saving the `TimeZoneID` but I'm seeing no use at all for this. Am I missing something? Or should I store the date as `DateTimeUtc` with a `TimeZoneID` and manually convert every date with the `TimeZone` class?

# 3º How to convert to local time?

Is it safe to convert data on the client? Or should date conversions always be done on the server side?

# 4º About DST.

Using my current approach, will DST be respected?
One very important thing to understand about date/time is that there is no one right way for doing everything. The common answer "use UTC" is *not always* applicable. Context is very important, and there are different techniques and approaches based on what the values you are working with are representing. If you could elaborate on what they are used for in your application, I will update my answer accordingly. In the meantime, I'll try to address the specific points you have brought up already: ## #1 - Server Time Zone Keeping your server at UTC is a best practice, and it is what you can expect from cloud providers like Azure or AWS also. But it isn't something that you should be dependent on. Your server should be able to be set to *any* time zone without it affecting your application. As long as the clock is in sync with an NTP server, choice of time zone should not matter. So how do you ensure that? Simple, just make sure your application avoids all of the following: - `DateTime.Now` - `DateTimeKind.Local` - `TimeZone` (the entire class) - `TimeZoneInfo.Local` - `DateTime.ToLocalTime()` - `DateTime.ToUniversalTime()` (because it assumes the input is local) - Misc. other methods that assume a local input or output, such as `TimeZoneInfo.ConvertTimeToUtc(DateTime)` (this particular overload doesn't take a time zone, so it assumes the local time zone) See also my blog post: [The Case Against DateTime.Now](http://codeofmatt.com/2013/04/25/the-case-against-datetime-now/). Note that I didn't include `DateTimeOffset.Now` in the list. Although it's a little bit of a design smell, it is still "safe" to use. ## #2 - What to store I suggest you read my answer to [DateTime vs DateTimeOffset](https://stackoverflow.com/questions/4331189/datetime-vs-datetimeoffset/14268167). It should clarify some things. Without regurgitating the whole thing, the main point is that while both represent a point in time accurately, a `DateTimeOffset` provides perspective, while a UTC `DateTime` does not. You also asked when you should store a `TimeZoneInfo.Id`. There are at least two scenarios where this is required: - If you are recording events in the past or present, and you plan on allowing *modifications* to the recorded timestamps. You need the time zone to determine what the new offset should be, or how the new input converts back to UTC. - If you are scheduling time out into the future, you will need the time zone as part of the recurrence pattern (even for a single occurrence). See [here](https://stackoverflow.com/a/19170823/634824) and [here](https://stackoverflow.com/a/19627330/634824) also, (while for other languages, the same principles apply). Again, the exact answer depends on what exactly the timestamps represent. There is no one ring to rule them all. ## #3 - Client Safety If it's a .NET client, sure you can convert there. But I think you are asking about a JavaScript client browser. "Safe" is a relative term. If you're asking for exact perfectness, then no. JavaScript isn't safe for that, due to an error in the ECMAScript specification (ES1 through ES5.1. It is being worked on for ES6). You can read more in my blog post: [JavaScript Date type is horribly broken](http://codeofmatt.com/2013/06/07/javascript-date-type-is-horribly-broken/). 
However, if you are working with relatively current data, and the users of your application are not in a part of the world where time zones are volatile, or you don't require precise results 100% of the time, then you can "safely" use JavaScript to convert to the user's local time zone. You might avoid *some* of these issues with libraries that implement the IANA TZDB in JavaScript, such as the ones I list [here](https://stackoverflow.com/a/15171030/634824). But many of them are still dependent on JS `Date`, so they still have issues. (Side note - I'm working on a JS library that will counter this, but it is not ready to share yet). Conversions on the server side are a much better choice, as long as you can ask the user for their time zone. Most of the time, I think this is doable. You might consider asking using a map-based timezone picker, such as [this one](https://github.com/dosx/timezone-picker) or [this one](http://timezonepicker.com/). Both of which will require you use IANA time zones, which for .NET means using [Noda Time](http://nodatime.org), which is a great idea anyway (IMHO). ## #4 - Daylight Saving Time With your current approach, DST will be respected within the definition of the *current* DST rules for the time zone the user has set for their local browser. (Again, refer to my blog post for why this is the case). A conversion from any value with an offset (whether `-03:00` or `Z`) that passes through the `Date` object (which I believe an Angular filter will do), will properly convert to the specific unix timestamp. The errors that would crop up with DST conversions for prior DST rules are because going from the unix timestamp inside the `Date` object to the local time zone will always assume that the *current* DST rule is applicable, even if the time fell into a period that had a different rule.
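To illustrate the server-side conversion suggested under #3, here is a minimal Noda Time sketch (the variable names and the example zone id are placeholders):

```csharp
// postedAt is the DateTimeOffset you stored; the zone id came from asking the user.
var instant = Instant.FromDateTimeOffset(postedAt);
var zone = DateTimeZoneProviders.Tzdb["America/Sao_Paulo"]; // example IANA id
var localForUser = instant.InZone(zone).LocalDateTime;
```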
How do I determine the files corresponding to a uinput device? In linux when a uinput device is created, one or more event files corresponding to that device get created in the file system. (For example, if I create a uinput mouse, then the file /dev/input/mouseN is created.) But how do I determine *which* files got created for a given uinput device? The uinput kernel module does not appear to provide any ioctl for getting that information. One possible approach is to poll the file system immediately after creating the uinput device to see what files appear, but that approach does not work because of races with other devices, both real and uinput, that are also plugged in or created around the same time. Am I overlooking something, or must I hack the kernel to get this info?
If you look in *sysfs* you can find your information. Once you have created your uinput device do: ``` $ ls /sys/class/input/ event0 event1 ... eventN input0 input2 ... input19 ... inputN mouse0 mouse1 ... mouseN mice $ ls /sys/devices/virtual/input/ input19 mice ``` Notice that you can find virtual device in a different path. In this case, input19 is my uinput device. Which is the correspondent char device? ``` $ ls /sys/devices/virtual/input/input19/ event14 name id ... ``` My char device is `/dev/input/event14`. I know that `input19` is my uinput device because I'm the only user who is creating uinput devices. If you want to be sure, you must read its sysfs attribute *name* and verify that it is really your device ``` $ cat /sys/devices/virtual/input/input19/name foo-keyboard-201303261446 ``` You can retrieve information about your new uinput devices by reading kernel messages: ``` $ dmesg | tail -n 7 input: foo-keyboard-201303261445 as /devices/virtual/input/input14 input: foo-keyboard-201303261445 as /devices/virtual/input/input15 input: foo-keyboard-201303261445 as /devices/virtual/input/input16 input: foo-keyboard-201303261445 as /devices/virtual/input/input17 input: foo-keyboard-201303261446 as /devices/virtual/input/input18 input: foo-keyboard-201303261446 as /devices/virtual/input/input19 input: foo-keyboard-201303261446 as /devices/virtual/input/input20 ``` From your program you can read from `/dev/kmsg` and catch your event. Maybe you can open the device `/dev/kmsg`, flush it, wait on `select()` until you receive the uinput notification. An alternative is to use *libudev* to retrieve you uinput device. Take a look at the following link: [libudev tutorial](http://www.signal11.us/oss/udev/) **UPDATE**: thanks to your question I improved my libuinput library available on github: [libuinput by Federico](https://github.com/FedericoVaga/libuinput). I implemented the solution that use hte *kmsg* device. **UPDATE**: in 2014 the Linux `uinput` driver has been improved (git SHA1 e3480a61fc). Now it is possible to get the sysfs path directly form the `uinput` driver using the following `ioctl` command: ``` /** * UI_GET_SYSNAME - get the sysfs name of the created uinput device * * @return the sysfs name of the created virtual input device. * The complete sysfs path is then /sys/devices/virtual/input/--NAME-- * Usually, it is in the form "inputN" */ #define UI_GET_SYSNAME(len) _IOC(_IOC_READ, UINPUT_IOCTL_BASE, 300, len) ``` So if you have the possibility to use a Linux kernel more recent than 3.13, you can use the above `ioctl` to improve your code that uses uinput.
Understanding MCMC and the Metropolis-Hastings algorithm
Over the past few days I have been trying to understand how Markov Chain Monte Carlo (MCMC) works. In particular I have been trying to understand and implement the Metropolis-Hastings algorithm. So far I think I have an overall understanding of the algorithm but there are a couple of things that are not clear to me yet. I want to use MCMC to fit some models to data. Because of this I will describe my understanding of the Metropolis-Hastings algorithm for fitting a straight line $f(x)=ax$ to some observed data $D$:

1) Make an initial guess for $a$. Set this $a$ as our current $a$ ($a\_0$). Also add $a$ at the end of the Markov Chain ($C$).

2) Repeat the steps below a number of times.

3) Evaluate current likelihood (${\cal L\_0}$) given $a\_0$ and $D$.

4) Propose a new $a$ ($a\_1$) by sampling from a normal distribution with $\mu=a\_0$ and $\sigma=stepsize$. For now, $stepsize$ is constant.

5) Evaluate new likelihood (${\cal L\_1}$) given $a\_1$ and $D$.

6) If ${\cal L\_1}$ is bigger than ${\cal L\_0}$, accept $a\_1$ as the new $a\_0$, append it at the end of $C$ and go to step 2.

7) If ${\cal L\_1}$ is smaller than ${\cal L\_0}$ generate a number ($U$) in range [0,1] from a uniform distribution

8) If $U$ is smaller than the difference between the two likelihoods (${\cal L\_1}$ - ${\cal L\_0}$), accept $a\_1$ as the new $a\_0$, append it at the end of $C$ and go to step 2.

9) If $U$ is bigger than the difference between the two likelihoods (${\cal L\_1}$ - ${\cal L\_0}$), append the $a\_0$ at the end of $C$, keep using the same $a\_0$, go to step 2.

10) End of Repeat.

11) Remove some elements from the start of $C$ (burn-in phase).

12) Now take the average of the values in $C$. This average is the estimated $a$.

Now I have some questions regarding the above steps:

- How do I construct the likelihood function for $f(x)=ax$ but also for any arbitrary function?
- Is this a correct implementation of the Metropolis-Hastings algorithm?
- How can the selection of the random number generation method at Step 7 change the results?
- How is this algorithm going to change if I have multiple model parameters? For example, if I had the model $f(x)=ax+b$.

Notes/Credits: The main structure of the algorithm described above is based on the code from an MPIA Python Workshop.
There seem to be some misconceptions about what the Metropolis-Hastings (MH) algorithm is in your description of the algorithm.

First of all, one has to understand that MH is a sampling algorithm. As stated in [wikipedia](http://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm)

> 
> In statistics and in statistical physics, the Metropolis–Hastings algorithm is a Markov chain Monte Carlo (MCMC) method for obtaining a sequence of random samples from a probability distribution for which direct sampling is difficult.
> 
> 

In order to implement the MH algorithm you need a *proposal density* or *jumping distribution* $Q(\cdot\vert\cdot)$, from which it is easy to sample. If you want to sample from a distribution $f(\cdot)$, the MH algorithm can be implemented as follows:

1. Pick an initial random state $x\_0$.
2. Generate a candidate $x^{\star}$ from $Q(\cdot\vert x\_0)$.
3. Calculate the ratio $\alpha=f(x^{\star})/f(x\_0)$.
4. Accept $x^{\star}$ as a realisation of $f$ with probability $\alpha$.
5. Take $x^{\star}$ as the new current state if it was accepted (otherwise keep $x\_0$) and continue sampling until you get the desired sample size.

Once you get the sample you still need to *burn* it and *thin* it: given that the sampler works asymptotically, you need to remove the first $N$ samples (burn-in), and given that the samples are dependent you need to subsample every $k$ iterations (thinning).

An example in R can be found in the following link:

`http://www.mas.ncl.ac.uk/~ndjw1/teaching/sim/metrop/metrop.html`

This method is largely employed in Bayesian statistics for sampling from the posterior distribution of the model parameters.

The example that you are using seems unclear to me given that $f(x)=ax$ is not a density unless you restrict $x$ on a bounded set. My impression is that you are interested in fitting a straight line to a set of points, for which I would recommend checking the use of the Metropolis-Hastings algorithm in the context of linear regression. The following link presents some ideas on how MH can be used in this context (Example 6.8):

[Robert & Casella (2010), *Introducing Monte Carlo Methods with R*, Ch. 6, "Metropolis–Hastings Algorithms"](http://www.springer.com/cda/content/document/cda_downloaddocument/9781441915757-c1.pdf)

There are also lots of questions, with pointers to interesting references, on this site discussing the meaning of the likelihood function.

Another pointer of possible interest is the R package `mcmc`, which implements the MH algorithm with Gaussian proposals in the command `metrop()`.
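For concreteness, here is a minimal R sketch of those steps for a toy target (a standard normal density, with an arbitrary Gaussian proposal; it is only an illustration, not the linear-regression case from the question):

```r
mh <- function(n, sigma = 1) {
  x <- numeric(n)
  x[1] <- 0                                          # arbitrary starting state
  for (i in 2:n) {
    cand  <- rnorm(1, mean = x[i - 1], sd = sigma)   # candidate from Q(.|x)
    alpha <- dnorm(cand) / dnorm(x[i - 1])           # ratio f(cand)/f(x)
    x[i]  <- if (runif(1) < alpha) cand else x[i - 1]
  }
  x
}

samples <- mh(10000)
# burn-in and thinning, e.g. drop the first 1000 draws and keep every 10th:
kept <- samples[-(1:1000)][seq(1, 9000, by = 10)]
```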
How can I skip some setup for specific Rspec tags? Rspec makes it easy to configure setup based on the **presence** of test tags. For example, if some tests need a parallel universe to be created (assuming you have code to do that): ``` # some_spec.rb describe "in a parallel universe", alter_spacetime: true do # whatever end # spec_helper.rb RSpec.configure do |config| config.before(:each, :alter_spacetime) do |example| # fancy magic here end end ``` But I want to do the opposite: "before each test, **unless you see this tag**, do the following..." How can I skip a setup step in `spec_helper` based on the presence of a tag on some tests?
At first, you would expect something like ``` RSpec.configure do |config| config.before(:each, alter_spacetime: false) do |example| # fancy magic here end end ``` to work that way, but it doesn't. But you have access to `example`, which is an [Example](http://www.rubydoc.info/gems/rspec-core/RSpec/Core/Example) instance and has the `#metadata` method, which returns [Metadata](http://www.rubydoc.info/gems/rspec-core/RSpec/Core/Metadata) object. You can check the value of the flag with that, and a flag on the specific example will override one on the containing `describe` block. ``` config.before(:each) do |example| # Note that we're not using a block param to get `example` unless example.metadata[:alter_spacetime] == false # fancy magic here end end ```
Possible to install Windows 8.1/10 32 bit while in UEFI mode? I am currently trying to have the laptop (HP Stream 11 - 2015 model with Intel Celeron N3050 processor) detect a Windows 8.1 32 bit UEFI install on my flash drive. I used RUFUS to setup the flash drive and used the UEFI GPT partition setup. When I insert the flash drive into my laptop, it somehow does not detect the flash drive. Now when I put a 64 bit Windows 8.1 install onto the flash drive, the laptop is able to detect it in the bootable device menu and is able to boot into the Windows installation with no issue. The reason why I want to install Windows 32-bit is because the laptop is limited to only 2GB of RAM and cannot be upgraded as it is soldered to the motherboard. Now when I looked at the EFI Boot folder for the 32 bit Windows installation usb that I made, I noticed it has bootia32.efi and when I experimented by putting a bootx64.efi instead, it was able to detect the USB before getting to an error screen while trying to load the 32-bit Windows installation. I am wondering if the EFI system on this HP Stream 11 (with 2GB RAM) has a x64 bit EFI system hence why it cannot even detect a 32 bit UEFI USB or maybe I'm doing something wrong here. If that is the case, I am wondering if there are any workarounds or will I have to disable UEFI and use BIOS instead to install Windows 32 bit (unless there is a good reason to install a 64 bit Windows on a computer with 2GB of RAM?) Thank you.
An EFI can run programs, including boot loaders, only for the same architecture for which the EFI is compiled. On an x86-64/AMD64/X64 system, this means that if the EFI is 64-bit in nature, it can launch only 64-bit boot loaders, and if the EFI is 32-bit in nature, it can launch only 32-bit boot loaders. The Windows boot loader, in turn, can launch only a like-bit-depth kernel (AFAIK), and so on down the chain. Thus, if you have a 64-bit EFI, you can launch only a 64-bit Windows, AFAIK. *In theory,* you could work around this limitation with an appropriate boot loader. GRUB can launch a Linux kernel across the bit-depth boundaries, for instance. So can Apple's macOS boot loader (to launch a 32-bit kernel on a 64-bit EFI; I don't think it works the other way around). *In practice,* AFAIK this isn't possible with Windows. By enabling the Compatibility Support Module (CSM), you can boot either a 32-bit or 64-bit OS, but only in BIOS/CSM/legacy mode. This has some drawbacks, as outlined in my answer to [this question](https://superuser.com/a/642433/304457) (referenced earlier in athosbr99's comment). If you want to install your 32-bit Windows on the computer's only hard disk, you'd pretty much have to go all-in on this. (There are ways to mix boot modes on a single disk, but there are so many caveats that I don't want to confuse the issue by describing them.) If you wanted to dual-boot with another OS, particularly on another disk (say, 64-bit Windows internally and 32-bit Windows on an external disk), you could install the 64-bit OS(es) in EFI mode and the 32-bit OS(es) in BIOS mode, then use the computer's built-in boot manager or my [rEFInd boot manager](http://www.rodsbooks.com/refind/) to manage the boot process. This would be a bit of a pain to set up, though, and you should research the issues thoroughly before attempting such a cross-mode install. Start with [this page of mine,](http://www.rodsbooks.com/efi-bootloaders/csm-good-bad-ugly.html) which describes the CSM and its pitfalls. (It's written for Linux users attempting a dual-boot with Windows, but the background information is relevant.) I then recommend reading the rEFInd documentation thoroughly, and paying careful attention to the description of the `scanfor` option in `refind.conf`. Overall, then, your best bet is to either stick with a 64-bit boot in EFI mode or wipe everything, turn on the CSM and install the 32-bit Windows in BIOS mode. I don't know if Windows' 32- vs. 64-bit performance is sufficiently different on a system with 2 GiB of RAM to justify the switch. Note that most manufacturers install a bunch of bloatware, and eliminating that is likely to improve performance quite a bit, so you might want to look into that before installing a 32-bit Windows.
Returning a private variable in JavaScript I don't know why `console.log(Set.current_index)` shows `0` instead of `3`. ``` var Set = (function() { var set = []; var index = 0; function contains(set, e) { for (var i = 0; i < set.length; i++) { if (set[i] === e) { return true; } } return false; } var add = function(e) { if (!contains(set, e)) { set[index++] = e; } } var show = function() { for (var i = 0; i < set.length; i++) { console.log(set[i]); } } return { add: add, show: show, current_index: index }; })();​ Set.add(20); Set.add(30); Set.add(40); Set.show(); console.log(Set.current_index); ```
As written `current_index` just gets the *initial* value of `index` - it doesn't mirror any changes to that value because that variable is of *primitive* type. If you have a 'reference type' (i.e. an object or array) then changes to its contents become visible in any other variable that references the same object. That doesn't happen with primitive types, they're copied "by value" into the new variables, and changes to the original variable don't affect the copy. You need to make `current_index` into a *function* that returns the current value of `index`, or write it as a `getter` which allows you to treat `.index` as a read-only property by invisibly calling a function to return the current value. For an example of the latter method (which requires ES5, or shims to replicate the functionality) see <http://jsfiddle.net/alnitak/WAwUg/>, which replaces your current `return` block with this: ``` var interface = { add: add, show: show }; Object.defineProperty(interface, 'index', { get: function() { return index; }, enumerable: true }); return interface; ```
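For completeness, the simpler function-based alternative mentioned above would look roughly like this; it is a fragment meant to replace the `return` block in the question's code, not the fiddle's version:

```
return {
    add: add,
    show: show,
    current_index: function() {
        return index; // reads the live value each time it is called
    }
};

// usage: Set.current_index() instead of Set.current_index
```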
How do F# types transfer over to C#? If in F# I have methods like: ``` drawBox drawSphere paintImage ``` Would they be transferred to C#, exactly the same? If that's the case, then wouldn't it compromise the naming conventions in C#, where methods are supposed to be PascalCase? Or should I make them PascalCase in F# as well to remedy this?
You should follow the [F# Component Design Guidelines](http://research.microsoft.com/en-us/um/cambridge/projects/fsharp/manual/fsharp-component-design-guidelines.pdf). The notable parts are: - Do use the .NET naming and capitalization conventions for object-oriented code, including F#-facing libraries. - Do use either PascalCase or camelCase for public functions and values in F# modules. camelCase is generally used for public functions which are designed to be used unqualified (e.g. `invalidArg`), and for the "standard collection functions" (e.g. `List.map`). In both these cases, the function names act much like keywords in the language. Another way to slice it: - *members* and *types* should always be PascalCase, just like C# - let-bound entities in modules can be camelCase, but this stuff is typically not stuff you'd expose publicly out of a library that is intended to be consumed by C#
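A small illustration of how the two conventions sit together (the names here are made up, not taken from the question):

```
// Module-level functions meant to be used unqualified or in pipelines: camelCase
module Drawing =
    let drawBox (width: int) (height: int) =
        printfn "box %dx%d" width height

// Types and members that C# code will consume: PascalCase, as in the rest of .NET
type Painter() =
    member this.PaintImage (path: string) =
        printfn "painting %s" path
```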
Why does ls -l output a different size from ls -s? I can't figure out why I'm getting the following results: `ls -l` tells me the size of a given file (HISTORY) is "581944": ``` $ ls -l HISTORY -rw-rw-r-- 1 waldyrious waldyrious 581944 Feb 22 10:59 HISTORY ``` `ls -s` says it is "572": ``` $ ls -s HISTORY 572 HISTORY ``` I obviously need to make the values use a comparable scale. So first I confirm that using `--block-size 1` in `ls -l` gives me the same result as before: ``` $ ls -l --block-size 1 HISTORY -rw-rw-r-- 1 waldyrious waldyrious 581944 Feb 22 10:59 HISTORY ``` Then I do the same to `ls -s` to get a value in the same scale: ``` $ ls -s --block-size 1 HISTORY 585728 HISTORY ``` Different results! **581944 ≠ 585728**. I tried generating comparable values the other way around, using `-k`, but I get: ``` $ ls -lk HISTORY -rw-rw-r-- 1 waldyrious waldyrious 569 Feb 22 10:59 HISTORY $ ls -sk HISTORY 572 HISTORY ``` Again, different results, **569 ≠ 572**. I tried specifying `--si` to make sure both options were using the same scale, to no avail: ``` $ ls -lk --si HISTORY -rw-rw-r-- 1 waldyrious waldyrious 582k Feb 22 10:59 HISTORY $ ls -sk --si HISTORY 586k HISTORY ``` ...again, different values: **582k ≠ 586k**. I tried searching the web but the only thing I could find that seemed relevant was [this](http://www.seas.upenn.edu/cets/answers/quota-commands.html): > > Some files have "holes" in them, so that the usage listed by `ls -s` (...) is less than the file size listed by `ls -l`." > > > (note that in my results the opposite happens: `ls -s` returns sizes bigger than `ls -l`, not smaller.) Meanwhile, [this page](http://linux.die.net/man/5/srec_binary) says that > > there is no elegant way to detect Unix file holes. > > > So, how can I deal with this discrepancy? Which of these values can be considered correct? Could this possibly be a bug in `ls`?
**Short answer:** - `ls -l` gives the size of the file (= the amount of data it contains) - `ls -s --block-size 1` gives the size of the file on the file system **Let's create two files:** A *sparse file* of 128 bytes length (A sparse file is a file containing empty blocks, see [Sparse File](https://en.wikipedia.org/wiki/Sparse_file)): ``` # truncate -s 128 f_zeroes.img # hexdump -vC f_zeroes.img 00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00000010 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00000020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00000030 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00000040 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00000050 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00000060 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00000070 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00000080 ``` Another file with random data, also of 128 bytes size: ``` # dd if=/dev/urandom of=f_random.img bs=1 count=128 # hexdump -vC f_random.img 00000000 bc 82 9c 40 04 e3 0c 23 e6 76 79 2f 95 d4 0e 45 |...@...#.vy/...E| 00000010 19 c6 53 fc 65 83 f8 58 0a f7 0e 8f d6 d6 f8 b5 |..S.e..X........| 00000020 6c cf 1b 60 cb ef 06 c6 d0 99 c6 16 3f d3 95 02 |l..`........?...| 00000030 85 1e b7 80 27 93 27 92 d0 52 e8 72 54 25 4d 90 |....'.'..R.rT%M.| 00000040 11 59 a2 d9 0f 79 aa 23 2d 44 3d dd 8d 17 d9 36 |.Y...y.#-D=....6| 00000050 f5 ae 07 a8 c1 b4 cb e1 49 9e bc 62 1b 4f 17 53 |........I..b.O.S| 00000060 95 13 5a 1c 2a 7e 55 b9 69 a5 50 06 98 e7 71 83 |..Z.*~U.i.P...q.| 00000070 5a d0 82 ee 0b b3 91 82 ca 1d d0 ec 24 43 10 5d |Z...........$C.]| 00000080 ``` So, as you can see in the hex representation, both files have *the same amount of data*, although the content is quite different. **Now, let us look at the directory:** ``` # ls -ls --block-size 1 f_* 1024 -rw-r--r-- 1 user user 128 Mar 18 15:34 f_random.img 0 -rw-r--r-- 1 user user 128 Mar 18 15:32 f_zeroes.img ^ ^ | | Amount which the Actual file size files takes on the fs ``` The first value is given by the `-s --block-size 1` option, it is the *amount of space used by the file on the file system*. As you can see, the sparse file takes up zero space, since the file system (`ext3` in this case) was smart enough to recognize that it only contains zeroes. Also, the file with random data takes up 1024 bytes on the disk! The value depends on how the underlying file system treats files (block size, sparse file capability, ...). In the sixth column is the size of the file if you would read it - it is *the amount of data the file contains* and it's **128 bytes for both files!**
Javascript delete a function How can I delete a function i.e ``` test=true; delete test; => true function test() {..} delete test() => false ``` Delete usually works for variables but it doesn't work for functions.
### No, you can not `delete` the result of a [function declaration](http://es5.github.io/#x13). This is a part of the [language specification](http://es5.github.io/). If you check out the description of the [delete operator](http://es5.github.io/#x8.12.7) in JavaScript: > > If desc.[[Configurable]] is true, then > > > - Remove the own property with name P from O. > - Return true. > > > If you go to the browser and run the following in the console: ``` >function f(){} >Object.getOwnPropertyDescriptor(window,"f") ``` You would get: `Object {value: function, writable: true, enumerable: true, configurable: false}` ### What can be done: You can however, assign the result to another value that is not a function, assuming that is your last reference to that function, garbage collection will occur and it will get de-allocated. For all purposes other than `getOwnPropertyNames` `hasOwnProperty` and such, something like `f = undefined` should work. For those cases, you can use a functionExpression instead and assign that to a variable instead. However, for those purposes like `hasOwnProperty` it will fail, try it in the console! ``` function f(){} f = undefined; window.hasOwnProperty("f");//true ``` ### Some more notes on delete - When your modern browser sees a `delete` statement, [that forces it to fall to hash map mode on objects](http://coding.smashingmagazine.com/2012/11/05/writing-fast-memory-efficient-javascript/), so `delete` can be [very slow (perf)](http://jsperf.com/delete-vs-setting-to-undefined). - In a managed language with a garbage collector, using `delete` might prove problematic. You don't have to handle your memory, the language does that for you. - In the case you do want to use objects like a map, that's a valid use case and [it's on the way](https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Global_Objects/Map) :)
LuaSocket socket/core.dll required location? When I use

```
local socket = require("socket.core")
```

it works fine; the DLL is located at "dir/socket/core.dll". But when I move the DLL to, say, "dir/folder/core.dll" and use

```
local socket = require("folder.core")
```

it reports that the file was found, but that it could not find the specific module in folder.core. How do I use LuaSocket outside of its socket.core requirements? Thanks!
If you want to `require("socket.core")`, the shared library (dll) has to have an exported function called luaopen\_socket\_core (which the LuaSocket library has). Thus, it always needs to be called as `require("socket.core")`. If you want to move the DLL into some other folder, you have to modify [`package.cpath`](http://www.lua.org/manual/5.1/manual.html#pdf-package.cpath), which contains the file patterns which will be checked. Let's say that you want to move the LuaSocket binary to `folder`. You have to place the binary in `folder/socket/core.dll` and modify `package.cpath` prior to calling `require`: ``` package.cpath = package.cpath .. ';folder/?.dll' local socket = require('socket.core') ``` `?` represents the name passed to `require` translated to file paths: `. -> /`, i.e. `socket.core -> socket/core`.
How can I double every character in a file excepting newlines? How can I double every character in a file except newlines? It should look something like this:

File content before:

```
echo hello world
```

File content after:

```
eecchhoo hheelllloo wwoorrlldd
```
With sed: ``` sed 's/./&&/g' yourfile ``` Ex. ``` $ echo 'echo hello world' | sed 's/./&&/g' eecchhoo hheelllloo wwoorrlldd ``` --- Alternatively, with Perl's string multiplication operator: ``` $ echo 'echo hello world' | perl -lne 'print map { $_ x 2 } split //' eecchhoo hheelllloo wwoorrlldd ``` It's possible to do string concatenation in awk of course, but AFAIK not without an explicit loop over characters: ``` $ echo 'echo hello world' | awk 'BEGIN{OFS=FS=""} {for(i=1;i<=NF;i++) $i = $i $i}1' eecchhoo hheelllloo wwoorrlldd ```
Fixed Table Cell Width A lot of people still use tables to lay out controls, data, etc. - one example of this is the popular jqGrid. However, there is some magic happening that I can't seem to fathom (it's tables, for crying out loud, how much magic could there possibly be?). How is it possible to set a table's column width and have it obeyed like jqGrid does!? If I try to replicate this, even if I set every `<td style='width: 20px'>`, as soon as the content of one of those cells is greater than 20px, the cell expands! Any ideas or insights?
You could try using the `<col>` tag to manage table styling for all rows, but you will need to set the `table-layout:fixed` style on the `<table>` or the table's CSS class, and set the `overflow` style for the cells.

<http://www.w3schools.com/TAGS/tag_col.asp>

```
<table class="fixed">
  <col width="20px" />
  <col width="30px" />
  <col width="40px" />
  <tr>
    <td>text</td>
    <td>text</td>
    <td>text</td>
  </tr>
</table>
```

and this would be your CSS:

```
table.fixed { table-layout:fixed; }
table.fixed td { overflow: hidden; }
```
Low power, quiet and semi affordable domain controller for distributed environment use I work in a distributed environment, and I would like to set up a domain controller. I don't really want a full-blown desktop running full time just to handle requests from 1 client PC in my home office. I was wondering whether there are any specialized boxes which are specifically used as Windows-based domain controllers, something like those small modern NAS servers? I need real domain functionality for testing, so it can't be lightweight or watered down. It might sound a bit off the wall, but I would just like to have a device like a router that I could RDC into when required, but it would only be a Windows 2008 R2 domain controller, and it should support upgrading to future versions. Does such a solution exist? I don't want my study to be turned into a server room. Thanks
You seem like a good candidate for an HP ProLiant Microserver.... See: <http://h10010.www1.hp.com/wwpc/us/en/sm/WF31a/15351-15351-4237916-4237918-4237917-4248009.html> and <http://h30423.www3.hp.com/?fr_story=c080b0e57b3504e624b5e1e2dbc8122de745469c&rf=sitemap> It's pretty tiny, has four hot-swap bays and is more than robust enough for your application. A good number of people have used them as Solaris ZFS NAS appliances. There's also a [thread for owners of the device detailing their experiences here](http://www.hardforum.com/showthread.php?t=1555868). ![enter image description here](https://i.stack.imgur.com/SHkCZ.jpg)
faceted piechart with ggplot2 I've done a faceted piechart with ggplot2 like this:

```
qplot(x=factor(1), data=mtcars, fill=factor(cyl)) + geom_bar(width=1) + coord_polar(theta="y") + facet_grid(~gear)
```

![enter image description here](https://i.stack.imgur.com/sv58f.jpg)

but as all the piecharts share the y axis scale, some of them don't cover the full circle. I've tried with `facet_grid(~gear, scales="free")` but it doesn't work. How could I get full circles for all the piecharts?
I think you just want `position = 'fill'`: ``` ggplot(mtcars,aes(x = factor(1),fill=factor(cyl))) + facet_wrap(~gear) + geom_bar(width = 1,position = "fill") + coord_polar(theta="y") ``` For future reference, from the Details section of `geom_bar`: > > By default, multiple x's occuring in the same place will be stacked a > top one another by position\_stack. If you want them to be dodged from > side-to-side, see position\_dodge. Finally, position\_fill shows > relative propotions at each x by stacking the bars and then stretching > or squashing to the same height. > > >
Coarse-grained vs fine-grained What is the difference between coarse-grained and fine-grained? I have searched these terms on Google, but I couldn't find what they mean.
From [Wikipedia (granularity)](http://en.wikipedia.org/wiki/Granularity): > > Granularity is the extent to which a > system is broken down into small > parts, either the system itself or its > description or observation. It is the > extent to which a larger entity is > subdivided. For example, a yard broken > into inches has finer granularity than > a yard broken into feet. > > > Coarse-grained systems consist of > fewer, larger components than > fine-grained systems; a coarse-grained > description of a system regards large > subcomponents while a fine-grained > description regards smaller components > of which the larger ones are composed. > > >
Reading UTF-8 XML and writing it to a file with Python I'm trying to parse UTF-8 XML file and save some parts of it to another file. Problem is, that this is my first Python script ever and I'm totally confused about the character encoding problems I'm finding. My script fails immediately when it tries to write non-ascii character to a file, but it can print it to command prompt (at least in some level) Here's the XML (from the parts that matter at least, it's a \*.resx file which contains UI strings) ``` <?xml version="1.0" encoding="utf-8"?> <root> <resheader name="foo"> <value>bar</value> </resheader> <data name="lorem" xml:space="preserve"> <value>ipsum öä</value> </data> </root> ``` And here's my python script ``` from xml.dom.minidom import parse names = [] values = [] def getStrings(path): dom = parse(path) data = dom.getElementsByTagName("data") for i in range(len(data)): name = data[i].getAttribute("name") names.append(name) value = data[i].getElementsByTagName("value") values.append(value[0].firstChild.nodeValue.encode("utf-8")) def writeToFile(): with open("uiStrings-fi.py", "w") as f: for i in range(len(names)): line = names[i] + '="'+ values[i] + '"' #varName='varValue' f.write(line) f.write("\n") getStrings("ResourceFile.fi-FI.resx") writeToFile() ``` And here's the traceback: ``` Traceback (most recent call last): File "GenerateLanguageFiles.py", line 24, in writeToFile() File "GenerateLanguageFiles.py", line 19, in writeToFile line = names[i] + '="'+ values[i] + '"' #varName='varValue' UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 2: ordinal not in ran ge(128) ``` How should I fix my script so it would read and write UTF-8 characters properly? The files I'm trying to generate would be used in test automation with Robots Framework.
You'll need to remove the call to `encode()` - that is, replace `nodeValue.encode("utf-8")` with `nodeValue` - and then change the call to `open()` to ``` with open("uiStrings-fi.py", "w", "utf-8") as f: ``` This uses a "Unicode-aware" version of `open()` which you will need to import from the `codecs` module, so also add ``` from codecs import open ``` to the top of the file. The issue is that when you were calling `nodeValue.encode("utf-8")`, you were converting a Unicode string (Python's internal representation that can store all Unicode characters) into a regular string (which can only store single-byte characters 0-255). Later on, when you construct the line to write to the output file, `names[i]` is still a Unicode string but `values[i]` is a regular string. Python tries to convert the regular string to Unicode, which is the more general type, but because you don't specify an explicit conversion, it uses the ASCII codec, which is the default, and ASCII can't handle characters with byte values greater than 127. Unfortunately, several of those do occur in the string `values[i]` because the UTF-8 encoding uses those upper-range bytes frequently. So Python complains that it sees a character it can't handle. The solution, as I said above, is to defer the conversion from Unicode to bytes until the last possible moment, and you do that by using the Unicode-aware version of open (which will handle the encoding for you). Now that I think about it, instead of what I said above, an alternate solution would be to replace `names[i]` with `names[i].encode("utf-8")`. That way, you convert `names[i]` into a regular string as well, and Python has no reason to try to convert `values[i]` back to Unicode. Although, one could make the argument that it's good practice to keep your strings as Unicode objects until you write them out to the file... if nothing else, I believe `unicode` becomes the default in Python 3.
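Putting those two changes together, the relevant parts of the script would end up looking something like this (a sketch of the question's script with only those edits applied):

```
from codecs import open            # Unicode-aware open()
from xml.dom.minidom import parse

names = []
values = []

def getStrings(path):
    dom = parse(path)
    for data in dom.getElementsByTagName("data"):
        names.append(data.getAttribute("name"))
        value = data.getElementsByTagName("value")
        values.append(value[0].firstChild.nodeValue)   # no .encode() here

def writeToFile():
    with open("uiStrings-fi.py", "w", "utf-8") as f:   # encode once, when writing
        for i in range(len(names)):
            f.write(names[i] + '="' + values[i] + '"\n')
```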
How to build apk in release mode using maven Hey I am having a hard time building and signing apk in release mode. I am able to create the apk using maven but when I try to upload it, Google playstore says that I need to build in release mode. here is my pom.xml: ``` <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.kannact4.gladstonesign</groupId> <artifactId>gladstonesign</artifactId> <version>1.0.0-SNAPSHOT</version> <packaging>apk</packaging> <name>Android Maven Example</name> <profiles> <profile> <id>release</id> <activation> <property> <name>performRelease</name> <value>true</value> </property> </activation> </profile> </profiles> <dependencies> <dependency> <groupId>com.google.android</groupId> <artifactId>android</artifactId> <version>2.2.1</version> <scope>provided</scope> </dependency> </dependencies> <build> <finalName>unaligned-gladstonesign</finalName> <sourceDirectory>src</sourceDirectory> <pluginManagement> <plugins> <plugin> <groupId>com.jayway.maven.plugins.android.generation2</groupId> <artifactId>android-maven-plugin</artifactId> <version>3.1.1</version> <extensions>true</extensions> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jarsigner-plugin</artifactId> <version>1.2</version> <configuration> <keystore>C:\Project\Eclipse workspace\apk_key</keystore> <alias>gs3key</alias> <storepass>****</storepass> <keypass>****</keypass> <verbose>true</verbose> <certs>true</certs> </configuration> </plugin> </plugins> </pluginManagement> <plugins> <plugin> <groupId>com.jayway.maven.plugins.android.generation2</groupId> <artifactId>android-maven-plugin</artifactId> <configuration> <sdk> <platform>17</platform> </sdk> </configuration> </plugin> </plugins> </build> </project> ``` Can anyone help me with how to release it in the build mode?
Your apk needs to be signed and zipaligned. So you should first 'bind' the `jarsigner` execution to the `package` phase and in an 2nd time configure `zipalign` in your android plugin. see <http://www.simpligility.com/2010/07/sign-zipalign-and-to-market-to-market-with-maven/> ``` <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jarsigner-plugin</artifactId> <executions> <execution> <id>signing</id> <goals> <goal>sign</goal> <goal>verify</goal> </goals> <phase>package</phase> <inherited>true</inherited> <configuration> <removeExistingSignatures>true</removeExistingSignatures> <archiveDirectory/> <includes> <include>${project.build.directory}/${project.artifactId}.apk</include> </includes> <keystore>${sign.keystore}</keystore> <alias>${sign.alias}</alias> <storepass>${sign.storepass}</storepass> <keypass>${sign.keypass}</keypass> <verbose>true</verbose> </configuration> </execution> </executions> </plugin> <plugin> <groupId>com.jayway.maven.plugins.android.generation2</groupId> <artifactId>maven-android-plugin</artifactId> <inherited>true</inherited> <configuration> <sign> <debug>false</debug> </sign> <zipalign> <verbose>true</verbose> <inputApk>${project.build.directory}/${project.artifactId}.apk</inputApk> <outputApk>${project.build.directory}/${project.artifactId}-signed-aligned.apk </outputApk> <skip>false</skip> </zipalign> </configuration> <executions> <execution> <id>alignApk</id> <phase>package</phase> <goals> <goal>zipalign</goal> </goals> </execution> </executions> </plugin> ```
What is the resolution of a scanned pdf file? 1. I scanned some documents using a Canon scanner, with resolution specified to be 300. I saved it into a pdf file Then I checked the resolution of the pdf file, by ImageMagick ``` identify -verbose my.pdf ``` In the output it returns, for every page it says ``` Resolution: 72x72 ``` which is not 300 which i specified with the scanner earlier. Why is it different? Note: The same problem if I saved it as a jpg file, and if I use a Xerox workCenter. 2. When I use Adobe Premium Pro to OCR my pdf file, it lets me specify which resolution that it will downsample the pdf file to from several options (72, 300, 600). If I don't want the OCRed pdf file to have less resolution than the original pdf file, how shall I find out the resolution of the pdf file, and which resolution option shall I choose for OCR in Adobe Premium Pro? Thanks.
PDF is a vector graphics document. It can be rendered at any resolution. An raster image embedded in a PDF document (as is typical of a basic scanned document) will typically have a resolution (how many pixels in an inch of the document when printed on paper). One such PDF document may have several of those images (will have if it's a multi-page document) possibly with different resolutions and orientations. That's not what ImageMagick's `identify -v` reports. That 72dpi is the resolution it uses to convert the PDF into a raster image so as to report pixel information on it. You'll notice it actually runs: `gs ... -r72x72 ...` for that. `pdfimages` can report all the raster images in a PDF document: ``` $ pdfimages -list scan.pdf page num type width height color comp bpc enc interp object ID --------------------------------------------------------------------- 1 0 image 1219 1707 rgb 3 8 jpeg no 8 0 $ qpdf --show-pages --with-images scan.pdf page 1: 3 0 R images: /Im0: 8 0 R, 1219 x 1707 content: 4 0 R ``` That gives you the size in pixel, but not the size (in `mm` or `inch`) of the box that image would be printed in, so you won't get the resolution from that. From the object ID (`8 0` above), you can however find out the dimension of the container: ``` $ mutool show scan.pdf grep | grep 'Im0 8 0' scan.pdf:3: <</Contents 4 0 R/CropBox[0 0 595 842]/MediaBox[0 0 595 842]/Parent 2 0 R/Resources<</XObject<</Im0 8 0 R>>/ProcSet 6 0 R>>/Thumb 11 0 R/Type/Page>> ``` So, here, we know the image is rendered on 595x842 pt (a point being 1/72 inch). So we can derive the x and y resolution: ``` $ echo "$((1219 * 72 / 595))dpi" "$((1707 * 72 / 842))dpi" 147dpi 145dpi ``` Note that when embedded as JPG, the images may have an EXIF header that specifies the resolution. You can extract the images to find that out: ``` $ exiftool -XResolution -YResolution <(qpdf --show-object=8 --raw-stream-data scan.pdf) X Resolution : 72 Y Resolution : 72 ``` Here they don't match though. You can also extract the images and with `pdfimages` and pass that instead to your OCR so that it doesn't have to decide on a resolution before converting itself to a raster image.
How to get array of values as plusargs? How can I get an array of values as arguments? I need to get an array of commands of undefined size from the command line. How can I get these arguments into an array or queue? E.g. `+CMDS=READ,WRITE,READ_N_WRITE` should be read into an array.
`$value$plusargs` does not support arrays, it does support strings. See [IEEE Std 1800-2012](https://standards.ieee.org/findstds/standard/1800-2012.html) § 21.6 "Command line input". Parsing a string in SystemVerilog is only a little cumbersome but still very doable, especially when the separator is represented as a single character. Here is a generic string parser using a SystemVerilog queue for recoding the indexes and string method `substr` defined in [IEEE Std 1800-2012](https://standards.ieee.org/findstds/standard/1800-2012.html) § 7.10 "Queue" and § 6.16.8 "Substr" ``` function void parse(output string out [], input byte separator, input string in); int index [$]; // queue foreach(in[i]) begin // find commas if (in[i]==separator) begin index.push_back(i-1); // index before comma index.push_back(i+1); // index after comma end end index.push_front(0); // first index index.push_back(in.len()-1); // last index out = new[index.size()/2]; foreach (out[i]) begin out[i] = in.substr(index[2*i],index[2*i+1]); /*$display("cmd[%0d] == in.substr(%0d,%0d) == \"%s\"", i, index[2*i],index[2*i+1], out[i]); */ end endfunction : parse ``` Then combine it with `$value$plusargs` to parse the input: ``` string cmd[]; string plusarg_string; if ( $value$plusargs("CMDS=%s",plusarg_string) ) begin parse(cmd, ",", plusarg_string); end foreach(cmd[i]) $display("CMD[%0d]:'%s'",i,cmd[i]); ``` Full working example: <http://www.edaplayground.com/s/6/570>
Shell: Check if a docker container exists How do I check in a bash script whether this

```
sudo docker images -q nginx
```

gives me a result string, which would mean that the container image exists? This is what I have so far:

```
sudo docker images -q nginx

if [ $? != '' ]
then
 echo "existing"
else
 echo "missing
```
`$?` isn't a string but the exit status of `sudo` (in this case). To use that properly, compare it against zero with `-gt`, or use `if (( $? ))` (in a shell like `bash` or `ksh93` that does arithmetic evaluation with `(( ... ))`). If `sudo docker images -q nginx` gives you a string if the container image exists and nothing if it doesn't, then you may store that in a variable and see if it's empty or not: ``` result=$( sudo docker images -q nginx ) if [[ -n "$result" ]]; then echo 'Container image exists' else echo 'No such container image' fi ``` However, using `sudo` inside a script is awkward since the tool often requires interactive prompting for passwords, and it's better to use `sudo` instead to run the script itself (and then use `sudo` inside the script only if you need to assume some other non-root user's identity). `docker inspect` is another command for checking the information about containers: ``` docker inspect -f '{{.Config.Image}}' nginx ``` This would give the container image hash for the `nginx` container. It would also return a proper exit status that you can use to determine whether the container exists at all: ``` if docker inspect -f '{{.Config.Image}}' nginx >/dev/null 2>&1 then echo 'Container image exists' else echo 'Container does not exist' fi ``` Or, you may pick out the output string and see whether it's empty or not: ``` result=$( docker inspect -f '{{.Config.Image}}' nginx 2>/dev/null ) if [[ -n "$result" ]]; then echo 'Container image exists' else echo 'No such container image' fi ``` I'm discarding the standard error stream by redirecting it to `/dev/null` since it will complain if the container image does not exist. You may also use `docker inspect` to figure out if a container is running or not by inspecting `{{.State.Running}}`: ``` result=$( docker inspect -f '{{.State.Running}}' -q nginx ) if [[ $result == "true" ]]; then echo 'Container is running' else echo 'Container is not running' fi ```
Swift Dictionary Multiple Key Value Pairs - Iteration In Swift I want to make an array of dictionaries (with multiple key value pairs) and then iterate over each element Below is the expected output of a possible dictionary. Not sure how to declare and intitialize it (somewhat similar to array of hashes in Ruby) ``` dictionary = [{id: 1, name: "Apple", category: "Fruit"}, {id: 2, name: "Bee", category: "Insect"}] ``` I know how to make an array of dictionary with one key value pair. For example: ``` var airports: [String: String] = ["YYZ": "Toronto Pearson", "DUB": "Dublin"] ```
to declare an array of dictionary, use this: ``` var arrayOfDictionary: [[String : AnyObject]] = [["id" :1, "name": "Apple", "category" : "Fruit"],["id" :2, "name": "Microsoft", "category" : "Juice"]] ``` I see that in your dictionary, you mix number with string, so it's better use AnyObject instead of String for data type in dictionary. If after this code, you do not have to modify the content of this array, declare it as 'let', otherwise, use 'var' Update: to initialize within a loop: ``` //create empty array var emptyArrayOfDictionary = [[String : AnyObject]]() for x in 2...3 { //... mean the loop includes last value => x = 2,3 //add new dictionary for each loop emptyArrayOfDictionary.append(["number" : x , "square" : x*x ]) } //your new array must contain: [["number": 2, "square": 4], ["number": 3, "square": 9]] ```
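To iterate over the elements afterwards (the other half of the question), something along these lines should work; the keys used are the ones from the example above:

```
for dict in arrayOfDictionary {
    if let name = dict["name"] as? String {
        print("name: \(name)")   // handle each dictionary in the array
    }
}
```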
Execute stored procedure and return resultset I'm a complete VBScript newbie and I am trying to execute a stored procedure and read the resultset. I have tried numerous different approaches using articles online but nothing works and I'm stumped. The database is SQL Server 2008 R2, the application is an off-the-shelf ERP system but I'm able to add my own code to it. I can execute code by using: ``` connection.execute"insert into blah blah blah" ``` And I can read result set by using: ``` Set objRS = CreateObject("ADODB.Recordset") objRS.Open "select a, b, c FROM blahblah", Connection, adOpenStatic, adLockBatchOptimistic, adCmdText If objRS.EOF = False Then a = objRS.Fields("a").Value b = objRS.Fields("b").Value c = objRS.Fields("c").Value End If objRS.Close ``` The stored procedure in question is in effect a select statement e.g.: ``` create procedure [dbname].[dbo].[sptestproc] as @Option1 Varchar(10) = NULL, @Option2 Varchar(10) = NULL AS BEGIN select first, second from table1 where a = @option1 and b = @toption2 End ``` My code so far: ``` Dim sql sql = "EXEC [dbname].[dbo].[sptestproc] '" & Opt1 & "','" & Opt2 & "'" Set RS = CreateObject("ADODB.Recordset") RS.Open sql, Connection, adOpenStatic, adLockBatchOptimistic, adCmdText Do While Not RS.EOF Call App.MessageBox("first",vbInformation,"Data updated") Call App.MessageBox("second",vbInformation,"Data updated") RS.MoveNext Loop ``` But I cannot for the life of me get a procedure to execute and read the results. Can anyone help? Thanks
`adCmdText` would be for SQL query if you want to execute a stored procedure then you have to use `adCmdStoredProc` (value `4` instead) EDIT: ``` 'Set the connection '............... 'Set the command DIM cmd SET cmd = Server.CreateObject("ADODB.Command") SET cmd.ActiveConnection = Connection 'Set the record set DIM RS SET RS = Server.CreateObject("ADODB.recordset") 'Prepare the stored procedure cmd.CommandText = "[dbo].[sptestproc]" cmd.CommandType = 4 'adCmdStoredProc cmd.Parameters("@Option1 ") = Opt1 cmd.Parameters("@Option2 ") = Opt2 'Execute the stored procedure SET RS = cmd.Execute SET cmd = Nothing 'You can now access the record set if (not RS.EOF) THEN first = RS("first") second = RS("second") end if 'dispose your objects RS.Close SET RS = Nothing Connection.Close SET Connection = Nothing ```
How can a checkmark state be saved in core data? I have a list app where users hit the + button and enter in an item that they want to appear in the list and hit save. The table is saved with core data. The only problem is when the cell is taped I want a checkmark to be displayed. Ive enabled multiple selection with ``` UITableViewCell *thisCell = [tableView cellForRowAtIndexPath:indexPath]; if (thisCell.accessoryType == UITableViewCellAccessoryNone) { thisCell.accessoryType = UITableViewCellAccessoryCheckmark; } else { thisCell.accessoryType = UITableViewCellAccessoryNone; } [tableView deselectRowAtIndexPath:indexPath animated:NO]; ``` I would like the checkmarks to be persisted in the cell after the user exits. I have created an attribute in my entity called "checks" and gave it the type of boolean but I dont know how to make it where if you hit a row then a check appears and is persisted. Any help would be greatly appreciated. Thanks
This is how I do it. One notable point: CoreData does not store booleans, so any property labeled "boolean" is actually of type `NSNumber`. You've got to remember to convert back and forth when dealing with CoreData and boolean values. ``` - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { NSManagedObject *selectedObject = [self.fetchedResultsController objectAtIndexPath:indexPath]; if ([[selectedObject valueForKey:@"isDone"] boolValue]) { [selectedObject setValue:[NSNumber numberWithBool:NO] forKey:@"isDone"]; } else { [selectedObject setValue:[NSNumber numberWithBool:YES] forKey:@"isDone"]; } } ``` --- I have my `UITableViewController` set as the the delegate for the `NSFetchedResultsController`, so the changes I made to the managed objects in the query ^^^ will cause the following two methods to be run. ``` - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller { [self.tableView reloadData]; } - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *defaultCellIdentifier = @"Item"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:defaultCellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:defaultCellIdentifier] autorelease]; } NSManagedObject *item = [[self fetchedResultsController] objectAtIndexPath:indexPath]; cell.textLabel.text = [item valueForKey:@"name"]; if ([[item valueForKey:@"checks"] boolValue]) { cell.accessoryType = UITableViewCellAccessoryCheckmark; } else { cell.accessoryType = UITableViewCellAccessoryNone; } cell.selectionStyle = UITableViewCellSelectionStyleNone; return cell; } ``` Here's how everything ties together 1. User clicks on a row 2. tableView:didSelectRow... method changes the "isDone" property of the appropriate managed object. 3. the fetched results controller notices that a managed object has changed and calls the `controllerDidChangeContent` method on its delegate. 4. My `controllerDidChangeContent` method just reloads all the data in the table view 5. When the tableView is reloaded, my tableView:cellForRow... method checks the "isDone" property of the managed item to see if the cell should have a checkmark or not. And just so you don't get confused, I initially used a generic `NSMangagedObject` to store row state, which is why the first method I posted says, `[selectedObject valueForKey:@"isDone"]`. Later I switched to a subclassed managed object named `JKItem`, which is why the second set of methods is able to use `item.isDone` without generating a compiler warning.
Product of two independent Student distributions What is the product of two independent Student t distributions? In which case does this product result in another t distribution?
When $X$ and $Y$ are independent random variables with densities $f\_X$ and $f\_Y,$ the density of their product can be found with a change of variables as $$f\_{XY}(z) = \int\_{\mathbb R} f\_X(x) f\_Y(z/x)\,\frac{\mathrm{d}x}{|x|}.$$ Ignoring normalizing constants (we'll consider these at the end), for two Student t densities with $\nu$ and $\mu$ degrees of freedom this integrand is proportional to $$h(x,z) = \left(1 + \frac{x^2}{\mu}\right)^{-(\mu+1)/2}\, \left(1 + \frac{z^2}{x^2\nu}\right)^{-(\nu+1)/2}\,\frac{1}{|x|}.$$ **Let's find a lower bound for $f(z)$ when $z$ is small.** To do so, we may restrict the region of integration and we may replace the integrand by anything that never exceeds it. Let $z$ be positive but less than $1$ and consider the integration region where $x^2\nu$ ranges between $z^2$ and $1.$ To get an appreciation for what's going on, here (with $\mu=\nu=1$) are plots of $h(x,z)$ for $|z| = 1$ (blue), $1/2, 1/4,$ and $1/8$ (red). [![Figure](https://i.stack.imgur.com/3XhSD.png)](https://i.stack.imgur.com/3XhSD.png) You can see that as $|z|$ approaches $0,$ there's more and more area pushed into this region. That's no surprise: we would expect the largest area (which corresponds to the highest density of the product) to be at the center of the product distribution, which (by symmetry) must be $0.$ But how large does it get? Over the region $x^2/\nu \in[z^2, 1]$ the first factor of $h$ is smallest when $x$ is smallest and the second factor is smallest when $x$ is largest, whence throughout this region $$\begin{aligned} h(x,z) &\ge \left(1 + \frac{z^2}{\mu\nu}\right)^{-(\mu+1)/2}\, \left(1 + \frac{z^2}{1}\right)^{-(\nu+1)/2}\,\frac{1}{|x|} \\ &\ge \left(1 + \frac{1}{\mu\nu}\right)^{-(\mu+1)/2}\, \left(1 + 1\right)^{-(\nu+1)/2}\,\frac{1}{|x|}. \end{aligned}$$ The second inequality is a consequence of $z^2 \le 1.$ The factors before $1/|x|$ are constant (but nonzero), depending only on $\mu$ and $\nu,$ so again let's consider them later and ignore them now. As $x$ varies over just the positive part of this region it runs from $z/\sqrt{\nu}$ to $1/\sqrt{\nu},$ giving a lower bound proportional to $$\int\_{z/\sqrt{\nu}}^{1/\sqrt{\nu}} \frac{\mathrm{d}x}{|x|} = -\log z.$$ **As $z\to 0,$ this lower bound diverges.** Consequently, *no matter what the constants of proportionality are that we ignored,* $f\_{XY}(z)$ diverges at $0.$ Here, to illustrate, is a histogram from a simulation of ten million products (with $\nu=\mu=1/2$). Almost a million of those products are represented. The red curve is the negative logarithm. Clearly it approximates the density well near zero. [![Figure](https://i.stack.imgur.com/0LAiT.png)](https://i.stack.imgur.com/0LAiT.png) However, any Student t distribution with (say) $\kappa \gt 0$ degrees of freedom has a value proportional to $(1 + 0^2/\kappa)^{-(\kappa+1)/2} = 1$ at the origin, which is *finite.* Consequently, **the product of two independent Student t distributions is never (even remotely like) a Student t distribution.** --- The product density can be found analytically as a polynomial combination of Riemann hypergeometric functions. Since this product is never a Student t distribution, though, I did not see any point into providing further details.
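For reference, a simulation like the one behind the histogram above takes only a couple of lines of R; the degrees of freedom are the ones quoted in the text, and everything else (seed, cutoff, number of bins) is arbitrary:

```
set.seed(17)
nu <- mu <- 1/2
z <- rt(1e7, df = nu) * rt(1e7, df = mu)     # products of independent Student t draws
hist(z[abs(z) <= 1], breaks = 200, freq = FALSE,
     main = "Product of two independent t variates", xlab = "z")
# the spike at 0 reflects the -log|z| divergence derived above
```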
What does 9.2R mean in the context of RAM memories? I see engraved in memory slots values such as **1.5V** and **9.2R**. I understand that **1.5V** refers to voltage, but what does **9.2R** refer to? [![Photo of a memory slot displaying those values](https://i.stack.imgur.com/rtvRb.jpg)](https://i.stack.imgur.com/rtvRb.jpg)
Given that it is a double riser slot it is most likely that the 9.2 is denoting the height at which the memory stick is held above the motherboard and the R is denoting that the connector is a "right angle" connector rather than the vertical you would see in desktop PCs (thanks to [kicken](https://superuser.com/users/515238/kicken) for clarifying and for a [datasheet](https://www.te.com/commerce/DocumentDelivery/DDEController?Action=srchrtrv&DocNm=1-1773929-8_Memory_Socket_QRG&DocType=DS&DocLang=EN)). This is especially likely given the lower slot states "5.2R" Giving this height *might* allow a manufacturer to fit low profile components to the motherboard in the area under the memory stick. (Such as the 5.2mm high memory stick slot) Having those designations in the plastic allows manufacturers to quickly determine that they have the correct part, it is not really relevant to you as a user.
How to refresh parent form when a button in the child form is clicked? When the child form is closed, this code works:

```
private void listView1_MouseDoubleClick(object sender, EventArgs e)
{
    ListViewItem item = listView1.SelectedItems[0];
    string s = item.SubItems[6].Text;
    q = m;
    CommercialOfferEditProperties ob = new CommercialOfferEditProperties(s, q);
    ob.FormClosed += new FormClosedEventHandler(ob_FormClosed);
    ob.Show(); //show child
}

void ob_FormClosed(object sender, FormClosedEventArgs e)
{
   some action
}
```

But how can I run the `ob_FormClosed` action (or a newly created action) when a button in the child form is clicked, rather than only when the child form is closed?
1. Add a OnClick event to your child form (`CommercialOfferEditProperties`) 2. Subscribe to in in the parent form. 3. Trigger OnClick every time the child forms button is clicked. That way you will be able to notify the parent. Example: ``` //Child form declaration public class CommercialOfferEditProperties:Form { public event EventHandler ButtonClicked; public void NotifyButtonClicked(EventArgs e) { if(ButtonClicked != null) ButtonClicked(this,e); } ... } ``` Parent form: ``` private void listView1_MouseDoubleClick(object sender, EventArgs e) { ListViewItem item = listView1.SelectedItems[0]; string s = item.SubItems[6].Text; q = m; CommercialOfferEditProperties ob = new CommercialOfferEditProperties(s, q); ob.FormClosed += new FormClosedEventHandler(ob_FormClosed); ob.ButtonClicked += new EventHandler(ob_ButtonClicked); ob.Show(); //show child } void ob_FormClosed(object sender, FormClosedEventArgs e) { //process form close } void ob_ButtonClicked(object sender, EventArgs e) { //process button clicked } ```
How can I specifiy what access I need from my user's Facebook accounts when using OmniAuth? When you use [OmniAuth](http://github.com/intridea/omniauth) to login to a web app through Facebook, these are the permissions the webapp has: ``` Access my basic information Includes name, profile picture, gender, networks, user ID, list of friends, and any other information I've shared with everyone. Send me email WebApp may email me directly at [email protected] Access my data any time WebApp may access my data when I'm not using the application ``` Whereas when you use the [mini\_fb](http://github.com/appoxy/mini_fb) gem to link a web app to Facebook, these are the permissions (have to specify this as code otherwise formatting was weird): ``` Access my basic information Includes name, profile picture, gender, networks, user ID, list of friends, and any other information I've shared with everyone. Required Send me email WebApp may email me directly at [email protected] · Required Access my profile information Likes, Music, TV, Movies, Books, Quotes, About Me, Activitie...s, Interests, Groups, Events, Notes, Birthday, Hometown, Current City, Website, Religious and Political Views, Education History, Work History and Facebook StatusSee More Required Online Presence Required Access my family & relationships Family Members and Relationship Status Required Access my photos and videos Photos Uploaded by Me, Videos Uploaded by Me and Photos and Videos of Me Required Access my friends' information Birthdays, Religious and Political Views, Family Members and... Relationship Statuses, Hometowns, Current Cities, Likes, Music, TV, Movies, Books, Quotes, Activities, Interests, Education History, Work History, Online Presence, Websites, Groups, Events, Notes, Photos, Videos, Photos and Videos of Them, 'About Me' Details and Facebook StatusesSee More Required Post to my Wall WebApp may post status messages, notes, photos, and videos to my Wall Access messages in my inbox Access posts in my News Feed Access my data any time WebApp may access my data when I'm not using the application Access Facebook Chat Send me SMS messages WebApp may send SMS messages to my phone: Manage my events WebApp may create and RSVP to events on my behalf Access my custom friend lists Access my friend requests Insights WebApp may access Insights data for my pages and applications Manage my advertisements ``` I'm using [OmniAuth](http://github.com/intridea/omniauth) at the moment, and would like to keep doing so, but my app needs more permissions, like some of the extra ones that [mini\_fb](http://github.com/appoxy/mini_fb) has. Does anyone know how I can customize [OmniAuth](http://github.com/intridea/omniauth) to request extra permissions?
You can request the extra permissions using the `:scope` option:

```
use OmniAuth::Strategies::Facebook, 'app_id', 'app_secret', {:scope => 'email,offline_access, your,scope,you,want'}
```

Check the [Facebook permissions documentation](https://developers.facebook.com/docs/reference/api/permissions/) for the permissions you really need and list them, comma-separated, in the `:scope` option.

If you use an initializer to configure OmniAuth, it looks like this:

```
Rails.application.config.middleware.use OmniAuth::Builder do
  provider :facebook, 'APP_ID', 'APP_SECRET', {:scope => 'email,offline_access, your,scope,you,want'}
end
```
python: convert base64 encoded png image to jpg I want to convert some base64 encoded png images to jpg using python. I know how to decode from base64 back to raw: ``` import base64 pngraw = base64.decodestring(png_b64text) ``` but how can I convert this now to jpg? Just writing pngraw to a file obviously only gives me a png file. **I know I can use PIL, but HOW exactly would I do it?** Thanks!
You can use [PIL](http://www.pythonware.com/products/pil/): ``` data = b'''iVBORw0KGgoAAAANSUhEUgAAABQAAAAUCAYAAACNiR0NAAAAAXNSR0IArs4c6QAAAIBJRE FUOMvN08ENgCAMheG/TGniEo7iEiZuqTeiUkoLHORK++Ul8ODPZ92XS2ZiADITmwI+sWHwi w2BGtYN1jCAZF1GMYDkGfJix3ZK8g57sJywteTFClBbjmAq+ESiGIBEX9nCqgl7sfyxIykt 7NUUD9rCiupZqAdTu6yhXgzgBtNFSXQ1+FPTAAAAAElFTkSuQmCC''' import base64 from PIL import Image from io import BytesIO im = Image.open(BytesIO(base64.b64decode(data))) im.save('accept.jpg', 'JPEG') ``` In very old Python versions (2.5 and older), replace `b'''` with `'''` and `from io import BytesIO` with `from StringIO import StringIO`.
npm publish fails with GitLab NPM registry I have tried to make use of the new NPM registry that's now part of the free GitLab edition. I am attempting to create a NPM package and publish it on our company's GitLab instance. When attempting to run `npm publish`, the process exits with the error: ``` npm ERR! code E404 npm ERR! 404 Not Found - PUT https://gitlab.myemployer.com/api/v4/projects/1873/packages/npm/@sqt-klu%2fodysseus-nn-core npm ERR! 404 npm ERR! 404 '@sqt-klu/[email protected]' is not in the npm registry. npm ERR! 404 You should bug the author to publish it (or use the name yourself!) npm ERR! 404 npm ERR! 404 Note that you can also install from a npm ERR! 404 tarball, folder, http url, or git url. ``` This problem appears when run through GitLab CI/CD as well as through the local command line. For reference, our internal GitLab's URL is gitlab.myemployer.com/sqt-klu/odysseus/nn-core. As such, I presume `@sqt-klu` is the scope GitLab wants. The CI/CD file (and what I have been trying to replicate locally) does this before `npm publish` during the `before_script` phase: ``` - npm config set @sqt-klu:registry https://gitlab.myemployer.com/api/v4/projects/${CI_PROJECT_ID}/packages/npm/ - npm config set https://gitlab.myemployer.com/api/v4/projects/${CI_PROJECT_ID}/packages/npm/:_authToken ${CI_JOB_TOKEN} ``` Locally, my user-level npmrc is as following (actual token is removed but it is a personal access token that has been assigned all possible scopes): ``` audit-level=high python=c:\Python27\python.exe https://gitlab.myemployer.com/api/v4/projects/1873/packages/npm/:_authToken=redacted @sqt-klu:registry=https://gitlab.myemployer.com/api/v4/projects/1873/packages/npm/ ``` And if it's relevant, here are the crucial contents of `package.json` (I've omitted `contributors`, `dependencies`, `devDependencies`, `engine` and `author` in this excerpt): ``` { "name": "@sqt-klu/odysseus-nn-core", "version": "0.0.1", "license": "UNLICENSED", "publishConfig": { "@sqt-klu:registry":"https://gitlab.myemployer.com/api/v4/projects/1873/packages/npm/" } } ``` I have also tried a few different package names as per [this official guide](https://docs.gitlab.com/ee/user/packages/npm_registry/index.html#package-naming-convention), to no avail. I feel like I am missing something very trivial, but I am stuck on this. Any helpful pointers or ideas are appreciated. Note: I have replaced the FQDN of the endpoint in *all* mentions in case my employer gets grumpy about this.
404 errors can, confusingly perhaps, refer to problems with credentials in this situation. You should replace ``` https://gitlab.myemployer.com/api/v4/projects/${CI_PROJECT_ID}/packages/npm/:_authToken ``` with: ``` //gitlab.myemployer.com/api/v4/projects/${CI_PROJECT_ID}/packages/npm/:_authToken ``` All other settings look okay\* and should work. By default, a Gitlab project should have the package repository feature enabled. You can en/disable it in the project settings. --- \* you could reduce the scope of your personal access token to just `api`. When/if you use project-level or org/group-level deploy tokens, they only need `read_package_registry` and/or `write_package_registry`.
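The same scheme-stripping applies to the user-level `.npmrc` shown in the question, which would become something like:

```
@sqt-klu:registry=https://gitlab.myemployer.com/api/v4/projects/1873/packages/npm/
//gitlab.myemployer.com/api/v4/projects/1873/packages/npm/:_authToken=redacted
```

(The registry line keeps its scheme; only the `_authToken` key is written without `https:`.)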
how to get output from a pipe connection before closing it in R? In R, we can open a pipe connection using `pipe()` and write to it. I observed the following situation that I do not quite understand. Let's use a `python` pipe for example: ``` z = pipe('python', open='w+') cat('x=1\n', file=z) cat('print(x)\n', file=z) cat('print(x+2)\n', file=z) cat('print(x+2\n', file=z) cat(')\n', file=z) close(z) ``` What I was expecting was the output from `print()` would be immediately shown in the R console, but the fact is the output comes only after I close the pipe connection: ``` > z = pipe('python', open='w+') > > cat('x=1\n', file=z) > cat('print(x)\n', file=z) > cat('print(x+2)\n', file=z) > cat('print(x+2\n', file=z) > cat(')\n', file=z) > > close(z) 1 3 3 ``` So my question is, how can I get the output before I close the connection? Note that it does not seem to be possible to capture the output using `capture.output()`, either: ``` > z = pipe('python', open='w+') > > cat('x=1\n', file=z) > cat('print(x)\n', file=z) > cat('print(x+2)\n', file=z) > cat('print(x+2\n', file=z) > cat(')\n', file=z) > > x = capture.output(close(z)) 1 3 3 > x character(0) ``` The background of this question is the [`knitr` engines](http://yihui.name/knitr/demo/engines/). For the interpreted languages like Python, I wish I can open a persistent "terminal" so that I can keep on writing code into it and get output from it. I'm not sure if `pipe()` is the correct way to go, though.
Python notices that the input is not interactive and waits until the connection is closed to parse and execute the code. You can use the `-i` option to force it to stay in interactive mode (but the output is a bit mangled).

```
z = pipe('python -i', open='w')

cat('x=1\n', file=z)
cat('print(x)\n', file=z)
cat('print(x+2)\n', file=z)
cat('print(x+2\n', file=z)
cat(')\n', file=z)
Sys.sleep(2)
# Python 2.7.4 (default, Apr 19 2013, 18:28:01)
# [GCC 4.7.3] on linux2
# Type "help", "copyright", "credits" or "license" for more information.
# >>> >>> 1
# >>> 3
# >>> ... 3
# >>>
close(z)
```

Your actual problem is more complicated: you need to both read from and write to the same connection. I do not know how to do that in a portable way, but you can use a pipe together with a named pipe (a "fifo") on platforms that support them.

```
stopifnot( capabilities("fifo") )
system('mkfifo /tmp/Rpython.fifo')
output <- fifo('/tmp/Rpython.fifo', 'r')
input <- pipe('python -i > /tmp/Rpython.fifo', 'w')

python_code <- "
x=1
print(x)
print(x+2)
print(x+2
)
"
cat( python_code, file = input )
flush( input )

Sys.sleep(2) # Wait for the results
result <- readLines(output)
result
# [1] "1" "3" "3"
```
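One small housekeeping note on the fifo approach: when you are done, close both connections and delete the fifo file so repeated runs don't trip over a stale pipe. Something along these lines should work on the same platforms:

```
# Tidy up after reading the results
close(input)
close(output)
unlink('/tmp/Rpython.fifo')
```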
Example of entity declared in an anonymous namespace that has external linkage
Given the statements below (emphasis mine) in §3.5/4 and in Note [94] on §7.3.1.1/1, I'd like to have a single example of an entity declared in an unnamed namespace that has external linkage.

§3.5/4

> **An unnamed namespace or a namespace declared directly or indirectly
> within an unnamed namespace has internal linkage**. All other namespaces
> have external linkage. **A name having namespace scope that has not been
> given internal linkage above has the same linkage as the enclosing
> namespace if it is the name of**
>
> - a variable; or
> - a function; or
> - a named class (Clause 9), or an unnamed class defined in a typedef declaration in which the class has the typedef name for linkage
> purposes (7.1.3); or
> - a named enumeration (7.2), or an unnamed enumeration defined in a typedef declaration in which the enumeration has the typedef name for
> linkage purposes (7.1.3); or
> - an enumerator belonging to an enumeration with linkage; or
> - a template.

Note [94] on §7.3.1.1/1:

> Although entities in an unnamed namespace might have external linkage,
> they are effectively qualified by a name unique to their translation
> unit and therefore can never be seen from any other translation unit.
You are looking at a defect in the standard. The change that makes unnamed namespace members have internal linkage happened fairly late in the C++11 standardization process, in November 2010 ([CWG issue 1113](http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_defects.html#1113)). As a result, a number of places in the standard needed to be changed but weren't; one of them is the footnote you quoted.

[CWG issue 1603](http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_active.html#1603), currently in "ready" status (read: the resolution is likely to be adopted at the next committee meeting), will fix this and a number of other issues related to giving internal linkage to unnamed namespace members.
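If it helps to see why the old wording wasn't just an oddity, here is a minimal sketch (my own example, not taken from the standard) of the classic case that relied on it: C++03 required non-type template arguments to name an entity with external linkage, and unnamed-namespace variables qualified precisely because they had external linkage back then:

```
template <int* P> struct S {};

namespace {
    int x;   // external linkage in C++03, internal linkage in C++11
}

S<&x> s;     // OK in C++03 because x had external linkage;
             // still OK in C++11, but only because C++11 also accepts
             // internal-linkage objects as non-type template arguments
```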
How to get object in WebGL 3d space from a mouse click coordinate
I'm building a boardgame in WebGL. The board can be rotated/zoomed. I need a way to translate a click on the canvas element (x, y) into the relevant point in 3D space (x, y, z). Ultimately, I want to know the (x, y, z) coordinate of the point of contact on the object closest to the user. For instance, the user clicks a piece: you can imagine a ray traveling through 3D space that passes through both the piece and the game board, but I want the (x, y, z) coordinate of the piece at the point where it was touched.

I feel like this must be a very common problem, but I can't seem to find a solution in my searches. There must be some way to project the current view of the 3D space into 2D so you can map each point in 2D space to the relevant point in 3D space. I want the user to be able to mouse over a space on the board and have that spot change color.
You're looking for an unproject function, which converts screen coordinates into a ray cast from the camera position into the 3D world. You must then perform ray/triangle intersection tests to find the closest triangle to the camera which also intersects the ray. I have an example of unprojecting available at [jax/camera.js#L568](https://github.com/sinisterchipmunk/jax/blob/5d392c9d67cb9ae5623dc03846027c473f625925/src/jax/webgl/camera.js#L568) -- but you'll still need to implement ray/triangle intersection. I have an implementation of that at [jax/triangle.js#L113](https://github.com/sinisterchipmunk/jax/blob/5d392c9d67cb9ae5623dc03846027c473f625925/src/jax/geometry/triangle.js#L113). There is a simpler and (usually) faster alternative, however, called 'picking'. Use this if you want to select an entire object (for instance, a chess piece), and if you don't care about where the mouse actually clicked. The WebGL way to do this is to render the entire scene in various shades of blue (the blue is a key, while red and green are used for unique IDs of the objects in the scene) to a texture, then read back a pixel from that texture. Decoding the RGB into the object's ID will give you the object that was clicked. Again, I've implemented this and it's available at [jax/world.js#L82](https://github.com/sinisterchipmunk/jax/blob/5d392c9d67cb9ae5623dc03846027c473f625925/src/jax/webgl/world.js#L82). (See also lines 146, 162, 175.) Both approaches have pros and cons (discussed [here](http://learningwebgl.com/blog/?p=1879&cpage=1#comment-12130) and in some of the comments after) and you'll need to figure out which approach best serves your needs. Picking is slower with huge scenes, but unprojecting in pure JS is extremely slow (since JS itself isn't all that fast) so my best recommendation would be to experiment with both. FYI, you could also look at the GLU project and unproject code, which I based my code loosely upon: <http://www.opengl.org/wiki/GluProject_and_gluUnProject_code>
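If you go the picking route, the read-back side is short. This is only a sketch (`gl` and `canvas` are whatever you already have, and the decoding must match however your pick-pass shader encodes the IDs), to be called while the pick framebuffer is still bound:

```
function pickObjectId(gl, canvas, mouseX, mouseY) {
  var pixel = new Uint8Array(4);
  // WebGL's origin is the bottom-left corner, so flip the Y coordinate
  gl.readPixels(mouseX, canvas.height - mouseY, 1, 1,
                gl.RGBA, gl.UNSIGNED_BYTE, pixel);
  if (pixel[2] === 0) return null;    // blue is the "pickable" key; 0 means background
  return pixel[0] * 256 + pixel[1];   // red and green carry the object ID
}
```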