Title: Visualizing a tube with hexagonal mesh that is (semi-)opaque in python Tags: python;matplotlib;3d Question: I have a honeycomb mesh sheet: that I fold to make a tube like this: I used python with mpl_toolkits.mplot3d to generate the tube. I've got two basic problems with that: 1) As it is, the wires from the front and back overlap, rendering the structure of the tube confusing. I'd like to set some degree of opacity so that I can see the frontal hexagonal mesh better without the clutter of the back mesh. 2) In the flat sheet, I set the aspect ratio to equal, but in the 3d plot it seems distorted. It might be due to the viewpoint, but as I moved it around the hexagons still seemed too distorted compared with the flat version. They seem flattened along the axis of the tube, a direction with no curvature. I tried to set the 3d aspect ratio with: ```fig = plt.figure(figsize=plt.figaspect(1.0)*1.5) ax = fig.gca(projection='3d') ``` but it doesn't seem to work properly. The whole code is below: ```import numpy as np from mpl_toolkits.mplot3d import Axes3D import matplotlib.pyplot as plt from scipy.spatial import Delaunay Nx = 20 Ny = 20 dx = 1.0 dy = dx*np.sqrt(3.0)/2 # number of points Npts = Nx*Ny points = np.zeros((Npts, 2)) # Initial positions for j in range(Ny): x = 0 for i in range(Nx): if j%2 ==0: points[j*Nx+i, 0] = x points[j*Nx+i, 1] = j*dy if i%2 == 1: x += 2*dx else: x += dx elif j%2 ==1: points[j*Nx+i, 0] = x-0.5*dx points[j*Nx+i, 1] = j*dy if i%2 == 1: x += dx else: x += 2*dx # print points # compute Delaunay tessellation tri = Delaunay(points) # obtain list of nearest neighbors indices, indptr = tri.vertex_neighbor_vertices plt.figure() # Determine equal aspect ratio plt.axes().set_aspect('equal') plt.plot(points[:, 0], points[:, 1], 'ro') plt.xlabel("$x$", fontsize=18) plt.ylabel("$y$", fontsize=18) plt.title("Hexagonal mesh sheet") nnIndPtr = np.array([], dtype = int) nnIndices = np.zeros(Npts+1, dtype = int) for k in range(Npts): count = 0 for i in 
indptr[indices[k]:indices[k+1]]: # distance dist = np.linalg.norm(points[k]-points[i]) # print k, i, dist # Build nearest neighbor list pointer from Delaunay triangulation if dist < 1.1: nnIndPtr = np.append(nnIndPtr, i) count += 1 nnIndices[k+1]=nnIndices[k]+count for k in range(Npts): for i in nnIndPtr[nnIndices[k]:nnIndices[k+1]]: plt.plot([points[i, 0], points[k, 0]],[points[i, 1], points[k, 1]], "b-") plt.savefig('sheet.png') # Adjusts the aspect ratio and enlarges the figure (text does not enlarge) fig = plt.figure(figsize=plt.figaspect(1.0)*1.5) ax = fig.gca(projection='3d') ax.set_axis_off() # Fold the sheet into a tube without stretching it s = points[:, 0] Lx = Nx*dx x = Lx *np.cos(2*np.pi*s/Lx) y = Lx *np.sin(2*np.pi*s/Lx) z = points[:, 1] ax.scatter(x, y, z, color='r') for k in range(Npts): for i in nnIndPtr[nnIndices[k]:nnIndices[k+1]]: ax.plot([x[i], x[k]],[y[i], y[k]], [z[i], z[k]], color='b') ax.set_xlabel('X Label') ax.set_ylabel('Y Label') ax.set_zlabel('Z Label') ax.set_title("Hexagonal mesh tube") plt.savefig('tube.png') plt.show() ``` Comment: (1) I don't quite understand your comment, since what's in the front or in the back depends on the viewpoint. The variable k runs over all points of the structure and is fixed. It doesn't depend on the point of view. Did you mean something like using a different color for each half of the tube, assuming the cut would be along the tube axis? That would be helpful already. A solution that depends on the viewpoint would be better, but the algorithm would be more complex. (2) do you mean using ax.set_zlim(z_min, z_max)? Comment: That's interesting. I'll analyse your answer. Another approach I'm thinking of is to color code (or gray scale) the connections based on the distance from the point of view. A few snapshots would suffice. I'd take the distance between the point of view and the median point of the mesh edges. Comment: (1) You can set `alpha` depending on `k` in the loop. 
(2) There is no real `aspect` in 3D, but you can sure play around with the `zlim` until it fits your needs. Comment: (2) yes, that was my suggestion. (1) If you have a non-constant point of view, that will indeed get complicated. Essentially I implemented something similar [in this answer](https://stackoverflow.com/questions/41699494/how-to-obscure-a-line-behind-a-surface-plot-in-matplotlib).
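The viewpoint-dependent idea floated in the comments (fade or color each edge by its distance from the camera) can be sketched as a pure computation, separate from the plotting calls. The function name and the linear fade are illustrative choices, not from the thread:

```python
import numpy as np

def edge_alphas(midpoints, viewpoint, a_min=0.1, a_max=1.0):
    """Map each edge midpoint's distance from the viewpoint to an alpha
    value: nearest edges fully opaque, farthest edges faint."""
    d = np.linalg.norm(midpoints - viewpoint, axis=1)
    # Normalize distances to [0, 1]; guard against a degenerate range.
    span = d.max() - d.min()
    t = (d - d.min()) / span if span > 0 else np.zeros_like(d)
    # Near (t = 0) maps to a_max, far (t = 1) maps to a_min.
    return a_max - t * (a_max - a_min)

# Each edge would then be drawn individually in the existing loop, e.g.
# ax.plot([x[i], x[k]], [y[i], y[k]], [z[i], z[k]], color='b', alpha=alphas[e])
```

Since the alphas depend only on the chosen viewpoint, re-running this for a few fixed viewpoints (as the OP suggests) is cheap; the plotting loop stays unchanged apart from the per-edge `alpha` keyword.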
Title: Shell script function with global variable Tags: function;shell;variables;whiptail Question: Yesterday I got a very easy task, but unfortunately it looks like I can't do it with nice code. The task briefly: I have a lot of parameters that I want to ask for with whiptail in "interactive" mode in the installer script. The details of the code: ```#!/bin/bash address="(978)396-9398" # default address, which the user can modify addressT="" # temporary variable that I want to check and modify; that's why I don't modify the original variable in the function port="1234" portT="" ... # there are a lot of other variables that I need for the installer function parameter_set { $1=$(whiptail --title "Installer" --inputbox "$2" 12 60 "$3" 3>&1 1>&2 2>&3) # that's line 38 } parameter_set "addressT" "Please enter IP address!" "$address" parameter_set "portT" "Please enter PORT!" "$port" ``` But I got the following error: ```"./install.sh: line: 38: address=(978)396-9398: command not found" ``` If I assign to another variable (not a parameter of the function), it works well. ```function parameter_set { foobar=$(whiptail --title "Installer" --inputbox "$2" 12 60 "$3" 3>&1 1>&2 2>&3) echo $foobar } ``` I tried using a global retval variable and assigning it to the original variable outside of the function; it works, but I think it's not the nicest solution for this task. Could anybody help me with what I'm doing wrong? :) Thanks in advance (and sorry for my bad English..), Attila Comment: It seems that your whiptail command is not producing any output because of the redirections. So the command substitution leaves the value empty. Try removing them. Also it's better to save the new value to a local variable first: Here is the accepted answer: It seems that your whiptail command is not producing any output because of the redirections. So the command substitution leaves the value empty. Try removing them. 
Also it's better to save the new value to a local variable first: ```parameter_set() { local NAME=$1 local NEWVALUE=$(whiptail --title "Installer" --inputbox "$2" 12 60 "$3") export $NAME="$NEWVALUE" } ```
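The accepted answer's `export $NAME=...` works, but it also exports the variable into the environment. For completeness, bash's `printf -v` assigns to a dynamically named variable without exporting it. A sketch (whiptail itself is replaced by a plain value here so the snippet runs non-interactively):

```shell
#!/bin/bash
# Assign a value to the variable whose *name* is passed in $1,
# without exporting it (printf -v requires bash >= 3.1).
parameter_set() {
    local name=$1
    local newvalue=$3   # in the real script: $(whiptail --title "Installer" --inputbox "$2" 12 60 "$3")
    printf -v "$name" '%s' "$newvalue"
}

addressT=""
parameter_set "addressT" "Please enter IP address!" "10.0.0.1"
echo "$addressT"
```

Because `addressT` is not declared `local` inside the function, `printf -v` sets it in the caller's scope, which is exactly the behavior the OP's `$1=$(...)` line was trying (and failing) to get.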
Title: Pass Data using popToRootViewController Tags: ios;objective-c;uistoryboardsegue;rootview;poptoviewcontroller Question: So I have a basic app; here is how it works. I have a root view controller called A and a table view controller called B. When the user selects a row in B, I pop back to the root view controller A. What I am trying to do is pass the data of the row that was selected as an NSString back to the root view controller A, and then use this string to "do something" depending on the string. I have tried using the NSNotification method but then I can't use the string to do something. Here's what I have tried: ```//tableViewB.m -(void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { [[NSNotificationCenter defaultCenter] postNotificationName:@"passData" object:[[_objects objectAtIndex:indexPath.row] objectForKey:@"title"]]; [self.navigationController popToRootViewControllerAnimated:YES]; } //rootViewA.m -(void)dataReceived:(NSNotification *)noti { NSLog(@"dataReceived :%@", noti.object); } -(void)viewDidLoad { [super viewDidLoad]; [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(dataReceived:) name:@"passData" object:nil]; } ``` What I am trying to do is something more like what you can do when you push a viewController and use the prepareForSegue method. Thanks in advance for your help. Comment: @rdelmar how would I go about using that in the didSelectRowAtIndexPath method? Comment: @rdelmar so you are saying have a push segue going from the TableView B to the root A and then call performSegueWithIdentifier:sender: in the didSelectRowAtIndexPath method? Comment: @rdelmar when I do the control click and drag I don't see unwind segue as an option? Comment: In addition to the other answers posted, you could use an unwind segue to go back rather than popToRootViewController, which would allow you to use prepareForSegue the same way you do for a forward segue. 
Comment: You do it just like any other segue that you're going to call manually -- you give your segue an identifier, and call performSegueWithIdentifier:sender:. You could also connect the unwind segue directly from the cell, in which case you shouldn't implement [email protected]. Comment: No, not a push segue, an unwind segue. If you're connecting it from the controller (B), then you would call performSegueWithIdentifier:sender: in the didSelectRowAtIndexPath method, but if you connect the segue directly from the cell, then you don't need to call anything. Comment: That's not how you create an unwind segue. Have a look at the answer here, http://stackoverflow.com/questions/12561735/what-are-unwind-segues-for-and-how-do-you-use-them Here is the accepted answer: Try this, it may be helpful: ``` MyAController *myController = (MyAController *)[self.navigationController.viewControllers objectAtIndex:0]; myController.myText = @"My String" ; [self.navigationController popToViewController:myController animated:YES]; ``` I have used this many times; it works fine. Note: replace your class name and string in this. Thanks :) Comment for this answer: This might work for the OP, but please take note that the viewControllers array is a stack. In general, to get the vc below the top, use navVC.viewControllers[MAX(0,navVC.viewControllers.count-2)]. The one at count-1 is the top, the one at zero is the root. Here is another answer: You're doing the right thing but with the wrong parameters. The ```object:``` param in the notification post is the sending object. 
There's another post method that allows the caller to attach ```userInfo:``` as follows: ```-(void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { // notice the prettier, modern notation NSString *string = _objects[indexPath.row][@"title"]; [[NSNotificationCenter defaultCenter] postNotificationName:@"passData" object:self userInfo:@{ @"theString" : string }]; [self.navigationController popToRootViewControllerAnimated:YES]; } ``` On the receiving end, just get the data out of the notification's user info with the same key: ```-(void)dataReceived:(NSNotification *)notification { NSLog(@"dataReceived :%@", notification.userInfo[@"theString"]); } ``` Comment for this answer: Okay, yes that works, but I need to be able to use the string that I pass back to A in the viewWillAppear method, which I can do with this. Correct? Comment for this answer: Yes. If the string is needed to update the view, you can do that right there in the dataReceived method. Or if you want to save it to a view controller property in dataReceived, you can use it anytime thereafter, including just before that view controller's view reappears. 
Here is another answer: Use a delegate: it would be better than NSNotification. tableView.h: ```@protocol tableViewDelegate -(void) tableViewSelectRowWithString:(NSString*)str; @end @interface tableView:UITableViewController //or something like this @property(nonatomic,weak) id<tableViewDelegate> delegate; ``` tableView.m: ```-(void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { [self.delegate tableViewSelectRowWithString:@"your string"]; [self.navigationController popToRootViewControllerAnimated:YES]; } -(void) dealloc{self.delegate = nil;} ``` //rootViewA.h ```@interface rootViewA : UIViewController<tableViewDelegate> ``` //rootViewA.m ```//When you create the tableView and push the view: tableView *t = ....; tableView.delegate = self -(void) tableViewSelectRowWithString:(NSString*)str{//use string} ``` Comment for this answer: When I add the code into my tableView.h I get a bunch of errors on the code. I just pasted it below the @interface line. Is there a specific place it needs to go?
Title: Excel Macro Save As PDF Filename bug Tags: excel;vba;pdf Question: Recently, I needed to automate the file saving features inside an Excel file, and I managed to rig up a basic macro that can save as PDF. Source: https://exceloffthegrid.com/vba-code-save-excel-file-as-pdf/ and https://www.contextures.com/excelvbapdf.html Here is the macro. The idea is that the user can save the current sheet as a PDF file in any folder location they want. The bug looks like this: suppose I have a file named A report template.xlsm and I key in the name I want to save, for example Marketing - 15_5_2022.pdf, in a folder. Somehow, after saving, the file name reverts back to A report template.pdf instead of Marketing - 15_5_2022.pdf. Any idea where I went wrong in the code? ```Sub SaveAsPDF() Dim PdfFilename As Variant PdfFilename = Application.GetSaveAsFilename( _ InitialFileName:="Dept - Due Date", _ FileFilter:="PDF, *.pdf", _ Title:="Save Report As PDF") If PdfFilename <> False Then ActiveSheet.ExportAsFixedFormat _ Type:=PdfFilenme, _ Quality:=xlQualityStandard, _ IncludeDocProperties:=False, _ IgnorePrintAreas:=False, _ OpenAfterPublish:=True End If End Sub ``` Comment: Don't forget, filenames can't contain / or \ so you'll need to reformat your dates. Comment: Good point, CLR. I edited the question with corrections. Here is the accepted answer: You are not setting the name of the file to be saved. Try: ```ActiveSheet.ExportAsFixedFormat _ Type:=xlTypePDF, _ Filename:=PdfFilename, _ Quality:=xlQualityStandard, _ IncludeDocProperties:=False, _ IgnorePrintAreas:=False, _ OpenAfterPublish:=True ``` Regards,
Title: Component doesn't re-render after redirecting in Reactjs with mobx Tags: reactjs;react-router;mobx Question: I made a sign-up page with mobx [email protected]. When I submit after filling in the information, if there is an error, it redirects with history.push. When I use history.push('/'), the home URL, it re-renders well. But when I want to re-render on the same URL because of a duplicate-email check, it redirects with history.push('/signup'). Unlike '/', it doesn't re-render, so the previously entered duplicate email is left in the field. Since I reset the email state in componentWillUnmount, the typed email should disappear as if F5 were pressed. Could you recommend a solution? Thank you so much! ``` // In mobx store, @action submitSignUp = () => { const joinData = { userName: this.initialState.register.userName, email: this.initialState.register.email, password: this.initialState.register.password, passwordConfirm: this.initialState.register.passwordConfirm, }; axios .post('user/join', joinData) .then((response) => { if (response.data === true) { history.push('/signup'); } }) .catch((error) => { console.log(error); }); }; ```
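A common workaround (my suggestion, not from the question) is to reset the form state in componentDidUpdate when the location changes, since pushing the same path does not unmount the component, so componentWillUnmount never runs. The decision logic can be sketched framework-free; the names are illustrative:

```javascript
// Decide whether the form state should be reset after a navigation.
// Pushing '/signup' while already on '/signup' does not remount the
// component, so comparing pathnames is not enough. history.push
// assigns a fresh location key on every push, even for an identical
// path, so comparing keys catches the same-URL redirect too.
function shouldResetForm(prevLocation, nextLocation) {
  return prevLocation.key !== nextLocation.key;
}

// In the component (sketch):
// componentDidUpdate(prevProps) {
//   if (shouldResetForm(prevProps.location, this.props.location)) {
//     store.resetRegisterState(); // hypothetical reset action
//   }
// }
```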
Title: Can MySql database server store date values in mm-dd-yyyy format? Tags: mysql;date Question: Can the MySql database server store date values in mm-dd-yyyy format? Here is the accepted answer: Not as a native ```DATE``` column: those are always represented as ```YYYY-MM-DD```. However, you can use ```DATE_FORMAT()``` to output the date in any format you like. ```SELECT DATE_FORMAT(column, "%m-%d-%Y"); ``` Comment for this answer: thanks, you are like a family member to me :) (see you so often) Here is another answer: You can change the date format client side like this: ```mysql> set date_format = '%m-%d-%Y'; ``` How the database actually stores the dates should be irrelevant. Here is another answer: Dates are stored as dates, not strings. They are not stored in any specific format. You can retrieve dates in a variety of formats by calling ```DATE_FORMAT```. Here is another answer: No, MySQL cannot store dates in that format. It can store dates only in the yyyy-mm-dd format. You change the format of the data to be inserted into or extracted from the MySQL db.
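The "store one format, display any format" principle carries over to application code as well; for instance, a YYYY-MM-DD string fetched from MySQL can be rendered as mm-dd-yyyy at display time. An illustrative sketch in Python, not MySQL-specific code:

```python
from datetime import date

def mysql_to_us(iso_string):
    """Parse a YYYY-MM-DD string (MySQL's native DATE representation)
    and render it as mm-dd-yyyy for display."""
    return date.fromisoformat(iso_string).strftime("%m-%d-%Y")
```

Database drivers typically hand back date objects rather than strings anyway, in which case only the `strftime` step applies.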
Title: Determine if the selected email is from inbox or sent items Tags: c#;outlook;vsto;outlook-addin Question: I am programming an Outlook Add-in and need to determine whether a selected email is from ```Inbox``` or ```Sent Items``` so that I can tag the email with folder="Inbox" or "Sent" when I save it in my database. I understand that I can compare the folder name to Inbox or Sent Items and determine the folder; however, how do I determine this when the selected email is sitting in one of the sub-folders of the inbox? Is there a ```FolderType``` property to check whether the selected email's folder is inbox or sent (similar to identifying an item type with ```OlItemType```)? Here is the accepted answer: You need to look at the ```MailItem.Parent``` and cast it to an ```Outlook.Folder```. Once you have the ```Folder```, you can access the display name via ```Folder.Name```. If you want to determine whether the selected item is in a subfolder of ```Inbox```, you would need to recursively call up the ```Parent``` tree until ```Parent``` is null to find the root parent folder. ```Outlook.Explorer explorer = Globals.ThisAddIn.Application.ActiveExplorer(); Outlook.MailItem mailItem = explorer.Selection.OfType<Outlook.MailItem>().First(); Outlook.Folder parentFolder = mailItem.Parent as Outlook.Folder; if (parentFolder.Parent == null) // we are at the root { string folderName = parentFolder.Name; } else // .. recurse up the parent tree casting parentFolder.Parent as Outlook.Folder... ``` You should obviously add error handling and object disposal to this sample code.
Title: How to make part of a page invisible? Tags: macros;xetex Question: Excuse me for my bad English. I want to make the answer part of the page invisible but still occupy space. I use xetex. ```\documentclass{book} \usepackage{lipsum} \newcommand{\question}[1]{#1} \newcommand{\answer}[1]{#1} \begin{document} \question{What is your idea?} \answer{\lipsum[1]} \lipsum[2] \end{document} ``` Output: In the code above I want lipsum[1] to be invisible, like this. Unfortunately the transparent package does not work in xetex, and since the answer part might be anything, like tikz or a colorbox or other things, changing the color of the answer part to white does not solve the problem in some situations. Thanks in advance Comment: There is a difference between invisible but present (transparent, white, copy & paste is possible) and invisible occupying space, but not being there (phantom, no copy & paste). Comment: @cfr thanks, but phantom doesn't solve the problem Comment: @Daniel thanks. The content of the answer must be invisible Comment: „Invisible” in the printout / PDF display only, or should the content itself not be part of the PDF file? Here is another answer: You could use boxes to make sure the height is appropriate. That is, the actual height of the content. Almost. By defining the ```\ProcessCutFile``` to be empty, the ```comment``` package won't do anything (it's all in the docs). Furthermore, we can use the ```\CommentCutFile``` to put the content of the environment into a ```\box```, from which we can get the height. Then we just apply ```\vskip``` to push this height. This does not make space across multiple pages (although I made a similar answer here that does). 
Also it works with verbatim commands like ```listings``` ```\documentclass{article} \usepackage{comment} \usepackage{lipsum} \usepackage{tikz} % Make ourselves a new conditional % Use \HideSolutionstrue to "activate" it \newif\ifHideSolutions % Make a solution environment \generalcomment{solution}{% \begingroup \ifHideSolutions% % if \HideSolutionstrue is not called, then we remove the contents \def\ProcessCutFile{}\fi% }{% \ifHideSolutions% % and now (also when it's not called), we make a box % and then we \input the \CommentCutFile. \setbox1=\vbox{\input{\CommentCutFile}}% % Get the height from \ht1 and use \vskip to make appropriate space \vskip\ht1 \fi \endgroup% } % Comment out the line below to show solutions \HideSolutionstrue \begin{document} Some problem text goes here \begin{solution} \textbf{SOLUTION 1} \[ \forall\varepsilon>0\exists\delta>0:\dots \] \tikz{\draw(0,0)--(3,2);} \end{solution} %^ \end{solution} has to be ALONE on line without any prior spaces BETWEEN \begin{solution} \textbf{SOLUTION 2} \begin{verbatim} This is also hidden \end{verbatim} \[ \forall\varepsilon>0\exists\delta>0:\dots \] \end{solution} %^ has to be ALONE on line without any prior spaces Some text after solution. \end{document} ``` Here is another answer: Option 1: Nullify Everything and Measure Using Boxes If you want to make anything (any box) invisible, you could wrap things with an environment, capture the content in a box that you never typeset (just measure), measure the box's height, and add a vertical skip of that amount. The output might not be exactly the same as the visible version due to line skipping issues, but it should be within a reasonable margin of error. 
Code ```\documentclass{book} \usepackage{lipsum} \usepackage{xcolor} \usepackage{environ,varwidth} \newcommand{\question}[1]{#1} \newcounter{showsolutions} \setcounter{showsolutions}{0}% 0=False 1=True \newsavebox{\hidebox} \NewEnviron{answer}{\savebox{\hidebox}{\begin{varwidth}{\linewidth}\BODY\end{varwidth}}\ifnum\value{showsolutions}=0\relax\par\vspace{\the\dimexpr\ht\hidebox+\dp\hidebox}\else\BODY\fi}% \begin{document} \question{What is your idea?} \begin{answer} \lipsum[1] \end{answer} \lipsum[2] \end{document} ``` Option 2: Whitify Text For a text-only solution (if it is just text you're looking to hide, you want the text to take up the same amount of space as it normally would, and it should work with xelatex), swapping out black for white should do what you want. ```\documentclass{book} \usepackage{lipsum} \usepackage{xcolor} \newcommand{\question}[1]{#1} \newcommand{\answer}[1]{\begingroup\color{white}#1\endgroup} \begin{document} \question{What is your idea?} \answer{\lipsum[1]} \lipsum[2] \end{document} ``` Another way might be to have a look at the xetex documentation, particularly under the font options. Comment for this answer: @mojtababaghban Without example code, it is hard to help. Maybe you could, at least, list the conditions in the question Comment for this answer: @mojtababaghban See update. Comment for this answer: @mojtababaghban You never said anything about visibility logic (conditions) in the question, nor mentioned that some answers should not vertically skip the same amount of space as their contents. Again, you should clarify your requirements in your question. Anyway, adding a condition around \BODY would allow you to control the visibility of the contents. Comment for this answer: @mojtababaghban Just use the command using the white-out technique for inline text, and the environment for everything else. Comment for this answer: Thanks for your attention, but it is not just text I'm looking to hide. 
The answer argument might be tikz or a picture or math or text or anything else, so just changing the color to white does not solve every situation. Setting transparency to 0 would solve every situation, but the transparent package is not compatible with xetex. Comment for this answer: Thanks a lot. I want the page when the answer is invisible to look exactly like when it is visible. I want answer to have an argument, and depending on that argument the answer should be visible or invisible; the space occupied when the answer is invisible should be exactly like when the answer is visible. Comment for this answer: Consider a situation where the answer finishes on the same line the question is on; in this situation vspace is not needed, and simply not typesetting the answer solves the problem. In some situations the answer starts in the middle of a line, and there are other situations too. Here is another answer: A most straightforward solution would be to use the ```adjustbox``` package and define the answer command as: ``` \newcommand{\answer}[1]{\adjustbox{minipage=linewidth,phantom,frame}{#1}} ``` and remove the ```phantom``` option when the answer should be displayed. Of course, the ```frame``` is for demonstration purposes and most likely must be removed. Edit: I just read in the comments that the switch must be given as an argument, so ```\answer``` can be defined as: ```\newcommand\answer[2][phantom]{\adjustbox{minipage=\textwidth,frame,#1}{#2}} ``` with an optional argument, set by default to ```phantom``` to hide the content, and simply omitted to show it. Nevertheless, as a teacher, I would find it much more convenient to make a global change, for example with a simple ```\newif``` that could be used as follows. 
```\documentclass{book} \usepackage{lipsum} \usepackage{tikz} \usepackage{adjustbox} \newcommand{\question}[1]{#1\par} \newif\ifhideanswer \newcommand\answer[1]{% \ifhideanswer \adjustbox{minipage=\textwidth,phantom}{#1} \else \adjustbox{minipage=\linewidth}{#1} \fi } \begin{document} \hideanswerfalse \question{What is your idea?} \answer{% \tikz{\draw[fill=red,line width=1pt] circle(1ex);} \lipsum[1] } \lipsum[2] \rule{\textwidth}{1pt} \bigskip \hideanswertrue \question{What is your idea?} \answer{ \tikz{\draw[fill=red,line width=1pt] circle(1ex);} \lipsum[1] } \lipsum[2] \end{document} ``` producing the attached image. Comment for this answer: @Jhor With `\ifvmode` you can check whether you are on a new paragraph or still in the same paragraph. Comment for this answer: @Jhor In case you were going to implement the inline answer solution. Comment for this answer: consider a situation where the answer and question are on the same line Comment for this answer: If you really need this, I will try to provide it as an additional argument to ``answer``, containing the question... Comment for this answer: @Manuel, thanks for the hint, but I don't understand what the use would be here. My ``adjustbox`` solution likely won't use it, as the box is not breakable and the 'phantom' option affects it as a whole. Comment for this answer: For a "situation where the answer and question are on the same line" you will need the empty box to be broken across lines (and possibly pages). Hence the ``adjustbox`` approach cannot be used, or would require significant hacking of the package. On the other hand, as it seems that whitening the text could be enough for you, I suggest using the ``tcolorbox`` solution suggested in the accepted answer to https://tex.stackexchange.com/questions/434055
Title: To what extent is it sensible to use filter and sort options together? Tags: buttons;e-commerce;filter;sort Question: Working on an e-commerce shopping site, I am observing that users first sort and then refine their results. Will providing both options confuse the user? Comment: Could you please expand on your question, preferably with some examples. You won't be able to attach images yet, but if you upload them to imgur.com and insert the link, someone will attach them for you. Comment: @JohnGB Rep of 81 is plenty for attaching images. [New User Restrictions are lifted at 10 rep](http://ux.stackexchange.com/privileges/new-user) Comment: Sorting and filtering are two mutually exclusive features. Confusion can be avoided if the user interface is more intuitive. Comment: Andy - your question isn't very answerable currently, it's far too broad. Please have a read of the [ask] page for advice on how to construct useful answerable questions and update this post accordingly. Comment: @rags: Mutually exclusive? How do you figure that? Comment: Both are required Here is the accepted answer: The short answer It is absolutely sensible to use filter and sort options together because, for users, they are similar: tools for parsing search results. In my opinion they don't distinguish as clearly as developers do the differences between these two controls. The long answer I strongly recommend an article written by Greg Nudelman in UXmatters magazine: The Mystery of Filtering by Sorting. It gives interesting insights into this subject and backs them up with evidence gathered in user testing sessions. ``` After observing this phenomenon numerous times, it became clear to me that this was not merely a matter of a simple confusion of terms between filtering and sorting. Instead, it revealed a strong mental model of filtering by sorting that blurred the difference between these two modes of search results’ refinement. 
``` Here is another answer: While I agree with the commenters that your question is overly broad, in the general case the answer is no, providing both options will not confuse your users. In fact, what you are describing is an interface pattern called faceted search, and it is one of the most common patterns on the web. Amazon uses it: Zappos uses it: Once you recognize this pattern, you will see it everywhere. Note the convention to place filters directly to the left of the results. Both Zappos and Amazon respect this convention. Note also that they each handle sorting quite differently. Zappos uses a row of tabs with different sorting options and Amazon uses a diminutive "Sort by" dropdown. I don't know that either is strictly better, but I prefer Amazon's treatment because it means that users who are accustomed to sorting will be able to make use of the feature, and the diminutive design means that it will get out of the way for users who don't commonly sort. Since you are concerned about confusing your users, you may find this works better (but you should test!). In general, using conventional interface patterns is a good idea. When you have a problem that may fit an established pattern, I find it's best to try that pattern first and then fix anything that isn't working. For reference, the canonical book on interface patterns is Designing Interfaces by Jenifer Tidwell. It's getting on in age, but unless you're doing mobile, you'll find most of the established patterns still in place. If you are developing a mobile interface, that, of course, changes where you go to look for established patterns.
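Mechanically, filtering and sorting compose without conflict, which is part of why faceted search is so widespread: the filter narrows the result set and the sort only reorders whatever survives. A toy sketch with illustrative data, not from the answer:

```python
products = [
    {"name": "boot", "price": 120, "brand": "A"},
    {"name": "sandal", "price": 30, "brand": "B"},
    {"name": "sneaker", "price": 80, "brand": "A"},
]

def facet(items, brand=None, sort_key=None):
    """Apply an optional filter, then an optional sort.
    Either control can be used alone or both together."""
    if brand is not None:
        items = [p for p in items if p["brand"] == brand]
    if sort_key is not None:
        items = sorted(items, key=lambda p: p[sort_key])
    return items

# Filter to brand A, then sort by price: sneaker (80) before boot (120).
```

Because the two steps are independent, the UI can expose them as independent controls (sidebar facets plus a sort dropdown) without any ambiguity about the result.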
Title: Passing mutable dictionary as default argument in Python Tags: python;dictionary Question: I am trying to implement the code below to reinitialize a new object, but it isn't happening. ```class A(object): def __init__(self, arg): super(A, self).__init__() self.arg = arg def method(self): self.arg[1] += 1 print(self.arg) if __name__ == '__main__': a1 = A({1: 1, 2:2, 3:3}) a1.method() a2 = A(a1.arg) a1.method() a3 = A(a1.arg) a1.method() Expected Output: {1: 2, 2:2, 3:3} {1: 2, 2:2, 3:3} {1: 2, 2:2, 3:3} Code Output: {1: 2, 2: 2, 3: 3} {1: 3, 2: 2, 3: 3} {1: 4, 2: 2, 3: 3} ``` I found some links online where they explain Python's mutable defaults, but couldn't find a solution to this problem. Comment: So how can we achieve this behaviour? Comment: Yes, this is not the actual use case. Actually, I have a class Graph whose instance variable is a dictionary (an adjacency list). I am computing the min cuts in this dictionary, so I am modifying the instances (in this case removing the dictionary keys). After finding the minimum cut, I have to run this process again. So I am creating another graph object and making use of the original graph object to do the min cut operation Comment: After `a1.method()` you no longer have `{1: 1, 2:2, 3:3}` anywhere, so you just can't if you use only one `arg` Comment: You really want to reuse `a1.arg`? Your use case is not well defined or just seems to be a test, so if you want a good solution, provide a good use case ;) Here is another answer: Try changing the line ```self.arg = arg ``` to ```self.arg = arg.copy() ``` This way, each ```A``` object gets its own copy of the dictionary; they don't all have to share. Comment for this answer: Can this be achieved without copy()? Here is another answer: You're only using ```method()``` on your ```a1```, so it's normal that it keeps updating ```self.arg[1]``` on each call. 
Maybe you would like to do this: ```a1 = A({1: 1, 2: 2, 3: 3}) a1.method() a2 = A(a1.arg) a2.method() a3 = A(a1.arg) a3.method() output : {1: 2, 2: 2, 3: 3} {1: 3, 2: 2, 3: 3} {1: 4, 2: 2, 3: 3} ``` but it doesn't work, because the objects are "sharing" the same dictionary. See Mutable vs Immutable Objects. So you can use ```.copy``` But even this will fail, because when you're creating ```a2``` and ```a3```, you're passing ```{1: 2, 2:2, 3:3}``` as the argument, since the initial arg has changed due to the use of ```method()``` on ```a1``` ```a1 = A({1: 1, 2: 2, 3: 3}) a1.method() a2 = A(a1.arg.copy()) a2.method() a3 = A(a1.arg.copy()) a3.method() output : {1: 2, 2: 2, 3: 3} {1: 3, 2: 2, 3: 3} {1: 3, 2: 2, 3: 3} ```
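One caveat worth adding to the `.copy()` suggestion: `dict.copy()` is shallow. For this question's flat dict of ints it is enough, but if the values were themselves mutable (lists, nested dicts, as in the OP's adjacency-list use case), the copies would still share them, and `copy.deepcopy` would be needed. A small demonstration:

```python
import copy

original = {1: [1], 2: [2]}

shallow = original.copy()        # new dict, but same list objects inside
deep = copy.deepcopy(original)   # new dict AND new list objects

shallow[1].append(99)   # mutates the list shared with `original`
deep[2].append(99)      # mutates only the deep copy's private list

# original is now {1: [1, 99], 2: [2]}: the shallow copy's mutation
# leaked back into it, while the deep copy's did not.
```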
Title: Why mapping sftp disc returns fuse: invalid argument Tags: linux;debian;sshfs Question: Why does mapping an sftp disk return this error? Am I doing it wrong? ```$ sshfs user@host:/ /home/absolut/Pulpit/sftp fuse umask=0,defaults,noauto,user,allow_other 0 0 user@host's password: fuse: invalid argument `fuse' ``` Here is another answer: Usage of ```sshfs``` (and other FUSE tools) differs from what one usually writes to ```/etc/fstab```; the extra words in your command (```fuse umask=0,... 0 0```) are fstab fields, not valid sshfs arguments. See the man page of sshfs for details. A command like this should work for you: ```sshfs user@host:/ /home/absolut/Pulpit/sftp ```
Title: Struggling with mutable/immutable default argument Tags: python-3.x;parameter-passing;variable-assignment Question: Coming from C++ and trying to learn Python, I encountered the dreaded mutable default argument problem. ```def func(L=[]): L.append(1) print(L) ``` I understood that the list gets longer and longer as a result of L being declared only once, during function definition and not during execution, with L pointing to a mutable object. What I don't understand is how the solution to said problem works: ``` def func(L = None): if L is None: L=[] L.append(1) print(L) ``` What I don't understand is why, after the first time the function is called, L doesn't point to an object of type list with the value of [1]. Hear me out: ```1: L points to object of type None 2: As L points to object of type None the if-statement evaluates to True 3: L now points to a new object of type List with the value of [] 4: To the list to which L points a value of 1 is appended 5: The value of the object pointed to by L is printed and shows [1]``` In the second iteration, as L is not going to be declared again, L should still point to an object of type list with the value of [1]. What am I missing? Is it because, L being immutable, Python creates a new variable L pointing to an empty list which is then destroyed after the function returns, and the garbage collector recognizes the L of type None as the default parameter and, as a result, doesn't "kill" it? Thank you very much in advance for any type of help. Alessandro Comment: Thank you very much. This video and the resources provided in said video helped a lot. If I'm not mistaken, the =[] evaluates only once and to the "address" of an empty list, which is then going to change. Is this right? Comment: Perfect, thank you. A little additional question: why do we have to check if L is None? Shouldn't it always be equal to None? Comment: Oops. Such a naive mistake. Should have spotted it. 
Comment: in your second example: the second time you call `func()`, `L` gets reassigned to its default argument `None` (just as `L` would get assigned to `[1, 2, 3]` were you to call `func(L=[1, 2, 3])`). Python has "call by assignment". this may help: https://nedbatchelder.com/text/names1.html . Comment: yes, that is absolutely right! happy pythoning! Comment: `L` is only `None` when you call `func()` (or `func(L=None)`). if you call `func(L=[1, 2, 3])` then it is not `None`.
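The two behaviours discussed in this question can be put side by side. The `[]` default is created once, when the `def` statement runs; the `None` sentinel is immutable, so the fresh list is created inside each call instead:

```python
def func_shared(L=[]):
    # the [] here is created once, when the def statement runs,
    # and every call without an argument reuses that same list
    L.append(1)
    return L

def func_fresh(L=None):
    # None is immutable; the fresh [] is created inside the call,
    # so it is a new object every time and can be garbage-collected
    # after the call returns (if nothing else references it)
    if L is None:
        L = []
    L.append(1)
    return L

r1 = func_shared()
r2 = func_shared()   # same list object as r1, keeps growing
r3 = func_fresh()
r4 = func_fresh()    # independent lists
```

The `is` checks below make the difference concrete: both calls to `func_shared` return the very same object, while `func_fresh` returns a new one each time.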
Title: InterfaceError: Error binding parameter 0 - probably unsupported type Tags: python;django Question: I have the following: ```class Tag( models.Model ): name = models.CharField( max_length=64 ) class Tag2Node( models.Model ): ip = models.IPAddressField( db_index=True ) tag = models.ForeignKey( Tag ) last_update = models.DateTimeField( auto_now=True ) class Node( models.Model ): id = models.CharField( primary_key=True, max_length=64 ) ip = models.IPAddressField( db_index=True ) method = models.CharField( max_length=64 ) ``` (plus some other stuff) Basically, I can't put a ForeignKey on Node.ip as its rows are not unique (I may have many methods for the same ip). So in order to query I do ```found_ips = Tag2Node.objects.filter( tag__name=include ).values('ip').distinct() q = Q( ip__exact=found_ips[0] ) nodes = Node.objects.get( q ) ``` but I get the error: ```InterfaceError: Error binding parameter 0 - probably unsupported type. ``` Any ideas? Cheers, Comment: What is the value of the variable "include"? Here is the accepted answer: The error comes from you passing a dictionary to ```get```. I'm not sure why this error doesn't throw something else instead... ```found_ips = Tag2Node.objects.filter( tag__name=include ).values('ip').distinct() # values returns a dictionary q = Q( ip__exact=found_ips[0]['ip'] ) nodes = Node.objects.get( q ) ``` Comment for this answer: congrats :) (15 char minimum)
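The accepted answer's point can be shown without Django at all: `values('ip')` yields dicts, not bare strings, so indexing the queryset gives a dict. A plain-Python sketch (the sample addresses are hypothetical):

```python
# Django's .values('ip') yields rows shaped like {'ip': '10.0.0.1'};
# the list below stands in for that queryset for illustration
found_ips = [{'ip': '10.0.0.1'}, {'ip': '10.0.0.2'}]

wrong = found_ips[0]        # a dict -- binding this as a SQL parameter fails
right = found_ips[0]['ip']  # the plain string the ip__exact lookup expects
```

As an aside, Django's `values_list('ip', flat=True)` returns the plain values directly, which would avoid the extra `['ip']` indexing here.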
Title: Program to display the count of the number of times the succeeding element is greater than the preceding element Tags: c Question: I've been meaning to write a program to display the count of the number of times the succeeding element is greater than the preceding element, but I don't think I have got the logic right. Here's my code. Kindly give me suggestions to improve the code to get the desired output. ```int main(void) { int arr[10] = { '\0' }; int i = 0, n = 0, count = 0, j = 0; printf("\nEnter the number of elements in the array: "); scanf("%d", &n); printf("\nEnter the elements"); for (i = 0; i < n; i++) { scanf("%d", &arr[i]); } printf("\nThe array elements are: "); for (i = 0; i < n; i++) { printf("%d\t", arr[i]); } for (i = arr[0], j = arr[1]; i < arr[n], j = arr[n]; i++, j++) { if (i < j) count++; } printf("\nThe count is: %d", count); return 0; } ``` Comment: Did you try even basic debugging? Comment: Fourth: a pencil and paper may help here too. Comment: `for (i = arr[0], j = arr[1]; i < arr[n], j = arr[n]; i++, j++)` generates the warning "warning: left-hand operand of comma expression has no effect [-Wunused-value]". Review `i < arr[n], j = arr[n]` as a test condition. Also enable all your compiler warnings to save time. Comment: First things first. Format your code properly. Second: why are you initializing an `int` array with a quoted constant? It's not an error, but strange. Comment: Third. You have a total mess with the array elements and their indices in the loop. You have to rule it out. Before you can do it properly, I'll suggest you avoid complex constructs within the `for` statement. Comment: @rakib Just make the input data all zeros - it will make it even simpler... Comment: Just `sort` the data, then it's easy. Here is another answer: This should do the job for you. The main ingredient is the loop ``` for (i = 0; i < n-1; i++) { if (arr[i] < arr[i+1]) count++; } ``` Let's say you have 5 elements. 
The ```i``` will be evaluated from 0 to 3. In that case, you are comparing ```arr[i]``` to ```arr[i+1]```. It means the loop will be comparing ```arr[0]``` to ```arr[1]```, ```arr[1]``` to ```arr[2]```, ```arr[2]``` to ```arr[3]``` and finally ```arr[3]``` to ```arr[4]```, which is what you want to achieve. ```#include <stdio.h> #define MAX_ELEMENTS 10 int main(void) { int arr[MAX_ELEMENTS] = { '\0' }; int i = 0, n = 0, count = 0; printf("\nEnter the number of elements in the array: "); scanf("%d", &n); if (n > MAX_ELEMENTS) { printf("Only %d elements supported\n", MAX_ELEMENTS); return -1; } printf("\nEnter the elements"); for (i = 0; i < n; i++) { scanf("%d", &arr[i]); } printf("\nThe array elements are: "); for (i = 0; i < n; i++) { printf("%d\t", arr[i]); } for (i = 0; i < n-1; i++) { if (arr[i] < arr[i+1]) count++; } printf("\nThe count is: %d", count); return 0; } ``` Comment for this answer: Thank you for looking into it. Here is another answer: Your last for loop is way off. It is usually easier to increment the index into the array. For example, let's say you have this array: ```5 4 3 2 1 ``` Your code does this: your for loop assigns ```5``` to ```i``` and ```4``` to ```j```. Then you compare ```i``` and ```j``` (this is right for now). Next you increment ```i``` and ```j```, so now they are ```6``` and ```5``` respectively. This is not what you want. Also, the condition in your loop is wrong. It will break out of the loop once the value you're looking at (e.g. the first element) is larger than the last element. In my example array it would break right away (```i=5``` is not less than ```arr[n]=1```). Instead you should increment indexes into the array and break out of the loop when you reach the last index. ```for(i = 0, j = 1; j < n; i++, j++) { if(arr[i]<arr[j]) count++; } ``` You also want to make sure ```n``` is less than or equal to 10, otherwise you'll be writing outside the array. 
Add either ```if(n > 10) { n = 10; } ``` or ```if(n > 10) { printf("Max number of elements is 10.\n"); return -1; } ``` after you get ```n```. Comment for this answer: @KlasLindbäck Oops. I meant to put `j`. I fixed it. Thanks. Comment for this answer: Thank you for providing such a detailed solution. Here is another answer: The problem is in the last for loop. You are doing something really strange for iterating. To simplify: ```for(i=0;i<n;i++){ if(arr[i]<arr[i+1]) count++; } ``` Comment for this answer: Loop condition needs to be `i+1 < n` since the code accesses `arr[i+1]`.
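The pairwise-comparison logic the answers converge on can be sketched compactly (in Python here, for brevity; the C loop above is the direct equivalent):

```python
def count_increases(arr):
    # compare each element with its successor; the index runs from
    # 0 to len(arr) - 2, so arr[i + 1] never goes out of range --
    # the same bound as the C loop's i < n - 1 condition
    count = 0
    for i in range(len(arr) - 1):
        if arr[i] < arr[i + 1]:
            count += 1
    return count
```

For the answer's example array `5 4 3 2 1`, no successor is greater, so the count is 0; for `1 3 2 4` the pairs (1,3) and (2,4) count, giving 2.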
Title: Custom Intent for each app flavor? Tags: android;android-intent Question: Say I have apps A and B. App B has multiple build flavors, each one slightly different, and I want to launch a specific flavor from app A. I have considered using one custom shared intent, but I don't want the OS to prompt the user about which version to use to handle the intent if they have multiple versions of B installed. Is it possible to programmatically define unique custom intents for each flavor of an application? Comment: It is. Just define these intents in a manifest file for each flavor. Here is the accepted answer: In order to use one shared AndroidManifest between multiple flavors and have unique intents for each, I added a manifest placeholder for each flavor in build.gradle ```default { manifestPlaceholders = [uriPrefix: "myprefix"] } someFlavor{ manifestPlaceholders = [uriPrefix: "otherprefix"] } ``` In AndroidManifest.xml, use a manifest placeholder like so in the intent filter: ``` <intent-filter> <action android:name="com.custom.app.${uriPrefix}.SOME_ACTION" /> <action android:name="android.intent.action.VIEW" /> <category android:name="android.intent.category.BROWSABLE" /> <category android:name="android.intent.category.DEFAULT" /> <data android:scheme="${uriPrefix}" /> </intent-filter> ```
Title: How to embed data uri in css files automatically in a directory with many subdirectories? Tags: css;linux;bash;shell;data-uri Question: I am using cssembed to encode all image references in css files to base64 and replace the original css file with the changes. However, what I want to do is automate the process for all the css files in my folder with many subfolders/subdirectories. I tried the following: ```java -jar cssembed-0.4.5.jar *.css > *.css ``` But it produces the following output: ```bash: *.css: ambiguous redirect ``` I also tried ```java -jar cssembed-0.4.5.jar *.css ``` But this only outputs the result in the terminal; it does not replace the file with the encoded bits. How can I solve this? Any suggestions? NOTE: I am trying to do this on an Ubuntu terminal. Here is another answer: You can use ```find``` to locate all files in all subdirectories too: ```find -name "*.css" -exec java -jar cssembed-0.4.5.jar '{}' > tmp \; -exec mv tmp '{}' \; ``` Here ```tmp``` is a temporary file that is written to, which is necessary because when you use ```>``` the file you are writing to is truncated immediately. The second ```-exec``` is only run if the first one returns successfully, overwriting the original file with the contents of ```tmp```. If the code above isn't working for you, perhaps you could try this: ```find -name "*.css" -exec sh -c 'java -jar cssembed-0.4.5.jar "$0" > tmp && mv tmp "$0"' '{}' \; ``` This invokes a separate shell for each file that is found. ```$0``` is the name of the file that has been found.
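The same walk-and-rewrite-in-place pattern can be sketched in Python for readers who prefer a script to a `find` one-liner. The `transform` callable stands in for the cssembed invocation (which would be a subprocess call in practice); writing to a temporary file first avoids the truncation problem the answer describes:

```python
import os

def rewrite_css_files(root, transform):
    # walk every subdirectory under root, apply transform to each
    # .css file, and write the result back through a temporary file
    # so the original is never truncated before the output is ready
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".css"):
                continue
            path = os.path.join(dirpath, name)
            with open(path) as f:
                result = transform(f.read())
            tmp = path + ".tmp"
            with open(tmp, "w") as f:
                f.write(result)
            os.replace(tmp, path)  # atomic rename on POSIX
```

A per-file temporary name (rather than the shared `tmp` of the `find` version) also makes this safe to parallelize later.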
Title: Class on a div that contains several elements throws an error Tags: javascript;array;div;class Question: I have a div that contains other divs and, inside, an image that is built from an array. The image currently has a class that adds the product to a panel when it is clicked; that works fine, but I want that behaviour to be on the main div instead of the image, so that clicking anywhere in the div runs the add-to-panel class. When I move the class to the outer div, the browser shows an error ```Uncaught SyntaxError: Unexpected token u in JSON at position 0 at JSON.parse (<anonymous>) at HTMLDivElement.<anonymous> (pdvi:2885) at HTMLDivElement.dispatch (jquery.min.js:3) at HTMLDivElement.r.handle (jquery.min.js:3) ``` This is the image; currently the addprod class runs when the area marked in blue is clicked. I want it to run when any part of the box is clicked, including the text and the numbers below. This is the code that works fine for me; halfway down is the addprod class that I want to move to the top ```<div id="divprod{{ $product->id }}" class="gallery_product filter {{ $product->idcategory }} category{{$category->id}}" style="border: 2px solid #D2CFCF; border-radius: 5px; <?= !empty($product->specialprice) ? 
"background-color:#e7e26a;" : "background-color:#ffffff;" ?> display: inline-block; max-width:2.8vw; min-width:115px; min-height:160px; padding-top: 15px; padding-left: 7px; padding-right: 5px; margin: 1.03vh; margin-left: 0.6vh; margin-right: 1vh;"> <img data-full="{{$product}}" data-category="{{$category->name}}" id="prod{{ $product->id }}" @if(strpos($product->image,'https://')!== false) src="{{$product->image }}" @else src="{{'/support/pictures/products/'.$product->image }}" @endif class="img-responsive addprod" data-id="{{ $product->id }}" data-price="{{ $product->saleprice }}" data-tax="{{$product->ValorImpuesto->value}}" data-name="{{ strtoupper($product->name) }}" data-code="{{ $product->barcode }}"> <div style="height: 30px; position: relative;"> <div style="font-weight:bold; font-size: smaller; color:#6A6A6A; text-align: justify; height: 80%; line-height:103%" class="perfectScrollbarContainer"> {{ ucfirst(strtolower($product->name)) }}</div> </div> <div style="font-weight:bold; font-size: large; text-align: right;<?= !empty($product->specialprice) ? "color:#e76a6a" : "color:#585858" ?> "> $<?= empty($product->specialprice) ? $product->saleprice : $product->specialprice ?> </div> </div> ``` This is the code I changed, where I put the addprod class on the top div ```<div class="addprod" id="divprod{{ $product->id }}" class="gallery_product filter {{ $product->idcategory }} category{{$category->id}}" style="border: 2px solid #D2CFCF; border-radius: 5px; <?= !empty($product->specialprice) ? 
"background-color:#e7e26a;" : "background-color:#ffffff;" ?> display: inline-block; max-width:2.8vw; min-width:115px; min-height:160px; padding-top: 15px; padding-left: 7px; padding-right: 5px; margin: 1.03vh; margin-left: 0.6vh; margin-right: 1vh;"> <img data-full="{{$product}}" data-category="{{$category->name}}" id="prod{{ $product->id }}" @if(strpos($product->image,'https://')!== false) src="{{$product->image }}" @else src="{{'/support/pictures/products/'.$product->image }}" @endif class="img-responsive" data-id="{{ $product->id }}" data-price="{{ $product->saleprice }}" data-tax="{{$product->ValorImpuesto->value}}" data-name="{{ strtoupper($product->name) }}" data-code="{{ $product->barcode }}"> <div style="height: 30px; position: relative;"> <div style="font-weight:bold; font-size: smaller; color:#6A6A6A; text-align: justify; height: 80%; line-height:103%" class="perfectScrollbarContainer"> {{ ucfirst(strtolower($product->name)) }}</div> </div> <div style="font-weight:bold; font-size: large; text-align: right;<?= !empty($product->specialprice) ? "color:#e76a6a" : "color:#585858" ?> "> $<?= empty($product->specialprice) ? $product->saleprice : $product->specialprice ?> </div> </div> ``` This is the JavaScript code for the addprod class. 
``` $(".addprod").click(function () { id_venta = $(this).attr("data-id"); data = JSON.parse($(this).attr("data-full")); if (data.categoria.typemodal == modalPhoneNumber) { if (canttotal == 0 || flag) { contador = contador + 1; $('#exampleModal').modal('show'); // when the modal closes $("#close_modal").click(function () { $("#service_number").val(""); $("#exampleModal").find("#confirm_number").removeClass('is-invalid') $("#exampleModal").find("#alert-div").attr('hidden', true) $('.modal-service').find('.span-se').text(''); }); // when it is submitted $("#send_modal").click(id_venta, function (e) { // $('.modal-service').find('.span-se').text(''); service = false; sendService(id_venta); }); } else { $("#modalServices").modal('show') $('.btn-services').on('click', function () { // flag = true; cancelarfactura(); $("#modalServices").modal('hide') setTimeout(function () { $('#exampleModal').modal('show'); $("#send_modal").click(function () { service = false; sendService(id_venta) }); }, 500); flag = false; }) } } else { if (service) { agregaritem($(this).attr("data-id")); } else { $("#CancelarVenta").modal('show'); } } }); ``` Comment: And your JavaScript code? From what I can see, you are missing the data attributes on the div tag. Comment: Hello, thanks for replying; I updated the question with the JavaScript code. Here is the accepted answer: Your problem is that you forgot to put the data attributes on the div. Remove them from the img and place them on its container ```<div class="addprod" id="divprod{{ $product->id }}" data-full="{{$product}}" data-id="{{ $product->id }}" > ``` You can use this jQuery function as a substitute: ```$(this).attr("data-id") // or $(this).data("id") ``` Comment for this answer: Perfect, solved, thanks Comment for this answer: I'd appreciate it if you mark the answer as correct, to mark your question as resolved
Title: Better way to store updatable scientific data? Tags: database Question: I am using a file consisting of published scientific data. I'm using this file with a program that reads in the first 5 space delimited data fields, and everything after that is considered a comment by the program. 2 example lines (of thousands): ```FeII 1608.4511 0.521 55.36 -1300 M03 Journal of Physics FeII 1611.23045 0.0321 55.36 1100 01J AJ ``` The program reads it as: ```FeII 1608.4511 0.521 55.36 -1300 FeII 1611.23045 0.0321 55.36 1100 ``` These numbers are each measurements and most (don't get me started) have associated errors that are not listed in this file. I would like to store this information in a useful and updatable way. That is, say the first entry FeII 1608.4511 has an error of plus/minus 0.002. Consider when a new measurement is made and changes it to: FeII 1608.45034 plus/minus 0.0005. I would like to update the value, the error, and record some information about the publication that it came from. The program that uses this file is legacy code and is both crucial and inflexible: and it needs the file to look like the above output when it's read in. I would really like for there to be a way to update the input file to include things like errors on the values and publication hyperlinks in comments. I would also like a kind of version control ability to return the state of this large file today; or in 5 months after 20 more lines are updated with new values. Any suggestions on how best to accomplish this? Should I store everything in some kind of database? Comment: @Catcall, you are striking at the heart of my concerns... I could do each change on the correct line by hand on the file easily from context clues in the publication. However, how do I choose to label them within the database? And there are entries like this: C I** 1277.5501 C I** 1277.7233 -- so each change is probably uniquely identified by the letters plus the next 5 digits? 
But what if I add new data that needs to go to 6 digits? What if the 6th digit needs to be updated on another line? Comment: How do you know this new value, FeII 1608.45034 ± 0.0005, is supposed to update FeII 1608.4511 ± 0.002 rather than the row FeII 1611.23045? Here is the accepted answer: Databases are deeply tied to identity. If a database can't identify a row by the data that's in it, a database isn't going to help you. If I were you, I'd start by storing the base file in a version control system, not a database. At 20 changes per 5 months, I'd probably make those changes manually and commit each batch of changes. (I don't know what might constitute a batch for you. Could be a single change every time.) Since the format of the existing file is both crucial and brittle, I'm not sure whether modifying it is a good idea. I think I'd feel better about storing error ranges and publication hyperlinks in a separate file, and using a script to put the pieces together for applications that can use error ranges and hyperlinks. Here is another answer: For updating the information in the file, introducing errors and links, you don't need any database; just open the file, iterate through the lines and update each one. If you want to be able to restore a line's state, you definitely need some kind of database. You can create a database in SQL Server or Firebird, for example, and store in it a row for each line's historical state (with date of creation, of course); your file itself would be the repository for current values, and you would be able to restore the file with a date and some simple fetching of the database information. If you can't use a database like Firebird or SQL Server, you can store the historical data in a simple text file; it's up to you. Just remember that you will necessarily need, as @CatCall commented, a way to identify each line in order to create a relation between the line in the file and the historical data stored in your repository. 
Here is another answer: A database sounds sensible, SQL Server Express is free and widely used. You can read in the text file including all comments and output the edited data in the same format. You can use a number of front ends including Access, for rapid development, or something you create yourself in VB.Net, or even Excel, at a pinch. You will need to consider the structure of the table(s) but it should not be too difficult, and you can get help here.
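The accepted answer's suggestion (keep the brittle file untouched, hold errors and links in a side file, and merge with a script) can be sketched like this. The field layout is assumed from the example lines in the question, keying each row by its first two fields (species label plus wavelength, the identity problem the comments raise); the error value and link in the sample data are hypothetical:

```python
def merge_annotations(base_lines, annotations):
    # key each base line by its first two fields; annotated lines get
    # the extra data appended after the 5th field, in the region the
    # legacy reader already treats as a comment
    merged = []
    for line in base_lines:
        fields = line.split()
        key = (fields[0], fields[1])
        extra = annotations.get(key)
        merged.append(line if extra is None else line + "  " + extra)
    return merged

base = ["FeII 1608.4511 0.521 55.36 -1300 M03 Journal of Physics"]
notes = {("FeII", "1608.4511"): "err=0.002 link=example-publication"}
```

Because the annotations are appended after the fifth field, the merged output still parses identically in the legacy program, while annotation-aware tools can read the extra tokens.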
Title: Running GitHub Actions pipeline to create resources in Azure with Ansible: Could not retrieve credential from local cache for service principal Tags: azure;github;azure-devops;ansible;devops Question: I'm trying to develop a pipeline on GitHub Actions with Ansible. After some compatibility issues between the playbook and the environment, I was finally able to connect my Ubuntu 20.04 local machine with Azure. My goal is to use a GitHub Actions pipeline to deploy a VM on Azure. I need to connect to Azure with a service principal that I've already created and added to the GitHub Actions secrets as an AZURE_CREDENTIAL JSON file. ```on: [push] name: AzureLoginSample jobs: build-and-deploy: runs-on: ubuntu-latest steps: - name: Checkout uses: actions/checkout@master with: path: main - name: Azure Login uses: azure/login@v1 with: enable-AzPSSession: true creds: ${{ secrets.AZURE_CREDENTIALS }} - name: Azure CLI script uses: azure/CLI@v1 with: azcliversion: latest inlineScript: | set -ex ls ~/.azure cat ~/.azure/versionCheck.json az --version az account show az group list - name: Install Ansible run: | sudo apt update sudo apt install python3-pip sudo apt install python3.8-venv python3 -m venv ansible-playbook && . ansible-playbook/bin/activate && pip3 install --upgrade pip && pip3 install wheel pip3 install ansible==2.10.0 ansible-galaxy collection install azure.azcollection pip3 install -r ~/.ansible/collections/ansible_collections/azure/azcollection/requirements-azure.txt - name: Create Azure Resource Group with Ansible Playbook run: | ./ansible-playbook/bin/ansible-playbook main/ansible/playbook/create_rg_azcollection.yml -vvv ``` azure/login@v1 and the Azure CLI script step are working, but at the end my Install Ansible job fails with an authentication error. ``` Could not retrieve credential from local cache for service principal ***. 
Please run 'az login' for this service principal.\n" ``` It's clear to me that my Ansible steps use a different Python interpreter, so the previous az login does not carry over because I'm running in the ansible-playbook virtual environment. Do you have any idea how to solve this issue? Comment: Someone has already had this issue; it seems the latest version of the Azure CLI is causing the error: https://github.com/Azure/azure-cli/issues/20153 but I can't understand how to solve it. Do I need to install the Azure CLI during the Ansible steps with pip install and specify the interpreter? Here is another answer: I finally realized how to solve it: just copy the credentials into the pipeline user's (named runner) $HOME/.azure folder. The command I use is: cp <path/from/repos/credential> ~/.azure Inside the credentials file you need to store the output from the Azure service principal creation 'az ad sp create-for-rbac --name GitActionsPipeline --role Contributor --sdk-auth' ``` { "clientId": "************************************", "clientSecret": "*********************************", "subscriptionId": "*********************************", "tenantId": "*********************************", "activeDirectoryEndpointUrl": "https://login.microsoftonline.com", "resourceManagerEndpointUrl": "https://management.azure.com/", "activeDirectoryGraphResourceId": "https://graph.windows.net/", "sqlManagementEndpointUrl": "https://management.core.windows.net:8443/", "galleryEndpointUrl": "https://gallery.azure.com/", "managementEndpointUrl": "https://management.core.windows.net/" } ```
Title: How to invoke an ActiveX Control using javascript in an ASP.NET page Tags: asp.net;com;activex Question: I'm trying to create an ActiveX component that will start an application on a client machine. I've created an ActiveX control, which is quite straightforward in .NET. All it does is call the Process class and call Start. Now I want to be able to call the starting method on this class from JavaScript, passing in a few parameters from the page (which are then passed down as command line arguments). I followed the guide here: http://www.c-sharpcorner.com/UploadFile/mgold/HyperlinkExec03012007191054PM/HyperlinkExec.aspx This guide talks about using a hyperlink to start the JavaScript, but I'm using a button. Here is my HTML (I'm trying this in plain HTML instead of ASP.NET to keep things simple for now, but I want to move to ASP.NET eventually): ```<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <body> <button type="button" onclick="javascript:launch()">Click me!</button> <script type="text/javascript"> function launch() { alert('test') var myLauncher = new ActiveXObject('CardWriterApplicationLauncher'); myLauncher.LaunchCardWriter('test', 'test', 'test'); } </script> </body> </html> ``` Now when I click the button I get the error "Automation server can't create object". I know that my COM dll is registered properly in the GAC and with regasm, so what could I be doing wrong? Also, any alternative solutions for launching the application on the user's desktop from a web page would be greatly appreciated. Browser security settings can be modified as required, as the client PCs are under our control and are on a private network with no internet access. 
Thanks Here is another answer: If you have public properties or methods in the ActiveX control, can you not just call those directly, referencing the ```<object>```'s ID using JavaScript? So the ActiveX control is already loaded on the page using the ```<object>``` tag, and you're just calling its method. Comment for this answer: How would I go about using the object tag? I've seen some information about it, but never managed to get it to work. Comment for this answer: Search for how to embed an ActiveX control in an HTML page; see: http://www.fpoint.com/support/whitep/ActiveX/ax1999.aspx This will display the control on the page (in your case, it might not have an interface, but you can add one to test it). Once you have the control on the page you can then start interacting with it using JavaScript. Here is another answer: Throw away the browser for a moment and go to the client machine; make sure you can create the ActiveX object correctly on the machine, using a simple VB script or a dummy app. Create a file called something.vbs and in it put the following code ```Set MyObj = CreateObject("CardWriterApplicationLauncher") ``` Once you have verified that part is working, go to your browser. It's likely your issue has nothing to do with your browser. Also, I just noticed CardWriterApplicationLauncher isn't a valid object identifier; they usually require a `.` in them, e.g. ```word.application```
Title: What is fanout in R-Tree? Tags: database;tree;r-tree Question: I have a doubt about the R-tree data structure. What is fan-out in an R-tree? Is it the maximum number of entries? How can we determine the minimum and maximum number of entries in an R-tree? Let's say I have 10000 points and my page size is 8 KB. Thanks Comment: Are you sure you mean R-tree and not B-tree (or B+-tree)? Comment: Note that fanout determined by page size only makes sense for external trees, that is, ones that are stored in files and too large to be fully loaded in memory. In memory, trees with low fanout are more efficient, because they need fewer comparisons per search in total. Comment: Yup, I am sure it's an R-tree. Here is the accepted answer: Fan-out, in any tree, is the number of pointers to child nodes in a node. Different trees have different fan-out. A binary tree has fanout 2. A B-tree has a fan-out B, with all nodes except leaves having between B/2 and B children. External (on-disk) implementations often relax the minimal-number-of-children restriction to save some updates. In databases, B-trees or their variant called B+-trees are often used so that each node has the size of 1 page, and the fan-out is determined by the number of sort keys and pointers that fit in that space. An R-tree is a search tree whose indices are multi-dimensional intervals. These may possibly overlap. It may have any fan-out. A usual choice is 2 to the power of the number of dimensions (so 4 for 2-dimensional, 8 for 3-dimensional etc.). But it may have a higher fanout too, and organizing it similarly to a B-tree is certainly possible. ``` How can we determine the minimum and maximum number of entries in R-Tree? Let say if I have 10000 points and my page size is 8KiB. ``` The size of the tree node does not have to match the page size. If it does (usually used for external, i.e. on-disk, implementations), you still need to know how large the sort key is and how large the pointer is. An R-tree needs 2 coordinate values, minimum and maximum, per dimension. 
So a 2-dimensional R-tree with double-precision coordinates (the common case appearing in mapping applications) will have four 64-bit values describing the rectangle plus a child pointer, for which an external implementation probably wants to use 64 bits as well. That is 40 B per child, and you can squeeze 204 of these into an 8 KiB page. The number of points does not matter. The dimension and precision of the coordinate system do. In memory, trees with low fanout are more efficient because, though they are deeper, they need fewer comparisons per search. However, on disk (in databases) the slow operation is reading, and since that can only be done in blocks, it is faster to reduce the number of nodes by having each node fill a whole block and have a correspondingly higher fanout. Comment for this answer: I disagree with the last paragraph. Even in memory it may be beneficial to have a larger fanout, because of L1/L2/L3 caching etc. - for the same reason that insertion sort is faster than heapsort on small arrays. Here is another answer: "Fanout" refers to the number of pointers per node that the R-tree has. Comment for this answer: @user3030823: No. It means the number of entries _per parent node_. Comment for this answer: does it mean the number of entries?
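The entry-size arithmetic in the accepted answer can be written as a small sketch. Each child entry holds a bounding box (a min and a max coordinate per dimension) plus one child pointer, so with 2 dimensions, 8-byte doubles and 8-byte pointers, an entry is 2×2×8 + 8 = 40 bytes:

```python
def rtree_fanout(dims, coord_bytes, ptr_bytes, page_bytes):
    # bytes per child entry: a bounding box (min and max value per
    # dimension) plus one child pointer
    entry_bytes = 2 * dims * coord_bytes + ptr_bytes
    # whole entries that fit in one page
    return page_bytes // entry_bytes

# 2-D, double-precision coords, 64-bit pointers, 8 KiB page
fanout = rtree_fanout(dims=2, coord_bytes=8, ptr_bytes=8, page_bytes=8192)
```

As the answer notes, the number of points in the dataset never enters this calculation; only the dimension, coordinate precision, pointer size and page size do.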
Title: Is there a way to fill the JSON object before resolving the promise Tags: javascript;json;for-loop;promise Question: The code first gets all the URLs from the database, and in parseText I'm trying to parse all the URLs and put them in a JSON object for later reference. I've tried running the for loop with async/await, but that didn't give me the expected result. ```let parseText = function(dbresult) { return new Promise((resolve, reject) => { let textObj = {} for(i=0; i < dbresult; i++) { Mercury.parse(dbresult[i].url, {headers: {Cookie: 'name=Bs', 'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.96 Mobile Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)', },}) .then((result) => { textObj[dbresult[i].id] = result.excerpt; }); } resolve(textObj); }) } fetchLinks.then(function(result) { return parseText(result); }).then(function(result) { console.log(result); //gives back {} //do something with json object in next promise return writeTextToDb(result); //not written yet }) ``` The desired output should look something like {1234 : {text: some parsed text}}, but all I keep getting is an empty object. Comment: What data type is `dbresult`? I'm pretty sure you messed up your terminating condition for your `for` loop. It should be `i < dbresult.length` Here is another answer: There are multiple things to tackle in your code, so let's go step by step: ```dbresult``` seems to be an array, judging by your use of ```dbresult[i]```, but you also have an ```i < dbresult``` condition that implies it's an integer. I'm going to assume you meant ```i < dbresult.length```. You are using ```new Promise(...)``` in a situation where you are already dealing with promises. You should never use this pattern unless you don't have an alternative, and always try to chain ```.then``` calls and return their results (which are always also promises).
You seem to have failed to understand that the callback passed to ```.then``` will always run asynchronously, after the rest of the code runs. This is the reason your object comes out empty: the ```resolve``` function is called before any of the requests have time to complete. Now, loops and promises don't mix very well, but there are ways of dealing with them. What you need to understand is that, with loops, what you want is to chain promises. There are mostly two ways of chaining promises in this way: the imperative way and the functional way. I'm going to focus on the ```parseText``` function and omit irrelevant details. This is what you would do for a fully imperative solution: ```function parseText (dbresult) { // although the contents of the object change, the object doesn't, // so we can just use const here const textObj = {}; // initialize this variable to a dummy promise let promise = Promise.resolve(); // dbresult is an array, it's clearer to iterate this way for (const result of dbresult) { // after the current promise finishes, chain a .then and replace // it with the returned promise. That will make every new iteration // append a then after the last one. promise = promise .then(() => Mercury.parse(result.url, {...})) .then((response) => (textObj[result.id] = response.excerpt)); } // in the end, the promise stored in the promise variable will resolve // after all of that has already happened. We just need to return the // object we want to return and that's it. return promise.then(() => textObj); } ``` I hope the comments help. Again, working with promises in a loop sucks. There are two easier ways of doing this, though! Both use the functional methods of arrays. The first one is the easiest, and it's the one I'd recommend unless the array is going to be very big.
It makes use of ```.map``` and ```Promise.all```, two powerful allies: ```function parseText (dbresult) { const textObj = {}; // create an array with all the promises const promises = dbresult.map(result => Mercury.parse(result.url, {...}) .then((response) => (textObj[result.id] = response.excerpt)) ); // await for all of them, then return our desired object return Promise.all(promises).then(() => textObj); } ``` ``` Note: ```bluebird``` users can make this even better using ```Promise.map``` and passing a ```concurrency``` value. That is actually, in my opinion, the best solution, but I want to stick with vanilla here. ``` The main problem with this solution, though, is that all requests will be fired at the same time. This might mean that, for very large arrays, some requests just wait in a queue or you exhaust the socket limit of the process, depending on the implementation. In any case, it's not ideal, but it will work for most situations. The other functional solution consists of replicating the imperative one using ```.reduce``` instead of a ```for ... of``` loop, and it's implemented at the end of the answer, more as a curiosity than anything else, as I think it's a bit too "clever code". In my opinion, the best way of solving this is to just use ```async/await``` and forget promises altogether. You can just write your loop normally in this case and simply put await where appropriate: ```async function parseText (dbresult) { const textObj = {}; for (const result of dbresult) { // here just await the request, then do whatever with it const response = await Mercury.parse(result.url, {...}); textObj[result.id] = response.excerpt; } // thanks to await, here we already have the result we want return textObj; } ``` That's it, it's that simple.
And now for what I consider to be the "clever" solution, using only ```.reduce```: ```function parseText (dbresult) { const textObj = {}; return dbresult.reduce( (prom, result) => prom .then(() => Mercury.parse(result.url, {...})) .then((response) => (textObj[result.id] = response.excerpt)), Promise.resolve() ).then(() => textObj); } ``` If it's not immediately clear what it does, that's normal. This does exactly the same thing as the original imperative ```then```-chaining one does, just using ```.reduce``` instead of a manual ```for``` loop. Note that I personally wouldn't necessarily do this, as I think it's a bit too "clever" and takes some time to parse mentally. If implementing something like this (```then```-chaining using ```.reduce``` is incredibly useful, even if a bit confusing) please add a comment explaining why you did this, what it is, or something that can help other developers understand it at first glance. Comment for this answer: *"... I'm not going to write it because it's ridiculously "smart" and will trump even the more experienced developers"* Now you got me curious, by all means please post it. +1 BTW well written. Comment for this answer: Thank you so much. The .length I didn't see right away, but that wasn't the problem. Because of this explanation I managed to continue and even finish the code for now. Many thanks. Comment for this answer: @zer00ne there you go. Maybe it's less clever than I think, but I read something like it a while ago while debugging something and it took me a while to understand what the hell was happening.
Title: swing: appropriate layout manager for simple situation? Tags: swing;layout Question: I have a ```JPanel``` that I want to use to contain 3 vertical components: a ```JLabel``` a ```JTextField``` a ```JScrollPane``` I want all 3 components to fill the ```JPanel```'s width. I want the ```JLabel``` and ```JTextField``` to use their normal heights and the ```JScrollPane``` to use the rest. ```BoxLayout``` almost works, except it seems like the ```JTextField``` and ```JScrollPane``` share the "extra" space when the ```JPanel``` is made large. What can I do? Here is the accepted answer: Create a BorderLayout. Put the JScrollPane in its center. Create a JPanel with a BoxLayout. Put the JLabel and JTextField in that, vertically. Put that JPanel into the NORTH side of the BorderLayout. Here is another answer: You could also use DesignGridLayout as follows: ```DesignGridLayout layout = new DesignGridLayout(thePanel); layout.row().center().fill().add(theLabel); layout.row().center().fill().add(theTextField); layout.row().center().fill().add(theScrollPane); ``` This should exactly behave as you describe. Each call to row() creates a new row in the panel. The calls to fill() make sure that each component uses the whole available width. A few advantages of using DesignGridLayout here are: only one LayoutManager for the whole panel automatic borders and inter-rows spacing Comment for this answer: thanks... when I'm willing to pull in a library, I use javabuilders + MiGlayout, this question was really about built-in layout managers, although I admit I didn't state that. Here is another answer: GridBagLayout is pretty handy. You can control anything you need and you can control only what you need. You're probably going to be interested in only the vertical parameters.
Title: CheckBox is being clipped in tablet Tags: android;checkbox;clipped Question: I have this layout: ```<LinearLayout android:layout_width="fill_parent" android:layout_height="wrap_content" android:layout_marginBottom="10dp" android:gravity="center_vertical" android:layout_marginLeft="10dp"> <CheckBox android:id="@+id/screen_login_checkbox" android:button="@drawable/login_screen_checkbox_image_selector" android:layout_width="wrap_content" android:layout_height="wrap_content"/> <TextView android:layout_width="wrap_content" android:layout_marginLeft="10dp" android:text="@string/screen_login_checkboxLabel" android:layout_alignBaseline="@id/screen_login_checkbox" android:textColor="@color/black" android:textSize="@dimen/screen_login_check_text" android:layout_height="wrap_content"/> </LinearLayout> ``` The problem is that this works on smartphones but is not working properly on tablets. On tablets the CheckBox is getting clipped on the right side and I don't know why. Does anyone know how to fix this? Here is the accepted answer: Try adding ```android:paddingRight``` to your ```CheckBox```. It should work pretty well for tablets and small screens alike if you add the padding using ```dp```, not ```px```, values.
Title: How to minimize the number of database calls needed to authenticate a user? Tags: node.js;express;session;socket.io;express-session Question: I need to authenticate users trying to connect to my server. My first architecture queried the database three times in order to identify any given user: first ```express-session``` checked for a valid session associated with the session ID given by the user then again ```passport.js``` queried the database to deserialize the user and check for special privileges ```socket.io``` needed to associate new sockets with a user ID (no additional data needed), thus checking again for a valid session My first intuition was to scrap ```passport.js``` and to write a custom ```express-session``` store to merge the first two steps into one single call to the database. But now I am left wondering if there is any way to reuse that same call to identify sockets as well, thus potentially offering several benefits (reduced database traffic, single authentication logic etc). One idea I had: switch entirely to JSON web tokens to store the user ID along with an expiry date client-side. Sockets can then be identified without even querying the database (by signing JWTs). User deserialization only takes one database call. Is it a secure and viable option? Potential drawbacks I am concerned about off the top of my head: no session management: I don't see any way to selectively log someone off (as opposed to simply ```DELETE```ing the row in the session table associated with a specific user ID) how to ensure that there is at most one socket associated with a connected user? ill-intentioned individuals simply need a copy of the encrypted web token to hijack someone's session it would still require some form of interaction with the database e.g. to check if a user is banned Is there any other efficient way to authenticate a user across multiple channels? 
Comment: As for ***ill-intentioned individuals simply need a copy of the encrypted web token to hijack someone's session***, that is generally the case with any browser session management (whether it uses a cookie, a custom header or a URL query parameter). You need to run under https to make it less likely someone can grab the session credential.
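The stateless-token idea raised in the question can be illustrated with a minimal HMAC-signed token. This is a sketch, not a real JWT library: the field names, secret, and TTL are assumptions. The point is that the server signs the user ID plus an expiry, and both the HTTP layer and the socket layer can then verify a token without a database lookup. Note that it shares the drawback the question already identifies: there is no server-side revocation.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"server-side-secret"  # hypothetical shared secret, kept server-side


def sign_token(user_id, ttl=3600):
    # Payload carries the user ID and an expiry timestamp.
    payload = base64.urlsafe_b64encode(
        json.dumps({"uid": user_id, "exp": int(time.time()) + ttl}).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig


def verify_token(token):
    # Returns the user ID if signature and expiry check out, else None.
    # No database access is needed here.
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    data = json.loads(base64.urlsafe_b64decode(payload))
    return data["uid"] if data["exp"] > time.time() else None


token = sign_token(42)
print(verify_token(token))        # → 42
print(verify_token(token + "x"))  # tampered token → None
```

Banning or force-logout would still require a server-side check (e.g. a small denylist), which is exactly the trade-off the question describes.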
Title: Node.js ORM mysql connect via SSH tunnel Tags: node.js;node-orm2 Question: I'm trying to set up a node.js application which uses node-orm2. However, our cloud-hosted DB can only be connected to via an SSH tunnel. Checking the ORM docs I cannot see any config option to connect to the DB via an SSH tunnel. Is there any way to set this up, or do I need to find some way to connect without SSH? Comment: You can always use SSH outside of your Node.js app to handle this. http://www.revsys.com/writings/quicktips/ssh-tunnel.html Here is the accepted answer: I updated the code example for tunnel-ssh 1.1.0, because it's actually the only working example on the internet (as far as I searched...). It was quite a hassle to get this new tunnel-ssh configured... ```var mysql = require('mysql'); var Tunnel = require('tunnel-ssh'); module.exports = function (server) { return new Object({ tunnelPort: 33333, // can really be any free port used for tunneling /** * DB server configuration. Please note that due to the tunneling the server host * is localhost and the server port is the tunneling port. It is because the tunneling * creates a local port on localhost */ dbServer: server || { host: '(566)234-0132', port: 33333, user: 'username', password: 'yourpwd', database: 'yourdb' }, /** * Default configuration for the SSH tunnel */ tunnelConfig: { remoteHost: '(566)234-0132', // mysql server host remotePort: 3306, // mysql server port localPort: 33333, // an available local port verbose: true, // dump information to stdout disabled: false, //set this to true to disable tunnel (useful to keep architecture for local connections) sshConfig: { //ssh2 configuration (https://github.com/mscdex/ssh2) host: 'your_tunneling_host', port: 22, username: 'user_on_tunneling', password: 'pwd' //privateKey: require('fs').readFileSync('<pathToKeyFile>'), //passphrase: 'verySecretString' // option see ssh2 config } }, /** * Initialise the mysql connection via the tunnel.
Once it is created call back the caller * * @param callback */ init: function (callback) { /* tunnel-ssh < 1.0.0 // // SSH tunnel creation // tunnel-ssh < 1.0.0 var me = this; me.tunnel = new Tunnel(this.tunnelConfig); me.tunnel.connect(function (error) { console.log('Tunnel connected', error); // // Connect to the db // me.connection = me.connect(callback); }); */ /* tunnel-ssh 1.1.0 */ // // SSH tunnel creation // var me = this; // Convert original Config to new style config: var config = this.tunnelConfig; var newStyleConfig = { username: config.sshConfig.username, port: config.sshConfig.port, host: config.sshConfig.host, // SSH2 Forwarding... dstPort: config.remotePort, dstHost: config.remoteHost, srcPort: config.localPort, srcHost: config.localHost, // Local server or something... localPort: config.localPort, localHost: config.localHost, privateKey: config.sshConfig.privateKey } me.tunnel = Tunnel(newStyleConfig, function (err) { console.log('Tunnel connected', err); if (err) { return callback(err); } me.connection = me.connect(callback); }); }, /** * Mysql connection error handling * * @param err */ errorHandler: function (err) { var me = this; // // Check for lost connection and try to reconnect // if (err.code === 'PROTOCOL_CONNECTION_LOST') { console.log('MySQL connection lost. Reconnecting.'); me.connection = me.connect(); } else if (err.code === 'ECONNREFUSED') { // // If connection refused then keep trying to reconnect every 3 seconds // console.log('MySQL connection refused. Trying soon again.
' + err); setTimeout(function () { me.connection = me.connect(); }, 3000); } }, /** * Connect to the mysql server with retry in every 3 seconds if connection fails by any reason * * @param callback * @returns {*} created mysql connection */ connect: function (callback) { var me = this; // // Create the mysql connection object // var connection = mysql.createConnection(me.dbServer); connection.on('error', me.errorHandler); // // Try connecting // connection.connect(function (err) { if (err) throw err; console.log('Mysql connected as id ' + connection.threadId); if (callback) callback(); }); return connection; } } ); }; ``` Comment for this answer: The examples in this thread let you connect to a database server via a remote server via ssh. It does this by setting up a ssh tunnel. Then you tell MySQL to connect to a port (the tunnel port) on your local machine with your database credentials for your remote database and you're connected. Here is another answer: Finally it was resolved by dropping orm2 and using node-mysql and tunnel-ssh modules as in the code below. ```var mysql = require('mysql'); var Tunnel = require('tunnel-ssh'); module.exports = function (server) { return new Object({ tunnelPort: 33333, // can really be any free port used for tunneling /** * DB server configuration. Please note that due to the tunneling the server host * is localhost and the server port is the tunneling port. 
It is because the tunneling * creates a local port on localhost */ dbServer: server || { host: '(566)234-0132', port: 33333, user: 'username', password: 'yourpwd', database: 'yourdb' }, /** * Default configuration for the SSH tunnel */ tunnelConfig: { remoteHost: '(566)234-0132', // mysql server host remotePort: 3306, // mysql server port localPort: 33333, // an available local port verbose: true, // dump information to stdout disabled: false, //set this to true to disable tunnel (useful to keep architecture for local connections) sshConfig: { //ssh2 configuration (https://github.com/mscdex/ssh2) host: 'your_tunneling_host', port: 22, username: 'user_on_tunneling', password: 'pwd' //privateKey: require('fs').readFileSync('<pathToKeyFile>'), //passphrase: 'verySecretString' // option see ssh2 config } }, /** * Initialise the mysql connection via the tunnel. Once it is created call back the caller * * @param callback */ init: function (callback) { // // SSH tunnel creation // var me = this; me.tunnel = new Tunnel(this.tunnelConfig); me.tunnel.connect(function (error) { console.log('Tunnel connected', error); // // Connect to the db // me.connection = me.connect(callback); }); }, /** * Mysql connection error handling * * @param err */ errorHandler: function (err) { var me = this; // // Check for lost connection and try to reconnect // if (err.code === 'PROTOCOL_CONNECTION_LOST') { console.log('MySQL connection lost. Reconnecting.'); me.connection = me.connect(); } else if (err.code === 'ECONNREFUSED') { // // If connection refused then keep trying to reconnect every 3 seconds // console.log('MySQL connection refused. Trying soon again.
' + err); setTimeout(function () { me.connection = me.connect(); }, 3000); } }, /** * Connect to the mysql server, retrying every 3 seconds if the connection fails for any reason * * @param callback * @returns {*} created mysql connection */ connect: function (callback) { var me = this; // // Create the mysql connection object // var connection = mysql.createConnection(me.dbServer); connection.on('error', me.errorHandler); // // Try connecting // connection.connect(function (err) { if (err) throw err; console.log('Mysql connected as id ' + connection.threadId); if (callback) callback(); }); return connection; } } ); }; ``` Comment for this answer: could you please give the correct answer to @Joshua Angnoe ? Here is another answer: Thanks for the existing answers in this thread. The following solution worked for me: ```function connect() { return new Promise(async resolve => { let tunnelPort = 33000 + Math.floor(Math.random() * 1000); Tunnel({ //First connect to this server over ssh host: '(566)234-0132', username: 'vagrant', privateKey: await fs.readFile('path/to/private_key'), //And forward the inner dstPort (on which mysql is running) to the host (where your app is running) with a random port dstPort: 3306, localPort: tunnelPort }, (err) => { if (err) throw err; console.log('Tunnel connected'); let connection = mysql.createConnection({ //Now that the tunnel is running, it is forwarding our above "dstPort" to localhost/tunnelPort and we connect to our mysql instance. host: '(566)234-0132', port: tunnelPort, user: 'root', password: 'password', database: 'dbName' }); connection.on('error', err => { throw err; }); connection.connect((err) => { if (err) throw err; console.log('Mysql connected as id ' + connection.threadId); resolve(connection); }); }); }) } ``` Comment for this answer: It works, but after a while of querying the database it disconnects from the SSH connection and throws a PROTOCOL_CONNECTION_LOST error.
Here is another answer: You can use the ```settings``` parameters to ```node-orm2``` in order to pass an ```options``` object to the underlying driver. For example, if you are using ```mysql```, you can pass an ```ssl``` option. See https://github.com/felixge/node-mysql#ssl-options in this case. Comment for this answer: Sorry, probably the question wasn't clear enough. I can provide the ssl auth details in the ssl config option. But as the question states I need to connect to the mysql server via an SSH tunnel (see http://serverfault.com/questions/517081/connecting-to-mysql-over-ssh-tunnel). Comment for this answer: There is no Heroku involved in the setup. So I'm not sure what exactly you mean. Basically I have SSH: server, port, user, pwd MySQL: server, port, user, pwd What I need is to configure node-orm2 to connect to the mysql server via an SSH tunnel created on the SSH server. Comment for this answer: I misunderstood, then. If that's the case then you probably need to set up the tunnel yourself (`ssh remote_host -L ...`), as mentioned in @Brad's comment, and point your connection string to the local host using the supplied tunneled port. Does that answer the question? Comment for this answer: Assuming that your SQL server is running on `db.com:1234`, you would need to have `ssh -L 5678:localhost:1234 db.com` running on your Heroku app host, and set your connection string to connect to `localhost:5678`. If that works for you, then we can discuss more advanced options such as `-M` etc. Let's see if I understand the problem first, though.
Title: How to install the 'savon' gem on a Windows machine? Tags: ruby;rubygems;savon Question: How do I install the 'savon' gem on a Windows machine? I've downloaded the gem and saved it into the ruby/lib/ruby/gems/1.8/gems folder, but it still shows me this error in the command prompt: ``` :gem_original_require no such file to load --savon ``` Comment: Run `gem install savon` at the command prompt. Here is another answer: There is no difference between the Linux and Windows way to install a gem. From the command line run ```gem install savon ``` and off you go.
Title: convert a line into array and print elements of array as variables in bash Tags: arrays;bash Question: I am new to Linux and I am trying to read a text file line by line. The lines are numbers. I want to add each line to an array and treat each number as a variable. My attempt is below: Example of the txt file: ```1976 1 0 0.00 0. 68. 37. 0. 105. 0.14 0.02 4.3 1.1 2.2 ``` What I need: Putting each number in a variable, for example ```a = 1976``` and ```b = 1``` etc... My code: ```IFS=$'\n' for next in `cat $filename` do line=$next echo ${line[0]} done ``` Result: ```1976 1 0 0.00 0. 68. 37. 0. 105. 0.14 0.02 4.3 1.1 2.2 ``` Comment: I am searching for a command using awk to do this Comment: I found this command which results in the first number in the array, 1976, but I do not know how to loop over the rest of the array: echo $array | sed 's/^.* \(".*"$\)/\1/' Comment: I tried this with changing the number in {print $0} to get the second and third field but it prints 1976 each time: while read -r line; do array=( $line ) echo $array | awk '{print $0}' done < read1.out Comment: I suggest using `awk` for this. Comment: Is it OK if it's an array? Here is the accepted answer: It's quite easy to store each value in an array. Here is an example: ```while read -r -a line do echo "${line[0]}" echo "${line[1]}" echo "${line[2]}" done < $filename ``` The -a option splits the input line into words (whitespace-separated by default) and stores the results in the ```line``` array. A snippet from the read man page: ``` -a Each name is an indexed array variable (see Arrays above). ``` You may not need the '-r' option; it basically makes read treat \ as nothing special in the input. Comment for this answer: thank you sir, the provided script works with removing the dollar sign from the file name Comment for this answer: Sir, sorry, I am still new.
The fields are extracted well by your script, but I do not know how to save them in variables; for instance a=echo "${line[0]}" does not work Comment for this answer: Omar's code in the question suggested to me that that's what he's actually trying to achieve. Comment for this answer: You can assign like so: `a=${line[0]}`, without echo. Still, I'm not sure what you're trying to achieve and how helpful that is for you. That way you assign variables in a loop. You can still access them after the loop is executed, but if you do something like `a=${line[0]}` then `a` would contain the value for the last line read. Here is another answer: ```# Add each line to Array readarray -t aa < $filename # Put each line into variables using Here String for l in "${aa[@]}"; do read a b c <<< $l; # Example using 3 variables, could be as many as on line # Do whatever has to be done with a, b, c, etc done ```
Title: How can I define From Email In this Apex Email Service Code Tags: apex;apex-email-service Question: What do I need in this code to set a default From email address? ``` Messaging.SingleEmailMessage EmailMessage = new Messaging.SingleEmailMessage(); EmailMessage.setTargetObjectId(opp.Applicant_Student_Record__c); EmailMessage.setTemplateId(et.Id); EmailMessage.setWhatId(opp.Id); EmailMessage.setSaveAsActivity(false); EmailMessage.setReplyTo('[email protected]'); EmailMessage.setSenderDisplayName('Development'); EmailMessage.setFileAttachments(Attachment); emails.add(EmailMessage); ``` Here is the accepted answer: To do this you must first set up a dedicated email address by navigating to the Setup -> Administration Setup -> Email Administration -> Organization-Wide Addresses menu. Set up the address (say, [email protected]), which needs to be verified for use. Add the code below: ```OrgWideEmailAddress[] owea = [select Id from OrgWideEmailAddress where Address = '[email protected]']; Messaging.SingleEmailMessage EmailMessage = new Messaging.SingleEmailMessage(); if ( owea.size() > 0 ) { EmailMessage.setOrgWideEmailAddressId(owea.get(0).Id); } EmailMessage.setTargetObjectId(opp.Applicant_Student_Record__c); EmailMessage.setTemplateId(et.Id); EmailMessage.setWhatId(opp.Id); EmailMessage.setSaveAsActivity(false); EmailMessage.setReplyTo('[email protected]'); EmailMessage.setSenderDisplayName('Development'); EmailMessage.setFileAttachments(Attachment); emails.add(EmailMessage); ``` Setting up the email address under "Email Administration" lets you set the desired address from which you want to send the mail (which you also query from "OrgWideEmailAddress" to get the particular one, in case you have multiple addresses set up).
Title: Unmanaged DLLs fail to load on ASP.NET server Tags: asp.net;dll;iis-6;unmanaged Question: This question relates to an ASP.NET website, originally developed in VS 2005 and now in VS 2008. This website uses two unmanaged external DLLs which are not .NET; I do not have the source code to compile them and have to use them as is. This website runs fine from within Visual Studio, locating and accessing these external DLLs correctly. However, when the website is published on a webserver (running IIS6 and ASP.NET 2.0) rather than the development PC, it cannot locate and access these external DLLs, and I get the following error: ```Unable to load DLL 'XYZ.dll': The specified module could not be found. (Exception from HRESULT: 0x8007007E)``` The external DLLs are located in the bin directory of the website, along with the managed DLLs that wrap them and all the other DLLs for the website. Searching this problem reveals that many other people seem to have the same problem accessing external non-.NET DLLs from ASP.NET websites, but I haven't found a solution that works. I have tried the following: Running DEPENDS to check the dependencies, establishing that the first three are in the System32 directory in the path and the last is in the .NET 2 framework. I put the two DLLs and their dependencies in System32 and rebooted the server, but the website still couldn't load these external DLLs. Gave full rights to ASPNET, IIS_WPG and IUSR (for that server) to the website bin directory and rebooted, but the website still couldn't load these external DLLs. Added the external DLLs as existing items to the projects and set their "Copy to Output" property to "Copy Always", and the website still can't find the DLLs. Also set their "Build Action" property to "Embedded resource" and the website still can't find the DLLs. Any assistance with this problem would be greatly appreciated! Here is the accepted answer: Try putting the dlls in the \System32\Inetsrv directory.
This is the working directory for IIS on Windows Server. If this doesn't work, try putting the dlls in the System32 directory and the dependency files in the Inetsrv directory. Comment for this answer: Here's an answer without needing to pollute the system32 folder: http://stackoverflow.com/a/4598747/92756 Comment for this answer: instead, you can disable shadow copying in the web.config, provided you are not modifying binaries in the live application. Comment for this answer: you just saved my day, +1 Comment for this answer: I got burned by making a copy of my DLL and putting it in System32 because I forgot about it, and then I had the wrong version of Microsoft.Azure.Documents.ServiceInterop.dll getting loaded for my project which resulted in weird local queryRanges[0].isMinInclusive errors when trying to connect to the db. My fix ended up being to make sure my local IIS DefaultAppPool Identity was my local user. See also: https://github.com/Azure/azure-documentdb-dotnet/issues/267 Here is another answer: After struggling all day over this problem, I finally found a solution which suits me. It's just a test, but the method works.
```namespace TestDetNet { static class NativeMethods { [DllImport("kernel32.dll")] public static extern IntPtr LoadLibrary(string dllToLoad); [DllImport("kernel32.dll")] public static extern IntPtr GetProcAddress(IntPtr hModule, string procedureName); [DllImport("kernel32.dll")] public static extern bool FreeLibrary(IntPtr hModule); } public partial class _Default : System.Web.UI.Page { [UnmanagedFunctionPointer(CallingConvention.StdCall)] private delegate int GetRandom(); protected System.Web.UI.WebControls.Label Label1; protected void Page_Load(object sender, EventArgs e) { Label1.Text = "Hell'ou"; Label1.Font.Italic = true; } protected void Button1_Click(object sender, EventArgs e) { if (File.Exists(System.Web.HttpContext.Current.Server.MapPath("html/bin")+"\\DelphiLibrary.dll")) { IntPtr pDll = NativeMethods.LoadLibrary(System.Web.HttpContext.Current.Server.MapPath("html/bin")+"\\DelphiLibrary.dll"); if (pDll == IntPtr.Zero) { Label1.Text = "pDll is zero"; } else { IntPtr pAddressOfFunctionToCall = NativeMethods.GetProcAddress(pDll, "GetRandom"); if (pAddressOfFunctionToCall == IntPtr.Zero) { Label1.Text += "IntPtr is zero"; } else { GetRandom _getRandom = (GetRandom)Marshal.GetDelegateForFunctionPointer(pAddressOfFunctionToCall,typeof(GetRandom)); int theResult = _getRandom(); bool result = NativeMethods.FreeLibrary(pDll); Label1.Text = theResult.ToString(); } } } } } } ``` Here is another answer: As an alternate to putting the dll in a folder that is already in the path (like system32) you can change the path value in your process by using the following code ```System.Environment.SetEnvironmentVariable("Path", searchPath + ";" + oldPath) ``` Then when LoadLibrary tries to find the unmanaged DLL it will also scan searchPath. This may be preferable to making a mess in System32 or other folders. Comment for this answer: Only works with PInvoke. if using a mixed mode assembly, the DLLs are linked (and fail) before any code runs. 
Here is another answer: This happens because the managed dlls get shadow copied to a temporary location under the .NET Framework directory. See http://msdn.microsoft.com/en-us/library/ms366723.aspx for details. Unfortunately, the unmanaged dlls do NOT get copied and the ASP.NET process won't be able to find them when it needs to load them. One easy solution is to put the unmanaged dlls in a directory that is in the system path (type "path" at the command line to see the path on your machine) so that they can be found by the ASP.NET process. The System32 directory is always in the path, so putting the unmanaged dlls there always works, but I would recommend adding some other folder to the path and then adding the dlls there to prevent polluting the System32 directory. One big drawback to this method is that you have to rename the unmanaged dlls for every version of your application, and you can quickly have your own dll hell. Comment for this answer: From my experience, when I create a new directory, add it to PATH and put the DLL in there, it is found just fine when running locally. However, this is not the case when hosted on IIS - the DLL is only found when I have it in /Windows/System32. Does IIS perhaps use a different variable for the path in some cases? Comment for this answer: This is a better answer than the accepted answer because it explains why you'd want to do this. Here is another answer: I have come across the same issue. I tried all of the above options - copying to system32, copying to inetpub, setting the path environment variable, etc. - and nothing worked. The issue was finally resolved by copying the unmanaged dll to the bin directory of the web application or web service. Here is another answer: Another option is embedding the native DLL as a resource in the managed DLL. This is more complicated in ASP.NET, as it requires writing the extracted DLL out to a temporary location at runtime. The technique is explained in another SO answer. Comment for this answer: why wouldn't you recommend this?
I am considering using this approach for an Azure Functions app where I need to run a native executable from the managed assembly. Comment for this answer: I wouldn't recommend this approach for apps or libraries that might be run or used in PAAS hosts (like Azure Websites). Comment for this answer: Azure Functions didn't exist when I wrote it. Azure Functions are so small and targeted, if you get it working then go for it! Here is another answer: Adding to Matt's answer, this is what finally worked for me for 64-bit Server 2003 / IIS 6: make sure your dlls / asp.net are the same version (32 / 64 bit); put the unmanaged dlls in the inetsrv dir (note that in 64-bit Windows, this is under syswow64, even though the sys32/inetsrv directory is created); leave the managed dlls in /bin; make sure both sets of dlls have read/execute permissions. Comment for this answer: +1 for the location of inetsrv on 64-bit systems. Thanks for that. Comment for this answer: I found this to be too tedious and it won't allow for differing versions of dlls. We resolved this by using symbolic links. Here is another answer: Run DEPENDS on XYZ.dll directly, in the location that you have deployed it to. If that doesn't reveal anything missing, use the fuslogvw tool in the platform SDK to trace loader errors. Also, the event logs sometimes contain information about failures to load DLLs. Comment for this answer: Could you link to where I can get DEPENDS? Here is another answer: Take a look with FileMon or ProcMon and filter on the names of the troublesome DLLs. This will show you what directories are scanned in search of the DLLs, and any permission issues you might have. Comment for this answer: ProcMon is extremely useful in figuring out dependencies and why a certain DLL will not load. Here is another answer: Always worth checking the path variable in your environment settings too.
Comment for this answer: @Ristogod: The answer is four years old and you're downvoting me because I didn't keep track to make sure the link was active? Tough crowd. Comment for this answer: @Drax - I thought the important information was to check your path variable Comment for this answer: @annakata You're supposed to extract the important information from the link so when it goes down this answer is still valid ;) Comment for this answer: @annakata The answer is clear enough for me but probably not that self explanatory for all interested users :) Here is another answer: On Application_start use this: (customise /bin/x64 and bin/dll/x64 folders as needed) ```String _path = String.Concat(System.Environment.GetEnvironmentVariable("PATH") ,";" , System.Web.Hosting.HostingEnvironment.MapPath("~/bin/x64") ,";" , System.Web.Hosting.HostingEnvironment.MapPath("~/bin/dll/x64") ,";" ); System.Environment.SetEnvironmentVariable("PATH", _path, EnvironmentVariableTarget.Process); ```
Title: Is it possible to modify Django Q() objects after construction? Tags: python;django;django-orm;django-q Question: Is it possible to modify Django Q() objects after construction? I create a Q() object like so: ```q = Q(foo=1) ``` is it possible to later change ```q``` to be the same as if I had constructed: ```q2 = Q(foo=1, bar=2) ``` ? There's no mention of such an interface in the Django docs that I could find. I was looking for something like: ```Q.append_clause(bar=2) ``` Comment: what exactly do you mean by "same as constructed"? do you mean modify the q object on the fly? Comment: Ok, then PerrinHarkins' answer is what you are looking for. Comment: @karthikr: Updated question to better describe what I was looking for. Here is another answer: You can just make another Q() object and AND them together: ```q2 = q & Q(bar=2)``` Comment for this answer: Yes, they are identical. What matters is whether there are two .filter calls or just one. Comment for this answer: Are you sure that's semantically identical? It may well be, but I know there can be gotchas. For instance, calling `.filter(Q(parent__foo=1)).filter(Q(parent__bar=1))` is different from `.filter(Q(parent__foo=1, parent__bar=1))` and I didn't want to fall afoul of anything like that. Here is another answer: You can add Q objects together, using their ```add``` method. For example: ```>>> q = Q(sender=x) >>> q.add(Q(receiver=y), Q.AND) ``` The second argument to ```add``` is the connector, which can also be ```Q.OR```. EDIT: My answer is merely a different way of doing what Perrin Harkins suggested, but regarding your other concern, about different behavior of ```filter``` depending on the way you construct the query, you don't have to worry about that if you join Q objects.
My example is equivalent to ```filter(sender=x, receiver=y)```, and not ```filter(sender=x).filter(receiver=y)```, because Q objects, as far as I could see in a quick test, do an immediate AND on the clauses and don't have the special behavior of ```filter``` for multi-valued relations. In any case, nothing beats looking at the SQL and making sure it really is doing the same thing in your specific queries. Here is another answer: The answers here are a little old and unsatisfactory imo, so here is my answer. This is how you deep copy:
```
def deep_copy(q: Q) -> Q:
    new_q = Q()
    # Go through the children of a query: if a child is itself
    # a Q object, this function runs recursively on it
    for sub_q in q.children:
        # Make sure you copy the connector in
        # case of complicated queries
        new_q.connector = q.connector
        if isinstance(sub_q, Q):
            # This will run recursively on sub queries
            sub_q = deep_copy(sub_q)
        else:
            pass  # Do your modification here
        new_q.children.append(sub_q)
    return new_q
```
The else branch is where your stuff (```name='nathan'``` for example) ends up. You can change or delete it there if you'd like and the query should still work fine.
Title: Does bson require more space than json when sending decimal data? Tags: java;json;bson Question: I am trying to compare bson serialization with json like this: ```{ "startAmount": 1000, "amounts": [ 1000, 15000 ], "stakes": [ 0.0395, 0.0062 ], "step": 10 } ``` Note that the actual size of ```amounts``` and ```stakes``` is 1500 elements. ```amounts``` are between 1000 and 15000; ```stakes``` are just random values from ```0.0001d``` to ```0.05d```. So when this json was passed to an online bson serializer I got a 31.5 Kb file, but the original json size is 17.4 Kb. In theory bson is much smaller when it comes to packing numbers. So what is wrong? Is this result correct, or is there some mistake in the packaging? Or maybe bson requires more space when numbers with a decimal point are packaged? Comment: Why should BSON be smaller than JSON (unless the JSON contains a lot of spacing and indentation)? A `double` is 8 bytes. The string `0.0395` is only 6 bytes. BSON is optimized for parsing speed, not for size. Comment: Related: http://stackoverflow.com/questions/12601890/compare-json-and-bson
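The comment's point can be illustrated with a short stdlib-only Python sketch (no BSON library involved; it just mimics the BSON element layout: a 1-byte type tag, the element's key as a null-terminated string, then a fixed 8-byte IEEE-754 double):

```python
import json
import struct

stake = 0.0395

# JSON stores the textual form of the number: "0.0395" is 6 bytes.
json_size = len(json.dumps(stake).encode("utf-8"))

# BSON stores every fractional JSON number as a fixed 8-byte double,
# preceded by a type byte and the element name as a C string.
# Array elements are keyed by their index rendered as a string ("0", "1", ...).
name = "0"
bson_element_size = 1 + len(name) + 1 + len(struct.pack("<d", stake))

print(json_size)          # 6
print(bson_element_size)  # 11
```

With 1500 array elements the keys grow to three or four digits, so each BSON array entry costs roughly 13-15 bytes versus about 7 characters of JSON text, which is consistent with the observed 31.5 Kb vs 17.4 Kb.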
Title: Grouping and counting occurrences of a parameter within each group Tags: linq;sqlite;group-by;linqpad Question: I'm fiddling with LINQPad and I'm sure this question has been asked before, but I've been struggling and searching for quite a while. I have a collection like this:
```
Age Material
--- --------
3   Steel
3   Steel
3   PVC
4   Steel
4   PVC
```
I want to group it first by Material, and then count the occurrences of each Age within each Material group, resulting in something like this:
```
{Mat = Steel, [(Age = 3, Count = 2), (Age = 4, Count = 1)]}
{Mat = PVC,   [(Age = 3, Count = 1), (Age = 4, Count = 1)]}
```
I know this has to do with nested groups but I can't get it right. I've been trying stuff like this (only without the question marks), but I don't know how to get to the items in each group:
```
var matGroups = from line in Pipelines
                group line by line.Material into matGroup
                from ???? in ???
                group ???? by ???.Age;
```
If it's relevant, the data store I'm querying is in SQLite. UPDATE The following code gets me the data I want. However, I would still like to know the proper way to do this in a single query, if possible.
```
var materials = from p in Pipelines
                group p by p.Material into matGroup
                select matGroup.Key;

foreach (string material in materials)
{
    var ages = from p in Pipelines
               where p.Material == material
               group p by p.Age into ageGroup
               select new { Age = ageGroup.Key, Count = ageGroup.Count() };
    ages.Dump(material + " Pipelines");
}
```
Comment: I have never used LINQ, but the SQL command you want is simple: `select material, age, count(*) from pipelines group by material, age;` Maybe that will help... Here is the accepted answer: It is just a grouping by ```Age``` and ```Material```:
```
from p in Pipelines
group p by new { p.Material, p.Age } into g
orderby g.Key.Material, g.Key.Age
select new { g.Key.Material, g.Key.Age, Count = g.Count() }
```
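For comparison, the accepted answer's composite-key grouping can be sketched outside LINQ as well; here is an illustration (not part of the thread) in Python using `collections.Counter`:

```python
from collections import Counter

# (Age, Material) rows from the question
pipelines = [
    (3, "Steel"), (3, "Steel"), (3, "PVC"),
    (4, "Steel"), (4, "PVC"),
]

# Count occurrences of each (Material, Age) pair in one pass,
# mirroring `group p by new { p.Material, p.Age }` in the LINQ answer.
counts = Counter((material, age) for age, material in pipelines)

for (material, age), count in sorted(counts.items()):
    print(material, age, count)
```

The single grouping over a composite key replaces the nested group-then-regroup the question was attempting.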
Title: Upgrade Visual Studio 2008 to SP1 to install SQL Server 2008 Management Studio Tags: sql-server-2008;visual-studio-2008;ssms Question: Why do I need to upgrade my Visual Studio 2008 to SP1 in order to install SQL Server 2008 Management Studio if I already have Microsoft Visual Studio 2010? How can I install SQL Server 2008 Management Studio without upgrading Visual Studio 2008 to SP1? Comment: What's the reason not to upgrade to VS 2008 SP1, unless it's some sort of company policy issue? I can't think of a good reason not to upgrade. Here is another answer: If you have Visual Studio 2008 and one or more 2008 Express Editions, you must upgrade all of them to SP1.
Title: Signing .extension of app not in provisioning profile Tags: ios;xcode;provisioning-profile;signing;onesignal Question: I would like to export a new build of my Xcode app to iTunes Connect, but I always get the following error: ```Failed to create provisioning profile. The app ID "[myappid].OneSignalNotificationServiceExtension" cannot be registered to your development team. Change your bundle identifier to a unique string to try again. Provisioning profile failed qualification Profile doesn't include the selected signing certificate. ``` I tried the following: cleaning the build, restarting Xcode, creating new profiles (distribution/development), and removing .OneSignalNotificationServiceExtension (which gives another error). I don't want to change the app id since I want to continue on the same app (for my testers). I had to change the password of my Apple ID due to security issues, but I changed it everywhere they asked for it. Does anybody know how to fix this?
Title: Ubuntu 18.04 Laravel 5.7 nginx 1.14 Laravel routes not working Tags: php;laravel;nginx Question: I recently moved my server from apache to nginx; when using apache I had a working site using the Laravel framework. For some reason all pages other than the base index page just return a 404 error. I am sure it has something to do with my nginx config. My current config is shown below.
```
server {
    listen 80;
    server_name munkeemagic.munkeejuice.co.uk;
    root /var/www/html/munkeemagic/mtg-webby/mtg/public;
    index index.php index.html;

    location / {
        try_files $uri $uri/ /index.php?$is_args$args;
    }

    location ~* \.php$ {
        try_files $uri /index.php =404;
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~ /\.ht {
        deny all;
    }
}
```
I have checked to make sure that all the file permissions are set for user www-data, which is the user nginx is running as. I have even tried changing
```
location / {
    try_files $uri $uri/ /index.php?$is_args$args;
}
```
to
```
location / {
    try_files $uri $uri/ index.php?$query_string;
}
```
but that has not had any impact and I still see the same issue. So basically with this config, if I navigate to http://munkeemagic.munkeejuice.co.uk the page displays; if I go to http://munkeemagic.munkeejuice.co.uk/login I get a 404 error. Does anyone have any idea what I may be doing wrong?
After following user3647971's advice I have modified the config as follows:
```
server {
    listen 80;
    server_name munkeemagic.munkeejuice.co.uk;
    root /var/www/html/munkeemagic/mtg-webby/mtg/public;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";

    index index.php index.html;
    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt { access_log off; log_not_found off; }

    error_page 404 /index.php;

    location ~* \.php$ {
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }
}
```
I have restarted nginx to pick up the changes and have also cleared the route cache, but unfortunately I still have the same problem. Comment: Um, I thought I had, but have just found a default configuration on the laravel website. Will give that a shot and see if that makes a difference. Comment: unfortunately those two tips did not help. I have updated my question to reflect that I have changed the config. Comment: @user3647971 yes, that's the path to my public folder in the laravel project Comment: Yeah, I removed Apache before I installed nginx. Yes, the only thing I did was remove Apache and replace it with nginx. Comment: Yeah, after every config change I made, I restarted nginx. Comment: Just tried that but it had no impact, still experiencing the same problem. Comment: Think I will spin up a new VM tomorrow and see if I can get this working locally; if so then I will recreate my live VM Comment: @user3647971 just to let you know, I figured out the problem: I still had the default configuration enabled, which was overriding my site-specific config. Many thanks for all your help. Comment: Do you use the default nginx confs that come with laravel?
Comment: Also might wanna clear the routes cache with `php artisan route:cache` :) hope it helps Comment: Is you root folder of _Laravel_ really what you have defined there? Doesn't seem like laravel standard Comment: In here: https://stackoverflow.com/questions/39040385/moved-laravel-project-from-apache-to-nginx There's minimal config, and you need to define the root folder to laravels public folder Comment: Do you still have apache running in the background there? Is the replacing apache with nginx only thing you did? Comment: did you restart the nginx after trying different configurations? Comment: `location ~ \.php$` Try this in your .php files section of config Comment: Okay, let me know how it turns out Here is the accepted answer: Ok figured out the problem, tbh i was a bit stupid. My default configuration was overriding my website specific config. I unlinked the default config restarted nginx and everything started to work correctly.
Title: How to get a hover underline to appear above text and have a drop-shadow effect? Tags: html;css Question: As you can see in the first image, I have the underline appear under the links, which covers the red "hr" that runs across the page. I want to apply the same effect to the archives and categories links but with it appearing above. I can't seem to find a way of doing it. I looked up hover underline positioning and tried setting text-underline-position to be above, but that doesn't do what I want it to do. How do I go about doing this? In the second image, the prototype I had designed has the underline with a drop-shadow effect. How do I go about doing that with hover links? Can it even be achieved if I'm using an image as a background? Or would I need to save that as a .png with transparency? Any tips? HTML:
```
<!DOCTYPE html>
<html>
<head>
    <title>My Site</title>
    <link rel="stylesheet" href="stylesheet.css" type="text/css" />
</head>
<body>
    <header></header>
    <div id="NavSection">
        <div id="TopNav">
            <nav id="MainNav">
                <ul id="Menu">
                    <li><a href="">Home</a></li>
                    <li><a href="">About</a></li>
                    <li><a href="">Contact</a></li>
                </ul>
            </nav>
        </div>
        <hr />
        <div id="SecondNavSection">
            <nav id="SecondNav">
                <ul id="SecondMenu">
                    <li><a href="">Archives</a></li>
                    <li><a href="">Categories</a></li>
                </ul>
            </nav>
        </div>
        <div id="SiteTitle">
            <h1 id="My">My<span id="Site">Site</span></h1>
        </div>
    </div>
    <div id="ContentDiv">
        <main id="ContentSection">
            <div id="Content">
                <p>Content goes here.</p>
            </div>
        </main>
    </div>
    <footer>
        <p>My Site</p>
    </footer>
</body>
</html>
```
CSS:
```
body { background-color: #ffffff; background: url(/images/background.jpg) no-repeat center center fixed; background-size: cover; resize: both; overflow: scroll; overflow-x: hidden; }
::-webkit-scrollbar { width: 0px; font-family: Arial; }
@font-face { font-family: ubuntu-medium; src: url(/fonts/ubuntu-medium.ttf); }
/* @media (max-width:3440px){ body{background: url(/images/background.jpg) no-repeat center center fixed;} } */
/* @media (min-width:480px){ body{background: url(/images/background.jpg) no-repeat center center fixed;} } */
#NavSection { margin-top: 3%; }
#MainNav { position: left; margin-left: 11%; }
#Menu li { font-family: ubuntu-medium; font-weight: normal; color: #414141; padding: 0px 10px; display: inline; font-size: 15px; list-style-type: none; }
#Menu a:hover { text-decoration-color: #414141; text-underline-offset: 0.12em; text-decoration-line: underline; text-decoration-style: solid; text-decoration-thickness: 4px; }
hr { margin: 0px; border: 2px solid red; width: auto; }
a { color: #414141; text-decoration: none; }
a:active { color: #ff0000; }
#SiteTitle { margin-left: 0%; }
#My { font-family: Impact; font-weight: normal; font-size: 30px; color: #ffffff; text-decoration: underline; text-decoration-color: #414141; text-decoration-thickness: 2px; text-underline-offset: 0.08em; }
#Site { color: red; }
ul { list-style-type: none; margin-top: 0px; margin-bottom: 0px; padding: 0px; }
#SecondNav { float: right; font-family: ubuntu-medium; font-weight: normal; color: #414141; padding: 0px 10px; font-size: 15px; margin-right: 11%; }
#SecondMenu a:hover { margin-bottom: 5px; text-decoration-line: underline; text-underline-position: above; text-decoration-style: solid; text-decoration-color: #414141; text-decoration-thickness: 4px; }
#SecondMenu li { margin-bottom: 5px; font-family: ubuntu-medium; font-weight: normal; color: #414141; padding: 0px 10px; display: inline; font-size: 15px;
list-style-type: none; }
#ContentDiv { width: 70%; height: 40%; position: absolute; top: 30%; left: 15%; transform: translateX(0%); background-color: rgba(255, 0, 0, 0.4); }
#ContentSection { width: 90%; height: 60%; position: absolute; top: 20%; left: 5%; background-color: rgba(255, 255, 255, 0.9); }
#Content { margin: 3%; }
```
Comment: Post your CSS and HTML of the above pictures. Comment: Wrap the link in a div and prepend a div on top that is just a solid rect. Here is another answer: Using two lists and using border on the li you can get the color.
```
.navbar { position: relative; margin-bottom: 5px; font-family: ubuntu-medium; font-weight: normal; }
.navbar a { text-decoration: none; color: #414141; }
.navbar ul { display: flex; margin: 0; padding: 0; }
.navbar ul li { display: block; flex: 0 1 auto; /* Default */ list-style-type: none; line-height: 2.5em; margin-left: 1em; }
.sub ul { justify-content: flex-end; }
.navbar::before { position: absolute; z-index: -1; margin-top: 2.5em; content: ''; border-top: 10px solid #FF0000; width: 100%; }
.main li.active { border-bottom: 10px solid #000000; box-shadow: 0 4px 2px -2px #AAAAAA; }
.sub li.active { border-top: 10px solid #000000; margin-top: -10px; box-shadow: 0px -4px 2px -2px #AAAAAA; }
```
```
<div class="navbar">
  <nav class="main">
    <ul>
      <li><a href="#" class="active">Home</a></li>
      <li class="active"><a href="#">About</a></li>
      <li><a href="#">Contact</a></li>
    </ul>
  </nav>
  <nav class="sub">
    <ul>
      <li class="active"><a href="#">Foo</a></li>
      <li><a href="#">Bar</a></li>
      <li><a href="#">Baz</a></li>
    </ul>
  </nav>
</div>
```
Here is another answer: Use this HTML code
```
<div class="menu">
  <a href="#">A</a>
  <a href="#">B</a>
  <a href="#">C</a>
  <a href="#">D</a>
  <a href="#">E</a>
  <a
href="#">F</a>
</div>
```
CSS
```
.body { background:#222; padding:50px; }
.menu { margin:0 auto; width: 90%; }
.menu a { display:block; float: left; padding:5px 0; color:#fff; text-decoration:none; margin:0 10px; font-family:arial; }
.menu a:hover { border-bottom:3px solid #fff; }
```
Hope this helps; I'd be glad if you rate my answer!
Title: Logging entry and exit of methods along with parameters automagically? Tags: .net;logging;log4net Question: Is there a way for me to add logging so that entering and exiting methods get logged along with parameters, automatically somehow, for tracing purposes? How would I do so? I am using Log4Net. Here is another answer: Have a look at: How do I intercept a method call in C#? PostSharp - il weaving - thoughts Also search SO for 'AOP' or 'Aspect Oriented Programming' and PostSharp...you get some interesting results. Here is another answer: I'm not sure what your actual needs are, but here's a low-rent option. It's not exactly "automatic", but you could use StackTrace to peel off the information you're looking for in a manner that wouldn't demand passing arguments - similar to ckramer's suggestion regarding interception:
```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Diagnostics;

namespace TracingSample
{
    class Program
    {
        static void Main(string[] args)
        {
            DoSomething();
        }

        static void DoSomething()
        {
            LogEnter();
            Console.WriteLine("Executing DoSomething");
            LogExit();
        }

        static void LogEnter()
        {
            StackTrace trace = new StackTrace();
            if (trace.FrameCount > 1)
            {
                string ns = trace.GetFrame(1).GetMethod().DeclaringType.Namespace;
                string typeName = trace.GetFrame(1).GetMethod().DeclaringType.Name;
                Console.WriteLine("Entering {0}.{1}.{2}", ns, typeName, trace.GetFrame(1).GetMethod().Name);
            }
        }

        static void LogExit()
        {
            StackTrace trace = new StackTrace();
            if (trace.FrameCount > 1)
            {
                string ns = trace.GetFrame(1).GetMethod().DeclaringType.Namespace;
                string typeName = trace.GetFrame(1).GetMethod().DeclaringType.Name;
                Console.WriteLine("Exiting {0}.{1}.{2}", ns, typeName, trace.GetFrame(1).GetMethod().Name);
            }
        }
    }
}
```
You could combine something like the above example with inheritance, using a non-virtual public member in the base type to signify the action method, then calling a virtual member to actually do the
work:
```
public abstract class BaseType
{
    public void SomeFunction()
    {
        LogEnter();
        DoSomeFunction();
        LogExit();
    }

    public abstract void DoSomeFunction();
}

public class SubType : BaseType
{
    public override void DoSomeFunction()
    {
        // Implementation of SomeFunction logic here...
    }
}
```
Again - there's not much "automatic" about this, but it would work on a limited basis if you didn't require instrumentation on every single method invocation. Hope this helps. Comment for this answer: This code goes well with this code to capture the parameter values http://stackoverflow.com/a/6928253/1267778 . So I have something like logService.LogEnter(arg1, arg2, arg3). Great stuff Here is another answer: The best way to achieve this sort of thing is by using interception. There are a couple of ways to do this, though they all tend to be somewhat invasive. One would be to derive all your objects from ContextBoundObject. Here is an example of using this sort of approach. The other approach would be to use one of the existing AOP libraries to achieve this. Something like DynamicProxy from the Castle Project is at the core of many of these. Here are a few links: Spring.Net PostSharp Cecil There are probably several others, and I know Castle Windsor and Ninject both provide AOP capabilities on top of the IoC functionality. Once AOP is in place you would simply write an interceptor class that would write the information about the method calls out to log4net. I actually wouldn't be surprised if one of the AOP frameworks gave you that sort of functionality out of the box. Here is another answer: You can use the open source framework CInject on CodePlex, which has a LogInjector to mark entry and exit of a method call. Or you can follow the steps mentioned in this article on Intercepting Method Calls using IL and create your own interceptor using Reflection.Emit classes in C#. Here is another answer: You could use a post-compiler like Postsharp.
The sample from the website talks about setting up a tracer for entering/exiting a method, which is very similar to what you want. Here is another answer: Functions To expand on Jared's answer, avoiding code repetition and including options for arguments:
```
private static void LogEnterExit(bool isEnter = true, params object[] args)
{
    StackTrace trace = new StackTrace(true); // need `true` for getting file and line info
    if (trace.FrameCount > 2)
    {
        string ns = trace.GetFrame(2).GetMethod().DeclaringType.Namespace;
        string typeName = trace.GetFrame(2).GetMethod().DeclaringType.Name;
        string args_string = args.Length == 0
            ? ""
            : "\narguments: [" + args.Aggregate((current, next) => string.Format("{0},{1};", current, next)) + "]";
        Console.WriteLine("{0} {1}.{2}.{3}{4}",
            isEnter ? "Entering" : "Exiting",
            ns, typeName,
            trace.GetFrame(2).GetMethod().Name,
            args_string);
    }
}

static void LogEnter(params object[] args)
{
    LogEnterExit(true, args);
}

static void LogExit(params object[] args)
{
    LogEnterExit(false, args);
}
```
Usage
```
static void DoSomething(string arg1)
{
    LogEnter(arg1);
    Console.WriteLine("Executing DoSomething");
    LogExit();
}
```
Output In the console, this would be the output if ```DoSomething``` were run with "blah" as ```arg1```:
```
Entering Program.DoSomething
arguments: [blah]
Executing DoSomething
Exiting Program.DoSomething
```
Here is another answer: This does not apply to all C# applications, but I am doing the following in an Asp.Net Core application to add logging to controller method calls by implementing the following interfaces: IActionFilter, IAsyncActionFilter.
```
public void OnActionExecuting(ActionExecutingContext context)
public void OnActionExecuted(ActionExecutedContext context)
public async Task OnActionExecutionAsync(ActionExecutingContext context, ActionExecutionDelegate next)
```
Title: Metrics - multi-class model comparisons Tags: machine-learning-model;metric;jaccard-coefficient Question: I am looking for a way to quantify the performance of multi-class model labelers, and thus compare them. I want to account for the fact that some classes are ‘closer’ than others (for example, a ‘car’ is closer to a ‘truck’ than a ‘flower’ is). So, if a labeler classifies a car as a truck, that is better than classifying the car as a flower. I am considering using a Jaccard similarity score. Will this do what I want? Comment: How would you use Jaccard similarity exactly? Comment: The Jaccard score computes the average of the Jaccard similarity coefficients. So basically it's the size of the intersection divided by the size of the union of the two (or more) sets of labels. Comment: yes, but I mean what are the sets that you are going to compare? I don't see how Jaccard can find a higher similarity between classes "car" and "truck" than classes "car" and "flowers". Or maybe you independently calculate the similarity based on the words' context in a large corpus? Comment: I was thinking also that RMSE might be valuable since it takes into consideration the distance from truth. Comment: Agreed. I would have to manually indicate 'closeness' Here is another answer: There is no commonly established metric to do that. You'll have to write custom code based on manually indicating rank-ordered preferences of misclassifications.
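A minimal sketch of such custom code in Python (all class names and closeness weights below are invented for illustration; they are the "manually indicated preferences" the answer refers to):

```python
# Hypothetical class-closeness scores: 1.0 = same class,
# values near 1 = "close" classes, 0.0 = unrelated classes.
similarity = {
    ("car", "truck"): 0.7,    # a car->truck mistake is cheap
    ("car", "flower"): 0.0,   # a car->flower mistake is expensive
    ("truck", "flower"): 0.0,
}

def pair_similarity(a, b):
    # Exact matches always count fully; unlisted pairs default to 0.
    if a == b:
        return 1.0
    return similarity.get((a, b), similarity.get((b, a), 0.0))

def labeler_score(y_true, y_pred):
    # Average closeness between true and predicted labels.
    return sum(pair_similarity(t, p) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = ["car", "car", "truck"]
score_a = labeler_score(y_true, ["car", "truck", "truck"])   # one car->truck error
score_b = labeler_score(y_true, ["car", "flower", "truck"])  # one car->flower error
```

Here `score_a` comes out higher than `score_b` even though both labelers made exactly one mistake, which is the behavior plain Jaccard (or accuracy) cannot provide.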
Title: How do I get the current Cmdlet from another object? Tags: powershell Question: What's the best way to get the current PowerShell Cmdlet from another object? If I create a helper object that is not a Cmdlet but will be called by Cmdlets, the helper methods may want to call WriteVerbose, WriteDebug etc. What's the best way to get access to that? Is there a static PowerShell method that will return the current Cmdlet or do I need to have the Cmdlet pass itself to the helper? Here is the accepted answer: AFAICT you will need to pass your cmdlet object to the helper class so it can access those instance methods WriteVerbose, WriteDebug, etc, I'm not aware of any other public "static" access mechanism to get to these output streams.
Title: wrapAll jQuery problem Tags: javascript;jquery;html Question: I need to wrap three divs into one, but that group of three divs repeats itself several times in the code. Maybe my HTML will explain:
```
<div class="one" />
<div class="two" />
<div class="three" />
<div class="one" />
<div class="two" />
<div class="three" />
```
What I'm trying to achieve is this:
```
<div class="wrap">
  <div class="one" />
  <div class="two" />
  <div class="three" />
</div>
<div class="wrap">
  <div class="one" />
  <div class="two" />
  <div class="three" />
</div>
```
This is wrapping everything into one div:
```
$('.one, .two, .three').wrapAll('<div class="wrap" />')
```
How can I get the groups wrapped separately? Comment: so each set of div.one div.two div.three needs to be wrapped in div.wrap? Here is the accepted answer:
```
$(function () {
  var all = $('.one, .two, .three');
  for (var i = 0; i < all.length; i += 3) {
    all.slice(i, i + 3).wrapAll('<div class="wrap" />');
  }
});
```
See it in action: http://jsbin.com/uqosa By the way, ```<div class="one" />``` didn't work for me on Firefox; I had to close them with ```</div>```.
Here is another answer: This will work: ```$(function() { var $ones = $('div.one'); var $twos = $('div.two'); var $threes = $('div.three'); $ones.each(function(idx) { $('<div class="wrap"></div>') .append($ones.eq(idx)) .append($twos.eq(idx)) .append($threes.eq(idx)) .appendTo('body'); }); }); ``` Here is another answer: Try this: ```$('div.one').each(function() { var wrap = $(this).wrap('<div class="wrap"></div>').parent(); wrap.next('div.two').appendTo(wrap); wrap.next('div.three').appendTo(wrap); }); ``` Here is another answer: In my head this should work: ```$('.one').each(function() { var wrap = $('<div>').addClass('wrap'), two = $(this).next(), three = $(two).next(); $(this).after(wrap).appendTo(wrap); two.appendTo(wrap); three.appendTo(wrap); }); ``` Comment for this answer: Updated my answer a few times. Here is another answer: ```$.fn.matchUntil = function(expr) { var match = []; this.each(function(){ match = [this]; for ( var i = this.nextSibling; i; i = i.nextSibling ) { if ( i.nodeType != 1 ) { continue; } if ( $(i).filter(expr).length > 0 ) { break; } match.push( i ); } }); return this.pushStack( match ); }; ``` usage: ```$(".one").each( function() { $(this).matchUntil(".one").wrapAll("<div class='wr'></div>"); }) ``` it's just a modified version of http://docs.jquery.com/JQuery_1.2_Roadmap#.nextUntil.28.29_.2F_.prevUntil.28.29, which doesn't seem to work with new jQuery.
Title: Firestore: FirebaseError: Missing or insufficient permissions Tags: javascript;typescript;firebase;google-cloud-firestore;firebase-security Question: I have the following rule: ```rules_version = '2'; service cloud.firestore { match /documents/{any} { allow read, write: if request.auth.uid != null } match /users/{any} { allow read, write: if request.auth.uid != null } } ``` So basically, I want users that are authenticated to be able to read/write to the ```users``` and ```documents``` collections. But when I try to write as I create a user and sign in: ``` const registerRes = await firebase .auth() .createUserWithEmailAndPassword(email, password); await registerRes.user.updateProfile({ displayName: fullName }); registerRes.user.sendEmailVerification(); await firebase.firestore().collection('users').doc(registerRes?.user?.uid).set({...someStuff}); ``` I get an error: ```FirebaseError: Missing or insufficient permissions. ``` Not sure what I'm doing wrong. Any help would be greatly appreciated - thanks! Here is the accepted answer: As explained in the doc, you need to include your path-specific rules in a ```match /databases/{database}/documents``` block, like: ```rules_version = '2'; service cloud.firestore { match /databases/{database}/documents { match /documents/{any} { allow read, write: if request.auth.uid != null } match /users/{any} { allow read, write: if request.auth.uid != null } } } ``` Side note: Be aware that rules that are only based on ```request.auth.uid != null``` are not really securing your DB, see this SO answer.
Title: Running my UI app (WPF App) inside Worker Service with C# .Net Core Tags: c#;wpf;.net-core;windows-services Question: I'm new to programming with .Net and C# and, as said in the title, I have a WPF app which is accessible in a system tray icon and I want to run it as a Windows service. Typically, I want an output like the one described in an answer provided in a discussion here. ``` If you want it in the system tray I think what you'll have to do is make it a Windows service. I've only written 1 Windows Service and that was years ago, but I believe that's what you'll have to do. If I'm correct about writing a Windows service, then what I would suggest you do is create a new Visual Studio solution and add two projects to it. One would be a DLL which would run as a Windows service. The second project would be a WPF project that will be your UI the user interacts with. Then you'll have to use some messaging system to communicate between the two. For the action messages that would mimic what Outlook does, I've used some WPF toast messages to accomplish that. If you Bing/Google "WPF toast popup" you'll get lots of results. ``` I have searched a lot on the Internet and found some helpful answers, like: URL1 ``` You can't, not directly, because the windows service will necessarily start when the machine does, not when a user logs in. The service will also be running in a different context, likely as a different user. What you can do is to write a separate system tray based "controller" that interacts with the service. ``` URL2 ``` It needs some effort to achieve.
Well, just two hints: 1) use the static property System.Environment.UserInteractive to detect in which mode your application is running, see http://msdn.microsoft.com/en-us/library/system.environment.userinteractive.aspx; 2) get rid of app.xaml, because it will force starting the WPF Application in all cases; instead, create and run an instance of System.Windows.Application (or better, a specially derived class) explicitly and only for interactive mode, see http://msdn.microsoft.com/en-us/library/system.windows.application.aspx. ``` And I could not apply their instructions. Thanks in advance! Comment: https://www.c-sharpcorner.com/UploadFile/naresh.avari/develop-and-install-a-windows-service-in-C-Sharp/ This may be helpful but you should clarify your question. Comment: I was actually trying to solve this same problem today. I found 2-3 ways of solving this and creating a Windows service seemed to be the most complicated. In case you are open to other ideas, take a look at [hardcodet.net](http://www.hardcodet.net/wpf-notifyicon), which takes the Windows Forms native ability to run an app as a tray icon and ports it to WPF. Comment: You need to create a Windows service and a Windows application (Win32, WPF, WinForms) which has a notification icon and let them interact by named pipe or another inter-process communication method. You need to learn all of them. BTW, if you just want a Windows application with a notification icon, you don't need a Windows service. Comment: Thanks @JackArnold. If I take an example from a popular app, I would choose Microsoft Teams, which starts in a system tray icon at Windows startup, works permanently in the background, and shows the GUI if I click the tray icon.
Title: Task.Factory.FromAsync never calls the end method Tags: c#;asynchronous;task;factory Question: I'm kind of stumped and have been staring/debugging this code for hours now. In my service I have - ```var task = Task.Factory.FromAsync( AnotherService.BeginMethod(arg1, null, null), AnotherService.EndMethod, TaskCreationOptions.None); task.ContinueWith((antecedent) => { if (antecedent.isFaulted....) else // do something else } ``` I have the above code wrapped in a ```TaskCompletionSource``` and set the result/exception in the ```task.ContinueWith``` method. So far, so good. The problem - As I'm debugging my unit tests (I have mocks for AnotherService), the Begin method is called, I store the variable and set the result on the tcs in my mock service, but my ```EndMethod``` in ```MockAnotherService``` is never called. I assumed that the tcs returned from the mock service would get signaled when I set the result/exception on it, causing the ```FromAsync``` call to call my End method. Is this not the case? EDIT - My mock implementation - ``` public IAsyncResult BeginSetDevice(Device device, AsyncCallback callback, object state) { var tcs = new TaskCompletionSource<string>(state); var setTask = Task.Factory.StartNew( () => { if (this.FaultedState) { tcs.SetException(new Exception("You asked for a fault")); } else { this.DeviceToReturn = device; tcs.SetResult("success"); } }); return tcs.Task; } ``` Comment: Added the mock implementation. I can debug and confirm that tcs.SetResult is called and the tcs status is changed to RanToCompletion Comment: Never mind. I was missing a callback in my mock implementation.. That seems to fix it. Looks like the FromAsync method fills its own callback method. Thanks. `if (callback != null)` `{` `callback(tcs.Task);` `}` Comment: `FromAsync` waits for the `IAsyncResult` returned by `AnotherService.BeginMethod` to be signalled. Could you show the code for `MockAnotherService`? Comment: Everything in your question looks correct.
Title: GWT/ java- onChange event not firing 1st time through Tags: java;eclipse;gwt;event-handling;onchange Question: I have a validator on a widget dropdown textbox that needs to perform an action when the value changes. The problem is that the first time the value of the dropdown changes, it passes over the change handler, onChange(event), in my validator. It does, however, catch it (and work) after the initial change (2nd change and after). Not really sure if showing my code will help much because it's a very simple validator, but I can if needed. I couldn't find much in my research about this happening to other people, but at the same time I feel like I have heard of others having this issue before. Is there a common reason this could be happening? Inside the validate function of my mainWidget class is the following. It basically calls a mainValidate class from there, which I could post code for if needed. ```for(int i=0;i<validatorsList.size();i++){ try{ saveErrors .add(myValidInfo.getValidator(validatorsList.get(i)).validate(getText(), iElementID, getArchitecture(),(getDisplayRow()+1),this)); } catch(Exception ex){ saveErrors .add("ERROR: Missing Validator "+validatorsList.get(i)); } } } ``` Comment: Post your code so that someone can help you Comment: yes I first had the initial value at null, then tried 1, made no difference... Using 2.2.0 Comment: so actually there is an onChange in the dropdown widget class that is fired every time correctly. Can I listen to the onChange handler from a validator somehow? Comment: Sorry for the delay, updated in the question Comment: Which version of GWT are you using? Which widget are you using? Comment: Have you tried to set an initial value? Comment: Can you show the code of how you trigger your validator?
Title: ld: warning: cannot find entry symbol main; not setting start address Tags: c;linker;operating-system;ld;bare-metal Question: While reading the book "Operating Systems From 0 to 1", I'm currently trying to build a debuggable program on bare metal. I'm compiling the following program: ```void main() {} ``` using ```gcc -ffreestanding -nostdlib -gdwarf-4 -m32 -ggdb3 -c os.c -o os.o``` without an error. But when running ```ld -m elf_i386 -nmagic -T os.ld os.o -o os.o``` on the output, os.ld being: ```ENTRY(main); PHDRS { headers PT_PHDR PHDRS; code PT_LOAD FILEHDR PHDRS; } SECTIONS { .text 0x600: ALIGN(0x100) { *(.text) } :code .data : { *(.data) } .bss : { *(.bss) } /DISCARD/ : { *(.eh_frame) } } ``` it prints the following warning: ```ld: warning: cannot find entry symbol main; not setting start address ``` Although, according to readelf, os.o contains the symbol "main". Output of ```readelf -s os.o```: ```Symbol table '.symtab' contains 23 entries: Num: Value Size Type Bind Vis Ndx Name 0: 00000000 0 NOTYPE LOCAL DEFAULT UND 1: 00000000 0 FILE LOCAL DEFAULT ABS os.c 2: 00000000 0 SECTION LOCAL DEFAULT 3 3: 00000000 0 SECTION LOCAL DEFAULT 5 4: 00000000 0 SECTION LOCAL DEFAULT 6 5: 00000000 0 SECTION LOCAL DEFAULT 7 6: 00000000 0 SECTION LOCAL DEFAULT 8 7: 00000000 0 SECTION LOCAL DEFAULT 10 8: 00000000 0 SECTION LOCAL DEFAULT 11 9: 00000000 0 SECTION LOCAL DEFAULT 13 10: 00000000 0 SECTION LOCAL DEFAULT 17 11: 00000000 0 SECTION LOCAL DEFAULT 19 12: 00000000 0 SECTION LOCAL DEFAULT 21 13: 00000000 0 SECTION LOCAL DEFAULT 22 14: 00000000 0 SECTION LOCAL DEFAULT 15 15: 00000000 0 SECTION LOCAL DEFAULT 23 16: 00000000 0 NOTYPE LOCAL DEFAULT 2 wm4.0.5fa2ff0e40930321219 17: 00000000 0 SECTION LOCAL DEFAULT 20 18: 00000000 0 SECTION LOCAL DEFAULT 1 19: 00000000 0 SECTION LOCAL DEFAULT 2 20: 00000000 20 FUNC GLOBAL DEFAULT 3 main 21: 00000000 0 FUNC GLOBAL HIDDEN 7 __x86.get_pc_thunk.ax 22: 00000000 0 NOTYPE GLOBAL DEFAULT UND _GLOBAL_OFFSET_TABLE_
``` Any idea what the problem might be? Here is the accepted answer: You are trashing your ```os.o``` (input same as output) Change: ```ld -m elf_i386 -nmagic -T os.ld os.o -o os.o ``` Into: ```ld -m elf_i386 -nmagic -T os.ld os.o -o osx ``` ``` Thanks, that fixed it. I was trying to overwrite it on purpose. Can you explain why it caused that error? – Felix B. ``` ```ld``` [and most such build commands] do not allow an input file to be specified as an output file. ```ld``` may have to read ```os.o``` several times while writing the ```-o osx``` output file. The simple construction is that we open a file for read and an output file for writing when the program starts. For what you wanted, the program would have to detect that infile and outfile are the same and create a temp copy of infile. Too difficult to guarantee in the general case. It's rarely used and would slow things down. Also, if you put the commands in a makefile: ```all: osx os.o: os.c gcc -ffreestanding -nostdlib -gdwarf-4 -m32 -ggdb3 -c os.c -o os.o osx: os.o os.ld ld -m elf_i386 -nmagic -T os.ld os.o -o osx ``` If ```ld``` produced an error, ```make``` would remove the ```osx``` file. Consider that if ```os.o``` was the output file, this would mess up the dependencies that ```make``` relies on. Comment for this answer: Thanks, that fixed it. I was trying to overwrite it on purpose. Can you explain why it caused that error?
Title: New tweet urls regex for correct character counting Tags: javascript;regex;twitter Question: I am trying to reproduce new tweet form behaviour such as correct character counting for urls. Therefore I need a correct regex that will return an array of 'true' or urls according to these examples: ```1. www.google.com 2. http://www.google.com 3. https://www.google.com 4. http://google.com 5. https://google.com 6. google.com ``` My latest discovery was: ```(http|https):\/\/[\w-]+(\.[\w-]+)+([\w.,@?^=%&:\/~+#-]*[\w@?^=%&\/~+#-])? ``` Which works almost perfectly, but it doesn't catch option 1 (with www at the beginning). I don't want a url like ```google.com``` to be valid when in ```[email protected]``` My goal is to be able to count all valid urls. Comment: @krzyk you are right. I made a mistake. Comment: Why not make `google.com` a valid address? Some sites don't have the `www.` in the beginning at all (e.g. stackoverflow.com). Comment: Please update the question to reflect that Here is the accepted answer: I decided to use the existing library from https://github.com/twitter/twitter-text/tree/master/js It works just like on the twitter website. Here is another answer: So make the ```http://``` part optional: ```(?:(http|https):\/\/)?[\w-]+(\.[\w-]+)+([\w.,@?^=%&:\/~+#-]*[\w@?^=%&\/~+#-])?
``` Comment for this answer: Now it includes also `google.com` from `[email protected]` Here is another answer: Try this: ``` ^(https|http)?(\:\/\/)?([\w\.]*)\.([\w\.]+) ``` tested in notepad++ and regex101.com Comment for this answer: So next time - add ALL PATTERNS you want to find: ^(https|http)?(\:\/\/)?([\w\@\.]*)\.([\w\.\@\S?\/?]+) Comment for this answer: Now it returns only google.com and www.google.com Tested in rubular.com Comment for this answer: They might be in a string: `www.google.com http://google.com https://google.com http://www.google.com https://www.google.com/dasd?asd=asd@asd google.com [email protected]` Here is another answer: I tested this in rubular.com (for Ruby): ```(?<![@\w])(((http|https)(:\/\/))?([\w\-_]{2,})(([\.])([\w\-_]*)){1,})([\w.,@?^=%&:\/~+#-]*[\w@?^=%&\/~+#-]) ``` For JS: ```(^(((http|https)(:\/\/))?([\w\-_]{2,})(([\.])([\w\-_]*)){1,})([\w.,@?^=%&:\/~+#-]*[\w@?^=%&\/~+#-]*)|(?:[^@])\b(((http|https)(:\/\/))?([\w\-_]{2,})(([\.])([\w\-_]*)){1,})([\w.,@?^=%&:\/~+#-]*[\w@?^=%&\/~+#-]*)) ``` Tested here: regex101 Comment for this answer: Almost. When `www.google.com` is at the beginning it returns: `.google.com`, when http/https it returns nothing, when `google.com` it returns nothing. Comment for this answer: Sorry for the delay with the answer, but I am trying to make it work in my application to test it properly. I am using javascript and I get an error: `Uncaught SyntaxError: Invalid regular expression: [here your regex] : Invalid group at :1:1` Comment for this answer: Thank you very much for your support. Unfortunately your last regex didn't work properly so I decided to use twitter-text from: https://github.com/twitter/twitter-text/tree/master/js It works so I will stick to that for now. Comment for this answer: Ah, I see. This is because lookbehind is not supported in Javascript. I assumed, from you testing it on rubular.com, that you are using Ruby.
For javascript, you can test it here: [https://regex101.com/r/xH0uR6/1](https://regex101.com/r/xH0uR6/1) I could not avoid repeating the regex for the alternation - perhaps there is a better option, but it works. `^(((http|https)(:\/\/))?([\w\-_]{2,})(([\.])([\w\-_]*)){1,})([\w.,@?^=%&:\/~+#-]*[\w@?^=%&\/~+#-]*)|(?:[^@])\b(((http|https)(:\/\/))?([\w\-_]{2,})(([\.])([\w\-_]*)){1,})([\w.,@?^=%&:\/~+#-]*[\w@?^=%&\/~+#-]*)`
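As a side note, the lookbehind idea from the last answer can be sanity-checked outside the browser. Here is a sketch in Python, whose re module does support `(?<!...)`; the pattern below is a deliberate simplification of the idea (not Twitter's actual extractor), and the sample email address is invented for the example:

```python
import re

# Sketch of "count URLs but skip e-mail addresses" via a negative lookbehind:
# a candidate host must not be preceded by '@' or a word character.
URL_RE = re.compile(
    r"(?<![@\w])"            # not preceded by @ or a word char (rules out emails)
    r"(?:https?://)?"        # optional scheme
    r"[\w-]+(?:\.[\w-]+)+"   # host, e.g. www.google.com or google.com
    r"(?:/[\w./?=&%~+#-]*)?" # optional path/query
)

text = "visit www.google.com or http://google.com, mail me at [email protected]"
urls = URL_RE.findall(text)
print(urls)  # ['www.google.com', 'http://google.com']
```

Only non-capturing groups are used, so `findall` returns the full matches; the email's domain is skipped because it is preceded by `@` or letters.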
Title: Highcharts show percentage in the center of the solid gauge chart Tags: javascript;html;css;highcharts Question: I am very new to Highcharts. I am trying to use the solid gauge. The problem is that the percentage text is not present in the center. Is there any way to put the percentage in the center of the chart? Here is the code: ```$(function () { Highcharts.chart('container', { chart: { type: 'solidgauge' }, title: { text: '' }, tooltip: { borderWidth: 0, backgroundColor: 'none', shadow: false, style: { fontSize: '16px' }, pointFormat: '{series.name}<br><span style="font-size:2em; color: {point.color}; font-weight: bold">{point.y}%</span>', positioner: function (labelWidth, labelHeight) { return { x: 200 - labelWidth / 2, y: 180 }; } }, pane: { center: ['50%', '27%'], startAngle: 0, endAngle: 360, background: [{ // Track for Stand outerRadius: '62%', innerRadius: '38%', backgroundColor: Highcharts.Color(Highcharts.getOptions().colors[2]).setOpacity(0.3).get(), borderWidth: 0, }], }, yAxis: { min: 0, max: 100, lineWidth: 0, tickPositions: [] }, plotOptions: { solidgauge: { borderWidth: '34px', dataLabels: { enabled: false }, linecap: 'round', stickyTracking: false } }, series: [{ name: 'Stand', borderColor: Highcharts.getOptions().colors[2], data: [{ color: Highcharts.getOptions().colors[2], radius: '50%', innerRadius: '50%', y: 50 }] }] }, }); ``` The x and y in the label option can set the position, but I want it to be dynamic. Secondly, I want it to always be shown. Here is the result of the code above. Comment: Thanks for the help. It really solved the problem Comment: The answer to this question might help you: [How can I style my highcharts solid gauge to have a background behind the percentage?](http://stackoverflow.com/questions/33110398/how-can-i-style-my-highcharts-solid-gauge-to-have-a-background-behind-the-percen) Comment: Something similar: https://jsfiddle.net/b5cakdbm/
Title: Mysql: Extract Data from 3 Tables using ON clause Tags: mysql Question: I have a query related to fetching records from the combination of 3 tables, in a way that the returned result will be fetched using the ON clause with the help of foreign keys. Let's assume I have three tables named table1, table2, and table3. Table: table1 ```id name t2_id t3_id 11 John 21 31 12 Doe 22 32 ``` Table: table2 ```id value 21 ABC-1 22 ABC-2 ``` Table: table3 ```id value 31 XYZ-1 32 XYZ-2 ``` In table1, t2_id and t3_id are foreign keys representing id from tables table2 and table3 respectively. Question: I want to extract the records from table1 but also get the values from table2 and table3 using their foreign keys present in table1. Desired Results: ```id name t2_value t3_value 11 John ABC-1 XYZ-1 12 Doe ABC-2 XYZ-2 ``` What I have done right now: I have written the below query for doing this task but it's not working. ```SELECT table1.id, table1.name, table2.value AS t2_value FROM table1 JOIN table2 ON (table1.t2_id=table2.id); ``` The output of the above query will be like this: ```id name t2_value 11 John ABC-1 12 Doe ABC-2 ``` But I want to combine the 3rd table's value here too. Kindly help ``` Important Note: I want to do this using a single MySQL query. ``` Here is the accepted answer: You're on the right track. You just need to add another ```join``` clause for ```table3``` and add the column you want from it to the select list: ```SELECT table1.id, table1.name, table2.value AS t2_value, table3.value AS t3_value FROM table1 JOIN table2 ON table1.t2_id = table2.id JOIN table3 ON table1.t3_id = table3.id ```
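For anyone wanting to sanity-check the accepted query, the same two-JOIN shape can be reproduced with the question's sample data in an in-memory SQLite database; the JOIN syntax is identical to MySQL's for this particular query, so this is just a convenient test harness, not the production setup:

```python
import sqlite3

# Rebuild the question's three tables in memory and run the two-JOIN query.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table2 (id INTEGER PRIMARY KEY, value TEXT);
    CREATE TABLE table3 (id INTEGER PRIMARY KEY, value TEXT);
    CREATE TABLE table1 (id INTEGER PRIMARY KEY, name TEXT,
                         t2_id INTEGER REFERENCES table2(id),
                         t3_id INTEGER REFERENCES table3(id));
    INSERT INTO table2 VALUES (21,'ABC-1'),(22,'ABC-2');
    INSERT INTO table3 VALUES (31,'XYZ-1'),(32,'XYZ-2');
    INSERT INTO table1 VALUES (11,'John',21,31),(12,'Doe',22,32);
""")

rows = con.execute("""
    SELECT table1.id, table1.name,
           table2.value AS t2_value,
           table3.value AS t3_value
    FROM table1
    JOIN table2 ON table1.t2_id = table2.id
    JOIN table3 ON table1.t3_id = table3.id
    ORDER BY table1.id
""").fetchall()
print(rows)  # [(11, 'John', 'ABC-1', 'XYZ-1'), (12, 'Doe', 'ABC-2', 'XYZ-2')]
```

Each extra lookup table follows the same pattern: one more JOIN, one more aliased column in the select list.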
Title: Add custom value in PowerShell output table Tags: powershell Question: I have a list of application web URLs, wherein I am trying to monitor the HTTP status code of each web URL with the ```Invoke-WebRequest``` command. I got the list of object members with the command below: ```Invoke-WebRequest https://example.com/123 | Get-Member ``` Here, I am using the statuscode and statusdescription fields to include them in the output: ```Invoke-WebRequest https://example.com/123 | Select-Object StatusCode, StatusDescription | ft -a ``` Now I wish to include a custom URL name in the output before statuscode and statusdescription, so that the output will look like below URL StatusCode StatusDescription --- ---------- ----------------- abc web page 200 OK Comment: How do you build the custom name? In general I'd use a calculated property with `Select-Object @{n='URL';e={'abc web page'}},StatusCode,StatusDescription` Comment: I might be horribly wrong in interpreting how to write the code, but I wrote the lines below before the original code: $URL = Add-Member -NotePropertyName URL -NotePropertyValue abc web page and Invoke-WebRequest https://example.com/123 | Select-Object URL, StatusCode, StatusDescription | ft -a Do I need to write Add-Object instead of Add-Member? Comment: Hi, welcome to stackoverflow, kindly please read [mcve] and review the post; otherwise you can expect closing because it's so general and doesn't mention any self-effort. We are here to help you, but not to do the tasks for you :). Thanks in advance. Comment: Personally, I suggest you define an array with all the addresses, then iterate over this array and print out the values (for the URL you can use the index in the array) Comment: It doesn't matter how correct or incorrect your code will be; just try to look for some info on the internet and do your best. If you get stuck on something specific, I am pretty sure you will get help there.
People just don't want to be bothered by general and broad questions, because it's hard to answer them, eg. in this case, there are many ways.. `Add-Member` is "adding a custom parameter to the variable" for further usage; eg., if you don't need/want to store values, it can be much easier, eg. Comment: Sounds cool. What have you tried to make it work? Here is the accepted answer: If you don't need to store data in variables, and just printing is fine for you: ```$URLsources = "http://fooo/fooURL1:800","http://fooo/fooURL2:800","http://fooo/fooURL2:800" #table definition $tabName = "Output table" #Create Table object $table = New-Object system.Data.DataTable "$tabName" #columns definition $col1URL = New-Object system.Data.DataColumn URL,([string]) $col2Status = New-Object system.Data.DataColumn Status,([string]) $col3Desc = New-Object system.Data.DataColumn Desc,([string]) #add columns $table.Columns.Add($col1URL) $table.Columns.Add($col2Status) $table.Columns.Add($col3Desc) foreach($url in $URLsources){ $result = Invoke-WebRequest $url #preparation of the row $row = $table.NewRow() $row.URL= $url $row.Status= $result.StatusCode $row.Desc= $result.StatusDescription #add row to the table $table.Rows.Add($row) } #print out the table $table | format-table -AutoSize ``` Comment for this answer: Welcome; just for next time, please try to investigate on your own first. Actually, with my minimal knowledge of PowerShell it took me ~15 min of work with testing :) I will appreciate it if you mark the answer as accepted.
Comment for this answer: check also @guiwhatsthat's solution; it did not work for me though, maybe another PS version or something Here is another answer: You can add everything to your output: ```Invoke-WebRequest www.google.com | Select-Object -Property @{n="URL";e={'Any Name'}},StatusCode, StatusDescription``` If you want the real url which you checked in your output you could do it like that: ```Invoke-WebRequest www.google.com | Select-Object -Property @{n="URL";e={$_.BaseResponse.ResponseUri.Host}},StatusCode, StatusDescription``` Comment for this answer: Thanks a lot... This is a simple way to solve this... Many thanks
Title: Error when EditText is blank (unable to parse ' ' as integer) Tags: java;android;numberformatexception Question: I have a button and an edittext. The user enters a number and then clicks the button. Something then happens with that number. However, I get an error when the edittext is blank. How do I fix this? The code below is my attempt, but it doesn't work, as I get the following error and my app closes: E/AndroidRuntime(324): java.lang.NumberFormatException: unable to parse '' as integer ```public void onClick(View arg0) { // TODO Auto-generated method stub switch (arg0.getId()) { case R.id.bBankDeposit: deposit(); break; case R.id.bBankWithdraw: withdraw(); break; } } public void deposit() { String d = etDepositAmount.getText().toString(); Integer depositAmount = Integer.valueOf(d); if (depositAmount > playerCash | depositAmount <= 0 | etDepositAmount.getText().toString() == "") { new AlertDialog.Builder(this).setTitle("Wait") .setMessage("Please enter a valid deposit value.") .setPositiveButton("OK", null).show(); etDepositAmount.setText(""); } else { int newBankBalance = playerBalance + depositAmount; playerBalance = newBankBalance; playerCash = (playerCash - depositAmount); bankBalance.setText("Bank Balance: $" + playerBalance); cash.setText("Cash in hand: $" + playerCash); etDepositAmount.setText(""); Player stats = new Player(this); stats.open(); stats.bankDeposit(playerId, playerCash, playerBalance); stats.close(); } } ``` } Here is the accepted answer: The idea behind your solution is correct, you just have some minor syntax errors. When you do ```Integer.valueOf(string)``` you need to handle a number format exception. It's just Java's way of letting you know it can reliably return an Integer for the given string. ```|``` is a bitwise OR operation. Presumably you want ```||```, which compares using boolean logic. Secondly, when comparing strings you want to compare by value, not by reference.
```==``` compares by reference, while ```someString.equals(someOtherString)``` will actually check the contents of the string. You really don't need to do that though because you've already parsed it into an Integer. With those fixes you'll figure out if the input is incorrect and display an alert if necessary. So - the full solution is .... ```public void deposit() { String d = etDepositAmount.getText().toString(); Integer depositAmount = 0; try { depositAmount = Integer.valueOf(d); } catch(NumberFormatException ex) { // Uh oh! Bad input! } if (depositAmount > playerCash || depositAmount <= 0) { new AlertDialog.Builder(this).setTitle("Wait") .setMessage("Please enter a valid deposit value.") .setPositiveButton("OK", null).show(); etDepositAmount.setText(""); } else { int newBankBalance = playerBalance + depositAmount; playerBalance = newBankBalance; playerCash = (playerCash - depositAmount); bankBalance.setText("Bank Balance: $" + playerBalance); cash.setText("Cash in hand: $" + playerCash); etDepositAmount.setText(""); Player stats = new Player(this); stats.open(); stats.bankDeposit(playerId, playerCash, playerBalance); stats.close(); } } ``` Here is another answer: You have to check for empty values of the edit-text to solve the ```NumberFormatException```. See this code to check the values for ```null``` or ```blank``` ```if(!TextUtils.isEmpty(text)) { //Use it }else { //display validation message } ``` Here is another answer: ```if (d != null && !d.equals("")){ Integer depositAmount = Integer.valueOf(d); } ``` Comment for this answer: Or, as of API level 9, you can use [isEmpty()](http://developer.android.com/reference/java/lang/String.html#isEmpty()). And you may want to include an `else` statement in case `d` is empty. Here is another answer: You have to catch ```NumberFormatException``` ```String s =""; int i=0; try{ i=Integer.parseInt(s); }catch(NumberFormatException ex){ //Some toast or alert } ```
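The accepted answer's guard pattern (parse, catch the format error, then validate the range) is language-agnostic. Here is the same logic sketched in Python for clarity; the function name and rules are illustrative, mirroring the Java code rather than reproducing it:

```python
# Sketch: parse-then-validate, the same shape as the accepted Java answer.
def parse_deposit(text, player_cash):
    """Return the deposit as an int, or None if the input is invalid."""
    try:
        amount = int(text)          # like Integer.valueOf(d)
    except ValueError:              # like catching NumberFormatException
        return None                 # blank or non-numeric input
    if amount <= 0 or amount > player_cash:
        return None                 # out of range: reject as well
    return amount

print(parse_deposit("", 100))     # None  (blank input no longer crashes)
print(parse_deposit("250", 100))  # None  (more than the player has)
print(parse_deposit("50", 100))   # 50
```

The key point carried over from the answer: the parse failure and the range check are two separate rejections, and the parse must be attempted before any comparison uses the value.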
Title: Remove SVN content from old Java files Tags: java;svn;rename Question: I have to make changes in an old project we did years ago and our company used SVN in that time (now we have GIT). I tried to run that project in Eclipse, but I discovered that all files have extensions *.java,v (that was easy to fix), but also all files include content like this ```head 1.2; access; symbols HR_struts:+1-725-662-8645 Root_HR_struts_start:1.2 Konec_hybridu_060602:1.2; locks; strict; comment @# @; 1.2 date 251.243.225.55.48.50; author moncka; state Exp; branches; next 1.1; ``` So I suppose that is the relic of our SVN usage. And my question is how to delete this content by using any tool (project contains thousands of files). Thanks a lot for help. P.S. Project also includes files like .jsp, .xml and so on which are spoiled too. Here is the accepted answer: If you have a lot of files with names ending ```,v``` it sounds like it was CVS rather than Subversion that was being used and that what you have is a copy of the repository rather than just a working copy of the project. You don't want to just strip this content from the files because that will not leave you with an intact copy of the most recent version of each file. You could try setting up a CVS server for the repository and doing a checkout to get a clean copy of the latest version of the source files. Alternatively, now that you're using Git it might be worth investigating ```git-cvsimport```. Converting a repository with that is fairly straightforward. Comment for this answer: Thinking about it, going down the `git-cvsimport` route is possibly a better idea as that will also preserve the history of all the files too. As a quick way to check you have a complete copy of the repository, do you have files with names like `modules`, `cvsrc`, `config` etc at the top level? 
Comment for this answer: If you don't see those files then you might not have the complete repository and might need to do individual files as suggested by @ChristophSeibert Comment for this answer: If you're OK to email me one of the `,v` files I'll take a look over the weekend and see if I can do anything with it (contact details are in my profile.) Comment for this answer: Yeah, that's possible, we also used CVS according to my colleagues. Well, I would like to avoid setting up a CVS server, because we don't have one anymore and it sounds to me like overkill. I will try importing it from Git, that could work. And don't worry about having the most recent version, these were just old files sitting on the hard drive, nobody made any changes to them for ages. Thanks for the advice Comment for this answer: No, I don't see anything like that in the project folder. Well, I will go with git-cvsimport and let you know what happened:-) Comment for this answer: Ok, so after some investigation I've discovered that we still have a CVS server installed under Linux, but it's not running and I can't figure out how to pull that project or where to find it in the system. I tried to look in the CVSROOT directory and there's nothing interesting there. So any help is appreciated, google is really not helping me out... Comment for this answer: You could also try using RCS directly, since that was used by CVS. I'm not sure if CVS stored additional metadata somewhere. If you have RCS installed, try `co foo.java` if you have a file `foo.java,v`. If you want to edit the file, use `co -l foo.java` - that will lock the revision. See `man rcsintro` for more info.
Title: ord function execution in loop Tags: python;zipper Question: I have two folders, and each folder contains 196 files; they are all in (```'\xae\xae\xb4\x9e\x8f\x9f\xba\xc1\xd5\xbd\xcd\xa1\xb7\'```) format. I am trying to read this data and convert it into human-readable form. I want to combine the data of the corresponding files from the two folders. I tried this using the ```ord()``` function, but while trying to retrieve a single file with the expected output, I am getting wrong values. I tried to extract the first element of the read, but the output I am getting is the first value of all the files. Here is my code:

```
for file_name, files in izip(list_of_files, list_of_filesO):
    fi = open(file_name,"r").read()
    fo = open(files,"r").read()
    f = [open("/home/vidula/Desktop/project/ori_tri/input_%i.data" %i,'w') for i in range(len(list_of_files))]
    read = [ord(i) for i in fi]
    reado = [ord(i) for i in fo]
    zipped = zip(read, reado)
    print read[0]
```

Expected output:

```
125,25
36,54
98,36
78,56
```

Thank you in anticipation. Comment: "they all are in ('\xae\xae\xb4\x9e\x8f\x9f\xba\xc1\xd5\xbd\xcd\xa1\xb7\') format." I have absolutely no idea what you mean by this. If I opened your file in Notepad, would I see the quotes and backslashes? Would it all be one long string? Comment: it is a hex file, cannot be read in notepad. Here is another answer: ```[ord(i) for i in fi]``` Iterating over a file iterates over lines of the file. It sounds like you want to iterate a character at a time. For that, you could try explicitly ```read```ing each file as a single chunk, and then iterating over the resulting strings. If you do not want the whole file contents in memory, you could try making a custom iterator, like:

```
def each_char_of(a_file):
    while True:
        x = a_file.read(1)
        if not x:
            return
        yield x
```

Comment for this answer: I am sorry sir, I am a newbie in programming; I don't understand what I should do.
I have 2 directories containing 196 files each. Each file holds 121 values in hex format. I am trying to convert hex to int using ord() and write the result into a new output file. Each output file should have 121 value pairs taken from the 2 input files (like 123,456 56,23 89,56), with 196 files in the new directory, but instead I am getting 196 values in each file of the new directory. Can you please suggest what should be done? Thank you
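The comment thread above is easier to act on with a concrete sketch. This is a minimal illustration, not the asker's exact directory layout, and written for Python 3, where iterating over ```bytes``` already yields integers so ```ord()``` is unnecessary: read two files, pair their byte values, and write one "a,b" pair per line.

```python
# Sketch: pair up the byte values of two input files and write them
# out as "a,b" lines. zip() stops at the shorter file, mirroring the
# izip() in the question.
def write_pairs(path_a, path_b, out_path):
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        data_a = fa.read()  # whole file as bytes; each element is an int
        data_b = fb.read()
    with open(out_path, "w") as out:
        for a, b in zip(data_a, data_b):
            out.write(f"{a},{b}\n")
```

Looping this over the 196 matching file pairs, one output file per pair, gives the per-file pairing the asker describes instead of one value from every file.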
Title: unable to install pyqt4 with multiple methods Tags: python;windows;powershell;makefile;pyqt4 Question: I've been having a ton of issues in general trying to install packages on Python 3 on Windows. I don't know if the issues are related or not; I'm just going to provide as much detail about them as I can. It's not just PyQt4: a couple of packages are giving me issues with installing. I have a Python 3.5.1 virtual environment, and when I try to install PyQt4 it gives me this error (using pip, pip3, with and without --pre):

```
Could not find a version that satisfies the requirement pyqt4 (from versions: )
No matching distribution found for pyqt4
```

This isn't the only package that has given me this exact same error, but I can't recall atm what other packages gave me issues. (Another issue I often get with other packages is error: Unable to find vcvarsall.bat. I've been through all the questions regarding that error; installing Visual C fails for me). So I decided to try to download the zip and follow this guide: http://pyqt.sourceforge.net/Docs/PyQt4/installation.html , which has me following this guide to install SIP as a requirement: http://pyqt.sourceforge.net/Docs/sip4/installation.html#downloading . So now I'm having more problems. Part of the installation process for SIP requires using a makefile. So I downloaded make from Cygwin. I run python configure.py to generate the makefile (in the proper virtualenv), and I run make, but make never ends. It keeps printing this over and over again until I kill PowerShell or cmd:

```
cd sipgen
/usr/bin/make
make[453]: Entering directory '/cygdrive/c/Users/<username>/.virtualenvs/<envname>/sip-4.17'
```

That directory is the location of the makefile. I'm completely inexperienced with make, so any help would be appreciated. All of these problems occur while using Python 3. My Python 2 virtual environments are able to install most packages fine.
Edit: My mistake, in this case it's giving the same pip error with python 2 and 3. Comment: Why aren't you using the [binary packages](http://www.riverbankcomputing.com/software/pyqt/download)? Comment: I think you should clarify what question you want answering. Are you asking "how to compile PyQt in a virtual env on windows?" or would you prefer an answer to "How to make PyQt, installed via binary, show up in pip freeze and pycharm?". Your question currently has a lot of statements that cover problems, but no clear explanation of what you are actually aiming for. Comment: It just installs the directory in the site-packages, but it's not showing up on freeze. Comment: The package doesn't show up on pip freeze, nor in pycharm ide package list Comment: Try to install the package from the Windows pre-compiled binaries: [~gohlke](http://www.lfd.uci.edu/~gohlke/pythonlibs/#pyqt4)
Title: Migrate database and data warehouse into AWS Tags: amazon-web-services;amazon-s3;aws-lambda;amazon-redshift;amazon-kinesis-firehose Question: I want to migrate our database and data warehouse into AWS. Our on-prem database is Oracle and we use Oracle Data Integrator for data warehousing in IBM AIX. My first thought was to migrate our database with AWS DMS (Data Migration Services) into a staging point (S3) and then using Lambda (for creating the trigger when data is updated, deleted or inserted) and Kinesis Firehose (For streaming and do the ETL) to send the data into Redshift. The data in Redshift must be the replica of our on-prem data warehouse (containing facts and dimensions, aggregation and multiple joins) and I want whenever any changes happened in the on-prem database, it automatically updates the AWS S3 and Redshift so I can have near real-time data in my Redshift. I was wondering if my architecture is correct and/or is there a better way to do it? Thank you Comment: Why not just have AWS DMS directly insert the data into Redshift? Comment: We do not want to use ODI anymore so we want to migrate the ETL processes into AWS.
Title: Command in Lumen is not connecting with AWS RDS Tags: amazon-web-services;lumen;amazon-elastic-beanstalk Question: I have a Lumen 5.7 application in Elastic Beanstalk with PHP 7.2; there are several endpoints that are working correctly, getting and inserting data in the RDS. But a Lumen command (that I created) is trying to connect to localhost, ignoring the environment configuration. I execute this command via ssh. I received this error:

```
[2019-03-29 18:37:01] production.ERROR: PDOException: SQLSTATE[HY000] [2002] Connection refused in /var/app/current/vendor/doctrine/dbal/lib/Doctrine/DBAL/Driver/PDOConnection.php:27
Stack trace:
#0 /var/app/current/vendor/doctrine/dbal/lib/Doctrine/DBAL/Driver/PDOConnection.php(27): PDO->__construct('mysql:host=127....', 'forge', '', Array)
#1 /var/app/current/vendor/illuminate/database/Connectors/Connector.php(67): Doctrine\DBAL\Driver\PDOConnection->__construct('mysql:host=127....', 'forge', '', Array)
#2 /var/app/current/vendor/illuminate/database/Connectors/Connector.php(46): Illuminate\Database\Connectors\Connector->createPdoConnection('mysql:host=127....', 'forge', '', Array)
#3 /var/app/current/vendor/illuminate/database/Connectors/MySqlConnector.php(24): Illuminate\Database\Connectors\Connector->createConnection('mysql:host=127....', Array, Array)
```

It is trying to connect to 127.0.0.1 with the user forge. There is no .env file on the server. Locally the command works correctly. Can someone help me please?
Edit: I've added the config/database.php file with the following code

```
return [
    'default' => 'mysql',
    'connections' => [
        'mysql' => [
            'driver' => 'mysql',
            'host' => env('DB_HOST'),
            'database' => env('DB_DATABASE'),
            'username' => env('DB_USERNAME'),
            'password' => env('DB_PASSWORD'),
            'charset' => 'utf8',
            'collation' => 'utf8_unicode_ci',
        ]
    ],
    'migrations' => 'migrations',
];
```

And now I get this error:

```
production.ERROR: InvalidArgumentException: Database hosts array is empty. in /var/app/current/vendor/illuminate/database/Connectors/ConnectionFactory.php:203
```

Edit 2 I deployed again and I received this error now:

```
[2019-03-30 01:12:53] local.ERROR: PDOException: SQLSTATE[HY000] [1045] Access denied for user 'user'@'xx.xx.xx.xx' (using password: YES) in /var/app/current/vendor/doctrine/dbal/lib/Doctrine/DBAL/Driver/PDOConnection.php:27
Stack trace:
#0 /var/app/current/vendor/doctrine/dbal/lib/Doctrine/DBAL/Driver/PDOConnection.php(27): PDO->__construct('mysql:host=dev-...', 'user', 'mypass...', Array)
#1 /var/app/current/vendor/illuminate/database/Connectors/Connector.php(67): Doctrine\DBAL\Driver\PDOConnection->__construct('mysql:host=dev-...', 'user', 'mypass...', Array)
#2 /var/app/current/vendor/illuminate/database/Connectors/Connector.php(46): Illuminate\Database\Connectors\Connector->createPdoConnection('mysql:host=dev-...', 'user', 'mypass...', Array)
#3 /var/app/current/vendor/illuminate/database/Connectors/MySqlConnector.php(24): Illuminate\Database\Connectors\Connector->createConnection('mysql:host=dev-...', Array, Array)
#4 /var/app/current/vendor/illuminate/database/Connectors/ConnectionFactory.php(182): Illuminate\Database\Connectors\MySqlConnector->connect(Array)
```

The connection data is correct, but the user appears with the server IP (```user@xx.xx.xx.xx```) and I don't know if this is correct. Here is the accepted answer: The password included a '$' character.
For some reason Lumen truncated the password at this character. The character was removed from the password and everything works again.
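A plausible explanation, which is my assumption rather than something stated in the answer: in dotenv-style files and shell-exported variables, an unquoted `$` can start a variable reference, so everything from the `$` onward may be expanded away and the value silently truncated. A hypothetical illustration:

```ini
# hypothetical .env values, not taken from the question
DB_PASSWORD=secret$123     # "$123" may be expanded as an (undefined) variable, leaving just "secret"
DB_PASSWORD='secret$123'   # single quotes keep the literal value, including the $
```

If the platform's environment-variable mechanism does not support quoting, the safest workaround is the one the answer took: avoid `$` in the password.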
Title: Issues with n+1 queries in Rails Active Record Tags: ruby-on-rails;activerecord Question: I'm looking to assign past, current and future requests of a user to variables. Here is my code:

```
@user = User.find_by(email: params[:email])
req = Request.where("user_id = ? OR item_id = ?", @user.id, @user.item.first.id)
@next = req.where("start_datetime >= ?", Date.today).last(params[:offset].present? ? params[:offset] : "5")
@current = req.where("start_datetime > ?", Date.today.beginning_of_day).where("end_datetime > ?", (Time.now - 30.minutes)).where(status: 'approved').order(:start_datetime).first || req.where("start_datetime > ?", Time.now).where(status: 'approved').order(:start_datetime).first
@past = req.where("start_datetime < ?", Date.today).last(5)
```

The issue is that I get hundreds of log lines like the one below when doing this:

```
CACHE (0.0ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT $2 [["id", 39], ["LIMIT", 1]]
```

I'm not sure why this is the case or how I can solve it. Comment: this is the Active Record cache, most likely the current_user method. It doesn't hit the db, so it doesn't hurt performance Comment: did you define current_user yourself? In your context, I guess the current user has id 39 Comment: weird because you get the user out of email, and the cached query is by id Comment: Oh ok so when it's written CACHE next to it, it can't hurt performance? It's still a mystery to me why I have hundreds of lines of these for just a few queries. @apneadiving Comment: I'm using this for an API, the current_user is the @user above.
Title: Await all Mongo queries with Node.js Tags: node.js;mongodb;promise Question: The code below has a flaw, as I am getting an array of ```undefined```:

```
let filters = [];
async function getFilters(tiers) {
  return await Promise.all(
    tiers.map(async t => {
      let id = new ObjectId(t.filter);
      filters.push(
        await conn.collection('TierScheduleFilter').find({ _id: id }).toArray(function(err, filter) {
          if (err || !filter) {
            reject('no filter || error');
          }
          return filter;
        });
      );
    });
  );
}
await getFilters(tiers);
console.log(filters); // 4 filters => [ undefined, undefined, undefined, undefined ]
```

The code should retrieve all the filters, but they are all undefined values. Comment: Is this all inside another async function? You can't declare `await`s outside async functions. Comment: Something like `getFilters(tiers).then(x => console.log(filters))` (note: pseudocode) will possibly work for you. Because your code is no longer synchronous, your console.log is being run before the rest of your code has a chance to complete. Comment: ... if you don't force the console.log to run _after_ the getFilters call is complete, it will always print undefined as that stuff hasn't finished evaluating at that point in the code. Here is another answer: This one seems to be a proper approach:

```
let filters = [];
async function getFilters(tiers) {
  return await Promise.all(
    tiers.map(async t => {
      let id = new ObjectId(t.filter);
      try {
        return await conn.collection('TierScheduleFilter').findOne({ _id: id })
      } catch (e) {
        return e;
      }
    }))
}
```
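To make the fix in the answer above concrete, here is a runnable sketch with the Mongo call replaced by a hypothetical async stub (```fakeFindOne``` stands in for ```conn.collection(...).findOne```, which is not available outside a real database connection). The point is that ```Promise.all``` collects the values *returned* from the async mapper; pushing into an outer array from inside the callbacks is what produced the array of ```undefined``` in the question.

```javascript
// Sketch: return values from the async mapper and let Promise.all
// gather them, instead of pushing into a shared array.
async function getFilters(tiers, findOne) {
  return Promise.all(
    tiers.map(async (t) => {
      try {
        return await findOne({ _id: t.filter }); // return, don't push
      } catch (e) {
        return e;
      }
    })
  );
}

// Hypothetical stand-in for conn.collection('TierScheduleFilter').findOne
const fakeFindOne = async (query) => ({ id: query._id, name: `filter-${query._id}` });

getFilters([{ filter: 1 }, { filter: 2 }], fakeFindOne).then((filters) => {
  console.log(filters.length); // 2, with no undefined entries
});
```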
Title: Underline text with line when selected Tags: javascript;html;css;selection;underline Question: The idea: I want to have the text underlined (preferably in a color gradient like here) when selecting a text section, instead of changing the background. According to Mozilla, ```text-decoration``` is possible, so also ```text-decoration: underline```. The problem: ```text-decoration``` with ```::selection``` does not work for me (Chrome & Edge) - solved. The solution idea: Is it at all possible to underline the text as a gradient when making a selection (in CSS)? One idea of mine is to use ```JavaScript``` to see if the text is selected and then add an ```id="underline"``` or so between the text, which then contains the CSS code for the gradient line. Progress: The whole thing cannot be achieved with pure CSS. I have now changed the code so that when a selection is made, the text is also suitably underlined. Problem with the progress: The line remains even after clicking away the selection.
The current code:

```
function wrapSelectedText() {
  var selectedText = window.getSelection().getRangeAt(0);
  var selection = window.getSelection().getRangeAt(0);
  var selectedText = selection.extractContents();
  var span = document.createElement("span");
  /* Styling */
  span.style.backgroundImage = "linear-gradient(120deg, #84fab0 0%, #8fd3f4 100%)";
  span.style.backgroundRepeat = "no-repeat";
  span.style.backgroundSize = "100% 0.2em";
  span.style.backgroundPosition = "0 88%";
  span.appendChild(selectedText);
  selection.insertNode(span);
}
document.onmouseup = document.onkeyup = document.onselectionchange = function() {
  wrapSelectedText();
};
```

```
::selection {
  background: none;
}
```

```<p>This is an example text.</p>```

Here is another answer: The text-decoration needs to exist already on the element; by defining ```text-decoration: underline overline transparent;``` on the ```p``` element it does work.

```
::selection {
  background: yellowgreen; /* To check if the text is selected */
  text-decoration: underline overline #FF3028; /* Only works on elements that have the same decoration properties set already */
}
p {
  color: black;
  font-size: 18px;
  text-decoration: underline overline transparent;
}
```

```<p>This is an example text.</p>```

Comment for this answer: Thank you, that helps me a lot :) To underline only the part that is also marked, you most likely need JavaScript then, I guess Comment for this answer: @JueK3y indeed, because underline applies to the entire node. In order to accomplish this (and any fancy custom styling like the gradient you want) you could wrap the selected text with an inline element (use the ‘mark’ tag and style it with a class). The JS solution is described in the following SO: https://stackoverflow.com/questions/6328718/how-to-wrap-surround-highlighted-text-with-an-element/6328906#6328906
Title: JSP: how do I store an image from an HTML img tag to the database as a blob Tags: java;mysql;jsp Question: I have a web application for capturing an image from a webcam using Flash Player. After capturing the image from the webcam I display its preview in an HTML img tag, and I also capture the image URL in hidden form fields, which I use to store that image in my database. My question is how do I store that image from the image tag as a blob in my database? Most of the solutions I reviewed for storing an image in a database involved uploading the image from the file system and then storing and retrieving it from the database. Here is another answer: In order for this to work, you need to create a ```<form enctype="multipart/form-data">```. In your HTML, insert a file upload control. Then you have to select the file to upload (this cannot be done automatically because of security reasons in most browsers). And then submit your form. When the form is submitted, you can read the image from the request and store it in the db ( how to upload a image in jsp and store database as blob ) Comment for this answer: you can only send data to a server using a form. Data submitted with the form are only of type ```input```. The ```img``` tag is not supported Comment for this answer: I know this way of storing an image in the database; I would like to know how to store the image if I have the image in my form's HTML img tag.
Title: How to remove multiple versions of python from MacOS Catalina 10.15.4 Tags: python;macos;duplicates Question: I currently have python versions 2.7, 3.8, and 3.9 on my Mac and it just causes problems in package installing, etc. I do not know how to remove all of them and reinstall python from the beginning in a clean way this time. What should I delete? Comment: What problems, exactly? Better to learn how to correctly manage multiple Python installations than to indiscriminately remove things. Comment: Maybe refer to this post: https://apple.stackexchange.com/questions/284824/remove-and-reinstall-python-on-mac-can-i-trust-these-old-references.
Title: node.js exec command quote Tags: node.js;exec Question: I have this command on Linux to get my IP address:

```connmanctl services ethernet_00142d000000_cable | sed -n -e '/IPv4 =/s/.*Address=\([^,]*\).*/\1/p'```

I want to use it in an exec function using Child Process like this:

```
const exec = require('child_process').exec;
exec(' connmanctl services ethernet_00142d000000_cable | sed -n -e '/IPv4 =/s/.*Address=\([^,]*\).*/\1/p' ', (error, stdout, stderr) => {
  if (error) {
    console.error(`exec error: ${error}`);
    return;
  }
  console.log(`stdout: ${stdout}`);
  console.log(`stderr: ${stderr}`);
});
```

I get nothing, and I don't know how I can exec a command containing quotes. Thanks for your help. Here is another answer: One of the easiest options is to wrap the command in double quotes, but this may cause an ESLint error because of mixing double and single quotes.

```
const exec = require('child_process').exec;
exec(" connmanctl services ethernet_00142d000000_cable | sed -n -e '/IPv4 =/s/.*Address=\([^,]*\).*/\1/p' ", (error, stdout, stderr) => {
  if (error) {
    console.error(`exec error: ${error}`);
    return;
  }
  console.log(`stdout: ${stdout}`);
  console.log(`stderr: ${stderr}`);
});
```

Another option is to use template literals: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals

```
const exec = require('child_process').exec;
exec(` connmanctl services ethernet_00142d000000_cable | sed -n -e '/IPv4 =/s/.*Address=\([^,]*\).*/\1/p' `, (error, stdout, stderr) => {
  if (error) {
    console.error(`exec error: ${error}`);
    return;
  }
  console.log(`stdout: ${stdout}`);
  console.log(`stderr: ${stderr}`);
});
```
Title: Can JMeter replay a test in a browser? Tags: jmeter;jmeter-plugins;katalon-studio Question: Is it possible to get JMeter to replay its recordings via a browser, so one can see the replay in action. Katalon can do this, but not sure about JMeter. Thanks in advance. Here is another answer: As per JMeter project main page: ``` JMeter is not a browser, it works at protocol level. As far as web-services and remote services are concerned, JMeter looks like a browser (or rather, multiple browsers); however JMeter does not perform all the actions supported by browsers. In particular, JMeter does not execute the Javascript found in HTML pages. Nor does it render the HTML pages as a browser does (it's possible to view the response as HTML etc., but the timings are not included in any samples, and only one sample in one thread is ever displayed at a time). ``` So JMeter doesn't actually kick off a real browser, it only sends HTTP Requests like real browsers, but it doesn't render response, just measures time from sending request to receiving the last byte of the response. Therefore as of current version JMeter 5.0 it is not possible The only way you can visualize the test results and see the request and response details is using View Results Tree listener. By default in ```Text``` mode you can see the source HTML code of the page JMeter hits However in ```HTML```, ```HTML (download resources)``` and ```Browser``` modes you should be able to see the rendered response:
Title: 'No matching manifest for unknown in the manifest list entries' when pulling nanoserver:1903 Tags: docker;docker-for-windows Question: I want to pull microsoft-windows-nanoserver for my Windows container on Windows 10 Pro. My environment:

```
Docker Desktop
Version: 2.0.0.3 (31259)
Channel: stable
Sha1: 8858db33c8692b69de9987a5d672798d778735b2
OS Name: Windows 10 Pro
Windows Edition: Professional
Windows Build Number: 17763

Client: Docker Engine - Community
Version: 18.09.2
API version: 1.39
Go version: go1.10.8
Git commit: 6247962
Built: Sun Feb 10 04:12:31 2019
OS/Arch: windows/amd64
Experimental: false

Server: Docker Engine - Community
Engine:
Version: 18.09.2
API version: 1.39 (minimum version 1.24)
Go version: go1.10.6
Git commit: 6247962
Built: Sun Feb 10 04:28:48 2019
OS/Arch: windows/amd64
Experimental: true
```

What's strange is: pulling 1803 is ok:

```
C:\>docker pull mcr.microsoft.com/windows/nanoserver:1803
1803: Pulling from windows/nanoserver
e46172273a4e: Pull complete
8f7ed89f9e35: Pull complete
Digest: sha256:bc5c1878a69c4538d55bc74e50b7dbafafff1a373120e624e8bad646a0a505dc
Status: Downloaded newer image for mcr.microsoft.com/windows/nanoserver:1803
```

But pulling 1903 is not ok:

```
C:\>docker pull mcr.microsoft.com/windows/nanoserver:1903
1903: Pulling from windows/nanoserver
no matching manifest for unknown in the manifest list entries
```

What I notice is that there is a table on its Docker Hub page:

```
Tags  Architecture  Dockerfile     OsVersion       CreatedTime          LastUpdatedTime
1903  multiarch     No Dockerfile  10.0.18362.239  05/21/2019 18:01:07  07/09/2019 18:29:39
1803  multiarch     No Dockerfile  10.0.17134.885  10/05/2018 22:06:26  07/09/2019 17:41:59
```

Does the OsVersion mean the ```docker host os's version``` or ```my container's distribution's version```? You can see my Windows host OS is ```17763```; could this be the reason I cannot pull ```1903```, or is there another reason? Additionally, if the above guess is correct, then why does this happen?
As far as I know, containers just share the host's kernel and shouldn't care about the OS version; meanwhile, ```docker for windows``` on ```windows10``` uses ```hyper-v```, so why does it care about my Windows OS version? I really don't want to upgrade my OS again and again every time I want to use a newer container image... I hope my guess is wrong; is there anything I missed? Here is the accepted answer: I found the answer. After I execute ```docker pull mcr.microsoft.com/windows/nanoserver:1903```, I find there is a debug log in ```C:\Users\user\AppData\Local\Docker\log.txt``` which says:

```
debug: a Windows version 10.0.18362-based image is incompatible with a 10.0.17763 host
```

So it confirms my guess: my issue happens because I use an old Windows 10 version, and I have to upgrade my Windows 10 to at least ```10.0.18362.239``` to use ```nanoserver:1903```. As to why this is necessary, the official Microsoft explanation says:

```
Windows Server 2016 and Windows 10 Anniversary Update (both version 14393) were the first Windows releases that could build and run Windows Server containers. Containers built using these versions can run on newer releases such as Windows Server version 1709, but there are a few things you need to know before you start. As we've been improving the Windows container features, we've had to make some changes that can affect compatibility. Older containers will run the same on newer hosts with Hyper-V isolation, and will use the same (older) kernel version. However, if you want to run a container based on a newer Windows build, it can only run on the newer host build.
```

From this it seems Microsoft is still in the process of improving Windows container features, so to use a container based on a newer Windows build, the host Windows OS has to be upgraded (maybe also related to some Hyper-V upgrade, I guess).
Title: rexx reading file data onto panel Tags: panel;rexx Question: Can you please suggest a manual or example code showing how I can read the contents of a file to be displayed on a REXX panel? The number of lines in the file can vary, so I cannot lay the panel out statically. Thanks, Samuel Mathews. Comment: should we assume you are working on z/OS? Here is another answer: In z/OS, to read the file in REXX use the EXECIO command, i.e.

```
"EXECIO * DISKR myindd (STEM fileContentsVar."
```

This reads the file into a stem variable (fileContentsVar.0 holds the number of records and fileContentsVar.1 ... hold the actual data). You could store the file contents in an ISPF table and display the table using the TBDISPL command. The REXX code will be roughly:

```
address ispexec
'tbcreate myfile names(line)'
do i=1 to fileContentsVar.0
   line = fileContentsVar.i
   'tbadd myfile'
end
'tbtop myfile'
'tbdispl mypanel'
'tbend myfile'
```

For an example of a table-panel definition see http://pic.dhe.ibm.com/infocenter/zos/v1r12/index.jsp?topic=%2Fcom.ibm.zos.r12.f54dg00%2Fispzdg8040.htm A table-panel would look like:

```
************************************************************
* )Attr                                                    *
* @ Type(output) Intens(low) Just(asis) Caps(off)          *
* )Body                                                    *
* -------------------- ????????????????? ----------------- *
* +Command ==>Cmdfld            +Scroll ==>_samt+          *
* +                                                        *
* This table shows ...                                     *
*                                                          *
*   Line                                                   *
* )Model                 * ---- The model section holds the
* @line                + *      Table display section
*                        *
* )Init                                                    *
* &samt=page                                               *
* )Proc                                                    *
* )End                                                     *
************************************************************
```

Comment for this answer: thanks, I will check and let you know the updates... yes, I am working on z/OS
Title: Making HTTP requests isn’t working in React Native iOS Tags: javascript;ios;react-native;http Question: I am working on a group project where we have created a React Native app. Two of my team members worked on the server-side of the project and I am working on the client-side, the app works perfectly on Android but the http requests aren't working in iOS. I tried searching for solutions, most of them said that I need to add a few lines to the info.plist file, I tried all that but none of that worked so hopefully no one will mark this question as a duplicate. I am stuck in the Login page of the app, here is the code: UserLogin.js ```// This function is called when the login button is pressed submit=() =&gt; { // Some validations if(Object.keys(this.state.success).length==0){ const URL="http://574-265-6100:4000/userLogin" const loginconfirm = async () =&gt; { try { return await axios.post(URL,this.state) } catch (error) { console.error(error) } } const getloginconfirm = async () =&gt; { const confirm = await loginconfirm() if (confirm.data.message=="success") { alert("Successfully logged in") UserInfo.setName(confirm.data.name); UserInfo.setId(confirm.data.id); if(confirm.data.category=="Exporter"){ const {navigation} = this.props; navigation.navigate('ScreenMove') //Should go to ScreenMove.js }else{ const {navigation} = this.props; navigation.navigate('FarmerHomePage') } } else{alert("The username does not exist or password does not match the username")} } getloginconfirm(); } ``` Here's my info.plist ```<?xml version="1.0" encoding="UTF-8"?&gt; <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"&gt; <plist version="1.0"&gt; <dict&gt; <key&gt;CFBundleDevelopmentRegion</key&gt; <string&gt;en</string&gt; <key&gt;CFBundleDisplayName</key&gt; <string&gt;UNAGI</string&gt; <key&gt;CFBundleExecutable</key&gt; <string&gt;$(EXECUTABLE_NAME)</string&gt; <key&gt;CFBundleIdentifier</key&gt; 
<string>$(PRODUCT_BUNDLE_IDENTIFIER)</string>
	<key>CFBundleInfoDictionaryVersion</key>
	<string>6.0</string>
	<key>CFBundleName</key>
	<string>$(PRODUCT_NAME)</string>
	<key>CFBundlePackageType</key>
	<string>APPL</string>
	<key>CFBundleShortVersionString</key>
	<string>1.0</string>
	<key>CFBundleSignature</key>
	<string>????</string>
	<key>CFBundleURLTypes</key>
	<array>
		<dict/>
	</array>
	<key>CFBundleVersion</key>
	<string>1</string>
	<key>LSRequiresIPhoneOS</key>
	<true/>
	<key>NSAppTransportSecurity</key>
	<dict>
		<key>NSAllowsArbitraryLoads</key>
		<true/>
		<key>NSExceptionDomains</key>
		<dict>
			<key>localhost</key>
			<dict>
				<key>NSIncludesSubdomains</key>
				<false/>
				<key>NSExceptionAllowsInsecureHTTPLoads</key>
				<true/>
			</dict>
		</dict>
	</dict>
	<key>NSLocationWhenInUseUsageDescription</key>
	<string></string>
	<key>UILaunchStoryboardName</key>
	<string>LaunchScreen</string>
	<key>UIRequiredDeviceCapabilities</key>
	<array>
		<string>armv7</string>
	</array>
	<key>UISupportedInterfaceOrientations</key>
	<array>
		<string>UIInterfaceOrientationPortrait</string>
		<string>UIInterfaceOrientationLandscapeLeft</string>
		<string>UIInterfaceOrientationLandscapeRight</string>
	</array>
	<key>UIViewControllerBasedStatusBarAppearance</key>
	<false/>
</dict>
</plist>
```
Here's the error I'm getting:
```
2020-04-29 20:02:04.589406+0530 UNAGI[30263:604617] [] nw_socket_handle_socket_event [C8:2] Socket SO_ERROR [60: Operation timed out]
2020-04-29 20:02:04.590542+0530 UNAGI[30263:604617] Connection 8: received failure notification
2020-04-29 20:02:04.592849+0530 UNAGI[30263:604617] Connection 8: failed to connect 1:60, reason -1
2020-04-29 20:02:04.594920+0530 UNAGI[30263:604617] Connection 8: encountered error(1:60)
2020-04-29 20:02:04.605649+0530 UNAGI[30263:604617] Task <695EDB52-BC43-4A20-93D2-A3481D44468C>.<3> HTTP load failed, 0/0 bytes (error code: -1001 [1:60])
2020-04-29 20:02:04.622585+0530 UNAGI[30263:604513] Task <695EDB52-BC43-4A20-93D2-A3481D44468C>.<3> finished with error [-1001] Error Domain=NSURLErrorDomain Code=-1001 "The request timed out." UserInfo={_kCFStreamErrorCodeKey=60, NSUnderlyingError=0x60000235c360 {Error Domain=kCFErrorDomainCFNetwork Code=-1001 "(null)" UserInfo={_kCFStreamErrorCodeKey=60, _kCFStreamErrorDomainKey=1}}, _NSURLErrorFailingURLSessionTaskErrorKey=LocalDataTask <695EDB52-BC43-4A20-93D2-A3481D44468C>.<3>, _NSURLErrorRelatedURLSessionTaskErrorKey=( "LocalDataTask <695EDB52-BC43-4A20-93D2-A3481D44468C>.<3>" ), NSLocalizedDescription=The request timed out., NSErrorFailingURLStringKey=http://574-265-6100:4000/userLogin, NSErrorFailingURLKey=http://574-265-6100:4000/userLogin, _kCFStreamErrorDomainKey=1}
2020-04-29 20:02:04.639 [error][tid:com.facebook.react.JavaScript] Error: timeout of 0ms exceeded
2020-04-29 20:02:04.761 [warn][tid:com.facebook.react.JavaScript] Possible Unhandled Promise Rejection (id: 0): TypeError: undefined is not an object (evaluating 'confirm.data')
getloginconfirm$@http://localhost:8081/index.bundle?platform=ios&dev=true&minify=false:132789:32
tryCatch@http://localhost:8081/index.bundle?platform=ios&dev=true&minify=false:30544:23
invoke@http://localhost:8081/index.bundle?platform=ios&dev=true&minify=false:30720:32
tryCatch@http://localhost:8081/index.bundle?platform=ios&dev=true&minify=false:30544:23
invoke@http://localhost:8081/index.bundle?platform=ios&dev=true&minify=false:30620:30
http://localhost:8081/index.bundle?platform=ios&dev=true&minify=false:30630:21
tryCallOne@http://localhost:8081/index.bundle?platform=ios&dev=true&minify=false:3344:16
http://localhost:8081/index.bundle?platform=ios&dev=true&minify=false:3445:27
_callTimer@http://localhost:8081/index.bundle?platform=ios&dev=true&minify=false:32458:17
_callImmediatesPass@http://localhost:8081/index.bundle?platform=ios&dev=true&minify=false:32494:19
callImmediates@http://localhost:8081/index.bundle?platform=ios&dev=true&minify=false:32712:33
callImmediates@[native code]
__callImmediates@http://localhost:8081/index.bundle?platform=ios&dev=true&minify=false:2752:35
http://localhost:8081/index.bundle?platform=ios&dev=true&minify=false:2538:34
__guard@http://localhost:8081/index.bundle?platform=ios&dev=true&minify=false:2735:15
flushedQueue@http://localhost:8081/index.bundle?platform=ios&dev=true&minify=false:2537:21
flushedQueue@[native code]
callFunctionReturnFlushedQueue@[native code]
```
Appreciate the feedback! :) Comment: @BaderSerhan I didn’t find any solution for this :/ Comment: Are you still having this issue? Or how did you manage to solve it? Comment: Thank you for your reply, I appreciate it.
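No resolution was posted in this thread. For what it's worth, one generic way to keep a hung request from later surfacing as an unhandled rejection (as in the `confirm.data` error above) is to race the request against an explicit timeout and handle both outcomes in one `catch`. This is only a sketch under the assumption that the request layer is promise-based; `withTimeout` and the millisecond values are illustrative, not from the question's code:

```javascript
// Reject if `promise` does not settle within `ms` milliseconds, so a hung
// connection becomes a catchable error instead of an unhandled rejection.
function withTimeout(promise, ms) {
  var timer;
  var timeout = new Promise(function (resolve, reject) {
    timer = setTimeout(function () {
      reject(new Error('Request timed out after ' + ms + 'ms'));
    }, ms);
  });
  // Clear the pending timer whichever promise wins the race.
  return Promise.race([promise, timeout]).finally(function () {
    clearTimeout(timer);
  });
}
```

The login call could then be wrapped as, say, `withTimeout(axios.post(url, body), 10000)`, with a single `catch` that shows an error instead of reading `confirm.data` from a response that never arrived.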
Title: Recently used documents are not updated during a KDE session Tags: 16.04;kubuntu;kde;plasma Question: I use Kubuntu 16.04 LTS. After opening a document during my KDE session, a link appears immediately under ```~/.local/share/RecentDocuments```, but nothing appears under the "Recent documents" entry of the Plasma widgets "Applications menu" and Kickoff. Plasma needs a restart to show the new entry. Is this a feature? Is there a way to force Plasma to show new recent documents immediately (without restarting Plasma)? Comment: I just noticed that `~/.local/share/RecentDocuments` does not cover all the recent documents shown by Plasma. There must be another place where Plasma stores recent documents, but I am not able to find it. Here is the accepted answer: Perhaps because there is also a ```.kde/share/apps/RecentDocuments```. You can try symlinking them as described here: ``` ln -sf ~/.local/share/RecentDocuments ~/.kde/share/apps/RecentDocuments ```
Title: Searching WordNet for Synonym gives only one result Tags: java;wordnet;synonym;jwi Question: I am using the Java JWI API to search WordNet for the synonyms of a word. The problem is that it only gives me one result: the word whose synonyms I am searching for. Please guide me. Is it possible to get the list of all possible synonyms of a given word? My code is:
```
public void searcher() {
    try {
        url = new URL("file", null, path);
        dict = new Dictionary(url);
        try {
            dict.open();
        } catch (IOException ex) {
            JOptionPane.showMessageDialog(null, "Dictionary directory does not exist\n" + ex + "\nClass:Meaning Thread", "Dictionary Not Found Error", JOptionPane.ERROR_MESSAGE);
        }
        IIndexWord idxWord = dict.getIndexWord("capacity", POS.NOUN);
        IWordID wordID = idxWord.getWordIDs().get(0);
        IWord word = dict.getWord(wordID);
        // Adding related words to list of related words
        ISynset synset = word.getSynset();
        for (IWord w : synset.getWords()) {
            System.out.println(w.getLemma());
        }
    } catch (Exception e) {
    }
}
```
The output is only: ```capacity ``` itself! The actual synonyms should be: ``` capability capacitance content electrical capacitance mental ability...(so on) ``` So is there anything I missed in the code, or can somebody give me an idea of what the real problem is? Thanks in advance Comment: http://lyle.smu.edu/~tspell/jaws/TestJAWS.java here is code that can work for you; it needs jaws.jar to integrate with it Comment: You can contact me [email protected] I have been working on data mining projects for the last 3 years! Comment: http://stackoverflow.com/a/27896853/3411946 Helpful Comment: here is a related question on finding antonyms: https://stackoverflow.com/questions/65403290/get-antonyms-for-a-word-in-java-wordnet-jwi Comment: can you please provide complete code. it will help me to get the idea Comment: I am asking for the synonym-combining code. Comment: I am getting error : Error opening index file: ./index.sense Comment: ok, thanks. I will try and if it is not working then call you back.
Comment: hello Java Nerd it gives me the same error : Error opening index file: ./index.sense (No such file or directory) any solution. Comment: hello Java Nerd, problem solved. The problem was with the dictionary. I had not defined the dictionary, so it was unable to find it. Here is the accepted answer: So, here comes the answer: I use Java JAWS for WordNet searching! The steps are:
```
1- Download the WordNet dictionary from ``` Here ```
2- Install WordNet
3- Go to the installed directory and copy the WordNet directory (in my case C:\Program Files (x86) was the directory for the WordNet folder)
4- Paste it into your Java project (under MyProject>WordNet)
5- Make the path to the directory as:
   File f = new File("WordNet\\2.1\\dict");
   System.setProperty("wordnet.database.dir", f.toString());
6- Get the synonyms as:

public class TestJAWS {
    public static void main(String[] args) {
        String wordForm = "capacity";
        // Get the synsets containing the word form "capacity"
        File f = new File("WordNet\\2.1\\dict");
        // setting path for the WordNet directory
        System.setProperty("wordnet.database.dir", f.toString());
        WordNetDatabase database = WordNetDatabase.getFileInstance();
        Synset[] synsets = database.getSynsets(wordForm);
        // Display the word forms for the synsets retrieved
        if (synsets.length > 0) {
            ArrayList<String> al = new ArrayList<String>();
            HashSet<String> hs = new HashSet<String>();
            // add elements to al, including duplicates
            for (int i = 0; i < synsets.length; i++) {
                String[] wordForms = synsets[i].getWordForms();
                for (int j = 0; j < wordForms.length; j++) {
                    al.add(wordForms[j]);
                }
            }
            // removing duplicates
            hs.addAll(al);
            al.clear();
            al.addAll(hs);
            // showing all synonyms
            for (int k = 0; k < al.size(); k++) {
                System.out.println(al.get(k));
            }
        } else {
            System.err.println("No synsets exist that contain the word form '" + wordForm + "'");
        }
    }
}
```
The thing is, you must have jaws-bin.jar. Comment for this answer: how can i handle wordnetexception.
I try inputting other keywords like light or ball and it throws the wordnet exception. btw, you had an extra bracket in your if condition Comment for this answer: the exception is like "An error occurred parsing the synset data: : 00292635 30 v 05 light 0 illume 0 illumine 0 light_up 0 illuminate 3 012 @ 00281690 v 0000 + 14006632 n 0501 + 05025708 n 0502 + 14711674 n 0501 + 14006789 n 0101 + 04958550 n 0101 + 08663763 n 0101 + 05025269 n 0106 + 03670692 n 0101 + 11494354 n 0101 ~ 00293009 v 0000 ~ 00293130 v 0000 02 + 08 00 + 11 00 | make lighter or brighter; "This lamp lightens the room a bit" Here is another answer: If you want to use JWI and want to fetch more than one synonym, then change your code from this exact spot:
```
IIndexWord idxWord = dict.getIndexWord(inputWord, POS.NOUN);
try {
    int senseCount = idxWord.getTagSenseCount();
    for (int i = 0; i < senseCount; i++) {
        IWordID wordID = idxWord.getWordIDs().get(i);
        IWord word = dict.getWord(wordID);
        // Adding related words to list of related words
        ISynset synset = word.getSynset();
        for (IWord w : synset.getWords()) {
            System.out.println(w.getLemma());
            // output.add(w.getLemma());
        }
    }
} catch (Exception ex) {
    System.out.println("No synonym found!");
}
```
It works perfectly fine. Here is another answer: What you are getting is "capacity#1", which has the meaning of "capability to perform or produce", and it does indeed only have one synonym. (Play around with the PWN search page to get a feel for how WordNet organizes the words into synsets.) It sounds like what you are after is the union of all synonyms in all the synsets? I think you either use ```getSenseEntryIterator()```, or simply put a loop around ```idxWord.getWordIDs().get(0);```, replacing the ```0``` with the loop counter, so you are not always getting only the first item in the array. Comment for this answer: @219CID Yes, I guess if you want every synonym for every possible part of speech, you'd add an outer loop, that will iterate through each POS tag.
Comment for this answer: could you call getSenseEntryIterator() for every POS for a given word? Or would that be repetitive? Comment for this answer: Thank you, I had success doing this and was able to get these unique synonyms for capacity: CAPABILITY CONTENT CAPACITANCE ELECTRICALCAPACITY MENTALABILITY Comment for this answer: I am wondering if you could take a look at this related JWI/WordNet question: https://stackoverflow.com/questions/65403290/get-antonyms-for-a-word-in-java-wordnet-jwi
Title: Javascript for preventing "burn-in" problem on lcd screen Tags: javascript;lcd Question: I'm building a non-public web app that will be used as an info monitor. As such, it will be running 24/7 on one LCD TV display. Since this could produce a "burn-in color" error on the LCD, I'm looking for a Javascript that will prevent/reduce this problem. I want to use something similar to what they use on airport displays (a line periodically moving from left to right and top to bottom and switching color). Do you know any Javascript doing this? Thank you! Comment: It could? I thought this kind of problem was restricted to CRT monitors. Comment: I could only dream of a better solution for my "problem" :) tnx guys Comment: This isn't a major problem for LCDs, just CRTs and plasma displays. See http://compreviews.about.com/od/monitors/a/LCDBurnIn.htm. @Andreas it was just as bad with plasmas :) Here is the accepted answer: In case you were still interested: (uses jQuery)
```
var $burnGuard = $('<div>').attr('id', 'burnGuard').css({
    'background-color': '#FF00FF',
    'width': '1px',
    'height': $(document).height() + 'px',
    'position': 'absolute',
    'top': '0px',
    'left': '0px',
    'display': 'none'
}).appendTo('body');

var colors = ['#FF0000', '#00FF00', '#0000FF'],
    color = 0,
    delay = 5000,
    scrollDelay = 1000;

function burnGuardAnimate() {
    color = ++color % 3;
    var rColor = colors[color];
    $burnGuard.css({
        'left': '0px',
        'background-color': rColor
    }).show().animate({
        'left': $(window).width() + 'px'
    }, scrollDelay, function() {
        $(this).hide();
    });
    setTimeout(burnGuardAnimate, delay);
}
setTimeout(burnGuardAnimate, delay);
```
Working example found here: http://www.jsfiddle.net/bradchristie/4w2K3/3/ (or full screen version) Comment for this answer: How often should this bar wander across the screen? Once every second? Is there some science behind this technique? Comment for this answer: Or maybe I should account for the new, what is it, orange/yellow color that the new TVs have as part of the color list?
;-p (Used primaries, as this should be enough to refresh the pixel). Comment for this answer: TBH, not even sure it's necessary in this day and age. Simply wrote it because it was requested. That said, it may be worthwhile to do a little research and see if there is an optimal setting. It was my understanding this was more to avoid CRT degradation. Now that everything is lcd, not sure it applies (leds won't have a residual glow). Comment for this answer: Just used this to add some burn-in protection (avoidance?) to our Dashing / Smashing instance. I had to manually create the DIV, and set the z-index of it to a large number to get it to cooperate, but it's working great now. Thanks for sharing! Here is another answer: I used Brad's script, but unfortunately my page had a large HTML table that extends outside the parent container. This made it so the pixel bar would only travel part way across the screen. Instead of altering my table, I added a bounding box script to find the actual width of the HTML table and then used that to set the width in Brad's script.
```
var div = document.getElementById("HtmlTable-ID");
if (div.getBoundingClientRect) {
    var rect = div.getBoundingClientRect();
    w = rect.right - rect.left;
    // alert(" Width: " + w);
}

var $burnGuard = $('<div>').attr('id', 'burnGuard').css({
    'background-color': '#FF00FF',
    'width': '1px',
    'height': $(document).height() + 'px',
    'position': 'absolute',
    'top': '0px',
    'left': '0px',
    'display': 'none'
}).appendTo('body');

var colors = ['#FF0000', '#00FF00', '#0000FF'],
    color = 0,
    delay = 5000,
    scrollDelay = 1000;

function burnGuardAnimate() {
    color = ++color % 3;
    var rColor = colors[color];
    $burnGuard.css({
        'left': '0px',
        'background-color': rColor
    }).show().animate({
        'left': w + 'px'
    }, scrollDelay, function() {
        $(this).hide();
    });
    setTimeout(burnGuardAnimate, delay);
}
setTimeout(burnGuardAnimate, delay);
```
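For readers without jQuery, the same idea can be sketched in plain JavaScript. The element, colors, and timing below mirror the accepted answer but are otherwise arbitrary choices; only the `nextColor` helper is separable logic:

```javascript
// Cycle through primary colors; kept as a pure helper so it is easy to test.
var COLORS = ['#FF0000', '#00FF00', '#0000FF'];
function nextColor(index) {
  return COLORS[index % COLORS.length];
}

if (typeof document !== 'undefined') {
  // Create the 1px guard bar once and sweep it across the viewport
  // every few seconds using a CSS transition instead of $.animate().
  var guard = document.createElement('div');
  guard.style.cssText =
    'position:fixed;top:0;left:0;width:1px;height:100vh;display:none;';
  document.body.appendChild(guard);

  var tick = 0;
  setInterval(function () {
    guard.style.backgroundColor = nextColor(tick++);
    guard.style.display = 'block';
    guard.style.transition = 'none';
    guard.style.left = '0px';
    // Force a reflow so the next transition restarts from the left edge.
    void guard.offsetWidth;
    guard.style.transition = 'left 1s linear';
    guard.style.left = '100%';
    setTimeout(function () { guard.style.display = 'none'; }, 1000);
  }, 5000);
}
```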
Title: Phonegap always asks me to install android sdk 19 but i have already installed it Tags: android;cordova Question: This is exactly what I do:
```
$ phonegap create myapp
$ cd myapp
$ phonegap run android
```
but it fails. I have checked everything. My environment variables are OK, and I have installed the latest version. When I type
```
$ android
```
in my CMD, it shows me that all the latest updates are already installed, but I cannot make it work:
```
$ phonegap run android
```
returns:
```
^
Error: Please install Android target 19 (the Android newest SDK). Make sure you have the latest Android tools installed as well. Run "android" from your command-line to install/update any missing SDKs or tools.
at C:\Users\yeayea\.cordova\lib\android\cordova\3.3.0\bin\lib\check_reqs.js:80:29
```
please help, otherwise I will hang myself! :( Comment: yup :( checked a million times, changed the folder, restarted my pc, but no hope. Comment: Yes, it runs with no problem Comment: maybe reinstall my phonegap? Comment: Have you checked to make sure your $PATH includes the 'tools' and 'platform-tools' paths for the android 19 sdk? Comment: are you able to run 'android' from the command line? Comment: It's probably worth trying. I can't think of anything else right now Comment: have you checked permissions on the sdk folder and files? maybe it's not accessible to the system? Here is the accepted answer: I had the same problem: I thought I had API 19 installed, but I didn't. In CMD type "android", scroll down to Android 4.4.2 (API 19), check all and install them. Even if you already have API 20 installed along with all the other packages and tools, you need this. Comment for this answer: This solved this very annoying problem for me. Thanks for the short and simple directions. Comment for this answer: The wording of the error message sucks, but this helped Curtis. "Target" = API, and it's version 4.4.2.
There's some cordova dependency which depends on this package, although I couldn't tell you what it is exactly. Nice work Curtis! Here is another answer: For Windows, in the Environment Variables try placing the java\bin directory before the windows\system32 directory. This makes the SDK Manager use the java from the java\bin directory; otherwise it would use the java found in the system32 folder. This should help! Here is another answer: I was able to solve this problem by checking whether the environment variables were detected in the current working directory or not. The directory where I was running phonegap run android wasn't able to detect the android environment variable, so I copied it to another directory c:/users//<> and tried again; it worked. It might happen if your sdk is not installed on the c: drive (Primary Partition) Comment for this answer: It worked for me! I just installed an SSD on my laptop, and could not add the android platform while working on another hard drive! thank you very much Viki293. Here is another answer: When you run ```phonegap create myApp```, there is a script which checks that you have a specific android platform version. The platform it is looking for is in the file (for OS X): ```/Users/username/.cordova/lib/android/cordova/3.5.0/framework/project.properties``` which for Cordova 3.5.0 has the line: ```target=android-19``` Cordova and PhoneGap are the same in this regard, so any reference to cordova also applies to phonegap. To check that you have this target run: ```android list targets``` and you should have an entry like:
```
Available Android targets:
id: 1 or "android-19"
     Name: Android 4.4.2
     Type: Platform
     API level: 19
     Revision: 3
     Skins: HVGA, QVGA, WQVGA400, WQVGA432, WSVGA, WVGA800 (default), WVGA854, WXGA720, WXGA800, WXGA800-7in
 Tag/ABIs : default/armeabi-v7a
```
By default only the latest target is installed, which is currently android-20. You have two options: Install the target that matches what is in the project.properties file.
This would seem to be the best approach, as this would be the target cordova was tested against. Edit the project.properties file to match a target you have installed. This should get you past this problem but could have unexpected impacts. To install an additional target, as Curtis said, run the Android SDK Manager (```android```) and select the Android target you require. Note these appear below the Tools folder, which also has a number of different versions, but installing a different version of the tools will not solve this problem. You can also see the targets you have installed in the platforms directory of the Android sdk. Comment for this answer: In Windows 7 I find three project.properties files under /Users/username/.cordova. One (in ~templates) says "target=This_gets_replaced". Two say "target=android-19". One of these is in C:\Users\username\.cordova\lib\npm_cache\cordova-android\3.6.4\package\test\project.properties, and one is in C:\Users\username\.cordova\lib\npm_cache\cordova-android\3.6.4\package\framework\project.properties. Based on the previous comment, I would change the one in framework, but if problems arise subsequently it may be helpful to know that the other one exists. Comment for this answer: Thanks ! I updated the **/Users/username/.cordova/lib/android/cordova/3.5.0/framework/project.properties** to _android-20_ and it worked !!!
Title: Loading div for AJAX call doesn't display in IE or Chrome, works fine in Firefox Tags: javascript;jquery;ajax;internet-explorer;google-chrome Question: I'm trying to write a very simple function which displays a div that says "Loading...", then makes a jQuery ajax() call, and once complete, hides the loading div. The code works perfectly in Firefox, but IE 7 (maybe 8/9 as well, tested on 7 only) and Chrome both have issues:
```
function ajaxwl(url) {
    $('#loadingDiv').show();
    var xmlHttp = $.ajax({ type: "GET", url: url, async: false });
    $('#loadingDiv').hide();
    return xmlHttp;
}
```
I stepped through the code with the Chrome debugger, and then it worked - the loading div was displayed as expected. If I run the code without the debugger, however, Chrome (and IE 7) load the AJAX request without ever showing the loading div. Perhaps it has something to do with the fact that Chrome is locking up the browser, as I am using a non-asynchronous request? EDIT: I ended up converting the request to an asynchronous request (a conversion which needed to be done everywhere in this code I inherited, but I had been procrastinating...) and now all works as expected:
```
function ajaxwl(url) {
    $('#loadingDiv').show();
    var xmlHttp = $.ajax({ type: "GET", url: url }).done(function() {
        $('#loadingDiv').hide();
    });
    return xmlHttp;
}
```
Thanks for the quick responses! Comment: You're right. You kicked me into gear to convert all my requests to be asynchronous. Thanks. Comment: You probably should be using asynchronous requests, I'm not sure if that's causing the problem but having JS execution stop while performing network tasks is bad practice. Here is the accepted answer: The loading div is not visible in IE because you are using AJAX in synchronous mode: the JS execution blocks the browser, so the page never gets a chance to repaint between showing and hiding the div. Whether you briefly see the div in other browsers just depends on when they happen to repaint.
Comment for this answer: You should basically hide the loading div when the ajax call response comes. Here is another answer: I had the same problem in IE a while ago. Inspecting it revealed that the HTML returned by the ajax request was malformed (missing closing tags, etc.), and IE is very unforgiving in such scenarios. Maybe you have the same issue here.
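To make the asynchronous pattern from the question's edit reusable, the show/run/hide sequence can be factored into a small helper. This is only a sketch, not code from the thread; `withLoading` is a made-up name, and the commented browser wiring assumes the question's `#loadingDiv` element:

```javascript
// Show an indicator, run an async task, and hide the indicator again
// whether the task succeeds or fails.
function withLoading(show, hide, task) {
  show();
  return Promise.resolve()
    .then(task)       // run the task once the indicator is visible
    .finally(hide);   // always hide, even if the task rejects
}

// Browser usage (assumes the question's #loadingDiv and jQuery):
// withLoading(
//   function () { $('#loadingDiv').show(); },
//   function () { $('#loadingDiv').hide(); },
//   function () { return $.ajax({ type: 'GET', url: url }); }
// );
```

Because the request is asynchronous, the UI thread stays free to repaint, which is exactly why the accepted answer's symptom disappears.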
Title: Accessing variable inside array of objects JSON Tags: javascript;json Question: I'm trying to access "photor" and "captionr" of each object stored in that JSON array. But it doesn't work and gives me an error "Cannot read property 'photor' of undefined"
```
var slideshow = {
    directory: "images/",
    photos: [
        { "photor": "aurelius.jpg", "captionr": "Mark Aurelius" },
        { "photor": "cesar.png", "captionr": "Gaius Julius Ceasar" },
        { "photor": "couple.jpg", "captionr": "Greek Couple" },
        { "photor": "flavian.jpg", "captionr": "Flavian Woman" },
        { "photor": "lucius.jpg", "captionr": "Lucius Verus" },
        { "photor": "lupe.jpg", "captionr": "Emperor Caracalla" },
        { "photor": "sabina.jpg", "captionr": "Sabina" }
    ],
    currentPhoto: 0,
    getPrevious: function() {
        if (this.currentPhoto == 0) this.currentPhoto = this.photos.length - 1;
        else this.currentPhoto--;
        var photo = this.directory + this.photos[this.currentPhoto][0].photor;
        var caption = this.photos[this.currentPhoto][1].captionr;
        return { "photo": photo, "caption": caption };
    };

var photo = this.directory + this.photos[this.currentPhoto].photor;
var caption = this.photos[this.currentPhoto].captionr;
```
Comment: Did you try var photo = this.directory + this.photos[this.currentPhoto].photor; that is, without [0]?? Comment: Can you send the complete code in jsfiddle? Make sure 'this.currentPhoto' returns you the index of the current photo and try my code.
Comment: `var photo = this.directory + this.photos[this.currentPhoto].photor; var caption = this.photos[this.currentPhoto].captionr;` Comment: yes, it says "file not found" then, even though the files are there Comment: http://jsfiddle.net/8mgb3m1c/ it's strange as I have all those files in the "images" folder and the one that I put in the html works, showing the first image, but it keeps giving me a "file not found" error Here is the accepted answer: Try this
```
var photo = this.directory + this.photos[this.currentPhoto].photor;
```
Here is another answer: Probably, you are struggling with the behaviour of this in JS. Try this (the extra [0]/[1] indexing also has to go, since each array entry is a plain object):
```
getPrevious: function() {
    var ref = this;
    if (ref.currentPhoto == 0) ref.currentPhoto = ref.photos.length - 1;
    else ref.currentPhoto--;
    var photo = ref.directory + ref.photos[ref.currentPhoto].photor;
    var caption = ref.photos[ref.currentPhoto].captionr;
    return { "photo": photo, "caption": caption };
}
```
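Putting the accepted fix into a self-contained, runnable form (trimmed to two photos; property names as in the question):

```javascript
var slideshow = {
  directory: 'images/',
  photos: [
    { photor: 'aurelius.jpg', captionr: 'Mark Aurelius' },
    { photor: 'cesar.png', captionr: 'Gaius Julius Ceasar' }
  ],
  currentPhoto: 0,
  getPrevious: function () {
    // Wrap around to the last photo when at the start.
    if (this.currentPhoto === 0) this.currentPhoto = this.photos.length - 1;
    else this.currentPhoto--;
    // Each array entry is a plain object: index the array once,
    // then read the properties directly -- no extra [0]/[1] step.
    var entry = this.photos[this.currentPhoto];
    return {
      photo: this.directory + entry.photor,
      caption: entry.captionr
    };
  }
};
```

The first call to `slideshow.getPrevious()` returns `{ photo: 'images/cesar.png', caption: 'Gaius Julius Ceasar' }`, which is exactly the lookup the question's `[0].photor` version broke.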
Title: TypeScript definition files Tags: typescript;definitions Question: I'm playing with TypeScript, but the compiler complains when I use browser types such as HTMLCanvasElement. I guess I need definition files for these types. I bet there is a repository of definition files for the DOM and for most popular frameworks, but Google has not been able to help me find it. Do you guys know of such a repository? Here is the accepted answer: The ```lib.d.ts``` that is included in the source (http://typescript.codeplex.com/SourceControl/changeset/view/fe3bc0bfce1f#bin%2flib.d.ts) contains definitions for most of the DOM related stuff. If you are using Visual Studio, you should consider downloading the tools, which include a template where ```lib.d.ts``` is bundled. If anything is missing, I think you can use the ```declare``` syntax. Comment for this answer: This looks like what I was looking for. Thanks. Comment for this answer: you can look at http://www.tsdpm.com/ and https://github.com/Diullei/tsd#readme too. Here is another answer: In addition to what @Christoffer has said for the DOM, @Boris Yankov has a really useful repository here: https://github.com/borisyankov/DefinitelyTyped with definition files for close to two hundred libraries (and counting!). There's even a manager for the type definition files now: http://definitelytyped.org/tsd/ (Do not follow: link is now NSFW spam) AND: Quite a lot of definition files are now on nuget: http://www.nuget.org/packages?q=TypeScript Comment for this answer: @JcFx Do you know of any plugin or Grunt task that can create definitions for newly created, local typescript files? E.g. I have a *_definitions.ts* file that contains all my definitions (local and plugins e.g. AngularJS). This file then gets referenced from every other .ts file. It would be great if something could update *_definitions.ts* with every new typescript file I create, automatically.
Comment for this answer: We may need to update the link for type definition file manager. It redirects to something completely different now. Comment for this answer: That looks very useful indeed. Thanks for the link. Comment for this answer: TSD has been deprecated in favor of Typings now: https://github.com/typings/typings/blob/master/docs/tsd.md
Title: Replace a UTF-8 character in R Tags: r;utf-8 Question: I have an R "plugin" that reads a bunch of lines from stdin, parses them and evaluates them. ```... code <- readLines(f, warn=F) ## that's where the lines come from... result <- eval(parse(text=code)) ... ``` Now, sometimes the system that provides the lines of code kindly inserts a UTF-8 non-break space (```U+00A0``` = ```\xc2\xa0```) here and there in the code. The ```parse()``` chokes on such characters. Example:
```
s <- "1 +\xc2\xa03"
s
[1] "1 + 3" ## looks fine, doesn't it? In fact, the Unicode "NON-BREAK SPACE" is there
eval(parse(text=s))
Error in parse(text = s) : <text>:1:4: unexpected input
1: 1 +?
       ^
eval(parse(text=gsub("\xc2\xa0"," ",s)))
[1] 4
```
I would like to replace that character with a regular space, and can do so (but at my own peril, I guess) as above with this: ```code <- gsub('\xc2\xa0',' ',code) ``` However, this is not clean, as the byte sequence ```'\xc2\xa0'``` could conceivably start matching in the middle of another 2-byte char whose 2nd byte is ```0xc2```. Perhaps a bit better, we can say: ```code <- gsub(intToUtf8(0x00a0L),' ',code) ``` But this would not generalize to a UTF-8 string. Surely there is a better, more expressive way to enter a string containing some UTF-8 characters? In general, what's the right way to express a UTF-8 string (here, the pattern argument of ```sub()```)? Edit: to be clear, I am interested in entering UTF-8 chars in a String by specifying their hexadecimal value. Consider the following example (note that ```"é"``` is Unicode ```U+00E9``` and can be expressed in UTF-8 as ```0xc3a9```):
```
s <- "Cet été."
gsub("té","__",s) # --> "Cet é__."
# works, but I like to keep my code itself free of UTF-8 literals,
# plus, for the initial question, I really don't want to enter an actual
# UTF-8 "NON BREAKABLE SPACE" in my code as it would be indistinguishable
# from a regular space.
gsub("t\xc3\xa9","__",s) ## works, but I question how standard and portable
# --> "Cet é__."
gsub("t\\xc3\\xa9","__",s) ## doesn't work
# --> "Cet été."
gsub("t\x{c3a9}","__",s) ## would work in Perl, doesn't seem to work in R
# Error: '\x' used without hex digits in character string starting "s\x"
```
Here is another answer: (Earlier stuff deleted.) EDIT2:
```
> s <- '\U00A0'
> s
[1] " "
> code <- gsub(s, '__', '\xc2\xa0')
> code
[1] "__"
```
Comment for this answer: I think it should work ... because I tested it. The pattern "\\xc2" matches the `\xc2` character. Comment for this answer: I admit to a lingering puzzlement about why this _should_ work. I simply took you at your word that `\U00A0` would equal `\xc2\xa0` whereas I would have expected it to equal `\UC2A0`. Comment for this answer: My expectations were being met by the UTF-16 value. Comment for this answer: Why do you think that should work? `"\\xc2"` is a literal slash followed by `"xc2"`, hardly the hex sequence I'm trying to form. In Java, yes, because there is an interpolation of the String done first by the compiler, then, say, `Pattern.compile()` further interprets the String. But here, no. I also edited my question to try and make it clearer with another example. The `"\\x"` clearly doesn't work. Comment for this answer: It doesn't, the `\x{c2a0}` is simply passed through, but since it represents Unicode's "NON-BREAK SPACE", you just don't see it in the display. Proof: `gsub("\xc2\xa0","__",gsub("\\xc2\\xa0", " ", "other\xc2\xa0stuff"))` returns: `[1] "other__stuff"`. Comment for this answer: Thanks! EDIT2 contains exactly what I was looking for ('\Uhhhh' to enter a unicode char). Now, can we please clean up what's before EDIT2?... Comment for this answer: Indeed, Unicode `U+00A0` is expressed in UTF-8 as `\x{c2a0}` (Perl) or `\xc2\xa0`, as I said. See http://www.fileformat.info/info/unicode/char/a0/index.htm
Title: How can I prevent i2c-designware blocking Ubuntu GNOME on a Dell Inspiron 15 3552? Tags: 16.04;kernel;dell;ubuntu-gnome Question: I installed Ubuntu GNOME 16.04.3 on a Dell Inspiron 15 3552 recently. This laptop came with Ubuntu 14.04 pre-installed, so it should be compatible with Ubuntu in general. When running Ubuntu GNOME 16.04.3 for a while, the system stops responding to the mouse, a bit later it stops responding to the keyboard, and finally it starts emitting the following kernel message repeatedly forever: ```i2c-designware 808622ci00 i2c-dw-handle-tx-abort lost arbitration ``` Other kernel messages are emitted before this one, but because of fast scrolling I was unable to read them. What is the reason for this behavior, and how can I stop the system from locking up this way? Comment: did you find any fix for that ? Comment: Cross-posting: https://unix.stackexchange.com/questions/409788/how-can-i-prevent-i2c-designware-blocking-ubuntu-gnome-on-a-dell-inspiron-15-355 Comment: https://meta.stackexchange.com/questions/64068/is-cross-posting-a-question-on-multiple-stack-exchange-sites-permitted-if-the-qu Comment: Is this a problem? Nobody has answered on the Unix & Linux Stack Exchange, and the question is related to a Ubuntu variant. Comment: They say that “each site is focused on a specific topic area”. This is not quite true. “Unix & Linux” is not very specific and “Ubuntu” is clearly a subset of “Unix & Linux”. Furthermore they say that “SE is not a wild west for questions; a question needs to be worked on to be worthy, and if worthy, it will target a specific audience”. And yes, I apply a lot of care when writing my questions, apparently much more care than many other people. In this light, it does not feel nice being accused of not doing this just because I have dared to ask the same question on two Stack Exchange sites. Comment: But what can I do?
I am not running this site, and so I have to submit to this site’s policies, even when having been treated unfairly. I will go and delete the question on “Unix & Linux Stack Exchange”. Comment: The only fix I found was to manually remove a certain kernel module after every startup, before the described problem would show up again. Removing this kernel module sometimes disabled the touchpad, sometimes not. Currently, I don’t know which kernel module is the one to remove. I’d have to ask my daughter about that (it’s about her computer).
Title: Using caching to optimize a timeline in Rails Tags: ruby-on-rails;caching;memcached;redis Question: I'm hoping to get advice on the proper use of caching to speed up a timeline query in Rails. Here's the background: I'm developing an iPhone app with a Rails backend. It's a social app, and like other social apps, its primary view is a timeline (i.e., newsfeed) of messages. This works very much like Twitter, where the timeline is made up of messages of the user and of his/her followers. The main query in the API request to retrieve the timeline is the following: ```@messages = Message.where("user_id in (?) OR user_id = ?", current_user.followed_users.map(&:id), current_user) ``` Now this query gets quite inefficient, particularly at scale, so I'm looking into caching. Here are the two things I'm planning to do: 1) Use Redis to cache timelines as lists of message ids Part of what makes this query so expensive is figuring out which messages to display on-the-fly. My plan here is to create a Redis list of message ids for each user. Assuming I build this correctly, when a Timeline API request comes in I can call Redis to get a pre-processed, ordered list of the ids of the messages to display. For example, I might get something like this: "[21, 18, 15, 14, 8, 5]" 2) Use Memcached to cache individual message objects While I believe the first point will help a great deal, there's still the potential problem of retrieving the individual message objects from the database. The message objects can get quite big. With them, I return related objects like comments, likes, the user, etc. Ideally, I would cache these individual message objects as well. This is where I'm confused. Without caching, I would simply make a query call like this to retrieve the message objects: ```@messages = Message.where("id in (?)", ids_from_redis) ``` Then I would return the timeline: ```respond_with(:messages => @messages.as_json) # includes related likes, comments, user, etc.
``` Now given my desire to utilize Memcached to retrieve individual message objects, it seems like I need to retrieve the messages one at a time. Using pseudo-code, I'm thinking something like this: ```ids_from_redis.each do |m| message = Rails.cache.fetch("message_#{m}") do Message.find(m).as_json end @messages << message end ``` Here are my two specific questions (sorry for the lengthy build-up): 1) Does this approach generally make sense (Redis for lists, Memcached for objects)? 2) Specifically, on the pseudo-code above, is this the only way to do this? It feels inefficient grabbing the messages one-by-one, but I'm not sure how else to do it given my intention to do object-level caching. Appreciate any feedback as this is my first time attempting something like this. Here is another answer: On the face of it, this seems reasonable. Redis is well suited to storing lists and the like, and can be made persistent, while memcached will be very fast for retrieving individual messages, even if you call it sequentially like that. The issue here is that you're going to need to clear/supplement that redis cache each time a message is posted. It seems a bit of a waste just to clear the cache in this circumstance, because you'll already have gone to the trouble of identifying every recipient of the message. So, without wishing to answer the wrong question, have you thought about 'rendering' the visibility of messages into the database (or redis, for that matter) when each message is posted? 
Something like this: ```class Message belongs_to :sender has_many :visibilities before_create :render_visibility def render_visibility sender.followers.each do |follower| visibilities.build(:user => follower) end end end ``` You could then render the list of messages quite simply: ```class User has_many :visibilities has_many :messages, :through => :visibilities end # in your timeline view: <%= current_user.messages.each { |message| render message } %> ``` I would then add caching of individual messages like this: ```# In your message partial, caching individual rendered messages: <% cache(message) do %> <!-- render your message here --> <% end %> ``` I would also then add caching of entire timelines like this: ```# In your timeline view <% cache("timeline-for-#{current_user.id}-#{current_user.messages.last.cache_key}") do %> <%= current_user.messages.each { |message| render message } %> <% end %> ``` What this should achieve (I've not tested it) is that the entire timeline HTML will be cached until a new message is posted. When that happens, the timeline will be re-rendered, but all the individual messages will come from the cache rather than being rendered again (with the possible exception of any new ones that haven't been viewed by anyone else!) Note that this assumes that the message rendering is the same for every user. If it isn't, you'll need to cache the messages per user too, which would be a bit of a shame, so try not to do this if you can! FWIW, I believe this is vaguely (and I mean vaguely) what Twitter do. They have a 'big data' approach to it though, where the tweets are exploded and inserted into follower timelines across a large cluster of machines. What I've described here will struggle to scale in a write-heavy environment with lots of followers, although you could improve this somewhat by using Resque or similar. P.S. I've been a bit lazy with the code here - you should look to refactor this to move e.g. 
the timeline cache key generation into a helper and/or the person model. Comment for this answer: @pejmanjohn I don't think sorting would need to be done on the fly, but one implication that I definitely should have highlighted is that this approach changes the semantics of your network. Users will only see messages from the point where they follow them. With your method, they'd see earlier ones also. Comment for this answer: @pejmanjohn re: your latter point on your original approach - there's potential there, but you'd still need to identify all the followers for a given message in order to add it to the lists. That's what I meant when I said 'or redis for that matter' - you could do the above just using redis, and that might well be faster :) Comment for this answer: Thanks for this. I follow your method, but am still wrapping my brain around its implications. From what I gather, one potential drawback here is that sorting would need to be done on-the-fly (given that following/unfollowing can happen at anytime). With regards to my original approach, if I were to ensure persistence and efficiently inserted into lists, rather than clearing/recreating, perhaps I avoid the issues you raise? Comment for this answer: Also I should note, like you raise, I do plan to use a background worker for fanning out the message
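The fan-out idea in the answer (pre-render message visibility per recipient when a message is posted, then resolve ids through an object cache) can be sketched outside Rails. Below is a minimal Python stand-in where plain dicts play the role of the Redis lists and the memcached object store — all names and data shapes are illustrative; a real setup would use Redis LPUSH/LRANGE and a memcached multi-get. It also shows one answer to question 2: batch the cache misses into a single DB query instead of fetching one-by-one.

```python
# Hedged sketch, not the answer's Ruby: fan-out-on-write timelines.
timelines = {}      # user_id -> list of message ids (newest first); stands in for Redis
object_cache = {}   # "message:<id>" -> rendered message dict; stands in for memcached

def post_message(msg_id, sender_id, followers, body):
    """Fan the new message id out to the sender's and each follower's timeline."""
    object_cache[f"message:{msg_id}"] = {"id": msg_id, "user_id": sender_id, "body": body}
    for uid in [sender_id] + list(followers):
        timelines.setdefault(uid, []).insert(0, msg_id)  # like Redis LPUSH

def read_timeline(user_id, fetch_from_db):
    """Resolve timeline ids via the object cache; fall back to the DB in one batch."""
    ids = timelines.get(user_id, [])
    messages, missing = {}, []
    for mid in ids:
        cached = object_cache.get(f"message:{mid}")
        if cached is not None:
            messages[mid] = cached
        else:
            missing.append(mid)
    # One batched DB query for all cache misses, instead of N single fetches.
    for mid, row in fetch_from_db(missing).items():
        object_cache[f"message:{mid}"] = row  # warm the cache for next time
        messages[mid] = row
    return [messages[mid] for mid in ids if mid in messages]
```

In Rails terms the batched fallback would correspond to one `Message.where(id: missing)` call for the ids not found in the cache, rather than a `find` per id.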
Title: Unexpected behavior of Spring Security OAuth2 client Tags: java;spring;spring-security-oauth2 Question: I need to access an OAuth2 protected service from my java microservice. I'm trying to use the spring-security-oauth2 client. But when importing it in maven my rest API starts returning a ```403 unauthorized```. Here is the module I'm importing ```<dependency> <groupId>org.springframework.security.oauth</groupId> <artifactId>spring-security-oauth2</artifactId> <version>2.3.5.RELEASE</version> </dependency> ``` I assume that there is some default config I am failing to find; is there any way of overriding it and only extracting the oauth2RestTemplate? To clarify: I'm not trying to secure my endpoints and do not need any of that, only the client that allows me to call an OAuth2 service without having to deal with the tokens and their expiration/renewal. Comment: Stacktrace would be useful. Comment: I'm not getting any error, just unexpected behaviour. My endpoints become secured endpoints just by importing the library, thus returning 403 errors instead of the proper payloads. Comment: If it is possible for you, I would suggest using the `spring-security-oauth2-client` module (top level package `org.springframework.security.oauth2.client`) and following the docs here https://docs.spring.io/spring-security/site/docs/5.1.6.RELEASE/reference/htmlsingle/#oauth2client . The
Title: Refreshing a Listbox and Returning to the Original Line Tags: excel;vba;listbox Question: I have a listbox which is loaded with rows of information from a selected worksheet. Selecting a line from the listbox allows me to update information on a userform and save it back to the worksheet. I would like to refresh the listbox with the new information and go back to the line I was working on without having to scroll down from the top each time I refresh the listbox. Is it possible to save the current listbox line and return to it after refreshing the listbox? Comment: Please show what you have tried. Comment: For the project I'm working on, the number of rows in the listbox won't change. It seems like saving the current position of the pointer, refreshing the listbox and then setting the index to the saved number would do the trick. I'm just not sure how to write the code in VBA to do that. Comment: Yes, it's possible, but how easy it is depends on whether the number of items in the listbox is going to change. https://docs.microsoft.com/en-us/office/vba/api/access.listbox.listindex
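The approach described in the comments — save `ListIndex`, repopulate, then restore the saved index — can be sketched abstractly. This is a Python stand-in, not real VBA; the `FakeListBox` class is a made-up substitute for the userform control, just to show the three-step pattern:

```python
class FakeListBox:
    """Minimal stand-in for a VBA ListBox: a list of items plus a selected index."""
    def __init__(self, items):
        self.items = list(items)
        self.list_index = -1  # VBA's ListIndex is -1 when nothing is selected

def refresh_keeping_selection(box, reload_items):
    saved = box.list_index            # 1. remember the currently selected line
    box.items = list(reload_items())  # 2. repopulate from the worksheet
    if 0 <= saved < len(box.items):   # 3. restore the selection if still in range
        box.list_index = saved
    return box
```

In VBA the same three steps would read `ListIndex` into a variable before the reload and assign it back afterwards; since the question states the row count never changes, the range check is just a safety net.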
Title: How can I create run-time entry for email and password in the following code? Tags: java;android;email;gmail Question: Following is the code of the Main.java file. ```package com.app.mail1; import android.app.Activity; import android.os.Bundle; import android.util.Log; import android.view.View; import android.widget.Button; import android.widget.EditText; public class Main extends Activity { EditText to, from, message, subject; Button send; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); send = (Button) findViewById(R.id.send); // assumes a Button with id "send" in main.xml send.setOnClickListener(new View.OnClickListener() { public void onClick(View v) { try { GMailSender sender = new GMailSender("[email protected]", "jamunjuice"); sender.sendMail("This is Subject", "This is Body", "[email protected]", "[email protected]"); } catch (Exception e) { Log.e("SendMail", e.getMessage(), e); } } }); } } ``` Here is the code for the GMailSender.java file ```package com.app.mail1; import javax.activation.DataHandler; import javax.activation.DataSource; import javax.mail.Message; import javax.mail.PasswordAuthentication; import javax.mail.Session; import javax.mail.Transport; import javax.mail.internet.InternetAddress; import javax.mail.internet.MimeMessage; import java.io.ByteArrayInputStream; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import java.security.Security; import java.util.Properties; public class GMailSender extends javax.mail.Authenticator { private String mailhost = "smtp.gmail.com"; private String user; private String password; private Session session; static { Security.addProvider(new com.provider.JSSEProvider()); } public GMailSender(String user, String password) { this.user = user; this.password = password; Properties props = new Properties(); props.setProperty("mail.transport.protocol", "smtp"); props.setProperty("mail.host", mailhost); props.put("mail.smtp.auth", "true"); props.put("mail.smtp.port", "465"); 
props.put("mail.smtp.socketFactory.port", "465"); props.put("mail.smtp.socketFactory.class", "javax.net.ssl.SSLSocketFactory"); props.put("mail.smtp.socketFactory.fallback", "false"); props.setProperty("mail.smtp.quitwait", "false"); session = Session.getDefaultInstance(props, this); } protected PasswordAuthentication getPasswordAuthentication() { return new PasswordAuthentication(user, password); } public synchronized void sendMail(String subject, String body, String sender, String recipients) throws Exception { try{ MimeMessage message = new MimeMessage(session); DataHandler handler = new DataHandler(new ByteArrayDataSource(body.getBytes(), "text/plain")); message.setSender(new InternetAddress(sender)); message.setSubject(subject); message.setDataHandler(handler); if (recipients.indexOf(',') > 0) message.setRecipients(Message.RecipientType.TO, InternetAddress.parse(recipients)); else message.setRecipient(Message.RecipientType.TO, new InternetAddress(recipients)); Transport.send(message); }catch(Exception e){ } } public class ByteArrayDataSource implements DataSource { private byte[] data; private String type; public ByteArrayDataSource(byte[] data, String type) { super(); this.data = data; this.type = type; } public ByteArrayDataSource(byte[] data) { super(); this.data = data; } public void setType(String type) { this.type = type; } public String getContentType() { if (type == null) return "application/octet-stream"; else return type; } public InputStream getInputStream() throws IOException { return new ByteArrayInputStream(data); } public String getName() { return "ByteArrayDataSource"; } public OutputStream getOutputStream() throws IOException { throw new IOException("Not Supported"); } } } ``` Can I set the username and password at run time? Can I make String to Editable? Please help me if I can edit this code and set the sender's email id and password at run time? 
Comment: Make a provider, like in PHP, that sends the email id and password from the server when the user clicks. Here is another answer: How to convert String to Editable. How to change the username/password: add this to the class: ```public class Main extends Activity { private static String password; private static String username; public static void setUsername(String user){ username = user; } public static void setPassword(String pass){ password = pass; } // ................................... GMailSender sender = new GMailSender(username, password); } ``` Then you can change the username/password from another class: ```public class SomeClass { void configure() { Main.setUsername("SomeUser"); Main.setPassword("StrongPassword"); } } ``` Or you can use an Intent to transfer the data (username/password) between Activities
Title: Difference between glMatrixMode(GL_PROJECTION) and glMatrixMode(GL_MODELVIEW) Tags: c++;math;opengl;graphics Question: What's the difference between ```glMatrixMode(GL_PROJECTION)``` and ```glMatrixMode(GL_MODELVIEW)```? ```#include <stdio.h> #include <GL/gl.h> #include <GL/glut.h> #define KEY_ESCAPE 27 void display(); void keyboard(unsigned char,int,int); int main(int argc, char **argv) { glutInit(&argc, argv); glutInitDisplayMode(GLUT_RGB|GLUT_DOUBLE|GLUT_DEPTH ); glutInitWindowSize(600,400); glutCreateWindow("Opengl Test"); glutDisplayFunc(display); glutKeyboardFunc(keyboard); glutMainLoop(); return 0; } void display() { float x,y,z; int i; x=0; y=-0.8; z=0; glMatrixMode(GL_PROJECTION); glLoadIdentity(); glClear(GL_COLOR_BUFFER_BIT); glColor3f(1,1,0); glBegin(GL_POINTS); for(i=0;i<98;i++) { glVertex3f(x,y,z); x=x+0.01; } glEnd(); glutSwapBuffers(); } void keyboard(unsigned char key, int mousePositionX, int mousePositionY) { switch ( key ) { case KEY_ESCAPE: exit ( 0 ); break; default: break; } } ``` Example 2: ```#include <stdio.h> #include <GL/gl.h> #include <GL/glut.h> #define KEY_ESCAPE 27 void display(); void keyboard(unsigned char,int,int); int main(int argc, char **argv) { glutInit(&argc, argv); glutInitDisplayMode(GLUT_RGB|GLUT_DOUBLE|GLUT_DEPTH ); glutInitWindowSize(600,400); glutCreateWindow("Opengl Test"); glutDisplayFunc(display); glutKeyboardFunc(keyboard); glutMainLoop(); return 0; } void display() { float x,y,z; int i; x=0; y=-0.8; z=0; glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glClear(GL_COLOR_BUFFER_BIT); glColor3f(1,1,0); glBegin(GL_POINTS); for(i=0;i<98;i++) { glVertex3f(x,y,z); x=x+0.01; } glEnd(); glutSwapBuffers(); } void keyboard(unsigned char key, int mousePositionX, int mousePositionY) { switch ( key ) { case KEY_ESCAPE: exit ( 0 ); break; default: break; } } ``` For both codes, I got the same result. 
Can anyone show the difference between ```glMatrixMode(GL_MODELVIEW)``` and ```glMatrixMode(GL_PROJECTION)```? Here is the accepted answer: When you are looking at a scene, from GL's point of view, you have a camera and a lens. ModelView is the matrix that represents your camera (position, pointing, and up vector). Projection is the matrix that represents your camera's lens (aperture, far-field, near-field, etc.). See here for more info on those... In simple examples like yours, it doesn't make much difference. However, in general, you need to ask OpenGL to calculate stuff using matrix multiplications and inversions. For instance, if you want to transform a point on screen to a point in the GL coordinate system, GL will need a projection matrix and a model matrix and must multiply them in a specific order. In that case, you must use them accordingly, because matrix multiplication is non-commutative. Comment for this answer: BOTH the MODELVIEW matrix AND the PROJECTION matrix affect the final scene. If you look at examples, you will see people manipulating BOTH of those, e.g. [GL_MODELVIEW and GL_PROJECTION](http://stackoverflow.com/questions/15482641/gl-modelview-and-gl-projection). You are probably better off STARTING from a useful example, but if you want to alter your code to make it DO something with one of the matrices, try adding lines after your glLoadIdentity() that manipulate the current matrix, as seen in the linked post. `glFrustum` for PROJECTION, and/or `glTranslated` or `glRotated` for MODELVIEW. Here is another answer: Of course you didn't get a different result. You didn't actually do anything with the matrix. You set it to identity... which is the default value. In general, you should only put the projection matrices into the ```GL_PROJECTION``` matrix. The transforms that go from the model's space to camera space should go into the ```GL_MODELVIEW``` matrix. 
If you don't know what projection matrices are, or if you're unsure about what a matrix is, I would suggest looking up proper OpenGL learning materials. And avoiding the use of fixed-function GL.
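The accepted answer's point that the multiplication order matters can be checked numerically. Here is a minimal Python illustration with toy 3x3 homogeneous matrices — the specific numbers are hypothetical and not tied to OpenGL's real projection math; they only demonstrate non-commutativity:

```python
def matmul(a, b):
    """Multiply two square matrices given as nested lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# A toy "projection" (non-uniform scale) and "modelview" (translation)
# in homogeneous 2D coordinates -- made-up values, just to show order matters.
projection = [[2, 0, 0],
              [0, 1, 0],
              [0, 0, 1]]
modelview  = [[1, 0, 5],
              [0, 1, 0],
              [0, 0, 1]]

pm = matmul(projection, modelview)  # translate first, then scale: x -> 2*(x + 5)
mp = matmul(modelview, projection)  # scale first, then translate: x -> 2*x + 5
```

The translation component ends up as 10 in one product and 5 in the other, which is exactly why OpenGL keeps projection and modelview on separate stacks and applies them in a fixed order.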
Title: Artifactory environment variables on CentOS Tags: java;linux;tomcat;environment-variables;artifactory Question: I'm going mad. ```/usr/lib/jvm/ ``` has ```java-1.7.0-openjdk-+1-225-612-8839_64 java-1.7.0-openjdk-+1-225-612-8839_64 ``` Last night at the most unfortunate possible time, the contents of #65, which artifactory was apparently using, disappeared. Java disappeared. Maybe it was already gone, but the new Linux guys were 'upgrading' the machine, so it's suspicious. Now, the issue is that artifactory cannot forget about version 65. If I type in ```env``` or ```set```, we're golden. No mention of v65. But artifactory lives in its own world. ```[root@me]# service artifactory check Checking arguments to Artifactory: ARTIFACTORY_HOME = /var/opt/jfrog/artifactory ARTIFACTORY_USER = artifactory TOMCAT_HOME = /opt/jfrog/artifactory/tomcat ARTIFACTORY_PID = /var/opt/jfrog/run/artifactory.pid JAVA_HOME = JAVA_OPTIONS = -server -Xms512m -Xmx2g -Xss256k -XX:PermSize=128m -XX:MaxPermSize=256m -XX:+UseG1GC [root@me]# service artifactory start Starting Artifactory tomcat as user artifactory... Max number of open files: 32000 Using ARTIFACTORY_HOME: /var/opt/jfrog/artifactory Using ARTIFACTORY_PID: /var/opt/jfrog/run/artifactory.pid Using CATALINA_BASE: /opt/jfrog/artifactory/tomcat Using CATALINA_HOME: /opt/jfrog/artifactory/tomcat Using CATALINA_TMPDIR: /opt/jfrog/artifactory/tomcat/temp Using JRE_HOME: /usr/lib/jvm/java-1.7.0-openjdk-+1-225-612-8839_64/jre Using CLASSPATH: /opt/jfrog/artifactory/tomcat/bin/bootstrap.jar:/opt/jfrog/artifactory/tomcat/bin/tomcat-juli.jar Using CATALINA_PID: /var/opt/jfrog/run/artifactory.pid ``` ```env``` and ```set``` show ```JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64 JRE_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64/jre ``` ```PATH``` is correct too. ```ls -l``` shows ```lrwxrwxrwx 1 root root 34 Jun 24 22:38 java-1.7.0-openjdk.x86_64 -> java-1.7.0-openjdk-+1-225-612-8839_64 ``` So it's pointing to the right place. 
Where in the hell is the artifactory user getting 65 from? If I try ```su artifactory```, I go to ```bash-4.1$```, indicating that artifactory isn't a user in the traditional sense, but even so, env and set are correct. I finally managed to get it working by compromising. ```/opt/jfrog/artifactory/bin ``` I edited artifactory.default and put my export JAVA_HOME in there, and started artifactory from that folder, instead of as a service. This will do until the next time the Linux team mess up my server. But does anyone know how I can get it running as a service? Here is another answer: I had a similar issue: I installed Artifactory 5.3.2 and months later ran a yum update on my Linux server. I saw the msg: "error artifactory tomcat server did not start in 60 seconds" The issue started after I ran the yum update. The update impacted my version of java as noted below. Verify Error Log ```vi /var/opt/jfrog/artifactory/logs/catalina.out ``` --> /opt/jfrog/artifactory/tomcat/bin/catalina.sh: line 433: /usr/lib/jvm/java-1.8.0-openjdk-51.243.225.55-0.b13.el7_3.x86_64/bin/java: No such file or directory ```vi /opt/jfrog/artifactory/tomcat/bin/catalina.sh export JRE_HOME=/usr/lib/jvm/jre-1.8.0-openjdk-51.243.225.55-3.b12.el7_3.x86_64 export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk-51.243.225.55-3.b12.el7_3.x86_64 cd /opt/jfrog/artifactory/tomcat/bin/ ``` restart catalina ```./catalina.sh ``` artifactory will restart and should show the updated JRE_HOME Here is another answer: Take a look in /etc/init.d/artifactory, which is the script that runs when you call "service artifactory ..." - it looks like something in there (possibly another script that is sourced in) is setting JRE_HOME to the old version. You could also try ```sudo su - artifactory; env | grep JRE ``` to make sure that the artifactory user's environment doesn't set JRE_HOME to the old version. 
Comment for this answer: That sounds like the MySQL user you configured Artifactory to use doesn't have write permission on the Artifactory database. Take a look at http://www.jfrog.com/confluence/display/RTF/MySQL and make sure you've run the GRANT command mentioned on that page. Comment for this answer: There's nothing suspicious in /etc/init.d/artifactory. I did find an offending place where JAVA_HOME and JRE_HOME were being set though, under /etc/profile.d/java.sh. Now when I try start the service, I am getting the right JRE_HOME, but "An SQL data change is not permitted for a read-only connection, user or database." Hmm. Any idea?
Title: How to make Java call a String from an XML file (in Android Studio) Tags: java;android;string Question: I have this code and I would like to make it a multi-language app. What I want is to use the strings from the strings.xml file. How can I change "Colombian Peso" to a string resource? ``` placeHolderData.add(new ExchangeListData("COP","Colombian Peso",R.drawable.colombia,time,"1")); ``` Here is the accepted answer: ```String string = getString(R.string.hello); // change hello to the right identifier // then use it in your call placeHolderData.add(new ExchangeListData("COP", string, R.drawable.colombia, time, "1")); ``` More info: https://developer.android.com/guide/topics/resources/string-resource#java You can always change the ```R.string.hello``` identifier to the right language, i.e. ```R.string.es_peso```, based on the user locale, etc. Here is another answer: Try this ``` String string = getResources().getString(R.string.hello); ``` Or ``` String string = getString(R.string.hello); ``` to get a String from strings.xml, and use it where you need it: ``` placeHolderData.add(new ExchangeListData("COP",string,R.drawable.colombia,time,"1")); ``` Comment for this answer: Thanks for your help
Title: How to display each element in an array from Node in HTML? Tags: html;node.js;express Question: I'm using express to render a page in my node app and have sent an array to the client-side. I'm wondering how I can display each element as a paragraph in the HTML. What I tried is below, but it does not work. When my ```results``` array is five elements long, I can see in Safari's Inspect Element that five ```p``` tags are created but they are empty. Note that the ```results``` array was initiated and populated. Node: ```const app = express(); app.post('/home', (req, res) => { ... // results array gets populated here ... res.render('home', {results: results}); }); ``` HTML: ```<div class="results"> {{#each results}} <p>{{r}}</p> {{/each}} </div> ``` Comment: @ElliotBlackburn Thanks that worked! And yes, I was using handlebars. Sorry I'm new to Node... Thanks again for your quick reply :) Comment: @ElliotBlackburn Done! Comment: That looks like you're using handlebars as your view template engine? If so, you've not defined "r" at all. Assuming "results" is an array of objects such as [{val: "hello"}] you can do {{val}} inside your p tag. If it's just a single value, not an object, then try using {{this}}. Comment: that's great news! I've added this as an answer for future searchers. Would you be able to mark it as correct? Here is the accepted answer: That looks like you're using handlebars as your view template engine? If so, you've not defined ```r``` at all. Assuming ```results``` is an array of objects such as ```[{val: "hello"}]``` you can do ```{{val}}``` inside your p tag. If it's just a single value, not an object, then try using ```{{this}}```. For a complete example: ```<div class="results"> {{#each results}} <p>{{this}}</p> {{/each}} </div> ``` Or ```<div class="results"> {{#each results}} <p>{{val}}</p> {{/each}} </div> ```
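What the ```{{#each}}``` block does during server-side rendering can be sketched outside handlebars. A rough Python stand-in (the function name and exact HTML shape are illustrative, not part of handlebars or express):

```python
def render_results(results):
    """Rough equivalent of {{#each results}}<p>{{this}}</p>{{/each}}:
    emit one <p> per element, wrapped in the results div."""
    paragraphs = "".join(f"<p>{item}</p>" for item in results)
    return f'<div class="results">{paragraphs}</div>'
```

This also makes the original bug visible: referencing an undefined name per iteration (like ```{{r}}```) would emit the ```<p>``` tags with empty bodies, which matches what Inspect Element showed.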
Title: Deployment of static directory contents to google app engine Tags: google-app-engine Question: I've deployed my first GAE application and I am getting "TemplateDoesNotExist" exception at my main page. It feels like my static directory content is not uploaded to GAE. Isn't it possible that I update (appcfg.py update myapp/) all my files including the static ones and run it standalone on myappid.appspot.com ? by the way here you can see the problem: http://pollbook.appspot.com PS: my app works perfect locally Here is the accepted answer: Your templates should not be stored in a directory that you refer to as "static" in app.yaml. Static directories are for literally static files that will be served to end users by the CDN without changing. These files cannot be read by the templating engine. It works locally because the dev_appserver does not precisely emulate the production server. Put your templates in a different directory like /templates or something. You do not need to refer to this directory in your app.yaml.
Title: AWS S3 AccessDenied on subfolder Tags: amazon-web-services;amazon-s3 Question: I have created an S3 bucket and done the steps to enable static web hosting on it. I have verified it works by going to the URL, which looks something like the following: ```https://my-bucket.s3.aws.com``` I now want to put my web assets in a subfolder, so I put them in a folder I called ```foobar```. Now if I want to access them I have to explicitly enter the URL as follows: ```https://my-bucket.s3.aws.com/foobar/index.html``` So my question is, do I need to use some other service such as CloudFront so that I can go into the bucket with the URL ```https://my-bucket.s3.aws.com/foobar``` instead? That is, I don't want to have to explicitly say ```index.html``` at the end. Comment: Have you turned on the [S3 bucket static website hosting](https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html)? Comment: Sorry, I missed that point. Why not try the website endpoint? https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteEndpoints.html Comment: try s3 redirection rules, https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-page-redirect.html Comment: @jellycsc I wrote on the first bullet I have, and the second bullet I've written I've verified it works. Now I want it to work in a `subfolder` :) Comment: @jellycsc thanks I checked the page, and the endpoint examples on that page. It seems like by using S3 only I need to say the object name too when entering an URL. Whereas I'm looking for a way that it will redirect to `index.html` file when I go to a subfolder without explicitly having it in the URL. Here is another answer: You can't do this with a default document for a subfolder using CloudFront. Documentation says ``` However, if you define a default root object, an end-user request for a subdirectory of your distribution does not return the default root object. 
For example, suppose index.html is your default root object and that CloudFront receives an end-user request for the install directory under your CloudFront distribution: http://d111111abcdef8.cloudfront.net/install/ CloudFront does not return the default root object even if a copy of index.html appears in the install directory. ``` But that same page also says ``` The behavior of CloudFront default root objects is different from the behavior of Amazon S3 index documents. When you configure an Amazon S3 bucket as a website and specify the index document, Amazon S3 returns the index document even if a user requests a subdirectory in the bucket. (A copy of the index document must appear in every subdirectory.) For more information about configuring Amazon S3 buckets as websites and about index documents, see the Hosting Websites on Amazon S3 chapter in the Amazon Simple Storage Service Developer Guide. ``` So check out out that referenced guide, and the section on Configuring an Index Document in particular. Comment for this answer: What if your s3 bucket isn't configured as website?! I don't have a root object (since that's all I read on SO as a suggestion). I can access objects within the root of the bucket, but can't for "subfolders".
Title: Automatically email when a cell in column has certain value Tags: google-apps-script;google-sheets Question: I have a column with certain text that I use for signals. When a cell value in the column has the text "Signal1" or "Signal2", then send an email with the title "Signals were found". When scanning the column, any other cell except for "Signal1" or "Signal2" can be ignored. This is what I have so far, but it's only for one cell and one signal: ``` function CheckSignals() { // Fetch data var dataRange = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Signal").getRange("H2:H29"); var data = dataRange.getValue(); // Check for signals if (data == "Go Short" || data == "Go Long"){ // Fetch the email address and send var emailRange = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Email").getRange("C2"); var emailAddress = emailRange.getValues(); // Send Alert Email. var message = 'Signal1 ' + data; // Second column var subject = 'Signals were found'; MailApp.sendEmail(emailAddress, subject, message); } } ``` This is working but it's too simple... There are two signals I have to scan for in the column: "Signal1" and "Signal2". For example, suppose that when column H was scanned, three "Signal1" and/or "Signal2" cells were found. The email content should contain information from the whole row for each cell where a signal was found. Example email: Subject: Signals were found Message: Signal 1 was found in the following rows with the following data: Row6: data from row 6 column A, data from row 6 column B, data from row 6 column C, data from row 6 column D, ...up to column H Row11: Brown, Denver, 23, 1967, 11:00, 34, etc... The spreadsheet always has 29 rows, where the first row is headers. The spreadsheet has 9 columns (A - H). Here is the accepted answer: You want to retrieve rows, when the values of the column "H2:H29" is ```Go Short``` or ```Go Long```. Values are always in "A2:H29". You want to send the retrieved rows as one email. If my understanding is correct, how about this modification? 
Modification points: At first, the values of "A2:H29" are retrieved. Then, the rows including ```Go Short``` or ```Go Long``` in the column "H" are retrieved. In this modified script, when the rows including ```Go Short``` or ```Go Long``` in the column "H" are retrieved, the base of the message is created. Modified script: ```function CheckSignals() { var ss = SpreadsheetApp.getActiveSpreadsheet(); // Fetch data var data = ss.getSheetByName("Signal").getRange("A2:H29").getValues(); // Check for signals var contents = data.map(function(e, i) {return e[7] == "Go Short" || e[7] == "Go Long" ? e[7] + " Row " + (i + 2) + ": " + e.join(", ") : ""}).filter(String); if (contents.length > 0) { // Fetch the email address and send var emailRange = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Email").getRange("C2"); var emailAddress = emailRange.getValues(); // Send Alert Email. var message = contents.join("\n"); var subject = 'Signals were found'; MailApp.sendEmail(emailAddress, subject, message); } } ``` Note: The format of ```message``` is different between your script and your question. So in this modification, the required values are used, because I couldn't understand which is the correct one you want. Please modify this for your situation. I couldn't understand what ```Second column``` means in ```var message = 'Signal1 ' + data; // Second column```. References: map() filter() join() If I misunderstood your question and this was not the result you want, I apologize. Edit: From your shared Spreadsheet, I modified your script again. The differences between your initial question and shared Spreadsheet are as follows. Values of ```Signal1```, ```Signal2``` and ```Go Short``` and ```Go Long``` cannot be found at the column "H". Those values can be seen at the columns of "D", "E", "F" and "G". In your script in your question, ```Go Short``` and ```Go Long``` are used. But in your shared Spreadsheet, ```Go short``` and ```Go long``` are used. 
By above differences, my modified script didn't work. This is due to my poor skill. I apologize for this situation. I reflected above differences and your shared Spreadsheet to my modified script. Please confirm the following modified script. Modified script: ```function CheckSignals() { var ss = SpreadsheetApp.getActiveSpreadsheet(); // Fetch data var data = ss.getSheetByName("Signal").getRange("A2:H29").getValues(); // Check for signals var searchValues = ["Go short", "Go long"]; var contents = data.map(function(row, i) {return searchValues.some(function(e) {return ~row.indexOf(e)}) ? row[7] + " Row " + (i + 2) + ": " + row.join(", ") : ""}).filter(String); if (contents.length > 0) { // Fetch the email address and send var emailRange = ss.getSheetByName("Email").getRange("C2"); var emailAddress = emailRange.getValue(); // Send Alert Email. var message = contents.join("\n"); var subject = 'Signals were found'; MailApp.sendEmail(emailAddress, subject, message); } } ``` Note: About the format of the output values, I couldn't understand about it from your question and reply comment. So I prepared the values as a sample. So about this, please modify for your situation. Comment for this answer: Very elegant solution, thank you! Your code makes sense. Only problem is it's not producing a result. At the moment the spreadsheet does have more than one "Go Short" and no "Go Long", but no email is sent... Comment for this answer: thank you for your interest. Unfortunately it's not working for me. I now made a new spreadsheet just to re-test again: Comment for this answer: This is now working perfectly well! Thank you for this excellent solution. I hope others will benefit by this also. Comment for this answer: @Petrus Thank you for replying. I apologize for the inconvenience. About the situation of your reply comment, in my environment, the script works and email can be sent. So, unfortunately I cannot replicate your situation. 
Can you provide a sample Spreadsheet and script for replicating your situation? Of course, please remove your personal information. I would like to confirm it. If you can cooperate to resolve your new issue, I'm glad. Comment for this answer: @Petrus If you cannot understand my English, please tell me. I have to apologize for it and modify it, because in order to resolve your issue, more information for replicating your situation is required. Can you cooperate to resolve your issue? Comment for this answer: @Petrus Thank you for replying and providing the sample Spreadsheet. I modified the script by reflecting the differences between your initial question and your shared Spreadsheet. Could you please confirm it? If you use the modified script, please use your shared Spreadsheet. I confirmed that the script worked for your shared Spreadsheet. Comment for this answer: @Petrus Thank you for replying. I'm glad your issue was resolved. Thank you, too.
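The filtering idea in the modified scripts above (scan each row, keep it when a search value appears, and turn its 1-based sheet row into a message line) can be sketched outside Apps Script as well. A rough Python analogue, where the sample rows and column layout are made up for illustration:

```python
# Rough analogue of the modified Apps Script: keep rows containing a
# search value and turn each into "label Row N: col1, col2, ...".
# Sample rows below are hypothetical, not from the real spreadsheet.
SEARCH_VALUES = ("Go short", "Go long")

def build_message(rows, first_row=2):  # data starts at sheet row 2 (A2:H29)
    lines = []
    for i, row in enumerate(rows):
        # like searchValues.some(e => ~row.indexOf(e)) in the script
        hit = next((v for v in SEARCH_VALUES if v in row), None)
        if hit is not None:
            lines.append(hit + " Row " + str(first_row + i) + ": " +
                         ", ".join(map(str, row)))
    return "\n".join(lines)

rows = [
    ["EURUSD", 1.08, "Go short", "x"],
    ["GBPUSD", 1.27, "hold", "y"],
    ["USDJPY", 150.1, "Go long", "z"],
]
print(build_message(rows))
```

The joined string becomes the email body, exactly as `contents.join("\n")` does in the script.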
Title: Meaning for backtrace addresses in iOS crash log? Tags: ios;crash-reports;symbolicatecrash Question: A backtrace line in a crash log looks like this: ```6 locationd 0x00000001000bb24c 0x10006c000 + 324172 ``` It seems that ```0x00000001000bb24c``` is the function address, but what does the fourth column mean? The first part of the fourth column appears to be the image base address. What does the second part mean? From this question, someone interprets the fourth column as a base address plus an offset, but the sum does not seem to equal the third column! Comment: The sum shouldn't equal the third column; a backtrace needs to include the return address. Comment: Sorry, that was probably confusing. Mats' answer is correct; the only difference is the base of the two numbers. What I meant to say is that the third column isn't the "function address": it is the address that generated the exception. It will not be the function's entry point, but rather an address somewhere within it. Comment: For the top stack frame, it's the address of the instruction that caused the exception. For the frames below it, it's the return address. But in all of these cases it's not the address of the function. Comment: Hi Andon, can you give a more accurate and detailed answer? @AndonM.Coleman Here is the accepted answer: The last number in the row is printed in decimal (base 10) and all other numbers are printed in hexadecimal (base 16). ```324172``` decimal is ```0x4f24c``` hexadecimal. Adding the load address and offset we get: ```0x10006c000 + 324172 = 0x10006c000 + 0x4f24c = 0x00000001000bb24c ``` Here is another answer: ```0x00000001000bb24c``` is the Stack Address, ```0x10006c000``` is the Load Address, and ```324172``` is the Symbol Offset. EDIT: You can find a guide here: https://www.apteligent.com/developer-resources/symbolicating-an-ios-crash-report/ Comment for this answer: From this blog, `stack address - load address` should equal `symbol offset`.
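The mixed-base arithmetic in the accepted answer is easy to verify programmatically; a quick Python check using the numbers from the backtrace line:

```python
# Backtrace line: 6 locationd 0x00000001000bb24c 0x10006c000 + 324172
# The addresses are hexadecimal; the symbol offset is printed in decimal.
load_address = 0x10006c000   # where the locationd image was loaded
offset = 324172              # decimal; equals 0x4f24c in hex
address = load_address + offset

assert offset == 0x4f24c
assert address == 0x00000001000bb24c
print(hex(address))  # 0x1000bb24c
```

So the third column really is load address plus offset; the confusion comes only from the offset being shown in base 10.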
Title: MySQL Server with MariaDB driver produces dates collation error Tags: java;mysql;jdbc;database-connection;mariadb Question: Context & Problem: My company decided to change from the MySQL driver to the MariaDB driver/connector, while still using MySQL Server as the database server for our Spring app. During this migration, I found problems related to how the driver/connector handles dates, which result in the following error: ```Illegal mix of collations for operation '<=' ``` This occurs when running the following query; notice the date comparison on the query's last line. ```String sql = "select coalesce(sum(b.value), 0) " + " from acc_sub_account_booking b " + " join acc_sub_account sa ON sa.id = b.subAccount_id " + " join km_cash_bond cb ON cb.virtualPayInAccountNumber = sa.iban " + " join km_rented_object ro ON ro.id = cb.rentedObject_id " + " where b.bookingType in ('PAY_IN', 'PAY_OUT') " + " and ro.id = :rentedObjectId " + " and b.valueDate <= :today"; Object sum1 = entityManager.createNativeQuery(sql). setParameter("rentedObjectId", pRentedObject.getId()). setParameter("today", new LocalDateTime(d.getYear(), d.getMonthOfYear(), d.getDayOfMonth(), 23, 59)). getSingleResult(); ``` Versions: Java: 7 MySQL: 5.5 MariaDB driver: 1.4.6 Understanding: After reading the MySQL documentation, it mentions: ``` MySQL Connector/J is flexible in the way it handles conversions between MySQL data types and Java data types. In general, any MySQL data type can be converted to a java.lang.String, and any numeric type can be converted to any of the Java numeric types, although round-off, overflow, or loss of precision may occur. ``` Perhaps the error didn't occur before because the MySQL connector handles the conversion between String and Date, and the error now results from trying to compare two different data types.
This error does not occur when using MariaDB as the server; maybe the conversion handling in MariaDB is done at the database level and not in the connector. Tests: I believed it to be an encoding/character set problem, and changed all the tables and columns to ```utf8_general_ci```. This did not resolve the problem. I ran some tests - always using the MariaDB driver when running the query through the application - and got the following results: Although the tests pass when using MariaDB as the server, this is not an option. Tests #3 and #4 work if the query parameter is cast as a date: ```... and b.valueDate <= DATE(:today)";``` but this would imply many changes to the code. Test #2 is (the only one that failed and) the option I would like to follow, as it implies the least amount of changes. However, I cannot seem to make it work. Question: Is there a way to use MySQL and the MariaDB connector without causing these problems? Is there a better option than casting all the parameters to date with ```DATE(:today)```? Another solution? Thank you. Update: These are the data source properties set in the code: ```dataSource.setJdbcUrl("jdbc:mysql://"+ hostname + ":3306/" + databaseName + "?useUnicode=true&" + "characterEncoding=utf-8"); ``` Update #2: More information: The following statements were also executed: ```ALTER DATABASE km CHARACTER SET utf8 COLLATE utf8_general_ci; SET collation_connection = 'utf8_general_ci'; SET collation_server = 'utf8_general_ci'; ``` Also, every table listed in ```INFORMATION_SCHEMA.COLUMNS``` and ```INFORMATION_SCHEMA.TABLES``` was ```CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;```. @elenst solution steps: Enabled general_log; confirmed the values for ```NAMES``` and ```character_set_results``` are the same (latin1 and NULL) when using the MySQL connector; switched back to MariaDB and set these values; ran the application/query, and the error still persists.
I also tried with ```?sessionVariables=character_set_client=latin1``` and ```?sessionVariables=character_set_client=utf8```, and the result was the same. @DiegoDupin: You cannot apply ```Timestamp.valueOf()``` to a Joda-Time ```LocalDateTime```. You can wrap it: ```setParameter("today", Timestamp.valueOf(String.valueOf(new LocalDateTime(d.getYear(), d.getMonthOfYear(), d.getDayOfMonth(), 23, 59))))``` but it results in a Timestamp format error. ```setParameter("today", new LocalDateTime(d.getYear(), d.getMonthOfYear(), d.getDayOfMonth(), 23, 59).toDate())``` will however work. The question remains: other parts of the system may still be affected by this fault due to the driver change. @RickJames: ```SHOW CREATE TABLE acc_sub_account_booking```. The comparison is done on the ```valueDate``` field, of type ```date```, although the same happens for a ```datetime``` field present in another (similar) query. ```| acc_sub_account_booking | CREATE TABLE `acc_sub_account_booking` ( `id` bigint(20) NOT NULL AUTO_INCREMENT, `creationDate` datetime DEFAULT NULL, `lastModifiedDate` datetime DEFAULT NULL, `bookingText` varchar(255) DEFAULT NULL, `bookingType` varchar(255) DEFAULT NULL, `uuid` varchar(255) DEFAULT NULL, `value` decimal(19,2) DEFAULT NULL, `valueDate` date DEFAULT NULL, `zkaGVC` int(4) NOT NULL, `subAccount_id` bigint(20) DEFAULT NULL, `bankStatementDate` date DEFAULT NULL, `counterpartHolder` varchar(255) DEFAULT NULL, `counterpartIban` varchar(255) DEFAULT NULL, `customerReference` varchar(255) DEFAULT NULL, `endToEndReference` varchar(255) DEFAULT NULL, `returnReason` varchar(255) DEFAULT NULL, `customerSpecificInformations` text, `counterpartBic` varchar(255) DEFAULT NULL, PRIMARY KEY (`id`), UNIQUE KEY `uuid` (`uuid`), KEY `FK_SAB_SA` (`subAccount_id`), CONSTRAINT `FK_SAB_SA` FOREIGN KEY (`subAccount_id`) REFERENCES `acc_sub_account` (`id`) ) ENGINE=InnoDB AUTO_INCREMENT=3407 DEFAULT CHARSET=utf8 | ``` Final Update: The problem listed here can be solved by casting to
a Java ```Date``` object, instead of directly using ```JodaTime```: ```setParameter("today", new LocalDateTime(d.getYear(), d.getMonthOfYear(), d.getDayOfMonth(), 23, 59).toDate()) ``` Nonetheless, other problems were found, and in the end my company decided to revert their decision to use the MariaDB driver. A very big thank you to all who offered their time to help. Comment: Could you enable the query log and see what is actually being received on the server side? LocalDateTime is not handled in the driver; maybe it is handled by Spring somehow, I'm not familiar with that. But maybe you are better off using java.sql.Timestamp or something like that. Comment: Well, it is a prepared statement then, server-side. You can use useServerPrepStmts=false in the JDBC URL to ensure client-side prepare, but it will probably show you the serialized binary LocalDateTime, because this is how setObject in this driver works for unknown types. Comment: @VladislavVaintroub the query log doesn't show the "translated" query: 15 Prepare select coalesce(sum(u.betrag), 0) from km_rented_object ro join km_cash_bond cb ON cb.rentedObject_id = ro.id join konto k ON k.uuid = cb.truster_account_uuid join umsatz u ON u.konto_id = k.id where ro.id = ? and u.vorfallKennung in (100, 200) and u.buchungszeitpunkt <= ? 15 Query SELECT 1 15 Query ROLLBACK 15 Query set autocommit=1` That is what I find strange. Comment: @RickJames `valueDate`, you were right. I wrote the correct one. Comment: What is the datatype of `valueDate`? (Or `SHOW CREATE TABLE`.) Can you display the sql after `:today` has been substituted? Comment: `buchungszeitpunkt`? or `b.valueDate`?? Here is the accepted answer: Query.setParameter relies on PrepareStatement.setObject(...). The MariaDB JDBC driver doesn't handle LocalDateTime objects in setObject. I just created this issue for handling that. A workaround is to convert LocalDateTime to Timestamp: ```Object sum1 = entityManager.createNativeQuery(sql).
setParameter("rentedObjectId", pRentedObject.getId()). setParameter("today", Timestamp.valueOf(new LocalDateTime(d.getYear(), d.getMonthOfYear(), d.getDayOfMonth(), 23, 59))). getSingleResult(); ``` Edit: If LocalDateTime corresponds to java.time.LocalDateTime, then using Timestamp.valueOf((LocalDateTime)x) is a workaround. If LocalDateTime corresponds to org.joda.time.LocalDateTime, then toDate() is a solution. The MySQL driver in this particular case works the same way as MariaDB: if the object's class is unknown, the object will be serialized and sent to the server. Since org.joda.time.LocalDateTime, as you can imagine, is not defined in JDBC, you must already have faced some surprises here. The data sent to the server is not a temporal value. Comment for this answer: Maybe in MySQL it was serialized with toString() ;) I think, for JDBC, it is appropriate to either use native JDBC types in setObject, java.sql.Timestamp for example, or native JDK types, like java.util.Date, or (sigh) LocalDateTime (not the Joda one, the Java 8 one). Comment for this answer: Can't apply a `Timestamp.valueOf()` to a Joda-Time `LocalDateTime`. You can wrap it like so: `setParameter("today", Timestamp.valueOf(String.valueOf(new LocalDateTime(d.getYear(), d.getMonthOfYear(), d.getDayOfMonth(), 23, 59))))` but it results in a Timestamp format error. From my tests, the parameters must either be a `String` or a `java.util.Date`. That would solve this situation, but I don't know how many more cases can exist. Comment for this answer: _After Edit:_ I agree with you. It should have been defined as `Date` from the start. Now, `toDate()` would indeed solve this problem, but there isn't a guarantee there aren't other similar cases. I understand MariaDB isn't simply a drop-in, but the question remains: how was this query working with the MySQL driver, and why isn't it with the MariaDB driver? Comment for this answer: @VladislavVaintroub That - "serialized with toString()" - is also my guess.
You are absolutely right about using the native JDBC types. I wasn't aware of JodaTime being used in the PrepStmts (new to the company). Because it worked, everyone must have thought it was OK... But now it doesn't work, so let the fixing begin :) Here is another answer: IMPORTANT NOTE: Everywhere in the text and code lines below, ```latin1``` is just an example; the actual character set (and possibly collation) values will need to be chosen based on the information found in the log as described in the experiment steps. When there is a difference related to character sets and collations between MariaDB and MySQL Connector/J in otherwise identical conditions, it's often because MySQL Connector/J can automatically execute these ```SET``` statements upon a new connection: ```SET NAMES <character set>; SET character_set_results = NULL; ``` and/or ```SET character_set_results = <character set>; SET collation_connection = <collation>; ``` For the latter two, there should be corresponding connection properties, so they are more obvious. For the first two, the algorithm is more obscure. The MariaDB connector does not do this; it will just set the session variables provided in the connection properties.
The easy way to discover the exact difference is this: Enable general log on the server (```SET GLOBAL general_log=1```); Run the script using MySQL Connector/J; Check the general log, find the beginning of a connection, it should look somewhat like this: ``` 99 Query /* mysql-connector-java-5.1.39 ( Revision: 3289a357af6d09ecc1a10fd3c26e95183e5790ad ) */SELECT @@session.auto_increment_increment AS auto_increment_increment, @@character_set_client AS character_set_client, @@character_set_connection AS character_set_connection, @@character_set_results AS character_set_results, @@character_set_server AS character_set_server, @@init_connect AS init_connect, @@interactive_timeout AS interactive_timeout, @@license AS license, @@lower_case_table_names AS lower_case_table_names, @@max_allowed_packet AS max_allowed_packet, @@net_buffer_length AS net_buffer_length, @@net_write_timeout AS net_write_timeout, @@query_cache_size AS query_cache_size, @@query_cache_type AS query_cache_type, @@sql_mode AS sql_mode, @@system_time_zone AS system_time_zone, @@time_zone AS time_zone, @@tx_isolation AS tx_isolation, @@wait_timeout AS wait_timeout 99 Query SET NAMES latin1 99 Query SET character_set_results = NULL 99 Query SET autocommit=1 ... ``` (values in ```SET```s can differ). Then, for an experiment, add the exact same statements to be executed directly from your code right after establishing connection, e.g. ```con= DriverManager.getConnection(...); Statement st= con.createStatement(); st.execute("SET NAMES latin1"); st.execute("SET character_set_results = NULL"); ``` (or whatever better syntax you would use, and of course with the same values that you've seen in the general log). Recompile and run now with MariaDB Connector/J. Check the log, make sure the statements are executed. Check results. If the difference in behavior between MySQL and MariaDB connectors is now gone, you have found the reason. 
You can further narrow it down by trying to remove ```character_set_results```, which is likely to be unimportant, and replacing ```SET NAMES``` by setting the actual session variables, starting with ```character_set_client```. Finally, in the MariaDB connector, session variables can be configured by adding them to the connection line as ```"jdbc:mysql://localhost:3306/test?sessionVariables=character_set_client=latin1" ``` etc. Comment for this answer: jdbc:mysql://localhost:3306/test?sessionVariables=character_set_client=latin1 is not good advice. This connector assumes everything to be UTF8, mb4 or not. Comment for this answer: @RickJames, is it a bug if you do not have to have "useUnicode=yes&characterEncoding=UTF-8" to use Unicode? But it is definitely a bug if one cannot compare *dates* because of "illegal mix of collations". A date is not text data. Comment for this answer: The driver announces UTF8mb3 by default, and does not say it is latin1. jdbc:mysql://localhost:3306/test?sessionVariables=character_set_client=latin1 would unfortunately announce latin1 while the connector continues talking UTF8. Comment for this answer: The driver sets collation id to 33 or 45 in the client authentication packet, which, according to https://mariadb.com/kb/en/mariadb/supported-character-sets-and-collations/, corresponds to utf8_general_ci or utf8mb4_general_ci. 45 (utf8mb4_general_ci) is used only if the server sends this collation in its initial authentication packet. Comment for this answer: There is an easy explanation for the confusion. In the context of this specific driver, this example, as written, will easily cause data corruption whenever any non-ASCII character is involved. This driver always talks UTF8 to the server, and expects to read UTF8 from the server. This is, perhaps, the biggest difference between MySQL's and MariaDB's drivers. As to why the MySQL driver does a lot of SET commands with charsets, I do not really know; maybe this is historical.
They could just place the collation id into the authentication packet, like the MariaDB driver does. Comment for this answer: These are **MySQL** connector options; **MariaDB** Connector/J doesn't have them. Comment for this answer: There has already been [at least one](https://jira.mariadb.org/browse/CONJ-18). Maybe if people report more, the decision will be reconsidered; but the thing is, MariaDB Connector/J was never positioned as a drop-in replacement for MySQL Connector/J; it doesn't provide identical functionality. [It was developed specifically as a **lightweight** JDBC connector](https://mariadb.com/kb/en/mariadb/about-mariadb-connector-j/). Comment for this answer: I have no idea why the use of `latin1` in the text has caused so much confusion, as the text said a few times that it's an example (which might work in some cases, but obviously not universally), while the actual values need to be found out, and the whole text was about how to do it; but since it *has* caused the confusion, I've just added one more note at the beginning of the text to emphasize this particular point. Hope it will put the "latin1 is bad for your health" concerns to rest. There were no other changes in the text. Comment for this answer: Update 2 has more information and detailed responses to the comments so far. Thank you. Comment for this answer: @RickJames I'm not sure I fully understand your question (How are the bytes encoded in the client?) but I believe so. We have to deal with German characters, so we have to encode them using UTF-8. Comment for this answer: @RickJames `SHOW CREATE TABLE` added to the question. Comment for this answer: If you are using utf8/utf8mb4, then I think the connection line needs two things: `useUnicode=yes&characterEncoding=UTF-8` Comment for this answer: Ouch, that's naughty. File a bug with MariaDB. Comment for this answer: Things are moving toward `utf8mb4` as being the preferred `CHARACTER SET` (aka "UTF-8" outside MySQL).
I would hope that all connectors, etc, are in that frame of mind. Comment for this answer: I added my 2-cents to that "closed" bug. Comment for this answer: How are the bytes encoded in the client? If they are UTF-8, then do _not_ say that the client is using latin1. Comment for this answer: "Illegal mix of _collations_" implies a COLLATION problem, not a CHARACTER SET problem. So, I again ask for the info in the my comment on the Question. Comment for this answer: The German `ß`, when encoded in latin1 is one byte, hex `DF`. In utf8 it is 2 bytes, hex `C39F`. If `C39F` is treated as latin1, it displays as `ß`. If you can display the hex for a non-ascii character in your client, that would help in debugging. Comment for this answer: Update 2 mentions a lot of things, but I still want to see what the result was; please provide `SHOW CREATE TABLE`. Here is another answer: A likely workaround: Change ```b.valueDate <= :today``` to ```b.valueDate < CURDATE() + INTERVAL 1 DAY```
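The rewrite works because, for a DATE column like ```valueDate```, comparing against "today 23:59" and comparing against the half-open bound ```< CURDATE() + INTERVAL 1 DAY``` select the same rows, and the half-open form is also safe for DATETIME values that fall after 23:59. A quick sanity check of that equivalence with Python's datetime, standing in for the SQL comparison:

```python
from datetime import datetime, timedelta

# Compare two predicates over DATE values (stored as midnight timestamps):
#   valueDate <= "today 23:59"            (the original query's bound)
#   valueDate <  CURDATE() + 1 DAY        (the suggested half-open bound)
today = datetime(2017, 3, 15)
end_of_today = today.replace(hour=23, minute=59)
tomorrow = today + timedelta(days=1)

value_dates = [datetime(2017, 3, 14), datetime(2017, 3, 15), datetime(2017, 3, 16)]
closed = [d for d in value_dates if d <= end_of_today]
half_open = [d for d in value_dates if d < tomorrow]
assert closed == half_open  # identical for pure DATE values

# For DATETIME columns the half-open form is safer: it also catches 23:59:30.
late = datetime(2017, 3, 15, 23, 59, 30)
assert late < tomorrow
assert not (late <= end_of_today)
```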
Title: "Unknown Command" error when passing "uname" to execv() Tags: c;parsing;ubuntu;execv Question: I have to build a program in C that takes a command from the keyboard, splits it into tokens stored in an array, and uses those tokens as input to "execv" (a command in Ubuntu). I chose the command "uname" with the parameter "-a", but for some reason it keeps saying "Comanda necunoscuta!" (Unknown command!). Here's my code: ```#include <stdio.h> #include <stdlib.h> #include <string.h> /*strtok strcpy*/ #include <malloc.h> /*malloc*/ #include <sys/types.h> /* pid_t */ #include <sys/wait.h> /* waitpid */ #include <unistd.h> /* _exit, fork */ int main() { int i=0; char *cuvinte[256]; //words char comanda[256]; //command printf("Introduceti comanda: "); // command input fgets(comanda,sizeof(comanda),stdin); // read command char *c = strtok(comanda," "); // break command into tokens while(c!=0) { cuvinte[i] = malloc( strlen( c ) + 1 ); //allocate memory strcpy(cuvinte[i++],c); // copy them printf("%s\n",c); // print them c=strtok(NULL, " ,.!?"); } printf("Sunt %d elemente stocate in array! 
\n\n",i); // no of elements stored printf("Primul cuvant este: %s \n\n",cuvinte[0]); // shows the first token if((cuvinte[0]=='uname')&&(cuvinte[1]=='-a')){ // here lies the problem, I guess /*face un proces copil*/ pid_t pid=fork(); if (pid==0) { /* procesul copil*/ static char *argv[]={"/bin/uname","-a",NULL}; execv(argv[0],argv); exit(127); /*in caz ca execv da fail*/ } else { /* pid!=0; proces parinte */ waitpid(pid,0,0); /* asteapta dupa copil */ } } else printf("Comanda necunoscuta !\n"); // problem //getch(); return 0; } ``` Comment: There is no reason to run `/bin/uname` ([uname(1)](http://man7.org/linux/man-pages/man1/uname.1.html)) as a separate process in C when you could simply use the [uname(2)](http://man7.org/linux/man-pages/man2/uname.2.html) & [gethostname(2)](http://man7.org/linux/man-pages/man2/gethostname.2.html) syscalls (using them is faster and more reliable, since you don't need any external file like `/bin/uname` to be present). Comment: It's my program, but back then I had another problem with it, which I solved; now I've hit another bump. Someone told me to repost the question rather than edit the old one, so I did! Comment: possible duplicate of [C Progam parser for ubuntu?](http://stackoverflow.com/questions/26755741/c-progam-parser-for-ubuntu) Comment: Please compile with warnings enabled. You have a multi-character constant `'uname'`, which isn't what you think it is. Here is the accepted answer: Firstly, add ``` cuvinte[1][strlen(cuvinte[1])-1]='\0'; ``` after the while loop, just before ```printf("Sunt %d elemente stocate in array! \n\n",i);```. Secondly, use ``` if((strcmp(cuvinte[0],"uname")==0) && (strcmp(cuvinte[1],"-a")==0)) ``` instead of "==" and the program will work!! Comment for this answer: Yes, it has to be added so that the "\n" at the end can be avoided. 
Please accept the answer then :) Comment for this answer: I've tried this method before, but I didn't add cuvinte[1][strlen(cuvinte[1])-1]='\0'; and because of that it didn't work. Thanks, it works now! Here is another answer: First of all, I don't have enough reputation to post a comment, sorry for that. I'm not sure you can compare two strings with the '==' operator in C. Try using the strcmp function.
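The extra `cuvinte[1][strlen(cuvinte[1])-1]='\0';` in the accepted answer is needed because `fgets` keeps the trailing newline, so the last token of the typed command is "-a\n" rather than "-a". The same effect is easy to reproduce in Python:

```python
# fgets() keeps the trailing newline, so after splitting "uname -a\n"
# on spaces the last token is "-a\n", not "-a".
line = "uname -a\n"            # what fgets() returns for the typed command
tokens = line.split(" ")
assert tokens[1] == "-a\n"     # a comparison with "-a" fails here
tokens[1] = tokens[1].rstrip("\n")  # analogue of cuvinte[1][strlen-1] = '\0'
assert tokens[1] == "-a"       # now the comparison succeeds
```

(The other half of the fix, `strcmp` instead of `==`, is C-specific: `==` compares pointers, not string contents.)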
Title: wchar.h: "'__FILE' was not declared in this scope" error with gcc 5.2.0 Tags: c++;gcc;stdio;powerpc;wchar Question: I am having trouble compiling code before it ever gets to the linker. Can anyone point me in the right direction? Here is a very simple piece of source code that reproduces this issue. ```// CpxCryptoIfSimClientC.cpp // #include <iostream> // end ``` Compilation command line: ```powerpc64-wrs-linux-g++ -c -o CpxCryptoIfSimClientC.o -m64 --sysroot=/proj/platform_cs/linux/deliveries/sdk_install/sysroots/ppc64e6500-wrs-linux -ggdb -O3 -Wextra -Wall -DPOWERPC64 -DLNX -D_GNU_SOURCE -Os -fno-unit-at-a-time -pedantic -Wall -Wno-long-long `xml2-config --cflags` -std=gnu++98 -DINLINE=__inline__ -I../inc CpxCryptoIfSimClientC.cpp ``` Output: ```In file included from /proj/platform_cs/linux/deliveries/sdk_install/sysroots/ppc64e6500-wrs-linux/usr/include/c++/5.2.0/cwchar:44:0, from /proj/platform_cs/linux/deliveries/sdk_install/sysroots/ppc64e6500-wrs-linux/usr/include/c++/5.2.0/bits/postypes.h:40, from /proj/platform_cs/linux/deliveries/sdk_install/sysroots/ppc64e6500-wrs-linux/usr/include/c++/5.2.0/iosfwd:40, from /proj/platform_cs/linux/deliveries/sdk_install/sysroots/ppc64e6500-wrs-linux/usr/include/c++/5.2.0/ios:38, from /proj/platform_cs/linux/deliveries/sdk_install/sysroots/ppc64e6500-wrs-linux/usr/include/c++/5.2.0/ostream:38, from /proj/platform_cs/linux/deliveries/sdk_install/sysroots/ppc64e6500-wrs-linux/usr/include/c++/5.2.0/iostream:39, from ../../TestHelperStrategy_ppc64_linux/SimTestHelper.h:29, from ../CpxCryptoIfSimClientC.h:25, from ../CpxCryptoIfSimClientC.cpp:20: /proj/platform_cs/linux/deliveries/sdk_install/sysroots/ppc64e6500-wrs-linux/usr/include/wchar.h:582:8: error: '__FILE' does not name a type extern __FILE *open_wmemstream (wchar_t **__bufloc, size_t *__sizeloc) __THROW; /proj/platform_cs/linux/deliveries/epb/epb_epb2_v1.52/sdk_install/sysroots/ppc64e6500-wrs-linux/usr/include/wchar.h:589:19: error: '__FILE' was not declared 
in this scope extern int fwide (__FILE *__fp, int __mode) __THROW; /proj/platform_cs/linux/deliveries/epb/epb_epb2_v1.52/sdk_install/sysroots/ppc64e6500-wrs-linux/usr/include/wchar.h:589:27: error: '__fp' was not declared in this scope extern int fwide (__FILE *__fp, int __mode) __THROW; ``` and many similar "not declared in this scope" errors. I am using gcc version 5.2.0 for powerpc64. Is there anything wrong with ```wchar.h``` or any other header file in this gcc version, or am I missing a flag here? Comment: `-DINLINE=__inline__` -- This seems to me like rather intentionally throwing a wrench into the library implementation gears. Any particular reason why you're doing this? More generally speaking, how did you come up with that rather involved compiler invocation in the first place? Comment: `-O3` followed by `-Os`, `-Wall` appearing twice, and I get a *really* bad feeling about what might hide behind the `........`. Could you reduce this to a [mcve]? Comment: Post your code. It looks like you might be missing a header file.
Title: KableExtra conditionally formatting specific rows on a column Tags: r;kable;kableextra Question: I have just learnt kableExtra and know how to conditionally format an entire column with mutate(), as explained in the docs: ```mutate( mpg = cell_spec(mpg, background = ifelse(mpg > 20, "red", "blue")) ) ``` What I don't know is how to change the background colours of only certain rows in each column while all rows are being displayed. For example, my data: ```df <- data.frame( region1 = c("A", sample(1:5,3)), region2 = c("B", sample(1:5,3)), region3 = c("C", sample(1:5,3)), region4 = c("A", sample(1:5,3)) ) ``` Now I want to format only the second and third rows; I don't want to change the background colour of the first and last rows. The second and third rows should be red when above 1, yellow when equal to 1, or green when below 1. Could someone help me with this? Comment: I need to display a single table where a few rows are strings and the rest are numbers, and I have to format the numbers in the different columns. I don't know how to handle a dataframe with mixed types. Should I keep them separately? If so, how will I display them as a single table.. :-( Comment: Will only the first row be letters and all other rows be numeric? Comment: I try to adhere to the tidy data format (see this [PDF](http://vita.had.co.nz/papers/tidy-data.html) for Hadley Wickham's paper on the subject) – in short, rows are observations, while columns are variables – so I would keep letters and numbers together if they refer to the same variable. Here is the accepted answer: Here's an example that ignores the first and last rows and colours according to value, like you say, but ignores letters. First, I load the libraries. ```# Load libraries library(knitr) library(kableExtra) ``` Next, I create a dummy data frame. 
```# Create data frame df <- data.frame( region1 = c(sample(c(-5:5, letters[1:5]), 10, replace = TRUE)), region2 = c(sample(c(-5:5, letters[1:5]), 10, replace = TRUE)), region3 = c(sample(c(-5:5, letters[1:5]), 10, replace = TRUE)), region4 = c(sample(c(-5:5, letters[1:5]), 10, replace = TRUE)), stringsAsFactors = FALSE ) ``` Here, I define the function for formatting cells. I ignore the first and last rows and check if the character is a letter or number, then colour accordingly. ```foo <- function(x, n, nmax){ cell_spec(x, background = ifelse(is.na(as.numeric(x)), "white", ifelse(n == nmax | n == 1, "white", ifelse(x > 1, "red", ifelse(x < 1, "green", "yellow"))))) } ``` Finally, I apply the function. ```df %>% mutate_all(funs(foo(., n = row_number(), nmax = n()))) %>% kable(escape = FALSE) %>% kable_styling() ``` Comment for this answer: It is just fabulous. I have learnt and extended it with your help. Thanks a lot for saving my time; I was about to give up. Somebody said coding is poetry, and I see that in your function usage! Thanks again! Here is another answer: That's not a good design for a dataframe: columns need to be all one type, so your numbers will be coerced to character. Nevertheless, you can do what you ask for as follows. ```fixcol <- function(col) { x <- as.numeric(col[2:3]) x <- cell_spec(x, background = ifelse(x > 1, "red", ifelse(x == 1, "yellow", "green"))) col[2:3] <- x col } df <- as.data.frame(lapply(df, fixcol)) kable(df, escape = FALSE) ``` Comment for this answer: Many thanks. I am getting an error: Error in `[<-.factor`(`*tmp*`, 2:3, value = NULL) : replacement has length zero. Am I doing anything wrong? Comment for this answer: I increased the rows... now it returns... NA Comment for this answer: You could print `col` in the `fixcol` function. If it has fewer than 3 entries, you'd get a message like that, because `x` would end up too short.
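The nested `ifelse` in the accepted answer's `foo()` encodes one rule per cell: non-numeric cells and the first/last rows stay white, everything else is coloured by value. The same decision table can be written out explicitly; a Python rendering of that rule, with a made-up sample column:

```python
def cell_color(x, row, nmax):
    """Mirror of the R foo() rule: white for non-numeric cells and for
    the first/last rows; otherwise red > 1, yellow == 1, green < 1."""
    try:
        v = float(x)
    except (TypeError, ValueError):   # like is.na(as.numeric(x)) in R
        return "white"
    if row == 1 or row == nmax:       # like n == 1 | n == nmax
        return "white"
    if v > 1:
        return "red"
    if v < 1:
        return "green"
    return "yellow"

column = ["a", 3, 1, -2, "b"]         # hypothetical mixed-type column
colors = [cell_color(x, i + 1, len(column)) for i, x in enumerate(column)]
print(colors)  # ['white', 'red', 'yellow', 'green', 'white']
```

Spelling the rule out this way makes it easy to see why the first and last rows never get a value-based colour, whatever they contain.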
Title: Managing Angular Module names in a large project Tags: angularjs;namespaces;coding-style Question: I'm working on a large-scale Angular project with a team of devs. The problem we run into is that when you have several files for a component, say a directive: some-directive.js some-directive-controller.js then in both files you attach to the same module, but only one file may create the module with ```[]```. If a developer doesn't poke around first and adds ```[]``` in a second file, that call will actually overwrite the module. So it becomes a memory game: each developer has to remember to declare the module with ```[]``` in only one file. some-directive.js ```angular.module('some-module',['some-dependencies']).directive('some-directive',function(){}); ``` some-controller.js ```angular.module('some-module',[]).controller('some-controller',function(){}); ``` We have been using the following approach. Is there a better way? some-directive.js some-directive-module.js some-directive-controller.js where some-directive-module.js contains only the module creation, includes any dependencies, and does any .config needed. Still, the dev needs to remember to call angular.module('some-directive') in all the other files without the square brackets. 
some-directive-module.js ```angular.module('some-directive',[]) .config(//someconfig stuff); ``` some-directive-module.js ```angular.module('some-directive').directive(//declare directive);``` some-directive-controller.js ```angular.module('some-directive').controller(//declare controller used by directive);``` I suggested that instead we should do the following; it eliminates the issue of overwriting modules, but I received some negative feedback from one of the other devs. some-directive-module.js ```angular.module('some-directive',['some-directive.directive','some-directive.controller']) .config(//someconfig stuff); ``` some-directive-module.js ```angular.module('some-directive.directive',[]).directive(//declare directive);``` some-directive-controller.js ```angular.module('some-directive.controller',[]).controller(//declare controller used by directive);``` Is there a better way? Or is one of the above options correct? Comment: I recommend you take a look at John Papa's Angular Styleguide https://github.com/johnpapa/angular-styleguide. It goes beyond a pattern for naming stuff and gives you a whole world of good practices for Angular apps. It is also flexible so you can adapt it to your own preferences. I'm using it on my projects and everything is going fine. Here is the accepted answer: The recommended way (by multiple competent people) is to use the setter-getter syntax (creating once with ```angular.module("someModule",[])``` and accessing with ```angular.module("someModule")``` from there on). Putting the module definition and configuration into one file seems very clean and is common practice among a lot of developers. But make sure not to create a module for every single directive - group services, directives, constants and so on into reasonable functionality modules instead. Making clear what a file contains by its name is also a good idea in my opinion, so your some-directive-module.js approach seems fine to me.
If developers "poke around" and "wildly add []", they should get a slap on the wrist followed by an explanation of how modules work in Angular, so they stop doing it ;-)
Title: Sphinx documentation - creating table showing latest api requests Tags: python;python-sphinx Question: Hello, I want to generate in Sphinx a status overview table - one column including information on whether the last API request was successful (shown as a check mark) and another column including the related date. For example, a Quandl request was successful:
```+---------------+--------------------+--------------+
| Data provider | Successful request | Last request |
+---------------+--------------------+--------------+
| Quandl        | ✅                 | 2020-05-12   |
| Eurostat      | ✅                 | 2020-05-1    |
| ...           | ❌                 | ...          |
+---------------+--------------------+--------------+
```
Do you have ideas how to implement this in Sphinx? Thanks in advance :) Here is the accepted answer: You were close. You were missing row separators.
```+---------------+--------------------+--------------+
| Data provider | Successful request | Last request |
+---------------+--------------------+--------------+
| Quandl        | ✅                 | 2020-05-12   |
+---------------+--------------------+--------------+
| Eurostat      | ✅                 | 2020-05-1    |
+---------------+--------------------+--------------+
| ...           | ❌                 | ...          |
+---------------+--------------------+--------------+
```
For more options, see the Sphinx documentation on tables.
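Since such a status table is meant to reflect live request results, it can be generated rather than hand-edited. Here is a sketch (hypothetical data and helper, not from the answer) that emits a reStructuredText grid table with the row separators the accepted answer points out:

```python
# Sketch: build a reST grid table (with a separator after every row, as reST
# grid tables require) from hypothetical request-status data.
rows = [
    ("Data provider", "Successful request", "Last request"),
    ("Quandl", "yes", "2020-05-12"),
    ("Eurostat", "yes", "2020-05-01"),
]

# Column widths come from the widest cell in each column.
widths = [max(len(r[i]) for r in rows) for i in range(3)]
sep = "+" + "+".join("-" * (w + 2) for w in widths) + "+"

lines = [sep]
for row in rows:
    lines.append("|" + "|".join(f" {cell.ljust(w)} " for cell, w in zip(row, widths)) + "|")
    lines.append(sep)  # the separator the question's table was missing

table = "\n".join(lines)
print(table)
```

The output can be pasted into a `.rst` file as-is, or written out by a small build step before Sphinx runs.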
Title: Issue with login script Tags: php;mysql;login-script Question: I'm trying to make a login script but I'm stuck on a problem:
```<?php
session_start();

if (isset($_POST['username'])) {
    $username = mysql_real_escape_string($_POST['username']);
    $password = mysql_real_escape_string($_POST['password']);

    $query = mysql_query( "SELECT id FROM users WHERE username = '$username' AND password = '$password'" );
    if (mysql_num_rows($query) == 0) {
        header('Location: ?error');
        exit();
    }

    // assign id to session
    $_SESSION['id'] = mysql_result($query, 0, 'id');
    mysql_query( "UPDATE users SET last_activity = ".time()." WHERE ".$_SESSION['id'] );
    header("Location: /");
    exit();
}
?>
```
The problem with this script is that it sets last_activity to the current time on EVERY user. Can't figure the problem out. Some help would be greatly appreciated, and yes, I'm gonna look into password encrypting later :P edit: found the problem, it should be ```mysql_query("UPDATE users SET last_activity = ".time()." WHERE id = ".$_SESSION['id']);``` Comment: Robert, put your solution as an answer and accept it Here is another answer: You can use the MySQL function now() as opposed to the PHP function time() - not that it makes a huge difference, but it's slightly neater as you don't have to break the string to do it. E.g. "Update tablename set time=now() where condition"
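The OP's edit explains the fix but not why the original query hit every row. A sketch (Python with sqlite3 standing in for PHP/MySQL; table and values are hypothetical) shows the mechanism: `WHERE ".$_SESSION['id']` produces something like `WHERE 5`, and a non-zero constant is true for every row.

```python
# Sketch: why "WHERE <id>" updates every row while "WHERE id = <id>" does not.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, last_activity INTEGER)")
conn.executemany("INSERT INTO users (id, last_activity) VALUES (?, 0)", [(1,), (2,), (3,)])

now = int(time.time())

# The bug: the WHERE clause is just the bare session id, e.g. "WHERE 2".
# A non-zero constant is truthy for every row, so every user gets updated.
conn.execute(f"UPDATE users SET last_activity = {now} WHERE 2")
assert conn.execute("SELECT COUNT(*) FROM users WHERE last_activity != 0").fetchone()[0] == 3

# The fix: compare against the id column, and bind the value as a parameter
# instead of concatenating strings (which also avoids SQL injection).
conn.execute("UPDATE users SET last_activity = 0")
conn.execute("UPDATE users SET last_activity = ? WHERE id = ?", (now, 2))
assert conn.execute("SELECT COUNT(*) FROM users WHERE last_activity != 0").fetchone()[0] == 1
```

The same reasoning applies to the deprecated `mysql_*` API in the question: in modern PHP one would use PDO or mysqli with bound parameters rather than interpolated strings.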
Title: declare a string rather than address Tags: c#;asp.net;ftp Question: ```alt="" src="ftp://+1-838-486-8450/Chrysanthemum.jpg" style="height: 299px; width: 317px" ``` How do I declare a string rather than an address for FTP? I would like to say ```alt="" src="imagePath" style="height: 299px; width: 317px" ``` How would I declare that in my actual C# aspx.cs code? And in this code above? Here is the accepted answer: If you mean you want to use the value of a string variable as the ```src``` attribute, then you will want to do something like this: ```alt="" src="<%= Server.HtmlEncode(someStringVariable) %>" ... ``` Or if you are on ASP.NET 4: ```alt="" src="<%: someStringVariable %>" ... ``` Comment for this answer: compiler error PhotoPath does not exist in the current context? Comment for this answer: `{ string PhotoPath; GridViewRow row = GridView1.Rows[GridView1.SelectedIndex]; PhotoPath = row.Cells[5].Text; }` Comment for this answer: protected void GridView1_SelectedIndexChanged(object sender, EventArgs e) Comment for this answer: Yup, done and done, you sir are awesome! Thanks so much for the help; my god, lucky I didn't go with my previous coding, should have seen it ;) Comment for this answer: @Garrith: Now that you've managed to get this to work, you can use the same code to point to an ASHX, without giving the client your FTP password. Comment for this answer: @Garrith: Then the variable does not exist in the context of the page. It must either be (1) previously declared in the page, or (2) declared as a field/property in the page's codebehind file. If you are referencing a static field/property on another class, you will have to qualify it with the class name. Pasting the code where you declare this variable would help. Comment for this answer: @Garrith: If that is a method definition, then the `PhotoPath` variable is in a different scope entirely from the page.
You will have to promote that variable to a class-level variable (a field) if you want this to work. Note that the field will need to be public or protected for it to be accessible from the page markup file. Comment for this answer: @Garrith: Well, promoting the variable to a class-level field should require no other code changes at all, since the names would be identical. However, this will have to be a design decision that you make based on the structure of your page. Comment for this answer: @Garrith: You can only place a bounty on a question if it's been open for a day or so with no accepted answer. Don't worry about it -- I'm here primarily to help, not garner rep points. Here is another answer: ```alt="" src="<%=imagePath%>" style="height: 299px; width: 317px" ```
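The scoping point in the accepted answer's comments is language-agnostic. A minimal Python sketch (standing in for the C# code-behind; the class and attribute names are hypothetical) shows why a variable declared inside a method is invisible elsewhere, while a class-level field is not:

```python
# Sketch: a local variable exists only inside the method that declares it;
# promoting it to an instance field makes it visible to other code (here, a
# stand-in for the page markup reading the code-behind).

class PageWithLocal:
    def on_selected(self):
        photo_path = "Chrysanthemum.jpg"  # local: gone when the method returns

class PageWithField:
    def __init__(self):
        self.photo_path = ""              # "field": lives on the instance

    def on_selected(self):
        self.photo_path = "Chrysanthemum.jpg"

page = PageWithLocal()
page.on_selected()
print(hasattr(page, "photo_path"))        # False: the local never became a field

page2 = PageWithField()
page2.on_selected()
print(page2.photo_path)                   # the markup-equivalent can read it
```

In the C# case the equivalent move is declaring `protected string PhotoPath;` on the page class and assigning it inside `GridView1_SelectedIndexChanged`, exactly as the answerer suggests.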
Title: kivyMD Calling a variable into kivy language Tags: python;python-3.x;kivy;kivy-language Question: I'm having a problem: the thing I want to do is to call a variable I have into the kv language, but it always gives an error. Here is my code: KV: ```<Main> product: product Screen: BoxLayout: size_hint: .8, .8 pos_hint: {"center_x": .5, "y": .7} spacing: dp(100) orientation: "vertical" MDTextFieldRound: id: product hint_text: 'Enter a product' icon_left: 'magnify' on_text_validate: app.System() FloatLayout: MDCard: orientation: "vertical" size_hint: .43, .3 height: self.minimum_height pos_hint: {"x": .05, "y": .35} BoxLayout: id: box size_hint_y: None height: dp(150) MDLabel: text: 'self.data_ebay' # I want to call it here MDCard: orientation: "vertical" size_hint: .43, .3 height: self.minimum_height pos_hint: {"x": .52, "y": .35} BoxLayout: id: box size_hint_y: None height: dp(150) MDCard: orientation: "vertical" size_hint: .43, .3 height: self.minimum_height pos_hint: {"x": .52, "y": .03} BoxLayout: id: box size_hint_y: None height: dp(150) MDCard: orientation: "vertical" size_hint: .43, .3 height: self.minimum_height pos_hint: {"x": .05, "y": .03} BoxLayout: id: box size_hint_y: None height: dp(150) ``` PY: ```import kivy from kivy.properties import ObjectProperty from selenium import webdriver import requests from selectorlib import Extractor from selenium.webdriver.common.keys import Keys from kivymd.app import MDApp from kivy.app import App from kivy.lang import Builder from kivy.core.window import Window from kivy.uix.floatlayout import FloatLayout class Main(MDApp): Window.size = (310, 520) title = "Best Price" def build(self): return Builder.load_string(KV) product = ObjectProperty(None) def System(self): self.options = webdriver.ChromeOptions() self.options.add_argument('headless') self.options.add_argument('window-size=1920x1080') self.options.add_argument("disable-gpu") self.browser =
webdriver.Chrome('C://Users//Yesnia//Documents//chromedriver_win32//chromedriver.exe', options=self.options) self.browser.get('https://www.ebay.com/') self.Esearch = self.browser.find_element_by_name('_nkw') self.Esearch.send_keys(self.root.ids.product.text) self.Esearch.send_keys(Keys.ENTER) self.url = self.browser.current_url self.Ebay = Extractor.from_yaml_file('ebay.txt') self.r_ebay = requests.get(self.url) self.data_ebay = self.Ebay.extract(self.r_ebay.text) print('Ebay: ', self.data_ebay) # This is the variable if __name__ == "__main__": Main().run() ``` Here is the accepted answer: I found a solution. The problem is that the variable is created inside a method of the class, and when you reference app.data_ebay, app refers to the app class. You can add this in the class: ```class Main(MDApp): Window.size = (310, 520) title = "Best Price" def build(self): return Builder.load_string(KV) product = ObjectProperty(None) data_ebay = StringProperty() #You can add this ``` but data_ebay has to be a str Here is another answer: You've asked this question 3 times now ("How to call a variable in .kv kivymd", "AttributeError: 'NoneType'" on the selenium one). In this particular instance I think both me and lothric provided very valuable methods of using variables in kv. Please refer to the answers. Please also note that when lothric mentions 'app.variable' he doesn't mean for you to literally place the variable in quotes, as that just turns it into a string, as you have done in your code. In this instance: ``` MDLabel: ### text: 'self.data_ebay' <- you've turned this variable into a string which is not valid text: app.data_ebay ``` As lothric and I both mentioned, your method of calling variables is incorrect, as the MDLabel object does not have any data_ebay attribute; however, your APP CLASS does. This is why you want to reference it using app.data_ebay. Instead of starting separate questions for the same issue, please communicate more on the single question. Thanks.
Comment for this answer: I tried it with text: app.data_ebay but it's still giving me the same error: Comment for this answer: This is the error: AttributeError: 'Main' object has no attribute 'data_ebay' File "E:\pythonf2\lib\site-packages\kivy\lang\builder.py", line 249, in create_handler return eval(value, idmap), bound_list File "", line 31, in File "E:\pythonf2\lib\site-packages\kivy\lang\parser.py", line 76, in __getattribute__ return getattr(object.__getattribute__(self, '_obj'), name) Comment for this answer: That's because the variable doesn't exist until your System function runs. Since the variable doesn't exist when the kv language loads up, your label can't call the variable. Assign the variable and give it some empty text in your App __init__ function.
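The last comment's point can be shown without Kivy at all. A minimal sketch (plain Python, hypothetical names) of why an attribute created only inside a method does not exist when something else looks it up first:

```python
# Sketch: an attribute assigned only inside a method does not exist until that
# method runs, so anything that reads it earlier (like the kv language at load
# time) fails; declaring it at class level fixes that.

class Broken:
    def system(self):
        self.data_ebay = "scraped text"  # created only when system() runs

class Fixed:
    data_ebay = ""                       # exists from the moment the class does

    def system(self):
        self.data_ebay = "scraped text"

try:
    Broken().data_ebay                   # read before system() runs
except AttributeError as exc:
    print("Broken:", exc)

app = Fixed()
print("Fixed before system():", repr(app.data_ebay))
app.system()
print("Fixed after system():", app.data_ebay)
```

In the Kivy case, `data_ebay = StringProperty()` at class level plays the role of the `Fixed` class attribute, with the bonus that the label's text updates automatically when the property changes.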
Title: BeautifulSoup 4 Install Error by typing pip install --upgrade beautifulsoup4 Tags: python-3.x;beautifulsoup;failed-installation Question: I tried to install BeautifulSoup 4. It worked after I typed the following into my Mac terminal ```$ easy_install beautifulsoup4 $ pip install beautifulsoup4 ``` But when I imported it in my Python, ```from bs4 import BeautifulSoup ``` the screen always shows the error: ```ImportError: cannot import name 'HTMLParseError' ``` Then I googled the error and found that typing the following code in the terminal could solve the problem ```pip install --upgrade beautifulsoup4 ``` But after I typed it, an exception was shown:
```Traceback (most recent call last):
  File "//anaconda/lib/python3.5/site-packages/pip/basecommand.py", line 215, in main
    status = self.run(options, args)
  File "//anaconda/lib/python3.5/site-packages/pip/commands/install.py", line 317, in run
    prefix=options.prefix_path,
  File "//anaconda/lib/python3.5/site-packages/pip/req/req_set.py", line 742, in install
    **kwargs
  File "//anaconda/lib/python3.5/site-packages/pip/req/req_install.py", line 831, in install
    self.move_wheel_files(self.source_dir, root=root, prefix=prefix)
  File "//anaconda/lib/python3.5/site-packages/pip/req/req_install.py", line 1032, in move_wheel_files
    isolated=self.isolated,
  File "//anaconda/lib/python3.5/site-packages/pip/wheel.py", line 346, in move_wheel_files
    clobber(source, lib_dir, True)
  File "//anaconda/lib/python3.5/site-packages/pip/wheel.py", line 324, in clobber
    shutil.copyfile(srcfile, destfile)
  File "//anaconda/lib/python3.5/shutil.py", line 115, in copyfile
    with open(dst, 'wb') as fdst:
PermissionError: [Errno 13] Permission denied: '//anaconda/lib/python3.5/site-packages/bs4/__init__.py'
```
I have no idea how to fix it. Thanks for any help Comment: What's the output of `ls -l //anaconda/lib/python3.5/site-packages/bs4/`? Here is the accepted answer: Easiest way to solve this is using sudo.
```sudo pip install --upgrade beautifulsoup4 ``` However, sudo is not always recommended: you should not install arbitrary code as root. My recommendation would be to create a virtualenv and install the packages to play with, before using sudo to install them as root. You can use virtualenvwrapper: ```sudo pip install virtualenvwrapper
mkvirtualenv
workon
python setup.py install ```
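As a variant of the same idea without virtualenvwrapper, here is a sketch using the stdlib `venv` module (Python 3.3+; the environment name `bs4env` is arbitrary, not from the answer):

```shell
# Sketch: avoid sudo by installing into a per-project virtual environment.
python3 -m venv bs4env
# Use the environment's own pip so nothing touches the system site-packages:
#   ./bs4env/bin/pip install --upgrade beautifulsoup4
#   ./bs4env/bin/python -c "from bs4 import BeautifulSoup"
./bs4env/bin/pip --version
```

Because the environment owns its own `site-packages`, the `PermissionError` from writing into the Anaconda installation never arises.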
Title: Building GhostScript 9.04 Win32 Tags: ghostscript Question: I want to build GhostScript 9.04 for Win32 and I have read the documentation to do so, which details creating your own makefile project. I was just curious about the "ghostscript.vcproj" I'm finding in the top-level directory. If I convert this to VS2010, I seem to get a good build out of it. Is there any reason not to use this "ghostscript.vcproj"? The build command line seems to have some extra stuff in it beyond what is detailed in the documentation, so I was worried that it might be making some kind of specialized build. See below ```nmake -f psi\msvc32.mak SBR=1 DEVSTUDIO= && nmake -f psi\msvc32.mak DEVSTUDIO= bsc ``` Comment: [Thanks and other fluff do not belong in posts](https://meta.stackoverflow.com/q/260776/6296561). Refrain from further rollbacks Here is the accepted answer: You can use the solutions supplied, they are fine and they're what we use. If you would rather use nmake and the makefiles then that's fine too; the solutions simply use the makefiles, so it's sort of the same, just more convenient in some ways if you are using Visual Studio. The 'extra stuff' is in there to support the Visual Studio source browser, basically to improve the experience when using Visual Studio; it's not essential. I'll see about updating the documentation in make.htm. Here is another answer: Sorry to bump a very old topic, but when attempting to compile GhostScript v.9.14.1 with Visual Studio 2015, I get these errors: ```Error U1034 syntax error : separator missing lib.mak (line 51) Error MSB3073 The command "nmake -f psi\msvc32.mak SBR=1 DEVSTUDIO= debug && nmake -f psi\msvc32.mak DEVSTUDIO= debugbsc" exited with code 2.
ghostscript C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V140\Microsoft.MakeFile.Targets ``` Here's the code at line 51 in lib.mak: ```GLLCMS2CC=$(CC_SHARED) $(GCFLAGS) $(I_)$(GLI_) $(II)$(LCMS2SRCDIR)$(D)include$(_I) $(GLF_) ``` Is there any way to remedy this? Thank you. PS: Does this project build the DLL? Could we build the DLL ourselves? Comment for this answer: Unfortunately, even the command-line nmake throws the same U1034 syntax error. I'll give v9.2x a try. Any way of getting older versions of VS, perhaps? From what I've read, it could be a make/nmake incompatibility. BTW, I simply launched the .VCPROJ file in the extracted folder and VS2015 did a one-way upgrade; the only thing is this nmake error. Comment for this answer: Once again, thank you, KenS. Amidst a slew of warnings, it compiled and the DLL works. Much appreciated. The reason I was trying the older versions is because they were under the more liberal open-source license, and I believe that the later versions fall under stricter distribution regulations. I'd like to distribute the DLL with my closed-source commercial applications. Comment for this answer: You're right; I did a little reading and distribution with closed source infringes the GPL licensing terms. I'm a kitchen-table developer and revenues are small, so any sort of licensing fees would not be very affordable. Since I don't actually "distribute" my software through any mass medium, but physically install it in the customers' premises, I suppose that they could download a copy for their own use. Would simply interfacing with an installed copy be considered an infringement, you think? Comment for this answer: You're right; a lot of grey areas. A friend suggested open-sourcing only the module that requires GhostScript, which converts PostScript files to PDFs. The main application could then remain proprietary, as consuming the generated PDF files does not infringe the terms of the license. I think that's a good trade-off.
Comment for this answer: Update: Wrote in to Artifex, and they confirmed that such a distribution configuration was legally acceptable. So long as any standalone module that utilised GhostScript functionality were distributed under the AGPL (open-source on demand) license, GhostScript (full, part, modified) could be distributed alongside it without issue. Comment for this answer: Well, it *ought* to work; however, 9.14 is kind of old, the current code is 9.20 and 9.21 will be released in a few weeks. I don't currently have VS 2015 installed, I'm using Team edition 2008 at the moment. You do have all the third-party source there? Including the LCMS2 directory? You could try using nmake directly on psi/msvc32.mak and see what happens Comment for this answer: Well, I have used VS2015 Community Edition in the past to build GS (after upgrading the project file) and it worked OK..... I'll see if anyone has a copy installed and can try it. Comment for this answer: Checking out what U1034 means, it does look like it's a bug in nmake: it's complaining that there is no ':' between target and dependencies, but this is not a target line, it's an assignment Comment for this answer: According to one of my colleagues the **current** version of Ghostscript imports and builds without problems on VS 2015. We aren't in a position to spend time testing an obsolete version though. Things **have** changed since 9.14 and there were some issues (regarding VS 2015) which were fixed after that release. So if you upgrade it should be fine. Comment for this answer: I don't believe there is any difference between GPL v2 and AGPL v3 (the AFPL licence was a **long** time ago) as regards distribution of the DLL. I'm also reasonably certain that 9.14 was distributed under the same AGPL v3 licence (in fact the change appears to have been around 9.07).
If you do go down the road of distributing the DLL with a closed-source application you are (I believe) skating on very thin ice. I am not a lawyer, but if you are going to use Ghostscript in a commercial application, you should definitely seek a commercial licence in my opinion. Comment for this answer: I'm afraid that's the point where I'd have to bow out. I **believe** (but if you are worried you should seek competent advice) that the main thrust of the (A)GPL is that users should be able to replace the open source component with a newer (or possibly different) version. It seems to me that, from a legal standpoint, simply using an installed version is compliant, since you can just install another version. A number of organisations (some surprisingly large) use this technique to avoid paying for the software they use.
Title: How do I import a csv file into Google Colab that uses R kernel? Tags: r;google-colaboratory Question: I have a csv on my computer that I can upload to Google Drive. I am trying to use Google Colab, but in R and not Python. How can I import this csv? Comment: You can find the answer here: https://stackoverflow.com/questions/54595285/how-to-use-r-with-google-colaboratory Comment: Does this answer your question? [How to use R with Google Colaboratory?](https://stackoverflow.com/questions/54595285/how-to-use-r-with-google-colaboratory) Comment: Not really, but the Python and R trick worked Here is the accepted answer: https://stackoverflow.com/a/57927212/5333248 Here is a workaround for you. Lateral arrow on the top-left of the screen >> files >> upload. In this way you can upload the .csv file from your PC. There's even a Mount Drive option in the same path, but as I understand it, it's only for Python. The file lasts only for the current session. You'll need to re-upload it every time you reopen the notebook on Google Colab!
Title: Rspec Cucumber avoid repeated tests Tags: ruby-on-rails;ruby;ruby-on-rails-3;rspec;cucumber Question: Very often I see people writing tests (specs) in both Cucumber and RSpec. By default RSpec generates all kinds of specs; actually I'm just disabling these ones:
```config.generators do |g|
  g.test_framework :rspec,
    :view_specs => false,
    :request_specs => false,
    :routing_specs => false
end
```
I'm doing this because I want to test views with Cucumber, am I right? Maybe there's some other spec I should disable to avoid repeated tests between the two frameworks? Thanks! Here is another answer: You see people writing specs in both Cucumber and RSpec because they are two frameworks that do the same thing (Behavior Driven Development). Cucumber simply provides a business-readable specification document that gets translated into code, whereas RSpec has the business specification mixed into the code within describe, it and context blocks. Generally speaking, if you are using both, you are probably just going to use RSpec for the UNIT testing capabilities (Model, Controller, View and Routing Specs, although there is a strong argument that you only need to do model specs). Then you'll do your Acceptance Tests in either Cucumber or through Request Specs in RSpec; there is no reason to use both. My general rule of thumb is to use RSpec if only technical folks are using the specs, because I find it easier to use, and Cucumber if your product owner is hands-on enough to want to see the specification. Request Specs and Cucumber Features are both a form of Integration Tests, which means you are going to be using all aspects of your application, not just one part in isolation. It is generally accepted that you can push off responsibility for testing the controller, routes and views to integration tests, so you are correct that you can disable those in RSpec if you agree that you don't need them.
Myself, I still like to write the odd controller test, usually to make sure that if I post with extra parameters the application won't let a user make themselves an admin or something similar. Comment for this answer: +1 - Really good answer. However, I would say that even if only technical folks are using the specs, I find using Cucumber pretty useful as documentation of what the application is supposed to do. Comment for this answer: Very true, although I find the output of RSpec in documentation format is usually as good as a Cucumber feature file.