Columns: _id (string), partition (string, 3 classes), text (string), language (string, 1 class), title (string, 1 class)
d11501
train
Have you tried converting std::string to System::String^ as stated in the MSDN docs?
unknown
d11502
train
You need to have the NameNode and DataNode services running. On the master node: sudo service hadoop-hdfs-namenode start; on the datanodes: sudo service hadoop-hdfs-datanode start
unknown
d11503
train
How to Dynamically Change the Type of an Input Element The type of an input element can be changed dynamically simply with AngularJS interpolation. <input type="{{dynamicType}}" ng-model="inputValue"> Or from a directive angular.module("myApp").directive("typeVar", function() { return { link: linkFn }; function linkFn(scope,elem,attrs) { //elem.attr("type", "number"); attrs.$observe("typeVar", function(value) { elem.attr("type", value); //attrs.$set("type", value); }); } }); HTML <input type-var="{{dynamicType}}" ng-model="inputValue"> The DEMO on JSFiddle The type property is an intrinsic part of the input element. For more information, see MDN HTML Element Reference -- input.
unknown
d11504
train
If you want to use types in React that are richer than what PropTypes can provide you, you have two options: * *Use TypeScript. This does not come out of the box, so you have to change your build. There are a bunch of bootstrapper projects that can make your life easier. *Use Flow. The standard bootstrappers such as CRA come with support for it out of the box. Which one to use is a matter of preference at the end. Both have their advantages. Here you have a pretty good comparison of what both can do.
unknown
d11505
train
If this is a Spring Boot application, Ctrl-C will close the application context and the DefaultKafkaProducerFactory will close the producer in its destroy() method (called by the application context during close()). You will only lose records if you kill -9 the application.
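As a minimal illustrative sketch (not from the original answer), the same idea with the plain Kafka producer API rather than Spring's factory; the broker address and topic name are assumed. Closing the producer on shutdown flushes buffered records, which is what the Spring application context does for you on Ctrl-C.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class GracefulProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        // Mirror what the Spring application context does on Ctrl-C:
        // close() flushes any buffered records before the JVM exits,
        // so a normal shutdown loses nothing; only kill -9 skips this.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> producer.close()));

        producer.send(new ProducerRecord<>("demo-topic", "key", "value")); // hypothetical topic
    }
}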
unknown
d11506
train
You are getting this exception because you need to call find() on the matcher before accessing groups: Matcher m = p.matcher(theString); while (m.find()) { String substring =m.group(); System.out.println(substring); } Demo. A: There are two things wrong here: * *The pattern you're using is not the most ideal for your scenario, it's only checking if a string only contains numbers. Also, since it doesn't contain a group expression, a call to group() is equivalent to calling group(0), which returns the entire string. *You need to be certain that the matcher has a match before you go calling a group. Let's start with the regex. Here's what it looks like now. Debuggex Demo That will only ever match a string that contains all numbers in it. What you care about is specifically the number in that string, so you want an expression that: * *Doesn't care about what's in front of it *Doesn't care about what's after it *Only matches on one occurrence of numbers, and captures it in a group To that, you'd use this expression: .*?(\\d+).* Debuggex Demo The last part is to ensure that the matcher can find a match, and that it gets the correct group. That's accomplished by this: if (m.matches()) { String substring = m.group(1); System.out.println(substring); } All together now: Pattern p = Pattern.compile(".*?(\\d+).*"); final String theString = "Incident #492 - The Title Description"; Matcher m = p.matcher(theString); if (m.matches()) { String substring = m.group(1); System.out.println(substring); } A: You need to invoke one of the Matcher methods, like find, matches or lookingAt to actually run the match.
unknown
d11507
train
That does sound very strange. I have to admit that I haven't seen anything like this myself with a modal window. You don't mention where you're trapping the KeyDown key, so it's a bit hard to comment on that. What I have seen sometimes, especially when doing something a little "different", is the error message not telling you the actual cause of the problem. I would try wrapping the code with a dispatcher call, to make sure the call is being performed on the correct thread, as well as a try/catch to see if you can find the real cause of the error: Private Sub YourClickHandler Try Me.Details.Dispatcher.BeginInvoke( Sub() OpenModalWindow("HelpWindow") End Sub) Catch ex As Exception Me.ShowMessageBox(ex.Message) End Try End Sub I hope that helps, or at least points you in the direction of a solution.
unknown
d11508
train
You can create a role with permissions using this: guild.roles.create({ name: 'Super Cool Blue People', color: 'BLUE', reason: 'we needed a role for Super Cool People', permissions: ['ADMINISTRATOR', 'KICK_MEMBERS'], }) I have added two examples above ADMINISTRATOR and KICK_MEMBERS. Flag List: https://discord.js.org/#/docs/main/stable/class/Permissions?scrollTo=s-FLAGS Please note that you're recommended to use Permissions.FLAGS.<FLAG-NAME> instead of hard-coding the permission name. A: A guild's RoleManager#create takes a parameter of type CreateRoleOptions, which has a property permissions of type PermissionResolvable. Therefore, you can create a role with specific permissions by calling: guild.roles.create({ permissions: "ADMINISTRATOR" });
unknown
d11509
train
Not directly; eventually Java may throw an IOException from the InputStream or OutputStream, but this is platform dependent. There was talk of a "raw sockets" Java implementation prior to the release of JDK 7, but somehow I don't think it ever got off the ground. A: Can my Java application be triggered on receipt of the FIN,ACK sent from Apache? Yes, but you would have to be reading from the connection. You will get end of stream in one of its various forms, depending on which read method you call. See the Javadoc.
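As an illustrative sketch only (not from the original answer; the host and port are placeholders), detecting the remote close by reading until end of stream might look like this:

import java.io.InputStream;
import java.net.Socket;

public class FinDetector {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("example.com", 80)) { // placeholder endpoint
            InputStream in = socket.getInputStream();
            // read() returns -1 once the peer has closed its side of the
            // connection, i.e. after the FIN has been received and all
            // buffered data has been consumed.
            int b;
            while ((b = in.read()) != -1) {
                // process incoming bytes here
            }
            System.out.println("Peer closed the connection (end of stream).");
        }
    }
}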
unknown
d11510
train
If this problem appears, you have to : 1 - Go to the Atom menu. 2 - Select "Install Shell Commands". 3 - Restart the terminal It's magic it works :D A: Here are some tools to figure this out. Check current configuration: git config --list Check Status: git status See which configuration below works for the atom text editor: git config --global core.editor "atom" git config --global core.editor "atom --wait" git config --global core.editor "atom -w -s" Be sure to leave a message in the file that opens after running "git commit" in terminal. Save and completely exit the editor. A: Walking down the error you included: hint: Waiting for your editor to close the file... tells you that git has attempted to open your specified editor to write a commit message. This much is normal. The next part: atom --wait: atom: command not found tells you that git tried to execute $ atom --wait, but couldn't find the atom command. This indicates that the atom command was either never installed, or is not on your path. (For reference, the executable to run Atom on my Mac is located at /usr/local/bin/atom) The solution depends on your operating system. Solution for Mac This exactly matches the problem described by the Installing Atom on Mac official documentation: When you first open Atom, it will try to install the atom and apm commands for use in the terminal. In some cases, Atom might not be able to install these commands because it needs an administrator password. Therefore... To install the atom and apm commands, run "Window: Install Shell Commands" from the Command Palette, which will prompt you for an administrator password. Alternatively, the steps given in Fizik26's Answer will accomplish the same thing. Note: the "Window: Install Shell Commands" action only seems to be available on Mac, not Windows or Linux. A: In my case for Windows 10, I only uninstalled Git (v2.32) and kept Atom. I re-installed Git, and I chose Atom to be my default editor from the drop-down menu that appears in the installation wizard. If you kept pressing the NEXT button during the installation, you will end up with VIM as your default editor, and we sure don't want that. A: This is because there is no atom command at least in the PATH. To enable opening Atom from the command-line, you will need to install shell commands from the either the Atom menu or the Atom Command Palette. Next ensure that atom is in your path.
unknown
d11511
train
If you want to get the user's message and then repost it as a code block, you can use Formatters to do that. All you have to do is get the user's message and then repost it by doing this: const { Formatters } = require('discord.js') // ... client.on('message', function (message) { if (message.content.startsWith("||tb ")) { message.channel.send(Formatters.codeBlock(message.content)) } }); // ... You can also additionally provide which language the code would be in also to the Formatter. If you want to learn more about the .codeBlock(), you can go here => codeBlock | Formatters Easier Solution An easier solution to format the message content would be to use triple backticks like this ```. Then, additionally if you want to specify the programming language as well you can just add it after the triple backticks. An example: client.on('message', function (message) { if (message.content.startsWith("||tb ")) { message.channel.send('```' + message.content + '```') } });
unknown
d11512
train
This is a bug. https://developers.facebook.com/bugs/298184123723116/ We have managed to reproduce this issue and it appears to be a valid bug. We are assigning this to the appropriate team.
unknown
d11513
train
Here are a few things you can try. * *Try adding error_reporting(E_ALL); at the top of the script. *Check your web server's configuration (htaccess, virtualhost etc). *(more likely cause) Since you are using the mail() function, it could be causing the error. Check your server's mail configuration. More info: PHP's mail() function causes a 500 Internal Server Error only after a certain point in the code *Compare your server's configuration with your localhost's configuration.
unknown
d11514
train
Ok finally I'm able to keep working with simulator. I've just redownloaded Xcode and reinstalled it (moved to 8.2 actually) - as long as it solved the issue I'm fine with this solution but I'm still curious what went wrong and how should I've fixed it in a good way. After spending more time I can add that copying the mentioned pods' sources into the project directly and removing them from pod file so they compile directly from sources in case there are any issues with the compiled frameworks still brought the same error - NSNumberFormatter.h was invisible though import is there - this again work on device but fails to compile for simulator. This is where simulator became the main suspect but installing new one (by downloading a new simulator runtime) didn't solve the issue.
unknown
d11515
train
Which version of Python are you using? It seems that module (in sendsms/util.py) imports the importlib library, which was only added in Python 2.7. If you are using Python 2.6 or lower, that library does not exist.
unknown
d11516
train
The solution is to no-cache the image or append a cache-busting query string like this: BANNER.jpg?V='.time(); BUT when using a background image, it does not find the image. If anyone has a solution for that, please post it.
unknown
d11517
train
First create a CTE that unpivots the table so that each code is on a separate row: with cte(Name, IncidentId, CodeName, Code) as( select Name, IncidentId, CodeName, Code from Incident i unpivot(Code for CodeName in (Code1, Code2, Code3, Code4)) unpvt ) Now you do an outer join on the CTE to itself, filtering out the excluded codes. This gives you one row for each Name-Incident-Code tuple, but you have null values in the rows where the code was excluded (you need the null rows to maintain the proper count of codes). Select *, t1.Name, t1.IncidentId, isnull(t2.Code, '') Code, ROW_NUMBER() over(partition by t1.Name, t1.IncidentId order by isnull(t2.CodeName, 'zzz')) CodeNumber from cte t1 left outer join cte t2 on t1.Name = t2.Name and t1.IncidentId = t2.IncidentId and t1.Code = t2.Code and not exists(select 1 from Exclude e where e.Code = t2.Code) The ROW_NUMBER() here will create the new CodeNumber. The order byisnull(t2.CodeNumber, 'zzz')) pushes the null rows to the end so that the rows that have valid codes get numbered first (because "zzz" is greater than "Code-whatever-"). Now you just need to pivot the previous query back so that the codes become columns again: select Name, IncidentId, [1] Code1, [2] Code2, [3] as Code3, [4] as Code4 from ( Select t1.Name, t1.IncidentId, isnull(t2.Code, '') Code, ROW_NUMBER() over(partition by t1.Name, t1.IncidentId order by isnull(t2.CodeName, 'zzz')) CodeNumber from cte t1 left outer join cte t2 on t1.Name = t2.Name and t1.IncidentId = t2.IncidentId and t1.Code = t2.Code and not exists(select 1 from Exclude e where e.Code = t2.Code) ) x pivot(max(Code) for CodeNumber in ([1], [2], [3], [4]) ) as pvt SQL Fiddle Update There's a couple problems with the code above. First, when I create the CodeNumber with ROW_NUMBER(), I am sorting by CodeName. This breaks down after 9 code columns because they no longer sort correctly (they get sorted alphabetically instead of numerically). So I need to pull the code number out in the CTE so I can use it to sort by later: with cte(Name, IncidentId, CodeName, CodeNumber, Code) as( select Name, IncidentId, CodeName, convert(int, SUBSTRING(CodeName, 5, len(CodeName))), Code from Incident i unpivot(Code for CodeName in (Code1, Code2, Code3, Code4, Code5, Code6, Code7, Code8, Code9, Code10)) unpvt ) Now the rest of the query looks like this: select Name, IncidentId, [1] Code1, [2] Code2, [3] as Code3, [4] as Code4, [5] as Code5, [6] as Code6, [7] as Code7, [8] as Code8, [9] as Code9, [10] as Code10 from ( Select t1.Name, t1.IncidentId, isnull(t2.Code, '') Code, ROW_NUMBER() over(partition by t1.Name, t1.IncidentId order by isnull(t2.CodeNumber, 999)) NewCodeNumber from cte t1 left outer join cte t2 on t1.Name = t2.Name and t1.IncidentId = t2.IncidentId and t1.Code = t2.Code and not exists(select 1 from Exclude e where e.Code = t2.Code) ) x pivot(max(Code) for NewCodeNumber in ([1], [2], [3], [4], [5], [6], [7], [8], [9], [10]) ) as pvt Note that since I now have a column called CodeNumber in the CTE, I am calling the newly generated number "NewCodeNumber". Also, I am ordering by t2.CodeNumber instead of t1.Code. Updated SQL Fiddle. Update Regarding the question in your comment, you're essentially asking about unpivoting multiple columns, which is not as straightforward as unpivoting a single column. 
One way to accomplish it is to unpivot the code and the codedate separately: with cteCode(Name, IncidentId, CodeName, CodeNumber, Code) as( select Name, IncidentId, CodeName, convert(int, SUBSTRING(CodeName, 5, len(CodeName))), Code from Incident i unpivot(Code for CodeName in (Code1, Code2, Code3, Code4, Code5, Code6, Code7, Code8, Code9, Code10)) unpvt ), cteCodeDate(Name, IncidentId, CodeName, CodeNumber, CodeDate) as( select Name, IncidentId, CodeName, convert(int, SUBSTRING(CodeName, 9, len(CodeName))), CodeDate from Incident i unpivot(CodeDate for CodeName in (CodeDate1, CodeDate2, CodeDate3, CodeDate4, CodeDate5, CodeDate6, CodeDate7, CodeDate8, CodeDate9, CodeDate10)) unpvt ) and then join them back together: Select t1.Name, t1.IncidentId, isnull(t2.Code, '') Code, ROW_NUMBER() over(partition by t1.Name, t1.IncidentId order by isnull(t2.CodeNumber, 999)) NewCodeNumber, t3.CodeDate from cteCode t1 join cteCodeDate t3 on t3.Name = t1.Name and t3.IncidentId = t1.IncidentId and t3.CodeNumber = t1.CodeNumber left outer join cteCode t2 on t1.Name = t2.Name and t1.IncidentId = t2.IncidentId and t1.Code = t2.Code and not exists(select 1 from Exclude e where e.Code = t2.Code) Pivoting multiple columns isn't as easy as a single column either, so I went a different route to get the final result: select Name, IncidentId, MAX(case when newCodeNumber = 1 then Code end) Code1, MAX(case when newCodeNumber = 1 then CodeDate end) CodeDate1, MAX(case when newCodeNumber = 2 then Code end) Code2, MAX(case when newCodeNumber = 2 then CodeDate end) CodeDate2, MAX(case when newCodeNumber = 3 then Code end) Code3, MAX(case when newCodeNumber = 3 then CodeDate end) CodeDate3, MAX(case when newCodeNumber = 4 then Code end) Code4, MAX(case when newCodeNumber = 4 then CodeDate end) CodeDate4, MAX(case when newCodeNumber = 5 then Code end) Code5, MAX(case when newCodeNumber = 5 then CodeDate end) CodeDate5, MAX(case when newCodeNumber = 6 then Code end) Code6, MAX(case when newCodeNumber = 6 then CodeDate end) CodeDate6, MAX(case when newCodeNumber = 7 then Code end) Code7, MAX(case when newCodeNumber = 7 then CodeDate end) CodeDate7, MAX(case when newCodeNumber = 8 then Code end) Code8, MAX(case when newCodeNumber = 8 then CodeDate end) CodeDate8, MAX(case when newCodeNumber = 9 then Code end) Code9, MAX(case when newCodeNumber = 9 then CodeDate end) CodeDate9, MAX(case when newCodeNumber = 10 then Code end) Code10, MAX(case when newCodeNumber = 10 then CodeDate end) CodeDate10 from ( Select t1.Name, t1.IncidentId, isnull(t2.Code, '') Code, ROW_NUMBER() over(partition by t1.Name, t1.IncidentId order by isnull(t2.CodeNumber, 999)) NewCodeNumber, t3.CodeDate from cteCode t1 join cteCodeDate t3 on t3.Name = t1.Name and t3.IncidentId = t1.IncidentId and t3.CodeNumber = t1.CodeNumber left outer join cteCode t2 on t1.Name = t2.Name and t1.IncidentId = t2.IncidentId and t1.Code = t2.Code and not exists(select 1 from Exclude e where e.Code = t2.Code) ) x group by Name, IncidentId SQL Fiddle A: This is too long for a comment. This is best handled with dynamic SQL. Moving things over from column to column to handle exclusions is cumbersome, at best. 
It would end up being some variant on: if code1 is not excluded then code1 else if code2 is not excluded then code2 else if code3 is not excluded then code3 else code4 is not excluded then code4 as code1 if code1 is not excluded if code2 is not excluded then code2 else if code3 is not excluded then code3 else if code4 is not excluded then code4 and so on, and so on and so on Instead, you probably have a place where you can add something like this to the dynamic SQL: where not exists (select 1 from ExcludedCodes ec where ec.code <> the.code) And you will eliminate them before the pivot.
unknown
d11518
train
Can you try: $('a[rel=tooltip]').tooltip(); $('a[rel=tooltip]').off('.tooltip'); Don't forget to change the selector. Works fine for me... http://jsfiddle.net/D9JTZ/ A: To permanently disable a tooltip: $('[data-toggle="tooltip"]').tooltip("disable"); To stop the tooltip from being displayed on hover but have the ability to re-enable it: $('[data-toggle="tooltip"]').tooltip("destroy"); $('[data-toggle="tooltip"]').tooltip(); // re-enabling A: You can't disable tooltips that way because it has no event listener on the body. Instead, you can disable the tooltips themselves using the code below. $('[rel=tooltip]').tooltip() // Init tooltips $('[rel=tooltip]').tooltip('disable') // Disable tooltips $('[rel=tooltip]').tooltip('enable') // (Re-)enable tooltips $('[rel=tooltip]').tooltip('destroy') // Hide and destroy tooltips Edit: For Bootstrap 4, the 'destroy' command has been replaced by the 'dispose' command, so: $('[rel=tooltip]').tooltip('dispose') // Hide and destroy tooltips in Bootstrap 4 A: I found a way to do it using CSS! Just add .tooltip { visibility: hidden } to your CSS file. If you want to make your link accessibility friendly without the tooltip, then just add aria-label= "Here's a link description." Hope this helps! A: I struggled too with this, but I came up with a solution! In my case I use Jquery Sortable which is ofcourse annoying when you have tooltips flying around! So I made a variable var sort = '0`; And since almost every tooltip has an init() function, I created if(window.sort!='1') { // So I'm not sorting init.tooltip(); } So, this can be the easiest enable/disable function!
unknown
d11519
train
If you know the values at compile time just place them in the two separate arrays. You can't separate them at runtime since when you place them like this : var ar:Array = [234*256,558*698,256*784]; The multiplication will be calculated and stored in the array and you will have this at runtime : var ar:Array = [59904,389484,200704]; On the other hand if the values are coming from an external source as strings "234*256" You can split that from the * symbol and store the two parts in separate arrays. var multiply:String = "234*256"; var parts:Array = multiply.split("*"); ar_x.push(parts[0]); ar_y.push(parts[1]); A: I've got it! Code looks like this: var def_poss:Array = [def_pos[i][1],def_pos[i][2],def_pos[i][3],def_pos[i] [4],def_pos[i][5],def_pos[i][6],def_pos[i][7],def_pos[i][8],def_pos[i][9],def_pos[i][10],def_pos[i][11]]; } for (var m:int=0;m<def_poss.length;m++) { var temp_def:Array = def_poss.toString().split("*"); } for (var n:int=0;n<temp_def.length;n++) { var parts:Array = temp_def.toString().split(","); def_x.push(parts[0]); def_y.push(parts[1]); def_x.push(parts[2]); def_y.push(parts[3]); def_x.push(parts[4]); def_y.push(parts[5]); def_x.push(parts[6]); def_y.push(parts[7]); def_x.push(parts[8]); def_y.push(parts[9]); def_x.push(parts[10]); } First I have to split ("*") to get strange array ["255","255,586".....], then iterate thru that array and split(",") then push parts to separate arrays Thanks for help!
unknown
d11520
train
Subtract 9 months or add 3 months (I'm not sure what you want to call "Jahr"): SELECT YEAR( ordered_date + interval 3 month) AS Jahr, SUM( preis ) AS Preis, count(*) as Anzahl FROM wccrm_orders WHERE typ = 'Brautkleid' GROUP BY Jahr ORDER BY Jahr ASC;
unknown
d11521
train
win32crypt is a part of the Windows Extensions for Python, or pywin32. It is a wrapper around the Windows crypto API. It doesn't make sense to try and install it without pywin32, and if your install of that has failed then that is the problem you have to solve. Please try pip install pypiwin32 again, being sure to execute it in the correct folder, which is the Scripts subfolder of the Python environment you want to install it in. You may have more than one Python installation without realizing it, and if you run pip from outside that folder, you may get a different instance of pip. The standard location for Python installations is C:\Program Files\Python3x. If the pip install doesn't complete as expected, then edit your question to include the messages from the failed install. "Did not work" isn't enough to go on.
unknown
d11522
train
renderModal = () => { return (........your html....); } render () { return this.renderModal(); } You can use this code to get what you are asking. A: render () { return ( <Modal title='' content='' onOk='' onClose=''/> <SomeComponent> </SomeComponent> ) } You should wrap them in a parent element => render () { return ( <> <Modal title='' content='' onOk='' onClose=''/> <SomeComponent> </SomeComponent> </> ) } then you can use renderModal = () => <Modal title='' content='' onOk='' onClose='' /> render () { return ( <> {this.renderModal()} <SomeComponent> </SomeComponent> </> ) }
unknown
d11523
train
In short, don't do this; it won't be a good design. While, as @j.con pointed out, it is possible to add further methods to the generated classes using -xinject-code or a custom XJC plugin, adding a marshalling method is not a good idea. With the JAXB API, it will be pretty ugly. To do anything you'll need an instance of JAXBContext. Either you'll pass it to your method or instantiate it within the method. The latter isn't great, as a JAXBContext is instantiated for a collection of classes or packages (context path). So you'll basically have to preset which classes your class may be used together with. Doing so, you're losing flexibility. Next, JAXB marshallers produce many things, not just strings/stream results but also DOM or SAX or StAX. The JAXB API is quite cool about that. Opting just for strings seems like a shortsighted choice to me. Finally, I don't think adding toXMLString() or whatever is so much sweet syntactic sugar compared to a simple utility service or class. And hacking into code generation for that really feels like misplaced effort.
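As a hedged sketch of the simple utility class the answer prefers (the class and method names here are invented for illustration, and the javax.xml.bind API is assumed):

import java.io.StringWriter;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Marshaller;

// Hypothetical helper; in real code the JAXBContext should be created once and reused.
public final class JaxbUtil {
    private JaxbUtil() {}

    public static String toXmlString(Object jaxbObject) throws JAXBException {
        JAXBContext context = JAXBContext.newInstance(jaxbObject.getClass());
        Marshaller marshaller = context.createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        StringWriter writer = new StringWriter();
        marshaller.marshal(jaxbObject, writer); // works for @XmlRootElement-annotated classes
        return writer.toString();
    }
}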
unknown
d11524
train
So it seems I needed to include the kotlin jdk 8 libraries in my gradle build file: implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk8"
unknown
d11525
train
I can't claim the credit for this, as it was provided by someone on a GL forum, but to close this question out, the answer is that referencing myVec.length() is not enough; you have to actually reference the array, so this works: if (myVec.length() == 2 && myVec[0].y == 0.35) { ... Without actually referencing an array entry in the shader, the compiler presumably optimizes it out, thus returning a length of zero at runtime.
unknown
d11526
train
No, playlist entries with a byte-range length of 0 are NOT valid. A: Byte-range can not be zero in any case. I don't think any streamer will send zero value for this.
unknown
d11527
train
Yes, your code is fine, but in a Blogger theme designed with HTML you need to add b:skin and b:section inside that code. This is the sample code for Blogger based on your code (sorry if I'm wrong, but that's all I know :) ): <!doctype html> <html> <head> <meta charset="utf-8"> <title>Untitled Page</title> <b:skin> <!-- Your CSS Code Here --> </b:skin> </head> <body> <b:section id='Unique-Id'> <!-- Your section content --> </b:section>
unknown
d11528
train
No, you can't. This would break the contract of the superclass, which says: this method accepts a IAppendOnlyData as second argument. Remember that an instance of a subclass is also an instance of its superclass. So anyone could refer to the subclass instance as its superclass, and call the base method, passing a IAppendOnlyData, without knowing that the instance is actually a subclass instance. Read more about the Liskov substitution principle. The only way to do that is to make the superclass generic: public class Updater<T extends IAppendOnlyData> { ... public abstract void processRow(Cluster cluster, T row); } public class UserdataUpdater extends Updater<IUserData> { @Override public void processRow(Cluster cluster, IUserData row) { ... } } A: You cannot modify a method declaration in a derived class. You can only override a superclass method if the derived class method has the exact same method signature. You must use function overloading and make a new method processRow with the new parameter types you mentioned. A: In my experience, you have to use the first declaration, then in the implementation, check to make sure that: row instanceof IUserData of course, this is checked at runtime rather than during compile, but I don't know any other way around it. Of course, you can also just cast the row to the type IUserData, whether blindly or after checking its type (above). A: Short answer: No. You can create such a function, but because the signature is different, the compiler will see it as a different function. If you think about it, what you are trying to do doesn't really make sense. Suppose you wrote a function that takes an Updater as a parameter and calls processRow on it with something that is not IUserData. At compile time, Java has no way to know whether the object passed in an Updater, UserdataUpdater, or some other subclass of Updater. So should it allow the call or not? What should the compiler do? What you can do is inside UserdataUpdater.processRow, include code that checks the type passed in at runtime and throws an exception or does some other sort of error processing if it is not valid. A: Assuming you have no control on the Updater class, you can't do that ... you'll have to implement that method with the exact same signature. However you can check for the type of row, inside your implementation and decide whatever processing is appropriate: public void processRow (Cluster cluster, IAppendOnlyData row) { if( row instanceof IUserData ) { // your processing here } else { // Otherwise do whatever is appropriate. } }
unknown
d11529
train
In DB2, ROWID serves more of an internal function to the RDMS than what is allowed by end users. This is intentional. See link: http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp?topic=/com.ibm.db2.doc.sqlref/xf7c63.htm However, if you do not need the ROWID properties (use the data for read-only purposes) then it may be possible to mimic unloading / loading of this table. You can use the EXPORT / IMPORT commands to do the unloading / loading functions, which should support ROWID, but if it does not, then you can achieve the same functionality by converting the unsupported datatype ROWID into a supported datatype. The only thing is, that once you do this, you will not be able to convert the data back into this datatype. In other words, all the properties of ROWID will now be a regular INTEGER field. select INTEGER(ROWID) as int-rowid , col2 , coln from table order by 1 Then you can execute the EXPORT / IMPORT command to unload / load the data. Warning: Once you get rid of the ROWID properties, you cannot gain this back. In other words, INSERTS to this table will NOT automatically increment the ROWID field.
unknown
d11530
train
Have you tried using webkit? I found a similar question; try this code from that question, maybe: ::-webkit-calendar-picker-indicator { filter: invert(1); }
unknown
d11531
train
The F# code: let a = 1 let a = 2 let a = 3 a + 1 is just a condensed (aka "light") version of this: let a = 1 in let a = 2 in let a = 3 in a + 1 The (sort of) C# equivalent would be something like this: var a = 1; { var a = 2; { var a = 3; return a + 1; } } In the context of having nested scopes, C# doesn't allow shadowing of names, but F# and almost all other languages do. In fact, according to the font of all knowledge C# is unusual in being one of the few languages that explicitly disallow shadowing in this situation. This might be because C# is a relatively new language. OTOH F# copies much of its design from OCaml, which in turn is based on older languages, so in some sense, the design of F# is "older" than C#. A: Tomas Petricek already explains that this isn't mutation, but shadowing. One follow-up question is: what is it good for? It isn't a feature I use every day, but sometimes I find it useful, particularly when doing property-based testing. Here's an example I recently did as part of doing the Tennis Kata with FsCheck: [<Property>] let ``Given player has thirty points, when player wins, then the new score is correct`` (points : PointsData) (player : Player) = let points = points |> pointTo player Thirty let actual = points |> scorePoints player let expected = Forty { Player = player OtherPlayerPoint = (points |> pointFor (other player)) } expected =? actual Here, I'm shadowing points with a new value. The reason is that I want to explicitly test the case where a player already has Thirty points and wins again, no matter how many points the other player has. PointsData is defined like this: type Point = Love | Fifteen | Thirty type PointsData = { PlayerOnePoint : Point; PlayerTwoPoint : Point } but FsCheck is going to give me all sorts of values of PointsData, not only values where one of the players have Thirty. This means that the points arriving as the function argument don't really represent the test case I'm interested in. To prevent accidental usage, I shadow the value in the test, while still using the input as a seed upon which I can build the actual test case value. Shadowing can often be useful in cases like that. A: I think it is important to explain that there is a different thing going on in F# than in C#. Variable shadowing is not replacing a symbol - it is simply defining a new symbol that happens to have the same name as an existing symbol, which makes it impossible to access the old one. When I explain this to people, I usually use an example like this - let's say we have a piece of code that does some calculation using mutation in C#: var message = "Hello"; message = message + " world"; message = message + "!"; This is nice because we can gradually build the message. Now, how can we do this without mutation? The trick is to define new variable at each step: let message1 = "Hello"; let message2 = message1 + " world"; let message3 = message2 + "!"; This works - but we do not really need the temporary states that we defined during the construction process. So, in F# you can use variable shadowing to hide the states you no longer care about: let message = "Hello"; let message = message + " world"; let message = message + "!"; Now, this means exactly the same thing - and you can nicely show this to people using Visual F# Power Tools, which highlight all occurrences of a symbol - so you'll see that the symbols are different (even though they have the same name).
unknown
d11532
train
Use quotes where they are needed, like this #!/bin/bash echo "" read -p "-Write file you need- " FILE read -p "-Write the folder to search- " FOLDER FILE="*$FILE*" set -x find "$FOLDER" -name "$FILE"
unknown
d11533
train
I would just draw all of your items to your own buffer, then copy it all in at once. I've used this for graphics in many applications, and it has always worked very well for me: public Form1() { InitializeComponent(); } private void timer1_Tick(object sender, EventArgs e) { Invalidate();// every 100 ms } private void Form1_Load(object sender, EventArgs e) { DoubleBuffered = true; } private void Form1_Paint(object sender, PaintEventArgs e) { Bitmap buffer = new Bitmap(Width, Height); Graphics g = Graphics.FromImage(buffer); Pen pen = new Pen(Color.Blue, 1.0f); //Random rnd = new Random(); for (int i = 0; i < Height; i++) g.DrawLine(pen, 0, i, Width, i); BackgroundImage = buffer; } EDIT: After further investigation, it looks like your problem is what you're setting your Graphics object to: Graphics g = CreateGraphics(); needs to be: Graphics g = e.Graphics(); So your problem can be solved by either creating a manual buffer like I did above, or simply changing you Graphics object. I've tested both and they both work. A: Try setting the double buffered property to true just once in the constructor while you're testing. You need to make use of the back buffer. Try this: using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; namespace DoubleBufferTest { public partial class Form1 : Form { private BufferedGraphicsContext context; private BufferedGraphics grafx; public Form1() { InitializeComponent(); this.Resize += new EventHandler(this.OnResize); DoubleBuffered = true; // Retrieves the BufferedGraphicsContext for the // current application domain. context = BufferedGraphicsManager.Current; UpdateBuffer(); } private void timer1_Tick(object sender, EventArgs e) { this.Refresh(); } private void OnResize(object sender, EventArgs e) { UpdateBuffer(); this.Refresh(); } private void UpdateBuffer() { // Sets the maximum size for the primary graphics buffer // of the buffered graphics context for the application // domain. Any allocation requests for a buffer larger // than this will create a temporary buffered graphics // context to host the graphics buffer. context.MaximumBuffer = new Size(this.Width + 1, this.Height + 1); // Allocates a graphics buffer the size of this form // using the pixel format of the Graphics created by // the Form.CreateGraphics() method, which returns a // Graphics object that matches the pixel format of the form. grafx = context.Allocate(this.CreateGraphics(), new Rectangle(0, 0, this.Width, this.Height)); // Draw the first frame to the buffer. DrawToBuffer(grafx.Graphics); } protected override void OnPaint(PaintEventArgs e) { grafx.Render(e.Graphics); } private void DrawToBuffer(Graphics g) { //Graphics g = grafx.Graphics; Pen pen = new Pen(Color.Blue, 1.0f); //Random rnd = new Random(); for (int i = 0; i < Height; i++) g.DrawLine(pen, 0, i, Width, i); } } } It's a slightly hacked around version of a double buffering example on MSDN. A: No need to use multiple buffers or Bitmap objects or anything. Why don't you use the Graphics object provided by the Paint event? Like this: private void Form1_Paint(object sender, PaintEventArgs e) { Graphics g = e.Graphics; Pen pen = new Pen(Color.Blue, 1.0f); Random rnd = new Random(); for (int i = 0; i < Height; i++) g.DrawLine(pen, 0, i, Width, i); }
unknown
d11534
train
You have given int format specifiers instead of float ones. #include<stdio.h> #include<math.h> int main(){ float p,r,t,si,ci; printf("enter the principle:"); scanf("%f",&p); printf("enter the rate:"); scanf("%f",&r); printf("enter the time:"); scanf("%f",&t); si=p*r*t/100; ci=p*pow((1+(r/100)),t); printf("simple interest:%f",si); printf("\ncompound interest:%f",ci); getchar(); return 0; }
unknown
d11535
train
Just found the answer: git submodule add http://git.drupal.org/project/token.git sites/all/modules/token The leading "/" was the problem. A: I had the same problem, but apparently for different reasons. I tried to use git submodule add like I used git clone - without specifying the directory like this: git submodule add ../repos/subA instead of git submodule add ../repos/subA subA All I have to say is that is the effing worst error message possible to tell me I left off a required command-line argument.
unknown
d11536
train
As the error indicates, you can't execute two queries concurrently over the same connection. Open a second connection (you could name it Connection2) and assign that to $cm2: $cm2.Connection = $Connection2
unknown
d11537
train
You $list must have contained empty values use array_filter $IMGurls = array_map("genIMG", array_unique(array_filter($list))); Example $list = array(1,2,3,4,5,"","",7); function genIMG($sValue) { return 'http://asite.com/' . $sValue . '?&fmt=jpg'; } $IMGurls = array_map("genIMG", array_unique(array_filter($list))); foreach ( $IMGurls as $imgLink ) { echo "<a href='" . $imgLink . "'>" . $imgLink . "</a><br />"; } Output http://asite.com/1?&fmt=jpg http://asite.com/2?&fmt=jpg http://asite.com/3?&fmt=jpg http://asite.com/4?&fmt=jpg http://asite.com/5?&fmt=jpg http://asite.com/7?&fmt=jpg
unknown
d11538
train
This is also a ReSharper-Configuration problem that I encounter myself. May this help you. When I try to run the application, it get this ReSharper error: "The project to be run with this configuration is not present in the solution" In Resharper help page, I found the path to the run configurations: https://www.jetbrains.com/help/resharper/2016.2/Run_Configurations.html ReSharper | Tools | Run Configurations... Since the project doesn't exist anymore, I delete that connfiguration: {Project Name} > Configure > Delete... A: This issue (http://youtrack.jetbrains.com/issue/RSRP-295311) should be fixed in ReSharper 7 here: http://www.jetbrains.com/resharper/download/index.html. Thank you! A: I got the same ReSharper error in VS2019, VS2022: ReSharper – Failed to execute run configuration The project to be run with this configuration is not present in the solution. Turns out that you don’t have to necessarily run the project that is problematic. If there is some configuration saved for another project in ReSharper, and that project is not present anymore (for example, you’ve switched branches or deleted some project from the solution), this error will pop-up. It was my case. Solution is as described above: Go to Extensions -> ReSharper -> Tools -> Run configurations… -> nonExistingProject -> Configure -> Delete This will delete configuration that is preventing ReSharper to work well and blocks your run at the IDE. Apparently, still not fixed...
unknown
d11539
train
When you do updates using JPQL, the updates go directly to the database. Hibernate doesn't magically update corresponding entities in the persistence context (the first level cache). This userRepository.findUserByUsername(username) should get a user from the database (without the cache). But if you have a query cache enabled, the result can be loaded from the cache. You need to enable SQL logging and check the logs for that. Also keep in mind that @Transactional above blockUser() method does nothing because spring boot creates a transaction for each test method. It doesn't mean that you have to remove @Transactional of course.
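Purely as an illustrative sketch (not from the original answer; the entity, its blocked field, the query and the service name are all assumed), clearing the persistence context after a bulk JPQL update makes later reads hit the database again:

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class UserBlockingService {

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional
    public void blockUser(String username) {
        // A bulk JPQL update goes straight to the database and bypasses
        // any User entities already loaded into the persistence context.
        entityManager.createQuery(
                "update User u set u.blocked = true where u.username = :username")
            .setParameter("username", username)
            .executeUpdate();

        // Clear the first-level cache so subsequent finds re-read from the
        // database instead of returning a stale, un-blocked User instance.
        entityManager.clear();
    }
}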
unknown
d11540
train
You will have to set a timer for the time it takes the toast to disappear. If I'm not mistaken, LENGTH_SHORT is 2 seconds or around it. Call a timer with a timer task with a 2 seconds delay that will call finish in turn.
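As a minimal sketch only (not from the original answer): it uses a Handler rather than a TimerTask and hard-codes the assumed 2-second LENGTH_SHORT duration.

import android.app.Activity;
import android.os.Bundle;
import android.os.Handler;
import android.os.Looper;
import android.widget.Toast;

public class GoodbyeActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        Toast.makeText(this, "Goodbye!", Toast.LENGTH_SHORT).show();

        // Delay finish() roughly as long as a LENGTH_SHORT toast stays on
        // screen (about 2 seconds) so the toast is visible before the
        // activity closes.
        new Handler(Looper.getMainLooper()).postDelayed(() -> finish(), 2000);
    }
}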
unknown
d11541
train
If you want to create a dataframe with t1 through t12 containing a range of dates: t = seq(mdy("01/01/2000"), by = "3 months", length.out = 12) #this replaces the loop names(t) <- paste0("t", c(1:12)) #this names your vector data.frame(as.list(t)) #this creates the df
unknown
d11542
train
You need to use FD_SET on the file descriptors you're actually interested in -- namely tuberia1_fd and tuberia2_fd. So something like... while (1) { FD_ZERO(&rfds); FD_SET(tuberia1_fd, &rfds); FD_SET(tuberia2_fd, &rfds); int max; if (tuberia1_fd > tuberia2_fd) { max = tuberia1_fd; } else { max = tuberia2_fd; } tv.tv_sec = 0; tv.tv_usec = 0; retval = select(max + 1, &rfds, NULL, NULL, &tv);
unknown
d11543
train
Charts can generate icCube events for a couple of JS events (I guess here on row click). Check the image below. Edit: For a bar chart, use the "On Navigate" event.
unknown
d11544
train
First thing to do is to adjust your data model in the store. Use "mapping" to accommodate all the fields in object field into individual fields, like that: var store = Ext.create('Ext.data.JsonStore', { fields: [ 'Date', 'Ahourly', {name:'ChourlyVal',mapping:'Chourly.val'}, {name:'ChourlyColor',mapping:'Chourly.color'}, {name:'ChourlyStatus',mapping:'Chourly.status'}, {name:'CdailyVal' ,mapping:'Cdaily.val'}, {name:'CdailyColor' ,mapping:'Cdaily.color'}, {name:'CdailyStatus' ,mapping:'Cdaily.status'} ], proxy: { type: 'ajax', url: 'data1.json', reader: { type: 'json', rootProperty: 'data' } }, autoLoad: true }); And than, adjust the column and the renderer to use the mapped fields: { text: 'Chourly', dataIndex: 'ChourlyVal', flex: 1, renderer: function (a, meta, record) { console.log(record); meta.tdStyle = "background-color:" + record.data.ChourlyColor + ";"; var cellText = a + " " + record.data.ChourlyStatus; return cellText; }, editor: 'textfield' } I hope that helps A: Thanks for your reply. I think its the good way. Then I think I have to use the serialize function to put the object back together with the modified value(s) to send back to the server like this : fields: ['Date', { name: 'Chourly', serialize: function (val, rec) { return { 'val': rec.get('val'), 'status': rec.get('status'), 'color': rec.get('color') }; } }, { name: 'val', mapping: 'Chourly.val', persist: false //Do not write back to server }, { name: 'status', mapping: 'Chourly.status', persist: false }, { name: 'color', mapping: 'Chourly.color', persist: false }, ... ]... Now I have to do that dynamically because I don't know in advance which data the user will consult (Chourly, Dhourly, Ehourly ....). Any idea to modify the model on the load callback ?
unknown
d11545
train
NSMutableArray *arrayContainZero = [NSMutableArray new]; NSMutableArray *arrayWithZeroIndexes = [NSMutableArray new]; for (NSNumber *numberZero in Array) { if(numberZero.integerValue == 0) { [arrayContainZero addObject:numberZero]; [arrayWithZeroIndexes addObject:@([arrayContainZeroindexOfObject:numberZero])]; } } A: NSArray *mainArr=@[@1,@1,@1,@0,@0,@0,@1,@1,@0,@1,@1]; NSMutableArray *tempArr=[[NSMutableArray alloc] init]; for(int i=0;i<[mainArr count];i++){ if([[mainArr objectAtIndex:i] isEqualToNumber:[NSNumber numberWithInt:0]]){ NSMutableArray *arr=[[NSMutableArray alloc] initWithArray:tempArr]; [tempArr removeAllObjects]; NSDictionary *dic=[NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithInt:0],@"value",[NSNumber numberWithInt:i],@"index",nil]; [tempArr addObject:dic]; [tempArr addObjectsFromArray:arr]; }else if ([[mainArr objectAtIndex:i] isEqualToNumber:[NSNumber numberWithInt:1]]){ NSDictionary *dic=[NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithInt:1],@"value",[NSNumber numberWithInt:i],@"index",nil]; [tempArr addObject:dic]; } } NSLog(@"%@",tempArr.description);
unknown
d11546
train
My best bet is that either there are two static UITableViews inside a UIScrollView or that it's some custom subclass UIView set as tableHeaderView and styled to look as on the picture. If I were to implement it I'd go with the second choice. A: Make a subclass of UITableView. Give your subclass properties referring to the photo well and the three special row cells. Override layoutSubviews to call [super layoutSubviews] and then set the frames of the photo well and the three special row cells. The photo well should not be a cell.
unknown
d11547
train
import random PREFIX = 'AAA' numbers = random.sample(range(0,10000), 10) # to include also 9999 with open("output.txt","a") as f: for number in numbers: f.write(f'{PREFIX}{number:0>4d}\n') A: Simple Concatenation can do the trick for you .. def converter(): string="AAA" ans= string+str(number) return ans
unknown
d11548
train
Figured it out. Coming from a regular programming background, I had a confusion on how the historic referencing works and the way the pine script runs on each bar. valuewhen returns the first occurrence of the 5-bar D fractal with the TOP value in the middle. The actual top value prev_top_fractal is obtained by historic referencing [2] on the returned result. In the next line, prev_high in price is found by a similar logic.
unknown
d11549
train
Have you ever tried to change <property name="grammarLocation" value="resource:/edu/cmu/sphinx/demo/transcriber"/> to <property name="grammarLocation" value="resource:edu/cmu/sphinx/demo/transcriber"/> (meaning just removing the leading slash before edu)? Class.getResource() and ClassLoader.getResource() do interpret the provided name differently: while Class.getResource( "/edu/cmu/sphinx/demo/transcriber/transcriber.manifest" ) would find the resource, you have to use ClassLoader.getResource() with the argument "edu/cmu/sphinx/demo/transcriber/transcriber.manifest" to find the same resource. As you do not know which method is used by the library code you call to load the other resources, you should give it a try.
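A small illustrative snippet (not part of the original answer) showing how the two lookups treat the leading slash differently:

import java.net.URL;

public class ResourceLookupDemo {
    public static void main(String[] args) {
        String path = "edu/cmu/sphinx/demo/transcriber/transcriber.manifest";

        // Class.getResource: a leading slash means "absolute from the classpath
        // root"; without it the name is resolved relative to this class's package.
        URL viaClass = ResourceLookupDemo.class.getResource("/" + path);

        // ClassLoader.getResource: names are always treated as absolute and must
        // not start with a slash, otherwise the lookup typically returns null.
        URL viaLoader = ResourceLookupDemo.class.getClassLoader().getResource(path);

        System.out.println("Class.getResource:       " + viaClass);
        System.out.println("ClassLoader.getResource: " + viaLoader);
    }
}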
unknown
d11550
train
Because when the hover callback runs it uses the global variable origImgSrc. The variable origImgSrc is rewritten on every iteration, so it ends up equal to the last image's src. Just put origImgSrc inside your each.
unknown
d11551
train
Well it works now. But actually I don't know why. I shouldn't care. I copied the working code from the bench-*.php file from the /docs into my own file and it worked. Here's the code: echo "\n-- Judy STRING_TO_INT \n"; echo "Mem usage: ". memory_get_usage() . "\n"; echo "Mem real: ". memory_get_usage(true) . "\n"; $s=microtime(true); $judy = new Judy(Judy::STRING_TO_MIXED); for ($i=0; $i<500; $i++) $judy["$i"] = 'test'; var_dump($judy); unset($judy["102"]); echo $judy["192"]; var_dump($judy["102"]); echo "Size: ".$judy->size()."\n"; $e=microtime(true); echo "Elapsed time: ".($e - $s)." sec.\n"; echo "Mem usage: ". memory_get_usage() . "\n"; echo "Mem real: ". memory_get_usage(true) . "\n"; echo "\n"; unset($judy);
unknown
d11552
train
WebGL effectively clears the screen after the page has been composited. When you're stepping through stuff one line at a time it's going to be composited every time you stop. If you don't want it to be cleared ask for preserveDrawingBuffer: true when you create the WebGL context as in gl = someCanvas.getContext("webgl", { preserveDrawingBuffer: true }); As for why, from the spec While it is sometimes desirable to preserve the drawing buffer, it can cause significant performance loss on some platforms. Whenever possible this flag should remain false and other techniques used. Techniques like synchronous drawing buffer access (e.g., calling readPixels or toDataURL in the same function that renders to the drawing buffer) can be used to get the contents of the drawing buffer. If the author needs to render to the same drawing buffer over a series of calls, a Framebuffer Object can be used. Implementations may optimize away the required implicit clear operation of the Drawing Buffer as long as a guarantee can be made that the author cannot gain access to buffer contents from another process. For instance, if the author performs an explicit clear then the implicit clear is not needed. The TL;DR version is preserveDrawingBuffer: false (the default) allows WebGL to swap buffers when compositing (that doesn't mean it will swap buffers but it can if it chooses to). preserveDrawingBuffer: true means it can't swap buffers, it must copy buffers. Copying is much slower than swapping.
unknown
d11553
train
Whether you do this inside the function or not it's fully up to you. Personally if I did it inside the function I would change its name to something clearer since it doesn't only check if a key exists. Anyhow I found a solution within the same function: function array_key_exists_r($needle, $haystack){ $result = array_key_exists($needle, $haystack); if ($result && $haystack[$needle]){ return $result; } foreach ($haystack as $v) { if (is_array($v) || is_object($v)){ $result = array_key_exists_r($needle, $v); if ($result) { return $result; } } } return false; } So basically I added a validation on your ifs and that did it also change the default return value to false just in case. I think it can still be enhanced but this does the job. A: Try this approach instead. Easier! function array_key_exists_r($needle, $haystack) { $found = []; array_walk_recursive($haystack, function ($key, $value) use (&$found) { # Collect your data here in $found }); return $found; }
unknown
d11554
train
Public API choices: -[NSView cacheDisplayInRect:toBitmapImageRep:] -[NSBitmapImageRep initWithFocusedViewRect:] Private WebKit method: -[DOMElement renderedImage]
unknown
d11555
train
You can first pick the TOP 10 Publications and then put a JOIN with the Category table like following query to get all the categories. SELECT [Publication].*,[PublicationCategory].[categoryid] FROM ( SELECT TOP 10 [Publication].id, [Publication].title, [Publication].content FROM Publications [Publication] ORDER BY [Publication].Id DESC ) [Publication] INNER JOIN Categories [PublicationCategory] ON [Publication].id = [PublicationCategory].publicationid DEMO A: Use a CTE to number your publlication, and then JOIN onto your table PublicationCategory and filter on the value of ROW_NUMBER(): WITH RNs AS( SELECT P.Id, P.Title, P.Content, ROW_NUMBER() OVER (ORDER BY P.ID DESC) AS RN FROM Publication P) SELECT RNs.Id, Rns.Title, RNs.Content, PC.CategoryId FROM RNs LEFT JOIN PublicationCategory PC ON RNs.Id = PC.Id WHERE RNs.RN <= 10; A: I Think the best answer is the @PSK's but What if a publication is not categorized? (weird case but if is not validated maybe could happen) so you can add a left join and always get at least the 10 publications, if a publication has no category you still will get it but with a NULL category SELECT [Publication].*,[PublicationCategory].[categoryid] FROM ( SELECT TOP 10 [Publication].id, [Publication].title, [Publication].content FROM Publications [Publication] ORDER BY [Publication].Id DESC ) [Publication] LEFT JOIN Categories [PublicationCategory] ON [Publication].id = [PublicationCategory].publicationid
unknown
d11556
train
In this case, you can just remove the "$" from your custom formula and it will move just fine. Your ranges are going to stay together as you showed, but the formula will look at the cell to its right.
unknown
d11557
train
Uploading cannot be batched, please run the upload requests individually.
unknown
d11558
train
Try this instead: req.Method = "POST"; req.ContentType = "application/x-www-form-urlencoded"; byte[] param = Request.BinaryRead(HttpContext.Current.Request.ContentLength); string strRequest = Encoding.ASCII.GetString(param); strRequest += "&cmd=_notify-validate"; req.ContentLength = strRequest.Length;
unknown
d11559
train
The following solutions can be provided: * *if statement to check adult age could be placed outside switch then default in switch is referring to minor ages `[0..10]. if (a >= 18) { System.out.println("You can watch any film unaccompanied"); } else { switch (a) { case 11 -> System.out.println("You can view up to PG rated films unaccompanied"); case 12, 13, 14 -> System.out.println("You can view films up to a 12 age rating unaccompanied"); case 15, 16, 17 -> System.out.println("You can view films up to a 15 age rating unaccompanied"); default -> System.out.println("You are too young to view a film unaccompanied at the age of " + a); } } *replace if with a ternary operator in the default: switch (a) { case 11 -> System.out.println("You can view up to PG rated films unaccompanied"); case 12, 13, 14 -> System.out.println("You can view films up to a 12 age rating unaccompanied"); case 15, 16, 17 -> System.out.println("You can view films up to a 15 age rating unaccompanied"); default -> System.out.println(a >= 18 ? "You can watch any film unaccompanied" : "You are too young to view a film unaccompanied at the age of " + a); } A: The reason you're getting this is because the if statements are in the case for 15,16,17. Thus the only situation where if(a>=18) or if(a<11) is ran the only possible things a could be is 15, 16, or 17. A: You have some errors in your code which you must fix. You can't have if statements in a switch. Better move them outside. Here's what the working example should be: import java.util.Scanner; public class Main { public static void main(String[] args) { Scanner sc = new Scanner(System.in); System.out.println("Please enter your age: "); int a = sc.nextInt(); if (a >= 18) { System.out.println("You can watch any film unaccompanied"); }else if (a < 11) { System.out.println("You are too young to view a film unaccompanied"); } switch (a) { case 11: System.out.println("You can view up to PG rated films unaccompanied"); break; case 12: System.out.println("You can view films up to a 12 age rating unaccompanied"); break; case 13: System.out.println("You can view films up to a 12 age rating unaccompanied"); break; case 14: System.out.println("You can view films up to a 12 age rating unaccompanied"); break; case 15: System.out.println("You can view films up to a 15 age rating unaccompanied"); break; case 16: System.out.println("You can view films up to a 12 age rating unaccompanied"); break; case 17: System.out.println("You can view films up to a 12 age rating unaccompanied"); break; } } } Sample I/O Input 12 Output You can view films up to a 12 age rating unaccompanied
unknown
d11560
train
I would recommend the following approach: * *create a simple class containing the following fields: name, description, category, and price *create an empty array that will be indexed with the app_id, each corresponding to an instance of the aforementioned class *read each file and affect the result to the correct element in the array if (!isset($array[$app_id])) $array[$app_id] = new MyClass(); array[$app_id]->name = $name; array[$app_id]->description = $description; *once you're done reading all files, you can iterate through the array and write each element to the SQL table. A: Simply open all three files in sequence and then have three stream_get_line's in your while loop, one for each file: $fp1 = fopen('textfile1','r'); $fp2 = fopen('textfile2','r'); $fp3 = fopen('textfile3','r'); while (!feof($fp1)) { $line1 = stream_get_line($fp1...) $line2 = stream_get_line($fp2...) $line3 = stream_get_line($fp3...) ... You'll have to take care that each file has exactly the same number of lines, though. Best is probably to check for feof on each stream before reading a line from it, and aborting with an error message if one of the streams runs out of lines before the others. A: Try this: file1.txt contents: id 13 category test description test file2.txt: id 13 name test description test id 15 name test description test id 17 name test description test $files = array('file1.txt','file2.txt'); foreach($files as $file) { flush(); preg_match_all("/id\s(?<id>\d+)\s(?:name|category|price)?\s(?<name>[\w]+)\sdescription\s(?<description>[^\n]+)/i",@file_get_contents($file),$match); print_r($match); } returns: Array ( [0] => Array ( [0] => id 13 category test description test ) [id] => Array ( [0] => 13 ) [1] => Array ( [0] => 13 ) [name] => Array ( [0] => test ) [2] => Array ( [0] => test ) [description] => Array ( [0] => test ) [3] => Array ( [0] => test ) ) Array ( [0] => Array ( [0] => id 13 name test description test [1] => id 15 name test description test [2] => id 17 name test description test ) [id] => Array ( [0] => 13 [1] => 15 [2] => 17 ) [1] => Array ( [0] => 13 [1] => 15 [2] => 17 ) [name] => Array ( [0] => test [1] => test [2] => test ) [2] => Array ( [0] => test [1] => test [2] => test ) [description] => Array ( [0] => test [1] => test [2] => test ) [3] => Array ( [0] => test [1] => test [2] => test ) )
unknown
d11561
train
I found out what helped me. I thought I had tried it already, but apparently not: add this at the top of the script - for some reason this server and git need it: $env:GIT_REDIRECT_STDERR = '2>&1'
unknown
d11562
train
The solution I found is inspired by this answer. Actually, the AUTOSSH_PIDFILE variable could not be used by autossh (because start-stop-daemon runs in a different environment). So the workaround is to use : $ sudo start-stop-daemon --background --name mytunnel --start --exec /usr/bin/env AUTOSSH_PIDFILE="/var/run/mytunnel.pid" /usr/lib/autossh/autossh -- -M 0 -p 22 user@server -f -T -N -R 31022:localhost:31222 * */usr/bin/env AUTOSSH_PIDFILE="/var/run/mytunnel.pid" correctly defines the necessary environment variable *--make-pidfile and --pidfile are no longer required by start-stop-daemon *sudo start-stop-daemon --pidfile /var/run/mytunnel.pid --stop now works to kill autossh *--background option makes the ssh's -f optional (using -f or not does not change anything if --background is used) The reason for the behaviour is not completely clear to me. However, it seems that autossh automatically creates several processes to handle correctly ssh instances when it does not see AUTOSSH_PIDFILE variable. Edit: When using it from a service init script (in /etc/init.d/servicename), the syntax has to be modified: sudo start-stop-daemon --background --name mytunnel --start --exec /usr/bin/env -- AUTOSSH_PIDFILE="/var/run/mytunnel.pid" /usr/lib/autossh/autossh -M 0 -p 22 user@server -f -T -N -R 31022:localhost:31222 Notice the -- that must come just after the /usr/bin/env command (it was after the /usr/lib/autossh/autossh from command line). A: Just an update for a working solution from 7 years into the future as I recently came across this exact same issue. The AUTOSSH_PIDFILE variable for some reason will make the program fail without any output, including when using -v. What I found would work is simply foregoing -f on autossh and using --background on the start-stop-daemon. Apparently this also necessitates the -N flag on autossh, which is as best I can tell undocumented. Could be a new development or upstream issues, but in any case the answer by OP did not work for me, but this did: start-stop-daemon --start --background --user myuser --quiet --make-pidfile --pidfile "/var/run/myhost_autossh.pid" --exec autossh -- -M 0 -N myhost
unknown
d11563
train
This should do it:

Process p = new Process();
p.StartInfo = new ProcessStartInfo(url) { UseShellExecute = true };
p.Start();

EDIT: This will work for a valid URL. As the comment above says, this will not work for http://about:home.

EDIT #2: I will keep the previous code in case it's helpful for anybody. Since the comment above I've been looking into how to do it, and indeed it was not so trivial. This is what I did in order to launch the default browser without navigating to any URL.

using (var assoc = Registry.CurrentUser.OpenSubKey(@"SOFTWARE\Microsoft\Windows\Shell\Associations\UrlAssociations\http\UserChoice"))
{
    using (var cr = Registry.ClassesRoot.OpenSubKey(assoc.GetValue("ProgId") + @"\shell\open\command"))
    {
        string loc = cr.GetValue("").ToString().Split('"')[1];
        // In Windows 10, if Microsoft Edge is the default browser,
        // loc = C:\Windows\system32\LaunchWinApp.exe, so launch Microsoft Edge manually,
        // because I couldn't figure out how to launch it through that exe
        if (Path.GetFileNameWithoutExtension(loc) == "LaunchWinApp")
            Process.Start("explorer", @"shell:Appsfolder\Microsoft.MicrosoftEdge_8wekyb3d8bbwe!MicrosoftEdge");
        else
            Process.Start(loc);
    }
}

I tested it on my machine (Win10) and it worked for every default browser I switched to. Hope it helps now.
unknown
d11564
train
Try wrapping type T#TT2 { type TT4 = Int } in parenthesis before the final projection like so def test[T <: T1](t: (T#TT2 { type TT4 = Int })#TT3) = ??? Types can always be wrapped in parentheses SimpleType ::= SimpleType TypeArgs | SimpleType ‘#’ id | StableId | Path ‘.’ ‘type’ | Literal | ‘(’ Types ‘)’ <======= note the parentheses for example scala> val xs: (List[(Int)]) = List(42) val xs: List[Int] = List(42)
unknown
d11565
train
Problem solved, although I didn't totally understand the logic. From EJB Spec 3.1 - 4.9.2.1, Session Bean Superclasses: A session bean class is permitted to have superclasses that are themselves session bean classes. However, there are no special rules that apply to the processing of annotations or the deployment descriptor for this case. For the purposes of processing a particular session bean class, all superclass processing is identical regardless of whether the superclasses are themselves session bean classes. In this regard, the use of session bean classes as superclasses merely represents a convenient use of implementation inheritance, but does not have component inheritance semantics. For example, the client views exposed by a particular session bean are not inherited by a subclass that also happens to define a session bean.

@Stateless public class A implements Foo { ... }
@Stateless public class B extends A implements Bar { ... }

Assuming Foo and Bar are local business interfaces and there is no associated deployment descriptor, session bean A exposes local business interface Foo and session bean B exposes local business interface Bar, but not Foo. Session bean B would need to explicitly include Foo in its set of exposed views for that interface to apply. For example:

@Stateless public class A implements Foo { ... }
@Stateless public class B extends A implements Foo, Bar { ... }

In my case, refactoring the project and removing the interfaces worked.
unknown
d11566
train
Performance and efficiency are slightly more important considerations in Android. Something that would be considered a half-baked optimization effort elsewhere sometimes makes sense on Android (for example, preferring the Java int constant pattern over enums). So the answer to your question is: if you have to register multiple onClick listeners, implement the interface and use a switch case within it. If you have to register only one click listener, use an anonymous class. (Android developers prefer anonymous classes whenever possible - it limits the scope ;))
unknown
d11567
train
The big difference is that cloud computing is a big group of servers in one data center building, usually at a single location. A CDN, on the other hand, is also a group of servers, but distributed around the country, so it gives web visitors better and faster access to the website. For example, if you're in location A trying to reach a server in location B, it can be faster to hit a server locally in A for the files. A CDN is usually able to support much larger traffic volumes, since requests are routed based on the location the traffic comes from. CDNs work on the principle of delivering content from the server nearest to the user's location. In short, a CDN is a way to boost and speed up your website's turnaround time. A: A CDN is simply a network of servers that replicate your binary files so that they are served from geographically close locations. CDNs have been around a lot longer than cloud computing as you know it today. Not every cloud provider is a CDN, and not every CDN is a cloud computing provider. Cloud computing is simply dividing up a large computing resource (usually processing power) into little chunks which you can use remotely. A CDN is simply a bunch of "disks" that are spread across the world in different datacenters. You upload your file to one of these disks and then tell it where your customers are coming from. It will then copy the same file to other disks that are nearer to your customers, giving your visitors a faster experience. This collection of disks is called the content delivery network. One of the biggest names in CDN is Akamai. A: Short answer: they are different. A detailed one follows: * *A CDN, short for content delivery network, is more like edge computing. It follows the end-to-end principle of networking: as much work as can be decentralized and distributed to nodes near the user should be, reducing single points of failure. You can read a small article written by me at: http://www.sitepoint.com/content-delivery-networks-cdn-get-to-the-edge/ *Cloud computing is much more than delivering content near the edge. It's elastic compute, storage and network on demand in a very broad sense. For computing you need storage and processing power, and that's what cloud computing provides.
unknown
d11568
train
The line $x = shift in your second Perl example just overwrites a global, lexically-scoped variable, the same as if you added global x to your Python code. This has nothing to do with dynamic scoping, and there are many other languages with the same behaviour as Perl - I'd consider Python the odd man out here for requiring you to explicitly declare a variable name that is already visible at lexical scope. The real problem with the Perl code is not lexical scoping, but the lack of declared parameters. With declared parameters, it would be impossible to forget the my, and the problem would go away as well. I find Python's approach to scoping (as of Python 2) far more problematic: it's inconsistent (an explicit declaration is needed for globals to get a read-write binding, whereas lexicals in nested functions are bound automatically, but read-only) and makes closures second-class citizens.
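To make the comparison concrete, here is a minimal Python sketch of the two behaviours described above (names are illustrative only):

counter = 0

def bump():
    global counter            # without this, "counter = ..." would create a new local name
    counter = counter + 1

def make_adder():
    total = 0
    def add(n):
        # "total" can be read here, but rebinding it would need Python 3's "nonlocal";
        # in Python 2 the closed-over name is effectively read-only
        return total + n
    return add

bump()
print(counter)          # 1
print(make_adder()(5))  # 5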
unknown
d11569
train
You can try adding a timestamp to check when it is being requested the second time: console.log("got request @" + new Date()); Since processing is taking time, the request is getting timed out and hence another request is being fired. You need to increase the timeout interval. In vanilla JS: var server = http.createServer(function (req, res) { ... }); server.timeout = 120000; If you are using Express then you can use Connect Middleware for Timeout support: var timeout = express.timeout // express v3 and below var timeout = require('connect-timeout'); //express v4 app.use(timeout(120000)); Edit: Handling the multiple request issue: Most of the answers on SO hint that second request is made for favicon. But I checked on my local machine that this is not the case with the second request made after interval of around 2 minute by printing the req.url which points to the same page which in your case will be /saveCollection. I can suggest a hack which worked on my local machine: * *Send headers: res.writeHead(200, { 'Content-Type': 'application/json' }); upon request. This will buy us 2 minutes time. *Now setup a interval which will keep feeding browser every 2 minutes with dummy data. *When the computation is completed remove the interval, send the actual result and close the response. Here's an example: router.post('/saveCollection', function(req, res,next) { console.log("col name:"+req.param("collName")); var fileStream=fs.createReadStream(req.files.myFile.path); var csvConverter=new Converter({constructResult:false}); csvConverter.on("end_parsed",function() { console.log('file completely parsed:'); clearInterval(si); res.write(", 'success':true}"); res.end(); }); i = 0; var si = setInterval(function() { i++; res.write(",'dummy" + i + "' : 'piece'"); }, 1000 * 60); //this should be kept a little less than two minutes console.log('before parsing'); res.writeHead(200, { 'Content-Type': 'application/json' }); res.write("{'dummy':'piece'"); fileStream.pipe(csvConverter); }); There is no other way to do this that I can think of. Even if you ignore the second request, still the first request will expire and browser will probably show an error. There is another work around, start computing and send a response back as soon as computing starts and then use sockets to check the status.
unknown
d11570
train
You can use the command this way: gcloud app logs tail --service=my-service --version=my-app-version in order to specify the service and version, then see If you're not really getting all the logs. See a list of all options here. Also you can see all your logs by going to Stackdriver -> Logging -> Logs: Once there, you can filter the logs by app version: Also be aware that depending on the request you make, sometimes you'll only see certain kind of logs. I pasted your code in the quickstart for app engine flexible and I got this: ........... 2017-12-18 09:40:07 my-service[my-app-version] "GET /favicon.ico" 200 2017-12-18 09:40:07 my-service[my-app-version] "GET /" 200 2017-12-18 09:40:07 my-service[my-app-version] "GET /favicon.ico" 200 2017-12-18 09:40:08 my-service[my-app-version] "GET /" 200 2017-12-18 09:40:13 my-service[my-app-version] org.eclipse.jetty.server.handler.ContextHandler.root: com.example.appengine.gettingstartedjava.helloworld.HomeServlet: FINE is not loggable 2017-12-18 09:40:13 my-service[my-app-version] com.example.appengine.gettingstartedjava.helloworld.HomeServlet: Received GET request 2017-12-18 09:40:13 my-service[my-app-version] org.eclipse.jetty.server.handler.ContextHandler.root: com.example.appengine.gettingstartedjava.helloworld.HomeServlet: responded GET request from: 35.187.117.231 2017-12-18 09:40:13 my-service[my-app-version] This is System Out. FINE is not loggable 2017-12-18 09:40:15 my-service[my-app-version] This is System Out. FINE is not loggable .......... In addition to this, you can use the Stackdriver Logging Client Libraries. First add this to your POM dependencies: <dependency> <groupId>com.google.cloud</groupId> <artifactId>google-cloud-logging</artifactId> <version>1.14.0</version> </dependency> then use the quickstart and paste this code in the HelloServlet.java. Your code would look like this: @Override public void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException { PrintWriter out = resp.getWriter(); out.println("Hello, world - Flex Servlet"); Logging logging = LoggingOptions.getDefaultInstance().getService(); String logName = "My-log"; String text = "Hello World"; LogEntry entry = LogEntry.newBuilder(StringPayload.of(text)).setSeverity(Severity.ERROR).setLogName(logName) .setResource(MonitoredResource.newBuilder("global").build()).build(); logging.write(Collections.singleton(entry)); System.out.printf("Logged: %s%n", text); } You can see this result in Stackdriver:
unknown
d11571
train
I have created this simple example on CodePen which you may use to help solve your question and learn. In the demo, click the 'menu' button to toggle the sidebar. The method I think you need to use is to position your main content area 'absolutely', and 'left' by the width of your sidebar when it is expanded I demo this in my demo by toggling a CSS class containing these properties on the main content area. This allows your responsive grid cells to remain the same width whilst the menu is open. I find it easier to see examples on CodePen but I have also embedded my example here: $("button").on("click", function() { $(".sidebar").toggleClass('is-active'); }) body { margin: 0; overflow-x: hidden; } .sidebar { display: none; width: 220px; height: calc(100vh - 28px); background: #eee; border-right: #ccc; padding: 14px; position: relative; left: -220px; } .sidebar.is-active { display: block; left: 0; transition: all 2000ms ease; } .sidebar.is-active~.main-content { width: 100vw; height: calc(100vh - 28px); position: absolute; left: calc(220px + 28px); } .main-content { width: 100%; padding: 14px; } .row { display: flex; flex-wrap: no-wrap; flex-direction: row; justify-content: space; } .row .cell { width: 100%; background: #eee; padding: 14px; } .row .cell:nth-of-type(2) { border-right: 1px solid lightgray; border-left: 1px solid lightgray; } nav a { display: block; padding: 14px; } .menu-btn { position: absolute; right: 0; z-index: 9999; } .wrapper { width: 100%; height: 100vh; display: flex; } <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script> <button type="button" class="menu-btn">Menu</button> <div class="wrapper"> <div class="sidebar"> <nav> <a href="#">Link #1</a> <a href="#">Link #2</a> <a href="#">Link #3</a> <a href="#">Link #4</a> </nav> </div> <div class="main-content"> <div class="row"> <h1>Main content</h1> </div> <div class="row"> <div class="cell"> Cell #1 </div> <div class="cell"> Cell #2 </div> <div class="cell"> Cell #3 </div> </div> </div> </div>
unknown
d11572
train
What is your computational goal more specifically? Here's a way to split your data up and create a combined frame In [44]: x = df['pipestring'].apply(lambda x: pd.Series(x.split('|'))) In [45]: x Out[45]: 0 1 2 3 0 aa aaa aaa NaN 1 bb bbbb bbb bbbbbb In [46]: df.join(x).set_index(['wibble']) Out[46]: pipestring pipelist 0 1 2 3 wibble a aa|aaa|aaa [aa, aaa, aaa] aa aaa aaa NaN b bb|bbbb|bbb|bbbbbb [bb, bbbb, bbb, bbbbbb] bb bbbb bbb bbbbbb A: The quickest way to get started with that is to stack your dataframe: In [44]: df = df.stack() In [45]: df.ix[0, 'pipelist'] Out[45]: ['aa', 'aaa', 'aaa'] In [46]: df Out[46]: 0 pipestring aa|aaa|aaa wibble a pipelist [aa, aaa, aaa] 1 pipestring bb|bbbb|bbb|bbbbbb wibble b pipelist [bb, bbbb, bbb, bbbbbb] Does that get you where you want to be?
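On a reasonably recent pandas you can get the same split without apply, via the .str accessor's expand option - a small equivalent sketch (the data is made up to match the frame above):

import pandas as pd

df = pd.DataFrame({'wibble': ['a', 'b'],
                   'pipestring': ['aa|aaa|aaa', 'bb|bbbb|bbb|bbbbbb']})

# one column per pipe-delimited piece; shorter rows are padded with None
parts = df['pipestring'].str.split('|', expand=True)

print(df.join(parts).set_index('wibble'))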
unknown
d11573
train
This can be done using webpack's weak resolve. Example from the webpack docs:

const page = 'Foo'; // Trick: can be taken from props
__webpack_modules__[require.resolveWeak(`./page/${page}`)];

My use case: suppose we're doing A/B testing on component D, which has variations D1, D2 and D3. We can make a folder D/ with the D1.js, D2.js and D3.js variations inside it. Now require.resolveWeak(`./D/${variation}`) will pack chunks for D1, D2 and D3 into the build folder. At runtime, passing the props to pick the particular variation will dynamically load that JS. Note: e.g. to pick the D2 variation, the experiment name passed as props must also be D2 (or else you must store a mapping from experiment name to component name). Generally, people do A/B testing by just having multiple if-elses. So in loadVariationOfD.js, instead of a weak-resolve import statement, an if-else is used with dynamic imports (I'm using loadable-components for this).
unknown
d11574
train
Try this: $paged = (get_query_var('paged')) ? get_query_var('paged') : 1;
unknown
d11575
train
Try ELDK cross compiler toolchain (linux distributions) for PowerPC: ftp://ftp.denx.de/pub/eldk/4.2/ppc-linux-x86/distribution/README.html Check Eldk version 4.2. Help: https://www.denx.de/wiki/ELDK-5/WebHome ppc_74xx-gcc from eldk can be used to compile for your platform. ` $ ppc_74xx-gcc -c myfile.c `
unknown
d11576
train
<RelativeLayout android:layout_width="54dp" android:layout_height="54dp" android:layout_alignParentTop="true" android:layout_alignParentStart="true" android:layout_marginStart="20dp" android:layout_marginTop="20dp"> <ImageView android:id="@+id/warning_circle" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignParentBottom="true" android:layout_alignParentRight="true" android:src="@drawable/ic_warning_circle"/> <ImageView android:id="@+id/button_menu" android:layout_width="44dp" android:layout_height="44dp" android:layout_alignParentTop="true" android:layout_alignParentStart="true" android:padding="5dp" android:src="@drawable/ic_menu" android:background="@drawable/button_white_rounded" android:elevation="4dp" android:scaleType="fitXY" android:adjustViewBounds="true" android:tint="@color/colorText"/> </RelativeLayout> A: The layout_align [Top|Bottom|Left|Right] attribute in RelativeLayout is used to align views based on their respective x and y values within the margin. The second ImageView will now be aligned to the top, bottom, left, and right of the first ImageView based on the margins. Padding is ignored in the alignment. <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" android:background="@color/white" > <ImageView android:id="@+id/inside_imageview" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginBottom="5dip" android:layout_marginTop="5dip" android:src="@drawable/frame" /> <ImageView android:id="@+id/outside_imageview" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignTop="@id/inside_imageview" android:layout_alignBottom="@id/inside_imageview" android:layout_alignLeft="@id/inside_imageview" android:layout_alignRight="@id/inside_imageview" android:scaleType="fitXY" /> </RelativeLayout> A: Your code is perfect but either you have to remove android:elevation from button_menu or you have to add android:elevation in warning_circle. I have updated my answer based on @MikeM. 
suggestion for use RelativeLayout instead of ConstraintLayout Solution 1: <RelativeLayout android:layout_width="54dp" android:layout_height="54dp" android:layout_alignParentStart="true" android:layout_alignParentTop="true" android:layout_marginStart="20dp" android:layout_marginTop="20dp"> <ImageView android:id="@+id/button_menu" android:layout_width="44dp" android:layout_height="44dp" android:layout_alignParentStart="true" android:layout_alignParentTop="true" android:adjustViewBounds="true" android:background="@drawable/button_white_rounded" android:padding="5dp" android:scaleType="fitXY" android:src="@drawable/ic_menu" android:tint="@color/colorText" tools:src="@tools:sample/avatars" /> <ImageView android:id="@+id/warning_circle" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignParentRight="true" android:layout_alignParentBottom="true" android:src="@drawable/ic_warning_circle" tools:src="@tools:sample/avatars" /> </RelativeLayout> Solution 2: <RelativeLayout android:layout_width="54dp" android:layout_height="54dp" android:layout_alignParentStart="true" android:layout_alignParentTop="true" android:layout_marginStart="20dp" android:layout_marginTop="20dp"> <ImageView android:id="@+id/button_menu" android:layout_width="44dp" android:layout_height="44dp" android:layout_alignParentStart="true" android:layout_alignParentTop="true" android:adjustViewBounds="true" android:background="@drawable/button_white_rounded" android:elevation="4dp" android:padding="5dp" android:scaleType="fitXY" android:src="@drawable/ic_menu" android:tint="@color/colorText" tools:src="@tools:sample/avatars" /> <ImageView android:id="@+id/warning_circle" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignParentRight="true" android:layout_alignParentBottom="true" android:elevation="4dp" android:src="@drawable/ic_warning_circle" tools:src="@tools:sample/avatars" /> </RelativeLayout> A: You can use FrameLayout to achieve this.. here is an example, you can modify this according to your need. In warning icon image view add gravity as bottom|end and in menu icon image view add margin of 10dp when using FrameLayout. 
<FrameLayout android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginStart="20dp" android:layout_marginTop="20dp"> <ImageView android:id="@+id/button_menu" android:layout_width="44dp" android:layout_height="44dp" android:layout_margin="10dp" android:padding="5dp" android:src="@drawable/ic_menu" android:background="@drawable/button_white_rounded" android:elevation="4dp" android:scaleType="fitXY" android:adjustViewBounds="true" android:tint="@color/colorText"/> <ImageView android:id="@+id/warning_circle" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="bottom|end" android:src="@drawable/ic_warning_circle"/> </FrameLayout> A: <RelativeLayout android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignParentTop="true" android:layout_marginTop="20dp"> <ImageView android:layout_centerInParent="true" android:id="@+id/warning_circle" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_margin="20dp" android:src="@drawable/ic_launcher_background"/> <ImageView android:id="@+id/button_menu" android:layout_width="44dp" android:layout_height="44dp" android:padding="5dp" android:layout_marginRight="-20dp" android:layout_marginBottom="-20dp" android:layout_alignRight="@+id/warning_circle" android:layout_alignBottom="@+id/warning_circle" android:src="@drawable/ic_menu_gallery" android:background="@color/colorAccent" android:elevation="4dp" android:scaleType="fitXY" android:adjustViewBounds="true"/> </RelativeLayout> Use this way!! We have a concept of layers in relative layout! Vertically down views are on the top most layer. A: <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="vertical"> <ImageView android:layout_width="match_parent" android:layout_height="match_parent" android:scaleType="fitXY" android:src="@drawable/download" /> <RelativeLayout android:layout_width="match_parent" android:layout_height="match_parent" android:layout_centerInParent="true"> <ImageView android:layout_width="match_parent" android:layout_height="match_parent" android:src="@drawable/download_img" /> </RelativeLayout> </RelativeLayout> A: You could try to search any ready-to-use solutions for badges like ImageBadgeView. Or try to play with BadgeDrawable from material lib. I'd prefer to use ViewOverlay for such purposes and just add few rows of code: button_menu.post { val drawable = ContextCompat.getDrawable(context, R.drawable.ic_warning_circle) drawable.setBounds( button_menu.width - drawable.intrinsicWidth, button_menu.height - drawable.intrinsicHeight, button_menu.width, button_menu.height ) button_menu.overlay.add(drawable) } In this way you would reduce amount of containers in your xml layout and make it more readable.
unknown
d11577
train
It is missing a closing parenthesis:

if(!mysql_select_db($database) )
                               ^---Missing
{
    exit('Error: could not select the database');
}

A: if(!mysql_select_db($database)) - you missed a ). A: You are missing a ) (closing bracket) in this line: if(!mysql_select_db($database)). A: You have missed a closing bracket in your second if statement. A: You missed a closing ) in your second if statement: if(!mysql_select_db($database) <<<<< A: You're missing a parenthesis: if(!mysql_select_db($database)) A: In case no one else told you: you missed a ) on the second if statement. That is a ")", otherwise known as a closing parenthesis.
unknown
d11578
train
filter your newArray. Add an input for the search and bind it to searchValue with [(ngModel)]="searchValue", then in the setter filter newArray. So:

private _searchValue: string;

set searchValue(value: string) {
  this._searchValue = value;
  this.newArray = this.newArray.filter(v => v.includes(value));
}

Note - this is unchecked, there may be typos, and since it filters newArray in place you may want to keep an unfiltered copy to filter from when the search text changes. But this is the idea of a way to do it.
unknown
d11579
train
Each time you scale the image along X and Y, the origin shifts in both dimensions by a specific offset. If you can compensate for that offset in the X dimension then a vertical animation could be achieved. In this case in first keyframe the scale increased by 0.1 which is 100 * 0.1 = 10px now origin got offset by 5px in X dimension, compensating in terms of translateX(-5px). Similarly for all the other keyframes. If you want a faster animation in the Y dimension just increase the Y translate values without touching the X translation values. .man-walk { width: 100px; height: 125px; position: absolute; top: 0; left: 50px; animation-name: man-walk; animation-duration: 0.45s; animation-iteration-count: infinite; } @keyframes man-walk { 0% { transform: rotate(0deg); } 25% { transform: rotate(1.5deg); } 50% { transform: rotate(0deg); } 75% { transform: rotate(-1.5deg); } 100% { transform: rotate(0deg); } } .man-scale { width: 100px; height: 125px; animation-name: man-scale; animation-duration: 2s; animation-timing-function: linear; animation-iteration-count: infinite; } /* define the animation */ @keyframes man-scale { 0% { transform: translate(-5px, 30px) scale(1.1); } 25% { transform: translate(-20px, 70px) scale(1.4); } 50% { transform: translate(-35px, 120px) scale(1.7); } 75% { transform: translate(-50px, 180px) scale(2.0); } 100% { transform: translate(-65px, 250px) scale(2.3); } } <div class="man-scale"> <img class="man-walk" src="http://clipart-library.com/img/1184697.png"> </div> There might be some advanced CSS techniques to calculate the offset automatically.
unknown
d11580
train
As pointed out in the comments, you can create a regular expression that validates you phone number format, in this case /^[0-9]{3}-[0-9]{3}-[0-9]{4}$/ could be a starting point. Here a complete example. <template> <div class=""> <label :class="{ error: $v.number.$error, green: $v.number.$dirty && !$v.number.$error }" > <input class="" v-model="number" @change="$v.number.$touch()" placeholder="Phone number" /> </label> <div class="" v-if="$v.number.$dirty && $v.number.$error"> <p class="">NOT VALID</p> </div> <button class=" " :disabled="$v.$invalid" @click="saveClick()" > <span class="">Save</span> </button> </div> </template> Script part import { required, helpers } from "vuelidate/lib/validators" const number = helpers.regex( "serial", /^[0-9]{3}-[0-9]{3}-[0-9]{4}$/ ) export default { data() { return { number: null } }, computed: { serial: { get() { return this.number }, set(value) { this.number = value } } }, methods: { saveClick() { //TODO } }, validations: { number: { required, number } } } Note This is for Vue 2.x and vuelidate ^0.7.6
unknown
d11581
train
Explanation: The logic behind the following script is the following: * *We get all the URLs of the files that are currently in column D of the sheet. These are the URLs of the files that have been recorded so far. We can safely assume that the URLs are always unique: const aURLs = sheet.getRange('D2:D'+sheet.getLastRow()).getValues().flat(); *The second step is to iterate through the files as the original script would do and check if the URL of a file is in aURLs. If the file URL is not in aURLs (in other words, not already in column D), then add it to the newFiles array: if(!aURLs.includes(id)){ newFiles.push(data); } *After we have checked all the files, we add only the new files after the last row of the sheet: sheet.getRange(sheet.getLastRow()+1,1,newFiles.length,newFiles[0].length).setValues(newFiles); Solution: Manually add the headers in the sheet the first time only, and then execute the script every time after that:

function myFunction() {
  const folder = DriveApp.getFolderById('1_gA4D7dfybJ60IdfgsnqdfdsgVoo9D76fgsdgf9cqmAnJI7g7');
  const contents = folder.getFiles();
  const sheet = SpreadsheetApp.getActiveSheet();
  const aURLs = sheet.getRange('D2:D'+sheet.getLastRow()).getValues().flat();
  const newFiles = [];
  while(contents.hasNext()) {
    let file = contents.next();
    let name = file.getName();
    let date = file.getDateCreated();
    let size = file.getSize();
    let id = file.getUrl();
    let data = [name, date, size, id];
    if(!aURLs.includes(id)){
      newFiles.push(data);
    }
  }
  sheet.getRange(sheet.getLastRow()+1,1,newFiles.length,newFiles[0].length).setValues(newFiles);
}
unknown
d11582
train
Just figured out what my issue was. The version of my mongo is 3.0.12, and partialFilterExpression was introduced in version 3.2.
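For reference, once the server is on 3.2 or later the partial index can be created from any driver; here is a minimal PyMongo sketch (database, collection and field names are made up):

from pymongo import MongoClient, ASCENDING

client = MongoClient()            # assumes a local mongod running 3.2 or newer
coll = client.mydb.users          # hypothetical database/collection

# index only the documents that actually contain an "email" field
coll.create_index(
    [("email", ASCENDING)],
    unique=True,
    partialFilterExpression={"email": {"$exists": True}},
)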
unknown
d11583
train
If you're referring to the VS Local Database Cache project item, yes, it doesn't support SQL Express as a client. If you're willing to hand-code it, the docs have a tutorial, see: Tutorial: Synchronizing SQL Server and SQL Express. Other samples you can take a look at: Database Sync: SQL Server and SQL Express 2-Tier, and Database Sync: SQL Server and SQL Express N-Tier with WCF.
unknown
d11584
train
Firefox doesn't implement box-sizing without a -moz- prefix. See bugzilla. Also, your question is missing the most important CSS rules, i.e. the width of each div. The page you link to shows rules for .row-fluid > .span3 and another for .span9. A: You can change your HTML like this (just small changes in the div start tags), and I hope you get output like the image below:

<div class="row-fluid">
  <div id="content" class="span9"> content </div>
  <div id="ads" class="span3"> ads </div>
</div>

OK, if you want it to look like image 2:

#content {
  margin-top: 2em;
  padding: 1em;
  box-sizing: border-box;
  border-radius: 20px;
  border: 2px solid #EEF;
  float: left;
}
#ads {
  margin-top: 2em;
  padding: 1em;
  box-sizing: border-box;
  border-radius: 15px;
  border: 2px solid #EEF;
  float: left;
}

A: I hope this fiddle can help you: http://jsfiddle.net/9Zf8U/1/ - I just added the Firefox CSS hack there and nothing else. A: Add the CSS hacks for the -moz- browser only.
unknown
d11585
train
I'm pretty sure it is because of the continue and the absence of a break: the loop gets stuck repeating the same student forever. You could do it like this:

absent = [2, 5]
student = 1
while student < 11:
    if student not in absent:
        print(f"student {student} is attended!")
    student += 1

Here is some info about continue and break in a loop: https://www.programiz.com/python-programming/break-continue

A: When student is 1 it does:

print(f"student {student} is attended!")
student += 1

but when student is 2 the if-statement is True, so continue runs and skips:

print(f"student {student} is attended!")
student += 1

and 3, 4, 5 ... 10 are never reached.

A: Your code runs forever because on the first iteration the if condition fails, so it prints the output and increments student. On the second iteration the if condition is true, so it continues (which skips all the statements below the continue statement and jumps back to the top of the loop), meaning student stays 2 forever. Try this:

absent = [2, 5]
student = 1
while student < 11:
    if student in absent:
        student += 1
    else:
        print(f"student {student} is attended!")
        student += 1
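For completeness, a for loop over a range sidesteps the manual counter (and therefore the skipped increment) entirely - a small sketch mirroring the fixes above:

absent = [2, 5]

for student in range(1, 11):
    if student not in absent:
        print(f"student {student} is attended!")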
unknown
d11586
train
If you want to show a menu icon then try this. you should add showAsAction attribute to always or ifRoom <menu xmlns:android="http://schemas.android.com/apk/res/android" > <item android:id="@+id/action_refresh" android:icon="@drawable/ic_refresh" android:showAsAction="ifRoom" android:title="@string/action_refresh"/> <item android:id="@+id/action_filter" android:icon="@drawable/ic_filter" android:showAsAction="always" android:title="@string/action_filter"/> </menu>
unknown
d11587
train
If DateTime is a timestamp then you can use the MySQL date functions, specifically GROUP BY WEEKDAY(timestamp_field). Added into your query it looks like this (note the raw expression in groupBy, so the query builder doesn't quote it as a column name):

$query = '*, AVG(time) as average_time';
$Performance = Performance::join('athletes', 'performance.athlete_id', '=', 'athletes.id')
    ->select(\DB::raw($query))
    ->where('performance.sport_id', '=', '43')
    ->orderBy('group_id', 'asc')
    ->groupBy(\DB::raw('WEEKDAY(DateTime)'))
    ->groupBy('group_id')
    ->get()
    ->toArray();
unknown
d11588
train
I have used Newtonsoft JSON (NuGet package) for this purpose. Example:

using Newtonsoft.Json;

public string DataTableToJSONWithJSONNet(DataTable table)
{
    string JSONString = string.Empty;
    JSONString = JsonConvert.SerializeObject(table);
    return JSONString;
}

You can find this Newtonsoft example and a few other methods here. A: Using a query like you are using is pretty much going to make you use this style of assignment. Switching to Entity Framework to query your DB is going to be your best bet, since it will do the assignment to objects/classes automatically. But I get that doing so after a project is started can be a PITA or nearly impossible (or a very significantly large amount of work). There's also a bit of a learning curve if you've never used it before. What you can do to make things easier is to create a constructor for your model that takes in a DataRow and assigns the data in a single place.

public BookViewModel(DataRow dr)
{
    Name = dr["Name"].ToString();
    Stock = Convert.ToInt32(dr["Stock"]);
}

Then you just call "book.Add(new BookViewModel(dr));" in your foreach loop. This works well if you have to do this in multiple places in your code, so you don't have to repeat the assignments when you import rows. You might also be able to use Reflection to automatically assign the values for you. This also has a bit of a learning curve, but it can make conversions much simpler once you have it set up. Something similar to Reflection is AutoMapper, but that's not as popular as it used to be. I was going to suggest using a JSON package like Newtonsoft or the built-in package for C#, but it looks like I got beat to that punchline. Another option is Dapper. It's sort of a half-step between your current system and Entity. It can use SQL or its own query language to cast the results directly to a model. This might be the easiest and most straightforward way to refactor your code. Dapper and Entity are examples of object-relational mappers (ORMs). There are others around you can check out. I've only listed methods I've actually used, and there are many other ways to get the same thing done, even without an ORM. They all have their pros and cons, so do your research to figure out what you're willing to commit to. A: Simply replace your "return Json(book)" with return Ok(book).
unknown
d11589
train
The result of the async pipe is always T | null, and stepControl doesn't accept null. You can add *ngIf to make sure it's not null before using it, like the following:

<mat-step *ngIf="productDetailsFormGroup$ | async as stepControl"
          [editable]="true"
          [stepControl]="stepControl">
  <ng-template matStepLabel>Product details</ng-template>
  <ng-template matStepContent>
    <fetebird-ui-product-details></fetebird-ui-product-details>
  </ng-template>
</mat-step>
unknown
d11590
train
Junctions are not meant to be interpolated into regexes. They're meant to be used in normal Perl 6 expressions, particularly with comparison operators (such as eq): my @a = <x y z>; say "y" eq any(@a); # any(False, True, False) say so "y" eq any(@a); # True To match any of the values of an array in a regex, simply write the name of the array variable (starting with @) in the regex. By default, this is interpreted as an | alternation ("longest match"), but you can also specify it to be a || alternation ("first match"): my @a = <foo bar barkeep>; say "barkeeper" ~~ / @a /; # 「barkeep」 say "barkeeper" ~~ / || @a /; # 「bar」
unknown
d11591
train
Your definition of the get_context_data method does not update all the variables you expect to be using within your template. For instance, the context class variable is not the same thing as the context variable you are returning inside the get_context_data method. Therefore, the only variable to which you have access in the template is num_authors. To make sure you have all the needed variables within your template, you need to edit get_context_data to update the context with the dictionary defined at the class level: class PostListView(generic.ListView): model = Post post_list = Post.objects.all() num_posts = Post.objects.all().count() num_authors = Author.objects.count() template_name = 'blog/post_list.html' context_vars = { 'num_posts': num_posts, 'num_authors': num_authors, 'post_list' : post_list, } def get_context_data(self, **kwargs): context = super(PostListView, self).get_context_data(**kwargs) context.update(PostListView.context_vars) return context Two main updates are made to your original code snippet: the class variable context is changed to context_vars to avoid conflicts and confusion; and the context variable within the get_context_data method is updated with the contents of context_vars. This will make sure that everything defined at the class level (i.e. PostListView.context_vars) makes it to your template.
unknown
d11592
train
You could try from PIL import * and just import everything from the library instead, or if that doesn't work, try import PIL. A: The Raspberry Pi uses ARM; the PILLOW build that is installed is not compatible with it, hence the illegal instruction. Try installing PILLOW with the sudo apt-get command. A: Thanks for your replies. Eventually I installed PIL using: sudo apt-get install python-imaging - this installed PIL built for the right architecture and it is working.
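If it is unclear which build actually gets imported, a quick diagnostic sketch can help (it assumes the import itself succeeds; older python-imaging builds may not expose a version attribute):

import platform
print(platform.machine())        # expect something like 'armv6l' or 'armv7l' on a Raspberry Pi

import PIL
from PIL import Image
print(Image.__file__)            # shows whether the apt copy or the pip copy is being imported
print(getattr(PIL, '__version__', 'unknown'))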
unknown
d11593
train
Simply pass a date object to the create method:

from datetime import date
obj = SomeModel.objects.create(date=date(2015, 5, 18))

or, parsing it from a string:

from datetime import datetime
obj = SomeModel.objects.create(date=datetime.strptime('2015-05-18', '%Y-%m-%d').date())

A: There are two ways Django transforms the data in a model. The first: when saving the model, the data is transformed to the correct data type and sent to the backend. The data on the model itself is not changed. The second: when the data is loaded from the database, it is converted to the datatype that Django thinks is most appropriate. This is by design: Django does not want to do any magic on the models themselves. Models are just Python class instances with attributes, so you are free to do whatever you want. This includes using the string u'2015-05-18' as a date or the string 'False' to store as True (yeah, that's right). The database cannot store dates as arbitrary data types, so in the database it is just the equivalent of a Python date object. The information that it used to be a string is lost, and when loading the data directly from the database with get(), the data is consistently converted to the most appropriate Python data type, a date.
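A small sketch of that last point - the instance keeps whatever you assigned until the row is reloaded, at which point Django hands back a date (assumes SomeModel has a DateField named date; refresh_from_db needs Django 1.8+):

from myapp.models import SomeModel   # hypothetical app path for the model in the example above

obj = SomeModel.objects.create(date='2015-05-18')
print(type(obj.date))                 # <class 'str'> - the attribute keeps the raw value you assigned

obj.refresh_from_db()                 # or: obj = SomeModel.objects.get(pk=obj.pk)
print(type(obj.date))                 # <class 'datetime.date'> - converted when loaded from the database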
unknown
d11594
train
The buildid for CDash is computed based on the site name, the build name and the build stamp of the submission. You should have a Build.xml file in a Testing/20110311-* directory in your build tree. Open that up and see if any of those fields (near the top) is empty. If so, you need to set BUILDNAME and SITE with -D args when configuring with CMake. Or, set CTEST_BUILD_NAME and CTEST_SITE in your ctest -S script. If that's not it, then this is a mystery. I've not seen this error occur before...
unknown
d11595
train
If you have the x permission on the script and cannot execute it, it may be because you mounted the current partition with the option noexec. See explanation in manpage of mount You can verify this by running the mount command without any arguments. A: $ cat > testscript.sh #!/bin/bash echo Hello World. ^D $ chmod +x testscript.sh $ ./testscript.sh #=> Hello world. Works fine.
unknown
d11596
train
The way to do it was using "The Login Flow for Web (without JavaScript SDK)" api to get a user access token. A user access token is required to be sent with graph api queries in order to get page posts. The first step is to create an app on facebook where you specify what information you want the program to be able to access via the graph api. The end user will then choose to accept these permissions later. The program creates a web browser frame and navigates to https://www.facebook.com/dialog/oauth?client_id={app-id}&redirect_uri=https://www.facebook.com/connect/login_success.html&response_type=token The response type "token" means that when the (embedded) web browser is redirected to the redirect_uri the user access token will be added to the end of the url as a fragment. E.g the browser would end up on the page with url https://www.facebook.com/connect/login_success.html#access_token=ACCESS_TOKEN... The redirect uri can be anything but facebook has that specific one set aside for this scenario where you are not hosting another server which you want to receive and process the response. Basically facebook gathers all the information required from the user and then sends them to the redirect_uri. Some information they may require is for them to login and accept permissions your app on facebook requires. So the program simply keeps an eye on what url the embedded browser is on and when it matches the redirect_uri it parses the url which will contain the data as fragments and can then close the browser.
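The parsing step at the end is plain string handling; here is a small Python sketch of it (the token value shown is made up, and the embedded-browser part itself is framework-specific and omitted):

from urllib.parse import urlsplit, parse_qs

SUCCESS_URL = "https://www.facebook.com/connect/login_success.html"

def extract_access_token(current_url):
    # Only parse once the embedded browser has landed on the redirect_uri
    if not current_url.startswith(SUCCESS_URL):
        return None
    fragment = urlsplit(current_url).fragment   # the token arrives in the #fragment, not the query string
    return parse_qs(fragment).get("access_token", [None])[0]

url = SUCCESS_URL + "#access_token=EXAMPLE_TOKEN&expires_in=5183999"
print(extract_access_token(url))                # EXAMPLE_TOKEN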
unknown
d11597
train
Get JWT from request header then decode jwt.verify(token, getKey, options, function(err, decoded) { console.log(decoded.email) }); jwt.verify - jwt doc A: Create new middleware ( above other routes) // route middleware to verify a token router.use(function(req, res, next) { // check header or url parameters or post parameters for token var token = req.body.token || req.query.token || req.headers['x-access-token']; // decode token if (token) { // verifies secret and checks exp jwt.verify(token, app.get('superSecret'), function(err, decoded) { if (err) { return res.json({ success: false, message: 'Failed to authenticate token.' }); } else { // if everything is good, save to request for use in other routes req.decoded = decoded; next(); } }); } else { // if there is no token // return an error return res.status(403).send({ success: false, message: 'No token provided.' }); } }); Help : jwt - decode and save in req A: In the code, After return your redirect never work. so There're 2 options: * *You don't need to return a token to client, just use res.redirect('/profile') after your verification. (in this way, your server and client are in one) *You just return the token to client (Don't use res.redirect('/profile') anymore) then client will use that token to redirect to the profile. (in this way, your server and client are separate each other).
unknown
d11598
train
I don't think using the device for development purposes is a problem. A: Looks like a fault in the device - I'd send it in for repair. I've certainly not heard of debugging causing issues with devices. A: Do check whether your internal storage is about to get full. Also, if you have a minimal RAM configuration, try not to use multiple apps while debugging. This should probably help. Nonetheless, you can always visit a technician and get your phone thoroughly checked for issues.
unknown
d11599
train
What comes to mind in this case is that you are receiving the response after the render method has already executed. You may ask: but how does it work when I log just this.state.data.product? Well, when you initialize your state you define that it has a data object, so in that case the code won't break because this.state.data.product is merely undefined; but when you try to go one layer deeper you are accessing a property of an undefined value, which leads to the error you are getting (cannot read property x of undefined). So, what can I do to fix that? Store a boolean variable in your state, let's call it fetchingData, and give it a default value of true, so now you have:

state = { data: {}, fetchingData: true };

Then, in your render method, check whether it is still fetching the data or you already have the response from the server. Modify it to be like this:

render() {
  if (this.state.fetchingData) return <View><Text>Fetching data...</Text></View>;
  console.log(this.state.data.product.brands);
  return (
    Whatever you want to do with the data...
  )
}

What it does: while you are still waiting for the data, a "Fetching data..." message is shown. When you receive the data you also have to change fetchingData to false:

if (response.status == 200) {
  let data = await response.json();
  this.setState({ data: data, fetchingData: false });
}
unknown
d11600
train
One possible solution is presented here. Create a link to $HOME/.local/lib/python2.7/site-packages/numpy/core/include/numpy in the tensorflow/third_party dir and add -Ithird_party to tensorflow/python/BUILD and tensorflow/tensorflow.bzl. A: Here is a very nasty workaround if you get the "undeclared inclusion(s) in rule" errors: 1.) Pick a path in your gcc's built-in include directories (i.e. one of the cxx_builtin_include_directory paths in bazel-workspace/tools/cpp/CROSSTOOL - it should be equal to those given by g++ -v bla.cc). 2.) Let's say you picked directory dir. Create a directory called "bla" within dir. 3.) In dir/bla, create a symlink to your numpy include directory (ln -s .../core/include/numpy .). 4.) Add "dir/bla" to tensorflow/python/BUILD and tensorflow/tensorflow.bzl as described in the link. 5.) Feel guilty but happy that it compiles.
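Both workarounds need the location of NumPy's headers; rather than hard-coding the site-packages path, NumPy can report it itself - a tiny helper sketch (the symlink line is commented out and the destination would need adjusting to whichever of the two approaches you follow):

import os
import numpy as np

include_dir = np.get_include()                    # e.g. .../site-packages/numpy/core/include
numpy_headers = os.path.join(include_dir, "numpy")
print(numpy_headers)                              # the directory to symlink, as described above

# os.symlink(numpy_headers, "third_party/numpy")  # uncomment and adjust the destination path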
unknown