d12601
train
Try this: <script> if (parseInt($('.ezfc-price-value').text()) > 24) { $('.ly-radio-livraison').prop("disabled", false); } </script> You need to compare the text inside that element, not the element itself. Hope this helps. A: demo: http://jsbin.com/toguruzure/1/edit?html,js,output $(function(){ $(".price-value").change(function(){ $('.radio-livraison').prop("disabled", !(parseInt($('.price-value').text()) > 28)); }); $("#total").on('input', function(){ $(".price-value").text($(this).val()).change(); }); })
unknown
d12602
train
Since you have only one Canvas, and each Visual can have only one parent in the visual tree, each time you place the resource it is put in that place in the visual tree and removed from its previous place. You can either put the Canvas directly into the Viewbox in your UserControl and make the Path a resource, or you can try setting the x:Shared attribute to false on your Canvas, which results in a new instance of the resource being created each time you refer to it: <Application.Resources> <Canvas x:Key="appbar_close" x:Shared="False" ...> <!-- --> </Canvas> </Application.Resources> A: Set x:Shared="false" on the resource. It will work. <Canvas x:Key="appbar_close" x:Name="appbar_close" x:Shared="false" Width="76" Height="76" Clip="F1 M 0,0L 76,0L 76,76L 0,76L 0,0" >
unknown
d12603
train
The best practice is to use a normalized database schema. Then the DBMS keeps it up to date, so you don't have to. But I understand the tradeoff that makes a denormalized design attractive. In that case, the best practice is to update the total on every change. Investigate triggers. The advantage of this practice is that you can keep the total in sync with the changes, so you never have to think about whether it's out of date or not. If one change is committed, then the updated total is committed too. However, this has some weaknesses with respect to concurrent changes. If you need to accommodate concurrent changes to the same totals, and you can tolerate the totals being "eventually consistent," then use periodic recalculation of the total, so you can be sure only one process at a time is changing the total. Another good practice is to cache aggregate totals outside the database, e.g. in memcached or in application variables, so you don't have to hit the database every time you need to display the value. The query "select sum(points) as total from points where id = ?" should not take 2 seconds, even if you have a huge number of rows and a lot of requests. If you have a covering index defined over (id, points) then the query can produce the result without reading data from the table at all; it can calculate the total by reading values from the index itself. Use EXPLAIN to analyze your query and look for the "Using index" note in the Extra column. 
CREATE TABLE Points ( id INT, points INT, reason VARCHAR(10), KEY id (id,points) ); EXPLAIN SELECT SUM(points) AS total FROM Points WHERE id = 1; +----+-------------+--------+------+---------------+------+---------+-------+------+--------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+--------+------+---------------+------+---------+-------+------+--------------------------+ | 1 | SIMPLE | points | ref | id | id | 5 | const | 9 | Using where; Using index | +----+-------------+--------+------+---------------+------+---------+-------+------+--------------------------+ A: By all means keep the underlying table normalized. If you can deal with data potentially being one day old, run a script each night (you can schedule it) to do the roll-up and populate the new table. Best to just re-create the thing each night from the source table to prevent any inconsistencies between the two. That said, with the size of your record, you must either have a very slow server or a very large number of records, because a record that small, with an indexed field on id, should sum very quickly for you - however, I am of the mindset that if you can improve user response time by even a few seconds, there is no reason not to use rollup tables - even if DB purists object. A: Have the extra totalpoints column on the same table, and create/update the value of totalpoints for every row creation/update. If you need totalpoints for a certain record, you can look up the value without computing it. For example, if you need the last value of totalpoints, you can get it like this: SELECT totalpoints FROM point ORDER BY id DESC LIMIT 1; A: There is another approach: caching. Even if it's cached for only a few seconds or minutes, that is a win on a frequently accessed value. And it's possible to dissociate the cache-fetch from the cache-update. That way, a reasonably current value is always returned in constant time. 
The tricky bit is having the fetch spawn a new process to do the update. A: I'd suggest creating a layer that you use to access and modify the data. You can use these DB access functions to encapsulate the data maintenance in all tables and keep the redundant data in sync. A: You could go either way in this case, because it's not very complicated. As a general rule, I prefer to allow the data to be temporarily inconsistent, by having just enough redundancy, and to have a periodic process resolve the inconsistencies. However, there is no harm in having a trigger mechanism to encourage early execution of the periodic process. I feel this way because relying on event-based notification-style code to keep things consistent can, in more complex cases, greatly complicate the code and make verification difficult. A: You could also create another reporting schema and have it reload at fixed intervals via some process that does the calculations. This is not applicable to realtime information, but it is a very standard way of doing things. A: Keeping Denormalized Values Correct
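The two strategies discussed above (update-on-write versus periodic recalculation) can be sketched in a language-agnostic way. The following Python toy is purely illustrative - an in-memory list stands in for the Points table, a dict stands in for the denormalized total, and all names are made up:

```python
# Illustrative sketch: rows of (user_id, points) stand in for the table,
# cached_totals stands in for the denormalized/cached aggregate.
from collections import defaultdict

rows = []                          # the "Points table"
cached_totals = defaultdict(int)   # the "denormalized total"

def add_points(user_id, points):
    rows.append((user_id, points))
    cached_totals[user_id] += points   # trigger-style: update total on every change

def recalculate(user_id):
    # Periodic full recalculation: the "eventually consistent" fallback
    # that repairs any drift between the rows and the cached total.
    return sum(p for uid, p in rows if uid == user_id)

add_points(1, 10)
add_points(1, 5)
add_points(2, 7)
assert cached_totals[1] == recalculate(1) == 15
```

In a real system the trigger-style update happens in the same transaction as the insert, which is exactly the property the answer above highlights.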
unknown
d12604
train
You just want to see if the passwords match, and are between a min and max length? Isn't the above overkill? Am I missing something? You could use js alone to check the length of the first password field, then in the onblur event of the second field, check to see if field1==field2. Minor thing I noticed, the label for the second field has the wrong "for" attribute.
unknown
d12605
train
In your table tx_ext_domain_model_heroslider_item you are missing a field for the reverse table name; at least you have not declared it in your relation: foreign_table_field = parent_table You know that your parent records are always tt_content, but TYPO3 needs some help. ANFSCD: why do you have 'allowed' => 'tx_ext_domain_model_heroslider_item'? I cannot find any documentation about an option 'allowed'.
unknown
d12606
train
The following produces the CSV shown below. It would be easy to tweak the program to remove the double-quotation marks, etc. .Person[] | .Roles.Role | if type == "array" then .[] else . end | [.["@Id"], .Name] | @csv Output "1","Job1" "2","Job2" "3","Job3" Adding the index in .Person .Person | range(0; length) as $ix | .[$ix] | .Roles.Role | if type == "array" then .[] else . end | [$ix, .["@Id"], .Name] | @csv
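For comparison, the same single-object-or-array flattening can be done outside jq; here is a sketch in Python using the standard json and csv modules on a made-up document of the same shape as the one the jq filter expects:

```python
import json, csv, io

# A made-up document shaped like the one the jq program handles:
# Roles.Role is sometimes a single object, sometimes an array.
doc = json.loads("""
{"Person": [
  {"Roles": {"Role": {"@Id": "1", "Name": "Job1"}}},
  {"Roles": {"Role": [{"@Id": "2", "Name": "Job2"},
                      {"@Id": "3", "Name": "Job3"}]}}
]}
""")

out = io.StringIO()
writer = csv.writer(out)
for ix, person in enumerate(doc["Person"]):
    role = person["Roles"]["Role"]
    # The analogue of jq's `if type == "array" then .[] else . end`
    roles = role if isinstance(role, list) else [role]
    for r in roles:
        writer.writerow([ix, r["@Id"], r["Name"]])

print(out.getvalue())
```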
unknown
d12607
train
System.out.println(((JButton) e.getSource()).getName() + " Click"); A: You can cast to a JComponent if you know that only JComponents will be the return value of e.getSource() I'm using JComponent as the cast since it gives more flexibility. If you're only using JButtons, you can safely cast to a JButton instead. @Override public void actionPerformed(ActionEvent e) { if (e.getSource() == thirdBtn) { //System.out.println("Third Button Click"); System.out.println(((JComponent) e.getSource()).getName()+" Click"); } } Feel free to replace getName() with getText(), depending on what exactly you need. Also, the == operator should only be used to compare Object references, so consider casting to a JComponent from the beginning and using .equals() on the names or text. Edit You can't output the name of the variable, but you can set the name/text of the JComponent. Eg JButton btnExample = new JButton(); btnExample.setName("btnExample"); Or if you want "btnExample" to actually be displayed on the button: JButton btnExample = new JButton(); btnExample.setText("btnExample");
unknown
d12608
train
While I doubt the original poster is still around, the answer may be interesting to others encountering the same situation. The problem OP encounters here is that he does not have the correct rights to modify/delete the next.trk file in the default ado folder. Usually this happens when you do not have admin rights on a (heavily) locked-down server. One solution is to copy the files directly: however, this can be rather tricky if the server is really locked down, or you have programs with unspecified dependencies. The alternative is to change your ado folder. OP took the correct first step by altering the location of net install through net set ado "somefoldername". However, they missed the second step, which tells Stata "somefoldername" is part of the adopath. This is done through adopath ++ "somefoldername". If I'm not mistaken, this only stays active as long as Stata is open. To make this "permanent", that line of code should be added to the profile.do dofile, which runs automatically at Stata startup. See the Stata FAQ for more information on the profile.do file.
unknown
d12609
train
You will need to pass the length as an additional parameter. Using an assumed-shape array will not work, here's why: In the ABI employed by most Fortran compilers, arrays as parameters ("dummy arguments") can take one of two representations, depending on the interface used in the subroutine/function: * *Those passed with known sizes or assumed sizes, like arr(n) or arr(*), usually receive just a pointer to the first element, with elements assumed to be contiguous. *Those passed with assumed shapes, like arr(:), receive an array descriptor structure. This is completely implementation dependent, but usually such a structure contains the pointer to the first element of the data plus information on the bounds of each dimension, the stride, etc. That is the reason why you can directly pass a single row, or only every other element, of an array if the function receives it as an assumed-shape array: the descriptor structure encodes the information that the data is not necessarily contiguous, and so the Fortran compiler does not need to copy arr(5::2) to a temporary location in memory. The reason why you cannot use such facilities to communicate with Java is that the descriptor structure is completely non-standard, a part of the particular ABI of each compiler. Thus, even if you somehow managed to understand how to build it (and it would be non-trivial), the next version of your compiler could bring a total change.
unknown
d12610
train
I was able to integrate OneSignal in my iOS app. The issue was in config.xml: I commented out the push-related tags in config.xml.
unknown
d12611
train
I would try the following approach using the following CSS: #navbar > ul > li { float: left; margin-left: 21px; font-family: 'Open Sans', sans-serif; font-size: 14px; text-transform: uppercase; color: #fff; border-top: 2px solid transparent; padding-top: 8px; position: relative; line-height: 1.5; height: 24px; } #navbar > ul > li > ul { list-style: none; position: absolute; /* change this */ margin-left: 0px; padding-left: 0px; margin-top: -5px; /* this can control the whitespace... */ } Add position: relative to the li elements in your primary navigation, and add some line height and height values as needed. For the secondary navigation, change position to absolute on the ul and tweak the top margin to close any whitespace. See demo at: http://jsfiddle.net/audetwebdesign/adw5hp84/ By using absolute positioning of the secondary menus, you take them out of the flow of the main menu bar and you don't have to worry that the length of the labels will affect the primary navigation layout. Note that there is probably more detailed work to be done on styling the spacing between links and so on. 
A: Get rid of the width in #navbar > ul > li.navbar_multiple:hover > ul > li: #navbar > ul > li.navbar_multiple:hover > ul > li { display:block; height:20px; padding-left: 10px; margin-left: 0px; } Updated your fiddle: http://jsfiddle.net/ktvde9qo/11/ EDIT: To make all submenu items have the same width, simply remove (or comment out) the following lines, alternatively modify them for your needs: #navbar > ul > li > ul > li:first-child { margin-top: 8px; padding-top: 9px; width: 100%; } Fiddle updated: http://jsfiddle.net/ktvde9qo/13/ EDIT 2: To make the submenu items longer than the main menu item, just add a longer width to the submenu item and a shorter one to the main: The submenu item: #navbar > ul > li > ul > li { padding-left: 0px; display: none; text-transform: none; font-size: 12px; padding: 4px 4px 8px 6px; padding-top: 10px; border-top: 1px solid #39718e; background-color: #316885; width:200px; /* changed this */ } The main menu item: #navbar > ul > li.navbar_multiple { margin-left: 13px; width:100px; /* changed this */ } Final fiddle update: http://jsfiddle.net/ktvde9qo/18/ A: Can you try it? 
If longer content is added to the drop-down list, e.g. "Entertainment" becoming "Entertainment Daily Programs", this will still work: #navbar > ul > li > ul{ width:150px; } A: I think you are looking for something like this: css #navbar { height: 35px; background-color: #4c9fcd; position: relative; top: 0px; z-index: 1001; } #navbar > ul { list-style: none; text-align: left; padding-left: 25px; margin-top: 0px; } #navbar > ul > li { float: left; margin-left: 21px; font-family: 'Open Sans', sans-serif; font-size: 14px; text-transform: uppercase; color: #fff; border-top: 2px solid transparent; padding-top: 8px; } #navbar > ul > li:first-child { margin-left: 0px; } #navbar > ul > li.navbar_multiple { margin-left: 13px; //padding-left: 5px; } #navbar > ul > li:hover { border-top: 2px solid white; cursor: pointer; //background-color: #316885; } #navbar > ul > li > ul { list-style: none; position: relative; margin-left: 0px; padding-left: 0px; } #navbar > ul > li > ul > li { padding-left: 0px; display: none; text-transform: none; font-size: 12px; padding: 4px 4px 8px 6px; padding-top: 10px; border-top: 1px solid #39718e; background-color: #316885; } #navbar > ul > li > ul > li:first-child { margin-top: 8px; padding-top: 9px } #navbar > ul > li > ul > li:last-child { border-bottom: 3px solid #4c9fcd; } #navbar > ul > li.navbar_multiple:hover > ul > li { display: block; height:20px; padding-left: 10px; margin-left: 0px; } #navbar > ul > li.navbar_multiple > ul > li:hover { background-color: #0d3f5a; } #navbar > ul > li.navbar_multiple:hover { background-color: #316885; padding-right: 0px !important; } fiddle
unknown
d12612
train
Mapserver is very easy to set up and learn. Implementing any kind of rendering by yourself is going to require much more effort, and you will probably find lots of unexpected traps. The mapserver CGI should be enough for your needs. If you require some very specific tweak, then mapscript can be useful. I think it could be interesting if you could make a pure JavaScript application, and save yourself from installing a web server (and a map server). If you just needed to browse a tile mosaic, maybe you could do it just with JavaScript (generate an html table with a cell for each tile). You can render points or polygons with JavaScript, using a canvas and doing some basic coordinate conversion to translate geographic points to pixels. OpenLayers has this functionality, I think. EDIT: I just checked, and with OpenLayers you can browse local tiles, and you can render KML and some other vector data. So, I think you should give OpenLayers a try. A: No need to have a wms/wfs. What you need is a tile implementation. Basically you should have some sort of central service, or desktop service, that generates the tiles. Once these tiles are generated, you can simply transfer them to your "no-real-webserver-architecture" filesystem. You can create a directory structure that conforms to /{x}/{y}/{z}.png and call it from javascript. An example of how openstreetmap does this can be found here: http://wiki.openstreetmap.org/wiki/OpenLayers_Simple_Example A: You may like featureserver: http://featureserver.org/. It has its own WFS. I am using it right now.
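For the tile directory structure mentioned above, the standard slippy-map numbering used by OpenStreetMap converts a longitude/latitude/zoom triple into tile coordinates (note that OSM itself serves tiles as /{z}/{x}/{y}.png); a small sketch of the conversion:

```python
import math

def tile_path(lon_deg, lat_deg, zoom):
    """Return the OSM-style z/x/y tile path for a WGS84 coordinate."""
    n = 2 ** zoom
    xtile = int((lon_deg + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat_deg)
    # Web Mercator row index: 0 at the top (north), n-1 at the bottom.
    ytile = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return f"{zoom}/{xtile}/{ytile}.png"

print(tile_path(0.0, 0.0, 1))   # tile containing lon/lat (0, 0) at zoom 1
```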
unknown
d12613
train
Implement your own wrapper, similar to scoped_lock, to hide the decision inside it: wrap a pointer to a mutex and check whether the pointer is null (no locking applied) or not null (locking applied). Some skeleton: class ScopedLockEx { public: ScopedLockEx( boost::mutex* pMutex) : pMutex_( pMutex) { if( pMutex_) pMutex_->lock(); } ~ScopedLockEx() { if( pMutex_) pMutex_->unlock(); } private: boost::mutex* pMutex_; };
unknown
d12614
train
Assuming your tables are named table_1 and table_2 SELECT table_2.t_no, table_2.t_name, table_1.Name FROM table_1 JOIN table_2 ON table_1.no = table_2.t_no Or another method: SELECT table_2.t_no, table_2.t_name, table_1.Name FROM table_1, table_2 WHERE table_1.no = table_2.t_no A: SELECT grpt1.t_no, grpt1.t_name, table_2.Name FROM table_2 JOIN ( SELECT t_no , MAX(t_name) FROM table_1 GROUP BY t_no ) AS grpt1 ON table_2.no = grpt1.t_no
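The first join can be checked quickly against an in-memory database; a sketch using Python's built-in sqlite3 module with made-up rows (the column names match the answer above, the data is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table_1 (no INTEGER, Name TEXT);
CREATE TABLE table_2 (t_no INTEGER, t_name TEXT);
INSERT INTO table_1 VALUES (1, 'Alice'), (2, 'Bob');
INSERT INTO table_2 VALUES (1, 'One'), (2, 'Two');
""")

# The JOIN from the answer above, with an ORDER BY for a stable result.
rows = con.execute("""
    SELECT table_2.t_no, table_2.t_name, table_1.Name
    FROM table_1 JOIN table_2 ON table_1.no = table_2.t_no
    ORDER BY table_2.t_no
""").fetchall()
print(rows)
```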
unknown
d12615
train
Whenever you find yourself writing parser code for simple formats like the one in your example you're almost always doing something wrong and not using a suitable framework. For instance - there's a set of simple helpers for parsing XML in the android.sax package included in the SDK and it just happens that the example you posted could be easily parsed like this: public class WikiParser { public static class Cm { public String mPageId; public String mNs; public String mTitle; } private static class CmListener implements StartElementListener { final List<Cm> mCms; CmListener(List<Cm> cms) { mCms = cms; } @Override public void start(Attributes attributes) { Cm cm = new Cm(); cm.mPageId = attributes.getValue("", "pageid"); cm.mNs = attributes.getValue("", "ns"); cm.mTitle = attributes.getValue("", "title"); mCms.add(cm); } } public void parseInto(URL url, List<Cm> cms) throws IOException, SAXException { HttpURLConnection con = (HttpURLConnection) url.openConnection(); try { parseInto(new BufferedInputStream(con.getInputStream()), cms); } finally { con.disconnect(); } } public void parseInto(InputStream docStream, List<Cm> cms) throws IOException, SAXException { RootElement api = new RootElement("api"); Element query = api.requireChild("query"); Element categoryMembers = query.requireChild("categorymembers"); Element cm = categoryMembers.requireChild("cm"); cm.setStartElementListener(new CmListener(cms)); Xml.parse(docStream, Encoding.UTF_8, api.getContentHandler()); } } Basically, called like this: WikiParser p = new WikiParser(); ArrayList<WikiParser.Cm> res = new ArrayList<WikiParser.Cm>(); try { p.parseInto(new URL("http://zelda.wikia.com/api.php?action=query&list=categorymembers&cmtitle=Category:Games&cmlimit=500&format=xml"), res); } catch (MalformedURLException e) { } catch (IOException e) { } catch (SAXException e) {} Edit: This is how you'd create a List<String> instead: public class WikiParser { private static class CmListener implements StartElementListener 
{ final List<String> mTitles; CmListener(List<String> titles) { mTitles = titles; } @Override public void start(Attributes attributes) { String title = attributes.getValue("", "title"); if (!TextUtils.isEmpty(title)) { mTitles.add(title); } } } public void parseInto(URL url, List<String> titles) throws IOException, SAXException { HttpURLConnection con = (HttpURLConnection) url.openConnection(); try { parseInto(new BufferedInputStream(con.getInputStream()), titles); } finally { con.disconnect(); } } public void parseInto(InputStream docStream, List<String> titles) throws IOException, SAXException { RootElement api = new RootElement("api"); Element query = api.requireChild("query"); Element categoryMembers = query.requireChild("categorymembers"); Element cm = categoryMembers.requireChild("cm"); cm.setStartElementListener(new CmListener(titles)); Xml.parse(docStream, Encoding.UTF_8, api.getContentHandler()); } } and then: WikiParser p = new WikiParser(); ArrayList<String> titles = new ArrayList<String>(); try { p.parseInto(new URL("http://zelda.wikia.com/api.php?action=query&list=categorymembers&cmtitle=Category:Games&cmlimit=500&format=xml"), titles); } catch (MalformedURLException e) { } catch (IOException e) { } catch (SAXException e) {}
unknown
d12616
train
An HTTP request gets exactly one HTTP response. Some options for you: 1) Wait for everything to finish before replying. Make sure each part creates a result, success or failure, and send the multiple results at once. You would need some control-flow library such as async or Promises to make sure everything responds at the same time. A good choice if all parts will happen "quickly", not good if your user is waiting "too long" for a response. (Those terms are in quotes because they are application dependent.) 2) Create some scheme where the first response tells how many other responses to wait for. Then you'd have a different HTTP request asking for the first additional message, and when that returns to your client, ask for the second additional message, and so on. This is a lot of coordination, though, as you'd have to cache responses, or even try again if they were not done yet. Using a memory cache like redis (or similar) could fulfill the need to hold responses until ready, with a non-existent key meaning 'not ready'. 3) Use an eventing protocol, such as WebSockets, that can push messages from the server. This is a good choice, especially if you don't know how long after the trigger some events will occur. (You would not want to stall an HTTP request for tens of seconds waiting for 3 parts to complete - the user will get bored, or quit, or re-submit.) Definitely check out the Primus library for this option. It can even serve the client-side script, which makes integration quick and easy.
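Option 1 (wait for everything, answer once) can be sketched generically; the following Python/asyncio toy is illustrative only - each part() stands in for one backend task, and the combined dict stands in for the single response body:

```python
import asyncio

async def part(name, delay):
    # Stands in for one backend task (DB query, downstream API call, ...).
    await asyncio.sleep(delay)
    return {name: "done"}

async def handle_request():
    # Wait for every part to finish, then build ONE combined response.
    results = await asyncio.gather(
        part("a", 0.01), part("b", 0.02), part("c", 0.01)
    )
    response = {}
    for r in results:
        response.update(r)
    return response

print(asyncio.run(handle_request()))
```

The equivalent in Node would be Promise.all (or the async library's parallel), as the answer suggests.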
unknown
d12617
train
Add this line at the end of your viewDidLoad: [self.textField scrollRangeToVisible:NSMakeRange(0, 1)]; Like this: - (void)viewDidLoad { [super viewDidLoad]; NSString* path = [[NSBundle mainBundle] pathForResource:@"license" ofType:@"txt"]; NSString* terms = [NSString stringWithContentsOfFile:path encoding:NSUTF8StringEncoding error:nil]; self.textField.text = terms; [self.textField scrollRangeToVisible:NSMakeRange(0, 1)]; } A: This used to happen on all iPhone/iPod devices in landscape mode, and the solutions from the others will work. Now it only happens on iPhone 6+ or iPhone 6s+ in landscape mode. This is a workaround for it: - (void)viewWillAppear:(BOOL)animated { [super viewWillAppear:animated]; dispatch_async(dispatch_get_main_queue(), ^{ [self.textView scrollRangeToVisible:NSMakeRange(0, 1)]; }); } The view is scrolled somehow after viewWillAppear and before viewDidAppear, and that's why we need the dispatch above. A: I had the same problem and I solved it by getting the previous scroll position and resetting it after updating the text: CGPoint p = [self.textField contentOffset]; self.textField.text = terms; [self.textField setContentOffset:p animated:NO]; [self.textField scrollRangeToVisible:NSMakeRange([self.textField.text length], 0)]; A: For those using Xamarin, this is the only solution that works for me: textView.Text = "very long text"; DispatchQueue.MainQueue.DispatchAsync(() => { textView.ContentOffset = new CGPoint(0, 0); }); A: The best working solution for me (Objective-C): - (void)viewDidLayoutSubviews { [self.myTextView setContentOffset:CGPointZero animated:NO]; }
unknown
d12618
train
You aren't giving the consumer application any time to actually receive a message; you create it, then you close it. You either need to use a timed receive call to do a sync receive of the message from the Queue, or you need to add some sort of wait in the main method, such as a CountDownLatch etc., to allow the async onMessage call to trigger shutdown once processing of the message is complete. A: What @Tim Bish said is correct. You either need to have a timer - say, for example, the receiver should listen for 1 hour - or make it available until the program terminates. Either way you need to start your consumer program once. Change your receiveJMS method as follows: public void receiveJMS() throws NamingException, JMSException, InterruptedException { ConnectionFactory connectionFactory = new ActiveMQConnectionFactory(ActiveMQConnection.DEFAULT_BROKER_URL); Connection connection = connectionFactory.createConnection(); try { connection.start(); // it's the start point Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); // Getting the queue 'testQueue' Destination destination = session.createQueue("testQueue"); MessageConsumer consumer = session.createConsumer(destination); // set an asynchronous message listener // JMSReceiver asyncReceiver = new JMSReceiver(); // no need to create another object consumer.setMessageListener(this); connection.setExceptionListener(this); // connection.close(); once this is closed the consumer is no longer active Thread.sleep(60 * 60 * 1000); // receive messages for 1 hour } finally { connection.close(); // after 1 hour close it } } The above program will listen for up to 1 hour. If you want it to listen as long as the program runs, remove the finally block. But the recommended way is to close it somehow. Since your application seems to be standalone, you can check the Java runtime shutdown hook, where you can specify how to release such resources when the program terminates. If your consumer is a web application you can close it in a ServletContextListener.
unknown
d12619
train
This is a known issue of the SonarQube Java analyzer: https://jira.sonarsource.com/browse/SONARJAVA-583 It is due to a lack of semantic analysis to properly resolve method references (and thus identify which method this::isActive refers to).
unknown
d12620
train
To find all <div> elements that have class attribute from a given list: #!/usr/bin/env python from bs4 import BeautifulSoup # $ pip install beautifulsoup4 with open('input.xml', 'rb') as file: soup = BeautifulSoup(file) elements = soup.find_all("div", class_="header name quantity".split()) print("\n".join("{} {}".format(el['class'], el.get_text()) for el in elements)) Output ['header'] content ['name'] content ['quantity'] content ['name'] content ['quantity'] content ['header'] content2 ['name'] content2 ['quantity'] content2 ['name'] content2 ['quantity'] content2 There are also other methods that allows you to search, traverse html elements.
unknown
d12621
train
Try Integer.parseInt("100101", 2); This will parse the integer as a binary number. A: Just add this as a last statement: System.out.println(Integer.toBinaryString(product)); to see the binary version of your product. Java prints numbers in decimal because that is what humans read. Internally, in memory or in CPU registers, everything is binary.
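The same pair of operations exists in most languages; for instance, a Python sketch of parsing from and formatting to base 2:

```python
# Parse a string of binary digits into an integer, and format an integer
# back as binary -- the analogues of Integer.parseInt(s, 2) and
# Integer.toBinaryString(n) in Java.
n = int("100101", 2)
print(n)             # 37
print(bin(n)[2:])    # 100101  (bin() prefixes "0b", so strip it)
print(format(n, "b"))  # same result without the prefix
```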
unknown
d12622
train
string path = @"E:\AppServ\Example.txt"; File.AppendAllLines(path, new [] { "The very first line!" }); See also File.AppendAllText(). AppendAllLines will add a newline to each line without having to put it there yourself. Both methods will create the file if it doesn't exist so you don't have to. * *File.AppendAllText *File.AppendAllLines A: You just want to open the file in "append" mode. http://msdn.microsoft.com/en-us/library/3zc0w663.aspx A: You can just use the File.AppendAllText() method; this will solve your problem. It takes care of creating the file if it does not exist, and of opening and closing it. var outputPath = @"E:\Example.txt"; var data = "Example Data"; File.AppendAllText(outputPath, data); A: string path=@"E:\AppServ\Example.txt"; if(!File.Exists(path)) { File.Create(path).Dispose(); using( TextWriter tw = new StreamWriter(path)) { tw.WriteLine("The very first line!"); } } else if (File.Exists(path)) { using(TextWriter tw = new StreamWriter(path)) { tw.WriteLine("The next line!"); } } A: When you construct a StreamWriter with only a path, it overwrites the text that was there before. You can use the append constructor parameter like so: TextWriter t = new StreamWriter(path, true); A: else if (File.Exists(path)) { using (StreamWriter w = File.AppendText(path)) { w.WriteLine("The next line!"); w.Close(); } } A: Try this. string path = @"E:\AppServ\Example.txt"; if (!File.Exists(path)) { using (var txtFile = File.AppendText(path)) { txtFile.WriteLine("The very first line!"); } } else if (File.Exists(path)) { using (var txtFile = File.AppendText(path)) { txtFile.WriteLine("The next line!"); } } A: You don't actually have to check if the file exists, as StreamWriter will do that for you. If you open it in append mode, the file will be created if it does not exist; then you will always append and never overwrite. So your initial check is redundant. TextWriter tw = new StreamWriter(path, true); tw.WriteLine("The next line!"); tw.Close(); A: You could use a FileStream. 
This does all the work for you. http://www.csharp-examples.net/filestream-open-file/ A: Use the correct constructor: else if (File.Exists(path)) { using(var sw = new StreamWriter(path, true)) { sw.WriteLine("The next line!"); } } A: File.AppendAllText adds a string to a file. It also creates a text file if the file does not exist. If you don't need to read content, it's very efficient. The use case is logging. File.AppendAllText("C:\\log.txt", "hello world\n"); A: From the Microsoft documentation: you can create a file if it does not exist and append to it in a single call. File.AppendAllText Method (String, String) Opens a file, appends the specified string to the file, and then closes the file. If the file does not exist, this method creates a file, writes the specified string to the file, then closes the file. Namespace: System.IO Assembly: mscorlib (in mscorlib.dll) Syntax (C#): public static void AppendAllText( string path, string contents ) Parameters: path Type: System.String The file to append the specified string to. contents Type: System.String The string to append to the file. AppendAllText A: using(var tw = new StreamWriter(path, File.Exists(path))) { tw.WriteLine(message); } A: .NET Core Console App: public static string RootDir() => Path.GetFullPath(Path.Combine(AppContext.BaseDirectory, @"..\..\..\")); string _OutputPath = RootDir() + "\\Output\\" + "MyFile.txt"; if (!File.Exists(_OutputPath)) File.Create(_OutputPath).Dispose(); using (TextWriter _StreamWriter = new StreamWriter(_OutputPath)) { _StreamWriter.WriteLine(strOriginalText); } A: Please note that the AppendAllLines and AppendAllText methods only create the file, not the path. So if you are trying to create a file in "C:\Folder", please ensure that this path exists.
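The create-or-append pattern that most of the answers above implement is the same in other environments; an illustrative Python sketch (the file name is made up and placed in a temporary directory so the path is writable):

```python
import os, tempfile

path = os.path.join(tempfile.mkdtemp(), "Example.txt")

# Append mode ("a") creates the file if it does not exist and never
# overwrites existing content -- so no explicit existence check is needed,
# just like new StreamWriter(path, true) / File.AppendAllText in C#.
with open(path, "a") as f:
    f.write("The very first line!\n")
with open(path, "a") as f:
    f.write("The next line!\n")

with open(path) as f:
    print(f.read())
```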
unknown
d12623
train
You can add offline_access to your scope (e.g. "scope": "offline_access openid something:else",) and this will yield a refresh_token. Auth0 currently supports unlimited refresh_token usage, so when your access_token expires (you can either track the expiration time manually using the "expires_in": 86400 value in responses, or react to a 401 response from the API) you can send your refresh token to the OAuth2 API endpoint and receive a new access token back. They have a few decent articles about this matter and about what you need to configure for your clients and API, as well as what not to do (depending on your client security assumptions). Note: you must secure the refresh_token properly - store it in some reliable store and prevent any external scripts from accessing it. I assume with an Electron app you can do this more reliably than with a public website.
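The client-side expiry tracking can be sketched generically. The following Python toy is illustrative only - the class, the refresh_fn callback, and the skew value are all made up and are not Auth0's actual API; refresh_fn stands in for the POST to the OAuth2 token endpoint:

```python
import time

class TokenStore:
    """Illustrative client-side token handling: track expires_in locally
    and refresh shortly before the access token lapses."""

    def __init__(self, access_token, expires_in, refresh_token):
        self.access_token = access_token
        self.expires_at = time.time() + expires_in
        self.refresh_token = refresh_token

    def valid(self, skew=60):
        # Treat the token as expired `skew` seconds early to avoid races.
        return time.time() < self.expires_at - skew

    def get(self, refresh_fn):
        if not self.valid():
            # refresh_fn stands in for the call to the token endpoint;
            # it returns a (new_access_token, expires_in) pair here.
            self.access_token, expires_in = refresh_fn(self.refresh_token)
            self.expires_at = time.time() + expires_in
        return self.access_token

store = TokenStore("old", expires_in=0, refresh_token="r1")
print(store.get(lambda rt: ("new", 86400)))  # expired -> refreshed
```

Reacting to a 401 instead of tracking expires_in is the other strategy the answer mentions; both end in the same refresh call.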
unknown
d12624
train
You can use a .properties file, where you need to type: jdbc.url=jdbc:oracle:thin:@//localhost:1521/your_database jdbc.username=user jdbc.password=password A: The URL should be changed to one of the following, depending on your configuration: * *jdbc:oracle:thin:@host:port/service *jdbc:oracle:thin:@host:port:SID (SID - System ID of the Oracle server database instance.) *jdbc:oracle:thin:@myhost:1521:databaseInstance (By default, Oracle Database 10g Express Edition creates one database instance called XE.)
unknown
d12625
train
WITH week (dn) AS ( SELECT 1 UNION ALL SELECT dn + 1 FROM week WHERE dn < 7 ) SELECT DATENAME(dw, dn + 5) FROM week Replace dn + 5 with dn + 6 if your week starts from Monday. If you need a single comma separated string instead of a set, use this: WITH week (dn, dname) AS ( SELECT 1, CAST(DATENAME(dw, 6) AS NVARCHAR(MAX)) UNION ALL SELECT dn + 1, dname + ', ' + DATENAME(dw, dn + 6) FROM week WHERE dn < 7 ) SELECT dname FROM week WHERE dn = 7 A: A straight select that will work with any SET DATEFIRST setting select datename(dw, 6-@@datefirst) + ', ' + datename(dw, 1+6-@@datefirst) + ', ' + datename(dw, 2+6-@@datefirst) + ', ' + datename(dw, 3+6-@@datefirst) + ', ' + datename(dw, 4+6-@@datefirst) + ', ' + datename(dw, 5+6-@@datefirst) + ', ' + datename(dw, 6+6-@@datefirst) If you don't care about region (Monday or Sunday as first day of week), then just select datename(dw, 0) + ', ' + datename(dw, 1) + ', ' + datename(dw, 2) + ', ' + datename(dw, 3) + ', ' + datename(dw, 4) + ', ' + datename(dw, 5) + ', ' + datename(dw, 6) + ', ' It will perform much better than going through CTE and will also work in 2000 should you ever need it. A: The fastest means is to statically define the comma separated list. I don't know if SET DATEFIRST affects only the database -- if it's the entire instance, I would really hesitate to use SET DATEFIRST. 
Rather than using recursion, you can use the values from MASTER..SPT_VALUES, and a combination of the STUFF and FOR XML PATH functions (Caveat: SQL Server 2005+): Week starting on Sunday: SELECT STUFF((SELECT ', ' + x.wkday_name FROM (SELECT DISTINCT DATENAME(dw, t.number) AS wkday_name, t.number FROM MASTER.dbo.SPT_VALUES t WHERE t.number BETWEEN -1 AND 5) x ORDER BY x.number FOR XML PATH ('')), 1, 2, '') Week starting on Monday: SELECT STUFF((SELECT ', ' + x.wkday_name FROM (SELECT DISTINCT DATENAME(dw, t.number) AS wkday_name, t.number FROM MASTER.dbo.SPT_VALUES t WHERE t.number BETWEEN 0 AND 6) x ORDER BY x.number FOR XML PATH ('')), 1, 2, '') Comparison: The statically defined list won't return a query plan for me on SQL Server 2005. Quassnoi's recursion example on 2005 has a subtree cost of 0.0000072; SPT_VALUES has a subtree cost of 0.0158108. So the recursive approach appears more efficient than SPT_VALUES -- possibly due to the very small size?
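For comparison, the same "weekday names from a configurable first day" idea can be sketched outside SQL; this Python version uses the standard calendar module, and the modular arithmetic plays the same role as the dn + 5 / dn + 6 offsets in the queries above:

```python
import calendar

def week_day_names(first_day=6):
    """Comma-separated weekday names starting from `first_day`
    (0 = Monday ... 6 = Sunday, matching calendar.day_name's numbering)."""
    return ", ".join(calendar.day_name[(first_day + i) % 7] for i in range(7))
```

week_day_names(6) gives the Sunday-first list, week_day_names(0) the Monday-first one.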
unknown
d12626
train
This can be done through RStudio as well. library(usethis) usethis::edit_r_environ() When the tab opens up in RStudio, add this on the 1st line: R_MAX_VSIZE=100Gb (or whatever memory you wish to allocate). Restart R and/or restart your computer and run the R command again that gave you the memory error. A: I had the same problem; increasing "R_MAX_VSIZE" did not help in my case. Instead, removing the variables that were no longer needed solved the problem. Hope this helps those who are struggling here. rm(large_df, large_list, large_vector, temp_variables) A: For those using Rstudio, I've found that setting Sys.setenv('R_MAX_VSIZE'=32000000000), as has been suggested on multiple StackOverflow posts, only works on the command line, and that setting that parameter while using Rstudio does not prevent this error: Error: vector memory exhausted (limit reached?) After doing some more reading, I found this thread, which clarifies the problem with Rstudio, and identifies a solution, shown below: Step 1: Open terminal, Step 2: cd ~ touch .Renviron open .Renviron Step 3: Save the following as the first line of .Renviron: R_MAX_VSIZE=100Gb Step 4: Close RStudio and reopen Note: This limit includes both physical and virtual memory; so setting R_MAX_VSIZE=16Gb on a machine with 16Gb of physical memory may not prevent this error. You may have to play with this parameter, depending on the specs of your machine. A: I had this problem when running Rcpp::sourceCpp("my_cpp_file.cpp"), resulting in Error: vector memory exhausted (limit reached?) Changing the Makevars file solved it for me. Currently it looks like CC=gcc CXX=g++ CXX11=g++ CXX14=g++ cxx18=g++ cxx1X=g++ LDFLAGS=-L/usr/lib
unknown
d12627
train
You could use a sequence diagram in this case. It's easy to show call structures like in your case.
unknown
d12628
train
It looks like you might be relying on an implementation detail. However, to get around the error, you can explicitly cast the type to any in order to access the indexer. Assuming there is a _factories property, this should work: var factories = Array.from((<any>this.resolver)['_factories'].keys());
unknown
d12629
train
Looks like the missing pieces have been due to the PnP configuration: yarn add --dev typescript ts-node prettier yarn dlx @yarnpkg/sdks vscode Add a minimal tsconfig.json: { "compilerOptions": { /* Basic Options */ "target": "es5", "module": "commonjs", "lib": ["ESNext"], /* Strict Type-Checking Options */ "strict": true, /* Module Resolution Options */ "moduleResolution": "node", "esModuleInterop": true, /* Advanced Options */ "forceConsistentCasingInFileNames": true, "skipLibCheck": true } } Then install this VS Code extension followed by these steps: * *Press ctrl+shift+p in a TypeScript file *Choose "Select TypeScript Version" *Pick "Use Workspace Version" More details can be found in the docs.
unknown
d12630
train
let documents = [ { name: "name1", id: 1 }, { name: "name2", id: 2 }, { name: "name1", id: 3 }, { name: "name1", id: 0 } ]; let max = documents.sort( (a, b) => a.id > b.id ? -1 : 1)[0] console.log( max ); How about let maxIdDocument = documents.sort( (a, b) => a.id > b.id ? -1 : 1)[0] A: First, sort in descending order, then get the first one: const arr = [{ name: "name1", id: 1 }, { name: "name2", id: 2 }, { name: "name1", id: 3 }, { name: "name1", id: 0 } ]; const [greatest] = arr.sort(({ id: a }, { id: b }) => b - a); console.log(greatest); A: You can sort the array and use pop to get the object with the greatest id. arr.slice() // This is to avoid a mutation on the original array. let arr = [ { name: "name1", id: 1 }, { name: "name2", id: 2 }, { name: "name1", id: 3 } , { name: "name1", id: 0 } ], greatest = arr.slice().sort(({id: a}, {id: b}) => a - b).pop(); console.log(greatest);
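Since all of the answers above sort the whole array (O(n log n)) just to pick one element, it may be worth noting that a single O(n) pass is enough and leaves the array untouched. Here is the idea sketched in Python (in JavaScript the equivalent would be a reduce over the array):

```python
documents = [
    {"name": "name1", "id": 1},
    {"name": "name2", "id": 2},
    {"name": "name1", "id": 3},
    {"name": "name1", "id": 0},
]

# max() scans once and does not mutate the original list
greatest = max(documents, key=lambda d: d["id"])
```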
unknown
d12631
train
Instead of onkeyup, use onchange event. onchange event will fire only on blur of the text box. <asp:TextBox ID="txtLastName" runat="server" onchange="JqueryAjaxCall();" AutoPostBack="true"></asp:TextBox>
unknown
d12632
train
The current documentation does not reflect any change of behaviour of shorthand echo (<?=) since version 5.4.0, in which only the necessary configuration to enable it was changed. * *http://php.net/manual/en/function.echo.php
unknown
d12633
train
I was able to do it by adding a class to the playlist items v-list, with the below: .playlist-container .playlist-items { flex-basis: 0px; flex-grow: 1; overflow-y: auto; } A: Get the current height of the right-hand column as a computed property using document.getElementById and element.offsetHeight, then set the playlist container: <v-col class="pa-0 overflow-y-auto" :style="{'max-height': `${height}px`}"> where height is the computed height.
unknown
d12634
train
A dictionary is unordered. You can sort the data for output. >>> data = {'b': 2, 'a': 3, 'c': 1} >>> for key, value in sorted(data.items(), key=lambda x: x[0]): ... print('{}: {}'.format(key, value)) ... a: 3 b: 2 c: 1 >>> for key, value in sorted(data.items(), key=lambda x: x[1]): ... print('{}: {}'.format(key, value)) ... c: 1 b: 2 a: 3 Using an OrderedDict is not an option here, because you don't want to maintain order, but want to sort with different criteria.
unknown
d12635
train
You should use the full namespace for the facade: \OneSignal::sendNotificationToAll("Some Message"); Or add this to the top of your class: use OneSignal; A: You should write use OneSignal at the top of the class, underneath the namespace declaration. Hope this works for you!
unknown
d12636
train
I believe you need to use read_html - it returns all parsed tables, and you select a DataFrame by position: website = 'https://en.wikipedia.org/wiki/Winning_percentage' #select first parsed table df1 = pd.read_html(website)[0] print (df1.head()) Win % Wins Losses Year Team Comment 0 0.798 67 17 1882 Chicago White Stockings best pre-modern season 1 0.763 116 36 1906 Chicago Cubs best 154-game NL season 2 0.721 111 43 1954 Cleveland Indians best 154-game AL season 3 0.716 116 46 2001 Seattle Mariners best 162-game AL season 4 0.667 108 54 1975 Cincinnati Reds best 162-game NL season #select second parsed table df2 = pd.read_html(website)[1] print (df2) Win % Wins Losses Season Team \ 0 0.890 73 9 2015–16 Golden State Warriors 1 0.110 9 73 1972–73 Philadelphia 76ers 2 0.106 7 59 2011–12 Charlotte Bobcats Comment 0 best 82 game season 1 worst 82-game season 2 worst season statistically
unknown
d12637
train
The result you're getting from getPath() is the immutable value from a dict or list. This value does not even know it's stored in a dict or list, and there's nothing you can do to change it. You have to change the dict/list itself. Example: a = {'hello': [0, 1, 2], 'world': 2} b = a['hello'][1] b = 99 # a is completely unaffected by this Compare with: a = {'hello': [0, 1, 2], 'world': 2} b = a['hello'] # b is a list, which you can change b[1] = 99 # now a is {'hello': [0, 99, 2], 'world': 2} In your case, instead of following the path all the way to the value you want, go all the way except the last step, and then modify the dict/list you get from the penultimate step: getPath(["apple","colors",2]) = "green" # doesn't work getPath(["apple","colors"])[2] = "green" # should work A: You could cache your getPath using custom caching function that allows you to manually populate saved cache. from functools import wraps def cached(func): func.cache = {} @wraps(func) def wrapper(*args): try: return func.cache[args] except KeyError: func.cache[args] = result = func(*args) return result return wrapper @cached def getPath(l): ... getPath.cache[(someList, )] = 'green' getPath(someList) # -> 'green' A: You can't literally do what you're trying to do. 
I think the closest you could get is to pass the new value in, then manually reassign it within the function: someData = {"apple":{"taste":"not bad","colors":["red","yellow"]}, "banana":{"taste":"perfection","shape":"banana shaped"}, "some list":[6,5,3,2,4,6,7]} def setPath(path, newElement): nowSelection = someData for i in path[:-1]: # Remove the last element of the path nowSelection = nowSelection[i] nowSelection[path[-1]] = newElement # Then use the last element here to do a reassignment someList = ["apple","colors",1] setPath(someList, "green") print(someData) {'apple': {'taste': 'not bad', 'colors': ['red', 'green']}, 'banana': {'taste': 'perfection', 'shape': 'banana shaped'}, 'some list': [6, 5, 3, 2, 4, 6, 7]} I renamed it to setPath to reflect its purpose better.
unknown
d12638
train
Suggestions: * *[First priority] Check return values from CUDA functions to see whether any errors are reported. *Run this through cuda-memcheck. I'm not sure what the relationship between globalRows, globalCols, localRows, localCols, num_elts etc. is, but reading out-of-bounds seems like a candidate for problems. *Remember that summing the squares can lead to rounding errors fairly quickly if you don't take care. Consider using a running mean/variance or doing a tree-based reduction.
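On the last point, here is a sketch of the running mean/variance idea (Welford's online algorithm) in plain Python; the same update rule avoids the catastrophic cancellation of the sum-of-squares formula and can be ported into a kernel or combined with a tree-based reduction:

```python
def welford(values):
    """One-pass mean and population variance via Welford's online algorithm."""
    count, mean, m2 = 0, 0.0, 0.0
    for x in values:
        count += 1
        delta = x - mean
        mean += delta / count
        m2 += delta * (x - mean)  # note: uses the *updated* mean
    variance = m2 / count if count else 0.0
    return mean, variance
```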
unknown
d12639
train
Use pd.crosstab: df1 = pd.crosstab(df['emp_id'], df['category']).rename_axis( columns=None).reset_index() OUTPUT: emp_id A B C 0 033 0 0 1 1 12 2 0 0 2 2233 1 0 0 3 441 0 0 3 4 6676 1 0 1 5 91 0 1 0 NOTE: If you don't need 0 in the output you can use: df = pd.crosstab(df['emp_id'], df['category']).rename_axis( columns=None).reset_index().replace(0, '') OUTPUT: emp_id A B C 0 033 1 1 12 2 2 2233 1 3 441 3 4 6676 1 1 5 91 1 Updated Answer: df = ( df.reset_index() .pivot_table( index=['emp_id', df.groupby('emp_id')['year'].transform(', '.join)], columns='category', values='index', aggfunc='count', fill_value=0) .rename_axis(columns=None) .reset_index() ) OUTPUT: emp_id year A B C 0 033 FY16 0 0 1 1 12 FY18, FY14 2 0 0 2 2233 FY21 1 0 0 3 441 FY20, FY17, FY12 0 0 3 4 6676 FY19, FY10 1 0 1 5 91 FY15 0 1 0 A: You can also use pivot table: pv = pd.pivot_table(df, index='emp_id', columns='category', aggfunc='count') pv.fillna('', inplace=True) print(pv) year category A B C emp_id 033 1 12 2 2233 1 441 3 6676 1 1 91 1
unknown
d12640
train
I think that the only way to do this is to use AJAX - either on x.php to load y.php, or on y.php to load data. A: you need to use jquery. ı am using this code for that. when your request start its load a loading image to page. when data return with success function it loading the data to page. $('#something').change(function(){ // change, click doesnt matter $('#load').html("<img src='images/load2.gif' />"); // loading img $.post( 'select.php?do=load', // your y.php {s:value}, function(answer){ $('#load').html(""); $('#page').html(answer); } ); }); A: Hope it helps <style> .loader { position: fixed; left: 0px; top: 0px; width: 100%; height: 100%; z-index: 9999; background: url('images/somegif.GIF') 50% 50% no-repeat rgb(249,249,249); } </style> <script> $(window).load(function() { $(".loader").fadeOut("slow"); }) </script> <body> <div class="loader"></div> </body>
unknown
d12641
train
Let's consider a much simpler example, to remove all the irrelevant details. (Here, instead of b.Thing we will use String, and instead of a.Thing we will use Object; String is a subclass of Object, so it is analogous. Instead of a.Container, we will use List. The subclassing of the container is irrelevant, as you will see.) If you can understand why this example doesn't work, you will understand your problem. List<String> foo = new ArrayList<String>(); List<Object> bar = foo; // why doesn't this work?
unknown
d12642
train
Thanks to all! Especially to those who advised trying a bigger array. I tried with 100, 1000 and 100000 items and I was surprised that the dictionary was faster. Of course, previously I had tried not just the array from my first post, but an array of about 50 numbers, and the dictionary was slower. With an array of 100 or more numbers the dictionary is much faster. The result for 100000 items is: list - 31 seconds, dictionary - 0.008 seconds
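For anyone wanting to reproduce this effect, here is a minimal sketch of the experiment in Python (the size and repeat count are arbitrary choices, not the original poster's setup): a membership test on a list scans element by element, while a dict does a hash lookup, which is why the gap only appears once the collection is large enough:

```python
import timeit

n = 100_000
as_list = list(range(n))
as_dict = dict.fromkeys(as_list)
target = n - 1  # worst case for the list: last element

# `in` on a list is O(n); `in` on a dict is O(1) on average
list_time = timeit.timeit(lambda: target in as_list, number=100)
dict_time = timeit.timeit(lambda: target in as_dict, number=100)
```

With a tiny collection the constant overheads dominate and the list can win, matching what was observed with ~50 items.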
unknown
d12643
train
It's defined as classmethod in TestCase so you should do the same in your code. Maybe both versions work right now but in the future releases of Django it can break the compatibility of your code with Django. You can check the documentation. classmethod TestCase.setUpTestData(): The class-level atomic block described above allows the creation of initial data at the class level, once for the whole TestCase. Just follow the example from docs: from django.test import TestCase class MyTests(TestCase): @classmethod def setUpTestData(cls): # Set up data for the whole TestCase cls.foo = Foo.objects.create(bar="Test") ...
unknown
d12644
train
It's difficult to envisage exactly what you're looking for without more information. But if I wanted some framework that allowed me to store and retrieve data in some structured way, in as-yet-unknown storage devices this is the kind of way I'd be thinking. This may not be the answer you're looking for, but I think there'll be concepts here that will inspire you in the right direction. #include <iostream> #include <tuple> #include <boost/variant.hpp> #include <map> // define some concepts // bigfoo is a class that's expensive to copy - so lets give it a shared-handle idiom struct bigfoo { struct impl { impl(std::string data) : _data(std::move(data)) {} void write(std::ostream& os) const { os << "I am a big object. Don't copy me: " << _data; } private: std::string _data; }; bigfoo(std::string data) : _impl { std::make_shared<impl>(std::move(data)) } {}; friend std::ostream& operator<<(std::ostream&os, const bigfoo& bf) { bf._impl->write(os); return os; } private: std::shared_ptr<impl> _impl; }; // all the data types our framework handles using abstract_data_type = boost::variant<int, std::string, double, bigfoo>; // defines the general properties of a data table store concept template<class...Columns> struct table_definition { using row_type = std::tuple<Columns...>; }; // the concept of being able to store some type of table data on some kind of storage medium template<class IoDevice, class TableDefinition> struct table_implementation { using io_device_type = IoDevice; using row_writer_type = typename io_device_type::row_writer_type; template<class...Args> table_implementation(Args&...args) : _io_device(std::forward<Args>(args)...) 
{} template<class...Args> void add_row(Args&&...args) { auto row_instance = _io_device.open_row(); set_row_args(row_instance, std::make_tuple(std::forward<Args>(args)...), std::index_sequence_for<Args...>()); row_instance.commit(); } private: template<class Tuple, size_t...Is> void set_row_args(row_writer_type& row_writer, const Tuple& args, std::index_sequence<Is...>) { using expand = int[]; expand x { 0, (row_writer.set_value(Is, std::get<Is>(args)), 0)... }; (void)x; // mark expand as unused; } private: io_device_type _io_device; }; // model the concepts into a concrete specialisation // this is a 'data store' implementation which simply stores data to stdout in a structured way struct std_out_io { struct row_writer_type { void set_value(size_t column, abstract_data_type value) { // roll on c++17 with it's much-anticipated try_emplace... auto ifind = _values.find(column); if (ifind == end(_values)) { ifind = _values.emplace(column, std::move(value)).first; } else { ifind->second = std::move(value); } } void commit() { std::cout << "{" << std::endl; auto sep = "\t"; for (auto& item : _values) { std::cout << sep << item.first << "=" << item.second; sep = ",\n\t"; } std::cout << "\n}"; } private: std::map<size_t, abstract_data_type> _values; // some value mapped by ascending column number }; row_writer_type open_row() { return row_writer_type(); } }; // this is a model of a 'data table' concept using my_table = table_definition<int, std::string, double, bigfoo>; // here is a test auto main() -> int { auto data_store = table_implementation<std_out_io, my_table>( /* std_out_io has default constructor */); data_store.add_row(1, "hello", 6.6, bigfoo("lots and lots of data")); return 0; } expected output: { 0=1, 1=hello, 2=6.6, 3=I am a big object. Don't copy me: lots and lots of data }
unknown
d12645
train
this arr[i] = temp + (i * row); should be arr[i] = temp + (i * col); since i = [0,row-1]
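To see why the stride has to be the column count, here is the row-major indexing rule sketched in Python: element (i, j) of a rows x cols matrix lives at offset i * cols + j in the flat block, so stepping row pointers by i * row would only happen to work for a square matrix:

```python
def flat_index(i, j, n_cols):
    """Row-major offset of element (i, j) in a matrix stored as one flat block."""
    return i * n_cols + j

# A 2x3 matrix flattened row by row
rows, cols = 2, 3
flat = [10, 11, 12, 20, 21, 22]
matrix = [[flat[flat_index(i, j, cols)] for j in range(cols)] for i in range(rows)]
```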
unknown
d12646
train
No you cannot do that. Parse the input and depending on the input, implement your logic with if statements for example.
unknown
d12647
train
There is JsonField in Postgres which can be used for this kind of task. There are also many apps for Django that add a JsonField that can work with MySQL - django-extensions, for example. A: I'm not sure why you're avoiding subclassing, it's built pretty much for this purpose. The best way to do this is subclassing. For this example, it looks like you'd probably want to use the shape attribute as the major class. For example, you could create a Shape class thusly: class Shape(models.Model): color = models.CharField(max_length=20) class Meta: abstract = True class Circle(Shape): radius = models.FloatField() class Ellipse(Shape): eccentricity = models.FloatField() This would create two tables in your model, a Circle table with a color and radius column, as well as an Ellipse table with a color and an eccentricity column. If you really want to avoid subclassing for some reason, and you are using PostgreSQL, you can declare a JSONField (probably called attributes or something similar) of the different values. They would be harder to access programmatically, though. That would look like this: class Shape(models.Model): color = models.CharField(max_length=20) attributes = models.JSONField()
unknown
d12648
train
I recommend that you read the Data Binding Overview page on MSDN so that you can get a better idea on data binding. For now, I can give you a few tips. Firstly, in WPF, your property should really have used an ObservableCollection<T>, like this: private ObservableCollection<Ligne> _ListeLigne = new ObservableCollection<Ligne>(); public ObservableCollection<Ligne> ListeLigne { get { return _ListeLigne; } set { _ListeLigne = value; OnPropertyChanged("ListeLigne"); } } Then your selected item like this: private Ligne _CurrentLigne = new Ligne(); public Ligne CurrentLigne { get { return _CurrentLigne; } set { _CurrentLigne= value; OnPropertyChanged("CurrentLigne"); } } With properties like this, your XAML would be fine. Lastly, to add your items, you simply do this: ListeLigne = new ObservableCollection<Ligne>(SomeMethodGettingYourData()); Or just...: ListeLigne = SomeMethodGettingYourData(); ... if your data access method returns an ObservableCollection<Ligne>. If you want to select a particular element in the UI, then you must select an actual item from the data bound collection, but you can do that easily using LinQ. using System.Linq; CurrentLigne = ListeLigne.First(l => l.SomeLigneProperty == someValue); Or just: CurrentLigne = ListeLigne.ElementAt(someValidIndexInCollection); Oh... and I've got one other tip for you. In your code: foreach (Ligne ligne in CurrentLigne) { if (Currentligne!= null) // this is a pointless if condition _ligneBLL.InsetLigne(ligne); } The above if condition is pointless because the program execution will never enter the foreach loop if the collection is null. A: Try This !! foreach (Ligne ligne in ListLigne) { var _ligne = ligne as Ligne; _ligneBLL.InsetLigne(ligne); } A: I think you want to use a BindingList. It is the list I always use, but remember you will need to post your notifyChange events.
unknown
d12649
train
Have you tried @Override public void onCreate(Bundle savedInstanceState) { getWindow().requestFeature(Window.FEATURE_ACTION_BAR_OVERLAY); as described in the Android documentation
unknown
d12650
train
Although I'm not actually looking at Mike's code (you could do that though) I would imagine he has a single content control to which he has assigned the Front content originally. On Flip, a projection is animated until it is edge-on, at which point the Rear content is assigned, and the animation continues. Hence, at any one time, only one of the Front or Rear content can actually be navigated to with something like FindName. However, if you give each of the root Child controls that you place in Front and Rear their own x:Name, you should be able to gain access to your text boxes using the rear name.
unknown
d12651
train
I would recommend you to use composer, which will generate autoload.php for you to include at the top of the file: #!/usr/bin/env php <?php require_once './vendor/autoload.php'; use Symfony\Component\ClassLoader\UniversalClassLoader; $loader = new UniversalClassLoader(); $loader->registerNamespace('BlueHeadStudios', __DIR__.'/src/'); $loader->register(); // write your code below A: You should run the application with app/console check the example here
unknown
d12652
train
This makes sense because drawing to the canvas and clearing the canvas are expensive methods; if you have a smaller canvas and call clearRect on it every animation step then it will perform better than a larger canvas running the exact same code. The best thing to do is optimise your draw method to only clear what changed each frame. Once you do that you will notice a performance boost; this article will help with other areas in which you can increase performance. I'm developing with canvas too and have found that WebKit based browsers, in general, handle canvas operations quicker than Gecko in most cases. A: I actually see similar behavior. I believe it is when you cross the 65776 pixel barrier -> not 65536 as you would expect. I don't yet know why this is the case, but it is likely some kind of internal data structure or data type issue (possibly it's using a hash and needs to grow the table at that point) inside your browser. Your test is actually invalid on my Chrome browser - it does not exhibit the same performance drop. I wrote some test cases http://jsperf.com/pixel-count-matters/2 and http://jsperf.com/magic-canvas/2 Hope this helps!
unknown
d12653
train
To get the total as fractions of a day you can use: SELECT SUM( TO_DATE( duration_d, 'MI:SS' ) - TO_DATE( '00:00', 'MI:SS' ) ) AS total FROM your_table Which gives the result: TOTAL ------------------------------------------ 0.0383449074074074074074074074074074074074 To convert this to an interval data type you can use NUMTODSINTERVAL: SELECT NUMTODSINTERVAL( SUM( TO_DATE( duration_d, 'MI:SS' ) - TO_DATE( '00:00', 'MI:SS' ) ), 'DAY' ) AS total FROM your_table Which gives the result: TOTAL ------------------- +00 00:55:13.000000 A: Please try the below: with x as (select sum((regexp_substr(YOUR_COLUMN, '[0-9]+', 1, 1)*60) + regexp_substr(YOUR_COLUMN, '[0-9]+', 1, 2)) seconds from YOUR_TABLE) SELECT TO_CHAR(TRUNC(seconds/3600),'FM9900') || ':' || TO_CHAR(TRUNC(MOD(seconds,3600)/60),'FM00') || ':' || TO_CHAR(MOD(seconds,60),'FM00') FROM x This will work only if the duration is always [MI:SS]. Also you can add the group by as per your requirement. Converting Seconds to the required duration format Reference. Group By with x as (select calling_number,sum((regexp_substr(YOUR_COLUMN, '[0-9]+', 1, 1)*60) + regexp_substr(YOUR_COLUMN, '[0-9]+', 1, 2)) seconds from YOUR_TABLE group by calling_number) SELECT calling_number, TO_CHAR(TRUNC(seconds/3600),'FM9900') || ':' || TO_CHAR(TRUNC(MOD(seconds,3600)/60),'FM00') || ':' || TO_CHAR(MOD(seconds,60),'FM00') FROM x A: Use a combination of SUBSTR, to_char, to_date, NVL, INSTR, reverse and SUM. SELECT "calling_number", to_char(to_date(SUM(NVL(SUBSTR("duration_d", 0, INSTR("duration_d", ':')-1), "duration_d"))*60 + SUM(substr("duration_d", - instr(reverse("duration_d"), ':') + 1)),'sssss'),'hh24:mi:ss') AS SUM_DURATION_D FROM yourtable GROUP BY "calling_number" Output calling_number SUM_DURATION_D 1 00:26:10 2 00:29:03 SQL Fiddle: http://sqlfiddle.com/#!4/9b0a81/33/0 A: Correct spelling as below: SELECT SUM( TO_DATE( duration_d, 'mi:ss' ) ) FROM YOURTABLE Group By calling_number
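Outside the database, the same "parse MI:SS, sum seconds, format as HH:MM:SS" aggregation is easy to sanity-check. This Python sketch (the input durations are made-up examples, not the question's actual data) follows the minutes*60 + seconds logic of the regexp-based answer:

```python
def sum_durations(durations):
    """Sum a list of 'MI:SS' strings and format the total as HH:MM:SS."""
    total = sum(int(m) * 60 + int(s)
                for m, s in (d.split(":") for d in durations))
    h, rest = divmod(total, 3600)
    m, s = divmod(rest, 60)
    return f"{h:02d}:{m:02d}:{s:02d}"
```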
unknown
d12654
train
When people are using your React page, it is "running" on their computer and the software does not have access to all the files and data you'd like to use. You will need to do this at "build time" when your service is being packaged up, or "on the server". When you are building your React app, you can hook into processes that can find files and perform operations on them. Gatsby might be your best bet. Look at how people add "markup" files to their projects, build a menu from them and then render them as blog articles. NextJS, Vite, and other frameworks and tools will work, you may just need to learn a bit more. The other approach, to do this "on the server", means that when you are running code on the server you can do almost anything you like. So, your React app would make a call (e.g. a REST request) to the server (e.g. NodeJS), and the code running on the server can use fs and other APIs to accomplish what you'd like. From what you describe, doing this as part of your build step is probably the best route. A much easier route is to move this data into a JSON file and/or a database and not crawl the file system. A: Looks like it's a client-side rendering app and not server-side (a Node app). The resources folder you are trying to list resides on the server, and the React app (JavaScript) running in the browser can't access it directly. A: To whom it may concern, I spent more time than I should have working on this. I highly recommend converting everything to JSON files and importing them. If you want to be able to scan for JSON files, make a JSON file that includes the names of all your files, then import that and use that for scanning. Doing things dynamically is a bear. If you don't reformat, you'll likely need to use Fetch, which is asynchronous, which opens up a whole can of worms if you're using a framework like React.
If you're just importing json files, it's super easy: import files from './resources/index.json'; where index.json looks like { "files":[ "file1.json", "file2.json" ] } if you need to import the files more dynamically (not at the very beginning) you can use this: var data = require('./resources/'+filename) this will allow you to scan through your files using the index.json file, and load them dynamically as needed.
unknown
d12655
train
You can unlist, find the unique edges and take the length of the resulting vector: length(unique(unlist(shortestPath$epath)))
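The same unlist-unique-count idiom translates directly to other languages. As a sketch, here is the Python equivalent over a list of paths (the sample paths are made up for illustration):

```python
def count_unique(paths):
    """Flatten a list of paths and count the distinct elements,
    mirroring length(unique(unlist(...))) in R."""
    return len({item for path in paths for item in path})
```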
unknown
d12656
train
extends Activity implements MediaPlayer.OnCompletionListener, View.OnClickListener Then you need to register your activity. mediaPlayer.setOnCompletionListener(this); someView.setOnClickListener(this); Where 'this' is the activity you just created A: Stick this code right up at the start as part of the onCreate(): MediaPlayer mp = MediaPlayer.create(this, pathToTheFile, web, whereverTheSoundIs); mp.setOnCompletionListener(this); mp.start(); If that doesn't work, then you have a problem locating the sound file, or it is in the wrong format. From experience, cut things down to simple parts and try and get each part working first before moving on. Another thing I do is put comments after each } for example: '} // End of Case' Oh, almost forgot, in the onCompletion you might like to close off the media player with mp.release(); Cheers A: Try this! It works for me; I hope it helps you: main: package com.hairdryer; import android.app.Activity; import android.content.Intent; import android.os.Bundle; import android.view.View; import android.view.View.OnClickListener; import android.widget.Button; import android.widget.ImageButton; import com.hairdryer.ServiceReproductor; public class Reproductor extends Activity implements OnClickListener { /** Called when the activity is first created.
*/ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); ImageButton btnInicio = (ImageButton) findViewById(R.id.on); ImageButton btnFin = (ImageButton) findViewById(R.id.off); btnInicio.setOnClickListener(this); btnFin.setOnClickListener(this); } public void onClick(View src) { ImageButton btnInicio = (ImageButton) findViewById(R.id.on); switch (src.getId()) { case R.id.on: btnInicio.setBackgroundResource(R.drawable.on2); startService(new Intent(this, ServiceReproductor.class)); break; case R.id.off: stopService(new Intent(this, ServiceReproductor.class)); btnInicio.setBackgroundResource(R.drawable.on); break; } } } ServiceReproductor.java: package com.hairdryer; import android.app.Service; import android.content.Intent; import android.media.MediaPlayer; import android.media.MediaPlayer.OnCompletionListener; import android.os.IBinder; import android.widget.Toast; public class ServiceReproductor extends Service implements OnCompletionListener{ private MediaPlayer player; @Override public IBinder onBind(Intent intent) { // TODO Auto-generated method stub return null; } @Override public void onCreate() { //Toast.makeText(this, "Servicio Creado", Toast.LENGTH_LONG).show(); player = MediaPlayer.create(this, R.raw.inicio); player.setOnCompletionListener(this); } @Override public void onDestroy() { //Toast.makeText(this, "Servicio Detenido", Toast.LENGTH_LONG).show(); player.stop(); } @Override public void onStart(Intent intent, int startid) { //Toast.makeText(this, "Servicio Iniciado", Toast.LENGTH_LONG).show(); player.start(); } @Override public void onCompletion(MediaPlayer mp) { player = MediaPlayer.create(this, R.raw.secador); player.start(); player.setLooping(true); // TODO Auto-generated method stub } } main.xml <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" 
android:layout_height="match_parent" > <ImageView android:layout_height="wrap_content" android:layout_width="wrap_content" android:src="@drawable/secador" /> <ImageButton android:id="@+id/off" android:layout_width="90dp" android:layout_height="90dp" android:layout_alignParentBottom="true" android:layout_alignParentLeft="true" android:background="@drawable/off" /> <ImageButton android:id="@+id/on" android:layout_width="90dp" android:layout_height="90dp" android:layout_alignParentBottom="true" android:layout_toRightOf="@+id/off" android:layout_marginRight="10dp" android:background="@drawable/on" /> </RelativeLayout>
unknown
d12657
train
This is an incredibly stupid problem. The messages file is supposed to have the name messages_de_DE_XX.properties (note that the last two segments are in upper case). My guess is it works when started from the IDE because Eclipse uses the filesystem and hence the OS standard, which is "ignore casing" for Windows. When started from the finished product, however, the files are in JARs, where casing is important.
unknown
d12658
train
If you want to achieve this functionality, first you need to map the zipcodes to countries, or use a plugin for it. Then define a model or table for it. During checkout, make an AJAX call from the zipcode field to your controller, call your model there, and check the zipcode. Based on the comparison, return the result and display whatever you want.
unknown
d12659
train
__declspec(dllimport) tells the compiler that the function will be imported from a DLL using an import LIB, rather than found in a different OBJ file or a static LIB. BTW: it sounds like you may not want DLLs at all. DLLs are specifically for swapping out the library after compilation without having to recompile the application, or for sharing large amounts of object code between applications. If all you want is to reuse a set of code between different projects without having to compile it for each one, static libraries are sufficient, and easier to reason about. A: It's usually best to have a conditional macro that imports or exports depending on which side is being built: #ifdef MODULE1 #define MODULE1_DECL __declspec(dllexport) #else #define MODULE1_DECL __declspec(dllimport) #endif This way you export the functions etc. that you want to export, and import what you want to use. For example, see this SO post. You #define MODULE1 (maybe as a project setting) in the project that exports the definitions, and use the MODULE1_DECL define rather than explicitly putting either __declspec(dllimport) or __declspec(dllexport) in your code. Read the manual for further details. Name mangling only happens in C++: it encodes namespaces, the parameters a function overload takes, etc., for disambiguation.
unknown
d12660
train
You are passing a string as the error instead of an object in the props for the Login component. Try a console.log of "errors" in the component where the Login component is rendered to see what value is getting set. A: PropTypes is expecting an object because of your propTypes definition: errors: PropTypes.object.isRequired. Use: Login.propTypes = { loginUser: PropTypes.func.isRequired, auth: PropTypes.object.isRequired, errors: PropTypes.string.isRequired }; If it's not required, you also have to define a defaultProp: Login.propTypes = { errors: PropTypes.string, } Login.defaultProps = { errors: '', } A: Make sure your link is not misspelled: export const loginUser=(userData)=>dispatch=>{ axios.post('/api/users/login',userData) not export const loginUser=(userData)=>dispatch=>{ axios.post('/api/user/login',userData) (users, not user)
unknown
d12661
train
I kept looking for a good solution but didn't find one. I've added events in the XAML for the TextBox and PasswordBox on GotFocus and KeyDown. In the code-behind, I can now handle the Enter key so that it moves focus to the next TextBox: private void RegisterTextBox_KeyDown(object sender, KeyRoutedEventArgs e) { TextBox currentTextBox = (TextBox)sender; if (currentTextBox != null) { if (e.Key == Windows.System.VirtualKey.Enter) { FocusManager.TryMoveFocus(FocusNavigationDirection.Next); } } } But I haven't yet managed to get the ScrollViewer autoscroll working the way I hoped. I tried to take inspiration from this link, but to no avail: Keyboard-Overlaps-Textbox => Is there really a way to reproduce the autoscroll that is used on the registration form for the Windows Store?
unknown
d12662
train
It's not getting corrupted, you're just losing floating point precision as the magnitude of your number increases. As your number gets larger and larger, the delta between each successive point gets larger as well. In IEEE754, the difference between 1.f and the next larger number is 0.0000001 At 200,000, the next larger number is 200,000.02. (courtesy of IEEE754 converter). I'm not even 100% positive what kind of FP precision GLSL uses (the quick ref card indicates it might be 14-bit mantissa in the fragment shader?). So in reality it could be even worse. If you're just looking at a small window of a large number, then the error will continue to grow. I suspect that as the precision goes down, your texture starts to look more and more 'blocky'. The only thing I can think of to do is to design your code such that the number does not have to grow unbounded forever. Is there a smarter way you could wrap the number over so that it doesn't have to get so large?
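You can reproduce this spacing effect outside of GLSL with ordinary IEEE754 single-precision arithmetic. A small Python sketch for illustration (standard float32; as noted above, actual GLSL fragment-shader precision may be even lower):

```python
import struct

def next_float32(x):
    """Next representable IEEE754 single-precision value above a positive x."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits + 1))[0]

for value in (1.0, 200000.0):
    print(value, "->", next_float32(value) - value)
# The gap between neighbouring values grows from ~0.0000001 at 1.0
# to 0.015625 (~0.02) at 200,000 - exactly the loss described above.
```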
unknown
d12663
train
You can write your function to recode the levels - the easiest way to do that is probably to change the levels directly with levels(fac) <- list(new_lvl1 = c(old_lvl1, old_lvl2), new_lvl2 = c(old_lvl3, old_lvl4)) But there are already several functions that do it out of the box. I typically use the forcats package to manipulate factors. Check out fct_recode from the forcats package. Link to doc. There are also other functions that could help you - check out the comments below. Now, as to why your code isn't working: * *df$col looks for a column literally named col. The workaround is to do df[[col]] instead. *Don't forget to return df at the end of your function *c(source = target) will create a vector with one element named "source", regardless of what happens to be in the variable source. The solution is to create the vector c(source = target) in 2 steps. revalue_factor_levels <- function(df, col, source, target) { to_rename <- target names(to_rename) <- source df[[col]] <- revalue(df[[col]], to_rename) df } Returning the df means the syntax is: data <- revalue_factor_levels(data, "lg_with_children", "Mandarin", "Other") I like functions that take the data as the first argument and return the modified data because they are pipeable. library(dplyr) data <- data %>% revalue_factor_levels("lg_with_children", "Mandarin", "Other") %>% revalue_factor_levels("lg_with_children", "Half and half", "Other") %>% revalue_factor_levels("lg_with_children", "N/A", "Other") Still, using forcats is easier and less prone to breaking on edge cases. Edit: There is nothing preventing you from both using forcats and creating your custom function. For example, this is closer to what you want to achieve: revalue_factor_levels <- function(df, col, ref_level) { df[[col]] <- forcats::fct_others(df[[col]], keep = ref_level) df } # Will keep Shanghaisese and revalue other levels to "Other". 
data <- revalue_factor_levels(data, "lg_with_children", "Shanghainese") A: Here is what I ended up with thanks to help from the community. revalue_factor_levels <- function(df, col, ref_level) { df[[col]] <- fct_other(df[[col]], keep = ref_level) df } data <- revalue_factor_levels(data, "lg_with_children", "Shanghainese")
unknown
d12664
train
Assuming you want a javascript object and not JSON, which disallows functions, HourSetup = Should be changed to: HourSetup : Also, as JonoW points out, your single line comments are including some of your code as the code is formatted in the post. A: There's a "=" that shouldn't be there. Change it to ":" A: JSON is a form of JavaScript deliberately restricted for safety. It cannot include dangerous code elements like function expressions. It should work OK in plain eval()ed JavaScript, but it's not JSON.
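The same restriction is easy to demonstrate with any strict JSON parser. A quick Python illustration (the key names here are just examples based on the snippet above):

```python
import json

# Key/value pairs in JSON use ':', never '='
valid = json.loads('{"HourSetup": {"Hour": 1}}')
print(valid["HourSetup"]["Hour"])  # 1

# A function literal is not legal JSON, so a strict parser rejects it
try:
    json.loads('{"onGetHour": "function(){}"}')  # a *string* value is fine
    json.loads('{"onGetHour": function(){}}')    # a bare function is not
    rejected = False
except json.JSONDecodeError:
    rejected = True
print(rejected)  # True
```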
unknown
d12665
train
For your xml provided in the XML. First create a java POJO class with fields as: String index; String tb_name; List<String> bitf_names; Use the class below for that: import java.util.List; class TestBus { private String index; private String tb_name; private List<String> bitf_names; public String getIndex() { return index; } public void setIndex(String index) { this.index = index; } public String getTb_name() { return tb_name; } public void setTb_name(String tb_name) { this.tb_name = tb_name; } public List<String> getBitf_names() { return bitf_names; } public void setBitf_names(List<String> bitf_name) { this.bitf_names = bitf_name; } @Override public String toString() { return "TestBus [index=" + index + ", tb_name=" + tb_name + ", bitf_name=" + bitf_names + "]"; } } After that, use the code below to create a list of TestBus classes: i.e List<TestBus> testBusList = new ArrayList<>(); Use this class for the complete code and logic: import java.io.File; import java.util.ArrayList; import java.util.List; import java.util.stream.IntStream; import javax.xml.parsers.DocumentBuilder; import javax.xml.parsers.DocumentBuilderFactory; import org.w3c.dom.Document; import org.w3c.dom.Element; import org.w3c.dom.Node; import org.w3c.dom.NodeList; public class ReadXMLFile { public static List<TestBus> testBuses = new ArrayList<>(); public static void main(String argv[]) { try { File fXmlFile = new File("D:\\ECLIPSE-WORKSPACE\\demo-xml-project\\src\\main\\resources\\testbus.xml"); DocumentBuilderFactory dbFactory = DocumentBuilderFactory.newInstance(); DocumentBuilder dBuilder = dbFactory.newDocumentBuilder(); Document doc = dBuilder.parse(fXmlFile); doc.getDocumentElement().normalize(); NodeList testBusNodeList = doc.getElementsByTagName("testbus"); for (int parameter = 0; parameter < testBusNodeList.getLength(); parameter++) { TestBus testBus = new TestBus(); Node node = testBusNodeList.item(parameter); if (node.getNodeType() == Node.ELEMENT_NODE) { Element eElement = (Element) 
node; String index = eElement.getElementsByTagName("index").item(0).getTextContent(); String tb_name = eElement.getElementsByTagName("tb_name").item(0).getTextContent(); NodeList bitf_name = eElement.getElementsByTagName("bitf_name"); List<String> bitf_namesList = new ArrayList<>(); IntStream.range(0, bitf_name.getLength()).forEach(bName -> { bitf_namesList.add(bitf_name.item(bName).getTextContent()); }); testBus.setIndex(index); testBus.setTb_name(tb_name); testBus.setBitf_names(bitf_namesList); testBuses.add(testBus); } } } catch (Exception e) { System.out.println("!!!!!!!! Exception while reading xml file :" + e.getMessage()); } testBuses.forEach(bus -> System.out.println(bus)); // in single line System.out.println("###################################################"); // using getters testBuses.forEach(bus -> { System.out.println("index = " + bus.getIndex()); System.out.println("tb_name = " + bus.getTb_name()); System.out.println("bitf_names = " + bus.getBitf_names()); System.out.println("#####################################################"); }); } }
unknown
d12666
train
Just keep one array, changeTyreSelectedOption = [], in state, and if the user selects any change-tyre option, push that index onto the changeTyreSelectedOption array like this: changeTyreSelectedOption.push(index). The condition to show the extra component will then be {changeTyreSelectedOption.includes(index) ? <View> <Select label="What type of Tyres" name={`timesheets.[${index}].tyresType`} items={[ { label: 'Front RH, LH', value: 'Front RH, LH', key: 1 }, { label: 'Rear RH, LH', value: 'Rear RH, LH', key: 2 }, { label: 'Spare', value: 'Spare', key: 3 }, ]} /> </View> : <View> {console.log('Hide Rock Breaker!')} </View> } to show the desired component. Note: you also need to remove the index from changeTyreSelectedOption if the user changes the option.
unknown
d12667
train
In the example the cache block size is 32 bytes, i.e., byte-addressing is being used; with four-byte words, this is 8 words. Since an entire block is loaded into cache on a miss and the block size is 32 bytes, to get the index one first divides the address by 32 to find the block number in memory. The block number modulo 32 (5-bit index) is the index. The block number divided by 32 is the tag. The trace would look like this: 0 miss <00000, 0000, mem[0..31]> 4 hit <00000, 0000, mem[0..31]> 16 hit <00000, 0000, mem[0..31]> 132 miss <00100, 0000, mem[128..159]> 232 miss <00111, 0000, mem[224..255]> 160 miss <00101, 0000, mem[160..191]> 1024 miss <00000, 0001, mem[1024..1055]> 30 miss <00000, 0000, mem[0..31]> 140 hit <00100, 0000, mem[128..159]> 3100 miss <00000, 0011, mem[3072..3103]> 180 hit <00101, 0000, mem[160..191]> 2180 miss <00100, 0010, mem[2176..2207]> As you can see there are four hits out of 12 accesses, so the hit rate should be 33%. (It looks like the provider of the answer thought there were 16 accesses.) Side comment: Since this is starting from an empty cache, there is only one conflict miss (the mem[30] access) and the remaining seven misses are compulsory (first access) misses. However, if this was the body of a loop, in each iteration after the first there would be four conflict misses at index 00000 (address 0, 1024, 30, 3100), two conflict misses at 00100 (addresses 132, 2180), and six hits (addresses 4, 16, 140 are hits whose cache blocks have been reloaded in each iteration, corresponding to conflict misses; addresses 232, 160, 180 are hits whose cache blocks were loaded in the first iteration and not evicted). 
Running through this trace, one can find that at the end the following blocks would be valid (being the last accessed for their index): <00000, 0011, mem[3072..3103]> <00100, 0010, mem[2176..2207]> <00101, 0000, mem[160..191]> <00111, 0000, mem[224..255]> Note how this differs from the given answer which is clearly wrong because with 32-byte blocks, no block would start at mem[16] and 1024 divided by 32 gives a block number (in memory) of 32, 32 modulo 32 is 0 (not 1) for the index. Incidentally, providing the byte range cached in the block is probably more helpful than just providing the starting address of the block.
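The trace above can also be checked mechanically. A small direct-mapped cache simulator in Python (illustrative, using the parameters from the example: 32-byte blocks, 32 sets, 5-bit index):

```python
BLOCK_SIZE = 32   # bytes per block
NUM_SETS = 32     # direct-mapped: one block per set

trace = [0, 4, 16, 132, 232, 160, 1024, 30, 140, 3100, 180, 2180]
cache = {}        # index -> tag of the block currently resident
hits = 0
for addr in trace:
    block_no = addr // BLOCK_SIZE
    index, tag = block_no % NUM_SETS, block_no // NUM_SETS
    if cache.get(index) == tag:
        hits += 1
    else:
        cache[index] = tag  # load the block, evicting any previous tag

print(f"{hits} hits out of {len(trace)} accesses")  # 4 hits out of 12 accesses
print(cache)  # final valid blocks: {0: 3, 4: 2, 7: 0, 5: 0}
```

This confirms both the 4/12 hit count and the four valid blocks listed above (index 0 holding tag 3, index 4 tag 2, index 5 tag 0, index 7 tag 0).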
unknown
d12668
train
got it: $(document).ready(function() { $.ajax({ type: "GET", url: "AJAX/DivideByZero", dataType: "json", success: function(data) { if (data) { alert("Success!!!"); } }, error: function(xhr, status, error) { DisplayError(xhr); } }); }); function DisplayError(xhr) { var msg = JSON.parse(xhr.responseText); alert(msg.Message); } A: What you can do is throw an exception in your GetBillingEntities method of your webservice. Catch the exception, log some details and then re-throw it. If your method throws an exception it should get caught in the "error:" block. So basically you handle the error data in your service and handle how to display an error to the user in your "error:" block.
unknown
d12669
train
Try to specify the background color with a selector like this: #menu-top-navigation > li:hover This way, you should be able to specify one gradient for the whole LI content (in your case UL and LI). I strongly recommend using CSS3 gradients. If this won't help, you might need to specify one gradient for #menu-top-navigation > li:hover {gradient color of XXX to YYY} and one for #menu-top-navigation > li:hover > ul {gradient color of YYY to ZZZ} Note: the gradient spec is just an example, not the actual syntax; I just wanted to make it easier to understand. Let me know if you have any questions.
unknown
d12670
train
Try replacing NSSet with Set<NSObject>: override func touchesBegan(touches: Set<NSObject>, withEvent event: UIEvent) { self.view.endEditing(true) } The syntax changed in Swift 1.2: NSSet was replaced by Set<NSObject>. See the blog post and the Xcode 6.3 release notes.
unknown
d12671
train
NavigationView: set SelectedItem for a sub-menu item in a UWP app. During testing, the problem is that the expand animation blocks the select animation, which makes the item indicator disappear. Currently we have a workaround that adds a task delay before setting SelectedItem, so the select animation runs after DashboardMenuItem has expanded. DashboardMenuItem.IsExpanded = true; Microsoft.UI.Xaml.Controls.NavigationViewItem selectedItem = (Microsoft.UI.Xaml.Controls.NavigationViewItem)DashboardMenuItem.MenuItems[0]; await Task.Delay(100); MainNavigation.SelectedItem = selectedItem;
unknown
d12672
train
The default model-binding setup supports an indexed format, where each property is specified against an index. This is best demonstrated with an example query-string: ?a[0].Number=1&a[0].Text=item1&a[1].Number=2&a[1].Text=item2 As shown, this sets the following key-value pairs: * *a[0].Number = 1 *a[0].Text = item1 *a[1].Number = 2 *a[1].Text = item2 This isn't quite covered in the official docs, but there's a section on collections and one on dictionaries. The approach shown above is a combination of these approaches.
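The convention itself is language-agnostic. Purely as an illustration, this Python sketch shows how the indexed keys group back into two objects (this is a hypothetical hand-rolled parser, not how ASP.NET Core implements model binding internally):

```python
import re
from urllib.parse import parse_qsl

query = "a[0].Number=1&a[0].Text=item1&a[1].Number=2&a[1].Text=item2"

items = {}
for key, value in parse_qsl(query):
    # match keys of the form a[<index>].<Property>
    m = re.fullmatch(r"a\[(\d+)\]\.(\w+)", key)
    if m:
        items.setdefault(int(m.group(1)), {})[m.group(2)] = value

result = [items[i] for i in sorted(items)]
print(result)  # [{'Number': '1', 'Text': 'item1'}, {'Number': '2', 'Text': 'item2'}]
```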
unknown
d12673
train
It seems that the <p> tag is not supported. See this Reference. You will have to find another solution. Did you try using <div> tags instead?
unknown
d12674
train
There are a few errors in your code. First of all, when you use single quotes in glob('$fileList/*'), it's the literal string $fileList/*. There's no variable substitution. If you want to put the $fileList value into the string, you need to either use double quotes (glob("$fileList/*")) or concatenation (glob($fileList . '/*')). The next error is that glob() returns an array, so $fileList is an array, and that means you cannot simply put it into a string. Now, depending on what you really want to do, you may want to take one particular result from glob(), or iterate over all of them and do what you want with each. I'll assume that you want to create index.html in each matched empty directory. So it would be: foreach($fileList as $filePath) { if (count(glob("$filePath/*")) == 0 ) { // rest of code } } Now, here's the crux of your problem: you're putting content into an index.html file in the current directory, because you don't tell file_put_contents where to do it. The first argument is a path, not just a filename. So when you passed the value index.html, it's a path relative to the current directory. You need to pass the whole path, which is probably $filePath . "/index.html", so it would be: $key = $filePath . "/index.html"; Finally, note that you may need additional validation, such as checking that the matched paths are indeed directories. Also, working with relative paths is a little risky; it would be better to rely on absolute paths. A: In line 3, instead of $key = 'index.html'; you should use: $key = $fileList[0].'/index.html'; It will create the file in that folder, not in the current directory.
unknown
d12675
train
@Html.Raw() is your friend here. See: https://learn.microsoft.com/en-us/aspnet/core/mvc/views/razor?view=aspnetcore-2.2 Note that this presents a bit of a security risk in that if a user enters an address containing JavaScript, iFrame, etc they can produce an undesired effect.
unknown
d12676
train
If you have access to the server then you can create a directory using the glassfish server user. Configure this path in some property file in your application and then use this property for reading and writing the file. This way you can configure different directory paths in different environments.
unknown
d12677
train
Here's how I resolved this issue in my context: the server is run through an Ant script with the JVM configured with an agent (the property name 'agentfile' below is associated with a value pointing to the agent library). I would get the error 'java result 1' whenever the server was run, without any indication of the actual error. Here's how the issue was debugged: 1) The agent was turned off, i.e. the two lines above were commented out. 2) When Ant was then run, the actual error message was clearly shown; the problem was that a class file was missing. This error was being swallowed by the agent, since it is low-level C code that simply tries to load a class it cannot find and throws the Java error. Lesson learned: if you have an agent, turn it off and then run Ant again; it may reveal the actual reason for the error.
unknown
d12678
train
select r1.empid, r1.date, r1.time as time_in, r2.time as time_out from raw_data r1 inner join raw_data r2 on r1.empid = r2.empid and r1.date = r2.date where r1.in_out = 'IN' and r2.in_out = 'OUT'; A: Ok, so you can tell the employee worked the night shift when his time_out is AM; in this case, it's the last row. What I did was determine a real date field: it is the day before when you clock out from the night shift, and the current date in any other case. select empid, IF(RIGHT(timeinout,2)='am' AND in_out='OUT', DATE_ADD(date, INTERVAL -1 DAY), date) as realdate, MAX(if(in_out='IN',timeinout,null)) as time_in, MAX(if(in_out='OUT',timeinout,null)) as time_out from shifts group by empid, realdate Depending on the table size, it might be worth using this approach just to save yourself a join. In almost any other case, a join is cleaner. I guess you have no control over the format of the input, so you'll have to stick to times as text and compare the am/pm suffix in the last 2 characters. I find that rather error prone, but let's pray the raw data will stick to that format. This solution makes a few assumptions that I'd rather explain here to avoid further misunderstandings: * *The workers can't work the night shift if they worked the day shift (since we're grouping by date, you would need an extra field to distinguish day shift and night shift for a given day) *Input will never list an OUT time earlier than an IN time for a given day/employee tuple (if it happens, this would need an extra verification step to guarantee consistent output) *Input will always include timein and timeout for a given shift (if it didn't, you would need an extra step to discard orphan time entries). A: Try this query and tell me if it works: SELECT empid, date, MAX(CASE WHEN in_out = 'IN' THEN time ELSE '' END) time_in, MAX(CASE WHEN in_out = 'OUT' THEN time ELSE '' END) time_out FROM raw_data GROUP BY empid, date
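If you want to sanity-check the pairing logic outside SQL, here is a small Python sketch (illustrative only, with made-up sample rows) mirroring the second query's idea of shifting an AM clock-out back to the previous day before grouping:

```python
from collections import defaultdict
from datetime import date, timedelta

# Hypothetical sample rows: (empid, date, time, in_out)
rows = [
    ("E1", date(2020, 1, 1), "09:00am", "IN"),
    ("E1", date(2020, 1, 1), "05:00pm", "OUT"),
    ("E2", date(2020, 1, 1), "09:00pm", "IN"),
    ("E2", date(2020, 1, 2), "05:00am", "OUT"),  # night-shift clock-out
]

shifts = defaultdict(dict)
for emp, day, time, in_out in rows:
    # An AM clock-out is attributed to the previous day's shift, like the
    # IF(RIGHT(timeinout,2)='am' AND in_out='OUT', ...) branch above.
    real_day = day - timedelta(days=1) if in_out == "OUT" and time.endswith("am") else day
    shifts[(emp, real_day)][in_out] = time

for (emp, day), pair in sorted(shifts.items()):
    print(emp, day, pair.get("IN"), pair.get("OUT"))
```

Both of E2's rows collapse into a single shift keyed on 2020-01-01, which is exactly what the realdate grouping achieves.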
unknown
d12679
train
After testing with our other microservices, we found out that this problem was related to the elasticsearch-py library rather than our elasticsearch configuration, as our other microservice, which is golang based, could perform sniffing with no problem. After further investigation we linked the problem to this open issue on the elasticsearch-py library: https://github.com/elastic/elasticsearch-py/issues/2005. The problem is that the authorization headers are not properly passed to the request made to Elasticsearch to discover the nodes. To my knowledge, there is currently no fix that does not involve altering the library itself. However, the error message is clearly misleading.
unknown
d12680
train
Other answers are assuming that you are already using a transaction. I won't omit this, since you might be missing it. You should use a transaction to ensure that records in all 15 tables are inserted/updated, or none are. The transaction gives your operation atomicity. If something fails during the stored procedure and you don't use a transaction, some of the save/update operations will be made, and some not (the ones from the query that produced the error). If you use BEGIN TRAN, and COMMIT for successful operations or ROLLBACK in case of failure, you will get all done or nothing. You should check for errors after each query execution, and call ROLLBACK TRANSACTION if there is one, or call COMMIT at the end of the stored procedure. There is a good sample in the accepted answer of this Stackoverflow question on how to handle a transaction inside a stored procedure. Once you have the transaction, the second part is how to avoid dirty reads. You can set the isolation level of your database to READ COMMITTED, which by default prevents dirty reads on your data. But the user can still choose to do dirty reads by specifying WITH (NOLOCK) or READUNCOMMITTED in his query. You cannot prevent that. Besides, there are the snapshot isolation levels (Snapshot and Read Committed Snapshot) that could prevent locking (which is not always good) while avoiding dirty reads at the same time. There is a lot of literature on this topic on the internet. If you are interested in snapshot isolation levels, I suggest you read this great article from Kendra Little at Brent Ozar. A: You can't prevent dirty reads from happening. It's the reader that does the dirty reads, not you (the writer). All you can do is ensure that your write is atomic, and that is accomplished by wrapping all writes in a single transaction. This way readers that do not issue dirty reads will see either all your updates or none (atomic).
If a reader chooses to issue dirty reads, there's nothing you can do about it. Note that changing your isolation level has no effect whatsoever on the reader's isolation level. A: All you need to do is ensure that the SQL Server isolation level is set appropriately. To eliminate dirty reads, it needs to be at Read Committed or higher (not at Read Uncommitted). Read Committed is the default setting out of the box. It might be worthwhile, however, to review the link above and see what benefits (and consequences) the higher settings provide. A: You can set the transaction isolation level to SERIALIZABLE by using the following statement before you BEGIN TRANSACTION: SET TRANSACTION ISOLATION LEVEL SERIALIZABLE; But be warned: it can slow down other users who try to update or insert data into your tables. You can also make use of the SNAPSHOT isolation level, which shows you only the last committed/saved data, but it makes extensive use of tempdb, which can also affect performance.
unknown
d12681
train
Finally, we solved this without changing code. As we all know, if the response is not consumed directly, the connection of a request will not be released, so in our code we often consume the response first. We solved this problem not with better code but by slightly modifying some parameters, like maxconnectionpoolsize, maxconnectionperroute and maxconnectiontimeout, based on our business scenario. After running it again, everything seems OK now. Hope this helps. A: You can set the parameters in either a properties or YML file like below. http: pool: size: 100 sockettimeout: 20000 defaultMaxPerRoute: 200 maxPerRoutes: - scheme: http host: localhost port: 8080 maxPerRoute: 100 - scheme: https host: {{URL}} port: -1 maxPerRoute: 200
unknown
d12682
train
Try below Code. HTML Code: <select class="form-control" id="modeles" [(ngModel)]="selectedValue"> <option *ngFor="let model of models" [value]="model.price">{{ model.name }}</option> </select> <p>{{selectedValue}}</p> Typescript Code: selectedValue: number; models = [ { idModele: 1, name: "Bianca", price: 500 }, { idModele: 2, name: "etc", price: 600 } ]; A: You can do this easily: Just add app.component.ts code like: export class AppComponent { selectedPrice:Number models:Array<Object> = [ {idModele: 1, model: "Bianca", price: 500}, {idModele: 2, model: "etc", price: 600} ] } and add app.component.html code like: <h1>Selecting Model</h1> <select [(ngModel)]="selectedPrice"> <option *ngFor="let model of models" [ngValue]="model.price"> {{model.model}} </option> </select> <hr> <p>{{ selectedPrice }}</p> A: Please try this in html <select . . . [(ngModel)]="selectedValue"> <option *ngFor="let model of models" [value]="model?.price">{{model.name}}</option> </select> {{selectedValue}} in ts selectedValue: number; models = [ { idModele: 1, name: "Bianca", price: 500 }, { idModele: 2, name: "etc", price: 600 } ];
unknown
d12683
train
I don't know if this is generic enough for you, but here's how I do something similar (note that type is a reserved word in Scala, so it has to be written in backticks): import spray.json.DefaultJsonProtocol case class Rejection(`type`: String, status: Int, message: String) object RejectionJsonProtocol extends DefaultJsonProtocol{ implicit val rejectionFormat = jsonFormat3(Rejection) } Then you can complete your routes with a Rejection, possibly in your RouteExceptionHandler like this: trait RouteExceptionHandlers extends HttpService { import RejectionJsonProtocol._ implicit def routeExceptionHandler(implicit log: LoggingContext) = ExceptionHandler { case e: UnsuccessfulResponseException => requestUri { uri => complete(e.response.status, Rejection("error", e.response.status.intValue, "Not found")) } } //other exceptions go here }
unknown
d12684
train
Are you just trying to mask the address, to make it look nicer or hide the fact that you're linking to to another website, or is it that you don't want people to know they can access that page without using your popup? If it's the former, then what you could do is make the page you open in window.open an iframe, and point the iframe to your actual page. They user could still access the target page, but only via your nicer looking url. The other option is to use something like a colorbox with an iframe instead of window.open, which will mask the address. Have a look at the Outside Webpage (iframe) example on this page. Of course whichever option you choose, someone smart can still track down the target url via the source code and go there directly. A: window.open('http://mysite/proxy.html') and in proxy.html : <html> <body> <iframe src="/realPage.html"></iframe> </body> </html> A: Simple solution open new tab after that add url to location.href. window.open('','_blank').location.href = "url"
unknown
d12685
train
Further investigation has revealed a large can of worms. It would seem that there is no properly implemented method to kill the request stream without potentially causing an error at the client. This question covers the difficulty of terminating a request early without causing an error: How to cancel HTTP upload from data events? I have decided to abort very long requests and to allow shorter ones to complete. In my application this should only normally abort requests that are probably a DOS attack (legitimate requests are usually short)
unknown
d12686
train
Well, let me sum this up: I guess you won't need most of them in JavaScript, which I am most familiar with and which is therefore "the language I'm thinking in". Where you will probably need the hexadecimal radix is when you want to convert a hexadecimal color code like #ffffff to an RGB color code. The octal radix is useful for Unix file permissions, but I read somewhere that it's deprecated in JavaScript. I can't imagine having permission handling run in JS anyway, because it would be a huge security issue; at most it would be read-only, just to output the permissions in the frontend. I couldn't find any reason to use a binary radix, though. Maybe someone can add a comment.
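To make both use cases concrete, here is a quick sketch (in Python rather than JavaScript, purely for illustration) of the hex-color conversion and the octal-permission reading mentioned above:

```python
def hex_to_rgb(color):
    """Convert a hex color code like '#ffffff' to an (r, g, b) tuple."""
    color = color.lstrip("#")
    # each pair of hex digits is one channel, parsed with radix 16
    return tuple(int(color[i:i + 2], 16) for i in range(0, 6, 2))

print(hex_to_rgb("#ffffff"))  # (255, 255, 255)
print(hex_to_rgb("#1e90ff"))  # (30, 144, 255)

# Octal radix for Unix file permissions: "755" means rwxr-xr-x
print(int("755", 8))  # 493
```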
unknown
d12687
train
I think there are a couple things that could cause your problem. 1) Are you sure that you have added the most recent FBSDKCoreKit and FBSDKLoginKit and added these lines to the top of your swift file: import FBSDKCoreKit import FBSDKLoginKit 2) I have read that this is just an error on the simulator and should be ignored. Make sure that you are testing on a real device for this feature. 3) Have you included the FBSDKLoginButtonDelegate in the heading of your class declaration? 4) I would make sure that your info.plist source code is all included and correct. Here is what I have added for Facebook in my app: <key>LSApplicationQueriesSchemes</key> <array> <string>fbapi</string> <string>fb-messenger-api</string> <string>fbauth2</string> <string>fbshareextension</string> </array> <key>CFBundleURLTypes</key> <array> <dict> <key>CFBundleURLSchemes</key> <array> <string> fb + FACEBOOKAPPIDFROMTHEFACEBOOKDEVCONSOLE </string> </array> </dict> </array> <key>FacebookAppID</key> <string>FACEBOOKAPPIDFROMTHEFACEBOOKDEVCONSOLE</string> <key>FacebookDisplayName</key> <string>NAME OF YOUR APP</string> <key>NSAppTransportSecurity</key> <dict> <key>NSExceptionDomains</key> <dict> <key>facebook.com</key> <dict> <key>NSIncludesSubdomains</key> <true /> <key>NSThirdPartyExceptionRequiresForwardSecrecy</key> <false/> </dict> <key>fbcdn.net</key> <dict> <key>NSIncludesSubdomains</key> <true/> <key>NSThirdPartyExceptionRequiresForwardSecrecy</key> <false/> </dict> <key>akamaihd.net</key> <dict> <key>NSIncludesSubdomains</key> <true/> <key>NSThirdPartyExceptionRequiresForwardSecrecy</key> <false/> </dict> </dict> </dict>
unknown
d12688
train
I would start like this; this assumes you only want one row/record per farm/shed. import pandas as pd import functools # dataframe is df # First combine horizontally: get the average weight into one column weight_cols = [col for col in df.columns if col.startswith("Average weight")] df2 = ( df.assign(**{ 'Average weight': lambda df: functools.reduce(pd.Series.combine_first, [df[col] for col in weight_cols]) }) .drop(columns=weight_cols) ) # Get the first value from each column in each group grouped_by = ['Farm', 'Shed'] remaining_cols = df2.columns.difference(grouped_by, sort=False) (df2.groupby(grouped_by) .agg({col: "first" for col in remaining_cols}) .reset_index()) If you want to safeguard against duplicated data, more checks are needed.
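To see what the combine_first reduction does, here is a small self-contained run on made-up data (assuming pandas is installed; the column names are invented for illustration):

```python
import functools

import pandas as pd

# Two sparse "Average weight" columns: each row is filled in at most one.
df = pd.DataFrame({
    "Farm": ["A", "A", "B"],
    "Shed": [1, 1, 2],
    "Average weight 2019": [10.0, None, 30.0],
    "Average weight 2020": [None, 20.0, None],
})

weight_cols = [c for c in df.columns if c.startswith("Average weight")]
# Fold the columns left to right, keeping the first non-missing value per row.
merged = functools.reduce(pd.Series.combine_first,
                          [df[c] for c in weight_cols])
print(merged.tolist())  # [10.0, 20.0, 30.0]
```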
unknown
d12689
train
Calling clearAnimation() on your SwipeRefreshLayout before you replace the Fragment should do the trick. A: It's a bug in SwipeRefreshLayout which has not been fixed for a long time despite having enough stars. You can keep track of the issue here.
unknown
d12690
train
You can use .dropna() and .drop_duplicates(). parsed_data=parsed_data.drop_duplicates() parsed_data.dropna(how='all', inplace = True) # does the operation in place on your dataframe and returns None. parsed_data= parsed_data.dropna(how='all') # does not operate in place and returns a dataframe as the result. # Hence this result must be assigned to your dataframe If .dropna() directly is not working for you then you may use .replace('', numpy.NaN, inplace=True). Or you can try this too: json_df=json_df[json_df['SRC/Word1'].str.strip().astype(bool)] This is faster than doing .replace() if you have empty strings. And now that we have cleaned it, we can just use .values.tolist() to get those values in a list. A: Yup! Pandas has built-in functions for all of these operations: import pandas as pd df = pd.read_json('stela_zerrl_t01_201222_084053_test_edited.json', lines=True) series = df['SRC/Word1'] no_dupes = series.drop_duplicates() no_blanks = no_dupes.dropna() final_list = no_blanks.tolist() If you want a NumPy array rather than a Python list, you can change the last line to the following: final_array = no_blanks.to_numpy() A: Drop the duplicates, replace empty strings with NaN, then create the list. >>> df.drop_duplicates().replace('', np.NAN).dropna().values.tolist() [['E1F25701'], ['E15511D7']] PS: Since what you have is a dataframe, the result will be a 2D list; if you want a 1D list, you may do it for the specific column: >>> df['SRC/Word1'].drop_duplicates().replace('', np.NAN).dropna().tolist() ['E1F25701', 'E15511D7'] What you have is not an empty string, but a whitespace character. Try this: replace \s+ with np.NAN using regex=True: >>>df['SRC/Word1'].drop_duplicates().replace('\s+', np.NAN, regex=True).dropna().tolist() ['E1F25701', 'E15511D7'] And apparently, the below will also work: df['SRC/Word1'].drop_duplicates().str.strip().replace('', np.NAN).dropna().tolist() ['E1F25701', 'E15511D7']
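A minimal self-check of the strip/replace/dropna/drop_duplicates chain from the answers above, run on made-up data (assuming pandas and NumPy are installed):

```python
import numpy as np
import pandas as pd

# Made-up column with duplicates, blanks, whitespace and a missing value.
s = pd.Series(["E1F25701", "  ", "E1F25701", "", "E15511D7", None])

cleaned = (s.drop_duplicates()      # drop repeated values
            .str.strip()            # turn "  " into ""
            .replace("", np.nan)    # empty strings become NaN
            .dropna()               # drop NaN (covers None as well)
            .tolist())
print(cleaned)  # ['E1F25701', 'E15511D7']
```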
unknown
d12691
train
Try out these steps: * *Stop your MySQL server completely. This can be done from Wamp (if you use it), or start “services.msc” from the Run window and stop the service there. *Open an MS-DOS command prompt using “cmd” in the Run window. Then go to your MySQL bin folder, such as C:\MySQL\bin. The path is different if you use Wamp. *Execute the following command in the command prompt: mysqld.exe -u root --skip-grant-tables *Leave the current MS-DOS command prompt as it is, and open a new MS-DOS command prompt window. *Go to your MySQL bin folder again. *Enter “mysql” and press enter. *You should now have the MySQL command prompt working. Type “use mysql;” so that we switch to the “mysql” database. *Execute the following command to update the password: UPDATE user SET Password = PASSWORD('your_new_password') WHERE User = 'root'; However, you can now run almost any SQL command that you wish. After you are finished, close the first command prompt, and type “exit;” in the second command prompt. You can now start the MySQL service. That’s it.
unknown
d12692
train
Here are two solutions: 1) This is the answer I got from Microsoft: In the list view, the WinJS.UI.GridLayout's viewport is loaded horizontally. You need to change the viewport's orientation to vertical. You can do this by attaching to the onloadingstatechanged event. args.setPromise(WinJS.UI.processAll().then(function () { listview.winControl.onloadingstatechanged = function myfunction() { if (listview.winControl.loadingState == "viewPortLoaded") { var viewport = listview.winControl.element.getElementsByClassName('win-viewport')[0]; viewport.className = 'win-viewport win-vertical'; } } })); and change the class win-horizontal to win-vertical. 2) The problem can also be solved by adding a standard HTML header and the list below the header, without the data-win-options="{ header: select('.header') }" attribute. In that case we need to calculate the height of the list: .list { height: calc(100% - headerHeight); }
unknown
d12693
train
I figured it out - at least a hacky answer - and will post my solution in case others can use it. Basically I adjusted the font size of the axis text, and used the scales package to keep the notation consistent (i.e. to get rid of scientific notation). My altered code is: ggplot(melt.df, aes(x = value)) + geom_histogram(bins=50,na.rm=TRUE) + facet_wrap(~variable, scales="free") + theme_bw()+ theme(axis.text=element_text(size=10),text=element_text(size=16))+ scale_x_continuous(labels = scales::comma) A: If you don't want the commas in the labels, you can set something like options(scipen = 10) before plotting. This raises the threshold for using scientific notation, so ordinary labels will be used in this case.
unknown
d12694
train
Generics with Bound Parameters (no wildcards) * *Is my inference correct as in the ICommand definition? No. Two reasons: * *You have written a small 'o' while passing it to Mediator. (I guess it's just a typing mistake.) *You passed IObserver<T> instead of O to ISubject, which would definitely cause a parameter bound mismatch. Correct Version: interface ICommand<T, M extends IMediator<T, S, O>, S extends ISubject<T, O>, O extends IObserver<T>> * *How to interpret the above case studies? * *The first thing you'd need to understand is that you have one unknown type T and five interfaces. *Therefore you would have six concrete types in total, which have to be included progressively in the interface declarations. (You explicitly asked not to bother about the rationale of the design.) *If you write them in the correct order, it becomes much more manageable. Interface declarations: interface IObserver<T> interface ISubject<T, O extends IObserver<T>> interface IMediator<T, O extends IObserver<T>, S extends ISubject<T,O>> interface ICommand<T, O extends IObserver<T>, S extends ISubject<T, O>, M extends IMediator<T, O, S>> interface IProducerConsumer<T, O extends IObserver<T>, S extends ISubject<T, O>, M extends IMediator<T, O, S>, C extends ICommand<T, O, S, M>> * *What are the best definitions assuming that I want to insert T and must be able to get and put? * *If you want to get and put objects of type T, what you probably need is a bunch of interfaces which take only one parameter T. Generics will enforce that all are compatible, as T will be replaced by the same type everywhere. *Your current system is too rigid. In a real scenario, you would never have so many implementations of these interfaces (unless you are re-implementing Facebook in Java) that you'd have many possible combinations of the implementations whose compatibility you want to ensure. *Generics enforce type safety by applying restrictions, which is good. But you should not put restrictions in place just because you can. You lose flexibility, readability and maintainability of your code. *You should add bounds only when you need them. They should not affect the design in any way before the contracts between interfaces have been decided. Possibly sufficient way: interface IObserver<T> interface ISubject<T> interface IMediator<T> interface ICommand<T> interface IProducerConsumer<T> * *What are the rules and relations of the type parameter definitions in interfaces and implementing classes? * *The only relation between type parameters in interfaces and an implementing class that I can think of is that the implementing class has to provide a type to replace the generic type parameter. *In some cases, that type can again be a generic type, in which case the responsibility of providing a concrete type is forwarded to the code using the class reference, or to another class which extends that class. It may even be recursive! *The rules are not written in the language; instead, you are applying all of your rules on this mechanism when you make any type parameter bound. So as long as you are supplying a type which qualifies against all of your rules, you are good to go. *More rules means more robust but less flexible/readable code. So make the trade-off wisely. Two simple cases: // General way private class ProductObserver implements IObserver<Product> { } private ProductObserver productObserver; // Aspect oriented way private class LoggerObserver<T> implements IObserver<T> { } private LoggerObserver<Product> loggerObserver; * *Lastly, I'd suggest you read the (comprehensive) Java Generics FAQ by Angelika Langer if you have any further doubts. *If you keep experimenting like this, you might as well end up inventing a design pattern. Don't forget to share it with us when you do :D Hope this helps. Good luck.
unknown
d12695
train
Remove the double quotes around ([^"]*) in the step definition; there are no quotes in the feature file. When(/^I search for the word ([^"]*)$/, function(word){}); enter_SearchText(text) { var me = this; // wait 15 seconds return browser.sleep(15*1000).then(function(){ return element(me.eaautomation).sendKeys(text); }); }
unknown
d12696
train
The issue you are seeing is because the pager savePages option is set to true by default. In order for that option to work as expected, you need to include the storage widget, contained in the jquery.tablesorter.widgets.js file. Without the storage widget, the pager is not able to save the last user-set page into local storage and/or cookies, depending on browser support. If you don't want the pager to "remember" the last user-set page size, then set that option to false: $(function() { $("table") .tablesorter().tablesorterPager({ container: $("#pager"), savePages: false, size:5 }); });
unknown
d12697
train
Use Flexbox Layout to design a flexible responsive layout structure: * {box-sizing: border-box;} /* Style Row */ .row { display: -webkit-flex; -webkit-flex-wrap: wrap; display: flex; flex-wrap: wrap; } /* Make the columns stack on top of each other */ .row > .column { width: 100%; padding-right: 15px; padding-left: 15px; } /* Responsive layout - makes a two-column layout (50%/50% split) */ @media screen and (min-width: 600px) { .row > .column { flex: 0 0 50%; max-width: 50%; } } /* Responsive layout - makes a four-column layout (25% each) */ @media screen and (min-width: 991px) { .row > .column { flex: 0 0 25%; max-width: 25%; } } <p>Resize this frame to see the responsive effect!</p> <div class="row"> <!-- First Column --> <div class="column" style="background-color: #dc3545;"> <h2>Column 1</h2> <p>Some Text...</p> <p>Some Text...</p> <p>Some Text...</p> </div> <!-- Second Column --> <div class="column" style="background-color: #ffc107;"> <h2>Column 2</h2> <p>Some Text...</p> <p>Some Text...</p> <p>Some Text...</p> </div> <!-- Third Column --> <div class="column" style="background-color: #007eff;"> <h2>Column 3</h2> <p>Some Text...</p> <p>Some Text...</p> <p>Some Text...</p> </div> <!-- Fourth Column --> <div class="column" style="background-color: #28a745;"> <h2>Column 4</h2> <p>Some Text...</p> <p>Some Text...</p> <p>Some Text...</p> </div> </div> Reference: Create a Mixed Column Layout
unknown
d12698
train
Putting the two floats in there side by side makes the parent container's height effectively 0. You can put a div with a style="clear:both;" before the parent's closing tag and you will get your background back. <div class="introduction"> <div class="image"> <img src="" /> </div> <div class="text"> <p> Text </p> <p>Good luck!!</p> </div><div style="clear:both;"></div></div> A: Something like this may achieve what you want: <style> .introduction{ margin:0px; padding:5px; background-color:orange; } .introduction img{ float:left; padding:10px; } .introduction p{ padding-left:50px; background-color:blue; } </style> <div class="introduction"> <img src="img/can_25.png" /> <p>Text</p> <p>Good Luck</p> </div> Since your p's aren't floated, they will hold your div open, depending on the size of your image. A: I would suggest 2 solutions: 1) If you want your output to look like this: IMAGE TEXT IMAGE TEXT TEXT HTML: <div class="whole"> <img class="ib" src="myimg.png" alt="my img" /> <p class="ib">my text my text my text my text my text my text my text is so short I will die alone</p> </div> CSS: .ib{ display:inline-block; vertical-align:top;} img.ib{width:30%; border:0;} p.ib{width:70%;margin:0;padding:0;} 2) or like this: IMAGE TEXT TEXT TEXT TEXT HTML: <img src="myimg.png" alt="my img" class="leftimg" /> <p>my text my text my text is so short I will die alone</p> CSS: .leftimg { float:left; margin-right:5px; } DEMO: http://jsfiddle.net/goodfriend/b66J8/37/
unknown
d12699
train
For points 1 and 2 you could try WxMpl : http://agni.phys.iit.edu/~kmcivor/wxmpl/ It's a module for matplolib embedding in wxPython. Zooming in/out works out of the box.
unknown
d12700
train
The first thing I would try would be to double-check that the python you're calling is the same one that you've installed pygrib into: * *$ which python *$ python -c 'help("modules")' *$ python -c 'help("modules pygrib")' (to inspect which python you're calling, and which packages are installed there). If that's not working, then there's something off with the Ubuntu package - so I'd then try installing from source instead; From https://github.com/jswhit/pygrib Clone the github repository, or download a source release from https://pypi.python.org/pypi/pygrib. Copy setup.cfg.template to setup.cfg, open it in a text editor, and follow the instructions in the comments for editing. If you are using the old grib_api library instead of the new eccodes library, be sure to uncomment the last line of setup.cfg. Run 'python setup.py build' Run 'python setup.py install' (with sudo if necessary) Run 'python test.py' to test your pygrib installation.
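The interpreter/module check in the first step can also be done from inside Python itself; this sketch uses the stdlib json module as a stand-in, since pygrib may not be importable yet:

```python
import importlib.util
import sys

# Which interpreter is actually running? Compare this with `which python`.
print(sys.executable)

# Where does a module resolve from for this interpreter? find_spec()
# returns None for a top-level module that is not installed, which is
# exactly the symptom described in the question.
json_spec = importlib.util.find_spec("json")  # stdlib stand-in for pygrib
print(json_spec.origin)

missing = importlib.util.find_spec("some_module_that_does_not_exist_xyz")
print(missing)  # None
```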
unknown