d7001
train
Since you are not showing how you compile the code, could you check that you are linking against the multi-threaded Intel MKL libs and e.g. pthreads? For example (this is for an older version of MKL): THREADING_LIB="$(MKL_PATH)/libmkl_$(IFACE_THREADING_PART)_thread.$(EXT)" OMP_LIB = -L"$(CMPLR_PATH)" -liomp5 There should be an examples directory in your MKL distribution, e.g. intel/composer_xe_2011_sp1.10.319/mkl/examples. In there you can check the contents of spblasc/makefile to see how to correctly link against the multithreaded libs for your particular version of MKL. Another suggestion that should speed things up is adding compiler optimisation flags, e.g. OPT_FLAGS = -xHost -O3, to allow icc to generate optimised code for your architecture, so your line would end up as: icc mkl_test_mp.cpp -mkl -std=c++0x -openmp -xHost -O3
unknown
d7002
train
In order to link your AdSense account to your Developer account, you MUST first create a Google Checkout account. After creating one, you will see on your developer console a new request to link your account to your AdSense account (by entering your pub-xxxxxxxxxxx id).
unknown
d7003
train
You need to do a couple of things. First you need to bind the selected value of your <select> element to a field. For that you need to use the @bind attribute: <label for="State">Choose a State:</label> <select id="State" @bind="selectedState"> <option value="">Choose a state</option> @foreach (var item in branch) { <option value="@item.Id">@item.Description</option> } </select> @code { private int? selectedState; } Also add an option with empty value in your select so that by default no state is selected: <option value="">Choose a state</option> Now you can create a property that returns the filtered schools based on the selected state: private List<SchoolTable> FilteredSchools => selectedState.HasValue ? schools.Where(s => s.State == selectedState.Value).ToList() : schools; Use this property to generate the <table> element content: <label for="State">Choose a State:</label> <select id="State" @bind="selectedState"> <option value="">Choose a state</option> @foreach (var item in branch) { <option value="@item.Id">@item.Description</option> } </select> <table style="width:50%; margin-left:710px; border:1px solid black" border="1" class="table-bordered"> <tr bgcolor="#ffffff" style="border:1px solid black"> <th style="border:1px solid black">Schools</th> @foreach (var item in FilteredSchools) { <th style="border:1px solid black">@item.Name</th> <th style="border:1px solid black">@item.State</th> } </tr> </table> @code{ private List<SchoolTable> schools = new List<SchoolTable>(); private int? selectedState; private List<SchoolTable> FilteredSchools => selectedState.HasValue ? schools.Where(s => s.State == selectedState.Value).ToList() : schools; protected override void OnInitialized() { schools = SchoolService.GetSchoolTable(); } }
unknown
d7004
train
I have solved it myself. I realised that it doesn't work like I thought and that I have to add a reference to the manager directly in MyDataSource. After that it works to $expand the manager.
unknown
d7005
train
AFAIK, this scenario is not implemented yet. Please provide your feedback on UserVoice. All the feedback you share in these forums will be monitored and reviewed by the Microsoft engineering teams responsible for building Azure. Reference: How to create IoT edge device on IoT central?
unknown
d7006
train
$varname = 'variable' . $count; $$varname = $r['somefield']; http://www.php.net/manual/en/language.variables.variable.php A: You'd be better off with an array... $variable[] = $r['somefield']; You can use variable variables, however it is probably not a good idea, especially for a trivial case like this one.
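The same array-over-variable-variables advice carries over to other languages. As a rough, hedged Python analogue (the field name somefield and the row data are made up for illustration), a plain list, or a dict keyed by the generated name, gives you everything dynamic variable names would, without the downsides:

```python
# Illustrative sketch: instead of generating names like variable1, variable2, ...
# collect values in a list, or (if you really need the names) a dict.
rows = [{"somefield": "a"}, {"somefield": "b"}, {"somefield": "c"}]

variables = []  # the PHP $variable[] = ... idiom
named = {}      # the PHP $$varname idiom, done safely via a dict
for count, r in enumerate(rows, start=1):
    variables.append(r["somefield"])
    named["variable" + str(count)] = r["somefield"]

print(variables)           # ['a', 'b', 'c']
print(named["variable2"])  # 'b'
```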
unknown
d7007
train
There are many tools. Below are my suggestions. For GET, I usually just type it in the browser's URL bar. For POST or all other including GET, I use cURL from the command line like so: curl -X POST \ https://saturnapi.com/access/demo/demo \ -H 'saturnapi-access-key':'API_KEY' \ -d 'SaturnParams'='28' \ -H specifies the headers, which in this case is my API key. -d specifies the data, which in this case is the SaturnParams field. For more on cURL, see docs here.
unknown
d7008
train
This is the default behaviour of a form submission in the browser. You have registered a showSpinnerSignUp() method on the click of a button, while the click on that same button is responsible for submitting the enclosing form. Since the browser's built-in behaviour for submitting forms makes a full round trip to the server, it immediately interrupts your onclick code because the browser is navigating away from the current page. It is as if you had clicked your button and refreshed the page immediately. Your current setup might work when you deploy to the production server, but it is unlikely to work locally, because a local development server responds much faster, so the page refresh from the form submission happens almost immediately. To verify this is happening, open your browser's console and call the method showSpinnerSignUp() directly (remember to remove it from $(function() {}) temporarily) and see if the spinner shows up. Also, there is an option to Preserve Logs which keeps the JavaScript console logs even after a page refresh. So, put a simple console.log() inside your method and then try hitting that button. You'll see your method is called but no spinner is displayed due to the form submission.
unknown
d7009
train
In Xcode 4.2 running on an iOS 5 device, I was able to get [pickerView reloadAllComponents] to change the number of components in a UIPickerView. It works exactly as you originally expected it to. Calling reloadAllComponents caused the numberOfComponentsInPickerView method to get called. A: I found a solution for my problem. Instead of trying to make the UIPickerView reload and change the number of components dynamically, I use two UIPickerViews: one has 1 component, and the other has two. I switch between these two when the user selects different rows of data in the table view. Hope this can help if other people have a similar need.
unknown
d7010
train
Mohammed, The Pinterest web service tries to access the image you send in the 'media' parameter in order to display the image in the Pinterest popup. If the Pinterest service cannot locate the image at the specified path (HTTP status 404), or does not have access to the image (for example status 403 - forbidden), you will see the error you mention above. I suggest you ensure the image is located at a URL that will respond with HTTP status 200 - success. A: I discovered that I was using a local domain, which is not accessible by Pinterest. I uploaded all files to an online server with a real domain and the problem was solved.
unknown
d7011
train
There are two things that can make your event binding code slow: the selector and the number of bindings. The most critical of the two is the number of bindings, but the selector could impact your initial performance. As far as selectors go, just make sure you don't use pure class name selectors like .myclass. If you know that the class of myclass will always be in a <div> element, make your selector be div.myclass as it will help jQuery find the matching elements faster. Also, don't take advantage of jQuery letting you give it huge selector strings. Everything it can do with string selectors it can also do through functions, and this is intentional, as it is (marginally, admittedly) faster to do it this way as jQuery doesn't have to sit around to parse your string to figure out what you want. So instead of doing $('#myform input:eq(2)'); you might do $('input','#myform').eq(2);. By specifying a context, we are also not making jQuery look anywhere it doesn't have to, which is much faster. More on this here. As far as the number of bindings: if you have a relatively medium-sized number of elements then you should be fine - anything up to 200, 300 potential element matches will perform fine in modern browsers. If you have more than this you might want to instead look into Event Delegation. What is Event Delegation? Essentially, when you run code like this: $('div.test').click(function() { doSomething($(this)); }); jQuery is doing something like this behind the scenes (binding an event for each matched element): $('div.test').each(function() { this.addEventListener('click', function() { doSomething(this); }, false); }); This can get inefficient if you have a large number of elements. With event delegation, you can cut the number of bindings down to one. But how? The event object has a target property that lets you know what element the event acted on. 
So you could then do something like this: $(document).click(function(e) { var $target = $(e.target); if($target.is('div.test')) { // the element clicked on is a DIV // with a class of test doSomething($target); } }); Thankfully you don't actually have to code the above with jQuery. The live function, which is advertised as an easy way to bind events to elements that do not yet exist, is actually able to do this by using event delegation and checking, at the time an action occurs, whether the target matches the selector you specify to it. This has the side effect, of course, of being very handy when speed is important. The moral of the story? If you are concerned about the number of bindings your script has, just replace .bind with .live and make sure you have smart selectors. Do note, however, that not all events are supported by .live. If you need something not supported by it, you can check out the livequery plugin, which is live on steroids. A: Basically, you're not going to do any better. All it is doing is calling addEventListener() on each of your selected elements. On parse time alone, this method is probably quicker than setting inlined event handlers on each element. Generally, I would consider this to be a very inexpensive operation.
unknown
d7012
train
Azure Web services are deployed to a Windows Server 2016. You can access the log files and change the location of the log files using * *Open the url https://<app service name>.scm.azurewebsites.net/DebugConsole (or choose Menu > Debug Console > CMD) *Type the dir command to go to the location of the tomcat install. For example dir D:\Program Files (x86)\apache-tomcat-8.5.6\conf *Alternatively you can also click on the folder name displayed above the command/terminal *In your case, to enable detailed logging, open logging.properties under \Program Files (x86)\apache-tomcat-8.5.6\conf and set 1catalina.org.apache.juli.FileHandler.level = FINE. You can read more info at Documentation on Apache Tomcat 8.5 Logging Sample screen from my Tomcat setup as a webservice:
unknown
d7013
train
You don't need to quote the values in the connection string. <add name="Database1" connectionString="Data Source=170.21.191.85;Initial Catalog=Database1;User ID=sa;Password=final"/>
unknown
d7014
train
I created a simple example of what you need to do in order to create your polynomial features from scratch. The first part of the code creates the result from Scikit Learn: from sklearn.preprocessing import PolynomialFeatures import pandas as pd import numpy as np df = pd.DataFrame.from_dict({ 'x': [2], 'y': [5], 'z': [6]}) p = PolynomialFeatures(degree=2).fit(df) f = pd.DataFrame(p.transform(df), columns=p.get_feature_names(df.columns)) print('deg 2\n', f) p = PolynomialFeatures(degree=3).fit(df) f = pd.DataFrame(p.transform(df), columns=p.get_feature_names(df.columns)) print('deg 3\n', f) The result looks like: deg 2 1 x y z x^2 x y x z y^2 y z z^2 0 1.0 2.0 5.0 6.0 4.0 10.0 12.0 25.0 30.0 36.0 deg 3 1 x y z x^2 x y x z y^2 y z z^2 x^3 x^2 y x^2 z x y^2 x y z x z^2 y^3 y^2 z y z^2 z^3 0 1.0 2.0 5.0 6.0 4.0 10.0 12.0 25.0 30.0 36.0 8.0 20.0 24.0 50.0 60.0 72.0 125.0 150.0 180.0 216.0 Now to create a similar feature without Scikit Learn, we can write our code like this: row = [2, 5, 6] #deg = 1 result = [1] result.extend(row) #deg = 2 for i in range(len(row)): for j in range(len(row)): res=row[i]*row[j] if res not in result: result.append(res) print("deg 2", result) #deg = 3 for i in range(len(row)): for j in range(len(row)): for z in range(len(row)): res=row[i]*row[j]*row[z] if res not in result: result.append(res) print("deg 3", result) The result looks like: deg 2 [1, 2, 5, 6, 4, 10, 12, 25, 30, 36] deg 3 [1, 2, 5, 6, 4, 10, 12, 25, 30, 36, 8, 20, 24, 50, 60, 72, 125, 150, 180, 216] To get the same results recursively, you can use the following code: row = [2, 5, 6] def poly_feats(input_values, degree): if degree==1: if 1 not in input_values: result = input_values.insert(0,1) result=input_values return result elif degree > 1: new_result=[] result = poly_feats(input_values, degree-1) new_result.extend(result) for item in input_values: for p_item in result: res=item*p_item if (res not in result) and (res not in new_result): new_result.append(res) return new_result 
print('deg 2', poly_feats(row, 2)) print('deg 3', poly_feats(row, 3)) And the results will be: deg 2 [1, 2, 5, 6, 4, 10, 12, 25, 30, 36] deg 3 [1, 2, 5, 6, 4, 10, 12, 25, 30, 36, 8, 20, 24, 50, 60, 72, 125, 150, 180, 216] Also, if you need to use Pandas data frame as an input to the function, you can use the following: def get_poly_feats(df, degree): result = {} for index, row in df.iterrows(): result[index] = poly_feats(row.tolist(), degree) return result
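An alternative from-scratch sketch avoids the `res not in result` duplicate check, which silently drops a feature whenever two different term products collide (e.g. for a row like [2, 2, 3], the second 2 and the product 2*2 vs. other terms). Building the terms with itertools.combinations_with_replacement mirrors how PolynomialFeatures enumerates monomials, so every term is kept:

```python
from itertools import combinations_with_replacement
from math import prod

def poly_features(row, degree):
    # One term per monomial x_i * x_j * ... up to the given degree,
    # including the constant term 1 (the empty product for d = 0).
    return [prod(combo)
            for d in range(degree + 1)
            for combo in combinations_with_replacement(row, d)]

print(poly_features([2, 5, 6], 2))
# [1, 2, 5, 6, 4, 10, 12, 25, 30, 36]
print(poly_features([2, 5, 6], 3))
# [1, 2, 5, 6, 4, 10, 12, 25, 30, 36, 8, 20, 24, 50, 60, 72, 125, 150, 180, 216]
```

Unlike the membership-check version, a row with repeated values such as [2, 2, 3] still yields the full 1 + 3 + 6 = 10 degree-2 terms.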
unknown
d7015
train
Chrome and Firefox's developer tools allow you to modify JS on the fly. If you're on Chrome, open up the console by going to the menu View->Developer->JavaScript Console. Copy the js from the page source. Alter it. Then paste altered javascript function(s) into the console. Hit enter. Then start typing 'solvePuzzle();' Hit enter. You'll see the response come back. For Firefox, you'll need to download Firebug plugin. A: You cannot do this from JavaScript due to the same origin policy: https://developer.mozilla.org/en/Same_origin_policy_for_JavaScript. If this weebly site supports some sort of JSON API, you could use JSONP: http://en.wikipedia.org/wiki/JSONP. Other than that, you're probably better off interacting with this site via the server side due to security restrictions on the client side. A: Consider installing a HTTP tunnel on your "mysite.com" so that the browser does not have to access "weebly.com" directly.
unknown
d7016
train
Why does the combination of const range and const auto& lambda argument fail to compile, while passing a mutable range works and taking the lambda argument by value works? First, the operator*() of the iterator of flat_map is defined as follows: reference operator*() const { return reference{*kit_, *vit_}; } And the type of reference is pair, this means that operator*() will return a prvalue of pair, so the parameter type of the lambda cannot be auto&, that is, an lvalue reference, because it cannot bind to an rvalue. Second, const flat_map does not model the input_range concept, that is, its iterator does not model input_iterator which requires indirectly_readable which requires common_reference_with<iter_reference_t<In>&&, iter_value_t<In>&>, the former is pair<const int&, const int&>&&, and the latter is pair<const int, int>&, there is no common_reference for the two. The workaround is to just define common_reference for them, just like P2321 does (which also means that your code is well-formed in C++23): template<class T1, class T2, class U1, class U2, template<class> class TQual, template<class> class UQual> requires requires { typename pair<common_reference_t<TQual<T1>, UQual<U1>>, common_reference_t<TQual<T2>, UQual<U2>>>; } struct basic_common_reference<pair<T1, T2>, pair<U1, U2>, TQual, UQual> { using type = pair<common_reference_t<TQual<T1>, UQual<U1>>, common_reference_t<TQual<T2>, UQual<U2>>>; }; For details on common_reference, you can refer to this question.
unknown
d7017
train
The Go plugin currently uses the term Go Libraries for different GOPATH values. If you have a single GOPATH that you'd like to use for all the projects, then you can add it to the "Global Libraries". For example, my $GOPATH is /home/florin/golang and in the plugin I've set the Global Libraries from the Go Libraries setting to reflect that (see this screenshot). If the plugin can automatically detect the GOPATH, and you have the check box ticked for that, then the plugin will try and use that value as the GOPATH value, see the next screenshot. Also, the plugin has three different types of GOPATH values right now: * *Global Libraries -> you should set GOPATH entries here for GOPATH values that you want to share between different projects (most use cases) *Project Libraries -> you should set the GOPATH entries here for GOPATH values that are specific to the current project only (when you want to have a single-GOPATH-per-project approach) *Module Libraries -> this is a very specific setting, it's only used in case you have different modules in your project and you want to have a different GOPATH configuration for each of the modules. The module in this case is a specific logical grouping of the source code in the IDE, not in the packages that the Go project uses (think of the ability to have a Python module, a Go module and an Android module all in the same project). There's a ticket that plans to simplify this further and your input will be included. Hope it helps.
unknown
d7018
train
You need to move the xml code into another file. I think your code is correct. I have just moved the xml to a new .xml file named company.xml in the site root directory and it is working fine. <?xml version="1.0"?> <Company> <Employee category="technical"> <FirstName>Tanmay</FirstName> <LastName>Patil</LastName> <ContactNo>1234567890</ContactNo> </Employee> <Employee category="non-technical"> <FirstName>Taniya</FirstName> <LastName>Mishra</LastName> <ContactNo>1234667898</ContactNo> </Employee> </Company> A: if (window.XMLHttpRequest){ //if browser supports XMLHttpRequest { xmlhttp = new XMLHttpRequest(); } } else {// code for IE6, IE5 xmlhttp = new ActiveXObject("Microsoft.XMLHTTP"); } A pair of curly braces is missing. A: I tried the following code, and it is working fine. Hope your code is correct. You can move the XML data to some other file and check once. I hope your code should work fine. function loadFile() { var xmlhttp; if (window.XMLHttpRequest) {// code for IE7+, Firefox, Chrome, Opera, Safari xmlhttp = new XMLHttpRequest(); } else {// code for IE6, IE5 xmlhttp = new ActiveXObject("Microsoft.XMLHTTP"); } xmlhttp.open("GET","popup.xml",true); xmlhttp.send(); xmlhttp.onreadystatechange = function () { if (xmlhttp.readyState == 4 && xmlhttp.status == 200) { responseData = xmlhttp.responseText; } } }
unknown
d7019
train
One way would be to use var-get user=> (var-get var) [1 2 3]
unknown
d7020
train
Add a webpack loader to change the MyUI alias path. module.exports = function(content) { let prefix = "MyUI/"; if (this.context.includes("/src") && this.context.includes(prefix)) { let relativePath = this.context .slice(this.context.indexOf(prefix)) .replace(prefix, ""); let pkgName = `${prefix}${relativePath.slice( 0, relativePath.indexOf("/") )}`; content = content.replace(/@\//g, `${pkgName}/src/`); } return content; };
unknown
d7021
train
You may want to check out how I did it here: https://github.com/kvahed/codeare/blob/master/src/matrix/ft/DFT.hpp The functions doing the job are at the bottom. If there are any issues, feel free to contact me personally. A: I found what my problem was! I did not deeply understand the output layout of the DFT in FFTW. After some tests, I found that the dimension in width should be w/2 + 1, and the dimension in height is unchanged. The code to do the FT, IFT, and origin shift has no problem; what I needed to do was change the dimension when trying to access it. magnitude after DFT magnitude after shifting the origin Cheers, Yaowang
unknown
d7022
train
Try this code. Should get you the job done. public static void main(String[] args){ String s = "3, V, 11, H, 21, H"; String[] t = s.split(" [ ,]*|,[ ,]*"); int first = Integer.parseInt(t[0]); int second = Integer.parseInt(t[2]); int third = Integer.parseInt(t[4]); System.out.println(first); System.out.println(second); System.out.println(third); } A: You can split your String by "," and check if it's a number using NumberUtils.isNumber (String str) from org.apache.commons.lang.math.NumberUtils : Checks whether the String a valid Java number. Valid numbers include hexadecimal marked with the 0x qualifier, scientific notation and numbers marked with a type qualifier (e.g. 123L). Null and empty String will return false. String s = "3, V, 11, H, 21, H"; for(String st : s.split(",")){ if(NumberUtils.isNumber(st.trim())) System.out.println(st); } If you want to check that the String contains only digits, you can use NumberUtils.isDigits(String str) A: public static void main(String[] args) { String in = "3, V, 11, H, 21, H"; List<String> storage = Arrays.asList(in.split(",")); List<Integer> output = new ArrayList<Integer>(); int first = 0; int second = 0; int third = 0; for(String str : storage){ if(str.trim().matches("[0-9]+") ){ // or if(NumberUtils.isNumber(str) ) output.add(Integer.parseInt(str.trim())); } } if(output.size() == 3){ first = output.get(0); second = output.get(1); third = output.get(2); } System.out.print("first: "); System.out.println(first); System.out.print("second: "); System.out.println(second); System.out.print("third: "); System.out.println(third); } Output: first: 3 second: 11 third: 21 A: You can split this string by , and then check if every part is a number like this : import java.util.*; public class HelloWorld{ public static void main(String []args){ String str = "3, V, 11, H, 21, H"; String[] parts = str.split(", "); ArrayList<Integer> listNumbers = new ArrayList<Integer>(); for(String x : parts){ try{ listNumbers.add( Integer.parseInt(x) 
); } catch(Exception e){} } for(int i=0;i<listNumbers.size();i++) System.out.println("Number "+(i+1)+" : "+listNumbers.get(i)); } }
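For comparison, the same extract-the-numbers idea is compact in Python too; this is a sketch on the question's sample string, not part of the original Java answers:

```python
s = "3, V, 11, H, 21, H"

# Keep only the comma-separated tokens that parse as integers.
numbers = [int(tok) for tok in s.split(",") if tok.strip().lstrip("-").isdigit()]
print(numbers)  # [3, 11, 21]

first, second, third = numbers
print(first, second, third)  # 3 11 21
```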
unknown
d7023
train
Note that QNetworkAccessManager operates asynchronously. The get() method does not block while the network operation occurs; it returns immediately. (See the Detailed Description section of the documentation for more info.) This is pretty typical of Qt's network-related APIs, because you usually don't want your application to freeze while waiting for data to move across a network. What this means is that your instance, nam, isn't alive long enough for the GET request to actually finish. Your instance of the Product::Network class is deleted immediately after the call to login() because it's allocated on the stack. Although I can't see the code, I'm guessing it cleans up the QNetworkAccessManager as well. If you extend the lifetime of your network object, you may find that your slot will eventually be invoked. Also, this is more a matter of preference, but I think it would be cleaner to avoid passing a receiver and a slot to your login() function. I'd recommend declaring your own signals in the Network class as part of its API, and connecting to those in the LoginWindow class.
unknown
d7024
train
The answer to your issue is both simple and surprising, if you're not used to AS3. In AS3, the flash.* classes tend to, when a setter is used, make and store a copy of the passed object. Since they store a copy, any modification of the original instance after the setter isn't applied to the copy, and thus is ignored. This is the case for, for example, DisplayObject.filters, ContextMenu.customItems or URLRequest.data. In your code, you are setting varSend.data = variables before filling variables. You should do the reverse: variables.uname = uname_txt.text; variables.sendRequest = "parse"; varSend.data = variables; // Send the data to the php file varLoader.load(varSend); Only some classes do that, and even then, they usually don't do it with all of their setters.
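The copy-on-set behaviour described above can be mimicked in any language. Here is a minimal Python sketch (the class name is made up, and this is an analogy, not the flash.* implementation) showing why mutating the original object after the assignment has no effect:

```python
class UrlRequestLike:
    """Mimics an AS3-style setter that stores a *copy* of what it is given."""
    def __init__(self):
        self._data = None

    @property
    def data(self):
        return self._data

    @data.setter
    def data(self, value):
        self._data = dict(value)  # store a copy, like the flash.* setters do

# Wrong order: assign first, fill afterwards -> the copy never sees the fields.
variables = {}
req = UrlRequestLike()
req.data = variables           # copy taken here, while still empty
variables["uname"] = "bob"     # too late: the stored copy is unaffected
print(req.data)                # {}

# Right order: fill first, then assign.
variables2 = {"uname": "bob", "sendRequest": "parse"}
req2 = UrlRequestLike()
req2.data = variables2
print(req2.data)               # {'uname': 'bob', 'sendRequest': 'parse'}
```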
unknown
d7025
train
You can't call individual SSIS tasks, but you can call an SSIS package from a stored procedure. The procedure is not totally straightforward and I won't put instructions here, as there are many sites which do so. However, if all these tasks do is call an SP, why not just call the SP?
unknown
d7026
train
Try setting the db_column option to BooleanField, or any field -- that should be the actual field name stored in MongoDB.
unknown
d7027
train
You have to set the tick positions first: ax.set_xticks(np.arange(5) + 1.) ax.set_xticklabels(a['f1'])
unknown
d7028
train
It is a matter of naming convention. You can refer to Where is the JavaBean property naming convention defined? for reference. From section 8.8 of the JavaBeans API specification ... Thus when we extract a property or event name from the middle of an existing Java name, we normally convert the first character to lower case (case 1). However, to support the occasional use of all upper-case names, we check if the first two characters of the name are both upper case (case 2) and if so leave it alone. So for example, 'FooBah' becomes 'fooBah', 'Z' becomes 'z', 'URL' becomes 'URL'. We provide a method Introspector.decapitalize which implements this conversion rule. Hence for your given class, the property deduced from getUser_Name() and setUser_Name() is "user_Name" instead of "User_Name" according to case 1. And calling getProperty(bean, "ID") is working according to case 2. To solve the problem, please update the naming according to the Java naming convention: we should start with lower case for properties and methods, and use camelCase instead of snake_case to separate words. Keep in mind that following convention is really important in programming. The following is the updated class as an example. import java.lang.reflect.InvocationTargetException; import org.apache.commons.beanutils.BeanUtils; public class User { private String ID; private String userName; public String getID() { return ID; } public void setID(String ID) { this.ID = ID; } public String getUserName() { return userName; } public void setUserName(String userName) { this.userName = userName; } public static void main(String[] args) throws IllegalAccessException, InvocationTargetException, NoSuchMethodException { User bean = new User(); bean.setUserName("name"); System.out.println(BeanUtils.getProperty(bean, "userName")); } }
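The decapitalize rule quoted above is small enough to sketch directly. This Python re-implementation (for illustration only; the real method is java.beans.Introspector.decapitalize) makes the spec's examples easy to check:

```python
def decapitalize(name):
    # Mirrors the JavaBeans rule: lower-case the first character,
    # UNLESS the first two characters are both upper case.
    if len(name) > 1 and name[0].isupper() and name[1].isupper():
        return name
    return name[:1].lower() + name[1:]

print(decapitalize("FooBah"))     # fooBah
print(decapitalize("Z"))          # z
print(decapitalize("URL"))        # URL
print(decapitalize("User_Name"))  # user_Name  <- the property in the question
```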
unknown
d7029
train
Double-check your JavaScript bindings. $("#AJAX-form").on(...) is looking for an element with id="AJAX-form". Your form appears to have class="AJAX-form". Either bind to the class with $(".AJAX-form").on(...) or change your form id to match <%= form_tag("/pages/thank_you", remote: true, id: 'AJAX-form') do %> A: As of jQuery 1.9, ajaxSuccess can only be attached to the document object: "$(document).ajaxSuccess(function(){...});" But you can use the following code: $('form').submit( function(){ return $.ajax({ url: $(this).attr("action"), data:data, processData: false, contentType: false, type: "POST", dataType : "json", success: function( result ) { $form.trigger('submit:success'); }, error: function( xhr, status, errorThrown ) { console.log( "Error: " + errorThrown ); console.log( "Status: " + status ); console.dir( xhr ); $form.trigger('submit:error'); }, complete: function( xhr, status ) { $form.trigger('submit:complete'); console.log( "The request is complete!" ); } } ); $('form.SpecialForm').on('submit:success', myFunction1); $('form.OtherSpecialForm').on('submit:success', otherFunction); $('form.ThirdSpecialForm').on('submit:error', thirdFunction); $('form.OneMoreSpecialForm').on('submit:complete', oneMoreFunction); $('form.theLastSpecialForm').on('submit:complete', theLastFunction);
unknown
d7030
train
If you look in the bin folder, the exe must be built there. In the *.exe.config file next to it you can modify the connection string. It will be easier if you create an installer.
unknown
d7031
train
Current practice can perhaps be exemplified by a quote from David Flanagan's book "JavaScript: The Definitive Guide", which says that Certain canvas operations and attributes (such as extracting raw pixel values and setting shadow offsets) always use this default coordinate system (the default coordinate system is that of the canvas). And it continues with In most canvas operations, when you specify the coordinates of a point, it is taken to be a point in the current coordinate system [that's for example the cartesian plane you mentioned, @Walkerneo], not in the default coordinate system. Why is using a "current coordinate system" more useful than using directly the canvas c.s.? First and foremost, I believe, because it is independent of the canvas itself, which is tied to the screen (more specifically, the default coordinate system dimensions are expressed in pixels). Using for instance a Cartesian (orthogonal) coordinate system makes it easy for you (well, for me too, obviously :-D ) to specify your drawing in terms of what you want to draw, leaving the task of how to draw it to the transformations offered by the Canvas API. In particular, you can express dimensions in the natural units of your drawing, and perform a scale and a translation to fit (or not, as the case may be...) your drawing to the canvas. Furthermore, using transformations is often a clearer way to build your drawing since it allows you to get "farther" from the underlying coord system and specify your drawing in terms of higher level operations ('scale', 'rotate', 'translate' and the more general 'transform'). The abovementioned book gives a very nice example of the power of this approach, drawing a Koch (fractal) snowflake in many fewer lines than would be possible (if at all) using canvas coordinates. A: The HTML5 canvas, like most graphics systems, uses a coordinate system where (0,0) is in the top left and the x-axis and y-axis go from left to right and top down respectively. 
This makes sense if you think about how you would create a graphics system with nothing but a block of memory: the simplest way to map coordinates (x,y) to a memory slot is to take x+w*y, where w is the width of a line. This means that the canvas coordinate system differs from what you use in mathematics in two ways: (0,0) is not the center like it usually is, and y grows down rather than up. The last part is what makes your figures upside down. You can set transformations on the canvas that make the coordinate system more like what you are used to: var ctx = document.getElementById('canvas').getContext('2d'); ctx.translate(250,250); // Move (0,0) to (250, 250) ctx.scale(1,-1); // Make y grow up rather than down
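The two facts above, the x + w*y memory layout and the translate-then-flip transform, can be sketched numerically. This is a toy model for illustration, not the actual canvas API:

```python
def pixel_offset(x, y, w):
    # Simplest mapping from (x, y) to a slot in a linear framebuffer:
    # rows are laid out top to bottom, which is why y grows downward.
    return x + w * y

def to_canvas(x, y, cx=250, cy=250):
    # translate(250, 250) then scale(1, -1): math coords -> canvas coords.
    return (cx + x, cy - y)

print(pixel_offset(0, 0, 500))  # 0   -> top-left pixel is slot 0
print(pixel_offset(0, 1, 500))  # 500 -> one full row further down
print(to_canvas(0, 0))          # (250, 250): math origin at canvas centre
print(to_canvas(0, 100))        # (250, 150): +y in math moves UP on canvas
```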
unknown
d7032
train
You can have independent axes for the charts by adding resolve_scale(y='independent') Note that, by itself, this lets the y-domain limits for each facet adjust to the subset of the data within each facet; you can make them match by explicitly specifying domain limits. Put together, it looks like this: alt.Chart(df).mark_bar().encode( x=alt.X('c2:N', title=None), y=alt.Y('sum(values):Q', axis=alt.Axis(grid=False, title=None), scale=alt.Scale(domain=[0, 25])), column=alt.Column('c1:N', title=None), color=alt.Color('DF:N', scale=alt.Scale(range=['#96ceb4', '#ffcc5c','#ff6f69'])) ).configure_view( strokeOpacity=0 ).resolve_scale( y='independent' )
unknown
d7033
train
Hehe, I did this a few years back for students during class. I hope you know how oscilloscopes work, so here are just the basics: * *timebase * *fsmpl is the input signal sampling frequency [Hz]. Try to use as big as possible (44100, 48000, ???) so the max frequency detected is then fsmpl/2; this gives you the top of your timebase axis. The low limit is given by your buffer length *draw Create a function that will render your sampling buffer from a specified start address (inside the buffer) with: * *Y-scale ... amplitude setting *Y-offset ... vertical beam position *X-offset ... time shift or horizontal position This can be done by modification of the start address or by just X-offsetting the curve *Level Create a function which will emulate the Level functionality. So search the buffer from the start address and stop if the amplitude crosses Level. You can have more modes but these are the basics you should implement: * *amplitude: ( < lvl ) -> ( > lvl ) *amplitude: ( > lvl ) -> ( < lvl ) There are many other possibilities for level like glitch, relative edge,... *Preview You can put all this together for example like this: you have a start address variable, so sample data to some buffer continuously and on a timer call level with the start address (and update it). Then call draw with the new start address and add the timebase period to the start address (of course in terms of your samples) *multichannel I use Line IN so I have stereo input (A,B = left,right), therefore I can add some other stuff like: * *Level source (A,B,none) *render mode (timebase, Chebyshev (Lissajous curve if closed)) *Chebyshev = x axis is A, y axis is B; this creates the famous Chebyshev images which are good for dependent sinusoidal signals, usually forming circles, ellipses, distorted loops ... 
*miscel stuff
 You can add filters for channels emulating capacitance or grounding of the input, and much more.

*GUI
 You need many settings. I prefer analog knobs instead of buttons/scrollbars/sliders, just like on a real oscilloscope:
 - (semi)analog values: Amplitude, TimeBase, Level, X-offset, Y-offset
 - discrete values: level mode (/,\), level source (A, B, -), each channel (direct on, ground, off, capacity on)

Here are some screenshots of my oscilloscope:

Here is a screenshot of my generator:

And finally, after adding some FFT, also a spectrum analyser.

PS.
*I started with DirectSound, but it sucks a lot because of buggy/non-functional buffer callbacks.
*I use WinAPI WaveIn/Out for all sound in my apps now. After a few quirks with it, it is the best for my needs and has the best latency (DirectSound is more than 10 times slower), but for an oscilloscope that has no merit (I need low latency mostly for emulators).

Btw. I have these three apps as linkable C++ subwindow classes (Borland):
*last used with my ATMega168 emulator for my sensor-less BLDC driver debugging
*here you can try my oscilloscope, generator and spectrum analyser

If you are confused with the download, read the comments below this post; btw the password is: "oscill"

Hope it helps. If you need help with anything, just comment.

[Edit1] trigger

You trigger all channels at once, but the trigger condition is usually checked from just one.

Now the implementation is simple. For example, let the trigger condition be the A (left) channel rising above level, so:

*first make continuous playback with no trigger — you wrote it is like this:

for ( int i = 0, j = 0; i < countSamples ; ++j)
{
    YVectorRight[j]=Samples[i++];
    YVectorLeft[j] =Samples[i++];
}
// here draw or FFT,draw buffers YVectorRight,YVectorLeft

*Add trigger

To add the trigger condition you just find a sample that meets it and start drawing from it, so you change it to something like this:

// static or global variables
static int i0=0;              // actual start for drawing
static bool _copy_data=true;  // flag that new samples need to be copied
static int level=35;          // trigger level value; datatype should be the same as your samples...

int i,j;
for (;;)
{
    // copy new samples to buffer if needed
    if (_copy_data)
        for (_copy_data=false,i=0,j=0;i<countSamples;++j)
        {
            YVectorRight[j]=Samples[i++];
            YVectorLeft[j] =Samples[i++];
        }
    // now search for new start
    for (i=i0+1;i<countSamples>>1;i++)
        if (YVectorLeft[i-1]<level)     // lower than level before i
         if (YVectorLeft[i]>=level)     // at or above level at i
          { i0=i; break; }
    if (i0>=(countSamples>>1)-view_samples) { i0=0; _copy_data=true; continue; }
    break;
}
// here draw or FFT,draw buffers YVectorRight,YVectorLeft from the i0 position

*view_samples is the viewed/processed size of data (for one or more screens); it should be a few times less than (countSamples>>1)
*this code can lose one screen at the border area; to avoid that you need to implement cyclic buffers (rings), but for starters even this is OK
*just encode all trigger conditions through some ifs or a switch statement
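The edge search at the heart of the trigger above is language-agnostic. Here is a minimal sketch of the same rising-edge search in Python (the function name and toy signal are mine, not from the code above):

```python
def find_trigger(samples, level, start=0):
    """Return the index of the first rising edge through `level`
    at or after `start`, or None if no edge is found."""
    for i in range(max(start, 1), len(samples)):
        # below level just before i, at-or-above level at i
        if samples[i - 1] < level and samples[i] >= level:
            return i
    return None

# a toy triangle-like signal that crosses level 35 on the way up twice
signal = [0, 10, 20, 30, 40, 50, 40, 30, 20, 10, 0, 10, 20, 30, 40]
print(find_trigger(signal, 35))     # first rising edge
print(find_trigger(signal, 35, 5))  # next rising edge after index 5
```

Searching again from just past the last hit, as the C code does with i0, is what keeps successive screens locked to the same phase of the waveform.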
unknown
d7034
train
My issue got resolved. There was a problem in the certificate (in .pfx format) that I was using to sign the jar. When this certificate was generated from the site of the CA, the checkbox "Include All Certificates in the path" was not selected. As a result, the certificate did not have the complete chain required for signature verification in the applet jar.

The following command can be used to display the details of the certificates in a .pfx file:

openssl pkcs12 -in <pfx_file_name>.pfx -nodes

The certificate that had the issue listed only 1 certificate, while the one generated later had the complete chain of 3 certificates.
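If you just want to sanity-check how many certificates ended up in an exported bundle, counting the PEM headers in the openssl output is enough. A rough sketch in Python (the bundle contents here are stand-ins, not a real chain):

```python
def count_pem_certs(pem_text):
    # each certificate in a PEM bundle starts with this delimiter line
    return pem_text.count("-----BEGIN CERTIFICATE-----")

# stand-in bundle imitating leaf + intermediate + root (contents faked)
bundle = "\n".join(
    "-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----"
    for _ in range(3)
)
print(count_pem_certs(bundle))  # 3 suggests a complete chain; 1 means it is missing
```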
unknown
d7035
train
implementation 'org.springframework.security:spring-security-saml2-service-provider' As far as I can tell that dependency is marked as optional, so it has to be included explicitly. https://github.com/spring-projects/spring-security/blob/master/config/spring-security-config.gradle
unknown
d7036
train
If you're parsing a single value, the simplest approach is probably to just use DateTime.ParseExact: DateTime value = DateTime.ParseExact(text, "o", null); The "o" pattern is the round-trip pattern, which is designed to be ISO-8601: The "O" or "o" standard format specifier corresponds to the "yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'fffffffK" custom format string for DateTime values and to the "yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'fffffffzzz" custom format string for DateTimeOffset values. I haven't specified a format provider, as it doesn't matter: The pattern for this specifier reflects a defined standard (ISO 8601). Therefore, it is always the same regardless of the culture used or the format provider supplied. If you need Json.NET to handle this transparently while deserializing other values, it may be a trickier proposition - others may know more. Additionally, just as a plug, you may wish to consider using my Noda Time project, which supports ISO-8601 and integrates with JSON.NET - albeit not in a pre-packaged way just yet.
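The same round-trip idea exists outside .NET too. For comparison, here is a sketch using Python's standard-library ISO-8601 support (Python 3.7+; this is my illustration, not a .NET equivalent):

```python
from datetime import datetime, timezone

# serialize a timestamp in ISO-8601 ("round-trip") form...
original = datetime(2013, 4, 5, 12, 30, 0, tzinfo=timezone.utc)
text = original.isoformat()          # '2013-04-05T12:30:00+00:00'

# ...and parse it back; the value survives unchanged
parsed = datetime.fromisoformat(text)
print(text, parsed == original)
```

As with the "o" specifier, the point of a round-trip format is that parsing the serialized string reproduces the original value exactly, independent of culture.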
unknown
d7037
train
So I think I figured it out. In my DTO:

[JsonIgnore]
public string SessionBagString { get; set; }

public JObject SessionBag
{
    get
    {
        if (!string.IsNullOrEmpty(SessionBagString))
        {
            return JObject.Parse(SessionBagString);
        }
        return null;
    }
    set
    {
        if (value != null)
        {
            SessionBagString = value.ToString();
        }
    }
}

In my repo code I now have:

if (dto.SessionBag != null)
{
    entity.SessionBagString = dto.SessionBagString;
}

That pretty much worked for me. Let me know if there is a better way to do it.
unknown
d7038
train
I had the same question (and found the same results), but I also found a workaround. Allow me to illustrate with an example. You have a ProjectFile.build and a CommonFile.build. Let's say you want to overwrite a target called "Clean". You would need to create a new file (call it CommonFile_Clean.build) which contains:

<?xml version="1.0"?>
<project>
  <target name="Clean">
    <echo message="Do clean stuff here" />
  </target>
</project>

In CommonFile.build, you conditionally include CommonFile_Clean.build:

<?xml version="1.0"?>
<project>
  <echo message="checking Clean definition..." />
  <if test="${not target::exists('Clean')}">
    <echo message="Clean target not defined." />
    <include buildfile="CommonFile_Clean.build" />
  </if>
</project>

In ProjectFile.build, you can either define the Clean target (in which case CommonFile_Clean.build will not be used) or use the default implementation as defined in CommonFile_Clean.build. Of course, if you have a large number of targets, this will be quite a bit of work. Hope that helps.

A: No, I've just tried it for you, as I have a similar set-up, in that I have all of the build targets we use in a commonFile.build and then use the following code to bring it in...

<include buildfile="../commonFile.build"/>

In my newFile.build (which includes commonFile.build at the top of the file), I added a new target called 'build', as it exists in the commonFile, and here's the error message you get in response...

BUILD FAILED
Duplicate target named 'build'!

Nice idea, probably born of OO principles, but sadly it doesn't work. Any good?
unknown
d7039
train
Simply because the server doesn't only send the certificate; it also proves that it's the "owner" of the certificate. Speaking in simplified terms here: the server encrypts something that you can decrypt using the certificate, but only the owner of the certificate could have encrypted it that way. Assuming you know the public/private key crypto pattern, the certificate contains a public key that can decrypt data that was encrypted with the server's private key. The server will never ever hand out the private key.
unknown
d7040
train
Based on the Spring Data JPA documentation, 4.4.3. Property Expressions:

...you can use _ inside your method name to manually define traversal points...

You can put the underscore in your REST query as follows:

/api/markets?projection=expanded&sort=event_name,asc

A: Just downgrade spring.data.rest.webmvc to the Hopper release:

<spring.data.jpa.version>1.10.10.RELEASE</spring.data.jpa.version>
<spring.data.rest.webmvc.version>2.5.10.RELEASE</spring.data.rest.webmvc.version>

projection=expanded&sort=event.name,asc // works
projection=expanded&sort=event_name,asc // this works too

Thanks to @Alan Hay's comment on this question:

Ordering by nested properties works fine for me in the Hopper release, but I did experience the following bug in an RC version of the Ingalls release. This is reported as being fixed:

*jira issue - Sorting by an embedded property no longer works in Ingalls RC1

BTW, I tried v3.0.0.M3, where that was reported as fixed, but it is not working for me.

A: We had a case when we wanted to sort by fields which were in a linked entity (it was a one-to-one relationship). Initially, we used an example based on https://stackoverflow.com/a/54517551 to search by linked fields. So the workaround/hack in our case was to supply custom sort and pageable parameters.
Below is the example:

@org.springframework.data.rest.webmvc.RepositoryRestController
public class FilteringController {

    private final EntityRepository repository;

    @RequestMapping(value = "/entities", method = RequestMethod.GET)
    public ResponseEntity<?> filter(
            Entity entity,
            org.springframework.data.domain.Pageable page,
            org.springframework.data.web.PagedResourcesAssembler assembler,
            org.springframework.data.rest.webmvc.PersistentEntityResourceAssembler entityAssembler,
            org.springframework.web.context.request.ServletWebRequest webRequest
    ) {
        Method enclosingMethod = new Object() {}.getClass().getEnclosingMethod();
        Sort sort = new org.springframework.data.web.SortHandlerMethodArgumentResolver().resolveArgument(
                new org.springframework.core.MethodParameter(enclosingMethod, 0),
                null,
                webRequest,
                null
        );

        ExampleMatcher matcher = ExampleMatcher.matching()
                .withIgnoreCase()
                .withStringMatcher(ExampleMatcher.StringMatcher.CONTAINING);
        Example example = Example.of(entity, matcher);

        Page<?> result = this.repository.findAll(example, PageRequest.of(
                page.getPageNumber(),
                page.getPageSize(),
                sort
        ));

        PagedModel search = assembler.toModel(result, entityAssembler);
        search.add(linkTo(FilteringController.class)
                .slash("entities/search")
                .withRel("search"));

        return ResponseEntity.ok(search);
    }
}

Used version of Spring Boot: 2.3.8.RELEASE

We also had the repository for Entity and used a projection:

@RepositoryRestResource
public interface JpaEntityRepository extends JpaRepository<Entity, Long> {
}

A: Your MarketRepository could have a named query like:

public interface MarketRepository extends PagingAndSortingRepository<Market, Long> {
    Page<Market> findAllByEventName(String name, Pageable pageable);
}

You can get your name param from the url with @RequestParam.

A: This page has an idea that works. The idea is to use a controller on top of the repository, and apply the projection separately.
Here's a piece of code that works (SpringBoot 2.2.4):

import ro.vdinulescu.AssignmentsOverviewProjection;
import ro.vdinulescu.repository.AssignmentRepository;
import org.apache.commons.lang3.StringUtils;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Pageable;
import org.springframework.data.domain.Sort;
import org.springframework.data.projection.ProjectionFactory;
import org.springframework.data.web.PagedResourcesAssembler;
import org.springframework.hateoas.EntityModel;
import org.springframework.hateoas.PagedModel;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RepositoryRestController
public class AssignmentController {

    @Autowired
    private AssignmentRepository assignmentRepository;

    @Autowired
    private ProjectionFactory projectionFactory;

    @Autowired
    private PagedResourcesAssembler<AssignmentsOverviewProjection> resourceAssembler;

    @GetMapping("/assignments")
    public PagedModel<EntityModel<AssignmentsOverviewProjection>> listAssignments(
            @RequestParam(required = false) String search,
            @RequestParam(required = false) String sort,
            Pageable pageable) {
        // Spring creates the Pageable object correctly for simple properties,
        // but for nested properties we need to fix it manually
        pageable = fixPageableSort(pageable, sort, Set.of("client.firstName", "client.age"));

        Page<Assignment> assignments = assignmentRepository.filter(search, pageable);
        Page<AssignmentsOverviewProjection> projectedAssignments =
                assignments.map(assignment -> projectionFactory.createProjection(
                        AssignmentsOverviewProjection.class, assignment));

        return resourceAssembler.toModel(projectedAssignments);
    }

    private Pageable fixPageableSort(Pageable pageable, String sortStr, Set<String> allowedProperties) {
        if (!pageable.getSort().equals(Sort.unsorted())) {
            return pageable;
        }
        Sort sort = parseSortString(sortStr, allowedProperties);
        if (sort == null) {
            return pageable;
        }
        return PageRequest.of(pageable.getPageNumber(), pageable.getPageSize(), sort);
    }

    private Sort parseSortString(String sortStr, Set<String> allowedProperties) {
        if (StringUtils.isBlank(sortStr)) {
            return null;
        }
        String[] split = sortStr.split(",");
        if (split.length == 1) {
            if (!allowedProperties.contains(split[0])) {
                return null;
            }
            return Sort.by(split[0]);
        } else if (split.length == 2) {
            if (!allowedProperties.contains(split[0])) {
                return null;
            }
            return Sort.by(Sort.Direction.fromString(split[1]), split[0]);
        } else {
            return null;
        }
    }
}

A: From the Spring Data REST documentation:

Sorting by linkable associations (that is, links to top-level resources) is not supported.

https://docs.spring.io/spring-data/rest/docs/current/reference/html/#paging-and-sorting.sorting

An alternative that I found was to use @RestResource(exported=false). This is not valid (especially for legacy Spring Data REST projects) because it prevents the resource/entity from being loaded via HTTP links:

JacksonBinder BeanDeserializerBuilder updateBuilder throws com.fasterxml.jackson.databind.exc.MismatchedInputException: Cannot construct instance of ' com...' no String-argument constructor/factory method to deserialize from String value

I tried to activate sorting by linkable associations with the help of annotations, but without success, because we would always need to override the mapPropertyPath method of JacksonMappingAwareSortTranslator.SortTranslator to detect the annotation:

if (associations.isLinkableAssociation(persistentProperty)) {
    if (!persistentProperty.isAnnotationPresent(SortByLinkableAssociation.class)) {
        return Collections.emptyList();
    }
}

Annotation:

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
public @interface SortByLinkableAssociation {
}

In your project, include @SortByLinkableAssociation at the linkable associations that you want to sort by.
@ManyToOne(fetch = FetchType.EAGER)
@SortByLinkableAssociation
private Event event;

Really, I didn't find a clear and successful solution to this issue, but I decided to expose it to get people thinking about it, or even for the Spring team to take it into consideration for the next releases.
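The whitelist-based sort parsing in the controller above is a pattern worth keeping regardless of stack: accept `property[,direction]`, reject anything not explicitly allowed. Here is the same idea as a small Python sketch (the function name and return shape are mine):

```python
def parse_sort(sort_str, allowed):
    """Parse a 'property[,direction]' query value into (property, direction).
    Returns None for blank input or anything not on the whitelist."""
    if not sort_str:
        return None
    parts = sort_str.split(",")
    if parts[0] not in allowed:
        return None  # reject unknown or unlisted (possibly nested) properties
    direction = parts[1].upper() if len(parts) > 1 else "ASC"
    if direction not in ("ASC", "DESC"):
        return None
    return (parts[0], direction)

allowed = {"event.name", "id"}
print(parse_sort("event.name,desc", allowed))   # ('event.name', 'DESC')
print(parse_sort("secret.field,asc", allowed))  # None
```

Rejecting unlisted properties (rather than passing them through) is what stops clients from sorting by arbitrary internal fields.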
unknown
d7041
train
In your situation, using your shared Spreadsheet, when you delete the value from the data-validation cell "C4" with the delete button, the event object e of onEdit(e) has "value":{"oldValue":"deleted value"}, and you want to know about this situation. If my understanding is correct, how about this answer?

When I tested this, I noticed that in your situation the border of the cell, under the simple trigger, is related to this behavior.

Preparation 1:

For the explanation, it supposes as follows:

*Create a new Spreadsheet.
*Put a text of sample in the cell "A1".
*Set a simple trigger of the OnEdit event trigger as the script function onEdit(e) {Logger.log(JSON.stringify(e))}.

In the explanation, e of the event object is used when the OnEdit event trigger is fired.

Sample situations 1:

Situation 1A: When the value of sample in the cell "A1" is deleted by the delete button, e of the event object returns the following value.

{"authMode":{},"range":{"columnStart":1,"rowStart":1,"rowEnd":1,"columnEnd":1},"source":{},"user":{"nickname":"","email":""}}

Situation 1B: When the text of sample in the cell "A1" is deleted by deleting each character using the backspace key, e of the event object returns the following value.

{"authMode":{},"range":{"columnStart":1,"rowStart":1,"rowEnd":1,"columnEnd":1},"source":{},"oldValue":"sample","user":{"nickname":"","email":""},"value":{"oldValue":"sample"}}

Sample situations 2:

Here, in order to replicate your situation, please set a border on the cell "A1".

Situation 2A: When the value of sample in the cell "A1", which was surrounded by the border, is deleted by the delete button, e of the event object returns the following value.
{"authMode":{},"range":{"columnStart":1,"rowStart":1,"rowEnd":1,"columnEnd":1},"source":{},"oldValue":"sample","user":{"nickname":"","email":""},"value":{"oldValue":"sample"}}

Situation 2B: When the text of sample in the cell "A1", which was surrounded by the border, is deleted by deleting each character using the backspace key, e of the event object returns the following value.

{"authMode":{},"range":{"columnStart":1,"rowStart":1,"rowEnd":1,"columnEnd":1},"source":{},"oldValue":"sample","user":{"nickname":"","email":""},"value":{"oldValue":"sample"}}

Preparation 2:

For the explanation, it supposes as follows:

*Create a new Spreadsheet.
*Put a text of sample in the cell "A1".
*Copy and paste the script function InstallOnEdit(e) {Logger.log(JSON.stringify(e))}.
*Set the installable OnEdit event trigger to the function InstallOnEdit.

In the explanation, e of the event object is used when the OnEdit event trigger is fired.

Sample situations 1:

Situation 1A: When the value of sample in the cell "A1" is deleted by the delete button, e of the event object returns the following value.

{"authMode":{},"range":{"columnStart":1,"rowStart":1,"rowEnd":1,"columnEnd":1},"source":{},"triggerUid":"###","user":{"nickname":"###","email":"###@gmail.com"}}

Situation 1B: When the text of sample in the cell "A1" is deleted by deleting each character using the backspace key, e of the event object returns the following value.

{"authMode":{},"range":{"columnStart":1,"rowStart":1,"rowEnd":1,"columnEnd":1},"source":{},"oldValue":"sample","triggerUid":"###","user":{"nickname":"###","email":"###@gmail.com"}}

Sample situations 2:

Here, in order to replicate your situation, please set a border on the cell "A1".

Situation 2A: When the value of sample in the cell "A1", which was surrounded by the border, is deleted by the delete button, e of the event object returns the following value.
{"authMode":{},"range":{"columnStart":1,"rowStart":1,"rowEnd":1,"columnEnd":1},"source":{},"oldValue":"sample","triggerUid":"###","user":{"nickname":"###","email":"###@gmail.com"}}

Situation 2B: When the text of sample in the cell "A1", which was surrounded by the border, is deleted by deleting each character using the backspace key, e of the event object returns the following value.

{"authMode":{},"range":{"columnStart":1,"rowStart":1,"rowEnd":1,"columnEnd":1},"source":{},"oldValue":"sample","triggerUid":"###","user":{"nickname":"###","email":"###@gmail.com"}}

Results and discussions:

From the above experiment, the following results could be obtained.

*Values of the event object depend on whether the cell has a border.
 - This can be seen not only with a border, but also when the background color of the cell or the format (font color, size, bold and so on) differs from the default format.
 - It seems that when the cell and font are changed from the default settings, the event object returns the values of "Sample situations 2".
*Values of the event object also depend on whether the installable event trigger is used.
*In the case of the cell with the default cell and font under the simple trigger:
 - When the value of sample in the cell "A1" is deleted by the delete button, e of the event object has neither oldValue nor value.
 - When the text of sample in the cell "A1" is deleted by deleting each character using the backspace key, e of the event object has both oldValue and value, and value is {"oldValue":"deleted value"}.
*In the case of the cell with the border under the simple trigger:
 - Whether the value of sample in the cell "A1" is deleted by the delete button or by deleting each character using the backspace key, e of the event object has both oldValue and value, and value is {"oldValue":"deleted value"}.
*In the case of the cell with the default cell and font under the installable trigger:
 - When the value of sample in the cell "A1" is deleted by the delete button, e of the event object has neither oldValue nor value.
 - When the text of sample in the cell "A1" is deleted by deleting each character using the backspace key, e of the event object has oldValue and no value, and oldValue is the deleted value (a plain value, not an object).
*In the case of the cell with the border under the installable trigger:
 - Whether the value is deleted by the delete button or by backspacing each character, e of the event object has oldValue and no value, and oldValue is the deleted value (a plain value, not an object).

From the above results, I thought that the event-object values (neither oldValue nor value) from the default condition of the cell and font might be a bug or the specification. I looked for an official document about this, but unfortunately I couldn't find one.

About your situation:

Using the above results, when your shared Spreadsheet was tested: the cell "C4" is surrounded by a border, and the simple trigger is used. So the situation is the same as "Sample situations 2" of "Preparation 1" above. By this, when the value of cell "C4" is deleted by the delete button, "value":{"oldValue":"deleted value"} is returned.

In this case, how about the following method?

*When you want to use the simple trigger, I think that the script at the bottom of your question can be used.
*When you can use the installable OnEdit event trigger, you can know whether the value was deleted by checking for value in the event object e.

References:
*Event Objects
*Simple Triggers
*Installable Triggers
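Whichever trigger type you use, the practical decision ("was this a delete or an edit?") boils down to inspecting which keys the event object carries. A sketch of that decision logic in Python (the dicts mimic the event objects above; the classifier itself is mine):

```python
def classify_edit(e):
    """Classify an onEdit-style event dict as 'delete', 'edit', or 'unknown'."""
    old = e.get("oldValue")
    new = e.get("value")
    # installable trigger, text removed by backspacing: oldValue only
    if old is not None and new is None:
        return "delete"
    # simple trigger on a styled/bordered cell: value == {"oldValue": ...}
    if isinstance(new, dict) and "oldValue" in new:
        return "delete"
    if new is not None:
        return "edit"
    return "unknown"  # e.g. delete-key on a default-format cell: no keys at all

print(classify_edit({"oldValue": "sample"}))                                   # delete
print(classify_edit({"value": {"oldValue": "sample"}, "oldValue": "sample"}))  # delete
print(classify_edit({"value": "new text"}))                                    # edit
print(classify_edit({}))                                                       # unknown
```

The "unknown" branch is exactly the ambiguous case discussed above: under the default cell format, a delete-key press produces an event with neither key.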
unknown
d7042
train
Well, we finally reached out to some people with extensive SiteMinder experience, and they suggested using the "Classic" app pool instead of "Integrated", which solved our issue.
unknown
d7043
train
You asked me to show you:

public class FixRandom {
    // Holds the fixed random value.
    private final int fixedValue;

    // Upper bound for the RNG.
    private static final int BOUND = 10;

    // Constructor.
    public FixRandom() {
        // Set the fixed random value.
        Random rand = new Random();
        fixedValue = rand.nextInt(BOUND);
    }

    // Method to access the fixed random value.
    public int getFixRandom() {
        return fixedValue;
    }
}

When you need to retrieve the number, use the getFixRandom() method.
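The same draw-once-at-construction idea, sketched in Python for comparison (class and method names mirror the Java above):

```python
import random

class FixRandom:
    BOUND = 10  # upper bound for the RNG

    def __init__(self):
        # draw once; every later read returns the same value
        self._fixed_value = random.randrange(self.BOUND)

    def get_fix_random(self):
        return self._fixed_value

fr = FixRandom()
first = fr.get_fix_random()
# repeated calls keep returning the value drawn in the constructor
print(all(fr.get_fix_random() == first for _ in range(100)))  # True
```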
unknown
d7044
train
For this you need to add your Shell service when bootstrapping your application:

bootstrap(AppComponent, [ Shell ]);

And remove it from the viewProviders attribute of all your components:

@Component({
  selector: 'my-app',
  providers: [TemplateRef],
  templateUrl: './angular/app/Index.html',
  directives: [ROUTER_DIRECTIVES, NgIf],
  // viewProviders: [Shell] <-------
})

It's because of the way the "hierarchical injectors" feature of Angular2 works. For more details you could have a look at this question:

*What's the best way to inject one service into another in angular 2 (Beta)?

A: Indeed, you need to make Shell a singleton. You have to add the service in your bootstrap method as a provider.
unknown
d7045
train
*Make Compress an async method:

public static async Task Compress()
{
    await Task.Run(() =>
    {
        // your compress logic
    });
}

*Call it as awaitable:

await Compress(...);

You can also use Parallel.ForEach instead of your foreach loop. But be aware that this code won't really execute in parallel, because you are using IO functions.
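Offloading blocking work to a worker and awaiting it looks much the same in other async runtimes. A rough Python equivalent of the Task.Run pattern (Python 3.9+ for asyncio.to_thread; the compress body is a stand-in):

```python
import asyncio
import time
import zlib

def compress(data: bytes) -> bytes:
    # stand-in for the blocking compress logic
    time.sleep(0.01)
    return zlib.compress(data)

async def main():
    # run the blocking call on a worker thread and await the result,
    # keeping the event loop (the "UI thread") responsive meanwhile
    return await asyncio.to_thread(compress, b"hello" * 100)

compressed = asyncio.run(main())
print(len(compressed) < 500)  # True: the repetitive input compresses well
```

As in the C# case, this only moves the blocking work off the caller's thread; it does not make the IO itself run faster or in parallel.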
unknown
d7046
train
If you really mean digits (not numbers), this is as easy as:

re.findall(r'[0369]', my_str)

(0, 3, 6 and 9 are the decimal digits divisible by 3.)

For a list of numbers, it's quite easy without regular expressions:

lst = "55,62,12,72,55"
print([x for x in lst.split(',') if int(x) % 3 == 0])

A: Using the idea from this question I get:

i = "1, 2, 3, 4, 5, 6, 60, 61, 3454353, 4354353, 345352, 2343241, 2343243"
for value in i.split(','):
    result = re.search('^(1(01*0)*1|0)+$', bin(int(value))[2:])
    if result:
        print('{} is divisible by 3'.format(value))

But you don't want to use regular expressions for this task.

A: A hopefully complete version, from reduction of a DFA [1]:

^([0369]|[147][0369]*[258]|(([258]|[147][0369]*[147])([0369]|[258][0369]*[147])*([147]|[258][0369]*[258])))+$

[1]: 'Converting Deterministic Finite Automata to Regular Expressions', C. Neumann, 2005

NOTE: There is a typo in Fig. 4: the transition from q_j to itself should read ce*b instead of ce*d.

A: Just for the heck of it:

reobj = re.compile(
    r"""\b            # Start of number
    (?:               # Either match...
      [0369]+         # a string of digits 0369
    |                 # or
      [147]           # 1, 4 or 7
      (?:             # followed by
        [0369]*[147]  # optional 0369s and one 1, 4 or 7
        [0369]*[258]  # optional 0369s and one 2, 5 or 8
      )*              # zero or more times,
      (?:             # followed by
        [0369]*[258]  # optional 0369s and exactly one 2, 5 or 8
      |               # or
        [0369]*[147]  # two more 1s, 4s or 7s, with optional 0369s in-between.
        [0369]*[147]
      )
    |                 # or the same thing, just the other way around,
      [258]           # this time starting with a 2, 5 or 8
      (?:
        [0369]*[258]
        [0369]*[147]
      )*
      (?:
        [0369]*[147]
      |
        [0369]*[258]
        [0369]*[258]
      )
    )+                # Repeat this as needed
    \b                # until the end of the number.""",
    re.VERBOSE)

result = reobj.findall(subject)

will find all numbers in a string that are divisible by 3.
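As a quick sanity check (mine, not from the answers above), the binary-representation regex from the second answer can be verified against plain arithmetic over a range of numbers:

```python
import re

# the pattern from the answer above: binary strings divisible by 3
pattern = re.compile(r'^(1(01*0)*1|0)+$')

mismatches = [n for n in range(1, 200)
              if bool(pattern.match(bin(n)[2:])) != (n % 3 == 0)]
print(mismatches)  # [] -> the regex agrees with n % 3 == 0 everywhere
```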
unknown
d7047
train
In order to use custom layout handles in your local.xml file, first you have to make an observer for them. To create an observer, you start out by adding it as an extension/module. Create the following files/folders if not present (the names Yourname and Modulename can be anything, just make sure they are the same wherever they show up, including upper/lower case):

Directory View

/app/etc/modules/Yourname_Modulename.xml
/app/code/local/Yourname/Modulename/etc/config.xml
/app/code/local/Yourname/Modulename/Model/Observer.php

Now that you have the file structure, let's look at the first file, Yourname_Modulename.xml, tucked in the app/etc/modules/ folder:

Yourname_Modulename.xml

<?xml version="1.0"?>
<config>
    <modules>
        <Yourname_Modulename>
            <codePool>local</codePool>
            <active>true</active>
        </Yourname_Modulename>
    </modules>
</config>

Now /app/code/local/Yourname/Modulename/etc/config.xml:

config.xml

<?xml version="1.0"?>
<config>
    <global>
        <models>
            <yournamemodulename>
                <class>Yourname_Modulename_Model</class>
            </yournamemodulename>
        </models>
    </global>
    <frontend>
        <events>
            <controller_action_layout_load_before>
                <observers>
                    <yourname_modulename_model_observer>
                        <type>singleton</type>
                        <class>Yourname_Modulename_Model_Observer</class>
                        <method>controllerActionLayoutLoadBefore</method>
                    </yourname_modulename_model_observer>
                </observers>
            </controller_action_layout_load_before>
        </events>
    </frontend>
</config>

And lastly the file /app/code/local/Yourname/Modulename/Model/Observer.php. For this one, you'll need to know what you'd like to name "your_layout_handle" and also how to determine, via PHP, whether your layout should be loaded.

Observer.php

<?php
class Yourname_Modulename_Model_Observer
{
    public function controllerActionLayoutLoadBefore(Varien_Event_Observer $observer)
    {
        // Get layout object
        $layout = $observer->getEvent()->getLayout();

        /*
         * Begin logic to determine if the layout handle should be applied.
         * Below determines if we are on a product view page.
         * Here is where you would modify the code for different layout handles.
         */
        if (Mage::registry('current_product')) {
            // Check if current_category is set
            if (Mage::registry('current_category')) {
                // Send layout update handle if the product was browsed
                $layout->getUpdate()->addHandle('your_layout_handle');
            } else {
                // Send layout update handle if the product was linked or searched
                $layout->getUpdate()->addHandle('your_other_handle');
            }
        }
    }
}

I would say that is all, but of course you now need to do something with your layout handles in app/code/design/frontend/package/theme/layout/local.xml. How it behaves is up to you, but for an example here are the sections that apply in my local.xml. The names I used for my handles were "catalog_product_view_browsed" and "catalog_product_view_searched".

local.xml

<!-- Jump To Relevant Section -->
<catalog_product_view_browsed>
    <reference name="left">
        <action method="unsetChild">
            <name>left.poll</name>
        </action>
    </reference>
    <reference name="right">
        <action method="insert">
            <blockName>left.poll</blockName>
            <siblingName>right.newsletter</siblingName>
            <after>0</after>
        </action>
    </reference>
</catalog_product_view_browsed>
<catalog_product_view_searched>
    <reference name="left">
        <action method="insert">
            <blockName>right.newsletter</blockName>
            <siblingName>left.vertnav</siblingName>
            <after>1</after>
        </action>
    </reference>
    <reference name="right">
        <action method="unsetChild">
            <name>right.newsletter</name>
        </action>
    </reference>
</catalog_product_view_searched>
<!-- End Relevant Section -->

You may need to refresh/clean your cache. That should be it.

A: There's unfortunately no way to track referring pages in Magento's layout XML, but you can tell if someone came to the product page from a search by checking $_SERVER['HTTP_REFERER']. If a user comes to a product page from a search, the referring url will look like this:

/catalogsearch/result/?q=[SEARCH TERM]
unknown
d7048
train
@media (max-width: 600px) {
  .block {
    margin: 0 11px;
  }
}

A: You just need to add a max-width: 90% (or whatever fits your requirement):

* {
  margin: 0;
  padding: 0;
}

.block {
  background: red;
  height: 100px;
  margin: 0 auto;
  width: 600px;
  max-width: 90%;
}

<div class='block'></div>

A: You can use media queries to control the styles of your elements.

Example: the following changes the background-color to lightgreen if the viewport is 480 pixels wide or wider (if the viewport is less than 480 pixels, the background-color will be pink):

@media screen and (min-width: 480px) {
  body {
    background-color: lightgreen;
  }
}

See the linked reference for more device-width breakpoints.

* {
  margin: 0;
  padding: 0;
}

.block {
  text-align: center;
  background: red;
  height: 100px;
  margin: 0 auto;
  width: 600px;
}

@media (max-width: 400px) {
  .block {
    margin: 0 50px;
  }
}

@media (min-width: 401px) and (max-width: 800px) {
  .block {
    margin: 0 30px;
  }
}

@media (min-width: 801px) {
  .block {
    margin: 0 10px;
  }
}

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8" />
  <meta name="viewport" content="width=device-width" />
  <title>repl.it</title>
</head>
<body>
  <div class="block">1</div>
  <script src="script.js"></script>
</body>
</html>
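The breakpoint logic itself is just threshold selection; the same margin choice the media queries above make, sketched in Python (breakpoints copied from the CSS, function name is mine):

```python
def margin_for_width(width_px):
    """Pick the horizontal margin (px) the media queries above would apply."""
    if width_px <= 400:
        return 50
    if width_px <= 800:
        return 30
    return 10

print(margin_for_width(320), margin_for_width(600), margin_for_width(1280))
# 50 30 10
```

Keeping the ranges non-overlapping (max 400 / 401–800 / min 801), as the CSS does, guarantees exactly one rule applies at any width.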
unknown
d7049
train
When you look at the official docs for convert you find that for binary data there is a style option of 0, 1, 2. Style option 1 gives the value in hex format. DECLARE @RFID INT = 1292202724; SELECT CONVERT(VARBINARY(8), @RFID) AS 'VARBINARY_VALUE'; SELECT CONVERT(NVARCHAR(15), CONVERT(VARBINARY(8), @RFID), 1 /* style 1 */) AS 'STRING_VALUE'; --Using Convert
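You can cross-check the conversion outside SQL Server; for instance, the same INT rendered as uppercase hex with a 0x prefix in Python (the format string is my own, mirroring what CONVERT with style 1 produces):

```python
rfid = 1292202724
# style 1 renders the binary value with a 0x prefix in hex
hex_string = "0x%X" % rfid
print(hex_string)                   # 0x4D0572E4
# and back again, to confirm the representation round-trips
print(int(hex_string, 16) == rfid)  # True
```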
unknown
d7050
train
In your AppDelegate.m:

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(keyboardWillHide)
                                                 name:UIKeyboardWillHideNotification
                                               object:nil];
    return YES;
}

- (void)keyboardWillHide
{
    NSLog(@"Bye");
}

A: The following delegate method of the text field is called automatically when the return (arrow) button of the keyboard is tapped:

- (BOOL)textFieldShouldReturn:(UITextField *)textField
unknown
d7051
train
You need to understand that Date does not only represent a date, but also a time. >= compares both date and time components of a Date object. Since you didn't specified any time in your date string, the API assumed it to be 00:00:00 in your local time, which is 18:30:00 of the previous day in UTC. Why UTC, you ask? That's what the description of the date always is. When you print a date, it always prints it in UTC time. To print it in your time zone, set the timeZone property of your date formatter and format it. One way to only compare the date components is by removing the time components. From this answer, this is how you remove time components: public func removeTimeStamp(fromDate: Date) -> Date { guard let date = Calendar.current.date(from: Calendar.current.dateComponents([.year, .month, .day], from: fromDate)) else { fatalError("Failed to strip time from Date object") } return date } Now this should be true: dateStartDate >= removeTimeStamp(fromDate: dateToday) A: As Sweeper explained, dateStartDate is at 00:00 of 28/02/2018, whereas dateToday is the current point in time, which is on the same day, but after midnight. Therefore dateStartDate >= dateToday evaluates to false. To compare the timestamps only to day granularity and ignore the time components you can use if Calendar.current.compare(dateStartDate, to: dateToday, toGranularity: .day) != .orderedAscending { print("Yes") } This will print "Yes" if dateStartDate is on the same or a later day than dateToday. The compare method returns .orderedAscending, .orderedSame, or .orderedDescending, depending on wether the first date is on a previous day, the same day, or a later day, than the second date. A: try to set your current date Formatter while comapring date. 
Below is your sample code, updated: var dateStartString = "28/02/2018" let dateFormatter = DateFormatter() dateFormatter.dateFormat = "dd/MM/yyyy" dateFormatter.locale = NSLocale.current guard let dateStartDate = dateFormatter.date(from: dateStartString) else { fatalError("ERROR: Date conversion failed due to mismatched format.") } var dateToday = Date() print(dateToday) let dateTodaystr = dateFormatter.string(from: dateToday) dateToday = dateFormatter.date(from: dateTodaystr)! print(dateToday) if(dateStartDate>=dateToday){ print("Yes") } else{ print("Today date is 28/02/2018. Why it print No?") } A: You need to set the timeZone for your dateFormatter: dateFormatter.timeZone = TimeZone(secondsFromGMT:0)!
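The same fix can be illustrated outside Swift. Below is a small Python sketch of the idea from the answers above (names are illustrative): a date parsed without a time defaults to midnight, so a raw comparison against "now" fails, while stripping the time components first compares calendar dates only.

```python
from datetime import datetime

def remove_timestamp(dt):
    """Zero out the time components, keeping only the calendar date."""
    return dt.replace(hour=0, minute=0, second=0, microsecond=0)

# A date string parsed without a time defaults to midnight...
start = datetime.strptime("28/02/2018", "%d/%m/%Y")
# ...while a "now" on the same day carries a time component.
now = datetime(2018, 2, 28, 9, 30, 15)

raw_comparison = start >= now               # False: 00:00 < 09:30 on the same day
date_only = start >= remove_timestamp(now)  # True: same calendar day
```

This mirrors the Swift removeTimeStamp(fromDate:) helper shown above.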
unknown
d7052
train
I find AutoMapper a great choice for creating my View Models etc. When saving, I find using a Service class with a method such as Save(SystemUser user) is best because then you have room for control over validation and other things that must be done. The mapping code for creating the entities you need to save is done by hand, because usually there are a lot more factors involved in the save than in the read. Therefore AutoMapper isn't such a good choice here. I normally write my service class with repositories for the various entities in its constructor. This is more to allow for unit testing, which you haven't mentioned, but would be a good design anyway. If this is the sort of thing you need then I think you're on the right track.
unknown
d7053
train
I use gulp-add-src to do that. var gulp = require('gulp'), coffee = require('gulp-coffee'), concat = require('gulp-concat'), addsrc = require('gulp-add-src'); // Scripts gulp.task('coffee', function () { return gulp.src('src/coffee/**/*.coffee') .pipe(coffee()) .pipe(addsrc('src/coffee/lib/*.js')) .pipe(concat('compiled.js')) .pipe(gulp.dest('dist')); }); A: Sadly, it doesn't. What you can do is use the underlying event-stream's merge method. Then you'll have one pipeline for the coffee files that get compiled and one for the javascript side. Here is an example Gulpfile.coffee: coffee = require 'gulp-coffee' es = require 'event-stream' gulp.task 'scripts', () -> es.merge( gulp.src(["public-dev/app.js", "public-dev/scripts/**/*.js"]) # ... gulp.src("public-dev/**/*.coffee").pipe coffee() ) .pipe concat 'all.js' .pipe gulp.dest "build" A: I marked Patrick J. S.'s answer as "Correct" because in reality, this is exactly what I needed to do. That said, "event-stream" isn't what I ended up going with, simply because I needed to preserve my dependency structure of files, and event-stream's merge() method does not preserve order nor does it have options to. Instead I opted for a package called streamqueue, which does preserve glob order. Probably slower, but order matters in my app unfortunately. In the future I will try to be as modular as possible.
unknown
d7054
train
Substitute 'moment()' for the hardcoded end date. Example: $('input[name="daterange"]').daterangepicker( { locale: { format: 'YYYY-MM-DD' }, startDate: '2013-01-01', endDate: moment() }, function(start, end, label) { alert("A new date range was chosen: " + start.format('YYYY-MM-DD') + ' to ' + end.format('YYYY-MM-DD')); }); (Also, not sure if this was a copy/paste issue - but you are missing the 'https:' prefix on your script and stylesheet imports.)
unknown
d7055
train
For me, it looks like a bug. Put some debugging code (the following) and see the result: <?php class Foo { private $bar; function __get($name){ echo "__get(".$name.") is called!\n"; debug_print_backtrace(); $x = $this->$name; return $x; } function __unset($name){ unset($this->$name); echo "Value of ". $name ." After unsetting is \n"; echo $this->$name; echo "\n"; } } echo "Before\n"; $foo = new Foo; echo "After1\n"; unset($foo->bar); echo "After2\n"; echo $foo->bar; echo "After3\n"; echo $foo->not_found; ?> The result is: Before After1 Value of bar After unsetting is __get(bar) is called! #0 Foo->__get(bar) called at [E:\temp\t1.php:17] #1 Foo->__unset(bar) called at [E:\temp\t1.php:24] PHP Notice: Undefined property: Foo::$bar in E:\temp\t1.php on line 9 After2 __get(bar) is called! #0 Foo->__get(bar) called at [E:\temp\t1.php:26] __get(bar) is called! #0 Foo->__get(bar) called at [E:\temp\t1.php:9] #1 Foo->__get(bar) called at [E:\temp\t1.php:26] PHP Notice: Undefined property: Foo::$bar in E:\temp\t1.php on line 9 After3 __get(not_found) is called! #0 Foo->__get(not_found) called at [E:\temp\t1.php:28] PHP Notice: Undefined property: Foo::$not_found in E:\temp\t1.php on line 9 A: It is invoked in 1) return $this->$name; and 2) echo $foo->bar; in the code: class Foo { private $bar; function __get($name){ echo "__get is called!"; return $this->$name; *** here *** } function __unset($name){ unset($this->$name); } } $foo = new Foo; unset($foo->bar); echo $foo->bar; *** and here *** __get() is utilized for reading data from inaccessible properties. So, unsetting the variable turns $foo->bar and $this->bar inaccessible. However, if unset is removed then $foo->bar is inaccessible but $this->bar is accessible. However, I don't know how PHP avoids calling the function recursively. Maybe PHP is smart, or the variable is set again at some point. A: The magic __get function is called every time you try to access a variable. If you look at your code, you do this exactly 2 times. 
Once in the unset function and once in the echo function. unset($foo->bar); echo $foo->bar;
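For comparison only (this is a Python sketch, not part of the PHP answers): Python's __getattr__ hook, like PHP's __get(), is only invoked when normal attribute lookup fails, so deleting the attribute makes subsequent reads go through the hook:

```python
class Foo:
    def __init__(self):
        self.bar = 2  # a normal, accessible attribute

    def __getattr__(self, name):
        # Invoked only when normal attribute lookup fails,
        # e.g. after the attribute has been deleted.
        return f"__getattr__ called for {name!r}"

foo = Foo()
before = foo.bar   # normal lookup; the hook is NOT called
del foo.bar        # comparable to PHP's unset()
after = foo.bar    # lookup fails; the hook IS called
```

Unlike the PHP case, Python does not warn here; the hook simply supplies the value.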
unknown
d7056
train
Of course, on the back-end, to remove unneeded network transmission. You need to do it only once for all your front-ends. I don't think there are practical use-cases where it's more suitable to do it on the front-end: even if the back-end doesn't trim and thus saves its CPU a bit, much more processing is done by the back-end's underlying OS to transmit the redundant bytes to clients.
unknown
d7057
train
The duration window indicates the time in which the backup will start. It can start anywhere within the time specified and could last longer than the window.
unknown
d7058
train
The issue is with your setArchived method: type hints cannot be used with scalar types. You must remove the bool type: public function setArchived($archived) { $this->archived = $archived; return $this; } (perhaps you wrote 'bool' instead of 'boolean' when using doctrine:generate:entities?) A: Why not use a column type of "boolean"? /** * @ORM\Column(type="boolean") */ private $archived; Then in your update function pass true/false instead of 1/0: /** * @ORM\PrePersist * @ORM\PreUpdate */ public function updatedDefaults() { if($this->getArchived() == null) { $this->setArchived(true); } }
unknown
d7059
train
AsyncTask is designed to work best when nested in an Activity class. What makes AsyncTask 'special' is that it isn't just a worker thread - instead it combines a worker thread which processes the code in doInBackground(...) with methods which run on the Activity's UI thread - onProgressUpdate(...) and onPostExecute(...) being the most commonly used. By periodically calling publishProgress(...) from doInBackground(...), the onProgressUpdate(...) method is called allowing it to manipulate the Activity's UI elements (progress bar, text to show name of file being downloaded, etc.). In short, rather than firing your 'update' Activity from an AsyncTask, your update Activity itself should have a nested AsyncTask which it uses to process the update and publish progress to the UI. A: You have two options: 1) Pass an instance of activity to your AsyncTask constructor in order to invoke some method on it: new MyTask(this).execute(); So, you can do: public MyTask (Activity activity) { this.activity = activity; } public void onPostExecute(...) { activity.someMethod(); } 2) Pass a Handler instance and send a message from onPostExecute() to the activity.
unknown
d7060
train
You might be able to make use of nonblocking if, say, proc 1 can start doing something with row 1 while waiting for row 4. But blocking should be fine to start with, too. There is a lot of synchronization built into the algorithm. Everyone has to work based on the current row. So the receiving processes will need to know how much work to expect for each iteration of this procedure. That's easy to do if they know the total number of rows, and which iteration they're currently working on. So if they're expecting two rows, they could do two blocking recvs, or launch 2 nonblocking recvs, wait on one, and start processing the first row right away. But probably easiest to get blocking working first. You may find it advantageous even at the start to have the master process doing isends so that all the sends can be "launched" simultaneously; then a waitall can process them in whatever order. But better than this one-to-many communication would probably be to use a scatterv, which one could expect to proceed more efficiently. Something that has been raised in many answers and comments to this series of questions but which I don't think you've ever actually addressed -- I really hope that you're just working through this project for educational purposes. It would be completely, totally, crazy to actually be implementing your own parallel linear algebra solvers when there are tuned and tested tools like Scalapack, PETSc, and Plapack out there free for the download.
unknown
d7061
train
To get all those values please use this code: databasePostsReference = FirebaseDatabase.getInstance().getReference().child("posts").child(postId); ValueEventListener eventListener = new ValueEventListener() { @Override public void onDataChange(DataSnapshot dataSnapshot) { List<UserPostPOJO> list = new ArrayList<>(); String elevation = (String) dataSnapshot.child("elevation").getValue(); String feel = (String) dataSnapshot.child("feel").getValue(); String uid = (String) dataSnapshot.child("uid").getValue(); String uri = (String) dataSnapshot.child("uri").getValue(); UserPostPOJO userPostPOJO = new UserPostPOJO(); userPostPOJO.setElevation(elevation); userPostPOJO.setFeel(feel); userPostPOJO.setUid(uid); userPostPOJO.setUri(uri); list.add(userPostPOJO); } @Override public void onCancelled(DatabaseError databaseError) {} }; databasePostsReference.addListenerForSingleValueEvent(eventListener); Here, postId is the unique id generated by the push() method. And notice that the declaration of your list must be inside the onDataChange() method; otherwise, you'll get null. Hope it helps.
unknown
d7062
train
The simplest way would be to loop through every unique value and determine the row and column positions that match each value. Something like this could work: val = unique(m); pos = cell(1, numel(val)); for ii = 1 : numel(val) [r,c] = find(m == val(ii)); pos{ii} = [r,c]; end pos would be a cell array containing all of the positions for each unique value. We can show what these positions are by: >> format compact; celldisp(pos) pos{1} = 1 1 2 1 pos{2} = 1 2 pos{3} = 1 3 pos{4} = 2 2 3 2 pos{5} = 2 3 pos{6} = 3 1 pos{7} = 3 3 This of course is not meaningful unless you specifically show each unique value per group of positions. Therefore, we can try something like this instead where we can loop through each element in the cell array as well as display the corresponding element that each set of positions belongs to: for ii = 1 : numel(val) fprintf('Value: %f\n', val(ii)); fprintf('Positions:\n'); disp(pos{ii}); end What I get is now: Value: 1.000000 Positions: 1 1 2 1 Value: 2.000000 Positions: 1 2 Value: 3.000000 Positions: 1 3 Value: 4.000000 Positions: 2 2 3 2 Value: 5.000000 Positions: 2 3 Value: 6.000000 Positions: 3 1 Value: 7.000000 Positions: 3 3 A: This gives you what you want, except for the fact that indices of unique elements are also wrapped in cell twice, just like the indices of repeating elements: m = [1,2,3;1,4,5;6,4,7]; [~, idx] = ismember(m(:), unique(m(:))); linInd = 1:numel(m); [i,j] = ind2sub(size(m), linInd); res = accumarray(idx, linInd, [], @(x) {num2cell([i(x);j(x)]',2)}); Result: >> celldisp(res) res{1}{1} = 2 1 res{1}{2} = 1 1 res{2}{1} = 1 2 res{3}{1} = 1 3 res{4}{1} = 2 2 res{4}{2} = 3 2 res{5}{1} = 2 3 res{6}{1} = 3 1 res{7}{1} = 3 3
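The same grouping idea can be sketched outside MATLAB. Here is an illustrative Python version that maps every distinct value to its list of (row, column) positions (zero-based here, unlike MATLAB's one-based indexing):

```python
def positions_by_value(matrix):
    """Map each distinct value to the list of (row, col) positions it occupies."""
    pos = {}
    for r, row in enumerate(matrix):
        for c, value in enumerate(row):
            pos.setdefault(value, []).append((r, c))
    return pos

# The example matrix from the question.
m = [[1, 2, 3],
     [1, 4, 5],
     [6, 4, 7]]
pos = positions_by_value(m)
```

As in the accumarray answer, repeated values collect several positions while unique values get a single-entry list.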
unknown
d7063
train
files = dir('*.csv') ; % this gives all csv files present in folder N = length(files) ; % total number of files in the folder for i = 1:N thisfile = files(i).name ; end In the above, files is a structure array; it has all the information about your csv files. You can extract the name of each file using files(i).name where i = 1,2,...N. If you want all the file names at once, use filenames = {files.name}' ; The above line gives you the names of all csv files in the folder as a cell array.
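The equivalent pattern in Python (an illustrative sketch, not MATLAB) uses the glob module to enumerate the CSV files and then loops over their names; the throwaway folder below exists only to make the example self-contained:

```python
import glob
import os
import tempfile

def csv_names(folder):
    """Return the bare names of all .csv files in a folder, sorted."""
    paths = glob.glob(os.path.join(folder, "*.csv"))
    return sorted(os.path.basename(p) for p in paths)

# Create a throwaway folder with a few files to demonstrate.
tmp = tempfile.mkdtemp()
for name in ("a.csv", "b.csv", "notes.txt"):
    open(os.path.join(tmp, name), "w").close()

names = csv_names(tmp)  # only the CSV files, by name
```

This mirrors dir('*.csv') followed by {files.name}'.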
unknown
d7064
train
If you get "add" was called on null, then the problem has to do with _pickedStartDate. So perhaps try something like: controller: _endDateController..text = _pickedStartDate != null ? DateFormat("dd.MM.yyyy").format(_pickedStartDate.add(Duration(days: 365))) : '',
unknown
d7065
train
How about this? var allClasses = $("#QBS").find('li a[class^="qb_"]') .map(function () { return this.className.split(" ").pop(); }).get(); console.log(allClasses); Fiddle This assumes the class starting with qb_ comes first and you only want the last class of each match. If all your elements have the class qb_mode, then: var allClasses = $("#QBS").find('.qb_mode').map(function () { return this.className.split(" ").pop(); }).get(); If you want all of them, then: var allClasses = $("#QBS").find('.qb_mode').map(function () { var cls = this.className.replace(/qb_mode/,''); return cls.trim().split(/\W+/); }).get(); console.log(allClasses); Fiddle A: If I understood you correctly, how about: var name = $('#QBS a.qb_mode.starting').prop('class').replace(/\s*(qb_mode|starting)\s*/g,''); console.log(name); // Rogers See demo here. A: a=document.getElementById('QBS'); var b=a.getElementsByClassName("qb_mode"); var i, j=b.length, result=[]; for(i=0;i<j;i++) { c=b[i].className.split(" "); result.push(c.pop()); } return result; A: fiddle http://jsfiddle.net/3Amt3/ var names=[]; $("#QBS > li a").each(function(i){ var a=$(this).attr("class").split(" "); names[i]=a[(a.length-1)]; console.log("Name is " + names[i]); }); or a more precise selector $("#QBS > li a.qb_mode").each( ....
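All of those answers rely on the same string manipulation: split the class attribute on whitespace and take the last token. A language-neutral Python sketch of that logic (the sample class strings are made up to mirror the jQuery example):

```python
def last_class_names(class_attrs, prefix="qb_"):
    """For each class attribute string whose class list contains a class
    starting with `prefix`, return the last class in the list."""
    result = []
    for attr in class_attrs:
        classes = attr.split()
        if any(c.startswith(prefix) for c in classes):
            result.append(classes[-1])  # same idea as .split(" ").pop()
    return result

attrs = ["qb_mode starting Rogers", "qb_mode backup Smith", "unrelated thing"]
names = last_class_names(attrs)
```

Elements without a qb_-prefixed class are simply skipped, like the jQuery selector does.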
unknown
d7066
train
They differ in the following ways: int array[40]; int * arrayp; First, try to see the size of both; it will be different. For a pointer it is the same every time, whereas for an array it varies with your array size: sizeof(array);\\Output 80 sizeof(arrayp);\\Output 4 (on 32-bit machines) This means the compiler treats all the integers in an array as a single object, which is not possible with pointers. Secondly, perform an increment operation: array++;\\Error arrayp++;\\No error If an array were a pointer, then its pointed-to location could have been changed, as in the second case with arrayp, but that is not so.
unknown
d7067
train
public function users() { return $this->belongsToMany(User::class, 'message')->orderBy('id', 'desc'); } If you want to limit the number of users returned, append ->take(10) to take only the last 10 users.
unknown
d7068
train
Use quotes so the shell doesn't barf on special characters. REG='/^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)(-(0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(\.(0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*)?(\+[0-9a-zA-Z-]+(\.[0-9a-zA-Z-]+)*)?$/g' Note that this regex will not work with grep. POSIX regex syntax is different and more limited than Perl syntax, which is what you've got. A: You ran into two major problems * *Bash handling special characters, escapes, and anything else it found in your string before storing it. This is solved by single quoting your string to prevent bash expansion/interpretation. *The Regex was written in Perl syntax, which grep does not support, it uses POSIX which is slightly different from Perl even though the two are similar. This is solved by removing the surrounding / /g and replacing all instances of \d which is not recognized by POSIX with [0-9]. Result: REG='^(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)(-(0|[1-9][0-9]*|[0-9]*[a-zA-Z-][0-9a-zA-Z-]*)(\.(0|[1-9][0-9]*|[0-9]*[a-zA-Z-][0-9a-zA-Z-]*))*)?(\+[0-9a-zA-Z-]+(\.[0-9a-zA-Z-]+)*)?$'
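As a quick sanity check of the (Perl-style) pattern itself, it can be exercised in Python, whose re module accepts the \d shorthand that POSIX grep rejects (the test strings below are illustrative):

```python
import re

# The semver pattern from the question, minus the surrounding /.../g.
SEMVER = re.compile(
    r"^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)"
    r"(-(0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)"
    r"(\.(0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*)?"
    r"(\+[0-9a-zA-Z-]+(\.[0-9a-zA-Z-]+)*)?$"
)

def is_semver(s):
    """True if `s` is a valid semantic version string per the pattern above."""
    return SEMVER.match(s) is not None
```

Note how leading zeros and missing components are rejected, exactly as the `(0|[1-9]...)` alternations intend.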
unknown
d7069
train
Let's start from the innermost type, that is {id = 1, question = "question", answer = "answer"}. It can't be a key-value pair since it has three properties: id, question, answer. However, you can turn it into a named tuple (int id, string question, string answer) The declaration will be (int id, string question, string answer)[][][] courseData = new (int, string, string)[][][] { new (int, string, string)[][]//chapter 1 { new (int, string, string)[] { // Long form (id : 1, question : "question", answer : "answer"), // Short form: we can skip id, question, answer names (2, "question", "answer"), } } }; Now you have an array (array of array of array to be exact): int course = 1; int chapter = 1; int question = 2; // - 1 since arrays are zero based string mySecondAnswer = courseData[course - 1][chapter - 1][question - 1].answer;
unknown
d7070
train
Do these 4 steps. 1: Add this library to the dependencies of the app build.gradle: implementation 'com.android.support:multidex:1.0.3' 2: Add in the defaultConfig of the app build.gradle: defaultConfig { //other configs multiDexEnabled true //add this line } 3: Create a new Java class like this: public class ApplicationClass extends MultiDexApplication { @Override public void onCreate() { super.onCreate(); } } 4: Add this to your manifest (in application tag): <application android:name=".ApplicationClass" android:icon="@mipmap/ic_launcher" android:label="@string/app_name"> A: The same situation happened in my Visual Studio 2019 Xamarin project, and the way to solve this problem is just like what is mentioned in the link: Error:Cannot fit requested classes in a single dex file.Try supplying a main-dex list. # methods: 72477 > 65536 multiDexEnabled true Checking the Enable multiDex checkbox in the options page of the ***.Droid project solves the problem. A: Before you take any decision, as said in the Google documentation: Before configuring your app to enable use of 64K or more method references, you should take steps to reduce the total number of references called by your app code, including methods defined by your app code or included libraries. So try to remove useless imports in your app gradle and do a clean of the project, or enable multidex. Source: https://developer.android.com/studio/build/multidex
unknown
d7071
train
You are using 7.1.0, right? Remove this and try server.servlet-path=/*
unknown
d7072
train
The answer below might work, but I decided to go with this: execute "ALTER INDEX material_donations_pkey RENAME TO material_donation_requests_pkey;" I chose this because it is the command that the migration was trying to run automatically as part of the original migration. That command wasn't automatically part of the pre-4.0 rails when I renamed this table the first time, so I ran it now. I felt more comfortable doing exactly what rails is doing presently. A: Here is a hacky solution: add another migration file and do this before changing the table name a second time: execute "ALTER TABLE material_donation_requests DROP CONSTRAINT material_donations_pkey;" execute "ALTER TABLE material_donation_requests ADD PRIMARY KEY (id);" And you probably need to do something similar after your next renaming. A: Adding my solution for a similar problem. Below is the error while running migrations 01 PG::UndefinedTable: ERROR: relation "fundraise_stories_pkey" does not exist 01 : ALTER INDEX "fundraise_stories_pkey" RENAME TO "fundraisers_pkey" Connected to the database via pgadmin and selected constraints for the table fundraise_stories. It was showing the constraint name as "fundrise_stories_pkey". So it was some old mistake, because the constraint name doesn't match the table name. Solution: * *Find the existing index name under the constraints section for the table. In my case it was 'fundrise_stories_pkey' *Rename the index before renaming the table *Finally, rename the table. Below is the modified migration to rename the index before renaming the table. 
def self.up execute "ALTER INDEX fundrise_stories_pkey RENAME TO fundraise_stories_pkey;" rename_table :fundraise_stories, :fundraisers end Log D, [2020-02-02T17:16:27.428294 #7363] DEBUG -- : (0.2ms) BEGIN == 20200127102616 RenameFundraiseStoryTableToFundraisers: migrating =========== -- execute("ALTER INDEX fundrise_stories_pkey RENAME TO fundraise_stories_pkey;") D, [2020-02-02T17:16:27.434366 #7363] DEBUG -- : (5.5ms) ALTER INDEX fundrise_stories_pkey RENAME TO fundraise_stories_pkey; -> 0.0061s -- rename_table(:fundraise_stories, :fundraisers) D, [2020-02-02T17:16:27.435722 #7363] DEBUG -- : (0.7ms) ALTER TABLE "fundraise_stories" RENAME TO "fundraisers" D, [2020-02-02T17:16:27.438769 #7363] DEBUG -- : (0.3ms) ALTER TABLE "public"."fundraise_stories_id_seq" RENAME TO "fundraisers_id_seq" D, [2020-02-02T17:16:27.439334 #7363] DEBUG -- : (0.2ms) ALTER INDEX "fundraise_stories_pkey" RENAME TO "fundraisers_pkey" D, [2020-02-02T17:16:27.445452 #7363] DEBUG -- : (0.8ms) ALTER INDEX "index_fundraise_stories_on_bank_account_id" RENAME TO "index_fundraisers_on_bank_account_id" D, [2020-02-02T17:16:27.446153 #7363] DEBUG -- : (0.3ms) ALTER INDEX "index_fundraise_stories_on_creator_id_and_creator_type" RENAME TO "index_fundraisers_on_creator_id_and_creator_type" -> 0.0131s == 20200127102616 RenameFundraiseStoryTableToFundraisers: migrated (0.0193s) ==
unknown
d7073
train
Idea currently does not allow checking out remote branches without creating a local one that tracks it. Here is the request: https://youtrack.jetbrains.com/issue/IDEA-140077 Since there is no local branch that matches the remote one, you need to use the Create branch and select the remote one in the From dropdown.
unknown
d7074
train
Hi, try this JSON link: http://gdata.youtube.com/feeds/api/users/UserName/uploads?&v=2&max-results=50&alt=jsonc The content object it returns holds my channel's videos. Reference: http://code.google.com/apis/youtube/2.0/reference.html#Response_codes_uploading_videos
unknown
d7075
train
a % b in C++, by default: (-7 / 3) => -2 -2 * 3 => -6 so a % b => -1 (7 / -3) => -2 -2 * -3 => 6 so a % b => 1 In Python: -7 % 3 => 2 7 % -3 => -2 To convert a C++ result to the Python one: (b + (a % b)) % b A: The sign in such cases (i.e. when one or both operands are negative) is implementation-defined. The spec says in §5.6/4 (C++03), The binary / operator yields the quotient, and the binary % operator yields the remainder from the division of the first expression by the second. If the second operand of / or % is zero the behavior is undefined; otherwise (a/b)*b + a%b is equal to a. If both operands are nonnegative then the remainder is nonnegative; if not, the sign of the remainder is implementation-defined. That is all the language has to say, as far as C++03 is concerned. A: From ISO14882:2011(e) 5.6-4: The binary / operator yields the quotient, and the binary % operator yields the remainder from the division of the first expression by the second. If the second operand of / or % is zero the behavior is undefined. For integral operands the / operator yields the algebraic quotient with any fractional part discarded; if the quotient a/b is representable in the type of the result, (a/b)*b + a%b is equal to a. The rest is basic math: (-7 / 3) => -2 -2 * 3 => -6 so a % b => -1 (7 / -3) => -2 -2 * -3 => 6 so a % b => 1 Note that If both operands are nonnegative then the remainder is nonnegative; if not, the sign of the remainder is implementation-defined. from ISO14882:2003(e) is no longer present in ISO14882:2011(e)
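The arithmetic above can be checked directly in Python, which uses floored division for %. In this sketch, int(a / b) stands in for C's truncate-toward-zero integer division, and the (b + (a % b)) % b conversion from the first answer is verified:

```python
def c_style_rem(a, b):
    """Remainder as C/C++ computes it: the quotient truncates toward zero."""
    q = int(a / b)       # truncating division, like C's integer /
    return a - q * b     # satisfies (a/b)*b + a%b == a

def c_to_python_mod(a, b):
    """Convert a C-style remainder into Python's floored % result."""
    return (b + c_style_rem(a, b)) % b
```

So c_style_rem(-7, 3) is -1 while Python's -7 % 3 is 2, and the conversion bridges the two.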
unknown
d7076
train
To add another, perhaps cleaner, option, I suggest the enum variation: What is the best approach for using an Enum as a singleton in Java? A: As far as readability goes, I would go with the initialization-on-demand holder. The double-checked locking, I feel, is a dated and ugly implementation. Technically speaking, by choosing double-checked locking you would always incur a volatile read on the field, whereas you can do normal reads with the initialization-on-demand holder idiom. A: Initialisation-on-demand holder only works for a singleton; you can't have per-instance lazily loaded elements. Double-checked locking imposes a cognitive burden on everyone who has to look at the class, as it is easy to get wrong in subtle ways. We used to have all sorts of trouble with this until we encapsulated the pattern into a utility class in our concurrency library. We have the following options: Supplier<ExpensiveThing> t1 = new LazyReference<ExpensiveThing>() { protected ExpensiveThing create() { … // expensive initialisation } }; Supplier<ExpensiveThing> t2 = Lazy.supplier(new Supplier<ExpensiveThing>() { public ExpensiveThing get() { … // expensive initialisation } }); Both have identical semantics as far as the usage is concerned. The second form makes any references used by the inner supplier available to GC after initialisation. The second form also has support for timeouts with TTL/TTI strategies. A: Initialization-on-demand holder is always best practice for implementing the singleton pattern. It exploits the following features of the JVM very well. * *Static nested classes are loaded only when called by name. *The class loading mechanism is by default concurrency protected. So when a thread initializes a class, the other threads wait for its completion. Also, you don't have to use the synchronized keyword, which makes your program 100 times slower. A: I suspect that the initialization-on-demand holder is marginally faster than double-checked locking (using a volatile). 
The reason is that the former has no synchronization overhead once the instance has been created, but the latter involves reading a volatile which (I think) entails a full memory read. If performance is not a significant concern, then the synchronized getInstance() approach is the simplest.
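For a cross-language illustration of the trade-off being discussed (not Java, and only a sketch: CPython's GIL hides the memory-visibility subtleties that make the Java version hard to get right), here is the double-checked shape in Python:

```python
import threading

class Lazy:
    """Lazily create a value at most once, even under concurrent access."""

    def __init__(self, factory):
        self._factory = factory
        self._lock = threading.Lock()
        self._created = False
        self._value = None

    def get(self):
        if not self._created:          # cheap first check
            with self._lock:           # serialize initializers
                if not self._created:  # re-check under the lock
                    self._value = self._factory()
                    self._created = True
        return self._value

calls = []
lazy = Lazy(lambda: (calls.append("init"), object())[1])

# Hammer it from several threads; the factory must still run only once.
threads = [threading.Thread(target=lazy.get) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The two checks mirror the Java idiom: the outer one avoids locking on the hot path, the inner one guarantees single initialization.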
unknown
d7077
train
If you really need this, you need to merge the two function clauses. One way to do this: func what x = case what of "add" -> x+a "mul" -> x*a where a = 2 A: You can also introduce a second function: function fName x = function' fName x where a = 2 function' "sum" x = x + a function' "product" x = x * a A: (Forgive me because I'm a newbie.) I don't think this is possible. The scope of the where "block" is the function it's defined in. What you can do is this, though: Prelude> let a = 2 Prelude> let sum x = x + a Prelude> let product x = x * a Prelude> sum 3 5 This is done in GHCi. You may be concerned that everyone can see a, but if this were in a .hs file, you could make it a module and just not export a, and then only these functions could see it. A: I think you can define a new function geta: geta = 2 And then you can use the geta function in any other function. I don't think mixing all the functions is a good way; maybe you will have 20 functions needing the same value.
unknown
d7078
train
Hopefully this pipe extract will be of some help. If var is true then make an HTTP request and pipe the response to the map operator. If there's an HTTP error it will pipe null. If the var is false it will pipe the value in the of statement to the map operator. mergeMap((var) => if (var === true) { return this.myService.doSomeHttpRequest().pipe( catchError(() => { return of(null) }) ) else { return of('something') } )), filter(res => res != null), map(res => { if (res === 'something') { } else } }) This is coded freehand, sorry for any missing brackets and things like that but it should get you on the right route. If you want the pipe to not pass anything to map then you could consider using the filter operator. This will block the pipe from executing anything further.
unknown
d7079
train
How about using this syntax: const dispatch = this.props.dispatch; Meteor.call('seed', mergedobj, function(err, seed){ if(err){ // error handling here. }else{ dispatch(seedAction(seed)); } }) So you don't have to deal with the this keyword inside the callback.
unknown
d7080
train
As far as I understood your scenario, you are checking whether your password is auto-populated in the password field, and if so you need to clear it. You can do this: first check whether there is any value in the password field, and if so, clear it. int passLength = driver.findElement(By.id("Password")).getAttribute("value").length(); if(passLength>0) { driver.findElement(By.id("Password")).clear(); }
unknown
d7081
train
The problem is with your loop; you need to iterate in one loop instead of an inner loop: public static void OutputOfFile(char[] x) throws IOException { File file = new File("test"); PrintWriter out = new PrintWriter(file.getAbsoluteFile()); out.print(x); out.close(); } public static void main(String[] args) throws IOException { BufferedReader reader = new BufferedReader(new InputStreamReader(System.in)); String x = reader.readLine(); char[] x1 = x.toCharArray(); char[] x2 = new char[x1.length]; for (int i = x1.length - 1, k = 0; i >= 0; i--, k++) { x2[k] = x1[i]; } OutputOfFile(x2); }
unknown
d7082
train
Why not use a dynamic inventory based on the MAC address of your devices? Just a small example. Of course it needs to be improved, but it is for your reference: #!/usr/bin/env python # -*- coding:utf-8 -*- from __future__ import (absolute_import, division, print_function, unicode_literals) import json import socket import subprocess import re def main(): print(json.dumps(inventory(), sort_keys=True, indent=2)) def inventory(): ip_address = find_ip() return { 'all': { 'hosts': [ip_address], 'vars': {}, }, '_meta': { 'hostvars': { ip_address: { 'ansible_ssh_user': 'ansible', } }, }, 'ip': [ip_address] } def find_ip(): lines = subprocess.check_output(['arp', '-a']).decode('utf-8').split('\n') for line in lines: if re.search('a0:d7:95:1a:80:f8', line): ip = re.search(r"(\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b)", line) return ip.group(1) if __name__ == '__main__': main() Output: { "_meta": { "hostvars": { "192.168.0.100": { "ansible_ssh_user": "ansible" } } }, "all": { "hosts": [ "192.168.0.100" ], "vars": { "ansible_connection": "local" } }, "ip": [ "192.168.0.100" ] } Example: ansible-playbook -i inventories/dynamic/mydyn.py hosts.yml PLAY [Test wait] **************************************************************************************************************** TASK [Debug] ******************************************************************************************************************** ok: [192.168.0.100] => { "ansible_host": "192.168.0.100" } TASK [Ping] ********************************************************************************************************************* ok: [192.168.0.100] PLAY RECAP ********************************************************************************************************************** 192.168.0.100 : ok=2 changed=0 unreachable=0 failed=0
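The MAC-to-IP lookup at the heart of that script can be pulled out into a pure helper and exercised without actually running arp (a sketch; the sample arp -a output format is an assumption):

```python
import re

def ip_for_mac(arp_output, mac):
    """Return the IPv4 address on the `arp -a` line containing `mac`, or None."""
    for line in arp_output.splitlines():
        if re.search(re.escape(mac), line):
            m = re.search(r"(\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b)", line)
            if m:
                return m.group(1)
    return None

sample = (
    "? (192.168.0.1) at 11:22:33:44:55:66 [ether] on eth0\n"
    "? (192.168.0.100) at a0:d7:95:1a:80:f8 [ether] on eth0\n"
)
ip = ip_for_mac(sample, "a0:d7:95:1a:80:f8")
```

Keeping the parsing pure makes the find_ip logic above unit-testable; the script would then just feed it the real subprocess output.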
unknown
d7083
train
Instead of using top:-0.5em; you can use bottom: 0.4em; as it's not good practice to use negative values for position. A: Check this one, I hope it will help you. HTML <p> <input type="password" class="pw-box" id="pwbox-33" name="post_password"> <input type="submit" class="pw-submit" value="Submit" name="Submit"> </p> CSS p { width:50%; display: inline; float: left; padding-left: 1%; padding-right: 1%; position: relative; font-size: 1em; line-height: 1.5em; margin:20px 0 0 20px; background:rgba(0,0,0,0.1); } #pwbox-33 { width: 200px; border:0; background: #606D80; border-top:solid 1px #1F3453; color: #DCE0E6; margin: 0; padding: 0.5em; text-shadow: 0 1px 0 #1F1F20; vertical-align: top; } .pw-submit{ cursor: pointer; background:#2B4C7E; color: #DCE0E6; padding: 0.5em 1em; position: absolute; text-shadow: 0 -1px 0 #1F1F20; top: -0.3em; border:0; font-size:12px; font-family:arial; left:200px; border-bottom:solid 0.5em #0E2952; outline:0; } http://jsfiddle.net/YjNJ5/3/ A: Found a solution that looks exactly the same as intended. http://jsfiddle.net/XBjz3/8/ Here are the changes from the original. #pwbox-33 { vertical-align: bottom; } .pw-submit { top: 0; vertical-align: bottom; } Tested to work on Firefox, Chrome, IE, Safari, Opera, and the Android stock browser.
unknown
d7084
train
If I am clear, you just want to filter your first-column string and the rest separately. Why not just use a simple counter for this:

while (rowIterator.hasNext()) {
    Row row = rowIterator.next();
    String rowContent = "";
    Iterator<Cell> cellIterator = row.cellIterator();
    while (cellIterator.hasNext()) {
        Cell cell = cellIterator.next();
        rowContent = rowContent + cell.toString();
    }
    // Code for saving rowContent, printing it, or whatever you want
    // to do with the text of the complete row
}

rowContent gives the concatenation of every cell of a single row in each iteration. (Note: it must start as "" rather than null, otherwise the literal text "null" is prepended to the first cell.)

A: Like you did in the switch block with "break". But what I think you want is this:

Iterator<Cell> cellIterator = row.cellIterator();
boolean stop = false;
while (cellIterator.hasNext()) {
    Cell cell = cellIterator.next();
    switch (cell.getCellType()) {
        case Cell.CELL_TYPE_STRING:
            System.out.print(cell.getStringCellValue() + "\t\t");
            list1.add(cell.getStringCellValue());
            stop = true;
            break;
    }
    if (stop) {
        break;
    }
}

This stops the while loop when you find a string cell and then moves on to the next row. Make any condition you need to break the while loop; for example, collect string columns and, when you have found the desired set, set stop to true to get to the next row.

A: External link: Busy Developers' Guide to HSSF and XSSF Features. Here is an example that should work.

Maven dependencies:

<dependency>
    <groupId>org.apache.poi</groupId>
    <artifactId>poi</artifactId>
    <version>3.9</version>
</dependency>
<dependency>
    <groupId>org.apache.poi</groupId>
    <artifactId>poi-ooxml</artifactId>
    <version>3.9</version>
</dependency>

Code:

import org.apache.poi.hssf.usermodel.HSSFDataFormatter;
import org.apache.poi.openxml4j.exceptions.InvalidFormatException;
import org.apache.poi.ss.usermodel.*;

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Iterator;

public class StackOverflowQuestion18095443 {
    public static void main(String[] args) {
        if (args.length != 1) {
            System.out.println("Please specify the file name as a parameter");
            System.exit(-1);
        }
        String sfilename = args[0];
        File file = new File("C:\\Users\\student3\\" + sfilename + ".xls");
        read(file);
    }

    public static void read(File file) {
        try (InputStream in = new FileInputStream(file)) {
            HSSFDataFormatter formatter = new HSSFDataFormatter();
            Workbook workbook = WorkbookFactory.create(in);
            Sheet sheet = workbook.getSheetAt(0);
            Iterator<Row> rowIterator = sheet.iterator();
            while (rowIterator.hasNext()) {
                Row row = rowIterator.next();
                StringBuilder rowText = new StringBuilder();
                Iterator<Cell> cellIterator = row.cellIterator();
                while (cellIterator.hasNext()) {
                    Cell cell = cellIterator.next();
                    String cellAsStringValue = formatter.formatCellValue(cell);
                    rowText.append(cellAsStringValue).append(" ");
                }
                System.out.println(rowText.toString().trim());
            }
        } catch (InvalidFormatException | IOException e) {
            e.printStackTrace();
        }
    }
}

As for terminating the iteration, you can conditionally break from the loop. Or you could simply not use an iterator. Notice that you can obtain a Cell from a Row using a named reference (this allows you to refer to a cell by name, such as "A2", just like you would in Excel) or simply by the column index within the row.
unknown
d7085
train
Always use sequential consistency if in doubt :)

memory_order_seq_cst: the operation is ordered in a sequentially consistent manner. All operations using this memory order are ordered to happen once all accesses to memory that may have visible side effects on the other threads involved have already happened. This is the strictest memory order, guaranteeing the least unexpected side effects between thread interactions through non-atomic memory accesses.

It applies either to the consumer's store which sends the signal back to the producer, or to the std::atomic_thread_fence before it.

void producer() {
    for (;;) {
        data[write_index] = <something>;
        status[write_index].store(READABLE, std::memory_order_release);

        // do something else while the consumer reads it

        // the producer knows for sure that the reader will consume it and signal back
        while (status[write_index].load(std::memory_order_acquire) != WRITEABLE)
            _mm_pause(); // or at least std::this_thread::yield();
    }
}

void consumer() {
    while (status[read_index].load(std::memory_order_acquire) != READABLE)
        _mm_pause(); // or at least std::this_thread::yield();

    // do something with data[read_index]

    // signal back and prevent loads from being reordered below
    status[read_index].store(WRITEABLE, std::memory_order_seq_cst);
}

I'm afraid that this will result in excessive synchronization on x86, but until someone comes up with a more tricky (yet still portable C++11) approach, consider this the price of portability.
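For intuition only, the same two-sided handshake can be sketched with Python threads, where two Event objects play the role of the READABLE/WRITEABLE status words. This models the signaling protocol, not C++ memory-order semantics (Python's GIL and Event already provide the necessary ordering):

```python
import threading

def run_handshake(values):
    """Single-slot producer/consumer handshake: the producer waits until
    the consumer signals the slot is writeable again before reusing it."""
    readable = threading.Event()   # set => consumer may read the slot
    writeable = threading.Event()  # set => producer may overwrite the slot
    writeable.set()                # slot starts empty
    slot = [None]
    consumed = []

    def producer():
        for v in values:
            writeable.wait()       # wait for the consumer's "signal back"
            writeable.clear()
            slot[0] = v
            readable.set()         # publish: slot is now READABLE

    def consumer():
        for _ in values:
            readable.wait()
            readable.clear()
            consumed.append(slot[0])
            writeable.set()        # signal back: slot is WRITEABLE again

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return consumed
```

Because each side strictly alternates wait/clear/set, the consumed order is deterministic.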
unknown
d7086
train
Note that if you actually try and instantiate B then you'll also get the error that B::B() is deleted: https://gcc.godbolt.org/z/jdKzv7zvd The reason for the difference is probably that when you declare the C constructor, the users of C (assuming the definition is in another translation unit) have no way of knowing that the constructor is in fact deleted. At best this would lead to some confusing linker error, at worst it'd just crash at runtime. In the case of B all users of B will immediately be able to tell that the default constructor is deleted so it does no harm (even if it makes no sense as warned by clang) to declare it as defaulted.
unknown
d7087
train
Your query will always return 'item7' because you are always starting after 2018-11-11 11:11:11, so it will ignore all the other items, go past the last 2018-11-11 11:11:11, and skip from there. You need to get the last item returned, keep a reference to it, and then in your startAfter use the document reference to start after it. Usually Flutter's startAfter requires a list in which to keep that reference. Look at Firebase query cursors.

A: Currently Flutter doesn't support startAfter(lastDocFetched) or startAt(anyDoc). This is required because, as you said, it starts at the matching string values, not at a particular document.
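The cursor idea, keeping a reference to the last document returned and resuming strictly after that document rather than merely after its timestamp, can be sketched with plain Python data (field names invented for the demo):

```python
def page_after(docs, last_doc, page_size):
    """Return the next page of docs ordered by ('date', 'id'), starting
    strictly after last_doc (or from the beginning if last_doc is None)."""
    ordered = sorted(docs, key=lambda d: (d["date"], d["id"]))
    if last_doc is None:
        return ordered[:page_size]
    # Resume after the exact document, not merely after its date value,
    # so rows sharing the same date are neither skipped nor repeated.
    idx = ordered.index(last_doc)
    return ordered[idx + 1: idx + 1 + page_size]
```

Note how duplicate dates are handled correctly precisely because the cursor is a document, not a field value.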
unknown
d7088
train
The C++ standard surely says nothing about usage of Windows API functions like UnmapViewOfFile or CloseHandle. RAII is a programming idiom; you can use it or not, and it is a lot older than C++11. One of the reasons RAII is recommended is that it makes life easier when working with exceptions. Destructors will always safely release any resources: mostly memory, but also handles. For memory you have classes in the standard library, like unique_ptr and shared_ptr, but also vector and lots of others. For handles like those from WinAPI, you must write your own, for example:

class handle_ptr {
public:
    handle_ptr() {
        // acquire handle
    }
    ~handle_ptr() {
        // release
    }
};

A: Cleanup is still necessary, but due to the possibility of exceptions the code should not do cleanup simply by executing cleanup operations at the end of a function. That end may never be reached! Instead,

    Do cleanup in destructors.

In C++11 it is particularly easy to do any kind of cleanup in a destructor without defining a custom class, since it is now much easier to define a scope guard class. Scope guards were invented by Petru Marginean, who with Andrei Alexandrescu published an article about them in DDJ. But that original C++03 implementation was pretty complex. In C++11, a bare-bones scope guard class:

class Scope_guard : public Non_copyable {
private:
    function<void()> f_;
public:
    void cancel() { f_ = []{}; }
    ~Scope_guard() { f_(); }
    Scope_guard(function<void()> f) : f_(move(f)) {}
};

where Non_copyable provides move assignment and move construction, as well as default construction, but makes copy assignment and copy construction private.

Now, right after successfully acquiring some resource, you can declare a Scope_guard object that will do guaranteed cleanup at the end of the scope, even in the face of exceptions or other early returns, like:

Scope_guard unmapping( [&](){ UnmapViewOfFile(lpMapView); } );

Addendum: I should also mention the standard library smart pointers shared_ptr and unique_ptr, which take care of pointer ownership, calling a deleter when the number of owners goes to 0. As the names imply, they implement respectively shared and unique ownership. Both of them can take a custom deleter as argument, but only shared_ptr supports calling the custom deleter with the original pointer value when the smart pointer is copied/moved to a base class pointer.

Also, I should mention the standard library container classes such as, in particular, vector, which provides a dynamically sized copyable array with automatic memory management, and string, which provides much the same for the particular case of an array of char used to represent a string. These classes free you from having to deal directly with new and delete, and get those details right.

So in summary:

* use standard library and/or 3rd party containers when you can,
* otherwise use standard library and/or 3rd party smart pointers,
* and if even that doesn't cut it for your cleanup needs, define custom classes that do cleanup in their destructors.

A: As @zero928 said in the comment, RAII is a way of thinking. There is no magic that cleans up instances for you. With RAII, you can use the object lifecycle of a wrapper to regulate the lifecycle of legacy types such as you describe. The shared_ptr<> template coupled with an explicit "free" function can be used as such a wrapper.

A: As far as I know, C++11 won't take care of cleanup unless you use elements which do. For example, you could put the cleaning code into the destructor of a class and create an instance of it via a smart pointer. Smart pointers delete their pointee when it is no longer used or shared. If you make a unique pointer and it goes out of scope, it automatically calls delete, hence your destructor is called and you don't need to delete/destroy/clean up yourself. See http://www.cplusplus.com/reference/memory/unique_ptr/

This is just what C++11 has new for automatic cleaning. Of course, a usual class instance going out of scope calls its destructor, too.

A: No! RAII is not about leaving cleanup aside, but about doing it automatically. The cleanup can be done in a destructor call. A pattern could be:

void f() {
    ResourceHandler handler(make_resource());
    ...
}

where the ResourceHandler is destructed (and does the cleanup) at the end of the scope or if an exception is thrown.

A: The WIN32 API is a C API; you still have to do your own cleanup. However, nothing stops you from writing C++ RAII wrappers for the WIN32 API.

Example without RAII:

void foo() {
    HANDLE h = CreateFile(_T("C:\\File.txt"), FILE_READ_DATA, FILE_SHARE_READ, NULL, OPEN_ALWAYS, 0, NULL);
    if (h != INVALID_HANDLE_VALUE) {
        CloseHandle(h);
    }
}

And with RAII:

class smart_handle {
public:
    explicit smart_handle(HANDLE h) : m_H(h) {}
    ~smart_handle() {
        if (m_H != INVALID_HANDLE_VALUE)
            CloseHandle(m_H);
    }
private:
    HANDLE m_H;
    // This is a basic example; it could be implemented much more elegantly.
    // (Maybe a template parameter for "valid" handle values, since sometimes
    // 0 or -1 / INVALID_HANDLE_VALUE is used; implement proper copying/moving,
    // etc., or use std::unique_ptr/std::shared_ptr with a custom deleter as
    // mentioned in the comments below.)
};

void foo() {
    smart_handle h(CreateFile(_T("C:\\File.txt"), FILE_READ_DATA, FILE_SHARE_READ, NULL, OPEN_ALWAYS, 0, NULL));
    // The destructor of the smart_handle class calls CloseHandle if the handle is valid.
}

RAII can be used in C++98 or C++11.

A: I really liked the explanation of RAII in The C++ Programming Language, Fourth Edition. Specifically, sections 3.2.1.2, 5.2 and 13.3 explain how it works for managing leaks in the general context, but also the role of RAII in properly structuring your code with exceptions.

The two main reasons for using RAII are:

* reducing the use of naked pointers that are prone to causing leaks,
* reducing leaks in the case of exception handling.

RAII works on the concept that each constructor should secure one and only one resource. Destructors are guaranteed to be called if a constructor completes successfully (i.e. in the case of stack unwinding due to an exception being thrown). Therefore, if you have three types of resources to acquire, you should have one class per type of resource (classes A, B, C) and a fourth aggregate type (class D) that acquires the other three resources (via A, B and C's constructors) in D's constructor initialization list. So if resource 1 (class A) succeeded in being acquired, but 2 (class B) failed and threw, resource 3 (class C) would not be acquired. Because resource 1 (class A)'s constructor had completed, its destructor is guaranteed to be called. However, none of the other destructors (B, C or D) will be called.

A: It does NOT clean up FILE*. If you open a file, you must close it. I think you may have misread the article slightly. For example:

class RAII {
private:
    char* SomeResource;
public:
    RAII() : SomeResource(new char[1024]) {} // allocates 1024 bytes
    ~RAII() { delete[] SomeResource; }       // cleans up the allocation

    RAII(const RAII& other) = delete;
    RAII(RAII&& other) = delete;
    RAII& operator=(RAII& other) = delete;
};

The reason it is an RAII class is that all resources are allocated in the constructor or allocator functions. The same resource is automatically cleaned up when the class is destroyed, because the destructor does that.

So, creating an instance:

void NewInstance() {
    RAII instance; // creates an instance of RAII which allocates 1024 bytes on the heap
} // instance is destroyed as soon as this function exits, and thus the
  // allocation is cleaned up automatically by the destructor

See the following also:

void Break_RAII_And_Leak() {
    RAII* instance = new RAII(); // breaks RAII: instance is leaked when this function exits
}

void Not_RAII_And_Safe() {
    RAII* instance = new RAII(); // fine..
    delete instance;             // fine..
    // However, you've done the deleting and cleaning up yourself / manually;
    // that defeats the purpose of RAII.
}

Now take for example the following class:

class RAII_WITH_EXCEPTIONS {
private:
    char* SomeResource;
public:
    RAII_WITH_EXCEPTIONS() : SomeResource(new char[1024]) {} // allocates 1024 bytes
    void ThrowException() { throw std::runtime_error("Error."); }
    ~RAII_WITH_EXCEPTIONS() { delete[] SomeResource; }       // cleans up the allocation

    RAII_WITH_EXCEPTIONS(const RAII_WITH_EXCEPTIONS& other) = delete;
    RAII_WITH_EXCEPTIONS(RAII_WITH_EXCEPTIONS&& other) = delete;
    RAII_WITH_EXCEPTIONS& operator=(RAII_WITH_EXCEPTIONS& other) = delete;
};

and the following functions:

void RAII_Handle_Exception() {
    RAII_WITH_EXCEPTIONS RAII; // create an instance
    RAII.ThrowException();     // throw an exception
    // Even though an exception was thrown above, RAII's destructor is
    // still called and the allocation is automatically cleaned up.
}

void RAII_Leak() {
    RAII_WITH_EXCEPTIONS* RAII = new RAII_WITH_EXCEPTIONS();
    RAII->ThrowException();
    // Bad: not only is the destructor not called, the RAII instance is also leaked.
}

void RAII_Leak_Manually() {
    RAII_WITH_EXCEPTIONS* RAII = new RAII_WITH_EXCEPTIONS();
    RAII->ThrowException();
    delete RAII;
    // Bad: you manually created a new instance, ThrowException throws, and
    // delete is never reached. If it were reached it would be safe, but
    // you've still allocated manually and defeated the purpose of RAII.
}

fstream always did this. When you create an fstream instance on the stack, it opens a file. When the calling function exits, the fstream is automatically closed. The same is NOT true for FILE*, because FILE* is NOT a class and does NOT have a destructor. Thus you must close the FILE* yourself!

EDIT: As pointed out in the comments below, there was a fundamental problem with the code above. It was missing a copy constructor, a move constructor and an assignment operator. Without these, trying to copy the class would create a shallow copy of its inner resource (the pointer). When the class was destructed, it would have called delete on the pointer twice! The code has been edited to disallow copying and moving. For a class to conform to the RAII concept, it must follow the rule of three: What is the copy-and-swap idiom? If you do not want to add copying or moving, you can simply use delete as shown above or make the respective functions private.
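For comparison, Python expresses the same acquire-on-entry / release-on-exit discipline with context managers: the resource is released even when the body raises, mirroring a C++ destructor running during stack unwinding. A minimal sketch:

```python
class ManagedResource:
    """Acquire on __enter__, release on __exit__. The release runs even
    if the with-body raises, analogous to a destructor during unwinding."""
    def __init__(self, log):
        self.log = log

    def __enter__(self):
        self.log.append("acquired")
        return self

    def __exit__(self, exc_type, exc, tb):
        self.log.append("released")
        return False  # do not swallow exceptions
```

Usage: `with ManagedResource(log): ...` plays the role of declaring the RAII object on the stack.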
unknown
d7089
train
Here is the article from the PHP manual that explains session security in PHP: link

Probably the most effective way to protect your sessions is to enable SSL on your site and force storing of the session id in cookies. The cookies will then be encrypted in transit to your site, and that should guarantee enough protection.

A: You can use HTTPS as RaYell said, but if you can't afford a certificate, there are some ways to secure a session even with HTTP:

* Store the user-agent in the session when you create the session. Check the user-agent on every request. If the user-agent changes, delete the session.
* Same as above, but with the IP address. The annoying thing there is that a lot of ISPs provide dynamic IPs, and the session can be deleted illegitimately.
* Set a low session timeout. This will not prevent session hijacking, but it reduces the risk. Beware, this can annoy users though.
* Set a low session lifetime (1 day). This will force users to reauthenticate after 1 day, so even if a session is hijacked, it won't be hijacked for more than one day.

Remember these suggestions will not prevent session hijacking. They will dramatically reduce the risk, but there will always be some risk, unless you use HTTPS.

A: The session_id is stored in a cookie on the user's system. Not sure what you mean by protecting it.
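The "store the user-agent in the session" advice amounts to recording a fingerprint at login and comparing it on every request. A framework-neutral sketch (function and key names are my own, not from any particular framework):

```python
import hashlib

def fingerprint(user_agent):
    """Hash the user-agent so the raw header string is not stored."""
    return hashlib.sha256(user_agent.encode("utf-8")).hexdigest()

def session_is_valid(session, user_agent):
    """Reject the session if the request's user-agent no longer matches
    the fingerprint recorded when the session was created."""
    return session.get("ua_fp") == fingerprint(user_agent)
```

On a mismatch the application would destroy the session and force reauthentication, per the advice above.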
unknown
d7090
train
Its validity is limited to the enclosing block. E.g., if you define a namespace alias as below, the alias abc is invalid outside the {...} block:

{
    namespace abc = xyz;
    abc::test t;  // valid
}
abc::test t;  // invalid

A: The scope is the declarative region in which the alias is defined.

A: It would have the scope of the block in which it was defined, likely the same as function scope unless you declare the alias inside a block within a function.

A: I'm fairly certain that a namespace alias only has scope within the block it's created in, like most other sorts of identifiers. I can't check for sure at the moment, but this page doesn't seem to go against it.

A: As far as I know, it's valid in the scope in which it's declared. So if you create the alias in a method, it's valid in that method, but not in another.

A: Take a look at http://en.wikibooks.org/wiki/C++_Programming/Scope/Namespaces

A: It is valid for the duration of the scope in which it is introduced. Take a look at http://en.cppreference.com/w/cpp/language/namespace_alias; I trust the explanation of cppreference, it's much more standard.
unknown
d7091
train
First of all, change the driver to the newer version: SQLiteDriver is for the old Finisar driver, and it has some quirks with the newer version of SQLite. Also, your database path needs to be changed; you can't reference a path with spaces without putting quotes around it. If you put your database in the same directory as your executable, you can just use the db name. This configuration works great for me:

<property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
<property name="connection.driver_class">NHibernate.Driver.SQLite20Driver</property>
<property name="connection.connection_string">Data Source=core.db3;Version=3</property>
<property name="dialect">NHibernate.Dialect.SQLiteDialect</property>
<property name="query.substitutions">true=1;false=0</property>
<property name='proxyfactory.factory_class'>NHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle</property>
<property name="show_sql">true</property>
unknown
d7092
train
Yes, it should. Try running ifconfig from the console. Hope this helps, Jason.
unknown
d7093
train
In my opinion this is a bug. The ListBase.mouseOverHandler now sets a variable called lastHighlightItemRendererAtIndices when it dispatches an ITEM_ROLL_OVER event, which is then used (together with lastHighlightItemIndices) when dispatching an ITEM_ROLL_OUT event in ListBase.clearHighlight (called by the mouseOutHandler).

The problem is that when you mouse from row to row, the mouseOverHandler is called first, setting the lastHighlight... variables, and then when the mouseOutHandler gets called subsequently, it uses the lastHighlight... values that were just set, with the result that you get consecutive 'roll over' and 'roll out' events for the same renderer. Frankly, I don't know why ListBase.clearHighlight doesn't just use the passed-in renderer when dispatching the ITEM_ROLL_OUT event (which is how it used to work in SDK 2), as this is the actual renderer being 'rolled out of'.

A: Are they coming from the same object? If not, it is likely that you will get an itemRollOut from the "item" you just left and an itemRollOver from the new one you entered; depending on their spacing and such, these may fire very close to each other.

A: Make sure you are setting super.data in your item renderer if you are overriding set data(). ListBase listens for MOUSE_OVER and then figures out the item underneath it based on the coordinates of the mouse and the position of the item renderer. You could check ListEvent.itemRenderer to see which renderer's roll over and roll out are firing and in what order. Worst case, you could listen for rollOver and rollOut inside your item renderer.

A: Had the same problem. super.data was already being set, and the item is the same for the rollOut and rollOver events. I ended up opting for anirudhsasikumar's worst-case scenario and listened for rollOver and rollOut inside the item renderer. Seems to work fine.

A: I was having this same issue. I ended up subclassing the mx.controls.List class and overriding the clearHighlight function. As far as I can tell, the lastHighlightItemIndices variable is only ever read in that function. So doing something like the following fixed this issue:

import mx.core.mx_internal;
use namespace mx_internal;

public class List extends mx.controls.List
{
    public function List()
    {
        super();
    }

    override mx_internal function clearHighlight( item:IListItemRenderer ):void
    {
        var uid:String = itemToUID( item.data );
        drawItem( UIDToItemRenderer( uid ), isItemSelected( item.data ), false, uid == caretUID );

        var pt:Point = itemRendererToIndices( item );
        if( pt )
        {
            var listEvent:ListEvent = new ListEvent( ListEvent.ITEM_ROLL_OUT );
            listEvent.columnIndex = item.x;
            listEvent.rowIndex = item.y;
            listEvent.itemRenderer = item;
            dispatchEvent( listEvent );
        }
    }
}

Then just use this List class instead of the Adobe one and you'll have the behavior you expect. I tested this against Flex SDK 3.2 and it works.

<mx:Canvas xmlns:mx="http://www.adobe.com/2006/mxml" xmlns:controls="com.example.controls.*">
    [ other code ... ]
    <controls:List itemRollOver="onItemRollOver( event )" itemRollOut="onItemRollOut( event )" />
</mx:Canvas>

Thanks to Gino Basso for the idea in the post above. Hope that helps.

A: Thanks for the solution. That really solved the problem! Small correction, though:

listEvent.columnIndex = item.x;
listEvent.rowIndex = item.y;

should be

listEvent.columnIndex = pt.x;
listEvent.rowIndex = pt.y;

item.x and item.y hold the coordinates of the renderer in pixels.
unknown
d7094
train
Command Line

git diff --name-only origin/master

will list the files that you have changed but not pushed.

git diff origin/master directory_foo/file_bar.m

will list the line-by-line diff of all the unpushed changes to directory_foo/file_bar.m.

GUI Tool

If you're looking for GUI tools for a Git workflow, I use Xcode to commit locally and SourceTree to push and pull. Before Xcode and SourceTree I was a diehard command-line SCM person. But I really like committing locally with Xcode; it streamlined a multi-step process of reviewing the change diffs and committing very nicely. The more I've used SourceTree the more I like it and the less I use the command line. I've always liked Atlassian's products, and the $10 license for indies and small startups rocks. Now I just love them to death for buying SourceTree and making it free on the Mac App Store. I think SourceTree does exactly what you want to do. On the toolbar the red 3 shows I have 3 commits to push. You just select master and origin/master to get an aggregate of what will be pushed. The bottom-left pane shows the files changed and the right shows the aggregated diff for that file.

FYI: two awesome presentations on Git (absolutely must watch):

Git For Ages 4 And Up http://www.youtube.com/watch?v=1ffBJ4sVUb4
Advanced Git http://vimeo.com/49444883

A: You can tell the changes of your local repository vs. your remote repository from the Terminal, in your local repository directory, by using the command

git diff [local] [remote]

for example:

git diff master origin/master

A: Use git show to retrieve the commits you made. Get the SHA identification of your commits and:

git diff 0da94be..59ff30c

Another option is:

git diff origin/master

The command above shows the diff of the files to be pushed. Source: https://stackoverflow.com/a/3637039/2387977

A: I tried with Eclipse and found it very straightforward.

Right-click the project, 'Compare With', 'Branch, Tag or Reference'. Under 'Remote Tracking', select 'origin/master'.

I got the full list of files modified.
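A small Python wrapper around the first command makes the "what will be pushed" check scriptable. The parsing helper is separated out so it can be tested without a repository; the subprocess call assumes it runs inside one (function names are mine):

```python
import subprocess

def changed_files(diff_output):
    """Split `git diff --name-only` output into a list of paths,
    dropping blank lines."""
    return [line for line in diff_output.splitlines() if line.strip()]

def unpushed_files(remote_ref="origin/master"):
    """Files changed locally but not yet pushed to remote_ref.
    Must be called from inside a git working tree."""
    out = subprocess.check_output(
        ["git", "diff", "--name-only", remote_ref]).decode("utf-8")
    return changed_files(out)
```

For per-file diffs you would add the path as a final argument, mirroring the second command above.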
unknown
d7095
train
The black pixels are just because of padding. This is a simple operation that allows you to have network inputs of the same size (i.e. you have batches containing images of size 223x221 because smaller images are padded with black pixels).

An alternative to padding, which removes the need to add black pixels to the image, is to preprocess the images by:

* removing the padding via a cropping operation
* resizing the cropped images to the same size (e.g. 223x221)

You can do all of these operations in plain Python, thanks to TensorFlow's map function. First, define your Python function:

def py_preprocess_image(numpy_image):
    input_size = numpy_image.shape  # this is (223, 221)
    image_proc = crop_by_removing_padding(numpy_image)
    image_proc = resize(image_proc, input_size)
    return image_proc

Then, given your TensorFlow dataset train_data, map the above Python function over each input:

# train_data is your tensorflow dataset
train_data = train_data.map(
    lambda x: tf.py_func(py_preprocess_image, inp=[x], Tout=[tf.float32]),
    num_parallel_calls=num_threads
)

Now you only need to define crop_by_removing_padding and resize, which operate on ordinary numpy arrays and can thus be written in pure Python code. For example:

def crop_by_removing_padding(img):
    xmax, ymax = np.max(np.argwhere(img), axis=0)
    img_crop = img[:xmax + 1, :ymax + 1]
    return img_crop

def resize(img, new_size):
    img_rs = cv2.resize(img, (new_size[1], new_size[0]), interpolation=cv2.INTER_CUBIC)
    return img_rs
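The crop helper above can be exercised on its own with numpy to check the bounding-box logic. Note it assumes the padding sits at the bottom and right of the image (same assumption as the answer):

```python
import numpy as np

def crop_by_removing_padding(img):
    """Crop to the bounding box of non-zero pixels, dropping the black
    padding rows/columns at the bottom and right of the image."""
    xmax, ymax = np.max(np.argwhere(img), axis=0)
    return img[:xmax + 1, :ymax + 1]
```

An all-black image would make np.argwhere return an empty array and np.max raise, so a real pipeline would guard that case.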
unknown
d7096
train
You can simply use the String.prototype.repeat method:

" ".repeat(3);

A polyfill for older browsers.

A: Here is a convertToSpace function:

var convertToSpace = function (spaces) {
    var string = "";
    for (var i = 0; i < spaces; i++) {
        string += " ";
    }
    return string;
};

A: A concise option:

function convertToSpace(n) {
    return new Array(n + 1).join(' ');
}

A: Using Array#fill and Array#reduce. The fill() method fills all the elements of an array from a start index to an end index with a static value. The reduce() method applies a function against an accumulator and each value of the array (from left to right) to reduce it to a single value.

var NumberOfSpace = 3;

function convertToSpace(param) {
    return new Array(param).fill(' ').reduce(function(a, b) {
        return a + b;
    });
}

var space = convertToSpace(NumberOfSpace);
console.log(space);
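For comparison, the same repeat-a-character operation in Python is a one-liner via string multiplication (an illustrative analogue of the JS helpers above):

```python
def convert_to_space(n):
    """Return a string of n spaces; the Python analogue of ' '.repeat(n)."""
    return " " * n
```

As with `String.prototype.repeat`, a count of zero yields the empty string.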
unknown
d7097
train
Use this code:

<script type="text/javascript">
    var song = new Audio('http://live1.goear.com/listen/d941195f4a5f477381d8a95ba666a0cb/52eac666/sst2/mp3files/10102006/450929654ac4765a83324119603d02d6.mp3');
    song.play();
</script>

But I think your link does not contain any MP3 now.
unknown
d7098
train
If you really care about performance, try using neither of them, or at least minimise their use. These methods loop through the list of GameObjects and return the matching object, and because of the looping they are pretty heavy on performance. So if you do use them, never call them in the Update() method; call them in Start() or any method that doesn't get called often, and store the returned value.

To be honest, I don't know which one is faster. If I had to guess, I would say it is GameObject.Find(), since it only checks the name, whereas FindObjectOfType() checks the components. Even so, I would consider using FindObjectOfType(), because Find() takes a string and you might want to avoid that because of typos (if you are not storing it inside a single class and just referencing the variable).
unknown
d7099
train
Why not cast the number columns to varchar columns? If you're using SQL Server, you can do that like so:

CONVERT(VARCHAR, secorg.org_id) = CONVERT(VARCHAR, progmap.org_id)

You'll have to do an outer join for instances when the column that holds both 'ALL' and numbers is 'ALL', as it won't be able to inner join to the other table. For the quick fix based on your code above, you can just change the second WHEN clause to look like so (again assuming you're using MS SQL Server):

WHEN CONVERT(VARCHAR, secorg.org_id) = CONVERT(VARCHAR, progmap.org_id) THEN secorg.org_name

Try this as your query:

SELECT DISTINCT
    program_id,
    prog_name,
    CASE Eitc_Active_Switch WHEN '1' THEN 'ON' ELSE 'OFF' END AS Prog_Status,
    progmap.client_id,
    ISNULL(secorg.org_name, 'ALL') AS org_name,
    CASE prog.has_post_calc_screen WHEN 'True' THEN 'True' ELSE 'False' END AS Referal_ID,
    CASE
        WHEN progmap.program_ID IN ('AMC1931','AMCABD','AMCMNMI','AMC') AND sec.calwinexists_ind = '1' THEN 'Yes'
        WHEN progmap.program_ID IN ('AMC1931','AMCABD','AMCMNMI','AMC') AND sec.calwinexists_ind = '0' THEN 'No'
        WHEN progmap.program_ID NOT IN ('AMC1931','AMCABD','AMCMNMI','AMC') THEN 'N/A'
    END AS calwin_interface,
    sec.Client_name
FROM ref_programs prog (nolock)
LEFT OUTER JOIN ref_county_program_map progmap (nolock)
    ON progmap.program_id = prog.prog_id
    AND progmap.CLIENT_ID = prog.CLIENT_ID
INNER JOIN sec_clients sec (nolock)
    ON sec.client_id = progmap.Client_id
LEFT OUTER JOIN sec_organization secorg (nolock)
    ON CONVERT(VARCHAR, secorg.org_id) = CONVERT(VARCHAR, progmap.org_id)

A: The case statement is fine; it's the field alias that's bad. It should be END AS org_name. A multipart alias like secorg.org_name won't work.
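The cast-then-join idea targets SQL Server's CONVERT(VARCHAR, ...), but it can be demonstrated end to end with an in-memory SQLite database, which spells the cast CAST(... AS TEXT). Table and column contents here are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE org (org_id INTEGER, org_name TEXT)")
cur.execute("CREATE TABLE progmap (org_id TEXT)")  # mixed: numbers or 'ALL'
cur.executemany("INSERT INTO org VALUES (?, ?)",
                [(1, "Org One"), (2, "Org Two")])
cur.executemany("INSERT INTO progmap VALUES (?)", [("1",), ("ALL",)])

# Cast both sides to text so numeric and string ids compare cleanly;
# LEFT JOIN keeps the 'ALL' row, and COALESCE maps its NULL name to 'ALL'
# (SQLite's equivalent of the ISNULL(...) used above).
rows = cur.execute("""
    SELECT COALESCE(org.org_name, 'ALL')
    FROM progmap
    LEFT JOIN org
      ON CAST(org.org_id AS TEXT) = CAST(progmap.org_id AS TEXT)
    ORDER BY 1
""").fetchall()
```

An inner join here would silently drop the 'ALL' row, which is exactly the pitfall the answer warns about.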
unknown
d7100
train
You need the exact same name in the Unity editor for the script as in the code editor (I suppose Visual Studio). The name is case-sensitive and needs to be spelled correctly in both places. Check the name and the location of the file, and place them so they are easily found by you. For example:

Name in Unity: WaveSystem_LoL

// Name in C# code:
public class WaveSystem_LoL : MonoBehaviour

If they do not match, the error will appear, so double and even triple check!

A: It seems like your file name is different from the class name. The file name of your .cs file should be wavesystem.cs
unknown