d6101
train
I don't know PHP, but if you can split the (string?) "5546.263" into 55 and 46.263 (degrees and decimal minutes), you can convert to decimal degrees with 55 + (46.263 / 60). In other words, it is degrees + (minutes / 60).
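The arithmetic itself is language-agnostic; here is a quick sketch in Python (the helper name and the two-digit-degrees assumption are mine, not from the question):

```python
def ddm_to_decimal(value):
    """Convert a degrees-and-decimal-minutes string like '5546.263'
    (55 degrees, 46.263 minutes) to decimal degrees."""
    degrees = int(value[:2])    # first two characters are the degrees
    minutes = float(value[2:])  # the rest is decimal minutes
    return degrees + minutes / 60

print(ddm_to_decimal("5546.263"))  # 55.77105
```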
unknown
d6102
train
The problem is instead of this: $(document).ready({ You need this: $(document).ready(function () { I am sure you knew that, but it's easy to overlook since the error is shown to be in the following line. Another issue: I think you will also run into problems here: $(this).css ( marginRight: '20px' ); Per the jQuery docs, you should use this: $(this).css ('margin-right', '20px'); An alternative: Here's one more thing, just to give a complete answer. As is noted in the comments, you really don't need jQuery at all for this, if you don't want to use it. Try this: #menu-nav a:hover { margin-right: 20px; } You can add whatever styles you want like that. A: In your example $(document).ready({...}); should be $(document).ready(function(){ //... }); And also change $(this).css ( marginRight: '20px' ); to $(this).css('marginRight','20px'); or $(this).css({'marginRight':'20px'}); in both lines A: You don't pass an object, so you cannot use property: value, you can code: $(this).css({ marginRight: '20px'}); or: $(this).css( 'marginRight', '20px' ); A: I use this convention to avoid getting lost and confused in a myriad of brackets and parenthesis: $(document).ready(DocReady); function DocReady() { // More code here } An advantage is that you can also call the DocReady function from a button click (maybe to test it).
unknown
d6103
train
The only thing CouchDB queries can give you is the key -> value mapping. You can search the ordered dictionary, but you cannot search multi-dimensional data, with regular expressions, or even for a key that contains a keyword as a substring (e.g. you have the data "Mr. John Smith" and you want it to be found by a query with the keyword "John"). ElasticSearch fills that gap and provides additional indexing of the data. It is mainly useful for full-text indexing, but it also supports geospatial data.

A: In terms of disk usage:
* *https://github.com/logstash/logstash/wiki/Elasticsearch-Storage-Optimization
*http://till.klampaeckel.de/blog/archives/95-Operating-CouchDB.html

As Marcin pointed out, Elasticsearch excels at full-text search and the flexibility of its analyzers and search functionality.
unknown
d6104
train
The problem is that you create a new Group for each alien. You only have to create the Group once and add the Alien sprites to this one Group:
* *Create the alien1 Group in the constructor (__init__) of the class.
*Add the aliens in the spawn method.
*Draw all the aliens in the Group using your "draw" method. (The name of your method may be different - I don't know.)

class ...
    def __init__(self):
        # [...]
        self.alien1 = pygame.sprite.Group()  # create the group in the constructor

    def spawn(self):
        self.count += 2
        rem = self.count % 33
        if rem == 0:
            alien1_sprite = Alien1((rand.randrange(38, 462), 50))
            self.alien1.add(alien1_sprite)

    def draw(self):  # your draw or render method
        # [...]
        self.alien1.draw(screen)  # draw all the aliens

Read the documentation of pygame.sprite.Group. The Group manages the Sprites it contains; the Group is the "list" that stores the Sprites.
unknown
d6105
train
Identity columns were introduced in Oracle 12c and are not available in 11g, so for an auto-increment ID prior to 12c you can use the approach below. Developers who are used to AutoNumber columns in MS Access or Identity columns in SQL Server often complain when they have to manually populate primary key columns using sequences in Oracle. This type of functionality is easily implemented in Oracle using triggers.

Create a table with a suitable primary key column and a sequence to support it:

CREATE TABLE departments (
  ID          NUMBER(10)   NOT NULL,
  DESCRIPTION VARCHAR2(50) NOT NULL);

ALTER TABLE departments ADD (
  CONSTRAINT dept_pk PRIMARY KEY (ID));

CREATE SEQUENCE dept_seq;

Create a trigger to populate the ID column if it's not specified in the insert:

CREATE OR REPLACE TRIGGER dept_bir
BEFORE INSERT ON departments
FOR EACH ROW
WHEN (new.id IS NULL)
BEGIN
  SELECT dept_seq.NEXTVAL
  INTO   :new.id
  FROM   dual;
END;
/

A: In Oracle versions before 12c, you should use a SEQUENCE and a TRIGGER to handle your auto-number ID.

Table:

CREATE TABLE regions (
  region_id   NUMBER(10)   NOT NULL,
  region_name VARCHAR2(50) NOT NULL);

ALTER TABLE regions ADD (
  CONSTRAINT regions_pk PRIMARY KEY (region_id));

Sequence:

CREATE SEQUENCE regions_seq;

Trigger:

CREATE OR REPLACE TRIGGER regions_id_generate
BEFORE INSERT ON regions
FOR EACH ROW
WHEN (new.region_id IS NULL)
BEGIN
  SELECT regions_seq.NEXTVAL
  INTO   :new.region_id
  FROM   dual;
END;
/

When you do an insert, just specify a NULL value for your region_id column, and Oracle will assign it the next integer in the sequence.

A: I'm sorry, but maybe the server/database you are trying to connect to is 12c and your client doesn't support the feature. (I believe the IDENTITY definition was introduced in 12c.) Maybe try to use a SEQUENCE instead. (A sequence is an object that is not bound to a specific table and can be used anywhere to get new unique numbers. Because of this you should create a trigger to set the value of your column.)
unknown
d6106
train
Box<dyn MemorizedOutput> implements Any, so it is covered by the blanket implementation of MemorizedOutput. As per https://doc.rust-lang.org/reference/expressions/method-call-expr.html, Rust will prefer methods implemented on Box<dyn MemorizedOutput> over those of the dereferenced type dyn MemorizedOutput. So a.as_any() is actually <Box<dyn MemorizedOutput> as MemorizedOutput>::as_any(&a), which obviously cannot be downcast to i32.
unknown
d6107
train
input elements do not have .innerHTML. Use .value instead: var name = document.getElementById("name"), full_name = name.value, full_name_split = full_name.split(" ")[0];
unknown
d6108
train
While I don't know of any step-by-step tutorials, these links may help:
* *Using the Contacts API
*ContactManager - Contact Manager
unknown
d6109
train
I normally use Object.create() for a shallow copy, but for a deep copy (nested object) I use JSON.parse(JSON.stringify(nestedObject)):

const obj = {
  foo: {
    a: { type: 'foo', baz: 1 },
    b: { type: 'bar', baz: 2 },
    c: { type: 'foo', baz: 3 }
  }
}

var temp = JSON.parse(JSON.stringify(obj))
for (var i in temp.foo) {
  if (temp.foo[i].type == "foo") {
    temp.foo[i].baz = 5;
  }
}
console.log(temp);
console.log(obj);

A: Object.assign is perfect for this. From the docs:

The Object.assign() method is used to copy the values of all enumerable own properties from one or more source objects to a target object. It will return the target object.

And you can use it like this (note that Object.assign only merges at the top level, so nested objects must be rebuilt explicitly):

Object.assign({}, obj, {
  foo: {
    a: {
      type: obj.foo.a.type,
      baz: obj.foo.a.type === 'foo' ? 5 : obj.foo.a.baz
    }
  }
});

A: You can use Object.entries(), Array.prototype.map(), spread elements, computed properties and Object.assign():

const obj = {
  foo: {
    a: { type: 'foo', baz: 1 },
    b: { type: 'bar', baz: 2 },
    c: { type: 'foo', baz: 3 }
  }
}

let key = Object.keys(obj).pop();
let res = {[key]: Object.assign({}, ...Object.entries(obj[key])
  .map(([prop, {type, baz}]) => ({[prop]: {type, baz: type === "foo" ? 5 : baz}})
))};
console.log(res, obj);
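The same shallow-versus-deep distinction exists outside JavaScript; as an illustration (not part of the original answers), the JSON round-trip trick corresponds to copy.deepcopy in Python:

```python
import copy

obj = {'foo': {'a': {'type': 'foo', 'baz': 1},
               'b': {'type': 'bar', 'baz': 2},
               'c': {'type': 'foo', 'baz': 3}}}

temp = copy.deepcopy(obj)  # deep copy: nested dicts are duplicated too
for entry in temp['foo'].values():
    if entry['type'] == 'foo':
        entry['baz'] = 5

print(temp['foo']['a']['baz'])  # 5
print(obj['foo']['a']['baz'])   # 1 (the original is untouched)
```

A shallow copy (dict.copy() or copy.copy) would have shared the nested dicts, so mutating temp would also have changed obj.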
unknown
d6110
train
.NET has a number of set operations that work on enumerables, so you could take the set intersection to find members in both lists. Use Any() to find out if the resulting sequence has any entries, e.g.:

if (list1.Intersect(list2).Any())

A: You can always use LINQ:

if (list1.Intersect(list2).Count() > 0) ...

A: If you're able to use LINQ, then:

if (list1.Intersect(list2).Count() > 0) { ...collision... }
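The same intersect-then-check idea, sketched in Python with sets (the list values are illustrative):

```python
list1 = [1, 2, 3, 4]
list2 = [9, 8, 3]

# LINQ's list1.Intersect(list2).Any() corresponds to a non-empty set intersection:
collision = bool(set(list1) & set(list2))
print(collision)  # True: 3 is in both lists
```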
unknown
d6111
train
You can determine PHP version and extension dependencies with PEAR's PHP_CompatInfo package. As for PEAR packages the app might be using, you can see what's installed using pear list -a I don't know of a tool that will tell you which external script dependencies are in use other than grep.
unknown
d6112
train
I have approached a similar problem with a different pattern. Please refer to https://docs.spring.io/spring-batch/docs/current/reference/html/scalability.html#remoteChunking

Here you need to break the job into two parts:
* *Master
The master picks the records to be processed from the DB and sends each chunk as a message to the task-queue. It then waits for acknowledgements on a separate ack-queue; once it gets all acknowledgements it moves to the next step.
* *Slave
The slave receives a message, processes it, and sends an acknowledgement to the ack-queue.
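A minimal in-process sketch of that master/slave handshake, using Python's queue module (the chunking and acknowledgement details are simplified assumptions, not Spring Batch remote-chunking APIs):

```python
import queue
import threading

task_queue = queue.Queue()  # master -> slave: chunks of records
ack_queue = queue.Queue()   # slave -> master: acknowledgements

def slave():
    while True:
        chunk = task_queue.get()
        if chunk is None:  # sentinel: no more work
            break
        processed = [record * 2 for record in chunk]  # "process" the chunk
        ack_queue.put(len(processed))                 # acknowledge it

threading.Thread(target=slave, daemon=True).start()

chunks = [[1, 2, 3], [4, 5], [6]]  # records "picked from the DB"
for chunk in chunks:
    task_queue.put(chunk)
task_queue.put(None)

# The master waits for one acknowledgement per chunk before moving on
acked_records = sum(ack_queue.get() for _ in chunks)
print(acked_records)  # 6
```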
unknown
d6113
train
* *No, it will not.
*Only if there's enough memory pressure.

However, if your application is doing allocations, it's pretty unlikely the string will survive for long. And if there's not enough memory pressure, the GC has little reason to release the memory. Do make sure the string is not referenced anymore, though: count the references, check that it's not interned.

You can force a garbage collection, but it's a great way to damage GC performance. It might be the best solution for your case if your application does the odd "allocate a huge string and then forget it" operation, and you care about memory available to the rest of the system (there's no benefit to your application). There's no point in trying that if you're doing a lot of allocations anyway, though - in that case, look for memory leaks on your side. WinDbg can help.

A: 1. Will the garbage collector get invoked if Gen 0 and Gen 1 are not under pressure?

If there is enough space in Generation 0 and Generation 1, the garbage collector will not be invoked: enough space means new objects can still be created, so there is no reason to run a garbage collection.

2. Will it go to Gen 2 when it has released Gen 0 or Gen 1 memory?

If the object survives a garbage collection in Generation 0 and then in Generation 1, the object will be moved to Generation 2.

3. If so, what's the better way to handle this?

* *To destroy an object on the heap, you should just delete the references to this string. Your string variable which holds the XML values should not be static if it is to be garbage collected. (Read more about roots.)
*Try to compact the Large Object Heap with GCSettings.LargeObjectHeapCompactionMode:

GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();

This will compact the Large Object Heap when the next full garbage collection is executed. With the GC.Collect() call after the setting has been applied, the heap is compacted immediately.
* *Try to use WeakReference:

MyLargeClass MyLargeObject = new MyLargeClass();
WeakReference w = new WeakReference(MyLargeObject);

// later, re-acquire the object (recreate it if it was collected):
if ((w == null) || ((MyLargeObject = w.Target as MyLargeClass) == null))
{
    MyLargeObject = new MyLargeClass();
    w = new WeakReference(MyLargeObject);
}

This article about Garbage Collection is very useful to you, and it is written in plain English.
unknown
d6114
train
One possible way is to shred the XML on the p1:AddOnFeatureEnum elements (see the nodes() method for this part), then use the value() method on the shredded elements to extract the varchar(100) values:

;WITH XMLNAMESPACES('http://www.alarm.com/WebServices' as p1)
INSERT INTO #rewardscusts
SELECT enum.value('.','varchar(100)')
FROM @AddOnFeaturesXML.nodes('/p1:Addons/p1:AddOnFeatureEnum') as T(enum)

Here is a full working demo:

DECLARE @AddOnFeatures AS VARCHAR(1500)
DECLARE @AddOnFeaturesXML AS XML

SET @AddOnFeatures = '<p1:Addons xmlns:p1="http://www.alarm.com/WebServices"><p1:AddOnFeatureEnum>WeatherToPanel</p1:AddOnFeatureEnum><p1:AddOnFeatureEnum>EnterpriseNotices</p1:AddOnFeatureEnum></p1:Addons>'
SET @AddOnFeaturesXML = CAST(@AddOnFeatures AS XML)

DECLARE @rewardscusts TABLE(AddOnFeature VARCHAR(100) primary key)

;WITH XMLNAMESPACES('http://www.alarm.com/WebServices' as p1)
INSERT INTO @rewardscusts
SELECT x.value('.','varchar(100)')
FROM @AddOnFeaturesXML.nodes('/p1:Addons/p1:AddOnFeatureEnum') as T(x)

SELECT * FROM @rewardscusts

Output:

| AddOnFeature      |
|-------------------|
| EnterpriseNotices |
| WeatherToPanel    |
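For comparison, the same namespaced shredding can be reproduced outside SQL Server; a purely illustrative Python sketch with xml.etree.ElementTree extracts the same two values:

```python
import xml.etree.ElementTree as ET

xml_text = ('<p1:Addons xmlns:p1="http://www.alarm.com/WebServices">'
            '<p1:AddOnFeatureEnum>WeatherToPanel</p1:AddOnFeatureEnum>'
            '<p1:AddOnFeatureEnum>EnterpriseNotices</p1:AddOnFeatureEnum>'
            '</p1:Addons>')

ns = {'p1': 'http://www.alarm.com/WebServices'}
root = ET.fromstring(xml_text)

# Equivalent of .nodes('/p1:Addons/p1:AddOnFeatureEnum') plus .value('.', ...):
features = [node.text for node in root.findall('p1:AddOnFeatureEnum', ns)]
print(features)  # ['WeatherToPanel', 'EnterpriseNotices']
```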
unknown
d6115
train
The easiest and least destructive change would be to send it as response header. In the servlet you can use HttpServletResponse#setHeader() to set a response header: response.setHeader("X-Metadata", metadata); // ... (using a header name prefixed with X- is recommended for custom headers) In JS you can use XMLHttpRequest#getResponseHeader() to get a response header: var metadata = xhr.getResponseHeader('X-Metadata'); // ... You can even set some JSON string in there so that (de)serialization is easy.
unknown
d6116
train
I had a similar problem and solved it the following way:

Function prototype declarations and global variable declarations should be in the test.h file, and you cannot initialize global variables in the header file. Function definitions and use of the global variables go in test.c. If you initialize global variables in the header, you will get the following error:

multiple definition of `_test'| obj\Debug\main.o:path\test.c|1|first defined here|

Just declarations of global variables in the header file - no initialization - should work. Hope it helps. Cheers

A: Including the implementation file (test.c) causes it to be prepended to your main.c and compiled there, and then again separately. So, the function test has two definitions - one in the object code of main.c and one in that of test.c - which gives you an ODR violation. You need to create a header file containing the declaration of test and include it in main.c:

/* test.h */
#ifndef TEST_H
#define TEST_H
void test(); /* declaration */
#endif /* TEST_H */

A: If you have added test.c to your Code::Blocks project, the definition will be seen twice - once via the #include and once by the linker. You need to:
* *remove the #include "test.c"
*create a file test.h which contains the declaration: void test();
*include the file test.h in main.c

A: If you're using Visual Studio you could also put "#pragma once" at the top of the header file to achieve the same thing as the "#ifndef ..." wrapping. Some other compilers probably support it as well. However, don't do this :D Stick with the #ifndef wrapping to achieve cross-compiler compatibility. I just wanted to let you know that you could also use #pragma once, since you'll probably meet this statement quite a bit when reading other people's code. Good luck with it.

A: The underscore is put there by the compiler and used by the linker.
The basic path is:

main.c
test.h ---> [compiler] ---> main.o --+
                                     |
test.c ---> [compiler] ---> test.o --+--> [linker] ---> main.exe

So, your main program should include the header file for the test module, which should consist only of declarations, such as the function prototype:

void test(void);

This lets the compiler know that the function exists when main.c is being compiled, but the actual code is in test.c, then test.o. It's the linking phase that joins the two modules together. By including test.c in main.c, you're defining the test() function in main.o. Presumably, you're then linking main.o and test.o, both of which contain the function test().

A: Ages after this I found another problem that causes the same error and did not find the answer anywhere, so I'll put it here for reference for other people experiencing the same problem. I defined a function in a header file and it kept throwing this error. (I know it is not the right way, but I thought I would quickly test it that way.) The solution was to ONLY put the declaration in the header file and the definition in the cpp file. The reason is that header files are not compiled on their own; they only provide declarations.

A: You shouldn't include other source files (*.c) in .c files. I think you want to have a header (.h) file with the DECLARATION of the test function, and have its DEFINITION in a separate .c file. The error is caused by multiple definitions of the test function (one in test.c and the other in main.c).

A: You actually compile the source code of test.c twice:
* *The first time when compiling test.c itself,
*The second time when compiling main.c, which includes all the test.c source.

What you need in main.c in order to use the test() function is a simple declaration, not its definition. This is achieved by including a test.h header file which contains something like:

void test(void);

This informs the compiler that such a function with these input parameters and this return type exists.
What this function does (everything inside { and }) is left in your test.c file. In main.c, replace #include "test.c" with #include "test.h".

A last point: as your programs become more complex, you will face situations where header files may be included several times. To prevent this, header sources are sometimes enclosed by specific macro definitions, like:

#ifndef TEST_H_INCLUDED
#define TEST_H_INCLUDED

void test(void);

#endif
unknown
d6117
train
You'll need to use ParseExact (or TryParseExact) to parse the date:

var date = "20181217";
var parsedDate = DateTime.ParseExact(date, "yyyyMMdd", System.Globalization.CultureInfo.InvariantCulture);
var formattedDate = parsedDate.ToString("dd/MM/yyyy", System.Globalization.CultureInfo.InvariantCulture)

Here we tell ParseExact to expect our date in yyyyMMdd format, and then we tell it to format the parsed date in dd/MM/yyyy format. Applying it to your code:

cashRow["BusinessDate"] = DateTime.ParseExact(row.Cells[ClosingDate.Index].Value.ToString(), "yyyyMMdd", System.Globalization.CultureInfo.InvariantCulture).ToString("dd/MM/yyyy", System.Globalization.CultureInfo.InvariantCulture);

A: You have to use ParseExact to convert your yyyyMMdd string to a DateTime as follows, and then everything is okay:

cashRow["BusinessDate"] = DateTime.ParseExact(row.Cells[ClosingDate.Index].Value.ToString(), "yyyyMMdd", CultureInfo.InvariantCulture).ToString("dd/MM/yyyy");

A: As you already know the exact format of your input, yyyyMMdd (e.g. 20181211), just supply it to DateTime.ParseExact() and then call ToString() on the returned DateTime object:

string YourOutput = DateTime.ParseExact(p_dates, "yyyyMMdd", System.Globalization.CultureInfo.InvariantCulture).ToString("dd/MM/yyyy", System.Globalization.CultureInfo.InvariantCulture);
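The same parse-exact-then-reformat round trip, sketched in Python with strptime/strftime:

```python
from datetime import datetime

raw = "20181217"
parsed = datetime.strptime(raw, "%Y%m%d")  # parse yyyyMMdd, like ParseExact
formatted = parsed.strftime("%d/%m/%Y")    # re-format as dd/MM/yyyy
print(formatted)  # 17/12/2018
```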
unknown
d6118
train
The purpose of using a framework is that the long-term maintenance of your application is more easily done because you have coded consistently with specific standards. You can also have multiple developers working in parallel and easily "piece" the parts back together if they are done consistently. At least that is part of the promise frameworks provide, in theory. A framework is built to solve a specific problem in development. Before choosing a framework, you should review your application and decide if a framework will help you solve those problems. If it can great; if it can't you shouldn't force your app to use a framework. If you are duplicating a lot of code between the front and back end, that seems problematic to me. I'm not surprised that value objects would be mirrored between the front and back end, but their whole purpose is transferring data between different systems. Other code, or business logic, should ideally only exist in one system. As an aside; What an oddly phrased question. If you read specific blogs of experts and want their opinions, why are you posting here instead of contacting them directly? A: The combination of Flex and GAE can be very powerful. However GAE does have limitations that may impact what you are trying to build. For instance GAE doesn't support Spring last time I checked. Another cloud alternative that may work better is the new partnership between VMWare (SpringSource) and Salesforce.com called VMforce.
unknown
d6119
train
Your vector stores pointers, and you store in it pointers to local variables:

} else if (shape == 'T') {
    cin >> x1 >> y1 >> x2 >> y2 >> x3 >> y3;
    Triangle tr(x1, y1, x2, y2, x3, y3);  // <= create local variable, automatic allocation
    shapes[sum] = &tr;                    // <= store its address
    //cout << shapes[sum]->getArea() << endl;
    sum++;
}   // <= automatic deallocation of tr, i.e. tr doesn't exist anymore
    // shapes[sum - 1] stores the address of a no-longer-existing variable => undefined behavior

You should do:

} else if (shape == 'T') {
    cin >> x1 >> y1 >> x2 >> y2 >> x3 >> y3;
    Triangle *tr = new Triangle(x1, y1, x2, y2, x3, y3);  // manual allocation
    shapes.push_back(tr);
    //cout << shapes[sum]->getArea() << endl;
    sum++;
}

BUT you then have to deallocate with delete when you no longer need the objects in the vector.

sum isn't necessary: you have to use push_back to avoid undefined behavior, and afterwards you can use shapes.size() to retrieve the size of the vector. Indeed, accessing an element of a vector that is out of bounds (i.e. doing vector[n] with n equal to or greater than vector.size()) is undefined behavior.

The modern way of doing this: use shared_ptr or unique_ptr.
unknown
d6120
train
Well, it turns out that IIS has a really nice rewrite rule pattern tester. I found this tutorial extremely helpful. If you use the IIS URL Rewrite GUI, you can create a test redirect and then the URL Rewrite will write the redirect into web.config. You can then look in there and check your syntax.
unknown
d6121
train
In the meantime I found another (even better and less hacky) way:

<script>
  export let to
  let slotObj;
  const imageURL = getFaviconFor(to)
</script>

<a href={to}>
  <img src={imageURL} alt={slotObj ? slotObj.textContent + ' Icon' : 'Icon'} />
  <span bind:this={slotObj}><slot/></span>
</a>

A: This hack will work. Add a hidden span or div to bind the slot text:

<span contenteditable="true" bind:textContent={text} class="hide">
  <slot/>
</span>

And update your code like this:

<script>
  export let to
  let text;
  const imageURL = getFaviconFor(to)
</script>

<a href={to}>
  <img src={imageURL} alt={text + ' Icon'} />
  {text}
</a>
unknown
d6122
train
Before moving forward, make sure you understand the concept of Reader Extensions. Your PDF must have the appropriate usage rights applied (more specifically, 'Database and Web service Connectivity') before loading data into it. You did not create a data connection directly to your XML file; rather, you would have created a data source from a sample XML file. This just uses the schema of the XML - not the actual data. If you want to change the values of all fields at the same time, set Global Data Binding. Hope these tips help. -Nith
unknown
d6123
train
I'm attempting to have my code create a number of random values, save those values, then allow me to manipulate those random values to create a number of profiles

This is, in general, the wrong approach. The right one is to save the RNG's internal state, so that after restoring it you'll get the same sequence of random numbers. Along these lines:

import random

state = random.getstate()
# save it, pickle it, ...

...

# restore the state, unpickle it, ...
random.setstate(state)

# calls to random.randint() will now produce a controllable sequence of numbers
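A runnable demonstration of this save/restore pattern (the seed and ranges are arbitrary examples):

```python
import random

random.seed(12345)         # any starting point
state = random.getstate()  # snapshot the generator's internal state

first_run = [random.randint(0, 100) for _ in range(5)]

random.setstate(state)     # rewind the generator
second_run = [random.randint(0, 100) for _ in range(5)]

print(first_run == second_run)  # True: the exact same sequence again
```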
unknown
d6124
train
For the logback HTML file to display correctly, a custom CSS must be specified with font-family: 'lucida sans unicode', tahoma, arial, helvetica, sans-serif; (or a similar font) wherever it is needed. For instance, I have it set for the TR.even and TR.odd classes.

As an aside, it turns out that Eclipse has issues with these character sets. I was unable to get Asian characters to print at all, even with simple examples such as:

Locale locale = new Locale("zh", "CN");
System.out.println(locale.getDisplayLanguage(Locale.SIMPLIFIED_CHINESE));

I ran the same code in NetBeans and it outputs flawlessly. In the case of Eclipse I changed the encoding to UTF-8 wherever possible via system preferences, and set the default font to the above font, with no resolution. I even went so far as to download a new copy of Eclipse, extract it to a new directory and create a new workspace before setting everything to UTF-8 and changing the font, again with no resolution. For NetBeans no change was required.

--EDIT--
Also please note this seems to be a Windows-only issue - my home development machine is Linux and it ran the above code perfectly with no alterations to preferences, using a new install of Eclipse.
unknown
d6125
train
Notice the “ characters in your <img> tag. These are typographic ("curly") quotes, not the straight quotes HTML expects, so replace “ with ". The corrected HTML would be as follows:

<!DOCTYPE html>
<html>
<head>
  <title>Question Three</title>
</head>
<body>
  <p>
    <h1>Dominos Pizza order form</h1>
    <img src="dominos.png" alt="Dominos logo" width="100" height="50">
  </p>
</body>
</html>
unknown
d6126
train
You have several smaller bugs in this code. It is likely that gcc optimizes the code better than Keil and therefore the function could simply be removed. In some cases you are missing volatile which may break the code: * *led_reg=(uint8_t*)0x50000000; should be led_reg=(volatile uint8_t*)0x50000000u;, see How to access a hardware register from firmware? *void wait(void){ for(int i=0;i<=0x2FFFFF;i++); } should be volatile as well or the loop will just get removed. Additionally, I don't know if you wrote the startup code or if the tool vendor did, but it has bugs. Namely .pvStack = (void*) (&_estack) invokes undefined behavior, since C doesn't allow conversions from function pointers to object pointers. Instead of void* you need to use an unsigned integer or a function pointer type. As mentioned in comments, putting some alignment keyword on a Cortex M vector table is fishy - it should be at address 0 so how can it be misaligned? It would be interesting to see the definition of DeviceVectors. Also, bare metal microcontroller systems do not return from main(). You should not use int main(), since that is likely to cause a bit of stack overhead needlessly. Use an implementation-defined form such as void main (void) then compile for embedded ("freestanding") systems. With gcc that is -ffreestanding.
unknown
d6127
train
There are two critical problems with your code keeping it from working. 1) You are always updating the same "box" variable. You need to create a different one each time. I fixed this by adding a call to clone(). 2) The recursive function does not return a value after recursing. Add a return here in findInnerBox. The following code does what you want with those minor adjustments: var innerBox = $('.box'); var innerBoxDimentions; var boxHtml = innerBox.clone().removeClass('outter'); $(document).ready(function(){ do { innerBox = findInnerBox(innerBox); innerBoxDimentions = innerBox.width() / 2; boxHtml = boxHtml.clone().width(innerBoxDimentions).height(innerBoxDimentions); innerBox.append(boxHtml); } while (innerBoxDimentions > 2); //var boxHtml = innerBox.clone().width(innerBoxDimentions/2).height(innerBoxDimentions/2); innerBox.append(boxHtml); }); function findInnerBox(box){ if(box.children('.box').length > 0){ return findInnerBox(box.children('.box')) } else{ return box; } } /* CSS Styles for Recursive Box */ .box{ border:solid 1px #000; } .outter { width: 500px; height:500px } <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.10.0/jquery.min.js"></script> <div class="box outter"></div> A: Opps, too late I guess. Here is my version of the code var size = 500; while(size = size - 20){ $('.box:last').append('<div class="'+size+' box"></div>'); $('.'+size).css('height',size).css('width',size); } http://jsfiddle.net/443wovkk/2/
unknown
d6128
train
I'm not entirely sure what you want to do here. If I'm correct, what you're looking for is an expandable ListView. You can achieve this using the following library; it takes two layouts - one for the ListView item, and a second for the layout which should be expanded when the ListView item is clicked: https://github.com/tjerkw/Android-SlideExpandableListView
unknown
d6129
train
The problem is caused by way "Image Events" returns references to opened images: open alias "Paul:Users:tim:Downloads:test:Math.png" --> image "Math.png" open alias "Paul:Users:tim:Downloads:test:169:Math.png" --> image "Math.png" The opened image is referenced by name. If you open another image with the same name, the returned reference may occasionally refer to a previously opened image of the same name. As a work-around add a close every image before you enter the loop that processes the selected images. Furthermore you need to close the opened image when you are done with it: tell application "Image Events" close every image repeat with currentWallpaper in wallpapers set theWallpaper to open (currentWallpaper as alias) tell theWallpaper ... end tell close theWallpaper end repeat end tell
unknown
d6130
train
Look at EmberScript: http://emberscript.com/

The key difference is that class and extends compile directly to the Ember equivalents, rather than trying to make the CoffeeScript ideas fit with Ember.

class SomeModel extends Ember.Object

becomes

var SomeModel;
var get$ = Ember.get;
var set$ = Ember.set;
SomeModel = Ember.Object.extend();

A: App.SomeModel = DS.Model.extend() calls Ember.js's own Object extend method, which adds observers, reopens a class, and so on. class App.SomeModel extends DS.Model() doesn't rely on the framework: in plain JavaScript, it assigns "SomeModel" the properties of the "DS.Model()" object. It's not expected to work inside the framework as a substitute for extending Ember.Object.
unknown
d6131
train
Yes. When you call QIODevice::readAll() two times, it is normal that the second time you get nothing: everything has already been read, so there is nothing more to read. This behavior is standard for IO read functions: each call to a read() function returns the next piece of data, and since readAll() reads to the end, further calls return nothing. However, this does not necessarily mean that the data has been flushed. For instance, when you read a file, reading just moves a "cursor" around, and you can go back to the start of the file with QIODevice::seek(0). For QNetworkReply, I'd guess that the data is just discarded.
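The same cursor behaviour is easy to see with any stream; for instance, with Python's in-memory streams (an analogy, not Qt code):

```python
import io

stream = io.BytesIO(b"reply payload")

first = stream.read()   # reads everything; the cursor is now at the end
second = stream.read()  # nothing left, so this returns b''

stream.seek(0)          # a seekable device can rewind, like QIODevice::seek(0)
again = stream.read()

print(first)            # b'reply payload'
print(second)           # b''
print(again == first)   # True
```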
unknown
d6132
train
Though it is not clear where you use your code, the following is a Java model class with setter and getter methods. Even if it is not the direct answer to your question, I have used this type of model class in my projects; you can get the idea of setters and getters from the following User class. You can create your User class as follows, with constructors and setter and getter methods:

public class User {
    private String Nama;
    private String Kelas;
    private int NIM;

    public User() {
    }

    public User(String nama, String kelas, int NIM) {
        Nama = nama;
        Kelas = kelas;
        this.NIM = NIM;
    }

    public String getNama() {
        return Nama;
    }

    public void setNama(String nama) {
        Nama = nama;
    }

    public String getKelas() {
        return Kelas;
    }

    public void setKelas(String kelas) {
        Kelas = kelas;
    }

    public int getNIM() {
        return NIM;
    }

    public void setNIM(int NIM) {
        this.NIM = NIM;
    }
}
unknown
d6133
train
We suggest this .gitignore: react-native/Examples/SampleApp/.gitignore. It ignores both user-specific Xcode files and the node_modules dir. A: React Native CLI creates a .gitignore file when you start a new project: react-native init <ProjectName> It covers all the basics that should/can be ignored. Source: https://github.com/facebook/react-native/blob/master/template/_gitignore A: This is a related question:What should Xcode 6 gitignore file include? It can be divided into three categories: * *IDE(Webstorm,Xcode) config file,like:.idea/,ios/ProjectName.xcodeproj/xcuserdata *version control tools(git,svn) file, like: .git *other files,for example,.DS_Store is OSX dir config file my answer is which have been inspected in practice: ### SVN template .svn/ ### Xcode template # Xcode # # gitignore contributors: remember to update Global/Xcode.gitignore, Objective-C.gitignore & Swift.gitignore ## Build generated build/ DerivedData/ ## Various settings *.pbxuser !default.pbxuser *.mode1v3 !default.mode1v3 *.mode2v3 !default.mode2v3 *.perspectivev3 !default.perspectivev3 xcuserdata/ ## Other *.moved-aside *.xccheckout *.xcscmblueprint ### JetBrains template # Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio and Webstorm # Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839 # User-specific stuff: .idea/workspace.xml .idea/tasks.xml .idea/dictionaries .idea/vcs.xml .idea/jsLibraryMappings.xml # Sensitive or high-churn files: .idea/dataSources.ids .idea/dataSources.xml .idea/dataSources.local.xml .idea/sqlDataSources.xml .idea/dynamic.xml .idea/uiDesigner.xml # Gradle: .idea/gradle.xml .idea/libraries .idea # Mongo Explorer plugin: .idea/mongoSettings.xml ## File-based project format: *.iws ## Plugin-specific files: # IntelliJ /out/ # mpeltonen/sbt-idea plugin .idea_modules/ # JIRA plugin atlassian-ide-plugin.xml # Crashlytics plugin (for Android Studio and IntelliJ) com_crashlytics_export_strings.xml 
crashlytics.properties crashlytics-build.properties fabric.properties ### TortoiseGit template # Project-level settings /.tgitconfig *.swp # node_modules/,Xcode and Webstorm will spend lots of time for indexing this dir node_modules/ # ios/Pods, ios/Pods/ # OS X temporary files that should never be committed .DS_Store src/components/.DS_Store # user personal info,for example debug info ios/ProjectName.xcodeproj/project.xcworkspace/ ios/ProjectName.xcodeproj/xcuserdata # Podfile versions ios/Podfile.lock # Created by .ignore support plugin (hsz.mobi) Hope it helps you! A: gitignore.io suggests the following .gitignore file for react-native: Created by https://www.gitignore.io/api/reactnative ### ReactNative ### # React Native Stack Base ### ReactNative.Xcode Stack ### # Xcode # # gitignore contributors: remember to update Global/Xcode.gitignore, Objective-C.gitignore & Swift.gitignore ## Build generated build/ DerivedData/ ## Various settings *.pbxuser !default.pbxuser *.mode1v3 !default.mode1v3 *.mode2v3 !default.mode2v3 *.perspectivev3 !default.perspectivev3 xcuserdata/ ## Other *.moved-aside *.xccheckout *.xcscmblueprint ### ReactNative.Node Stack ### # Logs logs *.log npm-debug.log* yarn-debug.log* yarn-error.log* # Runtime data pids *.pid *.seed *.pid.lock # Directory for instrumented libs generated by jscoverage/JSCover lib-cov # Coverage directory used by tools like istanbul coverage # nyc test coverage .nyc_output # Grunt intermediate storage (http://gruntjs.com/creating-plugins#storing-task-files) .grunt # Bower dependency directory (https://bower.io/) bower_components # node-waf configuration .lock-wscript # Compiled binary addons (http://nodejs.org/api/addons.html) build/Release # Dependency directories node_modules/ jspm_packages/ # Typescript v1 declaration files typings/ # Optional npm cache directory .npm # Optional eslint cache .eslintcache # Optional REPL history .node_repl_history # Output of 'npm pack' *.tgz # Yarn Integrity file .yarn-integrity # 
dotenv environment variables file .env ### ReactNative.Buck Stack ### buck-out/ .buckconfig.local .buckd/ .buckversion .fakebuckversion ### ReactNative.macOS Stack ### *.DS_Store .AppleDouble .LSOverride # Icon must end with two \r Icon # Thumbnails ._* # Files that might appear in the root of a volume .DocumentRevisions-V100 .fseventsd .Spotlight-V100 .TemporaryItems .Trashes .VolumeIcon.icns .com.apple.timemachine.donotpresent # Directories potentially created on remote AFP share .AppleDB .AppleDesktop Network Trash Folder Temporary Items .apdisk ### ReactNative.Gradle Stack ### .gradle **/build/ # Ignore Gradle GUI config gradle-app.setting # Avoid ignoring Gradle wrapper jar file (.jar files are usually ignored) !gradle-wrapper.jar # Cache of project .gradletasknamecache # # Work around https://youtrack.jetbrains.com/issue/IDEA-116898 # gradle/wrapper/gradle-wrapper.properties ### ReactNative.Android Stack ### # Built application files *.apk *.ap_ # Files for the ART/Dalvik VM *.dex # Java class files *.class # Generated files bin/ gen/ out/ # Gradle files .gradle/ # Local configuration file (sdk path, etc) local.properties # Proguard folder generated by Eclipse proguard/ # Log Files # Android Studio Navigation editor temp files .navigation/ # Android Studio captures folder captures/ # Intellij *.iml .idea/workspace.xml .idea/tasks.xml .idea/gradle.xml .idea/dictionaries .idea/libraries # External native build folder generated in Android Studio 2.2 and later .externalNativeBuild # Freeline freeline.py freeline/ freeline_project_description.json ### ReactNative.Linux Stack ### *~ # temporary files which can be created if a process still has a handle open of a deleted file .fuse_hidden* # KDE directory preferences .directory # Linux trash folder which might appear on any partition or disk .Trash-* # .nfs files are created when an open file is removed but is still being accessed .nfs* # End of https://www.gitignore.io/api/reactnative A: It's probably worth noting 
that react-native init <project-name> generates a .gitignore file for you. This will likely be up to date with React Native's current tooling and build outputs. So this should be a good starting point. Using react-native-cli 1.0.0 and react-native 0.36.0 generated the following .gitignore file: # OSX # .DS_Store # Xcode # build/ *.pbxuser !default.pbxuser *.mode1v3 !default.mode1v3 *.mode2v3 !default.mode2v3 *.perspectivev3 !default.perspectivev3 xcuserdata *.xccheckout *.moved-aside DerivedData *.hmap *.ipa *.xcuserstate project.xcworkspace # Android/IJ # *.iml .idea .gradle local.properties # node.js # node_modules/ npm-debug.log # BUCK buck-out/ \.buckd/ android/app/libs android/keystores/debug.keystore A: If you look at the React Native examples: https://github.com/facebook/react-native/tree/master/Examples Each one has a directory with a contents similar to the iOS directory generated by react-native-cli. Looking further into the Xcode project file, it's referenced in there too, and look at the contents - there's things like the launch screen. So yes, the iOS directory is needed. Regarding node_modules, I suggest you look at this answer which provides more information: https://stackoverflow.com/a/19416403/125680
unknown
d6134
train
Usually for maintainability and to reduce code size when multiple constructors call the same initialization code: class stuff { public: stuff(int val1) { init(); setVal = val1; } stuff() { init(); setVal = 0; } void init() { startZero = 0; } protected: int setVal; int startZero; }; A: Just the opposite: it's usually better to put all of the initializations in the constructor. In C++, the "best" policy is usually to put the initializations in an initializer list, so that the members are directly constructed with the correct values, rather than default constructed, then assigned. In Java, you want to avoid a function (unless it is private or final), since dynamic resolution can put you into an object which hasn't been initialized. About the only reason you would use an init() function is because you have a lot of constructors with significant commonality. (In the case of C++, you'd still have to weigh the difference between default construction, then assignment vs. immediate construction with the correct value.) A: In Java, there are good reasons for keeping constructors short and moving initialization logic into an init() method: * *constructors are not inherited, so any subclasses have to either reimplement them or provide stubs that chain with super *you shouldn't call overridable methods in a constructor since you can find your object in an inconsistent state where it is partially initialized A: It's a design choice. You want to keep your constructor as simple as possible, so it's easy to read what it's doing. That's why you'll often see constructors that call other methods or functions, depending on the language. It allows the programmer to read and follow the logic without getting lost in the code. As constructors go, you can quickly run into a scenario where you have a tremendous sequence of events you wish to trigger.
Good design dictates that you break down these sequences into simpler methods, again, to make it more readable and easier to maintain in the future. So, no, there's no harm or limitation, it's a design preference. If you need all that initialisation done in the constructor then do it there. If you only need it done later, then put it in a method you call later. Either way, it's entirely up to you and there are no hard or fast rules around it. A: It is a design pattern that has to do with exceptions thrown from inside an object constructor. In C++, if an exception is thrown from inside an object constructor then that object is considered as not constructed at all by the language runtime. As a consequence the object destructor won't be called when the object goes out of scope. This means that if you had code like this inside your constructor: int *p1 = new int; int *p2 = new int; and code like this in your destructor: delete p1; delete p2; and the initialization of p2 inside the constructor fails due to no more memory available, then a bad_alloc exception is thrown by the new operator. At that point your object is not fully constructed, even if the memory for p1 has been allocated correctly. If this happens the destructor won't be called and you are leaking p1. So the more code you place inside the constructor, the more likely an error will occur leading to potential memory leaks. That's the main reason for that design choice, which isn't too mad after all. More on this on Herb Sutter's blog: Constructors exceptions in C++
One situation where this can easily happen is in an event simulation system where different components interact, so that each component needs pointers to other components. Since it's impossible to do all the initialization in the constructors, then at least some of the initialization has to happen in init functions. That leads to the question: Which parts should be done in init functions? The flexible choice seems to be to do all the pointer initialization in init functions. Then, you can construct the objects in any order since when constructing a given object, you don't have to be concerned about whether you already have the necessary pointers to the other objects it needs to know about.
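The cyclic-dependency case described above is easy to sketch in Python (hypothetical Component class; the same shape applies in C++ or Java): each constructor does only the self-contained work, and a separate init step wires up the mutual references once every object exists.

```python
class Component:
    """A simulation component that needs a reference to a peer component."""

    def __init__(self, name):
        # The constructor does only the work that needs no other objects.
        self.name = name
        self.peer = None

    def init(self, peer):
        # Second phase: wire up the mutual references once all objects exist.
        self.peer = peer


# Neither constructor could receive the other object, because whichever
# component is created first, its peer does not exist yet.
a = Component("a")
b = Component("b")
a.init(b)
b.init(a)

assert a.peer is b and b.peer is a
```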
unknown
d6135
train
You need to publish the deleted page. The activation/deactivation of pages does not happen automatically, so if the node disappeared immediately after deletion, you would not be able afterwards to publish the "deletion" to the publish instances to keep them in sync.
unknown
d6136
train
I don't have API keys to run this code but I see a few mistakes: When you use for items in filteredList: you get a word from the list, not its index, so you can't compare it with a number. To get a number you would use for items in range(len(filteredList)): But instead of that, it is better to use the first version and then use items instead of filteredList[items] in results = google_search(items, my_api_key, my_cse_id, num=5) If you choose the version with range(len(filteredList)): then don't add 1 to items - because then you get numbers 1..6 instead of 0..5, so you skip the first element filteredList[0] and it doesn't search the first word. And later you try to get filteredList[6], which doesn't exist in the list, and you get your error message. for word in filteredList: results = google_search(word, my_api_key, my_cse_id, num=5) print(results) newDict = dict() for result in results: for (key, value) in result.items(): if key in keyValList: newDict[key] = value newDictList.append(newDict) print(newDictList) BTW: you have to create newDict = dict() in every loop. BTW: the standard print() and pprint.pprint() are used only to send text to the screen and always return None, so you can't assign the displayed text to a variable. If you have to format text then use string formatting for this.
for index in range(len(filteredList)): results = google_search(filteredList[index], my_api_key, my_cse_id, num=5) print(results) newDict = dict() for result in results: for (key, value) in result.items(): if key in keyValList: newDict[key] = value newDictList.append(newDict) print(newDictList) EDIT: from googleapiclient.discovery import build import requests API_KEY = "AIzXXX" CSE_ID = "013XXX" def google_search(search_term, api_key, cse_id, **kwargs): service = build("customsearch", "v1", developerKey=api_key) res = service.cse().list(q=search_term, cx=cse_id, **kwargs).execute() return res['items'] words = [ 'Semkir sistem', 'Evrascon', 'Baku Electronics', 'Optimal Elektroniks', 'Avtostar', 'Improtex', # 'Wayback Machine' ] filtered_results = list() keys = ['cacheId', 'link', 'htmlTitle', 'htmlSnippet', ] for word in words: items = google_search(word, API_KEY, CSE_ID, num=5) for item in items: #print(item.keys()) # to check if every item has the same keys. It seems some items don't have 'cacheId' row = dict() # row of data in final list with results for key in keys: row[key] = item.get(key) # None if there is no `key` in `item` #row[key] = item[key] # ERROR if there is no `key` in `item` # generate link to cached page if row['cacheId']: row['link_cache'] = 'https://webcache.googleusercontent.com/search?q=cache:{}:{}'.format(row['cacheId'], row['link']) # TODO: read HTML from `link_cache` and get full text. # Maybe module `newpaper` can be useful for some pages. # For other pages module `urllib.request` or `requests` can be needed. row['html'] = requests.get(row['link_cache']).text else: row['link_cache'] = None row['html'] = '' # check word in title and snippet. Word may use upper and lower case chars so I convert to lower case to skip this problem. # It doesn't work if text use native chars - ie. 
Cyrillic lower_word = word.lower() if (lower_word in row['htmlTitle'].lower()) or (lower_word in row['htmlSnippet'].lower()) or (lower_word in row['html'].lower()): filtered_results.append(row) else: print('SKIP:', word) print(' :', row['link']) print(' :', row['htmlTitle']) print(' :', row['htmlSnippet']) print('-----') for item in filtered_results: print('htmlTitle:', item['htmlTitle']) print('link:', item['link']) print('cacheId:', item['cacheId']) print('link_cache:', item['link_cache']) print('part of html:', item['html'][:300]) print('---')
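The element-versus-index point above can be seen in a tiny standalone snippet (plain strings only - no Google API or keys involved):

```python
filteredList = ["Evrascon", "Improtex"]

# Iterating directly yields the elements themselves...
elements = [item for item in filteredList]
assert elements == ["Evrascon", "Improtex"]

# ...while range(len(...)) yields the indices 0..len-1.
indices = [i for i in range(len(filteredList))]
assert indices == [0, 1]

# Adding 1 to the index skips element 0 and overruns the end:
shifted = [i + 1 for i in indices]
assert shifted == [1, 2]   # filteredList[2] would raise IndexError

# enumerate() gives both at once, which avoids the whole confusion.
pairs = list(enumerate(filteredList))
assert pairs == [(0, "Evrascon"), (1, "Improtex")]
```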
unknown
d6137
train
Working code. Try this. <head> <script src="//ajax.googleapis.com/ajax/libs/jquery/1.8.1/jquery.min.js"></script> <style> /* The Modal (background) */ .modal { display: none; /* Hidden by default */ position: fixed; /* Stay in place */ z-index: 1; /* Sit on top */ padding-top: 100px; /* Location of the box */ left: 0; top: 0; width: 100%; /* Full width */ height: 100%; /* Full height */ overflow: auto; /* Enable scroll if needed */ background-color: rgb(0,0,0); /* Fallback color */ background-color: rgba(0,0,0,0.4); /* Black w/ opacity */ } /* Modal Content */ .modal-content { background-color: #fefefe; margin: auto; padding: 20px; border: 1px solid #888; width: 80%; } /* The Close Button */ .close { color: #aaaaaa; float: right; font-size: 28px; font-weight: bold; } .close:hover, .close:focus { color: #000; text-decoration: none; cursor: pointer; } </style> </head> <body> <!-- Trigger/Open The Modal --> <a href="#" id="myBtn" class="myBtn"><img src="http://theparlour21.se/wp-content/uploads/2015/07/granat-salad.png"></a> <a href="#" id="myBtn" class="myBtn"><img src="http://theparlour21.se/wp-content/uploads/2015/07/granat-salad.png"></a> <a href="#" id="myBtn" class="myBtn"><img src="http://theparlour21.se/wp-content/uploads/2015/07/granat-salad.png"></a> <!-- The Modal --> <div id="myModal" class="modal"> <!-- Modal content --> <div class="modal-content"> <span class="close">&times;</span> <p>Some text in the Modal..</p> </div> </div> <script> // Get the modal var modal = document.getElementById('myModal'); // Get the button that opens the modal var btn = document.getElementById("myBtn"); // Get the <span> element that closes the modal var span = document.getElementsByClassName("close")[0]; // When the user clicks the button, open the modal // btn.onclick = function() { // modal.style.display = "block"; // } $(".myBtn").click(function() { modal.style.display = "block"; }); // When the user clicks on <span> (x), close the modal span.onclick = function() { 
modal.style.display = "none"; } // When the user clicks anywhere outside of the modal, close it window.onclick = function(event) { if (event.target == modal) { modal.style.display = "none"; } } </script> </body> A: <a href="#" id="myBtn" class="popupmodel"><img src="http://theparlour21.se/wp-content/uploads/2015/07/granat-salad.png"></a> <a href="#" id="myBtn" class="popupmodel"><img src="http://theparlour21.se/wp-content/uploads/2015/07/granat-salad.png"></a> <a href="#" id="myBtn" class="popupmodel"><img src="http://theparlour21.se/wp-content/uploads/2015/07/granat-salad.png"></a> <!-- The Modal --> <div id="myModal" class="modal"> <!-- Modal content --> <div class="modal-content"> <span class="close">&times;</span> <p>Some text in the Modal..</p> </div> </div> $(document).ready(function(){ $(document).on("click",".close,.modal", function(){ $(".modal").hide(); }); $(document).on("click",".popupmodel", function(){ $(".modal").show(); }); }); fiddle link
unknown
d6138
train
This is quite crude but this q function will take an existing psv file and pad it out: pad:{m:max each count@''a:flip"|"vs/:read0 x;x 0:"|"sv/:flip m$a} It works by taking the max string length of each column and padding the rest of the values to the same width using $. The columns are then stitched back together and saved. Taking this example file contents: column1|column2|column3 somevalue1|somevalue2|somevalue3 somevalue4|somevalue5|somevalue6 somevalue7|somevalue8|somevalue9 Passing through pad in an open q session: pad `:sample.psv Gives the result: column1 |column2 |column3 somevalue1|somevalue2|somevalue3 somevalue4|somevalue5|somevalue6 somevalue7|somevalue8|somevalue9
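For comparison, here is a rough Python sketch of the same idea (pad_psv is a made-up name; it assumes every row has the same number of '|'-separated fields): compute each column's maximum width, then left-justify every field to that width.

```python
def pad_psv(lines):
    """Pad each '|'-separated column to its widest value."""
    rows = [line.split("|") for line in lines]
    # Max string length per column (assumes rectangular data).
    widths = [max(len(row[i]) for row in rows) for i in range(len(rows[0]))]
    return ["|".join(field.ljust(w) for field, w in zip(row, widths))
            for row in rows]


padded = pad_psv(["column1|column2", "somevalue1|x"])
assert padded == ["column1   |column2", "somevalue1|x      "]
```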
unknown
d6139
train
I see what you're trying to do now. You want to have only 2 axes, 1 that will correspond to the values in columns 0/1, and the other corresponding to column 2. I threw this example together (feel free to copy-paste in to google playground): function drawVisualization() { var data = new google.visualization.DataTable(); data.addColumn('date', 'Date'); data.addColumn('number', 'Sold Pencils'); data.addColumn('string', 'title1'); data.addColumn('string', 'text1'); data.addColumn('number', 'Sold Pens'); data.addColumn('string', 'title2'); data.addColumn('string', 'text2'); data.addColumn('number', 'Sold Erasers'); data.addColumn('string', 'title3'); data.addColumn('string', 'text3'); data.addRows([ [new Date(2008, 1 ,1), 30000, null, null, 40645, null, null, 100, null, null], [new Date(2008, 1 ,2), 14045, null, null, 20374, null, null, 120, null, null], [new Date(2008, 1 ,3), 55022, null, null, 50766, null, null, 70, null, null], [new Date(2008, 1 ,4), 75284, null, null, 14334, 'Out of Stock', 'Ran out of stock on pens at 4pm', 100, null, null], [new Date(2008, 1 ,5), 41476, 'Bought Pens', 'Bought 200k pens', 66467, null, null, 110, null, null], [new Date(2008, 1 ,6), 33322, null, null, 39463, null, null, 130, 'Eraser Boom', 'People dig erasers'] ]); var annotatedtimeline = new google.visualization.AnnotatedTimeLine( document.getElementById('visualization')); annotatedtimeline.draw(data, {'displayAnnotations': true, scaleType: 'allmaximized', scaleColumns:[0, 2]}); } This shows column 0/1 on the left-side scale, and column 2 on the right-side scale. A: Get the same scale for both series 2 and 3 ? Looks like the only way is to "hack" it. You could do it by adding two rows of fake values at the end (or start) of the series. Get the max and min values for series 2 and 3 in variables max2, max3, min2, min3. 
Then append two rows of (fake) values : row1 = last value of series 1, Max(max2, max3), Max(max2, max3) row2 = last value of series 1, Min(min2, min3), Min(min2, min3) You need to set also option scaleType:'allmaximized' and your series 2 and 3 will be perfectly aligned.
unknown
d6140
train
You can use the @JsName annotation to provide an exact name for the function (or other symbol) in the compiled JavaScript, e.g. @JsName("withParam") fun withParam(args: String) { println("JavaScript generated through Kotlin") }
unknown
d6141
train
How do I stop my backtracking algorithm once I find an answer? You could use Python exception facilities for such a purpose. You could also adopt the convention that your solution_recursive returns a boolean telling the caller to stop backtracking. It is also a matter of taste or of opinion. A: I'd like to expand a bit on your recursive code. One of your problems is that your program displays paths that are not the solutions. This is because each call to solution_recursive starts with show_frogs(frogs) regardless of whether frogs is the solution or not. Then, you say that the program is continuing even after the solution has been found. There are two reasons for this, the first being that your while loop doesn't care about whether the solution has been found or not, it will go through all the possible moves: while len(S_prime) > 0:
unknown
d6142
train
Ok, I found the problem here. This was working ok but my browser cache (Firefox) was remembering the old values. I turned of caching in the browser (by going to about:config and setting browser.cache.disk.enable = FALSE Then everything started working correctly. Hopefully this will help others who have the same issue!
unknown
d6143
train
You can do "conditional render" using a condition and a component like this: const someBoolean = true; retrun ( { someBoolean && <SomeComponent /> } ) if someBoolean is true, then <SomeComponent/> will show. if someBoolean is false, then <SomeComponent/> will not show. So, just use that to conditionally render your dropdown column or any other component. If, the table rows are based upon content and dynamically rendered, then I would modify the content accordingly. (i.e. don't provide the rows you don't want to render).
unknown
d6144
train
In standard SQL, quoted identifiers are case sensitive and Postgres follows that standard. So the following: select column_one as "COL", column_two as "col" from ... Or as part of a table create table dont_do_this ( "COL" integer, "col" integer ); Those are two different names as they become case sensitive due to the use of the double quotes. But I would strongly advise to not do that. This will most probably create confusion and problems down the line. I think this should work with MySQL as well, but as it traditionally doesn't care about following the SQL standard, I don't know. A: Can we create two (or more) columns in Postgres/ MySQL with the same name but a different case? As in does case-sensitivity matter in column names? Not possible to create columns with same name - yes case-sensitivity matters. Example in MySQL: CREATE TABLE test( id int, id int ); CREATE TABLE test1( id int, ID int ); Output in MySQL: Schema Error: Error: ER_DUP_FIELDNAME: Duplicate column name 'id' Output in PostgreSQL: Schema Error: error: column "id" specified more than once SELECT statement: SELECT id as "id", ID1 as "id" FROM test; Output: id 2 Case-sensitive SELECT: SELECT id as "id", ID1 as "ID" FROM test; Output: id ID 1 2
unknown
d6145
train
Question is, is there a way to limit the number of ORDERED rows returned in the subquery ? The following is what I typically use for top-n type queries (pagination query in this case): select * from ( select a.*, rownum r from ( select * from your_table where ... order by ... ) a where rownum <= :upperBound ) where r >= :lowerBound; I usually use an indexed column to sort in the inner query, and the use of rownum means Oracle can use the count(stopkey) optimization. So, not necessarily going to do a full table scan: create table t3 as select * from all_objects; alter table t3 add constraint t_pk primary key(object_id); analyze table t3 compute statistics; delete from plan_table; commit; explain plan for select * from ( select a.*, rownum r from ( select object_id, object_name from t3 order by object_id ) a where rownum <= 2000 ) where r >= 1; select operation, options, object_name, id, parent_id, position, cost, cardinality, other_tag, optimizer from plan_table order by id; You'll find Oracle does a full index scan using t_pk. Also note the use of the stopkey option. Hope that explains my answer ;) A: Your inference that Oracle must return all rows in the subquery before filtering out the first N is wrong. It will start fetching rows from the subquery, and stop when it has returned N rows. Having said that, it may be that Oracle needs to select all rows from the table and sort them before it can start returning them. But if there were an index on the column being used in the ORDER BY clause, it might not. Oracle is in the same position as any other DBMS: if you have a large table with no index on the column you are ordering by, how can it possibly know which rows are the top N without first getting all the rows and sorting them? A: Order by may become a heavy operation if you have lots of data. Take a look at your execution plan. If the data is not real time you could create a materialized view for these kinds of selects...
A: In older versions of Oracle (8.0) you can't use an ORDER BY clause in a subquery. So, only for those of us who still use some ancient versions, there is another way to deal with it: the magic of the UNION operator. UNION will sort the records by the columns in the query: Example: SELECT * FROM (SELECT EMP_NO, EMP_NAME FROM EMP_TABLE UNION SELECT 99999999999,'' FROM DUAL) WHERE ROWNUM<=5 where 99999999999 is bigger than all values in EMP_NO; Or, if you want to select the TOP 5 employees with the highest salaries: SELECT EMP_NO, EMP_NAME, 99999999999999-TMP_EMP_SAL FROM (SELECT 99999999999999-EMP_SAL TMP_EMP_SAL, EMP_NO, EMP_NAME FROM EMP_TABLE UNION SELECT 99999999999999,0,'' FROM DUAL) WHERE ROWNUM<=5; Regards, Virgil Ionescu
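For readers on databases without ROWNUM: the equivalent of the pagination pattern above is LIMIT/OFFSET - a different mechanism, not Oracle's, but the same result. A small SQLite sketch with made-up table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (emp_no integer, emp_sal integer)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [(i, i * 100) for i in range(1, 11)])

# Top 5 salaries, i.e. rows 1..5 of the ordered result.
top5 = conn.execute(
    "SELECT emp_no FROM emp ORDER BY emp_sal DESC LIMIT 5 OFFSET 0"
).fetchall()
assert [r[0] for r in top5] == [10, 9, 8, 7, 6]
```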
unknown
d6146
train
Please add the package reference below: <PackageReference Include="Microsoft.AspNetCore.Mvc.NewtonsoftJson" Version="5.0.17" /> And update your ConfigureServices method as below: public void ConfigureServices(IServiceCollection services) { services.AddControllersWithViews().AddNewtonsoftJson(); } It works for me.
unknown
d6147
train
So what ended up working is using location = / { } in my ui.conf file and location / { } in my main conf file.
unknown
d6148
train
If a report is only associated with a user through the project (this means specifically that it makes no sense to have a report with a different user than its project) then the second one is better. You will always be able to access the user by (report object).project.user, or in search queries as 'project__user'. If you use the first one you risk getting the user data for the report and the project out of sync, which would not make sense for your app.
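The report.project.user traversal behind the second option can be mimicked with plain Python objects (hypothetical field names, no Django involved) to see why the report never needs its own user field:

```python
class User:
    def __init__(self, name):
        self.name = name


class Project:
    def __init__(self, user):
        self.user = user          # FK: project -> user


class Report:
    def __init__(self, project):
        self.project = project    # FK: report -> project (no user field)


alice = User("alice")
report = Report(Project(alice))

# One hop through the project - the equivalent of Django's 'project__user'.
assert report.project.user is alice
```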
unknown
d6149
train
You should deploy to PyPI or to a local repository.
unknown
d6150
train
select user_id,name,score from your_table where (user_id,id) in (select user_id,max(id) from your_table group by user_id) A: Considering the below formats for your tables CREATE TABLE IF NOT EXISTS `user` (`user_id` int(11) NOT NULL auto_increment, `user_name` varchar(200) collate latin1_general_ci NOT NULL, PRIMARY KEY (`user_id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci AUTO_INCREMENT=1; CREATE TABLE IF NOT EXISTS `user_score` ( `id` int(11) NOT NULL auto_increment, `user_id` int(11) NOT NULL, `user_score` int(11) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci AUTO_INCREMENT=1; Please try executing the below sql select query for retrieving latest score for each user SELECT u . * ,s.user_score,s.id FROM user u JOIN user_score s ON u.user_id = s.user_id WHERE id IN (SELECT MAX( id ) FROM user_score GROUP BY user_id)
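The MAX(id)-per-user pattern from the second answer can be checked end-to-end with SQLite from Python (a trimmed-down version of the same schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_score (id integer PRIMARY KEY, "
             "user_id integer, user_score integer)")
# Two users, several scores each; a higher id means a more recent row.
conn.executemany(
    "INSERT INTO user_score (user_id, user_score) VALUES (?, ?)",
    [(1, 10), (2, 30), (1, 20), (2, 5)])

# Latest score per user = the row holding MAX(id) for that user.
latest = conn.execute(
    "SELECT user_id, user_score FROM user_score "
    "WHERE id IN (SELECT MAX(id) FROM user_score GROUP BY user_id) "
    "ORDER BY user_id"
).fetchall()
assert latest == [(1, 20), (2, 5)]
```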
unknown
d6151
train
You can try this: GRANT ALL PRIVILEGES ON *.* TO 'admin'@'%'; FLUSH PRIVILEGES; or try to connect to 127.0.0.1 instead of localhost A: This is not a problem with the MySQL installation or set-up. Each account name consists of both a user and host name like 'user_name'@'host_name', even when the host name is not specified. From the MySQL Reference Manual: https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_usage MySQL account names consist of a user name and a host name. This enables creation of accounts for users with the same name who can connect from different hosts. This also allows MySQL to grant different levels of permissions depending on which host they use to connect. When updating grants for an account where MySQL doesn't recognize the host portion, it will return error code 1133 unless it has a password to identify this account. Also, MySQL will allow an account name to be specified only by its user name, but in this case it is treated as 'user_name'@'%'.
unknown
d6152
train
I would say valgrind + callgrind: you can control the output while the program is running, and you can use kcachegrind to inspect the output in KDE. A: You can use valgrind for this.
unknown
d6153
train
Try this: import "influxdata/influxdb/schema" schema.measurementTagValues( bucket: "my_bucket", tag: "host", measurement: "my_measurement" ) A: this works for me: from(bucket: "telegraf") |> range(start: -15m) |> group(columns: ["host"], mode:"by") |> keyValues(keyColumns: ["host"]) Note: if you want to look further back (e.g. -30d) the performance will be slow; you can solve it by loading this query only once (available in grafana variables) or, better, add some filters and selectors, for example: from(bucket: "telegraf") |> range(start: -30d) |> filter(fn: (r) => r._field == "your field") |> filter(fn: (r) => /* more filters */) |> group(columns: ["host"], mode:"by") |> first() |> keyValues(keyColumns: ["host"]) A: I'm using the following Flux code to extract all host tag values for the bucket "telegraf" - just as your posted InfluxQL does: import "influxdata/influxdb/schema" schema.tagValues(bucket: "telegraf", tag: "host") InfluxDB has a bit about this in their documentation: https://docs.influxdata.com/influxdb/v2.0/query-data/flux/explore-schema/#list-tag-values
unknown
d6154
train
I assume thread-A creates and updates object-X, then passes it to thread-B.

If object-X and whatever it refers to directly or transitively (its fields) are not updated further by thread-A, then volatile is redundant. The consistency of object-X's state at the receiving thread is guaranteed by the JVM. In other words, if logical ownership of object-X is passed from thread-A to thread-B, then volatile doesn't make sense. Conversely, on modern multicore systems, the performance implications of volatile can be greater than those of the thread-local garbage left by immutable case classes.

If object-X is supposed to be shared for writing, making a field volatile will help to share its value, but you will face another problem: non-atomic updates on object-X, if the fields' values depend on each other.

As @alf pointed out, to benefit from happens-before guarantees, the object must be passed safely! This can be achieved using the java.util.concurrent.* classes. High-level constructs like Akka define their own mechanisms for "passing" objects safely.

References: https://docs.oracle.com/javase/tutorial/essential/concurrency/immutable.html

A: As @tair points out, the real solution to your problem is to use an immutable case class:

The sender may update the size field just before sending it out. It seems that the receiver does not update the size; neither does the sender update the size after it has already sent the BufD out. So for all practical purposes, the recipient is much better off receiving an immutable object.

As for @volatile, it ensures visibility: the writes are indeed hitting main memory instead of being cached in the thread-local cache, and the reads include a memory barrier to ensure that the value read is not stale. Without @volatile, the recipient thread is free to cache the value (it's not volatile, hence it should not be changed from the other thread, hence it's safe to cache) and re-use it instead of referring to main memory.

(SLS 11.2.1, JLS §8.3.1.4)

@volatile Marks a field which can change its value outside the control of the program; this is equivalent to the volatile modifier in Java.

and

A write to a volatile field (§8.3.1.4) happens-before every subsequent read of that field.

The problem here is that either you don't need all that, as the object is effectively immutable (and you're better off with a properly immutable one), or you want to see coordinated changes in buf and size on the recipient side. In the latter case, @volatile may be useful (while fragile): if the writer appends (not overwrites!) to buf, and then updates size, the write to buf happens-before the write to size, which in turn happens-before the reader can read the updated value from size (by volatility). Therefore, if the reader checks and re-checks size, and the writer only appends, you're probably fine. Having said that, I would not use this design.

As for the warning, it all compiles to JVM bytecode, and volatile is a JVM flag for fields. Traits cannot define a field; they only define methods, and it's up to the extending class to decide whether it'll be a proper variable or (a pair of) methods (SLS 4.2):

A variable declaration var x: T is equivalent to the declarations of both a getter function x and a setter function x_=:

def x: T
def x_= (y: T): Unit

A function cannot be @volatile, hence the warning.
unknown
d6155
train
Like I answered in your previous question, you should spend the time to read these two pages. They will help you get your answer much faster.

There's no error in my code.

If you're getting an error message, then there's an error in your code.

every time I open and close a form

What form? There is no form in your example.

it will be doubled in Server side. I close it so it will not be double IP in Checklistbox of the Server.

What server? What checklistbox? We don't know what you are referring to here. Without a minimal, complete and verifiable example, we can't help you very well.

That being said, it looks like you are closing your _clientSocket. Once you've closed a socket you must re-open it or create a new one before you can use it again. You cannot call BeginReceive after you've closed your socket.

I was able to reproduce your error by creating a complete, minimal and verifiable example. Here is the code:

public partial class Form1 : Form
{
    Socket _clientSocket;

    public Form1()
    {
        InitializeComponent();
    }

    const int buffSize = 1024;
    byte[] receivedBuf = new byte[buffSize];
    Socket listenerSock;
    Socket handlerSock;

    private void Form1_Load(object sender, EventArgs e)
    {
        IPHostEntry ipHostInfo = Dns.GetHostEntry(Dns.GetHostName());
        IPAddress ipAddress = ipHostInfo.AddressList[0];
        IPEndPoint localEndPoint = new IPEndPoint(ipAddress, 11000);
        listenerSock = new Socket(ipAddress.AddressFamily, SocketType.Stream, ProtocolType.Tcp);
        listenerSock.Bind(localEndPoint);
        listenerSock.Listen(10);
        listenerSock.BeginAccept(ServerAcceptAsync, null);

        _clientSocket = new Socket(ipAddress.AddressFamily, SocketType.Stream, ProtocolType.Tcp);
        _clientSocket.Connect(localEndPoint);
        _clientSocket.BeginReceive(receivedBuf, 0, buffSize, SocketFlags.None, ReceiveData, null);
    }

    private void ServerAcceptAsync(IAsyncResult ar)
    {
        handlerSock = listenerSock.EndAccept(ar);
    }

    private void ReceiveData(IAsyncResult ar)
    {
        //try
        //{
        Debug.WriteLine("received data");
        int received = _clientSocket.EndReceive(ar);
        if (received == 0)
        {
            return;
        }
        Array.Resize(ref receivedBuf, received);
        string text = Encoding.Default.GetString(receivedBuf);
        Debug.WriteLine(text);
        if (text == "Server: -O")
        {
            Thread NT = new Thread(() =>
            {
                this.BeginInvoke((Action)delegate ()
                {
                    textBox1.Text = "Guest";
                    this.Hide();
                    _clientSocket.Close();
                    //Usertimer us = new Usertimer(textBox1.Text);
                    //us.Show();
                });
            });
            NT.Start();
        }
        Array.Resize(ref receivedBuf, _clientSocket.ReceiveBufferSize);
        //AppendtoTextBox(text);
        _clientSocket.BeginReceive(receivedBuf, 0, receivedBuf.Length, SocketFlags.None, new AsyncCallback(ReceiveData), null);
        //}
        //catch (Exception ex)
        //{
        //    MessageBox.Show(ex.Message, Application.ProductName, MessageBoxButtons.OK, MessageBoxIcon.Error);
        //}
    }

    private void button1_Click(object sender, EventArgs e)
    {
        var message = Encoding.UTF8.GetBytes("Server: -O");
        handlerSock.Send(message);
    }
}

I commented out the code that was not necessary to reproduce. As expected, the problem is that you call ReceiveAsync after you call _clientSock.Close(). You can't do that. If you close the socket, you should not execute any more code. Here is an example of how to fix this:

if (text == "Server: -O")
{
    Thread NT = new Thread(() =>
    {
        this.BeginInvoke((Action)delegate ()
        {
            textBox1.Text = "Guest";
            this.Hide();
            _clientSocket.Close();
            //Usertimer us = new Usertimer(textBox1.Text);
            //us.Show();
        });
    });
    NT.Start();
}
else
{
    Array.Resize(ref receivedBuf, _clientSocket.ReceiveBufferSize);
    //AppendtoTextBox(text);
    _clientSocket.BeginReceive(receivedBuf, 0, receivedBuf.Length, SocketFlags.None, new AsyncCallback(ReceiveData), null);
}
unknown
d6156
train
Your a element is empty. Add this to your CSS:

.social-icons a {
    display: block;
    height: 100%;
}
unknown
d6157
train
Try configuring your mapper like so:

mapper.setDateFormat(new SimpleDateFormat("dd-MM-yyyy+hh:mm"));

That should work, but if you want more control you can use the @JsonFormat annotation:

public class Applicant {

    @XmlElement(required = true)
    @XmlSchemaType(name = "date")
    @JsonFormat(shape = JsonFormat.Shape.STRING, pattern = "dd-MM-yyyy+hh:mm")
    protected XMLGregorianCalendar dateOfBirth;
}

For even more control: https://www.baeldung.com/jackson-annotations

Source: https://github.com/FasterXML/jackson-docs
unknown
d6158
train
Solution was to convert the RGB to HSL as suggested by Herbert. The function for converting to a human-readable name still needs a little tweaking / finishing off, but here it is:

function hslToHuman($h, $s, $l)
{
    $colors = array();

    // Gray
    if ($s <= 10 && (9 <= $l && $l <= 90)) {
        $colors[] = "gray";
    }

    $l_var = $s / 16;

    // White
    $white_limit = 93;
    if (($white_limit + $l_var) <= $l && $l <= 100) {
        $colors[] = "white";
    }

    // Black
    $black_limit = 9;
    if (0 <= $l && $l <= ($black_limit - $l_var)) {
        $colors[] = "black";
    }

    // If we have colorless colors stop here
    if (sizeof($colors) > 0) {
        return $colors;
    }

    // Red
    if ($h <= 8 || $h >= 346) {
        $colors[] = "red";
    }

    // Orange && Brown
    // TODO

    // Yellow
    if (40 <= $h && $h <= 65) {
        $colors[] = "yellow";
    }

    // Green
    if (65 <= $h && $h <= 170) {
        $colors[] = "green";
    }

    // Blue
    if (165 <= $h && $h <= 260) {
        $colors[] = "blue";
    }

    // Pink && Purple
    // TODO

    return $colors;
}

A: Alright, so you've got a graphics library; there must be an average thingy in there, so you average your picture, take any pixel and tadaam, you're done? And the simplest solution found on here is: resize to 1x1px, get colorat: Get image color

After that it's pretty easy: find somewhere a detailed list of rgb-to-human-readable names (for example HTML colors), encode that as an array and use it in your script -> round() your r,g,b vals to the precision of your data and retrieve the color. You should determine what color granularity you want and go from there -> find your set of named colors (I think all 8-bit colors have a name somewhere) and then reduce your rgb information to that - either by reducing color information of the image before reading it (faster) or by reducing color information at read time (more flexible in terms of color list).

Basic example of some rgb-to-human-readable resources: http://www.w3schools.com/html/html_colornames.asp

Others:
http://chir.ag/projects/name-that-color/#2B7C97
http://r0k.us/graphics/SIHwheel.html
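The same HSL-thresholding idea is easy to prototype in other languages too. Below is a rough Python sketch using the standard colorsys module; the helper name rgb_to_name and the exact thresholds are illustrative assumptions (simplified from the PHP version above, and returning a single name instead of a list):

```python
import colorsys

def rgb_to_name(r, g, b):
    """Very rough RGB -> human colour name via HSL thresholds (illustrative)."""
    # colorsys works on 0..1 floats and returns (h, l, s)
    h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    h, s, l = h * 360, s * 100, l * 100
    if l >= 93:
        return "white"
    if l <= 9:
        return "black"
    if s <= 10:
        return "gray"
    if h <= 8 or h >= 346:
        return "red"
    if 40 <= h <= 65:
        return "yellow"
    if 65 < h <= 170:
        return "green"
    if 170 < h <= 260:
        return "blue"
    return "other"

print(rgb_to_name(255, 0, 0))  # red
```

Note that colorsys returns hue/lightness/saturation in that (h, l, s) order, which is an easy thing to trip over when porting threshold tables like the one above.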
unknown
d6159
train
HyperTreeList inherits from GenericTreeItem:

* http://wxpython.org/Phoenix/docs/html/lib.agw.customtreectrl.GenericTreeItem.html#lib.agw.customtreectrl.GenericTreeItem

It would appear that you can use its Check() method to toggle whether a tree item is checked or not.

A: Just been struggling with this also; here is my solution:

import wx.lib.agw.hypertreelist as HTL

self.tree_list = self.createTree()
self.tree_list.AddColumn("File")
self.tree_list.AddColumn("Size")
#self.tree_list.AssignImageList(some image_list)
self.root = self.tree_list.AddRoot("abc", ct_type=1)
for my_text in ['def', 'ghi', 'jkl']:
    node = self.tree_list.AppendItem(self.root, testware_path, ct_type=1)
    self.tree_list.SetItemText(node, text=my_text, column=1)
    self.tree_list.SetItemText(node, text=my_text, column=2)
    if my_text.startswith('d'):
        node.Check()
unknown
d6160
train
Those kinds of templates are really hard to find unless you pay for them. Just have a look at this: http://mobile.smashingmagazine.com/2010/07/19/how-to-use-css3-media-queries-to-create-a-mobile-version-of-your-website/
unknown
d6161
train
I think this error is because the dependencies in your second file (version 3.6.0) aren't the same as your Android Studio version (version 3.6.1). Try changing your dependencies to:

dependencies {
    classpath 'com.android.tools.build:gradle:3.6.1'
}
unknown
d6162
train
Please try this code.

func play() {
    if let data = NSData(contentsOfURL: savePath) {
        do {
            try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback, withOptions: .AllowBluetooth)
            try AVAudioSession.sharedInstance().setActive(true)
            audioPlayer = try AVAudioPlayer(data: data, fileTypeHint: AVFileTypeAppleM4A)
            audioPlayer!.prepareToPlay()
            audioPlayer!.play()
        } catch let error as NSError {
            print("Unresolved error \(error.debugDescription)")
        }
    }
}

A: My example. I wrote it a very long time ago, so the code is horrible, but it works:

class PlayMusicVC: UIViewController, AVAudioPlayerDelegate, ADBannerViewDelegate {

    // MARK: - override functions
    override func viewDidLoad() {
        super.viewDidLoad()
        self.tabBarController?.tabBar.hidden = true
        mpVolumeView()
        play()
        timer = NSTimer.scheduledTimerWithTimeInterval(1.0, target: self, selector: "timeForLabels", userInfo: nil, repeats: true)
        bannerView.delegate = self
        bannerView.hidden = true
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
    }

    override func viewDidAppear(animated: Bool) {
        super.viewDidAppear(animated)
        self.tabBarController?.tabBar.hidden = true
        self.becomeFirstResponder()
        UIApplication.sharedApplication().beginReceivingRemoteControlEvents()
    }

    override func viewDidDisappear(animated: Bool) {
        super.viewDidDisappear(animated)
        // timer.invalidate()
        self.resignFirstResponder()
        UIApplication.sharedApplication().endReceivingRemoteControlEvents()
    }

    override func canBecomeFirstResponder() -> Bool {
        return true
    }

    override func remoteControlReceivedWithEvent(event: UIEvent?) {
        if event!.type == UIEventType.RemoteControl {
            switch event!.subtype {
            case UIEventSubtype.RemoteControlPlay:
                play()
            case UIEventSubtype.RemoteControlPause:
                pause()
            case UIEventSubtype.RemoteControlNextTrack:
                next()
            case UIEventSubtype.RemoteControlPreviousTrack:
                previous()
            default:
                break
            }
        }
    }

    // MARK: - var and let
    var timer: NSTimer!
    var fileManager = NSFileManager.defaultManager()
    // var (UIApplication.sharedApplication().delegate as! AppDelegate).mainAudioPlayer = AVAudioPlayer()
    var nameSongForLabel: String!
    var artistSongForLabel: String!
    var albumSongForLabel: String!
    var dataImageForImageView: NSData!

    // data from table
    var currentIndex = Int()
    var arrayOfSongs = [String]()

    // MARK: - IBOutlet weak
    @IBOutlet weak var nameSongLabel: UILabel!
    @IBOutlet weak var imageOfArtwork: UIImageView!
    @IBOutlet weak var durationView: UIView!
    @IBOutlet weak var switchView: UIView!
    @IBOutlet weak var volumeView: UIView!
    // button
    @IBOutlet weak var playPauseButton: UIButton!
    // left and right labels
    @IBOutlet weak var leftLabelTime: UILabel!
    @IBOutlet weak var rightLabelTime: UILabel!
    @IBOutlet weak var sliderDuration: UISlider!

    // MARK: - ADBanner
    @IBOutlet weak var bannerView: ADBannerView!

    // MARK: - IBAction func and switch functions
    @IBAction func previousTrack(sender: UIBarButtonItem) {
        previous()
    }

    var playingSong = true

    @IBAction func playTrack(sender: UIBarButtonItem) {
        if playingSong == false /* true*/ {
            play()
            playingSong = true
            controlCenter()
        } else {
            pause()
            playingSong = false
            controlCenter()
        }
    }

    @IBAction func nextTrack(sender: UIBarButtonItem) {
        next()
    }

    @IBAction func sliderD(sender: UISlider) {
        (UIApplication.sharedApplication().delegate as! AppDelegate).mainAudioPlayer.currentTime = NSTimeInterval(sender.value)
        controlCenter()
    }

    func previous() {
        maximumCount = false
        // var allDigit = arrayOfSongs.count-1
        if currentIndex > 0 {
            newIndex = currentIndex-- - 1
        }
        play()
        controlCenter()
    }

    // MARK: - data for control
    var titleSongForControl: String!
    var titleArtistForControl: String!
    var currentPause: NSTimeInterval!
    var imageForControlCenter: UIImage!
    var maximumCount = false

    func play() {
        playPauseButton.setImage(UIImage(named: "Pause32.png"), forState: UIControlState.Normal)
        var currentSong: String!
        if newIndex == nil {
            currentSong = arrayOfSongs[currentIndex]
            if currentIndex == arrayOfSongs.endIndex-1 {
                maximumCount = true
            }
        } else {
            currentSong = arrayOfSongs[newIndex]
            if newIndex == arrayOfSongs.endIndex-1 {
                maximumCount = true
            }
            newIndex = nil
        }
        let directoryFolder = fileManager.URLsForDirectory(NSSearchPathDirectory.DocumentDirectory, inDomains: NSSearchPathDomainMask.UserDomainMask)
        var superURL: NSURL!
        let url: NSURL = directoryFolder.first!
        superURL = url.URLByAppendingPathComponent(currentSong)
        let playerItem = AVPlayerItem(URL: superURL)
        let commonMetaData = playerItem.asset.commonMetadata
        for item in commonMetaData {
            if item.commonKey == "title" {
                nameSongForLabel = item.stringValue
            }
            if item.commonKey == "artist" {
                artistSongForLabel = item.stringValue
            }
            if item.commonKey == "album" {
                albumSongForLabel = item.stringValue
            }
            if item.commonKey == "artwork" {
                dataImageForImageView = item.dataValue
            }
        }
        titleSongForControl = nameSongForLabel
        titleArtistForControl = artistSongForLabel
        nameSongLabel.text = "\(artistSongForLabel) - \(nameSongForLabel)"
        if dataImageForImageView != nil {
            imageOfArtwork.image = UIImage(data: dataImageForImageView)
            imageForControlCenter = UIImage(data: dataImageForImageView)
        } else {
            imageOfArtwork.image = UIImage(named: "Notes100.png")
        }
        (UIApplication.sharedApplication().delegate as! AppDelegate).mainAudioPlayer = try? AVAudioPlayer(contentsOfURL: superURL)
        do {
            try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback)
        } catch _ {
        }
        do {
            try AVAudioSession.sharedInstance().setActive(true)
        } catch _ {
        }
        if currentPause == nil {
            (UIApplication.sharedApplication().delegate as! AppDelegate).mainAudioPlayer.play()
            controlCenter()
        } else {
            (UIApplication.sharedApplication().delegate as! AppDelegate).mainAudioPlayer.currentTime = currentPause
            (UIApplication.sharedApplication().delegate as! AppDelegate).mainAudioPlayer.play()
            currentPause = nil
        }
    }

    func timeForLabels() {
        let timeForRightLabel: NSTimeInterval = (UIApplication.sharedApplication().delegate as! AppDelegate).mainAudioPlayer.duration - (UIApplication.sharedApplication().delegate as! AppDelegate).mainAudioPlayer.currentTime
        let timeForLeftLabel = (UIApplication.sharedApplication().delegate as! AppDelegate).mainAudioPlayer.currentTime
        let calendar: NSCalendarUnit = [NSCalendarUnit.Minute, NSCalendarUnit.Second]
        // var time: NSTimeInterval = 0
        let dateFormatter = NSDateComponentsFormatter()
        dateFormatter.unitsStyle = NSDateComponentsFormatterUnitsStyle.Positional
        dateFormatter.zeroFormattingBehavior = NSDateComponentsFormatterZeroFormattingBehavior.Pad
        dateFormatter.allowedUnits = calendar
        let timeRight = dateFormatter.stringFromTimeInterval(timeForRightLabel)!
        rightLabelTime.text = "-\(timeRight)"
        let timeLeft = dateFormatter.stringFromTimeInterval(timeForLeftLabel)!
        leftLabelTime.text = timeLeft
        sliderDuration.minimumValue = 0.0
        sliderDuration.value = Float((UIApplication.sharedApplication().delegate as! AppDelegate).mainAudioPlayer.currentTime)
        sliderDuration.maximumValue = Float((UIApplication.sharedApplication().delegate as! AppDelegate).mainAudioPlayer.duration)
        // auto next sound
        if rightLabelTime.text == "-0:00" && maximumCount == false {
            next()
        }
        //
        if (UIApplication.sharedApplication().delegate as! AppDelegate).mainAudioPlayer.playing == false {
            playPauseButton.setImage(UIImage(named: "Play32.png"), forState: UIControlState.Normal)
            playingSong = false
        }
    }

    func pause() {
        playPauseButton.setImage(UIImage(named: "Play32.png"), forState: UIControlState.Normal)
        do {
            try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback)
        } catch _ {
        }
        do {
            try AVAudioSession.sharedInstance().setActive(true)
        } catch _ {
        }
        (UIApplication.sharedApplication().delegate as! AppDelegate).mainAudioPlayer.pause()
        currentPause = (UIApplication.sharedApplication().delegate as! AppDelegate).mainAudioPlayer.currentTime
    }

    var newIndex: Int!
    var newSong: String!

    func next() {
        let allDigit = arrayOfSongs.count-1
        if arrayOfSongs.count > 1 {
            if currentIndex < allDigit {
                if newIndex == nil {
                    newIndex = currentIndex++ + 1
                } else {
                    newIndex = currentIndex++
                }
                play()
                controlCenter()
            } else if currentIndex == arrayOfSongs.endIndex {
                (UIApplication.sharedApplication().delegate as! AppDelegate).mainAudioPlayer.stop()
            }
        }
    }

    func controlCenter() {
        let mpPlaysCenter = MPNowPlayingInfoCenter.defaultCenter()
        mpPlaysCenter.nowPlayingInfo = [MPMediaItemPropertyArtist: titleArtistForControl,
                                        MPMediaItemPropertyTitle: titleSongForControl,
                                        MPMediaItemPropertyPlaybackDuration: (UIApplication.sharedApplication().delegate as! AppDelegate).mainAudioPlayer.duration,
                                        MPMediaItemPropertyPlayCount: (UIApplication.sharedApplication().delegate as! AppDelegate).mainAudioPlayer.currentTime,
                                        MPNowPlayingInfoPropertyElapsedPlaybackTime: (UIApplication.sharedApplication().delegate as! AppDelegate).mainAudioPlayer.currentTime]
    }

    // MARK: - swipe functions
    @IBAction func leftSwipe(sender: UISwipeGestureRecognizer) {
        sender.direction = UISwipeGestureRecognizerDirection.Left
        next()
    }

    @IBAction func rightSwipe(sender: UISwipeGestureRecognizer) {
        sender.direction = UISwipeGestureRecognizerDirection.Right
        previous()
    }

    // MARK: - functions
    func mpVolumeView() {
        let mpView = MPVolumeView(frame: CGRectMake(8, 15, self.view.bounds.size.width-16, self.volumeView.bounds.size.height))
        volumeView.addSubview(mpView)
    }

    // MARK: - banner view function
    func bannerView(banner: ADBannerView!, didFailToReceiveAdWithError error: NSError!) {
        NSLog("Banner error is %@", error)
    }

    func bannerViewDidLoadAd(banner: ADBannerView!) {
        bannerView.hidden = false
    }

    func bannerViewActionShouldBegin(banner: ADBannerView!, willLeaveApplication willLeave: Bool) -> Bool {
        return true
    }
}
unknown
d6163
train
You don't need table 2 at all. You can derive everything table 2 would give you by querying table 1.

* Number purchased of any item:

Select sum(qty)
from table_1
where item_id = [id of your item]
  and user_id = [id of your user];

* Last purchase date of a given item for a given user:

select max(date_purchase)
from table_1
where item_id = [id of your item]
  and user_id = [id of your user];

Also, remember to add indexes to table 1 so that your querying is as fast as possible.
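As a quick illustration of those two queries, here is a hypothetical sketch using Python's built-in sqlite3 (the column names user_id, item_id, qty, date_purchase are assumptions mirroring table 1; the sample rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE table_1 (
    user_id INTEGER, item_id INTEGER, qty INTEGER, date_purchase TEXT)""")
rows = [
    (1, 42, 2, "2013-01-05"),
    (1, 42, 1, "2013-02-10"),
    (1, 7,  5, "2013-01-20"),
]
conn.executemany("INSERT INTO table_1 VALUES (?, ?, ?, ?)", rows)

# total quantity of item 42 bought by user 1
total = conn.execute(
    "SELECT SUM(qty) FROM table_1 WHERE item_id = ? AND user_id = ?",
    (42, 1)).fetchone()[0]

# last purchase date of item 42 for user 1
last = conn.execute(
    "SELECT MAX(date_purchase) FROM table_1 WHERE item_id = ? AND user_id = ?",
    (42, 1)).fetchone()[0]

print(total, last)  # 3 2013-02-10
```

Both aggregates come straight out of the purchase rows, which is the point: table 2 would only cache values you can compute on demand.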
unknown
d6164
train
Always store dates in UTC and, when displaying them, calculate the local time based on the user's time zone (which you will have to ask for at some point and store).

A: DateTime2 (aka the long date type in MS SQL) is only going to be useful if you need that level of granularity or its 10,000-year range. DateTime seems to be the norm. If you only need the date portion and can ignore the time, use the new Date datatype in SQL 2008. Ask your clients what granularity is necessary for the project.

UTC is the best way to store the date, since your server or your client can be anywhere in the world (assuming a web-based deployment). Also, if you move your server or have a new co-located server, you won't need to adjust for time zone since everything is already running on UTC. You should only convert from UTC to local time in the presentation layer.

If your client is using a web browser, you can get the timezone offset from JavaScript. Then, if you need that value on the server side, either store it in a cookie or in a hidden HTML field for easy access. I've used a tweaked version of this JavaScript sample from CodeProject to do the same.
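The store-in-UTC / convert-at-display pattern looks like this in Python (a language-neutral sketch; the fixed -5:00 offset is a stand-in for whatever zone you have stored for the user):

```python
from datetime import datetime, timezone, timedelta

# store timestamps in UTC
stored = datetime(2020, 1, 1, 12, 0, tzinfo=timezone.utc)

# convert to the user's zone only in the presentation layer
user_zone = timezone(timedelta(hours=-5))  # e.g. US Eastern standard time
local = stored.astimezone(user_zone)

print(local.isoformat())  # 2020-01-01T07:00:00-05:00
```

The stored value never changes; only the rendering does, so comparisons and sorting on the server stay trivially correct.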
unknown
d6165
train
L2 cache helps in some ways, but it does not obviate the need for coalesced access of global memory. In a nutshell, coalesced access means that for a given read (or write) instruction, individual threads in a warp are reading (or writing) adjacent, contiguous locations in global memory, preferably that are aligned as a group on a 128-byte boundary. This will result in the most effective utilization of the available memory bandwidth.

In practice this is often not difficult to accomplish. For example:

int idx = threadIdx.x + (blockDim.x * blockIdx.x);
int mylocal = global_array[idx];

will give coalesced (read) access across all the threads in a warp, assuming global_array is allocated in an ordinary fashion using cudaMalloc in global memory. This type of access makes 100% usage of the available memory bandwidth.

A key takeaway is that memory transactions ordinarily occur in 128-byte blocks, which happens to be the size of a cache line. If you request even one of the bytes in a block, the entire block will be read (and stored in L2, normally). If you later read other data from that block, it will normally be serviced from L2, unless it has been evicted by other memory activity. This means that the following sequence:

int mylocal1 = global_array[0];
int mylocal2 = global_array[1];
int mylocal3 = global_array[31];

would all typically be serviced from a single 128-byte block. The first read for mylocal1 will trigger the 128-byte read. The second read for mylocal2 would normally be serviced from the cached value (in L2 or L1), not by triggering another read from memory. However, if the algorithm can be suitably modified, it's better to read all your data contiguously from multiple threads, as in the first example. This may be just a matter of clever organization of data, for example using Structures of Arrays rather than Arrays of Structures.

In many respects, this is similar to CPU cache behavior. The concept of a cache line is similar, along with the behavior of servicing requests from the cache.

Fermi L1 and L2 can support write-back and write-through. L1 is available on a per-SM basis, and is configurably split with shared memory to be either 16KB L1 (and 48KB SM) or 48KB L1 (and 16KB SM). L2 is unified across the device and is 768KB.

Some advice I would offer is to not assume that the L2 cache just fixes sloppy memory accesses. The GPU caches are much smaller than equivalent caches on CPUs, so it's easier to get into trouble there. A general piece of advice is simply to code as if the caches were not there. Rather than CPU-oriented strategies like cache-blocking, it's usually better to focus your coding effort on generating coalesced accesses and then possibly make use of shared memory in some specific cases. Then, for the inevitable cases where we can't make perfect memory accesses in all situations, we let the caches provide their benefit.

You can get more in-depth guidance by looking at some of the available NVIDIA webinars. For example, the Global Memory Usage & Strategy webinar (and slides) or the CUDA Shared Memory & Cache webinar would be instructive for this topic. You may also want to read the Device Memory Access section of the CUDA C Programming Guide.
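The cost of uncoalesced access can be modeled without a GPU. This hypothetical Python sketch counts how many 128-byte transactions one 32-thread warp of 4-byte loads touches, for a coalesced versus a strided pattern (the addresses are made up; the 128-byte segment size matches the transaction size described above):

```python
def transactions(addresses, segment=128):
    """Count distinct 128-byte segments touched by one warp's loads."""
    return len({addr // segment for addr in addresses})

WARP, WORD = 32, 4  # 32 threads, 4-byte loads

# coalesced: thread i reads element i -> addresses 0, 4, 8, ..., 124
coalesced = [i * WORD for i in range(WARP)]

# strided: thread i reads element 32*i -> addresses 0, 128, 256, ...
strided = [i * 32 * WORD for i in range(WARP)]

print(transactions(coalesced), transactions(strided))  # 1 32
```

One transaction versus thirty-two for the same 128 bytes of useful data: that 32x difference in consumed bandwidth is exactly what coalescing buys you.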
unknown
d6166
train
You can just move the -8 into the upper limit, and since you include (<=) the upper limit you shouldn't be using until, but the regular range expression with two dots. So it becomes:

for (i in 0..table.size-8) {
    for (j in 0..table[i].size-8) {
    }
}

(I imagine you would also want to replace the magical number eight with a variable with a meaningful name.)
unknown
d6167
train
You could use indices inside the ForEach and still use $group, accessing each business via its index, like this:

List {
    ForEach(group.businesses.indices) { index in
        TextField("", text: $group.businesses[index].address)
    }
}

A: An alternative solution may be to use zip (or enumerated) to have both the businesses and their indices:

struct TestView: View {
    @Binding var group: Group

    var body: some View {
        TextField("", text: $group.name)
        List {
            let items = Array(zip(group.businesses.indices, group.businesses))
            ForEach(items, id: \.1.id) { index, business in
                if business.enabled {
                    Text(business.name)
                    TextField("", text: $group.businesses[index].address)
                } else {
                    Text("\(business.name) is disabled")
                }
            }
        }
    }
}
unknown
d6168
train
Here is a small function that takes a vector x and a desired rho, and returns a vector y such that cor(y, x) == rho:

f <- function(x, rho) {
  orth = lm(runif(length(x)) ~ x)$residuals
  rho * sd(orth) * x + orth * sd(x) * sqrt(1 - rho^2)
}

Now we apply the function to column a to create a column c such that cor(a, c) == 0.7:

d %>% mutate(c = f(a, .7))

A: The second is actually easier (for me at least): just make z-scores out of both a and b and add or average them. The result will correlate with both a and b at about 0.7:

d <- d %>% mutate(d = ((a - mean(a)) / sd(a)) + ((b - mean(b)) / sd(b)))

A: Using the Iman and Conover method (1982) developed in the mc2d package (a rank correlation structure):

library(mc2d)
cc <- rnorm(n, 50, 20)
cc <- cornode(cbind(d$a, cc), target = 0.7)[, "cc"]
d$c <- cc
cor(d)

For more than one variable, you have to build a matrix of correlations:

## Target
(corrTarget <- matrix(c(1, 0.7, 0.7,
                        0.7, 1, 0.7,
                        0.7, 0.7, 1), ncol = 3))
dd <- rnorm(n, 50, 20)
dd <- cornode(cbind(a = d$a, b = d$b, dd), target = corrTarget)
cor(dd)
d$b <- dd[, "b"]
d$d <- dd[, "dd"]
cor(d)

The final correlation structure should be checked, because it is not always possible to build the target correlation structure.
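The z-score trick is language-agnostic. Here is a pure-Python sanity check (an illustrative sketch, not part of the R answers) that the sum of two independent standardized variables correlates with each of them at about 1/sqrt(2) ≈ 0.707:

```python
import random
import math

def zscores(xs):
    """Standardize a list to mean 0, sample sd 1."""
    m = sum(xs) / len(xs)
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))
    return [(x - m) / s for x in xs]

def pearson(xs, ys):
    """Sample Pearson correlation via z-scores."""
    zx, zy = zscores(xs), zscores(ys)
    return sum(x * y for x, y in zip(zx, zy)) / (len(xs) - 1)

random.seed(0)
n = 20000
a = [random.gauss(0, 1) for _ in range(n)]
b = [random.gauss(0, 1) for _ in range(n)]

# new variable: sum of the z-scores of a and b
d = [x + y for x, y in zip(zscores(a), zscores(b))]

print(pearson(d, a))  # close to 1/sqrt(2), i.e. about 0.707
```

Note the 0.707 only holds when a and b are (nearly) uncorrelated; if they are not, the resulting correlation shifts, which is why the mc2d approach with an explicit target matrix is safer for the multi-variable case.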
unknown
d6169
train
Assuming Me is the subform, and Orçamentos is the main form:

If Me.Produto = "" Then
    Me.Parent!Comando33.Visible = False
    Me.Parent!Comando47.Visible = False
    Me.Descrição.Visible = False
End If
unknown
d6170
train
You can use the toFixed(x) function, which lets you choose the number of digits after the decimal point. Note that it returns a string, not a number.

Source: https://www.w3schools.com/jsref/jsref_tofixed.asp

For example:

(42.4).toFixed(0) === "42"  // toFixed returns a string, so compare with "42", not 42
unknown
d6171
train
This pretrained VGG-16 model encodes all of the model parameters as tf.constant() ops. (See, for example, the calls to tf.constant() here.) As a result, the model parameters would not appear in tf.trainable_variables(), and the model is not mutable without substantial surgery: you would need to replace the constant nodes with tf.Variable objects that start with the same value in order to continue training. In general, when importing a graph for retraining, the tf.train.import_meta_graph() function should be used, as this function loads additional metadata (including the collections of variables). The tf.import_graph_def() function is lower level, and does not populate these collections.
unknown
d6172
train
Add this code in your viewDidLoad method in the view controller:

if ([self respondsToSelector:@selector(edgesForExtendedLayout)])
    self.edgesForExtendedLayout = UIRectEdgeNone; // iOS 7 specific
unknown
d6173
train
Please follow these steps to send mail from localhost on Ubuntu/Linux through Gmail.

You need to install msmtp on the Linux/Ubuntu server. Gmail requires a secure (TLS) connection, so you also need to install ca-certificates:

~$ sudo apt-get install msmtp ca-certificates

It will take a few seconds to install the msmtp package. Now you have to create the configuration file (msmtprc) using the gedit editor:

~$ sudo gedit /etc/msmtprc

Now copy and paste the following into gedit (the file you created with the above command):

defaults
tls on
tls_starttls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt

account default
host smtp.gmail.com
port 587
auth on
user [email protected]
password MY_GMAIL_PASSWORD
from [email protected]
logfile /var/log/msmtp.log

Don't forget to replace MY_GMAIL_ID with your Gmail id and MY_GMAIL_PASSWORD with your Gmail password in the lines above.

Now create msmtp.log:

~$ sudo touch /var/log/msmtp.log

Make the configuration file readable by anyone:

~$ sudo chmod 0644 /etc/msmtprc

Make the sendmail log file writable:

~$ sudo chmod 0777 /var/log/msmtp.log

Your configuration for Gmail's SMTP is now ready. Send one test email:

~$ echo -e "Subject: Test Mail\r\n\r\nThis is my first test email." | msmtp --debug --from=default -t [email protected]

Please check your Gmail inbox.

Now, if you want to send email with PHP from localhost, follow the instructions below.

Open and edit the php.ini file:

~$ sudo gedit /etc/php/7.0/apache2/php.ini

You have to set sendmail_path in your php.ini file. Check your msmtp path with:

~$ which msmtp

and you will get something like /usr/bin/msmtp. Search for sendmail_path in php.ini and edit it as below:

; For Unix only. You may supply arguments as well (default: "sendmail -t -i").
; http://php.net/sendmail-path
sendmail_path = /usr/bin/msmtp -t

Please check the third line carefully: there is no semicolon before sendmail_path. Now save and exit from gedit.

Now it's time to restart your Apache:

~$ sudo /etc/init.d/apache2 restart

Now create one PHP file with the mail function from http://in2.php.net/manual/en/function.mail.php. Do test and enjoy!!

A: This article explains exactly how to do what you want: https://www.howtoforge.com/tutorial/configure-postfix-to-use-gmail-as-a-mail-relay/
unknown
d6174
train
As it seems you're using ActiveSupport, there is a simple way to do this:

username.presence || firstname
unknown
d6175
train
Shouldn't you change your R.id.actionbar_btn to R.id.actionbar_home?

A: I think you are using the wrong id for the Button:

<Button
    android:id="@+id/actionbar_home"
    android:layout_width="33dp"
    android:layout_height="32dp"
    android:background="@drawable/ic_launcher" />

final Button button = (Button) findViewById(R.id.actionbar_btn);

"R.id.actionbar_btn" and android:id="@+id/actionbar_home" should be the same.
unknown
d6176
train
I don't think there is any problem with performance, however it's not clear to me why you would want to encapsulate Objective-C objects within a C++ object. One reason to keep C++ purely C++ is so it can interact with other C++ objects, which is no longer possible once you include Objective-C objects. In order to allow a C++ object, with an embedded Objective-C object, to be used by another C++ class (where it needs to "see" the header file) I guess you'd have to use void * or something.
unknown
d6177
train
You need to implement both of those classes.

The SipProvider class will connect to your endpoint (Asterisk, for example). Note that this class must be in a static context, because only one connection is allowed per client. You can create a SipProvider instance by calling sipStack.createSipProvider(listeningPoint) on a SipStack instance. After this, you will be able to create transactions and send requests to your endpoint.

The SipListener is the class that will process all responses from your server. This means that every request you send to the server (via SipProvider) will receive a response in SipListener. So you must have this listener to process all data returned by your endpoint.

Try to implement the code described in the Oracle article you cited. I started to develop based on that article, and it works very well!

A: Check the examples at the Reference Implementation https://java.net/projects/jsip/sources/svn/show/trunk/src/examples?rev=2279 to help you move forward faster.
unknown
d6178
train
According to the pgf texsystem example, you need to use the "pgf" backend (mpl.use("pgf")) and choose the font you want to use:

style = {
    "pgf.texsystem": "pdflatex",
    "text.usetex": True,
    "pgf.preamble": [
        r"\usepackage[utf8x]{inputenc}",
        r"\usepackage[T1]{fontenc}",
        r"\usepackage{cmbright}",
    ]
}

Alternatively, you may use a formatter which does not format the tick labels as LaTeX math (i.e. does not put them into dollar signs). One may adapt the default ScalarFormatter not to use LaTeX math by setting:

ax.xaxis.get_major_formatter()._usetex = False
ax.yaxis.get_major_formatter()._usetex = False

A: The problem is LaTeX related. You just need to load the extra cmbright package that enables sans-serif math fonts.

On Debian-like systems:

sudo apt install texlive-fonts-extra

On Fedora:

sudo dnf install texlive-cmbright

Then try your code with this style:

style = {
    "text.usetex": True,
    "font.family": "sans-serif",
    "text.latex.preamble": r"\usepackage{cmbright}"
}
unknown
d6179
train
I had exactly the same problem and solved it using the eval function: if (version_compare(PHP_VERSION, '5.3.0') >= 0) { eval(' function osort(&$array, $prop) { usort($array, function($a, $b) use ($prop) { return $a->$prop > $b->$prop ? 1 : -1; }); } '); } else { // something else... } A: Anonymous functions came with PHP 5.3.0. My first approach would be to replace them with one of the following: $my_print_r = "my_print_r"; $my_print_r(); or $my_print_r = create_function('$a','print_r($a);'); UPDATE: I would go with the first method... It gives me enough control for creating my own function versions, in a much easier way than create_function, regardless of the PHP version. I never had any problems with that approach. In fact I once built a whole multidimensional array of functions that was easily manageable and allowed adding/removing functions. That system was also used by other users, who were able to add their own functions too, in a way. A: The only thing I can think of is to make a build script which people will have to run to make it compatible with lower versions. So in the case of anonymous methods, the script will loop through all the PHP files, looking for anonymous methods: $f = function($a) use($out) { echo $a . $out; }; and replace them with create_function for < 5.3: $f = create_function('$a', ' global $out; echo $a . $out; '); A: use include if (o99_php_good() != true){ include 'new_func.php'; } new_func.php: $f = function() use ($out) { echo $out; };
unknown
d6180
train
__x does not have a special meaning. ENDIAN_LE16 is a macro that gives you a place to change endianness without changing your source code. Each build target can have a different version of gfpr.h specific to that target. You must be compiling for a little-endian machine, so ENDIAN_LE16 doesn't need to make any changes. It just leaves its argument (__x) unchanged. If you were compiling for a big-endian target, ENDIAN_LE16 would be defined to swap the bytes of its argument. Something like: #define ENDIAN_LE16(__x) ( (((__x) & 0xff) << 8) | (((__x) >> 8) & 0xff) ) That way, by changing which target's gfpr.h file you include, you get the right results without having to change your source code. Edit Per the file you're probably looking at, ENDIAN_BE32 invokes ENDIAN_RET32, which twiddles bits in a similar way to what I showed above.
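As a quick host-side sanity check, the byte swap that such a big-endian definition performs can be sketched outside of C, for example in Python (illustration only, not part of the kernel headers):

```python
def endian_swap16(x):
    # Exchange the two bytes of a 16-bit value, mirroring the
    # ((x & 0xff) << 8) | ((x >> 8) & 0xff) expression above.
    return ((x & 0xFF) << 8) | ((x >> 8) & 0xFF)

print(hex(endian_swap16(0x1234)))  # 0x3412
```

Swapping twice returns the original value, which is why the same definition works in both directions.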
unknown
d6181
train
This will split your array into 2 arrays: var articles = ["article1", "article2", "article3", "article4", "article5", "article6", "article7", "article8", "article9", "article10"]; var separatorIndex = articles.length & 0x1 ? (articles.length+1)/2 : articles.length/2; var firstChunk = articles.slice(0,separatorIndex); //["article1", "article2", "article3", "article4", "article5"] var secondChunk = articles.slice(separatorIndex,articles.length); //["article6", "article7", "article8", "article9", "article10"] Then you can use them where and/or how you want. Explanation The 2nd line finds an anchor index (the middle index of the division), by which the array should be divided into 2 chunks. The array can have an odd or even length, and those 2 situations must be distinguished. As it is impossible to divide an odd-length array into 2 equal parts, it must be divided in such a way that the 1st chunk has 1 element more or less than the 2nd chunk. Here, I have implemented the first case, that is, the 1st chunk has 1 element more than the 2nd one. Here are the examples of different situations: total length | 1st (length) | 2nd (length) | separatorIndex 10 0-4 (5) 5-9 (5) 5 11 0-5 (6) 6-10 (5) 6 12 0-5 (6) 6-11 (6) 6 In the table, the number-number syntax shows the start and end indexes in the array. The division is done by the .slice() function.
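The separator arithmetic above boils down to ceil(length / 2); for comparison, the same split sketched in Python (illustration only):

```python
def split_in_half(items):
    # As in the JavaScript above: for odd lengths the first chunk
    # receives the extra element, so the separator is (n + 1) // 2.
    separator = (len(items) + 1) // 2
    return items[:separator], items[separator:]

articles = ["article%d" % i for i in range(1, 11)]
first, second = split_in_half(articles)
print(first)   # first five articles
print(second)  # last five articles
```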
unknown
d6182
train
If I am understanding your comments correctly, you have wide data frames of 1 row. Assuming they have the same dimensions, you can just transpose and bind them, then do your t-test. t.globalshare = t(globalshare) t.localshare = t(localshare) combined = cbind(t.globalshare, t.localshare) t.test(combined, t.globalshare ~ t.localshare)
unknown
d6183
train
You can use ternary operators within the string to check each day against the threshold, and output the extra style instructions where needed, something like this: $table_rows[$rowId] .= '<tr> <td style="text-align:center"><b>'.$row['table_name'].'</b></td> <td style="text-align:center;'.($row["$date07"] < $row["threshold"] ? "background-color:red;" : "").'">'.$row["$date07"].'</td> <td style="text-align:center;'.($row["$date06"] < $row["threshold"] ? "background-color:red;" : "").'">'.$row["$date06"].'</td> <td style="text-align:center;'.($row["$date05"] < $row["threshold"] ? "background-color:red;" : "").'">'.$row["$date05"].'</td> <td style="text-align:center;'.($row["$date04"] < $row["threshold"] ? "background-color:red;" : "").'">'.$row["$date04"].'</td> <td style="text-align:center;'.($row["$date03"] < $row["threshold"] ? "background-color:red;" : "").'">'.$row["$date03"].'</td> <td style="text-align:center;'.($row["$date02"] < $row["threshold"] ? "background-color:red;" : "").'">'.$row["$date02"].'</td> <td style="text-align:center;'.($row["$date01"] < $row["threshold"] ? "background-color:red;" : "").'">'.$row["$date01"].'</td> </tr>'; Demo: https://3v4l.org/NV8vF
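The same pattern in a small Python sketch, to show the logic on its own — compare each value to the threshold and emit the inline style only when it is below (the function and names are hypothetical):

```python
def cell(value, threshold):
    # Equivalent of the PHP ternary: add the red background only
    # when the day's value falls below the threshold.
    extra = "background-color:red;" if value < threshold else ""
    return '<td style="text-align:center;%s">%s</td>' % (extra, value)

print(cell(3, 5))  # <td style="text-align:center;background-color:red;">3</td>
print(cell(9, 5))  # <td style="text-align:center;">9</td>
```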
unknown
d6184
train
You can use the following code to get the result you want. var BallsProjection = from blueball in bag from redball in bag where blueball.Contains("Blue") && redball.Contains("Red") && blueball.CompareTo("Blue1") !=0 select new { ball1 = blueball, ball2 = redball }; You can enhance this code to meet your requirements. I have not tested this code.
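The query is a cross join with a filter; the same shape can be sketched in Python with a comprehension (the sample data below is made up for illustration):

```python
bag = ["Blue1", "Blue2", "Red1", "Red2"]

# Pair every blue ball (except "Blue1") with every red ball,
# mirroring the two `from` clauses and the `where` filter above.
pairs = [(blue, red)
         for blue in bag
         for red in bag
         if "Blue" in blue and "Red" in red and blue != "Blue1"]

print(pairs)  # [('Blue2', 'Red1'), ('Blue2', 'Red2')]
```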
unknown
d6185
train
Delete the platforms/android folder and try to rebuild. That helped me a lot. (Visual Studio Tools for Apache Cordova) A: Delete all the apk files from platforms >> android >> build >> generated >> outputs >> apk and run the command cordova run android A: I removed the android platform and installed it again, then it worked. I wrote these lines in the command window: cordova platform remove android then cordova platform add android A: I found the answer myself; if someone faces the same issue, I hope my solution will work for them as well. * *Downgrade NodeJS to 0.10.36 *Upgrade to Android SDK 22 A: I have had this problem several times and it can usually be resolved with a clean and rebuild, as answered by many before me. But this time that would not fix it. I use my cordova app to build 2 separate apps that share the majority of the same codebase, driven off the config.xml. I could not build in the end because I had a space in my id: com.company AppName instead of: com.company.AppName If anyone is in their config as regularly as me, this could be your problem. I also have 3 versions of each app, Live / Demo / Test - these all have different ids, e.g. com.company.AppName.Test An easy mistake to make, but even easier to overlook. I spent loads of time rebuilding, checking plugins, versioning etc., where I should have checked my config. First stop next time! A: I had the same error code but a different issue Error: /Users/danieloram/desktop/CordovaProject/platforms/android/gradlew: Command failed with exit code 1 Error output: Exception in thread "main" java.lang.UnsupportedClassVersionError: com/android/dx/command/Main : Unsupported major.minor version 52.0 To resolve this issue I opened the Android SDK Manager, uninstalled the latest Android SDK build-tools that I had (24.0.3) and installed version 23.0.3 of the build-tools. My cordova app then proceeded to build successfully for android. A: In my case it was a file size restriction placed on the proxy server. 
The gradle zip file could not be downloaded due to this restriction. I was getting a 401 error while downloading the gradle zip file. If you are getting a 401 or 403 error in the log, make sure you are able to download those files manually. A: Faced the same problem. The problem lies in the required version not being installed. The hack is simple: go to platforms > platforms.json and edit platforms.json, modifying the version in front of android to the one which is installed on the system. A: I'm using Visual Studio 2015, and I've found that the first thing to do is look in the build output. I found this error reported there: Reading build config file: \build.json... SyntaxError: Unexpected token The solution for that was to remove the BOM from the build.json file. Then I hit a second problem - with this message in the build output: FAILURE: Build failed with an exception. * What went wrong: A problem was found with the configuration of task ':packageRelease'. File 'C:\Users\Colin\etc' specified for property 'signingConfig.storeFile' is not a file. Easily solved by putting the correct filename into the keystore property
unknown
d6186
train
Instead of mapping lat and long as float, you should use the geo_point mapping.
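For reference, a minimal geo_point mapping body can be sketched as a Python dict (the index and field names here are placeholders, not from the question):

```python
# Mapping fragment: declare the field as geo_point instead of two floats.
mapping = {
    "properties": {
        "location": {"type": "geo_point"}
    }
}

# A document then indexes both coordinates together:
doc = {"location": {"lat": 41.12, "lon": -71.34}}
```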
unknown
d6187
train
Based on my investigation, this problem was caused by unbalanced job distribution. That's why some PCs were idle while the others were still busy. It is necessary to design a good algorithm to distribute the jobs evenly in Spark.
unknown
d6188
train
Inside your functions, a is a copy of a pointer to pointer. Here you assigned something else to that copy: a = calloc((*n),sizeof(int)); That does not have any effect outside of the function. Outside of your functions, a is a pointer to int, and it would make sense to write a pointer to int there. You could do that (via the copy used within your functions) for example like *a = calloc((*n),sizeof(int)); For that to take effect outside of your functions, you should call with the address of the pointer to int: getData(&a,&n); With this int * getData(int **a,int *n) you could assign the returned pointer to int to a outside of your function. But not with this call: getData(a,&n);. And for returning a pointer to int, this line is not appropriate: return a; because it returns a pointer to pointer to int, which does happen to contain the right value (the freshly allocated pointer to int), but is still incorrect by type. ... Note: You are relying very much on your fscanf()s working, so the n you are using for size might be uninitialised... And you should not use while(!feof(f)), Why is “while ( !feof (file) )” always wrong? And d seems unused inside of your functions... So please heed the recommendation in the comment by M.M. about reading compiler warnings.
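The underlying pitfall — assigning to a local copy of a parameter — is easy to demonstrate outside C as well; here it is as a Python analogy (rebinding a name instead of copying a pointer, illustration only):

```python
def rebind(a):
    # Rebinding the local name has no effect in the caller -- the same
    # reason `a = calloc(...)` inside the C function is lost: the
    # function only received a copy of the pointer/reference.
    a = [0, 0, 0]
    return a

x = None
rebind(x)
print(x)  # None -- the caller's variable is untouched
```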
unknown
d6189
train
Save a handle to the write end of the stdout pipe when creating the child process. You can then write a character to this to unblock the thread that has called ReadFile (that is reading from the read end of the stdout pipe). In order not to interpret this as data, create an Event (CreateEvent) that is set (SetEvent) in the thread that writes the dummy character, and is checked after ReadFile returns. A bit messy, but seems to work. /* Init */ stdout_closed_event = CreateEvent(NULL, TRUE, FALSE, NULL); /* Read thread */ read_result = ReadFile(stdout_read, data, buf_len, &bytes_read, NULL); if (!read_result) ret = -1; else ret = bytes_read; if ((bytes_read > 0) && (WAIT_OBJECT_0 == WaitForSingleObject(stdout_closed_event, 0))) { if (data[bytes_read-1] == eot) { if (bytes_read > 1) { /* Discard eot character, but return the rest of the read data that should be valid. */ ret--; } else { /* No data. */ ret = -1; } } } /* Cancel thread */ HMODULE mod = LoadLibrary (L"Kernel32.dll"); BOOL WINAPI (*cancel_io_ex) (HANDLE, LPOVERLAPPED) = NULL; if (mod != NULL) { cancel_io_ex = (BOOL WINAPI (*) (HANDLE, LPOVERLAPPED)) GetProcAddress (mod, "CancelIoEx"); } if (cancel_io_ex != NULL) { cancel_io_ex(stdout_write_pipe, NULL); } else { SetEvent(stdout_closed_event); WriteFile(stdout_write_pipe, &eot, 1, &written, NULL); }
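The unblock-with-a-sentinel mechanism itself is not Win32-specific; here is a portable Python sketch of the idea with an anonymous pipe (an illustration of the mechanism only, not the Win32 code above):

```python
import os
import threading

EOT = b"\x04"          # sentinel byte, like the `eot` char above
read_fd, write_fd = os.pipe()
seen = {}

def reader():
    # Blocks like the ReadFile call until data or the sentinel arrives.
    data = os.read(read_fd, 64)
    seen["cancelled"] = data.endswith(EOT)

worker = threading.Thread(target=reader)
worker.start()
os.write(write_fd, EOT)   # "cancel": write the sentinel to unblock the reader
worker.join()
print(seen["cancelled"])  # True
```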
unknown
d6190
train
Run this command: php artisan key:generate and then clear the config cache using php artisan config:cache Hope this will work!
unknown
d6191
train
When you register a COM dll using regsvr32, CLSIDs are defined inside the dll. In a typical ATL COM project, these entries are specified in *.rgs files, and the registry is updated based on their content. Of course, this is not the only way to do it, and other toolsets and technologies do it in different ways. These CLSIDs are actually defined in the .idl file of the project (again, this is valid for ATL projects), and the .rgs file must correctly match those values. The .idl file itself contains IDL definitions of all COM types in the library, and is used by the MIDL compiler to generate the type library.
unknown
d6192
train
tl;dr use oma as an argument within your pairs() call. As usual, it's all in the documentation, albeit somewhat obscurely. ?pairs states: Also, graphical parameters can be given as can arguments to ‘plot’ such as ‘main’. ‘par("oma")’ will be set appropriately unless specified. This means that pairs() tries to do some clever stuff internally to set the outer margins (based on whether a main title is requested); it will ignore external par("oma") settings, only paying attention to internal settings. The "offending" line within the code of stats:::pairs.default is: if (is.null(oma)) oma <- c(4, 4, if (!is.null(main)) 6 else 4, 4) Thus setting oma within the call does work: par(bg="lightblue") ## so we can see the plot region ... z <- matrix(rnorm(300),ncol=3) pairs(z,oma=c(0,0,0,0))
unknown
d6193
train
You asked a similar question and removed it after getting the answer: unset array indexs from value of another array? $firstArray = array( 0 => '@@code' ,1 => '@@label' ,2 => '@@name' ,3 => '@@age' ); $keysArray = array( 0 ,1 ); $resultArray = array_diff_key( $firstArray ,array_flip( $keysArray ) ); var_dump( $resultArray ); A: Maybe you need this? foreach($array3 as $tmp){ unset($array1[$tmp]); unset($array2[$tmp]); }
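The array_diff_key/array_flip trick translates directly to other languages; for comparison, a Python sketch of the same operation (names are hypothetical):

```python
def drop_keys(mapping, keys):
    # Analogue of array_diff_key($firstArray, array_flip($keysArray)):
    # keep only the entries whose key is not in the removal list.
    removal = set(keys)
    return {k: v for k, v in mapping.items() if k not in removal}

first = {0: "@@code", 1: "@@label", 2: "@@name", 3: "@@age"}
print(drop_keys(first, [0, 1]))  # {2: '@@name', 3: '@@age'}
```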
unknown
d6194
train
If isHiddenWordFound returns false while part of the word is still hidden, then you need to invert it so the loop continues; once the word is found it will return true, at which point the inverted condition will be false, allowing the program execution to continue: while (!isHiddenWordFound()); A: It seems the isHiddenWordFound method is there for precisely this purpose. do { ... } while (!isHiddenWordFound());
unknown
d6195
train
One common method of packing variable-length data sets to a single continuous array is using one element to describe the length of the next data sequence, followed by that many data items, with a zero length terminating the array. In other words, if you have data "strings" 1, 2 3, 4 5 6, and 7 8 9 10, you can pack them into an array of 1+1+1+2+1+3+1+4+1 = 15 bytes as 1 1 2 2 3 3 4 5 6 4 7 8 9 10 0. The functions to access said sequences are quite simple, too. In OP's case, each data item is a uint8: uint8 dataset[] = { ..., 0 }; To loop over each set, you use two variables: one for the offset of the current set, and another for the length: uint16 offset = 0; while (1) { const uint8 length = dataset[offset]; if (!length) { offset = 0; break; } else ++offset; /* You have 'length' uint8's at dataset+offset. */ /* Skip to next set. */ offset += length; } To find a specific dataset, you do need to find it using a loop. For example: uint8 *find_dataset(const uint16 index) { uint16 offset = 0; uint16 count = 0; while (1) { const uint8 length = dataset[offset]; if (length == 0) return NULL; else if (count == index) return dataset + offset; offset += 1 + length; count++; } } The above function will return a pointer to the length item of the index'th set (0 referring to the first set, 1 to the second set, and so on), or NULL if there is no such set. It is not difficult to write functions to remove, append, prepend, and insert new sets. (When prepending and inserting, you do need to copy the rest of the elements in the dataset array forward (to higher indexes), by 1+length elements, first; this means that you cannot access the array in an interrupt context or from a second core, while the array is being modified.)
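To make the walk over the length-prefixed layout concrete, the 15-byte example above can be checked with a quick host-side Python sketch (illustration only):

```python
def unpack_sets(packed):
    # Walk the layout described above: [len, data..., len, data..., 0]
    sets, offset = [], 0
    while packed[offset] != 0:
        length = packed[offset]
        sets.append(packed[offset + 1 : offset + 1 + length])
        offset += 1 + length
    return sets

packed = [1, 1, 2, 2, 3, 3, 4, 5, 6, 4, 7, 8, 9, 10, 0]
print(unpack_sets(packed))  # [[1], [2, 3], [4, 5, 6], [7, 8, 9, 10]]
```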
If the data is immutable (for example, generated whenever a new firmware is uploaded to the microcontroller), and you have sufficient flash/rom available, you can use a separate array for each set, an array of pointers to each set, and an array of sizes of each set: static const uint8 dataset_0[] PROGMEM = { 1 }; static const uint8 dataset_1[] PROGMEM = { 2, 3 }; static const uint8 dataset_2[] PROGMEM = { 4, 5, 6 }; static const uint8 dataset_3[] PROGMEM = { 7, 8, 9, 10 }; #define DATASETS 4 static const uint8 *dataset_ptr[DATASETS] PROGMEM = { dataset_0, dataset_1, dataset_2, dataset_3, }; static const uint8 dataset_len[DATASETS] PROGMEM = { sizeof dataset_0, sizeof dataset_1, sizeof dataset_2, sizeof dataset_3, }; When this data is generated at firmware compile time, it is common to put this into a separate header file, and simply include it from the main firmware .c source file (or, if the firmware is very complicated, from the specific .c source file that accesses the data sets). If the above is dataset.h, then the source file typically contains say #include "dataset.h" const uint8 dataset_length(const uint16 index) { return (index < DATASETS) ? dataset_len[index] : 0; } const uint8 *dataset_pointer_P(const uint16 index) { return (index < DATASETS) ? dataset_ptr[index] : NULL; } i.e., it includes the dataset, and then defines the functions that access the data. (Note that I deliberately made the data itself static, so they are only visible in the current compilation unit; but the dataset_length() and dataset_pointer(), the safe accessor functions, are accessible from other compilation units (C source files), too.) When the build is controlled via a Makefile, this is trivial. Let's say the generated header file is dataset.h, and you have a shell script, say generate-dataset.sh, that generates the contents for that header. 
Then, the Makefile recipe is simply dataset.h: generate-dataset.sh @$(RM) $@ $(SHELL) -c "$^ > $@" with the recipes for the compilation of the C source files that need it, containing it as a prerequisite: main.o: main.c dataset.h $(CC) $(CFLAGS) -c main.c Do note that the indentation in Makefiles always uses Tabs, but this forum does not reproduce them in code snippets. (You can always run sed -e 's|^ *|\t|g' -i Makefile to fix copy-pasted Makefiles, though.) OP mentioned that they are using Codevision, that does not use Makefiles (but a menu-driven configuration system). If Codevision does not provide a pre-build hook (to run an executable or script before compiling the source files), then OP can write a script or program run on the host machine, perhaps named pre-build, that regenerates all generated header files, and run it by hand before every build. In the hybrid case, where you know the length of each data set at compile time, and it is immutable (constant), but the sets themselves vary at run time, you need to use a helper script to generate a rather large C header (or source) file. (It will have 1500 lines or more, and nobody should have to maintain that by hand.) The idea is that you first declare each data set, but do not initialize them. This makes the C compiler reserve RAM for each: static uint8 dataset_0_0[3]; static uint8 dataset_0_1[2]; static uint8 dataset_0_2[9]; static uint8 dataset_0_3[4]; /* : : */ static uint8 dataset_0_97[1]; static uint8 dataset_0_98[5]; static uint8 dataset_0_99[7]; static uint8 dataset_1_0[6]; static uint8 dataset_1_1[8]; /* : : */ static uint8 dataset_1_98[2]; static uint8 dataset_1_99[3]; static uint8 dataset_2_0[5]; /* : : : */ static uint8 dataset_4_99[9]; Next, declare an array that specifies the length of each set. Make this constant and PROGMEM, since it is immutable and goes into flash/rom: static const uint8 dataset_len[5][100] PROGMEM = { sizeof dataset_0_0, sizeof dataset_0_1, sizeof dataset_0_2, /* ... 
*/ sizeof dataset_4_97, sizeof dataset_4_98, sizeof dataset_4_99 }; Instead of the sizeof statements, you can also have your script output the lengths of each set as a decimal value. Finally, create an array of pointers to the datasets. This array itself will be immutable (const and PROGMEM), but the targets, the datasets defined first above, are mutable: static uint8 *const dataset_ptr[5][100] PROGMEM = { dataset_0_0, dataset_0_1, dataset_0_2, dataset_0_3, /* ... */ dataset_4_96, dataset_4_97, dataset_4_98, dataset_4_99 }; On AT90CAN128, the flash memory is at addresses 0x0 .. 0x1FFFF (131072 bytes total). Internal SRAM is at addresses 0x0100 .. 0x10FF (4096 bytes total). Like other AVRs, it uses Harvard architecture, where code resides in a separate address space -- in Flash. It has separate instructions for reading bytes from flash (LPM, ELPM). Because a 16-bit pointer can only reach half the flash, it is rather important that the dataset_len and dataset_ptr arrays are "near", in the lower 64k. Your compiler should take care of this, though. 
To generate correct code for accessing the arrays from flash (progmem), at least AVR-GCC needs some helper code: #include <avr/pgmspace.h> uint8 subset_len(const uint8 group, const uint8 set) { return pgm_read_byte_near(&(dataset_len[group][set])); } uint8 *subset_ptr(const uint8 group, const uint8 set) { return (uint8 *)pgm_read_word_near(&(dataset_ptr[group][set])); } The assembly code, annotated with the cycle counts, avr-gcc-4.9.2 generates for at90can128 from above, is subset_len: ldi r25, 0 ; 1 cycle movw r30, r24 ; 1 cycle lsl r30 ; 1 cycle rol r31 ; 1 cycle add r30, r24 ; 1 cycle adc r31, r25 ; 1 cycle add r30, r22 ; 1 cycle adc r31, __zero_reg__ ; 1 cycle subi r30, lo8(-(dataset_len)) ; 1 cycle sbci r31, hi8(-(dataset_len)) ; 1 cycle lpm r24, Z ; 3 cycles ret subset_ptr: ldi r25, 0 ; 1 cycle movw r30, r24 ; 1 cycle lsl r30 ; 1 cycle rol r31 ; 1 cycle add r30, r24 ; 1 cycle adc r31, r25 ; 1 cycle add r30, r22 ; 1 cycle adc r31, __zero_reg__ ; 1 cycle lsl r30 ; 1 cycle rol r31 ; 1 cycle subi r30, lo8(-(dataset_ptr)) ; 1 cycle sbci r31, hi8(-(dataset_ptr)) ; 1 cycle lpm r24, Z+ ; 3 cycles lpm r25, Z ; 3 cycles ret Of course, declaring subset_len and subset_ptr as static inline would indicate to the compiler you want them inlined, which increases the code size a bit, but might shave off a couple of cycles per invocation. Note that I have verified the above (except using unsigned char instead of uint8) for at90can128 using avr-gcc 4.9.2. A: First, you should put the predefined length array in flash using PROGMEM, if you haven't already. You could write a script, using the predefined length array as input, to generate a .c (or cpp) file that contains the PROGMEM array definition. Here is an example in python: # Assume the array that defines the data length is in a file named DataLengthArray.c # and the array is of the format # const uint16 dataLengthArray[] PROGMEM = { # 2, 4, 5, 1, 2, # 4 ... 
}; START_OF_ARRAY = "const uint16 dataLengthArray[] PROGMEM = {" outFile = open('PointerArray.c', 'w') with open("DataLengthArray.c") as f: fc = f.read().replace('\n', '') dataLengthArray=fc[fc.find(START_OF_ARRAY)+len(START_OF_ARRAY):] dataLengthArray=dataLengthArray[:dataLengthArray.find("}")] offsets = [int(s) for s in dataLengthArray.split(",")] outFile.write("extern uint8 array[2000];\n") outFile.write("uint8* pointer_array[] PROGMEM = {\n") sum = 0 for offset in offsets: outFile.write("array + {}, ".format(sum)) sum=sum+offset outFile.write("};") Which would output PointerArray.c: extern uint8 array[2000]; uint8* pointer_array[] = { array + 0, array + 2, array + 6, array + 11, array + 12, array + 14, }; You could run the script as a Pre-build event, if your IDE supports it. Otherwise you will have to remember to run the script every time you update the offsets. A: You mention that the data set lengths are pre-defined, but not how they are defined - so I'm going to make the assumption that how the lengths are written into code is up for grabs. If you define your flash array in terms of offsets instead of lengths, you should immediately get a run-time benefit. With lengths in flash, I expect you have something like this: const uint8_t lengths[] = {1, 5, 9, ...}; uint8_t get_data_set_length(uint16_t index) { return lengths[index]; } uint8_t * get_data_set_pointer(uint16_t index) { uint16_t offset = 0; uint16_t i = 0; for ( i = 0; i < index; ++i ) { offset += lengths[i]; } return &(array[offset]); } With offsets in flash, the const array has gone from uint8_t to uint16_t, which doubles the flash usage, plus an additional element to speed up calculating the length of the last element. 
const uint16_t offsets[] = {0, 1, 6, 15, ..., /* last offset + last length */ }; uint8_t get_data_set_length(uint16_t index) { return offsets[index+1] - offsets[index]; } uint8_t * get_data_set_pointer(uint16_t index) { uint16_t offset = offsets[index]; return &(array[offset]); } If you can't afford that extra flash memory, you could also combine the two by having the lengths for all elements and offsets for a fraction of the indices, e.g. every 16th element in the example below, trading off run-time cost vs flash memory cost. uint8_t get_data_set_length(uint16_t index) { return lengths[index]; } uint8_t * get_data_set_pointer(uint16_t index) { uint16_t i; uint16_t offset = offsets[index / 16]; for ( i = index & 0xFFF0u; i < index; ++i ) { offset += lengths[i]; } return &(array[offset]); } To simplify the encoding, you can consider using x-macros, e.g. #define DATA_SET_X_MACRO(data_set_expansion) \ data_set_expansion( A, 1 ) \ data_set_expansion( B, 5 ) \ data_set_expansion( C, 9 ) uint8_t array[2000]; #define count_struct(tag,len) uint8_t tag; #define offset_struct(tag,len) uint8_t tag[len]; #define offset_array(tag,len) (uint16_t)(offsetof(data_set_offset_struct,tag)), #define length_array(tag,len) len, #define pointer_array(tag,len) (&(array[offsetof(data_set_offset_struct,tag)])), typedef struct { DATA_SET_X_MACRO(count_struct) } data_set_count_struct; typedef struct { DATA_SET_X_MACRO(offset_struct) } data_set_offset_struct; const uint16_t offsets[] = { DATA_SET_X_MACRO(offset_array) }; const uint16_t lengths[] = { DATA_SET_X_MACRO(length_array) }; uint8_t * const pointers[] = { DATA_SET_X_MACRO(pointer_array) }; The preprocessor turns that into: typedef struct { uint8_t A; uint8_t B; uint8_t C; } data_set_count_struct; typedef struct { uint8_t A[1]; uint8_t B[5]; uint8_t C[9]; } data_set_offset_struct; const uint16_t offsets[] = { 0,1,6, }; const uint16_t lengths[] = { 1,5,9, 
}; uint8_t * const pointers[] = { array+0, array+1, array+6, }; This just shows an example of what the x-macro can expand to. A short main() can show these in action: int main() { printf("There are %d individual data sets\n", (int)sizeof(data_set_count_struct) ); printf("The total size of the data sets is %d\n", (int)sizeof(data_set_offset_struct) ); printf("The data array base address is %x\n", array ); int i; for ( i = 0; i < sizeof(data_set_count_struct); ++i ) { printf( "elem %d: %d bytes at offset %d, or address %x\n", i, lengths[i], offsets[i], pointers[i]); } return 0; } With sample output There are 3 individual data sets The total size of the data sets is 15 The data array base address is 601060 elem 0: 1 bytes at offset 0, or address 601060 elem 1: 5 bytes at offset 1, or address 601061 elem 2: 9 bytes at offset 6, or address 601066 The above requires you to give a 'tag' - a valid C identifier for each data set, but if you have 500 of these, pairing each length with a descriptor is probably not a bad thing. With that amount of data, I would also recommend using an include file for the x-macro, rather than a #define, in particular if the data set definitions can be exported somewhere else. The benefit of this approach is that you have the data sets defined in one place, and everything is generated from this one definition. If you re-order the definition, or add to it, the arrays will be regenerated at compile time. It also uses only the compiler toolchain, in particular the pre-processor, so there's no need for writing external scripts or hooking in pre-build scripts. A: You said that you want to store the address of each data set, but it seems like it would be much simpler if you stored the offset of each data set. Storing the offsets instead of the addresses means that you don't need to know the address of the big array at compile time. Right now you have an array of constants containing the length of each data set. 
const uint8_t data_set_lengths[] = { 1, 5, 9...}; Just change that to be an array of constants containing the offset of each data set in the big array. const uint8_t data_set_offsets[] = { 0, 1, 6, 15, ...}; You should be able to calculate these offsets at design time, given that you already know the lengths. As you said yourself, just accumulate the lengths to get the offsets. With the offsets precalculated, the code won't have the bad performance of accumulating at run time. And you can find the address of any data set at run time simply by adding the data set's offset to the address of the big array. And the address of the big array doesn't need to be settled until link time.
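"Accumulate the lengths to get the offsets" can be done by a tiny design-time script; for example, in Python (illustration only):

```python
lengths = [1, 5, 9]

# Running total: offsets[i] is the start of set i. The extra trailing
# entry (last offset + last length) also lets each length be recovered
# as a difference of neighbouring offsets.
offsets = [0]
for n in lengths:
    offsets.append(offsets[-1] + n)

print(offsets)  # [0, 1, 6, 15]
print([offsets[i + 1] - offsets[i] for i in range(len(lengths))])  # [1, 5, 9]
```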
unknown
d6196
train
As per the suggestion given by Andy Lester, I have added indexes for the necessary columns, so the execution time was reduced by half, to 18 secs on localhost. I investigated further and found that the indexed columns were of datatype VARCHAR. So I changed that to INT and then executed the query, getting results within 0.9 secs. Thanks to Andy Lester.
unknown
d6197
train
Try like below, $('.myclass').each(function() { this.selectedIndex = 0; }); DEMO: http://jsfiddle.net/ZDYjP/ A: How about this short one? $(".myclass :first-child").prop("selected", true); DEMO: http://jsfiddle.net/u8S54/ A: Do it like this: $('.myclass').each(function() { $(this).find('option:first').attr('selected','selected'); }); A: You don't need to use each. Here's an example. $('.select').find('option:first').prop('selected', true ); A: $("option:first", ".myselect").prop('selected', true); Working sample A: If you remove the selected property then the first one by default is what is shown. $('.myclass').find('option:selected').removeAttr("selected");
unknown
d6198
train
If you prefer a Java centric solution, DOM4J has support for traversing a document tree: Document doc = DocumentHelper.parseText(XML); final Namespace ns = Namespace.get("test", "urn:foo:bar"); doc.accept(new VisitorSupport() { @Override public void visit(Element node) { node.setQName(QName.get(node.getName(), ns)); // Attribute QNames are read-only, so we need to create new ones List<Attribute> attributes = new ArrayList<Attribute>(); while(node.attributes().size() > 0) attributes.add(node.attributes().remove(0)); for(Attribute a: attributes) { node.addAttribute(QName.get(a.getName(), ns), a.getValue()); } } }); A: You could run an XSLT transformation: <xsl:transform xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="3.0"> <xsl:template match="*"> <xsl:element name="test:{local-name()}" namespace="http://test-namespace/ns"> <xsl:apply-templates select="@* | node()"/> </xsl:element> </xsl:template> <xsl:template match="@*"> <xsl:attribute name="test:{local-name()}" namespace="http://test-namespace/ns"> <xsl:value-of select="."/> </xsl:attribute> </xsl:template> </xsl:transform> I think DOM4J has methods to apply an XSLT 1.0 transformation directly, but you also have the option to use Saxon, which handles DOM4J as input and/or output alongside many other tree models. Incidentally, (a) in your requirements example, the result document is ill-formed because it doesn't declare the namespace, and (b) it's not generally considered good practice to put all the attributes in the same namespace as the containing elements; I've given you a solution on the assumption that you have good reasons for this rather strange design.
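For comparison, the element-renaming part of this can be sketched with Python's standard library (attributes are left alone here, and the element names are made up):

```python
import xml.etree.ElementTree as ET

NS = "urn:foo:bar"
root = ET.fromstring('<Example a="1"><Child/></Example>')

# Walk every element and move it into the namespace by rewriting
# its tag in Clark notation ({uri}localname).
for element in root.iter():
    element.tag = "{%s}%s" % (NS, element.tag)

xml_out = ET.tostring(root, encoding="unicode")
print(xml_out)  # the serializer declares the namespace, e.g. xmlns:ns0="urn:foo:bar"
```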
unknown
d6199
train
"mPDF has limited scope to control when automatic page-breaks occur, and does not have ‘widows’ or ‘orphans’ protection." https://mpdf.github.io/paging/page-breaks.html
unknown
d6200
train
You can use negative z-index on the fixed element. <div id="fixed">This is fixed</div> <div id="static">This is static</div> #fixed { position:fixed; z-index:-1; } Fiddle Demonstration
unknown