d3301
train
You need to use a Filter Expression like:

$..[?(@.Id == '3cb5ee8d-1382-49fc-850c-013c65ab81b0')].SysCreatedUserId

More information: JMeter's JSON Path Extractor Plugin - Advanced Usage Scenarios
unknown
d3302
train
To hide/unhide an app, your app needs to be a device policy manager. You can find more information about the device policy manager at http://developer.android.com/reference/android/app/admin/DevicePolicyManager.html and you may need to use https://developer.android.com/reference/android/app/admin/DevicePolicyManager.html#setApplicationHidden(android.content.ComponentName,%20java.lang.String,%20boolean)

DevicePolicyManager dpm = (DevicePolicyManager) context.getSystemService(Context.DEVICE_POLICY_SERVICE);
ComponentName ownerComponent = new ComponentName(context, DeviceAdminReceiverImpl.class);
boolean newHiddenValue = true;
dpm.setApplicationHidden(ownerComponent, packageName, newHiddenValue);
unknown
d3303
train
If the transaction is not rolling back, one possibility is that your table uses MyISAM as its engine, since MyISAM tables do not support rollbacks. So double-check that the table's engine is set to InnoDB. A sketch of how you might verify and fix this is shown below.
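As a quick way to check and change the engine, here is a minimal Python sketch (assuming the mysql-connector-python package; the credentials, schema and table names are placeholders):

import mysql.connector

# Placeholder connection details -- adjust for your setup.
conn = mysql.connector.connect(host="localhost", user="app",
                               password="secret", database="mydb")
cur = conn.cursor()

# Ask information_schema which engine the table uses.
cur.execute(
    "SELECT engine FROM information_schema.tables "
    "WHERE table_schema = %s AND table_name = %s",
    ("mydb", "mytable"),
)
(engine,) = cur.fetchone()
print("current engine:", engine)

if engine.lower() != "innodb":
    # Rebuilds the table with a transactional engine.
    cur.execute("ALTER TABLE mytable ENGINE = InnoDB")
    conn.commit()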
unknown
d3304
train
Had to exclude per file type in the end.

sourceSets {
    main {
        resources {
            srcDir '.'
            exclude ('**/*.j3odata', '**/*.mesh', '**/*.skeleton',
                '**/*.mesh.xml', '**/*.skeleton.xml', '**/*.scene',
                '**/*.material', '**/*.obj', '**/*.mtl', '**/*.3ds',
                '**/*.dae', '**/*.blend', '**/*.blend*[0-9]', '**/*.bin',
                '**/*.gltf')
        }
    }
}
unknown
d3305
train
var result = String.fromCharCode.apply(null, arrayOfValues);

Explanations: String.fromCharCode can take a list of char codes as arguments, each char code as a separate argument (for example: String.fromCharCode(97,98,99)). apply allows calling a function with a custom this, with the arguments provided as an array (in contrast to call, which takes the arguments as-is). So, since we don't care what this is, we set it to null (but anything would work). In conclusion, String.fromCharCode.apply(null, [97,98,99]) is equivalent to String.fromCharCode(97,98,99) and returns 'abc', which is what we expect.

A: It depends on what you want and what you mean.

Option One: If you want to convert the text to ASCII, do this:

var theArray = [97,112,112,46,106,115,10,110,111,100,101,46,106,115,10];
theString = String.fromCharCode.apply(0, theArray);

(Edited based on helpful comments.) Produces:

app.js
node.js

Option Two: If you just want a list separated by commas, you can do .join(','):

var theArray = [97,112,112,46,106,115,10,110,111,100,101,46,106,115,10];
var theString = theArray.join(',');

You can put whatever you want as a separator in .join(), like a comma and a space, hyphens, or even words.

A: In node.js it's usually done with buffers:

> new Buffer([97,112,112,46,106,115,10,110,111,100,101,46,106,115,10]).toString()
'app.js\nnode.js\n'

It'll be faster than fromCharCode, and, most importantly, it'll preserve UTF-8 sequences correctly.

A: Just use the toString() function:

var yourArray = [97,112,112,46,106,115,10,110,111,100,101,46,106,115,10];
var strng = yourArray.toString();

A: The ssh2 module passes a Buffer (not an actual javascript array) to 'data' event handlers for streams you get from exec() or shell(). Unless of course you called setEncoding() on the stream, in which case you'd get a string with the encoding you specified. If you want the string representation instead of the raw binary data, then call chunk.toString() for example.
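For comparison only (not part of the original answers), the same conversion in Python uses bytes.decode, which, like the Node Buffer approach, preserves UTF-8 sequences:

# Decode a list of byte values into a string (UTF-8 aware).
codes = [97, 112, 112, 46, 106, 115, 10, 110, 111, 100, 101, 46, 106, 115, 10]
text = bytes(codes).decode("utf-8")
print(text)                              # app.js / node.js, newline-separated
print(",".join(str(c) for c in codes))   # comma-separated list, like .join(',')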
unknown
d3306
train
Select p.id, p.name as orgn, t.name as altn, p.description as orgd, t.description as altd
from product p
join tmp_product t
  on t.id = p.id
 and (t.name <> p.name or t.description <> p.description)

A: I want to compare both tables with a query and return columns that have changed from Product to Temp_Product

Since the two tables have the same structure, you can use the EXCEPT set operator for this:

SELECT * FROM Temp_Product
EXCEPT
SELECT * FROM Product;
unknown
d3307
train
Check whether your UIImageView interactions are enabled: ball.userInteractionEnabled = YES;
unknown
d3308
train
Your problem is that you are defining tally as an instance method, but it's really just a decorator function (it can't be called on an instance in any reasonable way). You can still define it in the class if you insist (it's just useless for instances); you just need to make it accept a single argument (the function to wrap), without self, and make the wrapper accept self (while passing along any provided arguments to the wrapped function):

class User:
    # ... other methods unchanged ...

    def tally(func):
        # Accept self + arbitrary arguments to make the decorator useable on more functions
        def wrapper_function(self, *args, **kwargs):
            print("Wins: {}\nLosses: {}\nTotal Score: {}".format(self.wins, self.losses, self.score))
            # Pass along self to the wrapped function along with the arbitrary arguments
            return func(self, *args, **kwargs)
        # Don't call the wrapper function; you have to return it so it replaces the decorated function
        return wrapper_function

    @tally
    def record_win(self, w=1):
        self.wins += w

    # Optionally, once tally is no longer needed for new methods, but before dedenting
    # out of the class definition, do:
    del tally  # so it won't stick around to appear as a possible method to call on
               # instances; it's done all the work it needs to do after all

Removing the w argument in favor of *args, **kwargs means you don't need to specialize to specific function prototypes, nor duplicate the defaults of the function you're wrapping (if they don't pass the argument, the default will get used automatically).
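A quick usage sketch of the decorator, with a minimal hypothetical User class (the wins/losses/score fields are just stand-ins to exercise it):

class User:
    def __init__(self):
        self.wins = self.losses = self.score = 0

    def tally(func):
        def wrapper_function(self, *args, **kwargs):
            print("Wins: {}\nLosses: {}\nTotal Score: {}".format(
                self.wins, self.losses, self.score))
            return func(self, *args, **kwargs)
        return wrapper_function

    @tally
    def record_win(self, w=1):
        self.wins += w

    del tally

u = User()
u.record_win()      # prints the current tally (all zeros), then increments wins
u.record_win(w=2)   # prints Wins: 1 ..., then adds 2 more wins
print(u.wins)       # 3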
unknown
d3309
train
It may be that you are doing it, but just not showing it in your question. You need to create and register an instance of SessionsMiddleware using something like this:

app.middleware.use(SessionsMiddleware(session: MemorySessions(storage: MemorySessions.Storage())))

Do this before you create the instance of your controller.

EDIT in reply to OP's comment: I normally pass the instances of the different middleware explicitly because I tend to apply subsets to groups of routes rather than all the middleware. For example:

app.middleware.use(FileMiddleware(publicDirectory: app.directory.publicDirectory))
app.middleware.use(SessionsMiddleware(session: MemorySessions(storage: MemorySessions.Storage())))
app.middleware.use(User.sessionAuthenticator(.mysql))
try app.register(collection: APIController(middleware: UserToken.authenticator()))
var middleware: [Middleware] = [CustomMiddleware.InternalErrorMiddleware()]
try app.register(collection: InsecureController(middleware: middleware))
middleware.append(contentsOf: [User.redirectMiddleware(path: [C.URI.Solidus].string),
                               User.authenticator(),
                               User.guardMiddleware(),
                               CustomMiddleware.SessionTimeoutMiddleware()])
try app.register(collection: CustomerController(middleware: middleware))

BTW, have you included my line 3 above? That may be your problem.
unknown
d3310
train
I found this API: http://youtube.codeplex.com/. It may be helpful for someone in the future.
unknown
d3311
train
Find the lowest-value character in cells B15, B17 and B19 only, with the input data housed in B15:B20. In D15, enter the formula:

=CHAR(MIN(CODE(T(OFFSET(B14,{1,3,5},0)))))
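The same computation expressed in Python, purely as an illustration (the cell contents are made up):

# Pick the character with the smallest code point among a subset of "cells".
cells = {"B15": "d", "B17": "a", "B19": "q"}   # hypothetical contents
lowest = min(cells.values(), key=ord)           # MIN(CODE(...)) then CHAR(...)
print(lowest)  # a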
unknown
d3312
train
It's a known bug related to navigation bar items, and it's not limited to just sheets; it seems to affect any modal, and I've encountered it in IB just the same when using modal segues. Unfortunately this issue is still present in the 11.3 build; hopefully they get it fixed soon.
unknown
d3313
train
This is a classic recursion problem; in my opinion it will be much easier to use a static function instead of a member function:

class MyClass:
    def __init__(self, val, child=None):
        self.val = val
        self.child = child

    @staticmethod
    def find_last_child_val(current_node: "MyClass"):
        if current_node.child is None:
            return current_node.val
        else:
            return MyClass.find_last_child_val(current_node.child)

c = MyClass("I'm child")
p = MyClass("I'm parent", c)
MyClass.find_last_child_val(p)

Update: Pay attention that searching for a child using a recursion like this is not efficient. find_last_child_val() runs in O(n) complexity, and it is cheaper to perform the n iterations in a loop instead of a recursion, since that avoids the call-stack overhead and the recursion-depth limit. If you can't think of a way to reduce the tree traversal complexity, I suggest using a different data structure.
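For reference, a sketch of the iterative traversal the update alludes to (same O(n) walk, but constant stack space; it reuses the c/p nodes defined above):

def find_last_child_val_iter(node):
    # Walk child pointers until the last node; O(n) time, O(1) extra space.
    while node.child is not None:
        node = node.child
    return node.val

print(find_last_child_val_iter(p))  # "I'm child"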
unknown
d3314
train
It used to check for tampering, but the overhead of checking every strong-name-signed assembly at application startup was too high, so Microsoft disabled this behaviour by default a number of years ago (way back when ".NET Framework version 3.5 Service Pack 1" was released). This is called the Strong-Name bypass feature.

You can disable the feature (i.e. make Windows check for tampering) for a particular application by adding the following to its ".config" file:

<configuration>
    <runtime>
        <bypassTrustedAppStrongNames enabled="false" />
    </runtime>
</configuration>

You can enable strong-name checking for ALL applications by editing the registry (which is clearly not a feasible solution!). For more details, see the following page: https://learn.microsoft.com/en-us/dotnet/framework/app-domains/how-to-disable-the-strong-name-bypass-feature

The advice nowadays is to use a full code-signing certificate for your executable and DLLs if you want to prevent code tampering.
unknown
d3315
train
require loads and executes code in the global environment. For example, let's create a simple sandbox (Lua >= 5.2):

-- example.lua
my_global = 42

local sandbox
do
  local _ENV = { require = require, print = print }
  function sandbox()
    print('<sandbox> my_global =', my_global)
    require 'example_module'
  end
end

print('<global> my_global =', my_global)
sandbox()
print('<global> my_global =', my_global)

Now, let's create a module that changes my_global:

-- example_module.lua
print('<module> my_global =', my_global)
my_global = nil

The expectation is that inside the sandbox the only functions available are require and print. Code inside the sandbox should not be able to access the global my_global. Run the example and you will see:

$ lua example.lua
<global> my_global =    42    -- The global environment, Ok.
<sandbox> my_global =   nil   -- Inside the sandbox, Ok.
<module> my_global =    42    -- Inside the sandbox, but loaded with require. Whoops, we have access to the global environment.
<global> my_global =    nil   -- The module changed the value and it is reflected in the global environment. The module has broken out of the sandbox.

A: Since it has access to the file system and the global environment, it can execute code and modify values it's not supposed to modify. You can implement and make available your own require method that satisfies your sandbox requirements. For example, you can preload those libraries you verified and have "require" only return preloaded results.
unknown
d3316
train
You're likely writing in an unexpected directory. Try to fully specify the path, like /home/... (note the leading '/'), or just write it to a local file like array.txt.

A: When handling file streams, I prefer using this idiom to detect errors early:

#include <iostream>
#include <fstream>
#include <cstring>
#include <cerrno>   // needed for errno

int main()
{
    std::ifstream input("no_such_file.txt");
    if (!input)
    {
        std::cerr << "Unable to open file 'no_such_file.txt': "
                  << std::strerror(errno) << std::endl;
        return 1;
    }
    // The file opened successfully, so carry on
}
unknown
d3317
train
I have the same use case and this is what I have done. In my case, I have multiple proxy targets, so I have configured the JSON (ProxySession.json) accordingly.

Note: This approach is not dynamic. You need to get the JSESSIONID (session ID) manually for the proxied request: log into the application you want your application to proxy, get the JSESSIONID, add it to the JSON file (or replace it directly in the onProxyReq function), and then run your dev server.

Example:

// webpack-dev.js
const ProxySession = require("./ProxySession");

config = {
  output: { .......... },
  plugins: [.......],
  resolve: { ...... },
  module: {
    rules: [......]
  },
  devServer: {
    port: 8088,
    host: "0.0.0.0",
    disableHostCheck: true,
    proxy: {
      "/service/**": {
        target: ProxySession.proxyTarget,
        changeOrigin: true,
        onProxyReq: function(proxyReq) {
          proxyReq.setHeader("Cookie", "JSESSIONID=" + ProxySession[buildType].JSESSIONID + ";msa=" + ProxySession[buildType].msa + ";msa_rmc=" + ProxySession[buildType].msa_rmc + ";msa_rmc_disabled=" + ProxySession[buildType].msa_rmc);
        }
      },
      "/j_spring_security_check": {
        target: ProxySession.proxyTarget,
        changeOrigin: true
      },
      "/app_service/websock/**": {
        target: ProxySession.proxyTarget,
        changeOrigin: true,
        onProxyReq: function(proxyReq) {
          proxyReq.setHeader("Cookie", "JSESSIONID=" + ProxySession[buildType].JSESSIONID + ";msa=" + ProxySession[buildType].msa + ";msa_rmc=" + ProxySession[buildType].msa_rmc + ";msa_rmc_disabled=" + ProxySession[buildType].msa_rmc);
        }
      }
    }
  }
}

// ProxySession.json
{
  "proxyTarget": "https://t.novare.me/",
  "build-type-1": {
    "JSESSIONID": "....",
    "msa": "....",
    "msa_rmc": "...."
  },
  "build-type-2": {
    "JSESSIONID": ".....",
    "msa": ".....",
    "msa_rmc": "....."
  }
}

A: I met the exact same issue, and fixed it this way. This is verified and works, but it's not dynamic:

proxy: {
  '/my-bff': {
    target: 'https://my.domain.com/my-bff',
    changeOrigin: true,
    pathRewrite: { '^/my-bff': '' },
    withCredentials: true,
    headers: { Cookie: 'myToken=jx42NAQSFRwXJjyQLoax_sw7h1SdYGXog-gZL9bjFU7' },
  },
},

To make it dynamic, you should proxy to the login target and append an onProxyRes to relay the cookies, something like (not verified yet):

onProxyRes: (proxyRes: any, req: any, res: any) => {
  Object.keys(proxyRes.headers).forEach(key => {
    res.append(key, proxyRes.headers[key]);
  });
},

A:

"/api/**": {
  ...
  cookieDomainRewrite: { "someDomain.com": "localhost" },
  withCredentials: true,
  ...
}

A: You can use this plugin to securely manage auth cookies for webpack-dev-server. A typical workflow would be:

* Configure a proxy to the production service
* Login on the production site, copy authenticated cookies to the local dev server
* The plugin automatically saves your cookie to the system keychain

A: See https://github.com/chimurai/http-proxy-middleware#http-proxy-options; use option.cookieDomainRewrite and option.cookiePathRewrite now.

A: Cookies? Enable https on the dev server:

devServer: {
  https: true,  // <------------ needed for the cookies
  host: "127.0.0.1",
  port: 9090,
  proxy: {
    "/s": {
      target: "https://xx",  // <--- https
      secure: false,
      // pathRewrite: { "^/s": "/s" },
      changeOrigin: true,
      withCredentials: true
    }
  }
}
unknown
d3318
train
I ended up giving up on the idea of uploading to a temporary folder and then moving the files when the message is sent. Instead, I now send everything in the same FormData object (using a mixture of http://www.c-sharpcorner.com/UploadFile/manas1/upload-files-through-jquery-ajax-in-Asp-Net-mvc/ and "JQuery ajax file upload to ASP.NET with all form data"), and for now it is working (which is actually enough for me).
unknown
d3319
train
Try

library(data.table)
dt <- rbind(
  data.table(user=1, action=1:10, time=c(1,5,10,11,15,20,22:25)),
  data.table(user=2, action=1:5, time=c(1,3,10,11,12))
)

dt[, session:=cumsum(c(T, !(diff(time)<=2))), by=user][]
#     user action time session
#  1:    1      1    1       1
#  2:    1      2    5       2
#  3:    1      3   10       3
#  4:    1      4   11       3
#  5:    1      5   15       4
#  6:    1      6   20       5
#  7:    1      7   22       5
#  8:    1      8   23       5
#  9:    1      9   24       5
# 10:    1     10   25       5
# 11:    2      1    1       1
# 12:    2      2    3       1
# 13:    2      3   10       2
# 14:    2      4   11       2
# 15:    2      5   12       2

I used a difference of <=2 to collect sessions.
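The same grouping logic in plain Python, purely as an illustration (not part of the original answer): start a new session whenever the gap to the previous timestamp exceeds 2.

def sessionize(times, gap=2):
    # Assign a session id per timestamp; a new session starts when the
    # gap to the previous event is greater than `gap`.
    sessions, current = [], 1
    for i, t in enumerate(times):
        if i > 0 and t - times[i - 1] > gap:
            current += 1
        sessions.append(current)
    return sessions

print(sessionize([1, 5, 10, 11, 15, 20, 22, 23, 24, 25]))
# [1, 2, 3, 3, 4, 5, 5, 5, 5, 5]  -- matches the data.table result for user 1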
unknown
d3320
train
You can change the default date and time formats in your en.yml locale file like this (this is an example of the French formats from one of my projects):

date:
  formats:
    default: "%d/%m/%Y"
    short: "%e %b"
    long: "%e %B %Y"
    long_ordinal: "%e %B %Y"
    only_day: "%e"
time:
  formats:
    default: "%d %B %Y %H:%M"
    time: "%H:%M"
    short: "%d %b %H:%M"
    long: "%A %d %B %Y %H:%M:%S %Z"
    long_ordinal: "%A %d %B %Y %H:%M:%S %Z"
    only_second: "%S"
  am: 'am'
  pm: 'pm'

Or you can simply convert your datetime instances:

@request.begin_date.strftime("%m/%d/%Y") == @request.begin_date_was.strftime("%m/%d/%Y")

or even:

l(@request.begin_date, :format => your_format_in_locale_file) == l(@request.begin_date_was, :format => your_format_in_locale_file)

Hope it will help you.

A: I realize that when you asked, you were probably using a different Rails version, but I just stumbled upon this myself with Rails 3.2.5. Apparently it's a regression in 3.2.5, too: https://github.com/rails/rails/issues/6591

A: I arrived here from a Google search for a similar problem. It wasn't related to the Rails version, but I found out after some debugging that I was assigning a Time object with milliseconds. When calling changed, I got back an array of seemingly identical objects, since they got converted to DateTimes. Not sure if this can be considered a bug in Rails or not, but in case you end up with the same problem, check that you aren't assigning datetimes with milliseconds in them.
unknown
d3321
train
It really should not matter where you define your factory, or any other function for that matter. Just be sure to import it correctly somewhere at the top of app.module.ts:

import { multiTranslateHttpLoaderFactory } from 'path/to/your/component';
unknown
d3322
train
android:background="@android:color/transparent"

You can also set your own colors:

android:background="#80000000"

The first two hex characters represent opacity. So #00000000 would be fully transparent, #80000000 would be 50% transparent, and #FF000000 is opaque. The other six values are the color itself. #80FF8080 is a color I just made up and is a translucent sort of pinkish.

A: Set the background to the standard Android transparent color.

In code (note: setBackgroundColor takes the color int that getColor returns):

myButton.setBackgroundColor(getResources().getColor(android.R.color.transparent));

In xml:

android:background="@android:color/transparent"

A: The best way is to create a selector.
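As a quick check of the opacity math (an illustration, not from the answers), the alpha byte scales linearly from 00 (fully transparent) to FF (opaque):

# Convert an ARGB hex alpha byte to a percentage of opacity.
for alpha in ("00", "80", "FF"):
    print(alpha, f"{int(alpha, 16) / 255:.0%}")  # 00 -> 0%, 80 -> 50%, FF -> 100%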
unknown
d3323
train
In Ruby you could do this, but you're out of luck in PHP. The good news is, you can modify what you're doing slightly to pass the query and the parameters separately as arguments to the query method:

$db->query("UPDATE `table` SET ? WHERE `id` = '1'", array(
    "id" => "2",
    "position" => "1",
    "visible" => "1",
    "name" => "My Name's John",
    "description" => "This is neat!"
));

And then handle the interpolation and concatenation in your Database object:

class Database
{
    function __construct()
    {
        // Connect to the database, code not shown
    }

    public function query($query, $input)
    {
        $sql = $this->_normalize_query($query, $input);
        mysql_query($sql);
        return true;
    }

    protected function _normalize_query($query, $input)
    {
        $params = "";
        foreach ($input as $k => $v) {
            // escape and assemble the input into SQL
        }
        return preg_replace('/\?/', $params, $query, 1);
    }
}

However, there are already a lot of ORMs out there that are very capable. If you are looking for something to only assemble queries and not manage results, you can probably find something as well. It seems like you're reinventing the wheel needlessly here. See: Good PHP ORM Library?

A: You could write a sort of helper function which would work something like this (inside of class Database):

public function ArrayValues($array)
{
    $string = "";
    foreach ($array as $Key => $Value) {
        $string .= "`$Key` = '$Value' ,";
    }

    // Get rid of the trailing "," to prevent any weird problems
    if (strlen($string) > 1) {
        $string = substr($string, 0, strlen($string) - 2);
    }

    return $string;
}

Then you'd use it like:

$db->query("UPDATE `table` SET " . $db->ArrayValues(array(
    "id" => "2",
    "position" => "1",
    "visible" => "1",
    "name" => "My Name's John",
    "description" => "This is neat!"
)) . " WHERE `id` = '1'");

I haven't tested this, however it should work.
unknown
d3324
train
I don't know if it exactly meets your need, but have a look at Web Start's Version Download Protocol. To sum it up: with versioned download you can specify each jar version to be used in the jnlp file like this:

<jar href="jackson-core.jar" version="2.0.2" />

and deploy your jar file on the server with a filename of jackson-core__V2.0.2.jar. With this protocol Web Start will only use jar files whose version exactly matches the given version from the jnlp file. Another advantage is that when the specified version is already present in the local cache, Web Start will not try to download the version again, regardless of timestamps etc.

Advantages:

* Full control over the versions used via the jnlp file.
* Fewer download requests for jars present in the cache.

Disadvantages:

* New versions require a change in the jnlp file.
* Not suitable for SNAPSHOT builds, since the file's timestamp is completely ignored and version numbers don't change for SNAPSHOT builds.
unknown
d3325
train
You need to use the --bignum option, as this answer suggests (supported in gawk since version 4.1):

echo 0x06375FDFAE88312A | awk --bignum '{printf "%d\n", strtonum($1)}'
echo 0x06375FDFAE88312A | awk --bignum --non-decimal-data '{printf "%d\n", $1}'

The problem is that AWK typically uses a double-precision floating point number to represent numbers by default, so there is a limit on how many exact digits can be stored that way.
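To see why the default double representation fails here, a small Python check (illustration only): the value needs more bits than the 53-bit mantissa a double provides.

n = 0x06375FDFAE88312A
print(n)               # exact value; Python ints are arbitrary precision
print(int(float(n)))   # differs in the low digits: a double keeps only 53 bits
print(n.bit_length())  # 59 -- more bits than a double's 53-bit mantissa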
unknown
d3326
train
There is an answer on SO here. This link https://www.sevenforums.com/tutorials/278262-mklink-create-use-links-windows.html would serve well too (from the answer above). Basically you have to use the mklink Windows command from the command prompt (the latter must be run as administrator).

Now, assume you have WAMP installed and a virtual host named mysite.local created and pointing to the physical d:\mysite folder. You now want the files in the folder f:\otherfolder\realfolder to be accessible via a mysite.local/otherfolder/somefile.ext kind of URL. For this you have to create a symbolic link named otherfolder in d:\mysite that points to f:\otherfolder\realfolder. You have to execute:

mklink /D d:\mysite\otherfolder f:\otherfolder\realfolder

from the Windows command prompt. The link otherfolder is created in d:\mysite and you can access files via a URL as mentioned above.
unknown
d3327
train
That's not how you should do it in ReactJS. Here's a good tutorial for handling forms: https://reactjs.org/docs/forms.html

Basically you need to set a value on each input and handle its respective onChange callback, e.g.:

<input
  type="text"
  name="name"
  value={this.state.name}
  onChange={this.onNameChange}
  placeholder="Estelle Nze Minko"
/>

Then in your component you have a method onNameChange which saves the new value to the state, for example:

onNameChange(event) {
  const name = event.target.value;
  this.setState(s => ({ ...s, name }));
}

Finally, when submitting the form, you need to use the values inside this.state:

handleSubmit(e) {
  e.preventDefault(); // prevent page reload
  const { name } = this.state;
  const data = JSON.stringify({ name });
  fetch(`/players/modify/${this.props.match.params.id}/confirm`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    mode: "cors",
    body: data
  });
}

This is all just an example; I recommend you read the link I gave you first.

LE: Using an uncontrolled component: https://codesandbox.io/s/dark-framework-k2zkj is an example I created for an uncontrolled component. Basically you can do this in your onSubmit function:

onSubmit(event) {
  event.preventDefault();
  const tempPlayer = new FormData(event.target);
  for (let [key, value] of tempPlayer.entries()) {
    console.log(key, value);
  }
}

A: Is there a reason you don't use controlled components? You could keep the input values in the state, and when submitting the form, you just use the values from the state. See the React Docs on Forms.

A: The error seems to come from storing the input element itself as a value; you might have ended up performing an operation on it somewhere. Instead you can store the name and value of the input with:

tempPlayer[input.name] = input.value;

Demo:

const root = document.getElementById("root");
const { render } = ReactDOM;

function App() {
  const handleSubmit = (event) => {
    event.preventDefault();
    let tempPlayer = {};
    Object.entries(event.target.elements).forEach(([name, input]) => {
      if (input.type != 'submit') {
        tempPlayer[input.name] = input.value;
      }
    });
    console.log(tempPlayer);
  };

  return (
    <form onSubmit={handleSubmit}>
      <label htmlFor="name">
        Name:
        <input type="text" name="name" placeholder="Estelle Nze Minko" />
      </label>
      <label htmlFor="poste">
        Poste:
        <input type="text" name="poste" placeholder="Back" />
      </label>
      <label htmlFor="number">
        Number:
        <input type="number" min="0" max="100" step="1" name="number" placeholder="27" />
      </label>
      <label htmlFor="height">
        Height (m):
        <input type="number" min="1.00" max="2.34" step="0.01" name="height" placeholder="1.78" />
      </label>
      <label htmlFor="selects">
        Number of selects:
        <input type="number" min="0" max="300" step="1" name="selects" placeholder="88" />
      </label>
      <button type="submit">Add</button>
    </form>
  );
}

render(<App />, root);

<script crossorigin src="https://unpkg.com/react@16/umd/react.development.js"></script>
<script crossorigin src="https://unpkg.com/react-dom@16/umd/react-dom.development.js"></script>
<div id="root" />

PS: Uncontrolled forms are known to have performance improvements and reduced rerenders, as shown in React Hook Form. You don't have to use a controlled component. See "You may not need Controlled Components" (swyx blog).
unknown
d3328
train
Solved; 'Category' is the name of my taxonomy:

@{
    var categoryName = "";
    foreach (dynamic term in Model.ContentItem.BlogPost.Category.Terms.Value)
    {
        categoryName = term.Name;
    }
}
unknown
d3329
train
This is the query as it would better be written:

SELECT host.key AS uid, daily_summary.date
FROM host INNER JOIN
     daily_summary
     USING (weekly_id);

In addition to removing the spurious commas, this also removes the unneeded quotes around the column aliases. Only use single quotes for string and date constants; otherwise, they are likely to cause confusion in queries. If you need to escape aliases in MySQL, then use backticks.

A: Check this version, simplified without typos:

SELECT host.key, daily_summary.date
FROM host
INNER JOIN daily_summary USING (weekly_id);

I also prefer to use the word ON instead of USING:

SELECT host.key, daily_summary.date
FROM host
INNER JOIN daily_summary ON host.weekly_id = daily_summary.weekly_id

A: The MySQL syntax error message shows you the part of your statement starting with the first character it can't understand. In your case the first characters it can't understand are FROM. It's looking for another column name after the comma.
unknown
d3330
train
To improve the speed of populating the FlowLayoutPanel with your user controls, disable layout updating while you add the controls. Immediately before your loop, call SuspendLayout() and then at the end call ResumeLayout(). Make sure to use a try-finally to guarantee the ResumeLayout() runs even if an exception occurs. A: I wouldn't add that many user controls. Rather, I'd have a series of data structures that stores information about what thumbnail to use, positioning, etc, etc, and then handle the rendering of each thumbnail required. Of course, you would only render what you need, by checking the paint event args in your control and rendering the thumbnails that are in view and that require rendering. A: Aha! I found something. When the UserControl is not in view and it receives a Paint event, then e.ClipRectangle.IsEmpty is true!
unknown
d3331
train
When the params tensor is in high dimensions, the ids only refer to the top dimension. Maybe it's obvious to most people, but I had to run the following code to understand that:

embeddings = tf.constant([[[1,1],[2,2],[3,3],[4,4]],
                          [[11,11],[12,12],[13,13],[14,14]],
                          [[21,21],[22,22],[23,23],[24,24]]])
ids = tf.constant([0,2,1])
embed = tf.nn.embedding_lookup(embeddings, ids, partition_strategy='div')

with tf.Session() as session:
    result = session.run(embed)
    print(result)

Just trying the 'div' strategy; for one tensor, it makes no difference. Here is the output:

[[[ 1  1]
  [ 2  2]
  [ 3  3]
  [ 4  4]]

 [[21 21]
  [22 22]
  [23 23]
  [24 24]]

 [[11 11]
  [12 12]
  [13 13]
  [14 14]]]

A: Yes, the purpose of the tf.nn.embedding_lookup() function is to perform a lookup in the embedding matrix and return the embeddings (or in simple terms the vector representation) of words.

A simple embedding matrix (of shape vocabulary_size x embedding_dimension) would look like below. (i.e. each word will be represented by a vector of numbers; hence the name word2vec.)

Embedding Matrix:

the       0.418     0.24968  -0.41242   0.1217    0.34527  -0.044457  -0.49688   -0.17862
like      0.36808   0.20834  -0.22319   0.046283  0.20098   0.27515   -0.77127   -0.76804
between   0.7503    0.71623  -0.27033   0.20059  -0.17008   0.68568   -0.061672  -0.054638
did       0.042523 -0.21172   0.044739 -0.19248   0.26224   0.0043991 -0.88195    0.55184
just      0.17698   0.065221  0.28548  -0.4243    0.7499   -0.14892   -0.66786    0.11788
national -1.1105    0.94945  -0.17078   0.93037  -0.2477   -0.70633   -0.8649    -0.56118
day       0.11626   0.53897  -0.39514  -0.26027   0.57706  -0.79198   -0.88374    0.30119
country  -0.13531   0.15485  -0.07309   0.034013 -0.054457 -0.20541   -0.60086   -0.22407
under     0.13721  -0.295    -0.05916  -0.59235   0.02301   0.21884   -0.34254   -0.70213
such      0.61012   0.33512  -0.53499   0.36139  -0.39866   0.70627   -0.18699   -0.77246
second   -0.29809   0.28069   0.087102  0.54455   0.70003   0.44778   -0.72565    0.62309

I split the above embedding matrix and loaded only the words in vocab, which will be our vocabulary, and the corresponding vectors in the emb array.

vocab = ['the','like','between','did','just','national','day','country','under','such','second']

emb = np.array([[0.418, 0.24968, -0.41242, 0.1217, 0.34527, -0.044457, -0.49688, -0.17862],
                [0.36808, 0.20834, -0.22319, 0.046283, 0.20098, 0.27515, -0.77127, -0.76804],
                [0.7503, 0.71623, -0.27033, 0.20059, -0.17008, 0.68568, -0.061672, -0.054638],
                [0.042523, -0.21172, 0.044739, -0.19248, 0.26224, 0.0043991, -0.88195, 0.55184],
                [0.17698, 0.065221, 0.28548, -0.4243, 0.7499, -0.14892, -0.66786, 0.11788],
                [-1.1105, 0.94945, -0.17078, 0.93037, -0.2477, -0.70633, -0.8649, -0.56118],
                [0.11626, 0.53897, -0.39514, -0.26027, 0.57706, -0.79198, -0.88374, 0.30119],
                [-0.13531, 0.15485, -0.07309, 0.034013, -0.054457, -0.20541, -0.60086, -0.22407],
                [0.13721, -0.295, -0.05916, -0.59235, 0.02301, 0.21884, -0.34254, -0.70213],
                [0.61012, 0.33512, -0.53499, 0.36139, -0.39866, 0.70627, -0.18699, -0.77246],
                [-0.29809, 0.28069, 0.087102, 0.54455, 0.70003, 0.44778, -0.72565, 0.62309]])

emb.shape   # (11, 8)

Embedding Lookup in TensorFlow: now we will see how we can perform an embedding lookup for an arbitrary input sentence.

In [54]: from collections import OrderedDict

# embedding as TF tensor (for now constant; could be tf.Variable() during training)
In [55]: tf_embedding = tf.constant(emb, dtype=tf.float32)

# input for which we need the embedding
In [56]: input_str = "like the country"

# build index based on our `vocabulary`
In [57]: word_to_idx = OrderedDict({w: vocab.index(w) for w in input_str.split() if w in vocab})

# lookup in embedding matrix & return the vectors for the input words
In [58]: tf.nn.embedding_lookup(tf_embedding, list(word_to_idx.values())).eval()
Out[58]:
array([[ 0.36807999,  0.20834   , -0.22318999,  0.046283  ,  0.20097999,
         0.27515   , -0.77126998, -0.76804   ],
       [ 0.41800001,  0.24968   , -0.41242   ,  0.1217    ,  0.34527001,
        -0.044457  , -0.49687999, -0.17862   ],
       [-0.13530999,  0.15485001, -0.07309   ,  0.034013  , -0.054457  ,
        -0.20541   , -0.60086   , -0.22407   ]], dtype=float32)

Observe how we got the embeddings from our original embedding matrix (with words) using the indices of the words in our vocabulary. Usually, such an embedding lookup is performed by the first layer (called the Embedding layer), which then passes these embeddings to RNN/LSTM/GRU layers for further processing.

Side Note: Usually the vocabulary will also have a special unk token. So, if a token from our input sentence is not present in our vocabulary, then the index corresponding to unk will be looked up in the embedding matrix.

P.S. Note that embedding_dimension is a hyperparameter that one has to tune for their application, but popular models like Word2Vec and GloVe use a 300-dimensional vector to represent each word.

Bonus Reading: word2vec skip-gram model

A: Another way to look at it is: assume that you flatten out the tensors to a one-dimensional array, and then you are performing a lookup. (e.g.) Tensor0=[1,2,3], Tensor1=[4,5,6], Tensor2=[7,8,9]

The flattened-out tensor will be as follows: [1,4,7,2,5,8,3,6,9]

Now when you do a lookup of [0,3,4,1,7] it will yield [1,2,5,4,6]

(i.e.) if the lookup value is 7, for example, and we have 3 tensors (or a tensor with 3 rows), then: 7 / 3 (remainder is 1, quotient is 2), so the 2nd element of Tensor1 will be shown, which is 6.

A: Since I was also intrigued by this function, I'll give my two cents. The way I see it in the 2D case is just as a matrix multiplication (it's easy to generalize to other dimensions).

Consider a vocabulary with N symbols. Then, you can represent a symbol x as a vector of dimensions Nx1, one-hot-encoded. But you want a representation of this symbol not as a vector of Nx1, but as one with dimensions Mx1, called y.

So, to transform x into y, you can use an embedding matrix E, with dimensions MxN:

y = E x

This is essentially what tf.nn.embedding_lookup(params, ids, ...) is doing, with the nuance that ids are just one number that represents the position of the 1 in the one-hot-encoded vector x.

A: Yes, this function is hard to understand, until you get the point. In its simplest form, it is similar to tf.gather. It returns the elements of params according to the indexes specified by ids. For example (assuming you are inside tf.InteractiveSession()):

params = tf.constant([10,20,30,40])
ids = tf.constant([0,1,2,3])
print tf.nn.embedding_lookup(params,ids).eval()

would return [10 20 30 40], because the first element (index 0) of params is 10, the second element of params (index 1) is 20, etc. Similarly,

params = tf.constant([10,20,30,40])
ids = tf.constant([1,1,3])
print tf.nn.embedding_lookup(params,ids).eval()

would return [20 20 40].

But embedding_lookup is more than that. The params argument can be a list of tensors, rather than a single tensor.

params1 = tf.constant([1,2])
params2 = tf.constant([10,20])
ids = tf.constant([2,0,2,1,2,3])
result = tf.nn.embedding_lookup([params1, params2], ids)

In such a case, the indexes specified in ids correspond to elements of the tensors according to a partition strategy, where the default partition strategy is 'mod'.

In the 'mod' strategy, index 0 corresponds to the first element of the first tensor in the list. Index 1 corresponds to the first element of the second tensor. Index 2 corresponds to the first element of the third tensor, and so on. Simply, index i corresponds to the first element of the (i+1)th tensor, for all the indexes 0..(n-1), assuming params is a list of n tensors.

Now, index n cannot correspond to tensor n+1, because the list params contains only n tensors. So index n corresponds to the second element of the first tensor. Similarly, index n+1 corresponds to the second element of the second tensor, etc.

So, in the code

params1 = tf.constant([1,2])
params2 = tf.constant([10,20])
ids = tf.constant([2,0,2,1,2,3])
result = tf.nn.embedding_lookup([params1, params2], ids)

index 0 corresponds to the first element of the first tensor: 1
index 1 corresponds to the first element of the second tensor: 10
index 2 corresponds to the second element of the first tensor: 2
index 3 corresponds to the second element of the second tensor: 20

Thus, the result would be: [ 2  1  2 10  2 20]

A: Here's an image depicting the process of embedding lookup. Concisely, it gets the corresponding rows of an embedding layer, specified by a list of IDs, and provides that as a tensor. It is achieved through the following process:

1. Define a placeholder: lookup_ids = tf.placeholder([10])
2. Define an embedding layer: embeddings = tf.Variable([100,10],...)
3. Define the tensorflow operation: embed_lookup = tf.embedding_lookup(embeddings, lookup_ids)
4. Get the results by running: lookup = session.run(embed_lookup, feed_dict={lookup_ids:[95,4,14]})

A: The embedding_lookup function retrieves rows of the params tensor. The behavior is similar to using indexing with arrays in numpy. E.g.:

matrix = np.random.random([1024, 64])  # 64-dimensional embeddings
ids = np.array([0, 5, 17, 33])
print matrix[ids]  # prints a matrix of shape [4, 64]

The params argument can also be a list of tensors, in which case the ids will be distributed among the tensors. For example, given a list of 3 tensors [2, 64], the default behavior is that they will represent ids: [0, 3], [1, 4], [2, 5]. partition_strategy controls the way the ids are distributed among the list. The partitioning is useful for larger-scale problems when the matrix might be too large to keep in one piece.

A: Adding to Asher Stern's answer, params is interpreted as a partitioning of a large embedding tensor. It can be a single tensor representing the complete embedding tensor, or a list of X tensors all of the same shape except for the first dimension, representing sharded embedding tensors. The function tf.nn.embedding_lookup is written considering the fact that the embedding (params) will be large. Therefore we need partition_strategy.

A: The existing explanations are not enough. The main purpose of this function is to efficiently retrieve the vectors for each word in a given sequence of word indices. Suppose we have the following matrix of embeddings:

embds = np.array([[0.2, 0.32, 0.9],
                  [0.8, 0.62, 0.19],
                  [0.0, -0.22, -1.9],
                  [1.2, 2.32, 6.0],
                  [0.11, 0.10, 5.9]])

Let's say we have the following sequences of word indices:

data = [[0,1],
        [3,4]]

Now to get the corresponding embedding for each word in our data:

tf.nn.embedding_lookup(embds, data)

out:
array([[[0.2 , 0.32, 0.9 ],
        [0.8 , 0.62, 0.19]],

       [[1.2 , 2.32, 6.  ],
        [0.11, 0.1 , 5.9 ]]])>

Note: If embds is not an array or tensor, the output will not be like this (I won't go into details). For example, if embds were a list, the output would be:

array([[0.2 , 0.32],
       [0.8 , 0.62]], dtype=float32)>
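Tying the "row gather" view to the "one-hot matrix multiplication" view from the answers above, a small NumPy sketch (illustration only; the values are arbitrary):

import numpy as np

E = np.arange(12, dtype=float).reshape(4, 3)  # 4-word vocab, 3-dim embeddings
ids = np.array([2, 0, 2])

gathered = E[ids]            # what embedding_lookup does: pick rows by index
one_hot = np.eye(4)[ids]     # each id as a one-hot row vector x
via_matmul = one_hot @ E     # y = x . E, per the matrix-multiplication answer

assert np.array_equal(gathered, via_matmul)  # both views give the same result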
unknown
d3332
train
In your web.config, in the appSettings tag, add the line:

<add key="enableSimpleMembership" value="true"/>

SimpleMembership is built in, so from here you simply need to write [InitializeSimpleMembership] above your public class AccountController : Controller. When you want to force a user to log in for a certain page, you write [Authorize] in that page's controller.

The tables will be automatically generated in your database. If you want to add more fields to these tables, you will need to simply google it. Here's a link for more information: http://weblogs.asp.net/jgalloway/archive/2012/08/29/simplemembership-membership-providers-universal-providers-and-the-new-asp-net-4-5-web-forms-and-asp-net-mvc-4-templates.aspx

A: Purely as a point of reference, it might be a good idea to create a new Internet Application template of an ASP.NET MVC 4 Web Application project (i.e. via File > New Project). If you look at the AccountController, as @zms6445 says, it is decorated with an InitializeSimpleMembership attribute. You can find the implementation of this attribute in the InitializeSimpleMembershipAttribute.cs file in the Filters folder within the root directory.

In here is the missing part of the puzzle: you need to hook up your existing database so that it is used by the SimpleMembershipProvider. This is the code you need:

private class SimpleMembershipInitializer
{
    public SimpleMembershipInitializer()
    {
        try
        {
            if (!WebSecurity.Initialized)
            {
                WebSecurity.InitializeDatabaseConnection("CONNECTION_STRING_NAME",
                    "USER_TABLE", "USER_ID_FIELD", "USER_NAME_FIELD",
                    autoCreateTables: true);
            }
        }
        catch (Exception ex)
        {
            throw new InvalidOperationException("Something is wrong", ex);
        }
    }
}

Some things to note:

* CONNECTION_STRING_NAME is an entry in your web.config ConnectionStrings. You CANNOT use the model connection string here; the SimpleMembershipProvider does not recognise that format! You need to specify a System.Data.SqlClient connection string, e.g.:

<add name="CONNECTION_STRING_NAME" connectionString="data source=SERVER;initial catalog=DATABASE;user id=USER;password=PASSWORD;" providerName="System.Data.SqlClient" />

* USER_TABLE is the table in your database to hold extra user information, such as first name, surname etc. This is linked to the autogenerated tables via the USER_ID_FIELD.
* USER_ID_FIELD is usually the primary key of your Users table. It must be of type int.
* USER_NAME_FIELD is a unique name for the user, which could be an email address.
* autoCreateTables is set to true to ensure the tables required for SimpleMembership to work are created if they don't already exist.

Of course, this code only gets fired if you hit a page via the AccountController, since this has been decorated by the attribute. You could put a breakpoint in there and see it in action.

This should get you started; the Internet Application template is a pretty good template to follow if you get stuck. Hope this helps.
unknown
d3333
train
Starting with Firebase Functions 1.0+, there are 2 kinds of HTTP functions that you can use for your Android app:

* Call functions directly, via functions.https.onCall
* Call functions through an HTTP request, via functions.https.onRequest

I recommend you use onCall as your functions endpoint and call it directly using FirebaseFunctions. This way, you don't need to get the FirebaseUser token, as it will be automatically included when you call using FirebaseFunctions. And remember, TypeScript is just a superset of JavaScript. I will still give examples in Node.js, but it is recommended to type your JavaScript code in TypeScript.

Example:

// index.js (Cloud Functions endpoint)
exports.importantfunc = functions.https.onCall((data, context) => {
  // Authentication / user information is automatically added to the request.
  if (!context.auth) {
    // Throwing an HttpsError so that the client gets the error details.
    // Note: the code must be one of the canonical HttpsError codes,
    // such as 'unauthenticated'.
    throw new functions.https.HttpsError('unauthenticated', 'The function must be called while authenticated.');
  }

  const uid = context.auth.uid;
  const email = context.auth.token.email;

  // Do whatever you want
});

// MyFragment.java
// Just some snippets of code examples
private void callFunction() {
    FirebaseFunctions func = FirebaseFunctions.getInstance();
    func.getHttpsCallable("importantfunc")
        .call()
        .addOnCompleteListener(new OnCompleteListener<HttpsCallableResult>() {
            @Override
            public void onComplete(@NonNull Task<HttpsCallableResult> task) {
                if (task.isSuccessful()) {
                    // Success
                } else {
                    // Failed
                }
            }
        });
}

More information about callable functions: read here.
unknown
d3334
train
It appears that there maybe a bug in the sample leading to this error. Please file an issue in the GitHub project's issue tracker so we can follow up.
unknown
d3335
train
I would like to move my middleware and socket connection to app.js, and in www just start the server.

You can separate the code like this and pass both the app and server variables to your app.js module, where it can run the rest of the initialization code (middleware, routes, socket.io setup, etc.):

// www
const express = require('express');
const app = express();
const server = require('http').Server(app);

const port = normalizePort(process.env.PORT || '3000');
app.set('port', port);

// load app.js and let it do its part of the initialization of app and server
require('./app.js')(app, server);

server.listen(port);
server.on('error', onError);
server.on('listening', onListening);

/**
 * Normalize a port into a number, string, or false.
 */
function normalizePort(val) {
  var port = parseInt(val, 10);

  if (isNaN(port)) {
    // named pipe
    return val;
  }

  if (port >= 0) {
    // port number
    return port;
  }

  return false;
}

/**
 * Event listener for HTTP server "error" event.
 */
function onError(error) {
  if (error.syscall !== 'listen') {
    throw error;
  }

  var bind = typeof port === 'string' ? 'Pipe ' + port : 'Port ' + port;

  // handle specific listen errors with friendly messages
  switch (error.code) {
    case 'EACCES':
      console.error(bind + ' requires elevated privileges');
      process.exit(1);
      break;
    case 'EADDRINUSE':
      console.error(bind + ' is already in use');
      process.exit(1);
      break;
    default:
      throw error;
  }
}

/**
 * Event listener for HTTP server "listening" event.
 */
function onListening() {
  var addr = server.address();
  var bind = typeof addr === 'string' ? 'pipe ' + addr : 'port ' + addr.port;
  debug('Listening on ' + bind);
}

And then this would be an outline for app.js:

const bodyParser = require('body-parser');
const siofu = require("socketio-file-upload");
const config = require('../config.json');
const appRoutes = require('./routes/Approutes');
const socketioJwt = require('socketio-jwt');
const socketIo = require('socket.io');

// export one function that gets called once as the server is being initialized
module.exports = function(app, server) {
  app.set('views', path.join(__dirname, '../views'));
  app.set('view engine', 'hjs');
  app.set('appProperties', { secret: config.secret });

  app.use(siofu.router);
  app.use(bodyParser.json());
  app.use(bodyParser.urlencoded({ extended: false }));

  var io = socketIo.listen(server);
  io.use(socketioJwt.authorize({ secret: config.secret, handshake: true }));
  app.io = io;

  app.use(function(req, res, next) {
    'use strict';
    req.io = io;
    next();
  });

  app.use('/', appRoutes);
  require('../sockets/user')(io);
}
unknown
d3336
train
This is definitely a Google Guava dependency conflict. The default constructor of the Stopwatch class became private as of Guava v17 and was marked deprecated even earlier. So for the HBase Java client to work properly, you need Guava v16 or earlier.

Check the way you build your application (Maven/Gradle/classpath) and find the dependency which uses Guava v17+. After that, you can resolve the conflict.

A: I received the same error and had to spend 5 days to find the issue. I added the following dependency and it's gone:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>15.0</version>
</dependency>

A: You can use the maven shade plugin to solve this issue; take a look at this blog post. Here is an example (actually a snippet from my working pom):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>3.0.0</version>
    <executions>
        <execution>
            <id>assemble-all</id>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <!--<finalName>PROJECT_NAME-${project.version}-shaded</finalName>-->
                <relocations>
                    <relocation>
                        <pattern>com.google.common</pattern>
                        <shadedPattern>shaded.com.google.common</shadedPattern>
                    </relocation>
                    <relocation>
                        <pattern>com.google.protobuf</pattern>
                        <shadedPattern>shaded.com.google.protobuf</shadedPattern>
                    </relocation>
                </relocations>
                <artifactSet>
                    <includes>
                        <include>*:*</include>
                    </includes>
                </artifactSet>
            </configuration>
        </execution>
    </executions>
</plugin>
unknown
d3337
train
Cast your field to the LinkField class and use its Class property:

LinkField field = Sitecore.Context.Item.Fields["Link"];
string cssClass = field.Class;

EDIT: If you want to change the behaviour of the Sitecore sc:link control to change the css class of every link, you need to add your own processor to the renderField pipeline:

public class UpdateLinkClass
{
    public void Process(Sitecore.Pipelines.RenderField.RenderFieldArgs args)
    {
        if (args != null && (args.FieldTypeKey == "link" || args.FieldTypeKey == "general link"))
        {
            Sitecore.Data.Fields.LinkField linkField = args.Item.Fields[args.FieldName];
            if (!string.IsNullOrEmpty(linkField.Class))
            {
                args.Parameters["class"] = linkField.Class + "-custom";
            }
        }
    }
}

and register it before the GetLinkFieldValue processor:

<processor type="My.Assembly.Namespace.UpdateLinkClass, My.Assembly" />
<processor type="Sitecore.Pipelines.RenderField.GetLinkFieldValue, Sitecore.Kernel" />
unknown
d3338
train
In UTC:

filter:
- range:
    "@timestamp":
      gte: "now/d+0h"
      lt: "now/d+2h"

A: now takes the time of the server.

filter:
- range:
    "@timestamp":
      "from": "now-2h"
      "to": "now"

A: If you want your alert to be effective for specific hours only, you can create an enhancement that drops the alert if the current time doesn't match your needs. Check https://elastalert.readthedocs.io/en/latest/recipes/adding_enhancements.html
unknown
d3339
train
Well, after lots of surfing I found a good webpage for programming language icons: Programming Language Icons. Thank you, guys!

A: I use Font Awesome for all types of icons. There are various websites; another is Flaticon.

A: There is Devicon, a set of icons representing programming languages and designing and development tools. You can use it as a font or directly copy/paste the SVG code into your project.
unknown
d3340
train
Have you talked to your AWS Solutions Architect about the use case? They love this kind of thing, they'll be happy to help you figure out the right architecture. It may be a good fit for the AWS IoT services? If you don't go with the managed IoT services, you'll want to push the messages to a scalable queue like Kafka or Kinesis (IMO, if you are processing 18M * 5 sensors = 90M events per day, that's >1000 events per second. Kafka is not overkill here; a lot of other stacks would be under-kill). From Kinesis you then flow the data into a faster stack for analytics / querying, such as HBase, Cassandra, Druid or ElasticSearch, depending on your team's preferences. Some would say that this is time series data so you should use a time series database such as InfluxDB; but again, it's up to you. Just make sure it's a database that performs well (and behaves itself!) when subjected to a steady load of 1000 writes per second. I would not recommend using a RDBMS for that, not even Postgres. The ones mentioned above should all handle it. Also, don't forget to flow your messages from Kinesis to S3 for safe keeping, even if you don't intend to keep the messages forever (just set a lifecycle rule to delete old data from the bucket if that's the case). After all, this is big data and the rule is "everything breaks, all the time". If your analytical stack crashes you probably don't want to lose the data entirely. As for alerting, it depends 1) what stack you choose for the analytical part, and 2) what kinds of triggers you want to use. From your description I'm guessing you'll soon end up wanting to build more advanced triggers, such as machine learning models for anomaly detection, and for that you may want something that doesn't poll the analytical stack but rather consumes events straight out of Kinesis.
unknown
d3341
train
You should try reading about Ajax.

A: Do you want this? http://jqueryui.com/demos/autocomplete/
unknown
d3342
train
The expression z+1 is an example of pointer arithmetic. The array z decays to a pointer to the first element of the array, i.e. &z[0]. z+1 means "take the address contained at z and add 1 array element to that address". This is the same as &z[1]. So this function call: r1 =f(3, z); Passes in the address of the first element of z, resulting in the array z being visible in the function. In this function call: r2 =f(1, z+1); It passes in the address of the second element of z. So the function is able to see the array starting from the second element.
unknown
d3343
train
It looks like you're expecting an AuthResult to get passed directly to mOnSignInSuccessListener. In this particular case, in my opinion, it's not worthwhile to try to coerce an extra Continuation to return the value you're looking for. Instead of trying to arrange for the AuthResult to be passed to that listener as a parameter, the listener can simply reach directly into mainTask.getResult(), or you can save the AuthResult in a member variable and access it that way. Either way, it's safe because the mOnSignInSuccessListener will only be invoked after mainTask completes, which ensures the AuthResult is ready.
unknown
d3344
train
You need the Thread.sleep before interrupting; otherwise, you are interrupting the child thread even before it has gotten a chance to start running. As per the API specs, "Interrupting a thread that is not alive need not have any effect." So, in effect, the interrupt statement is being ignored, as at that time the thread is not active. Without the sleep, the thread becomes active AFTER the interrupt and hence is never interrupted.

public class App {
    public static void main(String[] args) throws InterruptedException {
        MainThread mainThread = new MainThread();
        mainThread.start();
        Thread.sleep(1); // <== This line is needed as otherwise, the next line will
                         //     interrupt the thread even before it has started running!
        mainThread.childThread.interrupt();
    }
}
unknown
d3345
train
That's the expected behavior due to limitations applied to webviews loaded via the in-app-browser/Messenger/Facebook app.
unknown
d3346
train
The job should be triggered in case the PR is updated with new additional commits. E.g.: before a PR is merged, in case there are any new check-ins done which are now a part of the existing PR, this should trigger a Jenkins build. (I am not able to get this working.)

I'm not sure what you fully want here. Do you want to combine both PRs into a single PR when a check-in is done immediately while a PR is in progress? I don't think that's possible; why not wait till all check-ins are done and then do a pull request? (Pull requests are generally done when everything is finalized and you are sure of what needs to go, i.e. be pushed; in general this is done in a PROD environment.)

There is a Trigger Setup option with "cancel build on update", but then you will have to trigger a new PR. You can check the other options under Trigger Setup for commit-related triggers. Unless you provide more details on what you require, I can't help any further :)

The job should also be triggered if a simple push onto the master branch is made. (I am not able to get this working.)

I don't think Git Pull Requests handles pushes. For that you will have to explicitly check the GitHub branches in the Build Trigger option, and you can add restrictions on the branches, like 'build only master'.

P.S. As for your webhook: you have given permission to push and pull from Git, but the same needs to be applied from the Jenkins side :)
unknown
d3347
train
We can try

DT[V3==1 & 1:.N %in% 1:5, V2 := 1]

Or another option is

DT[intersect(which(V3==1), 1:5), V2 := 1]

Benchmarks:

set.seed(24)
DT <- data.table(1:1e6, 0, rbinom(1e6, 2, 0.5))
DT1 <- copy(DT)
DT2 <- copy(DT)

OP's version:

system.time({
  DT[V3 == 1 & DT[, .I <= 5], V2 := 1]
})
#  user  system elapsed
#  0.08    0.00    0.08

Modified options:

system.time({
  DT1[V3==1 & 1:.N %in% 1:5, V2 := 1]
})
#  user  system elapsed
#  0.14    0.00    0.14

system.time({
  DT2[intersect(which(V3==1), 1:5), V2 := 1]
})
#  user  system elapsed
#  0.05    0.00    0.05

A: Here's a faster way, using @akrun's example:

set.seed(24)
DT <- data.table(1:1e6, 0, rbinom(1e6, 2, 0.5))
DT1 <- copy(DT)
DT2 <- copy(DT)

library(microbenchmark)
microbenchmark(
  DT1[which(V3[1:5]==1L), V2 := 1],
  DT2[intersect(which(V3==1), 1:5), V2 := 1],
  times = 1, unit = "relative"
)
# Unit: relative
#        expr      min       lq     mean   median       uq      max neval
#  sequential  1.00000  1.00000  1.00000  1.00000  1.00000  1.00000     1
#     set_ops 55.43582 55.43582 55.43582 55.43582 55.43582 55.43582     1

It's "sequential" in the sense that we subset by index before evaluating the condition. The generalization is

cond = quote(V3 == 1)
indx = 1:5

DT[ DT[indx, indx[eval(cond)]], V2 := 1]
# or
set(DT, i = DT[indx, indx[eval(cond)]], j = "V2", v = 1)
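The "sequential" trick translated to Python/NumPy, purely as an illustration (not from the answer): evaluate the condition only on the candidate indices instead of on the whole column.

import numpy as np

rng = np.random.default_rng(24)
v3 = rng.integers(0, 3, size=1_000_000)
v2 = np.zeros_like(v3)

idx = np.arange(5)           # candidate rows (like 1:5)
hits = idx[v3[idx] == 1]     # test the condition on 5 values, not on 1e6
v2[hits] = 1                 # conditional assignment on the small subset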
unknown
d3348
train
This is due to the re-creation of the view on orientation change. If you are in portrait mode and you change to landscape, the view is created again, and you need to set the onClickListener again. The same happens if you start the activity in landscape mode and rotate to portrait.

A: Setting the map to null in the onCreate() method helped, as I needed to let the system handle orientation changes: first I set mMap = null, then I set up the map.
unknown
d3349
train
You're passing in a string '[0x86C543, 0xE6E6E6]' where you need an array. The [] brackets denote an Array, but by placing this in quotes it is read in as a string. Change this to:

b.setStyle('fillColors', [0x86C543, 0xE6E6E6]);

A: fillColors on mx:Button works only in the Halo theme. So you need to use the Flex 3 SDK, or try something like this in Flex 4: http://blog.flexexamples.com/2009/07/14/using-the-halo-theme-in-flex-4/

EDIT: Here's an example. Try to create a project with the Flex 4 SDK and put this code into the application:

<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               xmlns:mx="library://ns.adobe.com/flex/mx"
               minWidth="955" minHeight="600">
    <fx:Style>
        @namespace s "library://ns.adobe.com/flex/spark";
        @namespace mx "library://ns.adobe.com/flex/mx";

        mx|Button {
            fillColors: #FF0000, #00FF00;
        }
    </fx:Style>
    <mx:Button />
</s:Application>

There will be a warning: "The style 'fillColors' is only supported by type 'mx.controls.Button' with the theme(s) 'halo'." And it doesn't matter that you are using mx:Button; what matters is the fact that you need to use the Halo theme. So if you want to change fillColors of mx:Button, you have the 2 choices that I've posted before.
unknown
d3350
train
I usually have this problem when dealing with payment iframes in web applications. I think the solution is the same or a similar approach. Check the postMessage API. What we usually do is emit an event on the iframe (your webView, I guess) side. Usually it is a navigation event, and then we listen for that event globally in our app, like: <script> var defaultResponse = { status: 0, code: '', message: '' } function queryParamsToObject(search = '') { return search ? JSON.parse('{"' + search.replace(/&/g, '","').replace(/=/g, '":"') + '"}') : '' } function getResponseObject() { return queryParamsToObject(decodeURI(location.search).substring(1)) } var response = getResponseObject() console.log('window:child', window.parent, response) try { window.parent.postMessage(response, '*') } catch (e) { console.log('window:child:send:error', e) } </script> On a quick search of react-native-webview I see that they are already using it. Hope it helps. A: Check out this: render() { const html = ` <html> <head></head> <body> <script> setTimeout(function () { window.ReactNativeWebView.postMessage("Hello!") }, 2000) </script> </body> </html> `; return ( <View style={{ flex: 1 }}> <WebView source={{ html }} onMessage={event => { alert(event.nativeEvent.data); }} /> </View> ); }
unknown
d3351
train
It seems that you converted the CSV into a list of tuples in this line: information = [tuple(line) for line in csv.reader(file)] which results in: [(//...tuple 1...//) , (//...tuple 2...//) , ...etc] You'd better just concatenate the rows if you don't want nested lists (csv.reader already yields each row as a list, so you can extend with it directly): data_list = [] for line in csv.reader(file): data_list += line information = tuple(data_list)
unknown
d3352
train
The git worktree method described in comments will work on a Unix/Linux system, but probably not on Windows if your different users have different accounts (which they should, for sanity if nothing else). It has some drawbacks: in particular, while each working tree gets its own index, all the working trees share one single underlying repository, which means that Git commands that must write to the repository have to wait while someone else has the repository databases busy. How disruptive this might be in practice depends on how your users would use this. It's usually a much better idea to give each user their own full clone, even on a shared server, and even if it's a Unix/Linux system. They can then push to, and fetch from, one more clone on that shared server, that you designate as the "source of truth". There is only one drawback to this method of sharing, which is that each clone occupies its own extra disk space. However, this tends to be minor: when cloning a local repository locally, using file-oriented "URLs": git clone /path/to/source-of-truth.git work/my-clone Git will, to the extent possible, use "hard links" to files to save space. These hard links work quite well, although the links "break apart" over time (as files get updated) and gradually the clones wind up taking more and more space. This means that every once in a while—say, once a month or once a year or so, depending on activity—it may be helpful for space purposes to have your users-on-the-shared-server delete their clones and re-clone at a time that's convenient for them. This will re-establish complete sharing. (Of course, disk space is cheap these days, and the right way to handle this on a modern Linux server is probably to set up a ZFS pool with, say, a bunch of 8 or 12TB drives in a RAIDZ1 or RAIDZ2 configuration to get many terabytes of usable storage at around $30 per terabyte, even counting overhead and the cost of cabling and so on. You'll probably pay more for some high end Intel CPU than you do for all the drives put together.)
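A concrete sketch of the layout just described, with hypothetical paths and branch name (the directories, the repo name, and main are my own illustration, not prescribed):
# one bare repository on the shared server, designated as the "source of truth"
git init --bare /srv/git/source-of-truth.git

# each user makes a full local clone; cloning via a local file path
# hard-links the object files where possible, so it is cheap on disk
git clone /srv/git/source-of-truth.git ~alice/work/my-clone

# day-to-day flow from inside a user's clone
cd ~alice/work/my-clone
git fetch origin        # pick up work other users have published
git push origin main    # publish your own commits to the hub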
unknown
d3353
train
This page by Movable Type contains formulae for geospatial calculations and, even better, most of the formulae are already written in Javascript. So using their library and your example, I would do something like this: Starting Latitude: <span id="starting_lat">0</span><br> Starting Longitude: <span id="starting_long">0</span><br> Current Latitude: <span id="current_lat">0</span><br> Current Longitude: <span id="current_long">0</span><br> Speed: <span id="speed">0</span><br> Direct Distance (km): <span id="directDistance">0</span><br> Cumulative Distance (km): <span id="cumulativeDistance">0</span><br> <script> var startPos; var prevPos; var currentPos; var watchID; var cumulativeDist = 0; document.addEventListener("deviceready", startApp, false); //Start the Application function startApp() { // Start watching the location watchID = navigator.geolocation.watchPosition(onSuccess, onError, {timeout: 30000, maximumAge: 0, enableHighAccuracy: false }); // enableHighAccuracy=false => No GPS } function onSuccess(position) { if(currentPos){ prevPos = currentPos; } currentPos = new LatLon(position.coords.latitude, position.coords.longitude); document.getElementById('current_lat').innerHTML=(currentPos.lat()); document.getElementById('current_long').innerHTML=(currentPos.lon()); if(!startPos){ startPos = currentPos; document.getElementById('starting_lat').innerHTML=(startPos.lat()); document.getElementById('starting_long').innerHTML=(startPos.lon()); } if(prevPos){ getDistance(); } } function onError(error) { alert('code: ' + error.code + '\n' + 'message: ' + error.message + '\n'); } function getDistance(){ cumulativeDist += currentPos.distanceTo(prevPos); document.getElementById('cumulativeDistance').innerHTML=(cumulativeDist); var directDist = currentPos.distanceTo(startPos); document.getElementById('directDistance').innerHTML=(directDist); } </script>
unknown
d3354
train
Hmm, you could make a class that creates the properties at startup and, in this class, obtain the API properties via an HTTP request. Example below (the afterPropertiesSet body is left as a placeholder for your HTTP call): public class PropertyInit implements InitializingBean, FactoryBean { private Properties props = new Properties(); @Override public void afterPropertiesSet() throws Exception { // fetch the API properties via an HTTP request here and copy them into props } @Override public Object getObject() throws Exception { return props; } @Override public Class getObjectType() { return Properties.class; } @Override public boolean isSingleton() { return true; } } Now you should be able to load this property class with: <context:property-placeholder properties-ref="propertyInit"/> Hope you like this idea. I used this approach in a previous project. A: First, I want to give you a strong warning about doing this. If you go down this path you risk breaking your application in very strange ways: if any other components depend on this component and you have components changing dynamically at startup, you will break them, so you should consider whether there are other ways to achieve this behaviour instead of using properties. That said, the way to do this would be to use a proxy pattern, i.e. a proxy for the component that you recreate whenever its properties are changed. So you will need to create a class which extends Circuit Breaker and encapsulates an instance of Circuit Breaker that is recreated whenever its properties change. These properties must not be used outside of the proxy class, as other components may read these properties at startup and then not refresh; keep in mind that anything which might directly or indirectly access these properties cannot do so in its initialisation phase, or your application will break. It's worth taking a look at Spring Cloud Config, which allows you to have a properties server so that all your applications can hot-reload those properties at runtime when they change. I'm not sure whether you can take that path in Mule, or whether Spring Cloud is supported there yet, but it's a nice thing to know exists.
unknown
d3355
train
You can use .map { it.trim() } too, but otherwise Groovy does not have a method reference that works like the Java one.
unknown
d3356
train
1.) To use Dagger2, you need to include it as a dependency in your project. annotationProcessor 'com.google.dagger:dagger-compiler:2.9' compile 'com.google.dagger:dagger:2.9' provided 'org.glassfish:javax.annotation:10.0-b28' Then you can use Dagger2. 2.) Use it whenever you have a class which is a dependency of another class. So if you have something like this: public int getNewRandomNumber() { return new Random().nextInt(5000); } Then you have an implicit dependency on Random and it is not mockable. So you might want to provide this as a dependency to the method instead: public int getNewRandomNumber(Random random) { return random.nextInt(5000); } But you might be using Random in a bunch of places, so you might want to provide it to the class itself: public class RandomGenerator { private final Random random; public RandomGenerator(Random random) { this.random = random; } } After which the question surfaces: if you need an instance of RandomGenerator, where are you getting Random from? Surely this is stupid: RandomGenerator gen = new RandomGenerator(new Random()); Wouldn't it be nicer to do either this: RandomGenerator randomGenerator = component.randomGenerator(); or @Inject RandomGenerator randomGenerator; Well, apparently with Dagger it's pretty easy: @Module public class RandomModule { @Provides @Singleton Random random() { return new Random(); } } @Singleton @Component(modules={RandomModule.class}) public interface SingletonComponent { Random random(); RandomGenerator randomGenerator(); } then @Singleton public class RandomGenerator { private final Random random; @Inject public RandomGenerator(Random random) { this.random = random; } } or @Singleton public class RandomGenerator { @Inject Random random; @Inject public RandomGenerator() { } } You can read more about dependency injection here
unknown
d3357
train
If you are on an Apache machine, try this: function get_raw_http_request() { $request = "$_SERVER[REQUEST_METHOD] $_SERVER[REQUEST_URI] $_SERVER[SERVER_PROTOCOL]\r\n"; foreach (getallheaders() as $name => $value) { $request .= "$name: $value\r\n"; } $request .= "\r\n" . file_get_contents('php://input'); return $request; } http://php.net/manual/en/function.getallheaders.php A: Looking over the manual, there doesn't seem to be unparsed raw access to the request to match what you want, so I suspect you will need to re-build what you want from the $_SERVER variables. In a quick search I found this class and made some small changes to get the GET / HTTP/1.1 line; perhaps it will suit your needs. <?php /** * Access the HTTP Request * * Found on http://www.daniweb.com/web-development/php/code/216846/get-http-request-headers-and-body */ class http_request { /** additional HTTP headers not prefixed with HTTP_ in $_SERVER superglobal */ public $add_headers = array('CONTENT_TYPE', 'CONTENT_LENGTH'); /** * Constructor * Retrieve HTTP Body * @param Array Additional Headers to retrieve */ function __construct($add_headers = false) { $this->retrieve_headers($add_headers); $this->body = @file_get_contents('php://input'); } /** * Retrieve the HTTP request headers from the $_SERVER superglobal * @param Array Additional Headers to retrieve */ function retrieve_headers($add_headers = false) { if ($add_headers) { $this->add_headers = array_merge($this->add_headers, $add_headers); } if (isset($_SERVER['HTTP_METHOD'])) { $this->method = $_SERVER['HTTP_METHOD']; unset($_SERVER['HTTP_METHOD']); } else { $this->method = isset($_SERVER['REQUEST_METHOD']) ? $_SERVER['REQUEST_METHOD'] : false; } $this->protocol = isset($_SERVER['SERVER_PROTOCOL']) ? $_SERVER['SERVER_PROTOCOL'] : false; $this->request_method = isset($_SERVER['REQUEST_METHOD']) ? $_SERVER['REQUEST_METHOD'] : false; $this->headers = array(); foreach($_SERVER as $i=>$val) { if (strpos($i, 'HTTP_') === 0 || in_array($i, $this->add_headers)) { $name = str_replace(array('HTTP_', '_'), array('', '-'), $i); $this->headers[$name] = $val; } } } /** * Retrieve HTTP Method */ function method() { return $this->method; } /** * Retrieve HTTP Body */ function body() { return $this->body; } /** * Retrieve an HTTP Header * @param string Case-Insensitive HTTP Header Name (eg: "User-Agent") */ function header($name) { $name = strtoupper($name); return isset($this->headers[$name]) ? $this->headers[$name] : false; } /** * Retrieve all HTTP Headers * @return array HTTP Headers */ function headers() { return $this->headers; } /** * Return Raw HTTP Request (note: This is incomplete) * @param bool ReBuild the Raw HTTP Request */ function raw($refresh = false) { if (isset($this->raw) && !$refresh) { return $this->raw; // return cached } $headers = $this->headers(); $this->raw = "{$this->method} {$_SERVER['REQUEST_URI']} {$this->protocol}\r\n"; foreach($headers as $i=>$header) { $this->raw .= "$i: $header\r\n"; } $this->raw .= "\r\n{$this->body}"; return $this->raw; } } /** * Example Usage * Echoes the HTTP Request back to the client/browser */ $http_request = new http_request(); $resp = $http_request->raw(); echo nl2br($resp); /* Result (less <br> tags) GET / HTTP/1.1 HOST: localhost:8080 USER-AGENT: Mozilla/5.0 ... ACCEPT: text/html,application/xhtml+xml,application/xml;... ACCEPT-LANGUAGE: en-US,en;q=0.5 ACCEPT-ENCODING: gzip, deflate DNT: 1 COOKIE: PHPSESSID=... CONNECTION: keep-alive */ ?> P.S.: Don't forget to htmlentities() those values on output :) A: The raw request is not available to PHP, because the request has already been consumed by the time PHP starts. When PHP is running as an Apache module (mod_php), for instance, the request is received and parsed by Apache, and PHP is only invoked after Apache has parsed that request and determined that it refers to a file which should be processed by PHP. If PHP is running as a CGI or FastCGI handler, it never receives an HTTP request at all; it only sees the CGI form of the request, which is quite different.
unknown
d3358
train
You need to add the phone's USB configuration for your device to the udev rules file on Ubuntu. You need to add a udev rules file that contains a USB configuration for each type of device you want to use for development. In the rules file, each device manufacturer is identified by a unique vendor ID, as specified by the ATTR{idVendor} property. For a list of vendor IDs, see USB Vendor IDs, below. To set up device detection on Ubuntu Linux: Log in as root and create this file: /etc/udev/rules.d/51-android.rules. Use this format to add each vendor to the file: SUBSYSTEM=="usb", ATTR{idVendor}=="0bb4", MODE="0666", GROUP="plugdev" In this example, the vendor ID is for HTC. The MODE assignment specifies read/write permissions, and GROUP defines which Unix group owns the device node. Note: The rule syntax may vary slightly depending on your environment. Consult the udev documentation for your system as needed. For an overview of rule syntax, see this guide to writing udev rules. Now execute: chmod a+r /etc/udev/rules.d/51-android.rules The vendor ID for Sony Ericsson is 0FCE.
unknown
d3359
train
Use $sql = "INSERT INTO survey_answers (response_id, quest_id, response_value) VALUES ('".LAST_INSERT_ID()."', '".$id."', '."$value."')"; instead $sql = "INSERT INTO survey_answers (response_id, quest_id, response_value) VALUES (LAST_INSERT_ID(), $id, '$value')"; A: Ok I think I've got this licked, though there may be a better solution. In my form, I changed the radio buttons to read as follows: echo "<td bgcolor='#E8E8E8'><input type='radio' name='quest".$row[quest_id]."' value='$i'></td>"; The only change was adding 'quest' as a prefix to the ID# in the name attribute. In my PHP file: foreach ($_POST as $name=>$value) { if (strpos($name, "quest") !== FALSE) { $qID = substr($name, 5); $sql = "INSERT INTO survey_answers (response_id, quest_id, response_value) VALUES (LAST_INSERT_ID(), '$qID', '$value')"; $res = send_sql($sql, $link, $db) or die("Cannot add survey"); } } I'm checking the post values for those with the 'quest' prefix, then extracting the id# via substr. If there's a better way I'm all ears...
unknown
d3360
train
You figured it out, but overall the Discord API does not allow you to delete an ephemeral message. The best you can get is changing the message content.
unknown
d3361
train
The actual code should be 1) var orders = from o in Orders where o.OrderItems.Any(i => i.PartId == 100) select o; The Any() method returns a bool and is like the SQL "in" clause. This would get all the orders where Any of the OrderItems have a PartId of 100. 2a) // This will create a new anonymous type with the 2 details required var orderItemDetail = from o in Orders from i in o.OrderItems where i.PartId == 100 select new { o.OrderNumber, i.PartName }; The two from clauses are like an inner join. 2b) // This will populate the OrderItemSummary type var orderItemDetail = from o in Orders from i in o.OrderItems where i.PartId == 100 select new OrderItemSummary() { OriginalOrderNumber = o.OrderNumber, PartName = i.PartName }; 3) // This will create a new anonymous type with two properties, one being the whole OrderItem object. var orderItemDetail = from o in Orders from i in o.OrderItems where i.PartId == 100 select new { OrderNumber = o.OrderNumber, Item = i }; Since "i" is an object of type OrderItem, Item is created as an OrderItem.
unknown
d3362
train
You can define a Map from fruits to their returned values, like: Map<String, String> returnedValue = { "APPLE" : "Vitamin A", "ORANGE" : "Vitamin C", "BANANA" : "Vitamin K", }; and return from this. All your code would look like this: final Function(String) returnFunction; String myFruits; String myVitamin; List<String> fruits = [ "APPLE", "ORANGE", "BANANA", ]; Map<String, String> returnedValue = { "APPLE" : "Vitamin A", "ORANGE" : "Vitamin C", "BANANA" : "Vitamin K", }; DropdownSearch( onChanged: (dynamic value) { myFruits = value; myVitamin = returnedValue[value]; returnFunction(myVitamin); // if you want to return from this class }, mode: Mode.DIALOG, items: fruits, ),
unknown
d3363
train
I had a SPA (single page application) written in React communicating with a REST JSON API written in Node.js and hosted on Heroku as a monolith. I migrated to AWS Lambda and split the monolith into 3+ AWS Lambda microservices inside of a monorepo. The following project structure is good if your SPA requires users to log in to be able to do anything. I used a single git repository where I have a folder for each service: * api * app * www Inside each of the services' folders I have a serverless.yml defining the deployment to a separate AWS Lambda. Each service maps to only 1 function, index, which accepts all HTTP endpoints. I use 2 environments, staging and production. My AWS Lambdas are named like this: * example-api-staging-index * example-api-production-index * example-app-staging-index * example-app-production-index * example-www-staging-index * example-www-production-index I use AWS API Gateway's Custom Domain Names to map each Lambda to a public domain: * api.example.com (REST JSON API) * app.example.com (single page application that requires login) * www.example.com (server side rendered landing pages) * api-staging.example.com * app-staging.example.com * www-staging.example.com You can define the domain mapping using serverless.yml Resources or a plugin, but you have to do this only once, so I did it manually from the AWS web console. My .com domain was hosted on GoDaddy but I migrated it to AWS Route 53, since HTTPS certificates are free. app service * /bundles-production * /bundles-staging * src (React/Angular single page application here) * handler.js * package.json * serverless.yml The app service contains a folder /src with the single page application code. The SPA is built locally on my computer into either ./bundles-production or ./bundles-staging based on the environment. The build generates the .js and .css bundles and also the index.html. The content of the folder is deployed to an S3 bucket using the serverless-s3-deploy plugin when I run serverless deploy -v -s production. I defined only 1 function that gets called for all endpoints in the serverless.yml (I use JSON instead of YAML): ... "functions": { "index": { "handler": "handler.index", "events": [ { "http": "GET /" }, { "http": "GET /{proxy+}"} ] }, }, The handler.js file returns the index.html defined in /bundles-staging or /bundles-production. I use webpack to build the SPA since it integrates very well with serverless via the serverless-webpack plugin. api service I used aws-serverless-express to define all the REST JSON API endpoints. aws-serverless-express is like normal Express, but you can't do some things like express.static() and res.sendFile(). I initially tried to use a separate AWS Lambda function for each endpoint instead of aws-serverless-express, but I quickly hit the CloudFormation mappings limit. www service If most of the functionality of your single page application requires a login, it's better to place the SPA on a separate domain and use the www domain for server-side rendered landing pages optimized for SEO. BONUS: graphql service Using a microservice architecture makes it easy to experiment. I'm currently rewriting the REST JSON API in GraphQL using apollo-server-lambda. A: I have done pretty much the same architecture and hosted the single page app on S3. What you can do is set up CloudFront for API Gateway and then point api.yourDomain.com to that CloudFront distribution. Then you will also need to enable CORS on your API. This plugin handles setting up the domain and CloudFront for you: https://github.com/amplify-education/serverless-domain-manager I am not sure about your project requirements, but if you want to serve static files faster, setting up domain -> CloudFront -> S3 might be a wise choice. A: Here is something meaningful and worth reading about serverless architecture and Azure serverless services: https://sps-cloud-architect.blogspot.com/2019/12/what-is-serverless-architecture.html
unknown
d3364
train
You are fetching data from the underlying CONFIG table here and annotating FEATURE with @Id, i.e. asking Hibernate to fetch only unique records. Null cannot be treated as a 'unique' value, so your list ends up fetching only the valid unique values. Just for checking: * Try having the FEATURE column hold only 2 values repeated at multiple positions; you will get only 2 distinct values in your list. * Try to insert values into FEATURE through Java, using null as one of the values; you will get a unique constraint violation exception.
unknown
d3365
train
The website you have linked to does a POST of the form ajaxUploadForm using the jQuery ajaxForm function. I would presume that extra input data will be included when you add named input elements to the ajaxUploadForm form. Try it out: change the markup to the following (borrowed from the site in question): <script type="text/javascript"> $(function() { $("#ajaxUploadForm").ajaxForm({ iframe: true, dataType: "json", beforeSubmit: function() { $("#ajaxUploadForm").block({ message: '<h1><img src="/Content/busy.gif" /> Uploading file...</h1>' }); }, success: function(result) { $("#ajaxUploadForm").unblock(); $("#ajaxUploadForm").resetForm(); $.growlUI(null, result.message); }, error: function(xhr, textStatus, errorThrown) { $("#ajaxUploadForm").unblock(); $("#ajaxUploadForm").resetForm(); $.growlUI(null, 'Error uploading file'); } }); }); </script> <form id="ajaxUploadForm" action="<%= Url.Action("AjaxUpload", "Upload")%>" method="post" enctype="multipart/form-data"> <fieldset> <legend>Upload a file</legend> <label>File to Upload: <input type="file" name="file" />(100MB max size)</label> <input type="text" id="someOtherInputElement" name="someOtherInputElement" value="Test" /> <input id="ajaxUploadButton" type="submit" value="Submit" /> </fieldset> </form>
unknown
d3366
train
So you can do this with formulas, but it's a bit involved. Bottom line is that here is the result I came up with: The drop-down was created dynamically using dynamic named ranges and formulas We need to start out with some definitions. This is my test worksheet and data: The formulas will work out using the named ranges, so you can put your "working area" (the boxed area in green) almost anywhere, including on a different (possibly hidden) worksheet. You must define four, dynamic named ranges as follows, which match the color-shaded areas in the image above: Many of these formulas are array formulas, so you must be careful to enter them with CTRL+SHIFT+ENTER. Once your data areas and names are defined, the first area to populate is the UniqueClusterList (the range on the sheet is F2:M2). We're building a list of unique items, based on the data in your column of cluster values. So you need an array formula that identifies all the unique values in the range. In each cell in the range, enter the array formula CTRL+SHIFT+ENTER for each: Cell F2 =IFERROR(LOOKUP(2,1/(COUNTIF($E$2:E2,ClusterList)=0),ClusterList),"") Cell G2 =IFERROR(LOOKUP(2,1/(COUNTIF($E$2:F2,ClusterList)=0),ClusterList),"") Cell H2 =IFERROR(LOOKUP(2,1/(COUNTIF($E$2:G2,ClusterList)=0),ClusterList),"") Cell I2 =IFERROR(LOOKUP(2,1/(COUNTIF($E$2:H2,ClusterList)=0),ClusterList),"") ... and so on. Notice that only the cell address in the middle is changing. Next, we need to build up the list of IDs for each unique Cluster value. This is also an array formula. Starting in cell F3 with CTRL+SHIFT+ENTER: =IFERROR(INDEX(IDList, SMALL(IF(F$2=ClusterList, ROW(IDList)-2,""), ROW()-2)),"") Then use your cursor to grab the auto-fill icon in the selection box of that cell and drag it down to cell F16. Since cells F3:F16 are now selected, re-grab the auto-fill icon and drag to the right to fill the whole range F2:M16. All of the values should pop-in as calculated by the formulas. Your final step is to create the lookup formula for the drop-down list. So select cell C3, then on the ribbon click Data --> Data Tools --> Data Validation to get the dialog window. Now select Allow: List, and in the Source: field enter the following formula: =OFFSET($F$2,1,MATCH(A3,UniqueClusterList,0)-1,SUMPRODUCT(COUNTIF(IDList,OFFSET($F$2,0,MATCH(A3,UniqueClusterList,0)-1,MAXUNIQUE,1))),1) You'll now have a drop-down in cell C3 that matches the very first image above. Drag the auto-fill selection icon all the way down to cell C20 and all of those cells will correctly calculate the drop-down list based on the available Clusters and IDs.
unknown
d3367
train
First: I am not familiar with the array format that you showed in your post. I have never seen an array instantiated in Python using just square brackets. That's not a function call. Second: Your problem may not be fully specified. However: if you have shown us all the possible values that you can have in your input arrays, then only 1, 2, and 3 are possible. In that case, there are only 9 possible pairings of elements from array_1 and array_2, and the values you want as outputs can easily be stored in a 3 x 3 lookup table. Finally: I use Numpy. You seem to want arrays, and Numpy is very common and handles arrays well. import numpy as np from itertools import product arr1 = np.array([[1,2,2],[1,2,3],[1,2,1]], dtype=int) arr2 = np.array([[1,1,2],[2,2,3],[3,1,1]], dtype=int) lookup = np.array(((0,10,15),(5,0,15),(5,10,0)), dtype=int) result = np.zeros_like(arr1) for r, c in product(*[range(x) for x in arr1.shape]): a, b = arr1[r,c], arr2[r,c] result[r,c] = lookup[a-1,b-1] print(arr1, "\n") print(arr2, "\n") print(result) Here's the output: [[1 2 2] [1 2 3] [1 2 1]] [[1 1 2] [2 2 3] [3 1 1]] [[ 0 5 0] [10 0 0] [15 5 0]] A: You can do this in a list comprehension: Use a conditional expression to produce your values. You put the if, else statement first and then iterate through to replace the values. I have created a list and pulled a number at random from that list. import random array_1 = [[1, 2, 2], [1, 2, 3], [1, 2, 1]] array_2 = [[1, 1, 2], [2, 2, 3], [3, 1, 1]] lst = [5, 10, 15, 20, 25, 30, 35, 40, 45, 50] res = [[random.choice(lst) if i != j else 0 for i, j in zip(a, b)] for a, b in zip(array_1, array_2)] for i in res: print(i) Returns: [0, 25, 0] [50, 0, 0] [35, 45, 0]
unknown
d3368
train
Set bezierCurve: false for the charts where you are having this problem.
unknown
d3369
train
The query you posted should work with no problem: SELECT SUM(ABS(`number_x` - `number_y`)) AS `total_difference` FROM `table` Or, if you want to write it with a subquery, like so: SELECT SUM(diff) AS `total_difference` FROM ( SELECT Id, ABS(number_x - number_y) diff FROM TableName ) t A: SELECT SUM(ABS(`number_x` - `number_y`)) AS `total_difference` FROM `table` OR you can write it the other way below, but I'd prefer the one above; note that the version below takes the absolute difference of the sums, not the sum of the absolute differences, so the results can differ: SELECT ABS(SUM(number_x) - SUM(number_y)) AS total_difference FROM table
unknown
d3370
train
Sure, the difference between the following two: [f(x) for x in list] and this: (f(x) for x in list) is that the first will generate the list in memory, whereas the second is a new generator, with lazy evaluation. So, simply write the "unfiltered" list as a generator instead. Here's your code, with the generator inline: def myFunction(x): print("called for: " + str(x)) return x * x originalList = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] limit = 10 result = [C2 for C2 in ((myFunction(C), C) for C in originalList) if C2[0] < limit] # result = [C2 for C2 in [(myFunction(C), C) for C in originalList] if C2[0] < limit] Note that you will not see a difference in the printout from the two, but if you were to look at memory usage, the second statement, which is commented out, will use more memory. To make a simple change to the code in your question, rewrite unfiltered by changing the outer square brackets to parentheses: unfiltered = ( (myFunction(C),C) for C in originalList ) A: Don't use a list comprehension; a normal for loop is fine here. A: Just compute the distances beforehand and then filter the results: with_distances = ((myFunction(C), C) for C in originalList) result = [C for C in with_distances if C[0] < limit] Note: instead of building a new list, I use a generator expression to build the distance/element pairs. A: Some options: * Use memoization * Use a normal for loop * Create an unfiltered list, then filter it (your option 1). The 'wasted' memory will be reclaimed by the GC very quickly - it's not something you need to worry about. A: Lasse V. Karlsen has an excellent reply to your question. If your distance computation is slow, I guess your elements are polylines, or something like that, right? There are lots of ways to make it faster: * If the distance between the bounding boxes of two objects is > X, then it follows that the distance between those objects is > X. So you only need to compute the distance between bounding boxes. * If you want all objects that are at a distance less than X from object A, only objects whose bounding box intersects A's bounding box enlarged by X are potential matches. Using the second point you can probably drop lots of candidate matches and only do the slow computation when needed. Bounding boxes must be cached beforehand. If you really have a lot of objects you could also use space partitioning... Or convex enclosing polys if you are in 3D. A: Rather than using a global variable as in your option 2, you could rely on the fact that in Python parameters are passed by object - that is, the object that is passed into your myFunction function is the same object as the one in the list (this isn't exactly the same thing as call by reference, but it's close enough). So, if your myFunction sets an attribute on the object - say, _result - you could filter by that attribute: result = [(C._result, C) for C in originalList if myFunction(C) < limit] and your myFunction might look like this: def myFunction(obj): obj._result = ... calculation .... return obj._result A: What's wrong with option 1? "duplicate my originalList and waste some memory (the list could be quite big - more than 10,000 elements)" 10,000 elements is only 10,000 pointers to tuples that point to existing objects. Think 160K or so of memory. Hardly worth talking about.
unknown
d3371
train
The cancel method will stop all following executions, but not the current one. If your method execution takes a long time, then it is possible that by the time you call cancel, the method has already begun executing. The best way to make sure the method does not execute is to call cancel from within the run() function itself.
unknown
d3372
train
You need to change your saveEdits function to check whether anything is already saved in storage under the same key. To achieve this, I recommend you use the getItem and setItem methods of the API; here is an example of how you can do it. function saveEdits() { //get the editable element var editElem = document.getElementById("edit"); //get the edited element content var userVersion = editElem.innerHTML; //get previously saved edits const previousEditsStr = localStorage.getItem('userEdits'); // parse already saved edits or create an empty array const savedEdits = previousEditsStr ? JSON.parse(previousEditsStr) : []; // push the latest one savedEdits.push(userVersion); //stringify and save the content to local storage localStorage.setItem('userEdits', JSON.stringify(savedEdits)); //write a confirmation to the user document.getElementById("update").innerHTML = "Edits saved!"; } Please note that the storage here is limited and you need to keep it under control. For example, you can limit the number of previously saved comments. Since we are saving an array, that means you need to change your reading part as well: function checkEdits() { const userEdits = localStorage.getItem('userEdits'); //find out if the user has previously saved edits if (userEdits) { // here are the saved edits const comments = JSON.parse(userEdits); // showing the most recently saved message document.getElementById("edit").innerHTML = comments[comments.length - 1]; } }
unknown
d3373
train
I think your HTTP request is getting called all the time while you are scrolling. You just have to add the following code in the onScroll of your setOnScrollListener: final int lastItem = firstVisibleItem + visibleItemCount; if(lastItem == totalItemCount) { //your http request } I hope this will work for you.
unknown
d3374
train
I think the problem comes from a name conflict. There are 2 objects named 'sonarqube': * The SonarQube task * The SonarQube extension It seems not to break your build, but here, when you write sonarqube.enabled, it accesses the extension (according to your stacktrace). The solution is probably to disambiguate using tasks.sonarqube.enabled. See https://docs.gradle.org/current/userguide/more_about_tasks.html#N11143
unknown
d3375
train
This is my approach to solving this issue: * Determine vertical lines * Determine horizontal lines * Find their intersections, which are the joints For the first step, check each column, determine thin lines, and make them black (0). The result will be only vertical lines. For the second step, do the reverse. At the end, compare the vertical line image with the horizontal line image. The pixels which are white (255) in both are the intersection points. Note: Please do not blame me for coding in C++. I am not familiar with Python; I just wanted to show my approach and results. Here are the code and results: Source: Vertical Lines: Horizontal Lines: Result: The code: #include <opencv2/highgui/highgui.hpp> #include <opencv2/imgproc/imgproc.hpp> #include <iostream> using namespace std; using namespace cv; int main() { Mat img = imread("/ur/image/directory/joints.jpg",1); imshow("Source",img); int checker = 1,checker2 = 1; int begin_y,finish_y2,finish_y,begin_y2; Mat vertical_img = img.clone(); Mat horizontal_img = img.clone(); cvtColor(vertical_img,vertical_img,CV_BGR2GRAY); cvtColor(horizontal_img,horizontal_img,CV_BGR2GRAY); int finish_checker = 0,finish_checker2=0; for(int i=0;i<horizontal_img.rows;i++) { for(int j=0;j<horizontal_img.cols;j++) { if(horizontal_img.at<uchar>(Point(j,i))>100 && checker) { begin_y = j; checker = 0; } if(horizontal_img.at<uchar>(Point(j,i))<20 && checker==0) { finish_y = j; checker = 1; finish_checker = 1; } if(finish_checker) { if((finish_y-begin_y)<30) { for(int h=begin_y-2;h<=finish_y;h++) { horizontal_img.at<uchar>(Point(h,i)) = 0; } } finish_checker = 0; } } } imshow("Horizontal",horizontal_img); for(int i=0;i<vertical_img.cols;i++) { for(int j=0;j<vertical_img.rows;j++) { if(vertical_img.at<uchar>(Point(i,j))>100 && checker2) { begin_y2 = j; checker2 = 0; } if(vertical_img.at<uchar>(Point(i,j))<50 && checker2==0) { finish_y2 = j; checker2 = 1; finish_checker2 = 1; } if(finish_checker2) { if((finish_y2-begin_y2)<30) { for(int h=begin_y2-2;h<=finish_y2;h++) { vertical_img.at<uchar>(Point(i,h)) = 0; } } finish_checker2 = 0; } } } imshow("Vertical",vertical_img); for(int y=0;y<img.cols;y++) { for(int z=0;z<img.rows;z++) { if(vertical_img.at<uchar>(Point(y,z))>200 && horizontal_img.at<uchar>(Point(y,z))>200) { img.at<cv::Vec3b>(z,y)[0]=0; img.at<cv::Vec3b>(z,y)[1]=0; img.at<cv::Vec3b>(z,y)[2]=255; } } } imshow("Result",img); waitKey(0); return 0; }
Horizontal/vertical line masks Detected joints in green Results import cv2 import numpy as np # Load image, grayscale, Gaussian blur, Otsus threshold image = cv2.imread('1.jpg') gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) blur = cv2.GaussianBlur(gray, (3,3), 0) thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1] # Find horizonal lines horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (10,1)) horizontal = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, horizontal_kernel, iterations=2) # Find vertical lines vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,10)) vertical = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, vertical_kernel, iterations=2) # Find joints joints = cv2.bitwise_and(horizontal, vertical) # Find centroid of the joints cnts = cv2.findContours(joints, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: # Find centroid and draw center point M = cv2.moments(c) cx = int(M['m10']/M['m00']) cy = int(M['m01']/M['m00']) cv2.circle(image, (cx, cy), 3, (36,255,12), -1) cv2.imshow('thresh', thresh) cv2.imshow('horizontal', horizontal) cv2.imshow('vertical', vertical) cv2.imshow('joints', joints) cv2.imshow('image', image) cv2.waitKey()
unknown
d3376
train
For a start you have a space in the Path of your Binding for the AutoCompleteBox.Text property which I don't think is allowed. A: After looking into this, it seems like it doesn't have anything to do with the DataGridTemplateColumn, but rather with the AutoCompleteBox from the Wpf Toolkit. The AutoCompleteBox has been nothing but trouble for me ever since I started using it. As a result, I decided to scrap it and use an Editable ComboBox instead. The combobox is much cleaner and more simple to implement. Here is how my code now looks and the datarowview is able to see what the user types in the box: <DataGridTemplateColumn Header="Account Type"> <DataGridTemplateColumn.CellTemplate> <DataTemplate> <TextBlock Text="{Binding Path='Account Type'}" /> </DataTemplate> </DataGridTemplateColumn.CellTemplate> <DataGridTemplateColumn.CellEditingTemplate> <DataTemplate> <ComboBox IsEditable="True" LostFocus="LostFocusAccountTypes" ItemsSource="{DynamicResource types}" Height="23" IsTextSearchEnabled="True"/> </DataTemplate> </DataGridTemplateColumn.CellEditingTemplate> </DataGridTemplateColumn> Code Behind (this.Types is an observable collection of strings) private void PopulateAccountTypes() { try { string accountQuery = "SELECT AccountType FROM AccountType WHERE UserID = " + MyAccountant.DbProperties.currentUserID + ""; SqlDataReader accountType = null; SqlCommand query = new SqlCommand(accountQuery, MyAccountant.DbProperties.dbConnection); accountType = query.ExecuteReader(); while (accountType.Read()) { this.Types.Add(accountType["AccountType"].ToString()); } accountType.Close(); Resources["types"] = this.Types; } catch (Exception ex) { MessageBox.Show(ex.Message); } }
unknown
d3377
train
You can use CASE WHEN on the hour to merge multiple hours into time ranges, and then group by it. select (case when date_part('hour', ts.a_start_time) <= 6 then '1 to 6' when date_part('hour', ts.a_start_time) <= 12 then '6 to 12' when date_part('hour', ts.a_start_time) <= 18 then '12 to 18' else '18 to 23' end ) AS hour, count(*) as c, count(*) FILTER (WHERE EXTRACT(dow FROM ts.a_start_time) = 0) AS s, count(*) FILTER (WHERE EXTRACT(dow FROM ts.a_start_time) = 1) AS m, count(*) FILTER (WHERE EXTRACT(dow FROM ts.a_start_time) = 2) AS t, count(*) FILTER (WHERE EXTRACT(dow FROM ts.a_start_time) = 3) AS w, count(*) FILTER (WHERE EXTRACT(dow FROM ts.a_start_time) = 4) AS th, count(*) FILTER (WHERE EXTRACT(dow FROM ts.a_start_time) = 5) AS f, count(*) FILTER (WHERE EXTRACT(dow FROM ts.a_start_time) = 6) AS sa from flight_schedules as fs inner join resource_mapping as rm on rm.flight_schedules_id = fs.id inner join task_schedule_details as tsd on tsd.id = rm.task_schedule_detail_id inner join task_status as ts on ts.resource_mapping_id = rm.id inner join task_master as tm on tm.id = tsd.task_id inner join delay_code_master as dcm on dcm.id = ts.delay_code_id inner join delay_categorization as dc on dc.id = dcm.delay_category_id where fs.station=81 group by hour order by hour ASC
unknown
d3378
train
You have to inject $state into your controller: .controller ("mainCtrl", function($scope, $state) { Then you can use it: $state.go('results'); A: You have to use $state.go('results'); instead of $state.go('results', "");. The second argument holds the state parameters passed to the $state.go method. I guess it's not working because you leave this as an empty string; if you don't put anything there, it will use the default values.
unknown
d3379
train
def post_params params.require(:post).permit(:title, :content) end Change params_require to params.require
unknown
d3380
train
You should take a look at the source code of tf.nn.dynamic_rnn, specifically the _dynamic_rnn_loop function at python/ops/rnn.py; it's solving the same problem. In order not to blow up the graph, it's using tf.while_loop to reuse the same graph ops for new data. But this approach adds several restrictions, namely that the shapes of the tensors passed through the loop must be invariant. See the examples in the tf.while_loop documentation: i0 = tf.constant(0) m0 = tf.ones([2, 2]) c = lambda i, m: i < 10 b = lambda i, m: [i+1, tf.concat([m, m], axis=0)] tf.while_loop( c, b, loop_vars=[i0, m0], shape_invariants=[i0.get_shape(), tf.TensorShape([None, 2])])
unknown
d3381
train
In the case of your application you should probably think about adapting some algorithms from bioinformatics. For example, you could first unify your strings by making sure that all separators are spaces or anything else you like, such that you would compare "Alan Turing" with "Turing Alan". Then split one of the strings and run an exact string matching algorithm (like the Horspool algorithm) with the pieces against the other string, counting the number of matching substrings. If you would like to find matches that are merely similar but not equal, something along the lines of a local alignment might be more suitable, since it provides a score that describes the similarity, but the referenced Smith-Waterman algorithm is probably a bit overkill for your application and not even the best local alignment algorithm available. Depending on your programming environment there is a possibility that an implementation is already available. I personally have worked with SeqAn lately, which is a bioinformatics library for C++ and definitely provides the desired functionality. Well, that was a rather abstract answer, but I hope it points you in the right direction; sadly it doesn't provide you with a simple formula to solve your problem. A: Have a look at the Jaccard distance metric (JDM). It's an oldie-but-goodie that's pretty adept at token-level discrepancies such as last name first, first name last. For two string comparands, the JDM calculation is simply the number of unique characters the two strings have in common divided by the total number of unique characters between them (in other words the intersection over the union). For example, given the two arguments "JEFFKTYZZER" and "TYZZERJEFF," the numerator is 7 and the denominator is 8, yielding a value of 0.875. My choice of characters as tokens is not the only one available, BTW--n-grams are often used as well. A: One of the easiest and most effective modern alternatives to edit distance is called the Normalized Compression Distance, or NCD. The basic idea is easy to explain. Choose a popular compressor that is implemented in your language, such as zlib. Then, given string A and string B, let C(A) be the compressed size of A and C(B) be the compressed size of B. Let AB mean "A concatenated with B", so that C(AB) means "the compressed size of A concatenated with B". Next, compute the fraction (C(AB) - min(C(A),C(B))) / max(C(A), C(B)) This value is called NCD(A,B) and measures similarity much like edit distance, but supports more forms of similarity depending on which data compressor you choose. Certainly, zlib supports the "chunk" style similarity that you are describing. If two strings are similar, the compressed size of the concatenation will be near the size of each alone, so the numerator will be near 0 and the result will be near 0. If two strings are very dissimilar, the compressed size together will be roughly the sum of the compressed sizes, and so the result will be near 1. This formula is much easier to implement than edit distance or almost any other explicit string similarity measure if you already have access to a data compression program like zlib. This is because most of the "hard" work, such as heuristics and optimization, has already been done in the data compression part, and this formula simply extracts the amount of similar patterns it found using generic information theory that is agnostic to language.
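A minimal Python sketch of the NCD formula just described, using zlib (the function name ncd and the sample inputs are my own illustration, not part of the original answer):
import zlib

def ncd(a: bytes, b: bytes) -> float:
    # Compressed sizes of each input alone and of the concatenation.
    c_a = len(zlib.compress(a))
    c_b = len(zlib.compress(b))
    c_ab = len(zlib.compress(a + b))
    # Near 0 for similar inputs, near 1 for dissimilar ones.
    return (c_ab - min(c_a, c_b)) / max(c_a, c_b)

# Similar strings score lower than dissimilar ones. Note that for very
# short inputs the compressor's fixed header overhead skews the values,
# so this works best on inputs of at least a few hundred bytes.
print(ncd(b"Alan Mathison Turing " * 20, b"Turing, Alan Mathison " * 20))
print(ncd(b"Alan Mathison Turing " * 20, b"completely different text " * 20))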
Moreover, this technique will be much faster than most explicit similarity measures (such as edit distance) for the few hundred byte size range you describe. For more information on this and a sample implementation, just search for Normalized Compression Distance (NCD) or have a look at the following paper and github project: http://arxiv.org/abs/cs/0312044 "Clustering by Compression" https://github.com/rudi-cilibrasi/libcomplearn C language implementation There are many other implementations and papers on this subject from the last decade that you may use as well, in other languages and with modifications. A: I think you're looking for Jaro-Winkler distance, which is precisely for name matching. A: You might find compression distance useful for this. See an answer I gave for a very similar question. Or you could use a k-tuple based counting system: * Choose a small value of k, e.g. k=4. * Extract all length-k substrings of your string into a list. * Sort the list. (O(kn log n) time.) * Do the same for the other string you're comparing to. You now have two sorted lists. * Count the number of k-tuples shared by the two strings. If the strings are of length n and m, this can be done in O(n+m) time using a list merge, since the lists are in sorted order. * The number of k-tuples in common is your similarity score. With small alphabets (e.g. DNA) you would usually maintain a vector storing the count for every possible k-tuple instead of a sorted list, although that's not practical when the alphabet is any character at all -- for k=4, you'd need a 256^4 array. A: I'm not sure that what you really want is edit distance -- which works simply on strings of characters -- or semantic distance -- choosing the most appropriate or similar meaning. You might want to look at topics in information retrieval for ideas on how to distinguish which is the most appropriate matching term/phrase given a specific term or phrase. In a sense what you're doing is comparing very short documents rather than strings of characters.
unknown
d3382
train
You have to have type="radio" buttons within the same container to make them behave like radio buttons, like as per w3schools example <form> <input type="radio" name="sex" value="male">Male<br> <input type="radio" name="sex" value="female">Female </form>
unknown
d3383
train
Is your window setting TextOptions.TextFormattingMode to Ideal? If so, try setting it to Display.
unknown
d3384
train
The first issue is that the default metadata provided in the last parameter of the DP identifier is incorrect. Instead of new UIPropertyMetadata(default(Double)), it should be new UIPropertyMetadata(typeof(Double)). The second issue is in the XAML. Use x:Type to pass the type. <nb:NumberBox xmlns:sys="clr-namespace:System;assembly=mscorlib" FormatValue="{x:Type sys:Int32}"/> A: From here http://msdn.microsoft.com/en-us/library/ee792002%28v=vs.110%29.aspx, the types are x:Int32, x:Double, ... So you want to use: <nb:NumberBox FormatValue="{x:Type x:Int32}"/> In the general sense, you would have to include the namespace in the XAML: <Window ... xmlns:ns="clr-namespace:MyNamespace;assembly=MyAssembly" > <nb:NumberBox FormatValue="{x:Type ns:MyType}"/> </Window>
unknown
d3385
train
There are a few problems. First, your mysql statement is wrong. Change this: $querystring ="SELECT Account FROM Admins WHERE Account ='".$_POST['Account']."';"; To this: $querystring ="SELECT * FROM Admins WHERE Account ='" .$_POST['Account']. "'"; Next, for one test, echo out the account name and password that you get from the db query, just so you can make sure you are getting what you expect (perhaps the capitalization is wrong for the field name or something like that). $user = mysql_fetch_assoc($result); echo 'Got this username: ' .$user['Account']. '<br />'; echo 'Got this password: ' .$user['Password']. '<br />'; Once you have confidence that you are getting the correct data, then you can finish your script. This: if(md5($_POST['Password']) != $user['Password']) { echo "wachtwoord is niet correct";?><br/> <?php echo "Het account is:".$user['Account'];?><br/> <?php echo "het wachtwoord is: ".$user['Password'];?><br/> <?php echo mysql_fetch_assoc($result); } Can be re-written as: if(md5($_POST['Password']) != $user['Password']) { echo "wachtwoord is niet correct<br />"; echo "Het account is: " .$user['Account']. "<br />"; echo "het wachtwoord is: " .$user['Password']. "<br />"; }
unknown
d3386
train
library(lme4) library(lattice) xyplot(incidence/size ~ period|herd, cbpp, type=c('g','p','l'), layout=c(3,5), index.cond = function(x,y)max(y)) gm1 <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd), data = cbpp, family = binomial) summary(gm1)
unknown
d3387
train
Try double-checking how you are inflating the view. There are two possible ways: * Passing the parent to the inflater: LayoutInflater.from(context).inflate(R.layout.layout, this, true); * Keeping the view without the parent: LayoutInflater.from(context).inflate(R.layout.layout, null); When using the first method, you cannot call addView() with the view as a parameter. Using the second way, the view is not attached to a parent, therefore you can invoke addView().
unknown
d3388
train
You must change the setOddHeader $sheet->getHeaderFooter()->setOddHeader('&C&G'); to $sheet->getHeaderFooter()->setOddFooter('&C&G'); The HeaderFooter::IMAGE_FOOTER_CENTER and &C&G must correspond to each other. You should also set a width, e.g.: $drawing->setWidth(800); The full code should look like this: $drawing = new \PhpOffice\PhpSpreadsheet\Worksheet\HeaderFooterDrawing(); $drawing->setName('PhpSpreadsheet logo'); $drawing->setPath('./uploads/blu.png'); $drawing->setWidth(800); $sheet->getHeaderFooter()->addImage($drawing, \PhpOffice\PhpSpreadsheet\Worksheet\HeaderFooter::IMAGE_FOOTER_CENTER); $sheet->getHeaderFooter()->setOddFooter('&C&G');
unknown
d3389
train
You don't have to. The file/data will be read "as is" whenever you open the table. If you wish to update the text file, just replace it with the new file. A: Thanks for the reply. But I need to refresh my linked table anyway, so I found this code helpful. Private Sub Command3_Click() Dim b As Boolean b = RefreshLinkedTables() End Sub Function RefreshLinkedTables() As Boolean Dim db As DAO.Database Dim tb As DAO.TableDef Dim fld As DAO.Field Set db = CurrentDb For Each tb In db.TableDefs ' Skip system files. If (Mid(tb.Name, 1, 4) <> "MSys" And Mid(tb.Name, 1, 4) <> "~TMP") Then Debug.Print tb.Name Debug.Print tb.Connect 'If (Mid(tb.Connect, 1, 5) = "ODBC;") Then If (tb.Name = "P60DZ30") Then tb.RefreshLink Debug.Print "Refreshing fields data" tb.Fields.Refresh End If 'End If Debug.Print "=== === ===" End If db.TableDefs.Refresh Next Set db = Nothing RefreshLinkedTables = True Exit Function End Function
unknown
d3390
train
Definitely not suitable for general purpose graph libraries (whatever you're supposed to do if more than one of the words meaningful in a node is in the input string -- is that an error? -- or if none does and there is no default for the node, as for node 30 in the example you supply). Just write the table as a dict from node to tuple (default stuff, dict from word to specific stuff) where each stuff is a tuple (destination, word-to-add) (and use None for the special "word-to-add" clear). So e.g.: tab = {1: ((100, None), {'acs': (20, None)}), 20: ((30, 'hst_ota'), {'ota': (30, 'hst_ota'), 'noota': (30, None)}), 30: ((None, None), {'acs': (10000,None), 'cos':(11000,None)}), etc etc Now handling this table and an input comma-separated string is easy, thanks to set operations -- e.g.: def f(icss): kws = set(icss.split(',')) N = 1 while N in tab: stuff, others = tab[N] found = kws & set(others) if found: # maybe error if len(found) > 1 ? stuff = others[found.pop()] N, word_to_add = stuff if word_to_add is not None: print(word_to_add) A: Adding an answer to respond to the further requirements newly edited in...: I still wouldn't go for a general-purpose library. For "all nodes can be reached and there are no loops", simply reasoning in terms of sets (ignoring the triggering keywords) should do: (again untested code, but the general outline should help even if there's some typo &c): def add_descendants(someset, node): "auxiliary function: add all descendants of node to someset" stuff, others = tab[node] othernode, _ = stuff if othernode is not None: someset.add(othernode) for othernode, _ in others.values(): if othernode is not None: someset.add(othernode) def islegal(): "Return bool, message (bool is True for OK tab, False if not OK)" # make set of all nodes ever mentioned in the table all_nodes = set() for node in tab: all_nodes.add(node) add_descendants(all_nodes, node) # check for loops and connectivity previously_seen = set() currently_seen = set([1]) while currently_seen: node = currently_seen.pop() if node in previously_seen: return False, "loop involving node %s" % node previously_seen.add(node) add_descendants(currently_seen, node) unreachable = all_nodes - previously_seen if unreachable: return False, "%d unreachable nodes: %s" % (len(unreachable), unreachable) else: terminal = previously_seen - set(tab) if terminal: return True, "%d terminal nodes: %s" % (len(terminal), terminal) return True, "Everything hunky-dory" For the "legal strings" you'll need some other code, but I can't write it for you because I have not yet understood what makes a string legal or otherwise...!
unknown
d3391
train
For the Alignments: You can either drop the Related Tasks into a Table, or you can use a Tab (vbTab, not "\t") For the multiple-rows: This would be simpler if you had a 2D Array (e.g. r(0,0)="RelatedTaskName" and r(0,1)="RelatedTaskID") instead of splitting it based on a Colon, but it's doable, and there are several different ways to go about it. The method that I am going to use here is to build all of your string at once, then use Replace to dump the finished product: (using Tab instead of a Table for the indents) Dim taskID As String, taskName As String, lTaskNum As Long, TaskList As String TaskList = "" 'Start with an empty list For lTaskNum = LBound(r) To UBound(r) If Len(TaskList) > 0 Then TaskList = TaskList & vbTab 'We are using Tab instead of a table here taskName= r(lTaskNum) 'Grab element from the array taskID = Left(taskName, InStr(taskName, ":") - 1) 'Just the number taskName = Replace(taskName, taskID & ":", "",count:=1) 'Just the Link text TaskList = TaskList & "<a href = ""www.xxx.com&amp;id=" & taskID & """>" & taskName & "</a><br />" 'Add the task to the stack Next lTaskNum 'If Len(TaskList) < 1 Then TaskList = "No Related Tasks" 'Optional bonus! sigText = Replace(sigText, "{{myRelatedTasks}}", TaskList) 'Push the finished list into the email
unknown
d3392
train
You are talking about the work directory and the configuration spark.worker, so my assumption is you are running the streaming job in Spark's standalone mode (not using a cluster manager such as YARN because things are quite different there). According to the documentation on Spark Standalone Mode the work directory is described as: Directory to run applications in, which will include both logs and scratch space (default: SPARK_HOME/work). Here scratch space means that it is "including map output files and RDDs that get stored on disk. This should be on a fast, local disk in your system. It can also be a comma-separated list of multiple directories on different disks." In the work folder you will find for each application the .jar libraries such that the executor have access to the libraries. In addition, it contains some temporary data based on the processing logic and actual data (not on the amount of processing triggers). The sub-folders 0, 1 are incremental for different jobs/stages or runs of the same application. (To be frank, I am not fully knowledgeable about those sub-folders.) The cleaning of this folder can be adjusted by the following three configurations for the SPARK_WORKER_OPTS as described here: spark.worker.cleanup.enabled - Default: false: Enable periodic cleanup of worker / application directories. Note that this only affects standalone mode, as YARN works differently. Only the directories of stopped applications are cleaned up. This should be enabled if spark.shuffle.service.db.enabled is "true" spark.worker.cleanup.interval - Default: 1800 (30 minutes): Controls the interval, in seconds, at which the worker cleans up old application work dirs on the local machine. spark.worker.cleanup.appDataTtl - Default: 604800 (7 days, 7 * 24 * 3600): The number of seconds to retain application work directories on each worker. This is a Time To Live and should depend on the amount of available disk space you have. Application logs and jars are downloaded to each application work dir. Over time, the work dirs can quickly fill up disk space, especially if you run jobs very frequently.
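As a concrete sketch of how these are applied (my own example values, using the documented SPARK_WORKER_OPTS mechanism in conf/spark-env.sh):
# enable periodic cleanup, check every 30 minutes,
# keep each application's work dir for 7 days
SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true \
-Dspark.worker.cleanup.interval=1800 \
-Dspark.worker.cleanup.appDataTtl=604800"
The worker process has to be restarted to pick these options up.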
unknown
d3393
train
Try:
dlg->adjustSize();
dlg->setFixedSize(dlg->sizeHint());
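In context it would look something like this (a minimal sketch; the dialog and its contents are placeholders):
QDialog *dlg = new QDialog(this);
auto *layout = new QVBoxLayout(dlg);
layout->addWidget(new QLabel("Some content", dlg));
dlg->adjustSize();                    // let the layout compute the preferred size
dlg->setFixedSize(dlg->sizeHint());   // then freeze the dialog at that size
dlg->exec();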
unknown
d3394
train
It's because of the height: 12%; below a certain viewport height, 12% becomes too small to contain the elements. I would reconsider using a percentage as the height, but if you really want to, then I would at least add something like min-height: 100px (or whatever min-height works based on your styles). I believe that should solve the issue :)
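A sketch of that fallback (the selector and the 100px value are placeholders; adjust them to your layout):
.your-element {
  height: 12%;
  min-height: 100px; /* keeps the box from collapsing below its contents */
}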
unknown
d3395
train
Temporarily split the info into multiple lines so you can sort:
tr '^' '\n' | sort | tr '\n' '^'
Note: if you have multiple entries, you have to write a loop that processes the data line by line.. with huge datasets this is probably not a good idea (too slow), in which case pick a programming language.. but you were asking about the shell...

A: Can be done in awk itself (asort requires gawk):
awk -F "^" '{OFS="^"; for (i=1; i<=NF; i++) a[i]=$i} END {n=asort(a, b); for(i=1; i<=n; i++) printf("%s%s", b[i], FS); print ""}' file
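A sketch of the per-line loop mentioned in the first answer, assuming one ^-separated record per line of file:
while IFS= read -r line; do
  printf '%s' "$line" | tr '^' '\n' | sort | tr '\n' '^'
  echo
done < file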
unknown
d3396
train
I finally (with a little help from Cam_Aust) solved the problem!!! Here is what I did:
* 
*Find the cpl_port.h file in your system: sudo find / -name cpl_port.h. My output was:
/Library/Frameworks/GDAL.framework/Versions/1.11/Headers/cpl_port.h
/opt/local/include/cpl_port.h

*Add the resulting folders to your $PATH in your bash init script (~/.bash_login or ~/.zshrc). This worked for me:
export PATH=/Library/Frameworks/GDAL.framework/Headers:$PATH

*Open a new terminal session or source ~/.zshrc
After this, you can now pip install gdal:
Collecting gdal
  Using cached GDAL-2.1.0.tar.gz
Installing collected packages: gdal
  Running setup.py install for gdal ... done
Successfully installed gdal-2.1.0

A: After installing the GDAL libraries from kyngchaos (as mentioned in the question) you should be able to find the Python package ready to go. I cheated and used that:
export PYTHONPATH=$PYTHONPATH:/Library/Frameworks/GDAL.framework/Versions/2.1/Python/2.7/site-packages
unknown
d3397
train
The simplest way to terminate an instance after a given time period is:
* 
*Provide a User Data script when launching the Amazon EC2 instance
*The script should wait the given period, then issue a shutdown command, such as: sleep 3600; shutdown -h now
Also, when launching the instance, set Shutdown Behavior = Terminate. This means that when the instance is stopped from within the instance, it will actually be terminated.
Alternatively, you could code a Stopinator that would trigger at times and determine which instances to stop/terminate. For some examples see: Simple Lambda Stopinator: Start/Stop EC2 instances on schedule or duration

A: Thanks, John(@john-rotenstein). I tested all your suggestions. I used the following User Data script for Windows instances:
<script>
shutdown -s -t 3600
</script>
<persist>true</persist>
Linux instances:
...
#cloud-config
cloud_final_modules:
- [scripts-user, always]
--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"

#!/bin/bash
sleep 3600
shutdown -h now
In addition, Stopinator Type 2 is working perfectly well for me, but only with the Stop-After tag
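For an instance that is already running, the shutdown behavior can also be switched afterwards with the AWS CLI (the instance ID below is a placeholder):
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --instance-initiated-shutdown-behavior terminate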
unknown
d3398
train
You need to install / enable the mbstring extension. Further information can be found on php.net
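For example, on Debian/Ubuntu it is typically a package install plus a web-server restart (the exact package name can vary with your PHP version):
sudo apt-get install php-mbstring
sudo service apache2 restart
On Windows, it is usually a matter of uncommenting the extension line in php.ini (extension=php_mbstring.dll on older releases, extension=mbstring on newer ones) and restarting the server.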
unknown
d3399
train
Inheritance is your friend. You should probably have a base Player class, or even something more low-level than that. The base class implements the collision detection and movement code. Your Warrior and other player types then inherit from that class and override different parts to change the behavior.

A: My strategy works approximately like this: I have a Character class. It contains a HitboxSet. The HitboxSet exposes an active Hitbox. All my collision is done based on Hitbox instances. Depending on what kind of character I need, I pass in different sprites, hitboxes, movements, etc. The result is a high degree of flexibility, while allowing you to keep your concerns thoroughly separated.
This strategy is known as Composition, and is widely considered superior to inheritance-based strategies in almost all cases.
The main advantage of this strategy over inheritance is the ability to mix and match components. If, one day, you decide that you need a new projectile that's smaller than a previous one and moves in a sine pattern, you simply use a new sine movement. You can reuse such things infinitely, in any combination.
You can do something like this with inheritance: a child class can override a method and do something differently. The problem is that you either keep adding new child classes with different combinations and end up with a terrifyingly complex tree, or you limit your options.
In short, if something is naturally going to vary, as it almost certainly will in a game, those variations should be distinct classes that can be combined in different ways rather than mutually exclusive overrides.
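To make the composition idea concrete, here is a minimal sketch in Python (all names are invented for illustration, since the description above shows no code):
import math

class SineMovement:
    """Reusable movement: drifts right while oscillating vertically."""
    def __init__(self, speed, amplitude):
        self.speed, self.amplitude = speed, amplitude
        self.age = 0.0

    def update(self, entity, dt):
        self.age += dt
        entity.x += self.speed * dt
        entity.y = entity.base_y + self.amplitude * math.sin(self.age)

class Character:
    """Assembled from parts instead of subclassed per variation."""
    def __init__(self, sprite, hitbox, movement):
        self.sprite, self.hitbox, self.movement = sprite, hitbox, movement
        self.x = self.y = self.base_y = 0.0

    def update(self, dt):
        self.movement.update(self, dt)

# A new variant is just a new combination of parts:
small_sine_projectile = Character(sprite=None, hitbox=None, movement=SineMovement(60, 8))
A new behavior is a new movement class; Character itself never changes.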
unknown
d3400
train
It is not exactly what you are looking for. However, using the MaterialButton component it is very simple to have rounded buttons. Just use app:cornerRadius to define the corner radius and app:backgroundTint to change the background color.
<com.google.android.material.button.MaterialButton
    app:backgroundTint="@color/myselector"
    app:cornerRadius="xxdp"
    .../>
You can programmatically change these values using:
button.setCornerRadius(...);
button.setBackgroundTintList(...);

A: If your colors are limited, you can create some drawables with the colors you want. For example:
blackButton.xml:
<shape xmlns:android="http://schemas.android.com/apk/res/android"
    android:padding="10dp"
    android:shape="rectangle">
    <solid android:color="@color/black" />
    <corners
        android:bottomLeftRadius="80dp"
        android:bottomRightRadius="80dp"
        android:topLeftRadius="80dp"
        android:topRightRadius="80dp" />
</shape>
whiteButton.xml:
<shape xmlns:android="http://schemas.android.com/apk/res/android"
    android:padding="10dp"
    android:shape="rectangle">
    <solid android:color="@color/white" />
    <corners
        android:bottomLeftRadius="80dp"
        android:bottomRightRadius="80dp"
        android:topLeftRadius="80dp"
        android:topRightRadius="80dp" />
</shape>
and when you want to change your button's background, just use
button.setBackground(getResources().getDrawable(R.drawable.blackButton))
or
button.setBackground(getResources().getDrawable(R.drawable.whiteButton))
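One note: Resources.getDrawable(int) is deprecated on newer API levels; if that matters in your project, the ContextCompat equivalent should work the same way:
button.setBackground(ContextCompat.getDrawable(this, R.drawable.blackButton));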
unknown