Q: Failure remediation strategy for file I/O

I'm doing buffered I/O into a file, both read and write. I'm using fopen(), fseeko(), and the standard ANSI C file I/O functions. In all cases, I'm writing to a standard local file on a disk.

How often do these file I/O operations fail, and what should the strategy be for failures? I'm not exactly looking for stats; I'm looking for a general-purpose statement on how far I should go to handle error conditions. For instance, I think everyone recognizes that malloc() could and probably will fail someday on some user's machine and that the developer should check for a NULL being returned, but there is no great remediation strategy since it probably means the system is out of memory. At least, this seems to be the approach taken with malloc() on desktop systems; embedded systems are different. Likewise, is it worth reattempting a file I/O operation, or should I just consider a failure to be basically unrecoverable?

I would appreciate some code samples demonstrating proper usage, or a library guide reference that indicates how this is to be handled. Any other data is, of course, welcome.

A: I'm guessing you're a novice programmer here. The advice I give here is not applicable in all situations, but it will help you write solid code.

Trying to figure out how to recover from an error is hard unless you have a very solid model for how that error can occur and what it means. Consequently, unless you know exactly what an error is and what it means, report the error on stderr (or what have you) and bomb out. If you bomb out as soon as the first thing goes awry, you will be forced to understand the error and fix your code. This leads to higher-quality code in the long run, even if your intuition suggests otherwise.

Some functions return "errors" that don't indicate serious failure.
In POSIX, EINTR is there as a hack to make signal handling easier to implement, and it has the side effect of making a certain architecture of single-threaded programs that care about signals a little easier to implement. When I/O functions return EAGAIN, that means you have the file open in nonblocking mode and an I/O operation wanted to block. You need to handle these things correctly.

Some errors indicate that something awful has happened; EIO in POSIX means that something has gone wrong that the function doesn't even know how to talk about. Working with file-system code, you'll notice that some errors can be caused by concurrent updates to the file. It is a fool's errand to try to "recover" from these sorts of things "gracefully." Don't try.
{ "pile_set_name": "StackExchange" }
Q: Can I trigger a function on another component from my header?

Hello, I have a question about Vue routing and how the component tree works. I have my parent route, where my router-view and my header sit at the same level. I have some functions I want to trigger from my header on a route called dashboard within my router-view, e.g.:

header.vue:

    <a href="#" @click.prevent="update()">click me to update dashboard</a>

dashboard.vue:

    <p>{{ dataFromFillData }}</p>

    methods: {
        fillDataToP() {
            // function to fill data
        }
    }

Is this possible in Vue?

A: You could use an event bus within Vue. In your main.js file, add:

    const EventBus = new Vue();
    Vue.prototype.$bus = EventBus;

From your header.vue file you can now emit an event:

    this.$bus.$emit('someString', someObjectToPass);

Then in your dashboard.vue you can listen for that event by using:

    this.$bus.$on('sameStringAsInEmit', () => {
        // fill data
    });

(Note the methods are $emit and $on, with the dollar sign; they are Vue instance methods.)
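The event bus itself is just plain publish/subscribe; a minimal framework-free sketch (the class and event names here are illustrative, not Vue's API) looks like this:

```javascript
// Minimal event bus: handlers are stored per event name.
class EventBus {
  constructor() {
    this.handlers = {};
  }
  $on(event, handler) {
    (this.handlers[event] = this.handlers[event] || []).push(handler);
  }
  $emit(event, payload) {
    (this.handlers[event] || []).forEach((h) => h(payload));
  }
}

// header.vue would call $emit; dashboard.vue would call $on.
const bus = new EventBus();
let received = null;
bus.$on('update-dashboard', (data) => { received = data; });
bus.$emit('update-dashboard', { rows: 3 });
console.log(received.rows); // 3
```

The key design point is that sender and receiver only share the bus object and an event-name string, so the header never needs a direct reference to the dashboard component.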
Q: Writing camera matrix into an XML/YAML file

I am using OpenCV and Python. I have calibrated my camera with the following parameters:

    camera_matrix = [[532.80990646, 0.0, 342.49522219],
                     [0.0, 532.93344713, 233.88792491],
                     [0.0, 0.0, 1.0]]
    dist_coeff = [-2.81325798e-01, 2.91150014e-02, 1.21234399e-03,
                  -1.40823665e-04, 1.54861424e-01]

I wrote the following code to save the above into a file, but the file came out like a normal text file:

    f = open("../calibration_camera.xml", "w")
    f.write('Camera Matrix:\n' + str(camera_matrix))
    f.write('\n')
    f.write('Distortion Coefficients:\n' + str(dist_coeff))
    f.write('\n')
    f.close()

How can I save this data into an XML/YAML file using Python, getting the desired output? Please help. Thanks in advance.

A: Using JSON

JSON seems to be the easiest format for serialization in your case:

    camera_matrix = [[532.80990646, 0.0, 342.49522219],
                     [0.0, 532.93344713, 233.88792491],
                     [0.0, 0.0, 1.0]]
    dist_coeff = [-2.81325798e-01, 2.91150014e-02, 1.21234399e-03,
                  -1.40823665e-04, 1.54861424e-01]
    data = {"camera_matrix": camera_matrix, "dist_coeff": dist_coeff}

    fname = "data.json"
    import json
    with open(fname, "w") as f:
        json.dump(data, f)

data.json:

    {"dist_coeff": [-0.281325798, 0.0291150014, 0.00121234399, -0.000140823665, 0.154861424], "camera_matrix": [[532.80990646, 0.0, 342.49522219], [0.0, 532.93344713, 233.88792491], [0.0, 0.0, 1.0]]}

Using YAML

YAML is the best option if you expect human editing of the content. In contrast to the json module, yaml is not part of Python and must be installed first:

    $ pip install pyyaml

Here goes the code to save the data:

    fname = "data.yaml"
    import yaml
    with open(fname, "w") as f:
        yaml.dump(data, f)

data.yaml:

    camera_matrix:
    - [532.80990646, 0.0, 342.49522219]
    - [0.0, 532.93344713, 233.88792491]
    - [0.0, 0.0, 1.0]
    dist_coeff: [-0.281325798, 0.0291150014, 0.00121234399, -0.000140823665, 0.154861424]

Using XML

My example uses my favourite lxml package; other XML packages are also available.
    from lxml import etree
    from lxml.builder import E

    camera_matrix = [[532.80990646, 0.0, 342.49522219],
                     [0.0, 532.93344713, 233.88792491],
                     [0.0, 0.0, 1.0]]
    dist_coeff = [-2.81325798e-01, 2.91150014e-02, 1.21234399e-03,
                  -1.40823665e-04, 1.54861424e-01]

    def triada(itm):
        a, b, c = itm
        return E.Triada(a=str(a), b=str(b), c=str(c))

    camera_matrix_xml = E.CameraMatrix(*map(triada, camera_matrix))
    dist_coeff_xml = E.DistCoef(*map(E.Coef, map(str, dist_coeff)))
    xmldoc = E.CameraData(camera_matrix_xml, dist_coeff_xml)

    fname = "data.xml"
    # etree.tostring() returns bytes, so open the file in binary mode
    with open(fname, "wb") as f:
        f.write(etree.tostring(xmldoc, pretty_print=True))

data.xml:

    <CameraData>
      <CameraMatrix>
        <Triada a="532.80990646" c="342.49522219" b="0.0"/>
        <Triada a="0.0" c="233.88792491" b="532.93344713"/>
        <Triada a="0.0" c="1.0" b="0.0"/>
      </CameraMatrix>
      <DistCoef>
        <Coef>-0.281325798</Coef>
        <Coef>0.0291150014</Coef>
        <Coef>0.00121234399</Coef>
        <Coef>-0.000140823665</Coef>
        <Coef>0.154861424</Coef>
      </DistCoef>
    </CameraData>

You will want to play a bit with the code to format the strings representing the numbers with the proper precision. That I leave to you.
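One thing the answer leaves implicit is reading the calibration back in. Here is a quick sketch of the round trip using only the stdlib json module (plain nested lists stand in for numpy arrays; the data.json file name matches the example above):

```python
import json

camera_matrix = [[532.80990646, 0.0, 342.49522219],
                 [0.0, 532.93344713, 233.88792491],
                 [0.0, 0.0, 1.0]]
dist_coeff = [-2.81325798e-01, 2.91150014e-02, 1.21234399e-03,
              -1.40823665e-04, 1.54861424e-01]

# Save the calibration exactly as in the JSON answer.
with open("data.json", "w") as f:
    json.dump({"camera_matrix": camera_matrix, "dist_coeff": dist_coeff}, f)

# Round trip: json gives back plain nested lists, and Python floats
# round-trip exactly through json, so the values compare equal.
with open("data.json") as f:
    loaded = json.load(f)

assert loaded["camera_matrix"] == camera_matrix
assert loaded["dist_coeff"] == dist_coeff
```

If you need numpy arrays afterwards, `np.array(loaded["camera_matrix"])` rebuilds the matrix from the loaded lists.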
Q: How to log to the Output window with ASP.Net vNext/5

Using Visual Studio 2015 RC and ASP.Net vNext/5 beta4, I would like to output logging to the Output window in Visual Studio when debugging or, if possible, to the console window hosting the site when using the WebListener.

My web project is built on the standard out-of-the-box template for a web app, so it has most of the default stuff in there. In my Startup I have the usual default:

    loggerfactory.AddConsole();

In my controller I inject ILoggerFactory and do something like this:

    this.logger = loggerFactory.CreateLogger<ThingClass>();
    this.logger.LogVerbose("Verbose");
    this.logger.LogInformation("Info");
    this.logger.LogError("error");

None of that gets written out to the Debug window or elsewhere; I am not really sure what AddConsole() is supposed to achieve here.

I then tried to add Microsoft.Framework.Logging.TraceSource to project.json and this to Startup:

    loggerfactory.AddTraceSource(new SourceSwitch("web-app", "Verbose"), new DefaultTraceListener());

That actually works, except now every log message gets written twice to the console, which is rather annoying.

I am clearly missing something fundamental here but cannot find any documentation on the new Microsoft.Framework.Logging. In fact, the most comprehensive and in-depth documentation I have been able to track down is Nicholas Blumhardt's short article here: http://nblumhardt.com/2015/05/diagnostic-logging-in-dnx-asp-net-5/.

I do understand that the framework is supposed to just be a wrapper and that I can implement my own providers as well as use a range of frameworks like Serilog, etc. But for a simple application, surely I should be able to log to the debug window in VS without a lot of ceremony?

A: We're adding a debug logger in beta6: https://github.com/aspnet/Logging/tree/dev/src/Microsoft.Framework.Logging.Debug

Update: https://www.nuget.org/packages/Microsoft.Extensions.Logging.Debug/1.0.0-rc1-final
Q: PHP: combine like arrays based on a duplicate key value, not just the key itself

I have an array that looks like the below (this is a print_r on a $data variable):

    Array
    (
        [0] => Array
            (
                [quan] => 1
                [prod_key] => 6f2e8858b8333afaeec8cd51be30ba6a
                [title] => Broomhandle - 6" x 12.5"
                [total] => 11.00
                [weight] => 0.25
                [image] => thumb_37658989fcd29e9.jpg
            )
        [1] => Array
            (
                [quan] => 1
                [prod_key] => 6f2e8858b8333afaeec8cd51be30ba6a
                [title] => Broomhandle - 6" x 12.5"
                [total] => 11.00
                [weight] => 0.25
                [image] => thumb_37658989fcd29e9.jpg
            )
        [2] => Array
            (
                [quan] => 1
                [prod_key] => of2ef85vb8333afaeec8cd51be30jq7i
                [title] => Watch
                [total] => 65.00
                [weight] => 0.15
                [image] => thumb_37658989fcd29e9.jpg
            )
    )

What I am trying to do is loop through the array and combine the items that have the same prod_key into one item, updating the total, quantity and weight. The above example should end up looking like:

    Array
    (
        [0] => Array
            (
                [quan] => 2
                [prod_key] => 6f2e8858b8333afaeec8cd51be30ba6a
                [title] => Broomhandle - 6" x 12.5"
                [total] => 22.00
                [weight] => 0.50
                [image] => thumb_37658989fcd29e9.jpg
            )
        [1] => Array
            (
                [quan] => 1
                [prod_key] => of2ef85vb8333afaeec8cd51be30jq7i
                [title] => Watch
                [total] => 65.00
                [weight] => 0.15
                [image] => thumb_37658989fcd29e9.jpg
            )
    )

A: Make a new array and use the product key as the array index. Then you can easily add or update the entries:

    $result = array();
    foreach ($data as $v) {
        if (!isset($result[$v['prod_key']])) {
            $result[$v['prod_key']] = $v;
        } else {
            $result[$v['prod_key']]['quan'] += $v['quan'];
            $result[$v['prod_key']]['total'] += $v['total'];
            $result[$v['prod_key']]['weight'] += $v['weight'];
            // etc...
        }
    }

If you want the numeric 0, 1, ... keys shown in your expected output, run the result through array_values($result) afterwards.
Q: Dynamic row range when calculating moving sum/average using window functions (SQL Server)

I'm currently working on a sample script which allows me to calculate the sum of the previous two rows and the current row. However, I would like to make the number '2' a variable. I've tried declaring a variable, or directly casting in the query, yet a syntax error always pops up. Is there a possible solution?

    DECLARE @myTable TABLE (myValue INT)
    INSERT INTO @myTable (myValue) VALUES (5)
    INSERT INTO @myTable (myValue) VALUES (6)
    INSERT INTO @myTable (myValue) VALUES (7)
    INSERT INTO @myTable (myValue) VALUES (8)
    INSERT INTO @myTable (myValue) VALUES (9)
    INSERT INTO @myTable (myValue) VALUES (10)

    SELECT SUM(myValue) OVER (ORDER BY myValue ROWS BETWEEN 2 PRECEDING AND CURRENT ROW)
    FROM @myTable

A: You can build the statement with dynamic SQL. Note that a table variable is not visible inside EXEC, which is why the data goes into a temp table (#temp) here:

    DECLARE @test VARCHAR(10) = '1'
    DECLARE @sqlCommand VARCHAR(1000)

    CREATE TABLE #temp (myValue INT)
    INSERT INTO #temp (myValue) VALUES (5)
    INSERT INTO #temp (myValue) VALUES (6)
    INSERT INTO #temp (myValue) VALUES (7)
    INSERT INTO #temp (myValue) VALUES (8)
    INSERT INTO #temp (myValue) VALUES (9)
    INSERT INTO #temp (myValue) VALUES (10)

    SET @sqlCommand = 'SELECT SUM(myValue) OVER (ORDER BY myValue ROWS BETWEEN '
        + @test + ' PRECEDING AND CURRENT ROW) FROM #temp'
    EXEC (@sqlCommand)

A: You can try something like this, which does not use dynamic SQL.
    DECLARE @myTable TABLE (myValue INT)
    INSERT INTO @myTable (myValue) VALUES (5)
    INSERT INTO @myTable (myValue) VALUES (6)
    INSERT INTO @myTable (myValue) VALUES (7)
    INSERT INTO @myTable (myValue) VALUES (8)
    INSERT INTO @myTable (myValue) VALUES (9)
    INSERT INTO @myTable (myValue) VALUES (10)

    DECLARE @prev_records INT = 2

    ;WITH CTE AS
    (
        SELECT ROW_NUMBER() OVER (ORDER BY myValue) rn, myValue
        FROM @myTable
    )
    SELECT (SELECT SUM(myValue)
            FROM CTE t2
            WHERE t2.rn BETWEEN (t1.rn - @prev_records) AND t1.rn)
    FROM CTE t1

SUM(myValue) OVER () is the best option; however, it does not allow you to pass the number of preceding rows as a variable.
Q: Oracle setup required for heavy-ish load

I am trying to make a comparison between a system setup using Hadoop and HBase and achieving the same using Oracle DB as the back end. I lack knowledge on the Oracle side of things, which makes it hard to come to a fair comparison. I am looking for what kind of Oracle setup is required to handle a certain workload (hardware, OS, software stack, etc.). The workload and non-functional requirements are roughly this:

12M transactions on two tables, with one simple relation and multiple (non-text) indexes, within 4 hours. That amounts to 833 transactions per second (TPS), sustained. This needs to be done every 8 hours.

Make sure that all writes are durable (so a running transaction survives a machine failure in case of a clustered setup) and have a decent level of availability. By a decent level of availability, I mean that regular failures such as a disk failure or a single network interface / TCP connection drop should not require human intervention. Rare failures may require intervention, but should be solved by just firing up a cold standby that can take over quickly.

Additionally, add another 300 TPS, but have these happen almost continuously 24/7 across many tables (but all in pairs of two with the same simple relation and multiple indexes).

Some context: this workload is 24/7 and the system needs to hold 10 years' worth of historical data available for live querying. Query performance can be a bit worse than sub-second, but must be lively enough to consider for day-to-day usage. The ETL jobs are set up in such a way that there is little churn. Also, in a relational setup this workload would lead to little lock contention; I would expect index updates to be the major pain. To make the comparison as fair as possible, I would assume the loosest consistency level that Oracle provides.

I have no intention of bashing Oracle in favor of some non-relational DB solution. I think it is a great database for many uses.
I am trying to get a feeling for the tradeoff between going open source (and NoSQL) like we do and using a commercially supported, proven setup.

A: Firstly, I will say that this workload is by no means heavy by Oracle standards; thousands of commits per second are possible, easily.

Secondly, however, what matters here is not your database and it's not your server: it's your storage. There are many options here; you won't go too wrong with something like NetApp (I don't work for them, just a satisfied user), and the question is, what size? Here ORION, Oracle's I/O calibration tool, is your friend.

Whatever storage array you choose, it will take care of your first level of resilience: if your first node fails, you simply mount the disks on another node and start back up again, and Oracle will perform crash recovery so no data will be lost.

My advice is: get some numbers at a lower level (MB/s, IOPS) by performing a representative benchmark and take those to your nearest storage vendor and ask them what they've got. With NetApp at least, you can start fairly small and grow, adding another head for more resilience/better performance, adding shelves for more capacity, etc. Then test the crap out of it with ORION!
Q: Web hosting that allows background processes?

I am thinking of starting a website, but for it to work, I need a Ruby script constantly running in the background. Can you recommend any web hosts that allow this? Thanks!

A: If you have VPS or dedicated hosting, then you can set up a cron job for this. Otherwise:

    Heroku lets you run cron jobs as an addon.
    HostingRails allows it.
    EngineYard allows it too.

I'm sure most others will as well.
Q: Re-inventing my authentication strategy with ASP.NET

Currently, I use custom-written authentication code for my site, which is built on .NET. I didn't take the standard Forms Auth route, as all the examples I could find were tightly integrated with WebForms, which I do not use. For all intents and purposes, I have all static HTML, and any logic is done through JavaScript and web service calls. Things like logging in, logging off, and creating a new account are done without even leaving the page.

Here's how it works now: in the database, I have a User ID, a Security ID, and a Session ID. All three are UUIDs, and the first two never change. Each time the user logs on, I check the user table for a row that matches that username and hashed password, and I update the Session ID to a new UUID. I then create a cookie that's a serialized representation of all three UUIDs. In any secure web service call, I deserialize that cookie and make sure there's a row in the users table with those 3 UUIDs.

It's a fairly simple system and works well; however, I don't really like the fact that a user can only be logged on with one client at a time. It's going to cause issues when I create mobile and tablet apps, and it already creates issues if they have multiple computers or web browsers. For this reason, I'm thinking about throwing away this system and going with something new. Since I wrote it years ago, I figure there might be something much more recommended.

I've been reading up on the FormsAuthentication class in the .NET Framework, which handles auth cookies and runs as an HttpModule to validate each request. I'm wondering if I can take advantage of this in my new design. It looks like cookies are stateless, and sessions don't have to be tracked within the database. This works because cookies are encrypted with a private key on the server, which can also be shared across a cluster of web servers.
If I do something like:

    FormsAuthentication.SetAuthCookie("Bob", true);

then in later requests I can be assured that Bob is indeed a valid user, as the cookie would be very difficult if not impossible to forge.

Would I be wise to use the FormsAuthentication class to replace my current authentication model? Rather than have a Session ID column in the database, I'd rely on encrypted cookies to represent valid sessions.

Are there third-party/open-source .NET authentication frameworks that might work better for my architecture?

Will this authentication mechanism cause any grief with code running on mobile and tablet clients, such as an iPhone app or Windows 8 Surface app? I would assume this would work, as long as these apps could handle cookies.

Thanks!

A: Since I didn't get any responses, I decided to take a shot at this myself.

First, I found an open source project that implements session cookies in an algorithm-agnostic way. I used this as a starting point to implement a similar handler.

One issue I had with the built-in ASP.NET implementation, which is a similar restriction in the AppHarbor implementation, is that sessions are only keyed by a string username. I wanted to be able to store arbitrary data to identify a user, such as their UUID in the database as well as their logon name. As much of my existing code assumes this data is available in the cookie, it would take a lot of refactoring if this data were no longer available. Plus, I like the idea of being able to store basic user information without having to hit the database.

Another issue with the AppHarbor project, as pointed out in this open issue, is that the encryption algorithm isn't verified. This is not exactly true, as AppHarbor is algorithm-agnostic; however, it was requested that the sample project show how to use PBKDF2. For that reason, I decided to use this algorithm (implemented in the .NET Framework through the Rfc2898DeriveBytes class) in my code.

Here's what I was able to come up with.
It's meant as a starting point for someone looking to implement their own session management, so feel free to use it for whatever purpose you see fit.

    using System;
    using System.IO;
    using System.Linq;
    using System.Runtime.Serialization.Formatters.Binary;
    using System.Security;
    using System.Security.Cryptography;
    using System.Security.Principal;
    using System.Web;

    namespace AuthTest
    {
        [Serializable]
        public class AuthIdentity : IIdentity
        {
            public Guid Id { get; private set; }
            public string Name { get; private set; }

            public AuthIdentity() { }

            public AuthIdentity(Guid id, string name)
            {
                Id = id;
                Name = name;
            }

            public string AuthenticationType
            {
                get { return "CookieAuth"; }
            }

            public bool IsAuthenticated
            {
                get { return Id != Guid.Empty; }
            }
        }

        [Serializable]
        public class AuthToken : IPrincipal
        {
            public IIdentity Identity { get; set; }

            public bool IsInRole(string role)
            {
                return false;
            }
        }

        public class AuthModule : IHttpModule
        {
            static string COOKIE_NAME = "AuthCookie";

            //Note: Change these two keys to something else (VALIDATION_KEY is 72 bytes, ENCRYPTION_KEY is 64 bytes)
            static string VALIDATION_KEY = @"MkMvk1JL/ghytaERtl6A25iTf/ABC2MgPsFlEbASJ5SX4DiqnDN3CjV7HXQI0GBOGyA8nHjSVaAJXNEqrKmOMg==";
            static string ENCRYPTION_KEY = @"QQJYW8ditkzaUFppCJj+DcCTc/H9TpnSRQrLGBQkhy/jnYjqF8iR6do9NvI8PL8MmniFvdc21sTuKkw94jxID4cDYoqr7JDj";

            static byte[] key;
            static byte[] iv;
            static byte[] valKey;

            public void Dispose() { }

            public void Init(HttpApplication context)
            {
                context.AuthenticateRequest += OnAuthenticateRequest;
                context.EndRequest += OnEndRequest;

                byte[] bytes = Convert.FromBase64String(ENCRYPTION_KEY); //72 bytes (8 for salt, 64 for key)
                byte[] salt = bytes.Take(8).ToArray();
                byte[] pw = bytes.Skip(8).ToArray();
                Rfc2898DeriveBytes k1 = new Rfc2898DeriveBytes(pw, salt, 1000);
                key = k1.GetBytes(16);
                iv = k1.GetBytes(8);
                valKey = Convert.FromBase64String(VALIDATION_KEY); //64 byte validation key to prevent tampering
            }

            public static void SetCookie(AuthIdentity token, bool rememberMe = false)
            {
                //Base64 encode token
                var formatter = new BinaryFormatter();
                MemoryStream stream = new MemoryStream();
                formatter.Serialize(stream, token);
                byte[] buffer = stream.GetBuffer();
                byte[] encryptedBytes = EncryptCookie(buffer);
                string str = Convert.ToBase64String(encryptedBytes);

                var cookie = new HttpCookie(COOKIE_NAME, str);
                cookie.HttpOnly = true;
                if (rememberMe)
                {
                    cookie.Expires = DateTime.Today.AddDays(100);
                }
                HttpContext.Current.Response.Cookies.Add(cookie);
            }

            public static void Logout()
            {
                HttpContext.Current.Response.Cookies.Remove(COOKIE_NAME);
                HttpContext.Current.Response.Cookies.Add(new HttpCookie(COOKIE_NAME, "")
                {
                    Expires = DateTime.Today.AddDays(-1)
                });
            }

            private static byte[] EncryptCookie(byte[] rawBytes)
            {
                TripleDES des = TripleDES.Create();
                des.Key = key;
                des.IV = iv;
                MemoryStream encryptionStream = new MemoryStream();
                CryptoStream encrypt = new CryptoStream(encryptionStream, des.CreateEncryptor(), CryptoStreamMode.Write);
                encrypt.Write(rawBytes, 0, rawBytes.Length);
                encrypt.FlushFinalBlock();
                encrypt.Close();
                byte[] encBytes = encryptionStream.ToArray();

                //Add validation hash (compute hash on unencrypted data)
                HMACSHA256 hmac = new HMACSHA256(valKey);
                byte[] hash = hmac.ComputeHash(rawBytes);

                //Combine encrypted bytes and validation hash
                byte[] ret = encBytes.Concat<byte>(hash).ToArray();
                return ret;
            }

            private static byte[] DecryptCookie(byte[] encBytes)
            {
                TripleDES des = TripleDES.Create();
                des.Key = key;
                des.IV = iv;
                HMACSHA256 hmac = new HMACSHA256(valKey);
                int valSize = hmac.HashSize / 8;
                int msgLength = encBytes.Length - valSize;
                byte[] message = new byte[msgLength];
                byte[] valBytes = new byte[valSize];
                Buffer.BlockCopy(encBytes, 0, message, 0, msgLength);
                Buffer.BlockCopy(encBytes, msgLength, valBytes, 0, valSize);

                MemoryStream decryptionStreamBacking = new MemoryStream();
                CryptoStream decrypt = new CryptoStream(decryptionStreamBacking, des.CreateDecryptor(), CryptoStreamMode.Write);
                decrypt.Write(message, 0, msgLength);
                decrypt.Flush();
                byte[] decMessage = decryptionStreamBacking.ToArray();

                //Verify key matches
                byte[] hash = hmac.ComputeHash(decMessage);
                if (valBytes.SequenceEqual(hash))
                {
                    return decMessage;
                }

                throw new SecurityException("Auth Cookie appears to have been tampered with!");
            }

            private void OnAuthenticateRequest(object sender, EventArgs e)
            {
                var context = ((HttpApplication)sender).Context;
                var cookie = context.Request.Cookies[COOKIE_NAME];
                if (cookie != null && cookie.Value.Length > 0)
                {
                    try
                    {
                        var formatter = new BinaryFormatter();
                        MemoryStream stream = new MemoryStream();
                        var bytes = Convert.FromBase64String(cookie.Value);
                        var decBytes = DecryptCookie(bytes);
                        stream.Write(decBytes, 0, decBytes.Length);
                        stream.Seek(0, SeekOrigin.Begin);
                        AuthIdentity auth = formatter.Deserialize(stream) as AuthIdentity;
                        AuthToken token = new AuthToken() { Identity = auth };
                        context.User = token;

                        //Renew the cookie for another 100 days (TODO: Should only renew if cookie was originally set to persist)
                        context.Response.Cookies[COOKIE_NAME].Value = cookie.Value;
                        context.Response.Cookies[COOKIE_NAME].Expires = DateTime.Today.AddDays(100);
                    }
                    catch { } //Ignore any errors with bad cookies
                }
            }

            private void OnEndRequest(object sender, EventArgs e)
            {
                var context = ((HttpApplication)sender).Context;
                var response = context.Response;
                if (response.Cookies.Keys.Cast<string>().Contains(COOKIE_NAME))
                {
                    response.Cache.SetCacheability(HttpCacheability.NoCache, "Set-Cookie");
                }
            }
        }
    }

Also, be sure to include the following module in your web.config file:

    <httpModules>
      <add name="AuthModule" type="AuthTest.AuthModule" />
    </httpModules>

In your code, you can look up the currently logged-on user with:

    var id = HttpContext.Current.User.Identity as AuthIdentity;

And set the auth cookie like so:

    AuthIdentity token = new AuthIdentity(Guid.NewGuid(), "Mike");
    AuthModule.SetCookie(token, false);
Q: What definition gives $\hat{x}\in X^{**}$?

Definition: Let $X$ be a topological vector space and let $x\in X$. Then $x$ defines a linear functional $\hat{x}$ on $X^*$ via $\hat{x}(f)=f(x)$ $(f\in X^*)$.

Let $X$ be a normed space and let $x\in X$. I am trying to show that $\hat{x}\in X^{**}$, and my attempts are:

Let $f\in X^*$ and let $(f_i)$ be a net in $X^*$ with $f_i\overset{\|\cdot\|}{\longrightarrow} f$ in $X^*$. Then $$|\hat{x}(f_i)-\hat{x}(f)|=|f_i(x)-f(x)|=|(f_i-f)(x)|\leqslant\|f_i-f\|\cdot\|x\|\rightarrow0.$$ Thus $\hat{x}(f_i)\rightarrow\hat{x}(f)$. Hence, $\hat{x}$ is a continuous linear functional on $X^*$; that is, $\hat{x}\in X^{**}$.

Using a corollary of the Hahn-Banach Theorem, $$\|\hat{x}\|=\sup\{|\hat{x}(x^*)|:\|x^*\|\leqslant1\}=\sup\{|x^*(x)|:\|x^*\|\leqslant1\}=\|x\|.$$ Thus $\hat{x}\in X^{**}$.

But my professor mentioned I didn't need any proof at all; it follows immediately from a definition in functional analysis. I don't know which definition gives $\hat{x}\in X^{**}$. Any help will be appreciated!

A: From the fact that $f$ is bounded, you have $$|\hat x(f)|=|f(x)|\leq\|f\|\,\|x\|.$$ So $\hat x$ is bounded and $\|\hat x\|\leq\|x\|$.
Q: Accessing values in a list in Python

I have a list in the form:

    [(x1, y0, output), (x1, y1, output), (x1, y2, output),
     (x2, y0, output), (x2, y1, output), (x2, y2, output)]

    [(1, 0, 0), (1, 1, 1), (1, 2, 2), (2, 0, 0), (2, 1, 2), (2, 2, 4)]

I would like to get the cells in the list that match a specific condition. For example, if I want all of the cells where x = 1, I hope the result is:

    [(1, 0, 0), (1, 1, 1), (1, 2, 2)]

If I want all of the cells where x = 1 and y = 2, I hope the result is:

    [(1, 2, 2)]

How can I do this? Here is how the list is built:

    import numpy as np

    result = []
    for x in np.arange(1, 3, 1):
        for y in np.arange(0, 3, 1):
            res = y * x
            res = (x, y, res)
            result.append(res)
    print(result)

A: Try a list comprehension:

    listy = [(1, 0, 0), (1, 1, 1), (1, 2, 2), (2, 0, 0), (2, 1, 2), (2, 2, 4)]
    list1 = [e for e in listy if e[0] == 1]
    list2 = [e for e in listy if e[0] == 1 and e[1] == 2]

You can change the conditions in that last if part of the list comprehension.
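Building on the list-comprehension answer, here is a small sketch of a reusable helper (the `match` function name and keyword scheme are my own, not from the thread) that filters by any subset of the three positions:

```python
def match(cells, **conditions):
    """Filter (x, y, output) tuples by keyword conditions.

    Conditions are given as x=..., y=..., output=...; any subset may be used,
    and a cell is kept only if all given conditions hold.
    """
    index = {"x": 0, "y": 1, "output": 2}
    return [c for c in cells
            if all(c[index[k]] == v for k, v in conditions.items())]

listy = [(1, 0, 0), (1, 1, 1), (1, 2, 2), (2, 0, 0), (2, 1, 2), (2, 2, 4)]
print(match(listy, x=1))       # [(1, 0, 0), (1, 1, 1), (1, 2, 2)]
print(match(listy, x=1, y=2))  # [(1, 2, 2)]
```

This keeps the filtering logic in one place, so adding a new query is just another call rather than another hand-written comprehension.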
Q: How should I deal with a client who refuses to pay after receiving the finished product?

I was hired for a small web development job by a client. The job was small and had a low budget. After I finished the job and notified the client, he didn't reply for several days. This made me think he was planning not to pay for the job. After several days and multiple e-mails, the client finally told me that he is not going to pay me, giving some vague reasons.

Since the job is for a small amount, I can't opt for legal options, which would be more costly and troublesome. But the client forgot to change the cPanel login details, and I can still log in to the server. I have a few options I am considering:

Delete all the client data (that was created as part of the project) and move on.

Download the client data, delete it from the server, and ask the client to pay (extra) to get his data back.

What do you think? Is it unethical to do so? I tried to be professional, but it was the client who started the unethical behaviour. He has left me with no options. How should I handle this situation?

A: Yes, it is definitely unethical. You do have other options. You could:

Do nothing. Just take the loss and treat it as a learning experience. Next time you will know to never give the client the finished website until they have paid in full. Always host the site on your own servers during development and only transfer the site to the client's servers when they have paid all or most of the agreed price.

Continually remind the client that payment is due. Send overdue invoice reminders every two weeks. I think this works better with paper invoices, but sometimes with small amounts, clients get annoyed at the hassle and pay the invoice just to make the annoyance go away. And given that the client gave vague reasons for non-payment rather than just telling you to get lost, they may eventually pay up.

Remove only your work from his servers.
You should not delete anything without having a backup, and you should definitely not touch anything that is not your work. If your work is now entwined with work done by others, then you don't have this option. I don't really recommend this, but if you do it, you must let him know that you have taken the site down and that you will restore it as soon as he pays what he agreed to. Do not try to extort any "extra" from him. What you are trying to do is recreate what should have happened in the first place, namely that your site does not get onto his server until you are paid in full. Even if the client has behaved unprofessionally, that is no reason for you to do so also.

My recommendation is to persist in asking for payment for a while, but to treat this as a lesson and move on. Be glad it was only for a small amount, and that you now know better how to protect yourself in the future.

A: Wait, if I understood correctly:

    you were hired to make a website
    you did it and the client did not pay for it
    you have access to the admin panel of the hosting service

If I understood it properly, then the content on his website is still yours. Transfer of intellectual-property rights happens when one side is paid. So IMHO you should definitely take back your property (source code), delete the content on his server, and send him a message (a polite one!) that you had to revoke all files because your work was not paid for. This is completely ethical and IMHO the proper thing to do. The unethical part comes when you act in revenge or destroy data permanently.

A: The short answer: YES, it is unethical and likely illegal. The two last resorts that you describe can be boiled down to two words: vandalism and ransom. Would your business survive being associated with either of those? You can certainly refuse to do future work for that client, but you should limit your own potential losses (of reputation and future work) first.
Since you describe it as being only a small amount, chalk it up to experience and let it go (or send the bill to collections). In the future, you'll want to change your billing practices to require a deposit (money for the project provided up front as a sign of good faith). This way, if a client doesn't pay at the end of the project, you at least get something out of it.
{ "pile_set_name": "StackExchange" }
Q: How to get related data for each record? I have the following two tables, with the common variable being post_id: Posts table: +---------+---------+-------+---------+ | post_id | user_id | title | content | +---------+---------+-------+---------+ | 1 | 1 | Hello | World | +---------+---------+-------+---------+ Tags table: +--------+---------+----------+ | tag_id | post_id | tag_name | +--------+---------+----------+ | 1 | 1 | Tag1 | | 2 | 1 | Tag2 | | 3 | 1 | Tag3 | +--------+---------+----------+ Here is my current Post model, nothing out of the ordinary: class Post extends Eloquent { /** * The database table used by the model. * * @var string */ protected $table = 'posts'; /** * The primary key of the table. * * @var string */ protected $primaryKey = 'post_id'; /** * The post id. * * @var integer */ protected $post_id = 0; /** * The user id of the post. * * @var integer */ protected $user_id = 0; /** * The title of the post. * * @var string */ protected $title = ''; /** * The content of the post. * * @var string */ protected $content = ''; } And here is my Controller: class IndexController extends BaseController { /** * Show a list of post. * * @return view */ public function showIndex() { $posts = Post::all(); return View::make('index', array('posts' => $posts)); } } My question is, how would I, in addition to showing the posts in the view, get the related tags for each post. For example, using the above tables, post_id = 1 has 3 related tags in the Tags table: Tag1, Tag2, and Tag3. How would I get the relating tags for each post using the post_id and then be able to use them in the view? Thanks. A: This should work Model (the relation belongs on the Post model; since the primary key is post_id rather than the default id, pass the foreign key explicitly): class Post extends Eloquent { // other stuff public function tags(){ return $this->hasMany('Tag', 'post_id'); } } Query $posts = Post::with('tags')->get(); Access tags with a loop foreach($post->tags as $tag){ echo $tag->tag_name; }
Q: JavaScript library not working in IE, can't see error information I have been writing a JavaScript library for a few weeks now and it works brilliantly in Firefox, Chrome and Safari. I had not tested it in IE until recently. I do not own a Windows box, so after testing it on my friend's machine and realising it wasn't working I started going over my code for things that could be causing it to break. So far I have found nothing. I could not find any descriptions of the errors in the browser while I was there either. So I wondered if anyone could run my test script in an IE browser (6, 7 or 8) and let me know any information they can find as to why it crashed. Please ignore any information saying it works in IE6; I put that up there after testing it through http://ipinfo.info/netrenderer/ I just assumed it was working because I could set transparency and size via my script and see it run in this tool. Here is the link to my GitHub repository: https://github.com/Wolfy87/Spark If you download it and run spark.html it will attempt to run all of my functions from the library. So if anyone could be kind enough to run it in IE and either let me know what errors they are getting and possibly how to fix them then I will be extremely grateful. Thank you in advance. EDIT: Here is its website http://sparkjs.co.uk/ A: Webpage error details User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2) Timestamp: Wed, 3 Nov 2010 11:19:20 UTC Message: Access is denied. Line: 1 Char: 17102 Code: 0 URI: file:///C:/.........../Wolfy87-Spark-v0.2.5-19-gab64629/Wolfy87-Spark-ab64629/spark.js The problem occurs when trying to load the README.md file. It's an issue related to permissions, as you can see above. Everything except loading and printing this file is OK in IE8.
Q: Generate right triangles c++ I am trying to make a program which would generate 3 sides given following input: longest allowed hypotenuse and number of triangles required. The sides can be only integers. The program I have written just hangs on me and does not return any output. If you are to downvote me explain why. #include <iostream> #include <cmath> #include <cstdlib> int generator(int number, int hypoth){ int a,b,c; while (number>0){ c=rand()%(hypoth-1)+1; for (a=1;a<hypoth-2;a++){ for (b=1;pow(a,2)+pow(b,2)<=pow(c,2); b++){ if (pow(a,2)+pow(b,2)==pow(c,2)){ std::cout<<"sides: "<<a<<" "<<b<<" "<<c<<std::endl; number--; } } } } return 0; } int main(){ int triangle_number, hypothenuse; std::cout << "How many triangles to generate? "; std::cin >> triangle_number; std::cout << "How long is max hypothenuse?"; std::cin >> hypothenuse; generator(triangle_number, hypothenuse); return 0; } If you think I should improve my algorithm please hint me in right direction. Thank you for your time. A: The code you provided works fine on my machine: inputting 1 and 6 gives output sides: 3 4 5. However, the problem probably arises from the line: pow(a,2)+pow(b,2)==pow(c,2). pow returns a double. Comparing floating-point numbers for equality is slippery, and practically never a good idea, since it's likely to be off by a tiny amount, and be false. Replace it with a*a + b*b == c*c (and the condition within the for loop just above with a*a + b*b <= c*c).
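As a sketch of the all-integer approach suggested above (illustrative, not the asker's exact program: it enumerates every triple up to a maximum hypotenuse instead of picking a random one):

```cpp
#include <array>
#include <vector>

// Enumerate all integer right triangles (a, b, c) with a <= b and c <= max_hyp.
// Everything stays in integer arithmetic, so a*a + b*b == c*c is an exact test,
// with none of the floating-point pitfalls of pow().
std::vector<std::array<int, 3>> pythagorean_triples(int max_hyp) {
    std::vector<std::array<int, 3>> out;
    for (int c = 1; c <= max_hyp; ++c)
        for (int a = 1; a < c; ++a)
            for (int b = a; b < c; ++b)
                if (a * a + b * b == c * c)
                    out.push_back({a, b, c});
    return out;
}
```

With max_hyp = 13 this yields (3, 4, 5), (6, 8, 10) and (5, 12, 13); to honor the "number of triangles required" input, one could simply stop once enough triples have been collected.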
Q: Move message between mailboxes using office 365 API I would like to move email messages from one mailbox to another in Office 365 using some sort of API. I looked up the API reference and the Move method provides the ability to move to a folder only (reference https://msdn.microsoft.com/office/office365/APi/mail-rest-operations#MessageoperationsMoveorcopymessages). I know it was possible back then using Exchange's EWS service. Do you know of a solution that might work in this case? It is important to know, that simple forward won't do since I need to preserve the message's sender and receiver as is for search purposes. Thanks A: Based on the info provided, Exchange Web Services would be the only option for you. The scope of a move and copy currently for our REST API is within a single mailbox.
Q: Using Jackson annotations with inherited class I'm developing an Android App where I'm deserializing JSON with the Jackson Annotation API. It worked really well until I tried to include the AndroidActive ORM, which required your POJO to inherit from the Model class (https://github.com/pardom/ActiveAndroid/blob/master/src/com/activeandroid/Model.java). My JSON is deserialized in an asyncTask as such : Reader reader = new InputStreamReader(url.openStream()); try { ObjectMapper mapper = new ObjectMapper(); mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false); rootJsonObj = mapper.readValue(reader, MyPojo.class); } catch (JsonGenerationException e) { e.printStackTrace(); } A quick look at my pogo : @Table(name = "RootRecipes") //AndroidActive annotation public class RootRecipes extends Model { @JsonProperty("deleted") //Jackson annotation @Column(name = "deleted") //AndroidActive annotation public ArrayList<Number> deleted; @JsonProperty("meta") @Column(name = "meta") public Meta meta; @JsonProperty("objects") @Column(name = "objects") public ArrayList<Objects> objects; The json is very large more than 2mo but the structure is the following : {"deleted": [107981, 107982, 107995, 107999, 108012, 108014], "meta": {"is_anonymous": true, "latest": 1405555349, "limit": 1000, "next": null, "offset": 0, "page": 1, "pages": 1, "previous": null, "total_count": 20}, "objects": [<more objects>]} The error given to me is : com.fasterxml.jackson.databind.JsonMappingException: Instantiation of [simple type, class com.example.app.json.MyPojo] value failed: null As soon as I removed the inheritance from Model the parsing is working normally. I can't figure out the reason of this error. Thanks. A: In my opinion you should created new POJO classes for working with JSON which are decoupled from Android world(I mean, they can not have properties, parents which come from Android packages). 
It could look like this: class JsonRootRecipes { @JsonProperty("deleted") public List<Number> deleted; @JsonProperty("meta") public JsonMeta meta; @JsonProperty("objects") public List<Object> objects; // getters, setters, toString } class JsonMeta { @JsonProperty("is_anonymous") private boolean anonymous; // getters, setters, toString } // another POJOs decoupled from Android classes. Now, you have to create a service class which will be able to parse JSON and convert JsonPojo classes to your Android POJO classes. Pseudocode: class JsonService { public RootRecipes parseJson(json) throws IOException { ObjectMapper mapper = new ObjectMapper(); mapper.disable(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES); JsonRootRecipes jsonRootRecipes = mapper.readValue(json, JsonRootRecipes.class); RootRecipes rootRecipes = null; //convert jsonRootRecipes to RootRecipes return rootRecipes; } }
Q: Allow unauthorized users from specific IP/Domain only for some directories using tag in web.config I have some folders in my ASP.Net applications which requires access without login. For that I have already setup this configurations in my web.config file <location path="XXXX"> <system.web> <authorization> <allow users="*" /> </authorization> </system.web> </location> Now I want to restrict the "path" to have anonymous/unauthorized access from one specific IP address or domain only. How do I setup this security configuration ? A: <location path="XXXX"> <system.web> <authorization> <allow users="*" /> </authorization> </system.web> <system.webServer> <security> <ipSecurity allowUnlisted="false"> <clear/> <add ipAddress="127.0.0.1" allowed="true"/> <!-- change ip here--> </ipSecurity> </security> </system.webServer> </location> Note 1 : you will need the IP Secuity module installed. Can be found here: Windows Features/Internet Information Services/World Wide Web Services/Security/IP Security Note 2: you will need to allow ipSecurity to be overridden in your applicationHost.config. You can change this by changing the ipSecurity section. e.g. <section name="ipSecurity" overrideModeDefault="Allow" /> The applicationHost.config file is usually found here: C:\Windows\System32\inetsrv\config If you don't have access to this file then you wont be able to do it without asking the server admin.
Q: Azure analysis service connection using Service principal not working I am trying to connect to Azure Analysis services using ADOMD and authenticated using a Service principal. So I have done following: Create app in AAD. Granted the app (Service principal) read permission on the Azure Analysis service. Below is my code to connect to Azure Analysis service. var clientId = "******"; var clientSecret = "*****"; var domain = "****.onmicrosoft.com"; var ssasUrl = "northeurope.asazure.windows.net"; var token = await TokenHelper.GetAppOnlyAccessToken(domain, $"https://{ssasUrl}", clientId, clientSecret); var connectionString = $"Provider=MSOLAP;Data Source=asazure://{ssasUrl}/{modelname};Initial Catalog= adventureworks;User ID=;Password={token};Persist Security Info=True;Impersonation Level=Impersonate"; var ssasConnection = new AdomdConnection(connectionString); ssasConnection.Open(); var query = @"Evaluate TOPN(10,Customer,Customer[Customer Id],1)"; var cmd = new AdomdCommand(query) { Connection = ssasConnection }; using (var reader = cmd.ExecuteXmlReader()) { string value = reader.ReadOuterXml(); Console.WriteLine(value); } I am able to get a valid access token, but I get following error when trying to open the connection: AdomdErrorResponseException: Either the user, 'app:xxxxxxx@xxxxxx', does not have access to the 'adventureworks' database, or the database does not exist. Additional info: : I have verified that permissions (Reader & also tried with contribute) are given to Service principal to Azure analysis Service thru the Azure portal. I have tried same code with service account (username & password) and it works. If I remove "Initial Catalog= adventureworks" from the connection string then my connection will succeed. But I do not see why Analysis services permission is not propagated to the model. Resolution: Silly that I got the resolution by myself just after posting this. 
The point no. 3 above gave me a clue. Granting permission on Azure Analysis Services through the portal does not propagate to the model for service principals (Azure AD apps). Steps: Open the Azure Analysis Services instance in SQL Server Management Studio. In the target model, go to Roles. Add the service principal into the required role with permission. The service principal is added in the below format: app:[appid]@[tenantid] example : app:8249E22B-CFF9-440C-AF27-60064A5743CE@86F119BE-D703-49E2-8B5F-72392615BB97 A: Silly that I got the resolution by myself just after posting this. The point no. 3 above gave me a clue. Granting permission on Azure Analysis Services through the portal does not propagate to the model for service principals (Azure AD apps). Steps: Open the Azure Analysis Services instance in SQL Server Management Studio. In the target model, go to Roles. Add the service principal into the required role with permission. The service principal is added in the below format: app:[appid]@[tenantid] example : app:8249E22B-CFF9-440C-AF27-60064A5743CE@86F119BE-D703-49E2-8B5F-72392615BB97 I have blogged my whole experience here: https://unnieayilliath.com/2017/11/12/connecting-to-azure-analysis-services-using-adomd/
Q: How to find antilog(base 2) of a number? I used a for loop to find the antilog of the given number. int g = 0, m, diff = 10; for(j = 0; g <= diff; j++) { g = pow(2, j); } m = j - 2; cout << m; It gives the power of 2 for which g is the number just less than diff. I tried the base change theorem of log to find the antilog of the number, something like this: m = log(diff) / log(2); without the for loop, but in this case whenever there is a number that is an exact multiple of 2, for example 8, it gives 2 as the answer and not 3. And using the for loop in a program exceeds the time limit. Is there a shorter and reliable way to do so? A: Here is a fun solution without looping: int antilog(int input) { int pow2 = input - 1; pow2 |= pow2 >> 16; // turn on all bits < MSB pow2 |= pow2 >> 8; pow2 |= pow2 >> 4; pow2 |= pow2 >> 2; pow2 |= pow2 >> 1; pow2++; // get least pow2 >= input return // construct binary offset of pow2 bit ((pow2 & 0xffff0000) != 0) << 4 | ((pow2 & 0xff00ff00) != 0) << 3 | ((pow2 & 0xf0f0f0f0) != 0) << 2 | ((pow2 & 0xcccccccc) != 0) << 1 | ((pow2 & 0xaaaaaaaa) != 0); } The latter half of which was adapted from some part of the bit twiddling hacks. (Knowing the source, there is probably some other function faster than this doing what you've asked.) Solutions aside, it should be noted that what particularly is causing your solution to be slow is the repeated calls to pow, which is a relatively expensive function. Because you are doing integer arithmetic (and what's more, multiplying by 2, every computer's favorite number), it is much more efficient to write your loop as the following: int g=1,m,j,diff=10; for(j = 0; g <= diff && (g <<= 1); j++) /* empty */; m=j-1; cout<<m; Which is quite the hack. int g=1 initializes g to the value it takes on the first time the original loop executes its body, so this loop exits one iteration earlier and the answer becomes m = j - 1 instead of j - 2. The parentheses around g <<= 1 are required, since && binds more tightly than <<=; with them, the condition doubles g as a side effect and effectively evaluates to g <= diff.
(Notice that this is a problem if diff >= 1 << (8 * sizeof(int) - 2), the greatest power of two we can store in an int). The empty statement simply allows us to have a well-formed for statement the compiler (mostly) won't complain about.
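For completeness, the shift-based idea can be packaged as a small function (a sketch; `ilog2` is a name chosen here, and on GCC/Clang one could instead build this from the `__builtin_clz` intrinsic):

```cpp
// floor(log2(n)) for n >= 1: count how many times n can be halved before it
// reaches 1. No floating point is involved, so exact powers of two such as 8
// reliably give 3, avoiding the log(diff)/log(2) rounding problem.
int ilog2(unsigned int n) {
    int m = 0;
    while (n >>= 1)
        ++m;
    return m;
}
```

This is the floor variant matching the question's loop; the bit-twiddling function above computes the ceiling instead (it rounds up to the least power of two >= input).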
Q: Thumbnail Image is getting recreated and its position is getting shuffled on scrolling ListView I know this is asked before and I have already searched a lot about this but could not get any proper answer on my issue. I have created a list, the list gets filled with data that is coming from JSON. Now inside getView() method, I am inflating my custom row with data. Each row contains a Thumbnail, and I am creating each thumbnail from a different thread. Now the problem is, everything is going well until I don't scroll my list. When I scroll my list, my getView() method is called continuously , and all thumbnail images are getting recreated and its position is getting shuffled. Once thumbnails are created I don't want to recreate them and also I want to maintain the order of my thumbnails. Can you please guys help me on this? Any help will greatly be appreciated. My getView() method is: @Override public View getView(final int position, View convertView, ViewGroup parent) { ViewHolder viewHolder = new ViewHolder(); if (convertView == null) { inflater = getLayoutInflater(); convertView = inflater.inflate(R.layout.list_item_row, parent, false); viewHolder.titleText = (TextView) convertView .findViewById(R.id.titleText); viewHolder.thumbImage = (ImageView) convertView .findViewById(R.id.thumbnail); convertView.setTag(viewHolder); } else { viewHolder = (ViewHolder) convertView.getTag(); } final String thumbnailURLString = mPostsList.get(position).get( "thumb"); createBitmapThread = (Thread) getLastNonConfigurationInstance(); createBitmapThread = new MyThread(thumbnailURLString, viewHolder.thumbImage); createBitmapThread.start(); viewHolder.titleText.setText(title); return convertView; } And the Thread class: public class MyThread extends Thread { String mThumbURLString = null; ImageView mImageView = null; public MyThread(String thumbnailURLString, ImageView imageView) { mThumbURLString = thumbnailURLString; mImageView = imageView; } @Override public void run() { try { 
URL newurl = null; try { newurl = new URL(mThumbURLString); } catch (MalformedURLException e) { e.printStackTrace(); } try { // Options opts = new Options(); /* * opts.inDither = true; opts.inInputShareable = true; * opts.inJustDecodeBounds = true; opts.inPurgeable = * true; */ Log.e("", "INSIDE IMAGE DOINBG"); mBitmap = BitmapFactory .decodeStream((InputStream) newurl.getContent()); } catch (IOException e) { e.printStackTrace(); } mHandler.post(new MyRunnable(mImageView)); } finally { } } public class MyRunnable implements Runnable { ImageView mImageView = null; public MyRunnable(ImageView imageView) { mImageView = imageView; } public void run() { mImageView.setImageBitmap(mBitmap); } } A: ok , here how i did a similar thing: for each getView , create a new Fetching class and put it into a concurrent/synchronized stack . in the meanwhile , set the bitmap of the view to be empty (or anything else you wish) . you can also use caching instead. the class will contain info of how to load the data (for example , the url of the bitmap) , the view to update , and the result of the fetching (for example , the bitmap itself) . now , back to the getView, create&execute an asyncTask that will use a loop on the stack , each time it gets a Fetching class instance from the stack (once it's empty,the asyncTask will break the loop and finish) , check that the view that needs to be updated still needs the data (using its viewHolder) load the bitmap , set the result into the Fetching class , and pass it through the publishProgress() function. in the onProgressUpdate method , do the same check as before using the Fetching class instance and the viewHolder . if all went well , update the view to have the bitmap that was fetched . a nice yet complicated example of how to handle the same problem can be found here .
Q: MS Access 2010: How Can I Avoid Uneditable Query Results? I am working on my first Access 2010 database and have run into a problem editing the recordset returned from a query. This excellent blog entry details several scenarios which can result in uneditable query results. I believe my query results are not editable because my query has a Cartesian Join. I'm not sure how to avoid this, however. The three tables involved are: episodes Individual television episodes Primary key: "episode_id" aridates Individual airdates for a given episode Primary key: "airdate_id" Related to "episodes" by "airdate_episode_id" startdates Individual download start-dates for a given episode i.e. when a given episode will be available to download Primary key: "startdate_id" Related to "episodes" by "startdate_episode_id" So, there is no (and I think can be no) direct relationship between airdates and startdates. However, this makes the query: SELECT episodes.episode_id, episodes.episode_number, episodes.episode_title, airdates.airdate_region_id, airdates.airdate_date FROM (episodes LEFT JOIN airdates ON episodes.episode_id = airdates.airdate_episode_id) LEFT JOIN startdates ON episodes.episode_id = startdates.startdate_episode_id; return a recordset which is not editable. I need to be able to see the episode name and number along with the airdate in order to enter a startdate (episodes can not be made available for download before they have aired). So essentially, in this view I only need to be able to edit "startdates.stardate_date". Thanks in advance for any suggestions... a screenshot of the relationship in question can be seen here. A: Create this query: SELECT episodes.episode_id, episodes.episode_number, episodes.episode_title, airdates.airdate_region_id, airdates.airdate_date FROM episodes LEFT JOIN airdates ON episodes.episode_id = airdates.airdate_episode_id; Use it as the recordsource for a new form. 
Then create another form which uses a query of only the startdates table as its record source. Add the second form as a subform to the first form. On the property sheet for the subform control, make the link master field episode_id and the link child field startdate_episode_id. If you are successful, the subform will display startdates rows where the startdate_episode_id matches the episode_id of the main form's current record. And if you add a new row in the subform, its startdate_episode_id will "inherit" the episode_id from the main form. I emphasized control earlier because that point can be confusing. The subform control is a member of the main form's controls collection, and the subform control contains the subform. You must find the link master/child field properties on the subform control, not the actual subform itself.
Q: Android - ActionBarToggle method not resolved by Android Studio I'm trying to implement Material Navigation Bar. I followed someone's tutorial for it. But I'm facing a little problem. Android Studio resolves everything except for drawer_open and drawer_close parameters for the constructor of ActionBarDrawerToggle e.g. mDrawerToggle = new ActionBarDrawerToggle(this,Drawer,R.string.drawer_open,R.string.drawer_close) Here it fails to resolve drawer_open and drawer_close. Google's Navigation Drawer sample works perfectly fine. I have imported all necessary packages. I can't figure out what's going wrong since I've just started learning android. Full code of MainActivitiy is: package com.startup.demo; import android.support.v4.widget.DrawerLayout; import android.support.v7.app.ActionBarActivity; import android.os.Bundle; import android.support.v7.app.ActionBarDrawerToggle; import android.support.v7.widget.LinearLayoutManager; import android.support.v7.widget.RecyclerView; import android.support.v7.widget.Toolbar; import android.view.Menu; import android.view.MenuItem; import android.view.View; public class MainActivity extends ActionBarActivity { //First We Declare Titles And Icons For Our Navigation Drawer List View //This Icons And Titles Are holded in an Array as you can see String TITLES[] = {"Home","Events","Mail","Shop","Travel"}; //int ICONS[] = {R.drawable.ic_home,R.drawable.ic_events,R.drawable.ic_mail,R.drawable.ic_shop,R.drawable.ic_travel}; //Similarly we Create a String Resource for the name and email in the header view //And we also create a int resource for profile picture in the header view String NAME = "Akash Bangad"; String EMAIL = "[email protected]"; //int PROFILE = R.drawable.aka; private Toolbar toolbar; // Declaring the Toolbar Object RecyclerView mRecyclerView; // Declaring RecyclerView RecyclerView.Adapter mAdapter; // Declaring Adapter For Recycler View RecyclerView.LayoutManager mLayoutManager; // Declaring Layout Manager as a linear 
layout manager DrawerLayout Drawer; // Declaring DrawerLayout ActionBarDrawerToggle mDrawerToggle; // Declaring Action Bar Drawer Toggle @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); /* Assinging the toolbar object ot the view and setting the the Action bar to our toolbar */ toolbar = (Toolbar) findViewById(R.id.tool_bar); setSupportActionBar(toolbar); mRecyclerView = (RecyclerView) findViewById(R.id.RecyclerView); // Assigning the RecyclerView Object to the xml View mRecyclerView.setHasFixedSize(true); // Letting the system know that the list objects are of fixed size mAdapter = new MyAdapter(TITLES,NAME,EMAIL); // Creating the Adapter of MyAdapter class(which we are going to see in a bit) // And passing the titles,icons,header view name, header view email, // and header view profile picture mRecyclerView.setAdapter(mAdapter); // Setting the adapter to RecyclerView mLayoutManager = new LinearLayoutManager(this); // Creating a layout Manager mRecyclerView.setLayoutManager(mLayoutManager); // Setting the layout Manager Drawer = (DrawerLayout) findViewById(R.id.DrawerLayout); // Drawer object Assigned to the view mDrawerToggle = new ActionBarDrawerToggle(this,Drawer,R.string.drawer_open,R.string.drawer_close){ @Override public void onDrawerOpened(View drawerView) { super.onDrawerOpened(drawerView); // code here will execute once the drawer is opened( As I dont want anything happened whe drawer is // open I am not going to put anything here) } @Override public void onDrawerClosed(View drawerView) { super.onDrawerClosed(drawerView); // Code here will execute once drawer is closed } }; // Drawer Toggle Object Made Drawer.setDrawerListener(mDrawerToggle); // Drawer Listener set to the Drawer toggle mDrawerToggle.syncState(); // Finally we set the drawer toggle sync State } @Override public boolean onCreateOptionsMenu(Menu menu) { // Inflate the menu; this adds items to the action 
bar if it is present. getMenuInflater().inflate(R.menu.menu_main, menu); return true; } @Override public boolean onOptionsItemSelected(MenuItem item) { // Handle action bar item clicks here. The action bar will // automatically handle clicks on the Home/Up button, so long // as you specify a parent activity in AndroidManifest.xml. int id = item.getItemId(); //noinspection SimplifiableIfStatement if (id == R.id.action_settings) { return true; } return super.onOptionsItemSelected(item); } } A: Android Studio cannot resolve R.string.drawer_open and R.string.drawer_close because those string resources do not exist yet. Place the following strings in strings.xml: <string name="drawer_open">Open navigation drawer</string> <string name="drawer_close">Close navigation drawer</string>
Q: What should happen to a closed question that is re-asked? Some months ago, this question Who were the children of Jacob Fisher and Sarah Hodges? was asked, closed and not re-opened after a meta-discussion about how it could be improved, although it had two answers, one of which (wholly coincidentally) answers the question in its final form (by listing the children asked about). The question was then deleted at the OP's request. Yesterday this question was asked by the same OP: Jacob Fisher and Sarah Hodges (married in 1773 in Sharon, MA) - who were their childen?. Should I have closed it as a duplicate and offered (as I did) to re-open the original one edited to match the new one? Or should I have done something else? My reasoning for doing what I did: Two people put significant effort into answering the original question and ought to have that work recognised and visible. Nobody should have to repeat the work that had already been done when answering the original question, although they might choose to build on it. The OP may not have realised it was possible to resuscitate a deleted question. Leaving the new question open sets a dangerous precedent -- if the community has decided to close a question, in what circumstances is it right to re-ask it in almost exactly the same words? However, there may be a better solution that balances the interests of the OP, those who answered the original question and the site as a whole, and I'd be delighted to hear it. ETA: At the suggestion of jmort253, I've merged the old question into the new. A: In general, we shouldn't delete posts with upvoted answers. Closing a question is okay, but deleting should generally be reserved for cases where the post has no value. This was of course a special case, one that now has an impact today. It's possible the user who reposted the question didn't know the original could be recovered. 
In general, a user should edit his or her existing closed posts to try and improve them instead of reposting them. Reposting simply creates noise and confusion, and this is generally frowned upon on Stack Exchange. I'd suggest deleting the new duplicate, and then leaving a comment on the original closed post to encourage the author to edit the post and make it fit the guidelines of the site, then flag it for reopening or discuss the issue in chat to try and round up some reopen voters. Of course, if the scope of the site has indeed changed since the original question was posted, as a moderator, you can of course decide to reopen it yourself. If that goes against the wishes of the community, then it only takes 5 more close votes to close it again.
Q: Using MOSFETS for a Level Shifter I want to use a pair of N and P channel mosfets in a totem pole arragement to shift a VFD display on and off using a 74LS247 decoder/driver. I would put the P channel with the source pin on the top and connected to the 25 volts the VFD requires. The P channel drain pin would be connected N channel drain pin. The junction of both drains would be my output (25 Volts). The source pin of the N channel would be connected to ground. The gates of both the P and N channel mosfets would be tied together and powered from the "Open Collector" output of the 74LS247 decoder/driver to turn on and off the appropriate mosfet. I plan on using a Fairchild FDS8958A mosfet complementary pair. Questions: Do I need a series resistor in series with the LS247 output and the two gates? What value do I need? Do I need pullup / pulldown resistors between the source pin and gate pins? A: If you want to use a logic level to switch 25V, then I'd suggest an arrangement like this - a classic high-side switch. The voltage divider on the gate of the P-Ch is to avoid Vgs exceeding the 20V limit for that part. simulate this circuit – Schematic created using CircuitLab The update that I've made to the circuit incorporates an extra P-channel MOSFET, M3, and a pullup resistor to 5V, R4. This can be pretty much any modest P-Ch MOSFET; it should be reasonably fast but it doesn't need to be high voltage or high current. Its purpose is to convert the open-collector output from your 74LS247 into a 5V logic level. Alternatively, you could use a pullup resistor (like R4) and an inverter, e.g. a single gate from a 74LS04. If you could choose a different decoder IC than the 74LS247, one with a logic level output, then you could go back to the previous circuit. Please do some testing with a single instance of the circuit to satisfy yourself that this does indeed work before you design and assemble a 42-channel version! 
A: Sorry, but this isn't going to work, or at least not directly. Yes, you need a pullup, but if you use one you'll destroy the 247. Reason? The 247 outputs have a max voltage rating of 15 volts, and you need to drive to 25. If you do manage to drive the P-type gate to 25 (to turn it off) you will violate the gate-source maximum voltage Vgss of 20 volts for the N-type. And if you drive the 247 output low you will exceed the gate-source voltage rating of the P-type.
Q: Convergence of a Cauchy sequence of matrices I have a Cauchy sequence of matrices $C_i \in R^{p \times q}$, i.e. $\lim_{n\rightarrow \infty} \| C_{n+1} - C_{n} \| = 0$ for any norm (I just need the property that $\|C_1-C_2\|>\delta \Rightarrow C_1 \neq C_2$). I also know $\|C\|_{1,1} \leq t$ for a fixed $t$ (where $\|C\|_{1,1} = \sum_{i,j} |c_{ij}|$). Can I conclude that the sequence $C_i$ also converges? (i.e., is the corresponding metric space complete?) A: If a sequence in this space is Cauchy, then it converges. That is, $\Bbb R^{p \times q}$ under $\|\cdot\|_{1,1}$ (which is isometric to $\Bbb R^{pq}$ under $\|\cdot\|_1$) is indeed a complete metric space. In general: the finite Cartesian product of complete metric spaces will be complete (this is not true, however, for arbitrary products). That being said, the condition you provided is insufficient to guarantee that the sequence is Cauchy. As a counterexample, consider the sequence in $\Bbb R^{2 \times 1}$ given by $C_n = (\cos \theta_n,\sin\theta_n)$ where $$ \theta_n = \sum_{i=1}^n \frac 1i. $$ Note that $\|C_n\|_{1,1} \leq \sqrt{2}$ for each $n$, so the boundedness condition holds, and the consecutive differences vanish because $\theta_{n+1} - \theta_n = \frac{1}{n+1} \to 0$; yet $\theta_n \to \infty$ (the harmonic series diverges), so $C_n$ keeps circling the unit circle and never converges.
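The counterexample is easy to verify numerically. The following plain-Python sketch (my addition, not part of the original answer) checks both facts at once: consecutive differences shrink, yet far-apart terms end up nearly a full diameter apart.

```python
import math

def points(n_max):
    """Yield (n, (cos theta_n, sin theta_n)) for n = 1..n_max,
    where theta_n is the n-th partial sum of the harmonic series."""
    t = 0.0
    for n in range(1, n_max + 1):
        t += 1.0 / n
        yield n, (math.cos(t), math.sin(t))

def dist(p, q):
    # Euclidean distance in R^2
    return math.hypot(p[0] - q[0], p[1] - q[1])

pts = dict(points(25000))

# consecutive differences shrink: ||C_{n+1} - C_n|| <= theta_{n+1} - theta_n = 1/(n+1)
small = dist(pts[1000], pts[1001])

# ...but the sequence is not Cauchy: theta_n diverges, so the point keeps
# circling and eventually passes the antipode of C_1000 (distance close to 2)
far = max(dist(pts[1000], pts[m]) for m in range(1001, 25001))
```

By n = 25000 the angle has advanced by roughly ln(25) ≈ 3.2 > π past θ_1000, which is why `far` approaches the circle's diameter.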
Q: Definition of multivariate martingale I cannot find a proper definition of multivariate martingale. If each component is $1$-dimensional martingale is it enough for a $d$-dimensional process to be a martingale? Thanks. A: You mean the parameter set is still one-dimensional, and the values are in a vector space like $\mathbb R^n$? Then yes, the definition can be that each component is individually a martingale. Or abstractly that when we apply any linear functional, the result is a scalar-valued martingale. "Multivariate martingale" might also mean a situation where "time" is replaced by a multi-dimensional parameter set.
Q: Transform an array of objects structure At the moment I am working with an API that returns an array of objects that looks like [{match_id: "255232", country_id: "41", country_name: "England", league_id: "152", league_name: "National League", match_date: "2020-01-01", match_status: "", match_time: "16:00"}, {match_id: "255232", country_id: "41", country_name: "Italy", league_id: "152", league_name: "Serie a", match_date: "2020-01-01", match_status: "", match_time: "16:00"}, {match_id: "255232", country_id: "41", country_name: "Italy", league_id: "153", league_name: "Serie b", match_date: "2020-01-01", match_status: "", match_time: "16:00"},... ... ] I would like to transform it in a way that ends up looking like: const transformed = [ { country_name: "England", entries: [ {league_name: "National League", events: [{},{}]} ] }, {country_name: "Italy", entries: [ { league_name: "Serie a", events: [{},{}] }, { league_name: "Serie b", events: [{},{}] }... ] } ] I have tried to use .reduce, but did not get the expected output; I ended up with just an inverted structure. Basically what I need is to catalogue by country_name first and league_name second. Obviously the data is dynamic and the names of countries/leagues change often. A: I've provided two solutions - one that returns an object keyed by country and league names (this would be my preference / recommendation) and one that extends the first solution to return the shape you requested.
This first snippet transforms your input into an object keyed by country and league names: const data = [{match_id: "255232",country_id: "41",country_name: "England",league_id: "152",league_name: "National League",match_date: "2020-01-01",match_status: "",match_time: "16:00"},{match_id: "255233",country_id: "41",country_name: "Italy",league_id: "152",league_name: "Serie a",match_date: "2020-01-01",match_status: "",match_time: "16:00"},{match_id: "255234",country_id: "41",country_name: "Italy",league_id: "152",league_name: "Serie a",match_date: "2020-01-01",match_status: "",match_time: "16:00"}] const transformed = data.reduce( (acc, { country_name, league_name, ...match }) => { acc[country_name] = acc[country_name] || {} acc[country_name][league_name] = acc[country_name][league_name] || [] acc[country_name][league_name].push(match) return acc }, {} ) console.log(transformed) This second snippet extends the first one, returning the shape you requested, originally: const data = [{match_id: "255232",country_id: "41",country_name: "England",league_id: "152",league_name: "National League",match_date: "2020-01-01",match_status: "",match_time: "16:00"},{match_id: "255233",country_id: "41",country_name: "Italy",league_id: "152",league_name: "Serie a",match_date: "2020-01-01",match_status: "",match_time: "16:00"},{match_id: "255234",country_id: "41",country_name: "Italy",league_id: "152",league_name: "Serie a",match_date: "2020-01-01",match_status: "",match_time: "16:00"}] const tmp = data.reduce( (acc, { country_name, league_name, ...match }) => { acc[country_name] = acc[country_name] || {} acc[country_name][league_name] = acc[country_name][league_name] || [] acc[country_name][league_name].push(match) return acc }, {} ) const transformed = Object.entries(tmp).map( ([country_name, leagues]) => ({ country_name, entries: Object.entries(leagues).map( ([league_name, events]) => ({ league_name, events }) ) }) ) console.log(transformed) There are a few reasons I prefer the keyed 
object to the array: There's less nesting in the keyed object, and it will be smaller when stringified. It's easier to get all entries for a country and/or a country + league from the object (e.g. transformed.England["National League"]). It's less work to generate the object. One entry per country means the 'correct' data structure is Map (correct in most but not necessarily all cases). The object is a better approximation of a Map than the array. I use (and teach) a similar technique with success in redux. Rather than just storing an array of objects returned by an API, I also store an object based on the array that is indexed by the objects' ids. It's much easier to work with this object than it is with the raw array in many cases: const arr = [{ id: 'a', foo: 'bar' }, { id: 'b', foo: 'baz' }] const obj = { a: { foo: 'bar' }, b: { foo: 'baz' } } Here's a quick snippet showing how to go from the object to your desired React output, if it turns out the object is a useful intermediate data structure: const { Fragment } = React const data = [{match_id: "255232",country_id: "41",country_name: "England",league_id: "152",league_name: "National League",match_date: "2020-01-01",match_status: "",match_time: "16:00"},{match_id: "255233",country_id: "41",country_name: "Italy",league_id: "152",league_name: "Serie a",match_date: "2020-01-01",match_status: "",match_time: "16:00"},{match_id: "255234",country_id: "41",country_name: "Italy",league_id: "152",league_name: "Serie a",match_date: "2020-01-01",match_status: "",match_time: "16:00"}] const transformed = data.reduce( (acc, { country_name, league_name, ...match }) => { acc[country_name] = acc[country_name] || {} acc[country_name][league_name] = acc[country_name][league_name] || [] acc[country_name][league_name].push(match) return acc }, {} ) const League = ({ league, matches }) => ( <Fragment> <h2>{league}</h2> {matches.map(({ match_id }) => (<p>{match_id}</p>))} </Fragment> ) ReactDOM.render( 
Object.entries(transformed).map(([country, leagues]) => ( <Fragment> <h1>{country}</h1> {Object.entries(leagues).map(([league, matches]) => ( <League league={league} matches={matches} /> ))} </Fragment> )), document.body ) <script src="https://cdnjs.cloudflare.com/ajax/libs/react/16.6.3/umd/react.production.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.6.3/umd/react-dom.production.min.js"></script> <div id="react"></div>
Q: Nativescript - FirebaseApp is not initialized in this process I'm really desperate about this problem. I've been trying to solve this issue for such a long time now, but I just can't get it working. My goal is to use Firebase within my Nativescript app, which also uses Angular 7, but every time I try to "begin" using Firebase, I get the following error: Default FirebaseApp is not initialized in this process com.nativescript. Make sure to call FirebaseApp.initializeApp(Context) first. I already tried that. I followed this tutorial: https://github.com/EddyVerbruggen/nativescript-plugin-firebase My Nativescript app is built in "Typescript", so I chose the Typescript way. Since I don't have an app.js, I started the .init in app.ts like so: import * as application from "tns-core-modules/application"; import * as FirebaseApp from "nativescript-plugin-firebase/app" import { environment } from './environments/environment'; FirebaseApp.initializeApp(environment.firebase); import * as firebase from "nativescript-plugin-firebase"; firebase.init({ // Optionally pass in properties for database, authentication and cloud messaging, // see their respective docs. }).then( () => { console.log("firebase.init done"); }, error => { console.log(`firebase.init error: ${error}`); } ); application.run({ moduleName: "app-root" }); When I try FirebaseApp.initializeApp(environment.firebase), it crashes and says in the log: S: firebase.init error: Firebase already initialized Successfully synced application com.nativescript. on device 97412f42. System.err: An uncaught Exception occurred on "main" thread. System.err: Unable to start activity ComponentInfo{com.nativescript./com.tns.NativeScriptActivity}: com.tns.NativeScriptException: Calling js method onCreate failed System.err: Error: java.lang.IllegalStateException: Default FirebaseApp is not initialized in this process com.nativescript.. Make sure to call FirebaseApp.initializeApp(Context) first. System.err: ...
That's interesting, because above it says "already initialized", but right after that it says "FirebaseApp is not initialized". Okay, so I decided to remove firebase.init({...}), because maybe that was just too much init, but obviously nope: System.err: Unable to start activity ComponentInfo{com.nativescript./com.tns.NativeScriptActivity}: com.tns.NativeScriptException: Calling js method onCreate failed System.err: Error: java.lang.IllegalStateException: Default FirebaseApp is not initialized in this process com.nativescript. Make sure to call FirebaseApp.initializeApp(Context) first. System.err: ... Don't worry, I purposely removed com.native.script.THEID, just in case. When I replace FirebaseApp.initializeApp(environment.firebase) with the firebase.init part, it also gives the same error. My environment.ts is in a folder called environment, and that folder is in the same directory as app.ts. This is what environment.ts looks like: export const environment = { production: true, firebase: { apiKey: 'THE API KEY', authDomain: 'THE AUTH DOMAIN', databaseURL: 'THE DATABASE URL', projectId: 'THE PROJECT ID', storageBucket: 'THE STORAGE BUCKET', messagingSenderId: 'THE SENDER ID' }, }; I already downloaded the google-services.json from the Firebase Console (where you set up your app) and pasted it into app/App_Resources/Android/ Please guys, I really don't know what else to try. I even tried the second page of Google Search. If you need any info, please let me know. A: MY SOLUTION Unfortunately I had to abandon this project and create a new one with angular-templates. I must have messed something up while enabling multiDex, since I had trouble enabling it back then. My theory is that, by messing up the multiDex setup, I probably created two instances of my app build when I run tns run android. So, the only solution was to start over again, but at least the Firebase initialization now works just fine!
Q: How can I disable sampling in Azure Application Insights with Node.js I've read the Azure documentation (https://docs.microsoft.com/en-us/azure/azure-monitor/app/sampling). There are examples for .NET and Java, and also JavaScript for the client, but I could not see an example for Node.js (backend). How can I disable sampling in Azure Application Insights with Node.js (backend)? A: As per this doc: By default, the SDK will send all collected data to the Application Insights service. So sampling is disabled by default. You can also control sampling explicitly by setting samplingPercentage: 100 means all telemetry is sent (sampling disabled), while a lower value sends only that percentage, like below: const appInsights = require("applicationinsights"); appInsights.setup("<instrumentation_key>"); appInsights.defaultClient.config.samplingPercentage = 33; // 33% of all telemetry will be sent to Application Insights appInsights.start();
Q: Deep copy of entity and relationships using SQL I have three tables: Store, Book, and Page. A store is one-to-many to books, a book is one-to-many to pages, and they all have the foreign keys set. I want to create copies of the store (and consequently the books and pages) using a SQL query. I've tried using CTEs, but I'm having trouble maintaining the relationships between the entities. I'm not trying to create a new table, just a duplicate of a specific Store row (and its relationships); the ids on the tables are serial. So a copy of Store 1 Book 1 (store_id: 1) Page 1 (book_id: 1) Page 2 (book_id: 1) Would be Store 2 Book 2 (store_id: 2) Page 3 (book_id: 2) Page 4 (book_id: 2) A: I believe that Postgres will preserve the ordering of the serial ids when an insert . . . select has an order by. So, you can do what you want by using returning and creating a mapping between the old and the new values: with s as ( insert into stores ( . . . ) select . . . from stores where store_id = @x returning * ), b as ( insert into books (store_id, . . . ) select s.store_id, . . . from books b cross join s where b.store_id = @x order by b.book_id returning * ), bb as ( select bold.book_id as old_book_id, bnew.book_id as new_book_id from (select b.book_id, row_number() over (order by book_id) as seqnum from books b cross join s where b.store_id = @x ) bold join (select b.*, row_number() over (order by book_id) as seqnum from b ) bnew on bnew.seqnum = bold.seqnum ) insert into pages (book_id, . . .) select bb.new_book_id, . . . from pages p join bb b on p.book_id = bb.old_book_id;
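The key idea in the answer - remember which new serial id replaced each old one, then rewrite the foreign keys through that mapping - can also be sketched procedurally. Here is a minimal Python/sqlite3 illustration (my addition; table and column names are invented for the demo, and the original question is about Postgres, not SQLite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE stores (store_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books  (book_id  INTEGER PRIMARY KEY, store_id INTEGER, title TEXT);
    CREATE TABLE pages  (page_id  INTEGER PRIMARY KEY, book_id  INTEGER, body  TEXT);
    INSERT INTO stores VALUES (1, 'Store 1');
    INSERT INTO books  VALUES (1, 1, 'Book 1');
    INSERT INTO pages  VALUES (1, 1, 'Page 1'), (2, 1, 'Page 2');
""")

def deep_copy_store(cur, old_store_id):
    # copy the store row; lastrowid is the freshly assigned serial id
    cur.execute("INSERT INTO stores (name) SELECT name FROM stores WHERE store_id = ?",
                (old_store_id,))
    new_store_id = cur.lastrowid

    # copy each book, recording the old-id -> new-id mapping as we go
    cur.execute("SELECT book_id, title FROM books WHERE store_id = ?", (old_store_id,))
    book_map = {}
    for old_book_id, title in cur.fetchall():
        cur.execute("INSERT INTO books (store_id, title) VALUES (?, ?)",
                    (new_store_id, title))
        book_map[old_book_id] = cur.lastrowid

    # copy pages, rewriting each foreign key through the mapping
    for old_book_id, new_book_id in book_map.items():
        cur.execute("INSERT INTO pages (book_id, body) "
                    "SELECT ?, body FROM pages WHERE book_id = ?",
                    (new_book_id, old_book_id))
    return new_store_id

new_id = deep_copy_store(cur, 1)
copied_pages = cur.execute(
    "SELECT COUNT(*) FROM pages JOIN books USING (book_id) WHERE store_id = ?",
    (new_id,)).fetchone()[0]
```

The pure-SQL version in the answer does the same mapping declaratively, via row_number() over the old and new book sets.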
Q: Spec creating tmp file on CircleCI is failing I have an RSpec test which creates a tmp file that is then read in the test. CircleCI fails saying Failure/Error: file_name = generate_csv_file(items) Errno::ENOENT: No such file or directory @ rb_sysopen - /home/ubuntu/project/tmp/batch_1443573588.csv A: CircleCI by default does not have a tmp directory for Rails projects. Your options are to: use the system /tmp add tmp to the git repository add a post-checkout hook in circle.yml that will create it
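The first option (use the system temp directory) is usually the least fragile, because the OS guarantees that directory exists. As an illustration of the idea (my addition; the question itself is Ruby/RSpec, where Dir.tmpdir plays the same role, and the helper name below just mirrors the failing spec), here it is in Python with the stdlib tempfile module:

```python
import csv
import os
import tempfile

def generate_csv_file(rows):
    # write to the system temp dir instead of a project-local tmp/ folder,
    # so the code works even when the checkout has no tmp/ directory
    fd, path = tempfile.mkstemp(suffix=".csv")
    with os.fdopen(fd, "w", newline="") as f:
        csv.writer(f).writerows(rows)
    return path

path = generate_csv_file([["id", "name"], ["1", "widget"]])
with open(path, newline="") as f:
    rows = list(csv.reader(f))
os.remove(path)
```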
Q: Signal Handling in C How can I implement signal handling for Ctrl-C and Ctrl-D in C... So if Ctrl-C is pressed, the program will ignore it and try to get input from the user again... If Ctrl-D is pressed, the program will terminate... My program follows: #include <stdio.h> int main(){ char msg[400]; while(1){ printf("Enter: "); fgets(msg,400,stdin); printf("%s\n",msg); } } Thanks, Dave A: When dealing with POSIX signals, you have two means at your disposal. First, the easy (but deprecated) way, signal(). Second, the more elegant, current but complex way, sigaction(). Please use sigaction() unless you find that it isn't available on some platform that you need to work on. This chapter of the glibc manual explains the differences between the two and gives good example code on how to use both. It also lists the signals that can be handled, recommends how they should be handled, and goes more in depth on how to tell how any given signal is (or is not) currently being handled. That's way more code than I'd want to paste into an answer here, hence the links. It really is worth the hour or two it would take you to read the links and work through the examples. Signal handling (especially in programs that daemonize) is extremely important. A good program should handle all fatal signals that can be handled (e.g. SIGHUP) and explicitly ignore signals that it might not be using (e.g. SIGUSR1 / SIGUSR2). It also won't hurt to study the difference between normal and real-time signals, at least up to understanding how the kernel coalesces the former but not the latter. Once you work through it, you'll probably feel inclined to write up an easy-to-modify set of functions to handle your signals and re-use that code over and over again.
Sorry for not giving a quick and dirty code snippet to show you how to solve your immediate need, but this isn't a quick and dirty topic :) A: Firstly, Ctrl+D is an EOF indicator which you cannot trap; when a program is waiting for input, hitting Ctrl+D signifies end of file and that no more input is expected. On the other hand, using Ctrl+C to terminate a program - that is SIGINT, which can be trapped by doing this: #include <stdio.h> #include <signal.h> #include <stdlib.h> #include <stdarg.h> static void signal_handler(int); static void cleanup(void); void init_signals(void); void panic(const char *, ...); struct sigaction sigact; char *progname; int main(int argc, char **argv){ progname = *(argv); atexit(cleanup); init_signals(); // do the work exit(0); } void init_signals(void){ sigact.sa_handler = signal_handler; sigemptyset(&sigact.sa_mask); sigact.sa_flags = 0; sigaction(SIGINT, &sigact, (struct sigaction *)NULL); } static void signal_handler(int sig){ if (sig == SIGINT) panic("Caught signal for Ctrl+C\n"); } void panic(const char *fmt, ...){ char buf[50]; va_list argptr; va_start(argptr, fmt); vsnprintf(buf, sizeof(buf), fmt, argptr); /* bounded write, avoids overflowing buf */ va_end(argptr); fprintf(stderr, "%s", buf); /* never pass a built string as the format argument */ exit(-1); } void cleanup(void){ sigemptyset(&sigact.sa_mask); /* Do any cleaning up chores here */ }
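For comparison (not part of either answer), the same SIGINT-trapping experiment is tiny in Python on a POSIX system; the process installs a handler, then delivers the signal to itself to prove the handler ran instead of the default KeyboardInterrupt:

```python
import os
import signal

caught = []

def handler(signum, frame):
    # record the signal instead of letting the default behaviour interrupt us
    caught.append(signum)

signal.signal(signal.SIGINT, handler)   # trap Ctrl+C / SIGINT
os.kill(os.getpid(), signal.SIGINT)     # deliver SIGINT to ourselves
```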
Q: HTML POST form submit is mangling URL I am trying to get a HTML/PHP form to submit properly. Some details: base url = http://localhost/directory page = page/add complete address = http://localhost/directory/page/add Using htaccess to rewrite urls so http://localhost/directory/page/add is actually http://localhost/directory/index.php?q=page/add My HTML POST action is "page/add" so that the front controller knows which function to fire to sanitize and submit the data (it acts as a 'form id'). The page loads fine at http://localhost/directory/page/add but when I click on the submit button, the URL gets mangled to page/page/add. And every time I press "submit" I get another "page" added to the url. So 5 clicks will get "page/page/page/page/page/page/add" I can't seem to find why I am getting that "extra" "page". The actual PHP error (page/page/add doesn't exist in $routes since it isn't a valid route): Notice: Undefined index: page/page/add in C:\xampp\htdocs\script\includes\common.inc on line 92 Here is the function at line 92: function route_path($path = NULL) { $routes = get_routes(); //Returns array: approved "urls => function callbacks" if($path === NULL) { $path = get_path(); //Returns $_GET['q'] with trim and strip_tags } $function = $routes[$path]; <<<<<----This is LINE 92 if(isset($function)) { $form_name = str_replace('/', '_', $path); // page/add = function page_add() } if(function_exists($function)) { call_user_func($function, $form_name); } else { //TODO: Redirect to Login screen. } } The basic HTML is: <form action="page/add" method="post" /> //Form elements <input type="submit" value="Submit" /> </form> Thanks for the help. UPDATE: What I did was add the <base> tag to my HTML templates. This allows me to keep the action as page/add (since it is also a route in my simple router/dispatcher). A: By using a relative path, you're telling the form to submit at the existing path plus your action. 
So if you are at http://example.com/page/add, the form uses http://example.com/page/ as a base and adds the action page/add resulting in a POST to http://example.com/page/page/add. You can still use a relative path, just change the action accordingly: <form action="add" method="post" />
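The browser follows the standard RFC 3986 relative-reference algorithm here, which you can reproduce with Python's urllib.parse.urljoin (an illustration I've added) to see exactly why the URL keeps growing:

```python
from urllib.parse import urljoin

current_page = "http://example.com/page/add"

# a relative action resolves against the *directory* of the current page,
# i.e. http://example.com/page/, not against the full current URL
mangled = urljoin(current_page, "page/add")   # what the original form did
fixed = urljoin(current_page, "add")          # the corrected action
```

Each submit re-resolves "page/add" against the now-longer path, which is why every click appended another "page/".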
Q: Map Array of objects to grouped object with arrays My problem is pretty straightforward, I think; I just can't seem to figure it out. I need to go from an array of objects: let testArrays = [ { "containerType": "2 Gallon", "wasteType": "10000001", "vol": "2 Gallons" }, { "containerType": "2 Gallon", "wasteType": "10000001", "vol": "2 Gallons" }, { "containerType": "2 Gallon", "wasteType": "10000002", "vol": "2 Gallons" }, { "containerType": "2 Gallon", "wasteType": "10000002", "vol": "2 Gallons" }, { "containerType": "2 Gallon", "wasteType": "10000003", "vol": "2 Gallons" }, { "containerType": "5 Gallon", "wasteType": "10000003", "vol": "5 Gallons" }, { "containerType": "5 Gallon", "wasteType": "10000003", "vol": "5 Gallons" }, { "containerType": "5 Gallon", "wasteType": "10000003", "vol": "5 Gallons" }, { "containerType": "5 Gallon", "wasteType": "10000004", "vol": "5 Gallons" } ] To a grouped object with arrays inside, grouped by "wasteType" above with counts. The volume would be created by multiplying the count by the value in "vol", which I can get with parseFloat, I believe: let wastes = { "10000001": [ { "containerType": "2 Gallon", "count": 2, "vol": "4 Gallons" } ], "10000002": [ { "containerType": "2 Gallon", "count": 2, "vol": "4 Gallons" } ], "10000003": [ { "containerType": "1 Gallon", "count": 1, "vol": "2 Gallons" }, { "containerType": "5 Gallon", "count": 3, "vol": "15 Gallons" } ], "10000004": [ { "containerType": "5 Gallon", "count": 1, "vol": "5 Gallons" } ], } I know I should use array.map() for this, but I am not sure how to do it. I have looked for this specific example everywhere and can't find it. All help is greatly appreciated.
A: You need to use reduce instead of map Loop through array use wasteType as property name If property is already not on output object initialize with current elements values Increase count by 1 Loop over the final again in order to get vol, which is count * vol let testArrays = [{"containerType": "2 Gallon","wasteType": "10000001","vol": "2 Gallons"}, {"containerType": "2 Gallon","wasteType": "10000001","vol": "2 Gallons"}, {"containerType": "2 Gallon","wasteType": "10000002","vol": "2 Gallons"}, {"containerType": "2 Gallon","wasteType": "10000002","vol": "2 Gallons"}, {"containerType": "2 Gallon","wasteType": "10000003","vol": "2 Gallons"}, {"containerType": "5 Gallon","wasteType": "10000003","vol": "5 Gallons"}, { "containerType": "5 Gallon","wasteType": "10000003","vol": "5 Gallons"}, {"containerType": "5 Gallon","wasteType": "10000003","vol": "5 Gallons"}, {"containerType": "5 Gallon","wasteType": "10000004","vol": "5 Gallons"}] let final = testArrays.reduce((op, { containerType, wasteType,vol}) => { let obj = { containerType, vol, count: 0 } op[wasteType] = op[wasteType] || new Map([[containerType,obj]]) if(op[wasteType].has(containerType)){ op[wasteType].get(containerType).count++ } else{ obj.count++ op[wasteType].set(containerType, obj) } return op }, {}) for(let key in final){ final[key] = [...final[key].values()].map(value=>{ let { containerType, vol, count} = value let finalVol = (vol.replace(/[.\D+]/g, '') * count) + " Gallons" return { containerType, vol:finalVol, count } }) } console.log(final)
Q: Calling function without declaring class I want to achieve the "this line" in the following code. The most logical way is to make GetDog static, but then I cannot use "this". Is there a way to get around it? (Note: since I was trying things out, there are several lines not relevant to the question.) #include <iostream> using namespace std; class Dog { public: static int a; Dog& GetDog(int k) { this->a = k; return *this; } int bark() { return a*a; } }; int Dog::a=0; int main() { Dog puppy; int i = puppy.GetDog(4).bark(); cout<<i<<endl; cout<<Dog::a<<endl; //i = Dog::GetDog(6).bark(); //this line return 0; } Not that doing this has much advantage (just that instantiating the class is not required), but I saw it used in some package I am using. I kind of want to understand how it is done. class EXOFastFourierTransformFFTW { public: static EXOFastFourierTransformFFTW& GetFFT(size_t length); virtual void PerformFFT(const EXODoubleWaveform& aWaveform, EXOWaveformFT& aWaveformFT); ... int main() { EXODoubleWaveform doublewf; EXOWaveformFT wfFT; ... EXOFastFourierTransformFFTW::GetFFT(doublewf.GetLength()).PerformFFT(doublewf,wfFT); ... This static function usage also appears in Geant4, which is probably written by physicists, so it might not be the wisest programming practice. I still want to know if doing so has other advantages, though. From the earlier downvote I can see that this is probably not as standard a technique as I thought. Please comment before downvoting. A: It seems to be an implementation of the Meyers singleton. Let me explain: in the example given, the class EXOFastFourierTransformFFTW does not seem to expose a public constructor, but GetFFT returns a reference to an EXOFastFourierTransformFFTW object.
It looks like this implementation: class Singleton { public: static Singleton& Instance() { static Singleton obj; return obj; } private: Singleton(); }; In Andrei Alexandrescu's book Modern C++ Design, it is said: This simple and elegant implementation was first published by Scott Meyers; therefore, we'll refer to it as the Meyers Singleton. The Meyers singleton relies on some compiler magic. A function-static object is initialized when the control flow is first passing its definition. Don't confuse static variables that are initialized at runtime[...] [...] In addition, the compiler generates code so that after initialization, the runtime support registers the variable for destruction. So it is fine to use static to call a method on a class that has not been instantiated, but don't do it if it is not necessary... Here, to implement the Singleton pattern, you have to. But now, if you want, your class Dog could look like this: class Dog { public: static Dog& GetDog(int k) { static Dog obj( k ); return obj; } int bark() { return a*a; } private: int a; Dog( int iA ) : a( iA ) {} };
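The "construct lazily, exactly once, on first use" behaviour at the heart of the Meyers singleton shows up in other languages too. As a loose analogy only (my addition; it illustrates the idiom, not C++'s initialization or destruction guarantees), here is a Python factory memoised with functools.lru_cache:

```python
from functools import lru_cache

class FFT:
    def __init__(self, length):
        self.length = length  # stand-in for an expensive one-time setup

@lru_cache(maxsize=None)
def get_fft(length):
    # constructed on the first call for each length, then reused forever,
    # much like a function-local static in C++
    return FFT(length)

a = get_fft(1024)
b = get_fft(1024)  # same cached instance as a
c = get_fft(2048)  # a different length gets its own instance
```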
Q: How to increase all integers in all the sublists? a = [[1,2,3,4],[5,6],[7,8,9]] to name_doesn't_matter = [[11,12,13,14],[15,16],[17,18,19]] The number of sublists is determined by user input, so the answer should work for any number of sublists. A: Use list comprehensions: foo = [[1, 2, 3, 4], [5, 6], [7, 8, 9]] bar = [[x + 10 for x in inner] for inner in foo]
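A small companion sketch (my addition): if you would rather mutate the original sublists in place than build new ones, plain nested loops do it:

```python
a = [[1, 2, 3, 4], [5, 6], [7, 8, 9]]

# in-place variant: modify each sublist instead of creating new lists
for inner in a:
    for i in range(len(inner)):
        inner[i] += 10
```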
Q: JSON 'undefined' result I'm very new to JSON and ran into a slight bump. I have a feed where, if there is no data for commentnotes, that field comes through as "undefined". I simply want to remove the "undefined" text and leave it empty. Here is my JS: document.getElementById("tooltipwrap0").innerHTML = '<span style="font-size:22px; font-weight:600;">' + data.users[0].member + " " + data.users[0].party + "-" + data.users[0].state + "<br/>" + '<span style="font-size:18px; font-weight:300;">' + data.users[0].commentnotes + ""; Here is a sample of my feed: var data={ "users": [ { "member": "first name", "party": "F", "state": "Ala.", }, Any help is appreciated, thanks! A: Use the in operator to test for the key's existence in a condition, like if ("commentnotes" in data.users[0]) // append data.users[0].commentnotes to your string However, since the undefined value it would yield is falsy, you can simply try to get it and use the empty string if there is nothing: data.users[0].commentnotes ? data.users[0].commentnotes : "" which can be shortened to data.users[0].commentnotes || "" Notice that you will have to wrap it in parentheses when you're using this expression inside the string concatenation.
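The same "missing field falls back to empty string" pattern exists in most languages; for instance (an added aside, with field names mirroring the feed above), in Python a dict lookup with a default does what the || trick does in JavaScript:

```python
users = [{"member": "first name", "party": "F", "state": "Ala."}]

# dict.get returns the default instead of raising when the key is absent,
# mirroring JavaScript's `data.users[0].commentnotes || ""`
notes = users[0].get("commentnotes", "")
line = f'{users[0]["member"]} {users[0]["party"]}-{users[0]["state"]} {notes}'.rstrip()
```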
Q: How to get SQLite 'VACUUM' Progress Is there a way to get the progress of SQLite 'VACUUM'? I am using this line of code in Java: connection1.createStatement().executeUpdate("VACUUM"); The user (myself) has to wait anywhere from a few seconds to a few minutes; I know that the actual .db file is being rewritten with the help of a journal file that is created while the command executes. Can I get an estimate using Java IO or something? Thanks for the help. A: I found the answer to my question. I know the size of the actual .db file, so I wrote a Service in JavaFX which calculates the size of the .db-journal file every 50 milliseconds. By checking the journal file's size frequently, I can see what percentage has been built relative to the actual .db file: package windows; import java.io.File; import javafx.concurrent.Service; import javafx.concurrent.Task; /** Get the progress of Vacuum Operation */ public class VacuumProgress extends Service<Void> { File basicFile; File journalFile; /** * Starts the Vacuum Progress Service * * @param basicFile * @param journalFile */ public void start(File basicFile, File journalFile) { this.basicFile = basicFile; this.journalFile = journalFile; reset(); start(); } @Override protected Task<Void> createTask() { return new Task<Void>() { @Override protected Void call() throws Exception { System.out.println("Started..."); long bfL = basicFile.length(); while (!journalFile.exists()) { Thread.sleep(50); System.out.println("Journal File not yet Created!"); } long jfL = journalFile.length(); while (jfL <= bfL) { updateProgress(jfL = journalFile.length(), bfL); Thread.sleep(50); } System.out.println("Exited Vacuum Progress Service"); return null; } }; } }
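The size-ratio estimate used by the JavaFX service can be isolated into a tiny helper. A Python sketch of just the percentage calculation (my addition, exercised against throwaway files standing in for the .db / -journal pair rather than a real VACUUM run):

```python
import os
import tempfile

def vacuum_progress(db_path, journal_path):
    """Rough VACUUM progress estimate: journal size / original db size,
    clamped to [0, 1]. Returns 0.0 until the journal file appears."""
    if not os.path.exists(journal_path):
        return 0.0
    db_size = os.path.getsize(db_path)
    if db_size == 0:
        return 1.0
    return min(os.path.getsize(journal_path) / db_size, 1.0)

# demo with throwaway files standing in for the .db / -journal pair
tmp = tempfile.mkdtemp()
db = os.path.join(tmp, "data.db")
jr = os.path.join(tmp, "data.db-journal")
with open(db, "wb") as f:
    f.write(b"x" * 1000)
before = vacuum_progress(db, jr)   # journal not created yet
with open(jr, "wb") as f:
    f.write(b"x" * 250)
quarter = vacuum_progress(db, jr)  # 250 of 1000 bytes written
```

Note this is only an estimate: the rebuilt database can be smaller than the original, so the ratio may never quite reach 1.0.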
Q: How can I update a JLabel? I need to update a JLabel every time I press a certain button, which fetches a form from the database to generate a chart, but this only works the first time. My idea was the following: if the JLabel is empty, add the chart produced by the query; if it is already filled, remove the old chart and set the new one. This is what my JFrame looks like: public class MyFrame extends JFrame { private JPanel contentPane; public static void main(String[] args) { EventQueue.invokeLater(new Runnable() { public void run() { try { MyFrame frame = new MyFrame(); frame.setVisible(true); } catch (Exception e) { e.printStackTrace(); } } }); } /** * Create the frame. */ public MyFrame() { setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); setBounds(100, 100, 450, 300); contentPane = new JPanel(); contentPane.setBorder(new EmptyBorder(5, 5, 5, 5)); setContentPane(contentPane); contentPane.setLayout(null); JLabel lblMeuIcone = new JLabel(""); lblMeuIcone.setBounds(23, 11, 363, 205); contentPane.add(lblMeuIcone); JButton btnTrocarI = new JButton("TrocarImagem"); btnTrocarI.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent arg0) { GeradorDeGraficos graficos = new GeradorDeGraficos(); int[] valores = {1,1,2,23,3}; graficos.graficoPeriodoDeCrescimento(valores, "grafico 01", "valores", "valores"); try { graficos.salvarGrafico(new FileOutputStream("MyChart01.png")); } catch (FileNotFoundException e) { // TODO Auto-generated catch block e.printStackTrace(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } if(lblMeuIcone.getIcon() == null) { File file = new File("MyChart01.png"); ImageIcon icon = new ImageIcon(file.getAbsolutePath()); lblMeuIcone.setIcon(icon); }else { lblMeuIcone.setIcon(null); File file = new File("MyChart01.png"); ImageIcon icon = new ImageIcon(file.getAbsolutePath()); lblMeuIcone.setIcon(icon); } } }); btnTrocarI.setBounds(70, 227, 89, 23);
contentPane.add(btnTrocarI); JButton btnTrocarII = new JButton("Trocar nova"); btnTrocarII.setBounds(311, 227, 89, 23); btnTrocarII.addActionListener(new ActionListener() { @Override public void actionPerformed(ActionEvent e) { GeradorDeGraficos graficos = new GeradorDeGraficos(); int[] valores = {10,5,6,23,33}; graficos.graficoPeriodoDeCrescimento(valores, "grafico 02", "valores", "valores"); try { graficos.salvarGrafico(new FileOutputStream("MyChart01.png")); } catch (FileNotFoundException e1) { // TODO Auto-generated catch block e1.printStackTrace(); } catch (IOException e1) { // TODO Auto-generated catch block e1.printStackTrace(); } if(lblMeuIcone.getIcon() == null) { File file = new File("MyChart01.png"); ImageIcon icon = new ImageIcon(file.getAbsolutePath()); lblMeuIcone.setIcon(icon); }else { lblMeuIcone.setIcon(null); File file = new File("MyChart01.png"); ImageIcon icon = new ImageIcon(file.getAbsolutePath()); lblMeuIcone.setIcon(icon); } } }); contentPane.add(btnTrocarII); } } The class that generates the chart: public class GeradorDeGraficos { private double[] valores; private int inicio; private int fim; float dash[] = { 10.f }; private DefaultCategoryDataset data; private JFreeChart grafico; private JFreeChart graficoDeLinha; public JFreeChart graficoPeriodoDeCrescimento(int[] lista, String titulo, String labelBottom, String labelLeft) { DefaultCategoryDataset dataset = new DefaultCategoryDataset(); try { for (int i = 0; i < lista.length; i++) { dataset.addValue(lista[i], "Média", "valor" + i); } } catch (Exception e) { JOptionPane.showMessageDialog(null, "deu pau no grafico"); } graficoDeLinha = ChartFactory.createLineChart(titulo, labelBottom, labelLeft, dataset, PlotOrientation.VERTICAL, true, true, false); // font Font fonteNova = new Font("TimesRoman", Font.PLAIN, 18); CategoryItemRenderer renderer = graficoDeLinha.getCategoryPlot().getRenderer(); CategoryPlot plot = graficoDeLinha.getCategoryPlot(); plot.setBackgroundPaint(Color.WHITE);
plot.setDomainGridlinePaint(Color.GREEN); plot.setAxisOffset(new RectangleInsets(12.0, 12.0, 5.0, 5.0)); plot.setRangeGridlinePaint(Color.RED); // color and stroke of the series renderer.setSeriesPaint(0, Color.BLUE); renderer.setSeriesStroke(0, new BasicStroke(1.0f, BasicStroke.CAP_BUTT, BasicStroke.JOIN_MITER, 10.f, dash, 0.0f)); renderer.setSeriesPositiveItemLabelPosition(0, new ItemLabelPosition(ItemLabelAnchor.CENTER, TextAnchor.BASELINE_CENTER)); renderer.setSeriesOutlineStroke(0, new BasicStroke(2.0f, BasicStroke.CAP_BUTT, BasicStroke.JOIN_MITER, 10.f, dash, 0.0f)); renderer.setSeriesOutlinePaint(0, Color.GREEN); // legends LegendItemCollection legendas = new LegendItemCollection(); LegendItem legenda1 = new LegendItem("Crescimento"); legenda1.setSeriesIndex(0); legenda1.setFillPaint(Color.BLUE); legenda1.setLabelPaint(Color.BLUE); legenda1.setLabelFont(fonteNova); legendas.add(legenda1); plot.setFixedLegendItems(legendas); return graficoDeLinha; } public void salvarGrafico(OutputStream out) throws IOException { ChartUtilities.writeChartAsPNG(out, graficoDeLinha, 300, 200); } } Any questions, just ask. A: According to the ImageIcon documentation, the constructor used in the code - ImageIcon(String) - uses a MediaTracker to load the image. The MediaTracker caches the image and reuses the cached copy instead of reading the (same) file again - since the file name did not change, it assumes the image was not modified. Solution: use ImageIO to read the image and the ImageIcon(Image) constructor: // generate the chart and save it to 'file' BufferedImage img; try { img = ImageIO.read(file); } catch (IOException ex) { ex.printStackTrace(); return; } ImageIcon icon = new ImageIcon(img); label.setIcon(icon);
Q: If a law is found to be unconstitutional; can an amendment to the state constitution matter? In the general sense, let's say a state law is passed, and the courts decide that the law is unconstitutional because the United States constitution prohibits such laws. Can a state amend its own constitution to allow the law; or would that not help because it still violates the US constitution? For a specific example, North Carolina passed a voter ID law which was struck down by the courts, finding that the law was created with the intent of targeting African Americans and suppressing their votes. I was unable to find more specific constitutional reasoning given, but it seems like this would be a 14th amendment issue; that the law was shown to deprive certain classes of citizens of their rights. Now, in the upcoming election, North Carolina has a ballot initiative to add an amendment to the NC State Constitution to require an id to vote. However, would such an amendment even be allowed; if it could still be found to violate the US Constitution's 14th amendment? Or, can a state's constitution override the US constitution in these matters? A: Article VI of the US Constitution says: This Constitution, and the Laws of the United States which shall be made in Pursuance thereof; and all Treaties made, or which shall be made, under the Authority of the United States, shall be the supreme Law of the Land; and the Judges in every State shall be bound thereby, any Thing in the Constitution or Laws of any State to the Contrary notwithstanding. So no, it doesn't matter whether the state puts it in a statute or in its constitution. Everything at the state level is subordinate to the federal constitution. A: The United States constitution has supremacy over state constitutions in areas where the two overlap. However, for many areas they do not overlap. 
The US constitution specifically says that if something is not explicitly authorized to the federal government, it is left to the states or to the individuals. However, voting is specifically covered by the constitution. The US Supreme Court has upheld ID requirements in some states. Thus, if the North Carolina amendment only says, "Must show ID to vote," it may pass constitutional muster. If the amendment were to instead say, "Blacks can't vote," then it would surely not pass constitutional muster. I'm not familiar with the specifics of the previous North Carolina law nor the proposed amendment. In general, the complaint is that ID requirements have a disparate impact. If the disparate impact is addressed, then the ID requirement can move forward. Addressing the disparate impact generally involves making it easier to get ID without paying fees. This can mean either waiving fees for poor people or making ID free in general. Because this would be an amendment to the state constitution, the Supreme Court might either say that it is unenforceable (people have to be allowed to vote without ID) or that the state must make free ID available to poor people (who are disproportionately members of minorities). It seems less likely that they would block the amendment entirely. It seems more likely that they will allow it to become part of the state constitution and then work around it. There is nothing in the US constitution about ID requirements. Whether or not to require ID is a state decision. But the federal government may still insist that states make voting freely available to all noncriminal adults. As such, the federal government may require that ID be freely available if ID is a requirement for voting.
Q: Is "you are looking at the clock through the mirror" correct? Mindy: That clock on the wall is so strange! Henry: What’s so strange about it? Mindy: Its minute hand is moving in counterclockwise direction. Henry: No, it’s moving in clockwise direction! Mindy: No, it’s moving in counterclockwise direction! Henry: Hey, you are looking at the clock through the mirror! Is the last sentence in this dialogue correct? If not, what's the best way to point out Mindy's mistake here? A: Through the mirror is uncommon usage, apparently in both British and American English. Typically one would use in the mirror, so Oh! You're looking at the clock in the mirror. Perhaps Oh! You're seeing the clock through the mirror would be best, as seeing... through is a common construct (more commonly with windows). The verb seeing emphasizes the perception, whereas looking emphasizes the directing of attention.
Q: Using reCAPTCHA with PHP on my webform not working when tested with XAMPP? I just tried to insert reCAPTCHA into my webform and test it with XAMPP 1.8.1. Here is what happens: 1. reCAPTCHA shows at the bottom of my form successfully 2. I fill out the form and the email is successfully forwarded to my email address The thing is that reCAPTCHA field is not mandatory, so no matter if I enter the required two words or not, I still receive the email. Shouldn't this reCAPTCHA field be mandatory so that I cannot receive the message if the user didn't fill reCAPTCHA field??? DOn't know what is it that I am doing wrong. Here is my email.php code (reCAPTCHA code is at the bottom): <?php require_once 'PHPMailer/class.phpmailer.php'; // Form url sanitizing $php_self = filter_input(INPUT_SERVER, 'PHP_SELF', FILTER_SANITIZE_FULL_SPECIAL_CHARS); // Variable initializing $name = ''; $email = ''; $message = ''; $errors = array(); // Is form sent? if( isset( $_POST['submit'] ) ) { // Validate $_POST['name'] $name = filter_input( INPUT_POST, 'name', FILTER_SANITIZE_STRING ); if( '' == $name ) { $errors[] = 'Please enter a valid name'; } // Validate $_POST['email'] $email = filter_input( INPUT_POST, 'email', FILTER_SANITIZE_EMAIL ); if( !filter_var($email, FILTER_VALIDATE_EMAIL) ) { $errors[] = 'Please enter a valid email'; } // Validate $_POST['message'] $message = filter_input( INPUT_POST, 'message', FILTER_SANITIZE_STRING ); if( '' == $message ) { $errors[] = 'Please enter a valid message'; } // If no errors if( empty( $errors ) ) { // Values are valid, lets send an email $mail = new PHPMailer(); // Base parameters that are working for me $mail->IsSMTP(); // Use SMTP $mail->Host = "smtp.gmail.com"; // GMail $mail->Port = 587; // If not working, you can try 465 $mail->SMTPSecure = "tls"; // If not working, you can try "ssl" $mail->SMTPAuth = true; // Turn on SMTP authentication // Adjust these lines $mail->Username = "[email protected]"; $mail->Password = "mypassword"; 
$mail->SetFrom($email, $name); $mail->AddAddress('[email protected]', 'MyName'); // This is the email address (inbox) to which the message from a webform will be sent $mail->Subject = "Web Form Message"; // This will be the subject of the message(s) you receive through the webform $mail->Body = $message; // Sending if(!$mail->Send()) { // First error message is just for debugging. This don't generate messages a user should read // Comment this and uncomment the second message for a more user friendly message $errors[] = "Mailer Error: " . $mail->ErrorInfo; //$errors[] = "email couldn't be send"; // Output Sanitizing for repopulating form $name = filter_var( $name, FILTER_SANITIZE_FULL_SPECIAL_CHARS ); $email = filter_var( $email, FILTER_SANITIZE_FULL_SPECIAL_CHARS ); $message = filter_var( $message, FILTER_SANITIZE_FULL_SPECIAL_CHARS ); } else { // Generating a success message is good idea echo "<p>Thank you <strong>$name</strong>, your message has been successfully submitted.</p>"; // Clear fields $name = ''; $email = ''; $message = ''; } } } ?> <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <title>self referencing form</title> <link rel='stylesheet' href='http://code.jquery.com/ui/1.9.1/themes/base/jquery-ui.css'/> <link rel="stylesheet" href="main.css"> </head> <body> <div id="button" class="title"> <h6>Contact</h6> </div> <div id="dropbox"> <header class="title"> <h6>Whats up?</h6> </header> <?php if(!empty($errors)): ?> <ul class="error"> <li><?php echo join('</li><li>', $errors); ?></li> </ul> <?php endif; ?> <div class="contact-form"> <form action="<?php echo $php_self; ?>" method="post"> <!-- input element for the name --> <h6><img src="img/person.png" alt=""> Name</h6> <input type="text" name="name" value="<?php echo $name; ?>" placeholder="Please enter your full name here" required> <!-- input element for the email --> <h6><img src="img/email.png" alt=""> E-mail</h6> <input type="email" name="email" value="<?php echo $email; ?>" 
placeholder="Please enter your e-mail address" required> <!-- input element for the message --> <h6><img src="img/message.png" alt=""> Message</h6> <textarea name="message" placeholder="Type your message..." required><?php echo $message; ?></textarea> <!-- reCAPTCHA CODE --> <form method="post" action="verify.php"> <?php require_once('recaptchalib.php'); $publickey = "my_public_key_goes_here"; // you got this from the signup page echo recaptcha_get_html($publickey); ?></br> <input name="submit" type="submit" value="Submit" /> </form> </form> </div> </div> <script src='http://code.jquery.com/jquery-1.9.1.min.js'></script> <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.9.1/jquery-ui.min.js"></script> <script src='dropbox.js'></script> </body> </html> Here is my verify.php code: <?php require_once('recaptchalib.php'); $privatekey = "my_private_code_goes_here"; $resp = recaptcha_check_answer ($privatekey, $_SERVER["REMOTE_ADDR"], $_POST["recaptcha_challenge_field"], $_POST["recaptcha_response_field"]); if (!$resp->is_valid) { // What happens when the CAPTCHA was entered incorrectly die ("The reCAPTCHA wasn't entered correctly. Go back and try it again." . "(reCAPTCHA said: " . $resp->error . ")"); } else { // Your code here to handle a successful verification } ?> I also downloaded recaptchalib.php file and all three files are inside my C:/xampp/htdocs/email folder. Then I requested localhost/email/email.php and filled out the form. I receive the message inside [email protected] but reCAPTCHA fields are not mandatory. So, how do I correct this??? Thanks in advance!!! Oops! 
Forgot to add my css file: @import url("reset.css"); #button { position: absolute; top: 0; right: 10%; color: #eee; z-index: 2; width: 175px; background: #c20000; text-align: center; height: 40px; -webkit-border-radius: 0px 0px 2x 2px; border-radius: 0px 0px 2px 2px; font-family: Tahoma, Geneva, sans-serif; font-size: 1em; text-transform: uppercase; } #button:hover { background: #da0000; cursor: pointer; } #button > h6{ line-height: 40px; margin: 0px; padding-top: 0px; font-family: Tahoma, Geneva, sans-serif; font-size: 0.8em; text-transform: uppercase; } #dropbox { position: absolute; top: 0px; right: 10%; color: #eee; z-index: 1; background: #222222; width: 350px; display: none; -webkit-box-shadow: 0px 0px 16px rgba(50, 50, 50, 0.75); -moz-box-shadow: 0px 0px 16px rgba(50, 50, 50, 0.75); box-shadow: 0px 0px 16px rgba(50, 50, 50, 0.75); } #dropbox .title { height: 40px; background: #414141; } #dropbox .title > h6{ line-height: 40px; padding-left: 58px; margin-top: 0px; } #dropbox { font-family: Tahoma, Geneva, sans-serif; font-size: 1em; text-transform: uppercase; } #dropbox .contact-form { margin: 10px; } #dropbox .contact-form h6{ margin: 5px; } #dropbox input { font-family: Tahoma, Geneva, sans-serif; font-size: 0.9em; outline: none; border: none; width: 320px; max-width: 330px; padding: 5px; margin: 10px 0px; background: #444444; color: #eee; } #dropbox textarea { height: 70px; font-family: Tahoma, Geneva, sans-serif; font-size: 0.9em; outline: none; border: none; width: 320px; max-width: 330px; padding: 5px; margin: 10px 0px; background: #444444; color: #eee; } #dropbox input[type=submit] { margin: 0px; width: 330px; cursor: pointer; color: #999; font-family: Tahoma, Geneva, sans-serif; font-size: 0.8em; text-transform: uppercase; font-weight: bold; } #dropbox input[type=submit]:hover { color: #eee; background: #c20000; } A: You forget to validate reCaptcha in email.php. You can't have form inside form. 
Create one form with name input, email input, message input and reCaptcha (without <form method="post" action="verify.php">). Use code from verify.php inside email.php like this: // load recaptcha library require_once('recaptchalib.php'); // config - you can read it from some config file $publickey = "my_public_key_goes_here"; // you got this from the signup page $privatekey = "my_private_code_goes_here"; // rest of your code // Is form sent? if( isset( $_POST['submit'] ) ) { // begin: reCAPTCHA CODE - validate answer $resp = recaptcha_check_answer ($privatekey, $_SERVER["REMOTE_ADDR"], $_POST["recaptcha_challenge_field"], $_POST["recaptcha_response_field"]); if (!$resp->is_valid) { $errors[] = 'Please enter a valid captcha'; } // end: reCAPTCHA CODE - validate answer // Validate $_POST['name'] // rest of your code } <!-- rest of your HTML --> <form method="POST"> <!-- you don't need `action` for the same page --> <!-- rest of your form --> <!-- begin: reCAPTCHA CODE - print widget --> <?php echo recaptcha_get_html($publickey); ?></br> <!-- end: reCAPTCHA CODE - print widget --> <input name="submit" type="submit" value="Submit" /> </form> Edit: Simplest working example: <?php require_once('recaptchalib.php'); $publickey = "your_public_key"; $privatekey = "your_private_key"; if( isset( $_POST["recaptcha_response_field"] ) ) { $resp = recaptcha_check_answer ($privatekey, $_SERVER["REMOTE_ADDR"], $_POST["recaptcha_challenge_field"], $_POST["recaptcha_response_field"]); if(!$resp->is_valid) { echo "reCaptcha incorrect"; } else { echo "reCaptcha OK"; } } ?> <form method="POST"> <?php
echo recaptcha_get_html($publickey); ?> </form> Edit: your code with working recaptcha (also on my server) email.php <?php require_once 'PHPMailer/class.phpmailer.php'; // load recaptcha library require_once('recaptchalib.php'); // config - you could read it from some config file $publickey = "your_public_key"; $privatekey = "your_private_key"; // Form url sanitizing $php_self = filter_input(INPUT_SERVER, 'PHP_SELF', FILTER_SANITIZE_FULL_SPECIAL_CHARS); // Variable initializing $name = ''; $email = ''; $message = ''; $errors = array(); // Is form sent? if( isset( $_POST['submit'] ) ) { // begin: reCAPTCHA - VALIDATE $resp = recaptcha_check_answer ($privatekey, $_SERVER["REMOTE_ADDR"], $_POST["recaptcha_challenge_field"], $_POST["recaptcha_response_field"]); if (!$resp->is_valid) { $errors[] = 'Please enter a valid captcha'; } // end: reCAPTCHA - VALIDATE // Validate $_POST['name'] $name = filter_input( INPUT_POST, 'name', FILTER_SANITIZE_STRING ); if( '' == $name ) { $errors[] = 'Please enter a valid name'; } // Validate $_POST['email'] $email = filter_input( INPUT_POST, 'email', FILTER_SANITIZE_EMAIL ); if( !filter_var($email, FILTER_VALIDATE_EMAIL) ) { $errors[] = 'Please enter a valid email'; } // Validate $_POST['message'] $message = filter_input( INPUT_POST, 'message', FILTER_SANITIZE_STRING ); if( '' == $message ) { $errors[] = 'Please enter a valid message'; } // If no errors if( empty( $errors ) ) { // Values are valid, lets send an email //echo "I'm send mail (virtually) ;)"; // debug $mail = new PHPMailer(); // Base parameters that are working for me $mail->IsSMTP(); // Use SMTP $mail->Host = "smtp.gmail.com"; // GMail $mail->Port = 587; // If not working, you can try 465 $mail->SMTPSecure = "tls"; // If not working, you can try "ssl" $mail->SMTPAuth = true; // Turn on SMTP authentication // Adjust these lines $mail->Username = "[email protected]"; $mail->Password = "mypassword"; $mail->SetFrom($email, $name); $mail->AddAddress('[email protected]', 
'MyName'); // This is the email address (inbox) to which the message from a webform will be sent $mail->Subject = "Web Form Message"; // This will be the subject of the message(s) you receive through the webform $mail->Body = $message; // Sending if(!$mail->Send()) { // First error message is just for debugging. This don't generate messages a user should read // Comment this and uncomment the second message for a more user friendly message $errors[] = "Mailer Error: " . $mail->ErrorInfo; //$errors[] = "email couldn't be send"; // Output Sanitizing for repopulating form $name = filter_var( $name, FILTER_SANITIZE_FULL_SPECIAL_CHARS ); $email = filter_var( $email, FILTER_SANITIZE_FULL_SPECIAL_CHARS ); $message = filter_var( $message, FILTER_SANITIZE_FULL_SPECIAL_CHARS ); } else { // Generating a success message is good idea echo "<p>Thank you <strong>$name</strong>, your message has been successfully submitted.</p>"; // Clear fields $name = ''; $email = ''; $message = ''; } } } ?> <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <title>self referencing form</title> <link rel='stylesheet' href='http://code.jquery.com/ui/1.9.1/themes/base/jquery-ui.css'/> <link rel="stylesheet" href="main.css"> </head> <body> <div id="button" class="title"> <h6>Contact</h6> </div> <div id="dropbox"> <header class="title"> <h6>Whats up?</h6> </header> <?php if(!empty($errors)): ?> <ul class="error"> <li><?php echo join('</li><li>', $errors); ?></li> </ul> <?php endif; ?> <div class="contact-form"> <form method="POST"> <!-- input element for the name --> <h6><img src="img/person.png" alt=""> Name</h6> <input type="text" name="name" value="<?php echo $name; ?>" placeholder="Please enter your full name here" required> <!-- input element for the email --> <h6><img src="img/email.png" alt=""> E-mail</h6> <input type="email" name="email" value="<?php echo $email; ?>" placeholder="Please enter your e-mail address" required> <!-- input element for the message --> <h6><img 
src="img/message.png" alt=""> Message</h6> <textarea name="message" placeholder="Type your message..." required><?php echo $message; ?></textarea> <!-- begin: reCAPTCHA - RENDERING--> <?php echo recaptcha_get_html($publickey); ?></br> <!-- end: reCAPTCHA - RENDERING--> <input name="submit" type="submit" value="Submit" /> </form> </div> </div> <script src='http://code.jquery.com/jquery-1.9.1.min.js'></script> <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.9.1/jquery-ui.min.js"></script> <script src='dropbox.js'></script> <?php if( !empty($errors) ): ?> <script> $(function() { $('#dropbox').show(); }); </script> <?php endif ?> </body> </html>
Q: Exception in thread "AWT-EventQueue-0" java.lang.NoSuchMethodError: setControlKeepAliveTimeout(J)V I have created a runnable jar for my project and when I try to execute it I am getting exception, but from eclipse it is executing fine Exception in thread "AWT-EventQueue-0" java.lang.NoSuchMethodError: com.FTP.setControlKeepAliveTimeout(J)V at com.FTP.<init>(FTP.java:64) at com.build.Build.GetFtp(Build.java:71) at com.build.MainFile.step1(MainFile.java:79) at com.build.ui.ScreenL.jButton2MouseClicked(ScreenL.java:240) at com.build.ui.ScreenL.access$1(ScreenL.java:215) at cm21.build.ui.ScreenL$2.mouseClicked(ScreenL.java:88) at java.awt.AWTEventMulticaster.mouseClicked(Unknown Source) at java.awt.Component.processMouseEvent(Unknown Source) at javax.swing.JComponent.processMouseEvent(Unknown Source) at java.awt.Component.processEvent(Unknown Source) at java.awt.Container.processEvent(Unknown Source) at java.awt.Component.dispatchEventImpl(Unknown Source) at java.awt.Container.dispatchEventImpl(Unknown Source) at java.awt.Component.dispatchEvent(Unknown Source) at java.awt.LightweightDispatcher.retargetMouseEvent(Unknown Source) at java.awt.LightweightDispatcher.processMouseEvent(Unknown Source) at java.awt.LightweightDispatcher.dispatchEvent(Unknown Source) at java.awt.Container.dispatchEventImpl(Unknown Source) at java.awt.Window.dispatchEventImpl(Unknown Source) at java.awt.Component.dispatchEvent(Unknown Source) at java.awt.EventQueue.dispatchEventImpl(Unknown Source) at java.awt.EventQueue.access$200(Unknown Source) at java.awt.EventQueue$3.run(Unknown Source) at java.awt.EventQueue$3.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) at java.security.ProtectionDomain$1.doIntersectionPrivilege(Unknown Source) at java.security.ProtectionDomain$1.doIntersectionPrivilege(Unknown Source) at java.awt.EventQueue$4.run(Unknown Source) at java.awt.EventQueue$4.run(Unknown Source) at java.security.AccessController.doPrivileged(Native 
Method) at java.security.ProtectionDomain$1.doIntersectionPrivilege(Unknown Source) at java.awt.EventQueue.dispatchEvent(Unknown Source) at java.awt.EventDispatchThread.pumpOneEventForFilters(Unknown Source) at java.awt.EventDispatchThread.pumpEventsForFilter(Unknown Source) at java.awt.EventDispatchThread.pumpEventsForHierarchy(Unknown Source) at java.awt.EventDispatchThread.pumpEvents(Unknown Source) at java.awt.EventDispatchThread.pumpEvents(Unknown Source) at java.awt.EventDispatchThread.run(Unknown Source) I found that setControlKeepAliveTimeout is a method from one of the JARs included in the project, but I still haven't been able to resolve this issue. A: The version of your com.FTP library in your OS environment differs from the one in the Eclipse environment. Make sure the classpath references the same JAR in both environments.
Q: Workaround for Dell "Power supply not recognised" issue I have a Dell Inspiron and the power supply port appears to be damaged. Basically when I plug it in I get a nice popup telling me that it couldn't detect that it's a Dell power supply, so it won't charge the battery and underclocks the system. It still works for other purposes (that is, giving power). I thought it was the actual power supply cable so I bought a new one; that worked for a while, provided I inserted it at JUST THE RIGHT angle. But now that's not working anymore, so I assume it's the part which connects to the computer. The battery charging I can live without, the underclocking I can't. I'd like a way around this issue. Things I've tried: Updating the BIOS Replacing the power supply cable Inserting it at different angles Turning it off and on again Swearing at it Twisting it while inserting it So, is there a workaround somehow? I'd like to avoid taking out my soldering kit and risking permanently damaging expensive equipment if that's all right. I'm hoping for a software solution. Added: The exact model is a Dell Inspiron N5010 A: If you're ready for a hardcore solution, you can unsolder the ID chip from the AC adapter and solder it to the laptop's motherboard, or even create a fake ID chip. An ideal solution would be to patch the BIOS, but I've found only this discussion; nobody has done it (yet?) Can something be done on the software side? Yes! At least, we can overcome the underclocking. The battery still won't charge. Linux: Add processor.ignore_ppc=1 to GRUB_CMDLINE_LINUX in /etc/default/grub, then run update-grub. Reboot. Windows: This paper describes Windows XP performance control policies and mentions this: NOTE: All policies will always respect the highest available performance state currently available as reported in the _PPC method by system firmware, when using the ACPI 2.0 interface. So, no native support. But there are third-party tools.
I personally had success with RMClock, other people in this thread suggest using ThrottleStop instead. A: This solved my issue in two laptops, I searched and never came across this so I wanted to post it for someone else. These dell laptops were purchased refurbished with 3rd party chargers, batteries are dell. 8/17/2018 RE: Problem with laptops (E5430) REINSTALL BATTERY DRIVER - Shut down the laptop - Unplug the AC Adapter from laptop - Remove the battery - Reconnect the AC Adapter to the laptop - Power on - Go the Start and type in Device Manager (search program and files) then Enter - Under Batteries, uninstall Microsoft AC Adapter and Microsoft ACPI- Compliant Control Method Battery (both) - Please note that this will auto-install for you again - Shut down laptop - Remove the AC Adapter from laptop - Insert the Battery and reconnect the AC Adapter - Power on the computer - You should now see the message, Plugged in, Charging A: FWIW: I "solved" it by disabling the SpeedStep Technology part in the BIOS. Although the description there implies that this will then run the CPU at full throttle 100% of the time this not the case in reality. TaskManager shows the frequency to go up and down according to the load. This does not fix the actual "PSU not recognized"-issue, but at least the machine isn't stuck at 1.18GHz anymore because of that.
Q: Who pays the costs for presidential preference "elections" in caucus states (US)? As I understand it, typically the costs of conducting a presidential primary election (renting facilities, use of voting machines, labor cost of county election officials and county poll workers, etc) is borne by the county and state in which the primary is being held. In states that employ a caucus system to conduct presidential preference selection, who bears the costs associated with those activities? A: In states that employ a caucus system to conduct primaries (aka voter preference selection), who bears the costs associated with those activities? This is something of an internally contradictory question. A primary and a caucus are two different things. A primary is an election paid for by the state and a caucus is a party function paid for by the party. (In Colorado, for races other than the Presidential race, one of the purposes of the caucus is to determine who gets onto the primary ballot which can also be accessed via petition.) Certainly, when I was a treasurer of a county party organization in 2008 in Colorado, the party paid entirely for the caucus and my understanding is that this pattern is followed everywhere or almost everywhere else that party caucuses are conducted in the United States.
Q: Ruby syntax for passing block Why does the syntax with curly braces work as expected: class SomeClass include Parser::Http.new { |agent| # PASSED: This block was passed to Http::new } end while the one with do...end passes the block to the wrong target method? class OtherClass include Parser::Http.new do |agent| # FAILED: This block was passed to Module::include end end How can I make both syntaxes work the same? A: TL;DR do end has lower precedence than { }. Actually, those syntaxes are not intended to work in the same way. Here is a snippet from the official docs: The block argument sends a closure from the calling scope to the method. The block argument is always last when sending a message to a method. A block is sent to a method using do ... end or { ... }: my_method do # ... end or: my_method { # ... } do end has lower precedence than { } so: method_1 method_2 { # ... } Sends the block to method_2 while: method_1 method_2 do # ... end Sends the block to method_1. Note that in the first case if parentheses are used the block is sent to method_1. Hope this helps. A: There is little to add to @Marian13's answer, except that you can circumvent the precedence problem when using bracketed blocks by wrapping the first method's arguments in parentheses. class SomeClass include(Parser::Http.new) { |agent| # ... } end This is particularly useful when you want a one-liner to look good.
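The precedence rule above can be demonstrated with a small self-contained sketch. The method names `wrapper` and `inner` are made up for illustration; they stand in for `include` and `Parser::Http.new` from the question:

```ruby
# wrapper plays the role of `include`: it takes one argument and,
# optionally, a block.
def wrapper(arg, &blk)
  blk ? "wrapper got the block" : "wrapper got: #{arg}"
end

# inner plays the role of `Parser::Http.new`.
def inner(&blk)
  blk ? "inner got the block" : "inner got no block"
end

# Braces bind to the nearest call, so inner receives the block:
with_braces = wrapper inner { :agent }

# do...end binds to the outermost call, so wrapper receives the block:
with_do_end = wrapper inner do
  :agent
end

puts with_braces  # => "wrapper got: inner got the block"
puts with_do_end  # => "wrapper got the block"
```

Adding parentheses flips the brace case the other way: `wrapper(inner) { :agent }` closes `wrapper`'s argument list first, so the brace block attaches to `wrapper` instead of `inner`.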
Q: Cutting desktop power usage I'm on a general energy saving mission. I've finally swapped my old CRT monitor for an LCD, so the next step is to optimise the PC power usage. It's using an AMD 64 X2 4600+ CPU which I know can throttle down, but seems to be running at a constant 2.4GHz. A while back I heard about Granola. I've installed it, but when I try to run it (via sudo granola) I get granola[10568]: Error opening scaling governor file '/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor' in read mode granola[10568]: Is cpufreq enabled in this kernel and do you have a CPU which supports DVFS? granola[10568]: Can't manage DVFS for any CPUs I'm happy to use other applications if Granola is not optimal or viable, but am not looking to invest in new hardware just now. Running kernel 2.6.35-25-generic A: I'm not sure what the exact sequence of events was, but I just noticed that granola is running now. I know I tried installing cpufreqd and powernowd, but either caused Granola to be uninstalled. It may just be that the PC needed to restart. It would be nice if the app showed more details about how often the CPU is being throttled and to what speed. I can see the current speed with cat /proc/cpuinfo and the time at each speed with cat /sys/devices/system/cpu/cpu0/cpufreq/stats/time_in_state I'm assuming both cores run at the same speed. That shows speeds from 1-2.4GHz with most time spent at the lower speeds. I have a whole-house power meter. I'll see if that can tell me the difference the speed makes. Update: I was too quick to celebrate. Today it's not working.
I looked in /var/log/messages and found this for yesterday Feb 4 07:50:20 zaphod kernel: [ 0.560856] powernow-k8: Found 1 AMD Athlon(tm) 64 X2 Dual Core Processor 4600+ (2 cpu cores) (version 2.20.00) Feb 4 07:50:20 zaphod kernel: [ 0.560910] powernow-k8: 0 : fid 0x10 (2400 MHz), vid 0xc Feb 4 07:50:20 zaphod kernel: [ 0.560912] powernow-k8: 1 : fid 0xe (2200 MHz), vid 0xe Feb 4 07:50:20 zaphod kernel: [ 0.560914] powernow-k8: 2 : fid 0xc (2000 MHz), vid 0x10 Feb 4 07:50:20 zaphod kernel: [ 0.560917] powernow-k8: 3 : fid 0xa (1800 MHz), vid 0x10 Feb 4 07:50:20 zaphod kernel: [ 0.560919] powernow-k8: 4 : fid 0x2 (1000 MHz), vid 0x12 For today there is just the first of those lines. That suggests something went wrong, but where do I see the errors? Restarted and it was ok.
Q: MPMediaItemArtwork returning wrong sized artwork I'm seeing a consistent issue with MPMediaItemArtwork in that it's returning artwork in a size different to that which I request. The code I'm using is as follows MPMediaItem *representativeItem = [self.representativeItems objectAtIndex:index]; MPMediaItemArtwork *artwork = [representativeItem valueForProperty:MPMediaItemPropertyArtwork]; UIImage *albumCover = [artwork imageWithSize:CGSizeMake(128.0f, 128.0f)]; This works as expected, except that the size of the returned image is always {320.0f, 320.0f} even though I specifically asked for {128.0f, 128.0f}, and it's causing some memory issues due to the images being more than twice the size of those expected. Has anyone else witnessed this particular issue? How did you resolve it? Apple's docs suggest this should work as I'm expecting it to rather than how it actually does A: I downloaded the AddMusic sample source from Apple that also uses MPMediaItemArtwork just to see how they handled things. In that project's MainViewController.m file, these lines: // Get the artwork from the current media item, if it has artwork. MPMediaItemArtwork *artwork = [currentItem valueForProperty: MPMediaItemPropertyArtwork]; // Obtain a UIImage object from the MPMediaItemArtwork object if (artwork) { artworkImage = [artwork imageWithSize: CGSizeMake (30, 30)]; } always return an image of size 55 x 55 at a scale of 1.0. I would say MPMediaItemArtwork not respecting the requested size parameters is a bug that you should file via bugreporter.apple.com, although Apple might also have an excuse that "55 x 55" is some optimal size to be displayed on iPads & iPhones.
For blunt force UIImage resizing, I'd recommend using Trevor Harman's "UIImage+Resize" methods found here: http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way And once you add his category extensions to your project, you could do your desired memory-conserving resizing with a simple call like this: UIImage *albumCover = [artwork imageWithSize:CGSizeMake(128.0f, 128.0f)]; UIImage *resizedCover = [albumCover resizedImage: CGSizeMake(128.0f, 128.0f) interpolationQuality: kCGInterpolationLow];
{ "pile_set_name": "StackExchange" }
Q: When might Magento Mage::getModel('customer/form') fail?

I have the following two lines of code within a controller class:

    $customerForm = Mage::getModel('customer/form');
    $customerForm->setFormCode('customer_account_create')
        ->setEntity($customer);

I am getting "Fatal error: Call to a member function setFormCode() on a non-object in ..." on the second of those two lines. What might cause the first line to return a "non-object"? (I guess it fails and returns null, but why would this happen?) I am not sure if this is relevant, but this is happening in a site that uses the Enterprise version of Magento (Magento ver. 1.8.0.0).

A: Look into your exception.log; you should find some ideas there. It might happen if the Mage_Customer module is disabled, you have a rewrite for the 'customer/form' model, or even the file with the Mage_Customer_Model_Form class is missing.
{ "pile_set_name": "StackExchange" }
Q: Invariant failed: You should not use <Link> outside a <Router>

I am trying to create a React component and use it in another TSX file, but I get the error below:

    Invariant failed: You should not use <Link> outside a <Router>

My code is as below, and my CodeSandbox is https://codesandbox.io/s/zen-sound-ztbjl

    class Sidebar extends Component<ISidebarProps & RouteComponentProps<{}>> {
      constructor(props: ISidebarProps & RouteComponentProps<{}>) {
        super(props)
        this.state = {}
      }

      componentDidMount = (): void => {
        this.initMenu()
      }

      componentDidUpdate = (prevProps: any): void => {
        if (this.props.type !== prevProps.type) {
          this.initMenu()
        }
      }

      initMenu = (): void => {
        const mm = new MetisMenu('#side-menu')
        let matchingMenuItem = null
        const ul = document.getElementById('side-menu')
        const items = ul.getElementsByTagName('a')
        for (let i = 0; i < items.length; ++i) {
          if (this.props.location.pathname === items[i].pathname) {
            matchingMenuItem = items[i]
            break
          }
        }
        if (matchingMenuItem) {
          this.activateParentDropdown(matchingMenuItem)
        }
      }

      activateParentDropdown = (item: any) => {
        item.classList.add('active')
        const parent = item.parentElement
        if (parent) {
          parent.classList.add('mm-active')
          const parent2 = parent.parentElement
          if (parent2) {
            parent2.classList.add('mm-show')
            const parent3 = parent2.parentElement
            if (parent3) {
              parent3.classList.add('mm-active') // li
              parent3.childNodes[0].classList.add('mm-active') // a
              const parent4 = parent3.parentElement
              if (parent4) {
                parent4.classList.add('mm-active')
              }
            }
          }
          return false
        }
        return false
      }

      render() {
        return (
          <React.Fragment>
            <div className='vertical-menu'>
              <div data-simplebar className='h-100'>
                {this.props.type !== 'condensed' ? (
                  // <Scrollbars style={{ maxHeight: '100%' }}>
                  <SidebarContent />
                ) : (
                  // </Scrollbars>
                  <SidebarContent />
                )}
              </div>
            </div>
          </React.Fragment>
        )
      }
    }

Can somebody tell me what the issue in my code is?

A: You forgot to add a Router component:

    import { BrowserRouter } from "react-router-dom";

    const rootElement = document.getElementById("root");
    render(<BrowserRouter><App /></BrowserRouter>, rootElement);

Edit: You can't use the Link component without a specified Router. You can use BrowserRouter (which internally uses the history API), HashRouter (URL hash), or the generic Router (you have to provide some configuration to it).
{ "pile_set_name": "StackExchange" }
Q: AS3 cacheAsBitmap questions

I'm a bit confused. If I import a PNG picture, drag it onto my stage, and right-click > Convert to Bitmap, is that the same thing as if I have a vector created in code and then apply cacheAsBitmap = true?

A: A PNG picture is a raster object, and it has a parent class of Bitmap already. You might, however, encapsulate that Bitmap into a drawn rectangle (effectively making a rectangle with a bitmap fill, which is actually a vector object), and then convert to bitmap, applying cacheAsBitmap = true to the vector object made of the raster object. I don't understand why you would want the double transformation raster->vector->raster in the first place. Probably Flash isn't so stupid: if you just drag a library asset made out of a PNG to the stage, it'll make you a Bitmap-based object instead, and "Convert to Bitmap" won't do a thing.
{ "pile_set_name": "StackExchange" }
Q: Direct proof for $a \rightarrow b,\ c \rightarrow b,\ d \rightarrow (a \lor c),\ d \Rightarrow b$

I am just starting to learn proofs in my discrete math class. I need to find a direct proof for $a \rightarrow b,\ c \rightarrow b,\ d \rightarrow (a \lor c),\ d \Rightarrow b$. These are my steps:

1. $a \rightarrow b$ [Premise]
2. $c \rightarrow b$ [Premise]
3. $(a \rightarrow b) \land (c \rightarrow b)$ [Conjunction 1, 2]
4. $d \rightarrow (a \lor c)$ [Premise]
5. $d \rightarrow b$ [Constructive Dilemma 3, 4] $\square$

Is this a correct proof? I tried multiple ways and nothing worked, but I am not sure if I am using Constructive Dilemma the right way.

A: Hint: let us assume that $\Rightarrow$ means $\vdash$, i.e. consequence. Use Modus Ponens with $d$ and $d \to (a \lor c)$ to derive $a \lor c$. Then use Disjunction Elimination with $a \lor c$ and the first two premises to derive the conclusion $b$.
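Not part of the original exchange, but the entailment in the hint can be checked mechanically with a brute-force truth table. A minimal Python sketch (my addition, purely for verification):

```python
from itertools import product

def implies(p, q):
    # Material conditional: p -> q is false only when p is true and q is false
    return (not p) or q

# Check that a -> b, c -> b, d -> (a or c), d together entail b:
# in every truth assignment where all premises hold, b must hold too.
entailed = all(
    b
    for a, b, c, d in product([False, True], repeat=4)
    if implies(a, b) and implies(c, b) and implies(d, a or c) and d
)
print(entailed)  # True
```

Since the check succeeds over all 16 assignments, $b$ really is a semantic consequence of the premises, matching the two-step derivation in the hint.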
{ "pile_set_name": "StackExchange" }
Q: Standards for reading code out loud?

Has anyone defined a standard for reading code out loud, for any language? I imagine this is important to software like screen readers for the vision-impaired. This sort of thing also comes up when you are discussing code with someone, reviewing it in a group, or teaching a class.

In the C family of languages, there are a lot of words with "obvious" pronunciations. Some are simply English words: for, break, case, default, etc. Some abbreviations, like int, are unambiguous. And then there's char. I always tend to say it (and hear it in my head) like the first syllable of "charcoal". It was jarring to me the first time I was talking about code with someone who pronounced it like "car", which actually makes more sense because char is really an abbreviation of the word "character", so clearly it should be pronounced the same. But even knowing that, char-as-in-coal feels more right to me.

And then there are statements like foo = bar ? *(++baz) : zardoz.

Has anyone produced a document dictating the correct way (in their opinion) to read code aloud? Either for a specific language, or maybe for code in general?

A: Quick coverall: read this great article at Coding Horror.

Whenever I'm discussing code over the phone, I never read it literally. You have to "compile" it to human, and if there is still confusion on the other end of the line, you can move towards a more literal reading. For example, I'd read your example as "If bar is true, increment the baz pointer and assign the value at that address to foo. Otherwise set foo to zardoz."

I've been a full-time telecommuter since the mid-90's, so practically all of my interaction with my colleagues has been over the phone or other indirect means. Very often we're sharing either a screen (terminal) or VNC (X) session. Besides the regular camaraderie, we spend all day talking about code, design, planning, etc.
When we talk about code, we use jargon that is deeply tied to the type of project being worked on. One of the (many) reasons it takes so long for a new group member to become fully functional is that they're essentially learning a new language each time they join a new department/company.

As I said above, and as others have said, we try to talk at as high a level as is appropriate for any discussion. But sometimes, you really have to just say to someone: "Type this." How do you say it? Well, we could just give an enumeration like...

    ~   tilde
    `   backtick
    '   single quote
    "   quote (or double quote)
    /   slash, \ is backslash
    #   pound or hash
    !   bang (or exclamation mark)
    @   at
    $   dollar
    %   percent or mod
    ^   caret or xor
    &   and, or bitwise and
    &&  and, or logical and
    |   pipe or 'or' or bitwise or
    ||  'or'
    *   value of, times, glob, multiplied by
    ()  parens, open paren, close paren
    {}  braces, curlies, open stash, close stash
    []  brackets, square brackets, at & sub (for subscript) (for C-ish arrays)
    ...

These are just how "we" say these characters. To get an idea of the entire range of ways of saying "#", take a look at the wiki page for #.

So there's too much variability. It has to be specific to the language that you're coding in (just as I'm typing this in English for our human communication). Without the context of language you'd constantly have to revert to character-by-character spelling. So most folks I know of fall back to whatever the language standard calls things:

    SELECT COUNT(*) INTO x FROM ...   (SQL)
    X IS Y + 1                        (Prolog)
    (setq x 40)                       (Emacs Lisp)
    /def x 40                         (PostScript)
    x = 40                            (C)
    $x = 40                           (Perl)

Each of those would be implied by just saying "Set X to ..." within the proper context. Don't even get me started on what code is read as "is string X equal to string Y". If you say "hash bang bin bash" or "shebang bash", just about everyone will know that means "#!/bin/bash".
If they don't, they'll say "Huh?", and you step it down a notch: "At the top of the file: pound sign, exclamation mark, slash, bin, slash, bash, newline." If they still don't get it, you step it down yet again: "See that keyboard in front of you? See the '3' key? That mark on the top when you press shift is a pound sign. That."

Bottom line:

- don't worry about it too much, you'll be wrong
- everyone will get over it
- it's too specific to exactly what you do
- always carry a towel
- read the article over at Coding Horror

A: I've never run across any standards for speaking language syntax out loud. I have run across little snippets where someone has expressed their own personal preference, for instance referring to "#!/bin/sh" as "hash-bang slash bin slash S H" as opposed to "pound exclamation forward-slash B I N forward-slash S H"; the latter might assume the listener has less familiarity with the construct.

There is also a great disparity in how readable out loud different languages are. Take for instance the differences between Python, which tends to be easier to speak out loud, vs., say, Perl, which requires you to either say a lot of punctuation or translate from "$var[20]" to "the twentieth element of array var".

My own experience is that it's very contextual, based on the reason for me needing to read the code out loud, the knowledge level of the listener, and the language in question. In the case of code reviews I'm more likely to explain a statement than try to read it out loud, as it is usually more important to get the meaning or thought process across than to just read the raw code to the listener(s).

When I'm trying to get someone to type an exact line of C code into an editor (for example, I'm looking over a junior programmer's shoulder and see how to fix a line of their code), I often end up speaking code out in keywords and symbols, such as "if space open-paren null double-equals p close-paren..."

That same interchange with a more senior developer might start out more like "you need to check for p being null here..."
{ "pile_set_name": "StackExchange" }
Q: Migrating from hardware to software RAID

I have an old PCI-X controller running 8 drives in RAID 5. I'd like to dump the controller and go to software RAID under Ubuntu. Is there a way to do this and retain the data from the current array?

EDIT (and a slight tangent): The answers below are certainly fine, but here's a bit of added detail on my specific situation. The hardware RAID was being done by an old Promise RAID card (I don't remember the model number). My whole system went down (dead mobo, most likely) and the old controller was a PCI-X card (not to be confused with PCI-e). I asked the question hoping to salvage my data. What I did was buy another Promise (HighPoint) card, plug all the drives in, and install Ubuntu. I was expecting to have to rebuild the array, but surprisingly enough, the HighPoint card saw the old array and brought it up clean.

Moral of the story: it looks like at least Promise controllers store their metadata on the arrays themselves, and appear to have some amount of forward compatibility.

A: If you actually have a RAID configured through hardware (i.e., the operating system sees fewer physical disks than you actually have), there's no hardware-to-software conversion method. You have to back up the data to an alternate location, convert the RAID manually, and restore.
{ "pile_set_name": "StackExchange" }
Q: SQL - defining keys for a table

Are there any considerations when defining keys for a table that already has a lot of records and where most of the operations on it are inserts?

A: Key definition ultimately comes down to how you can uniquely and efficiently identify any specific row in a table. If a business key value fulfills that requirement, then it is a suitable candidate. An ideal key is also skinny. A GUID is horrible for this (IMHO) because it is far larger than it needs to be.

If insert performance is the most important priority and a suitable business key is not available, you can use an integer-based identity key. If you expect more than 2.1 billion records within a few years, use bigint (9 quintillion records) instead.

Keep in mind that every index you make on the table will always include the PK. Having a skinny PK can make your indexes more efficient, using less storage, memory and CPU. Insert speed is affected by the clustered index sort order as well as the number and sort order of all non-clustered indexes on the table. Column-store indexes are not sorted and have minimal overhead on inserts.
{ "pile_set_name": "StackExchange" }
Q: Error when using MS Access StrConv function

Introduction

I have a form with a TComboBox that I want to populate with a field from a table in my database using a query. I also want the values in the field displayed in proper case, which can be achieved using Access's StrConv function. Here's my code:

    with dmCallNote.qryCompany, SQL do
    begin
      Clear;
      Text := 'SELECT StrConv(A_Company, 3) FROM tblAccounts';
      Open;
      while not Eof do
      begin
        cmbCompany.Items.Add(dmCallNotes.qryCompany['A_Company']);
        Next;
      end;
    end;

The Problem

When compiling the line cmbCompany.Items.Add ... I receive the error message: "qryCompany: Field 'A_Company' not found." Why am I getting this error? When I run the query with a TDBGrid it executes successfully.

A: Change this:

    Text := 'SELECT StrConv(A_Company, 3) FROM tblAccounts';

to this:

    Text := 'SELECT StrConv(A_Company, 3) AS A_Company FROM tblAccounts';

Your field had no name/alias.
{ "pile_set_name": "StackExchange" }
Q: Method to make a revision control program

I am currently developing a website in PHP with MySQL for the database. On this website, I want to implement revision control for the content, which will be user-generated, similar to a wiki or Stack Exchange.

What I currently have in mind (not put down on paper or anything) is to just have a second table that contains the date, the content, and the user who submitted the change, but I'm not quite sure if that is the most efficient way to do this.

Also, on top of that, what is the way to view revision changes visually? E.g. someone opens an article, clicks "view changes", and can see who edited it and what was edited. I'm looking for something similar to SVN's diff program.

Hosh

A: For those who care, I think going to http://pear.horde.org/ and scrolling down to Horde_Text_Diff (or searching on the page) should be a good one.
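The question asks for PHP, and the answer points at Horde_Text_Diff; purely as an illustration of what a "view changes" feature computes (my addition, not part of the original thread), here is a sketch using Python's standard-library difflib on two stored revisions:

```python
import difflib

# Two revisions of an article, as lists of lines (e.g. loaded from a revision table)
rev1 = ["The quick brown fox", "jumps over the lazy dog"]
rev2 = ["The quick red fox", "jumps over the lazy dog", "and runs away"]

# unified_diff marks removed lines with '-', added lines with '+',
# and unchanged context lines with a leading space
diff = list(difflib.unified_diff(rev1, rev2, fromfile="rev1", tofile="rev2", lineterm=""))
for line in diff:
    print(line)
```

The "second table" schema in the question fits this well: store each revision's full text with date and user, then diff any two revisions on demand rather than storing the diffs themselves.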
{ "pile_set_name": "StackExchange" }
Q: How do you print values in lists with their index number in Python 2.7?

I am trying to make a Blackjack game in Python. I ran into a problem because I am trying to use the random module with my game. I used the random module to get a number that corresponds to an index number in my list. The list I made consists of card face values. I don't know how to print these values using the random index number, though. Here is my code:

    # this is a blackjack game made in python
    import random
    import time

    # make a list full of the card values
    cards = (["A", "K", "Q", "J", 2, 3, 4, 5, 6, 7, 8, 9, 10])
    indexNum1 = random.randint(0,12)
    indexNum2 = random.randint(0,12)
    indexNum3 = random.randint(0,12)
    indexNum4 = random.randint(0,12)
    indexNum5 = random.randint(0,12)
    for card in cards:
        print card(indexNum1)
        print card(indexNum2)
        print card(indexNum3)
        print card(indexNum4)
        print card(indexNum5)

I hope someone can help me solve this problem. Thanks!

A: You can index cards directly, e.g.:

    print(cards[indexNum1])

But if you want a loop, you should iterate over the indexes:

    for cardidx in (indexNum1, indexNum2, indexNum3, indexNum4, indexNum5):
        print(cards[cardidx])

But you are making this much harder than you need to, because currently your code could return 5 aces - which I assume you don't want:

    cards = ["A", "K", "Q", "J", 2, 3, 4, 5, 6, 7, 8, 9, 10]
    hand = random.sample(cards, k=5)
    for card in hand:
        print(card)
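As a quick sanity check of the random.sample approach in the answer (this sketch is my addition and uses Python 3 print syntax, while the question targets 2.7): sampling draws each position at most once, so no card value repeats.

```python
import random

cards = ["A", "K", "Q", "J", 2, 3, 4, 5, 6, 7, 8, 9, 10]
hand = random.sample(cards, k=5)  # samples 5 distinct positions, so no card repeats

print(hand)
print(len(set(hand)) == 5)  # True: all five cards are distinct
```

Note that a real blackjack shoe has four suits per rank, so a full game would sample from a 52-card deck rather than the 13 ranks shown here.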
{ "pile_set_name": "StackExchange" }
Q: File.Move - why do I get a FileNotFoundException? The file exists

It's extremely weird, since the program is iterating the file! outfolder and infolder are both on H:/, my external HD, using Windows 7. The idea is to move all folders that only contain files with the extension db or svn-base. When I try to move the folder I get an exception; VS2010 tells me it can't find the folder specified in dir. This code is iterating through dir, so how can it not find it? This is weird.

    string[] theExt = new string[] { "db", "svn-base" };
    foreach (var dir in Directory.GetDirectories(infolder))
    {
        bool hit = false;
        if (Directory.GetDirectories(dir).Count() > 0)
            continue;
        foreach (var f in Directory.GetFiles(dir))
        {
            var ext = Path.GetExtension(f).Substring(1);
            if (theExt.Contains(ext) == false)
            {
                hit = true;
                break;
            }
        }
        if (!hit)
        {
            var dst = outfolder + "\\" + Path.GetFileName(dir);
            File.Move(dir, outfolder); // FileNotFoundException: Could not find file dir.
        }
    }

A: I believe you are trying to move a whole directory using File.Move, which expects a filename. Try using Directory.Move instead, since that allows you to move entire folders around.
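The distinction the answer draws (a file-move API vs a directory-aware one) exists in other standard libraries too. As a cross-language aside of my own, not part of the original thread, Python's shutil.move accepts a directory as the source, which this sketch demonstrates on a temporary tree:

```python
import os
import shutil
import tempfile

# Build a small directory tree to move
src_root = tempfile.mkdtemp()
dst_root = tempfile.mkdtemp()
subdir = os.path.join(src_root, "thumbs")
os.mkdir(subdir)
with open(os.path.join(subdir, "Thumbs.db"), "w") as f:
    f.write("x")

# Unlike File.Move in .NET, shutil.move handles both files and directories;
# it returns the destination path of the moved tree.
moved = shutil.move(subdir, os.path.join(dst_root, "thumbs"))

print(os.path.isdir(moved))    # True: the tree now lives under dst_root
print(os.path.exists(subdir))  # False: the source directory is gone
```

The underlying lesson is the same in both stacks: check whether the API you are calling is documented to operate on directories before handing it one.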
{ "pile_set_name": "StackExchange" }
Q: Can a Castle DynamicProxy-generated proxy be forced to implement members as explicit interface implementations?

For example, let's say I've defined an interface as follows:

    public interface IWhatever
    {
        string Text { get; set; }
    }

And I implement it in a mixin:

    public class WhateverMixin : IWhatever
    {
        string IWhatever.Text { get; set; }
    }

When I build a proxy of some given class, the whole explicitly-implemented interface member appears as implicitly implemented, so it gets published. Do you know if there's some option I can give to Castle DynamicProxy to force implementing an interface with explicit implementations?

A: Unfortunately, DynamicProxy doesn't seem to have any options for this. There's no such setting in the ProxyGenerationOptions or MixinData classes, and if you look into the code (starting from MixinContributor, which leads to MethodGenerator), you can see that it simply copies the name and attributes (visibility, etc.) from the interface method.
{ "pile_set_name": "StackExchange" }
Q: Multiprocess communication via ProcessBuilder freezes at readLine() on a BufferedReader

I am trying to allow communication between one program (the program launcher, if you will) and the programs it launches via ProcessBuilder. I have the output working fine, but the input seems to stop when it reaches the readLine() method in helloworld (the created process). Below is helloworld.java:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.util.Scanner;

    public class helloworld {
        public static void main(String[] args) {
            System.out.println("println(\"Hello World!\")");
            System.out.println("getInput()");
            Scanner in = new Scanner(System.in);
            BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
            String input = "";
            try {
                // wait until we have data to complete a readLine()
                while (!br.ready()) {
                    Thread.sleep(200);
                }
                System.out.println("println(\"Attempting to resolve input\")");
                input = br.readLine(); // <-- this is where the program hangs
                if (input != null) {
                    System.out.println("println(\"This should appear\")");
                }
                System.out.println("println(\"input recieved " + input + "\")");
            } catch (InterruptedException | IOException e) {
                System.out.println("ConsoleInputReadTask() cancelled");
            }
            System.out.println("println(\"You said: " + input + "\")");
            //System.out.println("println(\"You said: " + in. + "!\")");
            in.close();
            System.exit(0);
        }
    }

This is where the output (println) from the other process is received:

    public void run() {
        try {
            //cfile = files[indexval].getAbsolutePath();
            String[] commands = {
                "java",  // Calling a java program
                "-cp",   // Denoting class path
                cfile.substring(0, cfile.lastIndexOf(File.separator)), // File path
                program  // Class name
            };
            ProcessBuilder probuilder = new ProcessBuilder(commands);
            // start the process
            Process process = probuilder.start();
            // Read out dir output
            //probuilder.inheritIO(); // Can inherit all IO calls
            InputStream is = process.getInputStream();
            OutputStream os = process.getOutputStream();
            InputStreamReader isr = new InputStreamReader(is);
            BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(os));
            BufferedReader br = new BufferedReader(isr);
            String line;
            /*System.out.printf("Output of running %s is:\n", Arrays.toString(commands));*/
            while ((line = br.readLine()) != null) {
                myController.runCommand(line, "Please enter something!", bw);
                //System.out.println(line);
            }
            br.close();
            os.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
        System.out.println("programclosed");
    }

And here is the function that it calls:

    public synchronized void runCommand(String line, Object... arguments) throws IOException {
        String[] tokens;
        if (line.contains("(")) {
            tokens = line.split("\\(", 2);
            switch (tokens[0]) {
            case "println":
                // Println - format println(String strng)
                tokens[1] = tokens[1].substring(1, tokens[1].length() - 2);
                System.out.println(tokens[1]);
                break;
            case "getInput":
                // Get input - format getInput(String command, String message, BufferedWriter br)
                Scanner reader = new Scanner(System.in);
                System.out.println(arguments.length);
                System.out.println(((String) arguments[0]));
                BufferedWriter in = ((BufferedWriter) arguments[1]);
                in.write(reader.nextLine());
                System.out.println("sending input");
                in.flush();
                reader.close();
                break;
            default:
                System.out.println("Invalid command recieved!");
            }
        } else
            System.out.println("Invalid command recieved!");
    }

The output I receive is:

    Hello World!
    2
    Please enter something!
    This is a test input
    sending input
    Attempting to resolve input

As you can see, I successfully exit the while (!br.ready()) loop, and I stop at br.readLine(). I am aware inheritIO() exists, but in this case I am using the buffered output to send commands which are then parsed and sent to the switch statement, which in turn calls the corresponding function. This is because multiple processes could be launched from the process manager; think of the fun when multiple System.in calls arrive, with nothing to determine which process each is for! In addition, this allows me to call any type of function, even those not related to println or input.

A: I believe the issue here is a result of the following:

- BufferedReader.ready() returns true if there are any characters available to be read. It does not guarantee that there are any carriage returns among them. (docs)
- BufferedReader.readLine() looks for a carriage return to complete a line. If one is not found, it blocks.
- BufferedWriter.write() does not automatically write a terminating carriage return.

To test whether this is actually the problem, replace this line in runCommand():

    in.write(reader.nextLine());

with:

    in.write(reader.nextLine() + "\n");
{ "pile_set_name": "StackExchange" }
Q: Usage of "short"

Let's say that I had a hundred dollars and I spent ten dollars. In this case, can I use "short" in the sentence below?

    My money get ten dollars short.

A: No. The use of "short" in the context of money means "less than required":

    I'm short on cash, so I can't buy a drink.
    The dress costs $60, but you've only given me $50, so you're $10 short.

In the example you give, there is no shortage of money. Simple ways to express your example:

    I've spent ten dollars.
    I'm down ten dollars. (This sounds like what you might say if you had lost $10 while gambling.)
    I've got $90 left.

However, if you wanted to buy something that was worth $100, then you could say:

    I can't afford it! I'm short by $10. (or) I'm $10 short.
{ "pile_set_name": "StackExchange" }
Q: ASP.NET - Set DropDownList's value and text attributes using JS

I have a DropDownList control in one of my ASCX pages:

    <asp:DropDownList ID="demoddl" runat="server"
        onchange="apply(this.options[this.selectedIndex].value,event)"
        onclick="borderColorChange(this.id, 'Click')"
        onblur="borderColorChange(this.id)"
        CssClass="dropDownBox" DataTextField="EmpName" DataValueField="EmpID">

My objective is to fill this DropDownList with 'EmpID' as the value attribute and 'EmpName' as the text attribute. The JS code to fetch these 'EmpName' and 'EmpID' values is as follows:

    $(document).ready(function () {
        loadSavedFreeTextSearchCombo();
    });

    function loadSavedFreeTextSearchCombo() {
        var params = { loginID: $('#loginID').val() };
        var paramsJSON = $.toJSON(params);
        $.ajax({
            type: "POST",
            url: _WebRoot() + "/Ajax/EmpDetails.asmx/GetEmp",
            data: paramsJSON,
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            success: function (data) {
                $('#demoddl').empty();
                $('#demoddl').append($("<option></option>").val(0).html("--Employee Names --"));
                $.each(data.d, function (index, value) {
                    $('#demoddl').append($("<option></option>").val(value.EmpID).html(value.EmpName));
                });
            },
            error: function () {
                showError("Failed to load Saved Search Data!");
            }
        });
    }

Although the entire code runs without any error (the EmpDetails.asmx method returns valid data successfully), the DropDownList doesn't get filled with the returned data. What am I doing wrong? I guess something's wrong in my 'success' event code.

A: Since you intend to use the DropDownList server control's ID as a selector, it is necessary to set ClientIDMode="Static", especially if you're using <asp:ContentPlaceHolder> or <asp:Content>, to prevent the ASPX engine from creating a <select> element whose id attribute contains the dropdown's placeholder name:

    <asp:DropDownList ID="demoddl" runat="server" ClientIDMode="Static"
        onchange="apply(this.options[this.selectedIndex].value,event)"
        onclick="borderColorChange(this.id, 'Click')"
        onblur="borderColorChange(this.id)"
        CssClass="dropDownBox" DataTextField="EmpName" DataValueField="EmpID">

If you cannot use the ClientIDMode="Static" attribute for certain reasons (e.g. avoiding multiple <select> elements with the same ID), use the ClientID property of the control as the selector, i.e. <%= demoddl.ClientID %>:

    $.ajax({
        type: "POST",
        url: _WebRoot() + "/Ajax/EmpDetails.asmx/GetEmp",
        data: paramsJSON,
        contentType: "application/json; charset=utf-8",
        dataType: "json",
        success: function (data) {
            $('#<%= demoddl.ClientID %>').empty();
            $('#<%= demoddl.ClientID %>').append($("<option></option>").val(0).html("--Employee Names --"));
            // recommended to check against undefined here
            $.each(data.d, function (index, value) {
                $('#<%= demoddl.ClientID %>').append($("<option></option>").val(value.EmpID).html(value.EmpName));
            });
        },
        error: function () {
            showError("Failed to load Saved Search Data!");
        }
    });
{ "pile_set_name": "StackExchange" }
Q: Is the sentence "grass is green" a universal truth?

I want to know the correct indirect statement for this sentence: He said, "Grass is green."

A: The category of "universal truth" is a little vague and imprecise; it is better to think of it as "generally accepted as true." However, and unfortunately, I can't think of a better term. (Compare: He said "All swans are white" -> He said that all swans are white, despite the fact that there are black swans in Australia.) "Grass is green" is such a statement, despite there being other shades of grass, including black. Thus it is not necessary to backshift the tense in reported speech: "He said that grass is green."
{ "pile_set_name": "StackExchange" }
Q: Coaxing requirements out of business people?

What methods seem to work best to coax requirements out of non-tech business people?

I am working with a team that's trying to get a spec together for a project. Every time we meet and it comes down to expectations for the next meeting, we ask the business people to bring back their requirements. They usually respond with something like this: "Well, do you think you guys could whip up a prototype so we can see what we like next week... you know, not with any data or anything since it's a prototype, just the functionality."

This is a 6-month-plus project, so that is obviously infeasible (we would have to develop the entire thing!), and we don't even know what to prototype without some sort of spec. Frankly, I think, like most people, they have some idea of what they want; they just are not thinking about it in the focused sort of way necessary to gather true requirements.

As an alternative to simply telling them "give us what you want or we can't/won't do any work" (we do want them to be happy with the results), are there ways to help them decide what they want? For example, we could tell them:

"Draw out some screens (in PowerPoint, on a napkin, whatever) that show the UI you would like, with all of the data you want to see and a description of the functionality in the margins. From this, we will polish it up and build the backend based on this set of behavior requirements."

OR

"Don't worry about how it will look right now (the number 1 hang-up). Just give us a list of all the data you want about each thing the program keeps track of. So for 'Customer' you might list: name, address, phone number, orders, etc. It does not have to be a perfect database structure, but we can work something out from this and get an idea of what you are looking for."

Do either of these alternative approaches to getting business people focused on what they want make sense? Are there alternatives that you have seen in action?
A: I have spent the last 3 months in an exhaustive - and exhausting - requirements-gathering phase of a major project and have learned, above all else, that there is no one-size-fits-all solution. There is no process, no secret, that will work in every case. Requirements analysis is a genuine skill, and just when you think you've finally figured it all out, you get exposed to a totally different group of people and have to throw everything you know out the window.

Several lessons that I've learned:

Different stakeholders think at different levels of abstraction

It is easy to say "talk at a business level, not technical", but it's not necessarily that easy to do. The system you're designing is an elephant and your stakeholders are the blind men examining it. Some people are so deeply immersed in process and routine that they don't even realize that there is a business. Others may work at the level of abstraction you want but be prone to making exaggerated or even false claims, or engage in wishful thinking. Unfortunately, you simply have to get to know all of the individuals as individuals and understand how they think, learn how to interpret the things they say, and even decide what to ignore.

Divide and Conquer

If you don't want something done, send it to a committee. Don't meet with committees. Keep those meetings as small as possible. YMMV, but in my experience, the ideal size is 3-4 people (including yourself) for open sessions and 2-3 people for closed sessions (i.e. when you need a specific question answered). I try to meet with people who have similar functions in the business. There's really very little to gain and very much to lose from tossing the marketing folks in the room with the bean counters. Seek out the people who are experts on one subject and get them to talk about that subject.

A meeting without preparation is a meeting without purpose
A couple of other answers/comments have made reference to the straw-man technique, which is an excellent one for those troublesome folks that you just can't seem to get any answers out of. But don't rely on straw-men too much, or else people will start to feel like you're railroading them. You have to gently nudge people in the right direction and let them come up with the specifics themselves, so that they feel like they own them (and in a sense, they do own them). What you do need to have is some kind of mental model of how you think the business works, and how the system should work. You need to become a domain expert, even if you aren't an expert on the specific company in question. Do as much research as you can on your business, their competitors, existing systems on the market, and anything else that might even be remotely related. Once at that point, I've found it most effective to work with high-level constructs, such as Use Cases, which tend to be agreeable to everybody, but it's still critical to ask specific questions. If you start off with "How do you bill your customers?", you're in for a very long meeting. Ask questions that imply a process instead of belting out the process at the get-go: What are the line items? How are they calculated? How often do they change? How many different kinds of sales or contracts are there? Where do they get printed? You get the idea. If you miss a step, somebody will usually tell you. If nobody complains, then give yourself a pat on the back, because you've just implicitly confirmed the process. Defer off-topic discussions. As a requirements analyst you're also playing the role of facilitator, and unless you really enjoy spending all your time in meetings, you need to find a way to keep things on track. Ironically, this issue becomes most pernicious when you finally do get people talking. If you're not careful, it can derail the train that you spent so much time laying the tracks for. 
However - and I learned this the hard way a long time ago - you can't just tell people that an issue is irrelevant. It's obviously relevant to them, otherwise they wouldn't be talking about it. Your job is to get people saying "yes" as much as possible and putting up a barrier like that just knocks you into "no" territory. This is a delicate balance that many people are able to maintain with "action items" - basically a generic queue of discussions that you've promised to come back to sometime, normally tagged with the names of those stakeholders who thought it was really important. This isn't just for diplomacy's sake - it's also a valuable tool for helping you remember what went on during the meetings, and who to talk to if you need clarification later on. Different analysts handle this in different ways; some like the very public whiteboard or flip-chart log, others silently tap it into their laptops and gently segue into other topics. Whatever you feel comfortable with. You need an agenda This is probably true for almost any kind of meeting but it's doubly true for requirements meetings. As the discussions drag on, people's minds start to wander off and they start wondering when you're going to get to the things they really care about. Having an agenda provides some structure and also helps you to determine, as mentioned above, when you need to defer a discussion that's getting off-topic. Don't walk in there without a clear idea of exactly what it is that you want to cover and when. Without that, you have no way to evaluate your own progress, and the users will hate you for always running long (assuming they don't already hate you for other reasons). Mock It If you use PowerPoint or Visio as a mock-up tool, you're going to suffer from the issue of it looking too polished. 
It's almost an uncanny valley of user interfaces; people will feel comfortable with napkin drawings (or computer-generated drawings that look like napkin drawings, using a tool like Balsamiq or Sketchflow), because they know it's not the real thing - same reason people are able to watch cartoon characters. But the more it starts to look like a real UI, the more people will want to pick and paw at it, and the more time they'll spend arguing about details that are ultimately insignificant. So definitely do mock ups to test your understanding of the requirements (after the initial analysis stages) - they're a great way to get very quick and detailed feedback - but keep them lo-fi and don't rush into mocking until you're pretty sure that you're seeing eye-to-eye with your users. Keep in mind that a mock up is not a deliverable, it is a tool to aid in understanding. Just as you would not expect to be held captive to your mock when doing the UI design, you can't assume that the design is OK simply because they gave your mock-up the thumbs-up. I've seen mocks used as a crutch, or worse, an excuse to bypass the requirements entirely; make sure you're not doing that. Go back and turn that mock into a real set of requirements. Be patient. This is hard for a lot of programmers to believe, but for most non-trivial projects, you can't just sit down one time and hammer out a complete functional spec. I'm not just talking about patience during a single meeting; requirements analysis is iterative in the same way that code is. Group A says something and then Group B says something that totally contradicts what you heard from Group A. Then Group A explains the inconsistency and it turns out to be something that Group C forgot to mention. Repeat 500 times and you have something roughly resembling truth. Unless you're developing some tiny CRUD app (in which case why bother with requirements at all?) then don't expect to get everything you need in one meeting, or two, or five. 
You're going to be listening a lot, and talking a lot, and repeating yourself a lot. Which isn't a terrible thing, mind you; it's a chance to build some rapport with the people who are inevitably going to be signing off on your deliverable. Don't be afraid to change your technique or improvise. Different aspects of a project may actually call for different analysis techniques. In some cases classical UML (Use Case / Activity diagram) works great. In other cases, you might start out with business KSIs, or brainstorm with a mind map, or dive straight into mockups despite my earlier warning. The bottom line is that you need to understand the domain yourself, and do your homework before you waste anyone else's time. If you know that a particular department or component only has one use case, but it's an insanely complicated one, then skip the use case analysis and start talking about workflows or data flows. If you wouldn't use the same tool for every part of an app implementation, then why would you use the same tool for every part of the requirements? Keep your ear to the ground. Of all the hints and tips I've read for requirements analysis, this is probably the one that's most frequently overlooked. I honestly think I've learned more eavesdropping on and occasionally crashing water-cooler conversations than I have from scheduled meetings. If you're accustomed to working in isolation, try to get a spot around where the action is so that you can hear the chatter. If you can't, then just make frequent rounds, to the kitchen or the bathroom or wherever. You'll find out all kinds of interesting things about how the business really operates from listening to what people brag or complain about during their coffee and smoke breaks. Finally, read between the lines. One of my biggest mistakes in the past was being so focused on the end result that I didn't take the time to actually hear what people were saying. 
Sometimes - a lot of the time - it might sound like people are blathering on about nothing or harping about some procedure that sounds utterly pointless to you, but if you really concentrate on what they're saying, you'll realize that there really is a requirement buried in there - or several. As corny and insipid as it sounds, the Five Whys is a really useful technique here. Whenever you have that knee-jerk "that's stupid" reaction (not that you would ever say it out loud), stop yourself, and turn it into a question: Why? Why does this information get retyped four times, then printed, photocopied, scanned, printed again, pinned to a particle board, shot with a digital camera and finally e-mailed to the sales manager? There is a reason, and they may not know what it is, but it's your job to find out. Good luck with that. ;) A: If you can't get something out of them, write something up and get it approved. It is a lot easier for non-technical people to say 'no, I don't like that' than 'this is how you should do it.' Often what they want and what they tell you are two very different things. Take some time to write up a first draft of the spec with the info you currently know. Ask the stakeholders to read it and approve it. When they read it, they will more than likely see things they don't like or agree with. Get their feedback and then revise. If there is something that you could go one way or another on, outline both options and get the decision maker to make a choice. Don't leave them alone until they do. As for prototypes, make screen mock-ups, and explain how things would work instead. Again, seeing something helps them visualize what is going on. Take new screen mock-ups with you to meetings and get answers. In the past, I've actually opened FireBug and added in the changes that the customer had requested right in front of them so they could see what it would look like. They gave their feedback, I took a screen shot then implemented the changes.
They really liked being able to see what the change would look like, and I liked it b/c it was fast and I got my answer in that meeting...not the next one. A: Get them to talk more about their business and less about applications. Find out what the real problems are: month end reporting takes too long, data entry errors, they've outgrown their current application, company growth is getting out of hand. I'm guessing these meetings are with the people doing the purchasing but not the people who will actually do the work involving the application. Ask if you can meet with a select few of these people. They can show you how things are really done. Make sure you are dealing with clients who have budgeted their time as well as the cost. See if they have any reports they are currently using or want to use. Obviously you can't create the report if you don't collect the data properly. They have to be doing something unless this is some line of business they haven't started yet. Many have these general notions that you're the programmer, so you know how to build all programs. eCommerce sites are all the same, right? Start small. Unfortunately, until you get something in front of them, the process just doesn't register. If you have nothing to go by, then just fake it.
Q: Flutter: how to disable device orientation changes? Is there any way to lock the orientation to only portrait or landscape in Flutter? Thanks.
A: The best way to do this would be to use SystemChrome in your main method. For example:
To lock orientation to landscape, use:
await SystemChrome.setPreferredOrientations(
  [DeviceOrientation.landscapeRight, DeviceOrientation.landscapeLeft]
);
To lock orientation to portrait, use:
await SystemChrome.setPreferredOrientations(
  [DeviceOrientation.portraitUp, DeviceOrientation.portraitDown]
);
(Note: the DeviceOrientation enum has no portraitRight or portraitLeft values; the portrait values are portraitUp and portraitDown.)
Q: Is it possible to reboot a Linux OS without rebooting the hardware? Is there a way to reboot a Linux system (Debian in particular) without rebooting the hardware? I have a RAID controller that takes a bit to get itself running before the OS starts up, and I would like it if there was a way to quickly reboot the Linux OS without having to go through the whole reboot process of restarting the RAID controller, etc.
A: I use kexec-reboot on nearly all of my production systems. It works incredibly well, allowing me to bypass the long POST time on HP ProLiant servers and reduce the boot cycle from 5 minutes to ~45 seconds. See: https://github.com/error10/kexec-reboot The only caveat is that it doesn't seem to work on RHEL/CentOS 6.x systems booting UEFI. But most sane OS/hardware combinations work.
A: Yes, it is possible. kexec will allow a Linux kernel to be booted directly from Linux without going through the BIOS boot process.
A: Yes. You must use kexec-tools. To make life easier, I use kexec-reboot.
$ sudo apt-get install kexec-tools
$ wget https://raw.githubusercontent.com/vadmium/kexec-reboot/master/kexec-reboot
$ chmod +x kexec-reboot
$ sudo mv kexec-reboot /usr/local/sbin/kexec-reboot
$ sudo /usr/local/sbin/kexec-reboot
Q: How do I repeat any command on regular interval in vim? Actually I want to autosave my current file, :w command writes the file so I thought that when I will repeat this command at regular interval (say each 30secs.) I will achieve what I want. But how can I do so? A: Vimscript itself is single-threaded; you cannot regularly execute a command per se. The closest to a periodic trigger are autocmds, especially the CursorHold event, which fires after no key has been pressed for 'updatetime', typically 4 seconds. Another interesting event for auto-saving is FocusLost, but that one may not be triggered in the terminal, just in GVIM. With it, something like this can be defined: autocmd CursorHold,CursorHoldI <buffer> silent! write A: Take a look into :help autosave
Q: Given a string, how to iterate through many regexes to find a match? Given a string, and a mapping of regexes to integers, I want to find out which integer the string maps to (assume the string will match exactly one of those regexes). Is it inefficient to just iterate through the hash of regexes, trying each regex against the string, then outputting the value? Certainly I cannot explicitly enumerate all possible string => integer mappings, but it seems bad to try matching each regex in a bunch of regexes.
A: Just do as you suggest, loop over the hash of regex/numbers and return the first that matches a string:
def find_match(regex_mapping, str)
  regex_mapping.each do |regex, n|
    return n if str =~ regex
  end
  return nil
end
The only thing to say about efficiency is this: it probably doesn't matter anyway. Just write your code as clearly and simply as you can, and then, in the end, if it is too slow, run it through a profiler (for example the absolutely awesome perftools.rb) and see what the hotspots are. Optimize those. Don't optimize before you've written any code. That said, a simple optimization you can do in this case, which doesn't cost you anything, is to put the regexes into the mapping hash in an order such that the most likely to match comes first, that way fewer comparisons have to be made (but this is a probabilistic optimization, the worst case running time remains the same). This only applies to Ruby 1.9 though, since hashes don't retain their insertion order in 1.8.
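For anyone following along in Python, the same first-match loop can be sketched there as well; the mapping below is a made-up example, not from the question:

```python
import re

def find_match(regex_mapping, s):
    # Return the integer for the first regex that matches, else None.
    for regex, n in regex_mapping.items():
        if re.search(regex, s):
            return n
    return None

# Hypothetical mapping for illustration; insertion order decides priority.
mapping = {r"^\d+$": 1, r"^[a-z]+$": 2}
print(find_match(mapping, "hello"))  # 2
print(find_match(mapping, "123"))    # 1
print(find_match(mapping, "!!!"))    # None
```

The same ordering optimization applies: Python dicts preserve insertion order (guaranteed since 3.7), so putting the most likely regex first reduces the average number of match attempts.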
Q: Lighttpd: only SSLv3 enabled, but TLSv1.2 is used I have a Lighttpd-1.4.33 installation and have to prove the use of SSL and TLS. Therefore, I used the ssl.use-sslv3 and the ssl.cipher-suite parameter. Please see my lighttpd.conf:
$SERVER["socket"] == ":443" {
    server.document-root = "/var/www/"
    server.protocol-http11 = "enable"
    ssl.engine = "enable"
    ssl.pemfile = "/etc/lighttpd/servercert.pem"
    ssl.ca-file = "/etc/lighttpd/cacert.pem"
    ssl.use-sslv3 = "enable"
}
When I make a request it gives me a TLSv1.2 connection. Now, ssl.use-sslv3 replaced with the following:
ssl.cipher-list = "NULL-MD5:NULL-SHA:EXP-RC4-MD5:RC4-MD5:RC4-SHA:DES-CBC3-SHA:AES128-SHA:AES256-SHA:EXP1024-RC4-SHA"
ssl.honor-cipher-order = "enable"
also gives me a TLSv1.2 connection. So, my question is: Is it correct that TLSv1.2 "includes" all ciphers which are used in SSLv3 and is therefore used instead of SSLv3? Is there any chance to force Lighttpd to refuse TLSv1.2 connections but not SSLv3 if it's configured that way? It is necessary for me to be able to force Lighttpd to use only SSLv3 or TLSv1.0. Thanks for any help. The cipher-list is from http://www.openssl.org/docs/apps/ciphers.html#tls_v1_0_cipher_suites_
A: Ciphersuites normally don't define the protocol version, i.e. you can't enable or disable a protocol version by enabling or disabling ciphersuites. If the server config allows, you need to explicitly specify the enabled versions of the SSL/TLS protocol. Update: Looking at http://redmine.lighttpd.net/projects/1/wiki/docs_ssl it seems that the use-sslv3 parameter also enables all TLS versions. The forum on Lighttpd.net suggests that while there's no setting [yet] to control TLS versions to use, you can enable only TLS-1.2-specific cipher suites thus kind of forcing TLS 1.2 (the same for TLS 1.1/1.2 combo, I think). I didn't look into the source code so I can't say how this works (if it works at all).
Also here they suggest to set ssl.use-sslv3 parameter to "disable" in order to disable SSL 3 and leave TLS enabled.
Q: How to create consistent lambda functions using a loop? I want to create several similar lambda functions. When I use a simple loop I do not get what I expect. Code:
funcList = []
for i in range(2):
    f = lambda x: i*x
    print("Before: ", f(10))
    funcList.append(f)
print("After: ", funcList[0](10), funcList[1](10))
Output:
Before: 0
Before: 10
After: 10 10
Code:
funcList = [lambda x: i*x for i in range(2)]
print("After: ", funcList[0](10), funcList[1](10))
Output:
After: 10 10
How can I create several lambda functions, each using the "original" value of i, instead of the last known value of i?
A: The issue is as follows: closures (see lambdas vs closures) in Python close over their environment, which maps variable names to objects, and they do this by essentially adding the closed-over environment to their scope lookup chain. Consider a simpler, more explicit example:
import sys

def foo():
    result = []
    for i in range(2):
        def bar():
            return i
        result.append(bar)
    return result
The scope lookup chain for foo, with the environment right before it returns, is something like:
- "foo" local variables: {i: 1, result: [bar-function-1, bar-function-2], bar: bar-function-2}
- global variables: {foo: foo-function, sys: sys-module, etc...}
That is, if foo tries to use a variable z, first it looks in the first environment (foo's local variables). If it is there, the lookup succeeds. If not, it moves on to the next (the global variables). If it is there, the lookup succeeds. If not, then there are no more environments on the chain so you get a NameError. In this case, result and i would be found as local variables, foo and sys and perhaps others would be found as global variables, and all else would give a NameError.
The scope lookup chain in each bar is something like:
- "bar" local variables: {}
- "foo" local variables: {i: 1, result: [bar-function-1, bar-function-2], bar: bar-function-2}
- global variables: {foo: foo-function, sys: sys-module, etc...}
Most importantly, foo's local variable environment is not copied into bar's local variable environment. So, bar can look up i, but it does this by first failing to find it in bar's local variables, then going one up the scope chain and finding it in foo's local variables. So, when the first bar function is defined, its scope lookup chain looks like this:
- "bar" local variables: {}
- "foo" local variables: {i: 0, result: [], bar: bar-function-1}
- global variables: {foo: foo-function, sys: sys-module, etc...}
However, when foo changes its local variable i, bar's scope lookup chain now looks like this:
- "bar" local variables: {}
- "foo" local variables: {i: 1, result: [bar-function-1], bar: bar-function-2}
- global variables: {foo: foo-function, sys: sys-module, etc...}
So now when bar looks up i it yet again fails to find it in its local variables, looks one up the scope lookup chain, and finds foo's local variable i... which is now 1 since it's the same i as before. The trick I wrote in the comment is a bit of a hack. To be a bit more explicit about it, consider:
def foo():
    result = []
    for i in range(2):
        def bar(j=i):
            return j
        result.append(bar)
    return result
What's really happening is that you're declaring bar with a parameter, j, whose default is set to the value of i (i.e. the object which i is referring to at definition time... not the object which i refers to at any time j is used inside of bar).
So the scope lookup chain at the first bar function looks like this:
- "bar" local variables: {j: 0}
- "foo" local variables: {i: 0, result: [], bar: bar-function-1}
- global variables: {foo: foo-function, sys: sys-module, etc...}
And by the time the loop goes around again, it looks like this:
- "bar" local variables: {j: 0}
- "foo" local variables: {i: 1, result: [bar-function-1], bar: bar-function-2}
- global variables: {foo: foo-function, sys: sys-module, etc...}
In both cases, the lookup of j succeeds right away in bar's local variables, and j never changes. Doing the following is a bit of a hack in that it just hides the outer i and makes it look like it's referring to the same i:
def foo():
    result = []
    for i in range(2):
        def bar(i=i):
            return i
        result.append(bar)
    return result
But in reality, they are two different is:
- "bar" local variables: {i: 0}
- "foo" local variables: {i: 0, result: [], bar: bar-function-1}
- global variables: {foo: foo-function, sys: sys-module, etc...}
And, on the second iteration of the loop:
- "bar" local variables: {i: 0}
- "foo" local variables: {i: 1, result: [bar-function-1], bar: bar-function-2}
- global variables: {foo: foo-function, sys: sys-module, etc...}
Probably a more "proper" way would be to do something like:
def make_bar(j):
    def bar():
        return j
    return bar

def foo():
    result = []
    for i in range(2):
        bar = make_bar(i)
        result.append(bar)
    return result
In this case the scope chain is:
- "bar" local variables: {}
- "make_bar" local variables: {j: 0}
- "foo" local variables: {i: 0, result: [], bar: bar-function-1}
- global variables: {make_bar: make_bar-function, foo: foo-function, sys: sys-module, etc...}
And, on the second iteration of the loop (looking at the first bar's chain, whose make_bar frame is untouched):
- "bar" local variables: {}
- "make_bar" local variables: {j: 0}
- "foo" local variables: {i: 1, result: [bar-function-1], bar: bar-function-2}
- global variables: {make_bar: make_bar-function, foo: foo-function, sys: sys-module, etc...}
In this case it works because make_bar is called with i,
make_bar is called setting its local variable j to the object that i refers to at call-time (namely, 0). make_bar's local variable j doesn't change when foo's i changes. To do a completely explicit example in your case you'd do something like:
funcList = []
for i in range(2):
    def make_f(j):
        def f(x):
            return j * x
        return f
    funcList.append(make_f(i))
(Note the return f inside make_f; without it, make_f(i) returns None.) Or, as @ShadowRanger commented:
funcList = []
for i in range(2):
    funcList.append((lambda j: lambda x: j*x)(i))
Or just:
funcList = [(lambda j: lambda x: j*x)(i) for i in range(2)]
Or you can use the hack I proposed, now knowing the full story of why it works:
funcList = [lambda x, i=i: i*x for i in range(2)]
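A quick runnable check of the three variants discussed above, using the question's i*x lambdas:

```python
# Late binding: every lambda looks up i at call time and sees its final value.
late = [lambda x: i * x for i in range(2)]
# Default-argument hack: i=i captures the current value at definition time.
defaulted = [lambda x, i=i: i * x for i in range(2)]
# Factory: calling the outer lambda binds j to the current i immediately.
factory = [(lambda j: lambda x: j * x)(i) for i in range(2)]

print([f(10) for f in late])       # [10, 10] -- both closures see i == 1
print([f(10) for f in defaulted])  # [0, 10]
print([f(10) for f in factory])    # [0, 10]
```

Only the first list exhibits the surprising behavior from the question; both fixes produce the per-iteration values.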
Q: Reliability of a compression barb fitting for a water hose I just installed water lines to a Brondell bidet attachment. Cold water is supplied by the regular braided hose with compression 3/8 fittings. For hot water, however, they use a single barb fitting with a compression nut over a 1/4 thread over the barb, with no ferrule. The hose is, according to the company, thermoplastic polyurethane and is rated for boiling temperatures (100 degrees Celsius) and water pressures up to 150 PSI. I cannot tell the hardness of the material, but it is softer than the nylon tubing you'd normally find in ice maker lines that also use a compression mount (w/o barb with a ferrule tho), but harder than the flexible tubing I use in my 50 PSI air applications at work. The hose is noticeably hard to cut with my lab hose cutter, which I use to cut 95A 5/32 polyurethane tubing for 150PSI pneumatics, so, given the larger diameter of the Brondell line, I think it's no less than 85A. The Brondell hose is a bit more flexible even with its larger, 1/4 OD. The proprietary tee can be seen in this video, if you do not mind watching a few seconds of it, where this exact connection is assembled: https://youtu.be/DyVE2R4DGAw?t=2m8s My question is: how reliable is this connection for exposed plumbing? Every time I open/close the faucet or the bidet, the line shudders a little, so there is a bit of movement everywhere, including connection points. I am worried that one day the connector will decide to just spit out the tube and cause a flood (my lab air has no such concern). I just had a flood from an unrelated leak in in-wall plumbing, so I am nervous about leaks; sorry if I am overzealous. I tested my assembly by pulling on the mounted line not quite gently while pressurized, but it still stayed. Another concern: I used silicone oil on all gaskets (Buna-N, I believe; some non-silicone rubber 100% certainly) while mounting the brass hardware, which is a common assembly practice in the lab.
Thus the tee had some oil on surface when I installed the plastic line. Should I better disassemble and degrease this hose mounting part of it? A: Answering own question: I disassembled the connection with the purpose of removing possible traces of leftover silicone oil from it. When I removed the compression nut and pulled the tube off the barb, I noticed that the end of the tube has become noticeably flared, because the compression nut, well, compressed it and pulled somewhat forward on the barb. Since the tubing is on the hard side, I am sure this natural flaring alone is more than adequate to prevent the tube from coming off the barb through the retaining nuts, despite some hydraulic hammering from faucet operation and jiggling of the tubing caused by it. I wanted to post a picture, but the transparent tubing material did not allow for a good snapshot.
Q: How to find acceleration when it's equal to zero? The velocity v with arrow of a particle moving in the xy plane is given by v with arrow = (6.0t − 4.0t^2)i + 7.1j, with v with arrow in meters per second and t (> 0) in seconds. (a) What is the acceleration when t = 1.1 s? (Express your answer in vector form.) I have -2.8i (b) When (if ever) is the acceleration zero? (Enter 'never' if appropriate.) (c) When (if ever) is the velocity zero? (Enter 'never' if appropriate.) (d) When (if ever) does the speed equal 10 m/s? (Enter 'never' if appropriate.) I don't understand the last three problems. Am I supposed to set the differentiated equation to zero/ten and then solve? But there's nothing left to solve... A: B) When is the particle not accelerating in any direction? In other words, when do we have $$v'(t)=0 \vec i +0 \vec j$$ Solution: $$(6-8t)\vec i +0 \vec j=0$$ $$6-8t=0$$ $$t=0.75 \text{ seconds}$$ C) When is the particle not in motion in any direction? In other words, when is $$v(t)=0 \vec i +0 \vec j$$ Solution: The velocity is never $0$ because the particle is always moving at a velocity of $7.1$ m/s in the $y$ direction. So there is no way for the velocity to be $0 \vec i+0 \vec j$. D) When is the magnitude of the velocity vector equal to 10 m/s? In other words, when is $$\sqrt{(6.0t-4.0t^2)^2+7.1^2}=10$$ Solution: $$(6t-4t^2)^2+7.1^2=10^2$$ $$(6t-4t^2)^2=49.59$$ $$6t-4t^2= \pm \sqrt{49.59}$$ $$4t^2-6t \pm \sqrt{49.59}=0$$ Let's deal with $$4t^2-6t+\sqrt{49.59}=0$$ first. By the quadratic formula, $$t=\frac{6 \pm \sqrt{36-4(4)(\sqrt{49.59})}}{8}$$ Obviously this is no good (the discriminant is negative), so we only need to consider $$4t^2-6t-\sqrt{49.59}=0$$ $$t=\frac{6 +\sqrt{36+4(4)(\sqrt{49.59})}}{8}$$ because we want $t \geq 0$. This means $t \approx 2.27414$.
Q: Spark 2.0 ALS Recommendation how to recommend to a user I have followed the guide given in the link http://ampcamp.berkeley.edu/big-data-mini-course/movie-recommendation-with-mllib.html But this is outdated as it uses the Spark MLlib RDD approach. The new Spark 2.0 has a DataFrame approach. Now my problem is I have got the updated code:
val ratings = spark.read.textFile("data/mllib/als/sample_movielens_ratings.txt")
  .map(parseRating)
  .toDF()
val Array(training, test) = ratings.randomSplit(Array(0.8, 0.2))

// Build the recommendation model using ALS on the training data
val als = new ALS()
  .setMaxIter(5)
  .setRegParam(0.01)
  .setUserCol("userId")
  .setItemCol("movieId")
  .setRatingCol("rating")
val model = als.fit(training)

// Evaluate the model by computing the RMSE on the test data
val predictions = model.transform(test)
Now here is the problem: in the old code the model that was obtained was a MatrixFactorizationModel; now it has its own model (ALSModel). In MatrixFactorizationModel you could directly do
val recommendations = bestModel.get.predict(userID)
which will give the list of products with the highest probability of the user liking them. But now there is no .predict method.
Any idea how to recommend a list of products given a user Id?
A: Use the transform method on the model:
import spark.implicits._
val dataFrameToPredict = sparkContext.parallelize(Seq((111, 222)))
  .toDF("userId", "productId")
val predictionsOfProducts = model.transform(dataFrameToPredict)
There's a jira ticket to implement a recommend(User|Product) method, but it's not yet on the default branch. Now you have a DataFrame with a score for the user. You can simply use orderBy and limit to show the N recommended products:
// the where is for the case when we have a big DataFrame with many users
model.transform(dataFrameToPredict.where('userId === givenUserId))
  .select('productId, 'prediction)
  .orderBy('prediction.desc)
  .limit(N)
  .map { case Row(productId: Int, prediction: Double) => (productId, prediction) }
  .collect()
The DataFrame dataFrameToPredict can be some large user-product DataFrame, for example all users x all products.
Q: Python: How can I use an external queue with a ProcessPoolExecutor? I've very recently started using Python's multithreading and multiprocessing features. I tried to write code which uses a producer/consumer approach to read chunks from a JSON log file, write those chunks as events into a queue and then start a set of processes that will poll events from that queue (file chunks) and process each one of them, printing out the results. My intent is to start the processes first, and leave them waiting for the events to start coming into the queue. I'm currently using this code, which seems to work, using some bits and pieces from examples I found:
import re, sys
from multiprocessing import Process, Queue

def process(file, chunk):
    f = open(file, "rb")
    f.seek(chunk[0])
    for entry in pat.findall(f.read(chunk[1])):
        print(entry)

def getchunks(file, size=1024*1024):
    f = open(file, "rb")
    while True:
        start = f.tell()
        f.seek(size, 1)
        s = f.readline()  # skip forward to next line ending
        yield start, f.tell() - start
        if not s:
            break

def processingChunks(queue):
    while True:
        queueEvent = queue.get()
        if (queueEvent == None):
            queue.put(None)
            break
        process(queueEvent[0], queueEvent[1])

if __name__ == "__main__":
    testFile = "testFile.json"
    pat = re.compile(r".*?\n")
    queue = Queue()
    for w in xrange(6):
        p = Process(target=processingChunks, args=(queue,))
        p.start()
    for chunk in getchunks(testFile):
        queue.put((testFile, chunk))
        print(queue.qsize())
    queue.put(None)
However, I wanted to learn how to use the concurrent.futures ProcessPoolExecutor to achieve the same results in an asynchronous manner, using Future result objects. My first attempt implied using an external queue, created with the multiprocessing Manager, which I would pass to the processes for polling. However this doesn't seem to work and I reckon it is possible that this is not the way ProcessPoolExecutor was designed to work, as it seems to use an internal queue of its own.
I used this code:

    import concurrent
    from concurrent.futures import as_completed
    import re, sys
    from multiprocessing import Lock, Process, Queue, current_process, Pool, Manager

    def process(file, chunk):
        entries = []
        f = open(file, "rb")
        f.seek(chunk[0])
        for entry in pat.findall(f.read(chunk[1])):
            entries.append(entry)
        return entries

    def getchunks(file, size=1024*1024):
        f = open(file, "rb")
        while True:
            start = f.tell()
            f.seek(size, 1)
            s = f.readline()  # skip forward to next line ending
            yield start, f.tell() - start
            if not s:
                break

    def processingChunks(queue):
        while True:
            queueEvent = queue.get()
            if (queueEvent == None):
                queue.put(None)
                break
            return process(queueEvent[0], queueEvent[1])

    if __name__ == "__main__":
        testFile = "testFile.json"
        pat = re.compile(r".*?\n")
        procManager = Manager()
        queue = procManager.Queue()

        with concurrent.futures.ProcessPoolExecutor(max_workers = 6) as executor:
            futureResults = []
            for i in range(6):
                future_result = executor.submit(processingChunks, queue)
                futureResults.append(future_result)

            for complete in as_completed(futureResults):
                res = complete.result()
                for i in res:
                    print(i)

        for chunk in getchunks(testFile):
            queue.put((testFile, chunk))
            print(queue.qsize())
        queue.put(None)

I'm unable to obtain any results with this, so obviously I'm doing something wrong and there's something about the concept that I didn't understand. Can you guys please give me a hand understanding how I could implement this?

A: If you're using a ProcessPoolExecutor, you don't need your processingChunks function at all, or any of the stuff you're importing from multiprocessing. The pool does essentially what your function was doing before automatically.
Instead, use something like this to queue up and dispatch all the work in one go:

    with concurrent.futures.ProcessPoolExecutor(max_workers = 6) as executor:
        executor.map(process, itertools.repeat(testFile), getchunks(testFile))

I'm not sure how your original code worked with pat not being an argument to process (I'd have expected every worker process to fail with a NameError exception). If that's a real issue (and not just an artifact of your example code), you may need to modify things a bit more to pass it in to the worker processes along with file and chunk (itertools.repeat may come in handy).
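As a runnable sketch of that pattern (with a hypothetical three-line sample file, and a tiny chunk size so the file splits into several chunks), the pairing that executor.map performs can be seen with plain map, which takes the same arguments. Defining pat at module level is also what lets forked workers see it:

```python
import itertools
import re
import tempfile

pat = re.compile(rb".*?\n")  # module-level, so forked worker processes see it too

def getchunks(path, size=16):
    # Same chunking generator as in the question, shrunk for the demo.
    with open(path, "rb") as f:
        while True:
            start = f.tell()
            f.seek(size, 1)
            s = f.readline()  # skip forward to next line ending
            yield start, f.tell() - start
            if not s:
                break

def process(path, chunk):
    # Read one chunk of the file and return its complete lines.
    with open(path, "rb") as f:
        f.seek(chunk[0])
        return pat.findall(f.read(chunk[1]))

with tempfile.NamedTemporaryFile(suffix=".json", delete=False) as tmp:
    tmp.write(b'{"a": 1}\n{"b": 2}\n{"c": 3}\n')
    path = tmp.name

# With a pool this would be:
#   executor.map(process, itertools.repeat(path), getchunks(path))
# itertools.repeat pairs the constant filename with every chunk.
results = list(map(process, itertools.repeat(path), getchunks(path)))
entries = [entry for chunk_entries in results for entry in chunk_entries]
```

Each element of results holds the entries of one chunk; flattening them recovers the three lines in order.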
Q: How do I know a div is display: none in two or more tabs of a browser? Consider I have this page, sample.php:

    <div id="id1" style="display: none">
        Hello
    </div>

Initially when the page loads, this div is not displayed, but I have a button1. When I click on this button, it changes the display value to block. There's another button2 that I use to change the display value back to none. So, this div is visible only when button1 is clicked. In other cases (when the page is reloaded or when button2 is clicked), it is invisible. Now, suppose that I've opened this same page in 4 more tabs of a browser (it can be even more, I'm just giving an example), and I've pressed button1 in any 2 of them. That means this div is visible in those 2 tabs. Now, what I want is that when the div is visible, the text inside it should be stored in a SESSION variable. And I can do that using JavaScript, jQuery, Ajax and PHP. When the two tabs have 2 visible divs, their text "Hello" is stored in the same SESSION variable (it will store only "Hello", not "HelloHello", since the SESSION will get replaced by each new request, as my Ajax and PHP code will run in all tabs at the same time). Now what I finally need is that when those 2 visible divs in 2 different tabs are set to display: none, it should unset the SESSION variable that has "Hello" in it, because my condition for unsetting it is that none of the 4 tabs should have that div visible. If any of the tabs has the div visible, the SESSION should have "Hello" in it. PS: I've tried hard to explain my problem, but still, if you don't understand, please mention it in a comment before answering. Thank you.

A: In this code, when button1 (the show button) is clicked, it will send the data to new.php. new.php will set the session variable. I have used another session variable, count, for tracking the number of tabs in which it is visible.
When the hide button is clicked, a POST request is sent and the session variable count is reduced. When the count variable reaches 0, there are no more tabs in which it is visible, so the session is destroyed. Here is the code:

HTML:

    <html>
    <head>
    <script src="jquery/jquery-1.12.2.min.js"></script>
    <script>
    $(document).ready(function(){
        $("#show").click(function(){
            $("#id1").css("display","block");
            var x = $("#id1").text();
            $.post("new.php", {val: x});
        });
        $("#hide").click(function(){
            $("#id1").css("display","none");
            $.post("new.php");
        });
    });
    </script>
    </head>
    <body>
    <div id="id1" style="display: none">
        Hello
    </div>
    <button type="button" id="show">show</button>
    <button type="button" id="hide">Hide</button>
    </body>
    </html>

new.php:

    <?php
    session_start();
    if(!empty($_POST["val"])) {
        if(!isset($_SESSION["val"])) {
            $_SESSION["val"] = $_POST["val"];
            $_SESSION["count"] = 1;
        } else {
            $_SESSION["count"] += 1;
        }
    } else {
        if(isset($_SESSION["val"])) {
            if($_SESSION["count"] == 1) {
                session_destroy();
            } else {
                $_SESSION["count"] -= 1;
            }
        }
    }
    ?>
Q: Destructor getting called on protobuf object I've implemented two simple protobuf messages:

    package TestNamespace;

    message Object1 {
        int32 age = 1;
        int32 speed = 2;
    }

    message Object2 {
        Object1 obj = 1;
        int32 page_number = 2;
    }

I then have two wrapper classes for the protobuf objects. Inside Object2Wrapper, after I call ToProtobuf, when it returns, the destructor is called on obj1 for some reason I cannot understand.

    class Object1Wrapper {
    public:
        int32_t Age{};
        int32_t Speed{};

        void ToProtobuf(TestNamespace::Object1 obj1) {
            obj1.set_age(Age);
            obj1.set_speed(Speed);
            // After this line the ~destructor is called on obj1
        }
    };

    class Object2Wrapper {
    public:
        Object1Wrapper obj1{};
        int32_t page_number{};

        void ToProtobuf(TestNamespace::Object2 obj2Param) {
            TestNamespace::Object1 * obj1Proto = obj2Param.mutable_obj();
            obj1.ToProtobuf(*obj1Proto);
            // Do stuff.... but obj1Proto is empty
        }
    };

A: The reason is that you pass the object obj1 by value. A copy of the parameter is created, and this copy is destroyed when the function returns. I guess what you really wanted to do is to pass the object by reference, i.e.:

    void ToProtobuf(TestNamespace::Object1& obj1) {
        obj1.set_age(Age);
        obj1.set_speed(Speed);
    }

Only when you pass by reference can you alter the object inside the function. As you have it now, any changes to obj1 inside the function have zero effect on the parameter that is passed to the function.
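The difference can be illustrated without protobuf at all (FakeObj1 below is a hypothetical stand-in for the generated TestNamespace::Object1 class, so the snippet compiles on its own):

```cpp
#include <cstdint>

// Hypothetical stand-in for the generated protobuf class.
struct FakeObj1 {
    int32_t age = 0;
    int32_t speed = 0;
};

// Pass by value: the setters run on a copy whose destructor fires
// when the function returns; the caller's object is untouched.
void fill_by_value(FakeObj1 obj1) { obj1.age = 30; obj1.speed = 7; }

// Pass by reference: the caller's object itself is modified.
void fill_by_ref(FakeObj1& obj1) { obj1.age = 30; obj1.speed = 7; }

int32_t age_after_by_value() { FakeObj1 o; fill_by_value(o); return o.age; }  // stays 0
int32_t age_after_by_ref()   { FakeObj1 o; fill_by_ref(o);   return o.age; }  // becomes 30
```

The same reasoning applies to obj2Param in the question: it is also taken by value, so the changes made through mutable_obj() are lost when that copy dies.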
Q: Char Array Initialization How does this work:

    char Test1[8] = {"abcde"};

AFAIK, this should be stored in memory at Test1 as:

    a b c d e 0 SomeJunkValue SomeJunkValue

instead it gets stored as:

    a b c d e 0 0 0

Initializing only adds one trailing NULL char after the string literal, so how and why are all the other array members initialized to NULL? Also, any links or any conceptual idea on what the underlying method or function is that performs char TEST1[8] = {"abcde"}; would be very helpful. How is char Test1[8] = {"abcde"}; different from char Test1[8] = "abcde";?

A: Unspecified members of a partially initialized aggregate are initialized to the zero of that type.

6.7.9 Initialization

21 - If there are fewer initializers in a brace-enclosed list than there are elements or members of an aggregate, or fewer characters in a string literal used to initialize an array of known size than there are elements in the array, the remainder of the aggregate shall be initialized implicitly the same as objects that have static storage duration.

10 - [...] If an object that has static or thread storage duration is not initialized explicitly, then: if it has pointer type, it is initialized to a null pointer; if it has arithmetic type, it is initialized to (positive or unsigned) zero; [...]

For the array char Test1[8], the initializers {"abcde"} and "abcde" are completely equivalent per 6.7.9:14:

An array of character type may be initialized by a character string literal or UTF-8 string literal, optionally enclosed in braces.
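As a small self-check of that rule (an illustration, not the standard's wording): the three elements after the implicit terminator really are zero, not junk:

```c
#include <string.h>

/* Initialize the array exactly as in the question and report whether
 * elements 5..7 (after 'a'..'e' and the terminator) are all zero,
 * as C11 6.7.9p21 requires for the uninitialized remainder. */
int tail_is_zero(void) {
    char test1[8] = {"abcde"};   /* same as: char test1[8] = "abcde"; */
    char expected[8] = {'a', 'b', 'c', 'd', 'e', 0, 0, 0};
    return memcmp(test1, expected, 8) == 0;
}
```

Note this only holds for an array with initializers; a plain `char Test1[8];` with automatic storage duration really would hold junk values.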
Q: How to absolute position imageless corners with css I'm looking for a way to absolutely position the four corners used in the following CSS style. I tried the following, but that wasn't the right one.

    .rbottom{display:block; background:#f57f20; position:absolute; top:500px;}

This is the original CSS:

    .container5 {background:#666666; color:#fff; margin:0 15px;}
    .rbottom{display:block; background:#f57f20;}
    .rtop{display:block; background:#eaeade;}
    .rtop *, .rbottom *{display: block; height: 1px; overflow: hidden; background:#666666;}
    .r1{margin: 0 5px}
    .r2{margin: 0 3px}
    .r3{margin: 0 2px}
    .r4{margin: 0 1px; height: 2px}
    .rl1 {margin: 0 0 0 5px; }
    .rl2 {margin: 0 0 0 3px; }
    .rl3 {margin: 0 0 0 2px; }
    .rl4 {margin: 0 0 0 1px; height: 2px;}
    .rr1 {margin: 0 5px 0 0; }
    .rr2 {margin: 0 3px 0 0; }
    .rr3 {margin: 0 2px 0 0; }
    .rr4 {margin: 0 1px 0 0; height: 2px;}

A: It's not entirely clear what you're asking. The way to position something absolutely in CSS is to use the position: absolute property, and then specify where that element should be positioned, e.g.:

    .foo {
        position: absolute;
        top: 0px;
        left: 100px;
    }

On the other hand, it sounds like you're trying to implement CSS rounded corners. If you don't mind your corners being square (not rounded) in IE, you can use the browser-specific CSS3 rounded-corner properties:

    .bar {
        border: 1px solid #000000;
        border-radius: 3px;
        -moz-border-radius: 3px;
        -webkit-border-radius: 3px;
    }

Which should work in Firefox, Safari, and Google Chrome, but not any version of IE.
Q: Mapping an array to consecutive non-nil values of a method? So, I have x variable names I want to assign to x consecutive non-nil values from a method… how can I do that? For example, I want to map %w[alpha beta gamma] to the three consecutive non-nil values of the function get(x) beginning with 0. So, say the values of get(x) are get(0)=1, get(1)=54, get(2)=nil, get(3)=6. I'd want alpha = 1, beta = 54, and gamma = 6. How can I do this?

A: Setting Hash key/value pairs may not really answer the question, but it's almost always the right solution for a real program ...

    def get x   # test stub
      [1, 54, nil, 6][x]
    end

    # find the next n non-nil values of an integer function
    def find n, sofar, nextval
      return sofar if sofar.length >= n
      return find n, (sofar << get(nextval)).compact, nextval + 1
    end

    h = {}
    h[:alpha], h[:beta], h[:gamma] = find 3, [], 0
    p h
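For comparison, the same idea can be written with a lazy enumerator, which stops calling get as soon as enough non-nil values have been seen (this sketch assumes get eventually yields at least n non-nil values, otherwise first would loop forever):

```ruby
def get(x)                      # same test stub as above
  [1, 54, nil, 6][x]
end

# Walk get(0), get(1), ... lazily and keep the first n non-nil values.
def first_nonnil(n)
  (0..).lazy.map { |x| get(x) }.reject(&:nil?).first(n)
end

alpha, beta, gamma = first_nonnil(3)
```

Multiple assignment then distributes the values exactly as the question asks, without the explicit recursion.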
Q: How to sort a list by datetime in python? I am writing a .py file to sort lists by time which contain the following information: date, time, emp_id, action_performed. There is a question asked about this on Stack Overflow, but I couldn't exactly follow it (I am new to Python). I also checked out the sort function and the datetime library but couldn't get them to work.

    list = [
        ('2017/09/10 13:19:38', 'employee_id', 'enrolled'),
        ('2017/09/10 12:15:21', 'employee_id', 'deleted'),
        ('2017/09/10 21:19:34', 'employee_id', 'enrolled'),
        ('2017/09/10 22:42:50', 'employee_id', 'deleted'),
        ('2017/09/10 16:53:03', 'employee_id', 'enrolled')
    ]

I just want to know which action was performed first. Can someone help me out?

A:

    from datetime import datetime

    list = [
        ('2017/09/10 13:19:38', 'employee_id', 'enrolled'),
        ('2017/09/10 12:15:21', 'employee_id', 'deleted'),
        ('2017/09/10 21:19:34', 'employee_id', 'enrolled'),
        ('2017/09/10 22:42:50', 'employee_id', 'deleted'),
        ('2017/09/10 16:53:03', 'employee_id', 'enrolled')
    ]

    sorted_list = sorted(list, key=lambda t: datetime.strptime(t[0], '%Y/%m/%d %H:%M:%S'))

Use the key parameter of the sorted function; in this case it tells the function to parse the first element of each tuple as a datetime string with the format '%Y/%m/%d %H:%M:%S' and use that value for sorting.
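A side note rather than part of the answer above: because this particular format is zero-padded and ordered year/month/day hour:minute:second, the timestamp strings already sort chronologically as plain text, so a strptime-free key also works for this data:

```python
events = [
    ("2017/09/10 13:19:38", "employee_id", "enrolled"),
    ("2017/09/10 12:15:21", "employee_id", "deleted"),
    ("2017/09/10 16:53:03", "employee_id", "enrolled"),
]

# 'YYYY/MM/DD HH:MM:SS' sorts the same way lexicographically and
# chronologically, so the raw string is itself a valid sort key.
events_sorted = sorted(events, key=lambda t: t[0])
first_action = events_sorted[0][2]  # the earliest event's action
```

This shortcut only holds for fixed-width, most-significant-field-first formats; for anything else (e.g. 'DD/MM/YYYY' or unpadded hours), parse with datetime.strptime as the answer shows.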
Q: MySQL query - limit results of a BLOB to 1 unique element in the blob (row) I've got a query to find abandoned carts that looks like this:

    SELECT c.customerid, c.custconfirstname, c.custconemail, o.ordstatus, o.orddate,
           GROUP_CONCAT('Order Id: ', orderid, ' | Product name: ', ordprodname,
                        ' | Quantity: ', ordprodqty, '<br>') AS ordered_items
    FROM isc_customers c
    LEFT OUTER JOIN isc_orders o ON o.ordcustid = c.customerid
    LEFT OUTER JOIN isc_order_products op ON op.orderorderid = o.orderid
    LEFT OUTER JOIN isc_product_images pi ON pi.imageprodid = op.orderprodid
    GROUP BY c.customerid
    HAVING COUNT(DISTINCT o.ordcustid) > 0
       AND o.ordstatus = 0
       AND o.orddate < UNIX_TIMESTAMP() - '18000'
       AND o.orddate > UNIX_TIMESTAMP() - '259200'

For each customer (unique customerid), the BLOB will produce something like this for ordered_items:

    Order Id: 15256 | Product name: PROD A | Quantity: 1,
    Order Id: 15256 | Product name: PROD B | Quantity: 1,
    Order Id: 15299 | Product name: PROD A | Quantity: 1,
    Order Id: 15301 | Product name: PROD A | Quantity: 1

This can basically be interpreted as the customer having had 3 abandoned carts in the time frame. Because this query will be used to send out an abandoned-cart email, I don't want to spam and send an email with every product from every abandoned cart (the unique orderid), for various reasons, including that in the example above the customer has tried to put Product A in the cart 3 times over 3 orders and thus would get the item 3 times in the email. So how can I limit the query so that it will only return the results of 1 orderid per customerid?

A: Add DISTINCT to your query and that will do it: http://www.w3schools.com/sql/sql_distinct.asp
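SQLite's group_concat accepts the same DISTINCT modifier as MySQL's GROUP_CONCAT, so the effect of the suggested fix can be sketched with the standard-library sqlite3 module (toy table, not the shop schema above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cart_items (customerid INTEGER, prodname TEXT);
    INSERT INTO cart_items VALUES (1, 'PROD A'), (1, 'PROD A'), (1, 'PROD B');
""")

# Without DISTINCT every joined row is concatenated, duplicates included.
with_dupes = conn.execute(
    "SELECT group_concat(prodname) FROM cart_items GROUP BY customerid"
).fetchone()[0]

# With DISTINCT each product appears once per customer.
deduped = conn.execute(
    "SELECT group_concat(DISTINCT prodname) FROM cart_items GROUP BY customerid"
).fetchone()[0]
```

Note that DISTINCT applies to the whole concatenated expression, so in the original query it would deduplicate the full "Order Id: ... | Quantity: ..." strings; deduplicating only by product, as above, requires concatenating just the product column.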
Q: count after join on multiple tables and count of multiple column values Please help me with the below problem.

table 1: employee details

    emp name    empno
    ---------------------
    John        1234
    Joe         6789

table 2: employee assignment

    empno  assignmentstartdate  assignmentenddate  assignmentID  empassignmentID
    ----------------------------------------------------------------------------
    1234   01JAN2017            02JAN2017          A1            X1
    6789   01JAN2017            02JAN2017          B1            Z1

table 3: employee assignment property

    empassignmentID  assignmentID  propertyname  propertyvalue
    -----------------------------------------------------------
    X1               A1            COMPLETED     true
    X1               A1            STARTED       true
    Z1               B1            STARTED       true
    Z1               B1            COMPLETED     false

Result wanted (count of COMPLETED and STARTED for each employee):

    emp name  emp no.  COMPLETED  STARTED
    --------------------------------------
    John      1234     1          1
    Joe       6789     0          1

Currently with my query it is not putting the count correctly for propertyvalue; if I run it for one employee it works correctly, but not for multiple employees. Please help.
    SELECT empno, empname,
           (SELECT COUNT(A.propertyvalue)
              FROM employeedetails C, employees_ASSIGNMENT RCA, employee_assignment_property A
             WHERE TRUNC(startdate) >= '14jun2017'
               AND TRUNC(endate) <= '20jun2017'
               AND RCA.empno = C.empno
               AND RCA.empassignmetid = A.empassignmetid
               AND rca.EMPNO IN ('1234','6789')
               AND RCA.assignmentid = A.assignmentid
               AND A.Name = 'COMPLETED'
               AND A.propertyvalue = 'true'),
           (SELECT COUNT(A.propertyvalue)
              FROM employeedetails C, employees_ASSIGNMENT RCA, employee_assignment_property A
             WHERE TRUNC(startdate) >= '14jun2017'
               AND TRUNC(endate) <= '20jun2017'
               AND RCA.empno = C.empno
               AND RCA.empassignmetid = A.empassignmetid
               AND rca.EMPNO IN ('1234','6789')
               AND RCA.assignmentid = A.assignmentid
               AND A.Name = 'STARTED'
               AND A.propertyvalue = 'true')
      FROM employeedetails
     WHERE EMPNO IN ('1234','6789')
     GROUP BY C.empno, C.EMPNAME

A: If you want your result as a query without CTEs, a correlated scalar subquery per property should work:

    select empname,
           empno,
           (select count(*)
              from employee_assignment
              join employee_assignment_property
                on employee_assignment.empassignmentID = employee_assignment_property.empassignmentID
             where employee_assignment.empno = t.empno
               and employee_assignment_property.propertyname = 'COMPLETED'
               and employee_assignment_property.propertyvalue = 'true') as COMPLETED,
           (select count(*)
              from employee_assignment
              join employee_assignment_property
                on employee_assignment.empassignmentID = employee_assignment_property.empassignmentID
             where employee_assignment.empno = t.empno
               and employee_assignment_property.propertyname = 'STARTED'
               and employee_assignment_property.propertyvalue = 'true') as STARTED
      from employee_details t
Q: ASP NET attribute routing array So I am using attribute routing for my controller like this:

    [Route("last/{id:int}")]
    public IHttpActionResult GetUpdateLast([FromUri] int id)
    {
        return Ok();
    }

Now I want to convert it to accept an array of integers. In the controller parameter I just switch to [FromUri] int[] id with no problems. However, how do I change the route attribute [Route("last/{id:int}")] to make it accept an array of integers?

A: You were almost there. There is no way of doing what you want using the route itself, so pass the values as a query string instead (for example last?id=1&id=2&id=3, which model-binds to the int[] parameter).
Q: ViewScoped Constructor being called before save method in JSF 2.1.16 Related to my JSF application, I noticed there's a problem with the Mojarra JSF 2.1.16 library. I have a ViewScoped bean which loads a user from a login got as a viewParam. After that, the loaded user data can be managed and saved. Below is the view code, where I have skipped the main form fields as I have tested there's no problem with them.

    <ui:composition xmlns="http://www.w3.org/1999/xhtml"
        xmlns:ui="http://java.sun.com/jsf/facelets"
        xmlns:f="http://java.sun.com/jsf/core"
        xmlns:h="http://java.sun.com/jsf/html"
        xmlns:c="http://java.sun.com/jsp/jstl/core"
        xmlns:p="http://primefaces.org/ui"
        template="/templates/general_template.xhtml">

        <ui:define name="metadata">
            <f:metadata>
                <f:viewParam id="user" name="user" value="#{manager._ParamUser}" />
                <f:event type="preRenderView" listener="#{manager.initialize}" />
            </f:metadata>
        </ui:define>

        <ui:define name="general_content">
            <p:outputPanel autoUpdate="false" id="loggedData" name="loggedData" layout="block">
                <h:form id="SystemUserForm">
                    <h:panelGrid columns="4" cellspacing="10px" style="border-color:red;">
                        <h:outputText value="#{msg.LOGIN}:" />
                        <h:outputText value="#{manager._UserBean._login}" />
                    </h:panelGrid>
                    <p:commandButton value="#{msg.UPDATE}" action="#{manager.actionSave}" ajax="false" />
                    <p:commandButton value="#{msg.CANCEL}" action="#{manager.actionCancelSave}" ajax="false" />
                </h:form>
            </p:outputPanel>
        </ui:define>

At the beginning the bean is created and the User itself is loaded from the database using the received param. The problem comes when I call the action method to save it, because the ViewScoped bean called manager is being constructed again. So there's no param and I get a NullPointerException. This works properly with Mojarra 2.1.14 and 2.1.15.
Backing bean code:

    @ManagedBean
    @ViewScoped
    public class Manager extends UserData {

        public static final String PARAM_USER = "ParamUser";

        private String _ParamUser;

        public String get_ParamUser() {
            return this._ParamUser;
        }

        public void set_ParamUser(String _ParamUser) {
            this._ParamUser = _ParamUser;
        }

        public Manager() {
            super();
        }

        @Override
        public void initialize(ComponentSystemEvent event) {
            if (!FacesContext.getCurrentInstance().isPostback()) {
                loadUserBean(this._ParamUser);
                if (this._UserBean == null) {
                    redirectTo404();
                }
            }
        }

        @Override
        public String actionSave() {
            super.actionSave();
            return NavigationResults.USER_LIST;
        }

UserData is, of course, an abstract class. When actionSave() is called, the bean is constructed again and there is no _ParamUser attribute, because this is set via the viewParam. That constructor recall only happens with Mojarra 2.1.16.

A: This issue has been solved in Mojarra JSF 2.1.17, tried and tested. It could be a problem with Mojarra JSF 2.1.16 and Tomcat 6. However, I haven't found any known issues for that version.
Q: Load partial view multiple times on click of a button Background: I have 2 entities, Course and Module. A course can have many modules. On the page where you update/add modules, it's basically an update page of the course where each module has a partial page rendered in an accordion. See screenshot:

I have this to populate existing modules:

    @foreach (var module in Model.Modules)
    {
        Html.RenderPartial("~/Views/Module/_Update.cshtml", (RocketLabs.Models.Module)module);
    }

How do I add partial renders (with new models) every time Add Module is clicked? Any piece of advice would be highly appreciated. Thanks!

A: I would recommend taking a look at the Editing a Variable Length List article from Steven Sanderson, where he illustrates how you could use an AJAX request to a server-side partial to add new rows dynamically. The idea here is to subscribe to the click event of the Add button and trigger an AJAX request to a controller action that will return a new partial containing a blank row to be edited. And if you want a pure client-side solution, he also wrote an article where the same could be achieved with Knockout.js, for example, without any AJAX requests to add new entries to the list.
Q: Nested resources in Chef InSpec Is it possible to use one resource inside another resource in Chef InSpec? Example:

    describe command('su srijava') do
      describe file('/app/java/latest') do
        it { should exist }
      end
    end

It throws an error like:

    `method_missing': undefined method `file' for RSpec::ExampleGroups::CommandSuSriava:Class (NoMethodError)

Actually, what I want to do is run a utility that is installed under another user, check the output returned from that session, and verify it. Example: I installed Java as the srijava user. Now in InSpec I wrote the command to test the Java version (assume that java --version runs only as that user and not as root).

If I use su srijava, then I do not get the output returned back to the root session and the test fails. Code with su:

    describe command('su srijava ; cd /app/java; ./java --version') do
      its('stdout') { should match('1.7') }
    end

If I run without su srijava, then my utility will throw an error that the user is not srijava. Code without su:

    describe command('cd /app/java; ./java --version') do
      its('stdout') { should match('1.7') }
    end

How can I do that?

A: As Noah pointed out, nested describe blocks are not supported yet. I also think you do not need those.

    result = command('runcommand').stdout
    filename = result + '/path'

    describe file(filename) do
      it { should exist }
    end

On the other hand, you could use the bash resource to run multiple commands. command uses the default shell of the user; bash enforces it. This enables you to:

    describe bash('su srijava ; cd /app/java; ./java --version') do
      its('stdout') { should match('1.7') }
    end
Q: How do miners/clients prove a transaction is valid? To prove a transaction is valid, you must prove that the source address has a balance at least as much as the transaction. How does one prove this in a performant manner? The block chain of bitcoin last I checked is 16G. Surely miners don't scan through 16G of data to check every block to track all transactions that address has ever had. How do miners and clients speed this up to a fast and cheap operation? A: Bitcoin doesn't work on balances, it works on transaction inputs and outputs. A transaction's inputs must specify unspent transaction outputs (UTXO) of previous transactions. And clients maintain an index of UTXO to easily find the referenced outputs.
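A toy sketch of that idea (illustrative data structures only, nothing like real Bitcoin node code): keep an index of unspent outputs keyed by (txid, output index), and validate a transaction by looking up its inputs there instead of scanning the whole chain:

```python
# Hypothetical UTXO index: (txid, output_index) -> unspent amount.
utxo_index = {
    ("tx-aaa", 0): 50,
    ("tx-bbb", 1): 30,
}

def is_valid(tx):
    """A transaction names the outputs it spends and the amounts it
    creates; every input must be unspent and inputs must cover outputs."""
    total_in = 0
    for ref in tx["inputs"]:
        if ref not in utxo_index:   # unknown or already-spent output
            return False
        total_in += utxo_index[ref]
    return total_in >= sum(tx["outputs"])

spend = {"inputs": [("tx-aaa", 0)], "outputs": [45, 4]}   # 1 left over as a fee
bad_spend = {"inputs": [("tx-zzz", 9)], "outputs": [1]}   # input not in the index
```

Validation cost is then a few dictionary lookups per transaction, regardless of chain size; when a block is accepted, the node removes the spent entries and adds the new outputs.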
Q: R user-defined function saving graphics - error with dev.off()? I am trying to create a function that would do the plots of my PCA (prcomp object). These plots would then be saved in the directory set as one of the arguments of the function. Whenever I do it separately, it works. The graphics can be printed from the function (so there is no major issue in the code). The function does create a file in the right directory, with the right name, BUT it is all blank... Here is the code (a bit long..):

    # arg1: the PCA object that we want to plot
    # arg2: the list of sample names
    # arg3: where to save the plots
    # arg4: description of the treatment done
    PlotPCA <- function(arg1, arg2, arg3, arg4){
        library(ggplot2)
        DFplot <- data.frame(arg1$x, Sple=arg2)
        arg2 <- as.factor(arg2)
        DFplot.PoV <- arg1$sdev^2/sum(arg1$sdev^2)
        LoadMat1 <- arg1$rotation  # the matrix of variable loadings (i.e., a matrix whose columns contain the eigenvectors)
        ScoreMat1 <- predict(arg1)
        percentage <- paste(colnames(DFplot), "(", paste(as.character((round(DFplot.PoV, digits = 4))*100), "%", ")", sep=""))
        themePCA <- theme(panel.background = element_blank(), panel.border=element_rect(fill=NA), panel.grid.major = element_blank(), panel.grid.minor = element_blank(), strip.background=element_blank(), axis.text.x=element_text(colour="black"), axis.text.y=element_text(colour="black"), axis.ticks=element_line(colour="black"), plot.margin=unit(c(1,1,1,1),"line"))

        setwd(arg3)
        png(file=paste0("PC1$2 on ", arg4, Sys.Date(), ".png"), width=4000, height=3000, res=300)
        ggplot(DFplot, aes(x=PC1, y=PC2, col=arg2, label=arg2)) +
            labs(title=paste0("PC1$2 on ", arg4, Sys.Date()), colour="Samples (same color = same composition)") +
            geom_text() +  # Add labels
            themePCA +     # for white background
            xlab(percentage[1]) +
            ylab(percentage[2]) +
            theme(legend.position="none")
        dev.off()

        setwd(arg3)
        png(file=paste0("PC2$3 on ", arg4, Sys.Date(), ".png"), width=4000, height=3000, res=300, antialias=c("default"))
        ggplot(DFplot, aes(x=PC3, y=PC4, col=arg2, label=arg2)) +
            labs(title=paste0("PC2$3 on ", arg4, Sys.Date()), colour="Samples (same color = same composition)") +
            #geom_point(size=0.5, alpha=0.5) +  # Size and alpha just for fun
            geom_text() +  # Add labels
            #scale_color_manual(values=ColPB) +
            themePCA +     # for white background
            xlab(percentage[3]) +
            ylab(percentage[4]) +
            theme(legend.position="none")
        dev.off()
    }

I have the impression it comes from dev.off(), as I get:

    null device
    1

Any idea how to solve this issue?

A: You need to save your plot in a variable like p <- ggplot(...) + ... and then either print the variable with print(p) or, much better, use the function ggsave, which is the intended way to save plots created with ggplot2.
Q: Does Convention over Configuration go against loose coupling? Something a lot of programmers seem to be abiding by is Convention over Configuration. In the context of IoC this means using the API instead of XML configuration. How are you supposed to keep the loose-coupling idea behind DI/IoC when you have to reference the DLL containing the concrete implementations of the abstract interfaces/classes in order to use convention over configuration?

A: Configuring via a code API is not "convention." It is "configuration." Convention over configuration means, for example, that a particular application framework (such as Rails or CodeIgniter) may require that certain types of code be placed in certain directories. You are not required to tell the application framework where these files are via configuration.
Q: Creating celery pipeline with tasks, that run only when some number of items in previous task are stacked I'm trying to create some kind of pipeline for my application. I have a problem: the main target of the application is to read video, take every N-th shot of the video, and put it in the pipeline. Inside the pipeline there are 5 different tasks, for example:

1. Crop image
2. Store image in the array. If array length = IMAGES_NEEDED_FOR_TASK3, launch task 3
3. Apply some transforms to the image; make one big image from IMAGES_NEEDED_FOR_TASK3 images
4. Stack transformed images in the array. If array length = IMAGES_NEEDED_FOR_TASK5, launch task 5
5. Write info about the incoming images from task 4 to the database

I struggle with implementations of tasks 2 and 4, because they have conditions. If they didn't have conditions, I would just use the chain method. I thought about calling task 3 from task 2 (I thought to create a different queue for every task), but I read that this is considered bad practice. Thank you in advance.

A: It is very difficult to express this in terms of the Celery workflow (if that is what you are struggling with), so I suggest you write task 4 so it simply queues task 5 when the stack has IMAGES_NEEDED_FOR_TASK5 images (simply by calling app.send_task() or app.apply_async()). The same goes for task 2. You are fine as long as you do not need to deal with the results of tasks 3 and 5, in which case the logic becomes more complicated. It would be really nice if Celery had a primitive that could make it easier to express cases like this. If you still insist on using the Celery workflow, then you should perhaps rethink the logic and take advantage of the Chunks primitive. In this case your task 2 would use .chunks(). The same goes for task 5.
Q: Just Works Bluetooth Low Energy Security I have been puzzled the past couple of days about Bluetooth Low Energy security. My situation: I am looking at what it might take to secure unpaired BLE connections. In some cases, and essentially the main use case, a mobile device (Android) may want to connect to several different peripherals randomly at different times throughout the day. By 'randomly' I mean I am walking by one of a dozen scattered around my apartment and I personally don't know exactly which one without physically checking. The process of physically accepting a pair request seems unnecessary and quite time consuming. I don't want to walk into the room the first time and have to manually pair each device; that would be insane if I had 100+ devices. Note that these devices don't necessarily have to be connected at the same time, but could be. Also note that I understand this isn't generally the main use case of typical peripheral-to-mobile communication, considering the max number of connected devices at one time is 7. These peripheral devices have no I/O, thus the Numeric Comparison, Passkey Entry, and Out of Band connection methods don't seem to be the ones I'm looking for. It seems that Just Works should work; however, there doesn't seem to be security support for it. Anyone with a sniffer would be able to read all communication. It seems I may be asking too much of Bluetooth LE and will have to figure out another means, maybe in the application layer?

A: If you want to have encrypted communication on the Bluetooth layer, you will need a pairing process. Just Works is the simplest one, but using Android or iOS you will still need to press a button at least on the phone side. A basic idea of the Bluetooth pairing process is that no predefined keys are necessary. Therefore, the user can secure the connection between two devices out of the box. From your post above, some questions arise:

- Do you need encryption or authentication?
- What is the sense of encryption in your use case if any device is able to connect, encrypt and communicate?
- What level of security do you need? Security against professional attackers, or against someone accidentally connecting to your device with a BLE scanner app?
- Do you actually need encryption or signing?

Depending on your requirements, there are different solutions. To my knowledge, BLE encryption without pairing is not possible (with usual smartphones). You could install predefined encryption keys and encrypt data in your application. But then you have another problem: how to distribute the keys? And if one device is hacked, how do you revoke keys? If all devices share the same key, your whole system is broken if one device is hacked.
Q: How to solve the Issue declaring a cursor in a Procedure/Package? I'm new to Oracle and having issues declaring a cursor within my package. Can anyone tell me what I'm doing wrong? Thanks

    CREATE OR REPLACE PACKAGE CHECK_STOCK_LEVELS AS
        PROCEDURE CHECK_PAYMTS_AGAINST_INVOICES;
        FUNCTION CHECK_PROD_PRICE RETURN NUMBER;
    END;
    /

    CREATE OR REPLACE PACKAGE BODY CHECK_STOCK_LEVELS AS
        FUNCTION CHECK_PROD_PRICE (INP_PROD_NAME VARCHAR2) RETURN NUMBER IS
            PROD_CNT PRODUCT.PROD_NAME%TYPE;
        BEGIN
            SELECT PROD_PRICE INTO PROD_CNT
            FROM PRODUCT
            WHERE PROD_PRICE = INP_PROD_NAME;
            RETURN (PROD_CNT);
        END CHECK_PROD_PRICE;
    /

    SELECT CHECK_STOCK_LEVELS.CHECK_PROD_PRICE(3.5mm Jack) FROM DUAL;

    PROCEDURE CHECK_PAYMTS_AGAINST_INVOICES
    DECLARE
        CURSOR SEL_PAYMTS_AND_INVS IS
            SELECT P.INVOICE_ID, P.PAYMENT_AMOUNT, I.INVOICE_AMOUNT
            FROM PAYMENT P
            INNER JOIN INVOICE I ON P.INVOICE_ID = I.INVOICE_ID;
        V_TEMP_RESULTS_ROW SEL_PAYMTS_AND_INVS%ROWTYPE;
        PAYMT_INVOICE_MISMATCH EXCEPTION;
    BEGIN
        OPEN SEL_PAYMTS_AND_INVS;
        FETCH SEL_PAYMTS_AND_INVS INTO V_TEMP_RESULTS_ROW;
        WHILE SEL_PAYMTS_AND_INVS%FOUND LOOP
            IF V_TEMP_RESULTS_ROW.PAYMENT_AMOUNT != V_TEMP_RESULTS_ROW.INVOICE_AMOUNT THEN
                DBMS_OUTPUT.PUT_LINE('EMPLOYEE ID : ' || V_EMPLOYEE_ROW.EMPID || ' HAS A SALARY OF : ' || V_EMPLOYEE_ROW.SALARY);
                RAISE PAYMT_INVOICE_MISMATCH;
            END IF;
            FETCH SEL_PAYMTS_AND_INVS INTO V_TEMP_RESULTS_ROW;
        END LOOP;
        CLOSE SEL_PAYMTS_AND_INVS;
    EXCEPTION
        WHEN PAYMT_INVOICE_MISMATCH THEN
            DBMS_OUTPUT.PUT_LINE('PAYMENT AMOUNT DOES NOT MATCH INVOICE AMOUNT');
            RAISE;
    END;
    END;

A: This:

    SELECT CHECK_STOCK_LEVELS.CHECK_PROD_PRICE(3.5mm Jack) FROM DUAL;

should be:

    SELECT CHECK_STOCK_LEVELS.CHECK_PROD_PRICE('3.5mm Jack') FROM DUAL;

And this:

    PROCEDURE CHECK_PAYMTS_AGAINST_INVOICES
    DECLARE
        CURSOR SEL_PAYMTS_AND_INVS IS
            SELECT P.INVOICE_ID, P.PAYMENT_AMOUNT, I.INVOICE_AMOUNT
            FROM PAYMENT P
            INNER JOIN INVOICE I ON P.INVOICE_ID = I.INVOICE_ID;
        V_TEMP_RESULTS_ROW SEL_PAYMTS_AND_INVS%ROWTYPE;
        PAYMT_INVOICE_MISMATCH EXCEPTION;
    BEGIN
    ...

should be:

    PROCEDURE CHECK_PAYMTS_AGAINST_INVOICES IS  -- not DECLARE
        CURSOR SEL_PAYMTS_AND_INVS IS
            SELECT P.INVOICE_ID, P.PAYMENT_AMOUNT, I.INVOICE_AMOUNT
            FROM PAYMENT P
            INNER JOIN INVOICE I ON P.INVOICE_ID = I.INVOICE_ID;
        V_TEMP_RESULTS_ROW SEL_PAYMTS_AND_INVS%ROWTYPE;
        PAYMT_INVOICE_MISMATCH EXCEPTION;
    BEGIN
    ...
Q: css calc() not producing expected result

I am trying to achieve a specific look for a responsive search bar, where I display a search input, a select for filters, and a submit button in one row. The submit button and the select have defined widths of 44px and 110px respectively, and I want the search input field to occupy the rest of the available space, therefore I use width: calc(100% - 154px);. But it is not working as intended. Please see the example here: https://jsfiddle.net/t97fqqxu/

HTML:

    <div id="main-search">
        <form>
            <input type="text" placeholder="Search.." />
            <select>
                <option>Option</option>
            </select>
            <input type="submit" value="Submit"/>
        </form>
    </div>

CSS:

    #main-search {
        background-color: @palette-white;
    }
    #main-search form {
        width: 100%;
        background: green;
    }
    #main-search input[type="text"] {
        width: calc(100% - 154px);
    }
    #main-search select {
        width: 110px;
    }
    #main-search input[type="submit"] {
        text-indent: -9999px;
        width: 44px;
        height: 44px;
    }

EDIT: I'm taking advantage of Bootstrap 3's box model, thus no matter the margins and paddings, the widths will always be 44px and 110px.

A: You didn't account for the border on the input elements. By default, the border/padding is added to the element's width/height. In other words, even if you give an element a width of 110px, you still need to account for the element's padding/border.

Updated Example

You could either remove the border, or use box-sizing: border-box to account for the border in the element's width/height calculation:

    input[type="text"] {
        border: none;
    }

or..

    input[type="text"] {
        box-sizing: border-box;
    }

In addition, it's worth pointing out that inline elements respect the whitespace in the markup, so you also needed to remove that. See the updated example above.
Q: If date time == 01:01:01

I get a date from a database and want to check if it is equal to a specific date. How can I do this? I have tried:

    if (date == 01:01:0001)
    {
    }
    else
    {
    }

Cheers

A:

    if (date == new DateTime(2001, 12, 5))
    {
        ...
    }
Q: I want to write a Windows application for inventory checks — which software language should I use?

I want to write a program to do my shop's inventory checks, and that program should run on Windows. I have just 40 items, and when I do a check I want to record that check information in the application. Which programming language should I use?

A: For such a small-scale use case I would take all the help I can get. In this case, probably Excel and Visual Basic (VBA).

I assume that this will be "single user"? To be more precise, "no two people messing with the data at the same time"? Yes? Then try Excel! You'll get (basically for free):

- a "database" very likely more than sufficient for your needs (the worksheets)
- pivot tables, which are awesome for planning and inventory management
- lots of UI and tools, e.g. coloring items based on some condition ("color all red where stock is depleted")

If you need more complex features, use VBA macros.

Just one tip: find a good way to make backups, regularly (at least daily) and religiously (at least weekly, check that the backup is restorable).

Writing and maintaining (!!) a piece of software AND doing what you do to earn money is very, very risky, because you have a good chance that you will be overwhelmed and fail at both.
Q: How to connect to MongoDB Compass with Node.js

I am trying to send data from Node.js to a MongoDB Compass server. I have created a MongoDB cluster and downloaded Compass. I can connect Compass to the cluster and it all works fine. However, when I try to connect my Node.js server I get an error. Below is my Node code.

    const express = require('express');
    const MongoClient = require('mongodb').MongoClient;
    const assert = require('assert');
    const app = express();

    // Connect to mongodb
    // Connection URL
    const url = "mongodb://tfi-mfgbt.mongodb.net/test";
    // Database Name
    const dbName = 'TFI';
    // Use connect method to connect to the server
    MongoClient.connect(url, function(err, client) {
      assert.equal(null, err);
      console.log("Connected successfully to server");
      const db = client.db(dbName);
      client.close();
    });

    const port = 5000;
    app.listen(port, () => {
      console.log(`Server started on port ${port}`);
    });

Once I run node app.js in the terminal, MongoClient.connect fails:

    Server started on port 5000
    F:\code\vidjot\node_modules\mongodb\lib\operations\mongo_client_ops.js:439
          throw err;
          ^
    AssertionError [ERR_ASSERTION]: null == 'MongoNetworkError: failed to connect to server [tfi-mfgbt.mongodb.net:27017] on first connect [MongoNetworkError: getaddrinfo E
        at F:\code\vidjot\app.js:20:10
        at err (F:\code\vidjot\node_modules\mongodb\lib\utils.js:415:14)
        at executeCallback (F:\code\vidjot\node_modules\mongodb\lib\utils.js:404:25)
        at err (F:\code\vidjot\node_modules\mongodb\lib\operations\mongo_client_ops.js:284:21)
        at connectCallback (F:\code\vidjot\node_modules\mongodb\lib\operations\mongo_client_ops.js:240:5)
        at process.nextTick (F:\code\vidjot\node_modules\mongodb\lib\operations\mongo_client_ops.js:436:7)
        at _combinedTickCallback (internal/process/next_tick.js:131:7)
        at process._tickCallback (internal/process/next_tick.js:180:9)
    [nodemon] app crashed - waiting for file changes before starting...
The hostname "mongodb://tfi-mfgbt.mongodb.net/test" is the host name of my Compass session. As seen here.

A: To connect with Mongo I use these lines of code:

    var mongoUrl = 'mongodb://tfi-mfgbt.mongodb.net/test'; // note: no nested double quotes
    var mongoose = require('mongoose');
    mongoose.connect(mongoUrl, { useMongoClient: true });
    mongoose.connection.on('error', err => debug(`MongoDB connection error: ${err}`));

This should work for you! And to query:

    mySchema.find({}, function(err, docs) { /* ... my code ... */ });
Q: C++ operator overloading and parameterized constructors

Why does the below C++ program output "ABaBbAC"?

    #include "stdafx.h"
    #include <iostream>
    using namespace std;

    class A {
    public:
        int i;
        A(int j = 0) : i(j) { cout << "A"; }
        operator int() { cout << "a"; return 2; }
    };

    class B {
    public:
        B(int j = 1) : i(j) { cout << "B"; }
        operator int() { cout << "b"; return 3; }
        int i;
    };

    int operator+(const A& a, const B& b) {
        cout << "C";
        return a.i + b.i;
    }

    int main() {
        A a;
        B b;
        int i = (A)b + (B)a;
        return 0;
    }

A: First, a and b are default constructed (in this order), and this causes AB to be printed to the standard output.

After this, the output of the program may be different from the one you are observing (or may be exactly the one you are observing - read on). This is because the operands of operator + do not need to be evaluated in a definite order. They are only guaranteed to be evaluated before operator + is invoked (see paragraph 1.9/15 of the C++11 Standard).

So this expression:

    (A)b

will result in the construction of a temporary object of type A given an object of type B. How is this possible? Well, A has a constructor accepting an int, and B has a user-defined conversion to int. So the user-defined conversion from b to int (1) is responsible for printing b. Then, the construction of the A temporary from the resulting int (2) is responsible for printing A.

Symmetrically, the expression:

    (B)a

will result in the construction of a temporary object of type B, constructed from the result of the user-defined conversion of a into int. This conversion (3) is responsible for printing a, while the construction of the B temporary (4) is responsible for printing B.

Eventually, the two temporaries resulting from the evaluation of the expressions (A)b and (B)a are used as arguments for operator + (5), which is responsible for printing C.

Now the C++ Standard only specifies that:

- (1)-(4) must be evaluated before (5)
- (1) must be evaluated before (2)
- (3) must be evaluated before (4)

Other than that, the ordering of the evaluations is unspecified. This means the output must be something like:

    AB????C

where the question marks should be replaced by an admissible permutation of the outputs of (1), (2), (3), and (4) that satisfies the constraints specified above.
Q: How to start nginx via a different port (other than 80)

Hi, I am a newbie with nginx. I tried to set it up on my server (running Ubuntu 4), which already has Apache running. So after I apt-get installed it, I tried to start nginx. Then I get a message like this:

    Starting nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
    configuration file /etc/nginx/nginx.conf test is successful
    [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
    [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
    [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
    [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
    [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)

That makes sense, as Apache is using port 80. Then I tried to modify nginx.conf. Referencing some articles, I changed it like so:

    server {
        listen 8080;
        location / {
            proxy_pass http://xx.xx.xx.xx:9500;
            proxy_set_header Host $host:8080;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Via "nginx";
        }
    }

After saving this and trying to start nginx again, I still get the same error as previously. I cannot really find a related post about this; could any good people shed some light?
Thanks in advance :)

=========================================================================

I should post all the content of the conf here:

    user www-data;
    worker_processes 1;

    error_log /var/log/nginx/error.log;
    pid /var/run/nginx.pid;

    events {
        worker_connections 1024;
        # multi_accept on;
    }

    http {
        include /etc/nginx/mime.types;
        access_log /var/log/nginx/access.log;

        sendfile on;
        #tcp_nopush on;

        #keepalive_timeout 0;
        keepalive_timeout 65;
        tcp_nodelay on;

        gzip on;
        gzip_disable "MSIE [1-6]\.(?!.*SV1)";

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;

        server {
            listen 81;
            location / {
                proxy_pass http://94.143.9.34:9500;
                proxy_set_header Host $host:81;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Via "nginx";
            }
        }
    }

    # mail {
    #     # See sample authentication script at:
    #     # http://wiki.nginx.org/NginxImapAuthenticateWithApachePhpScript
    #
    #     # auth_http localhost/auth.php;
    #     # pop3_capabilities "TOP" "USER";
    #     # imap_capabilities "IMAP4rev1" "UIDPLUS";
    #
    #     server {
    #         listen localhost:110;
    #         protocol pop3;
    #         proxy on;
    #     }
    #
    #     server {
    #         listen localhost:143;
    #         protocol imap;
    #         proxy on;
    #     }
    # }

Basically, I changed nothing except adding the server part.

A: You have to go to /etc/nginx/sites-enabled/ and, if this is the default configuration, there should be a file there by the name default. Edit that file, defining your desired port; in the snippet below, we are serving the Nginx instance on port 81.

    server {
        listen 81;
    }

To start the server, run the command line below:

    sudo service nginx start

You may now access your application on port 81 (for localhost, http://localhost:81).

A: You will need to change the configured port of either Apache or Nginx. After you do this, you will need to restart the reconfigured servers, using the 'service' command you used.
Apache

Edit

    sudo subl /etc/apache2/ports.conf

and change the 80 on the following line to something different:

    Listen 80

If you just change the port or add more ports here, you will likely also have to change the VirtualHost statement in

    sudo subl /etc/apache2/sites-enabled/000-default.conf

and change the 80 on the following line to something different:

    <VirtualHost *:80>

then restart by:

    sudo service apache2 restart

Nginx

Edit /etc/nginx/sites-enabled/default and change the 80 on the following line:

    listen 80;

then restart by:

    sudo service nginx restart

A: Follow this:

Open your config file:

    vi /etc/nginx/conf.d/default.conf

Change the port number on which you are listening:

    listen 81;
    server_name localhost;

Add a rule to iptables:

    vi /etc/sysconfig/iptables
    -A INPUT -m state --state NEW -m tcp -p tcp --dport 81 -j ACCEPT

Restart iptables:

    service iptables restart

Restart the nginx server:

    service nginx restart

Access your nginx server files on port 81.