source | task_type | in_source_id | prompt | gold_standard_solution | verification_info | metadata | problem_id
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 33868133 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following code in Java:

    class Boxing {
        public static void main(String args[]) {
            short s = 10;
            Integer iRef = s;
        }
    }

Why does it produce an error in compilation? If I explicitly typecast the short to an int in the expression, it compiles successfully. Since I'm using a short in an expression, isn't its type supposed to be promoted to int by default, without requiring the explicit cast?
Now provide the response and nothing else.
| You want to have two things happening here: widening and auto-boxing. Unfortunately, Java does only one of the two automatically. The reason for that is most likely that autoboxing was introduced fairly late (in Java 5), and they had to be careful not to break existing code. You can do

    int is = s;             // widening
    Short sRef = s;         // autoboxing
    Integer iRef = (int) s; // explicit widening, then autoboxing

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33868133', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5594633/']} | jdg_382041 |
stackexchange | llm_judgeable_groundtruth_similarity | 146877 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Being fairly new in the field of electronics, and being a computer scientist, it strikes me as a bit odd that almost every single microcontroller that I've come across thus far is fitted with:

- Multiple timers, with multiple trigger modes
- Multi-channel PWM
- Multi-channel ADC
- Multiple hardware-supported communication protocols
- Multiple external interrupt pins
- EEPROM
- Sometimes DAC and analog comparators

It seems, at least to me, a bit wasteful to jam all of this specialized (even though commonly used) equipment inside the microcontroller, if I'm only using 1/50 of it. Even if I wanted to, I'd only be able to use, say, 1/10th of it, as pins are often mapped to many of these features at once. Why are they jammed in, i.e. what's the argument against just using external chips or even just software implementations? Are there any ATMega-like processing chips, i.e. chips with a lot of processing power instead of PWM, ADC etc.?
Now provide the response and nothing else.
| Those peripherals are necessary for most real-world applications of microcontrollers. Not all of them, but leaving out any subset would decrease the market for the microcontroller. For example, the Scenix microcontroller family, which was very fast but had very limited hardware peripherals, was a resounding market failure. That's really bad news for those of us charged with specifying microcontrollers: a complete redesign in order just to keep your products going (okay, maybe good news if you're brought in to replace the person who specified the oddball micro and are subsequently paid to clean up the mess they left, but that's not great fun either). Much of the area on the chip is taken up by the memory, the bonding pads/drivers, and the CPU, so those little hardware peripherals are pretty minor. If you need more processing power, leave the world of 8-bit micros behind and move to one of the 32-bit ARM cores, which are generally used in microcontroller-like situations but have more of the chip area devoted to the processor and often to the memory. Or a DSP or FPGA can offer orders of magnitude more processing power, suitable for video processing, high-end audio, high-end instrumentation and data acquisition, etc. As it is, the processing power of modern 8/16-bit micros is not all that bad, and often we 'waste' it by using a high-level language to gain other advantages (faster development and prototyping, use of commercially available libraries such as protocol stacks) rather than tediously hand-crafting bespoke code in assembly. | {} | {'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/146877', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/61957/']} | jdg_382042 |
stackexchange | llm_judgeable_groundtruth_similarity | 3811890 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using the following JavaScript to read strings out of a text file and process them with a regular expression:

    while (!textFile.AtEndOfStream) {
        currLine = textFile.ReadLine();
        match = re.exec(currLine);
        // do stuff with match
    }

The problem I have is that every other time re.exec is called it fails and returns null; so the first row is processed correctly, but the second row results in null, then the third row works, and the fourth row results in null. I can use the following code to get the result I want:

    while (!textFile.AtEndOfStream) {
        currLine = textFile.ReadLine();
        match = re.exec(currLine);
        if (match == null) match = re.exec(currLine);
    }

but that seems a bit of a nasty kludge. Can anyone tell me why this happens and what I can do to fix it properly?
Now provide the response and nothing else.
| Your re is defined with the ‘global’ modifier, e.g. something like /foo/g. When a RegExp is global, it retains hidden state in the RegExp instance itself to remember the last place it matched. The next time you search, it'll search forward from the index of the end of the last match, and find the next match from there. If you're passing a different string to the one you passed last time, this will give highly unpredictable results! When you use global regexps, you should exhaust them by calling them repeatedly until you get null. Then the next time you use it you'll be matching from the start of the string again. Alternatively you can explicitly set re.lastIndex to 0 before using one. If you only want to test for the existence of one match, as in this example, simplest is just not to use g. The JS RegExp interface is one of the most confusing, poorly-designed parts of the language. (And this is JavaScript, so that's saying a lot.) | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3811890', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/113021/']} | jdg_382043 |
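The stateful lastIndex behaviour described above can be simulated outside JavaScript. The following Python sketch is not from the original answer; the GlobalRegex class and its exec method are invented for illustration, but they reproduce why alternating calls against different strings fail every other time:

```python
import re

class GlobalRegex:
    """Simulates a JavaScript RegExp created with the 'g' flag:
    the object remembers lastIndex between calls to exec()."""
    def __init__(self, pattern):
        self._re = re.compile(pattern)
        self.lastIndex = 0

    def exec(self, s):
        # like JS, start searching at lastIndex, not at the beginning
        m = self._re.search(s, self.lastIndex)
        if m is None:
            self.lastIndex = 0  # a failed match resets the position
            return None
        self.lastIndex = m.end()
        return m.group(0)

r = GlobalRegex(r"\d+")
print(r.exec("line 42"))  # '42'  -- lastIndex is now 7
print(r.exec("line 7"))   # None  -- search starts at index 7, past the end
print(r.exec("line 7"))   # '7'   -- the failure reset lastIndex to 0
```

This is exactly the every-other-call pattern from the question; dropping the statefulness (in JS, dropping the g flag) makes each call independent.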
stackexchange | llm_judgeable_groundtruth_similarity | 1730377 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a form on another website (using a different backend) that I want to be able to POST to my Rails application (on a different domain). How do I generate a valid authenticity token for the external form so that my Rails app will accept it? And assuming I can do that -- is there anything else special I need to do to make this work? Apart from the authenticity token, the rest of it seems pretty straightforward to me... Thanks for the help!
Now provide the response and nothing else.
| You can't generate an authenticity token from outside your Rails app. What you can do is disable the token protection only for this action and use a custom implementation based on a before_filter:

    skip_before_filter :verify_authenticity_token, :only => :my_action
    before_filter :verify_custom_authenticity_token, :only => :my_action

    def verify_custom_authenticity_token
      # checks whether the request comes from a trusted source
    end

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1730377', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/32154/']} | jdg_382044 |
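The answer leaves the body of verify_custom_authenticity_token unspecified. One common way to check that "the request comes from a trusted source" is an HMAC over the payload with a secret shared with the external site. A hedged Python sketch of that idea (the SECRET value and both function names are assumptions for illustration, not part of the answer or of Rails itself):

```python
import hmac
import hashlib

SECRET = b"shared-secret"  # hypothetical: agreed out-of-band with the external site

def custom_token(payload: bytes) -> str:
    # the external form would submit this hex digest alongside its POST body
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify_custom_authenticity_token(payload: bytes, token: str) -> bool:
    expected = custom_token(payload)
    # constant-time comparison avoids leaking information via timing
    return hmac.compare_digest(expected, token)

assert verify_custom_authenticity_token(b"email=a@b.c", custom_token(b"email=a@b.c"))
assert not verify_custom_authenticity_token(b"email=a@b.c", "0" * 64)
```

The same HMAC-and-compare pattern translates directly to a Ruby before_filter using OpenSSL::HMAC.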
stackexchange | llm_judgeable_groundtruth_similarity | 4306019 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
After searching for a while I saw that the way to set a visible border on a groupbox is to use the StyleSheet property. I added:

    border: 2px solid gray;

but there are a couple of problems:

1) Everything inside the groupbox also inherits this setting!
2) The border has a little hole/piece missing near the title.

Here is a picture of what I'm talking about: Anyone know how to do this properly? Thanks, David
Now provide the response and nothing else.
| The first problem is simple enough. When you add a stylesheet to a control, it automatically propagates the style to all child widgets. However, you can restrict the use of the style sheet in a couple of ways. You can specify the type of control you want the style sheet to apply to. Example:

    QGroupBox {
        border: 2px solid gray;
        border-radius: 3px;
    }

This style sheet will only be set on group boxes. However, if you put a second group box inside this one, the style will propagate to it as well, which may be good or bad. Another way is to specify the objectName of the widget you are applying the style to. Example:

    QGroupBox#MyGroupBox {
        border: 2px solid gray;
        border-radius: 3px;
    }

This will only apply the style to a group box with an object name of MyGroupBox. As for the space, it is happening because the title is being drawn on top of your border. You can also add a section to your style sheet to change your groupbox title. This includes setting its background to transparent and moving the title around to your heart's content. Example: this will set your title to the top left corner of the group box, just inside your border, with no gap.

    QGroupBox::title {
        background-color: transparent;
        subcontrol-position: top left; /* position at the top left */
        padding: 2 13px;
    }

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4306019', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/284529/']} | jdg_382045 |
stackexchange | llm_judgeable_groundtruth_similarity | 44018590 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a couple of questions related to moving XMM values to general purpose registers. All the questions found on SO focus on the opposite, namely transferring values in GP registers to XMM. How can I move an XMM register value (128-bit) to two 64-bit general purpose registers?

    movq RAX, XMM1 ; 0th bit to 63rd bit
    mov? RCX, XMM1 ; 64th bit to 127th bit

Similarly, how can I move an XMM register value (128-bit) to four 32-bit general purpose registers?

    movd EAX, XMM1 ; 0th bit to 31st bit
    mov? ECX, XMM1 ; 32nd bit to 63rd bit
    mov? EDX, XMM1 ; 64th bit to 95th bit
    mov? ESI, XMM1 ; 96th bit to 127th bit
Now provide the response and nothing else.
| You cannot move the upper bits of an XMM register into a general purpose register directly. You'll have to follow a two-step process, which may or may not involve a roundtrip to memory or the destruction of a register.

in registers (SSE2)

    movq    rax, xmm0  ; lower 64 bits
    movhlps xmm0, xmm0 ; move high 64 bits to low 64 bits
    movq    rbx, xmm0  ; high 64 bits

punpckhqdq xmm0,xmm0 is the SSE2 integer equivalent of movhlps xmm0,xmm0. Some CPUs may avoid a cycle or two of bypass latency if xmm0 was last written by an integer instruction, not FP.

via memory (SSE2)

    movdqu [mem], xmm0
    mov    rax, [mem]
    mov    rbx, [mem+8]

slow, but does not destroy xmm register (SSE4.1)

    movq   rax, xmm0
    pextrq rbx, xmm0, 1  ; 3 cycle latency on Ryzen! (and 2 uops)

A hybrid strategy is possible, e.g. store to memory, movd/q e/rax,xmm0 so it's ready quickly, then reload the higher elements. (Store-forwarding latency is not much worse than ALU, though.) That gives you a balance of uops for different back-end execution units. Store/reload is especially good when you want lots of small elements. (mov/movzx loads into 32-bit registers are cheap and have 2/clock throughput.)

For 32 bits, the code is similar:

in registers

    movd   eax, xmm0
    psrldq xmm0, 4   ; shift 4 bytes to the right
    movd   ebx, xmm0
    psrldq xmm0, 4   ; pshufd could copy-and-shuffle the original reg
    movd   ecx, xmm0 ; not destroying the XMM and maybe creating some ILP
    psrldq xmm0, 4
    movd   edx, xmm0

via memory

    movdqu [mem], xmm0
    mov eax, [mem]
    mov ebx, [mem+4]
    mov ecx, [mem+8]
    mov edx, [mem+12]

not destroying xmm register (SSE4.1) (slow like the psrldq / pshufd version)

    movd   eax, xmm0
    pextrd ebx, xmm0, 1  ; 3 cycle latency on Skylake!
    pextrd ecx, xmm0, 2  ; also 2 uops: like a shuffle(port5) + movd(port0)
    pextrd edx, xmm0, 3

The 64-bit shift variant can run in 2 cycles. The pextrq version takes 4 minimum. For 32-bit, the numbers are 4 and 10, respectively.
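The byte-level layout these extraction sequences rely on can be checked on the host with Python's struct module. This is only an illustration of the little-endian lane layout, not of the instructions themselves:

```python
import struct

xmm = bytes(range(16))  # stand-in for a 128-bit XMM value, bytes 0x00..0x0F

# what movq + movhlps/movq (or movq + pextrq) extract: two 64-bit halves
lo, hi = struct.unpack("<QQ", xmm)

# what movd + psrldq (or movd + pextrd) extract: four 32-bit lanes
a, b, c, d = struct.unpack("<IIII", xmm)

assert lo == int.from_bytes(xmm[:8], "little")
assert hi == int.from_bytes(xmm[8:], "little")
assert a == 0x03020100 and d == 0x0F0E0D0C
```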
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44018590', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1724140/']} | jdg_382046 |
stackexchange | llm_judgeable_groundtruth_similarity | 32126003 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Why does node.js not recognize document.GetElementById? It says 'ReferenceError: document is not defined'. What can I do?

    ReferenceError: document is not defined
        at Object.<anonymous> (C:\Users\Desktop\main.js:9:18)
        at Module._compile (module.js:460:26)
        at Object.Module._extensions..js (module.js:478:10)
        at Module.load (module.js:355:32)
        at Function.Module._load (module.js:310:12)
        at Function.Module.runMain (module.js:501:10)
        at startup (node.js:129:16)
        at node.js:814:3
Now provide the response and nothing else.
| document relates to the DOM (Document Object Model) in a web browser. Node.js, however, is not a browser environment. It is a server environment, much like PHP or Perl, and as such, you can’t access the browser’s DOM or do anything specific to browser-hosted JavaScript. The closest you could get is using something like browserify to include Node.js modules in your client-side code. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/32126003', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5244321/']} | jdg_382047 |
stackexchange | llm_judgeable_groundtruth_similarity | 3303483 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Prove that $$\int_{0}^{\frac{\pi}4} \ln(\sin{x}+\cos{x}+\sqrt{\sin{2x}})dx =\frac{\pi}{4} \ln2$$ I tried to use King's rule and to scale by $2$ and then to add the integrals, to get a product of terms and use the result $$\int_{0}^{\frac{\pi}2} \ln(\sin{x})dx=\int_{0}^{\frac{\pi}2} \ln(\cos{x})dx=-\frac{\pi}2\ln2$$ but it didn't work. Any help?
Now provide the response and nothing else.
| $$I=\int_{0}^{\frac{\pi}4} \ln(\sin{x}+\cos{x}+\sqrt{\sin(2x)})dx =\frac12 \int_0^\frac{\pi}{2} x'\ln(\sin x+\cos x+\sqrt{\sin (2x)})dx$$ $$\overset{IBP}=\frac12 \int_0^\frac{\pi}{2}x\,\frac{\sin x-\cos x}{\sqrt{\sin(2x)}}dx\overset{x=\arctan t}=\frac{1}{2\sqrt 2}\int_0^\infty \frac{\arctan t}{1+t^2}\frac{t-1}{\sqrt t}dt$$ $$I(a)=\int_0^\infty \frac{\arctan(at)}{1+t^2}\frac{t-1}{\sqrt t}dt\Rightarrow I'(a)=\int_0^\infty \frac{(t-1)\sqrt t}{(1+a^2 t^2)(1+t^2)}dt$$ $$\overset{t=x^2}=\frac{2}{1-a^2}\int_0^\infty \frac{1+a^2 x^2}{1+a^2 x^4}dx-\frac{2}{1-a^2}\int_0^\infty \frac{1+x^2}{1+x^4}dx=\frac{\pi}{\sqrt 2}\frac{1-\sqrt a}{\sqrt a (1+a)(1+\sqrt a)}$$ $$\Rightarrow I=\frac{\pi}{4}\int_0^1 \frac{1-\sqrt a}{\sqrt a (1+a)(1+\sqrt a)}da\overset{\sqrt a=x}=\frac{\pi}{2}\int_0^1 \frac{1-x}{(1+x^2)(1+x)}dx=\frac{\pi}{4}\ln 2 $$ | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/3303483', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/654867/']} | jdg_382048 |
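The closed form above can be sanity-checked numerically. This short Python check (not part of the original answer; the step count n is an arbitrary choice) applies composite Simpson's rule on [0, pi/4] using only the standard library:

```python
import math

def f(x):
    return math.log(math.sin(x) + math.cos(x) + math.sqrt(math.sin(2 * x)))

# composite Simpson's rule on [0, pi/4]
a, b, n = 0.0, math.pi / 4, 20000  # n must be even
h = (b - a) / n
s = f(a) + f(b)
for k in range(1, n):
    s += (4 if k % 2 else 2) * f(a + k * h)
approx = s * h / 3

print(approx, (math.pi / 4) * math.log(2))  # both close to 0.5444
```

The integrand is finite at both endpoints (it equals ln 1 = 0 at x = 0), so the quadrature is well behaved despite the square-root term.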
stackexchange | llm_judgeable_groundtruth_similarity | 479630 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Until recently my server with Postfix has worked well. Then I enforced some restrictions to a) combat spam, b) disable sending emails to me on behalf of my own name -- I have begun receiving emails from my own email address demanding to send bitcoin to someone. I want to fix both a and b. And now I can't send email via my own postfix server:

    Client host rejected: cannot find your reverse hostname, [<my ip here>]

Note that I carry my laptop to different places and countries, and connect to WiFi from those. And I want to be able to send email always. Here's a part of my config of Postfix. For the database of the accounts and domains I use Postgresql.

    smtpd_helo_required = yes
    smtpd_client_restrictions =
        permit_mynetworks,
        permit_sasl_authenticated,
        reject_unknown_reverse_client_hostname,
        reject_unknown_client_hostname,
        reject_unauth_pipelining
    smtpd_helo_restrictions =
        permit_mynetworks,
        permit_sasl_authenticated,
        reject_invalid_helo_hostname,
    ###    reject_non_fqdn_helo_hostname,
        reject_unauth_pipelining
    smtpd_sender_restrictions =
        permit_mynetworks,
        reject_sender_login_mismatch,
        permit_sasl_authenticated,
        reject_non_fqdn_sender,
        reject_unknown_sender_domain,
        reject_unauth_pipelining
    smtpd_relay_restrictions =
        permit_mynetworks,
        permit_sasl_authenticated,
        reject_non_fqdn_recipient,
        reject_unknown_recipient_domain,
        reject_unauth_destination
    smtpd_recipient_restrictions =
        permit_mynetworks,
        permit_sasl_authenticated,
        reject_non_fqdn_recipient,
        reject_unknown_recipient_domain,
        reject_unauth_pipelining
    smtpd_data_restrictions =
        permit_mynetworks,
        permit_sasl_authenticated,
        reject_multi_recipient_bounce,
        reject_unauth_pipelining

    # deliver mail for virtual users to Dovecot's LMTP socket
    virtual_transport = lmtp:unix:private/dovecot-lmtp
    # query to find which domains we accept mail for
    virtual_mailbox_domains = pgsql:/etc/postfix/virtual_mailbox_domains.cf
    # query to find which email addresses we accept mail for
    virtual_mailbox_maps = pgsql:/etc/postfix/virtual_mailbox_maps.cf
    # query to find a user's email aliases
    virtual_alias_maps = pgsql:/etc/postfix/virtual_alias_maps.cf
    virtual_alias_domains =
    alias_database =
    alias_maps =
    mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
    inet_interfaces = all
Now provide the response and nothing else.
| Short Answer

Your postfix configuration is unnecessarily complex. It seems likely that some of the restrictions placed in your configuration either negate one another or are so restrictive that you may need to ssh into your server and manually send each outgoing mail. Rather than go through the posted configuration, this answer will provide an overview of what is generally required to configure a reasonably safe email system for most purposes. It's not intended to be an exhaustive tutorial on how to configure each component. However, there is a list of online resources at the end which I have found to be rather helpful and valuable in configuring my own email servers. There are a few extra requirements from your comments which will not be addressed, such as handling multiple domains using a single postfix installation. It is assumed that a reasonably adept administrator will be able to tweak the settings and add the necessary multi-domain configuration elements.

Overview of Elements for Modern Small Email Service Providers

Graphical View of Security and Reputation Related Email Headers

Modern email systems have evolved to include many security and domain related reputation elements. Perhaps the easiest way to begin is looking at a diagram of some of the more important newer elements contained in an email's header.

Protecting a Domain from Spoof Attempts and Reputation Problems

There are three essential components to configure for ensuring the authenticity of email traffic that seems to originate from a domain. These are:

- Sender Policy Framework (SPF)
- Domain Keys Identified Mail (DKIM)
- Domain-based Message Authentication Reporting & Conformance (DMARC)

Each of these has a daemon running on the server as well as DNS records for connecting servers in order to automate checking of domain policies and verifying cryptographic signatures.
Simple SPF explanation: Postfix passes outgoing email through the SPF daemon, which evaluates whether or not the sender matches the outgoing mail policy. The receiving mail server retrieves the domain's SPF record from DNS and checks the record against the SPF header the sending server placed on the email. (postfix compatible SPF implementation)

Simple DKIM explanation: Postfix passes outgoing email through the DKIM daemon, which automatically signs the message and includes a hash of the message in the email headers. The receiving mail server retrieves the domain's DKIM public key from a DNS record and verifies the body hash of the message. (postfix compatible DKIM implementation)

Simple DMARC explanation: The receiving mail server retrieves the DMARC policy record from DNS and accepts or rejects the message, or performs a soft fail of the message. (postfix compatible DMARC implementation)

It is considered Best Security Practice to enter a "reject" DMARC policy record even if your domain is not sending any email.

Example of DNS entries for SPF, DKIM, and DMARC:

    MX 10 mail.domain.tld.
    TXT "v=spf1 a:mail.domain.tld -all"
    mail._domainkey IN TXT ( "v=DKIM1; h=sha256; k=rsa; "
      "p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA0w7N0fWtTndtlR+zOTbHyZOlvFiM73gyjjbHDN1OhhcPCbhRUqTsA7A8uXHGHao6nZ5qejlVtn6NfZwbn7rdhJ0MTjlgTnTsVa8E9rgS6dFo0bEIzeFecDr/4XOF9wpNjhHlnHm4wllkPheFnAWpZQiElZeYDN5Md47W1onwZ3DwcYJNX/3/GtfVZ0PrjisC4P0qeu+Z8jIgZc"
      "MLvBm8gj2pX3V6ntJY9QY09fWSVskvC6BQhi6ESOrqbM63f8ZJ4N/9ixPAMiD6k/lyGCokqc6sMuP6EC7z5McEOBbAVEuNy3idKi1sjwQH8WZHrvlSBlzx1wwmpFC1gqWcdTiEGwIDAQAB" ) ; ----- DKIM key mail for domain
    _dmarc IN TXT v=DMARC1;p=reject;sp=reject;fo=0:d;adkim=s;aspf=s;rua=mailto:[email protected];ruf=mailto:[email protected];
    _domainkey IN TXT o=-;

You may notice that the DNS record named mail._domainkey contains a cryptographic public key. This key and its associated record can be generated using the opendkim-genkey program, installed when the opendkim package is installed on your server.
Key generation is rather simple:

    opendkim-genkey -b 2048 -d yourdomain -h sha256 -s mail

This command will generate a private key, a public key, and a correctly formatted DNS record. The private key needs to be placed in the directory listed in your opendkim configuration, while the public key and its associated DNS record are placed in your domain's DNS zone file. Unfortunately, some DNS providers have length restrictions on records, so make sure your DNS provider can accommodate the public key's length.

Adding SPF and DKIM Milters

SPF

Excerpt from the policyd-spf man page:

    POSTFIX INTEGRATION
    1. Add the following to /etc/postfix/master.cf:
        policyd-spf  unix  -  n  n  -  0  spawn
            user=policyd-spf argv=/usr/bin/policyd-spf
    2. Configure the Postfix policy service in /etc/postfix/main.cf:
        smtpd_recipient_restrictions =
            ...
            reject_unauth_destination
            check_policy_service unix:private/policyd-spf
            ...
        policyd-spf_time_limit = 3600

DKIM

The opendkim daemon runs on a UNIX socket which is configurable either as a standard UNIX socket or running on an inetd service port. On my Debian installations, this configuration is located at /etc/default/opendkim . Once opendkim is running, the milter needs to be added to the postfix configuration in /etc/postfix/main.cf . Here's an example from a working server:

    # DKIM
    milter_default_action = accept
    milter_protocol = 2
    smtpd_milters = inet:localhost:8891

DMARC

For small or personal email servers, DMARC can simply be limited to the DNS record. The DMARC checking daemon allows for rejecting incoming mail per the sending domain's policy, as well as sending any requested reporting back to the sending domain. The reporting is considered being a "well-behaved neighbor". However, I generally don't enable it for small or personal systems since the configuration overhead is quite high. The DMARC DNS record, however, is very important to maintain domain reputation.
The record is used by all modern large email providers to accept or reject mails that seem to originate from your domain. So, without the DMARC record, all incoming mail that looks like it was sent by your domain gets counted toward your domain's reputation score. Thus, a domain that doesn't expect to send any mail at all should publish a "reject" DMARC record to avoid reputation problems from spoofed messages sent by spammers.

TLS Connections for Email Servers and Clients

Your configuration information indicates you are running Dovecot and Postfix. Dovecot connects with Postfix on your server. In many small installations, the server connection is performed on the same physical/logical hardware through Unix sockets. So, the Mail User Agent (MUA) connection is handled by the middleware and not the actual mail server; in your case, that would be Dovecot. TLS should be enabled and set up properly in Dovecot in order to securely transmit your username and password from your MUA (e.g. Evolution, Sylpheed, Mutt, etc.). For reference, see Dovecot's TLS setup documentation. It's possible, but not necessary, for the "server-to-server" or "middleware"-to-postfix connection to be encrypted by the same TLS certificate. However, in the case of a small email server, the "middleware"-to-postfix connection doesn't necessarily need to be encrypted since it's on the same hardware.

Obtaining a LetsEncrypt TLS Certificate for your Mail Server and MUA interface (POP3, IMAP, etc)

The LetsEncrypt project has done a very good job simplifying obtaining Domain Validated TLS certificates. Assuming your domain already has a certificate, you can add the mail server's sub-domain to the certificate using the --expand option:

1. Stop the postfix and dovecot services.
2. Stop the web server, if one is running.
3. Stop any service running that is currently included on the certificate.
Expand the certificate:

    certbot certonly --expand -d domain.tld,www.domain.tld,mail.domain.tld

Then add the certificate path to your main.cf configuration:

    smtpd_tls_key_file = /etc/letsencrypt/live/domain.tld/privkey.pem
    smtpd_tls_cert_file = /etc/letsencrypt/live/domain.tld/fullchain.pem

And also add the certificate path to your Dovecot configuration, per Dovecot's documentation listed above. Restart all services and check that the configuration works. It should be noted that the SMTP TLS connection is the connection your server makes with other servers, while the Dovecot TLS connection is generally what someone would connect to in order to send email from a non-webmail client.

SMTP Server to Server TLS Compatibility Setting

Some mail servers are still not utilizing TLS encrypted connections for mails received from other servers. In such cases, strict TLS enforcement will result in undeliverable mail to those servers and domains. However, many large email providers will mark an incoming email as suspicious if the connection is not secured with TLS. So, in order to maintain the best compatibility, include the following setting in your /etc/postfix/main.cf:

    smtpd_tls_security_level = may

It's also important to note that most email providers do not require this server-to-server connection to use a CA approved certificate, and validation checks are generally not performed even if the certificate is CA approved. However, the TLS certificate included in Dovecot should be CA approved. A self-signed certificate in Dovecot will result in a warning when using most MUAs such as sylpheed, evolution, or thunderbird.

Reasonable SMTP Client Restrictions

In my experience, 99% of spam can be rejected via SPF and DKIM checking along with RBL checking. Here's a portion of my "standard" client restrictions. It's important to note that the restrictions are processed in order.
The order I have below works very well in my experience:

    smtpd_client_restrictions =
        permit_mynetworks
        permit_sasl_authenticated
        check_helo_access hash:/etc/postfix/helo_access
        check_client_access hash:/etc/postfix/client_checks
        reject_unauth_destination
        check_policy_service unix:private/policy-spf
        reject_rbl_client cbl.abuseat.org
        reject_rbl_client pbl.spamhaus.org
        reject_rbl_client sbl.spamhaus.org
        reject_rbl_client bl.blocklist.de
        reject_unknown_client

SMTPD Client Restrictions Compatibility Setting

The restriction that will have the most exceptions will be the reject_unknown_client setting. Many online services do not configure their reverse domain correctly and/or utilize a series of sending domains which may or may not be mapped properly. So, for the most compatibility with poorly configured email providers, remove that restriction. However, nearly 100% of spam is sent from email servers without proper reverse domain records.

HELO Checks

It's common for spammers to attempt to spoof a HELO by sending your domain's name or IP address, or localhost. These spoof attempts can be rejected immediately using the check_helo_access option as shown above. The HELO text database consists of a domain name, IP address, or IP address range, followed by the action and a message to send back. A fairly simple HELO check follows:

    # helo access
    # check_helo_access hash:/etc/postfix/helo_access
    localhost        REJECT Only I am me
    127.0.0.1        REJECT Only I am me
    example.com      REJECT Only I am me
    dns.host.ip.addr REJECT Only I am me

"example.com" is your domain, and "dns.host.ip.addr" is your server's DNS listed IP address.
This database example results in something like this from one of my actual server logs:

    Oct 30 06:32:49 <domain> postfix/smtpd[22915]: NOQUEUE: reject: RCPT from xxx-161-xxx-132.dynamic-ip.xxxx.net[xxx.161.xxx.132]: 554 5.7.1 <xxx.xxx.xxx.xxx>: Helo command rejected: Only I am me; from=<[email protected]> to=<[email protected]> proto=SMTP helo=<xxx.xxx.xxx.xxx>

The potential spammer/spoofer gets the message "Only I am me". It doesn't matter what the message is, but at least the spammer/spoofer knows you know. Make sure to generate the postfix database using:

    postmap helo_access

Adding Exceptions to the Restrictions via a client_check whitelist

Individual client checking goes something like this:

    ip.addr.hack.attmpt       REJECT
    misconfig.server.but.good OK

Make sure to generate the postfix database using:

    postmap client_checks

And that's about it. I get about 3 spam mails a month, with hundreds of spam rejected.

Resources

- DMARC/SPF Policy Evaluator
- DKIM Public Key Evaluator
- MxToolbox Website
- Email Security Grader

| {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/479630', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/319211/']} | jdg_382049 |
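As a small aid to the DMARC DNS records discussed in the preceding answer, here is a Python helper (not part of the original answer; parse_dmarc is an invented name) that splits a DMARC TXT record into its tags for sanity-checking before publishing it:

```python
def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record such as 'v=DMARC1;p=reject;adkim=s'
    into a tag -> value mapping (whitespace- and trailing-';' tolerant)."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if not part:
            continue
        key, _, value = part.partition("=")
        tags[key.strip()] = value.strip()
    return tags

rec = parse_dmarc("v=DMARC1;p=reject;sp=reject;adkim=s;aspf=s;")
print(rec["p"])  # reject
```

Checking that the parsed record contains v=DMARC1 and a p= tag catches the most common publishing mistakes before the record goes live.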
stackexchange | llm_judgeable_groundtruth_similarity | 2521822 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm working through Vakil's notes about algebraic geometry right now, still in the first chapters, and one of the main results in section 2.5 is that "Exactness of sequences can be checked on stalks". Then one shows that taking the stalk at a point $p$ is an exact functor. But what does that sentence mean? Sure, if I have an exact sequence of sheaves I will get an exact sequence of stalks. But what about the other direction? Sure, taking stalks and taking images and kernels and whatnot commute, but how do we get that, for example, if $ker(\phi)_p=im(\psi)_p$ holds for all $p$, then also $ker(\phi)=im(\psi)$ is true in the category of sheaves? I'm hesitant because he writes in his notes before that an isomorphism on all stalks doesn't imply an isomorphism of the sheaves. Maybe it's a naive question but I'm still very new to the subject. A somewhat related question: What about the 'taking sections over $U$'-functor? Since this is left-exact, can I somehow also check left-exact sequences on sections? Glad for any advice!
Now provide the response and nothing else.
| This is something to be concerned about; it's not true in full generality! It turns out that the category of sheaves on a topological space satisfies an important technical condition: it has "enough points". A topos is said to have enough points if, for every $f : X \to Y$, the following are equivalent: $f$ is an isomorphism $f_p$ is an isomorphism for every point $p$ Here is the relevant page from the Stacks project. Regarding your comment I'm hesitant because he writes in his notes before that an isomorphism on all stalks doesn't imply an isomorphism of the sheaves. I assume you're subtly misinterpreting his statement. What is possible is that you have two sheaves $F$ and $G$ on a topological space such that: $F_p$ and $G_p$ are isomorphic for every point $p$ $F$ and $G$ are not isomorphic The key point is that the hypothesis here is merely the assertion "there exists an isomorphism" for each $p$. This is much weaker than "the specific morphism $f_p$ is an isomorphism". Checking left exactness by taking sections follows from the facts: Limits (and colimits) of presheaves are computed pointwise Limits of sheaves can be computed as presheaves Letting $\mathbf{a}: \mathrm{PSh} \to \mathrm{Sh}$ be sheafification (which is left exact) and $\mathbf{i} : \mathrm{Sh} \to \mathrm{PSh}$ be the forgetful functor (which is left continuous), we have for any finite diagram $F : J \to \mathrm{Sh}$: $$ \lim_j F_j = \lim_j \mathbf{ai}F_j= a \left( \lim_j \mathbf{i}F_j \right)$$ and the presheaf is computed pointwise by $$ \left( \lim_j \mathbf{i}F_j \right)(U)= \left( \lim_j \mathbf{i}F_j(U) \right) $$ Of course, $\mathbf{i}F_j(U) = F_j(U)$ if your site is subcanonical; i.e. if representable presheaves are sheaves. But I'll continue stating things in the general case. So if you have a cone $L \to F$ with the property that $\mathbf{i}L(U) \to \mathbf{i}F(U)$ is a limit cone for every $U$, then $\mathbf{i}L \to \mathbf{i}F$ is a limit cone, and thus $L \to F$ is a limit cone. 
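In symbols, the stalkwise criterion from the question (valid for sheaves on a topological space precisely because that topos has enough points, as explained above) reads:

```latex
\[
\mathcal{F} \xrightarrow{\;\psi\;} \mathcal{G} \xrightarrow{\;\varphi\;} \mathcal{H}
\ \text{ is exact in } \mathrm{Sh}(X)
\quad\Longleftrightarrow\quad
\mathcal{F}_p \xrightarrow{\;\psi_p\;} \mathcal{G}_p \xrightarrow{\;\varphi_p\;} \mathcal{H}_p
\ \text{ is exact for every point } p \in X .
\]
```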
| {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2521822', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/458683/']} | jdg_382050 |
stackexchange | llm_judgeable_groundtruth_similarity | 176278 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to understand better the behaviour and character of modular tensor categories over non-algebraically closed fields. How weird can they be? The reason I am interested in this is that my collaborators and I are investigating extended 3D TQFTs. These are known to be related to Modular Tensor Categories via the Reshetikhin-Turaev construction . However most of the literature seems to be focused on the case where the MTC is defined over an algebraically closed field, usually the complex numbers. I don't see a reason to restrict to this case and I can imagine that there could be some very interesting examples, and hence interesting invariants, in other cases too. For example, over a field $k$, a finite semisimple linear category will have simple objects whose endomorphism rings are division algebras over $k$. If $k$ is algebraically closed, then all we get are copies of $k$. If $k$ is not algebraically closed then it is more interesting, as we can have objects with different division algebras as endomorphisms. Can this happen in a modular tensor category? i.e., is there an example of a Modular Tensor Category such that the simple objects have different division algebras for their endomorphism rings? What if we drop the requirement End(1) = k? How sticky can it get?
Now provide the response and nothing else.
| An example of an MTC is the Drinfeld double of a finite group $G$ (over any field of characteristic zero). This category contains the representation category of $G$ as a subcategory. So all endomorphism rings that you can find in representations of finite groups, you can also find in an MTC. For example, the quaternions will show up in the Drinfeld double of the quaternion group. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/176278', 'https://mathoverflow.net', 'https://mathoverflow.net/users/184/']} | jdg_382051 |
stackexchange | llm_judgeable_groundtruth_similarity | 82109 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using Java 1.4 with Log4J. Some of my code involves serializing and deserializing value objects (POJOs). Each of my POJOs declares a logger with private final Logger log = Logger.getLogger(getClass()); The serializer complains of org.apache.log4j.Logger not being Serializable. Should I use private final transient Logger log = Logger.getLogger(getClass()); instead?
Now provide the response and nothing else.
| How about using a static logger? Or do you need a different logger reference for each instance of the class? Static fields are not serialized by default; you can explicitly declare fields to serialize with a private, static, final array of ObjectStreamField named serialPersistentFields. See the Oracle documentation. Added content: As you use getLogger(getClass()), you will use the same logger in each instance. If you want to use a separate logger for each instance you have to differentiate on the name of the logger in the getLogger() method, e.g. getLogger(getClass().getName() + hashCode()). You should then use the transient attribute to make sure that the logger is not serialized. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/82109', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/15452/']} | jdg_382052 |
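A JDK-only sketch of why the transient route works (the class and field names are made up for illustration; a plain String stands in for the Log4J Logger , which is not needed to show the mechanism): during default serialization a transient field is skipped, and after deserialization it comes back as null.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Demo: transient fields are skipped by writeObject() and restored as
// null (the field's default value) by readObject().
class TransientDemo implements Serializable {
    private transient String log = "per-instance logger"; // stand-in for Logger
    private String data = "payload";

    String getLog()  { return log; }
    String getData() { return data; }

    static TransientDemo roundTrip() throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(new TransientDemo());
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            return (TransientDemo) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        TransientDemo copy = roundTrip();
        System.out.println(copy.getLog());  // null: the transient field was not serialized
        System.out.println(copy.getData()); // payload
    }
}
```

In the real POJO the re-created logger field would simply be lazily re-initialized (or fetched again via Logger.getLogger) after deserialization.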
stackexchange | llm_judgeable_groundtruth_similarity | 363169 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a GNU make alternative if I don't want to use tab indents in my make (or make -like) program? For example, when I use make , I need to indent everything after the make opener ( % : ). This is a recipe for problems in some circumstances (for example, I work cross-platform and I use a Windows 10 AutoHotkey mechanism that, for unrelated reasons, strips tabs from code I paste into Linux terminals, and that doesn't work with make , hence I need a solution that doesn't require tabs). The necessity to tab-indent everything under % : makes my work with make awkward. This is the makefile I use to create new virtual host conf files. I execute it with make domain.tld.conf :

% :
	printf '%s\n' \
	'<VirtualHost *:80>' \
	'DocumentRoot "/var/www/html/$@"' \
	'ServerName $@' \
	'<Directory "/var/www/html/$@">' \
	'Options +SymLinksIfOwnerMatch' \
	'Require all granted' \
	'</Directory>' \
	'ServerAlias www.$@' \
	'</VirtualHost>' \
	> "$@"
	a2ensite "$@"
	systemctl restart apache2.service

Is there any alternative, maybe something that comes with Unix itself, that provides similar functionality but without having to use tab indents in the pattern file itself?
Now provide the response and nothing else.
| If that’s your whole Makefile, and you’re not tracking any dependencies between files, just use a shell script:

#!/bin/sh
for domain; do
    > "/etc/apache2/sites-available/${domain}.conf" cat <<EOF
<VirtualHost *:80>
DocumentRoot "/var/www/html/${domain}"
ServerName "${domain}"
<Directory "/var/www/html/${domain}">
Options +SymLinksIfOwnerMatch
Require all granted
</Directory>
ServerAlias www.${domain}
</VirtualHost>
EOF
a2ensite "${domain}"
done
systemctl restart apache2.service

Copy the above into a file named for example create-vhost , make it executable:

chmod 755 create-vhost

then run it as

./create-vhost domain.tld

This even supports creating multiple virtual hosts’ configuration files (with a single restart at the end):

./create-vhost domain1.tld domain2.tld
 | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/363169', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/-1/']} | jdg_382053 |
stackexchange | llm_judgeable_groundtruth_similarity | 952527 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
Running "telnet website.com 80 HEAD / HTTP/1.0" claims that the server is nginx/1.14.1, however directory browsing is enabled on the site and visiting it clearly says "Apache Server at website.com Port 80". What could cause this?
Now provide the response and nothing else.
| For me there are two possible explanations. First, the owner compiled their own version of Apache and replaced the identification string to mislead attackers and other outsiders. (Good security practice says this ID banner should be disabled altogether.) Second, the directory listing you see is from Apache, but the frontend server is nginx, which proxies requests for that directory to Apache. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/952527', 'https://serverfault.com', 'https://serverfault.com/users/507421/']} | jdg_382054 |
stackexchange | llm_judgeable_groundtruth_similarity | 10917192 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a Hibernate entity called IssueParticipant. It basically describes the relationship between a user and an issue (which is like a JIRA or Bugzilla issue). It represents a sort of many-to-many linking table in the database, linking a user ID to an issue ID, but also includes other information related to notification settings, and so it is treated as its own entity. I was having huge problems with using the userId and issueId as a composite key, so I created a synthetic key which is a String (and a varchar in the postgres database), which is formed as: _. Now, I have a screen where a user can edit all of the users associated with an issue, while also editing the notification settings. In a controller class I create a List of IssueParticipants like this:

IssueParticipant participant = new IssueParticipant();
participant.setUser(accountUser);
participant.setIssue(issue);

So these are of course not managed by Hibernate at this point. Then in my DAO I iterate through them and call saveOrUpdate(), expecting that if an IssueParticipant with the same synthetic key exists in the database, it will be updated; otherwise it will be inserted:

for (IssueParticipant participant : participants) {
    getCurrentSession().saveOrUpdate(participant);
    savedIds.add(participant.getIssueUserKey());
}

(savedIds is a List I am maintaining so that I later will know what IssueParticipants I should delete from the database). Instead of what I expect, though, I get an Exception:

org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "issue_participant_pkey"

Here is my entity class, abbreviated:

public class IssueParticipant extends Entity {
    private String issueUserKey;
    private Long issueId;
    private Long userId;
    // Edit: adding 'dateAdded' definition
    private Date dateAdded;
    // ...
    // below may be null
    private SPUser user;
    private Issue issue;

    public static IssueParticipant nulledIssueParticipant() {
        IssueParticipant ip = new IssueParticipant();
        return ip;
    }

    public String getIssueUserKey() {
        return issueUserKey;
    }

    public void setIssueUserKey(String issueUserKey) {
        this.issueUserKey = issueUserKey;
    }

    public Long getId() {
        // currently meaningless
        return 0L;
    }

    public Long getIssueId() {
        return this.issueId;
    }

    public void setIssueId(Long issueId) {
        this.issueId = issueId;
        updateKey();
    }

    public Long getUserId() {
        return this.userId;
    }

    public void setUserId(Long userId) {
        this.userId = userId;
        updateKey();
    }

    private void updateKey() {
        issueUserKey = getIssueId() + KEY_SEP + getUserId();
    }

    public SPUser getUser() {
        return user;
    }

    public void setUser(SPUser user) {
        this.user = user;
        setUserId(user.getId());
    }

    public Issue getIssue() {
        return issue;
    }

    public void setIssue(Issue issue) {
        this.issue = issue;
        setIssueId(issue.getId());
    }

    // edit: adding 'dateAdded' methods
    public Date getDateAdded() {
        return dateAdded;
    }

    public void setDateAdded(Date dateAdded) {
        this.dateAdded = dateAdded;
    }
    ...
}

Here is its hbm file:

<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC
    "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
    "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
<hibernate-mapping default-lazy="false">
  <class name="com.xxx.yyy.IssueParticipant" table="issue_participant">
    <id name="issueUserKey" column="issue_user_key" type="string">
      <generator class="assigned"/>
    </id>
    <version name="dateAdded" column="date_added" type="timestamp" unsaved-value="null" />
    <property name="issueId" column="issue_id" />
    <many-to-one name="user" column="user_id" class="com.xxx.yyy.SPUser" not-null="true" cascade="none" />
    <property name="alertRss" column="alert_rss" type="boolean" />
    <property name="alertEmail" column="alert_email" type="boolean" />
    <property name="alertWeb" column="alert_web" type="boolean" />
    <property name="alertClient" column="alert_client"
type="boolean" /> </class></hibernate-mapping> And indeed user_issue_key is the primary key in the corresponding database table. I feel like the right solution might just be to use SpringJDBC in this case, but I'd really love to figure out what's going on here. Anyone have any thoughts? Thanks in advance.
Now provide the response and nothing else.
| saveOrUpdate() doesn't query the database to decide whether it should save or update the given entity. It makes that decision based on the state of the entity, as follows:

- if the object is already persistent in this session, do nothing
- if another object associated with the session has the same identifier, throw an exception
- if the object has no identifier property, save() it
- if the object's identifier has the value assigned to a newly instantiated object, save() it
- if the object is versioned by a <version> or <timestamp>, and the version property value is the same value assigned to a newly instantiated object, save() it
- otherwise update() the object

So, as far as I understand, in your case the decision is based on the value of the dateAdded field; therefore you need to keep it to distinguish between new and detached instances. See also: 11.7. Automatic state detection | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10917192', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/631425/']} | jdg_382055 |
stackexchange | llm_judgeable_groundtruth_similarity | 1168914 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to access some .NET assemblies written in C# from Python code. A little research showed I have two choices: IronPython with .NET interface capability/support built-in Python with the Python .NET package What are the trade-offs between both solutions?
Now provide the response and nothing else.
| If you want to mainly base your code on the .NET framework, I'd highly recommend IronPython vs Python.NET. IronPython is pretty much native .NET - so it just works great when integrating with other .NET languages. Python.NET is good if you want to just integrate one or two components from .NET into a standard Python application. There are notable differences when using IronPython - but most of them are fairly subtle. Python.NET uses the standard CPython runtime, so this Wiki page is a relevant discussion of the differences between the two implementations. The largest differences occur in the cost of exceptions - so some of the standard Python libraries don't perform as well in IronPython due to their implementation. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1168914', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2386/']} | jdg_382056 |
stackexchange | llm_judgeable_groundtruth_similarity | 4950084 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm wondering a bit about this templating business. In C and C++ it is very common to put declarations in header files and definitions in source files, and keep the two completely separate. However, this doesn't even seem to be possible (in any great way) when it comes to templates, and as we all know, templates are a great tool. Also, Boost is mostly headers, so this is a real issue. Is separating headers and source still a good idea in C++, or should I just not rely heavily on templates?
Now provide the response and nothing else.
| Instantiating a template is costly at compile time but virtually free at runtime. Basically, every time you use a new template type, the compiler has to generate the code for that new type; that's why the code is in a header, so that the compiler has access to the code later. Putting all your code in a .cpp lets the compiler compile that code only once, which greatly speeds up compilation. You could in theory write all your code in headers; it will work fine, but it will take forever to compile very large projects. Also, as soon as you change one line anywhere, you will have to rebuild everything. Now you might ask, how come the STL and Boost are not so slow? That's where precompiled headers come to the rescue. PCHs let the compiler do the most costly work only once. This works well with code that won't change often, like libraries, but its effect is totally nullified for code that changes a lot, as you will have to recompile the whole set of precompiled headers every time. The compiler also uses a couple of tricks to avoid recompiling all template code in every compilation unit. Also note that C++0x will introduce explicit mechanisms to better control template instantiation. You will be able to explicitly instantiate templates and, most importantly, prevent instantiation in some compilation units. However, most of that work is already being done by most compilers without our knowledge. So, the rule of thumb is, put as much code (and include directives) as possible in your .cpp. If you can't, well, you can't. My advice would be: don't template just for the heck of it . If you have to template, be careful and be aware that you are in fact choosing between compilation speed and the usability the template will bring. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4950084', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/349414/']} | jdg_382057 |
stackexchange | llm_judgeable_groundtruth_similarity | 3330270 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm reviewing for the Math GRE Subject test and came across this question in the excellent UCLA notes . $$\lim_{x\to 0}\left(\frac{1}{x^2}-\frac{1}{\sin^2 x}\right).$$ If one attacks this with naive applications of L'Hospital after combining fractions, it quickly gets out of hand. The suggested solution involves factoring it as $$\frac{1}{x^2}-\frac{1}{\sin^2 x} = \frac{\sin^2 x - x^2}{x^2\sin^2 x} = \left(\frac{x^2}{\sin^2 x}\right)\left(\frac{\sin x + x}{x}\right)\left(\frac{\sin x - x}{x^{3}}\right),$$ where indeed each factor has a real positive limit.I am wondering: What intuition or thought process might lead me to this particular factorization? Such a factorization seems non-obvious to me as it requires the introduction of another factor of $x^2$ into the numerator. Edit: Thanks to everyone's clear responses, I now understand: Taylor expansions of trig functions are immensely powerful in evaluating limits, in this case turning the problem into a limit of a rational function. If a factor of a limit exists and is nonzero, it can be factored out without affecting convergence of the product. Formally, suppose $\lim f(x)$ exists and $g(x)$ is a factor of $f(x)$ .Then if $\lim g(x)$ exists in $\mathbb{R}\backslash\{0\}$ , $\lim f(x)/g(x)$ exists also. That is to say, one can be "opportunistic" about simplifying ones limits if a factor with a real nonzero limit is discovered.
Now provide the response and nothing else.
| Since $$\sin(x)=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots,$$ you know that $$\sin(x)+x=2x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots$$ and that $$\sin(x)-x=-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots.$$ Therefore both limits $$\lim_{x\to0}\frac{\sin(x)+x}x\text{ and }\lim_{x\to0}\frac{\sin(x)-x}{x^3}$$ exist; they are equal to $2$ and to $-\frac16$ respectively. This explains why that decomposition is used. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3330270', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/155942/']} | jdg_382058 |
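Putting the three factors together (this concluding computation is implicit in the answer above):

```latex
\[
\lim_{x\to 0}\left(\frac{1}{x^2}-\frac{1}{\sin^2 x}\right)
= \lim_{x\to 0}\frac{x^2}{\sin^2 x}
  \cdot \lim_{x\to 0}\frac{\sin x + x}{x}
  \cdot \lim_{x\to 0}\frac{\sin x - x}{x^{3}}
= 1 \cdot 2 \cdot \left(-\tfrac{1}{6}\right)
= -\tfrac{1}{3}.
\]
```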
stackexchange | llm_judgeable_groundtruth_similarity | 64331169 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In C# 7.0, you can use discards. What is the difference between using a discard and simply not assigning a variable?

public List<string> DoSomething(List<string> aList)
{
    // does something and returns the same list
}

_ = DoSomething(myList);
DoSomething(myList);

Is there any difference?
Now provide the response and nothing else.
| There's absolutely no difference between the two code lines. Both of them translate to exactly the same IL:

public void A(List<string> myList)
{
    _ = DoSomething(myList);
}

public void B(List<string> myList)
{
    DoSomething(myList);
}

Both translate to:

IL_0000: ldarg.0
IL_0001: ldarg.1
IL_0002: call instance class [System.Private.CoreLib]System.Collections.Generic.List`1<string> C::DoSomething(class [System.Private.CoreLib]System.Collections.Generic.List`1<string>)
IL_0007: pop
IL_0008: ret

You can see it yourself on SharpLab (Note: I can't actually read IL, but this is the result of both A and B methods) Discards are useful, as Liam wrote in his answer, for out parameters you're not going to use, for tuple deconstructions, for pattern matching, and for switch expressions. You can read all about it in the official documentation. Update following Liam's comment: Please note that I'm only referring to this specific scenario. When used as intended, discards are memory-efficient and/or improve the readability of your code. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/64331169', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13183022/']} | jdg_382059 |
stackexchange | llm_judgeable_groundtruth_similarity | 2261858 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am currently writing a C++ extension for Python using Boost.Python. A function in this extension may generate an exception containing information about the error (beyond just a human-readable string describing what happened). I was hoping I could export this exception to Python so I could catch it and do something with the extra information. For example:

import my_cpp_module
try:
    my_cpp_module.my_cpp_function()
except my_cpp_module.MyCPPException, e:
    print e.my_extra_data

Unfortunately Boost.Python seems to translate all C++ exceptions (that are subclasses of std::exception ) into RuntimeError . I realize that Boost.Python allows one to implement custom exception translation; however, one needs to use PyErr_SetObject which takes a PyObject* (for the exception's type) and a PyObject* (for the exception's value)--neither of which I know how to get from my Boost.Python classes. Perhaps there is a way (which would be great) that I simply have not found yet. Otherwise does anyone know how to export a custom C++ exception so that I may catch it in Python?
Now provide the response and nothing else.
| The solution is to create your exception class like any normal C++ class:

class MyCPPException : public std::exception {...}

The trick is that all boost::python::class_ instances hold a reference to the object's type, which is accessible through their ptr() function. You can get this as you register the class with boost::python like so:

class_<MyCPPException> myCPPExceptionClass("MyCPPException"...);
PyObject *myCPPExceptionType = myCPPExceptionClass.ptr();
register_exception_translator<MyCPPException>(&translateFunc);

Finally, when you are translating the C++ exception to a Python exception, you do so as follows:

void translate(MyCPPException const &e)
{
    PyErr_SetObject(myCPPExceptionType, boost::python::object(e).ptr());
}

Here is a full working example:

#include <boost/python.hpp>
#include <assert.h>
#include <iostream>

class MyCPPException : public std::exception
{
private:
    std::string message;
    std::string extraData;
public:
    MyCPPException(std::string message, std::string extraData)
    {
        this->message = message;
        this->extraData = extraData;
    }
    const char *what() const throw()
    {
        return this->message.c_str();
    }
    ~MyCPPException() throw()
    {
    }
    std::string getMessage()
    {
        return this->message;
    }
    std::string getExtraData()
    {
        return this->extraData;
    }
};

void my_cpp_function(bool throwException)
{
    std::cout << "Called a C++ function." << std::endl;
    if (throwException)
    {
        throw MyCPPException("Throwing an exception as requested.",
                             "This is the extra data.");
    }
}

PyObject *myCPPExceptionType = NULL;

void translateMyCPPException(MyCPPException const &e)
{
    assert(myCPPExceptionType != NULL);
    boost::python::object pythonExceptionInstance(e);
    PyErr_SetObject(myCPPExceptionType, pythonExceptionInstance.ptr());
}

BOOST_PYTHON_MODULE(my_cpp_extension)
{
    boost::python::class_<MyCPPException>
        myCPPExceptionClass("MyCPPException",
                            boost::python::init<std::string, std::string>());
    myCPPExceptionClass.add_property("message", &MyCPPException::getMessage)
        .add_property("extra_data", &MyCPPException::getExtraData);
    myCPPExceptionType = myCPPExceptionClass.ptr();
    boost::python::register_exception_translator<MyCPPException>
        (&translateMyCPPException);
    boost::python::def("my_cpp_function", &my_cpp_function);
}

Here is the Python code that calls the extension:

import my_cpp_extension
try:
    my_cpp_extension.my_cpp_function(False)
    print 'This line should be reached as no exception should be thrown.'
except my_cpp_extension.MyCPPException, e:
    print 'Message:', e.message
    print 'Extra data:', e.extra_data

try:
    my_cpp_extension.my_cpp_function(True)
    print ('This line should not be reached as an exception should have been '
           + 'thrown by now.')
except my_cpp_extension.MyCPPException, e:
    print 'Message:', e.message
    print 'Extra data:', e.extra_data
 | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2261858', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/61663/']} | jdg_382060 |
stackexchange | llm_judgeable_groundtruth_similarity | 6893714 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When I do

$ ps -ef | grep cron

I get

root 1036 1 0 Jul28 ? 00:00:00 cron
abc 21025 14334 0 19:15 pts/2 00:00:00 grep --color=auto cron

My question is why do I see the second line. From my understanding, ps lists the processes and pipes the list to grep . grep hasn't even started running while ps is listing processes, so how come the grep process is listed in the output? Related second question: When I do

$ ps -ef | grep [c]ron

I get only

root 1036 1 0 Jul28 ? 00:00:00 cron

What is the difference between the first and second grep executions?
Now provide the response and nothing else.
| When you execute the command: ps -ef | grep cron the shell you are using (...I assume bash in your case; judging by the color attribute of grep, I think you are running a GNU system like a Linux distribution, but it's the same on other Unix shells as well...) will execute the pipe() call to create a FIFO, then it will fork() (make a running copy of itself). This will create a new child process. This newly created child process will close() its standard output file descriptor (fd 1) and attach fd 1 to the write side of the pipe created by the father process (the shell where you executed the command). This is possible because the fork() syscall leaves each process with valid open file descriptors (the pipe fds in this case). After doing so it will exec() the first (in your case) ps command found in your PATH environment variable. With the exec() call the process will become the command you executed. So, you now have the shell process with a child that is, in your case, the ps command with the -ef arguments. At this point, the parent (the shell) fork()s again. This newly generated child process close()s its standard input file descriptor (fd 0) and attaches fd 0 to the read side of the pipe created by the father process (the shell where you executed the command). After doing so it will exec() the first (in your case) grep command found in your PATH environment variable. Now you have the shell process with two children (that are siblings), where the first one is the ps command with the -ef arguments and the second one is the grep command with the cron argument. The read side of the pipe is attached to the STDIN of the grep command and the write side is attached to the STDOUT of the ps command: the standard output of the ps command is attached to the standard input of the grep command.
Since ps is written to send info on each running process to its standard output, while grep is written to read from its standard input something that has to match a given pattern, you have the answer to your first question:

1. the shell runs: ps -ef
2. the shell runs: grep cron
3. ps sends data (that even contains the string "grep cron") to grep
4. grep matches its search pattern against its STDIN, and it matches the string "grep cron" because of the "cron" argument you passed to grep : you are instructing grep to match the "cron" string, and it does, because "grep cron" is part of a line returned by ps by the time grep has started executing.

When you execute:

ps -ef | grep '[c]ron'

the argument passed instructs grep to match something containing "c" followed by "ron". Like the first example, but in this case it will not match the string returned by ps , because:

1. the shell runs: ps -ef
2. the shell runs: grep [c]ron
3. ps sends data (that even contains the string grep [c]ron ) to grep
4. grep does not match its search pattern from stdin, because a string containing "c" followed immediately by "ron" is not found; it only finds a string containing "c" followed by "]ron".

GNU grep does not have any string matching limit, and on some platforms (I think Solaris, HP-UX, AIX) the limit of the string is given by the "$COLUMN" variable or by the terminal's screen width. Hopefully this long response clarifies the shell pipe process a bit.

TIP: ps -ef | grep cron | grep -v grep
 | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6893714', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/494074/']} | jdg_382061 |
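A quick way to reproduce the matching difference without racing ps (the printf lines stand in for the two process-table entries):

```shell
# "grep cron" contains the substring "cron", so the pattern 'cron'
# matches that line. "grep [c]ron" does NOT contain "cron" (a "c"
# followed immediately by "ron"), while the pattern '[c]ron' itself
# matches only the literal text "cron" - so it finds nothing.
printf 'grep cron\n'   | grep -c 'cron'              # prints 1
printf 'grep [c]ron\n' | grep -c '[c]ron' || true    # prints 0
```

The `|| true` only keeps the second line's exit status at zero, since grep exits non-zero when it finds no match.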
stackexchange | llm_judgeable_groundtruth_similarity | 138841 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've seen some controversy when solving physical equations on whether to carry units along every time I insert a numerical value for a variable with dimensions, or to put the final unit only at the last equality. A trivial example: $F= 5\mathrm{N}, m=3 \textrm{kg}, a=?$ $$F=m a \iff 5\,\mathrm{N}= \left( 3 \,\textrm{kg}\right) a \iff a=\frac{5\,\mathrm{N}}{3 \,\textrm{kg}} =\frac{5}{3} \frac{\mathrm{m}}{\mathrm{s}^2}$$ This feels more consistent when mixing algebra with physics, as we can divide the equalities at any step and have $\frac{\mathrm{N}}{\textrm{kg}}=\frac{\mathrm{m}}{\mathrm{s}^2}$, while if we didn't insert units we would have $1=\frac{\mathrm{m}}{\mathrm{s}^2}$. In spite of this, many physics teachers advise me to include the units only in the end result. Additional info: we usually work in SI.
Now provide the response and nothing else.
| Keeping units in ALL steps leading up to your answer is one of the best ways to avoid silly arithmetic mistakes. I teach high school physics, and when my students neglect to write units as they're working, it's much easier to make a silly mistake. Using units allows you to double check that you are only adding or subtracting numbers with like units, similar to how you would only add or subtract like terms in Algebra class. Additionally, if (as in your example above) you are solving for acceleration, you would expect the correct unit of $m/s^2$. If you accidentally divide mass by force (which I see students do sometimes) you would know before you get to your answer that something is not right, because a kg/N is not a unit of acceleration. I'd also like to point out that if you leave units in your calculations, you can see where the units of your final answer are coming from. Instead, if you omit units during the calculations, you may just be tempted to slap the unit that seems to "fit" on your final answer, without the greater understanding of where it came from. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/138841', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/58152/']} | jdg_382062
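The error-catching point in the answer can be made concrete with the question's own numbers: carrying units through the correct division and through the accidental mass-by-force division shows immediately which one yields an acceleration.

```latex
a = \frac{F}{m} = \frac{5\,\mathrm{N}}{3\,\mathrm{kg}}
  = \frac{5}{3}\,\frac{\mathrm{kg\cdot m/s^2}}{\mathrm{kg}}
  = \frac{5}{3}\,\frac{\mathrm{m}}{\mathrm{s}^2}
\qquad\text{vs.}\qquad
\frac{m}{F} = \frac{3\,\mathrm{kg}}{5\,\mathrm{N}}
  = \frac{3}{5}\,\frac{\mathrm{s}^2}{\mathrm{m}}
```

The second result carries units of s²/m, so the mistake is visible before any number is circled as a final answer.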
stackexchange | llm_judgeable_groundtruth_similarity | 2552153 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to pass parameters ($_POST or $_GET) with the redirector helper in Zend Framework? The following code redirects to the index action of the current controller, but I would like to pass some parameters to it as well. $this->_helper->redirector("index"); The Zend documentation does not say anything about it.
Now provide the response and nothing else.
| Of course. This is a code sample from the Action Helpers documentation (see the Redirector section, about 2/3 of the way down the page). You may need to grab a reference to the redirector helper and call one of the goto* methods like this code is doing.

class ForwardController extends Zend_Controller_Action
{
    /**
     * Redirector - defined for code completion
     *
     * @var Zend_Controller_Action_Helper_Redirector
     */
    protected $_redirector = null;

    public function init()
    {
        $this->_redirector = $this->_helper->getHelper('Redirector');
    }

    public function myAction()
    {
        /* do some stuff */

        // Redirect to 'my-action' of 'my-controller' in the current
        // module, using the params param1 => test and param2 => test2
        $this->_redirector->gotoSimple('my-action',
                                       'my-controller',
                                       null,
                                       array('param1' => 'test',
                                             'param2' => 'test2'));
    }
}

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2552153', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/100240/']} | jdg_382063
stackexchange | llm_judgeable_groundtruth_similarity | 14218 |
Below is a question asked on the forum quantumcomputing.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider a quantity \begin{equation}\mathbb{E}\big[\langle z|\rho|z\rangle\big],\end{equation} where $\rho = |\psi \rangle \langle \psi|$ is a Haar-random state $n$ -qubit quantum state and $z$ is the label of a fixed $n$ -qubit basis vector. Now, consider \begin{equation}\sigma = \underset{\text{diagonal}~U}{\mathbb{E}}\big[U\rho U^{*}\big],\end{equation} where $\rho$ is as defined before, and $U$ is a diagonal unitary matrix such that the diagonal entries are uniformly random complex phases. I am trying to prove that \begin{equation}\mathbb{E}\big[\langle z|\rho|z\rangle\big] = \mathbb{E}\big[\langle z|\sigma|z\rangle\big]\end{equation} Intuitively, the result is clear as the Haar measure is invariant under left and right multiplication by a unitary. But, the RHS has two expectations - one nested inside the other - and I do not know how to simplify that.
Now provide the response and nothing else.
| With the chosen structure of $ U $ , I think it's even possible to prove the stronger statement: $$ \langle z| \rho|z \rangle = \langle z| \sigma_\rho|z \rangle, \hspace{0.2em} \text{where} \hspace{0.2em} \sigma_\rho = \mathbb{E}_U \big[U\rho U^\dagger\big] \text{and} \hspace{0.3em} |z\rangle \hspace{0.3em} \text{a computational basis vector.}$$ You may write $ U = \sum_{k \in \{0, 1\}^n} e^{i \phi_k} |k\rangle \langle k| $ , for uniform $ \phi_k \in_R [0, 2\pi] $ and calculate \begin{align*} U\rho U^\dagger &= \sum_{k, m} e^{i (\phi_k - \phi_m)} \langle k|\rho|m \rangle \cdot |k\rangle \langle m| \implies \\\mathbb{E}_U \big[U\rho U^\dagger\big] &= \sum_{k, m} \mathbb{E}_\phi \big[e^{i (\phi_k - \phi_m)}\big] \cdot \langle k|\rho|m \rangle \cdot |k\rangle \langle m| \implies\\\langle z |\sigma_\rho| z \rangle &= \sum_{k, m} \mathbb{E}_\phi \big[e^{i (\phi_k - \phi_m)}\big] \cdot \langle k|\rho|m \rangle \cdot \langle z|k\rangle \langle m|z \rangle \implies\\\langle z |\sigma_\rho| z \rangle &= \sum_{k, m} \mathbb{E}_\phi \big[e^{i (\phi_k - \phi_m)}\big] \cdot \langle k|\rho|m \rangle \cdot \delta_{z,k} \delta_{z,m} = \langle z|\rho|z \rangle\end{align*} | {} | {'log_upvote_score': 4, 'links': ['https://quantumcomputing.stackexchange.com/questions/14218', 'https://quantumcomputing.stackexchange.com', 'https://quantumcomputing.stackexchange.com/users/1351/']} | jdg_382064
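The key step in the proof above — that $\mathbb{E}_\phi[e^{i(\phi_k-\phi_m)}]$ vanishes for $k \neq m$ and equals 1 for $k = m$ — can be checked numerically. This sketch approximates the uniform phase average by a uniform grid over $[0, 2\pi)$ (the grid size N is an arbitrary illustrative choice):

```python
import cmath
import math

N = 64  # grid points approximating the uniform average over [0, 2*pi)
grid = [2 * math.pi * j / N for j in range(N)]

# k != m: phi_k and phi_m average independently, so the expectation
# factorizes into (mean of e^{i phi}) * (mean of e^{-i phi}) = 0 * 0
off_diag = sum(cmath.exp(1j * (a - b)) for a in grid for b in grid) / N**2

# k == m: the phases cancel exactly, so every term equals 1
diag = sum(cmath.exp(1j * (a - a)) for a in grid) / N

print(abs(off_diag), diag.real)  # ~0 and 1
```

This is exactly the delta function $\delta_{z,k}\delta_{z,m}$ mechanism that collapses the double sum to the diagonal term.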
stackexchange | llm_judgeable_groundtruth_similarity | 1880252 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Please tell me if I have done anything wrong. The question is from "HIGHER ALGEBRA" by Hall and Knight.
Now provide the response and nothing else.
| It does not hold for finite groups either. Let $G=K=\Bbb{Z}_4\oplus\Bbb{Z}_2$, and let $H=\langle (0,1)\rangle$ and $M=\langle(2,0)\rangle$ be the given cyclic subgroups, both of order two. Then$$G/H\simeq\Bbb{Z}_4\qquad\text{and}\qquad K/M\simeq\Bbb{Z}_2\oplus\Bbb{Z}_2.$$You need to have an isomorphism $f:G\to K$ such that $f(H)=M$ to be sure about the conclusion. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1880252', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/341129/']} | jdg_382065 |
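The counterexample in the answer can be verified by brute force. This sketch (a from-scratch illustration, not using any group-theory library) computes the element orders of both quotient groups directly:

```python
from itertools import product

# G = Z4 (+) Z2 with componentwise addition
G = list(product(range(4), range(2)))

def add(x, y):
    return ((x[0] + y[0]) % 4, (x[1] + y[1]) % 2)

def coset(g, S):
    return frozenset(add(g, s) for s in S)

def quotient_orders(S):
    """Sorted element orders of G/S for a subgroup S given as a list."""
    identity = coset((0, 0), S)
    orders = []
    for c in {coset(g, S) for g in G}:
        g = next(iter(c))          # any representative gives the same order
        k, acc = 1, g
        while coset(acc, S) != identity:
            acc = add(acc, g)
            k += 1
        orders.append(k)
    return sorted(orders)

H = [(0, 0), (0, 1)]  # the subgroup <(0,1)>
M = [(0, 0), (2, 0)]  # the subgroup <(2,0)>
print(quotient_orders(H))  # [1, 2, 4, 4] -> an element of order 4, so Z4
print(quotient_orders(M))  # [1, 2, 2, 2] -> all orders <= 2, so Z2 x Z2
```

The two order profiles differ, confirming $G/H \simeq \Bbb{Z}_4$ while $G/M \simeq \Bbb{Z}_2 \oplus \Bbb{Z}_2$.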
stackexchange | llm_judgeable_groundtruth_similarity | 599096 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Follow-up on: What is the work done when pressure fully changes in thermodynamics? We know thermodynamic work is $$pdV$$ However, Wikipedia's enthalpy article states that the work done in bringing a system to its state is $$d(pV)$$ So what kind of work is $$nRdT=d(pV)$$ and what is its importance? Which work is which?
Now provide the response and nothing else.
| This follows directly from the chain rule: $$\frac{\partial}{\partial x}[f(x-x')] = f'(x-x')\frac{\partial}{\partial x}[x-x'] = f'(x-x')$$ whereas $$\frac{\partial}{\partial x'}[f(x-x')] = f'(x-x')\frac{\partial}{\partial x'}[x-x'] = -f'(x-x').$$ (Here I take $f'(x-x')$ to mean the (total) derivative of $f$ with respect to its single independent variable.) Thus, we see that the two expressions are simply the negations of each other. More complex versions of this can similarly be derived for other vector calculus operators, such as $$\nabla_x f(x-x') = -\nabla_{x'} f(x-x'),$$ where $\nabla_x$ denotes the gradient with respect to $x$ . I think this is also explained in Griffiths somewhere, but hopefully this explanation suffices. Hope this helps. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/599096', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/281945/']} | jdg_382066
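The sign flip derived above is easy to sanity-check numerically with central differences. The particular function f(u) = u³, the evaluation points, and the step size below are arbitrary illustrative choices:

```python
def f(u):
    return u ** 3  # any smooth function of the single variable u = x - x'

h = 1e-6
x, xp = 1.3, 0.4

# central-difference derivatives with respect to x and to x'
d_dx  = (f((x + h) - xp) - f((x - h) - xp)) / (2 * h)
d_dxp = (f(x - (xp + h)) - f(x - (xp - h))) / (2 * h)

print(d_dx, d_dxp)  # same magnitude (3 * 0.9**2 = 2.43), opposite signs
```

Both derivatives probe the same one-variable slope f′(x−x′); only the sign of the inner derivative differs, exactly as the chain rule predicts.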
stackexchange | llm_judgeable_groundtruth_similarity | 40392676 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm somewhat comfortable with R, a lot less with Shiny, though it's not my first Shiny application. I have a data frame with lon/lat and the date/time of the entry in the system for every new customer. I also created other variables based on the startDate variable, like the year, month, week, year-month (ym) and year-week (yw):

  id      lat       lon  startDate year month week         ym         yw
1  1 45.53814 -73.63672 2014-04-09 2014     4   15 2014-04-01 2014-04-06
2  2 45.51076 -73.61029 2014-06-04 2014     6   23 2014-06-01 2014-06-01
3  3 45.43560 -73.60100 2014-04-30 2014     4   18 2014-04-01 2014-04-27
4  4 45.54332 -73.56000 2014-05-30 2014     5   22 2014-05-01 2014-05-25
5  5 45.52234 -73.59022 2014-05-01 2014     5   18 2014-05-01 2014-04-27

I want to map every customer with leaflet (this is done), but then I would like to animate my application by showing only customers who are new for a specific date range. I would like to step through monthly dates (ym variable: 2016-01-01, 2016-02-01, 2016-03-01...) and not by day (or by x days, which is already supported) because monthly dates are not always a 30-day step toward the next month.
Here is my current application: library(shiny)library(leaflet)library(dplyr)df <- data.frame(id = 1:5, lat = c(45.53814, 45.51076, 45.4356, 45.54332, 45.52234), lon = c(-73.63672, -73.61029, -73.6010, -73.56000, -73.59022), startDate = as.Date(c("2014-04-09", "2014-06-04", "2014-04-30", "2014-05-30", "2014-05-01")), year = c(2014, 2014, 2014, 2014, 2014), month = c(4, 6, 4, 5, 5), week = c(15, 23, 18, 22, 18), ym = as.Date(c("2014-04-01", "2014-06-01", "2014-04-01", "2014-05-01", "2014-05-01")), # Year-Month yw = as.Date(c("2014-04-06", "2014-06-01", "2014-04-27", "2014-05-25", "2014-04-27")) # Year-Week )ui <- bootstrapPage( tags$style(type = "text/css", "html, body {width:100%;height:100%}"), leafletOutput("map", width = "83%", height = "100%"), absolutePanel(top = 1,right = 10,div( style = "height: 80px;", sliderInput( "time", "Time Slider", min(df$month), max(df$month), value = c(min(df$month), max(df$month)), step = 1, animate = animationOptions(interval = 2500) ) # end sliderInput) # end div ) # end absolutePanel) # end bootstrapPageserver <- shinyServer(function(input, output, session){ output$map <- renderLeaflet({ leaflet(data = df %>% filter(month >= input$time[1], month <= input$time[2])) %>% addTiles() %>% addMarkers(~lon, ~lat) %>% setView(lng = -73.6, lat = 45.52, zoom = 12) }) })shinyApp(ui = ui, server = server) Question: How can I filter the data using the slider animation option to shift to the next month and so on? For now I cycle through the variable month, but I have data for 8 years, so I need to take into consideration the year also, thus cycling through the ym variable for example. I saw some work done here and here , but either it's not responding to my needs or I didn't understand the suplied js code. If its the case, how a need to change my code to reflect my needs? Thank you.
Now provide the response and nothing else.
| EDIT 2017-10-13: This function is now avalaible in package shinyWidgets (with a different name : sliderTextInput() ). You can use this custom slider function. It takes a character vector for choices, so you can use whatever you want as format and step through the choices. The downside is that you have to manually split the input in the server : app example : # List of monthschoices_month <- format(seq.Date(from = as.Date("2014-01-01"), by = "month", length.out = 36), "%B-%Y")library("shiny")# uiui <- fluidPage( br(), # custom slider function sliderValues( inputId = "test", label = "Month", width = "100%", values = choices_month, from = choices_month[2], to = choices_month[6], grid = FALSE, animate = animationOptions(interval = 1500) ), verbatimTextOutput("res"))# serverserver <- function(input, output, session) { output$res <- renderPrint({ print(input$test) # you have to split manually the result by ";" print(as.Date(paste("01", unlist(strsplit(input$test, ";")), sep="-"), format="%d-%B-%Y")) })}# AppshinyApp(ui = ui, server = server) sliderValues function : sliderValues <- function (inputId, label, values, from, to = NULL, grid = TRUE, width = NULL, postfix = NULL, prefix = NULL, dragRange = TRUE, disable = FALSE, animate = FALSE) { validate_fromto <- function(fromto = NULL, values = NULL, default = 0) { if (!is.null(fromto)) { if (is.character(values) & is.numeric(fromto)) { fromto <- fromto - 1 } else { fromto <- which(values == fromto) - 1 } } else { fromto <- default } return(fromto) } sliderProps <- shiny:::dropNulls( list( class = "js-range-slider", id = inputId, `data-type` = if (!is.null(to)) "double" else "single", `data-from` = validate_fromto(fromto = from, values = values), `data-to` = validate_fromto( fromto = to, values = values, default = length(values) ), `data-grid` = grid, `data-prefix` = if (is.null(prefix)) { "null" } else { shQuote(prefix, "sh") }, `data-postfix` = if (is.null(postfix)) { "null" } else { shQuote(postfix, "sh") }, 
`data-drag-interval` = dragRange, `data-disable` = disable, `data-values` = if (is.numeric(values)) { paste(values, collapse = ", ") } else { paste(shQuote(values, type = "sh"), collapse = ", ") } ) ) sliderProps <- lapply( X = sliderProps, FUN = function(x) { if (identical(x, TRUE)) "true" else if (identical(x, FALSE)) "false" else x } ) sliderTag <- tags$div( class = "form-group shiny-input-container", style = if (!is.null(width)) paste0("width: ", htmltools::validateCssUnit(width), ";"), if (!is.null(label)) shiny:::controlLabel(inputId, label), do.call( tags$input, list( type = if (is.numeric(values) & is.null(to)) { "number" } else { "text" }, #class = "js-range-slider", id = inputId, name = inputId, value = "" ) ), tags$style( whisker::whisker.render( template = "input[id='{{id}}'] { -moz-appearance:textfield;}input[id='{{id}}']::-webkit-outer-spin-button,input[id='{{id}}']::-webkit-inner-spin-button {-webkit-appearance: none;margin: 0;}", data = list(id = inputId)) ), tags$script( HTML( whisker::whisker.render( template = '$("#{{id}}").ionRangeSlider({ type: "{{data-type}}", from: {{data-from}}, to: {{data-to}}, grid: {{data-grid}}, keyboard: true, keyboard_step: 1, postfix: {{data-postfix}}, prefix: {{data-prefix}}, drag_interval: {{data-drag-interval}}, values: [{{data-values}}], disable: {{data-disable}} });', data = sliderProps ) )) ) if (identical(animate, TRUE)) animate <- animationOptions() if (!is.null(animate) && !identical(animate, FALSE)) { if (is.null(animate$playButton)) animate$playButton <- icon("play", lib = "glyphicon") if (is.null(animate$pauseButton)) animate$pauseButton <- icon("pause", lib = "glyphicon") sliderTag <- htmltools::tagAppendChild( sliderTag, tags$div(class = "slider-animate-container", tags$a(href = "#", class = "slider-animate-button", `data-target-id` = inputId, `data-interval` = animate$interval, `data-loop` = animate$loop, span(class = "play", animate$playButton), span(class = "pause", animate$pauseButton))) ) } dep <- 
htmltools::htmlDependency( "ionrangeslider", "2.1.12", c(href = "shared/ionrangeslider"), script = "js/ion.rangeSlider.min.js", stylesheet = c( "css/ion.rangeSlider.css", "css/ion.rangeSlider.skinShiny.css" ) ) htmltools::attachDependencies(sliderTag, dep)} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40392676', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7034187/']} | jdg_382067 |
stackexchange | llm_judgeable_groundtruth_similarity | 620500 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the precise mathematical definition of an equation to be Lorentz invariant? Is it the same as being invariant under the maps $x \mapsto \Lambda x$ , with $\Lambda$ being a given Lorentz transformation? I believe this is indeed the appropriate definition, but taking as an example the Klein-Gordon equation: $$\bigg{(}\frac{\partial^{2}}{\partial t^{2}} + \Delta + m^{2}\bigg{)} \psi(x) = 0$$ let us take $\Lambda = (\Lambda_{ij})$ a $4\times 4$ matrix and $x' = \Lambda x$ . Then, if $x_{\mu}$ denotes any component of $x=(x_{0},x_{1},x_{2},x_{3})$ : $$\frac{\partial}{\partial x_{\mu}}\psi(\Lambda x) = \sum_{i=1}^{4}\frac{\partial \psi}{\partial x_{i}'}(x')\frac{\partial x_{i}'}{\partial x_{\mu}} = \sum_{i=1}^{4}\frac{\partial \psi}{\partial x_{i}'}(x')\Lambda_{i\mu}$$ and, consequently: $$\frac{\partial^{2}}{\partial x_{\mu}^{2}}\psi(\Lambda x) = \sum_{i=1}^{4}\sum_{j=1}^{4}\frac{\partial^{2}\psi}{\partial x_{i}'\partial x_{j}'}(x')\Lambda_{i\mu}\Lambda_{j\mu}$$ How does this imply Lorentz invariance, i.e. how does one recover the original Klein-Gordon equation from this?
Now provide the response and nothing else.
| Consider the following two states describing a free particle in 1 dimension \begin{eqnarray}\psi_1(x,t) &=& \frac{1}{\sqrt{2\pi}} \exp\left[i \left(k x - \frac{\hbar k^2}{2m}t\right)\right] \\\psi_2(x,t) &=& \frac{1}{\sqrt{2\pi}} \exp\left[i \left((k+\Delta k) x - \frac{\hbar (k+\Delta k)^2}{2 m}t\right)\right]\end{eqnarray} Both $\psi_1$ and $\psi_2$ satisfy the Schrodinger equation \begin{equation}H \psi_{1,2} = \frac{p^2}{2m}\psi_{1,2} = E_{1,2} \psi_{1,2} \implies -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} \psi_{1,2} = E_{1,2} \psi_{1,2}\end{equation} where for $\psi_1$ , the energy $E_1$ is $E_1 = \hbar \omega_1 = \frac{\hbar^2 k^2}{2m}$ , and for $\psi_2$ , the energy $E_2$ is $E_2=\hbar \omega_2 = \frac{\hbar^2 (k^2+\Delta k)^2}{2m}$ . Note that $\psi_1(x,0)$ and $\psi_2(x,0)$ (the two initial wavefunctions) are only different by an overall spatially-dependent phase \begin{equation}\psi_2(x,0) = e^{i \Delta k x}\psi_1(x,0)\end{equation} However they are clearly different states. For example, the energies are different, as noticed above. Furthermore the momenta are different \begin{eqnarray}p \psi_1 &=& -i \hbar \frac{d\psi_1}{dx} = \hbar k \psi_1 \\p \psi_2 &=& -i \hbar \frac{d\psi_2}{dx} = \hbar (k+\Delta k) \psi_2\end{eqnarray} In particular, the momenta differ by $\hbar \Delta k$ . In other words: knowing initial probability distribution for the position does not fully determine the state! One needs to know the probability amplitude distribution $^\star$ , which contains more information. $\star$ To be technically correct I should say you need to know the probability amplitudes as a function of position, or the probability amplitudes as a function of momentum, or in general, you need to know the inner product of the state with every member a complete basis. 
I used the words "probability amplitude distribution" to contrast linguistically with "probability distribution", but please don't take this to mean that the there are any implied mathematical properties beyond defining the probability amplitude at every point in position space. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/620500', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/283053/']} | jdg_382068 |
stackexchange | llm_judgeable_groundtruth_similarity | 68246 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to detach a command either with 'command &' at the end, or with 'nohup command &', but it stops right after detaching it. The command is a little specific: if it receives EOF on input, it breaks, so /dev/null as input will lead to the end, and the solution that usually works: $ command < /dev/null > /dev/null 2>&1 & is not working... Is there another device in Unix/Linux that can replace /dev/null and behave like empty input, but without sending EOF? (By the way, the command is the very useful multicasting tool emcast. I can try patching it myself, or find a patched version for this purpose... but it seems the problem can be solved outside.) I'm adding this EDIT to make my question more clear. I made this C program of 2 lines that works perfectly; the program name is "donothing":

#include <unistd.h>

int main() {
    while (1) {
        sleep(10);
    }
    return 0;
}

and it is what I am looking for: some device/program that does nothing but leaves its stdout open. Both ("command & ... disown" and "nohup command &") work. $ donothing | mycommand >/dev/null & $ disown %1 works well, so now the question is only: what Unix device/program behaves like my 'donothing'?
Now provide the response and nothing else.
| For your command to detect eof , it has to read from stdin. So presumably it is expecting some input. So it sounds like what you need is not an empty input ( /dev/null is exactly meant for that), but input that never comes. It can be simulated with a pipe where nobody is ever going to write on the other end like: sleep 999999999 | the-command Or to avoid having to run that extra sleep command, it could be done with a named pipe: fifo=$(mktemp -u) && mkfifo "$fifo" && (rm "$fifo" && the-command <&3 3<&- &) 3<> "$fifo" Here using an intermediary file descriptor to work around the fact that the shell connects stdin to /dev/null implicitely when you start a command with & (unless you add an explicit stdin redirection like our <&3 here). On Linux (and probably on Linux only), you can also do: the-command < /dev/fd/1 3>&1 > /dev/null | : /dev/fd/1 where fd 1 is connected to a pipe, on Linux, behaves like a named pipe. That is, when you open it in read mode, you get the reading end of the pipe. So above, fd 0 will be connected to the reading end of a pipe whose other end is on the fd 3 of the-command . Because the-command is not going to write anything on its fd 3, any read attempt on fd 0 will block (or a non-blocking read will return with there's nothing to read yet , or a select/poll will return nothing to read either as the-command is probably doing if it's doing anything else than waiting for input that never comes). | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/68246', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/34366/']} | jdg_382069 |
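The pipe behavior the answer relies on can be seen outside the shell too. This POSIX-only Python sketch (purely illustrative) shows that an empty input delivers EOF immediately, while a pipe whose write end is still open has "nothing to read yet" rather than EOF:

```python
import os

# Case 1: all write ends closed -> the reader sees EOF (an empty read)
# at once, which is exactly what /dev/null gives the command.
r, w = os.pipe()
os.close(w)
eof_chunk = os.read(r, 1)
print(eof_chunk)       # b'' signals EOF
os.close(r)

# Case 2: a write end is still open -> no EOF; a non-blocking read
# just reports that no data is available yet.
r, w = os.pipe()
os.set_blocking(r, False)
try:
    os.read(r, 1)
    got_eof = True
except BlockingIOError:
    got_eof = False
print(got_eof)         # False: the reader would block, not terminate
os.close(r)
os.close(w)
```

This is why `sleep 999999999 | the-command` keeps the command alive: the sleeping writer holds the pipe open, so the read side never reaches end-of-file.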
stackexchange | llm_judgeable_groundtruth_similarity | 1241819 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What does the => operator mean in the following code? foreach ($user_list as $user => $pass) The code is a comment at PHP.net. The user does not specify the value of $user_list, $user or $pass. I normally see that => means equal or greater than. However, I am not sure about its purpose here because it is not assigned. I read the code as "process a list of users in integers such that the value of each user is equal to or greater than the password". The above does not make sense to me.
Now provide the response and nothing else.
| => is the separator for associative arrays. In the context of that foreach loop, it assigns the key of the array to $user and the value to $pass. Example:

$user_list = array(
    'dave' => 'apassword',
    'steve' => 'secr3t'
);

foreach ($user_list as $user => $pass) {
    echo "{$user}'s pass is: {$pass}\n";
}

// Prints:
// "dave's pass is: apassword"
// "steve's pass is: secr3t"

Note that this can be used for numerically indexed arrays too. Example:

$foo = array('car', 'truck', 'van', 'bike', 'rickshaw');

foreach ($foo as $i => $type) {
    echo "{$i}: {$type}\n";
}

// prints:
// 0: car
// 1: truck
// 2: van
// 3: bike
// 4: rickshaw

| {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/1241819', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/54964/']} | jdg_382070
stackexchange | llm_judgeable_groundtruth_similarity | 5094908 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
If I define a function (maybe a class member function but not inlined) in a header file that is included by two different translation units I get a link error since that function is multiply defined. Not so with templates since they are not compilable types until the compiler resolves a declaration of an object of a templatized type. This made me realize I don't know where compiled template code resides and how it is linked since C++ does not just create multiple copies of code to define SomeTemplateClass. Any info would be appreciated.Thanks!
Now provide the response and nothing else.
| There are 3 implementation schemes used by C++ compilers:

- greedy instantiation, where the compiler generates an instantiation in each compilation unit that uses it, then the linker throws away all but one of them (this is not just a code-size optimization, it's required so that function addresses, static variables, and the like are unique). This is the most common model.

- queried instantiation, where the compiler has a database of instantiations already done. When an instantiation is needed, the DB is checked and updated. The only compiler I know which uses this is Sun's, and it isn't used by default anymore.

- iterated instantiation, where the instantiations are made by the linker (either directly or by assigning them to a compilation unit, which will then be recompiled). This is the model used by CFront -- i.e. historically it was the first one used -- and also by compilers using the EDG front-end (with some optimisations compared to CFront).

(See C++ Templates, The Complete Guide by David Vandevoorde and Nicolai Josuttis. Another online reference is http://www.bourguet.org/v2/cpplang/export.pdf , which is more concerned about the compilation model but still has descriptions of the instantiation mechanisms). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5094908', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/630749/']} | jdg_382071
stackexchange | llm_judgeable_groundtruth_similarity | 20605888 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
i am developping a project base on J2EE EJB JSF, database is MYsql, the project works very well last week. but today, it can't be deployed when i run it. here are some exception: Initial deploying ECOM to C:\Users\John624\Documents\NetBeansProjects\PromoCoupon\ECOM\dist\gfdeploy\ECOMCompleted initial distribution of ECOMInitializing...invalid header field name: Exception DescriptionC:\Users\John624\Documents\NetBeansProjects\PromoCoupon\ECOM\nbproject\build-impl.xml:307: The module has not been deployed.See the server log for details.BUILD FAILED (total time: 5 seconds) Glassfish: <code> SEVERE: Exception while invoking class org.glassfish.persistence.jpa.JPADeployer prepare methodSEVERE: Exception while invoking class org.glassfish.javaee.full.deployment.EarDeployer prepare methodSEVERE: org.glassfish.deployment.common.DeploymentException: Exception [EclipseLink-28018] (Eclipse Persistence Services - 2.5.0.v20130507-3faac2b): org.eclipse.persistence.exceptions.EntityManagerSetupExceptionException Description: Predeployment of PersistenceUnit [ECOM-ejbPU] failed.Internal Exception: Exception [EclipseLink-7158] (Eclipse Persistence Services - 2.5.0.v20130507-3faac2b): org.eclipse.persistence.exceptions.ValidationExceptionException Description: Error encountered when building the @NamedQuery [Adresse.maxId] from entity class [class org.eclipse.persistence.internal.jpa.metadata.queries.NamedQueryMetadata].Internal Exception: java.lang.ClassCastException: org.eclipse.persistence.jpa.jpql.parser.NullExpression cannot be cast to org.eclipse.persistence.jpa.jpql.parser.IdentificationVariable at org.glassfish.javaee.full.deployment.EarDeployer.prepare(EarDeployer.java:180) at com.sun.enterprise.v3.server.ApplicationLifecycle.prepareModule(ApplicationLifecycle.java:922) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:431) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:219) at 
org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:491) at com.sun.enterprise.v3.admin.CommandRunnerImpl$2$1.run(CommandRunnerImpl.java:527) at com.sun.enterprise.v3.admin.CommandRunnerImpl$2$1.run(CommandRunnerImpl.java:523) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:356) at com.sun.enterprise.v3.admin.CommandRunnerImpl$2.execute(CommandRunnerImpl.java:522) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:546) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1423) at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$1500(CommandRunnerImpl.java:108) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1762) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1674) at com.sun.enterprise.v3.admin.AdminAdapter.doCommand(AdminAdapter.java:534) at com.sun.enterprise.v3.admin.AdminAdapter.onMissingResource(AdminAdapter.java:224) at org.glassfish.grizzly.http.server.StaticHttpHandler.service(StaticHttpHandler.java:297) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:246) at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:191) at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:168) at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:189) at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119) at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:288) at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:206) at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:136) at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:114) at 
org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77) at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:838) at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:113) at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:115) at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:55) at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:135) at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:564) at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:544) at java.lang.Thread.run(Thread.java:724)Caused by: javax.persistence.PersistenceException: Exception [EclipseLink-28018] (Eclipse Persistence Services - 2.5.0.v20130507-3faac2b): org.eclipse.persistence.exceptions.EntityManagerSetupExceptionException Description: Predeployment of PersistenceUnit [ECOM-ejbPU] failed.Internal Exception: Exception [EclipseLink-7158] (Eclipse Persistence Services - 2.5.0.v20130507-3faac2b): org.eclipse.persistence.exceptions.ValidationExceptionException Description: Error encountered when building the @NamedQuery [Adresse.maxId] from entity class [class org.eclipse.persistence.internal.jpa.metadata.queries.NamedQueryMetadata].Internal Exception: java.lang.ClassCastException: org.eclipse.persistence.jpa.jpql.parser.NullExpression cannot be cast to org.eclipse.persistence.jpa.jpql.parser.IdentificationVariable at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.createPredeployFailedPersistenceException(EntityManagerSetupImpl.java:1950) at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.predeploy(EntityManagerSetupImpl.java:1941) at org.eclipse.persistence.jpa.PersistenceProvider.createContainerEntityManagerFactory(PersistenceProvider.java:322) at 
org.glassfish.persistence.jpa.PersistenceUnitLoader.loadPU(PersistenceUnitLoader.java:199) at org.glassfish.persistence.jpa.PersistenceUnitLoader.<init>(PersistenceUnitLoader.java:107) at org.glassfish.persistence.jpa.JPADeployer$1.visitPUD(JPADeployer.java:223) at org.glassfish.persistence.jpa.JPADeployer$PersistenceUnitDescriptorIterator.iteratePUDs(JPADeployer.java:510) at org.glassfish.persistence.jpa.JPADeployer.createEMFs(JPADeployer.java:230) at org.glassfish.persistence.jpa.JPADeployer.prepare(JPADeployer.java:168) at com.sun.enterprise.v3.server.ApplicationLifecycle.prepareModule(ApplicationLifecycle.java:922) at org.glassfish.javaee.full.deployment.EarDeployer.prepareBundle(EarDeployer.java:307) at org.glassfish.javaee.full.deployment.EarDeployer.access$200(EarDeployer.java:88) at org.glassfish.javaee.full.deployment.EarDeployer$1.doBundle(EarDeployer.java:153) at org.glassfish.javaee.full.deployment.EarDeployer$1.doBundle(EarDeployer.java:150) at org.glassfish.javaee.full.deployment.EarDeployer.doOnBundles(EarDeployer.java:230) at org.glassfish.javaee.full.deployment.EarDeployer.doOnAllTypedBundles(EarDeployer.java:239) at org.glassfish.javaee.full.deployment.EarDeployer.doOnAllBundles(EarDeployer.java:265) at org.glassfish.javaee.full.deployment.EarDeployer.prepare(EarDeployer.java:150) ... 
35 moreCaused by: Exception [EclipseLink-28018] (Eclipse Persistence Services - 2.5.0.v20130507-3faac2b): org.eclipse.persistence.exceptions.EntityManagerSetupExceptionException Description: Predeployment of PersistenceUnit [ECOM-ejbPU] failed.Internal Exception: Exception [EclipseLink-7158] (Eclipse Persistence Services - 2.5.0.v20130507-3faac2b): org.eclipse.persistence.exceptions.ValidationExceptionException Description: Error encountered when building the @NamedQuery [Adresse.maxId] from entity class [class org.eclipse.persistence.internal.jpa.metadata.queries.NamedQueryMetadata].Internal Exception: java.lang.ClassCastException: org.eclipse.persistence.jpa.jpql.parser.NullExpression cannot be cast to org.eclipse.persistence.jpa.jpql.parser.IdentificationVariable at org.eclipse.persistence.exceptions.EntityManagerSetupException.predeployFailed(EntityManagerSetupException.java:230) ... 53 moreCaused by: Exception [EclipseLink-7158] (Eclipse Persistence Services - 2.5.0.v20130507-3faac2b): org.eclipse.persistence.exceptions.ValidationExceptionException Description: Error encountered when building the @NamedQuery [Adresse.maxId] from entity class [class org.eclipse.persistence.internal.jpa.metadata.queries.NamedQueryMetadata].Internal Exception: java.lang.ClassCastException: org.eclipse.persistence.jpa.jpql.parser.NullExpression cannot be cast to org.eclipse.persistence.jpa.jpql.parser.IdentificationVariable at org.eclipse.persistence.exceptions.ValidationException.errorProcessingNamedQuery(ValidationException.java:824) at org.SEVERE: Exception while preparing the appSEVERE: eclipse.persistence.internal.jpa.metadata.queries.NamedQueryMetadata.process(NamedQueryMetadata.java:194) at org.eclipse.persistence.internal.jpa.metadata.MetadataProject.processQueries(MetadataProject.java:1628) at org.eclipse.persistence.internal.jpa.metadata.MetadataProcessor.addNamedQueries(MetadataProcessor.java:148) at 
org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.predeploy(EntityManagerSetupImpl.java:1894) ... 51 moreCaused by: java.lang.ClassCastException: org.eclipse.persistence.jpa.jpql.parser.NullExpression cannot be cast to org.eclipse.persistence.jpa.jpql.parser.IdentificationVariable at org.eclipse.persistence.internal.jpa.jpql.DeclarationResolver$DeclarationVisitor.visit(DeclarationResolver.java:626) at org.eclipse.persistence.jpa.jpql.parser.RangeVariableDeclaration.accept(RangeVariableDeclaration.java:98) at org.eclipse.persistence.internal.jpa.jpql.DeclarationResolver$DeclarationVisitor.visit(DeclarationResolver.java:577) at org.eclipse.persistence.jpa.jpql.parser.IdentificationVariableDeclaration.accept(IdentificationVariableDeclaration.java:71) at org.eclipse.persistence.internal.jpa.jpql.DeclarationResolver$DeclarationVisitor.visit(DeclarationResolver.java:566) at org.eclipse.persistence.jpa.jpql.parser.FromClause.accept(FromClause.java:48) at org.eclipse.persistence.internal.jpa.jpql.DeclarationResolver.populateImp(DeclarationResolver.java:417) at org.eclipse.persistence.internal.jpa.jpql.DeclarationResolver.populate(DeclarationResolver.java:407) at org.eclipse.persistence.internal.jpa.jpql.JPQLQueryHelper$DescriptorCollector.collectDescriptors(JPQLQueryHelper.java:179) at org.eclipse.persistence.internal.jpa.jpql.JPQLQueryHelper$DescriptorCollector.visit(JPQLQueryHelper.java:204) at org.eclipse.persistence.jpa.jpql.parser.FromClause.accept(FromClause.java:48) at org.eclipse.persistence.jpa.jpql.parser.AbstractSelectStatement.acceptChildren(AbstractSelectStatement.java:93) at org.eclipse.persistence.jpa.jpql.parser.SelectStatement.acceptChildren(SelectStatement.java:110) at org.eclipse.persistence.jpa.jpql.parser.AbstractTraverseChildrenVisitor.visit(AbstractTraverseChildrenVisitor.java:32) at org.eclipse.persistence.jpa.jpql.parser.AnonymousExpressionVisitor.visit(AnonymousExpressionVisitor.java:470) at 
org.eclipse.persistence.jpa.jpql.parser.SelectStatement.accept(SelectStatement.java:102) at org.eclipse.persistence.jpa.jpql.parser.JPQLExpression.acceptChildren(JPQLExpression.java:143) at org.eclipse.persistence.jpa.jpql.parser.AbstractTraverseChildrenVisitor.visit(AbstractTraverseChildrenVisitor.java:32) at org.eclipse.persistence.jpa.jpql.parser.AnonymousExpressionVisitor.visit(AnonymousExpressionVisitor.java:302) at org.eclipse.persistence.jpa.jpql.parser.JPQLExpression.accept(JPQLExpression.java:136) at org.eclipse.persistence.internal.jpa.jpql.JPQLQueryHelper.getClassDescriptors(JPQLQueryHelper.java:87) at org.eclipse.persistence.internal.jpa.metadata.queries.NamedQueryMetadata.addJPAQuery(NamedQueryMetadata.java:105) at org.eclipse.persistence.internal.jpa.metadata.queries.NamedQueryMetadata.process(NamedQueryMetadata.java:192) ... 54 more<code> entity bean /* * To change this template, choose Tools | Templates * and open the template in the editor. */package entities;import java.io.Serializable;import java.util.Collection;import java.util.Date;import javax.persistence.Basic;import javax.persistence.Column;import javax.persistence.Entity;import javax.persistence.Id;import javax.persistence.NamedQueries;import javax.persistence.NamedQuery;import javax.persistence.OneToMany;import javax.persistence.Table;import javax.persistence.Temporal;import javax.persistence.TemporalType;import javax.validation.constraints.NotNull;import javax.validation.constraints.Size;import javax.xml.bind.annotation.XmlRootElement;import javax.xml.bind.annotation.XmlTransient;/** * * @author John624 */@Entity@Table(name = "Adresse")@XmlRootElement@NamedQueries({ @NamedQuery(name = "Adresse.maxId", query = "SELECT max(idAdresse) FROM Adresse"), @NamedQuery(name = "Adresse.findAll", query = "SELECT a FROM Adresse a"), @NamedQuery(name = "Adresse.findByIdAdresse", query = "SELECT a FROM Adresse a WHERE a.idAdresse = :idAdresse"), @NamedQuery(name = "Adresse.findByNumEtRue", query = 
"SELECT a FROM Adresse a WHERE a.numEtRue = :numEtRue"), @NamedQuery(name = "Adresse.findByComple", query = "SELECT a FROM Adresse a WHERE a.comple = :comple"), @NamedQuery(name = "Adresse.findByCodePostale", query = "SELECT a FROM Adresse a WHERE a.codePostale = :codePostale"), @NamedQuery(name = "Adresse.findByVille", query = "SELECT a FROM Adresse a WHERE a.ville = :ville"), @NamedQuery(name = "Adresse.findByPays", query = "SELECT a FROM Adresse a WHERE a.pays = :pays"), @NamedQuery(name = "Adresse.findByDateModif", query = "SELECT a FROM Adresse a WHERE a.dateModif = :dateModif")})public class Adresse implements Serializable { private static final long serialVersionUID = 1L; @Id @Basic(optional = false) @NotNull @Column(name = "idAdresse") private Long idAdresse; @Size(max = 100) @Column(name = "numEtRue") private String numEtRue; @Size(max = 100) @Column(name = "comple") private String comple; @Size(max = 5) @Column(name = "codePostale") private String codePostale; @Size(max = 35) @Column(name = "ville") private String ville; @Size(max = 35) @Column(name = "pays") private String pays; @Column(name = "dateModif") @Temporal(TemporalType.DATE) private Date dateModif; @OneToMany(mappedBy = "adrU") private Collection<Utilisateur> utilisateurCollection; @OneToMany(mappedBy = "adrRecep") private Collection<Livraison> livraisonCollection; @OneToMany(mappedBy = "adrE") private Collection<Entreprise> entrepriseCollection; public Adresse() { } public Adresse(Long idAdresse) { this.idAdresse = idAdresse; } public Long getIdAdresse() { return idAdresse; } public void setIdAdresse(Long idAdresse) { this.idAdresse = idAdresse; } public String getNumEtRue() { return numEtRue; } public void setNumEtRue(String numEtRue) { this.numEtRue = numEtRue; } public String getComple() { return comple; } public void setComple(String comple) { this.comple = comple; } public String getCodePostale() { return codePostale; } public void setCodePostale(String codePostale) { this.codePostale = 
codePostale; } public String getVille() { return ville; } public void setVille(String ville) { this.ville = ville; } public String getPays() { return pays; } public void setPays(String pays) { this.pays = pays; } public Date getDateModif() { return dateModif; } public void setDateModif(Date dateModif) { this.dateModif = dateModif; } @XmlTransient public Collection<Utilisateur> getUtilisateurCollection() { return utilisateurCollection; } public void setUtilisateurCollection(Collection<Utilisateur> utilisateurCollection) { this.utilisateurCollection = utilisateurCollection; } @XmlTransient public Collection<Livraison> getLivraisonCollection() { return livraisonCollection; } public void setLivraisonCollection(Collection<Livraison> livraisonCollection) { this.livraisonCollection = livraisonCollection; } @XmlTransient public Collection<Entreprise> getEntrepriseCollection() { return entrepriseCollection; } public void setEntrepriseCollection(Collection<Entreprise> entrepriseCollection) { this.entrepriseCollection = entrepriseCollection; } @Override public int hashCode() { int hash = 0; hash += (idAdresse != null ? idAdresse.hashCode() : 0); return hash; } @Override public boolean equals(Object object) { // TODO: Warning - this method won't work in the case the id fields are not set if (!(object instanceof Adresse)) { return false; } Adresse other = (Adresse) object; if ((this.idAdresse == null && other.idAdresse != null) || (this.idAdresse != null && !this.idAdresse.equals(other.idAdresse))) { return false; } return true; } @Override public String toString() { return "entities.Adresse[ idAdresse=" + idAdresse + " ]"; }} session bean /* * To change this template, choose Tools | Templates * and open the template in the editor. 
*/package session; import entities.Adresse;import java.util.List;import javax.ejb.Stateless;import javax.ejb.LocalBean;import javax.persistence.EntityManager;import javax.persistence.PersistenceContext;import javax.persistence.Query;/** * * @author John624 */@Stateless@LocalBeanpublic class AdresseManager { @PersistenceContext(unitName = "ECOM-ejbPU") private EntityManager em; public List<Adresse> getAllAdresses() { Query query=em.createNamedQuery("Adresse.findAll"); return query.getResultList(); } public Adresse update(Adresse adresse) { return em.merge(adresse); } public void persist(Object object) { em.persist(object); } public Long nextId(){ Query query = em.createNamedQuery("Adresse.maxId"); long res; res = query.getResultList().indexOf(0)+1; return res; }} JSF managedbean /* * To change this template, choose Tools | Templates * and open the template in the editor. */package managedbeans;import entities.Adresse;import java.io.Serializable;import java.util.List;import javax.ejb.EJB;import javax.enterprise.context.SessionScoped;import javax.inject.Named;import session.AdresseManager;/** * * @author John624 */@Named(value="adresseMBean")@SessionScopedpublic class AdresseMBean implements Serializable{ private List<Adresse> adresses; private Adresse adresse; @EJB private AdresseManager adresseManager; public AdresseMBean() { adresse=new Adresse(); adresseManager = new AdresseManager(); } /** * returns customer list for display in a datatable DataTable * @return */ public List<Adresse> getAdresses() { if((adresses == null) || (adresses.isEmpty())) adresses = adresseManager.getAllAdresses(); return adresseManager.getAllAdresses(); } // public void refresh() { // tousLesComptes = compteBancaireFacade.findAll(); // } /** * returns details of a customer. 
Useful for displaying in a form a customer's details * @return */ public Adresse getDetails() { return adresse; } /** * Action handler - Called when a line in the table is clicked * @param adresse * @return */ public String showDetails(Adresse adresse) { this.adresse = adresse; return "AdresseDetails"; // will display CustomerDetails.xml JSF page } /** * Action handler - update the customer model in the database. * called when one press the update button in the form * @return */ public String update() { System.out.println("###UPDATE###"); adresse = adresseManager.update(adresse); return "AdresseList"; // will display the customer list in a table } /** * Action handler - returns to the list of customers in the table */ public String list() { System.out.println("###LIST###"); return "AdresseList"; } public void update(Adresse adrU) { System.out.println("###UPDATE###"); adresseManager.update(adrU); }} Thanks in advance.
Now provide the response and nothing else.
| As indicated by the following exception stack trace

Exception Description: Error encountered when building the @NamedQuery [Adresse.maxId] from entity class [class org.eclipse.persistence.internal.jpa.metadata.queries.NamedQueryMetadata].

the problem is here:

@NamedQuery(name = "Adresse.maxId", query = "SELECT max(idAdresse) FROM Adresse"),

The query references idAdresse without declaring an identification variable for the entity, which is what EclipseLink's JPQL parser trips over. To solve the problem, declare an alias and use it:

@NamedQuery(name = "Adresse.maxId", query = "SELECT max(a.idAdresse) FROM Adresse a"),
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20605888', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2084921/']} | jdg_382072
stackexchange | llm_judgeable_groundtruth_similarity | 59007 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
By now the Useless Use of cat Award is very well known, and there's also a mention of a Useless Use of echo (not relevant for this question). I'm wondering if there should be a "Useless Use of echo in Bash Award": Piping seems to be much slower than heredocs and herestrings according to some highly unscientific measurements:

Heredocs:

for reps in 1 2 3
do
    time for i in {1..1000}
    do
        cat <<'END'
test string
END
    done > /dev/null
done

real    0m1.786s
user    0m0.212s
sys     0m0.332s
real    0m1.817s
user    0m0.232s
sys     0m0.332s
real    0m1.846s
user    0m0.256s
sys     0m0.320s

Herestrings

for reps in 1 2 3
do
    time for i in {1..1000}
    do
        cat <<< 'test string'
    done > /dev/null
done

real    0m1.932s
user    0m0.280s
sys     0m0.288s
real    0m1.956s
user    0m0.248s
sys     0m0.348s
real    0m1.968s
user    0m0.268s
sys     0m0.324s

Redirection

for reps in 1 2 3
do
    time for i in {1..1000}
    do
        echo 'test string' | cat
    done > /dev/null
done

real    0m3.562s
user    0m0.416s
sys     0m0.548s
real    0m3.924s
user    0m0.384s
sys     0m0.604s
real    0m3.343s
user    0m0.400s
sys     0m0.552s

In general, heredocs and herestrings are about the same speed (this is just one data set from several tests) while redirection is consistently more than 50% slower. Am I misunderstanding something, or could this be used as a general rule for commands reading standard input in Bash?
Now provide the response and nothing else.
| First, let's concentrate on performance. I ran benchmarks for a slightly different program on an otherwise mostly idle x86_64 processor running Debian squeeze.

herestring.bash, using a herestring to pass a line of input:

#! /bin/bash
i=0
while [ $i -lt $1 ]; do
  tr a-z A-Z <<<'hello world'
  i=$((i+1))
done >/dev/null

heredoc.bash, using a heredoc to pass a line of input:

#! /bin/bash
i=0
while [ $i -lt $1 ]; do
  tr a-z A-Z <<'EOF'
hello world
EOF
  i=$((i+1))
done >/dev/null

echo.bash, using echo and a pipe to pass a line of input:

#! /bin/bash
i=0
while [ $i -lt $1 ]; do
  echo 'hello world' | tr a-z A-Z
  i=$((i+1))
done >/dev/null

For comparison, I also timed the scripts under ATT ksh93 and under dash (except for herestring.bash, because dash doesn't have herestrings). Here are median-of-three times:

$ time bash ./herestring.bash 10000
./herestring.bash 10000  0.32s user 0.79s system 15% cpu 7.088 total
$ time ksh ./herestring.bash 10000
ksh ./herestring.bash 10000  0.54s user 0.41s system 17% cpu 5.277 total
$ time bash ./heredoc.bash 10000
./heredoc.bash 10000  0.35s user 0.75s system 17% cpu 6.406 total
$ time ksh ./heredoc.bash 10000
ksh ./heredoc.sh 10000  0.54s user 0.44s system 19% cpu 4.925 total
$ time sh ./heredoc.bash 10000
./heredoc.sh 10000  0.08s user 0.58s system 12% cpu 5.313 total
$ time bash ./echo.bash 10000
./echo.bash 10000  0.36s user 1.40s system 20% cpu 8.641 total
$ time ksh ./echo.bash 10000
ksh ./echo.sh 10000  0.47s user 1.51s system 28% cpu 6.918 total
$ time sh ./echo.sh 10000
./echo.sh 10000  0.07s user 1.00s system 16% cpu 6.463 total

Conclusions:

A heredoc is faster than a herestring.
echo and a pipe is noticeably, but not dramatically, slower (compare the timings above). (Keep in mind that this is a toy program: in a real program, most of the processing time would be in whatever the tr call stands for here.)
If you want speed, ditch bash and call dash or even better ksh instead. Bash's features don't make up for its relative slowness, but ksh has both features and speed.
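To see that the three constructs really deliver the same bytes on standard input, here is a quick equivalence check (a sketch assuming bash, since <<< is not POSIX; the strings are illustrative):

```shell
#!/bin/bash
# Feed the same line to tr via heredoc, herestring, and pipe, then compare.
out_heredoc=$(tr a-z A-Z <<'EOF'
hello world
EOF
)
out_herestring=$(tr a-z A-Z <<<'hello world')
out_pipe=$(echo 'hello world' | tr a-z A-Z)

printf '%s\n' "$out_heredoc" "$out_herestring" "$out_pipe"
```

All three print HELLO WORLD: the constructs differ in cost and portability, not in what the reading command sees.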
Beyond performance, there's also clarity and portability. <<< is a ksh93/bash/zsh extension which is less well-known than echo … | or <<. It doesn't work in ksh88/pdksh or in POSIX sh. The only place where <<< is arguably significantly clearer is inside a command substitution:

foo=$(tr a-z A-Z <<<'hello world')

vs

foo=$(tr a-z A-Z <<'EOF'
hello world
EOF
)

(Most shells can't cope with closing the parenthesis at the end of the line containing <<EOF.)
| {} | {'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/59007', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/3645/']} | jdg_382073
stackexchange | llm_judgeable_groundtruth_similarity | 107859 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Hello everyone, this is a puzzle I got from someone via social media. Basically, we need to fill up the boxes with the numbers 1-9 (no repetitions) so that they fit the multiplication and addition operations. I managed to solve this puzzle by using a brute-force method in Excel+VBA. However, it would be very interesting if it could be solved in Mathematica, given its strength as computational software. Any idea will be appreciated. Thanks.
Now provide the response and nothing else.
| A non brute-force approach is the following, similar to my answer for the Zebra Puzzle. Both puzzles are examples of constraint satisfaction problems that can be solved with Reduce / Minimize / Maximize or, more efficiently, with LinearProgramming. The good thing about this approach is that you can easily extend and apply it to many similar problems.

The common part:

Assign an index $i$ to each box from top left, $i=1,2,\ldots,9$. In each box you should put a digit $k$, $k=1,\ldots,9$. Assign an index $l$ to the whole number/row, $l=1,\ldots,5$.

the variable x[i,k] is $1$ if there is the digit $k$ in the cell $i$ and $0$ otherwise.
d[i] is the digit in cell $i$.
n[l] is the whole number in the row $l$ (one or two cells).

The easier and slower approach is with Maximize. Build constraints and pass to Maximize with a constant objective function, so Maximize will try only to satisfy constraints. Constraints are:

n[1] * n[2] == n[3]
n[3] + n[4] == n[5]
each cell should be filled with exactly one digit
each digit should be placed in exactly one cell
0 <= x[i,k] <= 1, with x[i,k] an integer

That's all.

d[i_] := Sum[x[i, k] k, {k, 9}]
n[l_] := FromDigits[d /@ {{1, 2}, {3}, {4, 5}, {6, 7}, {8, 9}}[[l]]]
solution = Last@Maximize[{0, {
     n[1]*n[2] == n[3],
     n[3] + n[4] == n[5],
     Table[Sum[x[i, k], {k, 9}] == 1, {i, 9}],
     Table[Sum[x[i, k], {i, 9}] == 1, {k, 9}],
     Thread[0 <= Flatten@Array[x, {9, 9}] <= 1]}},
   Flatten@Array[x, {9, 9}], Integers];
Array[n, 5] /. solution

{17, 4, 68, 25, 93}

Not fast (not linear). A faster approach is to use LinearProgramming, but you need to:

change the first constraint so that it becomes linear
manually build matrix and vectors input for LinearProgramming (see docs)

The next piece of code does that. Please note that the single non-linear constraint n[1]*n[2] == n[3] has been replaced with 18 linear "conditional" constraints.
d[i_] := Sum[x[i, k] k, {k, 9}]
n[l_] := FromDigits[d /@ {{1, 2}, {3}, {4, 5}, {6, 7}, {8, 9}}[[l]]]
vars = Flatten@Array[x, {9, 9}];
constraints = Flatten@{
    Table[{
      k n[1] >= n[3] - 75 (1 - x[3, k]),
      k n[1] <= n[3] + 859 (1 - x[3, k])
      }, {k, 9}],
    n[3] + n[4] == n[5],
    Table[Sum[x[i, k], {k, 9}] == 1, {i, 9}],
    Table[Sum[x[i, k], {i, 9}] == 1, {k, 9}]};
bm = CoefficientArrays[Equal @@@ constraints, vars];
solution = LinearProgramming[
   Table[0, Length@vars],
   bm[[2]],
   Transpose@{-bm[[1]],
     constraints[[All, 0]] /. {LessEqual -> -1, Equal -> 0, GreaterEqual -> 1}},
   Table[{0, 1}, Length@vars],
   Integers
   ];
Array[n, 5] /. Thread[vars -> solution]

{17, 4, 68, 25, 93}

The execution is now almost instantaneous.
| {} | {'log_upvote_score': 6, 'links': ['https://mathematica.stackexchange.com/questions/107859', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/37759/']} | jdg_382074
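As an independent sanity check on the constraint model, a brute-force sweep over all digit permutations (sketched here in Python rather than Mathematica, purely for illustration) finds the same assignment:

```python
from itertools import permutations

# Boxes read as: AB * C = DE, then DE + FG = HI, digits 1-9 used exactly once.
solutions = []
for a, b, c, d, e, f, g, h, i in permutations(range(1, 10)):
    n1, n2, n3, n4, n5 = 10*a + b, c, 10*d + e, 10*f + g, 10*h + i
    if n1 * n2 == n3 and n3 + n4 == n5:
        solutions.append((n1, n2, n3, n4, n5))

print(solutions)
```

This recovers (17, 4, 68, 25, 93), matching the Array[n, 5] /. solution output above.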
stackexchange | llm_judgeable_groundtruth_similarity | 25951739 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to join the elements of a String array via the reduce function. I've tried for a bit now, but I can't figure out what exactly the problem is. This is what I believe should do the trick. I have tried other alternatives too, but given the huge amount I will wait for some input:

var genres = ["towel", "42"]
var jointGenres : String = genres.reduce(0, combine: { $0 + "," + $1 })

Error:

..:14:44: Cannot invoke '+' with an argument list of type '(IntegerLiteralConvertible, combine: (($T6, ($T6, $T7) -> ($T6, $T7) -> $T5) -> ($T6, ($T6, $T7) -> $T5) -> $T5, (($T6, $T7) -> ($T6, $T7) -> $T5, $T7) -> (($T6, $T7) -> $T5, $T7) -> $T5) -> (($T6, ($T6, $T7) -> $T5) -> $T5, (($T6, $T7) -> $T5, $T7) -> $T5) -> $T5)'

From my understanding, $0 should be inferred as a String and $1, by combination with $0, should result as a String too. I don't know what's the deal with the type system here. Any idea?
Now provide the response and nothing else.
| Your reduce closure should probably look like this: var jointGenres : String = genres.reduce("", combine: { $0 == "" ? $1 : $0 + "," + $1 }) This has the "" instead of 0 like you had, and makes sure that there is no extra comma in the beginning of the return value. The original code did not work because the return type that is represented as U in documentation was originally 0 in your answer, while you are trying to add a String to it. In your case, you really want both U and T to represent Strings instead of Ints. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/25951739', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1093494/']} | jdg_382075 |
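The accumulator trick in the accepted closure (seed with the empty string, and only insert the separator once the accumulator is non-empty) is language-agnostic. Here is the same fold sketched in Python, just to illustrate the semantics; a standard library join would of course be the idiomatic fix in real code:

```python
from functools import reduce

genres = ["towel", "42"]

# Same logic as the Swift closure: seed with "", skip the comma on the first element.
joint = reduce(lambda acc, g: g if acc == "" else acc + "," + g, genres, "")
print(joint)  # → towel,42
```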
stackexchange | llm_judgeable_groundtruth_similarity | 3144954 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In a combinatorial math problem, I found the need to have a short form for $\sum_{k=1}^n {2n-k \choose n}$ .I searched it on Wolfram|Alpha and it gave me the result $\frac{2n{2n-1 \choose n}}{n+1}$ which indeed solves my problem, but no steps were available and I wasn't able to find a way to prove it by myself. I tried by writing the sum term by term and using ${n \choose k} = {n! \over k!(n-k)!}$ but it overcomplicated everything. Could anyone help me with this? I'm sorry if this is a duplicate. As far as I searched, I haven't found any question regarding this sum, but maybe I haven't searched enough.
Now provide the response and nothing else.
| You can use the Hockey-stick identity . Using $\binom{i}{j}+\binom{i}{j+1} =\binom{i+1}{j+1}$ recursively, we have $$\begin{align*}\sum_{k=1}^n {2n-k \choose n} &= \color{red}{\binom{n}{n}}+\binom{n+1}{n}+\cdots+\binom{2n-1}{n}\\&=\color{red}{\binom{n+1}{n+1}}+\binom{n+1}{n}+\cdots+\binom{2n-1}{n}\\&=\binom{n+2}{n+1}+\binom{n+2}{n}+\cdots+\binom{2n-1}{n}\\&=\binom{n+3}{n+1}+\binom{n+3}{n}+\cdots+\binom{2n-1}{n}\\&=\cdots\\&=\binom{2n}{n+1} =\frac{2n\binom{2n-1}{n}}{n+1}.\end{align*}$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3144954', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/550228/']} | jdg_382076 |
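The closed form is easy to spot-check numerically; a short sketch (Python, with math.comb, purely illustrative):

```python
from math import comb

def lhs(n):
    # sum_{k=1}^{n} C(2n - k, n)
    return sum(comb(2 * n - k, n) for k in range(1, n + 1))

def rhs(n):
    # 2n * C(2n-1, n) / (n+1), which the derivation shows equals C(2n, n+1)
    return 2 * n * comb(2 * n - 1, n) // (n + 1)

for n in range(1, 16):
    assert lhs(n) == rhs(n) == comb(2 * n, n + 1)
print("verified for n = 1..15")
```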
stackexchange | llm_judgeable_groundtruth_similarity | 34582405 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am building a small react app and my local images won't load. Images like placehold.it/200x200 load. I thought maybe it could be something with the server? Here is my App.js

import React, { Component } from 'react';

class App extends Component {
  render() {
    return (
      <div className="home-container">
        <div className="home-content">
          <div className="home-text">
            <h1>foo</h1>
          </div>
          <div className="home-arrow">
            <p className="arrow-text">
              Vzdělání
            </p>
            <img src={"/images/resto.png"} />
          </div>
        </div>
      </div>
    );
  }
}

export default App;

index.js:

import React, { Component } from 'react';
import { render } from 'react-dom';
import { Router, Route, Link } from 'react-router';
import { createHistory } from 'history';
import App from './components/app';

let history = createHistory();

render(
  <Router history={history} >
    <Route path="/" component={App} >
      <Route path="vzdelani" component="" />
      <Route path="znalosti" component="" />
      <Route path="prace" component="" />
      <Route path="kontakt" component="" />
    </Route>
    <Route path="*" component="" />
  </Router>,
  document.getElementById('app')
);

and server.js:

var path = require('path');
var express = require('express');
var webpack = require('webpack');
var config = require('./webpack.config.dev');

var app = express();
var compiler = webpack(config);

app.use(require('webpack-dev-middleware')(compiler, {
  noInfo: true,
  publicPath: config.output.publicPath
}));

app.use(require('webpack-hot-middleware')(compiler));

app.get('*', function(req, res) {
  res.sendFile(path.join(__dirname, 'index.html'));
});

app.listen(3000, 'localhost', function(err) {
  if (err) {
    console.log(err);
    return;
  }
  console.log('Listening at http://localhost:3000');
});
Now provide the response and nothing else.
| When using Webpack you need to require images in order for Webpack to process them, which would explain why external images load while internal do not, so instead of <img src={"/images/resto.png"} /> you need to use <img src={require('/images/image-name.png')} /> replacing image-name.png with the correct image name for each of them. That way Webpack is able to process and replace the source img. | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/34582405', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5547218/']} | jdg_382077 |
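Note that require() on an image only resolves if webpack has a loader configured for that file type. The answer doesn't show the config, so here is a hypothetical fragment in the webpack 1.x style of this question's era; the loader name and limit are assumptions, not taken from the answer:

```
// webpack.config.js (fragment); assumes url-loader (or file-loader) is installed
module: {
  loaders: [
    { test: /\.(png|jpg|gif)$/, loader: 'url-loader?limit=8192' }
  ]
}
```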
stackexchange | llm_judgeable_groundtruth_similarity | 2558893 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'd like to be able to do that kind of selection:

$('input[value between 6 and 11]');

Which would give me the 9 and 10. Is there a way to do that?
Now provide the response and nothing else.
| You can do this with .filter() like this:

$("input").filter(function() {
    return $(this).val() >= 6 && $(this).val() <= 11;
}).somethingHere();

If you need to use this often, you could make it a selector as well:

jQuery.expr[':'].between = function(a, b, c) {
    var args = c[3].split(',');
    var val = parseInt(jQuery(a).val());
    return val >= parseInt(args[0]) && val <= parseInt(args[1]);
};

Then you can select by:

$("input:between(6, 11)")

Just change the >= and <= if you don't want the range to be inclusive.
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2558893', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/305189/']} | jdg_382078
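The range check itself is plain JavaScript; stripped of jQuery, the filter logic looks like this (a framework-free sketch, names illustrative):

```javascript
// Mirrors the .filter() callback above on a plain array of input values.
function between(values, lo, hi) {
  return values.filter(function (v) {
    var n = parseInt(v, 10);
    return n >= lo && n <= hi;
  });
}

console.log(between(["5", "6", "9", "10", "11", "12"], 6, 11)); // → ["6", "9", "10", "11"]
```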
stackexchange | llm_judgeable_groundtruth_similarity | 1608842 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I had a strange bug when porting a feature to the Python 3.1 fork of my program. I narrowed it down to the following hypothesis: In contrast to Python 2.x, in Python 3.x if an object has an __eq__ method it is automatically unhashable. Is this true? Here's what happens in Python 3.1:

>>> class O(object):
...     def __eq__(self, other):
...         return 'whatever'
...
>>> o = O()
>>> d = {o: 0}
Traceback (most recent call last):
  File "<pyshell#16>", line 1, in <module>
    d = {o: 0}
TypeError: unhashable type: 'O'

The follow-up question is, how do I solve my personal problem? I have an object ChangeTracker which stores a WeakKeyDictionary that points to several objects, giving for each the value of their pickle dump at a certain time point in the past. Whenever an existing object is checked in, the change tracker says whether its new pickle is identical to its old one, therefore saying whether the object has changed in the meantime. Problem is, now I can't even check if the given object is in the library, because it makes it raise an exception about the object being unhashable. (Cause it has a __eq__ method.) How can I work around this?
Now provide the response and nothing else.
| Yes, if you define __eq__ , the default __hash__ (namely, hashing the address of the object in memory) goes away. This is important because hashing needs to be consistent with equality: equal objects need to hash the same. The solution is simple: just define __hash__ along with defining __eq__ . | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/1608842', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/76701/']} | jdg_382079 |
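A minimal sketch of that fix (the class and fields are illustrative): hash the same state that __eq__ compares, so equal objects hash equal and instances stay usable as dict keys:

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)

    def __hash__(self):
        # Must be consistent with __eq__: equal objects -> equal hashes.
        return hash((self.x, self.y))

d = {Point(1, 2): "value"}
print(Point(1, 2) in d)  # → True: hashable again, and lookup works by value
```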
stackexchange | llm_judgeable_groundtruth_similarity | 32933174 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
If a GitLab project is configured on GitLab CI, is there a way to run the build locally? I don't want to turn my laptop into a build "runner", I just want to take advantage of Docker and .gitlab-ci.yml to run tests locally (i.e. it's all pre-configured). Another advantage of that is that I'm sure that I'm using the same environment locally and on CI. Here is an example of how to run Travis builds locally using Docker , I'm looking for something similar with GitLab.
Now provide the response and nothing else.
| Since a few months ago this is possible using gitlab-runner:

gitlab-runner exec docker my-job-name

Note that you need both docker and gitlab-runner installed on your computer to get this working. You also need the image key defined in your .gitlab-ci.yml file. Otherwise it won't work.

Here's the line I currently use for testing locally using gitlab-runner:

gitlab-runner exec docker test --docker-volumes "/home/elboletaire/.ssh/id_rsa:/root/.ssh/id_rsa:ro"

Note: You can avoid adding a --docker-volumes with your key by setting it by default in /etc/gitlab-runner/config.toml. See the official documentation for more details. Also, use gitlab-runner exec docker --help to see all docker-based runner options (like variables, volumes, networks, etc.).

Due to the confusion in the comments, I paste here the gitlab-runner --help result, so you can see that gitlab-runner can make builds locally:

gitlab-runner --help
NAME:
   gitlab-runner - a GitLab Runner

USAGE:
   gitlab-runner [global options] command [command options] [arguments...]

VERSION:
   1.1.0~beta.135.g24365ee (24365ee)

AUTHOR(S):
   Kamil Trzciński <[email protected]>

COMMANDS:
   exec         execute a build locally
   [...]

GLOBAL OPTIONS:
   --debug      debug mode [$DEBUG]
   [...]

As you can see, the exec command is there to execute a build locally. Even though there was an issue to deprecate the current gitlab-runner exec behavior, it ended up being reconsidered and a new version with greater features will replace the current exec functionality.

Note that this process is to use your own machine to run the tests using docker containers. This is not to define custom runners. To do so, just go to your repo's CI/CD settings and read the documentation there. If you wanna ensure your runner is executed instead of one from gitlab.com, add a custom and unique tag to your runner, ensure it only runs tagged jobs and tag all the jobs you want your runner to be responsible of.
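For gitlab-runner exec docker test to find a job, the .gitlab-ci.yml needs a job with that name plus the image key mentioned above. A hypothetical minimal sketch (the image and script are placeholders, not from the answer):

```
image: alpine:3.4          # required for `exec docker`; any Docker image works

test:                      # job name passed to `gitlab-runner exec docker test`
  script:
    - echo "running the test job locally"
```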
| {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/32933174', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/245552/']} | jdg_382080 |
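The answer notes that `gitlab-runner exec docker` needs an `image` key defined in `.gitlab-ci.yml`. A minimal sketch of such a file (the image and job names here are hypothetical, not taken from the answer):

```yaml
image: ruby:2.7        # required: exec docker needs an image to start the container

test:                  # job name passed on the command line
  script:
    - bundle install
    - bundle exec rspec
```

With a file like this in the repository root, `gitlab-runner exec docker test` runs the `test` job in a local Docker container.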
stackexchange | llm_judgeable_groundtruth_similarity | 1884041 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm seeking a portable way to receive the (handy) $_SERVER['PATH_INFO'] variable. After reading for a while, it turns out PATH_INFO originated in CGI/1.1 and may not always be present in all configurations. What is the best (mostly security-wise) way to get that variable - apart from extracting it manually (a security concern)?
Now provide the response and nothing else.
| Well, I'm (almost) sure that without making use of the $_SERVER superglobal keys, providing an alternative way to figure out PATH_INFO is just impossible. That being said, let's first list all of the $_SERVER keys that we may possibly use: 'PHP_SELF' 'QUERY_STRING' 'SCRIPT_FILENAME' 'PATH_TRANSLATED' 'SCRIPT_NAME' 'REQUEST_URI' 'PATH_INFO' 'ORIG_PATH_INFO' We obviously need to ignore the last two. Now we should ( I don't know this for a fact, I'm just assuming because you said so ) filter all the keys that exist in the link you provided ( which BTW is offline ATM ); that leaves us with the following keys: 'PHP_SELF' 'SCRIPT_FILENAME' 'REQUEST_URI' Regarding your comment to Anthony's answer : You are just juggling variables now. SCRIPT_FILENAME is a part of the CGI spec. It will not be available if PATH_INFO is unavailable. As for REQUEST_URI , it's apache's mod_rewrite specific. – LiraNuna I'm running LightTPD/1.4.20-1 (Win32) with PHP 5.3.0 as CGI, cgi.fix_pathinfo = 1 and $_SERVER['REQUEST_URI'] is very available to me , I also remember using that same variable back in the days when no one used mod_rewrite so my honest humble guess is that you're plain wrong on this point . Regarding the SCRIPT_FILENAME key I'm unable to test that one out ATM. 
Still, if we close our eyes really hard and believe that you're right, that leaves us with only one variable: 'PHP_SELF' I'm not trying to be harsh here (and I still believe that there are more solutions) but if PHP_SELF is the only key you want us to work with ( assuming there are no impositions on PHP_SELF itself ) there is only one solution left: function PATH_INFO(){ if (array_key_exists('PATH_INFO', $_SERVER) === true) { return $_SERVER['PATH_INFO']; } $whatToUse = basename(__FILE__); // see below return substr($_SERVER['PHP_SELF'], strpos($_SERVER['PHP_SELF'], $whatToUse) + strlen($whatToUse));} This function should work; however, there may be some problems using the __FILE__ constant since it returns the path to the file where the __FILE__ constant is declared and not the path to the requested PHP script , and that's what the $whatToUse is there for: so you can replace it with 'SCRIPT_FILENAME' or if you really believe in what you are saying, just use '.php' . You should also read this regarding why not to use PHP_SELF . If this doesn't work for you, I'm sorry, but I can't think of anything else. EDIT - Some more reading for you: Drupal request_uri() (why do they keep saying REQUEST_URI is Apache specific?) PHP_SELF vs PATH_INFO vs SCRIPT_NAME vs REQUEST_URI | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1884041', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/41983/']} | jdg_382081 |
stackexchange | llm_judgeable_groundtruth_similarity | 42308244 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a helper function in my component. When I console.log(helperFunction()) it, I get this in the console. When I try to add the helper function to an input field for its value. I get this showing up. How do I get the [[PromiseValue]] in my input? render() { console.log(getProjectName()); return ( <form ref={(input) => this.eventForm = input} onSubmit={(e) => this.createEvent(e)} className="slds-form"> <div className="slds-form-element"> <label className="slds-form-element__label">Assigned To</label> <div className="slds-form-element__control"> <input ref={(input) => this.assigned = input} type="text" className="slds-input" disabled/> </div> </div> <div className="slds-form-element"> <label className="slds-form-element__label">Related To</label> <div className="slds-form-element__control"> <input ref={(input) => this.related = input} type="text" className="slds-input" defaultValue={getProjectName()} disabled/> </div> </div> <div className="slds-form-element"> <label className="slds-form-element__label">Subject</label> <div className="slds-form-element__control"> <input ref={(input) => this.subject = input} type="text" className="slds-input"/> </div> </div> <div className="slds-form-element"> <label className="slds-form-element__label">Location</label> <div className="slds-form-element__control"> <input ref={(input) => this.location = input} type="text" className="slds-input" /> </div> </div> <div className="slds-form-element"> <label className="slds-form-element__label">Event Start</label> <div className="slds-form-element__control"> <input ref={(input) => this.start = input} type="text" onChange={(e) => this.onChange(e)} className="slds-input" value={ this.state.start }/> </div> </div> <div className="slds-form-element"> <label className="slds-form-element__label">Event End</label> <div className="slds-form-element__control"> <input ref={(input) => this.end = input} type="text" onChange={(e) => this.onChange(e)} className="slds-input" value={ this.state.end } 
/> </div> </div> <div className="slds-form-element"> <label className="slds-form-element__label">Contact</label> <div className="slds-form-element__control"> <input ref={(input) => this.contact = input} type="text" className="slds-input" disabled/> </div> </div> <button type="button" className="slds-button slds-button--neutral">Cancel</button> <button type="submit" className="slds-button slds-button--brand">Create</button> </form> ); } Helper Function import axios from 'axios'export function getProjectName() { return new Promise(function(resolve,reject){ // gets the record id from the current url function getQueryVariable(variable) { var query = window.location.search.substring(1); var vars = query.split("&"); for (var i=0;i<vars.length;i++) { var pair = vars[i].split("="); if(pair[0] == variable){return pair[1];} } return(false); } // used to access the rest api on my system const REST_API_URL = restApiUrl; const API_TOKEN = { headers: { "Authorization" : "Bearer " + sessionId, "Content-Type" : "application/json" } } const OPPORTUNITY_QUERY = "SELECT+Id,Name+FROM+OPPORTUNITY+WHERE+Id=" // get projectId const id = getQueryVariable('projectId'); // make requst for record name var request = axios.get(`${REST_API_URL}query/?q=${OPPORTUNITY_QUERY}'${id}'`, API_TOKEN ).then(function (response){ return resolve(response.data.records[0].Name); }) })}
Now provide the response and nothing else.
| When dealing with a value that the render method will be using and that is retrieved asynchronously, you should have that value exist in the state of the component and take advantage of the componentDidMount lifecycle method to retrieve the value. class SomeComponent extends React.Component { constructor() { super(); this.state = { projectName: '' } } componentDidMount() { // fetch the project name; once it resolves, update the state. this.getProjectName().then(result => this.setState({ projectName: result })) } getProjectName() { // replace with whatever your api logic is. return api.call.something() } render() { return ( <input type="text" defaultValue={this.state.projectName} /> ) } } You don't want to call the function and resolve the promise in the render method because the render method should be a pure function based on state and props. This is a base example but should give you an idea of what needs to change. You just need to set projectName as a state variable and make and resolve the promise in componentDidMount . On the first render it will equal an empty string; once the call comes back it will update to whatever the api returns. If you don't want to show the input until the api call resolves then you can just add additional checks to see if this.state.projectName equals anything and if so render the input. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42308244', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1086315/']} | jdg_382082 |
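The [[PromiseValue]] the questioner sees in the console is a giveaway that the helper returns a Promise object, not a string. A plain-JavaScript sketch of that distinction, runnable outside React (the helper body below is a hypothetical stand-in for the real axios call):

```javascript
// Hypothetical stand-in for getProjectName(): like the real helper, it
// returns a Promise rather than a string.
function getProjectName() {
  return new Promise((resolve) => resolve("Demo Project"));
}

// Calling it synchronously (as in render) yields the Promise object itself --
// this is what would end up as the input's value.
const value = getProjectName();
console.log(value instanceof Promise); // true

// The string only exists once the promise resolves, so it must be consumed
// in a callback (in React: inside componentDidMount, followed by setState).
getProjectName().then((name) => console.log(name)); // "Demo Project"
```

The same reasoning explains why the answer moves the call into componentDidMount and stores the resolved value in state.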
stackexchange | llm_judgeable_groundtruth_similarity | 72264 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a Windows C# program that uses a C++ dll for data i/o. My goal is to deploy the application as a single EXE. What are the steps to create such an executable?
Now provide the response and nothing else.
| Single Assembly Deployment of Managed and Unmanaged Code - Sunday, February 4, 2007 .NET developers love XCOPY deployment. And they love single assembly components. At least I always feel kinda uneasy if I have to use some component and need to remember a list of files to also include with the main assembly of that component. So when I recently had to develop a managed code component and had to augment it with some unmanaged code from a C DLL (thx to Marcus Heege for helping me with this!), I thought about how to make it easier to deploy the two DLLs. If this were just two assemblies I could have used ILMerge to pack them up in just one file. But this doesn't work for mixed code components with managed as well as unmanaged DLLs. So here's what I came up with for a solution: I include whatever DLLs I want to deploy with my component's main assembly as embedded resources. Then I set up a class constructor to extract those DLLs like below. The class ctor is called just once within each AppDomain so it's a negligible overhead, I think. namespace MyLib{ public class MyClass { static MyClass() { ResourceExtractor.ExtractResourceToFile("MyLib.ManagedService.dll", "managedservice.dll"); ResourceExtractor.ExtractResourceToFile("MyLib.UnmanagedService.dll", "unmanagedservice.dll"); } ... In this example I included two DLLs as resources, one being an unmanaged code DLL and one being a managed code DLL (just for demonstration purposes), to show how this technique works for both kinds of code. 
The code to extract the DLLs into files of their own is simple: public static class ResourceExtractor{ public static void ExtractResourceToFile(string resourceName, string filename) { if (!System.IO.File.Exists(filename)) using (System.IO.Stream s = System.Reflection.Assembly.GetExecutingAssembly().GetManifestResourceStream(resourceName)) using (System.IO.FileStream fs = new System.IO.FileStream(filename, System.IO.FileMode.Create)) { byte[] b = new byte[s.Length]; s.Read(b, 0, b.Length); fs.Write(b, 0, b.Length); } }} Working with a managed code assembly like this is the same as usual - almost. You reference it (here: ManagedService.dll) in your component's main project (here: MyLib), but set the Copy Local property to false. Additionally you link in the assembly as an Existing Item and set the Build Action to Embedded Resource. For the unmanaged code (here: UnmanagedService.dll) you just link in the DLL as an Existing Item and set the Build Action to Embedded Resource. To access its functions use the DllImport attribute as usual, e.g. [DllImport("unmanagedservice.dll")] public extern static int Add(int a, int b); That's it! As soon as you create the first instance of the class with the static ctor the embedded DLLs get extracted into files of their own and are ready to use as if you deployed them as separate files. As long as you have write permissions for the execution directory this should work fine for you. At least for prototypical code I think this way of single assembly deployment is quite convenient. Enjoy! http://weblogs.asp.net/ralfw/archive/2007/02/04/single-assembly-deployment-of-managed-and-unmanaged-code.aspx | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/72264', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12113/']} | jdg_382083 |
stackexchange | llm_judgeable_groundtruth_similarity | 9705637 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When attempting to connect to an MSSQL database via ASP.NET online, I will get the following when two or more people connect simultaneously: ExecuteReader requires an open and available Connection. The connection's current state is Connecting. The site works fine on my localhost server. This is the rough code. public Promotion retrievePromotion(){ int promotionID = 0; string promotionTitle = ""; string promotionUrl = ""; Promotion promotion = null; SqlOpenConnection(); SqlCommand sql = SqlCommandConnection(); sql.CommandText = "SELECT TOP 1 PromotionID, PromotionTitle, PromotionURL FROM Promotion"; SqlDataReader dr = sql.ExecuteReader(); while (dr.Read()) { promotionID = DB2int(dr["PromotionID"]); promotionTitle = DB2string(dr["PromotionTitle"]); promotionUrl = DB2string(dr["PromotionURL"]); promotion = new Promotion(promotionID, promotionTitle, promotionUrl); } dr.Dispose(); sql.Dispose(); CloseConnection(); return promotion;} May I know what might have gone wrong and how I can fix it? Edit: Not to forget, my connection string and connection are both static. I believe this is the reason. Please advise. public static string conString = ConfigurationManager.ConnectionStrings["dbConnection"].ConnectionString; public static SqlConnection conn = null;
Now provide the response and nothing else.
| Sorry for only commenting in the first place, but I'm posting a similar comment almost every day since many people think it would be smart to encapsulate ADO.NET functionality into a DB-Class (me too, 10 years ago). Mostly they decide to use static/shared objects since it seems to be faster than creating a new object for every action. That is neither a good idea in terms of performance nor in terms of fail-safety. Don't poach on the Connection-Pool's territory There's a good reason why ADO.NET internally manages the underlying Connections to the DBMS in the ADO.NET Connection-Pool : In practice, most applications use only one or a few different configurations for connections. This means that during application execution, many identical connections will be repeatedly opened and closed. To minimize the cost of opening connections, ADO.NET uses an optimization technique called connection pooling. Connection pooling reduces the number of times that new connections must be opened. The pooler maintains ownership of the physical connection. It manages connections by keeping alive a set of active connections for each given connection configuration. Whenever a user calls Open on a connection, the pooler looks for an available connection in the pool. If a pooled connection is available, it returns it to the caller instead of opening a new connection. When the application calls Close on the connection, the pooler returns it to the pooled set of active connections instead of closing it. Once the connection is returned to the pool, it is ready to be reused on the next Open call. So obviously there's no reason to avoid creating, opening or closing connections since actually they aren't created, opened and closed at all. This is "only" a flag for the connection pool to know when a connection can be reused or not. But it's a very important flag, because if a connection is "in use" (the connection pool assumes), a new physical connection must be opened to the DBMS, which is very expensive. 
So you're gaining no performance improvement but the opposite. If the maximum pool size specified (100 is the default) is reached, you would even get exceptions (too many open connections ...). So this will not only impact performance tremendously but also be a source of nasty errors and (without using Transactions) a data-dumping-area. If you're even using static connections you're creating a lock for every thread trying to access this object. ASP.NET is a multithreading environment by nature. So there's a great chance for these locks, which cause performance issues at best. Actually sooner or later you'll get many different exceptions (like your ExecuteReader requires an open and available Connection ). Conclusion : Don't reuse connections or any ADO.NET objects at all. Don't make them static/shared (in VB.NET). Always create, open (in case of Connections), use, close and dispose them where you need them (e.g. in a method). Use the using-statement to dispose and close (in case of Connections) implicitly. That's true not only for Connections (although most noticeable). Every object implementing IDisposable should be disposed (simplest via a using-statement ), all the more in the System.Data.SqlClient namespace. All the above speaks against a custom DB-Class which encapsulates and reuses all objects. That's the reason why I commented to trash it. That's only a source of problems. 
Edit : Here's a possible implementation of your retrievePromotion -method: public Promotion retrievePromotion(int promotionID){ Promotion promo = null; var connectionString = System.Configuration.ConfigurationManager.ConnectionStrings["MainConnStr"].ConnectionString; using (SqlConnection connection = new SqlConnection(connectionString)) { var queryString = "SELECT PromotionID, PromotionTitle, PromotionURL FROM Promotion WHERE PromotionID=@PromotionID"; using (var da = new SqlDataAdapter(queryString, connection)) { // you could also use a SqlDataReader instead // note that a DataTable does not need to be disposed since it does not implement IDisposable var tblPromotion = new DataTable(); // avoid SQL-Injection da.SelectCommand.Parameters.Add("@PromotionID", SqlDbType.Int); da.SelectCommand.Parameters["@PromotionID"].Value = promotionID; try { connection.Open(); // not necessarily needed in this case because DataAdapter.Fill does it otherwise da.Fill(tblPromotion); if (tblPromotion.Rows.Count != 0) { var promoRow = tblPromotion.Rows[0]; promo = new Promotion() { promotionID = promotionID, promotionTitle = promoRow.Field<String>("PromotionTitle"), promotionUrl = promoRow.Field<String>("PromotionURL") }; } } catch (Exception ex) { // log this exception or throw it up the StackTrace // we do not need a finally-block to close the connection since it will be closed implicitly in an using-statement throw; } } } return promo;} | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/9705637', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/641123/']} | jdg_382084 |
stackexchange | llm_judgeable_groundtruth_similarity | 7694215 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
To create a UIImage with an image file, I use the code below: UIImage *aImage = [[UIImage imageNamed:@"demo.jpg"]autorelease]; If I want to create a UIImage with the URL http://example.com/demo.jpg , how do I do that? Thanks UPDATE
Now provide the response and nothing else.
| This is a three step process. First you will create an NSURL object to hold the URL we are attempting to access. We will supply this URL to the NSData class method, +dataWithContentsOfURL: to obtain the image over the network as raw data, then use the +imageWithData: class method on UIImage to convert the data into an image. NSURL *imageURL = [NSURL URLWithString:@"http://example.com/demo.jpg"];NSData *imageData = [NSData dataWithContentsOfURL:imageURL];UIImage *image = [UIImage imageWithData:imageData]; Please note that +dataWithContentsOfURL: executes a synchronous network request. If you run this on the main thread, it will block the UI until the image data is received from the network. Best practice is to run any network code on a background thread. If you're targeting OS 4.0+ you could do something like this... NSURL *imageURL = [NSURL URLWithString:@"http://example.com/demo.jpg"];dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{ NSData *imageData = [NSData dataWithContentsOfURL:imageURL]; dispatch_async(dispatch_get_main_queue(), ^{ // Update the UI self.imageView.image = [UIImage imageWithData:imageData]; });}); | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/7694215', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/495452/']} | jdg_382085 |
stackexchange | llm_judgeable_groundtruth_similarity | 42170127 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In my React Native app, I am fetching images from an API with unknown dimensions. How do I auto scale the height if I know my desired width? Example: I set the width to Dimensions.get('window').width . How do I set the height and keep the same ratio? export default class MyComponent extends Component { constructor(props) { super(props) this.state = { imgUrl: 'http://someimg.com/coolstuff.jpg' } } componentDidMount() { // sets the image url to state this.props.getImageFromAPi() } render() { return ( <View> <Image source={{ uri: this.state.imgUrl }} style={styles.myImg} /> <Text>Some description</Text> </View> ) }} const styles = StyleSheet.create({ myImg: { width: Dimensions.get('window').width, height: >>>???what goes here???<<< } })
Now provide the response and nothing else.
| Try this: import React, { Component, PropTypes } from "react"; import { Image } from "react-native"; export default class ScaledImage extends Component { constructor(props) { super(props); this.state = { source: { uri: this.props.uri } }; } componentWillMount() { Image.getSize(this.props.uri, (width, height) => { if (this.props.width && !this.props.height) { this.setState({ width: this.props.width, height: height * (this.props.width / width) }); } else if (!this.props.width && this.props.height) { this.setState({ width: width * (this.props.height / height), height: this.props.height }); } else { this.setState({ width: width, height: height }); } }); } render() { return ( <Image source={this.state.source} style={{ height: this.state.height, width: this.state.width }} /> ); } } ScaledImage.propTypes = { uri: PropTypes.string.isRequired, width: PropTypes.number, height: PropTypes.number }; I'm passing the URL as a prop called uri . You can specify your width prop as Dimensions.get('window').width and that should cover it. Note that this will also work if you know what you want to set the height to and you need to resize the width to maintain the ratio. In that case, you would specify the height prop instead of the width one. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/42170127', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3833246/']} | jdg_382086 |
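The scaling rule used inside componentWillMount above is plain arithmetic and can be checked without React. A sketch of the same rule as a pure function (the function name is mine, not part of the answer):

```javascript
// Given an image's natural size and one desired dimension, compute the other
// dimension so the aspect ratio is preserved (the rule ScaledImage applies).
function scaledSize(naturalWidth, naturalHeight, desired) {
  if (desired.width && !desired.height) {
    return { width: desired.width, height: naturalHeight * (desired.width / naturalWidth) };
  }
  if (!desired.width && desired.height) {
    return { width: naturalWidth * (desired.height / naturalHeight), height: desired.height };
  }
  return { width: naturalWidth, height: naturalHeight };
}

// An 800x600 image constrained to width 400 keeps its 4:3 ratio:
console.log(scaledSize(800, 600, { width: 400 })); // { width: 400, height: 300 }
```

In the component, `naturalWidth`/`naturalHeight` come from the `Image.getSize` callback and the desired dimension from props.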
stackexchange | llm_judgeable_groundtruth_similarity | 16440863 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I can get the image width through MediaStore.Images.Media normally, but I need to get the image width and height of an image selected from Dropbox, so currently I have the following method to get the image size from Dropbox: private String getDropboxIMGSize(Uri uri){ String size = Long.toString(new File(uri.getPath()).length()); return size;} but what I actually need is to get the file's width and height values. Does anyone know how to achieve that? Please help!
Now provide the response and nothing else.
| private void getDropboxIMGSize(Uri uri){ BitmapFactory.Options options = new BitmapFactory.Options(); options.inJustDecodeBounds = true; BitmapFactory.decodeFile(new File(uri.getPath()).getAbsolutePath(), options); int imageHeight = options.outHeight; int imageWidth = options.outWidth; } No, there is no way. You have to create a Bitmap object. If you use the inJustDecodeBounds flag the bitmap will not be loaded in memory. In fact BitmapFactory.decodeFile will return null. In my example uri is the physical path to the image. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/16440863', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1882156/']} | jdg_382087 |
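BitmapFactory is Android-only, so the snippet above cannot be tried on a desktop JVM. The same idea — read only the image header and skip decoding the pixels — can be sketched on desktop Java with javax.imageio (an analogue for illustration; javax.imageio is not available on Android and is not a substitute for the answer's code):

```java
import javax.imageio.ImageIO;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;
import java.awt.image.BufferedImage;
import java.io.File;
import java.util.Iterator;

public class ImageSize {
    // Returns {width, height} by reading the image header only, without
    // decoding the pixel data -- analogous to inJustDecodeBounds on Android.
    static int[] dimensions(File file) throws Exception {
        try (ImageInputStream in = ImageIO.createImageInputStream(file)) {
            Iterator<ImageReader> readers = ImageIO.getImageReaders(in);
            if (!readers.hasNext()) {
                throw new IllegalArgumentException("unsupported image format");
            }
            ImageReader reader = readers.next();
            try {
                reader.setInput(in);
                return new int[] { reader.getWidth(0), reader.getHeight(0) };
            } finally {
                reader.dispose();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Write a small PNG to a temp file so the example is self-contained.
        File tmp = File.createTempFile("demo", ".png");
        tmp.deleteOnExit();
        ImageIO.write(new BufferedImage(64, 48, BufferedImage.TYPE_INT_RGB), "png", tmp);
        int[] wh = dimensions(tmp);
        System.out.println(wh[0] + "x" + wh[1]);
    }
}
```

As with inJustDecodeBounds, no full bitmap is ever materialized; only the header metadata is parsed.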
stackexchange | llm_judgeable_groundtruth_similarity | 19323138 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Program is in C using std=c99, this is on a 64-bit machine. struct epochs { volatile unsigned int epoch : 1; volatile unsigned int pulse : 1; volatile unsigned int active0 : 7; volatile unsigned int active1 : 7; volatile unsigned int counter0 : 24; volatile unsigned int counter1 : 24; }; when I check sizeof(epochs) it gives me 12. I can tell gcc not to pad it by adding __attribute((packed)); so I can work around it. However I would really like to know why 4 bytes are added to pad this 64-bit structure? The main thing here is that this structure NEEDS to be 64 bits because it is updated all at once in 64-bit atomic swap operations, which of course will not work on a 12-byte value.
Now provide the response and nothing else.
| volatile unsigned int epoch : 1; volatile unsigned int pulse : 1; volatile unsigned int active0 : 7; volatile unsigned int active1 : 7; ^ 32-bit (4 bytes) volatile unsigned int counter0 : 24; ^ 32-bit (4 bytes) volatile unsigned int counter1 : 24; ^ 32-bit (4 bytes) So 4 bytes more. C says : (C99, 6.7.2.1p10) "If enough space remains, a bit-field that immediately follows another bit-field in a structure shall be packed into adjacent bits of the same unit" There is not enough space to put 24 more bits ( counter0 ) in a 32-bit unit (likely the size of unsigned int on your system) that already holds 16 bits ( epoch , pulse , active0 , active1 ). You can use uint64_t instead of unsigned int to pack your bit-fields in a 64-bit unit, but it is implementation-defined whether your system supports it or not. (C99, 6.7.2.1p4) "A bit-field shall have a type that is a qualified or unqualified version of _Bool, signed int, unsigned int, or some other implementation-defined type ." | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19323138', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/550852/']} | jdg_382088 |
stackexchange | llm_judgeable_groundtruth_similarity | 362824 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Context I recently read about the object-oriented technique of making a distinction between objects and passive data structures , best summarized in Clean Code: "Objects hide their data [...] and expose functions [...]. Data structures expose their data and have no meaningful functions." I am considering using C# structs for passive data structures . To clarify this: IF part of my code needs to function as a passive data structure , THEN I want to use a struct for that. Advantages It would provide a language-given distinction between objects and passive data structures . Also, if an object has a private field of a class , but exposes that in a function, it can be changed somewhere else. That means the object data is changed from the outside, which is not good. I know you should e.g. expose an internal List as ReadOnlyList , but that is good practice , which even good programmers don't always follow. Using structs instead would automatically enforce this. What I found out so far I know the question " When to use struct " is already answered several times. The answers always boil down to the advice from the official docs: AVOID defining a struct unless the type has all of the following characteristics: It logically represents a single value, similar to primitive types (int, double, etc.). It has an instance size under 16 bytes. It is immutable. It will not have to be boxed frequently. I think the first 2 points are for improving performance on the stack. However, as far as I understand structs are better on the stack, but not worse on the heap. Point 3 I can still adhere to. Might make the code cleaner, might make it more awkward, but I don't know yet. Point 4 is also about performance improvements, but I also actually don't need a lot of performance. Even if, at this point that would be early, early optimization - I'm not working with big data here. 
With a name like that, I want to think structs are exactly the thing to use for object oriented passive data structures . The documentation from the official docs makes me doubt that though, especially the size limitation. Even 2 strings for an address with 2 rows would already be too much. The question Are there other arguments against using structs for these passive data structures? Or did I understand something wrong? Example public struct EmployeeId // data structure (exposed data, no functions){ public string Value;}public struct Address // data structure{ public string Line1; public string Line2;}public struct Performance // data structure{ public int Successes; public int Failures;}public struct Employee : IEquatable<Employee> // data structure{ public EmployeeId Id; public Address Address; public Performance Performance; public bool Equals(Employee employee) { return Id.Equals(employee.Id); }}public class OfficialEmployeeRegistry // object (hidden data, exposed functions){ private Dictionary<EmployeeId, Employee> _employees; public void Add(Employee employee) { _employees.Add(employee.Id, employee); } public List<Employee> GetPositivePerformers() {...}}public class SantaClause // object{ private OfficialEmployeeRegistry _employeeRegistry; private PresentSender _presentSender; public void SendChristmasPresents() { List<Employee> goodEmployees = _employeeRegistry.GetPositivePerformers(); foreach(Employee employee in goodEmployees) { _presentSender.SendPresent(employee.Address); } }} All structs in this code are examples of what I want to do. For example, we can get the performance of an employee from the OfficialEmployeeRegistry now. We can send that data to a printer class, but if that class changes it in the process, the entries in the OfficialEmployeeRegistry are protected. OfficialEmployeeRegistry data will only be manipulated by itself. Oh, and the structs are supposed to be immutable of course, but I feel adding a constructor to each would bloat this post too much. 
Reaction to comments Do you require data serialization? No. Will this need to be passed into and from functions/methods? Yes. Will it be iterated and modified on a fairly significant basis? No. I guess this is about performance, but performance is definitely not an issue.
Now provide the response and nothing else.
| You're confusing the struct keyword with the term "data structure." They are two very different things. Greatly simplifying, a struct in C# is basically a class with value semantics, whereas a data structure is a collection of data with storage and retrieval algorithms that have specific performance characteristics. "Objects hide their data [...] and expose functions [...]. Data structures expose their data and have no meaningful functions." Given that definition, what you're looking for is not a struct , but rather a class with data members only (no functions). struct s have value semantics, which is probably not what you want to put in a collection unless you really are collecting values. Accordingly, using the struct keyword to signal that you're collecting data with no behavior is probably not a good approach. Competent software developers already know that their data structures are going to contain classes of objects; your use of a language keyword that suggests otherwise will be confusing to them. Use struct to indicate value semantics, not to indicate "data-only objects." See Also Data Transfer Object Data Structure Passive Data Structure | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/362824', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/226041/']} | jdg_382089 |
stackexchange | llm_judgeable_groundtruth_similarity | 2248413 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
During my course in linear algebra, the instructor stated that A cross B is the same as the "skew symmetric matrix" of A times B. So, first of all, can someone clarify or provide sources about skew symmetric matrices? Secondly, I can't really comprehend the idea of how a single column vector crossed with another could be represented by a matrix. Anyhow, thanks in advance!
Now provide the response and nothing else.
| The skew-symmetric tensor product of two vectors with components $A_i$ and $B_i$ is the tensor represented by the matrix with components $S_{ij}=A_iB_j - A_jB_i$. It is skew-symmetric (antisymmetric) because $S_{ij}=-S_{ji}$. The advantage of this representation is that unlike the vector cross product, which is specific to three dimensions, the skew-symmetric product generalizes the concept to arbitrary dimensions. Explicitly (in three dimensions), $$A_iB_j-A_jB_i=\begin{pmatrix}0&A_1B_2-A_2B_1&A_1B_3-A_3B_1\\A_2B_1-A_1B_2&0&A_2B_3-A_3B_2\\A_3B_1-A_1B_3&A_3B_2-A_2B_3&0\end{pmatrix}.$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2248413', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/309859/']} | jdg_382090 |
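A quick numerical check of the relationship from the question (not part of the original answer): in three dimensions, multiplying the skew-symmetric matrix built from $A$ by the vector $B$ reproduces $A\times B$, and the tensor $S_{ij}=A_iB_j-A_jB_i$ from the answer is antisymmetric. A plain-Python sketch:

```python
# Cross product and the skew-symmetric matrix [A]_x, checked numerically.
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def skew(a):
    a1, a2, a3 = a
    return [[0, -a3, a2],
            [a3, 0, -a1],
            [-a2, a1, 0]]

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

A, B = [1, 2, 3], [4, 5, 6]
print(cross(A, B))         # [-3, 6, -3]
print(matvec(skew(A), B))  # [-3, 6, -3], i.e. [A]_x B = A x B

# The answer's tensor S_ij = A_i B_j - A_j B_i is antisymmetric:
S = [[A[i]*B[j] - A[j]*B[i] for j in range(3)] for i in range(3)]
assert all(S[i][j] == -S[j][i] for i in range(3) for j in range(3))
```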
stackexchange | llm_judgeable_groundtruth_similarity | 3692591 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The following program shows that we can use return or pthread_exit to return a void* variable that is available to pthread_join's status variable. Should there be a preference for using one over the other? Why does using return work? Normally we think of return putting a value on the stack but since the thread is completed the stack should vanish. Or does the stack not get destroyed until after pthread_join? In your work, do you see much use of the status variable? It seems 90% of the code I see just NULLs out the status parameter. Since anything changed via the void* ptr is already reflected in the calling thread there doesn't seem much point to returning it. Any new void* ptr returned would have to point to something malloc'ed by the start thread, which leaves the receiving thread with the responsibility to dispose of it. Am I wrong in thinking the status variable is semi-pointless? Here is the code:

#include <iostream>
#include <pthread.h>

using namespace std;

struct taskdata
{
    int x;
    float y;
    string z;
};

void* task1(void *data)
{
    taskdata *t = (taskdata *) data;
    t->x += 25;
    t->y -= 4.5;
    t->z = "Goodbye";
    return(data);
}

void* task2(void *data)
{
    taskdata *t = (taskdata *) data;
    t->x -= 25;
    t->y += 4.5;
    t->z = "World";
    pthread_exit(data);
}

int main(int argc, char *argv[])
{
    pthread_t threadID;
    taskdata t = {10, 10.0, "Hello"};
    void *status;

    cout << "before " << t.x << " " << t.y << " " << t.z << endl;

    //by return()
    pthread_create(&threadID, NULL, task1, (void *) &t);
    pthread_join(threadID, &status);
    taskdata *ts = (taskdata *) status;
    cout << "after task1 " << ts->x << " " << ts->y << " " << ts->z << endl;

    //by pthread_exit()
    pthread_create(&threadID, NULL, task2, (void *) &t);
    pthread_join(threadID, &status);
    ts = (taskdata *) status;
    cout << "after task2 " << ts->x << " " << ts->y << " " << ts->z << endl;
}

With output of:

before 10 10 Hello
after task1 35 5.5 Goodbye
after task2 10 10 World
Now provide the response and nothing else.
| (1) In C++ code, using return causes the stack to be unwound and local variables destroyed, whereas pthread_exit is only guaranteed to invoke cleanup handlers registered with pthread_cleanup_push(). On some systems this mechanism will also cause the destructors for C++ local variables to be called, but this is not guaranteed for portable code --- check your platform documentation. Also, in main(), return will implicitly call exit(), and thus terminate the program, whereas pthread_exit() will merely terminate the thread, and the program will remain running until all threads have terminated or some thread calls exit(), abort() or another function that terminates the program.
(2) The use of return works because the POSIX specification says so. The returned value is stored in a place where pthread_join() can retrieve it. The resources used by the thread are not reclaimed until pthread_join() is called.
(3) I never use the return value of a thread in raw POSIX threads. However, I tend to use higher level facilities such as the Boost thread library, and more recently the C++0x thread library, which provide alternative means for transferring values between threads such as futures, which avoid the problems associated with memory management that you allude to. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3692591', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/291763/']} | jdg_382091 |
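Point (3) mentions futures as the higher-level way to get a value out of a thread. For readers unfamiliar with the idea, here is the same pattern sketched with Python's concurrent.futures (an illustration only; it is not the Boost/C++ API the answer refers to):

```python
# The thread's return value travels through a future, so there is no
# void* status to cast and no malloc'ed buffer to worry about freeing.
from concurrent.futures import ThreadPoolExecutor

def task(data):
    data = dict(data)        # work on a copy; nothing shared is mutated
    data["x"] += 25
    data["z"] = "Goodbye"
    return data              # plain return; the runtime stores it for us

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(task, {"x": 10, "z": "Hello"})
    result = future.result() # blocks like pthread_join, then yields the value

print(result)                # {'x': 35, 'z': 'Goodbye'}
```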
stackexchange | llm_judgeable_groundtruth_similarity | 8537 |
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is the name "Dihydrogen monoxide" actually what chemists would use to refer to $\ce{H2O}$ (assuming there was no common name, "water")? Of course, this is all over the internet. I'm a little skeptical though because the similar chemical $\ce{H2S}$ is called "hydrogen sulfide", not "dihydrogen monosulfide".
Now provide the response and nothing else.
| No, it's not. The "dihydrogen monoxide" name is used as part of a hoax. In the scientific community, there are chemical names for water, and which one is used in the literature generally depends on how it interacts with something else (hydroxic acid and hydrogen hydroxide were two I heard most often in acid-base reactions). IUPAC, the standards committee that sets standard names for chemical structures, suggests "oxidane" as a starting point for the construction of other names for chemicals that are derived from water. However, they do not suggest "oxidane" itself be used to refer to plain water. Most chemists would use "water", even when writing scientific papers. | {} | {'log_upvote_score': 6, 'links': ['https://chemistry.stackexchange.com/questions/8537', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/4553/']} | jdg_382092 |
stackexchange | llm_judgeable_groundtruth_similarity | 10332 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was driving down the road at roughly the speed of traffic. I saw a police officer parked on the side of the road, and also noticed that a Semi was traveling in the lane right next to him. This got me thinking: is it possible to avoid being seen by the officer by using the Semi truck, and how fast would I have to go? Let's say the officer is parked on the side of the road and the semi is traveling in the lane next to the officer. Let's also assume that the truck is traveling at a roughly constant 65 mph. Also assume that I see the officer and line up my car to block the officer's view via the semi at roughly a mile away. (Ya I know, a little far). What I want to know: How fast do I have to go to avoid being seen by the officer? Is this speed constant or variable? Note: I am in NO way advocating speeding on the highways and breaking the law. This question was just sparked as it somewhat happened while I was driving down the road.
Now provide the response and nothing else.
| What you want to do is keep the angle between your direction of motion and the line of sight to the police car the same as the angle between the truck's direction of motion and the truck's line of sight to the police car. In other words, we want to keep $a1=a2$ in the picture above. This is a problem in similar triangles. The answer will be that the ratio of your motion to your distance to the cop will have to be the same as the ratio of the truck's motion to the truck's distance to the cop. This can be seen from the following: We note that $$\cot{(a1)}=\frac{\textrm{Truck's Speed}}{d1}$$ $$\cot{(a2)}=\frac{\textrm{Car's Speed}}{d2}$$ Set $a1=a2$ so we are always hidden behind the truck and solve for $\textrm{Car's Speed}$: $$a1=a2$$ $$\cot{(a1)}=\cot{(a2)}$$ $$\frac{\textrm{Truck's Speed}}{d1}=\frac{\textrm{Car's Speed}}{d2}$$ $$\textrm{Car's Speed}=\frac{\textrm{Truck's Speed}\times d2}{d1}$$ So, suppose the lanes are the same size (10 feet wide, say), and the cop is 5 feet off of the highway. Then, $d1=5\textrm{ feet}$, $d2=15\textrm{ feet}$. The speed you need if the truck is going 65 miles per hour is $$\textrm{Car's Speed}=\frac{65 \textrm{ mph}\times 15}{5}=195\textrm{ mph}$$ Edit: Some concerns were raised in the comments that this treats the truck as a point. This turns out not to matter. Here's a second picture like the first, but now we have a zone (colored in green) which the truck covers. The green triangle gives you a little bit of wiggle room, since you can be covered by the front of the truck or the back or anything in between. However, the total size of your wiggle room does not change while you move (in other words, it doesn't depend on a4). As a result, it should be pretty clear that this doesn't change things much at all - we can think of it as two point-size trucks going at the same speed, and we have to stay between them. 
There is actually one small change, as David notes: If you start out covered by the front of the truck, you can go a little slower than the 195 mph cited above, because you can slowly slide back until covered by the back of the truck. However, if the length of the truck is $L_{tr}=40\textrm{ feet}$ (say), then this change in the velocity is quite small. For example, suppose that we slide back 40 feet from the front of the truck to the back over the course of a mile. We're going 195 miles per hour, so it takes us 18.5 seconds to go one mile. In those 18.5 seconds, we move 40 feet relative to the truck; this is a speed of about 1.5 mph. So, we can go 1.5 mph slower if we start at the front and go to the back over a mile; taking this into consideration, we get that the speed needed is actually 193.5 mph . | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/10332', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/249/']} | jdg_382093 |
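The arithmetic above is easy to double-check. An illustrative script using the same assumed distances (d1 = 5 ft, d2 = 15 ft, a 40 ft truck, one mile of cover):

```python
truck_speed, d1, d2 = 65.0, 5.0, 15.0   # mph, feet, feet

# Similar triangles: Car's Speed = Truck's Speed * d2 / d1
car_speed = truck_speed * d2 / d1
print(car_speed)                         # 195.0

# Wiggle room from the truck's 40 ft length, spent over one mile:
seconds_per_mile = 3600.0 / car_speed    # about 18.46 s to cover the mile
slide_fps = 40.0 / seconds_per_mile      # ft/s we may drift back along the truck
slide_mph = slide_fps * 3600.0 / 5280.0  # convert ft/s to mph
print(round(slide_mph, 1))               # 1.5
print(round(car_speed - slide_mph, 1))   # 193.5, matching the answer
```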
stackexchange | llm_judgeable_groundtruth_similarity | 44944581 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using ARKit (with SceneKit) to add a virtual object (e.g. a ball). I am tracking a real world object (e.g. a foot) by using the Vision framework and receiving its updated position in the vision request completion handler method.

let request = VNTrackObjectRequest(detectedObjectObservation: lastObservation, completionHandler: self.handleVisionRequestUpdate)

I want to replace the tracked real world object with a virtual one (for example replace the foot with a cube), but I am not sure how to convert the boundingBox rect (which we receive in the vision request completion) into a SceneKit node, as the coordinate systems are different. Below is the code of the vision request completion handler:

private func handleVisionRequestUpdate(_ request: VNRequest, error: Error?) {
    // Dispatch to the main queue because we are touching non-atomic, non-thread safe properties of the view controller
    DispatchQueue.main.async {
        // make sure we have an actual result
        guard let newObservation = request.results?.first as? VNDetectedObjectObservation else { return }

        // prepare for next loop
        self.lastObservation = newObservation

        // check the confidence level before updating the UI
        guard newObservation.confidence >= 0.3 else { return }

        // calculate view rect
        var transformedRect = newObservation.boundingBox

        // How to convert transformedRect into an AR coordinate?
        self.node.position = SCNVector3Make(?.worldTransform.columns.3.x,
                                            ?.worldTransform.columns.3.y,
    }
}

Please guide me on how to transfer between the coordinate systems.
Now provide the response and nothing else.
| Assuming the rectangle is on a horizontal plane, you can perform a hit test against the scene on all 4 corners and use 3 of those corners to calculate the width, height, center, and orientation of the rectangle. I have a demo app available on GitHub that does exactly that: https://github.com/mludowise/ARKitRectangleDetection The coordinates for the rectangle corners from VNRectangleObservation will be relative to the size of the image and in different coordinates depending on the phone's rotation. You'll need multiply them by the view size and invert them based on the phone's rotation: func convertFromCamera(_ point: CGPoint, view sceneView: ARSCNView) -> CGPoint { let orientation = UIApplication.shared.statusBarOrientation switch orientation { case .portrait, .unknown: return CGPoint(x: point.y * sceneView.frame.width, y: point.x * sceneView.frame.height) case .landscapeLeft: return CGPoint(x: (1 - point.x) * sceneView.frame.width, y: point.y * sceneView.frame.height) case .landscapeRight: return CGPoint(x: point.x * sceneView.frame.width, y: (1 - point.y) * sceneView.frame.height) case .portraitUpsideDown: return CGPoint(x: (1 - point.y) * sceneView.frame.width, y: (1 - point.x) * sceneView.frame.height) }} Then you can perform a hit test on all 4 corners. It's important to use the type .existingPlaneUsingExtent when performing the hit test so that ARKit returns hits for horizontal planes. let tl = sceneView.hitTest(convertFromCamera(rectangle.topLeft, view: sceneView), types: .existingPlaneUsingExtent)let tr = sceneView.hitTest(convertFromCamera(rectangle.topRight, view: sceneView), types: .existingPlaneUsingExtent)let bl = sceneView.hitTest(convertFromCamera(rectangle.bottomLeft, view: sceneView), types: .existingPlaneUsingExtent)let br = sceneView.hitTest(convertFromCamera(rectangle.bottomRight, view: sceneView), types: .existingPlaneUsingExtent) Then it gets a little complicated... 
Because each hit test could return with 0 to n results, you will need to filter out any hit tests that are contained on a different plane. You can do this by comparing the anchors for each ARHitTestResult : hit1.anchor == hit2.anchor Also, you only need 3 out of 4 corners to identify the rectangle's dimensions, position, and orientation so it's okay if one corner doesn't return any hit test results. Take a look here for how I did that. You can calculate the rectangle's width from the distance between the left and right corners (for either top or bottom). Likewise you can calculate the rectangle's height from the distance between the top & bottom corners (for either left or right). func distance(_ a: SCNVector3, from b: SCNVector3) -> CGFloat { let deltaX = a.x - b.x let deltaY = a.y - b.y let deltaZ = a.z - b.z return CGFloat(sqrt(deltaX * deltaX + deltaY * deltaY + deltaZ * deltaZ))}let width = distance(right, from: left)let height = distance(top, from: bottom) You can calculate its position by getting the midpoint from the opposite corners of the rectangle (either topLeft & bottomRight or topRight & bottomLeft): let midX = (c1.x + c2.x) / 2let midY = (c1.y + c2.y) / 2let midZ = (c1.z + c2.z) / 2let center = SCNVector3Make(midX, midY, midZ) You can also calculate the orientation of the rectangle (rotation along the y-axis) from the left and right corners (for either top or bottom): let distX = right.x - left.xlet distZ = right.z - left.zlet orientation = -atan(distZ / distX) Then put that all together and display something in AR overlaid on the rectangle. 
Here's an example of displaying a virtual rectangle by subclassing SCNNode : class RectangleNode: SCNNode { init(center: SCNVector3, width: CGFloat, height: CGFloat, orientation: Float) { super.init() // Create the 3D plane geometry with the dimensions calculated from corners let planeGeometry = SCNPlane(width: width, height: height) let rectNode = SCNNode(geometry: planeGeometry) // Planes in SceneKit are vertical by default so we need to rotate // 90 degrees to match planes in ARKit var transform = SCNMatrix4MakeRotation(-Float.pi / 2.0, 1.0, 0.0, 0.0) // Set rotation to the corner of the rectangle transform = SCNMatrix4Rotate(transform, orientation, 0, 1, 0) rectNode.transform = transform // We add the new node to ourself since we inherited from SCNNode self.addChildNode(rectNode) // Set position to the center of rectangle self.position = center }} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/44944581', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3009236/']} | jdg_382094 |
stackexchange | llm_judgeable_groundtruth_similarity | 92710 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a way I can capture the passwords or hashes being used by a "dictionary attack" against my ssh server? I would like to see what they are trying, to be able to better guard against it.
Now provide the response and nothing else.
| I believe you can do this with strace against the ssh daemon. See this example / script. I think this will probably slow the ssh daemon down. It will show the actual password, not the hash. The core of that example is (need to be root most likely):

strace -f -etrace=write -s 64 -p $ssh_pid 2>&1

My test with the above command, where $ssh_pid is the pid of /usr/sbin/sshd:

ssh localhost
kbrandt@localhost's password:
Permission denied, please try again.
...
[pid 14742] write(4, "\0\0\0\10foobazes"..., 12) = 12 | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/92710', 'https://serverfault.com', 'https://serverfault.com/users/1466/']} | jdg_382095 |
stackexchange | llm_judgeable_groundtruth_similarity | 25104623 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
A while back I had to use a jQuery plugin in my project. I needed some different functionality, so I rewrote the plugin and a few days back I published a fork on GitHub. I wanted to add the package to the bower repository.

The forked repository

I added a bower.json file to the repository and registered the package with the usual "bower register" command. The problem is, when I try to install my package, bower installs the original script and not the fork.

What I already tried: At first I thought it's because I didn't make a release, so I fixed that part. But it didn't help. I also tried to change the version number to the version number of the original script, with no luck. So maybe the bower.json file I wrote was not well written, right? My next attempt was using Bower to make a proper bower.json file for me using "bower init". No luck. So what could I be doing wrong?

The GitHub help page defines a fork as a method to use someone else's project as a starting point for your own idea. That was my intention, since I rewrote the plugin to be OO oriented and added some functionality, but 80% of the code used is still from the original plugin and it didn't feel right to just make a new repository. Should I instead make a new repository, and will registering my repo with Bower work then? What is the usual approach if you did some medium to major changes to a repository? Do you fork it or publish a new repo? Do you still make a pull request even if the changes are bigger?
Now provide the response and nothing else.
| You don't need to create a new repository. A fork will work fine. But you can't overload on someone else's registered package name with bower. It does look like you've changed the name from onepage-scroll to onepage-scroll-extended though. If you want to figure out what Bower knows about your package, do:

bower info onepage-scroll-extended

{
  name: 'onepage-scroll-extended',
  homepage: 'https://github.com/itd24/onepage-scroll-extended',
  version: '1.1.1'
}

Available versions:
- 1.1.1
- 1.0.1

Here you can see that it does not have the full bower.json manifest information and the latest information that it has is for version 1.1.1 (not 1.1.3, your latest). This is because you don't have a v1.1.3 tag in your repository's master branch. I can see a v1.1.1 and v1.2 tag, but no v1.1.3 tag. Create that tag and push it up to GitHub to enable you to bower install that new version. You may also need to re-run the bower register command to tell it to pick up the latest manifest. This should be happening automatically (AFAIK). You don't include the bower register command that you ran, perhaps you used the wrong repo URL there. You should use something like:

bower register onepage-scroll-extended [email protected]:itd24/onepage-scroll-extended.git | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25104623', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/430997/']} | jdg_382096 |
stackexchange | llm_judgeable_groundtruth_similarity | 13477451 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've found many answers on Stack Overflow about how to refresh an iframe with JavaScript. For example:

Iframe reload button
What's the best way to reload / refresh an iframe using JavaScript?
How to refresh an IFrame using Javascript?

They work fine. However, if the page in the iframe has changed recently, the refresh will not show this change. Is there any way I can force a hard refresh on the designated iframe so that the new version is shown?
Now provide the response and nothing else.
| If the iframe is same-origin, you can force a full page reload using

iframe.contentWindow.location.reload(true);
// The argument "true" will force all loaded resources, such as images, to be re-checked for expiry at the server

↪ View an example at jsFiddle
↪ More information at the Mozilla Developer wiki

If you have control over the HTTP headers sent for the page in the iframe, you can also instruct the browser not to cache it in the first place. Furthermore, most web pages completely ignore parameters in the query string for which they have no handling code, so you could add a random value or a timestamp to the query string to make sure the browser sees it as a "new" page and does not use the cached copy:

if (iframe.src.indexOf('timestamp') > -1) { // Check if we still have a timestamp in the URL from a previous refresh
    iframe.src = iframe.src.replace(/timestamp=[^&]+/, 'timestamp=' + Date.now()); // And if so, replace it instead of appending so the URL doesn't grow too long
} else { // Else we will append the timestamp
    iframe.src += (iframe.src.indexOf('?') > -1 ? "&" : "?") + 'timestamp=' + Date.now(); // If the URL contains a ?, append &timestamp=...; otherwise, append ?timestamp=...
}

Use new Date().getTime() instead of Date.now() if support for older browsers is important. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13477451', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/556079/']} | jdg_382097 |
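The replace-or-append timestamp logic is just string handling, so here it is sketched in Python where it is easy to test (bust_cache and the fixed timestamp values are illustrative, not part of the answer; in the browser this runs on iframe.src instead):

```python
import re

def bust_cache(url, now_ms):
    # If we still have a timestamp from a previous refresh, replace it
    # instead of appending, so the URL doesn't grow without bound.
    if "timestamp" in url:
        return re.sub(r"timestamp=[^&]+", "timestamp=%d" % now_ms, url)
    # Otherwise append with ? or &, depending on whether a query string exists.
    sep = "&" if "?" in url else "?"
    return url + sep + "timestamp=%d" % now_ms

print(bust_cache("https://example.com/page", 111))
# https://example.com/page?timestamp=111
print(bust_cache("https://example.com/page?timestamp=111&x=1", 222))
# https://example.com/page?timestamp=222&x=1
```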
stackexchange | llm_judgeable_groundtruth_similarity | 16384 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Assuming the Axiom of Choice, (it seems that) there is a bijection between $\mathbb{R}$ and $\mathbb{N}$ that follows from any well-ordering of the reals. That is, given a well-ordering of $\mathbb{R}$, the nth real number in the ordering would correspond to the nth natural number. On the contrary, if the reals are assumed to be countable, a contradiction can quickly be reached using Cantor's Diagonal Argument . Where am I mistaken? Is it my understanding of well-ordering? Thanks in advance
Now provide the response and nothing else.
| Your error is thinking that "well-ordered and infinite" means "bijectable with $\mathbb{N}$". Your argument is not even enough to give a bijection between $\mathbb{N}$ and the following well-ordering of the integers: order the nonnegative integers in the usual way; make every negative number larger than any nonnegative number, and compare negative numbers by comparing their absolute value. That is, the well ordering $$0, 1, 2, 3,\ldots, n,\ldots ; -1, -2, -3, \ldots, -n, \ldots$$where ";" means that $-1$ is larger than any nonnegative integer. This type of order is called $\omega+\omega$, because it is essentially two copies of $\mathbb{N}$, one placed after the other ($\omega$ is the ordinal name of the well-order of the natural numbers). This is still countable, of course, but you can probably see already that your argument about well-ordering the reals to get a bijection with $\mathbb{N}$ is already in serious trouble: you have no warrant for assuming that it will actually "hit" every real number (and in fact, it won't). Added: Just for completeness: to show this is a well ordering of $\mathbb{Z}$, let $A$ be any nonempty subset of $\mathbb{Z}$. If $A\cap\mathbb{N}$ is nonempty, then the least element of $A$ is the least element $\mathbf{a}$ of $A\cap\mathbb{N}$ (my naturals include $0$, by the by), since given any $a\in A$, if $a\in\mathbb{N}$ then by definition of $\mathbf{a}$ we have $\mathbf{a}\leq a$. And if $a$ is negative, then since $\mathbf{a}$ is nonnegative we have $\mathbf{a}\leq a$. Thus, $\mathbf{a}$ is the least element of $A$. If, on the other hand, we have $A\cap\mathbb{N}=\emptyset$, then that means that $A$ consists only of negative numbers. Let $B=\{ |a|\mid a\in A\}$. Then $B\subseteq\mathbb{N}$ and is nonempty, so it has a least element $\mathbf{b}$. 
Then $\mathbf{a}=-\mathbf{b}\in A$ is the least element of $A$, since given any $a\in A$, we have that $a$ is negative by assumption and so that $|\mathbf{a}| = \mathbf{b}\leq |a|$; since this is how we compare negative numbers in this order, we have that $\mathbf{a}$ is less than or equal to $a$, hence $\mathbf{a}$ is the least element of $A$, as claimed. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/16384', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1352/']} | jdg_382098 |
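The $\omega+\omega$ ordering of the integers described above can be made concrete with a sort key (an illustration, not part of the original answer): nonnegative numbers come first in the usual order, then the negatives ordered by absolute value.

```python
def key(n):
    # (0, n) sorts all nonnegatives first in the usual order;
    # (1, -n) puts negatives after them, ordered by absolute value.
    return (1, -n) if n < 0 else (0, n)

nums = [-3, 2, 0, -1, 5, -2, 1]
print(sorted(nums, key=key))    # [0, 1, 2, 5, -1, -2, -3]

# Counting "1st, 2nd, 3rd, ..." through this order never reaches -1:
# every negative number has infinitely many predecessors. That is why a
# well-ordered infinite set need not be order-isomorphic to N.
assert key(10**9) < key(-1)     # even a huge nonnegative precedes -1
```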
stackexchange | llm_judgeable_groundtruth_similarity | 1754289 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to manipulate a string without making a big issue out of it and spreading it out onto multiple lines, so I'm using some chaining to achieve this. The question I have is, how do I use string.Substring() to drop the last character off my string in this context? In PHP I can pass a negative number as an argument (i.e. substr(-1)) to achieve this, but obviously this isn't how C# works.

mystring = mystring.Replace('_', ' ').Substring(???);

Also, what is the actual name for the technique used above? I always referred to it as a callback chain, but a callback chain I now think is something completely different. Please note I want to avoid:

mystring = mystring.Replace('_', ' ');
mystring = mystring.Substring(0, mystring.Length - 1);

Thanks in advance for your time and kind consideration. Iain

Thanks for your answers guys. It's funny that people can have such strong opinions about string manipulation and other "competing" languages :)
Now provide the response and nothing else.
| You could write an extension method RightStrip(). You can't overload Substring for negative start positions.

static string RightStrip(this string s, int n)
{
    return s.Substring(0, s.Length - n);
}

string s = "Hello World!";
s = s.Replace('e', 'a').RightStrip(1); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1754289', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/146897/']} | jdg_382099 |
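As an aside (not C#): the PHP-style negative indexing the asker misses exists as negative slice bounds in some other languages, e.g. Python, where the whole chain is a one-liner:

```python
# replace, then drop the last character with a negative slice bound
mystring = "some_value_"
result = mystring.replace("_", " ")[:-1]
print(result)   # some value
```

In C# the extension method above remains the idiomatic route.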
stackexchange | llm_judgeable_groundtruth_similarity | 3656391 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Sometimes the compiler produces .dSYM files. I guess this is a debugging related file, but I don't know what it is, and how to use it. What is a .dSYM? How do I use it?
Now provide the response and nothing else.
| dSYM files store the debug symbols for your app. Services like Crashlytics use it to replace the symbols in the crash logs with the appropriate method names so they will be readable and will make sense. The benefit of using the dSYM is that you don't need to ship your app with its symbols, making it harder to reverse engineer it, and it also reduces your binary size. In order to symbolicate a crash log, you need to drag the crash log into the device's Device Logs in the Organizer of the machine that compiled the app binary (a machine that stores the dSYM). If you have the dSYM but don't have the machine that compiled the app binary, follow the instructions in this link in order to install the dSYM into the machine. There is a mac app that helps you symbolicate a crash log in case you need to do it yourself. For more information please see Apple technical note TN2151 | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/3656391', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/246776/']} | jdg_382100 |
stackexchange | llm_judgeable_groundtruth_similarity | 7571208 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an Azure web role and I want to store some settings in web.config under the <appSettings> tag. Yes, I'm aware of service configuration files, yet I have reasons to prefer web.config. When I execute (from here ):

System.Configuration.Configuration rootWebConfig =
    System.Web.Configuration.WebConfigurationManager.OpenWebConfiguration(null);
if (rootWebConfig1.AppSettings.Settings.Count > 0) {}

the settings count is always zero, although I've added a key-value pair under <appSettings> . What am I doing wrong? Is it possible to read settings from web.config from inside the web role entry point?
Now provide the response and nothing else.
| The reason for this is that Microsoft introduced Full IIS capability in Azure SDK 1.3. A side effect of this is that the RoleEntryPoint gets walled off from the rest of the web application. The following excerpt from Microsoft's blog post describes what you're seeing:
...with full IIS, the RoleEntryPoint runs under WaIISHost.exe, while the web site runs under a normal IIS w3wp.exe process. ...so it expects its configuration to be in a file called WaIISHost.exe.config.
Therefore, if you create a file with this name in your web project and set the "Copy to Output Directory" property to "Copy Always", you'll find that the RoleEntryPoint can read this happily. Apart from the solution mentioned, an option might be to try to use Hosted Web Core (HWC) mode instead of full IIS mode.
Update: changes introduced in Azure SDK 1.8
Azure SDK 1.3-1.7 will look in WaIISHost.exe.config
Azure SDK 1.8+ will look in WebRoleProjectName.dll.config
With the newest change to the SDK, you should be able to place an app.config in your project and your role entry point should then have access to it. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7571208', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/57428/']} | jdg_382101 |
stackexchange | llm_judgeable_groundtruth_similarity | 10560922 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm currently making an awesome (in my mind) zombie game and I need to know a good way to make a camera. I am using the Slick2d library along with MarteEngine for java. I'm kinda new to java and jumped straight into a library before really getting deep into swing and such so this is probably a lack of graphics knowledge. I read on a tutorial that you can't actually move the window of the game on the map so instead you need to move the map and objects to make it seem like the camera is moving. If I was to do it this way it seems like it would be very resource intensive. I would have to loop through every zombie, survivor, object, hitbox, etc to move their coordinates. I've already tried this method once and things weren't really moving the same speed. That may have been due to me handling it with different update speeds. I've seen a few things on using graphics.translate but I don't really understand it. So.. any suggestions to making a camera move would be awesome! Thanks in advance.
Now provide the response and nothing else.
| You can definitely move the camera. See my answer to this question explaining how to move the camera, render the world relative to said camera, and some other useful tips on how to do this and how to decide what to draw on the screen as the camera moves. While it's totally possible to use translate(x, y) , that alone doesn't solve clipping issues (how to know what to draw on the screen), and it's not a catch all. Also, when you think about it, there's really not much of a difference between translating the surface and moving the camera. It's all relative, and so long as the pixels are moving in the right direction, it doesn't really matter if you're "moving the camera relative to the world" or "moving the world relative to the camera" - they're essentially the same operation. As far as Swing is concerned, be glad you didn't start there . Swing is designed for desktop applications with windows, menus, buttons, complex event systems, and the like. For a number of reasons Swing is terrible for games, not the least of which is the way the rendering pipeline works in Swing; it's fine for desktop apps, but becomes a pit of despair for games and most types of rendering that need to be real-time, and graphics intensive. This is all okay because Swing wasn't built to solve the problems of game developers anyway. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10560922', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1112987/']} | jdg_382102 |
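Since the answer boils down to subtracting a single camera offset at draw time, a tiny language-agnostic sketch may help. This is plain Python rather than Slick2d, and the `Camera` class is purely illustrative:

```python
# The "camera" is just an offset subtracted from world coordinates at
# draw time -- moving it does not require touching any entity's data.

class Camera:
    def __init__(self, x=0.0, y=0.0):
        self.x = x
        self.y = y

    def world_to_screen(self, wx, wy):
        # Where a world-space point ends up on screen.
        return wx - self.x, wy - self.y

cam = Camera()
cam.x, cam.y = 100.0, 50.0               # "move the camera"
print(cam.world_to_screen(130.0, 80.0))  # -> (30.0, 30.0)
```

Whether you think of this as moving the camera relative to the world or moving the world relative to the camera, it is the same subtraction either way, which is the point the answer makes.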
stackexchange | llm_judgeable_groundtruth_similarity | 95137 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I am an engineer working in radar research. I came across a problem on which I cannot seem to find literature. I can ask it in two different ways. Perhaps depending on the reader, the alternative question is easier to answer. First way Assume I have a real symmetric matrix $\mathbf{C} \in \mathbb{R}^{M \times M}$. I know its eigenvalues, which are non-negative: $\lambda_1, \ldots, \lambda_M$. The trace of the matrix, i.e., the sum of all eigenvalues, is $\lambda_1+\cdots+\lambda_M = M$. The diagonal matrix of eigenvalues is $\mathbf{\Lambda}$ and the matrix with eigenvectors in its columns is $\mathbf{V}$. The eigendecomposition is then $\mathbf{C} = \mathbf{V} \mathbf{\Lambda} \mathbf{V}^T$. Also the diagonal of the matrix is all ones, i.e., $\operatorname{diag}(\mathbf{C}) = [1,\ldots,1]$. Define $$c_\max = \max\limits_{i\neq j}|c_{ij}|$$ where $c_{ij}$ is the element of $\mathbf{C}$ in the $i$-th row and $j$-th column. Given that I can choose $\mathbf{V}$ freely, i.e., any matrix with those eigenvalues, what is the minimum of the maximum of all off-diagonal elements that I can attain (in absolute value)? In other words, what is the minimum of $c_\max$? Second way Given that you have $M$ vectors $\{\mathbf{v}_1, \ldots, \mathbf{v}_M\}$. They are orthonormal $\mathbf{v}_i^T \mathbf{v}_j = \delta(i-j)$ by standard dot product definition. They have norm one $\| \mathbf{v}_i \|=1$ by standard dot product definition. Define the weighted inner product as $\mathbf{v}_i^T \mathbf{\Lambda} \mathbf{v}_j$, where $\mathbf{\Lambda} = \operatorname{Diag}(\lambda_1,\ldots,\lambda_M)$ and $\operatorname{trace}(\mathbf{\Lambda}) = M$. $\{\mathbf{v}_1,\ldots,\mathbf{v}_M\}$ also have norm one $\|\mathbf{v}_i\|_w=1$ by this new weighted inner product definition. What is then the minimum value for the maximum inner product (in absolute value) among all vectors $\{\mathbf{v}_1,\ldots,\mathbf{v}_M\}$, given they can be chosen freely as long as they satisfy the conditions?
$$\min\limits_{\mathbf{v}_1,\ldots,\mathbf{v}_M} \left(\max\limits_{i\neq j}(\mathbf{v}_i^T\mathbf{\Lambda}\mathbf{v}_j)\right)$$
Now provide the response and nothing else.
| I have a bound that will be of use to you. First, note that we can use the fact that the diagonal entries are all $1$s to relate $c_\mathrm{max}$ to the Frobenius norm of $C$:$$\|C\|_F^2\leq M+M(M-1)c_\mathrm{max}^2.$$This Frobenius norm is easy to work with, since it's just the 2-norm of the spectrum:$$\|C\|_{F}^2=\mathrm{Tr}[CC^\mathrm{T}]=\mathrm{Tr}[V\Lambda^2 V^\mathrm{T}]=\mathrm{Tr}[\Lambda^2]=\sum_{m=1}^M\lambda_m^2.$$Rearranging then produces a lower bound on $c_\mathrm{max}$:$$c_\mathrm{max}\geq\sqrt{\frac{1}{M(M-1)}\bigg(\sum_{m=1}^M\lambda_m^2-M\bigg)}.$$Achieving equality in this lower bound certainly implies optimality. For example, consider the following matrix:$$C=\left[\begin{array}{rrr}1~&-\frac{1}{2}&-\frac{1}{2}\\-\frac{1}{2}&1~&-\frac{1}{2}\\-\frac{1}{2}&-\frac{1}{2}&1~\end{array}\right].$$Here, $\Lambda=\mathrm{diag}(\frac{3}{2},\frac{3}{2},0)$, $c_\mathrm{max}=\frac{1}{2}$, and a quick calculation reveals that this achieves equality in our lower bound. But is this always possible? Unfortunately, no. For example, it's impossible to achieve equality when $\Lambda=\mathrm{diag}(\frac{5}{3},\frac{5}{3},\frac{5}{3},0,0)$. But how do I know that? Your question is intimately related to another problem that's of use in engineering: Design an ensemble of $M$ unit vectors in $\mathbb{R}^d$, where $M>d$, with the property that no two vectors have a large inner product in magnitude (i.e., you want the ensemble to be incoherent). For this problem, the Gram matrix of the vectors is playing the role of your $C$, and the Welch bound was developed to provide a lower bound on the coherence (your $c_\mathrm{max}$). For details, check out this blog entry . Your problem has an important distinction from the incoherent design problem: you prescribe the spectrum of $C$. But there's a theorem that says achieving equality in the Welch bound necessitates that the spectrum of $C$ has $\frac{M}{d}$ with multiplicity $d$ and $0$ with multiplicity $M-d$. 
As such, you might as well consider the instance of your problem in which this is your spectrum (in this instance, the above bound on $c_\mathrm{max}$ is precisely the Welch bound). The point of looking at this instance is to demonstrate how hard your problem actually is. While there are many Welch-bound achieving ensembles, it is also known that the Welch bound is not always achievable. For example, it is impossible to pack $5$ vectors in $\mathbb{R}^3$ with Welch-bound coherence (this was the source of my second example above, while the first example corresponded to the cube roots of unity in $\mathbb{R}^2$). It's also unknown in general which values of $M$ and $d$ enable Welch-bound equality (in fact, existence of such ensembles is equivalent to the existence of certain strongly regular graphs, and in many cases, existence is a long-standing problem). For more information about this problem, google "equiangular tight frames" - you just opened a very interesting can of worms. :) | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/95137', 'https://mathoverflow.net', 'https://mathoverflow.net/users/23197/']} | jdg_382103 |
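The Frobenius-norm bound derived in the answer is easy to verify numerically for the $3\times 3$ example. A small Python check with plain floats (no linear-algebra library needed), using the spectrum $(\tfrac{3}{2},\tfrac{3}{2},0)$ stated in the answer:

```python
import math

def cmax_lower_bound(eigs):
    # c_max >= sqrt((sum(lambda_m^2) - M) / (M*(M-1))), rearranged from
    # the Frobenius-norm inequality ||C||_F^2 <= M + M*(M-1)*c_max^2.
    M = len(eigs)
    return math.sqrt((sum(l * l for l in eigs) - M) / (M * (M - 1)))

C = [[1.0, -0.5, -0.5],
     [-0.5, 1.0, -0.5],
     [-0.5, -0.5, 1.0]]
cmax = max(abs(C[i][j]) for i in range(3) for j in range(3) if i != j)
print(cmax_lower_bound([1.5, 1.5, 0.0]), cmax)  # both equal 0.5: the bound is tight here
```

This is exactly the equality case discussed in the answer; for spectra where Welch-bound equality is impossible, the computed lower bound is strict.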
stackexchange | llm_judgeable_groundtruth_similarity | 52960 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I was reading Yau's list of problems in geometry, and one of them is to prove that any almost complex manifold of complex dimension $n \geq 3$ admits a complex structure. It's been some time since Yau's list was published, so what is the status of this problem today? Obviously it hasn't been shown to be true, because we're still looking for complex structures on the six-sphere, but I have a vague feeling of having read that this doesn't hold. So do we know any counterexamples to this question? If not, then is anyone working on this problem? Also, Yau only stated the problem for manifolds of dimension $n \geq 3$. We know this is true in dimension one, because there we have isothermal coordinates which give complex structures, but why didn't Yau mention almost complex surfaces? Do we know this holds there, or are there counterexamples in dimension 2?
Now provide the response and nothing else.
| There are actually counterexamples in real dimension $4$. The first examples of compact almost complex $4$-manifolds admitting no complex structure were produced by Van de Ven in his paper "On the Chern numbers of some complex and almost-complex manifolds" . In fact, he obtained restrictions on the Chern numbers of an algebraic surface and constructed some almost complex $4$-manifolds violating them, hence showing that no almost complex structure in these examples could be integrable. Later on, Brotherton constructed some counterexamples with trivial tangent bundle, see the article "Some parallelizable 4-manifolds not admitting a complex structure" . | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/52960', 'https://mathoverflow.net', 'https://mathoverflow.net/users/4054/']} | jdg_382104 |
stackexchange | llm_judgeable_groundtruth_similarity | 7981837 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Django's QuerySet has two methods, annotate and aggregate . The documentation says that: Unlike aggregate(), annotate() is not a terminal clause. The output of the annotate() clause is a QuerySet. Is there any other difference between them? If not, then why does aggregate exist?
Now provide the response and nothing else.
| I would focus on the example queries rather than your quote from the documentation. Aggregate calculates values for the entire queryset. Annotate calculates summary values for each item in the queryset. Aggregation >>> Book.objects.aggregate(average_price=Avg('price')){'average_price': 34.35} Returns a dictionary containing the average price of all books in the queryset. Annotation >>> q = Book.objects.annotate(num_authors=Count('authors'))>>> q[0].num_authors2>>> q[1].num_authors1 q is the queryset of books, but each book has been annotated with the number of authors. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/7981837', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/70293/']} | jdg_382105 |
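The per-item versus whole-queryset distinction can be mimicked in plain Python. This is a rough analogy for intuition only, not how the ORM actually evaluates the SQL:

```python
books = [
    {"title": "A", "price": 30.0, "authors": ["x", "y"]},
    {"title": "B", "price": 40.0, "authors": ["z"]},
]

# aggregate(): one summary value computed over the whole set.
aggregate = {"average_price": sum(b["price"] for b in books) / len(books)}

# annotate(): every item survives and gains a computed field.
annotated = [dict(b, num_authors=len(b["authors"])) for b in books]

print(aggregate)                              # {'average_price': 35.0}
print([b["num_authors"] for b in annotated])  # [2, 1]
```

Because `annotated` is still a collection of the original items, you can keep filtering and ordering it, which mirrors why `annotate()` returns a QuerySet while `aggregate()` is terminal.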
stackexchange | llm_judgeable_groundtruth_similarity | 13461989 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm writing a cross-platform .NET library that uses some unmanaged code. In the static constructor of my class, the platform is detected and the appropriate unmanaged library is extracted from an embedded resource and saved to a temp directory, similar to the code given in another stackoverflow answer . So that the library can be found when it isn't in the PATH, I explicitly load it after it is saved to the temp file. On windows, this works fine with LoadLibrary from kernel32.dll. I'm trying to do the same with dlopen on Linux, but I get a DllNotFoundException when it comes to loading the P/Invoke methods later on. I have verified that the library "libindexfile.so" is successfully saved to the temp directory and that the call to dlopen succeeds. I delved into the mono source to try to figure out what is going on, and I think it might boil down to whether or not a subsequent call to dlopen will just reuse a previously loaded library. (Of course assuming that my naïve swoop through the mono source drew the correct conclusions).
Here is the shape of what I'm trying to do:

// actual function that we're going to p/invoke to
[DllImport("indexfile")]
private static extern IntPtr openIndex(string pathname);

const int RTLD_NOW = 2; // for dlopen's flags
const int RTLD_GLOBAL = 8;

// its okay to have imports for the wrong platforms here
// because nothing will complain until I try to use the
// function
[DllImport("libdl.so")]
static extern IntPtr dlopen(string filename, int flags);

[DllImport("kernel32.dll")]
static extern IntPtr LoadLibrary(string filename);

static IndexFile()
{
    string libName = "";
    if (IsLinux)
        libName += "libindexfile.so";
    else
        libName += "indexfile.dll";

    // [snip] -- save embedded resource to temp dir

    IntPtr handle = IntPtr.Zero;
    if (IsLinux)
        handle = dlopen(libPath, RTLD_NOW|RTLD_GLOBAL);
    else
        handle = LoadLibrary(libPath);
    if (handle == IntPtr.Zero)
        throw new InvalidOperationException("Couldn't load the unmanaged library");
}

public IndexFile(String path)
{
    // P/Invoke to the unmanaged function
    // currently on Linux this throws a DllNotFoundException
    // works on Windows
    IntPtr ptr = openIndex(path);
}

Update: It would appear that subsequent calls to LoadLibrary on windows look to see if a dll of the same name has already been loaded, and then uses that path. For example, in the following code, both calls to LoadLibrary will return a valid handle:

int _tmain(int argc, _TCHAR* argv[])
{
    LPCTSTR libpath = L"D:\\some\\path\\to\\library.dll";
    HMODULE handle1 = LoadLibrary(libpath);
    printf("Handle: %x\n", handle1);
    HMODULE handle2 = LoadLibrary(L"library.dll");
    printf("Handle: %x\n", handle2);
    return 0;
}

If the same is attempted with dlopen on Linux, the second call will fail, as it doesn't assume that a library with the same name will be at the same path. Is there any way round this?
Now provide the response and nothing else.
| After much searching and head-scratching, I've discovered a solution. Full control can be exercised over the P/Invoke process by using dynamic P/Invoke to tell the runtime exactly where to find the code.

Edit: Windows solution

You need these imports:

[DllImport("kernel32.dll")]
protected static extern IntPtr LoadLibrary(string filename);

[DllImport("kernel32.dll")]
protected static extern IntPtr GetProcAddress(IntPtr hModule, string procname);

The unmanaged library should be loaded by calling LoadLibrary :

IntPtr moduleHandle = LoadLibrary("path/to/library.dll");

Get a pointer to a function in the dll by calling GetProcAddress :

IntPtr ptr = GetProcAddress(moduleHandle, methodName);

Cast this ptr to a delegate of type TDelegate :

TDelegate func = Marshal.GetDelegateForFunctionPointer(
    ptr, typeof(TDelegate)) as TDelegate;

Linux solution

Use these imports:

[DllImport("libdl.so")]
protected static extern IntPtr dlopen(string filename, int flags);

[DllImport("libdl.so")]
protected static extern IntPtr dlsym(IntPtr handle, string symbol);

const int RTLD_NOW = 2; // for dlopen's flags

Load the library:

IntPtr moduleHandle = dlopen(modulePath, RTLD_NOW);

Get the function pointer:

IntPtr ptr = dlsym(moduleHandle, methodName);

Cast it to a delegate as before:

TDelegate func = Marshal.GetDelegateForFunctionPointer(
    ptr, typeof(TDelegate)) as TDelegate;

For a helper library that I wrote, see my GitHub . | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13461989', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/358336/']} | jdg_382106 |
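For comparison, Python's `ctypes` wraps the very same pair of primitives (`LoadLibrary`/`GetProcAddress` on Windows, `dlopen`/`dlsym` elsewhere) behind one API. A minimal sketch, assuming a POSIX system, where `CDLL(None)` is the `dlopen(NULL, ...)` idiom for loading the running process:

```python
import ctypes

# On POSIX, CDLL calls dlopen() under the hood; None maps to
# dlopen(NULL, ...), which hands back the main program (and libc symbols).
libc = ctypes.CDLL(None)

getpid = libc.getpid            # attribute access performs the dlsym() lookup
getpid.restype = ctypes.c_int   # declare the signature, much like the delegate cast
print(getpid())                 # prints the current process id
```

The "load by explicit path, then resolve symbols from the returned handle" pattern is the same one the C# solution above implements by hand.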
stackexchange | llm_judgeable_groundtruth_similarity | 13075241 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Currently I'm creating a directory reader program using Python.I'm using 'argparse' to parse the arguments from command line. I have the following code: parser = argparse.ArgumentParser(prog = "LS.py", usage = "%(prog)s [options] [path1 [path2 [...pathN]]]\nThe paths are optional; if not given . is used.")group = parser.add_argument_group("Options")group.add_argument("-path", default = ".", help = argparse.SUPPRESS, metavar = "")group.add_argument("-m", "--modified", default = False, help = "show last modified date/time [default: off]", metavar = "")group.add_argument("-o ORDER", "--order=ORDER", nargs = 2, default = "name", help = "order by ('name', 'n', 'modified', 'm', 'size', 's')\n[default: name]", metavar = "")group.add_argument("-r", "--recursive", default = False, help = "recurse into subdirectories [default: off]", metavar = "")group.add_argument("-s", "--sizes", default = False, help = "show sizes [default: off]", metavar = "")args = parser.parse_args()return args When called in the following manner "LS.py -h" it produces the following output: usage: LS.py [options] [path1 [path2 [...pathN]]]The paths are optional; if not given . is used.optional arguments: -h, --help show this help message and exitOptions: -m , --modified show last modified date/time [default: off] -o ORDER , --order=ORDER order by ('name', 'n', 'modified', 'm', 'size', 's') [default: name] -r , --recursive recurse into subdirectories [default: off] -s , --sizes show sizes [default: off] My question: Is there a way to move the default help argument into a group such as Options?Also, I can't seem to find a way to remove the space before the commas in the Options arguments. The ideal output is: Usage: ls.py [options] [path1 [path2 [...pathN]]]The paths are optional; if not given . 
is used.Options: -h, --help show this help message and exit -m, --modified show last modified date/time [default: off] -o ORDER, --order=ORDER order by ('name', 'n', 'modified', 'm', 'size', 's') [default: name] -r, --recursive recurse into subdirectories [default: off] -s, --sizes show sizes [default: off]
Now provide the response and nothing else.
| You can use add_help=False to disable the built-in help command and add your own instead, using action="help" (thanks @mgilson!). To get rid of the spaces, don't set metavar to an empty string. Your options should be specified using action="store_true" to make them true (argument-less) options:

import argparse

parser = argparse.ArgumentParser(prog="LS.py", usage="%(prog)s [options] [paths...]\nThe paths are optional; if not given . is used.", add_help=False)
group = parser.add_argument_group("Options")
group.add_argument("-h", "--help", action="help", help="show this help message and exit")
group.add_argument("-path", default=".", help=argparse.SUPPRESS)
group.add_argument("-m", "--modified", action="store_true", help="show last modified date/time")
group.add_argument("-o", "--order", nargs=1, default="name", help="sort order (n[ame], m[odified], s[ize])\n[default: name]")
group.add_argument("-r", "--recursive", action="store_true", help="recurse into subdirectories")
group.add_argument("-s", "--sizes", action="store_true", help="show sizes")
args = parser.parse_args()

Output:

Options:
  -h, --help            show this help message and exit
  -m, --modified        show last modified date/time
  -o ORDER, --order ORDER
                        sort order (n[ame], m[odified], s[ize])
                        [default: name]
  -r, --recursive       recurse into subdirectories
  -s, --sizes           show sizes

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13075241', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1775170/']} | jdg_382107 |
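One convenient way to sanity-check this layout is to hand `parse_args()` an explicit argv list instead of relying on `sys.argv`. A trimmed-down, self-contained sketch of the same pattern:

```python
import argparse

# Same trick as in the answer: suppress the built-in -h and re-add it
# inside a named group via action="help".
parser = argparse.ArgumentParser(prog="LS.py", add_help=False)
group = parser.add_argument_group("Options")
group.add_argument("-h", "--help", action="help",
                   help="show this help message and exit")
group.add_argument("-m", "--modified", action="store_true")
group.add_argument("-o", "--order", default="name")

args = parser.parse_args(["-m", "-o", "size"])  # explicit argv, handy in tests
print(args.modified, args.order)                # True size
```

Note that `action="store_true"` flags default to False when omitted, which is why the `[default: off]` wording in the help text stays accurate.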
stackexchange | llm_judgeable_groundtruth_similarity | 1696340 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
It is known that $\{\sin n : n\in\mathbb{N}\}$ is dense in $[-1,1]$, hence $\lim_{n\to\infty}\sin n$ doesn't exist and also $\lim_{n\to\infty} n^t\sin n$ doesn't exist for all $t>0$ (the reason is that the density implies that the inequalities $\sin n>\frac{1}{2}$ and $\sin n<-\frac{1}{2}$ are satisfied infinitely many times, so there are subsequences tending to $+\infty$ and $-\infty$). What about $\lim_{n\to\infty} |n^t\sin n|$ ? The above argument shows that the limit, if it exists, is infinite. I don't think it does converge, but I don't know how to prove it.
Now provide the response and nothing else.
| The question is strongly connected with the irrationality measure of $\pi$. That is the number $\mu$ such that for all numbers $\lambda, \nu$ with $\lambda < \mu < \nu$: there exist infinitely many distinct rational numbers $p/q$ for which $$\left| \frac{p}{q} - \pi \right| < \frac{1}{q^\lambda} \;\;\Longleftrightarrow\;\; \lvert p - \pi q \rvert < q^{1-\lambda}$$ and for each rational $p/q$ with sufficiently large denominator $$\left| \frac{p}{q} - \pi \right| > \frac{1}{q^\nu} \;\;\Longleftrightarrow\;\; \lvert p - \pi q \rvert > q^{1-\nu}$$ The exact value of $\mu$ is not known, but it's true that $2\leqslant \mu \leqslant 7.6063$. Note that, by Dirichlet's theorem, the first item is always true for $\lambda = 2$, disregarding the irrationality measure. Returning to the question: there is a watershed for the parameter. If $t > \mu-1$, then the limit is $+\infty$; if $t < \mu-1$, then it doesn't exist. The remaining case $t = \mu-1$ depends on the behavior of Diophantine approximations of $\pi$. First case: $t < \mu-1$. If $t < \mu-1$ and $p/q$ satisfies the first inequality, then $$\lvert \sin p \rvert = \lvert \sin (p-\pi q) \rvert \leqslant \lvert p - \pi q \rvert \leqslant q^{-t} \sim \pi^t p^{-t} = O(p^{-t}) \;\text{ as }\; p\rightarrow \infty,$$ hence there is a bounded subsequence of $\{ n^t \sin n \}_{n=1}^\infty$ and it cannot have an infinite limit. Second case: $t > \mu-1$. Take $\varepsilon > 0$ such that $t-\varepsilon > \mu-1$. Given $n\in \mathbb N$, choose $m \in \mathbb N$ such that $\lvert n -\pi m \rvert \leqslant \frac{\pi}{2}$. When $n$ is sufficiently large, $$\lvert \sin n\rvert = \lvert \sin(n-\pi m) \rvert \geqslant \tfrac{2}{\pi} \lvert n - \pi m \rvert \geqslant \tfrac{2}{\pi} m^{-t+\varepsilon} \sim 2\pi^{t-\varepsilon-1} n^{-t+\varepsilon},$$ hence $\lvert n^t \sin n \rvert \geqslant C n^\varepsilon \rightarrow +\infty$. The remaining case: $t = \mu-1$.
There are two alternatives: $(A)$ there exist $C > 0$ and infinitely many rational solutions $p/q$ of $$ \left| \frac{p}{q} - \pi \right| \leqslant \frac{C}{q^\mu} \;\;\Longleftrightarrow\;\; \lvert p - \pi q \rvert \leqslant C q^{1-\mu} = C q^{-t},$$ or $(B)$ the converse. As I already mentioned, if $\mu = 2$, then $(A)$ holds. Suppose $(A)$ is true. Then, by the same argument as in the first case, $\nexists\lim\limits_{n\rightarrow\infty} \lvert n^t \sin n \rvert$. In the opposite case, we have $(B)$, which, in fact, is equivalent to the statement that for every sequence of distinct rationals $\{p_n / q_n \}_{n=1}^\infty$ the sequence $\{ q_n^t \lvert p_n - \pi q_n \rvert \}_{n=1}^\infty$ is unbounded. Combining this with the second case solution yields $\lvert n^t \sin n \rvert \rightarrow +\infty$. However, it's probably an open problem which of $(A)$ or $(B)$ holds, along with the very value of $\mu$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1696340', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/213691/']} | jdg_382108 |
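The mechanism behind the first case is visible numerically: at a continued-fraction convergent $p/q$ of $\pi$ one has $|\sin p| = |\sin(p - \pi q)| \approx |p - \pi q|$, so $|\sin p|$ is tiny exactly when $p/q$ approximates $\pi$ well. A quick Python illustration over the first convergents of $\pi$:

```python
import math

# At a convergent p/q of pi, sin(p) = (+/-)sin(p - pi*q), so a good
# rational approximation forces |sin p| to be as small as |p - pi*q|.
for p, q in [(3, 1), (22, 7), (333, 106), (355, 113)]:
    print(p, q, abs(math.sin(p)), abs(p - math.pi * q))
```

For $t$ below the watershed, $p^t\,|\sin p|$ along such a subsequence stays bounded, which is exactly the obstruction used in the answer's first case.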
stackexchange | llm_judgeable_groundtruth_similarity | 29910520 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
int func(char* str)
{
    char buffer[100];
    unsigned short len = strlen(str);
    if (len >= 100)
    {
        return (-1);
    }
    strncpy(buffer, str, strlen(str));
    return 0;
}

This code is vulnerable to a buffer overflow attack, and I'm trying to figure out why. I'm thinking it has to do with len being declared a short instead of an int , but I'm not really sure. Any ideas?
Now provide the response and nothing else.
| On most compilers the maximum value of an unsigned short is 65535. Any value above that gets wrapped around, so 65536 becomes 0, and 65600 becomes 64. This means that long strings of the right length (e.g. 65600) will pass the check, and overflow the buffer. Use size_t to store the result of strlen() , not unsigned short , and compare len to an expression that directly encodes the size of buffer . So for example:

char buffer[100];
size_t len = strlen(str);
if (len >= sizeof(buffer) / sizeof(buffer[0]))
    return -1;
memcpy(buffer, str, len + 1);

| {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/29910520', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4720176/']} | jdg_382109 |
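The truncation is easy to reproduce outside of C: storing a length in a 16-bit `unsigned short` keeps only the low 16 bits, which masking with `0xFFFF` models exactly. A small Python illustration:

```python
def truncated_len(n):
    # What 'unsigned short len = strlen(str);' actually stores for a
    # string of length n: only the low 16 bits survive.
    return n & 0xFFFF

for n in (99, 100, 65535, 65536, 65600):
    t = truncated_len(n)
    verdict = "passes the len >= 100 check" if t < 100 else "rejected"
    print(n, "->", t, verdict)
# 65536 -> 0 and 65600 -> 64 both slip past the guard, after which
# strncpy(buffer, str, strlen(str)) still copies the full n bytes.
```

The attack strings are therefore those whose true length is at least 65536 but whose low 16 bits are below 100.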
stackexchange | llm_judgeable_groundtruth_similarity | 233023 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following code:

α = 10^6;
L = Total@*Map[Length];
\[DoubleStruckCapitalT][i_, j_] := Binomial[i, j - 1];
L[ParallelTable[
  If[TrueQ[
    EvenQ[\[DoubleStruckCapitalT][n, k]] && \[DoubleStruckCapitalT][n, k] != 0],
    {n, k}, Nothing],
  {n, 0, α - 1}, {k, 1, α}] //. {} -> Nothing]

But this code gives an out-of-memory error message; how can I edit my code so that I can compute this for large values of \[Alpha]?
Now provide the response and nothing else.
| Apparently, you want to count the number of zeroes in Pascal's triangle mod 2 with $\alpha \in \mathbb{N}$ rows. This can be done by counting the ones and subtracting this number from the number of all entries of that triangle, which is $\alpha(\alpha+1)/2$ . The code that you posted has complexity $O(\alpha^2)$ and with your choice of $\alpha$ , that will take forever. The key to a more efficient way of counting is to observe that Pascal's triangle mod 2 has a self-similar structure. The first $2^j$ , $j\geq 1$ rows form a triangle $T_j$ . The triangle $T_{j+1}$ can be obtained by gluing three copies of $T_j$ together (in the fashion of the Triforce from Zelda). So $T_{j+1}$ has 3 times as many ones as $T_j$ . $T_0$ consists of a single one. By induction, the first $2^j$ rows contain $3^j$ ones. So, in fact, the number of ones can be computed from the binary representation of the number $\alpha$ . After some trial and error, I came up with this formula for the number of ones:

onecount[α_] := With[{digits = IntegerDigits[α, 2]},
  Total[
    Times[
      digits,
      3^Range[Length[digits] - 1, 0, -1],
      2^(Accumulate[digits] - 1)
    ]
  ]
]

I hope it is correct. A quick test:

triangle[α_] := Table[Mod[Binomial[n, k], 2], {n, 0, α - 1}, {k, 0, n}]
a = Table[Total[triangle[α], 2], {α, 1, 100}];
b = onecount /@ Range[100];
a == b

True

Also, in case $α = 10^3$, this reproduces Bob's result, which is $448363$. So the number of zeroes in the triangle with $\alpha = 10^6$ rows should be

α = 10^6;
Quotient[α (α + 1), 2] - onecount[α]

Note that this takes only $O(\log_2(\alpha))$ time and memory. | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/233023', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/33985/']} | jdg_382110 |
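The digit formula ports directly to Python, which also makes a brute-force sanity check cheap. A sketch (the function names are mine; the formula is the answer's):

```python
import math

def onecount(alpha):
    # Odd entries among the first alpha rows of Pascal's triangle,
    # read off the binary digits of alpha (the answer's formula).
    digits = [int(d) for d in bin(alpha)[2:]]
    total, ones_seen = 0, 0
    for i, d in enumerate(digits):
        if d:
            ones_seen += 1
            total += 3 ** (len(digits) - 1 - i) * 2 ** (ones_seen - 1)
    return total

def onecount_bruteforce(alpha):
    return sum(math.comb(n, k) % 2 for n in range(alpha) for k in range(n + 1))

# The closed form agrees with direct counting for all small alpha.
assert all(onecount(a) == onecount_bruteforce(a) for a in range(1, 65))

alpha = 10 ** 6
print(alpha * (alpha + 1) // 2 - onecount(alpha))  # zeroes in the triangle
```

For $\alpha = 10^3$ this gives $500500 - 52137 = 448363$ zeroes, matching the result quoted in the answer.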
stackexchange | llm_judgeable_groundtruth_similarity | 250291 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
As far as I know, C is by far the most widely used language for low level stuff such as systems programming and embedded. Sometimes assembly is used, but that's when you really need to be close to the hardware. My question is why doesn't C have any competitors in low level development? It might be my ignorance, but I'm pretty sure that C is the obvious (or possibly only) choice for low level programming. This is as opposed to high level development, where there are many competing programming languages to choose from. How can you explain this?
Now provide the response and nothing else.
| Interestingly, I've been doing embedded programming for 13 years: 6 in Ada and 7 in C++. Anyway, there are a number of factors that make C well-suited for this type of programming:

- The runtime/standard library required is very small.
- Manual memory management is very useful when memory is limited.
- Easy to make deterministic/real time because you don't have to worry about when a garbage collector will run or when an exception will get thrown.
- Easy to assign a variable to a specific memory address, which is crucial in writing drivers.
- A lot of existing code available. | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/250291', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/121368/']} | jdg_382111 |
stackexchange | llm_judgeable_groundtruth_similarity | 43838615 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to use a .jar file that I have with my Xamarin.Android project. After I create the binding project and try to build, I get this error: obj\Debug\generated\src\Com.Acrcloud.Rec.Sdk.Utils.ACRCloudGetIPAddressAsyncTask.cs(23,23): Error CS0534: 'ACRCloudGetIPAddressAsyncTask' does not implement inherited abstract member 'AsyncTask.DoInBackground(params Object[])' (CS0534) Using a decompiler, I checked the contents of ACRCloudGetIPAddressAsyncTask and found this:

package com.acrcloud.rec.sdk.utils;

import android.os.AsyncTask;
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ACRCloudGetIPAddressAsyncTask
    extends AsyncTask<String, Integer, String>
{
    private static final String TAG = "ACRCloudAsynGetIPAddressTask";

    protected String doInBackground(String... params)
    {
        String ip = "";
        try {
            InetAddress x = InetAddress.getByName(params[0]);
            ip = x.getHostAddress();
        } catch (UnknownHostException e) {
            e.printStackTrace();
            ip = "";
        } catch (Exception e2) {
            ip = "";
        }
        return ip;
    }

    protected void onPostExecute(String ip)
    {
        super.onPreExecute();
        ACRCloudLogger.d("ACRCloudAsynGetIPAddressTask", ">>>>>>>>>>>>>>> " + ip);
        com.acrcloud.rec.sdk.recognizer.ACRCloudRecognizerRemoteImpl.serverIP = ip;
    }
}

Why do I even need to implement DoInBackground(params Object[]) ? The generated C# code implemented protected virtual unsafe string DoInBackground (params string[] @params) to match what's in the Java code... I don't know a lot about Java so I'm probably missing something... any idea how I can fix this?
Now provide the response and nothing else.
| You can check this great post: Approaching a Xamarin.Android Bindings Case . In the Adding Types part, it uses <add-node> when we want to add a class, change a constructor, or switch a generic type. For your lib, you can open the Metadata.xml under the Transforms folder, and then add this code:

<add-node path="/api/package[@name='com.acrcloud.rec.sdk.utils']">
  <class abstract="false" deprecated="not deprecated" final="false" name="ACRCloudLocalRecognizerInitAsyncTask" static="true" visibility="public" extends="java.lang.Object">
  </class>
  <class abstract="false" deprecated="not deprecated" final="false" name="ACRCloudGetIPAddressAsyncTask" static="true" visibility="public" extends="java.lang.Object">
  </class>
</add-node>

Then it can be successfully compiled, but I didn't test referencing this lib from a Xamarin.Android project, so you may need further research before using it. Anyway, that post helps a lot. You can also check a similar case on SO: Java Binding Abstract class not being generated . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43838615', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1136506/']} | jdg_382112 |
stackexchange | llm_judgeable_groundtruth_similarity | 10480818 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm running a test using Apache+PHP, Tomcat and JavaBridge to try to load BIRT and render a document. This is the PHP file:

<?php
ini_set('display_errors','On');
error_reporting(E_ALL);
require_once('http://127.0.0.1:8081/Bridge/java/Java.inc');
$here = getcwd();
session_start();

$here = getcwd();
$ctx = java_context()->getServletContext();
$birtReportEngine = java("org.eclipse.birt.php.birtengine.BirtEngine")->getBirtEngine($ctx);
java_context()->onShutdown(java("org.eclipse.birt.php.birtengine.BirtEngine")->getShutdownHook());
try {
    $report = $birtReportEngine->openReportDesign("${here}/TopNPercent.rptdesign");
    $task = $birtReportEngine->createRunAndRenderTask($report);
    $taskOptions = new java("org.eclipse.birt.report.engine.api.PDFRenderOption");
    $outputStream = new java("java.io.ByteArrayOutputStream");
    $taskOptions->setOutputStream($outputStream);
    $taskOptions->setOutputFormat("pdf");
    $task->setRenderOption( $taskOptions );
    $task->run();
    $task->close();
} catch (JavaException $e) {
    echo $e; //"Error Calling BIRT";
}
//echo $outputStream;
echo java_values($outputStream->toByteArray());
?>

Once I access it through Apache+PHP I receive the following error:

<br /><b>Fatal error</b>: Uncaught [[o:Exception]:"java.lang.Exception: ReferenceClass failed: new org.eclipse.birt.php.birtengine.BirtEngine.
Cause: java.lang.ClassNotFoundException: org.eclipse.birt.php.birtengine.BirtEngine VM: 1.7.0_04@http://java.oracle.com/" at:
#-10 org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1438)
#-9 org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1284)
#-8 java.lang.Class.forName0(Native Method)
#-7 java.lang.Class.forName(Unknown Source)
#-6 php.java.bridge.Util.classForName(Util.java:1518)
#-5 php.java.bridge.JavaBridge.CreateObject(JavaBridge.java:445)
#-4 php.java.bridge.Request.handleRequest(Request.java:444)
#-3 php.java.bridge.Request.handleRequests(Request.java:500)
#-2 php.java.bridge.http.ContextRunner.run(ContextRunner.java:145)
#-1 php.java.bridge.ThreadPool$Delegate.run(ThreadPool.java:60)
#0 http://127.0.0.1:8081/Bridge/java/Java.inc(232): java_ThrowExceptionProxyFactory->getProxy(3, 'org.apache.cata...', 'T', false)
#1 http://127. in http://127.0.0.1:8081/Bridge/java/Java.inc on line 195

127.0.0.1:8081 is the Tomcat server. It seems it isn't able to find the "org.eclipse.birt.php.birtengine.BirtEngine" class. I copied the birt-runtime-3_7_2/lib folder's contents to JavaBridge's WEB-INF/lib but it doesn't help. I've tried searching for the answer on various sites, but the only one with, I think, actually the same problem, and no answer, is here: LINK . Does anyone know a solution to this problem, or how to make JavaBridge see that class?
Now provide the response and nothing else.
| def equal_dicts(d1, d2, ignore_keys):
    d1_filtered = {k: v for k, v in d1.items() if k not in ignore_keys}
    d2_filtered = {k: v for k, v in d2.items() if k not in ignore_keys}
    return d1_filtered == d2_filtered

EDIT: This might be faster and more memory-efficient:

def equal_dicts(d1, d2, ignore_keys):
    ignored = set(ignore_keys)
    for k1, v1 in d1.iteritems():
        if k1 not in ignored and (k1 not in d2 or d2[k1] != v1):
            return False
    for k2, v2 in d2.iteritems():
        if k2 not in ignored and k2 not in d1:
            return False
    return True | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10480818', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1379508/']} | jdg_382113 |
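The second snippet above uses Python 2's `iteritems`. As a minimal sketch (my own port, not part of the original answer), the same logic in Python 3 only needs `items()`:

```python
def equal_dicts(d1, d2, ignore_keys):
    """Compare two dicts for equality while ignoring the given keys (Python 3)."""
    ignored = set(ignore_keys)
    # Every non-ignored key in d1 must exist in d2 with the same value.
    for k, v in d1.items():
        if k not in ignored and (k not in d2 or d2[k] != v):
            return False
    # d2 must not contain extra non-ignored keys.
    for k in d2:
        if k not in ignored and k not in d1:
            return False
    return True

print(equal_dicts({"a": 1, "ts": 10}, {"a": 1, "ts": 99}, ["ts"]))  # True
print(equal_dicts({"a": 1}, {"a": 2}, ["ts"]))                      # False
```

The early-return loops keep the memory profile flat, matching the intent of the "faster and more memory-efficient" variant.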
stackexchange | llm_judgeable_groundtruth_similarity | 8530545 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When I insert a SQL DateTime into the database I get 2007-02-07 12:00:00.00, but I made the Date object like this: 2007-02-07 17:29:46.00. How do I get the value of the seconds into the database? It always changes it back to 12:00:00.00.

date.setYear(Integer.valueOf(parsedDate[2].replaceAll(" ", "")) - 1900);
date.setMonth(Integer.valueOf(parsedDate[0].replaceAll(" ", "")));
date.setDate(Integer.valueOf(parsedDate[1].replaceAll(" ", "")));
...
java.sql.Date sqlDate = new java.sql.Date(date.getTime());

Should I use any formatters?
Now provide the response and nothing else.
| java.sql.Date represents a date, not a date and time. From the docs : To conform with the definition of SQL DATE, the millisecond values wrapped by a java.sql.Date instance must be 'normalized' by setting the hours, minutes, seconds, and milliseconds to zero in the particular time zone with which the instance is associated. If you want to store a date and time, you should look for another type - e.g. java.sql.Timestamp . EDIT: That's not suggesting you use a TIMESTAMP column type - as paulsm4 says in the comments, that's a different thing. However, as far as I can see, JDBC only supports: Date (no, you want a time too) Time (no, you want a date too) Timestamp (includes a date and time, but you don't want TIMESTAMP SQL semantics) I would expect using the Java Timestamp type with a DATETIME column to work, although without the level of precision that Timestamp provides. EDIT: After a bit more research, it looks like you may want to use the java.sql.Time type, but with special driver parameters - at least if you're using the Microsoft driver. See these docs on configuring JDBC for more information. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8530545', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/569076/']} | jdg_382114 |
stackexchange | llm_judgeable_groundtruth_similarity | 430078 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What would be your suggestions for a good bash/ksh script template to use as a standard for all newly created scripts? I usually start (after the #! line) with a commented-out header with a filename, synopsis, usage, return values, author(s), changelog and would fit into 80-char lines. All documentation lines I start with double-hash symbols ## so I can grep for them easily and local var names are prepended with "__". Any other best practices? Tips? Naming conventions? What about return codes? Comments on version control : we use SVN all right, but another dept in the enterprise has a separate repo and this is their script. How do I know who to contact with Q's if there is no @author info? Using entries similar to javadocs has some merit even in the shell context, IMHO, but I might be wrong.
Now provide the response and nothing else.
| I'd extend Norman's answer to 6 lines, and the last of those is blank:

#!/bin/ksh
#
# @(#)$Id$
#
# Purpose

The third line is a version control identification string - it is actually a hybrid with an SCCS marker ' @(#) ' that can be identified by the (SCCS) program what and an RCS version string which is expanded when the file is put under RCS, the default VCS I use for my private use. The RCS program ident picks up the expanded form of $Id$ , which might look like $Id: mkscript.sh,v 2.3 2005/05/20 21:06:35 jleffler Exp $ . The fifth line reminds me that the script should have a description of its purpose at the top; I replace the word with an actual description of the script (which is why there's no colon after it, for example). After that, there is essentially nothing standard for a shell script. There are standard fragments that appear, but no standard fragment that appears in every script. (My discussion assumes that scripts are written in Bourne, Korn, or POSIX (Bash) shell notations. There's a whole separate discussion on why anyone putting a C Shell derivative after the #! sigil is living in sin.) For example, this code appears in some shape or form whenever a script creates intermediate (temporary) files:

tmp=${TMPDIR:-/tmp}/prog.$$
trap "rm -f $tmp.?; exit 1" 0 1 2 3 13 15

...real work that creates temp files $tmp.1, $tmp.2, ...

rm -f $tmp.?
trap 0
exit 0

The first line chooses a temporary directory, defaulting to /tmp if the user did not specify an alternative ($TMPDIR is very widely recognized and is standardized by POSIX). It then creates a file name prefix including the process ID. This is not a security measure; it is a simple concurrency measure, preventing multiple instances of the script from trampling on each other's data. (For security, use non-predictable file names in a non-public directory.)
The second line ensures that the ' rm ' and ' exit ' commands are executed if the shell receives any of the signals SIGHUP (1), SIGINT (2), SIGQUIT (3), SIGPIPE (13) or SIGTERM (15). The ' rm ' command removes any intermediate files that match the template; the exit command ensures that the status is non-zero, indicating some sort of error. The ' trap ' of 0 means that the code is also executed if the shell exits for any reason - it covers carelessness in the section marked 'real work'. The code at the end then removes any surviving temporary files, before lifting the trap on exit, and finally exits with a zero (success) status. Clearly, if you want to exit with another status, you may - just make sure you set it in a variable before running the rm and trap lines, and then use exit $exitval . I usually use the following to remove the path and suffix from the script, so I can use $arg0 when reporting errors:

arg0=$(basename $0 .sh)

I often use a shell function to report errors:

error()
{
    echo "$arg0: $*" 1>&2
    exit 1
}

If there's only one or maybe two error exits, I don't bother with the function; if there are any more, I do because it simplifies the coding. I also create more or less elaborate functions called usage to give the summary of how to use the command - again, only if there's more than one place where it would be used. Another fairly standard fragment is an option parsing loop, using the getopts shell built-in:

vflag=0
out=
file=
Dflag=
while getopts hvVf:o:D: flag
do
    case "$flag" in
    (h) help; exit 0;;
    (V) echo "$arg0: version $Revision$ ($Date$)"; exit 0;;
    (v) vflag=1;;
    (f) file="$OPTARG";;
    (o) out="$OPTARG";;
    (D) Dflag="$Dflag $OPTARG";;
    (*) usage;;
    esac
done
shift $(expr $OPTIND - 1)

or:

shift $(($OPTIND - 1))

The quotes around "$OPTARG" handle spaces in arguments. The Dflag is cumulative, but the notation used here loses track of spaces in arguments. There are (non-standard) ways to work around that problem, too.
The first shift notation works with any shell (or would do if I used back-ticks instead of ' $(...) '). The second works in modern shells; there might even be an alternative with square brackets instead of parentheses, but this works so I've not bothered to work out what that is. One final trick for now is that I often have both the GNU and a non-GNU version of programs around, and I want to be able to choose which I use. Many of my scripts, therefore, use variables such as:

: ${PERL:=perl}
: ${SED:=sed}

And then, when I need to invoke Perl or sed , the script uses $PERL or $SED . This helps me when something behaves differently - I can choose the operational version - or while developing the script (I can add extra debug-only options to the command without modifying the script). (See Shell parameter expansion for information on the ${VAR:=value} and related notations.) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/430078', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7229/']} | jdg_382115 |
stackexchange | llm_judgeable_groundtruth_similarity | 16720525 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Assume we have a table which has two columns: one column contains the names of some people and the other column contains some values related to each person. One person can have more than one value. Each value has a numeric type. The question is, we want to select the top 3 values for each person from the table. If one person has fewer than 3 values, we select all the values for that person. The issue can be solved if there are no duplicates in the table by the query provided in this article Select top 3 values from each group in a table with SQL . But if there are duplicates, what is the solution? For example, one name John has 5 values related to him: 20,7,7,7,4. I need to return the name/value pairs as below, ordered by value descending for each name:

-----------+-------+
| name     | value |
-----------+-------+
| John     | 20    |
| John     | 7     |
| John     | 7     |
-----------+-------+

Only 3 rows should be returned for John even though there are three 7s for John.
Now provide the response and nothing else.
| In many modern DBMS (e.g. Postgres, Oracle, SQL-Server, DB2 and many others), the following will work just fine. It uses CTEs and the ranking function ROW_NUMBER() which is part of the latest SQL standard:

WITH cte AS (
    SELECT name, value,
           ROW_NUMBER() OVER (PARTITION BY name
                              ORDER BY value DESC) AS rn
    FROM t
)
SELECT name, value, rn
FROM cte
WHERE rn <= 3
ORDER BY name, rn ;

Without CTE, only ROW_NUMBER():

SELECT name, value, rn
FROM (
    SELECT name, value,
           ROW_NUMBER() OVER (PARTITION BY name
                              ORDER BY value DESC) AS rn
    FROM t
) tmp
WHERE rn <= 3
ORDER BY name, rn ;

Tested in: Postgres, Oracle, SQL-Server

In MySQL and other DBMS that do not have ranking functions, one has to use either derived tables, correlated subqueries or self-joins with GROUP BY. The (tid) is assumed to be the primary key of the table:

SELECT t.tid, t.name, t.value,          -- self join and GROUP BY
       COUNT(*) AS rn
FROM t
JOIN t AS t2
    ON  t2.name = t.name
    AND ( t2.value > t.value
          OR t2.value = t.value AND t2.tid <= t.tid )
GROUP BY t.tid, t.name, t.value
HAVING COUNT(*) <= 3
ORDER BY name, rn ;

SELECT t.tid, t.name, t.value, rn
FROM (
    SELECT t.tid, t.name, t.value,
           ( SELECT COUNT(*)            -- inline, correlated subquery
             FROM t AS t2
             WHERE t2.name = t.name
               AND ( t2.value > t.value
                     OR t2.value = t.value AND t2.tid <= t.tid )
           ) AS rn
    FROM t
) AS t
WHERE rn <= 3
ORDER BY name, rn ;

Tested in MySQL | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16720525', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/558032/']} | jdg_382116 |
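As a cross-check of the intended semantics (exactly three rows per name, with tied values each counting as a separate row), here is a small Python sketch — not part of the original answer — that mimics `ROW_NUMBER() OVER (PARTITION BY name ORDER BY value DESC)`:

```python
from itertools import groupby

def top_n_per_group(rows, n=3):
    """rows: list of (name, value) pairs; return at most n rows per name,
    ordered by value descending, duplicates counted row-wise like ROW_NUMBER."""
    out = []
    # Sort by name, then by value descending, so groupby sees one partition per name.
    ordered = sorted(rows, key=lambda r: (r[0], -r[1]))
    for name, grp in groupby(ordered, key=lambda r: r[0]):
        out.extend(list(grp)[:n])  # keep rows with ROW_NUMBER <= n
    return out

rows = [("John", 20), ("John", 7), ("John", 7), ("John", 7),
        ("John", 4), ("Mary", 5)]
print(top_n_per_group(rows))
# [('John', 20), ('John', 7), ('John', 7), ('Mary', 5)]
```

Note how John's three 7s are not collapsed: two of them survive the cut, exactly as the `rn <= 3` filter behaves in the SQL above.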
stackexchange | llm_judgeable_groundtruth_similarity | 384113 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
There is this article that says: A Composition Root is a (preferably) unique location in an application where modules are composed together. Only applications should have Composition Roots. Libraries and frameworks shouldn't. A DI Container should only be referenced from the Composition Root. All other modules should have no reference to the container. My question is like this: We have a Console app and, in another project, a library that performs some algorithm. It has a class AlgorithmFactory , which takes an AlgorithmInput as a parameter and creates an AlgorithmRunner . Facts are:

From the console I want to call: IoC.AlgorithmFactory.CreateAlgorithmRunner(input).Run()

AlgorithmInput contains run-time data that some library classes need for their configuration (for example: RoundingPrecision indicating the number of decimal places that we should round to, or ConnectionString , or some other types of credentials...)

AlgorithmInput also contains data that indicates which implementation of a particular interface should be used (for example: there is an interface ISorter , and two implementations: MergeSorter , RadixSorter . There are classes which depend on ISorter and don't care which one is used; this information will be part of the input)

AlgorithmRunner internally depends on a non-trivial number of classes. Its complexity suggests it acts as a separate sub-application.

I can't really configure the algorithm outside of the library, because I need run-time data for it. But I'm not supposed to reference the container from the library (or any other component). How is this scenario usually solved? (I believe it must be a well-known case).
Now provide the response and nothing else.
| You may create and register a factory in your composition root which itself uses the container to resolve components depending on the runtime input data it gets passed as parameter(s). This factory may then get injected into components of your library (preferably as interface). This way your library does not statically depend on the DI container but still uses it at runtime. | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/384113', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/279840/']} | jdg_382117 |
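To make that concrete, here is a hypothetical sketch in Python (all names are my own inventions, not from the question or answer): the factory is built in the composition root, keeps the container private, and resolves an `ISorter` implementation from a runtime key in the input.

```python
class MergeSorter:
    def sort(self, xs): return sorted(xs)   # stand-in implementation

class RadixSorter:
    def sort(self, xs): return sorted(xs)   # stand-in implementation

class AlgorithmRunner:
    """Library class: depends on an abstract sorter, knows nothing about DI."""
    def __init__(self, sorter, precision):
        self.sorter, self.precision = sorter, precision
    def run(self, data):
        return [round(x, self.precision) for x in self.sorter.sort(data)]

class AlgorithmRunnerFactory:
    """Created in the composition root; the library sees only this interface."""
    def __init__(self, container):
        self._container = container          # the container stays hidden here
    def create(self, algorithm_input):
        # Resolve the implementation chosen by runtime input data.
        sorter = self._container[algorithm_input["sorter"]]()
        return AlgorithmRunner(sorter, algorithm_input["rounding_precision"])

# Composition root: the only place that touches the container.
container = {"merge": MergeSorter, "radix": RadixSorter}
factory = AlgorithmRunnerFactory(container)
runner = factory.create({"sorter": "merge", "rounding_precision": 2})
print(runner.run([3.14159, 1.0]))   # [1.0, 3.14]
```

The point is the layering: the library depends on the factory abstraction and on `ISorter`-style interfaces, while the concrete container (here just a dict standing in for a real DI container) never leaks out of the composition root.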
stackexchange | llm_judgeable_groundtruth_similarity | 12117275 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
let's say that in my elasticsearch index I have a field called "dots" which will contain a string of punctuation separated words (e.g. "first.second.third"). I need to search for e.g. "first.second" and then get all entries whose "dots" field contains a string being exactly "first.second" or starting with "first.second.". I have a problem understanding how the text querying works, at least I have not been able to create a query which does the job.
Now provide the response and nothing else.
| Elasticsearch has the Path Hierarchy Tokenizer that was created exactly for such a use case. Here is an example of how to set it for your index:

# Create a new index with custom path_hierarchy analyzer
# See http://www.elasticsearch.org/guide/reference/index-modules/analysis/pathhierarchy-tokenizer.html
curl -XPUT "localhost:9200/prefix-test" -d '{
  "settings": {
    "analysis": {
      "analyzer": {
        "prefix-test-analyzer": {
          "type": "custom",
          "tokenizer": "prefix-test-tokenizer"
        }
      },
      "tokenizer": {
        "prefix-test-tokenizer": {
          "type": "path_hierarchy",
          "delimiter": "."
        }
      }
    }
  },
  "mappings": {
    "doc": {
      "properties": {
        "dots": {
          "type": "string",
          "analyzer": "prefix-test-analyzer",
          //"index_analyzer": "prefix-test-analyzer", //deprecated
          "search_analyzer": "keyword"
        }
      }
    }
  }
}'
echo
# Put some test data
curl -XPUT "localhost:9200/prefix-test/doc/1" -d '{"dots": "first.second.third"}'
curl -XPUT "localhost:9200/prefix-test/doc/2" -d '{"dots": "first.second.foo-bar"}'
curl -XPUT "localhost:9200/prefix-test/doc/3" -d '{"dots": "first.baz.something"}'
curl -XPOST "localhost:9200/prefix-test/_refresh"
echo
# Test searches.
curl -XPOST "localhost:9200/prefix-test/doc/_search?pretty=true" -d '{
  "query": { "term": { "dots": "first" } }
}'
echo
curl -XPOST "localhost:9200/prefix-test/doc/_search?pretty=true" -d '{
  "query": { "term": { "dots": "first.second" } }
}'
echo
curl -XPOST "localhost:9200/prefix-test/doc/_search?pretty=true" -d '{
  "query": { "term": { "dots": "first.second.foo-bar" } }
}'
echo
curl -XPOST "localhost:9200/prefix-test/doc/_search?pretty=true&q=dots:first.second"
echo | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12117275', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/274540/']} | jdg_382118 |
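The effect of the `path_hierarchy` tokenizer can be imitated in a few lines of Python (illustrative only — Elasticsearch does this internally at index time): each indexed value is expanded into all of its dot-separated prefixes, and an exact `term` match against any prefix then finds the document.

```python
def path_hierarchy_tokens(value, delimiter="."):
    """'first.second.third' -> ['first', 'first.second', 'first.second.third']"""
    parts = value.split(delimiter)
    return [delimiter.join(parts[:i]) for i in range(1, len(parts) + 1)]

# Toy "index": document -> emitted tokens (what the tokenizer would produce).
index = {doc: path_hierarchy_tokens(doc) for doc in
         ["first.second.third", "first.second.foo-bar", "first.baz.something"]}

def term_search(term):
    # A doc matches iff the exact term is among its tokens
    # (the search side uses the keyword analyzer, so no splitting there).
    return [doc for doc, tokens in index.items() if term in tokens]

print(term_search("first.second"))
# ['first.second.third', 'first.second.foo-bar']
```

This is exactly why the mapping above sets `search_analyzer` to `keyword`: the query string must stay intact and match one emitted prefix token exactly.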
stackexchange | llm_judgeable_groundtruth_similarity | 4007427 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an array with custom objects. Each array item has a field named "name". Now I want to remove duplicate entries based on this name value. How should I go about achieving this?
Now provide the response and nothing else.
| I do not know of any standard way to do this provided by the frameworks. So you will have to do it in code. Something like this should be doable:

NSArray* originalArray = ... // However you fetch it
NSMutableSet* existingNames = [NSMutableSet set];
NSMutableArray* filteredArray = [NSMutableArray array];
for (id object in originalArray) {
    if (![existingNames containsObject:[object name]]) {
        [existingNames addObject:[object name]];
        [filteredArray addObject:object];
    }
} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4007427', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/197688/']} | jdg_382119 |
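The same seen-set pattern, sketched in Python for comparison (my own illustration, not from the answer): track the names already emitted and keep only the first object carrying each name, preserving the original order.

```python
def unique_by_name(objects):
    """Keep the first object for each distinct 'name', preserving order."""
    seen = set()       # plays the role of existingNames (NSMutableSet)
    filtered = []      # plays the role of filteredArray (NSMutableArray)
    for obj in objects:
        if obj["name"] not in seen:
            seen.add(obj["name"])
            filtered.append(obj)
    return filtered

items = [{"name": "a", "v": 1}, {"name": "b", "v": 2}, {"name": "a", "v": 3}]
print(unique_by_name(items))
# [{'name': 'a', 'v': 1}, {'name': 'b', 'v': 2}]
```

The set membership test keeps the whole pass O(n), which is the point of using a set rather than scanning the output list for each element.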
stackexchange | llm_judgeable_groundtruth_similarity | 31660263 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
A quite probably relevant piece of information is that I have a custom ssh config set up for bitbucket. In my '.ssh/config' file I have the following:

[ivanna@comp]$ cat ~/.ssh/config
Host bitbucket
    Hostname bitbucket.org
    IdentityFile /home/ivanna/.ssh/id_rsa_bitbucket
    IdentitiesOnly yes

The permissions on this file are definitely correct as far as ssh is concerned (I actively use other entries in the config file). Now when I added the remote origin in git I used bitbucket instead of bitbucket.org: git remote add origin bitbucket:ivanna/my-repo.git but when I try to push I get the following error:

Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

So it seems like I didn't add my public key or something, right? But I definitely did. And when you search for more information you find this page about the error ( https://confluence.atlassian.com/pages/viewpage.action?pageId=302811860 ). And when I do what they say to do to check the key:

[ivanna@comp]$ ssh -T hg@bitbucket
logged in as ivanna.

You can use git or hg to connect to Bitbucket. Shell access is disabled.

It can log in fine, it seems. So... why doesn't pushing work? The above link mentions that it could be a permissions issue on the project itself but I set the permissions as people suggested and it did nothing. Anybody know what's going on?
Now provide the response and nothing else.
| ssh -T hg@bitbucket

You use hg@bitbucket when logging in via SSH, but in the remote URL you add to Git, you don't specify a username. Since the configuration also does not include one, Git won't know what username to log in with. Change the URL to this:

git remote add origin git@bitbucket:ivanna/my-repo.git

Alternatively, you can add the user to the SSH config:

Host bitbucket
    Hostname bitbucket.org
    User git
    IdentityFile /home/ivanna/.ssh/id_rsa_bitbucket
    IdentitiesOnly yes | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31660263', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5130863/']} | jdg_382120 |
stackexchange | llm_judgeable_groundtruth_similarity | 623819 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I do not understand a remark in Adams' Calculus (page 628 $7^{th}$ edition). This remark is about the derivative of a determinant whose entries are functions as quoted below. Since every term in the expansion of a determinant of any order is a product involving one element from each row, the general product rule implies that the derivative of an $n\times n$ determinant whose elements are functions will be the sum of $n$ such $n\times n$ determinants, each with the elements of one of the rows differentiated. For the $3\times 3$ case we have $$\frac{d}{dt}\begin{vmatrix} a_{11}(t) & a_{12}(t) & a_{13}(t) \\ a_{21}(t) & a_{22}(t) & a_{23}(t) \\ a_{31}(t) & a_{32}(t) & a_{33}(t) \end{vmatrix}=\begin{vmatrix} a'_{11}(t) & a'_{12}(t) & a'_{13}(t) \\ a_{21}(t) & a_{22}(t) & a_{23}(t) \\ a_{31}(t) & a_{32}(t) & a_{33}(t) \end{vmatrix}+\begin{vmatrix} a_{11}(t) & a_{12}(t) & a_{13}(t) \\ a'_{21}(t) & a'_{22}(t) & a'_{23}(t) \\ a_{31}(t) & a_{32}(t) & a_{33}(t) \end{vmatrix}+\begin{vmatrix} a_{11}(t) & a_{12}(t) & a_{13}(t) \\ a_{21}(t) & a_{22}(t) & a_{23}(t) \\ a'_{31}(t) & a'_{32}(t) & a'_{33}(t) \end{vmatrix}.$$ It is not difficult to check this equality by simply expanding both sides. However, the remark sounds like using some clever trick to get this result. Can anyone explain it to me, please? Thank you!
Now provide the response and nothing else.
| The author is probably referring to the fact that the determinant is given by: $$\sum_{i,j,k=1}^3\varepsilon_{ijk}a_{1i}a_{2j}a_{3k}$$ where $\varepsilon_{ijk}$ is $1$ if $(ijk)$ is an even permutation of $(123)$, $-1$ if $(ijk)$ is an odd permutation of $(123)$ and $0$ if two or more of $i,j,k$ are equal. Differentiating this expression immediately gives: $$\sum_{i,j,k=1}^3\varepsilon_{ijk}(a_{1i}'a_{2j}a_{3k}+a_{1i}a_{2j}'a_{3k}+a_{1i}a_{2j}a_{3k}')$$ which is easily seen to be the sum of the three determinants given, using the same formula again. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/623819', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/103063/']} | jdg_382121 |
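The identity is also easy to sanity-check numerically. The sketch below (my own check, not part of the answer) compares a central-difference derivative of a 3×3 determinant of arbitrary smooth entries with the sum of the three row-differentiated determinants:

```python
import math

def det3(m):
    """Cofactor expansion of a 3x3 determinant along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Entries a_ij(t) and their derivatives, chosen arbitrarily for the test.
A  = lambda t: [[t, t * t, 1.0], [math.sin(t), t, 2.0], [1.0, math.cos(t), t ** 3]]
dA = lambda t: [[1.0, 2 * t, 0.0], [math.cos(t), 1.0, 0.0], [0.0, -math.sin(t), 3 * t * t]]

def rhs(t):
    """Sum of three determinants, each with one row replaced by its derivative."""
    total = 0.0
    for r in range(3):
        m = [row[:] for row in A(t)]
        m[r] = dA(t)[r]
        total += det3(m)
    return total

t, h = 0.7, 1e-6
lhs = (det3(A(t + h)) - det3(A(t - h))) / (2 * h)  # numerical d/dt of det A(t)
print(abs(lhs - rhs(t)) < 1e-6)   # True
```

The agreement to high precision at an arbitrary point is exactly what the product-rule expansion of the permutation sum predicts.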
stackexchange | llm_judgeable_groundtruth_similarity | 775298 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I am running Debian jessie on my server and recently upgraded to a new nginx web server with HTTP/2 support (nginx 1.10). As of today, it works great and the webserver is delivering content over the HTTP/2 protocol. I have read that Chrome is dropping NPN support and will only allow ALPN after 15.5.2016. ALPN is an extension which requires openssl 1.0.2 to be installed, but Debian jessie only has openssl 1.0.1 (also in the Debian backports and other repositories, there is no openssl 1.0.2 version for this Debian). And there is the problem - I have upgraded from SPDY to HTTP/2, and in a few days I will have to turn off HTTP/2 and cannot use SPDY because this version of nginx has only HTTP/2. I have also read that this version of Debian will stay stuck with openssl 1.0.1 and only Debian stretch will have openssl 1.0.2. But the release date is almost a year away and Chrome will be dropping support soon, so I do not want to lose the benefit of the HTTP/2 protocol. Is there any solution for installing openssl 1.0.2 on this system, without building my own build (bad maintenance) or waiting for the backports repository to have it? I also don't want two versions of openssl on my system if one of them must be linked and maintained manually. Thanks for any help.
Now provide the response and nothing else.
| Update 2016/08/08: nginx in jessie-backports (version 1.9.10-1~bpo8+3 ) was built against openssl >= 1.0.2~ . Getting ALPN working now if running jessie just requires the packages out of jessie-backports , no need anymore to pull packages out of stretch . -- Original answer: Well, here goes my answer, according to the comments: In my opinion, there aren't that many ways to solve this as of today, 2016/05/09. Basically you have to try somehow to get a modern nginx into your system, compiled against >= openssl 1.0.2~ . The only two options I see currently: Either you compile for yourself, which you don't want to do, which is quite understandable, or you pull in modern packages out of Debian stretch into your system. This involves some risks, because you're mixing a stable environment with another one, but in my opinion these risks are quite low, because you're using Debian . So, let's go and try out this: Add the Debian stretch repository to your apt sources . Don't use /etc/apt/sources.list for this, but instead use a dedicated file inside /etc/apt/sources.list.d/ to keep it clean, personally I'm using stretch.list . Put these lines inside there:

deb http://httpredir.debian.org/debian/ stretch main contrib non-free
deb-src http://httpredir.debian.org/debian/ stretch main contrib non-free

deb http://security.debian.org/ stretch/updates main contrib non-free
deb-src http://security.debian.org/ stretch/updates main contrib non-free

# stretch-updates, previously known as 'volatile'
deb http://httpredir.debian.org/debian/ stretch-updates main contrib non-free
deb-src http://httpredir.debian.org/debian/ stretch-updates main contrib non-free

Set up apt pinning to make sure you only pull in packages out of Debian stretch which you're specifying.
The file to use for this is /etc/apt/preferences ; inside there, put:

Package: *
Pin: release n=jessie
Pin-Priority: 900

Package: *
Pin: release a=jessie-backports
Pin-Priority: 500

Package: *
Pin: release n=stretch
Pin-Priority: 100

(You might have to alter the suites and priorities to fit your environment.) Run apt-get update (via sudo / as root ) to update the package cache. Install nginx from Debian stretch : apt-get install -t stretch nginx (do this via sudo / as root ). Profit! As I described in my comment(s), to even lower the risks involved, you could use something like a chroot or a container-solution like LXC . In case you want to go the chroot way, you have to set up a network interface inside there: To do this, have a look at this blogpost for example , which gives an introduction to network namespaces . Hope this helps; in case you've got more questions, feel free to contact me. I would appreciate feedback and I'm interested in how it goes. | {} | {'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/775298', 'https://serverfault.com', 'https://serverfault.com/users/353281/']} | jdg_382122 |
stackexchange | llm_judgeable_groundtruth_similarity | 1742954 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm a TA for a calculus course. And they recently began calculating definite integrals using a definition equivalent to Riemann's criterion. Of course, the type of things they were calculating were fairly elementary such as$$\int_0^1x\;dx\qquad\text{and}\qquad\int_0^1x^2\;dx$$Knowing full well that the fundamental theorem of calculus was on the itinerary, I decided to give them an appreciation for the result by proving a much more general result (still using rectangles). Namely I showed:$$\int_a^b x^n\;dx=\frac{b^{n+1}}{n+1}-\frac{a^{n+1}}{n+1}$$This can be calculated in a way that parallels the calculation of the above examples. As long as one knows that$$\lim_{m\rightarrow\infty}\frac{1^n+2^n+\cdots+m^n}{\frac{m^{n+1}}{n+1}}=1$$one is able to proceed. Granted, I had to give a loose argument for why this is true, but knowing that$$1+2+\cdots+n=\frac{1}{2}n^2+\cdots \qquad\text{ and } 1^2+2^2+ \cdots +n^2 = \frac{1}{3}n^3 + \cdots$$The pattern seems plausible. I thought this was cute, so I also gave them the proof that$$\int_0^x\cos t\;dt=\sin x$$which can be derived with rectangles using Dirichlet's identity:$$1+2\sum_{k=1}^n\cos(kx)=\frac{\sin\left([n+1/2]x\right)}{\sin(x/2)}$$To be sure, many students found this un-amusing, but they all greatly affirmed that they were glad to have the fundamental theorem after it was delivered to them. So goal achieved. But I was intrigued by how many other integrals could I evaluate using the naive method?$$\int_0^x e^t\;dt$$isn't too bad as it's a geometric sum. The next thing in line was, of course,$$\int_1^x\ln t\; dt$$This is where I ran into trouble. I had been using the fact that$$\int_a^b f(x)\;dx=\lim_{n\rightarrow\infty}\sum_{k=1}^n f\left(a+k\frac{b-a}{n}\right)\frac{b-a}{n}$$for integrable $f$ to do the fore-going facts. But this approach seems intractable for $$\int_1^x\ln t\; dt$$At least, I don't have the requisite limit knowledge or 'algebraic trick' needed to proceed. 
I was able to calculate this with the fact that$$\int_0^{\ln x}e^t\;dt+\int_1^x\ln t\;dt=x\ln x$$which is a relationship that can be proven naively. But I was hoping someone here knew the 'trick' needed to calculate$$\int_1^x \ln t\;dt$$without the fundamental theorem or relying on the insight to reflect the area in question. Any help is appreciated.
Now provide the response and nothing else.
| For logarithmic integrals a subdivision into geometric progression is often convenient. Set $r=\sqrt[n]{x}$ and consider the upper sum$$\sum_{k=1}^n (r^k-r^{k-1})\ln(r^k)=\ln r\sum_{k=1}^n k(r^k-r^{k-1})$$It's easy to show, by induction, that$$\sum_{k=1}^n k(r^k-r^{k-1})=nr^n-\sum_{k=0}^{n-1}r^k=nr^n-\frac{r^n-1}{r-1}$$Putting back $r=x^{1/n}$, we get, for the upper sum, the expression$$\left(x-\frac{x-1}{n(x^{1/n}-1)}\right)\ln x$$Now,$$\lim_{n\to\infty}n(x^{1/n}-1)=\lim_{t\to0^+}\frac{x^t-1}{t}=\ln x$$so the limit of the upper sums is$$\left(x-\frac{x-1}{\ln x}\right)\ln x=x\ln x-x+1$$ Check similarly for the lower sums and see that this agrees with$$\int_1^x\ln t\,dt=x\ln x-x+1$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1742954', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_382123 |
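The convergence claimed above is easy to verify numerically; this short Python sketch (mine, not the answer's) evaluates the geometric-partition upper sum directly and compares it against both the closed form and the limit $x\ln x - x + 1$:

```python
import math

def upper_sum(x, n):
    """Upper Riemann sum of ln t on [1, x] over the geometric partition r^0,...,r^n."""
    r = x ** (1.0 / n)
    # ln is increasing, so the supremum on [r^(k-1), r^k] sits at the right endpoint r^k.
    return sum((r ** k - r ** (k - 1)) * math.log(r ** k) for k in range(1, n + 1))

x = 5.0
exact = x * math.log(x) - x + 1          # x ln x - x + 1
for n in (10, 100, 10000):
    print(n, upper_sum(x, n) - exact)    # the gap shrinks toward 0 as n grows
```

One can also check the intermediate algebra: for any fixed n, the sum equals $\left(x-\frac{x-1}{n(x^{1/n}-1)}\right)\ln x$ up to floating-point noise, which is the expression derived before taking the limit.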
stackexchange | llm_judgeable_groundtruth_similarity | 11669507 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a stripped down test project which contains a Servlet version 3.0, declared with annotations like so:

@WebServlet("/test")
public class TestServlet extends HttpServlet {

    private static final long serialVersionUID = -3010230838088656008L;

    @Override
    public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
        response.getWriter().write("Test");
        response.getWriter().flush();
        response.getWriter().close();
    }
}

I also have a web.xml file like so:

<web-app xmlns="http://java.sun.com/xml/ns/javaee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
    version="3.0">

    <servlet>
        <servlet-name>testServlet</servlet-name>
        <servlet-class>g1.TestServlet</servlet-class>
    </servlet>

    <servlet-mapping>
        <servlet-name>testServlet</servlet-name>
        <url-pattern>/testWebXml</url-pattern>
    </servlet-mapping>
</web-app>

I've tried to make a JUnit test using Embedded Tomcat 7. When I start the Embedded Tomcat I can only access the servlet via the url-pattern declared in web.xml (/testWebXml). If I try to access it via the url-pattern declared via annotation (/test) it says 404 page not found. Here's the code for my test:

String webappDirLocation = "src/main/webapp/";
Tomcat tomcat = new Tomcat();
tomcat.setPort(8080);
tomcat.addWebapp("/jerseyTest", new File(webappDirLocation).getAbsolutePath());
tomcat.start();
tomcat.getServer().await();

Just to make sure I've set up my project correctly, I've also installed an actual Tomcat 7 and deployed the war. This time, both the web.xml declared url and the annotation url for my servlet work ok. So my question is: does anyone know how to make Embedded Tomcat 7 take into account my Servlet 3.0 annotations? 
I should also state that it's a Maven project, and the pom.xml contains the following dependencies: <dependency> <groupId>org.apache.tomcat</groupId> <artifactId>tomcat-catalina</artifactId> <version>7.0.29</version> <scope>test</scope> </dependency> <dependency> <groupId>org.apache.tomcat.embed</groupId> <artifactId>tomcat-embed-core</artifactId> <version>7.0.29</version> <scope>provided</scope> </dependency> <dependency> <groupId>org.apache.tomcat</groupId> <artifactId>tomcat-jasper</artifactId> <version>7.0.29</version> <scope>test</scope> </dependency> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.8.1</version> <scope>test</scope> </dependency> == UPDATE == Here's an issue that seems similar to this (except the Servlet 3.0 annotation that is not working is on Listener, not Servlet), which has a suggested fix: https://issues.apache.org/bugzilla/show_bug.cgi?id=53903 I've tried it and it didn't work: Changed the Embedded Tomcat start code to: String webappDirLocation = "src/main/webapp/";Tomcat tomcat = new Tomcat();tomcat.enableNaming();tomcat.setPort(8080);Context ctx = tomcat.addWebapp(tomcat.getHost(), "/embeddedTomcat", new File(webappDirLocation).getAbsolutePath());((StandardJarScanner) ctx.getJarScanner()).setScanAllDirectories(true);tomcat.start();tomcat.getServer().await(); Other things I've tried, also without success: specifically setting metadata-complete="false" in web.xml "web-app" tag updating the Maven dependencies to version 7.0.30 debugging the org.apache.catalina.startup.ContextConfig class. There's code there that checks for @WebServlet annotations, it's just that it never gets executed (line 2115). This may be a good way to get to the root of the issue, but the class is pretty big, and I don't have time to do this now. 
Maybe if someone would be willing to look at how this class works, and under which conditions (config params) it gets to correctly check your project's classes for that annotation, it might lead to a valid answer.
Now provide the response and nothing else.
| Well I finally solved it by looking in the Tomcat7 sources, namely in the unit tests that deal with EmbeddedTomcat and servlet 3.0 annotations. Basically, you must start your Embedded Tomcat 7 like this to make it aware of your annotated classes: String webappDirLocation = "src/main/webapp/";Tomcat tomcat = new Tomcat();tomcat.setPort(8080);StandardContext ctx = (StandardContext) tomcat.addWebapp("/embeddedTomcat", new File(webappDirLocation).getAbsolutePath());//declare an alternate location for your "WEB-INF/classes" dir: File additionWebInfClasses = new File("target/classes");VirtualDirContext resources = new VirtualDirContext();resources.setExtraResourcePaths("/WEB-INF/classes=" + additionWebInfClasses);ctx.setResources(resources);tomcat.start();tomcat.getServer().await(); For the sake of clarity I should mention that this works for a standard Maven project where your "web resources" (such as static and dynamic pages, WEB-INF directory etc) are found in: [your project's root dir]/src/main/webapp and your classes get compiled into [your project's root dir]/target/classes (such that you'd have [your project's root dir]/target/classes/[some package]/SomeCompiledServletClass.class) For other directories layouts, these locations need to be changed accordingly. ==== UPDATE: Embedded Tomcat 8 ==== Thanks to @kwak for noticing this. 
The APIs have changed a bit, here how the above example changes when using Embedded Tomcat 8: String webappDirLocation = "src/main/webapp/";Tomcat tomcat = new Tomcat();tomcat.setPort(8080);StandardContext ctx = (StandardContext) tomcat.addWebapp("/embeddedTomcat", new File(webappDirLocation).getAbsolutePath());//declare an alternate location for your "WEB-INF/classes" dir: File additionWebInfClasses = new File("target/classes");WebResourceRoot resources = new StandardRoot(ctx);resources.addPreResources(new DirResourceSet(resources, "/WEB-INF/classes", additionWebInfClasses.getAbsolutePath(), "/"));ctx.setResources(resources);tomcat.start();tomcat.getServer().await(); | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11669507', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/939023/']} | jdg_382124 |
stackexchange | llm_judgeable_groundtruth_similarity | 245385 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider the metric space ℝ with the absolute value metric, d(x,y)=|x-y|. I need to prove whether transcendental numbers are open, closed, or neither. I'm stuck on how to approach this. Since there is not much information on transcendental numbers, I thought maybe I can use the algebraic numbers. So, if $S$ is the set of all transcendental numbers, I consider $S^c$ which is the complement of $S$, i.e. the set of all algebraic numbers. Now I consider a polynomial $$p(x)=\sum_{i=0}^na_ix^i$$Then the algebraic numbers will be {x$\in$ℝ|p(x)=0} . Is this a correct line of reasoning? Any hints in the right direction? Thank you.
Now provide the response and nothing else.
| The algebraic numbers are the set of all zeros to some polynomial equation, but knowing that doesn't really help us much. We can show $S$ is not open by showing that some point is not an interior point. To do this, note that $\mathbb Q$ is dense in $\mathbb R$ and take open balls around the transcendental $e$. We can show that $S$ is not closed by showing that some limit point of $S$ is not in $S$. To do this, note that each number in the sequence $(\frac{e}{n})$ is transcendental and this converges to $0$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/245385', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/43232/']} | jdg_382125 |
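A quick numeric illustration of the two facts used in this answer (floats only approximate the reals, so this is a sanity check rather than a proof): truncated decimals give rationals arbitrarily close to the transcendental $e$, and $e/n$ is a sequence of transcendentals converging to the algebraic number $0$:

```python
import math
from fractions import Fraction

e = math.e  # float stand-in for the transcendental number e

# S is not open: every ball around e contains rationals, e.g. truncated decimals.
for digits in (2, 6, 12):
    q = Fraction(round(e * 10 ** digits), 10 ** digits)  # a rational number
    print(digits, float(abs(e - q)))  # the gap shrinks toward 0

# S is not closed: each e/n is transcendental, yet e/n -> 0 and 0 is algebraic,
# so S has a limit point (0) that does not belong to S.
print([e / n for n in (1, 10, 100, 1000)])
```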
stackexchange | llm_judgeable_groundtruth_similarity | 13831959 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am continuing to study from the e-book "An introduction to Java Programming". In that e-book, I use this method: /** * This method scans the array of cards in a for loop. */public void getDetialsOfCards() { // Declare a local variable. MembershipCard card = null; // note the use of the instanceof operator for (int i = 0; i < noOfCards; i++) { if (cards[i] instanceof DvdMembershipCard) { card = cards[i]; System.out.println("This is a DVD card with " + getNoOnLoan() + " DVDs currently on loan."); } else if (cards[i] instanceof GameMembershipCard) { card = cards[i]; System.out.println("This is a games card with " + getNoOnLoan() + " CDs currently on loan"); } else { System.out.println("Neither type of card."); } } // End of for loop.} Here is my main method: public class CompundIfElsegetDetialsOfCards { // private static int noOfCards = 2; private static int NoOnLoan; private static Object[] cards; private static DvdMembershipCard[] dvd = new DvdMembershipCard[6]; private static GameMembershipCard[] game = new GameMembershipCard[6]; private static MembershipCard card; public static void getDetialsOfCards() { // Declare a local variable. int noOfCards = 6; // note the use of the instanceof operator for (int i = 0; i < noOfCards; i++) { if (cards[i] instanceof DvdMembershipCard) { card.equals(cards[i]); System.out.println("This is a DVD card with " + getNoOnLoan() + " DVDs currently on loan."); } else if (cards[i] instanceof GameMembershipCard) { card.equals(cards[i]); System.out.println("This is a games card with " + getNoOnLoan() + " CDs currently on loan"); } else { System.out.println("Neither type of card."); } } // End of for loop. 
} public static void main(String[] args) { CompundIfElsegetDetialsOfCards.getDetialsOfCards(); } private static int getNoOnLoan() { System.out.println("jumlah Loan "); return NoOnLoan; } private static class MembershipCard { public MembershipCard() { } } private int i; private static class DvdMembershipCard { public DvdMembershipCard() { } } private static class GameMembershipCard { public GameMembershipCard() { } }} The code compiles successfully but there is no output. What I must add to solve this?
Now provide the response and nothing else.
| Unfortunately the link provided in Eric Leschinski's commet only shows how to implement the System.Web.Mvc.IExceptionFilter interface, and not the System.Web.Http.Filters.IExceptionFilter interface. The first is used in regular MVC controllers, while the second targets ApiCotrollers . Here is a simple class example I came up with for logging unhandled exceptions thrown in my ApiControllers: public class ExceptionLoggerFilter: IExceptionFilter{ public ExceptionLoggerFilter(Logger logger) { this.logger = logger; } public bool AllowMultiple { get { return true; } } public Task ExecuteExceptionFilterAsync( HttpActionExecutedContext actionExecutedContext, CancellationToken cancellationToken) { return Task.Factory.StartNew(() => { logger.Error("web service error", actionExecutedContext.Exception); }, cancellationToken); } private Logger logger;} And all you have to do to enable this filter is register it in yours Global.asax Application_Start method: protected void Application_Start(){ AreaRegistration.RegisterAllAreas(); // allocate filter and add it to global configuration var exceptionLogger = new ExceptionLoggerFilter(Container.Get<Logger>()); GlobalConfiguration.Configuration.Filters.Add(exceptionLogger); WebApiConfig.Register(GlobalConfiguration.Configuration); FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters); RouteConfig.RegisterRoutes(RouteTable.Routes);} I hope this helps other googlers out there! | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13831959', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1895129/']} | jdg_382126 |
stackexchange | llm_judgeable_groundtruth_similarity | 480547 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a grid of 3x4 and every tile on the grid has a corresponding chance of being empty. This chance is given. For example, consider the following grid: $$ \frac{14}{15} \frac{1}{3} \frac{8}{13} \frac{1}{11} $$$$ \frac{5}{6} \frac{7}{9} \frac{1}{10} \frac{4}{7} $$$$ \frac{7}{11} \frac{4}{5} \frac{1}{12} \frac{1}{10} $$ I would like to know the expected number of empty tiles. I know that E[X] = P(X = 1) * 1 + P(X = 2) * 2 etc but how can I calculate this efficiently? For example, if this grid is 90x90, it would be a lot of work. Thank you!
Now provide the response and nothing else.
| $$a>0:$$ $$\int_0^{\infty} \frac{x\,dx}{\sinh ax}=\frac{1}{a^2}\int_0^{\infty}\frac{x\,dx}{\sinh x}=\frac{2}{a^2}\int_0^{\infty} \left(\frac{x}{e^{x}}\right)\frac{dx}{1-e^{-2x}}=\frac{2}{a^2}\int_0^{\infty}x\sum_{k=0}^{\infty}e^{-(2k+1)x}\,dx$$ Now, since $$\int_0^{\infty} xe^{-kx}\,dx=\frac{1}{k^2}$$ We have: $$\int_0^{\infty} \frac{x\,dx}{\sinh ax}=\frac{2}{a^2}\sum_{k=0}^{\infty} \frac{1}{(2k+1)^2}=\frac{\pi^2}{4a^2}$$ The latter sum follows from: $$\sum_{k=0}^{\infty} \frac{1}{(2k+1)^2}=\sum_{k=1}^{\infty} \frac{1}{k^2}-\sum_{k=1}^{\infty} \frac{1}{(2k)^2}=\frac{3}{4}\sum_{k=1}^{\infty} \frac{1}{k^2}=\frac{\pi^2}{8}$$ The case $a<0$ is dealt with by adding a negative. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/480547', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/40337/']} | jdg_382127 |
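The result $\int_0^{\infty}\frac{x\,dx}{\sinh ax}=\frac{\pi^2}{4a^2}$ can be sanity-checked numerically. A small sketch (helper names are mine; plain midpoint quadrature and a truncated partial sum are assumed accurate enough here):

```python
import math

def integral_numeric(a, upper=60.0, steps=600_000):
    # Midpoint rule for the integral of x/sinh(a*x) from 0 to infinity;
    # the integrand decays like 2*x*exp(-a*x), so a finite upper limit suffices.
    h = upper / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += x / math.sinh(a * x)
    return total * h

a = 1.0
closed = math.pi ** 2 / (4 * a ** 2)
series = 2 * sum(1.0 / (2 * k + 1) ** 2 for k in range(200_000))  # the sum used above
numeric = integral_numeric(a)
print(numeric, series, closed)
```

All three values agree near $\pi^2/4 \approx 2.4674$, with the partial sum converging from below at rate $O(1/N)$.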
stackexchange | llm_judgeable_groundtruth_similarity | 678692 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to create load balancer using Virtual IP. I referred this link Load Balancer using Virtual IP . After configuring by mistake I started the backup server's keepalived service first and then I started master server's keepalived. The problem is the backup server's keepalived is working fine but I can't start the master server's keepalived. I tried uninstall the keepalived and installed again but that didn't work.Here is the config files Master Server vrrp_script chk_haproxy { script "killall -0 haproxy" # verify the pid existance interval 2 # check every 2 seconds weight 2 # add 2 points of prio if OK}vrrp_instance VI_1 { interface eth0 # interface to monitor state MASTER virtual_router_id 51 # Assign one ID for this route priority 101 # 101 on master, 100 on backup virtual_ipaddress { 192.168.0.54 # the virtual IP } track_script { chk_haproxy }} Backup Server vrrp_script chk_haproxy { script "killall -0 haproxy" # verify the pid existance interval 2 # check every 2 seconds weight 2 # add 2 points of prio if OK}vrrp_instance VI_1 { interface eth0 # interface to monitor state MASTER virtual_router_id 51 # Assign one ID for this route priority 100 # 101 on master, 100 on backup virtual_ipaddress { 192.168.0.54 # the virtual IP } track_script { chk_haproxy }} Output of Master Server while starting the keepalived service $ sudo service keepalived start* Starting keepalived keepalived [fail] Output of Backup Server while starting the keepalived service $ sudo service keepalived start* Starting keepalived keepalived [Ok] Is anyone having suggestion what I am doing wrong here?? 
Update according to comments: Master server log: cat /var/log/syslog | grep VRRP_Instance. Here it does not print anything, which means that keepalived does not start on the master server. Backup server's log: cat /var/log/syslog | grep VRRP_Instance Mar 27 02:39:22 ubuntu keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE Mar 27 02:39:23 ubuntu keepalived_vrrp: VRRP_Instance(VI_1) Entering to MASTER STATE Mar 27 02:39:38 ubuntu keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE Mar 27 02:39:39 ubuntu keepalived_vrrp: VRRP_Instance(VI_1) Entering to MASTER STATE Mar 27 02:52:16 ubuntu keepalived_vrrp: VRRP_Instance(VI_1) Received higher prio advert Mar 27 02:52:16 ubuntu keepalived_vrrp: VRRP_Instance(VI_1) Entering to BACKUP STATE Also, one more thing: after referring to a website, I changed the state in the keepalived.conf file of the backup server to state BACKUP
Now provide the response and nothing else.
| Sometimes the relevant documentation is hidden away in configuration files rather than in, say, the documentation. So it seems with LVM. By default LVM will automatically attempt to activate volumes on any physical devices which get connected to the system after boot, so long as all of the PVs are present, and lvmetad and udev (or more recently systemd) are running. When the LVM snapshot gets created, a udev event gets fired off, and since the snapshot contains a PV, lvmetad automatically runs pvscan , and so forth. By looking at /etc/lvm/backup/docker-volumes I was able to determine that lvmetad had explicitly run pvscan on the snapshot by using the device major and minor numbers, which bypassed LVM filters that would normally prevent this. The file contained: description = "Created *after* executing 'pvscan --cache --activate ay 253:13'" This behavior can be controlled by setting the auto_activation_volume_list in /etc/lvm/lvm.conf . It allows you to set which volume groups, volumes, or tags are allowed to be activated automatically. So, I simply set the filter to contain both of the volume groups for the host; anything else won't match the filter and does not get automatically activated. auto_activation_volume_list = [ "mandragora", "vm-volumes" ] The guest's LVM volumes are no longer appearing on the host, and finally, my backups are running... | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/678692', 'https://serverfault.com', 'https://serverfault.com/users/264109/']} | jdg_382128 |
stackexchange | llm_judgeable_groundtruth_similarity | 5069464 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there an easy equivalent to this in JavaScript? $find = array("<", ">", "\n");$replace = array("<", ">", "<br/>");$textarea = str_replace($find, $replace, $textarea); This is using PHP's str_replace , which allows you to use an array of words to look for and replace. Can I do something like this using JavaScript / jQuery? ...var textarea = $(this).val();// string replace here$("#output").html(textarea);...
Now provide the response and nothing else.
| You could extend the String object with your own function that does what you need (useful if there's ever missing functionality): String.prototype.replaceArray = function(find, replace) { var replaceString = this; for (var i = 0; i < find.length; i++) { replaceString = replaceString.replace(find[i], replace[i]); } return replaceString;}; For global replace you could use regex: String.prototype.replaceArray = function(find, replace) { var replaceString = this; var regex; for (var i = 0; i < find.length; i++) { regex = new RegExp(find[i], "g"); replaceString = replaceString.replace(regex, replace[i]); } return replaceString;}; To use the function it'd be similar to your PHP example: var textarea = $(this).val();var find = ["<", ">", "\n"];var replace = ["<", ">", "<br/>"];textarea = textarea.replaceArray(find, replace); | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/5069464', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/296346/']} | jdg_382129 |
stackexchange | llm_judgeable_groundtruth_similarity | 13777086 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a shorter way to do the following? @user.employees.map { |e| { id: e.id, name: e.name } }# => [{ id: 1, name: 'Pete' }, { id: 2, name: 'Fred' }] User has_many employees. Both classes inherit from ActiveRecord::Base. Two things I don't like about the above: it loads employees into memory before mapping, and it's verbose (subjective I guess). Is there a better way?
Now provide the response and nothing else.
| UPDATE: see @jamesharker's solution: from ActiveRecord >= 4, pluck accepts multiple arguments: @user.employees.pluck(:id, :name) PREVIOUS ANSWER: for a single column in rails >= 3.2, you can do : @user.employees.pluck(:name) ... but as you have to pluck two attributes, you can do : @user.employees.select([:id, :name]).map {|e| {id: e.id, name: e.name} } # or map &:attributes, maybe if you really need lower-level operation, just look at the source of #pluck , that uses select_all | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/13777086', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/574190/']} | jdg_382130 |
stackexchange | llm_judgeable_groundtruth_similarity | 35865238 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am a bit confused about per-monitor DPI awareness in WPF. I thought you needed to do some work to make your windows scale properly on different monitors (as described in Developing a Per-Monitor DPI-Aware WPF Application ). But I've just run my app on a PC with two monitors (2560x1440 and 2160x1440) and the dialogue automatically scales itself when I move it between monitors. That's on the latest fast-ring Windows 10. Am I missing something?
Now provide the response and nothing else.
| What you see is an example of system scaling when an app window moves to a monitor with a different DPI. That is because WPF apps are, by default, system DPI aware. As a result, if you look carefully, you'll see WPF visuals/text get blurred when the target DPI is higher, or look fuzzy when the target DPI is lower. Also, note that monitor resolution does not matter for WPF apps, since WPF is device-resolution agnostic (its measurement unit is device-independent pixels). Good news: the .NET 4.6.2 preview just got released and it has per-monitor DPI awareness out of the box. Check out the developer guide and samples here: https://github.com/Microsoft/WPF-Samples/tree/master/PerMonitorDPI | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35865238', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/746769/']} | jdg_382131 |
stackexchange | llm_judgeable_groundtruth_similarity | 45100271 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am reading only the first line of a file in Python using: with open(file_path, 'r') as f: my_count = f.readline()print(my_count) I am a bit confused about the scope of the variable my_count. Although the print works fine, would it be better to do something like my_count = 0 outside the with statement first (e.g. in C I used to do int my_count = 0)?
Now provide the response and nothing else.
| A with statement does not create a scope (like if , for and while do not create a scope either). As a result, Python will analyze the code and see that you made an assignment in the with statement, and thus that will make the variable local (to the real scope). In Python variables do not need initialization in all code paths : as a programmer, you are responsible to make sure that a variable is assigned before it is used. This can result in shorter code: say for instance you know for sure that a list contains at least one element, then you can assign in a for loop. In Java assignment in a for loop is not considered safe (since it is possible that the body of the loop is never executed). Initialization before the with scope can be safer in the sense that after the with statement we can safely assume that the variable exists. If on the other hand the variable should be assigned in the with statement, not initializing it before the with statement actually results in an additional check: Python will error if somehow the assignment was skipped in the with statement. A with statement is only used for context management purposes. It forces (by syntax) that the context you open in the with is closed at the end of the indentation. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/45100271', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6026051/']} | jdg_382132 |
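The point that `with` (like `if` and `for`) does not create a new scope can be demonstrated directly. A minimal sketch using `io.StringIO` in place of a real file (the function name `first_line` is mine):

```python
from io import StringIO

def first_line(source):
    with StringIO(source) as f:
        my_count = f.readline()  # assigned inside the with block
    # the with block has ended (and f is closed), but my_count is a
    # function-local variable, so it is still visible here
    return my_count

print(repr(first_line("42\nrest of file\n")))  # prints '42\n'
```

If the `readline` call were somehow skipped, the later use of `my_count` would raise `NameError`, which is exactly the extra check described above.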
stackexchange | llm_judgeable_groundtruth_similarity | 1610374 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I started to read about the Pisano Period , $\pi(n)$ , applied to the classic Fibonacci sequence and made some simple tests looking for possible properties of the sequence . I have observed the following ones, tested for the first 10000 terms : $\pi(n)=n-1 \implies n\in\Bbb P$ $\pi(n)=(n-1)/2 \implies n\in\Bbb P$ $\pi(n)=(n+1)\cdot 2 \implies n\in\Bbb P$ $k \gt 5\ ,\ F_k \in \Bbb P \implies \pi(F_k)/4=k$ I do not understand the reasons for the results: points $1\sim3$ would work as a primality test, but it does not detect all the possible primes, only a subset of them, e.g. $\{2, 5, 47, 107, 113, 139,\ldots\}$ do not comply with points $1\sim3$ and are not detected. And specially the last point, if the test is correct, would mean that the Pisano period of a Fibonacci prime is exactly four times the index of the Fibonacci prime in the Fibonacci sequence when the index is greater than $5$ (being $F_5=5$ ) . For instance: $\pi(1597)= 68$ and $\frac{68}{4}=17$ which is exactly the index of $1597$ in the Fibonacci sequence, $F_{17}=1597$ . I would like to ask the following questions: (a) Is there a counterexample? Initially I think the tests are correct, but I am not very sure about point 4. If somebody could confirm would be great. (b) What are the reasons behind the observations? I guess that it is related with the relationship of the Pisano periods and the divisibility of the Fibonacci numbers by prime numbers . (c) If the observations are correct, would we find pseudoprimes in the lists of primes detected by the rules $1 \sim 3$ ? Probably the reasons behind the observations (if no counterexamples are found) are based on some simple properties of the Fibonacci numbers, but I do not see it clearly. Any hints or ideas are very welcomed. Thank you! Update 2016/01/14 : I have modified the information about point $4$ just to keep the correct information. 
After testing again, there are other $n$ 's complying with $4$ and not being Fibonacci primes, so I have rewritten the statement: the Pisano period of a Fibonacci prime seems to be four times its Fibonacci index (position in the Fibonacci sequence) but that also holds for some other numbers. Addendum : Below is the graph $n \rightarrow \pi(n)$ including the fist $100$ numbers showing the rules $1\sim3$ . Rule $1$ : $\color{red}{Red}$ , Rule $2$ : $\color{blue}{Blue}$ , Rule $3$ : $\color{green}{Green}$ (click to widen).
Now provide the response and nothing else.
| I think first three statements are all false. Numbers with these properties are analogous to Fermat pseudoprimes , and in particular there's no reason to expect that they should in fact always be prime, although counterexamples might be quite large. Using Binet's formula as in Jyrki's comments, you can prove results like the following. Let $p \neq 5$ be a prime. We will need the Legendre symbol $\left( \frac{5}{p} \right)$, which is equal to $1$ if $p \equiv 1, 4 \bmod 5$ and $-1$ if $p \equiv 2, 3 \bmod 5$. For reasons that will become apparent I'll write $F_n$ as $F(n)$. First, $$F(p) \equiv \left( \frac{5}{p} \right) \bmod p.$$ Next, $$F \left( p - \left( \frac{5}{p} \right) \right) \equiv 0 \bmod p.$$ These are the two basic results, analogous to Fermat's little theorem. Together they allow you to bound the Pisano period of primes as follows: if $\left( \frac{5}{p} \right) = 1$, then the Pisano period divides $p - 1$. If $\left( \frac{5}{p} \right) = -1$, then the Pisano period divides $2(p + 1)$. This is a partial explanation of your first observation. For your second two observations we have the following slightly harder result. If $p \equiv 1 \bmod 4$, then $$F \left( \frac{p - \left( \frac{5}{p} \right)}{2} \right) \equiv 0 \bmod p.$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1610374', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/189215/']} | jdg_382133 |
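Both congruences stated in this answer are easy to spot-check for small primes. A short sketch (not a proof; the helper names are mine) computing Fibonacci numbers mod $p$ and the Legendre symbol $\left(\frac{5}{p}\right)$ via Euler's criterion:

```python
def fib_mod(n, m):
    # iterative Fibonacci: returns F(n) mod m, with F(0) = 0, F(1) = 1
    a, b = 0, 1
    for _ in range(n):
        a, b = b, (a + b) % m
    return a

def legendre_5(p):
    # Euler's criterion: 5^((p-1)/2) is congruent to +1 or -1 mod p
    # (valid for odd primes p different from 5)
    return 1 if pow(5, (p - 1) // 2, p) == 1 else -1

for p in (3, 7, 11, 13, 17, 19, 23, 29, 101, 997):
    s = legendre_5(p)
    assert fib_mod(p, p) == s % p  # F(p) = (5/p) mod p
    assert fib_mod(p - s, p) == 0  # F(p - (5/p)) = 0 mod p
print("both congruences hold for the sample primes")
```

For example, $p=17$ gives $\left(\frac{5}{17}\right)=-1$, and indeed $F_{17}=1597\equiv -1$ and $F_{18}=2584\equiv 0 \pmod{17}$.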
stackexchange | llm_judgeable_groundtruth_similarity | 900273 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Would it be a polynomial-time algorithm for a specific NP-complete problem, or just abstract reasoning that demonstrates that solutions to NP-complete problems exist? It seems that a specific algorithm would be much more helpful. With it, all we would have to do to polynomially solve an NP problem is convert it into the specific NP-complete problem for which the proof supplies a solution, and we are done.
Now provide the response and nothing else.
| P = NP: "The 3SAT problem is a classic NP complete problem. In this proof, we demonstrate an algorithm to solve it that has an asymptotic bound of (n^99 log log n). First we ..." P != NP: "Assume there was a polynomial algorithm for the 3SAT problem. This would imply that .... which by ..... implies we can do .... and then ... and then ... which is impossible. This was all predicated on a polynomial time algorithm for 3SAT. Thus P != NP." UPDATE : Perhaps something like this paper (for P != NP). UPDATE 2 : Here's a video of Michael Sipser sketching out a proof for P != NP | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/900273', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/104459/']} | jdg_382134 |
stackexchange | llm_judgeable_groundtruth_similarity | 7400677 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a Repository class and a Service class as below: public class DinnerRepository{ DinnerDataContext db = new DinnerDataContext(); public Dinner GetDinner(int id) { return db.Dinners.SingleOrDefault(d => d.DinnerID == id); }// Others Code }public class Service{ DinnerRepository repo = new DinnerRepository(); Dinner dinner = repo.GetDinner(5);// Other Code} This throws the error: A field initializer cannot reference the non-static field, method, or property. This happens even though I have instantiated the DinnerRepository class to expose its method GetDinner() in the Service class. It works fine with the code below. Is there any alternative, or is this standard practice? I cannot use static methods here. public class Service{ public Service() { DinnerRepository repo = new DinnerRepository(); Dinner dinner = repo.GetDinner(5); }}
Now provide the response and nothing else.
| Personally I'd just initialize the fields in a constructor: public class Service{ private readonly DinnerRepository repo; private readonly Dinner dinner; public Service() { repo = new DinnerRepository(); dinner = repo.GetDinner(5); }} Note that this isn't the same as the code you show at the bottom of the question, as that's only declaring local variables. If you only want local variables, that's fine - but if you need instance variables, then use code as above. Basically, field initializers are limited in what they can do. From section 10.5.5.2 of the C# 4 spec: A variable initializer for an instance field cannot reference the instance being created. Thus it is a compile-time error to reference this in a variable initializer, because it is a compile-time error for a variable initializer to reference any instance member through a simple-name . (That "thus" and "therefore" looks the wrong way round to me - it's illegal to reference a member via a simple-name because it references this - I'll ping Mads about it - but that's basically the relevant section.) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7400677', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/942358/']} | jdg_382135 |
stackexchange | llm_judgeable_groundtruth_similarity | 574944 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to import CSS stylesheets into an HTML page using JavaScript? If so, how can it be done? P.S. The JavaScript will be hosted on my site, but I want users to be able to put it in the <head> tag of their website, and it should be able to import a CSS file hosted on my server into the current web page. (Both the CSS file and the JavaScript file will be hosted on my server.)
Now provide the response and nothing else.
| Here's the "old school" way of doing it, which hopefully works across all browsers. In theory, you would use setAttribute unfortunately IE6 doesn't support it consistently. var cssId = 'myCss'; // you could encode the css path itself to generate id..if (!document.getElementById(cssId)){ var head = document.getElementsByTagName('head')[0]; var link = document.createElement('link'); link.id = cssId; link.rel = 'stylesheet'; link.type = 'text/css'; link.href = 'http://website.example/css/stylesheet.css'; link.media = 'all'; head.appendChild(link);} This example checks if the CSS was already added so it adds it only once. Put that code into a JavaScript file, have the end-user simply include the JavaScript, and make sure the CSS path is absolute so it is loaded from your servers. VanillaJS Here is an example that uses plain JavaScript to inject a CSS link into the head element based on the filename portion of the URL: <script type="text/javascript">var file = location.pathname.split( "/" ).pop();var link = document.createElement( "link" );link.href = file.substr( 0, file.lastIndexOf( "." ) ) + ".css";link.type = "text/css";link.rel = "stylesheet";link.media = "screen,print";document.getElementsByTagName( "head" )[0].appendChild( link );</script> Insert the code just before the closing head tag and the CSS will be loaded before the page is rendered. Using an external JavaScript ( .js ) file will cause a Flash of unstyled content ( FOUC ) to appear. | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/574944', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/49153/']} | jdg_382136 |
stackexchange | llm_judgeable_groundtruth_similarity | 1097762 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My understanding of C# says (thanks to Jeff Richter & Jon Skeet) that assignment is "atomic". What is not atomic is when we mix reads & writes (increment/decrement), and hence we need to use the methods on the Interlocked class. If I have only a read and an assignment, are both operations atomic?

```csharp
public class Xyz
{
    private volatile int _lastValue;
    private IList<int> AvailableValues { get; set; }
    private object syncRoot = new object();
    private Random random = new Random();

    //Accessible by multiple threads
    public int GetNextValue() //and return last value once store is exhausted
    {
        //...
        var count = 0;
        var returnValue = 0;

        lock (syncRoot)
        {
            count = AvailableValues.Count;
        }

        if (count == 0)
        {
            //Read... without locking... potential multiple reads
            returnValue = _lastValue;
        }
        else
        {
            var toReturn = random.Next(0, count);
            lock (syncRoot)
            {
                returnValue = AvailableValues[toReturn];
                AvailableValues.RemoveAt(toReturn);
            }
            //potential multiple writes... last writer wins
            _lastValue = returnValue;
        }

        return returnValue;
    }
}
```
Now provide the response and nothing else.
| volatile is actually more related to caching (in registers etc.); with volatile you know that the value is actually written to / read from memory immediately (which isn't always the case otherwise). This allows different threads to immediately see updates from each other. There are other subtle issues with instruction re-ordering, but that gets complex. There are two meanings of "atomic" to consider here:

- is a single read atomic by itself / a single write atomic by itself (i.e. could another thread get two different halves of two Double values, yielding a number that never actually existed)
- is a read/write pair atomic/isolated together

The "by itself" depends on the size of the value; can it be updated in a single operation? The read/write pair is more to do with isolation - i.e. preventing lost updates. In your example, it is possible for two threads to read the same _lastValue, both do the calculations, and then (separately) update _lastValue. One of those updates is going to get lost. In reality, I expect you want a lock over the duration of the read/write process. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1097762', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/28413/']} | jdg_382137
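The lost-update interleaving described above (two threads read the same value, then both write) is language-agnostic. Here is a small sketch, not from the answer, that forces that interleaving in Node, with two async tasks standing in for threads:

```javascript
// Two "threads" perform a non-atomic read-modify-write on shared state.
// Both read before either writes, so one increment is lost -- the same
// hazard the answer describes for _lastValue.
async function lostUpdateDemo() {
  let shared = 0;
  const yieldTurn = () => new Promise(resolve => setImmediate(resolve));

  async function increment() {
    const seen = shared;   // read
    await yieldTurn();     // the other task reads the same stale value here
    shared = seen + 1;     // write: clobbers the other task's update
  }

  await Promise.all([increment(), increment()]);
  return shared;           // 1, not 2 -- one update was lost
}
```

A lock (or, for a single integer in C#, Interlocked.Increment) removes the gap between the read and the write, which is exactly the fix the answer recommends.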
stackexchange | llm_judgeable_groundtruth_similarity | 14877602 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a problem when I use Grails automatic data binding (e.g. Test t = new Test(params)) with the decimal delimiter on a Double form field. After some searching, I've found that it is related to the browser Locale. Example: if I enter a number with decimals separated by a dot, '3.45', it does not work (the decimal part of the number is ignored); it stores 3.0 in my database. If I do the same test but using a comma, '3,45', as the decimal separator, everything works fine: Grails stores 3.45 in the database. The problem is that nobody enters numbers with a comma delimiter (even the numpad enters a dot '.' when set to fr_CA). I've already found some solutions, like registering a custom number editor (a bit painful when you have many apps) or setting the global localeResolver to en_US (that last one doesn't do the job because I lose all the internationalization functionality of my app). So, does someone have an easy solution to fix that problem? Using: Grails 2.2.0, Browser: Google Chrome (locale fr_CA). Thanks a lot!
Now provide the response and nothing else.
| I've been searching and trying for 2 days. I've finally chosen to define a custom PropertyEditorRegistrar. This way, I can fix the Locale only for the Double field format. However, I don't think that it's the best solution because it will be applied to all my Double fields. But in the meantime it does the job pretty well. So if someone has a better solution, I will be glad to test it and update my code. So this is how I set this up:

1 - Create a new groovy class that implements PropertyEditorRegistrar (if you already have one, just add the part of code contained by the method to the existing one)

```groovy
import java.text.DecimalFormat;
import java.text.SimpleDateFormat;
import java.util.Date;

import org.springframework.beans.PropertyEditorRegistrar;
import org.springframework.beans.PropertyEditorRegistry;
import org.springframework.beans.propertyeditors.CustomNumberEditor;

public class CustomDoubleRegistrar implements PropertyEditorRegistrar {

    @Override
    public void registerCustomEditors(PropertyEditorRegistry registry) {
        //define new Double format with hardcoded Locale.ENGLISH
        registry.registerCustomEditor(Double.class,
            new CustomNumberEditor(Double.class, DecimalFormat.getInstance(Locale.ENGLISH), true))
    }
}
```

2 - Define the custom registrar in conf/spring/resources.groovy (if it's not already there, of course)

```groovy
beans = {
    customPropertyEditorRegistrar(CustomDoubleRegistrar)
}
```

3 - That's it, the Grails auto data binding will work fine:

```groovy
Test t = new Test(params); //params contains many Double fields with dot '.' as decimal delimiter
```

Don't hesitate to post better solutions... Thanks

EDIT 1

Since Grails 2.3 this solution is no longer working. If you still want to use this solution, you have to add this configuration to the Config.groovy file:

```groovy
grails.databinding.useSpringBinder = true
```

Or implement one of the new DataBinding. I've tried a few of them but nothing seems to solve the decimal delimiter issue. Please post an answer if you know how...
EDIT 2

Since Grails 2.4+ you can define your own ValueConverter to bypass the basic Locale validation. Note that you must remove the changes made in the initial post and in EDIT 1. Here's how to implement the custom ValueConverter in conf/spring/resources.groovy:

```groovy
// Place your Spring DSL code here
beans = {
    "defaultGrailsjava.lang.DoubleConverter"(DoubleValueConverter)
}

class DoubleValueConverter implements ValueConverter {

    public DoubleValueConverter() {
    }

    boolean canConvert(value) {
        value instanceof Double
    }

    def convert(value) {
        //In my case returning the same value did the trick but you can define
        //custom code that takes care about comma and point delimiter...
        return value
    }

    Class<?> getTargetType() {
        return Double.class
    }
}
```

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14877602', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1681103/']} | jdg_382138
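The root issue, '3,45' vs '3.45', can also be mitigated on the client before the form is submitted. A sketch of such a normalizer (my own illustration in JavaScript, not part of the Grails answer):

```javascript
// Normalize a user-typed decimal so the server always receives dot
// notation: "3,45" (fr_CA style) becomes "3.45", while "3.45" is left alone.
// Naive on purpose: it does not try to handle thousands separators.
function normalizeDecimal(input) {
  var s = String(input).trim();
  // Exactly one comma and no dot: treat the comma as the decimal separator.
  if (s.indexOf(".") === -1 && s.split(",").length === 2) {
    s = s.replace(",", ".");
  }
  return s;
}
```

Hooking this into the form's submit handler keeps the server-side binder configuration untouched, at the cost of trusting the client.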
stackexchange | llm_judgeable_groundtruth_similarity | 32461229 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've read this answer, reducing boilerplate, looked at a few GitHub examples and even tried redux a little bit (todo apps). As I understand, the official redux doc motivations provide pros compared to traditional MVC architectures. BUT it doesn't provide an answer to the question: Why should you use Redux over Facebook Flux? Is that only a question of programming styles: functional vs non-functional? Or is it a question of the abilities/dev-tools that follow from the redux approach? Maybe scaling? Or testing? Am I right if I say that redux is a flux for people who come from functional languages? To answer this question you may compare the complexity of implementing redux's motivation points in flux vs redux. Here are the motivation points from the official redux doc motivations:

1. Handling optimistic updates (as I understand, it heavily depends on the 5th point. Is it hard to implement it in facebook flux?)
2. Rendering on the server (facebook flux also can do this. Any benefits compared to redux?)
3. Fetching data before performing route transitions (Why can't it be achieved in facebook flux? What are the benefits?)
4. Hot reload (It's possible with React Hot Reload. Why do we need redux?)
5. Undo/Redo functionality

Any other points? Like persisting state...
Now provide the response and nothing else.
| Redux author here! Redux is not that different from Flux. Overall it has the same architecture, but Redux is able to cut some complexity corners by using functional composition where Flux uses callback registration. There is not a fundamental difference in Redux, but I find it makes certain abstractions easier, or at least possible to implement, that would be hard or impossible to implement in Flux.

Reducer Composition

Take, for example, pagination. My Flux + React Router example handles pagination, but the code for that is awful. One of the reasons it's awful is that Flux makes it unnatural to reuse functionality across stores. If two stores need to handle pagination in response to different actions, they either need to inherit from a common base store (bad! you're locking yourself into a particular design when you use inheritance), or call an externally defined function from within the event handler, which will need to somehow operate on the Flux store's private state. The whole thing is messy (although definitely in the realm of possible). On the other hand, with Redux pagination is natural thanks to reducer composition. It's reducers all the way down, so you can write a reducer factory that generates pagination reducers and then use it in your reducer tree. The key to why it's so easy is because in Flux, stores are flat, but in Redux, reducers can be nested via functional composition, just like React components can be nested. This pattern also enables wonderful features like no-user-code undo/redo. Can you imagine plugging Undo/Redo into a Flux app being two lines of code? Hardly. With Redux, it is—again, thanks to the reducer composition pattern. I need to highlight there's nothing new about it—this is the pattern pioneered and described in detail in Elm Architecture which was itself influenced by Flux.
Server Rendering

People have been rendering on the server fine with Flux, but seeing that we have 20 Flux libraries each attempting to make server rendering "easier", perhaps Flux has some rough edges on the server. The truth is Facebook doesn't do much server rendering, so they haven't been very concerned about it, and rely on the ecosystem to make it easier. In traditional Flux, stores are singletons. This means it's hard to separate the data for different requests on the server. Not impossible, but hard. This is why most Flux libraries (as well as the new Flux Utils) now suggest you use classes instead of singletons, so you can instantiate stores per request. There are still the following problems that you need to solve in Flux (either yourself or with the help of your favorite Flux library such as Flummox or Alt):

- If stores are classes, how do I create and destroy them with the dispatcher per request? When do I register stores?
- How do I hydrate the data from the stores and later rehydrate it on the client? Do I need to implement special methods for this?

Admittedly Flux frameworks (not vanilla Flux) have solutions to these problems, but I find them overcomplicated. For example, Flummox asks you to implement serialize() and deserialize() in your stores. Alt solves this nicer by providing takeSnapshot() that automatically serializes your state in a JSON tree. Redux just goes further: since there is just a single store (managed by many reducers), you don't need any special API to manage the (re)hydration. You don't need to "flush" or "hydrate" stores—there's just a single store, and you can read its current state, or create a new store with a new state. Each request gets a separate store instance. Read more about server rendering with Redux.
Again, this is a case of something possible both in Flux and Redux, but Flux libraries solve this problem by introducing a ton of API and conventions, and Redux doesn't even have to solve it because it doesn't have that problem in the first place thanks to conceptual simplicity.

Developer Experience

I didn't actually intend Redux to become a popular Flux library—I wrote it as I was working on my ReactEurope talk on hot reloading with time travel. I had one main objective: make it possible to change reducer code on the fly or even "change the past" by crossing out actions, and see the state being recalculated. I haven't seen a single Flux library that is able to do this. React Hot Loader also doesn't let you do this—in fact it breaks if you edit Flux stores because it doesn't know what to do with them. When Redux needs to reload the reducer code, it calls replaceReducer(), and the app runs with the new code. In Flux, data and functions are entangled in Flux stores, so you can't "just replace the functions". Moreover, you'd have to somehow re-register the new versions with the Dispatcher—something Redux doesn't even have.

Ecosystem

Redux has a rich and fast-growing ecosystem. This is because it provides a few extension points such as middleware. It was designed with use cases such as logging, support for Promises, Observables, routing, immutability dev checks, persistence, etc, in mind. Not all of these will turn out to be useful, but it's nice to have access to a set of tools that can be easily combined to work together.

Simplicity

Redux preserves all the benefits of Flux (recording and replaying of actions, unidirectional data flow, dependent mutations) and adds new benefits (easy undo-redo, hot reloading) without introducing Dispatcher and store registration. Keeping it simple is important because it keeps you sane while you implement higher-level abstractions. Unlike most Flux libraries, the Redux API surface is tiny.
If you remove the developer warnings, comments, and sanity checks, it's 99 lines. There is no tricky async code to debug. You can actually read it and understand all of Redux. See also my answer on downsides of using Redux compared to Flux. | {} | {'log_upvote_score': 12, 'links': ['https://Stackoverflow.com/questions/32461229', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2761509/']} | jdg_382139
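The reducer-factory idea from the "Reducer Composition" section can be sketched in a few lines of plain JavaScript (a simplified illustration, not code from Redux itself):

```javascript
// A reducer factory: given an action type, it returns a pagination reducer.
// Two independent state slices reuse the same logic -- the composition
// pattern the answer credits for making pagination natural in Redux.
function createPaginator(actionType) {
  return function (state = { page: 1 }, action) {
    return action.type === actionType
      ? { ...state, page: action.page }
      : state;
  };
}

const usersPage = createPaginator('USERS_PAGE_SET');
const postsPage = createPaginator('POSTS_PAGE_SET');

// Compose the slices into one root reducer, as combineReducers would.
function rootReducer(state = {}, action) {
  return {
    users: usersPage(state.users, action),
    posts: postsPage(state.posts, action),
  };
}
```

Each slice responds only to its own action type, so the pagination logic is written once and reused, which is exactly the kind of reuse the answer says Flux stores make awkward.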
stackexchange | llm_judgeable_groundtruth_similarity | 3250827 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've just been debugging a problem with a function that returns a string that has got me worried. I've always assumed that the implicit Result variable for functions that return a string would be empty at the start of the function call, but the following (simplified) code produced an unexpected result:

```delphi
function TMyObject.GenerateInfo: string;

  procedure AppendInfo(const AppendStr: string);
  begin
    if(Result > '') then
      Result := Result + #13;
    Result := Result + AppendStr;
  end;

begin
  if(ACondition) then
    AppendInfo('Some Text');
end;
```

Calling this function multiple times resulted in: "Some Text" the first time, "Some Text""Some Text" the second time, "Some Text""Some Text""Some Text" the third time, etc. To fix it I had to initialise the Result:

```delphi
begin
  Result := '';
  if(ACondition) then
    AppendInfo('Some Text');
end;
```

Is it necessary to initialise a string function result? Why (technically)? Why does the compiler not emit a warning "W1035 Return value of function 'xxx' might be undefined" for string functions? Do I need to go through all my code to make sure a value is set, as it is not reliable to expect an empty string from a function if the result is not explicitly set? I've tested this in a new test application and the result is the same.

```delphi
procedure TForm1.Button1Click(Sender: TObject);
var
  i: integer;
  S: string;
begin
  for i := 1 to 5 do
    S := GenerateInfo;
  ShowMessage(S); // 5 lines!
end;
```
Now provide the response and nothing else.
| This is not a bug, but a "feature": For a string, dynamic array, method pointer, or variant result, the effects are the same as if the function result were declared as an additional var parameter following the declared parameters. In other words, the caller passes an additional 32-bit pointer that points to a variable in which to return the function result. I.e. your function

```delphi
function TMyObject.GenerateInfo: string;
```

is really this:

```delphi
procedure TMyObject.GenerateInfo(var Result: string);
```

Note the "var" prefix (not "out" as you may expect!). This is so unintuitive that it leads to all kinds of problems in code. The code in question is just one example of the results of this feature. See and vote for this request. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3250827', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/37660/']} | jdg_382140
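The hidden "var Result" semantics can be mimicked in any language by passing a mutable result holder that the caller may reuse. A JavaScript sketch (names are mine; this only illustrates the answer's point and is not generated Delphi code):

```javascript
// Mimic Delphi's hidden "var Result" parameter: the caller owns a mutable
// holder that may be reused across calls. Because the function never clears
// it, previous contents leak through -- the accumulation seen in the question.
function generateInfo(holder, condition) {
  // note: no holder.value = '' here, like the buggy Delphi version
  if (condition) {
    if (holder.value > '') holder.value += '\r'; // #13 in Delphi
    holder.value += 'Some Text';
  }
  return holder.value;
}

const result = { value: '' };  // reused, as the compiler reuses S in the loop
generateInfo(result, true);
generateInfo(result, true);
// result.value is now "Some Text\rSome Text": two lines, not one
```

Adding `holder.value = ''` at the top of the function corresponds to the `Result := ''` fix in the question.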